There are many reasons to evaluate journals: information is needed, for example, on which journal subscriptions to continue or to which journals a researcher could submit a manuscript. The quality of a journal can be evaluated for different purposes and with different methods, which is why the results of different evaluations often differ.

One of the most common ways of evaluating journals is citation analysis, in which the popularity and recognition of a journal are evaluated based on the number of citations its articles have received. Journals can also be evaluated through surveys of experts. The most reliable way to measure the quality of a journal is to use several different indicators.

The indicators should be selected according to what is being evaluated in each case. The comparison of journals must also be carried out within a specific field of science, as publication and citation practices differ significantly between fields of science. Some indicators can also be used to compare journals from different fields of science and multidisciplinary journals.

However, journal quality indicators should not be used to draw conclusions about the level of individual articles or researchers. 


How are journals analysed?

Subjective evaluation methods

Expert evaluations 

Expert journal evaluation is applied, for example, when organisations decide on journal subscriptions. Expert evaluations are particularly useful for journals whose quality cannot be evaluated using usage or citation indicators.

Expert evaluations require a lot of work. While many researchers can name the leading journals in their field, evaluating hundreds of journals is very laborious. Journal lists based on expert evaluation are subjective, and the scores given to a particular journal may vary from one list to another. However, they provide information on a journal's prestige and complement the quantitative indicators based on statistical data.

Examples of qualitative evaluation systems for journals  

The Publication Forum (JUFO) is a classification of publication channels created to support the quality assessment of scientific publishing. The classification is designed to take into account the publication cultures of different fields. It rates the major foreign and domestic publication channels in all fields of science as follows:

1 = basic level

2 = leading level

3 = highest level

0 = publication channels that do not meet the criteria for level 1.

The evaluation of publication channels is performed by discipline-specific expert panels composed of Finnish or Finland-based researchers.

National publication channel classifications are used in many countries. The Publication Forum classification is similar to the classifications used in Norway and Denmark, for example.

Different fields of science also have lists of journals that are considered to be the most relevant. These include ERIH PLUS (humanities and social sciences), Association of Business Schools Academic Journal Quality Guide (ABS; economics and business administration) and Nature Index (natural sciences).

Peer review  

The peer review of manuscripts submitted for publication in a journal, i.e. the referee practice, is considered an indicator of the journal's quality. The aim of peer review is to ensure that researchers adhere to the practices recognised in their field and to prevent misstatements and misinterpretations.

Information on a journal's peer review practices should be checked on the journal's own website. The paid Ulrichsweb service also indicates whether a journal is peer reviewed.

Journal reputation 

The reputation of a journal can be based on the reputation of the organisation that publishes it, of the publisher or of the editor-in-chief. However, a publisher's good reputation does not guarantee the high quality of the journal. On the other hand, the reputation of the publisher is an important criterion for newer journals.

Some journals are internationally recognised and appreciated by a broader audience than just the researchers of the field of science they represent. These journals include Nature, Science and JAMA.

Although the reputations of journals are fairly stable, they do change over time and are therefore not a particularly good evaluation criterion. The good reputation of a journal is usually taken as a given and rarely re-evaluated.


Journal-indexing databases

Indexing a journal in key databases increases its visibility. Indexing in databases such as Scopus or Web of Science has been considered a sign of reliability, because journals must meet certain criteria to be included. Learn more about the journal selection criteria in the chapters Web of Science and Scopus.

In the natural sciences and medicine, the inclusion of a journal in the Web of Science database has been considered a merit, as the Impact Factor (IF) can be calculated for journals in the database. The Impact Factor is one of the most recognised bibliometric indicators, and it has been widely used for a variety of purposes, including some that are inappropriate. There have also been attempts to manipulate impact factors: for example, some journals try to inflate their impact factor by unethical means. This is possible because impact factors also take self-citations into account. When the annual impact factors are calculated, a journal's exceptionally high self-citation rate may attract attention, or the citation pattern may raise suspicions of a citation cartel (certain journals systematically citing each other). Such journals are not removed from the Journal Citation Reports (JCR) database, but new impact factors are not calculated for them until the anomalous citation activity has been resolved.
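For orientation, the two-year Impact Factor for year Y can be expressed in simplified form as follows (the detailed rules on which items count as "citable" are defined in the Journal Citation Reports):

\[
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]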

Learn more about the responsible use of indicators: Indicators used to evaluate journals.

You can check which databases journals are indexed in using the paid Ulrichsweb database.


Quantitative evaluation methods

The quantitative evaluation of journals is based on impact measured from citation data, usage statistics or other statistics.

The advantages of quantitative evaluation methods are objectivity and the ready availability of the data. However, they are not without problems. Their results are often difficult to interpret because they do not measure quality directly. Moreover, it is not always easy to determine how the different indicators are calculated and what citation data they are based on. Citation data also contain a broad range of errors and can be deliberately manipulated. Nor is it possible to compare journals across fields of science, as publication and citation practices differ between fields.


Citation impact indicators for journals

Evaluating the quality of a journal by the number of citations received by its articles is based on the assumption that frequently cited articles contain information that is relevant to the discipline. A journal's citation impact can only be calculated years after publication, because citations accrue from articles published later. It is worth noting that a journal's citation counts are also affected by its publication volume, the citation practices of the field and the type of journal. Large journals that publish review articles tend to receive more citations than specialised journals with a smaller output.

Journal-specific citation impact indicators are always averages of the citation impact of the articles published in them.

Citation impact indicators calculated in different ways can be used to evaluate journals. Each indicator is calculated from a specific set of journals, for example the journals indexed in the Scopus or Web of Science databases. This makes it difficult to compare indicators, as they differ not only in how they are calculated but also in the journal and citation data of the database on which they are based. It should also be borne in mind that journals from different fields of science are covered by the databases to varying degrees. Indicators based on citation counts are most useful in fields where research is typically published in international scientific journals and where citation practices are well established. In fields that frequently publish monographs and/or publish in less widely spoken languages, such indicators are not applicable.

Some of the indicators are based on simple calculation formulas, while others are based on complex algorithms. New indicators are constantly being developed and new variations of existing indicators are created. 

Indicators vary in the time period over which citations are examined and in which citations are included in the analysis. The way the total number of articles is counted also varies: some indicators take into account all article types published in a journal, others only certain types. Indicators also differ in whether all citations are given equal weight in the calculation.
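As a concrete illustration, the sketch below (Python, entirely made-up figures) shows how the same journal receives different average scores depending on which article types are counted; no real indicator is calculated exactly this way.

```python
# Illustrative sketch with hypothetical data: a journal-level citation average
# depends on which publication years and article types are included.
# This is not the calculation used by any real indicator (e.g. JIF, CiteScore, SNIP).

articles = [
    # (publication year, article type, citations received within the chosen window)
    (2021, "research", 12),
    (2021, "review", 40),
    (2022, "research", 5),
    (2022, "editorial", 1),
    (2022, "research", 8),
]

def citations_per_item(items, years, types):
    """Average citations per item for the selected publication years and article types."""
    counts = [cites for (year, kind, cites) in items if year in years and kind in types]
    return sum(counts) / len(counts) if counts else 0.0

# Counting every item type vs. only research and review articles gives the
# same journal two different scores: 13.2 vs. 16.25.
print(citations_per_item(articles, {2021, 2022}, {"research", "review", "editorial"}))
print(citations_per_item(articles, {2021, 2022}, {"research", "review"}))
```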

•    Further information on indicators

Journal usage statistics and article acceptance or rejection rates

Journals can also be evaluated by their usage. For many journals, it is possible to obtain information on, for example, the number of article downloads.

Scientific journals are read not only for research purposes, but also out of sheer interest and the desire to follow developments in science. Scientific journals are also read by people who do not publish in them. Therefore, analysing how journals are read could provide different information about their status than analysing the citations they receive.

Journals that receive a large number of manuscripts and reject a high proportion of them are considered high-quality journals. Journals often report their manuscript acceptance or rejection rates. However, these figures are somewhat unreliable for comparing journals, because the way they are calculated varies from journal to journal.

Journals also reject manuscripts for reasons other than scientific weaknesses, such as the topic of the manuscript being outside their field. Acceptance rates and reasons for rejection vary widely between fields of science and even between journals in the same field. Rejection rates are not constant and can fluctuate quite quickly.
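A simple illustration of why reported rates are hard to compare: the sketch below (hypothetical figures) shows how the same journal's acceptance rate changes depending on whether desk rejections and invited articles are counted.

```python
# Illustrative sketch with hypothetical figures: the reported acceptance rate depends
# on whether desk rejections and invited contributions are counted, so figures from
# different journals are not directly comparable.

submitted = 400          # all manuscripts received in a year
desk_rejected = 150      # rejected by the editor without peer review
invited = 20             # invited articles accepted outside the normal process
accepted = 60            # accepted after peer review

rate_all = (accepted + invited) / submitted                  # counts everything: 20.0 %
rate_reviewed_only = accepted / (submitted - desk_rejected)  # peer-reviewed only: 24.0 %

print(f"{rate_all:.1%} vs. {rate_reviewed_only:.1%}")
```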


Tools for evaluating journals  

The Data sources and tools chapter of this guide presents the main tools for evaluating journals.

Lists of reliable and unreliable journals

When using different lists of reliable or unreliable journals, it is important to remember that they are not always up to date – a journal not being listed as a predatory journal does not necessarily mean that it is reliable. It is a good idea to check the journal information from as many different sources as possible. 

The Publication Forum Classification can also be used to check the reliability of journals: if a journal is included in levels 1–3, its reliability has been verified by an expert panel. Level 0 is ambiguous: a journal may be non-scientific and even predatory, but it may just as well be a reliable scientific publication that is so new that it has not yet achieved an established position in the field, for example. More on the ambiguity of level 0


The following lists can also be used to check the reliability of journals:

•    Scholarly Open Access

The free Scholarly Open Access service maintains lists of scientifically questionable open-access journals and publishers.

•    Cabells Journalytics and Predatory Reports

Cabells’ paid databases. The Journalytics database lists journals that are considered reliable, while the Predatory Reports database contains information on potential predatory journals.


The methods used to evaluate journals provide an estimate of the average visibility of a journal. Indicators based on the number of citations should be used with caution, and it should be taken into account that, due to the variation between fields of science and publication types, they are not suitable for evaluating a single article, researcher or research organisation. 


Attention should be paid in particular to the use of the Publication Forum classification and Impact Factors:

  • When using the Publication Forum classification, it should be noted that the classification is intended for examining the average quality of large volumes of publications by universities.

  • Concerns about Impact Factors were raised as early as 2012 in the San Francisco Declaration on Research Assessment (DORA). The declaration drew attention to the fact that Impact Factors were originally developed as a tool for selecting journal acquisitions, not for assessing the scientific quality of journals or articles.

Quantitative publication indicators should be used to support the qualitative expert evaluation. The evaluation should also take into account the differences between fields of science and multidisciplinarity.

When indicators are used, their limitations should be explained:

  • the field-specific nature of the indicators

  • the susceptibility of the indicators to manipulation

  • the lack of transparency of the data used in the calculation


Good practice in researcher evaluation. Recommendation for the responsible evaluation of a researcher in Finland.

National recommendation on the responsible use of publication metrics. (PDF in Finnish)

User guide for the Publication Forum classification. Recommendations on the responsible use of the Publication Forum classification