How are universities and research institutions analysed?

Overall evaluations of university research 

Publication metrics have become an established part of evaluating research at universities and organisations. Scholarly publications provide some insight into the activities and impact of the units under review.  

Publication analyses require extracting the publications of each unit under review for the selected period, usually from the organisation's own research information system. It is advisable to invest in maintaining the quality and coverage of the data in the research information systems at an early stage, as adding new publications only at the evaluation stage can delay and complicate the evaluation process.

Bibliometric analyses and other publication statistics are then compiled from this publication data. The time window should be chosen so that a sufficient number of publications, and citations to them, has had time to accumulate even for the last year under review.

In bibliometric analyses of universities, it is common to use an external provider. In Europe, the most commonly used provider is CWTS B.V., a research organisation of Leiden University. The analyses are usually based on the Web of Science database, which covers some fields of science incompletely, such as the social sciences, humanities, computer science and engineering. It must therefore be decided whether to apply the same analysis principles to all units, or whether to exclude some evaluation units from the uniform citation analysis and, in the latter case, whether to carry out parallel analyses for them. Such parallel analyses are often carried out by the organisation's own publication metrics experts, in consultation with the unit being evaluated.

The publication statistics section is usually compiled according to the levels of the Publication Forum classification. In such cases, it is worth taking into account the most significant changes in the Ministry of Education and Culture's publication data collection and in the Publication Forum's own practices during the review period.

Publications can be selected for analysis retrospectively, in which case all publications of the unit during the analysis period are included. In a prospective analysis, the publications are limited to those by authors who are employed by the unit at a specified point in time, so the publications must be selected person by person. Other approaches include career path analyses, but these require efficient and often laborious use of the organisations' information systems.
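As an illustration, the difference between the two selection principles could be sketched roughly as follows. This is a minimal sketch: the record fields and the census-date logic are simplifying assumptions, not the data model of any actual research information system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified records: real research information systems
# model publications and employment contracts in far more detail.
@dataclass
class Publication:
    title: str
    year: int
    unit: str                 # unit credited with the publication
    author_ids: frozenset     # identifiers of the authors

@dataclass
class Employment:
    person_id: str
    unit: str
    start: date
    end: date                 # an open-ended contract could use date.max

def retrospective(pubs, unit, first_year, last_year):
    """All publications attributed to the unit during the analysis period."""
    return [p for p in pubs
            if p.unit == unit and first_year <= p.year <= last_year]

def prospective(pubs, employments, unit, census_date):
    """Publications by authors employed by the unit on the census date."""
    staff = {e.person_id for e in employments
             if e.unit == unit and e.start <= census_date <= e.end}
    return [p for p in pubs if staff & p.author_ids]
```

The prospective variant makes the person-by-person nature of the selection explicit: the unit's staff list is fixed first, and publications are then matched to it.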

Examination of collaboration between organisations 

Collaboration analyses are often carried out as background for meetings between organisations and for developing and orienting activities. An article written jointly by authors from different organisations leaves a trace of collaboration between those research organisations. Bibliometric analysis can therefore be used as a tool for mapping collaboration between organisations, which can be useful for purposes such as reporting on activities and planning research collaboration.

The major bibliometric databases (e.g. Web of Science and Scopus) provide indicators of organisations' publishing activities, and the selection of co-publications can be limited by affiliation. The databases themselves, however, usually offer only one field-of-science classification.

Usually, analysis tools built on top of these databases, such as InCites and SciVal, are used. In these tools, publication metadata is processed so that the entities of interest (affiliations, publications, etc.) are clearly defined, and the most suitable field-of-science classification can be selected for each analysis. The aim is generally to include comparisons with world averages and to highlight the key research areas the organisations have in common.
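Comparison with the world average is typically based on normalising each publication's citation count by the expected citation rate of its field and publication year, and then averaging these ratios. The sketch below illustrates that general logic under simplifying assumptions; it is not the exact method of InCites or SciVal.

```python
def mean_normalized_citation_score(publications, world_averages):
    """Average of (citations / world-average citations) over a publication set.

    `publications` is a list of (citations, field, year) tuples and
    `world_averages` maps (field, year) to the world-average citation
    count for that field and year. A value above 1 means the set is
    cited more than the world average.
    """
    ratios = [c / world_averages[(f, y)] for c, f, y in publications]
    return sum(ratios) / len(ratios)

# Toy example with two fields that have very different citation cultures
pubs = [(10, "physics", 2020), (2, "history", 2020)]
world = {("physics", 2020): 8.0, ("history", 2020): 1.0}
print(mean_normalized_citation_score(pubs, world))  # (1.25 + 2.0) / 2 = 1.625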

In collaboration analysis, attention should be paid to hyper-collaborations, which may involve authors from hundreds of organisations in dozens of countries. These publications often have a very high impact, but including them in the analysis can create a skewed picture of the collaboration between organisations.
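A minimal sketch of how co-publication links between organisations might be counted from affiliation data, with a simple cap for screening out hyper-collaborations. The input layout and the threshold of 50 affiliations are illustrative assumptions, not a standard of any database.

```python
from itertools import combinations
from collections import Counter

def copublication_counts(publications, max_affiliations=50):
    """Count pairs of organisations appearing together on publications.

    Each publication is represented as the set of its affiliated
    organisations. Publications with more affiliations than
    max_affiliations are treated as hyper-collaborations and skipped,
    because they would otherwise dominate the collaboration picture.
    """
    pairs = Counter()
    for orgs in publications:
        if len(orgs) > max_affiliations:
            continue  # screen out hyper-collaborations
        for a, b in combinations(sorted(orgs), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy example: three publications, two of them co-publications
pubs = [{"Univ A", "Univ B"}, {"Univ A"}, {"Inst C", "Univ A", "Univ B"}]
print(copublication_counts(pubs).most_common())
```

Whether hyper-collaborations are excluded, down-weighted or reported separately is an analytical choice that should be stated openly in the results.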

Another factor to consider is organisational changes, such as mergers with other organisations, which may affect how the units under evaluation are defined.

Determining benchmarking universities and organisations

Organisations that are sufficiently similar in size, research profile and research topics in the analysed disciplines are selected as benchmarks. In this way, the organisations' bibliometric indicators can be compared with each other with sufficient reliability.

In terms of publications, there should be a sufficient number of comparable publications, and the scholarly publication data under examination must be sufficiently coherent, meaning that the research activities are sufficiently focused on the topics of the study. Where necessary, more detailed information on research topics can be obtained through cluster analyses.

In general, a single benchmarking university is not suitable for all fields of science in an organisation. Instead, benchmarking universities are selected by field of science.

National and international reviews

Finnish universities, higher education institutions and research institutions report their publications to the Ministry of Education and Culture. The Finnish National Agency for Education's Education Statistics Finland service Vipunen provides not only the reported data but also long-term analyses based on Web of Science and Scopus data. CSC holds data that allows, among other things, collaborative publications to be fractionalised by organisation.
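Fractionalisation means that a co-publication is divided among the participating organisations instead of being counted in full for each of them: a paper with authors from three organisations contributes 1/3 to each, so the organisation totals sum to the actual number of publications. A minimal sketch, assuming each publication is represented simply as the set of its organisations:

```python
from collections import defaultdict

def fractional_counts(publications):
    """Divide each publication equally among its organisations.

    `publications` is a list of sets of organisation names. A paper
    shared by n organisations contributes 1/n to each, so the totals
    sum to the number of publications instead of double-counting
    co-publications.
    """
    totals = defaultdict(float)
    for orgs in publications:
        share = 1.0 / len(orgs)
        for org in orgs:
            totals[org] += share
    return dict(totals)

pubs = [{"Univ A", "Univ B"}, {"Univ A"}]
print(fractional_counts(pubs))  # {'Univ A': 1.5, 'Univ B': 0.5}
```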

Since 2012, the Academy of Finland has published a State of scientific research in Finland review every two years; these reviews provide a more accessible approach to publication metrics than the Vipunen service. In the future, they will be published annually.

Finnish organisations are analysed using the Top 10% indicator, which tracks the proportion of an organisation's publications among the 10% most cited publications in their field. As these analyses use Web of Science publication data, they are not equally comprehensive across all fields of science. In the future, the State of scientific research in Finland reviews will be complemented by quality reviews based on the Publication Forum classification.
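The idea of the Top 10% indicator can be illustrated with a small sketch: a citation threshold for the most cited decile is derived from a reference set (in practice, publications of the same field, year and publication type), and the organisation's share of publications above that threshold is reported. The function below is a simplified illustration of this logic, not the method actually used in the reviews; real analyses also handle ties and fractional assignment around the threshold.

```python
def top10_share(org_citations, reference_citations):
    """Share of an organisation's publications in the top 10% most cited.

    `reference_citations` holds the citation counts of the reference
    set (same field, year and publication type); the 90th-percentile
    threshold is computed from it.
    """
    ranked = sorted(reference_citations)
    threshold = ranked[int(0.9 * len(ranked))]  # simple 90th percentile
    hits = sum(1 for c in org_citations if c >= threshold)
    return hits / len(org_citations)

# Toy example: reference set of 20 papers, organisation with 4 papers
ref = list(range(20))           # citation counts 0..19
org = [2, 10, 18, 19]
print(top10_share(org, ref))    # 0.5 -> two papers reach the top decile
```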

An overview of the research carried out by Nordic organisations is available in a review produced by NordForsk, compiled in cooperation between the Nordic countries and updated every few years. The latest review, Comparing Research at Nordic Higher Education Institutions Using Bibliometric Indicators, covers the years up to 2014.

The OECD monitors bibliometric science, technology and innovation (STI) indicators at country level.

University rankings


The use of publication analysis for international comparisons started around 2000. Initially, the comparisons focused on individual researchers (Highly Cited Researchers, HCR). In 2003, Shanghai Jiao Tong University in China started using HCR results and Web of Science data, among other sources, to compare universities. Its ARWU ranking (Academic Ranking of World Universities) is based strongly on publications and uses several indicators drawn from publication metrics; it also takes into account the Nobel Prizes and Fields Medals (in mathematics) awarded to the organisations.

In addition to ARWU, a wide range of university rankings, often produced by commercial operators, have entered the market. Their approaches and methodologies vary considerably. In most rankings, publications make up only one component, with other scores awarded according to a wide range of criteria. Some of the indicator data is collected from the universities themselves, while some scores may come from surveys, such as reputation surveys. The detailed methodology of each ranking is described on its website.

Other widely followed international rankings include the Times Higher Education (THE) and QS rankings, which use Scopus data for their publication components, and the US News and NTU rankings, which use Web of Science data and in which publication-based indicators carry more weight. The NTU ranking of National Taiwan University and the CWTS Leiden Ranking of Leiden University in the Netherlands are based purely on bibliometric data.

Despite the various problems with rankings, universities monitor them closely and report on their own positions, especially when these are good. Rankings by field of science and rankings from different perspectives (age of the institution, sustainable development, location, etc.) have also entered the market. These have their own evaluation criteria, and many organisations rank higher in them than in the general rankings.


Responsible analysis of universities and research institutes

It is essential to recognise that there are many different types of universities and research organisations. They differ in size, age, research and teaching focus, objectives, and so on. Comparing organisations does not always indicate which one is superior, but it can highlight differences and distinctive characteristics between them.

The indicators chosen for university rankings are often incompatible, sometimes even arbitrary, and the overall result is therefore open to question. The methods are usually not transparent, and outsiders cannot check what happens inside each analytics company's laboratory. Nevertheless, comparing universities has become a well-established business in which it is often virtually impossible for universities not to take part.

The Research Evaluation Group working within INORMS (International Network of Research Management Societies) has been developing tools for more responsible evaluation and university ranking practices. Good practices, transparency and methodological accuracy are essential. Furthermore, it is necessary to measure what is essential: the things towards which it is important and worthwhile for the organisation to steer change.

