Research Assessment Based on the Number of Top Researchers

Bibliometrics provides accurate, cheap, and simple descriptions of research systems. The number of highly cited researchers reported by Clarivate Analytics (HCR) has been used for that purpose, assuming that a research system that employs many HCR will be more successful than others with fewer of them. Here, we study the use of the number of top researchers (TR) reported by Ioannidis, Boyack, and Baas in 2020 for research assessment at the country level. Firstly, we validated the number of TR by correlation with the P top 5% indicator of the Leiden Ranking: after identifying the number of TR for the Leiden Ranking universities, we found a high correlation between the numbers of TR and top 5% papers when the data were aggregated at the country level. Once the number of TR was validated, we constructed country rankings with the raw data and after normalisation by the number of inhabitants or gross domestic product. The correlation between the numbers of TR and HCR is high, but with important divergences in countries with small numbers of HCR, which precludes the use of the number of HCR in many countries. The numbers of TR and HCR are approximately equivalent to the numbers of the top 5% and 0.05% most cited papers, respectively. We conclude that the number of TR is a simple and reliable indicator of research success that can be used in many more countries than the number of HCR. It can also be used for institutions if affiliations are correctly identified.


INTRODUCTION
A country's or institution's research system is a productive system that research policymakers need to understand in depth; bibliometrics provides an accurate, cheap, and simple way of obtaining a scientific description of this system. Therefore, one of the most important applications of bibliometrics, if not the most important, is to provide a foundation for research policy. [1] When pursuing an improvement of a research system, it should be expected that policymakers will make use of current bibliometric knowledge and take decisions based thereon. However, disconnections between current scientific knowledge and policy frequently cause misguided policy decisions. [2] This was detected in research policy in the EU 15 years ago, [3] and the disconnection is still present. [4] Although the reasons for this may be complex, it appears that the best way to correct it may be to use a simple indicator: "It is worth reminding that an indicator is successful not only if it is effective, but also if it is easily understood." [5] A similar problem has been described previously as a tension between professional and citizen bibliometrics: [29] the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent for users, cannot be calculated, or are difficult to interpret.
Metrics based on Nobel Prizes and prestigious medals and awards [6] are simple and might be a solution, except for their scarcity.

Highly cited researchers
Instead of metrics based on a small number of prizes and awards, a metric based on the number of highly cited researchers [7,8] appears to be more convenient because their number can be high. The rationale for this approach is that a research system that produces and employs many highly cited researchers will probably be more successful than others with fewer of them. [9] This is similar to counting prizes and awards but at a much less strict level, while retaining the notion that scientific progress is infrequent and that the success distribution of research results is heavy-tailed. [10][13][14] However, the number of highly cited researchers can be counted in many ways, and it might be biased by the counting procedure. In the case of the HCR, it has been demonstrated that their number is not a good predictor of the number of Nobel achievements, [15] and it has also been considered "a popular, albeit flawed, indicator of outstanding individual researchers." [8] In 2019, Ioannidis et al. [16] created another list of 105,026 top researchers (TR) drawn from 6,880,389 scientists who had published at least five papers during their career. In 2020, this list was updated to include a total of 159,684 TR, who lie within the top 2% of their main subfield discipline (out of 176 subfields) according to the Science-Metrix classification. [17] Ioannidis et al. identified their TR using an elaborate approach that resulted in a composite indicator specifically focused on capturing research success. This indicator is based on the number of citations but also on other details of publications during the career of a researcher. [17] Assuming that the TR are the researchers with the most successful careers, the simple notion that this list can convey to policymakers is that the distribution of TR across countries reflects the research success of each country and its potential to make future discoveries.

Aims and rationale of this study
The aim of this study was to provide a research assessment of countries based on the number of TR, similar to those based on the number of HCR, [12,14] but at a different level of stringency. In fact, the global numbers are very different: around 160,000 for the TR and 3,000-6,000 for the HCR. Moreover, although Ioannidis et al. [17] validated their selected TR at the individual researcher level, we addressed a different validation by comparing the distribution of TR across countries with evaluations based on the data in the Leiden Ranking. Although these two evaluations are similar in some aspects, they are different in others. The Leiden Ranking is purely based on citations, [18] ranking universities according to the share of papers in four top percentiles (50, 10, 5, and 1) when the world's papers are ordered by the number of citations (the US National Science Board also uses these indicators). In contrast, the TR are selected using a composite index, which is not purely based on citations. [17] Because, in the evaluation of universities, the number of papers in top percentiles has been validated against the highest scores given in peer review, [19,20] the Leiden Ranking seemed to be a convenient benchmark for the TR ranking. To perform the comparison, the first step was to find a certain top percentile (P top x%) in the Leiden Ranking for which the number of papers is similar to the number of TR across countries.
The second part of this study aimed to investigate the relationship between the numbers of TR and HCR because, as mentioned above, being among the latter has been taken as an indicator of research excellence. A reasonable hypothesis was that the numbers of TR and HCR measure the same property of the research of a country (let us call it success or excellence) but at different levels of stringency, as explained above.
The obvious test of this hypothesis would be to also find a certain top percentile (P top y%) in the Leiden Ranking for which the number of papers is similar to the number of HCR across countries. However, the global number of HCR is very small, and the hypothetical corresponding top percentile in the Leiden Ranking would be lower than the lowest top percentile (1%) reported by the Leiden Ranking.
In these circumstances, an alternative test of the hypothesis above was to demonstrate that, across countries, the numbers of HCR and TR are two points of the distribution function of success/excellence among researchers, at two different levels of success/excellence, the level being about 50 times higher for the HCR than for the TR.
Therefore, if the number of TR is associated with a certain P top x%, as hypothesised above, it should be possible to associate the number of HCR with another P top y%. Consequently, because any P top y% can be calculated from another P top x% if the total number of papers is known, [21] our hypothesis predicts that the number of HCR can be calculated from the number of TR; this prediction can be tested.

Data and methods
As already explained above, our aim was to compare the number of TR with the number of papers in a certain percentile across countries. However, neither Ioannidis et al. [22] nor the Leiden Ranking provides data at the country level that could be directly compared and used for such a study. Therefore, to obtain comparable measures at the country level, we identified the same universities in both the Leiden Ranking's and Ioannidis et al.'s lists and aggregated the data to the country level.
For the TR data, we downloaded Table 6 from the paper by Ioannidis et al.; [22] this is an Excel file containing the names and affiliations of 159,684 researchers who are ranked using a composite score that assesses scientists based on their career-long citation impact until the end of 2019. We also downloaded the Leiden Ranking 2020 (https://www.leidenranking.com/; August 21, 2020), which is also an Excel file, containing bibliometric data on 1,176 universities in six research fields and ten 4-year periods. To compare the Leiden data with the number of TR according to Ioannidis et al., we selected the "All sciences" field and "fractional counting." For our study, the relevant data from the Leiden Ranking were the numbers of papers in the four top percentiles: 1, 5, 10, and 50 (P top 1%, P top 5%, P top 10%, and P top 50%).
Then, each university in the Leiden list was identified in Ioannidis et al.'s Table 6. Most universities are listed in the Leiden Ranking under an English name, but this is not the norm in Ioannidis et al.'s Table 6; thus, the identification of the universities in this list was carried out based on the name both in the language of the country and in English. Furthermore, in the list by Ioannidis et al., many departments are recorded separately from the university to which they belong; in these cases, the number of TR was obtained for the university as a whole by adding the numbers of TR given separately for the university and its departments. We also checked the names of the departments carefully, as these are sometimes given in English and sometimes in the language of the country. The number of TR in university hospitals was not included in the number of TR for the university.
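A minimal sketch of this aggregation step, with made-up records and a hypothetical `parent_university` mapping standing in for the manual matching described above:

```python
from collections import defaultdict

# Hypothetical records mimicking rows of Ioannidis et al.'s Table 6:
# one row per researcher, with an institution string and a country code.
tr_records = [
    {"institution": "Utrecht University", "country": "NLD"},
    {"institution": "Utrecht University, Dept. of Chemistry", "country": "NLD"},
    {"institution": "Universidad Politecnica de Madrid", "country": "ESP"},
]

# Fold department-level affiliations into their parent university (in the
# real data this matching was done manually, in English and in the local
# language); unknown names are kept as-is.
parent_university = {
    "Utrecht University, Dept. of Chemistry": "Utrecht University",
}

def aggregate_tr(records):
    """Count TR per university (departments folded in) and per country."""
    by_university = defaultdict(int)
    by_country = defaultdict(int)
    for rec in records:
        university = parent_university.get(rec["institution"], rec["institution"])
        by_university[university] += 1
        by_country[rec["country"]] += 1
    return dict(by_university), dict(by_country)

by_univ, by_country = aggregate_tr(tr_records)
```

The country-level counts obtained this way are what the correlations in the following sections are computed on.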
The HCR data were downloaded from https://clarivate.com/webofsciencegroup/thanks/?org=65406 (July 15, 2021). We downloaded a folder with Excel files containing the data corresponding to 2001 and to each year from 2014 to 2020. The data for years 2014-2020 include the names and affiliations, including the country, of the HCR.
For the second part of our study, i.e., the calculation of the number of HCR from the number of TR, we used the method that is applied to calculate P top y% from P top x%. [21,23] In the first part of the study, we identified the P top x% that can substitute for the number of TR. If we found a P top y% that could substitute for the number of HCR, the following equation would apply:

P top y% = P top x% · e_p^(lg x − lg y)

where e_p is a constant that can be substituted by its proxy, the P top 10%/P ratio; [21,23] this ratio can be calculated from the Leiden Ranking data used in the first part of our study (Supplementary Data Table S1).
Therefore, our hypothesis predicts that, across countries, the number of HCR equals the number of TR multiplied by (P top 10%/P)², and a comparison of the empirical and calculated numbers of HCR will reveal whether the numbers of HCR and TR measure the same property of a country's research but at two different levels of stringency.
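The percentile conversion used here can be sketched in a few lines; the paper count and the e_p value below are illustrative, not taken from the data:

```python
from math import log10

def convert_ptop(p_top_x, x, y, e_p):
    """P top y% = P top x% * e_p**(lg x - lg y); e_p is proxied by P top 10%/P."""
    return p_top_x * e_p ** (log10(x) - log10(y))

# Moving from the top 5% to the top 0.05% multiplies by e_p**2,
# because lg 5 - lg 0.05 = lg 100 = 2.
example = convert_ptop(10_000, x=5, y=0.05, e_p=0.12)
```

With e_p = 0.12, a hypothetical 10,000 papers in the top 5% would correspond to 10,000 × 0.12² = 144 papers in the top 0.05%.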

Selection of the Leiden data and comparison to the number of TR
From the process of matching universities in the Leiden Ranking and in the TR list, after eliminating ambiguities and universities with zero records, we matched 1,111 universities in the two lists, belonging to 55 countries (Supplementary Data Table S1).
As described in Section 3, the first objective of our study was to compare the number of TR against data in the Leiden Ranking 2020, which reports the results for ten 4-year periods, from 2006-2009 to 2015-2018. Because the number of TR is a single value for each country, we had to select a particular percentile and period in order to calculate the correlation.
When selecting the percentile, because the TR represent about the top 2% of the total number of researchers, it seemed reasonable to use the top 5 or top 1 percentile of the Leiden Ranking, P top 5% or P top 1%. For statistical reasons, P top 5% was the best choice because P top 1% is too low for some universities. Furthermore, we eventually found that the number of TR and the P top 5% in the 2006-2009 Leiden period had similar values (see below).
The selection of the period had to be carried out empirically because the lists of the TR correspond to the whole career of the researchers, while the numbers of highly cited papers in the Leiden Ranking correspond to fixed periods of four years. The TR list includes the year in which each researcher published her/his first paper, thus making it possible to determine which citation period was dominant for the TR. Figure 1 shows that 50% of the TR published their first paper before 1984. Thus, many of the publications of the TR appeared before the first Leiden period (2006-2009), meaning that this period may be the most suitable for the validation, since earlier periods are not available.
To make a correct selection of the period, we compared the number of TR with the values of P top 5% for all periods (Supplementary Data Table S2). The scatter plot of countries constructed from the number of TR and P top 5% was consistent with the tight correlation described. This conclusion was more evident in the scatter plot of ranks, in which countries were ordered from higher to lower numbers and no large deviations from the general trend were observed (Supplementary Data Table S3; Figure 2). The larger deviations occurred in countries that are still developing efficient research systems (Qatar, Slovakia, Saudi Arabia, South Africa, and Turkey).
In summary, the high correlation coefficients between P top 5% and the number of TR in the 55 countries that we identified at the university level in the lists of the Leiden Ranking and Ioannidis et al. strongly suggest that the number of TR is a reliable indicator of research performance at the top 5% level of highly cited papers.

The TR-based country ranking
Using a minimum threshold of 30 TR, we listed 65 countries based on the numbers of TR (Supplementary Data Table S4; these data refer to all country institutions, not to a selection of universities as in the previous section). The USA makes up 42.6% and the UK accounts for 9.4% of all TR. Thus, these two countries account for more than 50% of all TR, and the first 10 countries account for 80% of all TR (Figure 3).
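The cumulative concentration described here is straightforward to compute; the counts below are toy values, not the real country data:

```python
def cumulative_shares(tr_counts):
    """Given TR counts sorted from largest to smallest, return the cumulative
    percentage of the global total covered by the first 1, 2, ... countries."""
    total = sum(tr_counts)
    shares, running = [], 0
    for n in tr_counts:
        running += n
        shares.append(100 * running / total)
    return shares

# Toy numbers: the first two entries alone exceed half of the total,
# as the USA and the UK do in the TR list.
shares = cumulative_shares([430, 90, 60, 40, 30, 20, 15, 10, 5])
```

Plotting such cumulative shares against country rank reproduces the shape of Figure 3.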
A country ranking based on the number of TR is a size-dependent ranking that does not reflect size-independent characteristics such as research efficiency and commitment to research excellence. To reveal these country characteristics, the number of TR can be normalised by the number of inhabitants or by the gross domestic product (GDP).
Normalisation by the number of inhabitants shows that high research efficiency and commitment to research excellence are restricted to very few countries. The first three countries (Switzerland, Denmark, and Sweden) have around 250 TR per million inhabitants, but in the countries in positions 20, 21, and 22 (France, Italy, and Greece) this figure decreases by a factor of four (Supplementary Data Table S4). Table 1 presents these data for the 46 countries with more than 100 TR. A different ranking is obtained when normalising by GDP, but a similar decrease in the value of the indicator is observed. The top-ranked country in this case is the UK, with 5.3 TR per billion US$ of GDP, while Spain and Portugal, in positions 25 and 26, have only 1.6 TR per billion US$ of GDP (Table 1).

The case of China merits particular attention. Although the number of TR in China is currently at the same level as Australia or France, when normalised by the number of inhabitants or by the GDP, it ranks very low. However, it is worth noting that, between the first (2006-2009) and the last (2015-2018) periods of the Leiden Ranking, the positions of China's universities have improved enormously (Supplementary Data Table S2).
Relevant differences in research success between neighbouring advanced countries (Germany and France versus Switzerland and The Netherlands) have been revealed using several bibliometric approaches, [4] and these results are reproduced by the number of TR. The number of TR per million inhabitants is 297 and 193 for Switzerland and the Netherlands, but 106 and 75 for Germany and France, respectively.

The numbers of HCR and TR are correlated
As mentioned above (Section 3), the second aim of our study was to demonstrate that the number of HCR could be calculated from the number of TR across countries. For this purpose, our first step was to investigate whether these two numbers are correlated.
In the data downloaded from Clarivate, we selected the first year with sufficient information for our purposes (2014) because we compared the number of TR with P top 5% in the Leiden period of 2006-2009 (Section 5.1).
In the downloaded list, the number of HCR was 3,216; in comparison, the number of TR was 159,684. The large difference between these numbers implies that, as might be expected, countries with a small number of HCR had a highly variable number of TR (Supplementary Data Table S5). Therefore, to enable a reliable comparison between the numbers of TR and HCR without the noise resulting from countries with small numbers of HCR, we deleted nine countries having one HCR and eight countries having two HCR (in integer numbers, the precision of 0, 1, and 2 is very low). We also eliminated China because it deviated too much from the general trend (a deviation explained by the above-mentioned rapid growth of research in China). These deletions reduced the number of countries from 49 to 33. Table 2 presents the numbers of HCR and TR for these 33 countries. The Spearman rank correlation coefficient between these two numbers was 0.80 (two-sided p-value, 2.1 × 10⁻⁸), and the Pearson correlation coefficient, eliminating the USA data because of its position as an outlier, was 0.97 (two-sided p-value, 9.3 × 10⁻²⁰).
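The two coefficients can be reproduced with a small, dependency-free sketch (Spearman's rho computed as the Pearson correlation of the ranks); the five country pairs below are invented, not the real Table 2 data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    """Rank from largest (rank 1) to smallest; ties are not handled,
    mirroring the rank plots in the paper."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the ranks."""
    return pearson(ranks(xs), ranks(ys))

# Invented HCR/TR pairs for five countries, roughly proportional.
hcr = [1600, 350, 120, 40, 10]
tr = [68000, 17000, 6000, 2100, 600]
```

In practice a statistics library (e.g., scipy.stats) would also supply the p-values reported in the text.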
The scatter plot of the numbers of HCR and TR was consistent with the tight correlation described but also showed that several countries, especially those with small numbers of HCR, deviated from the general trend. This conclusion becomes more evident in the scatter plot of ranks when the countries are ordered from higher to lower numbers. Figure 4 shows that Saudi Arabia, Poland, India, Israel, Norway, and New Zealand notably deviate from the line with unity slope and zero intercept. All these countries are situated in the upper-right part of the plot, which implies that they are situated in the lower part of the ranking.

To calculate the number of HCR from the number of TR, we knew that the number of TR is associated with P top 5%, but we did not know the P top y% with which the HCR could be associated (Section 3). Considering the global numbers of HCR and TR, 3,216 and 159,684, respectively, if the total populations of researchers from which these two sets were drawn were the same, the percentile corresponding to the HCR would be 0.1.

However, the total populations of researchers from which the TR and HCR are drawn, and the selection procedures, cannot be compared easily. The TR list is restricted to researchers who had published at least five papers over their career up to 2019, while the population of researchers in the 2014 list of HCR comprises those who published papers in the period 2002-2012. [8] These considerations suggested that 0.1 was only a guiding percentile; therefore, we calculated several P top y% from P top 5% and compared the results with the numbers of HCR. These comparisons suggested that the best percentile to associate with the HCR was 0.05.

Next, according to this finding and based on Eq. 3, the calculated number of HCR in each country equals the number of TR multiplied by the P top 10%/P ratio raised to the power lg 5 − lg 0.05 = 2, i.e., the square of this ratio; this ratio for the university system could be obtained from the Leiden data (aggregated at the country level in Supplementary Data Table S1). The ratio for the university system might not be the ratio for the whole country's research system. However, conjecturing that the research efficiency of a country's universities cannot be very different from that of the country's whole research system, we performed our calculations using the P top 10%/P ratio of the university system as the ratio for the research of the whole country.
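Under the assumptions above, the estimate reduces to a single multiplication; the TR count and the P top 10%/P ratio below are invented for illustration:

```python
from math import log10

def estimated_hcr(n_tr, ptop10_over_p, x=5.0, y=0.05):
    """Estimate the number of HCR from the number of TR as
    HCR = TR * (P top 10%/P) ** (lg x - lg y); with x = 5 and y = 0.05
    the exponent is lg 100 = 2, i.e., the square of the ratio."""
    return n_tr * ptop10_over_p ** (log10(x) - log10(y))

# Illustrative country: 3,000 TR and a university-system P top 10%/P of 0.13.
est = estimated_hcr(3_000, 0.13)
```

The estimate is then compared with the number of HCR reported by Clarivate Analytics for the same country.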

Calculation of the number of HCR from the number of TR
Next, we tested whether the number of HCR could be calculated from the number of TR, making use of the finding that the number of TR is associated with P top 5% and the number of HCR with P top 0.05%. Overall, the scatter plots of data and ranks (Figure 5; with ranks ordered from the highest to the lowest numbers) show good agreement between the calculated and observed numbers of HCR. For our purpose, the scatter plot of ranks is more informative than the scatter plot of data because it shows more details of the comparison in countries with the smallest numbers of HCR. Many countries deviate very little from the line with unity slope and zero intercept; however, at least eight countries (Japan, Sweden, Saudi Arabia, South Korea, Israel, Spain, Iran, and New Zealand) show notable deviations from this line.

DISCUSSION
The idea transmitted by the number of TR is that their distribution across countries reflects the research success of each country and its potential to make future discoveries. Indeed, the number of HCR has been used previously for research assessment, [7,9,11-14] and our study follows the same idea at a much lower level of stringency.

The number of TR is a reliable indicator of research success
In this study, we used the Leiden Ranking data as a benchmark of comparison for the number of TR in the same universities.
We have demonstrated that the number of TR is similar to and highly correlated with the number of top 5% highly cited papers reported in the Leiden Ranking (P top 5%) for the period 2006-2009, in the field of "All sciences" and using "fractional counting." The Pearson and Spearman rank correlation coefficients are very high, around 0.97, when the data for the USA are eliminated from the calculation of the Pearson coefficient because of its position as an outlier.
Consistent with the notion that a large proportion of the papers by the TR were published before 2006 (Figure 1), our data also suggest that the correlation might have been even higher for periods before 2006-2009 (Supplementary Data Table S2), which are not reported in the Leiden Ranking. A Pearson correlation coefficient possibly higher than 0.97 for periods before 2006-2009 would mean an almost complete dependence between the two measures, P top 5% and the number of TR.
In the scatter plot of ranks in Figure 2, countries such as China, Saudi Arabia, Qatar, South Africa, and a few others deviate appreciably from the general trend. These countries are currently developing new research systems, and their rapid growth might make it difficult to evaluate them based on the number of TR, who are selected considering a long research career. At least in the case of Saudi Arabia, the high number of TR may be due to a policy of extensively hiring highly positioned foreign researchers. [24]

Previously, it has been shown that the numbers of papers in top percentiles correlate with the highest scores given in peer review in the Research Excellence Framework in the UK. [19,20] Consequently, it can be assumed that P top 5% is a validated measure of research success and that the number of TR is another measure of it. It is worth noting that, even if the number of papers in top percentiles had not been validated against peer review, the high correlation between the number of TR and P top 5% would still have suggested that both parameters are measures of the same property of research.
The number of TR and P top 5% are obtained by two completely independent methods. P top 5% is exclusively based on citation counts, while the calculation of the parameter that supports the selection of the TR is much more complex (Section 2). They have in common only the fact that they measure the highest success, in numbers of papers and of researchers, respectively.
The Leiden Ranking includes only research universities, and our previous comparison refers to these universities. However, research universities are a reliable sample of a country's research. Indeed, they play a central role in all countries' research systems, [20] and most Nobel laureates [25] and highly cited researchers [9] work in universities. Therefore, it can be concluded that the number of TR reasonably reflects the research success of countries.
The 65 studied countries with at least 30 TR account for 99% of the TR, but only 19 countries account for 90% of the TR, indicating that a very low proportion of countries contribute significantly to global scientific progress (Figure 3; Supplementary Table S4). However, some countries make a significant contribution simply because of their large size.
Normalisation of the number of TR by the population or GDP of the country shows that some of the countries that contribute significantly because of their size are not efficient. For example, Italy and Spain rank 9th and 14th by the number of TR, respectively, but drop to the 30th and 32nd positions if their GDP is taken into consideration (Table 1). With respect to their GDP, these two countries are approximately four times less efficient than the leading countries.
The already mentioned differences in research success between countries that are economically and socially similar (the Netherlands, Switzerland, Germany, and France; Rodríguez-Navarro and Brito [4]) are also shown by the number of TR normalised by GDP. While the comparisons made in Rodríguez-Navarro and Brito [4] have a complex bibliometric basis, the comparisons made on the basis of the TR have a simple basis and might be more convincing for policymakers and citizens without bibliometric knowledge.
The high correlations that we found when comparing the number of TR with the Leiden data at the country level (by aggregating universities) were not found at the university level (data not shown). For the universities of some countries, we found good correlations, whereas in other countries the correlations were low. We suspect that, at the university level, correct matching of affiliations between the TR and the Leiden data is difficult, [26,27] while affiliations at the country level can be matched without difficulty. However, in the research policy of countries, assessing the research of institutions based on the number of TR should not face this obstacle because, in most cases, policymakers should have no difficulty in identifying the correct affiliations of the TR in their own country.
The TR are classified into 22 scientific fields and 176 subfields. [22] Although we have not studied the numbers of TR in these fields and subfields as indicators for research assessment, there is no reason to believe that they could not be used for this purpose. In this case, and in the case of research assessment at the institutional level, the only reasonable limitation is that the number of TR must be sufficiently high to be statistically reliable.

The number of HCR is a reliable indicator of research success in most countries
We found that the numbers of TR and HCR (Table 2) are tightly correlated; this supports the notion that both numbers are similar measures of research success but at very different levels of stringency.
To further investigate this conclusion, under the rationale described above (Section 3), we associated the numbers of TR and HCR with P top 5% and P top 0.05%, respectively, and calculated the number of HCR from the number of TR as described above (Section 4). The calculated and reported numbers of HCR are similar and tightly correlated across countries (Figure 5). This result supports the hypothesis that the number of HCR can be calculated from the number of TR, taking into account the research efficiency of each country as measured by the P top 10%/P ratio in research universities (Table 3).
The scatter plot of ranks of the calculated and reported numbers of HCR (Figure 5) reveals a picture that is consistent with the tight correlation observed between the two numbers, although some countries show notable divergences. Excluding Japan and Sweden, the most important divergences occur in countries with small numbers of HCR. These divergences indicate that, in these countries, the probability that the number of HCR fails to measure research success correctly is higher than in countries with many HCR. Apart from this possibility, overall, the deviations that occur in Japan, Sweden, South Korea, Israel, Spain, and New Zealand seem to reflect a problem of the indicator.
Several reasons might explain this failure. Although studying them lies beyond the scope of this work, it should be noted that the HCR are selected from highly cited papers using total counts, [8] and many of these papers have a large number of authors, some of whom have not contributed to highly cited original research. [28] These peculiar highly cited authors might have a high weight in some countries.
In summary, the number of HCR may be a "flawed indicator of outstanding individual researchers," [8] but, after aggregation, it is correct in many countries.

CONCLUSION
A country's number of researchers in the TR list of Ioannidis et al. [22] is a simple indicator that can be obtained by simply counting researchers in a list. Moreover, it transmits a simple idea of country research success: the more successful a country, the greater its number of successful researchers. In this study, we have confirmed that the number of TR can be used as an indicator of research success at the country level and for all scientific fields together. Furthermore, nothing suggests that it could not also be used at the level of specific scientific fields and institutions if the number of TR is sufficiently high to be statistically reliable.

Figure 1 :
Figure 1: Distribution of the top researchers (TR) in the Ioannidis et al. list, ordered by the recorded year of their first paper. The upper plot shows the number of TR, while the lower plot shows the cumulative number of TR.

Figure 2 :
Figure 2: Scatter plot of countries: number of Ioannidis et al. top researchers (TR) versus P top 5%. Left panel: scatter plot of data, excluding the USA because of its position as an outlier. Right panel: scatter plot of ranks, ordered from higher to lower values. The line with unity slope and zero intercept in the right panel is drawn as a guide to the eye.

Figure 3 :
Figure 3: Cumulative number of the Ioannidis et al. top researchers (TR) as a function of the ranking of the country (Supplementary Data Table S4). The first two countries, i.e., the USA and the UK, account for more than 50% of the total number of TR.

Figure 4 :
Figure 4: Scatter plot of countries: numbers of Ioannidis et al. top researchers (TR) versus Clarivate Analytics highly cited researchers (HCR). Left panel: scatter plot of data, excluding the USA because of its position as an outlier. Right panel: scatter plot of ranks, ordered from higher to lower values. The line with unity slope and zero intercept in the right panel is drawn as a guide to the eye.

Figure 5 :
Figure 5: Plots of the number of highly cited researchers recorded by Clarivate Analytics (HCR) versus the homologous number calculated from the number of Ioannidis et al. top researchers (Table 3). Left panel: scatter plot of data, excluding the USA because of its position as an outlier. Right panel: scatter plot of ranks, ordered from higher to lower numbers. The line with unity slope and zero intercept in the right panel is drawn as a guide to the eye.

Journal of Scientometric Research, Vol 11, Issue 3, Sep-Dec 2022

Table 1: Number of Ioannidis et al.'s top researchers: total and normalised by number of inhabitants and GDP.
a World Bank data for 2019, except Taiwan.

Table 3 presents the calculated numbers of HCR and the numbers of HCR reported by Clarivate Analytics for 31 countries (with respect to Table 2, Hong Kong (China in the Leiden Ranking) and Iceland are omitted because they are not recorded in Supplementary Data Table S1 and their P top 10%/P ratios are unknown). When comparing the reported and calculated numbers of HCR, the Spearman rank correlation coefficient was 0.83 (two-sided p-value, 7.6 × 10⁻⁹) and the Pearson correlation coefficient, excluding the USA because of its position as an outlier, was 0.97 (two-sided p-value, 5.1 × 10⁻¹⁸).