Grading business journals: A comparative analysis of ABS, ABDC and JCR quartiles and proposing an algorithm-based classification

There are multiple journal rankings that assess academic journals in business research. Among them, ABS (AJG), ABDC and the JCR quartiles are the most widely used in business schools across the globe. Which performs best in grading business journals according to their academic performance? In this study, we used Principal Component Analysis (PCA) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to evaluate 103 business journals on six scientometric indicators. We then proposed a grading simulation approach that simulates the original grades from the TOPSIS scores; the results suggest that the JCR quartile is the closest to our simulation, with ABDC and ABS ranked second and third, respectively. Lastly, drawing on the K-means clustering algorithm, we grouped the journals into four ordinal classes based on the ABS, ABDC and JCR quartile grades and the TOPSIS scores.


INTRODUCTION
Journal grading has been used for the evaluation of research performance within and across institutions in many countries. [1,2] Business journals comprise a significant portion of social science journals in academia, and there are multiple business journal rankings proposed by various institutions. In this paper, we focus on three commonly referenced business journal rankings or classifications: the UK's Chartered Association of Business Schools' Academic Journal Guide (ABS), the Australian Business Deans Council's List (ABDC) and the Journal Citation Reports' (JCR) journal quartiles released by Clarivate Analytics. These rankings are not exclusively for business journals; for example, ABS and ABDC also cover many economics and psychology journals, while JCR includes journals across almost all disciplines. Nevertheless, ABS and ABDC are both business-focused and proposed by business research organizations, while JCR provides a "business" category among its indexed journals, so all three assessments can serve as important criteria in gauging the quality of the research outputs of business journals.
The three rankings, however, take very different approaches to assessing business journals. ABS employs a combination of expert panels and objective data measurements from various metrics such as the JCR and the SCImago Journal Rank. [3] ABDC's methodology is predominantly subjective and is validated by panels of experts. [4] The JCR quartiles, on the other hand, are based solely on each year's Journal Impact Factors (JIF) published by Clarivate. Journals in the same research field are partitioned into four equal groups, in which Q1 refers to the journals with the highest 25% of JIFs of the previous year. In a nutshell, the three rankings represent three conventional techniques in academic journal assessment: subjective (ABDC), objective (JCR) and mixed (ABS). While each ranking is not without its critics, [4][5][6][7] in this study we aim to quantitatively evaluate and compare the three rankings using uniform standards. Moreover, based on the evaluation results, we endeavour to propose an algorithm-based business journal classification, which can serve as an alternative reference for business journal grading.

Data
First, journal selection. To facilitate data analysis, we required the candidate journals to be included concurrently in the latest versions of ABS, ABDC and JCR's "business" category. This resulted in 126 journals that appeared in ABS (2018), ABDC (2019) and JCR (2020). Moreover, to exclude emerging or recently included journals that may not yet have received due attention, we required the journals to have a five-year impact factor in the JCR; the final sample comprises 103 journals.
The Kaiser-Meyer-Olkin result (KMO = 0.79, p < 0.001) indicates the data are suitable for factor analysis. [16] We then obtained two PCs, Z_1 and Z_2, which cumulatively explained 80.2% of the total variance.

2. Compute the indicators' coefficients in the linear combinations:

B_zj = L_zj / √E_z,

where the subscript z refers to the two PCs and the subscript j denotes the six indicators. L_zj is the loading of the two PCs on the six indicators, and E_z is the extracted eigenvalue of each of the two PCs (3.796, 1.016). Each indicator obtains two coefficients: B_1j and B_2j.
3. Take the weighted mean of the two coefficients to produce a composite coefficient R_j:

R_j = (var_1 · B_1j + var_2 · B_2j) / (var_1 + var_2),

where var_1 and var_2 are the respective percentages of variance of the two PCs (62.26%, 16.93%).
4. Normalise R_j to obtain the standardized weight W_j = R_j / Σ_j R_j. Results are presented in the rightmost column of Table 1. AIS, which measures each article's impact-adjusted citations, is the most important indicator (19.8%). IF5 has a higher weight (18.2%) than JIF (17.1%), which highlights the significance of the long-term influence of a journal.
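The four weight-calculation steps above can be condensed into a short Python sketch. The paper publishes no code, so this is an illustration: the journal-indicator matrix `M` is random placeholder data, and the resulting weights are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((103, 6))                      # placeholder 103-journal x 6-indicator matrix

Z = (M - M.mean(0)) / M.std(0)                # standardise the indicators
corr = np.corrcoef(Z, rowvar=False)           # 6 x 6 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)       # eigh returns ascending eigenvalues
order = eigvals.argsort()[::-1][:2]           # keep the top two PCs
E = eigvals[order]                            # eigenvalues E_z of the two PCs
L = eigvecs[:, order] * np.sqrt(E)            # loadings L_zj
B = L / np.sqrt(E)                            # B_zj = L_zj / sqrt(E_z)

var = E / eigvals.sum()                       # variance shares of the two PCs
R = (B * var).sum(axis=1) / var.sum()         # composite coefficient R_j
W = R / R.sum()                               # normalised weights W_j, summing to 1
```

With the paper's data, `E` would be (3.796, 1.016) and `var` (62.26%, 16.93%); here both are artifacts of the random matrix.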

Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
TOPSIS is a multiple-criteria decision-making (MCDM) method originally proposed by Hwang and Yoon [17] that has been extensively used in a wide spectrum of decision-making applications. [18] The central idea of TOPSIS is that the selected alternative should have the shortest geometric distance from the positive ideal solution (PIS) and the longest distance from the negative ideal solution (NIS). [17] TOPSIS produces a cardinal ranking of solutions based on the full use of attributes. [19] Nevertheless, a major deficiency of the original TOPSIS method is that attribute weights were determined subjectively by expert evaluation, [20] so many studies extend TOPSIS by proposing different weight-determination approaches. [21,22] In this study, we extended TOPSIS by applying PCA to determine the indicator weights. Next, following Hwang and Yoon's [17] approach and its later improvements, [23] we used TOPSIS to rank the 103 journals; the steps are as follows.
Second, indicator selection. To ensure that indicators across journals come from a single, reliable source, the source must cover all 103 journals. After comparison, we decided to use part of the "key indicators" provided by the JCR. There are 13 key indicators listed on the Web of Science, falling under the "impact metrics", "influence metrics" and "source metrics" classifications. [8] Note that the JCR quartiles are based solely on JIFs, and most of the other key indicators did not show significant multicollinearity with JIF, so the results are expected to differ from the JCR quartiles.
Among the 13 indicators, we excluded those not directly related to journals' academic performance, such as citable items, percentage of articles in citable items, and cited and citing half-life. [9] We also excluded indicators that exhibit a high level of multicollinearity with others: total cites, Eigenfactor score and average JIF percentile. Accordingly, six indicators were retained: JIF, 5-year impact factor (IF5), impact factor without journal self-cites (IFNS), immediacy index (IMI), article influence score (AIS) and normalized Eigenfactor (NEF). The Eigenfactor (EF) is a score that reflects the total importance of a journal; it is based on articles published in the previous five years and cited in the JCR year, with citations from highly ranked journals given greater weight. [10] In addition, to reduce self-citation bias, [11,12] we propose an other-cited rate (OCR), obtained by dividing IFNS by JIF; IFNS was therefore replaced by OCR. Note that although JIF and IF5 exhibited a moderate but acceptable level of multicollinearity (VIF = 4.09 < 10), we kept both because they measure different aspects of journal influence. A description of the six indicators is presented in Table 1.
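The OCR derivation and the VIF screening described above can be sketched as follows. This is an illustration with simulated indicator values, and `vif` is a helper defined here, not a function from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
jif = rng.uniform(1, 10, 103)                 # hypothetical JIF values
ifns = jif * rng.uniform(0.6, 1.0, 103)       # impact factor without self-cites
ocr = ifns / jif                              # other-cited rate, in (0, 1]

def vif(X, j):
    """Variance inflation factor of column j regressed on the other columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])   # add an intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)                        # VIF = 1 / (1 - R^2)
```

A VIF above 10 is the conventional cut-off the paper applies; JIF and IF5's reported VIF of 4.09 falls below it.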

Determine the weights of indicators
In this study, we employ the Principal Component Analysis (PCA) method to determine the indicators' respective weights. [15] Although methodological details vary, the main steps of applying PCA are to calculate the indicators' coefficients in the linear combinations of the principal components (PCs), then take the weighted mean of each indicator's coefficients (there is normally more than one PC) and normalise it to obtain the weights. The following shows the concise steps of the weight calculation.
1. Construct the journal-indicator decision matrix and run the PCA test. Let M be the decision matrix, with the 103 journals as rows and the six indicators as columns.

The subsequent TOPSIS steps are:

1. Construct the normalised decision matrix Z based on matrix M in Equation (1):

Z_ij = M_ij / √(Σ_i M_ij²)

2. Construct the weighted normalised matrix V by multiplying Z by the weights W_j produced in Equation (3):

V_ij = Z_ij · W_j

3. Determine the ideal best and ideal worst values:

A+ = {(max_i V_ij | j ∈ B_1), (min_i V_ij | j ∈ B_2)},  A- = {(min_i V_ij | j ∈ B_1), (max_i V_ij | j ∈ B_2)},
where B_1 is associated with the benefit criteria and B_2 denotes the cost criteria. [24] In this study, all six indicators are positive indices, meaning the higher the value the better. Therefore, the maximum V_ij value is the ideal best value A+, while the minimum V_ij value is the ideal worst value A-.
4. Calculate each journal's Euclidean distances from V_ij to the PIS (D_i+) and from V_ij to the NIS (D_i-):

D_i+ = √(Σ_j (V_ij − A_j+)²),  D_i- = √(Σ_j (V_ij − A_j-)²)

5. Evaluate the relative closeness to the ideal solution:

C_i = D_i- / (D_i+ + D_i-),

where C_i is the final TOPSIS evaluation score used for the journal rankings.
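The TOPSIS steps above can be condensed into a short Python sketch. The decision matrix and weights here are placeholders (random data and uniform weights stand in for the real indicator matrix and the PCA-derived W_j); all six criteria are treated as benefit criteria, as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((103, 6))                 # placeholder decision matrix
W = np.full(6, 1 / 6)                    # placeholder weights summing to 1

Z = M / np.sqrt((M ** 2).sum(axis=0))    # vector-normalised decision matrix
V = Z * W                                # weighted normalised matrix
A_pos = V.max(axis=0)                    # ideal best (all benefit criteria)
A_neg = V.min(axis=0)                    # ideal worst

D_pos = np.sqrt(((V - A_pos) ** 2).sum(axis=1))   # distance to the PIS
D_neg = np.sqrt(((V - A_neg) ** 2).sum(axis=1))   # distance to the NIS
C = D_neg / (D_pos + D_neg)              # closeness score C_i, in [0, 1]
rank = np.argsort(-C)                    # T-Rank: highest C_i first
```

Sorting journals by descending `C` reproduces the T-Rank ordering used in the grading simulation.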

T-Rank grading simulation
To evaluate the three rankings against the TOPSIS results, the commonly used paired-sample t test is not suitable because ABS and ABDC use grades rather than exact ranking positions to evaluate journals. [25] To address this issue, we propose a T-Rank grading simulation method. Specifically, we use the T-Rank to mimic the original ABS/ABDC/JCR grade/quartile (i.e., the original category). We then create cross tabulations of the simulated and original categories and calculate their consistencies, [1] for example for the distributions of the 103 journals across the four grades of ABDC. To quantify the results of the grading simulation, we propose the S_i and D_i scores. S_i measures the overall match between the original and simulated rankings, and D_i calculates the overall over-grade/under-grade of the rankings.

S_i and D_i are calculated as follows:

S_i = Σ_c (1/k_c²) Σ_{x∈c} |P_x − P_x'|,  D_i = Σ_{P_x−P_x'>0} (P_x − P_x') / Σ_{P_x−P_x'<0} |P_x − P_x'|,

where k_c is the number of journals in an original category c, and n denotes the number of journals in the whole ranking (103). P_x is a journal's original category, and P_x' refers to its T-Rank simulated category; |P_x − P_x'| is the distance between the original and simulated categories. For example, if a journal is assigned 'A' (converted to the numeric value 3) in ABDC and 'T-Rank-S4' (converted to 1) in the simulated ranking, the distance is 2. P_x − P_x' > 0 means the original ranking over-grades a journal, while P_x − P_x' < 0 suggests the journal is under-graded. By aggregating the distances of all journals, normalized by the squared sample size of each category, S_i measures the overall deviation between the original and simulated journal rankings: the higher the S_i score, the more the original ranking deviates from the TOPSIS results. By dividing the aggregated absolute distance of the over-graded journals by that of the under-graded journals, a D_i score > 1 suggests the over-graded journals outweigh the under-graded ones, and vice versa. Table 3 displays cross tabulations of the original journal categories and the T-Rank grading simulations. The S_i scores suggest that the JCR quartiles are the closest to the grading simulation. This is not surprising, because the six indicators used in the TOPSIS analysis were all derived from the JCR report. As for the other two rankings, ABDC (S_i = 1.537) performs better than ABS (S_i = 1.760) in the grading simulation. ABS also has the only distance-3 journal (No. 93, G4 in ABS and G1 in the simulation) among all simulations. As for the D_i scores, an interesting finding is that all three equal 1 (the ratios in brackets are the sum of over-graded distances to the sum of under-graded distances). This suggests that although the three rankings have different grading deviations, they all exhibit the over-grading versus under-grading issue to the same extent.
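The S_i and D_i scores described above can be sketched as follows. The grade vectors here are random stand-ins for the original and simulated categories (coded 4 for the top grade down to 1), and the per-category normalisation follows the prose description of the scores.

```python
import numpy as np

rng = np.random.default_rng(2)
orig = rng.integers(1, 5, size=103)    # stand-in original categories, 1..4
sim = rng.integers(1, 5, size=103)     # stand-in T-Rank simulated categories

dist = orig - sim                      # >0: over-graded, <0: under-graded
# S_i: per-category aggregated distances, normalised by squared category size
S = sum(np.abs(dist[orig == c]).sum() / (orig == c).sum() ** 2
        for c in np.unique(orig))
# D_i: total over-graded distance over total under-graded distance
over = dist[dist > 0].sum()
under = -dist[dist < 0].sum()
D = over / under if under else float("inf")
```

A D value of exactly 1 (as the paper reports for all three rankings) means the over-graded and under-graded distances balance out.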

Proposing a new classification using K-means clustering
Although TOPSIS provides a cardinal ranking of the journals, it does not cluster the data. Considering that journal grading is based on journal clustering rather than journal ranking, we attempt to use the K-means clustering technique to propose a new business journal classification. K-means clustering is an iterative, data-partitioning algorithm that assigns n multidimensional observations to one of k clusters. [26] The algorithm begins with an initial partition of k clusters and then iterates the partitioning process until the within-cluster squared Euclidean distances are minimized. [27] In this study, we used the original categories of the three rankings and the TOPSIS score data as four dimensions of a journal to perform the K-means clustering analysis.
First, the four dimensions were normalized to the range [0, 1]. Second, to determine the cluster number k, we used the "elbow" method, [28] which tests different values of k (1 to 10) and computes the total within-cluster sum of squares (WSS), estimating the best compromise between WSS and k. Using the "factoextra" R package [29] and the "fviz_nbclust" function (kmeans, method = "wss"), the relation between WSS and k is plotted in Figure 2, and the elbow of the curve is considered the optimal k; in this case, k = 4. The third step is data clustering. Using the "fviz_cluster" function (ellipse.type = "euclid", ggtheme = theme_minimal()), the results of the clustering are presented in Figure 3, and a summary of the journal classification is provided in Table 4, where Class A represents the top class of journals and Class D the bottom.
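A Python analogue of the elbow procedure can be sketched as follows (the paper itself uses R's factoextra). The data matrix is a random stand-in for the four normalised dimensions, and `kmeans_wss` is a plain k-means helper defined here for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((103, 4))    # stand-in: journals x (ABS, ABDC, JCR-Q, TOPSIS) in [0, 1]

def kmeans_wss(X, k, iters=100, seed=0):
    """Plain k-means; returns the total within-cluster sum of squares."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centroids
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # squared distances
        labels = d.argmin(1)                                  # nearest centroid
        new = np.array([X[labels == c].mean(0) if (labels == c).any()
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return ((X - centers[labels]) ** 2).sum()

wss = [kmeans_wss(X, k) for k in range(1, 11)]   # elbow-curve values for k = 1..10
```

Plotting `wss` against k and picking the bend of the curve mirrors the fviz_nbclust "wss" method; on the paper's data this yields k = 4.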

DISCUSSION AND CONCLUSION
Journal ranking and grading is a never-ending debate in academia, especially in scientometrics research. In many countries and institutions across the globe, journal ranking serves as an important criterion in measuring the research productivity of researchers and institutions: academics acquire tenure and promotion, and universities receive accreditation, by publishing in highly graded journals. [1,3] This motivated us to revisit the established journal rankings and compare them using quantitative indicators. In this study, using the PCA-TOPSIS approach, we ranked 103 journals in the business research field and compared the results with the three widely adopted business journal rankings via a T-Rank grading simulation method. The analysis suggests that JCR's business-journal quartiles are the closest to the grading simulation, while ABDC performs better than ABS. Based on the three rankings and the TOPSIS scores, we proposed a new classification for business journals using the K-means clustering algorithm.
This study also comes with some limitations that might be addressed in future studies. First, as noted, the six indicators for the TOPSIS analysis all came from the JCR; although the other five indicators do not exhibit significant multicollinearity with JIF, the results can still be somewhat biased. Further studies could use multi-source indicators, as in Hirschberg and Lye's [1] research. Nevertheless, we believe the comparison between ABDC and ABS based on the TOPSIS analysis to be more objective. Second, we acknowledge that the grading simulation calculation proposed in this paper has its limitations: the S_i score is likely to be affected by the denominators in Equation (12), i.e., the numbers of journals in the original journal categories. Lastly, the K-means algorithm is also subject to criticism. For example, K-means identifies spherical clusters in which each cluster has a roughly equivalent number of observations. [27,30] However, judging from our TOPSIS analysis, the top five journals (Journals No. 1, 10, 4, 6 and 7) have significantly higher C_i scores than the other journals, which explains why they are positioned outside the spherical cluster in the figure. Hence, we suggest that adjustment by expert panels is still needed in an algorithm-based journal grading process. Finally, we must acknowledge that each measure has its own limitations and could be subject to variation. Future studies should examine more scientometric measures to ensure the reliability and validity of journal evaluations.
The distributions of the 103 journals across the four ABDC grades are: A*(25), A(56), B(17), and C(5). We then simulate the original ABDC distribution with the T-Rank and check how many journals are mismatched in the cross tabulations. As shown in Table 3, the 'T-Rank-S1' column cross-tabulated with the 'ABDC' rows refers to the top 25 T-ranked journals as given in Table 2. While 17 of the top 25 T-ranked journals fall under 'A*' in the original ABDC ranking, the other 8 journals are 'A' in the original ranking. To facilitate comparison, we merged Grades 4* and 4 in ABS, so that all three rankings have four categories.

Figure 1 :
Figure 1: Scatter plots of the TOPSIS ranking, JCR impact factor ranking and ABS (a), ABDC (b) grades. Dots in different colours represent different ABS and ABDC grades. The number on top of each dot denotes the JCR ranking of the corresponding journal (the "No." column in Table 1).
No.: ranked by JCR impact factors (2019). Journal abbreviations were obtained from the JCR report. C_i score: the TOPSIS evaluation score. T-Rank: TOPSIS rankings. JCR-Q: JCR quartiles. Note that the quartiles are computed over the 103 journals used in this analysis and are therefore not entirely identical to the original JCR data. ABS: ABS grade (2018); 4* denotes the top grade and 1 the bottom. ABDC: ABDC grade (2019); A* is the top grade and C the bottom.

Figure 3
Figure 3 shows that Classes A and D have more deviated journals than the other two classes. This is mainly due to their disparate performance across the four dimensions. For example, Journal No. 35 (Corp Soc Resp Env Ma) is Q2 in the JCR quartiles and ranked 48th in the T-Rank, but G1 in ABS and 'C' in ABDC. The algorithm assigned it to Class D because the journal sits nearer to D's cluster centroid than to those of the other three clusters.

Figure 3 :
Figure 3: K-means clustering of business journals.

Table 2
The two scatter plots in Figure 1 visualize the relationship between the three journal rankings and the T-Rank (JCR quartiles can be differentiated by the cut-offs at values 25, 50 and 75 on the horizontal axis). As can be seen from the figure, while most journals broadly remain in similar positions across the rankings, noticeable differences can still be found. Table 2 presents the C_i scores, TOPSIS rankings (T-Rank, ranked by C_i scores), ABS and ABDC grades and JCR quartiles of the 103 business journals; journals are ranked by their JCR impact factors. For example, most 4*/ABS and A*/ABDC journals are ranked in the top 30 of the T-Rank, whereas a few of them are placed 40+ or even 50+ in the JCR ranking. Quant Mark Econ is a 3/ABS and A/ABDC journal placed 62nd in the T-Rank, but it is surprisingly ranked 100th in the JCR ranking, almost at the bottom of the 103 journals. In light of this, in the next section we evaluate how the three rankings perform based on the C_i scores.

Table 2: TOPSIS results, JCR rankings and quartiles, and ABS and ABDC grades of business journals.