Context-Based Adoption of Ranking and Indexing Measures for Cricket Team Ranks

There is an international cricket governing body, the International Cricket Council (ICC), that ranks the expertise of all the cricket-playing nations. The ranking system followed by the ICC relies on the wins and defeats of the teams. The model used by the ICC to implement rankings is deficient in certain key respects: it ignores key factors like winning margin and strength of the opposition. Various measures of the ranking concept are presented in this research. The proposed methods adopt the concepts of the h-index and PageRank to present more comprehensive ranking metrics. The proposed approaches not only rank the teams on their win/loss statistics but also take into consideration the margin of victory and the quality of the opposition. Three cricket team ranking techniques are presented, i.e., (1) Cricket Team-Index (ct-index), (2) Cricket Team Rank (CTR) and (3) Weighted Cricket Team Rank (WCTR). The proposed metrics are validated on a cricket dataset extracted from Cricinfo, having instances for all three formats of the game, i.e., T20 International (T20i), One Day International (ODI) and Test matches. A comparative analysis between the proposed and existing techniques, for all three formats, is presented as well.

A ranking system attempts to find a transitive relationship in a given dataset. For instance, if Team One wins over Team Two and Team Two wins over Team Three, it can be said that: (team one > team two > team three). However, complications may occur when relying on a system that is completely dependent upon winning and losing. If Team Three wins a game played against Team One, the relationship in the data is intransitive, as (team one > team two > team three > team one), and if this is the only data available, a violation in ranking may take place. Situations like this prevail repeatedly in sports and need to be tackled. The International Cricket Council (ICC) is the governing body of cricket. In the past, it managed team rankings for all cricket-playing nations using an ad hoc system based primarily on winning and losing; the ranking process was simply a by-product of regulating international cricket matches on a regular schedule. Afterward, a new concept was introduced and implemented: all teams are assigned a certain number of points based on their opponent's strength as well as the result of the match. The ICC implemented the idea to ensure that dead rubbers, matches played after a series has already been decided, still have some significance. In the past, if a team won the first three matches of a five-game series, it did not have much to play for in the final two. The series was decided, and there was no advantage in winning it 5-0 compared to 3-2. Teams could rest key players and give inexperienced players a chance to play. Now, a 5-0 series win gives a team more points and more opportunity to move up the points table. This paper proposes the Cricket Team-Index (ct-index) for cricket team ranking, an adoption of the h-index [Hirsch (2005)]. The h-index is a state-of-the-art indexing strategy that is used to measure the productivity and citation impact of scholars, based on their most cited work, i.e., their research papers and the number of citations received in other publications.
This paper maps the citations used in the h-index to the winning margin in terms of the number of wickets and runs. The higher a scholar's citation counts, the higher the h-index; therefore, the higher the winning margin, the higher a team should be ranked. This paper argues that a single run cannot be worth a wicket, so the average worth of a wicket is computed using batting statistics from the preceding three years. The ct-index only considers the statistical figures of wickets and runs in terms of winning margin; the strength of the opponent teams is neglected. To measure the rank and strength of a team, this paper proposes the Cricket Team Rank (CTR), which is an adoption of PageRank [Page, Brin, Motwani et al. (1999)]. Intuitively, the more matches a team wins against stronger teams, the higher its rank will be. CTR observes the strength and weakness of teams while neglecting the numeric figures of runs and wickets by which a match is won. The third proposed technique is the Weighted Cricket Team Rank (WCTR), which is also a modification of PageRank like CTR, but it includes weights based on the figures of runs and wickets by which a match is won. The rest of the paper is organized as follows: Section two presents a literature review, section three discusses the current ranking methods and the proposed methods in more detail, and section four provides the data used in the experiments and the experimental results. Section five provides a discussion and brief analysis of the results, while section six concludes the presented research.

Related work
As sport is such a finely tuned competitive endeavor, and because many millions of dollars can be connected to just one match, the task of accurate team ranking is critical. Rankings produced by outdated techniques are not reliable; the h-index [Hirsch (2005)] and PageRank [Farooq, Khan, Malik et al. (2016); Page, Brin, Motwani et al. (1999)] approaches are more modern and deliver more reliable results. Ranking is practiced in almost all sports, and different methods for producing ranks have been presented in the past. Examining batsmen's performance using a parametric control chart, Bracewell and Ruggiero documented interesting outcomes [Bracewell and Ruggiero (2009)]. Qader et al. [Qader, Zaidan, Zaidan et al. (2017)] presented a technique for ranking football players. They used multiple criteria for decision making, i.e., 12 tests belonging to three categories (five fitness, three anthropometric, and four skill tests). Twenty-four U17 players were taken as test data, and the results were similar to those of the existing system. Applying social network analysis, Duch et al. [Duch, Waitzman and Amaral (2010)] created a method of ranking individual soccer players. Previous researchers have attempted to use PageRank to deliver reliable rankings of cricket teams [Mukherjee (2012)] and/or cricket players, but these attempts did not harness the power the h-index brings to ranking teams, nor did they employ an evaluation routine to calculate the relative values of runs and wickets. Mukherjee [Mukherjee (2012)] concluded that there is no way to accurately determine rank based only on the number of wins; the quality of a win is also important in creating a metric to analyze a team's strength. Using the PageRank algorithm, the author created a formula to better understand the strength of a team and its captain. Likewise, Borooah et al.
[Borooah and Mangan (2010)] observe that the existing traditional ranking system has several drawbacks. A batsman ranking system that relies on the batting average alone does not take into account the time factor across matches: a batsman with consistent lower scores might fare better, at least temporarily, than a batsman who has a typically high average but suffers from a rough patch. The authors also claim that, in the current system, the runs a player scores for his team are entirely discounted and should be given value; their research is an attempt to resolve these perceived flaws. Amin et al. [Amin and Sharma (2014)] presented a cricket batsman ranking mechanism for the Indian Premier League (IPL). The authors adopted the ordered weighted averaging (OWA) operator using the highest score, batting average, strike rate, and number of fours and sixes hit by the batsman. The OWA score was subjected to regression for the final ranking of the player. Pradhan et al. [Pradhan, Paul, Maheswari et al. (2017)] argued that the h-index and its popular adaptations were good at ranking highly cited authors but not very successful at resolving ties between medium- and low-cited authors. As the majority of authors come under the low-to-medium-cited category, they proposed a methodology, the C3-index, to resolve ties between the low- and medium-cited categories and to predict the future rankings of authors early in their careers. It was shown that the proposed C3-index remained more consistent and efficient than the h-index and its well-known adoptions. Citation-based metrics like the Relative Citation Ratio (RCR) are used as alternative ranking techniques to different PageRank adoptions. Dadelo et al. [Dadelo, Turskis, Zavadskas et al. (2014)] argued that current basketball player ranking systems lack objectivity as they use situational factors (performance statistics) of the game.
They proposed a multi-criteria systematic solution that uses the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and design principles for an algorithm based on that method. Mukherjee [Mukherjee (2014)] argued that the rating of bowlers and batsmen in cricket is done by their wicket and run averages respectively, but this does not take into account the 'quality' of those statistics, i.e., the rank of the batsman dismissed or the rank of the bowler against which a batsman scored. He proposed a refined method to quantify the 'quality' of the statistics used for ranking by applying Social Network Analysis (SNA) to rate the players within team performance. Min et al. [Min, Kim, Choe et al. (2008)] presented a methodology to predict the outcome of a match by combining a Bayesian classifier with rule-based reasoning, observing that results are not only stochastic but that team planning can be represented by rules. They tested their framework on football matches and called the system the Football Result Expert System.

H-index and its significant extensions
Measuring the value and productivity of a researcher or scientist against the broader scientific community is complicated because there are both quantitative and qualitative comparisons to be made. The h-index [Hirsch (2005)] was designed to accommodate that exact situation, allowing for an evaluation of the number of papers by a specific scientist as well as their impact on the field. Impact was studied by looking at the number of citations of that scientist's work by others. The h-index is a useful tool, but when the datasets reach a certain high level of citation, it yields unreliable results. Egghe [Egghe (2006)] created the g-index to compensate for this flaw. However, both indexing techniques, i.e., the g-index and the h-index, ignore the time at which a paper was published and received its citations. As a result, Burrell [Burrell (2007)] brought forth the m-quotient, which incorporates career length into the h-index by dividing the h-index value by the total duration of the researcher's activity. This technique has not yet been implemented in other areas. Adding to the field of study, Daud et al. [Daud, Muhammad, Dawood et al. (2015)] proposed the t-index. By adopting the h-index, they created a weighted environment in which they could consider the values of the runs and the wickets by which a match is won. However, the t-index uses the same weight for both runs and wickets, which is ineffective; even laypeople are aware that wickets should be given a higher weight than runs. Nykl et al. [Nykl, Campr and Ježek (2015)] proposed a personalized method to rank authors of scientific papers using journal values. They used an adoption of the PageRank algorithm as well as other popular measures like h-index, citation count, publication count, and publication author count to rank authors. Pérez-Rosés et al.
[Pérez-Rosés, Sebé and Ribó (2016)] proposed an authority score computation method to rank profiles using their skills and endorsements. The authors calculated the authority score by keeping in view the relations between different skills, and ranking is done using a PageRank algorithm on weighted graphs generated for different skills. Degree- and other centrality-based heuristics are commonly used in the literature to estimate the impact of individuals on social media; Zhang et al. [Zhang, Wang, Jin et al. (2015)] argued that these techniques have major design flaws and proposed a heuristic scheme based on PageRank to maximize impact on social media.
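For reference, the baseline h-index that the extensions above build upon can be computed directly from a list of citation counts. A minimal sketch (the function and variable names are ours, not from any cited work):

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h
```

For example, a scholar with citation counts [10, 8, 5, 4, 3] has an h-index of 4, since four papers each have at least four citations.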

PageRank and its major extensions
PageRank was developed as a means by which web pages could be evaluated comparatively. PageRank calculates the relative strength of a web page while ignoring the frequency with which the page is requested. Frequency of requests can be important, however, so Haveliwala [Haveliwala (2002)] created a modified version in 2002, in which the value of a page is read as a relationship between the page and its linked pages. Manaskasemsak et al. [Manaskasemsak, Rungsawang and Yamana (2011)] proposed a time-weighted PageRank approach to calculate the impact of a page over time: the freshness of a page relative to other pages and special events are respected as drivers, and trends allow for the incorporation of revision counts in the evaluation of a page's impact. The PageRank algorithm also helps monetize and commoditize web pages by providing a metric by which website owners or buyers can value a page. Text and word ranking are used by many researchers in various fields [Gao, Wang and Chen (2019); Xiang, Wu, Li et al. (2018)]. Feature-, attribute- and Z-number-based ranking techniques are presented as well [Ezadi and Allahviranloob (2018); Wang, Ren, Davis et al. (2017); Yeh (2018)]. TextRank, an adoption of PageRank, can even evaluate data in natural language and extract keywords and phrases from documents [Mihalcea and Tarau (2004)]; the results obtained were an accurate reflection of the expected proportionate results. In addition, Haley [Haley (2016)] explored the negative impacts of h-index ranking for a scholar, keeping in mind the uncertainty, insurance, and lobbies involved, and provided an economist's view on how a scholar's h-index impacts faculty promotions, awards, and other incentives. Cerchiello et al.
[Cerchiello and Giudici (2014)] argued that although significant research uses the h-index, few have studied its statistical properties and implications. To address this, the authors proposed a statistical approach to derive the h-index distribution from its two major components, i.e., the total number of papers produced and the vector of their citation counts, by introducing convolution models. Springer [Springer (2016)] proposed an h-index adoption to assess the impact of science, technology, and engineering (STE) at an institution, to provide a basis for funding. Iván et al. [Iván and Grolmusz (2011)] applied the framework to a protein interaction network. Daud et al. [Daud, Muhammad, Dawood et al. (2015)] suggested the Team Rank (TR) and Weighted Team Rank (WTR) approaches. The TR approach, however, produced inflated points for teams that win more, with teams that win less being "dampened". Another way to illustrate the problem: if a team plays two matches and wins one, it receives a winning ratio of 0.5; a team that plays fifty games and wins twenty-five receives the same 0.5. Ranking both teams equally is inherently unfair, even though it is mathematically correct under that ranking system.

ICC cricket teams ranking system
The International Cricket Council (ICC) uses the following system of rating formulae for ODI and Test matches, which determines a champion and provides team rankings.

ODI matches
Before the match, the ODI method looks at the ratings of the two teams in play. If the ratings differ by less than forty points, the winner is awarded the losing team's rating plus fifty bonus points, while the loser receives the winning team's rating minus fifty points. In the case of a tie, each team is awarded the opposing team's rating. If the two teams' ratings differ by forty points or more, the stronger team, should it win, is awarded its own rating plus ten bonus points, while the weaker team receives its own rating minus ten points. Should the stronger team lose, it is penalized, receiving its own rating minus ninety points, while the weaker team is awarded its own rating plus ninety points. Should there be a tie, the stronger team receives its own rating minus forty points, and the weaker team its own rating plus forty points. The new points from the match are added to the team's pre-match points total, and the number of matches played is updated, dropping off any matches and points older than three years. The updated points total is then divided by the updated match count, so the rating is refreshed after each match. The teams are ordered by strength using this rating scale.
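The point-allocation rules above can be sketched as a single function. This is our illustrative reading of the scheme, not official ICC code; `r_a` and `r_b` are the pre-match ratings, and the returned pair is the points each side adds to its running total:

```python
def odi_match_points(r_a, r_b, result):
    """Points earned by teams A and B from one ODI.
    result is 'A', 'B', or 'tie'. Sketch of the rules summarized above."""
    if abs(r_a - r_b) < 40:            # closely rated sides
        if result == 'tie':
            return r_b, r_a            # each gets the opponent's rating
        if result == 'A':
            return r_b + 50, r_a - 50  # winner: loser's rating + 50
        return r_b - 50, r_a + 50
    a_stronger = r_a > r_b             # gap of forty points or more
    if result == 'tie':
        return (r_a - 40, r_b + 40) if a_stronger else (r_a + 40, r_b - 40)
    if a_stronger:
        if result == 'A':              # expected result: small reward
            return r_a + 10, r_b - 10
        return r_a - 90, r_b + 90      # upset: big swing
    if result == 'B':
        return r_a - 10, r_b + 10
    return r_a + 90, r_b - 90          # upset by the weaker side A
```

For example, with ratings 120 and 100 (a gap under forty), a win for the first team yields 150 points for the winner and 70 for the loser.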

Test matches
If the pre-match ratings of the two teams differ by less than forty points, each team's series result is multiplied by the opponent's rating plus fifty, the opponent's series result is multiplied by the opponent's rating minus fifty, and the two products are added to give the team's new points. If the two teams' ratings differ by forty points or more, the stronger team's series result is multiplied by its own rating plus ten, and the opponent's series result by its own rating minus ninety; for the weaker team, a similar formula is used, with its series result multiplied by its own rating plus ninety and the opponent's series result by its own rating minus ten. The new points are added to the pre-series totals, and matches and points outside the three-year threshold are removed. In computing the series result, the winning team of each match is awarded one point; should a match be drawn, one-half point is awarded to each team. The team that wins the series gets a bonus point; if the series ends in a draw, each team is awarded half a point. The match total is updated by adding the number of matches in the series plus one, and the updated points totals are then divided by the updated match totals.

Research limitations
From the existing techniques presented above, it may be observed that there is still room for research in cricket team ranking. One reference work, by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], presented criteria for cricket team ranking, i.e., t-index, TR and WTR. We have already identified some inconsistencies [Daud, Muhammad, Dawood et al. (2015)] and presented corrections for them [Saqlain and Usmani (2017)]. Even now, there are several limitations in the cricket team ranking criteria, and the objective of the presented research is to focus on those limitations. A list of the limitations in the existing team ranking methodologies is as follows: • The t-index gives the same weight to winning margins in terms of runs and wickets, which is not correct: winning by seven runs is a close margin, while winning by seven wickets is a considerable margin. This needs to be tackled.

• In presenting the TeamRank (TR) technique, the authors [Daud, Muhammad, Dawood et al. (2015)] used a constant damping factor for all teams and only considered the winning ratio of a team A against B relative to the winning ratios of other teams against B. TR does not take into account how many matches B played against A and the other teams. This gives false results: if B is a newer team that has not played many matches, it consequently has not lost many matches to other teams. A win over such a team should not earn A the maximum reward.
• While presenting their WTR and UWTR techniques, the authors [Daud, Muhammad, Dawood et al. (2015)] similarly did not properly combine the strength of the opponent team with the winning margins.
• The ICC ranks the teams based on the strength of the teams but does not count the winning margin while ranking them.

Measuring team ranks through cricket team-index (ct-index)
Classifying the productivity of a scientist or researcher based on their relative importance in the field was subjective until Hirsch [Hirsch (2005)] proposed the h-index in 2005. The number of published papers and the citations of that work by other researchers are used to calculate impact and productivity. In addition, the breadth of publication and citation reflects the researcher's reputation and credibility in the field and the scientific community at large. The h-index is one of the most important ranking systems, particularly because of its ability to take into account co-authors and co-researchers in a network. The h-index can be estimated as h ≈ √(NcT / a), where NcT denotes the scientist's total citation count over his/her papers and a is a proportionality constant with values between 3 and 5; its value is usually set to 4, which gives a non-fractional value in the denominator. Definition 1. Given the set of teams T = {T1, T2, ..., Tn}, the measure ct-index ranks the team Tj ∈ T on the basis of the sum of winning margins in runs, Tr, and the sum of winning margins in wickets, Tw, against the other teams, i.e., T − {Tj}. The ct-index is an adoption of the h-index [Hirsch (2005)] and uses the same method of thinking: replace papers and citations with the total winning margins in terms of runs and wickets, and replace the author with the team. The team with the highest margins of victory should score highest, and that is what happens when the h-index analogue is applied. The ct-index is adopted as ct-index ≈ √((Tr + v·Tw) / a), where Tw is the sum of winning margins by wickets, Tr is the sum of winning margins by runs, and v is the run-equivalent value of a wicket, computed below. The value of a can be chosen between 1 and 5; we have used a = 4 for experimentation, chosen to avoid a fractional value in the denominator of Eq. (2). The value of a wicket is assigned by accumulating the batting records of each team over three years (in this case, 2013-2015). By calculating the average of runs scored against wickets lost, it is determined that the value of a single wicket is 30.02 runs for ODI matches, 32.04 runs for Test matches, and 21.45 runs for T20i matches. When calculating the ct-index, it is necessary to substitute a consistent value for wickets.
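Under these definitions, the ct-index of a team can be sketched as follows. The square-root form mirrors Hirsch's relation between the h-index and total citations, and the wicket values are those derived above; treat the exact combination as our illustrative reading rather than the paper's published equation:

```python
import math

# Run-equivalent value of one wicket per format, from the batting records above.
WICKET_VALUE = {'odi': 30.02, 'test': 32.04, 't20i': 21.45}

def ct_index(run_margins, wicket_margins, fmt='odi', a=4):
    """ct-index of a team. run_margins / wicket_margins list the team's
    winning margins per match; wickets are converted to run-equivalents."""
    t_r = sum(run_margins)                          # total margin in runs
    t_w = sum(wicket_margins) * WICKET_VALUE[fmt]   # wickets -> run scale
    return math.sqrt((t_r + t_w) / a)
```

A team whose wins total 100 and 20 runs with no wicket-margin wins would score √(120/4) ≈ 5.48 in the ODI format.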

Measuring team ranks through Cricket Team Rank (CTR)
Page et al. [Page, Brin, Motwani et al. (1999)] brought forth a ranking algorithm known as PageRank, used to rank web pages; it is one of the most significant graph-based ranking algorithms. The idea behind web page ranking is simple: it considers the linkage of a web page with the various web pages that cater to the same subject. Here it must be noted that the in-links of a page are more significant than the content of the page itself. The rank of any page (node) can be calculated as PR(A) = (1 − d)/N + d · Σi PR(Ti)/C(Ti), where PR(A) is the PageRank of A, PR(Ti) are the PageRanks of the pages that link to page A, C(Ti) is the number of out-links given by page Ti to other web pages in the network, N is the total number of pages, and d is the damping factor, with value 0.85. Definition 2. Given the set of teams T = {T1, T2, ..., Tn}, the CTR measure ranks the team A ∈ T based on the win/loss statistics R(Ui) and R(Oi) ∀ 1 ≤ i ≤ n, and a dynamic damping factor di for every opponent team Ti. CTR is an adoption of the PageRank algorithm: if a team has won matches against stronger teams (provided those teams were themselves victorious against stronger teams), the CTR of the winning team should be high. The CTR of a team A is calculated from the following quantities: GL(Ui) is the number of games lost against A by team i and TG(Ui) is the total number of games played between A and team i; consequently, R(Ui) is the ratio between GL(Ui) and TG(Ui). GL(Oi) is the number of games lost by team i against other opponents (excluding A) and TG(Oi) is the number of games played between team i and other opponents (excluding A), so R(Oi) is the ratio between GL(Oi) and TG(Oi). Here di is the damping factor; its value depends on the number of matches played by the opponent team. If the number of matches played by the opponent is greater than or equal to the mean number of matches, di is 1. If it is less than the mean, di is the ratio of the number of matches played by the opponent to the mean number of matches. This handles situations in which a win over a new team, which has not played enough matches and consequently has not lost many, should not be given high weightage: the benefit is reduced because di is fractional for new teams.
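The CTR computation described above can be turned into a PageRank-style iteration. In the sketch below, the function and variable names and, in particular, the way R(Ui), R(Oi) and di are combined into one vote share are our assumptions for illustration; the paper's exact formula may differ:

```python
def ctr(teams, wins, totals, d=0.85, iters=100):
    """Sketch of Cricket Team Rank. wins[x][y] = games y lost to x;
    totals[x][y] = games played between x and y."""
    n = len(teams)
    played = {t: sum(totals[t].values()) for t in teams}
    mean_matches = sum(played.values()) / n
    rank = {t: 1.0 / n for t in teams}
    for _ in range(iters):
        new = {}
        for a in teams:
            vote = 0.0
            for i in teams:
                if i == a or totals[a].get(i, 0) == 0:
                    continue
                r_ui = wins[a].get(i, 0) / totals[a][i]  # A's win ratio vs i
                gl_o = sum(wins[o].get(i, 0) for o in teams if o not in (a, i))
                tg_o = sum(totals[o].get(i, 0) for o in teams if o not in (a, i))
                r_oi = gl_o / tg_o if tg_o else 0.0      # i's loss ratio to others
                d_i = min(1.0, played[i] / mean_matches)  # dynamic damping
                share = r_ui / (r_ui + r_oi) if (r_ui + r_oi) else 0.0
                vote += d_i * rank[i] * share
            new[a] = (1 - d) / n + d * vote
        rank = new
    return rank
```

With this combination rule, a team that beats sides which rarely lose elsewhere receives a larger share of their rank, and wins over teams with few matches are discounted through di.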

Measuring team ranks through Weighted Cricket Team Rank (WCTR)
The Weighted Cricket Team Rank (WCTR) uses a team's winning margins in terms of wickets and runs while calculating its rank. The WCTR is defined as: Definition 3. Given the set of teams T = {T1, T2, ..., Tn}, the WCTR measure ranks the team A ∈ T based on the win/loss statistics R(Ui) and R(Oi), the margin-of-win/loss statistics M(Ui) and M(Oi) ∀ 1 ≤ i ≤ n, and a dynamic damping factor di for every opponent team Ti. WCTR is an extended form of CTR that relies on weighting: the weights are added by taking into account the margins, in runs/wickets, of the matches lost by the opponent teams. The proposed WCTR asserts that an opponent team Ti should have a higher impact on the ranking of team A if it loses to team A by a big margin of runs/wickets but loses to other teams by low margins, and a lower impact if it loses to team A by a small margin of wickets/runs but loses to other teams by big margins. The WCTR score of a team A is calculated from the following quantities: TG(Ui) is the number of total games played between team A and team i, while GL(Ui) represents the number of games lost by team i to team A; consequently, R(Ui) is the ratio between GL(Ui) and TG(Ui). GL(Oi) is the number of games lost by team i against other opponents (excluding A) and TG(Oi) is the number of games played between team i and other opponents (excluding A); therefore, R(Oi) is the ratio between GL(Oi) and TG(Oi). MGL(Ui) is the losing margin in games lost by team i against A and MTG(Ui) is the sum of margins in the total games played between A and the i-th team; consequently, M(Ui) is the ratio between MGL(Ui) and MTG(Ui). MGL(Oi) is the losing margin in games lost by team i against other opponents (excluding A) and MTG(Oi) is the sum of margins in games played between team i and other opponents (excluding A); therefore, M(Oi) is the ratio between MGL(Oi) and MTG(Oi). Here di is the damping factor; its value depends on the number of matches played by the opponent team. If the number of matches played by the opponent is greater than or equal to the mean number of matches, di is 1; otherwise, di is the ratio of the number of matches played by the opponent to the mean number of matches.
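WCTR modifies each opponent's vote in the CTR-style iteration using the margin ratios M(Ui) and M(Oi). The exact combination operator is not reproduced here, so the product of the win-ratio share and the margin share below is purely our illustrative assumption:

```python
def wctr_vote_share(r_ui, r_oi, m_ui, m_oi):
    """Vote share of opponent i for team A under WCTR: the win-ratio
    share (as in CTR) scaled by how heavily i loses to A, M(Ui),
    relative to how heavily it loses to everyone else, M(Oi)."""
    win_share = r_ui / (r_ui + r_oi) if (r_ui + r_oi) else 0.0
    margin_share = m_ui / (m_ui + m_oi) if (m_ui + m_oi) else 0.0
    return win_share * margin_share
```

An opponent that loses half its games both to A and to others (win share 0.5), but loses to A by three times the margin it concedes elsewhere (margin share 0.75), contributes 0.375 of its rank instead of CTR's margin-blind 0.5.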

Experiments
The dataset used in this research is specified in this section. The section also illustrates the results of each of the investigated techniques and discusses them in the context of international cricket (ODI, Test, and T20i matches), along with the use of a damping factor in each of the proposed techniques. A comparative analysis of the techniques is presented as well.

Dataset
The experiments are conducted using a dataset from the CricInfo website. This data corresponds to the data used in the then-latest rankings provided by the ICC for ODI, Test and T20i matches (as of July 20, 2016). The batting statistics of each international match from January 2013 to December 2015 were captured and used to determine the weighted average value of a single wicket.

Results and discussions
The results achieved through the proposed techniques, i.e., ct-index, CTR, and WCTR, are presented in the following subsections. The experimental results are shown for all three formats of international cricket matches.

ODI matches
In this sub-section, comparative results are presented for all three proposed techniques over the ODI dataset; the details of the data are explained in Section 4.1. Tab. 4 shows the rankings produced by the different rank-measuring techniques for ODI matches. Conceptually, the team rankings may be visualized in two halves. In the top five of the rankings, under all three proposed techniques, are Australia, India, New Zealand, South Africa, and Sri Lanka. There are, however, differences in the results achieved by each formula. The ct-index ranks India as the number one team: India's wins are by high margins in terms of runs and wickets. CTR ranks New Zealand highest, since New Zealand won the most matches against other high-ranking teams compared to the other member teams. WCTR ranks Australia as the top team: it takes into account Australia's multiple wins against highly ranked teams, many of them by high margins in terms of runs and wickets. The latter half of Tab. 4 lists the lower-achieving teams. No matter which method is used, the same teams appear at the bottom of the list: Bangladesh, England, Pakistan, West Indies, and Zimbabwe. Under every method, Zimbabwe remains the tenth, i.e., bottom, team. This is because Zimbabwe won matches only against lower-ranked competitors, won those by low margins of runs and wickets, and did not win many matches against stronger teams.

Test matches
The following are the ranking results of our proposed methods for Test cricket matches, based on the dataset explained in the previous section. The ranks of the Test cricketing nations are shown in Tab. 5, together with the scores gained through the proposed measures. The results show that the ranking divides readily into two halves of five teams each. The highest-ranking teams are Australia, England, India, Pakistan and South Africa. Examined on the same dataset, the three methods place England and South Africa at the top. Using the ct-index, the English team scores highest at 49.85, a result of England's wins by high margins. The CTR and WCTR methods rank South Africa highest: South Africa won a high number of matches, with a very strong ratio against other strong teams. The lower half of the ranking list contains the same team names no matter which evaluation technique is used: Bangladesh, New Zealand, Sri Lanka, West Indies and Zimbabwe. For each of the three methods, Zimbabwe is the lowest-ranking team, because Zimbabwe did not win any Test match during the dataset's time frame.

T20 international (T20i) matches
The following are the ranking results of our proposed methods for T20i matches, based on the dataset explained in the previous section. Tab. 6 shows the cricket team ranking results of the proposed measures when applied to the T20i matches dataset. Unlike the ODI and Test matches, the achieved ranking cannot be divided into two clear groups. Using the ct-index, the Indian team is ranked number one: team India won many matches by extremely high margins. Under the CTR and WCTR evaluations, the English team stays on top of the rankings: England won many matches against strong teams, and its margins against strong teams were also high. Australia lands at the bottom of the list for all three proposed methods. The reason is that Australia's number of wins was lower, and the matches it did win were of lower value, based on the low ranking scores of the teams it beat and the low margins of runs and wickets in those wins.

Comparative analysis of the proposed and existing rank measuring techniques
The goal of this section is to compare the proposed techniques against the ICC ranking, which currently dominates international ranking platforms, and against the measures proposed by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], in order to determine the most effective technique. The evaluation covers the proposed techniques (ct-index, CTR, WCTR), the techniques of Daud et al. [Daud, Muhammad, Dawood et al. (2015)] and the ICC Team Rankings. The scores achieved by all the techniques were normalized to the range 0-1.

Comparison with ICC cricket ranking
The ICC implements an ad hoc system that depends entirely upon winning and losing. The ICC cricket ranking system ranks the teams competing in all three international formats, i.e., ODI, Test and T20i matches. This section compares the results of the proposed techniques with the current ICC cricket team rankings. ODI Matches. Fig. 1 illustrates that the different techniques affect team rankings differently. For example, Australia dropped from 1st in the ICC ranking to 4th in the ct-index, indicating that Australia's margins of victory were considerably smaller than those of teams like India, which topped the ct-index ranking. Australia remains on top in CTR but narrowly misses out in WCTR due to the impact of the winning margin. Test Matches. As illustrated in Fig. 2, the techniques also affect the Test ratings differently. For example, South Africa jumped from 6th in the ICC ranking to 3rd in the ct-index, indicating that South Africa's margins of victory were considerably higher than those of teams like India and Pakistan, which dropped in the ct-index ranking. South Africa also tops both CTR and WCTR, illustrating the benefit of losing less often against strong teams. T20i Matches. From Fig. 3 it may be observed that New Zealand dropped from 1st in the ICC ranking to 5th in the ct-index, indicating that New Zealand's margins of victory were considerably smaller than those of teams like India, which topped the ct-index ranking. New Zealand also narrowly misses the top spot in CTR and drops to 4th in WCTR, illustrating the impact of the margin of victory in WCTR.
Comparison with the techniques of Daud et al. [Daud, Muhammad, Dawood et al. (2015)]
Daud et al. [Daud, Muhammad, Dawood et al. (2015)] proposed four different ranking measurements. Here a comparison of the proposed methods with the three relevant ones is presented; for a fair comparison, the experiments are performed on the same dataset used by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], collected for the period 2010 to mid-2012. The results are presented for ODI matches only.

Comparing ct-index with t-index
The first comparison is between the Cricket Team-Index (ct-index) and the Team-index (t-index) [Daud, Muhammad, Dawood et al. (2015)]. The weakness of the t-index is that it applies the same weighting to wickets and runs, although it is common knowledge that wickets carry more value than runs in the outcome of a game. As outlined in the discussion of the data, the value of a wicket is arrived at by averaging the batting statistics of the past three years. Comparative rankings are presented in Tab. 7. The results in Tab. 7 clearly show an increase in the index values due to weighting wickets separately rather than giving wickets and runs the same weight. For example, India jumped from 5th in the t-index to 1st in the ct-index due to its high-margin victories by runs/wickets.
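To make the h-index mechanism behind the metric concrete, the following is a minimal sketch. It is not the paper's exact formula: the published ct-index scores (e.g., England's 49.85) are non-integer and so include further weighting not reproduced here, and the `rpw` conversion factor and `ct_index_sketch` helper are illustrative assumptions.

```python
# Hedged sketch: an h-index computed over winning margins, with wicket
# margins converted to run-equivalents via a hypothetical runs-per-wicket
# factor (rpw). Not the paper's exact ct-index formula.

def ct_index_sketch(margins, rpw=36.53):
    """margins: list of (runs, wickets) winning margins, one per win.
    Returns the largest h such that h wins each have a converted
    margin of at least h (the core h-index rule)."""
    scores = sorted((r + w * rpw for r, w in margins), reverse=True)
    h = 0
    for i, s in enumerate(scores, start=1):
        if s >= i:
            h = i
        else:
            break
    return h
```

With real margins (often hundreds of run-equivalents), this plain h-index saturates at the number of wins, which is one reason the actual metric must apply additional scaling.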

Comparing Cricket Team Rank (CTR) with TeamRank (TR)
The next comparison is between the Cricket Team Rank (CTR) and the TeamRank (TR). CTR adopts the PageRank algorithm, which allows the additional consideration of the ratio of matches lost between the two teams playing and matches lost against all other teams; the assumption is that the team which loses less is the stronger one. Daud et al. [Daud, Muhammad, Dawood et al. (2015)] used a static damping factor, which produced inaccurate results: it understated the rank of regular teams while giving teams with a short playing history too much credit. For example, if a team plays 2 matches and wins 1, its rank will be 0.5 under this calculation, the same rank as a team that played 50 matches and won 25. To compensate for this issue, a dynamic damping factor (di) is used. Tab. 8 shows the comparative team rankings achieved by both techniques.

Comparing Weighted Cricket Team Rank (WCTR) with Weighted Team Rank (WTR)
The final comparison is between the Weighted Cricket Team Rank (WCTR) and the Weighted Team Rank (WTR). WCTR is an extension of CTR; its key characteristic is that teams which win by the biggest margins, and lose by the smallest margins, receive the highest rankings. Daud et al. [Daud, Muhammad, Dawood et al. (2015)] supposed that WTR would have the same effect, but WTR suffers from the same issue as the t-index method, namely that runs and wickets must not be given the same weight. The results presented in Tab. 9 clearly show the increase in ranking due to weighting wickets separately rather than giving wickets and runs the same weight. For example, Sri Lanka dropped from 2nd in WTR to 5th in WCTR due to its low-margin victories by runs/wickets.

Discussion
In this section, a detailed discussion of the proposed techniques is presented. The proposed techniques are elaborated by choosing and evaluating example data.

Presenting relation between winning margins
While presenting the ct-index, a transformation between runs and wickets is proposed. Daud et al. [Daud, Muhammad, Dawood et al. (2015)] used the same weightage for both runs and wickets as winning margins. This is against the spirit and statistics of the game of cricket. It is worth noting that the winning margin of a team batting second is expressed in wickets, while the winning margin of a team batting first is expressed in runs. A cricket side has eleven players, and the opposition must dismiss ten of them to bowl out the whole team. The maximum winning margin for a team batting second is therefore ten wickets, achieved by chasing the target successfully without losing a single wicket. On the other hand, a team that bats first and restricts its opponent to fewer than its own runs wins by a margin of runs, which can vary from 1 to hundreds of runs. Quantities with such different ranges should not be given the same weight. This is illustrated with an example. Tab. 10 presents statistics for two teams, A and B, each of which won five matches batting first and five batting second. The total winning margins for team A are 220 runs and 37 wickets, while those for team B are 250 runs and 26 wickets. If winning runs and winning wickets are weighted equally, team B would be ranked higher than team A. This is not the correct ranking, as team A has a higher winning margin in terms of wickets than team B: the difference in wicket margins is 11, which must be weighted considerably higher than the difference in run margins, which is just 30. To find the relation between runs and wickets, the total runs scored by both teams, batting first or second, are calculated, along with the total wickets lost by both teams across all their matches.
TR = Total runs scored by both teams = 10520
TW = Total wickets lost by both teams = 288
Runs Per Wicket (RPW) = TR/TW = 10520/288 = 36.53
The ranking through the proposed ct-index, incorporating this relation between runs and wickets, then places team A higher than team B, which is more realistic, as team A won its matches by considerably higher wicket margins than team B.
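The conversion above can be replicated in a few lines. The `run_equivalent_margin` helper is an illustrative assumption: it shows only the margin conversion step, not the full ct-index formula.

```python
# Reproducing the runs-per-wicket (RPW) conversion from the text.
TR = 10520            # total runs scored by both teams
TW = 288              # total wickets lost by both teams
RPW = TR / TW         # about 36.53 runs per wicket

def run_equivalent_margin(runs, wickets, rpw=RPW):
    # Convert a (runs, wickets) winning-margin pair into run-equivalents.
    return runs + wickets * rpw

team_a = run_equivalent_margin(220, 37)  # team A's total margins
team_b = run_equivalent_margin(250, 26)  # team B's total margins
assert team_a > team_b  # A's larger wicket margins now dominate
```

Under equal weights A would trail B (257 vs. 276), but after the RPW conversion A's 37 wickets outweigh B's extra 30 runs.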

Presenting and incorporating strength of opposition teams
A team is considered strong if it wins more matches than it loses. When a team wins against strong opposition, it should be awarded more points than when it beats a weaker team. Daud et al. [Daud, Muhammad, Dawood et al. (2015)], in their TR technique, adopted the concept of opposition strength when awarding points to winners. It may be observed from Eq. (3) that the contribution of team Ti to the TR score of team A depends upon the ratio L(Ti, A)/CTi, where L(Ti, A) represents the matches lost by the i-th team against A and CTi is the total number of matches lost by Ti. The existing technique does not account for the number of matches team Ti played; only the losses are counted. In many situations this produces incorrect results. To illustrate the problems inherited by a ranking that ignores the total number of matches played, example data are presented in Tab. 11. As shown in Eq. (3), when calculating TR(A) [Daud, Muhammad, Dawood et al. (2015)], all the other teams contribute to it. The contribution made by B would be PR(B)/CTB; with the values from Tab. 11, this evaluates to 0.33, and the contribution made by team C towards the ranking of team A would likewise be 0.33. This is unsatisfactory, because team C is clearly far stronger than team B, even though both teams have the same counts of games lost against A and total games lost. Team B lost quite frequently: it lost 3 of the 5 matches played against A, and 6 of its 10 matches overall. Team C, on the other hand, has much better statistics: it lost 3 matches to team A while winning 7, and this holds for its overall record as well, having played 30 matches in all, winning 24 and losing only 6.
The contributions made by both teams are the same because TR [Daud, Muhammad, Dawood et al. (2015)] does not use the overall statistics of the teams in terms of the total number of matches played against team A. The proposed CTR and WCTR solve the issues discussed above by incorporating the number of matches played by the i-th team against team A, along with the total number of matches played by the i-th team against all teams other than A. Using the contributions defined in Eq. (4) and Eq. (5) with the data in Tab. 12, the contribution by team B when ranking A is 1, while it is 5 for team C. These contributions are logical, as team C is the stronger opposition, and winning against a stronger opposition must be rewarded more highly than winning against a weaker one.
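The PageRank-style iteration that CTR builds on can be sketched as follows. This is a hedged sketch, not Eqs. (4)-(5) themselves: the edge weight used here (matches the i-th team lost to A over matches it played against A, scaled by an optional per-team damping factor) is an illustrative assumption standing in for the paper's exact contribution formula.

```python
# Hedged PageRank-style team rank sketch in the spirit of CTR.
def team_rank_sketch(teams, lost_to, played_vs, damping=None, iters=50):
    """teams: list of team names; lost_to[(i, j)]: matches i lost to j;
    played_vs[(i, j)]: matches i played against j;
    damping: optional per-team dynamic damping factors d_i."""
    damping = damping or {}
    rank = {t: 1.0 for t in teams}
    for _ in range(iters):
        new = {}
        for a in teams:
            s = 0.0
            for t in teams:
                if t == a or not played_vs.get((t, a)):
                    continue
                d_t = damping.get(t, 1.0)
                # share of t's games against a that t lost, scaled by d_t
                s += d_t * rank[t] * lost_to.get((t, a), 0) / played_vs[(t, a)]
            new[a] = 0.15 + 0.85 * s
        rank = new
    return rank
```

A team that beats its opponents in a larger share of their head-to-head games accumulates rank from them, mirroring how a web page accumulates PageRank from its in-links.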

Dynamic damping factor
A static damping factor was used by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], but a static damping factor is not the optimal choice for techniques based on ratios, because its impact is the same for all teams. The TR approach produced inflated points for teams that win more, while teams that win less were uniformly "dampened"; a static damping factor is merely a scaling factor and serves little purpose. The proposed techniques CTR and WCTR use a dynamic damping factor that helps distinguish regular teams from non-regular ones. The dynamic damping factor assigns different weights to emerging and regular teams, so the true strength of the teams is computed. Example data in tabular form are given below to illustrate the importance of the dynamic damping factor. Tab. 13 shows the statistical records for six teams. The statistics for team A are not shown, as its rank is the one being calculated in this example; suppose team A has played 65 matches in all. As discussed earlier, when the rank of team A is calculated through the proposed CTR and WCTR, and through Daud et al. [Daud, Muhammad, Dawood et al. (2015)], all the other teams contribute to it. The contributions made by B and C to the ranking of team A through TR [Daud, Muhammad, Dawood et al. (2015)] are both 0.25. Even through the proposed CTR and WCTR, the contributions made by teams B and C are equal, i.e., 1. This is not logical, as B is just an emerging team while team C is an experienced, regular team. A win against B must not count the same as a win against C, but these tricky statistics combined with a static damping factor would make the proposed technique behave the same way as TR. To solve such issues, the concept of a dynamic damping factor is introduced in this paper. To calculate the damping factor di for the i-th team, the following steps are performed: i. Find the mean of the matches (mm) played by all the teams.
ii. If the number of matches played by the i-th team is greater than or equal to mm, then di = 1; otherwise di = (number of matches played by the i-th team)/mm. The total number of matches played by all seven teams is 418, so the mean number of matches (mm) is 59.71. The damping factor for team C, which played 80 matches (more than mm), is 1, while the damping factor for team B, which played only 8 matches, is calculated through rule (ii), i.e., dB = 8/59.71 = 0.13. The contributions made by teams B and C towards ranking team A would then be calculated as:
Contribution through team B = dB × 1 = 0.13 × 1 = 0.13
Contribution through team C = dC × 1 = 1 × 1 = 1
These two contributions reflect that a win against a regular team weighs more than a win against an emerging one, so the strength of the teams is accurately determined. Daud et al. [Daud, Muhammad, Dawood et al. (2015)] proposed WTR, which is the combination of TR and the t-index. WTR inherits the problems of both techniques (as discussed above) and has factual, nomenclature and conceptual problems, discussed in detail by Saqlain et al. [Saqlain and Usmani (2017)]. The winning margin is an important factor that should be incorporated while ranking the teams. Suppose team B lost its match against team A by a margin of 200 runs, and team C lost its match against A by a margin of 1 run. Although both matches end with the same result, winning by 200 runs must be given more weight than winning by 1 run. The proposed WCTR ranks the teams by incorporating not only the strength of the opposition and the dynamic damping factor but the winning margins as well. Suppose the rank of team A is to be calculated. In Eq. (5), where the WCTR score of team A is calculated, the margin term represents the ratio of the margins by which the i-th team lost against team A to the margins by which it lost against all other teams.
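The two-step rule above can be sketched directly. The match counts for the four unnamed teams below are hypothetical fillers, chosen only so the totals reproduce the 418 matches and the mm = 59.71 quoted in the text.

```python
# Sketch of the dynamic damping factor d_i: a team that has played at
# least the mean number of matches gets d_i = 1; otherwise d_i is its
# match count divided by the mean.

def damping_factors(matches_played):
    mm = sum(matches_played.values()) / len(matches_played)  # mean matches
    return {team: 1.0 if m >= mm else m / mm
            for team, m in matches_played.items()}

# Seven teams totalling 418 matches (D-G are hypothetical fillers);
# mm = 418/7 = 59.71, so d_B = 8/59.71 = 0.13 and d_C = 1.
counts = {'A': 65, 'B': 8, 'C': 80, 'D': 70, 'E': 75, 'F': 60, 'G': 60}
d = damping_factors(counts)
```

Regular teams (at or above the mean) are unaffected, while emerging teams such as B see their contributions scaled down in proportion to how little they have played.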
This considers not just the winning margin of a single match but the overall history between A and the i-th team, along with the overall history of winning/losing margins of team A against all other teams. The use of historical winning/losing margins makes the proposed WCTR a robust ranking technique.

Conclusion
Adoptions of PageRank and the h-index are presented for cricket team ranking. In this regard, three ranking measurements are proposed, i.e., the ct-index, CTR and WCTR. The investigation focuses on the importance of the margin of a win and the quality of the opposition. If a match is won by a larger margin of runs and wickets, the impact is significant and affects the team's overall ranking. The use of a dynamic damping factor produces a significant difference compared to a static damping factor. The weighting factor comes into play most prominently when two teams win a similar number of games against competitors of approximately the same rank. Adopting the h-index and PageRank produces an accurate ranking of international cricket teams, because the strength of the opposing team and the margin of the win (in terms of runs and wickets) are taken into consideration for both the winning and the losing team.
Funding Statement: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.