DOI: 10.1145/3281375.3281406

Preliminary investigation on quantitative evaluation method of scientific papers based on text analysis

Published: 25 September 2018

ABSTRACT

In recent years, a great deal of scientific research has become available on the Web. Because the gap between the amount of academic information on the Web and human processing ability has grown large, several problems have arisen: (1) lost opportunities to present research, (2) lost opportunities to gather research information, (3) an increasing burden of peer review, and (4) difficulty in selecting papers to read. To solve these problems, a quantitative evaluation index of a paper is needed as a selection criterion.

This paper proposes quantitative evaluation methods for scientific papers based on text analysis. The journal similarity of a target journal to an authoritative journal is defined using distributed representations of papers. When the similarity of a target journal is high, its quality in terms of writing and organization is expected to be high as well. This paper also proposes an evaluation method using ROUGE (Recall-Oriented Understudy for Gisting Evaluation).
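The abstract does not spell out the formulation, but a minimal sketch of the journal-similarity idea might look as follows, assuming paper vectors come from a Doc2Vec (paragraph vector) model in the style of Le and Mikolov (reference 15 below) and that a journal is represented by the centroid of its papers' vectors. The `papers_by_journal` input format, the centroid, and the cosine measure are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: journal similarity from distributed representations of papers.
# The centroid-of-paper-vectors formulation is an assumption for illustration.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def train_model(papers_by_journal, vector_size=100, epochs=20):
    # papers_by_journal: {journal_name: [paper_text, ...]} -- assumed format.
    docs = [
        TaggedDocument(words=text.lower().split(), tags=[f"{journal}:{i}"])
        for journal, texts in papers_by_journal.items()
        for i, text in enumerate(texts)
    ]
    return Doc2Vec(docs, vector_size=vector_size, min_count=2, epochs=epochs)

def journal_vector(model, texts):
    # Represent a journal by the centroid of its papers' inferred vectors.
    return np.mean([model.infer_vector(t.lower().split()) for t in texts], axis=0)

def journal_similarity(model, target_texts, authority_texts):
    # Cosine similarity of a target journal to an authoritative journal.
    a = journal_vector(model, target_texts)
    b = journal_vector(model, authority_texts)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Under this reading, a high score means the target journal's papers are written and organized similarly to those of the authoritative journal.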

The proposed evaluation methods are assessed experimentally. The results show that the journal similarity corresponds roughly to the SCImago Journal Rank (SJR). They also suggest that the proposed methods may be able to evaluate journals that have not yet been indexed in authoritative journal indices. The evaluation method using ROUGE is shown to have potential for evaluating the consistency of papers.
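As a rough illustration of the ROUGE-based consistency check: ROUGE-N recall (Lin, reference 7 below) counts how many of a reference text's n-grams reappear in a candidate text. Treating a paper's abstract as a candidate "summary" of its body is one assumed reading of the method, and the whitespace tokenization below is a simplification.

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(reference, candidate, n=1):
    # ROUGE-N = matched n-grams / total n-grams in the reference (Lin, 2004).
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    matched = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return matched / total if total else 0.0

# Assumed usage: how well the abstract covers the body's wording.
# score = rouge_n_recall(reference=body_text, candidate=abstract_text, n=1)
```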

References

  1. P. Wouters. 2011. Journal ranking biased against interdisciplinary research. Retrieved August 23, 2018 from https://citationculture.wordpress.com/2011/11/15/journal-ranking-biased-against-interdisciplinary-research/.
  2. M. Kovanis, R. Porcher, P. Ravaud, and L. Trinquart. 2016. The global burden of journal peer review in the biomedical literature: Strong imbalance in the collective enterprise. PLoS ONE, 11 (11).
  3. V. P. Guerrero-Bote and F. Moya-Anegón. 2012. A further step forward in measuring journals' scientific prestige: The SJR2 indicator. Journal of Informetrics, 6 (4), 674--688.
  4. E. Callaway. 2016. Publishing elite turns against impact factor. Nature, 535, 210--211.
  5. E. Garfield. 2006. The history and meaning of the journal impact factor. Journal of the American Medical Association, 295, 90--93.
  6. C. Y. Lin and E. Hovy. 2003. Automatic evaluation of summaries using N-gram co-occurrence statistics. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL '03), 71--78.
  7. C. Y. Lin. 2004. ROUGE: A package for automatic evaluation of summaries. Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004), 25--26.
  8. J. Mingers and L. Leydesdorff. 2015. A review of theory and practice in scientometrics. European Journal of Operational Research, 246 (1), 1--19.
  9. R. M. Alguliyev and R. M. Aliguliyev. 2016. Modified impact factors. Journal of Scientometric Research, 3, 197--208.
  10. J. D. West, T. C. Bergstrom, and C. T. Bergstrom. 2010. The Eigenfactor Metrics™: A network approach to assessing scholarly journals. College & Research Libraries, 71 (3), 236--244.
  11. H. F. Moed. 2010. Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4 (3), 265--277.
  12. J. E. Hirsch. 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102 (46), 16569--16572.
  13. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of the Workshop of the First International Conference on Learning Representations (ICLR 2013).
  14. T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. 2013. Distributed representations of words and phrases and their compositionality. Proceedings of the 26th International Conference on Neural Information Processing Systems, 2, 3111--3119.
  15. Q. Le and T. Mikolov. 2014. Distributed representations of sentences and documents. Proceedings of the 31st International Conference on Machine Learning, 32 (2), 1188--1196.
  16. A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov. 2017. Bag of tricks for efficient text classification. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2, 427--431.
  17. P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 135--146.
  18. Z. S. Harris. 1954. Distributional structure. Word, 10 (2--3), 146--162.
  19. A. M. Dai, C. Olah, and Q. V. Le. 2014. Document embedding with paragraph vectors. Neural Information Processing Systems (NIPS) Deep Learning Workshop.
  20. D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993--1022.

Published in

MEDES '18: Proceedings of the 10th International Conference on Management of Digital EcoSystems
September 2018, 253 pages
ISBN: 9781450356220
DOI: 10.1145/3281375
Copyright © 2018 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • research-article

      Acceptance Rates

MEDES '18 Paper Acceptance Rate: 29 of 77 submissions, 38%
Overall Acceptance Rate: 267 of 682 submissions, 39%
