Monetary Equivalent Value (MEV) of a Published Article in Psychology

Publishing one’s research in peer-reviewed journals is generally acknowledged to be a valuable enterprise. This is particularly the case for academic and research psychologists, who rely on publications for career status, stability, and advancement. Psychological researchers can devote extensive amounts of time to planning, conducting, writing up, and publishing their research in respected psychology journals, yet their work efforts in this regard have never been quantified monetarily. This article introduces the concept of a monetary equivalent value (MEV) of a published article in psychology. An initial basic linear equation is introduced that sets the dollar (or Euro) value of an article based on the median number of hours involved in publishing an article, the mean hourly wage of psychologists, and the 5-year Impact Factor (IF) of the journal in which the article is published. MEVs were calculated for the full range of journals published by the American Psychological Association (APA) that have IF ratings. MEVs varied widely, from a low of $4,562 for an article published in the journal "Dreaming" to a high of $131,613 for an article appearing in "Psychological Bulletin". This article is the first to explore the MEV as an additional metric for understanding the impact of published articles, and as such this exploratory study has numerous limitations. Chief among these is the study’s reliance on the controversial Journal Citation Reports (JCR) journal impact factor metric, as well as its extrapolation from a limited medical literature on the average number of hours involved in publishing a study.

Psychological researchers spend significant amounts of time planning, conducting, writing up, and submitting their research studies. Components of the process include conceiving an idea and conceptualizing the project; selecting collaborators and co-authors and delineating team member responsibilities; conducting and integrating the literature review; setting the research paradigm and statistical models; locating and securing research measures and protocols; preparing and securing Institutional Review Board (IRB) approval for the study; data collection; data cleaning and data analysis; manuscript preparation; selecting an appropriate journal and preparing for submission; responding to reviewer comments and revising the manuscript; preparing the revision cover letter; and copy-editing and final page proof review. In total, researchers can spend hundreds of hours to see a single study through to publication in a peer-reviewed psychology journal (Song, Abedi, Macadam, & Arneya, 2013). What material value should be placed on this effort? More specifically, what is the equivalent monetary value of all the work that goes into getting a research study published in a peer-reviewed journal?
This article introduces the exploratory concept of the Monetary Equivalent Value (MEV) of a published article in a peer-reviewed psychology journal. A basic linear equation is presented that establishes a dollar (or Euro) value of a published article in psychology based on 1) the median number of hours it takes to publish an article from conceptualization to final publication, 2) the mean hourly wage of psychologists, and 3) the 5-year Impact Factor (IF) of the particular journal in which the article is published. It should be noted that there is no research assessing the average number of hours it takes to publish different types of journal articles in psychology, and the present MEV model borrows from a limited medical research literature. Further, it should be noted at the outset that the cultural and economic context for the present study is primarily anchored within the North American sphere, and international replication, adaptation, and expansion of the methods described herein are highly encouraged.
As the MEV formula relies heavily on Impact Factors set by the Institute for Scientific Information's (ISI) Journal Citation Reports (JCR), published by Clarivate Analytics (formerly Thomson Reuters), it is important to begin the discussion with a review of the benefits and limitations of the bibliometric IF.

Value and Limits of the Impact Factor
There is something alluring about rankings and ratings, a standard metric for easy comparisons. Take, for example, the popularity of the annual U.S. News & World Report's "Best Colleges Ranking." College Deans and other university administrators often keep close track of the rankings with the goal of either maintaining or raising their ranking in their respective university or college categories. Top ranked universities often highlight their ranking in advertisements or profiles of their university in both social and print media.
In the area of professional journal prestige, perhaps the most popular and often-cited bibliometric is the journal's Impact Factor (IF) (Chorus & Waltman, 2016). Anchoring the IF is the proportion of citations to articles in the journal relative to the number of articles published in the journal over specified time frames. Journal editors across disciplines, particularly in the physical, behavioral, and social sciences, often call attention to the importance of IFs. Take, for example, the following quotes from two journal editorial statements. The first is from Cuellar (2016), opening a special issue on IFs for the Journal of Transcultural Nursing, where the editor noted the value of IFs. The second, written by Reich-Erkelenz, Schmitt, and Falkai (2016), expresses pride in the latest IF rating for their journal, European Archives of Psychiatry and Clinical Neuroscience.

"Publishing in a journal with an impact factor means that scholars are reading the journal and citing it in their work. It is the number of times that an article is being cited in someone else's articles. It means that the articles that we are publishing uphold scientific integrity and are being used to help other scientists to advance their work. It means that our work is getting out to others by not just having someone read them but by saying 'go read this' as a citation in someone else's work." (Cuellar, 2016, p. 437)

"We are proud of opening this issue with awesome news: ISI has just released the new impact factor list for 2015, according to which European Archives of Psychiatry and Clinical Neuroscience (EAPCN) for the fourth time in a row has risen its impact factor and now has achieved a rank of 4.113, thus for the first time negotiating the hurdle of 4.0." (Reich-Erkelenz et al., 2016, p. 475)

Such quotes by editors are not uncommon (see also Cacchione, 2017), and various journals state their IFs on the journal's homepage.
IFs have taken on a level of importance and status that may or may not be warranted, depending on the use of the bibliometric. For psychologists and scientists in developing countries, publishing in higher-IF journals promotes aspirations of joining the larger scientific community (see Mishra & Neupane, 2018).
The rationale and use of IFs have evolved over the last 90 years. Gross and Gross (1927) first suggested that the reference count could be used to rank the use of scientific journals. The term "impact" was introduced by Garfield (1955), who is often credited with the concept of "impact factor," though "factor" was not added until the 1961 Science Citation Index (Garfield, 1963, 1996). The initial intent of the IF was to help librarians compare the quality of diverse journals within particular scientific disciplines (Garfield, 1955; Kiesslich, Weineck, & Koelblinger, 2016). Such data could help in librarian decision-making in terms of which journals to subscribe to given available budgets. The journal IF was never intended to evaluate the merits of a particular article or the scholar(s) publishing the article. However, in recent decades, the use and interpretation of journal IFs have expanded markedly beyond their original intent, even to the point of impacting researchers' neural reward signals.
A fairly recent fMRI study of 35 neuroscientists from the fields of psychology, psychiatry, and neurology showed significantly greater neural activity in the nucleus accumbens (NAcc; a reward-signaling center of the brain) of these scientists when responding to a stimulus of potentially publishing in a high-IF neuroscience journal over a medium- or low-IF journal (Paulus, Rademacher, Schafer, Muller-Pinzler, & Krach, 2015). This laboratory study provided the first evidence that consideration of journal IFs can affect neural response patterns, and further demonstrates "how deeply entrenched the concept of [journal] IF has become on the neural systems level" (Paulus et al., 2015, p. 11).
Are IFs worthy of the attention and significance they now garner across the sciences? Concern over the inappropriate use of journal IFs has been raised repeatedly (DORA, 2012). A third limitation of journal IFs is that they can be manipulated (or "gamed") by editorial policy (DORA, 2012).
More specifically in this regard, journal editors (and review boards) can raise the impact factor by accepting fewer articles for publication in the journal, thus increasing the citation-to-publication ratio (Kiesslich et al., 2016). Fourth, journal IFs do not exclude self-citations, and by promoting self-citations, journals, or authors writing for specific journals, can boost IFs (Chorus & Waltman, 2016; Liu et al., 2015). Fifth, online-to-print delays in article publication can artificially raise IFs, particularly for longer print delays, as IFs are mainly based on the date of publication in print form (Tort, Targino, & Amaral, 2012). Sixth, according to DORA, "data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public" (DORA, 2012, p. 1).
Access to Clarivate's Journal Citation Reports (JCR) and its IF ratings, or to Scopus's SCImago journal ratings, requires expensive subscriptions generally available only to researchers or members of the public with institutional access to these sources.
Despite the known limitations of relying on IFs as measures of individual or institutional merit, they hold some value in assessing impact. Eyre-Walker and Stoletzki (2013) examined three methods of assessing the merit of an empirical paper: independent scholar rating of the paper post publication; the number of citations the paper accrued within 6 years after publication; and the IF of the journal in which the study appeared. Though there was a moderate correlation (r = .38; medium effect size) between scholar ratings of the published article and the number of citations eventually accrued by the article, the scholars over-rated (i.e., were influenced by) papers in high-IF journals. When this favorability bias was controlled for statistically, the correlation was negligible (r = .15; small effect size). Eyre-Walker and Stoletzki (2013) suggested that "scientists have little ability to judge either the intrinsic merit of a paper or its likely impact" (p. 1). The authors concluded that while none of the three methods is a good measure of merit, the IF may be the most satisfactory given it is a pre-publication measure.
Acknowledging that IFs do not describe the merits of an individual article, but instead reflect the impact and influence of the journal itself on its target discipline, this bibliometric serves a useful, limited purpose (see also Friedman, 2016; Postma, 2007). With the proliferation of print and online professional journals, IFs do provide some standard metric on the article citation-to-publication ratio. Most researchers would prefer their articles be published in journals that are often cited by other scholars within their own fields. Of course, at times weaker articles do reach publication in high-impact journals, and very good articles appear in lesser-known and less-cited journals.
As highlighted by Garfield (1996), "In the final analysis, impact simply reflects the ability of journals and editors to attract the best papers available" within their discipline (p. 411). Certainly, researchers would like to see (at least some of) their research and major conceptual/review/theoretical articles published in the journals that attract the best papers. The reality is that despite the known limitations of IFs, articles published in high-IF journals have greater value in that they are more often cited by other researchers. It is likely, then, that researchers who have a good number of published articles in high-impact journals will garner a collectively greater professional status.
The primary purpose of this study is not to further debate the merits and limits of bibliometrics, particularly, in our case, the Impact Factor (IF). The purpose is to stimulate thought and reflection on the material monetary value equivalent of the extensive amount of work researchers devote to the research and publication process.
IFs can be quite abstract to members both within and outside the scientific community, whereas the concept of money, or monetary value, may be more easily understood by those attempting to understand the value and work involved in publishing articles in select journals. A second purpose of this initial exploratory study is to stimulate follow-up research, from different national, statistical, and conceptual angles, on the Monetary Equivalent Value (MEV) formula as an additional metric to consider when weighing the impact of a published article in psychology.

Method

Median Number of Hours Involved in Publishing a Research Article
A review of the psychology literature did not uncover any research calculating the average number of hours involved in publishing a study. However, in the field of medicine, such a study was conducted. Song et al. (2013) had 13 surgeons specify time on task for 171 studies that they had published from 1990 through 2012. The studies examined employed retrospective designs rather than randomized controlled trials, given the much greater frequency of retrospective studies (Song et al., 2013). The number of hours per study ranged from 29 to 1,287. Given the positive skew of the distribution, the researchers calculated the median number of hours per study, which was 177 hours. This total represents the efforts of the collective team, not just the lead author. Naturally, the distribution of work tasks and hours across co-authors or team members will vary depending on the size of the research team and the scope of the study conducted. Table 1 summarizes the results of the Song et al. (2013) study, organized along the median number of hours involved in eight components of the research and publication process. Data collection and manuscript preparation each accounted for the greatest share of total time (22% each).
Though the Song et al. (2013) study was in medical surgery research, and not in psychology, aspects of the positivist and post-positivist research paradigm, the scientific method, and the associated quantitative procedures transcend both medical and psychological research (see Ponterotto, 2005). Furthermore, as in medical research, retrospective studies are much more common than randomized controlled trials in many subfields of psychology research (Heppner, Kivlighan, & Wampold, 2007). Thus, for the purposes of this study, and until the Song et al. (2013) study is replicated in various subfields of psychology, we will use their median of 177 hours per published study in the MEV calculations.

Impact Factors and Journal Selection
Naturally, the hypothetical dollar value of a published study will be influenced by the quality or reputation of the journal in which the study is published. The popular, though controversial, bibliometric of the 5-year Impact Factor (IF) established by the Institute for Scientific Information (ISI) and published in the annual Journal Citation Reports (Clarivate Analytics, formerly Thomson Reuters) was selected for the present study. The standard IF is the number of citations in a given year to items the journal published in the preceding 2 years (numerator), divided by the number of citable items (substantive articles and reviews only) published in the journal in the preceding 2 years (denominator) (Garfield, 2006). The 5-year IF bibliometric extends this formula back 5 years, rather than two.
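As an illustration, the 2-year and 5-year IF ratios just described can be sketched in a few lines of code. This is a toy function, not Clarivate's actual procedure, and the citation and item counts below are hypothetical:

```python
def impact_factor(citations_by_year, citable_items_by_year, year, window=2):
    """Toy IF: citations received in `year` by items published in the
    preceding `window` years, divided by citable items in those years."""
    prior_years = range(year - window, year)
    citations = sum(citations_by_year[year].get(y, 0) for y in prior_years)
    items = sum(citable_items_by_year.get(y, 0) for y in prior_years)
    return citations / items

# Hypothetical counts for a small journal:
# citations received in 2018, keyed by the cited item's publication year
citations_by_year = {2018: {2013: 40, 2014: 50, 2015: 60, 2016: 120, 2017: 80}}
# citable items (substantive articles and reviews) published each year
citable_items_by_year = {2013: 40, 2014: 42, 2015: 45, 2016: 50, 2017: 50}

two_year = impact_factor(citations_by_year, citable_items_by_year, 2018)
five_year = impact_factor(citations_by_year, citable_items_by_year, 2018, window=5)
print(round(two_year, 3), round(five_year, 3))  # 2.0 1.542
```

Note how the same journal yields different 2-year and 5-year values: the longer window dampens short-term citation spikes, which is one reason the present study prefers the 5-year IF.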
For an initial sample of journals, the author selected all journals affiliated with the American Psychological Association (APA) that reported IF ratings (some of the newest journals had not yet reported IFs). If available, we utilized the Journal Citation Report (JCR) 5-year Impact Factor (IF) ratings (in two cases where the 5-year IF was not available, we incorporated the latest IF rating). On APA's webpage (www.apa.org), under the Browse Journal link, the IFs are listed with the journal descriptions.

Final MEV Linear Equation
Consistent with the description of the MEV given above, the final linear equation sets the value of a published article as the product of three terms: the median number of hours involved in publishing an article, the mean hourly wage of psychologists, and the 5-year IF of the journal in which the article appears:

MEV = (median hours to publication) × (mean hourly wage) × (5-year journal IF)

The present study is limited by the conceptual foundation of the MEV, which relies on central tendency measures of the number of hours worked by psychologists and their average income/hourly wages. Further, this study is limited to the North American research publishing context, and may or may not generalize to other continents and countries. Additionally, the MEV is anchored in the popular JCR journal Impact Factor rating, which, as has been noted, is open to debate and controversy. Consistent with the recommendation from DORA (2012), the MEV value is theoretical and applies to an average article in the specific journal, not to any particular article. Second, this study calculated the average hourly wage of psychologists based only on North American data. Salaries and forms of compensation for published research can vary widely within and across nations and across first- and third-world economies. Also, the pressure to publish in higher-impact journals can vary from country to country and from institution to institution within countries. This topic needs to be studied from a wider context, and it would behoove psychologists to partner with sociologists and economists in interdisciplinary research.
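The MEV, as described in the Abstract, is driven by the median hours to publication, the mean hourly wage of psychologists, and the journal's 5-year IF. A minimal sketch of that computation follows; the 177-hour median comes from Song et al. (2013), while the hourly wage and IF inputs are hypothetical placeholders, not the figures used in this study:

```python
MEDIAN_HOURS = 177  # Song et al. (2013) median hours per published study

def mev(hourly_wage, five_year_if, median_hours=MEDIAN_HOURS):
    """Monetary Equivalent Value: base labor value (hours x wage)
    scaled by the journal's 5-year Impact Factor."""
    return median_hours * hourly_wage * five_year_if

# Hypothetical inputs: a $32/hour mean wage and a journal with a 5-year IF of 2.5
print(round(mev(32.0, 2.5), 2))  # 14160.0
```

Because the IF enters as a multiplier on a fixed labor base, the spread of MEVs across journals mirrors the spread of their 5-year IFs.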

Discussion
Furthermore, research can attend more specifically to the salary or work-hour variance among psychologists within research communities. For instance, the average U.S. salary of an assistant professor in psychology across all institution types is $61,500 (2016-2017 survey) (Christidis et al., 2018), or $29.57 per hour for a 40-hour work-week. A psychologist in private practice who charges $150 per hour, by contrast, may earn $312,000 a year for the same 40-hour work weeks. The loss in income for a private practitioner who takes time away from patients to conduct and publish a study is far greater than for an academic, who often has a flexible work week and for whom research is part of the job description and allotted time within the work week.
Naturally, many private practitioners or practitioners in diverse settings do publish in psychology journals.
Third, MEVs will need to be recalculated each year or every few years as updated information on psychologists' salaries, the work hours involved in publishing a manuscript, and journal IFs becomes available. For the many new psychology journals appearing each year, 5-year impact ratings are not yet available, thus researchers may want to rely on 1-year or 2-year IFs in some calculations. Naturally, journals without current IF ratings should not be assessed with the MEV model. It should also be noted that while some online-only journals publishing psychological research, such as PLoS ONE and Frontiers in Psychology, do report IFs, many newer online journals have not yet joined the JCR system.
Fourth, it is suggested that follow-up research examine variations on the MEV and the variables used to calculate it. For example, more complex extensions of the MEV formula could incorporate the page lengths of published articles (on the assumption, for example, that a 12-page article published in the Journal of Consulting and Clinical Psychology is more "valuable" than a three-page article).
It is acknowledged that the present study is just a first step to introduce a new concept to the growing discourse on the prestige and status of publication outlets in psychology. Money is universally understood, and publishing does impact the financial status of researchers and their home institutions. It is hoped that the present study will stimulate follow-up research internationally.