Quality and quantity in publishing

At the end of last week, I came across a blog post by Olaf Storbeck reporting that a rising star of German business economics, Prof. Ulrich Lichtenthaler, is facing numerous inquiries concerning his publishing record. Two journals have already retracted three of his articles, and additional articles are under scrutiny. At present, Lichtenthaler faces three charges: unethical serial publishing (or auto-plagiarism), inconsistent research across articles, and sloppy statistical reporting.

On the first count, the journals criticize Lichtenthaler for publishing articles that were substantively too close to each other without cross-referencing his own work. The journals argue that had they known about his parallel work, they would not have accepted his articles because of their lack of originality. The second charge is that Lichtenthaler shows little consistency in some of his empirical analyses. In substantively closely related articles, he uses different variables in the empirical models without good justification. This suggests that the models were tailored to yield results amenable to publication.

The third point is a statistical one. Lichtenthaler’s articles include regression tables that assign a significant effect to variables that are in fact non-significant. The point is not that data was made up. It is simply that one finds a significance marker next to an estimated effect although the ratio of the effect to its standard error shows that it does not reach the corresponding level of significance. For example, in one article (Lichtenthaler, U. and H. Ernst (2012): Integrated Knowledge Exploitation: The Complementarity of Product Development and Technology Licensing. Strategic Management Journal 33 (5): 513-534), the variable “technology licensing” has an effect of 0.69 and a standard error of 0.96 (OLS regression, model 14 on page 527). The resulting t-value of about 0.72 is far below any conventional critical value used in empirical research, yet the effect is marked as significant at the .10 level.
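
The arithmetic behind this check is simple enough to script. Below is a minimal sketch in Python using SciPy; the function name and the large-sample normal fallback are my own choices, and a two-sided test is assumed. It recomputes the t-statistic for the reported coefficient and compares it against the critical value at the .10 level:

```python
from scipy import stats

def check_significance(coef, se, alpha=0.10, df=None):
    """Recompute the t-statistic for a reported coefficient and test
    whether it clears the two-sided critical value at level `alpha`."""
    t = coef / se
    # Use Student's t if the degrees of freedom are known; otherwise
    # fall back to the large-sample normal approximation.
    crit = stats.t.ppf(1 - alpha / 2, df) if df else stats.norm.ppf(1 - alpha / 2)
    return t, crit, abs(t) >= crit

# The reported effect: coefficient 0.69, standard error 0.96 (model 14).
t, crit, significant = check_significance(0.69, 0.96, alpha=0.10)
print(f"t = {t:.2f}, critical value = {crit:.2f}, significant: {significant}")
# t = 0.72, critical value = 1.64, significant: False
```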

I think that each of the three points has interesting implications that hold beyond the field of business economics, as there is nothing in this matter that is specific to the discipline. Starting with the third point, the misreporting seems to be a matter of sloppiness because it is, in principle, easy to detect (which does not make it any less serious an issue). On the one hand, the misreporting could have been detected by the reviewers. On the other hand, as a reviewer, one usually trusts the accuracy of the regression tables and focuses one’s attention on other parts of the paper. The Lichtenthaler papers suggest that one had best look closely at the tables as well, although significance is not always as easy to recompute as it is for individual variables in OLS regressions. A labor-intensive alternative would be to have an editorial assistant inspect all the tables of accepted publications for accuracy. The better alternative, however, would be for journals to require authors to publish their data and code alongside the article. It is hard to understand why publishing data is still the exception rather than the rule, as it would have many advantages beyond the opportunity to check the accuracy of published work.
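
To illustrate what such an automated inspection could look like, here is a hedged sketch that scans a whole coefficient table and flags rows where the recomputed t-statistic contradicts the printed significance stars. The star-to-level mapping and the second row of example data are my own assumptions; only the first row reproduces the figures cited above:

```python
from scipy import stats

# Conventional mapping of stars to significance levels (an assumption;
# the exact convention varies across journals).
STAR_LEVELS = {"***": 0.01, "**": 0.05, "*": 0.10, "": None}

def flag_mismatches(rows):
    """Flag rows whose recomputed |t| contradicts the printed stars,
    using the large-sample normal approximation for critical values."""
    for name, coef, se, stars in rows:
        t = abs(coef / se)
        alpha = STAR_LEVELS[stars]
        if alpha is None:
            continue
        crit = stats.norm.ppf(1 - alpha / 2)
        if t < crit:
            print(f"{name}: |t| = {t:.2f} < {crit:.2f}, "
                  f"reported '{stars}' looks too generous")

flag_mismatches([
    # First row reproduces the figures cited above; the second is invented.
    ("technology licensing", 0.69, 0.96, "*"),
    ("firm size", 0.40, 0.10, "***"),
])
# technology licensing: |t| = 0.72 < 1.64, reported '*' looks too generous
```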

The other two points are very interesting because they relate to somewhat more common practices in science. The second charge concerns the inconsistency of the theory and empirical models across publications. Of course, one expects researchers to be consistent in their research and to incorporate their previous findings into their current work. However, I think Lichtenthaler’s inconsistency is not that uncommon in empirical research. I personally have come across inconsistencies in social science publications, and one wonders from time to time why some publications fail to control for a variable that is commonly included in empirical models. The Lichtenthaler case might bring this issue to the forefront, at least in business economics, and raise the bar with regard to the consistency and cumulative nature of empirical research.

The first charge against Lichtenthaler, i.e., some variety of auto-plagiarism, relates to the standard policy of journals to publish only original work. I do not know the details, but it seems that Lichtenthaler submitted very similar pieces to different journals at about the same time. If this was the case, the editors and reviewers are not to blame because they could not have known that multiple articles were simultaneously under review. Lichtenthaler should have notified the journals and, in case of doubt, sent them all the articles so they could decide whether the respective submission still qualified as original. Since this was not done, it was only a matter of time until someone noticed the similarity between the articles. More generally, auto-plagiarism is very likely to be detected (as in the case of Bruno Frey), meaning that one is unlikely to get away with it. Still, it would be better to prevent such occurrences than to retract articles after the fact. The only mechanism I can think of (in addition to raising awareness that this practice is unacceptable) would be a central registry of all articles currently under review. This would allow editors to determine whether the submitting researcher has another paper under review that could be too close to the submitted work. If in doubt, the editor could approach the researcher and request clarification. Aside from the question of whether all publishers would participate in a central registry, it would shift the information balance between editors and researchers towards the editors, because they would potentially have a full overview of all the work a researcher currently has under review. Publishers should ensure that their editors keep this information absolutely confidential, but in my view, the shift in the information balance is a price worth paying to avoid retractions of published articles.
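
For concreteness, here is a purely hypothetical sketch of such a registry; the class, method names, and the use of simple textual similarity are all my own assumptions, not an existing system. Journals register manuscripts under review, and an editor can query whether an author has other pending submissions that look suspiciously similar:

```python
from difflib import SequenceMatcher

class SubmissionRegistry:
    """Hypothetical central registry of manuscripts under review."""

    def __init__(self):
        self._pending = {}  # author -> abstracts of manuscripts under review

    def register(self, author, abstract):
        self._pending.setdefault(author, []).append(abstract)

    def similar_pending(self, author, abstract, threshold=0.6):
        """Return the author's pending abstracts whose textual similarity
        to the new submission exceeds `threshold`."""
        return [
            other for other in self._pending.get(author, [])
            if SequenceMatcher(None, abstract, other).ratio() > threshold
        ]

# An editor receiving a new submission could then check:
registry = SubmissionRegistry()
registry.register("A. Author", "We study licensing and product development...")
overlaps = registry.similar_pending(
    "A. Author", "We study licensing and new product development...")
print(len(overlaps))  # 1: a suspiciously similar pending manuscript
```

Real matching would of course need something more robust than raw string similarity, plus strict access control, but the lookup itself would be this simple.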

On a broader scale, this case indicates that Lichtenthaler either lacked awareness of what is and is not legitimate in the publishing arena, or bent the rules in order to boost his output. In either case, the incident underscores once more the direction in which the pressure to publish has led science (and, in this particular instance, the field of business economics). The pursuit of publications may also have blinded the field: Lichtenthaler published about 50 peer-reviewed articles in about 8 years. This is not impossible, but it should have rung some warning bells (also because he stated that he did not work more than 40 hours per week). This is not to blame the scientific community for the Lichtenthaler case, because he was the one who wrote and submitted the articles. But the pressure to publish creates an environment that is conducive to such behavior, and one should not be surprised if similar incidents occur again.

About Ingo Rohlfing

I am a political scientist. My teaching and research cover social science methods with an emphasis on case studies, multi-method research, causation, and causal inference. I am also interested in matters of research transparency and credibility. ORCID: 0000-0001-8715-4771