
Investigating the Validity of Botometer-Based Social Bot Studies

  • Conference paper

Disinformation in Open Online Media (MISDOOM 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13545)

Abstract

The idea that social media platforms like Twitter are inhabited by vast numbers of social bots has become widely accepted in recent years. Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion. They are credited with the ability to produce content autonomously and to interact with human users. Social bot activity has been reported in many different political contexts, including the U.S. presidential elections, discussions about migration, climate change, and COVID-19. However, the relevant publications either use crude and questionable heuristics to discriminate between supposed social bots and humans or—in the vast majority of the cases—fully rely on the output of automatic bot detection tools, most commonly Botometer. In this paper, we point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots. Furthermore, we empirically investigate the validity of peer-reviewed Botometer-based studies by closely and systematically inspecting hundreds of accounts that had been counted as social bots. We were unable to find a single social bot. Instead, we found mostly accounts undoubtedly operated by human users, the vast majority of them using Twitter in an inconspicuous and unremarkable fashion without the slightest traces of automation. We conclude that studies claiming to investigate the prevalence, properties, or influence of social bots based on Botometer have, in reality, just investigated false positives and artifacts of this approach.

Notes

  1.

    https://botometer.osome.iu.edu/bot-repository/datasets.html.

  2.

    Notably, this 15% estimate was obtained using an earlier version of Botometer as a classifier and without manually verifying those results [16].

  3.

    This resembles a technique of adjusting a bedridden patient’s blood pressure reading by heavily tilting the bed, as described in Samuel Shem’s satirical novel House of God: “You can get any blood pressure you want out of your gomer”.

  4.

    A similar problem arises when human labelers are instructed to rate accounts as “bots” or “humans”. The ratings will typically be based on unrealistically high expectations of the bot prevalence \(p(Bot)\) (fueled by Botometer-based publications and media coverage), a limited understanding of the state of the art in artificial intelligence combined with misconceptions of what features might be “bot-like” (i.e. a bad estimate of \(p(\boldsymbol{x} | Bot)\)), as well as false and narrow expectations of what “normal” human behavior on Twitter might be (i.e. a bad estimate of \(p(\boldsymbol{x} | Human)\)). As a result, many accounts that are clearly not automated but were rated “bots” by human labelers can be found in the “bot repository” used to train Botometer.

  5.

    Surprisingly, in May 2019, Botometer performed dramatically better on the members of Congress; the false positive rate dropped from 47% to 0.4%. Possibly, these accounts had been added to the Botometer training data as examples of human users in the meantime.

  6.

    https://www.in.th-nuernberg.de/Professors/Gallwitz/gk-md22-suppl.pdf.

  7.

    https://www.in.th-nuernberg.de/Professors/Gallwitz/gk-md22-suppl.pdf.
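
The base-rate argument in note 4 can be made concrete with a small numerical sketch. By Bayes’ rule, when the true bot prevalence \(p(Bot)\) is low, even a classifier with a seemingly modest false positive rate will flag mostly human accounts. All numbers below are hypothetical illustrations chosen for the example, not figures taken from the paper:

```python
def p_bot_given_flag(prevalence, tpr, fpr):
    """P(Bot | flagged) via Bayes' rule.

    prevalence: P(Bot), the true fraction of bots among all accounts
    tpr:        P(flagged | Bot), the classifier's true positive rate
    fpr:        P(flagged | Human), the classifier's false positive rate
    """
    # Total probability of an account being flagged at all
    p_flagged = tpr * prevalence + fpr * (1 - prevalence)
    # Fraction of flagged accounts that are actually bots
    return tpr * prevalence / p_flagged

# Hypothetical example: 1% true bot prevalence, 80% true positive rate,
# 5% false positive rate.
posterior = p_bot_given_flag(prevalence=0.01, tpr=0.80, fpr=0.05)
print(f"P(Bot | flagged) = {posterior:.2f}")  # prints P(Bot | flagged) = 0.14
```

Under these assumptions, roughly 86% of the flagged accounts are humans, which illustrates why manually verifying accounts counted as “bots”, as done in this paper, matters so much.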

References

  1. Allem, J.P., Ferrara, E.: Could social bots pose a threat to public health? Am. J. Public Health 108(8), 1005 (2018)

  2. Davis, C.A., Varol, O., Ferrara, E., Flammini, A., Menczer, F.: BotOrNot: a system to evaluate social bots. In: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274 (2016)

  3. Dreißel, J.: 77 nützliche Twitter Retweet Bots. Onlinelupe.de, 11 July 2010 (2010). https://www.onlinelupe.de/online-marketing/77-nutzliche-twitter-retweet-bots/

  4. Dunn, A.G., et al.: Limited role of bots in spreading vaccine-critical information among active Twitter users in the United States: 2017–2019. Am. J. Public Health 110(S3), S319–S325 (2020)

  5. Gallwitz, F., Kreil, M.: The Social Bot Fairy Tale. Tagesspiegel Background, 3 June 2019 (2019). https://background.tagesspiegel.de/digitalisierung/the-social-bot-fairy-tale

  6. Gorwa, R., Guilbeault, D.: Unpacking the social media bot: a typology to guide research and policy. Policy Internet 12(2), 225–248 (2020)

  7. He, L., et al.: Why do people oppose mask wearing? A comprehensive analysis of US tweets during the COVID-19 pandemic. J. Am. Med. Inform. Assoc. 28(7), 1564–1573 (2021)

  8. Howard, P.N., Kollanyi, B., Woolley, S.: Bots and Automation over Twitter during the US Election. Technical report, Oxford Internet Institute (2016)

  9. Keller, T.R., Klinger, U.: Social bots in election campaigns: theoretical, empirical, and methodological implications. Polit. Commun. 36(1), 171–189 (2019)

  10. Kreil, M.: Social Bots, Fake News und Filterblasen. 34th Chaos Communication Congress (34C3), December 2017 (2017). https://media.ccc.de/v/34c3-9268-social_bots_fake_news_und_filterblasen

  11. Kreil, M.: The army that never existed: the failure of social bots research. OpenFest Conference, November 2019 (2019). https://michaelkreil.github.io/openbots/

  12. Kreil, M.: People are not bots. 13th HOPE Conference, July 2020 (2020). https://archive.org/details/hopeconf2020/20200726_2000_People_Are_Not_Bots.mp4

  13. Lee, K., Eoff, B., Caverlee, J.: Seven months with the devils: a long-term study of content polluters on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 5, pp. 185–192 (2011)

  14. Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: impact of COVID-19 on vaccine-related opinions of Twitter users. arXiv preprint arXiv:2111.10667 (2021)

  15. Rauchfleisch, A., Kaiser, J.: The false positive problem of automatic bot detection in social science research. PLoS ONE 15(10), e0241045 (2020)

  16. Varol, O., Ferrara, E., Davis, C., Menczer, F., Flammini, A.: Online human-bot interactions: detection, estimation, and characterization. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 11 (2017)

  17. Wischnewski, M., Bernemann, R., Ngo, T., Krämer, N.: Disagree? You must be a bot! How beliefs shape twitter profile perceptions. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–11 (2021)

  18. Yang, K.C., Ferrara, E., Menczer, F.: Botometer 101: social bot practicum for computational social scientists. arXiv preprint arXiv:2201.01608 (2022)

  19. Yang, K., Varol, O., Davis, C., Ferrara, E., Flammini, A., Menczer, F.: Arming the public with artificial intelligence to counter social bots. Hum. Behav. Emerg. Technol. 1(1), 48–61 (2019)

Acknowledgements

We are grateful to Adam Dunn for sharing with us the relevant raw data we used in Sect. 4.2. We sincerely appreciate the valuable comments and suggestions by Adrian Rauchfleisch, Darius Kazemi, and Jürgen Hermes, which helped us to improve the quality of the manuscript.

Author information

Corresponding author

Correspondence to Florian Gallwitz.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Gallwitz, F., Kreil, M. (2022). Investigating the Validity of Botometer-Based Social Bot Studies. In: Spezzano, F., Amaral, A., Ceolin, D., Fazio, L., Serra, E. (eds) Disinformation in Open Online Media. MISDOOM 2022. Lecture Notes in Computer Science, vol 13545. Springer, Cham. https://doi.org/10.1007/978-3-031-18253-2_5

  • DOI: https://doi.org/10.1007/978-3-031-18253-2_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18252-5

  • Online ISBN: 978-3-031-18253-2

  • eBook Packages: Computer Science, Computer Science (R0)
