Linked Credibility Reviews for Explainable Misinformation Detection

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12506)

Abstract

In recent years, misinformation on the Web has become increasingly rampant. The research community has responded by proposing systems and challenges which are beginning to be useful for (various subtasks of) detecting misinformation. However, most proposed systems are based on deep learning techniques which are fine-tuned to specific domains, are difficult to interpret, and produce results which are not machine readable. This limits their applicability and adoption, as they can only be used by a select expert audience in very specific settings. In this paper we propose an architecture based on a core concept of Credibility Reviews (CRs) that can be used to build networks of distributed bots that collaborate for misinformation detection. The CRs serve as building blocks to compose graphs of (i) web content, (ii) existing credibility signals (fact-checked claims and reputation reviews of websites), and (iii) automatically computed reviews. We implement this architecture on top of lightweight extensions to Schema.org and services providing generic NLP tasks for semantic similarity and stance detection. Evaluations on existing datasets of social-media posts, fake news, and political speeches demonstrate several advantages over existing systems: extensibility, domain independence, composability, explainability, and transparency via provenance. Furthermore, we obtain competitive results without requiring fine-tuning and establish a new state of the art on the CLEF'18 CheckThat! Factuality task.
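As a rough illustration of the core concept, the sketch below models a Credibility Review as a (rating, confidence) pair over a reviewed item and composes two credibility signals into an overall review. The class name, value ranges, and the max-confidence aggregation rule are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
from dataclasses import dataclass

@dataclass
class CredibilityReview:
    """Hypothetical CR: a credibility rating for one reviewed item."""
    item_reviewed: str   # URL or identifier of the reviewed content
    rating: float        # -1.0 (not credible) .. 1.0 (credible)
    confidence: float    # 0.0 (no confidence) .. 1.0 (full confidence)

def aggregate(reviews):
    """Combine sub-reviews into one CR by keeping the most confident
    signal -- a simple stand-in for composing a graph of CRs."""
    best = max(reviews, key=lambda r: r.confidence)
    return CredibilityReview("aggregate", best.rating, best.confidence)

# Two example signals: a website reputation review and a fact-check match.
site_reputation = CredibilityReview("https://example.org", 0.4, 0.6)
fact_check = CredibilityReview("https://example.org/claim", -0.8, 0.9)
overall = aggregate([site_reputation, fact_check])
```

In this toy composition the fact-check signal wins because it carries the higher confidence, so the overall review inherits its negative rating.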


Notes

  1. https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation.

  2. https://www.truly.media/, https://www.disinfobservatory.org/.

  3. https://invid.weblyzard.com/.

  4. https://status.crowdtangle.com/.

  5. https://www.blog.google/products/search/fact-check-now-available-google-search-and-news-around-world/.

  6. In our opinion, current AI systems cannot truly assess veracity, since this requires human skills to access and interpret new information and relate it to the world.

  7. https://credweb.org/signals-beta/.

  8. Note that ClaimReview is not suitable, since it is overly restrictive: it can only review Claims (and it implicitly assumes the review aspect is accuracy).

  9. The source code is available at https://github.com/rdenaux/acred.

  10. https://www.newsguardtech.com/, https://www.mywot.com/.

  11. http://expert.ai.

  12. Note that usability evaluation of the generated explanations is not in the scope of this paper.

  13. Our implementation has support for machine translation of sentences; however, this adds a confounding factor, hence we leave it as future work.

  14. https://github.com/KaiDMML/FakeNewsNet, although we note that text for many of the articles could no longer be retrieved, making a fair comparison difficult.

  15. https://github.com/co-inform/Datasets.

  16. acred's data collector is used to build the ClaimReview database described in Sect. 4; it does not store the itemReviewed URL values, only the claimReviewed strings.

  17. As stated above, we used the results of this analysis to inform the changes implemented in acred+.
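Footnote 8 above argues that schema.org's ClaimReview is too restrictive for generic credibility reviews, since it can only review Claims. The sketch below shows what a more generic credibility review might look like as JSON-LD built from a plain Python dict; the `reviewAspect` value and the `confidence` property are assumed extensions for illustration, not schema.org's vocabulary or the paper's exact extension.

```python
import json

# Hypothetical JSON-LD for a credibility review expressed as a generic
# schema.org Review (rather than ClaimReview), so that any item -- an
# article, a tweet, a website -- can be reviewed, not only a Claim.
credibility_review = {
    "@context": "https://schema.org",
    "@type": "Review",
    "itemReviewed": {
        "@type": "Article",
        "url": "https://example.org/news/article-1",
    },
    "reviewAspect": "credibility",       # assumed, not implicit accuracy
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": -0.8,             # -1 = not credible, 1 = credible
        "confidence": 0.9,               # assumed extension property
    },
    "author": {"@type": "SoftwareApplication", "name": "example-bot"},
}

doc = json.dumps(credibility_review, indent=2)
```

Serialising via `json.dumps` yields a document that generic Review consumers can process, while the explicit `reviewAspect` keeps the reviewed dimension machine readable.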


Acknowledgements

Work supported by the European Commission under grant 770302 (Co-Inform) as part of the Horizon 2020 research and innovation programme. Thanks to the Co-Inform members for discussions which helped shape this research, and in particular to Martino Mensio for his work on MisinfoMe. Thanks also to Flavio Merenda and Olga Salas for their help implementing parts of the pipeline.

Author information

Correspondence to Ronald Denaux.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Denaux, R., Gomez-Perez, J.M. (2020). Linked Credibility Reviews for Explainable Misinformation Detection. In: Pan, J.Z., et al. (eds.) The Semantic Web – ISWC 2020. Lecture Notes in Computer Science, vol. 12506. Springer, Cham. https://doi.org/10.1007/978-3-030-62419-4_9

  • DOI: https://doi.org/10.1007/978-3-030-62419-4_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62418-7

  • Online ISBN: 978-3-030-62419-4
