Automated Fact-Checking Resources
Updates:
- 2024.6: Added a section for LLM-generated text in Related Tasks. Added papers from EACL, NAACL, and AAAI 2024
Overview
This repo contains relevant resources from our survey paper A Survey on Automated Fact-Checking in TACL 2022 and the follow-up multimodal survey paper Multimodal Automated Fact-Checking: A Survey. In these surveys, we present a comprehensive and up-to-date overview of automated fact-checking (AFC), unifying the various components and definitions developed in previous research into a common framework. As automated fact-checking research evolves, we will provide timely updates to the surveys and this repo.
Task Definition
The figure below shows an NLP framework for automated fact-checking (AFC) with text, consisting of three stages:
- Claim detection to identify claims that require verification;
- Evidence retrieval to find sources supporting or refuting the claim;
- Claim verification to assess the veracity of the claim based on the retrieved evidence.
Evidence retrieval and claim verification are sometimes tackled as a single task referred to as factual verification, while claim detection is often tackled separately. Claim verification can be decomposed into two parts that can be tackled separately or jointly: verdict prediction, where claims are assigned truthfulness labels, and justification production, where explanations for verdicts must be produced.
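For illustration, below is a minimal Python sketch of how these three stages could be composed into one pipeline. All names (Claim, Evidence, Verdict, detect_claims, retrieve_evidence, verify_claim) and the placeholder logic are illustrative assumptions for this README, not the interface of any surveyed system.

```python
# Minimal sketch of the three-stage AFC pipeline described above.
# All class/function names and the placeholder logic are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class Claim:
    text: str


@dataclass
class Evidence:
    source: str
    snippet: str


@dataclass
class Verdict:
    label: str          # e.g. "supported", "refuted", "not enough info"
    justification: str  # natural-language explanation of the verdict


def detect_claims(document: str) -> List[Claim]:
    """Claim detection: identify check-worthy claims in the input text."""
    # Placeholder heuristic: treat every sentence as a candidate claim.
    return [Claim(s.strip()) for s in document.split(".") if s.strip()]


def retrieve_evidence(claim: Claim) -> List[Evidence]:
    """Evidence retrieval: find sources supporting or refuting the claim."""
    # Placeholder: a real system would query a document collection or the web.
    return [Evidence(source="example-corpus", snippet="...")]


def verify_claim(claim: Claim, evidence: List[Evidence]) -> Verdict:
    """Claim verification: verdict prediction plus justification production."""
    # Placeholder: a real system would use an NLI-style or LLM-based verifier.
    return Verdict(label="not enough info",
                   justification="No matching evidence was retrieved.")


def fact_check(document: str) -> List[Verdict]:
    """Run the full pipeline: detection -> retrieval -> verification."""
    return [verify_claim(c, retrieve_evidence(c)) for c in detect_claims(document)]
```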
In the follow-up multimodal survey, we extend the first stage with a claim extraction step and generalise the third stage to cover tasks that fall under multimodal AFC (see the sketch after this list):
- Claim Detection and Extraction: multiple modalities can be required to understand and extract a claim at this stage. Simply detecting misleading content is often not enough – it is necessary to extract the claim before fact-checking it in the subsequent stages.
- Evidence Retrieval: similarly to fact-checking with text, multimodal fact-checking relies on evidence to make judgments.
- Verdict Prediction and Justification Production: this stage is decomposed into three tasks, reflecting prevalent ways in which multimodal misinformation can be conveyed:
- Manipulation Classification: classify misinformative claims with manipulated content or correct claims accompanied by manipulated content.
- Out-of-context Classification: detect unaltered content that is presented in a different, misleading context.
- Veracity Classification: classify the veracity of textual claims given retrieved evidence.
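As a rough illustration of how these three verdict-prediction sub-tasks could feed into a single judgement, here is a minimal Python sketch. The label sets and the precedence rule are our own assumptions for exposition, not a taxonomy or procedure prescribed by the survey.

```python
# Minimal sketch of combining the three multimodal verdict-prediction sub-tasks.
# Label sets and the aggregation rule below are illustrative assumptions only.
from enum import Enum


class Manipulation(Enum):
    PRISTINE = "pristine"
    MANIPULATED = "manipulated"


class Context(Enum):
    IN_CONTEXT = "in context"
    OUT_OF_CONTEXT = "out of context"


class Veracity(Enum):
    SUPPORTED = "supported"
    REFUTED = "refuted"
    NOT_ENOUGH_INFO = "not enough info"


def overall_verdict(manipulation: Manipulation,
                    context: Context,
                    veracity: Veracity) -> str:
    """Combine sub-task outputs into one human-readable verdict.

    The precedence (manipulation > out-of-context > textual veracity) is one
    plausible ordering, not a rule from the survey.
    """
    if manipulation is Manipulation.MANIPULATED:
        return "misleading: content has been manipulated"
    if context is Context.OUT_OF_CONTEXT:
        return "misleading: genuine content presented out of context"
    return f"textual claim is {veracity.value}"


# Usage example
print(overall_verdict(Manipulation.PRISTINE,
                      Context.OUT_OF_CONTEXT,
                      Veracity.NOT_ENOUGH_INFO))
# -> "misleading: genuine content presented out of context"
```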
Datasets
Claim Detection and Extraction Dataset
- MR2: A Benchmark for Multimodal Retrieval-Augmented Rumor Detection in Social Media (Hu et al., 2023) [Paper] [Dataset] SIGIR 2023
- FakeSV: A Multimodal Benchmark with Rich Social Context for Fake News Detection on Short Video Platforms (Qi et al., 2023) [Paper] [Dataset] AAAI 2023
- SciTweets - A Dataset and Annotation Framework for Detecting Scientific Online Discourse (Hafid et al., 2022) [Paper] [Dataset] CIKM 2022
- Empowering the Fact-checkers! Automatic Identification of Claim Spans on Twitter (Sundriyal et al., 2022) [Paper] [Dataset] EMNLP 2022
- Stanceosaurus: Classifying Stance Towards Multilingual Misinformation (Zheng et al., 2022) [Paper] [Dataset] EMNLP 2022
- Challenges and Opportunities in Information Manipulation Detection: An Examination of Wartime Russian Media (Park et al., 2022) [Paper] Findings EMNLP 2022
- CoVERT: A Corpus of Fact-checked Biomedical COVID-19 Tweets (Mohr et al., 2022) [Paper] [Dataset] LREC 2022
- MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset (Nielsen et al., 2022) [Paper] [Dataset] SIGIR 2022
- STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media (Rao et al., 2021) [Paper] [Dataset] EMNLP 2021
- Fighting the COVID-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society (Alam et al., 2021) [Paper] [Dataset] Findings EMNLP 2021
- Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection (Konstantinovskiy et al., 2021) [Paper] ACM Digital Threats: Research and Practice 2021
- The CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News (Nakov et al., 2021) [Paper] [Dataset]
- Mining Dual Emotion for Fake News Detection (Zhang et al., 2021) [Paper] [Dataset] WWW 2021
- Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media (Barrón-Cedeño et al., 2020) [Paper] [Dataset]
- Citation Needed: A Taxonomy and Algorithmic Assessment of Wikipedia's Verifiability (Redi et al., 2019) [Paper] [Dataset]
- SemEval-2019 Task 7: RumourEval, Determining Rumour Veracity and Support for Rumours (Gorrell et al., 2019) [Paper] [Dataset]
- Joint Rumour Stance and Veracity (Lillie et al., 2019) [Paper] [Dataset]
- Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness (Atanasova et al., 2018) [Paper] [Dataset]
- Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on Twitter (Volkova et al., 2017) [Paper] [Dataset] ACL 2017
- A Context-Aware Approach for Detecting Worth-Checking Claims in Political Debates (Gencheva et al., 2017) [Paper] [Dataset] RANLP 2017
- Multimodal Fusion with Recurrent Neural Networks for Rumor Detection on Microblogs (Jin et al., 2017) [Paper] ACM MM 2017
- SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours (Derczynski et al., 2017) [Paper] [Dataset]
- Detecting Rumors from Microblogs with Recurrent Neural Networks (Ma et al., 2016) [Paper] [Dataset] IJCAI 2016
- Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads (Zubiaga et al., 2016) [Paper] [Dataset] PLOS ONE 2016
- CREDBANK: A Large-Scale Social Media Corpus with Associated Credibility Annotations (Mitra and Gilbert, 2015) [Paper] [Dataset] ICWSM 2015
- Detecting Check-worthy Factual Claims in Presidential Debates (Hassan et al., 2015) [Paper] CIKM 2015
Verdict Prediction Dataset
Veracity Classification Dataset
Natural Claims
- Do Large Language Models Know about Facts? (Xu et al., 2024) [Paper] [Dataset] [Code] ICLR 2024
- What Makes Medical Claims (Un)Verifiable? Analyzing Entity and Relation Properties for Fact Verification (Wührl et al., 2024) [Paper] [Dataset] EACL 2024
- COVID-VTS: Fact Extraction and Verification on Short Video Platforms (Liu et al., 2023) [Paper] [Dataset] [Code] EACL 2023
- End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models (Yao et al., 2023) [Paper] [Dataset] SIGIR 2023
- Modeling Information Change in Science Communication with Semantically Matched Paraphrases (Wright et al., 2022) [Paper] [Dataset] [Code] EMNLP 2022
- Generating Literal and Implied Subquestions to Fact-check Complex Claims (Chen et al., 2022) [Paper] [Dataset] EMNLP 2022
- SciFact-Open: Towards open-domain scientific claim verification (Wadden et al., 2022) [Paper] [Dataset] EMNLP 2022
- CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking (Hu et al., 2022) [Paper] [Dataset] NAACL 2022
- WatClaimCheck: A new Dataset for Claim Entailment and Inference (Khan et al., 2022) [Paper] [Dataset] ACL 2022
- Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources (Abdelnabi et al., 2022) [Paper] [Dataset] CVPR 2022
- MMM: An Emotion and Novelty-aware Approach for Multilingual Multimodal Misinformation Detection (Gupta et al., 2022)