
Automated Fact-Checking Resources


Updates:

  • 2024.6: Added a section for LLM-generated text in Related Tasks. Added papers from EACL, NAACL, and AAAI 2024

Overview

This repo contains relevant resources from our survey paper A Survey on Automated Fact-Checking, published in TACL 2022, and the follow-up multimodal survey paper Multimodal Automated Fact-Checking: A Survey. In these surveys, we present a comprehensive and up-to-date overview of automated fact-checking (AFC), unifying the various components and definitions developed in previous research into a common framework. As automated fact-checking research evolves, we will provide timely updates to the surveys and this repo.

Task Definition

The figure below shows an NLP framework for automated fact-checking (AFC) over text, consisting of three stages:

  1. Claim detection to identify claims that require verification;
  2. Evidence retrieval to find sources supporting or refuting the claim;
  3. Claim verification to assess the veracity of the claim based on the retrieved evidence.

[Figure: the three-stage AFC framework]

Evidence retrieval and claim verification are sometimes tackled as a single task referred to as factual verification, while claim detection is often tackled separately. Claim verification can be decomposed into two parts that can be tackled separately or jointly: verdict prediction, where claims are assigned truthfulness labels, and justification production, where explanations for the verdicts must be produced.
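
To make the pipeline concrete, here is a minimal Python sketch of the three stages as plain functions. All names (`detect_claims`, `retrieve_evidence`, `verify_claim`) and the trivial sentence splitting and keyword-overlap retrieval are illustrative placeholders, not part of any system or dataset listed in this repo; a real pipeline would plug in a check-worthiness classifier, a BM25 or dense retriever, and an NLI-style verification model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Verdict:
    claim: str
    label: str          # e.g. "supported", "refuted", "not enough info"
    justification: str  # human-readable explanation for the verdict

def detect_claims(document: str) -> List[str]:
    """Stage 1 (claim detection): keep sentences worth verifying.
    Placeholder: every non-empty sentence is treated as a claim."""
    return [s.strip() for s in document.split(".") if s.strip()]

def retrieve_evidence(claim: str, corpus: List[str]) -> List[str]:
    """Stage 2 (evidence retrieval): return passages that may support or
    refute the claim. Placeholder: simple keyword overlap."""
    claim_words = set(claim.lower().split())
    return [p for p in corpus if claim_words & set(p.lower().split())]

def verify_claim(claim: str, evidence: List[str]) -> Verdict:
    """Stage 3 (claim verification): verdict prediction plus
    justification production. Placeholder decision rule."""
    if not evidence:
        return Verdict(claim, "not enough info", "No evidence retrieved.")
    return Verdict(claim, "supported", f"Based on {len(evidence)} retrieved passage(s).")

def fact_check(document: str, corpus: List[str]) -> List[Verdict]:
    """Run the full three-stage pipeline over an input document."""
    return [verify_claim(c, retrieve_evidence(c, corpus)) for c in detect_claims(document)]
```

The `Verdict` record keeps verdict prediction and justification production together, mirroring the decomposition described above; the two parts can equally be implemented as separate components.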

In the follow-up multimodal survey, we extend the first stage with a claim extraction step and generalise the third stage to cover tasks that fall under multimodal AFC:

[Figure: the multimodal AFC framework]

  1. Claim Detection and Extraction: multiple modalities may be needed to understand and extract a claim at this stage. Simply detecting misleading content is often not enough; the claim must be extracted before it can be fact-checked in the subsequent stages.
  2. Evidence Retrieval: as with text-based fact-checking, multimodal fact-checking relies on retrieved evidence to make judgments.
  3. Verdict Prediction and Justification Production: this stage is decomposed into three tasks, reflecting the prevalent ways in which multimodal misinformation is conveyed (see the sketch after this list):
    • Manipulation Classification: classify whether the content accompanying a claim has been manipulated, covering both misinformative claims with manipulated content and otherwise correct claims paired with manipulated content.
    • Out-of-context Classification: detect authentic, unchanged content that is presented in a different, misleading context.
    • Veracity Classification: classify the veracity of textual claims given the retrieved evidence.
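
As a rough illustration of this three-way decomposition, the sketch below defines one placeholder classifier per task over a hypothetical `MultimodalClaim` record. The names, labels, and trivial return values are assumptions made for illustration only; real systems would combine image-forensics and vision-language models with retrieved evidence.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class MultimodalClaim:
    text: str
    image_path: Optional[str] = None  # accompanying image, if any

def classify_manipulation(claim: MultimodalClaim) -> str:
    """Manipulation classification: has the accompanying content been edited?"""
    return "pristine"  # placeholder; an image-forensics model would decide

def classify_out_of_context(claim: MultimodalClaim, evidence: List[str]) -> str:
    """Out-of-context classification: is authentic content reused in a misleading context?"""
    return "in-context"  # placeholder; compare the claim's context against evidence contexts

def classify_veracity(claim: MultimodalClaim, evidence: List[str]) -> str:
    """Veracity classification: label the textual claim given retrieved evidence."""
    return "supported" if evidence else "not enough info"  # placeholder decision

def multimodal_verdict(claim: MultimodalClaim, evidence: List[str]) -> Dict[str, str]:
    """Combine the three task outputs into a single verdict record."""
    return {
        "manipulation": classify_manipulation(claim),
        "context": classify_out_of_context(claim, evidence),
        "veracity": classify_veracity(claim, evidence),
    }
```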

Datasets

Claim Detection and Extraction Dataset

  • MR2: A Benchmark for Multimodal Retrieval-Augmented Rumor Detection in Social Media (Hu et al., 2023) [Paper] [Dataset] SIGIR 2023
  • FakeSV: A Multimodal Benchmark with Rich Social Context for Fake News Detection on Short Video Platforms (Qi et al., 2023) [Paper] [Dataset] AAAI 2023
  • SciTweets - A Dataset and Annotation Framework for Detecting Scientific Online Discourse (Hafid et al., 2022) [Paper] [Dataset] CIKM 2022
  • Empowering the Fact-checkers! Automatic Identification of Claim Spans on Twitter (Sundriyal et al., 2022) [Paper] [Dataset] EMNLP 2022
  • Stanceosaurus: Classifying Stance Towards Multilingual Misinformation (Zheng et al., 2022) [Paper] [Dataset] EMNLP 2022
  • Challenges and Opportunities in Information Manipulation Detection: An Examination of Wartime Russian Media (Park et al., 2022) [Paper] Findings EMNLP 2022
  • CoVERT: A Corpus of Fact-checked Biomedical COVID-19 Tweets (Mohr et al., 2022) [Paper] [Dataset] LREC 2022
  • MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset (Nielsen et al., 2022) [Paper] [Dataset] SIGIR 2022
  • STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media (Rao et al., 2021) [Paper] [Dataset] EMNLP 2021
  • Fighting the COVID-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society (Alam et al., 2021) [Paper] [Dataset] Findings EMNLP 2021
  • Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection (Konstantinovskiy et al., 2021) [Paper] ACM Digital Threats: Research and Practice 2021
  • The CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News (Nakov et al., 2021) [Paper] [Dataset]
  • Mining Dual Emotion for Fake News Detection (Zhang et al., 2021) [Paper] [Dataset] WWW 2021
  • Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media (Barrón-Cedeño et al., 2020) [Paper] [Dataset]
  • Citation Needed: A Taxonomy and Algorithmic Assessment of Wikipedia's Verifiability (Redi et al., 2019) [Paper] [Dataset]
  • SemEval-2019 Task 7: RumourEval, Determining Rumour Veracity and Support for Rumours (Gorrell et al., 2019) [Paper] [Dataset]
  • Joint Rumour Stance and Veracity (Lillie et al., 2019) [Paper] [Dataset]
  • Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness (Atanasova et al., 2018) [Paper] [Dataset]
  • Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on Twitter (Volkova et al., 2017) [Paper] [Dataset] ACL 2017
  • A Context-Aware Approach for Detecting Worth-Checking Claims in Political Debates (Gencheva et al., 2017) [Paper] [Dataset] RANLP 2017
  • Multimodal Fusion with Recurrent Neural Networks for Rumor Detection on Microblogs (Jin et al., 2017) [Paper] ACM MM 2017
  • SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours (Derczynski et al., 2017) [Paper] [Dataset]
  • Detecting Rumors from Microblogs with Recurrent Neural Networks (Ma et al., 2016) [Paper] [Dataset] IJCAI 2016
  • Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads (Zubiaga et al., 2016) [Paper] [Dataset] PLOS One 2016
  • CREDBANK: A Large-Scale Social Media Corpus with Associated Credibility Annotations (Mitra and Gilbert, 2015) [Paper] [Dataset] ICWSM 2015
  • Detecting Check-worthy Factual Claims in Presidential Debates (Hassan et al., 2015) [Paper] CIKM 2015

Verdict Prediction Dataset

Veracity Classification Dataset

Natural Claims
  • Do Large Language Models Know about Facts? (Xu et al., 2024) [Paper] [Dataset] [Code] ICLR 2024
  • What Makes Medical Claims (Un)Verifiable? Analyzing Entity and Relation Properties for Fact Verification (Wührl et al., 2024) [Paper] [Dataset] EACL 2024
  • COVID-VTS: Fact Extraction and Verification on Short Video Platforms (Liu et al., 2023) [Paper] [Dataset] [Code] EACL 2023
  • End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models (Yao et al., 2023) [Paper] [Dataset] SIGIR 2023
  • Modeling Information Change in Science Communication with Semantically Matched Paraphrases (Wright et al., 2022) [Paper] [Dataset] [Code] EMNLP 2022
  • Generating Literal and Implied Subquestions to Fact-check Complex Claims (Chen et al., 2022) [Paper] [Dataset] EMNLP 2022
  • SciFact-Open: Towards open-domain scientific claim verification (Wadden et al., 2022) [Paper] [Dataset] EMNLP 2022
  • CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking (Hu et al., 2022) [Paper] [Dataset] NAACL 2022
  • WatClaimCheck: A new Dataset for Claim Entailment and Inference (Khan et al., 2022) [Paper] [Dataset] ACL 2022
  • Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources (Abdelnabi et al., 2022) [Paper] [Dataset] CVPR 2022
  • MMM: An Emotion and Novelty-aware Approach for Multilingual Multimodal Misinformation Detection (Gupta et al., 2022)
