Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval
Updated Jun 24, 2025 - Python
This repository provides scripts and workflows for translating fact-checking datasets and automating claim classification using large language models (LLMs).
Code associated with the NAACL 2025 paper "COVE: COntext and VEracity prediction for out-of-context images"
Code associated with the preprint: "M4FC: a Multimodal, Multilingual, Multicultural, Multitask real-world Fact-Checking Dataset"
debunkr.org Dashboard is a browser extension that helps you analyze suspicious content on the web using AI-powered analysis. Simply highlight text on any website, right-click, and let our egalitarian AI analyze it for bias, manipulation, and power structures.
Tathya (तथ्य, "truth") is an Agentic fact-checking system that verifies claims using multiple sources including Google Search, DuckDuckGo, Wikidata, and news APIs. It provides structured analysis with confidence scores, detailed explanations, and transparent source attribution through a modern Streamlit interface and FastAPI backend.
🔍 ABCheckers 💬 is a data-driven project that analyzes Twitter discourse to uncover misinformation around 🇵🇭 inflation and the weakening peso, empowering users with contextual insights.
OpenSiteTrust is an open, explainable, and reusable website scoring ecosystem
Media Literacy System powered by AI - Analyze news for bias and manipulation.
An advanced AI-powered fake news detection system that verifies text, images, and social media posts using Gemini AI, FastAPI, and Next.js. Includes a modern web interface, a lightweight Streamlit app, and a Chrome extension for real-time fake content detection. Built to combat misinformation with explainable AI results and contextual source links.
This project implements a complete NLP pipeline for Persian tweets to classify topics and detect fake news. Using a Random Forest classifier, it compares tweet content with trusted news sources, achieving 70% accuracy in fake news detection.
Adventure Guardian AI is a unified safety intelligence system designed to protect adventure travellers in India. It verifies trek information, analyzes health risks, and detects fraud using AI-powered vision, geodata, weather intelligence, and pattern analysis. By combining truth, health, and fraud assessments, it generates a single Verified Trek S
A roberta-base classifier fine-tuned on the LIAR dataset. It accepts multiple input types (text, URLs, and PDFs) and outputs a prediction with a confidence score. It also leverages google/flan-t5-base to generate explanations and uses an agentic AI with LangGraph to orchestrate agents for planning, retrieval, execution, fallback, and reasoning.
Fact-checking Reddit posts with machine learning: comparing traditional and transformer-based approaches
A React + Vite + Tailwind CSS web app that verifies text for potential misinformation in real time using Gemini AI. Delivers a minimal, responsive UI with clear verdicts, confidence scores, and category tags. Includes a dashboard-ready structure and components for insights and community upvoting.
Node.js + Express API that powers misinformation verification by integrating Gemini AI and MongoDB. Exposes endpoints for verification, category summaries, upvoting, and health checks, designed for low-latency responses. Persists flagged content with confidence and metadata for analytics and auditability.
A transparent, agentic system for multimodal misinformation detection. Verifies text, image, and video authenticity using LLM & VLM agents with explainable reasoning.
Imagine Hashing embeds cryptographic hashes into images using steganography and SHA256 to ensure authenticity, integrity, and resilience against tampering or manipulation.
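The hash-embedding idea can be sketched in a few lines of Python. This is a simplified illustration, not Imagine Hashing's actual scheme: it assumes raw pixel bytes, hashes everything after the first 256 bytes, and hides the 256-bit SHA-256 digest in the least-significant bits of those first 256 bytes so a verifier can recompute and compare.

```python
import hashlib

def embed_hash(pixels: bytearray) -> bytearray:
    """Hide the SHA-256 digest of the image content in pixel LSBs.

    Hypothetical layout: bytes 256+ are the hashed content, and the
    LSBs of bytes 0..255 carry the 256-bit digest.
    """
    digest = hashlib.sha256(bytes(pixels[256:])).digest()  # 32 bytes = 256 bits
    out = bytearray(pixels)
    for i in range(256):
        bit = (digest[i // 8] >> (7 - i % 8)) & 1
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return out

def verify(pixels: bytearray) -> bool:
    """Recompute the digest and compare it with the embedded bits."""
    digest = hashlib.sha256(bytes(pixels[256:])).digest()
    return all(
        (pixels[i] & 1) == ((digest[i // 8] >> (7 - i % 8)) & 1)
        for i in range(256)
    )
```

Any change to the hashed region flips the recomputed digest, so `verify` fails on tampered data; a real system would additionally survive re-encoding, which plain LSB embedding does not.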
A machine learning classifier using Multinomial Naive Bayes to detect fake news articles with 95%+ accuracy through NLP and TF-IDF text vectorization.
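The pipeline this project describes (TF-IDF vectorization into Multinomial Naive Bayes) can be sketched with scikit-learn. The four training articles and labels below are hypothetical stand-ins for a real labeled corpus, and the reported 95%+ accuracy is the project's own figure, not something this toy example reproduces.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical mini-corpus; a real run would load a labeled dataset.
train_texts = [
    "scientists confirm vaccine passed clinical trials",
    "official report details quarterly inflation figures",
    "shocking miracle cure doctors don't want you to know",
    "celebrity secretly replaced by clone claims insider",
]
train_labels = ["real", "real", "fake", "fake"]

# TF-IDF turns each article into a weighted term vector;
# Multinomial Naive Bayes then models term weights per class.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["miracle cure insider claims shocking secret"])[0])
```

Since every discriminative term in the query appears only in the "fake" training examples, the classifier labels it fake.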
Watermarking System | AI-Generated Media Detection: a system for detecting and flagging AI-generated images using ML and steganography. Ensures authenticity with imperceptible, resilient watermarks embedded at creation.