RefChecker for Fine-grained Hallucination Detection
| 🔥 News | 🤖️ Demo | 🚀 Quick Start | 💾 Benchmark | 📖 Docs |
RefChecker provides a standardized assessment framework to identify subtle hallucinations present in the outputs of large language models (LLMs).
Figure: RefChecker Framework
🌟 Highlighted Features
- Finer granularity - RefChecker breaks down the claims in the LLM's response into knowledge triplets, as opposed to paragraph-, sentence-, or sub-sentence-level claims. Checking at the level of knowledge triplets tests the truthfulness of individual facts. Importantly, this finer granularity subsumes the coarser ones and is therefore more informative and precise; one can roll up the granularity ladder to derive coarse-level metrics if needed. An illustrative example is given after this feature list.
- Wider Coverage - RefChecker differentiates three distinctive settings based on the quality and quantity of context provided for LLM’s response:
- Zero Context: the prompt is a factual question without any context (e.g., open QA).
- Noisy Context: the prompt is a question along with a list of retrieved documents (e.g., RAG).
- Accurate Context: the prompt is a question along with a single document (e.g., summarization).
- Human Evaluation - RefChecker includes 2.1k human-annotated LLM responses, covering 300 test samples each answered by 7 popular LLMs: GPT-4, GPT-3.5-Turbo, InstructGPT, Falcon (Falcon-40B-Instruct), Alpaca (Alpaca-7B), LLaMA 2 (70B-Chat) and Claude 2. We will release the data and results upon approval.
- Modular Architecture — RefChecker is a 3-stage pipeline, consisting of a claim extractor $E$, a hallucination checker $C$, and aggregation rules $\tau$. They can be invoked and configured individually from command-line. Other than the 3 core stages, there are 3 auxiliary components:
- a human labeling tool (coming soon) to label claims,
- a call to a search engine for the Zero Context setting, and
- a localization model to map each knowledge triplet back to the corresponding snippets of the reference.
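To make the triplet granularity concrete, here is a purely illustrative, hand-written decomposition of a response sentence into knowledge triplets (not actual RefChecker output):
# Response: "Marie Curie won the Nobel Prize in Physics in 1903."
# Hand-written triplets, for illustration only:
claims = [
    ["Marie Curie", "won", "Nobel Prize in Physics"],
    ["Marie Curie", "won Nobel Prize in Physics in", "1903"],
]
Each triplet is checked against the reference, and the per-triplet results can be rolled up into sentence- or response-level metrics.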
You can explore RefChecker in the following ways:
- Demo Website - Set up a website and check your responses through a web UI.
- Quick Start - Set up the environment and check your responses in a console.
- Automatic Checker - Try our automatic hallucination checker with strong performance and efficiency.
🔥 News
- [07/22/2024] Added support for jointly checking claims for better checking efficiency.
- [06/24/2024] RefChecker supports most LLMs by employing litellm and vllm.
- [05/23/2024] RefChecker paper is on Arxiv: https://arxiv.org/pdf/2405.14486
- [12/07/2023] RefChecker 0.1 release.
❤️ Citation
Please check out the paper here: https://arxiv.org/pdf/2405.14486
If you use RefChecker in your work, please cite us:
@article{hu2024refchecker,
title={RefChecker: Reference-based Fine-grained Hallucination Checker and Benchmark for Large Language Models},
author={Xiangkun Hu and Dongyu Ru and Lin Qiu and Qipeng Guo and Tianhang Zhang and Yang Xu and Yun Luo and Pengfei Liu and Yue Zhang and Zheng Zhang},
year={2024},
eprint={2405.14486},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
🤖️ Demo Website
You can first set up a demo website and then use the web UI to try RefChecker, as the animation above shows. Hallucination detection involves four steps:
- Extract Triplets: Type the text you want to check in the top-left box, then click the Next Step button on the right side. The checker extracts triplets from your text and shows them in the bottom-left area.
- Gather Reference: Add reference text in the top-right box and click the Next Step button. If you don't have reference text, leave the box empty and click the button anyway; we will retrieve references for the text to be checked using search engines.
- Fact Checking: With the text to be checked and the (retrieved) reference text, the checker performs fact checking. The checking results are shown in the bottom-left area, with ✅/❌/❓ indicating factual/hallucinatory/neutral, and an overall factuality score is given alongside.
- Localization: Click the Next Step button again and the checker performs triplet localization. Click the button on the left of each triplet to see its localization result.
🚀 Quick Start
Setup Environment
First create a Python environment using conda or virtualenv. Clone this repo and change into its root directory. Then install:
pip install -e .
python -m spacy download en_core_web_sm
Install the optional dependencies to use open-source extractors (Mistral, Mixtral) or to enable acceleration for the RepC checker:
pip install -e .[open-extractor,repcex]
Code Examples
Choose Models for the Extractor and Checker
We use litellm to invoke the LLMs. Please check its documentation for how to set up models from different LLM providers: https://docs.litellm.ai/docs/providers . Some examples are given below:
- Amazon Bedrock
Set up the following environment variables if you are not using an AWS EC2 instance.
If you are using AWS EC2, make sure your region has access to the model.
export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
export AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
export AWS_REGION_NAME=<your_aws_region_name>
import os
from refchecker import LLMExtractor, LLMChecker
# Claude 3 Sonnet from Amazon Bedrock
model = 'bedrock/anthropic.claude-3-sonnet-20240229-v1:0'
extractor = LLMExtractor(model=model, batch_size=8)
checker = LLMChecker(model=model, batch_size=8)
- OpenAI
import os
from refchecker import LLMExtractor, LLMChecker
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"
# GPT-4o from OpenAI
model = 'gpt-4o'
extractor = LLMExtractor(model=model, batch_size=8)
checker = LLMChecker(model=model, batch_size=8)
- Open source LLMs
Please use vllm to set up an API server for open-source LLMs. For example, use the following command to deploy Llama 3 8B hosted on Hugging Face:
python -m vllm.entrypoints.openai.api_server \
--model meta-llama/Meta-Llama-3-8B-Instruct \
--tensor-parallel-size 8 \
--dtype auto \
--api-key sk-123456789 \
--gpu-memory-utilization 0.9 \
--port 5000
Set up the API key:
export OPENAI_API_KEY=sk-123456789
Then we can initialize the extractor and checker with api_base:
import os
from refchecker import LLMExtractor, LLMChecker
# Note the prefix "openai/" here
model = "openai/meta-llama/Meta-Llama-3-8B-Instruct"
api_base = "http://0.0.0.0:5000/v1"
extractor = LLMExtractor(model=model, batch_size=8, api_base=api_base)
checker = LLMChecker(model=model, batch_size=8, api_base=api_base)
- Fine-tuned Mistral 7B Claim Extractor
We fine-tuned a Mistral 7B model for claim extraction. Deploy it with vllm:
python -m vllm.entrypoints.openai.api_server \
--model dongyru/Mistral-7B-Claim-Extractor \
--tensor-parallel-size 8 \
--dtype auto \
--api-key sk-123456789 \
--gpu-memory-utilization 0.9 \
--port 5000
Then we can initialize the extractor as follows:
extractor = LLMExtractor(
model="openai/dongyru/Mistral-7B-Claim-Extractor",
batch_size=8,
api_base="http://0.0.0.0:5000/v1"
)
- Non-LLM based Checkers
We also offer non-LLM checkers for efficient checking:
from refchecker import AlignScoreChecker, NLIChecker
# Details see paper: https://arxiv.org/abs/2305.16739
checker = AlignScoreChecker(device=0, batch_size=128)
# See https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
checker = NLIChecker(device=0, batch_size=128)
Run Extraction and Checking
Both the extractor and the checker take a batch of inputs:
# Batch of questions (optional)
questions = ['question 1', 'question 2']
# Batch of model responses
responses = ['response 1', 'response 2']
extraction_results = extractor.extract(
batch_responses=responses,
batch_questions=questions,
max_new_tokens=1000
)
batch_claims = [[c.content for c in res.claims] for res in extraction_results]
references = ['reference 1', 'reference 2']
batch_labels = checker.check(
batch_claims=batch_claims,
batch_references=references,
max_reference_segment_length=0
)
The extraction_results is a list of per-response results; the claims field of each result is a list of RCClaim objects defined in refchecker/base.py.
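If you want a single response-level judgment from the per-claim labels, you can aggregate them yourself. The snippet below is a minimal sketch that mirrors the strict/soft/major rules described under the Command Line Interface (it is not RefChecker's built-in aggregator, which the CLI exposes via --aggregator_name); it assumes labels is a flat list of per-claim labels ("Entailment", "Neutral", "Contradiction") for a single response:
from collections import Counter

def aggregate(labels, rule="soft"):
    # labels: per-claim labels for one response,
    # e.g. ["Entailment", "Neutral", "Contradiction"]
    counts = Counter(labels)
    if rule == "strict":
        # Any contradiction -> Contradiction; all entailed -> Entailment; otherwise Neutral
        if counts["Contradiction"] > 0:
            return "Contradiction"
        if counts["Entailment"] == len(labels):
            return "Entailment"
        return "Neutral"
    if rule == "major":
        # The category with the most votes wins
        return counts.most_common(1)[0][0]
    # "soft": the ratio of each category
    return {label: count / len(labels) for label, count in counts.items()}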
Command Line Interface
We provide a command-line interface to run RefChecker in a console:
usage: refchecker-cli [-h] --input_path INPUT_PATH --output_path OUTPUT_PATH
[--cache_dir CACHE_DIR]
[--extractor_name EXTRACTOR_NAME]
[--extractor_max_new_tokens EXTRACTOR_MAX_NEW_TOKENS]
[--claim_format {triplet, subsentence}]
[--checker_name CHECKER_NAME]
[--extractor_api_base EXTRACTOR_API_BASE]
[--checker_api_base CHECKER_API_BASE]
[--repc_classifier_name {svm,svm_ensemble,nn,nn_ensemble}]
[--retriever_name {google}]
[--aggregator_name {strict,soft,major}]
[--use_retriever]
[--batch_size_extractor BATCH_SIZE_EXTRACTOR]
[--batch_size_checker BATCH_SIZE_CHECKER]
[{extract,check,extract-check}]
positional arguments:
{extract,check,extract-check}
extract: Extract claims from provided responses.
check: Check whether the provided claims are factual.
extract-check: Extract claims and check whether they are factual.
options:
-h, --help show this help message and exit
--input_path INPUT_PATH
Input path to the json file.
--output_path OUTPUT_PATH
Output path to the result json file.
--cache_dir CACHE_DIR
Path to the cache directory. Default: ./.cache.
--extractor_name EXTRACTOR_NAME
Model used for extracting claims. Default: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
--extractor_max_new_tokens EXTRACTOR_MAX_NEW_TOKENS
Max generated tokens of the extractor, set a larger value for longer documents. Default: 500
--claim_format {triplet, subsentence}
The format of the extracted claims. Default: triplet
--checker_name CHECKER_NAME
Model used for checking whether the claims are factual. Default: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
--extractor_api_base EXTRACTOR_API_BASE
API base URL if using vllm for deploying the extractor.
--checker_api_base CHECKER_API_BASE
API base URL if using vllm for deploying the checker
--repc_classifier_name {svm,svm_ensemble,nn,nn_ensemble}
Classifier Model used for RepC checker, only valid when RepC checker is used.
Default: nn_ensemble, neural network classifier with layer ensemble.
--retriever_name {google}
Model used for retrieving reference (currently only google is supported).
Default: google.
--aggregator_name {strict,soft,major}
Aggregator used for aggregating the results from multiple triplets.
Default: soft.
* strict: If any of the triplets is Contradiction, the response is
Contradiction. If all of the triplets are Entailment, the response is
Entailment. Otherwise, the response is Neutral.
* soft: The ratio of each category is calculated.
* major: The category with the most votes is selected.
--use_retriever
Whether to use retrieval to find the reference for checking. Required
if the reference field in input data is not provided.
--serper_api_key SERPER_API_KEY
Path to the serper api key file. Required if the google retriever is
used.
--batch_size_extractor BATCH_SIZE_EXTRACTOR
Batch size for batching inference of extractor. Default: 16.
--batch_size_checker BATCH_SIZE_CHECKER
Batch size for batching inference of checker. Default: 16.
To extract claim triplets from LLM-generated responses, do:
refchecker-cli extract \
--input_path {INPUT_PATH} \
--output_path {OUTPUT_PATH} \
--extractor_name {EXTRACTOR_NAME} \
--extractor_api_base {EXTRACTOR_API_BASE}
The input json file contains a list of items in the following format:
{
"response": "", # required, the response to be checked
"question": "", # optional if the question is not important (e.g., in summarization)
"reference": "", # required, the reference for checking
...
}
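For example, a minimal input file for the extract command might look like this (hypothetical content):
[
    {
        "question": "Who developed the theory of general relativity?",
        "response": "The theory of general relativity was developed by Albert Einstein.",
        "reference": "Albert Einstein published the theory of general relativity in 1915."
    }
]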
In the output json file, each item is added with a claims field containing a list of [head, relation, tail] triplets.
To check hallucinations at triplet level, do:
refchecker-cli check \
--input_path {INPUT_PATH} \
--output_path {OUTPUT_PATH} \
--checker_name {CHECKER_NAME} \
--checker_api_base {CHECKER_API_BASE} \
--aggregator_name {strict,soft,major}
The input json file contains a list of items in the following format:
{
"response": "", # required, the response to be checked
"claims": [
["head1", "relation1", "tail1"],
["head2", "relation2", "tail2"],
...
] # required, the corresponding triplets of the response
"reference": "", # optional if a retriever is used to get reference
...
}
In the output json file, each item is added with the following fields:
{
"Y": Union[str, dict], # aggregated predictions on the whole response
"ys": [
"Entailment",
"Neutral",
"Contradiction",
...
] # checker predictions on each triplet
"reference": "", # added if a retriever is used to get reference
...
}
The format of the aggregated prediction Y depends on the selected aggregator. It is a str ("Entailment", "Neutral", or "Contradiction") if the strict or major aggregator is used, and a dict containing the ratio of each category if the soft aggregator is used. We additionally include a special category "Abstain", introduced in Evaluation Metric.
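For instance, with the soft aggregator, Y might look like the following (an illustrative example; the exact keys depend on which categories appear among the claims):
"Y": {"Entailment": 0.6, "Neutral": 0.2, "Contradiction": 0.2}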
Note that the retriever is required in the zero-context setting, where no reference is provided by users. You can activate it by adding the --use_retriever flag and specifying --retriever_name. Currently we only support a Google-based retriever. Feel free to try your own retrieval system, and contributions are welcome.
To use the Google retriever and/or the OpenAI models, you should provide the corresponding API keys by specifying --serper_api_key and/or --openai_key.
Finally, you can run the whole extraction and checking pipeline with:
refchecker-cli extract-check \
--input_path {INPUT_PATH} \
--output_path {OUTPUT_PATH} \
--extractor_name {EXTRACTOR_NAME} \
--checker_name {CHECKER_NAME} \