English | Chinese
IEPile: A Large-Scale Information Extraction Corpus
This is the official repository for IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus
Datasets | Paper | Usage | Limitations | Statement & License | Citation
Please note that our IEPile may undergo updates (we will inform you upon their release). It is recommended to utilize the most current version.
News
- [2024/05] The paper IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus is accepted by ACL 2024 main conference.
- [2024/04] We release a new bilingual (Chinese and English) schema-based information extraction model called OneKE based on Chinese-Alpaca-2-13B.
- [2024/02] We release a large-scale (0.32B tokens) high-quality bilingual (Chinese and English) Information Extraction (IE) instruction dataset named IEPile, along with two models trained on IEPile: baichuan2-13b-iepile-lora and llama2-13b-iepile-lora.
- [2023/10] We released a new bilingual (Chinese and English) topic-based Information Extraction (IE) instruction dataset named InstructIE, together with its paper.
- [2023/08] We introduced a dedicated 13B model for Information Extraction (IE), named knowlm-13b-ie.
- [2023/05] We initiated an instruction-based Information Extraction project.
1. Introduction
IEPile dataset download links: Google Drive | Hugging Face | WiseModel | ModelScope
Please be aware that the data contained in the dataset links provided above has already excluded any part related to the ACE2005 dataset. Should you require access to the unfiltered, complete dataset and have successfully obtained the necessary permissions, please do not hesitate to contact us via email at guihonghao@zju.edu.cn or zhangningyu@zju.edu.cn. We will provide the complete dataset resources for your use.
Model download links for LLaMA2-IEPile | Baichuan2-IEPile | OneKE: zjunlp/llama2-13b-iepile-lora | zjunlp/baichuan2-13b-iepile-lora | zjunlp/OneKE
We have collected and cleaned existing Information Extraction (IE) datasets, integrating a total of 26 English IE datasets and 7 Chinese IE datasets. As shown in the Figure, these datasets cover multiple domains including general, medical, financial, and others.
In this study, we adopted the proposed "schema-based batched instruction generation strategy" to create a large-scale, high-quality, bilingual (Chinese and English) IE instruction tuning dataset named IEPile, containing approximately 0.32B tokens.
Based on IEPile, we fine-tuned the Baichuan2-13B-Chat and LLaMA2-13B-Chat models using the LoRA technique. Experiments demonstrated that the fine-tuned Baichuan2-IEPile and LLaMA2-IEPile models perform remarkably well on fully supervised training sets and achieve improvements on zero-shot information extraction tasks.
Supervision Results
2. Data
2.1 Construction of IEPile
We concentrate on instruction-based IE, so the construction of the schema within the instructions is crucial, because the schema reflects the specific extraction requirements and is dynamically variable. Previous approaches to building instructions from existing IE datasets often employ a rather coarse schema processing strategy, using all schemas in the label set for instruction construction, which raises two potential issues:
- Inconsistency in the number of schemas queried per instruction between training and evaluation. For example, if the model is trained with about 20 schema queries per instruction but evaluated with 10 or 30, its performance will decrease, even if the training and evaluation schemas are similar in content.
- Inadequate differentiation among schemas in the instructions. For example, semantically similar schemas such as "layoffs", "depart", and "dismissals" may be ambiguous for LLMs to distinguish; such schemas should co-occur more frequently within the instruction.
Therefore, we introduce the following solutions: 1) Hard Negative Schema and 2) Batched Instruction Generation.
Hard Negative Schema
Assume that dataset $\mathcal{D}$ possesses a full label set $L$. For a given text $S$, the schemas present in its annotation constitute the positive schema set $Pos_L$, while the others form the negative schema set $Neg_L$. In our analysis, we found that the primary cause of model misjudgment is the semantic ambiguity of schemas. Traditional approaches simply define $Neg_L$ as $L - Pos_L$, but this overlooks a critical aspect: negative schemas that are semantically close to positive schemas deserve special attention. Inspired by the theory of contrastive learning, we construct a hard negative schema dictionary $\mathcal{K}$, where each key is a unique schema and the associated value is a collection of schemas semantically similar to it. Based on this, we define the hard negative schema set as $Hard_L = \mathcal{K}[Pos_L]$ and the remaining negative schema set as $Other_L = L - Pos_L - Hard_L$. The final $Neg_L$ consists of $Hard_L$ plus a small subset of $Other_L$. Through this strategy, we not only present semantically similar schemas more frequently within the instruction but also reduce the number of training instances without sacrificing model performance.
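The hard-negative selection above can be sketched in a few lines of Python. The function name, the `other_ratio` sampling fraction, and the toy label set are illustrative assumptions for this sketch, not code from the repository:

```python
import random

def build_negative_schemas(full_labels, positive, hard_neg_dict,
                           other_ratio=0.2, seed=0):
    """Sketch of hard-negative schema selection.

    full_labels:   the full label set L of the dataset
    positive:      Pos_L, the schemas present in the annotation of text S
    hard_neg_dict: the dictionary K mapping each schema to a list of
                   semantically similar schemas
    """
    rng = random.Random(seed)
    pos = set(positive)
    # Hard_L = K[Pos_L]: semantically similar negatives, excluding positives
    hard = {s for p in pos for s in hard_neg_dict.get(p, [])} - pos
    # Other_L = L - Pos_L - Hard_L
    other = set(full_labels) - pos - hard
    # Neg_L = Hard_L plus a small random subset of Other_L
    k = max(1, int(len(other) * other_ratio)) if other else 0
    sampled_other = rng.sample(sorted(other), k) if k else []
    return sorted(hard) + sampled_other

labels = ["layoffs", "depart", "dismissals", "acquire", "found", "marry"]
hard_dict = {"layoffs": ["depart", "dismissals"]}
neg = build_negative_schemas(labels, ["layoffs"], hard_dict)
# "depart" and "dismissals" are always included as hard negatives
```

The key design point is that $Hard_L$ is always included in full, while $Other_L$ is only sampled, so semantically confusable schemas reliably co-occur with their positives in the instruction.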
Batched Instruction Generation
Subsequently, we obtain the final schema set $L' = Pos_L + Neg_L$. We employ a batched instruction generation method, limiting the number of schemas queried in each instruction to $split\_num$, which ranges from 4 to 6. $L'$ is therefore divided into $|L'|/split\_num$ batches for querying, with each batch querying $split\_num$ schemas. Consequently, even if the number of schemas queried during evaluation differs from that during training, the batched mechanism distributes the queries across groups of $split\_num$ schemas, mitigating the decline in generalization performance.
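The batching step can be sketched as a simple list split; the function name is an assumption for illustration:

```python
def batch_schemas(schemas, split_num=4):
    """Split the final schema set L' into batches of at most split_num,
    so that each generated instruction queries a bounded number of schemas."""
    return [schemas[i:i + split_num] for i in range(0, len(schemas), split_num)]

batches = batch_schemas(
    ["person", "organization", "location", "else", "misc", "event"],
    split_num=4,
)
# → [['person', 'organization', 'location', 'else'], ['misc', 'event']]
```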
2.2 Data Format of IEPile
Each instance in IEPile contains four fields: `task`, `source`, `instruction`, and `output`.
Below is a data example:
```json
{
    "task": "NER",
    "source": "CoNLL2003",
    "instruction": "{\"instruction\": \"You are an expert in named entity recognition. Please extract entities that match the schema definition from the input. Return an empty list if the entity type does not exist. Please respond in the format of a JSON string.\", \"schema\": [\"person\", \"organization\", \"else\", \"location\"], \"input\": \"284 Robert Allenby ( Australia ) 69 71 71 73 , Miguel Angel Martin ( Spain ) 75 70 71 68 ( Allenby won at first play-off hole )\"}",
    "output": "{\"person\": [\"Robert Allenby\", \"Allenby\", \"Miguel Angel Martin\"], \"organization\": [], \"else\": [], \"location\": [\"Australia\", \"Spain\"]}"
}
```
This instance belongs to the `NER` task and comes from the `CoNLL2003` dataset. The schema list to be extracted is `["person", "organization", "else", "location"]`, and the text to be extracted from is "284 Robert Allenby ( Australia ) 69 71 71 73 , Miguel Angel Martin ( Spain ) 75 70 71 68 ( Allenby won at first play-off hole )". The output is `{"person": ["Robert Allenby", "Allenby", "Miguel Angel Martin"], "organization": [], "else": [], "location": ["Australia", "Spain"]}`.
Note that the order of schemas in the output is consistent with the order in the instruction.
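To make the format concrete, here is a minimal sketch of decoding such an instance in Python. The truncated `"..."` strings are placeholders for the full text shown above, and the `instance` dictionary is built inline purely for illustration:

```python
import json

# A hypothetical IEPile instance in the format shown above
instance = {
    "task": "NER",
    "source": "CoNLL2003",
    "instruction": json.dumps({
        "instruction": "You are an expert in named entity recognition. ...",
        "schema": ["person", "organization", "else", "location"],
        "input": "284 Robert Allenby ( Australia ) 69 71 71 73 ...",
    }),
    "output": json.dumps({
        "person": ["Robert Allenby", "Allenby", "Miguel Angel Martin"],
        "organization": [], "else": [],
        "location": ["Australia", "Spain"],
    }),
}

# Both fields are JSON strings and must be decoded before use
instruction = json.loads(instance["instruction"])
output = json.loads(instance["output"])
# The output keys follow the schema order in the instruction
assert list(output.keys()) == instruction["schema"]
```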
More Task Instances
```json
{
    "task": "EE",
    "source": "PHEE",
    "instruction": "{\"instruction\": \"You are an expert in event extraction. Please extract events from the input that conform to the schema definition. Return an empty list for events that do not exist, and return NAN for arguments that do not exist. If an argument has multiple values, please return a list. Respond in the format of a JSON string.\", \"schema\": [{\"event_type\": \"potential therapeutic event\", \"trigger\": true, \"arguments\": [\"Treatment.Time_elapsed\", \"Treatment.Route\", \"Treatment.Freq\", \"Treatment\", \"Subject.Race\", \"Treatment.Disorder\", \"Effect\", \"Subject.Age\", \"Combination.Drug\", \"Treatment.Duration\", \"Subject.Population\", \"Subject.Disorder\", \"Treatment.Dosage\", \"Treatment.Drug\"]}, {\"event_type\": \"adverse event\", \"trigger\": true, \"arguments\": [\"Subject.Population\", \"Subject.Age\", \"Effect\", \"Treatment.Drug\", \"Treatment.Dosage\", \"Treatment.Freq\", \"Subject.Gender\", \"Treatment.Disorder\", \"Subject\", \"Treatment\", \"Treatment.Time_elapsed\", \"Treatment.Duration\", \"Subject.Disorder\", \"Subject.Race\", \"Combination.Drug\"]}], \"input\": \"Our findings reveal that even in patients without a history of seizures, pregabalin can cause a cortical negative myoclonus.\"}",
    "output": "{\"potential therapeutic event\": [], \"adverse event\": [{\"trigger\": \"cause \", \"arguments\": {\"Subject.Population\": \"NAN\", \"Subject.Age\": \"NAN\", \"Effect\": \"cortical negative myoclonus\", \"Treatment.Drug\": \"pregabalin\", \"Treatment.Dosage\": \"NAN\", \"Treatment.Freq\": \"NAN\", \"Subject.Gender\": \"NAN\", \"Treatment.Disorder\": \"NAN\", \"Subject\": \"patients without a history of seizures\", \"Treatment\": \"pregabalin\", \"Treatment.Time_elapsed\": \"NAN\", \"Treatment.Duration\": \"NAN\", \"Subject.Disorder\": \"NAN\", \"Subject.Race\": \"NAN\", \"Combination.Drug\": \"NAN\"}}]}"
}
```

```json
{
    "task": "RE",
    "source": "NYT11",
    "instruction": "{\"instruction\": \"You are an expert in relationship extraction. Please extract relationship triples that match the schema definition from the input. Return an empty list for relationships that do not exist. Please respond in the format of a JSON string.\", \"schema\": [\"neighborhood of\", \"nationality\", \"children\", \"place of death\"], \"input\": \" In the way New Jersey students know that Thomas Edison 's laboratory is in West Orange , the people of Colma know that Wyatt Earp 's ashes are buried at Hills of Eternity , a Jewish cemetery he was n't ; his wife was , and that Joe DiMaggio is at Holy Cross Cemetery , where visitors often lean bats against his gravestone . \"}",
    "output": "{\"neighborhood of\": [], \"nationality\": [], \"children\": [], \"place of death\": [{\"subject\": \"Thomas Edison\", \"object\": \"West Orange\"}]}"
}
```
Below are the explanations for each field:
| Field | Description |
| --- | --- |
| task | The task to which the instance belongs, one of five types (`NER`, `RE`, `EE`, `EET`, `EEA`). |
| source | The dataset to which the instance belongs. |
| instruction | The instruction fed to the model, serialized into a JSON string via `json.dumps`, comprising three parts: `"instruction"`, `"schema"`, and `"input"`. |
| output | The output, a JSON string of a dictionary whose keys are schemas and whose values are the extracted content. |
In IEPile, the instruction adopts a JSON-like string structure, which is essentially a dictionary-type string composed of the following three main components:
(1) `'instruction'`: the task description, outlining the task to be performed (one of `NER`, `RE`, `EE`, `EET`, `EEA`).
(2) `'schema'`: the list of schemas to be extracted (entity types, relation types, event types).
(3) `'input'`: the text from which information is to be extracted.
The file instruction.py provides instructions for various tasks.
3. Using IEPile to Train Models
3.1 Environment
Before you begin, make sure to create an appropriate virtual environment following the instructions below:
```bash
conda create -n IEPile python=3.9   # Create a virtual environment
conda activate IEPile               # Activate the environment
pip install -r requirements.txt    # Install dependencies
```
3.2 Download Data and Models
IEPile dataset download links: Google Drive | Hugging Face

```
IEPile
├── train.json    # Training set
└── dev.json      # Validation set
```
Here are some of the models supported by