Evaluation Papers for ChatGPT
News
- 2023/03/15: OpenAI released GPT-4, which can be accessed through ChatGPT's Plus service; we view it as the latest version of ChatGPT.
- 2023/04/28: We are maintaining ChatLog, a dataset that collects ChatGPT responses every day from 2023-03-05 onward. We evaluate ChatGPT's performance on 21 benchmarks across time and find that earlier evaluation results may no longer hold at later dates. Based on the collected data, we built OpenChatLog, a search engine for LLM-generated texts. Try our website (if your IP is in China).
- 2023/06/08: We propose Language-Model-as-an-Examiner, a novel benchmarking framework in which the LM serves as a knowledgeable examiner that formulates questions based on its own knowledge and evaluates responses in a reference-free manner; a minimal sketch of this examiner loop is shown after this list. Try our dataset LMExamQA and the benchmarking results here.
- 2023/06/16: We are delighted to announce the official release of KoLA, a continuously evolving world knowledge evaluation platform that encompasses a 4-layer cognitive structure and 19 tasks. Our goal is to provide unbiased evaluation results to assist in enhancing the capabilities of knowledge systems and large models. You can participate in the evaluation or provide feedback through the platform at https://kola.xlore.cn/ or via GitHub.
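The Language-Model-as-an-Examiner item above describes a reference-free protocol: an examiner LM first writes a question from its own knowledge, a candidate model answers it, and the examiner then grades the answer directly instead of comparing it to a gold reference. The sketch below is only an illustration of that loop; `ask_llm` is a hypothetical stand-in for whichever chat-completion API you use, and the prompts and 1-5 rubric are assumptions, not the exact LMExamQA setup.

```python
# Minimal sketch of a Language-Model-as-an-Examiner loop (illustrative only).
# `ask_llm(prompt)` is a hypothetical placeholder for any chat-completion API call
# that returns the model's reply as a string; prompts and the 1-5 rubric below
# are assumptions for illustration, not the exact LMExamQA configuration.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your examiner/candidate model and return its text reply."""
    raise NotImplementedError("plug in your own LLM client here")


def examine(topic: str) -> dict:
    # 1) The examiner LM formulates a question from its own knowledge of the topic.
    question = ask_llm(
        f"You are an examiner. Write one challenging, self-contained question about: {topic}"
    )

    # 2) The candidate model (the system being benchmarked) answers the question.
    answer = ask_llm(question)

    # 3) The examiner grades the answer reference-free, i.e. against its own knowledge
    #    rather than a gold answer, returning a score on an assumed 1-5 scale.
    verdict = ask_llm(
        "You are an examiner. Grade the following answer to your question on a 1-5 scale "
        "for correctness and completeness, using only your own knowledge (no reference answer).\n"
        f"Question: {question}\nAnswer: {answer}\nReply with just the number."
    )
    return {"question": question, "answer": answer, "score": verdict.strip()}
```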
Introduction
This repository stores Dataset Resources, Evaluation Papers and Detection Tools for ChatGPT.
0. Survey
- ChatGPT: A Meta-Analysis after 2.5 Months.
  Christoph Leiter, Ran Zhang, Yanran Chen, Jonas Belouadi, Daniil Larionov, Vivian Fresen, Steffen Eger. [abs], 2023.2
- Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models.
  Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Dajiang Zhu, Xiang Li, Ning Qiang, Dinggang Shen, Tianming Liu, Bao Ge. [abs], 2023.4
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond.
  Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu. [abs], 2023.4
- A Survey on Evaluation of Large Language Models.
  Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie. [abs], 2023.7
- GPTEval: A Survey on Assessments of ChatGPT and GPT-4.
  Rui Mao, Guanyi Chen, Xulang Zhang, Frank Guerin, Erik Cambria. [abs], 2023.8
1. Dataset Resource
- How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.
  Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu. [abs], [github], 2023.1
- ChatGPT: Jack of all trades, master of none.
  Jan Kocoń, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocoń, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radliński, Konrad Wojtasik, Stanisław Woźniak and Przemysław Kazienko. [abs], [github], 2023.2
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
  Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. [abs], [github], 2023.2
- Is ChatGPT A Good Translator? A Preliminary Study.
  Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Zhaopeng Tu. [abs], [github], 2023.1
- On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective.
  Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie. [abs], [github], 2023.2
- An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP).
  Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, Lakshmivihari Mareedu. [abs], [github], 2023.2
- Evaluation of ChatGPT as a Question Answering System for Answering Complex Questions.
  Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi. [abs], [github], 2023.3
- Instruction Tuning with GPT-4.
  Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao. [abs], [github], 2023.4
- medAlpaca: Finetuned Large Language Models for Medical Question Answering.
  Keno Bressem, Tianyu Han, Shan Chen, et al. [github], 2023.4
- ChatLog: Recording and Analyzing ChatGPT Across Time.
  Shangqing Tu, Chunyang Li, Jifan Yu, Xiaozhi Wang, Lei Hou, Juanzi Li. [abs], [github], 2023.4
- Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs.
  Jinyang Li, Binyuan Hui, Ge Qu, Binhua Li, Jiaxi Yang, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Chenhao Ma, Kevin C.C. Chang, Fei Huang, Reynold Cheng, Yongbin Li. [abs], [github], 2023.5
- A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets.
  Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, Jimmy Xiangji Huang. [abs], [github], 2023.5
Data statistics of these resources:
| Paper with Dataset | Task | #Examples |
| --- | --- | --- |
| How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection | QA + Dialog | 40,000 |
| ChatGPT: Jack of all trades, master of none | 25 classification / QA / reasoning tasks | 38,000 |
| Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT | Sentiment analysis / Paraphrase / NLI | 475 |
| Is ChatGPT A Good Translator? A Preliminary Study | Translation | 5,609 |
| On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective | Robustness | 2,237 |
| An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP) | Reasoning | 1,000 |
| Evaluation of ChatGPT as a Question Answering System for Answering Complex Questions | Complex QA | 194,782 |
| Instruction Tuning with GPT-4 | Instruction Following | 172,000 |
| medAlpaca: Finetuned Large Language Models for Medical Question Answering | Medical QA | 1.5 million |
| ChatLog: Recording and Analyzing ChatGPT Across Time | 21 NLU and NLG tasks | 73,730 (growing every day) |
| ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models | Essay Writing | 9,647 |
| Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs | Text-to-SQL | 12,751 |
| A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets | NLP Benchmarks | 255K |
2. Evaluation Papers
2.1 Natural Language Understanding
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
  Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. [abs], [github], 2023.2
- ChatGPT: Jack of all trades, master of none.
  Jan Kocoń, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocoń, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radliński, Konrad Wojtasik, Stanisław Woźniak and Przemysław Kazienko. [abs], [github], 2023.2
- How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks.
  Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie Zhou, Tao Gui, Qi Zhang, Xuanjing Huang. [abs], 2023.3
- Consistency Analysis of ChatGPT.
  Myeongjun Jang, Thomas Lukasiewicz. [abs], 2023.3
- Does ChatGPT resemble humans in language use?
  Zhenguang G. Cai, David A. Haslett, Xufeng Duan, Shuqi Wang, Martin J. Pickering. [abs], 2023.3
- A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models.
  Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang. [abs], 2023.3
- Can we trust the evaluation on ChatGPT?
  Rachith Aiyappa, Jisun An, Haewoon Kwak, Yong-Yeol Ahn. [abs], 2023.3
- A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability.
  Aiwei Liu, Xuming Hu, Lijie Wen, Philip S. Yu. [abs], [github], 2023.3
- ChatGPT or Grammarly? Evaluating ChatGPT on Grammatical Error Correction Benchmark.
  Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, Michael Lyu. [abs], 2023.3
- Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation.
  Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang. [abs], 2023.4
- Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study.
  Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, Rui Xia. [abs], 2023.4
- A Preliminary Evaluation of ChatGPT for Zero-shot Dialogue Understanding.
  Wenbo Pan, Qiguang Chen, Xiao Xu, Wanxiang Che, Libo Qin.