Paper List for In-context Learning
Introduction
This is a paper list (work in progress) about in-context learning (ICL).
Keywords Convention
The colored badges on each entry encode: the method abbreviation, the section in our survey, the main feature, and the conference.
Papers
Survey
- A Survey for In-context Learning. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, Zhifang Sui. [pdf], 2022.12,
Model Training for ICL
This section contains the pilot works that may contribute to the training strategies of ICL.
Pre-training
- MEND: meta demonstration distillation for efficient and effective in-context learning. Yichuan Li, Xiyao Ma, Sixing Lu, Kyumin Lee, Xiaohu Liu, Chenlei Guo. [pdf], [project], 2024.3,
- Pre-training to learn in context. Yuxian Gu, Li Dong, Furu Wei, Minlie Huang. [pdf], [project], 2023.7,
- In-context pretraining: Language modeling beyond document boundaries. Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Gergely Szilvasy, Rich James, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis. [pdf], [project], 2023.7,
Warmup
- MetaICL: Learning to Learn In Context. Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi. [pdf], [project], 2021.10,
  - NAACL 2022. A pretrained language model is tuned to do in-context learning on a large set of training tasks.
- Improving In-Context Few-Shot Learning via Self-Supervised Training. Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Veselin Stoyanov, Zornitsa Kozareva. [pdf], [project], 2022.5,
- Calibrate Before Use: Improving Few-shot Performance of Language Models. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh. [pdf], [project], 2021.2,
  - Uses a content-free "N/A" input to calibrate LMs away from common token bias.
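The calibration idea above can be sketched in a few lines — a minimal, hypothetical example that assumes we already have the model's label probabilities both for the real input and for the content-free "N/A" input:

```python
def calibrate(label_probs, content_free_probs):
    """Contextual calibration sketch: divide each label's probability by the
    probability the model assigns that label to a content-free input
    (e.g. "N/A"), then renormalize. This counteracts the model's prior
    bias toward common labels/tokens."""
    scaled = [p / q for p, q in zip(label_probs, content_free_probs)]
    total = sum(scaled)
    return [s / total for s in scaled]

# Toy example: the model is biased toward label 0 (0.7 on "N/A"),
# so a raw 50/50 prediction is really evidence for label 1.
probs = calibrate([0.5, 0.5], [0.7, 0.3])
```

After calibration, label 1 receives the higher score (0.5 / 0.3 > 0.5 / 0.7), illustrating how the bias correction flips an apparently tied prediction.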
- Symbol tuning improves in-context learning in language models. Jerry Wei, Le Hou, Andrew Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le. [pdf], [project], 2023.5,
- Fine-tune language models to approximate unbiased in-context learning. Timothy Chu, Zhao Song, Chiwun Yang. [pdf], 2023.10,
- ICL Markup: Structuring In-Context Learning using Soft-Token Tags. Marc-Etienne Brunet, Ashton Anderson, Richard Zemel. [pdf], 2023.12,
- Cross-task generalization via natural language crowdsourcing instructions. Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi. [pdf], [project], 2022.5,
- Finetuned language models are zero-shot learners. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le. [pdf], 2021.9,
- Scaling instruction-finetuned language models. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. [pdf], [project], 2022.10,
- Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, Xudong Shen. [pdf], [project], 2022.4,
Prompt Tuning for ICL
This section contains the pilot works that may contribute to the prompt selection and prompt formulation strategies of ICL.
- On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model. Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, Sungdong Kim, HyoungSeok Kim, Boseop Kim, Kyunghyun Cho, Gichang Lee, Woomyoung Park, Jung-Woo Ha, Nako Sung. [pdf], 2022.04,
  - Investigates how in-context learning performance changes as the training corpus varies, studying the effects of the source and size of the pretraining corpus on in-context learning.
- Chain of Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou. [pdf], 2022.01,
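As a concrete illustration (not taken from the paper itself), a chain-of-thought few-shot prompt simply prepends worked examples whose answers spell out intermediate reasoning steps before the final answer:

```python
def build_cot_prompt(demonstrations, question):
    """Assemble a few-shot chain-of-thought prompt. Each demonstration is a
    (question, rationale, answer) triple; the rationale is the chain of
    thought that precedes the final answer."""
    parts = []
    for q, rationale, answer in demonstrations:
        parts.append(f"Q: {q}\nA: {rationale} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")  # the model continues from here
    return "\n\n".join(parts)

demos = [(
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls does he have?",
    "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
    "11",
)]
prompt = build_cot_prompt(demos, "A cafe had 23 apples, used 20, and bought 6 more. How many?")
```

The prompt ends with an open `A:` so the model is nudged to generate its own rationale before the answer, mirroring the demonstration's format.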
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi. [pdf], 2022.05,
- Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator. Hyuhng Joon Kim, Hyunsoo Cho, Junyeob Kim, Taeuk Kim, Kang Min Yoo, Sang-goo Lee. [pdf], 2022.06,
- Iteratively Prompt Pre-trained Language Models for Chain of Thought. Boshi Wang, Xiang Deng, Huan Sun. [pdf], [project], 2022.03,
- Automatic Chain of Thought Prompting in Large Language Models. Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola. [pdf], [project], 2022.10,
- Learning To Retrieve Prompts for In-Context Learning. Ohad Rubin, Jonathan Herzig, Jonathan Berant. [pdf], [project], 2022.12,
  - NAACL 2022. Learns an example retriever via contrastive learning.
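At inference time, a learned prompt retriever of this kind scores candidate demonstrations by embedding similarity to the test input and keeps the top-k. A minimal sketch with toy, hand-made embeddings — the actual method trains the encoder contrastively, which this sketch does not cover:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve_demonstrations(query_emb, candidates, k=2):
    """Rank candidate (example, embedding) pairs by cosine similarity
    to the query embedding and return the top-k examples."""
    ranked = sorted(candidates, key=lambda c: cosine(query_emb, c[1]), reverse=True)
    return [example for example, _ in ranked[:k]]

# Hypothetical 2-d embeddings for three candidate demonstrations.
candidates = [("ex_a", [1.0, 0.0]), ("ex_b", [0.0, 1.0]), ("ex_c", [0.9, 0.1])]
top = retrieve_demonstrations([1.0, 0.1], candidates, k=2)  # → ["ex_c", "ex_a"]
```

The retrieved examples are then concatenated into the prompt as in-context demonstrations; the contrastive training objective is what makes "similar" mean "useful as a demonstration" rather than merely "textually close".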
- Finetuned Language Models Are Zero-Shot Learners. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le. [pdf], [project], 2021.09,
  - Instruction tuning: finetuning language models on a collection of tasks described via instructions.
  - Substantially improves zero-shot performance on unseen tasks.
- Active Example Selection for In-Context Learning. Yiming Zhang, Shi Feng, Chenhao Tan. [pdf], [project], 2022.11,
- Prompting GPT-3 To Be Reliable. Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, Lijuan Wang. [pdf], [project], 2022.10,
- An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels. Taylor Sorensen, Joshua Robinson, Christopher Rytting, Alexander Shaw, Kyle Rogers, Alexia Delorey, Mahmoud Khalil, Nancy Fulda, David Wingate. [pdf],