LLM-Continual-Learning-Papers
Key papers on continual learning for large language models (LLMs).
- Towards Continual Knowledge Learning of Language Models
  Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo. [abs]. ICLR 2022.
- Continual Pre-Training Mitigates Forgetting in Language and Vision
  Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu. [abs]. Preprint 2022.05.
- Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora
  Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, Xiang Ren. [abs]. NAACL 2022.
- Continual Training of Language Models for Few-Shot Learning
  Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu. [abs]. EMNLP 2022.
- Continual Pre-training of Language Models
  Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, Bing Liu. [abs]. ICLR 2023.
- Progressive Prompts: Continual Learning for Language Models
  Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi. [abs]. ICLR 2023.
- A Unified Continual Learning Framework with General Parameter-Efficient Tuning
  Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, Jian Zhang. [abs]. ICCV 2023.
- Semiparametric Language Models Are Scalable Continual Learners
  Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, Houfeng Wang. [abs]. Preprint 2023.02.
- Continual Pre-Training of Large Language Models: How to (re)warm your model?
  Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene Belilovsky, Irina Rish, Timothée Lesort. [abs]. ICML 2023 Workshop.
- ConPET: Continual Parameter-Efficient Tuning for Large Language Models
  Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu, Maosong Sun, Tao Yang. [abs]. Preprint 2023.09.
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models
  Xiao Wang, Yuansen Zhang, Tianze Chen, Songyang Gao, Senjie Jin, Xianjun Yang, Zhiheng Xi, Rui Zheng, Yicheng Zou, Tao Gui, Qi Zhang, Xuanjing Huang. [abs]. Preprint 2023.10.
- A Study of Continual Learning Under Language Shift
  Evangelia Gogoulou, Timothée Lesort, Magnus Boman, Joakim Nivre. [abs]. Preprint 2023.11.
- Scalable Language Model with Generalized Continual Learning
  Anonymous (ICLR 2024 Submission 1284). [openreview]. Preprint 2023.
- Efficient Continual Pre-training for Building Domain Specific Large Language Models
  Anonymous (ICLR 2024 Submission 4091). [openreview]. Preprint 2023.