# Awesome Adapter Resources
This repository collects important tools and papers related to adapter methods for recent large pre-trained neural networks.
Adapters (aka Parameter-Efficient Transfer Learning (PETL) or Parameter-Efficient Fine-Tuning (PEFT) methods) include various parameter-efficient approaches to adapting large pre-trained models to new tasks.
## Content
- Why Adapters?
- Frameworks and Tools
- Surveys
- Natural Language Processing
- Computer Vision
- Audio Processing
- Multi-Modal
- Contributing
## Why Adapters?
Large pre-trained (Transformer-based) models have become the foundation of various ML domains in recent years. While the most prevalent method of adapting these models to new tasks involves costly full fine-tuning of all model parameters, a series of parameter-efficient and lightweight alternatives, adapters, have recently been established.
Using adapters provides multiple benefits. They are ...
- ... parameter-efficient, i.e. they only update a very small subset (e.g. under 1%) of a model's parameters (see the sketch below for a concrete example).
- ... modular, i.e. the updated parameters can be extracted and shared independently of the base model parameters.
- ... easy to share and easy to deploy at scale due to their small file sizes, e.g. requiring only ~3MB per task instead of ~500MB for sharing a full model.
- ... often composable, i.e. can be stacked, fused or mixed to leverage their combined knowledge.
- ... often on-par in terms of performance with full fine-tuning.
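To make the parameter-efficiency point above concrete, below is a minimal, illustrative sketch of a bottleneck adapter layer in PyTorch (the design popularized by Houlsby et al., listed under Methods below). The class name and dimensions are placeholders, not taken from any of the libraries in this list.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, then add a residual.

    With hidden_size=768 and bottleneck=48 this adds roughly
    2 * 768 * 48 + 768 + 48 ≈ 74k parameters per layer, a tiny
    fraction of the frozen base model.
    """

    def __init__(self, hidden_size: int = 768, bottleneck: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Only the adapter weights receive gradients; the surrounding
        # Transformer layers stay frozen during training.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```

With one such layer per block of a BERT-base-sized model, the trainable parameters stay well under 1% of the full model, which is where per-task artifacts in the low-megabyte range come from.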
## Frameworks and Tools
- AdapterHub: A Framework for Adapting Transformers
  Conference on Empirical Methods in Natural Language Processing
  Jonas Pfeiffer, Andreas Rücklé, Clifton A. Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych (2020)
  TLDR: AdapterHub is proposed, a framework that allows dynamic “stitching-in” of pre-trained adapters for different tasks and languages and enables scalable and easy sharing of task-specific models, particularly in low-resource scenarios.
- Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
  Conference on Empirical Methods in Natural Language Processing
  Clifton A. Poth, Hannah Sterz, Indraneil Paul, Sukannya Purkayastha, Leon Arne Engländer, Timo Imhof, Ivan Vulić, Sebastian Ruder, Iryna Gurevych, Jonas Pfeiffer (2023)
  TLDR: Adapters is introduced, an open-source library that unifies parameter-efficient and modular transfer learning in large language models and allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups.
- OpenDelta
- PEFT: State-of-the-art Parameter-Efficient Fine-Tuning
- LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
  Conference on Empirical Methods in Natural Language Processing
  Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-Peng Lim, R. Lee, Lidong Bing, Soujanya Poria (2023)
  TLDR: LLM-Adapters is presented, an easy-to-use framework that integrates various adapters into LLMs and can execute adapter-based PEFT methods for different tasks, demonstrating that adapter-based PEFT in smaller-scale LLMs with few extra trainable parameters yields comparable, and in some cases superior, performance to powerful LLMs in zero-shot inference on both reasoning tasks.
- Alpaca-LoRA
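These libraries differ in scope, but most expose a similar workflow: load a pre-trained model, attach a small set of trainable modules, and freeze everything else. As a rough illustration, a LoRA setup with the Hugging Face PEFT library might look like the sketch below; the checkpoint name, target modules, and hyperparameters are placeholders, not recommendations.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint; any causal LM from the Hugging Face Hub works similarly.
model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    fan_in_fan_out=True,        # GPT-2 uses Conv1D instead of nn.Linear
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts

# The wrapped model trains with a standard optimizer or Trainer; only the
# LoRA matrices receive gradients and can be saved and shared separately.
```

The Adapters library and OpenDelta follow the same overall pattern of adding and activating small trainable modules on top of a frozen backbone, each with its own configuration API.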
## Surveys
- Modular Deep Learning
  arXiv.org
  Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, E. Ponti (2023)
  TLDR: A survey of modular architectures is offered, providing a unified view over several threads of research that evolved independently in the scientific literature, and various additional purposes of modularity are explored, including scaling language models, causal inference, programme induction, and planning in reinforcement learning.
- Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
  arXiv.org
  Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky (2023)
  TLDR: A taxonomy covering a broad range of methods is provided, together with a detailed method comparison that focuses on real-life efficiency and on fine-tuning multibillion-scale language models.
- PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques
  arXiv.org
  Mohammed Sabry, Anya Belz (2023)
  TLDR: A reference architecture is presented which standardises aspects shared by different PEFT techniques, while isolating differences to specific locations and interactions with the standard components, supporting not only direct comparison of different techniques and their efficiency and task performance, but also systematic exploration of reusability and composability of the different types of finetuned modules.
- Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
  arXiv.org
  Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang (2024)
  TLDR: This survey presents comprehensive studies of various PEFT algorithms, examining their performance and computational overhead, provides an overview of applications developed using different PEFT algorithms, and discusses common techniques employed to mitigate computation costs for PEFT.
## Natural Language Processing
### Methods
- Parameter-Efficient Transfer Learning for NLP
  International Conference on Machine Learning
  N. Houlsby, A. Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, S. Gelly (2019)
  TLDR: To demonstrate the adapters' effectiveness, the recently proposed BERT Transformer model is transferred to 26 diverse text classification tasks, including the GLUE benchmark, and adapters attain near state-of-the-art performance whilst adding only a few parameters per task.
- K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
  Findings
  Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou (2020)
  TLDR: K-Adapter is proposed, which keeps the original parameters of the pre-trained model fixed, supports continual knowledge infusion, and captures richer factual and commonsense knowledge than RoBERTa.
- Parameter-Efficient Transfer Learning with Diff Pruning
  Annual Meeting of the Association for Computational Linguistics
  Demi Guo, Alexander M. Rush, Yoon Kim (2020)
  TLDR: Diff pruning can match the performance of finetuned baselines on the GLUE benchmark while only modifying 0.5% of the pretrained model’s parameters per task and scales favorably in comparison to popular pruning approaches.
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
  Annual Meeting of the Association for Computational Linguistics
  Xiang Lisa Li, Percy Liang (2021)
  TLDR: Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
- The Power of Scale for Parameter-Efficient Prompt Tuning
  Conference on Empirical Methods in Natural Language Processing
  Brian Lester, Rami Al-Rfou, Noah Constant (2021)
  TLDR: This work explores “prompt tuning,” a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks, and shows that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient “prompt ensembling.”
- Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
  Neural Information Processing Systems
  Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder (2021)
  TLDR: Compacter is proposed, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work, and accomplishes this by building on top of ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers.
- LoRA: Low-Rank Adaptation of Large Language Models
  International Conference on Learning Representations
  J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Weizhu Chen (2021)
  TLDR: Low-Rank Adaptation, or LoRA, is proposed, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.
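To illustrate the idea in the LoRA entry above, here is a minimal, simplified sketch of a LoRA-augmented linear layer in PyTorch; it is not the reference implementation, and the class and argument names are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha / r) * B A x; only A and B are trained, and since B
        # starts at zero the layer initially behaves like the frozen base.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

After training, the low-rank update can be merged into the frozen weight matrix, so inference adds no extra latency.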