Efficient Large Language Models: A Survey
Efficient Large Language Models: A Survey [arXiv] (Version 1: 12/06/2023; Version 2: 12/23/2023; Version 3: 01/31/2024; Version 4: 05/23/2024, camera-ready version for Transactions on Machine Learning Research)
Zhongwei Wan1, Xin Wang1, Che Liu2, Samiul Alam1, Yu Zheng3, Jiachen Liu4, Zhongnan Qu5, Shen Yan6, Yi Zhu7, Quanlu Zhang8, Mosharaf Chowdhury4, Mi Zhang1
1The Ohio State University, 2Imperial College London, 3Michigan State University, 4University of Michigan, 5Amazon AWS AI, 6Google Research, 7Boson AI, 8Microsoft Research Asia
⚡News: Our survey has been officially accepted by Transactions on Machine Learning Research (TMLR) 2024. The camera-ready version is available at: [OpenReview]
@article{wan2023efficient,
title={Efficient large language models: A survey},
author={Wan, Zhongwei and Wang, Xin and Liu, Che and Alam, Samiul and Zheng, Yu and others},
journal={arXiv preprint arXiv:2312.03863},
year={2023}
}
❤️ Community Support
This repository is maintained by tuidan (wang.15980@osu.edu), SUSTechBruce (wan.512@osu.edu), samiul272 (alam.140@osu.edu), and mi-zhang (mizhang.1@osu.edu). We welcome feedback, suggestions, and contributions that can help improve this survey and repository and make them valuable resources for the entire community.
We will actively maintain this repository by incorporating new research as it emerges. If you have any suggestions regarding our taxonomy, find any missed papers, or would like to update the venue of an arXiv preprint that has since been accepted for publication, feel free to send us an email or submit a pull request using the following markdown format.
Paper Title, <ins>Conference/Journal/Preprint, Year</ins> [[pdf](link)] [[other resources](link)].
📌 What is This Survey About?
Large Language Models (LLMs) have demonstrated remarkable capabilities in many important tasks and have the potential to make a substantial impact on our society. Such capabilities, however, come with considerable resource demands, highlighting the strong need to develop effective techniques for addressing the efficiency challenges posed by LLMs. In this survey, we provide a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected efficient LLMs topics from model-centric, data-centric, and framework-centric perspectives, respectively. We hope our survey and this GitHub repository can serve as valuable resources to help researchers and practitioners gain a systematic understanding of the research developments in efficient LLMs and inspire them to contribute to this important and exciting field.
🤔 Why Efficient LLMs are Needed?
Although LLMs are leading the next wave of the AI revolution, their remarkable capabilities come at the cost of substantial resource demands. Figure 1 (left) illustrates the relationship between model performance and model training time in terms of GPU hours for the LLaMA series, where the size of each circle is proportional to the number of model parameters. As shown, although larger models achieve better performance, the GPU hours required to train them grow exponentially as model size scales up. In addition to training, inference also contributes significantly to the operational cost of LLMs. Figure 1 (right) depicts the relationship between model performance and inference throughput. Similarly, scaling up the model size enables better performance but comes at the cost of lower inference throughput (higher inference latency), presenting challenges for these models in expanding their reach to a broader customer base and diverse applications in a cost-effective way. The high resource demands of LLMs highlight the strong need to develop techniques to enhance their efficiency. As shown in Figure 1 (right), compared to LLaMA-1-33B, Mistral-7B, which uses grouped-query attention and sliding window attention to speed up inference, achieves comparable performance and much higher throughput. This superiority highlights the feasibility and significance of designing efficiency techniques for LLMs.
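To make the two mechanisms mentioned above concrete, here is a minimal PyTorch sketch of grouped-query attention combined with a causal sliding-window mask. The tensor shapes, head counts, and window size are illustrative assumptions chosen for exposition; this is not Mistral's actual implementation.

```python
# Minimal sketch of grouped-query attention with a causal sliding-window mask.
# Shapes and hyperparameters are illustrative, not Mistral's actual configuration.
import torch
import torch.nn.functional as F

def gqa_sliding_window(q, k, v, n_kv_heads, window):
    """q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)."""
    b, n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads                  # query heads sharing one K/V head
    k = k.repeat_interleave(group, dim=1)            # broadcast K/V heads to all query heads
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5      # (b, n_q_heads, seq, seq)
    # Causal sliding-window mask: token i attends only to tokens j with i-window < j <= i.
    idx = torch.arange(seq)
    mask = (idx[None, :] > idx[:, None]) | (idx[:, None] - idx[None, :] >= window)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v             # (b, n_q_heads, seq, head_dim)

# Toy usage: 8 query heads share 2 K/V heads; each token sees at most 4 recent tokens.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = gqa_sliding_window(q, k, v, n_kv_heads=2, window=4)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```

Sharing K/V heads shrinks the KV cache (and its memory bandwidth cost) by the group factor, while the sliding-window mask caps per-token attention cost at the window length instead of the full sequence length.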
📖 Table of Content
- 🤖 Model-Centric Methods
- 🔢 Data-Centric Methods
- 🧑‍💻 System-Level Efficiency Optimization and LLM Frameworks
🤖 Model-Centric Methods
Model Compression
Quantization
Post-Training Quantization
Weight-Only Quantization
- I-LLM: Efficient Integer-Only Inference for Fully-Quantized Low-Bit Large Language Models, arXiv, 2024 [Paper]
- IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact, arXiv, 2024 [Paper]
- OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models, ICLR, 2024 [Paper] [Code]
- OneBit: Towards Extremely Low-bit Large Language Models, arXiv, 2024 [Paper]
- GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers, ICLR, 2023 [Paper] [Code]
- QuIP: 2-Bit Quantization of Large Language Models With Guarantees, arXiv, 2023 [Paper] [Code]
- AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration, arXiv, 2023 [Paper] [Code]
- OWQ: Lessons Learned from Activation Outliers for Weight Quantization in Large Language Models, arXiv, 2023 [Paper] [Code]
- SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression, arXiv, 2023 [Paper] [Code]
- FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs, NeurIPS-ENLSP, 2023 [Paper]
- LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale, NeurIPS, 2022 [Paper] [Code]
- Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning, NeurIPS, 2022 [Paper] [Code]
- QuantEase: Optimization-based Quantization for Language Models, arXiv, 2023 [Paper] [Code]
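As a rough, common-denominator illustration of the weight-only post-training quantization theme shared by the papers above, the sketch below applies plain round-to-nearest 4-bit quantization with per-output-channel absmax scales and dequantizes the weights on the fly inside the matrix multiply. The shapes and bit-width are assumptions for exposition; methods such as GPTQ, AWQ, and SpQR add calibration-based error compensation or outlier handling on top of this naive baseline rather than using it as-is.

```python
# Generic round-to-nearest weight-only quantization sketch (per-output-channel, symmetric).
# Illustrative baseline only: GPTQ, AWQ, OWQ, SpQR, ... refine this with calibration data.
import torch

def quantize_weight_int4(w):
    """w: (out_features, in_features) float weight. Returns 4-bit codes (stored in int8) + scales."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0           # one scale per output channel
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale

def dequant_matmul(x, q, scale):
    """x: (batch, in_features) activations stay in floating point (weight-only quantization)."""
    w_hat = q.to(x.dtype) * scale                              # dequantize weights on the fly
    return x @ w_hat.t()

# Toy usage: compare the quantized matmul against the full-precision reference.
w = torch.randn(256, 128)
x = torch.randn(4, 128)
q, scale = quantize_weight_int4(w)
err = (dequant_matmul(x, q, scale) - x @ w.t()).abs().mean()
print(f"mean absolute error of int4 weight-only matmul: {err:.4f}")
```

Storing weights in 4 bits cuts model memory roughly 4x versus FP16, which is why weight-only schemes are popular for memory-bound LLM inference even though activations remain in floating point.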
Weight-Activation Co-Quantization
- OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models, ICLR, 2024 [Paper] [Code]
- Intriguing Properties of Quantization at Scale, NeurIPS, 2023 [Paper]
- ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation, arXiv, 2023 [Paper] [Code]
- ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats, NeurIPS-ENLSP, 2023 [Paper] [Code]
- OliVe: Accelerating Large Language Models via Hardware-friendly Outlier-Victim Pair Quantization, ISCA, 2023 [Paper] [Code]
- RPTQ: Reorder-based Post-training Quantization for Large Language Models, arXiv, 2023 [Paper] [Code]
- Outlier Suppression+: Accurate Quantization of Large Language Models by Equivalent and Optimal Shifting and Scaling, arXiv, 2023 [Paper] [Code]
- QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models, arXiv, 2023 [Paper]
- SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, ICML, 2023 [Paper] [Code]
- ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers, NeurIPS, 2022 [Paper]
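To give a flavor of weight-activation co-quantization, the sketch below follows the SmoothQuant-style idea of migrating activation outliers into the weights through a per-channel smoothing factor before quantizing both sides to INT8 (simulated here with rounded floating-point values). It is a simplified illustration under assumed shapes and per-tensor scales, not the released implementation of any of the papers above.

```python
# Simplified SmoothQuant-style weight-activation co-quantization sketch.
# The smoothing factor s_j = max|X_j|^alpha / max|W_j|^(1-alpha) shifts activation
# outliers into the weights so both tensors become easier to quantize to INT8.
import torch

def smooth_and_quantize(x, w, alpha=0.5):
    """x: (tokens, in_features) calibration activations; w: (out_features, in_features)."""
    act_max = x.abs().amax(dim=0)                   # per-input-channel activation range
    w_max = w.abs().amax(dim=0)                     # per-input-channel weight range
    s = act_max.pow(alpha) / w_max.pow(1 - alpha)   # per-channel smoothing factor
    x_s, w_s = x / s, w * s                         # identity: X W^T == (X/s)(W*s)^T

    def q8(t):
        # Symmetric per-tensor INT8 quantization, simulated with rounded float values.
        scale = t.abs().max() / 127.0
        return torch.clamp(torch.round(t / scale), -128, 127), scale

    xq, sx = q8(x_s)
    wq, sw = q8(w_s)
    return (xq @ wq.t()) * (sx * sw)                # dequantized output

# Toy usage: channels with uneven magnitudes emulate activation outliers.
x = torch.randn(32, 128) * (torch.rand(128) * 5)
w = torch.randn(256, 128)
err = (smooth_and_quantize(x, w) - x @ w.t()).abs().mean()
print(f"mean absolute error of smoothed W8A8 matmul: {err:.4f}")
```

The key point is that the per-channel scaling is mathematically a no-op on the full-precision product, so all of its benefit comes from making the post-scaling activation and weight distributions friendlier to low-bit quantization.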
Evaluation of Post-Training Quantization
- Evaluating Quantized Large Language Models, arXiv, 2024 [Paper]