SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning)
ModelScope Community Website
📝 Introduction
SWIFT supports training (pre-training, fine-tuning, RLHF), inference, evaluation, and deployment of 300+ LLMs and 50+ MLLMs (multimodal large models). Developers can apply our framework directly to their own research and production environments to realize a complete workflow from model training and evaluation to application. In addition to supporting the lightweight training solutions provided by PEFT, we also provide a complete adapters library supporting the latest training techniques such as NEFTune, LoRA+, and LLaMA-PRO. This adapters library can be used directly in your own custom workflow without our training scripts, as sketched below.
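For a flavor of what standalone use can look like, here is a minimal sketch, assuming a `Swift.prepare_model` entry point and a `LoraConfig` exported from the `swift` package; treat the exact class and argument names as assumptions and verify them against the tuner documentation for your installed version.

```python
# A minimal sketch of using the adapters library standalone, without the
# swift training scripts. `Swift.prepare_model`, `LoraConfig`, and the
# argument names below are assumptions; verify them against the tuner
# documentation of your installed version.
from modelscope import Model
from swift import LoraConfig, Swift

model = Model.from_pretrained('qwen/Qwen-7B-Chat')

# Wrap the base model with a LoRA adapter; only the adapter weights train.
lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    target_modules=['q_proj', 'v_proj'],  # attention projections to adapt
    lora_alpha=32,
    lora_dropout=0.05)
model = Swift.prepare_model(model, lora_config)

# ... run your own training loop, then persist only the adapter weights:
model.save_pretrained('output/lora-adapter')
```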
To help users unfamiliar with deep learning, we provide a Gradio web-ui for controlling training and inference, along with accompanying deep-learning courses and best practices for beginners. The SWIFT web-ui is available on both Hugging Face Space and ModelScope Studio, please feel free to try it!
SWIFT has rich documentation for users; please feel free to check our documentation website.
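As a sketch of the end-to-end workflow described above, driven from Python rather than the CLI: the snippet below assumes the `sft_main`/`infer_main` entry points and their argument classes in `swift.llm`; confirm the exact names in the documentation for your installed version.

```python
# A hedged sketch of a train -> infer workflow from Python. The entry points
# and argument classes (SftArguments, sft_main, InferArguments, infer_main)
# are assumptions; check swift.llm in your installed version.
from swift.llm import InferArguments, SftArguments, infer_main, sft_main

# Fine-tune a chat model with LoRA on a built-in dataset.
result = sft_main(SftArguments(
    model_type='qwen2-7b-instruct',
    dataset=['alpaca-en'],
    sft_type='lora'))

# Reload the best checkpoint and run interactive inference on it.
infer_main(InferArguments(ckpt_dir=result['best_model_checkpoint']))
```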
☎ Groups
You can contact and communicate with us by joining our group:
🎉 News
- 2024.08.06: Support for minicpm-v-v2_6-chat is available. You can use `swift infer --model_type minicpm-v-v2_6-chat` for an inference experience. Best practices can be found here.
- 2024.08.06: Support for the internlm2.5 series in 1.8b and 20b sizes. Experience it using `swift infer --model_type internlm2_5-1_8b-chat`.
- 🔥2024.08.05: Support evaluation for multi-modal models! Use the same command with the new datasets.
- 🔥2024.08.02: Support FourierFT. Use `--sft_type fourierft` to begin; check the parameter documentation here.
- 🔥2024.07.29: Support using lmdeploy for inference acceleration of LLM and VLM models. Documentation can be found here.
- 🔥2024.07.24: Support the DPO/ORPO/SimPO/CPO alignment algorithms for vision MLLMs; training scripts can be found in the document. Support for the RLAIF-V dataset.
- 🔥2024.07.24: Support using Megatron for CPT and SFT on the Qwen2 series. You can refer to the Megatron training documentation.
- 🔥2024.07.24: Support for the llama3.1 series models, including 8b, 70b, and 405b. Support for openbuddy-llama3_1-8b-chat.
- 2024.07.20: Support mistral-nemo series models. Use `--model_type mistral-nemo-base-2407` and `--model_type mistral-nemo-instruct-2407` to begin.
- 2024.07.19: Support Q-GaLore. This algorithm can reduce training memory cost by 60% (qwen-7b-chat, full-parameter, 80G -> 35G). Use `swift sft --model_type xxx --use_galore true --galore_quantization true` to begin!
- 2024.07.17: Support the newly released InternVL2 models; the `model_type` values are internvl2-1b, internvl2-40b, and internvl2-llama3-76b. For best practices, refer to here.
- 2024.07.17: Support the training and inference of NuminaMath-7B-TIR. Use it with model_type `numina-math-7b`.
- 🔥2024.07.16: Support exporting for ollama and bitsandbytes. Use `swift export --model_type xxx --to_ollama true` or `swift export --model_type xxx --quant_method bnb --quant_bits 4`.
- 2024.07.08: Support cogvlm2-video-13b-chat. You can check the best practice here.
- 2024.07.08: Support internlm-xcomposer2_5-7b-chat. You can check the best practice here.
- 🔥2024.07.06: Support for the llava-next-video series models: llava-next-video-7b-instruct, llava-next-video-7b-32k-instruct, llava-next-video-7b-dpo-instruct, llava-next-video-34b-instruct. You can refer to llava-video best practice for more information.
- 🔥2024.07.06: Support InternVL2 series: internvl2-2b, internvl2-4b, internvl2-8b, internvl2-26b.
- 2024.07.06: Support codegeex4-9b-chat.
- 2024.07.04: Support internlm2_5-7b series: internlm2_5-7b, internlm2_5-7b-chat, internlm2_5-7b-chat-1m.
- 2024.07.02: Support for using vLLM for accelerating inference and deployment of multimodal large models such as the llava series and phi3-vision models. You can refer to the Multimodal & vLLM Inference Acceleration Documentation for more information.
- 2024.07.02: Support for `llava1_6-vicuna-7b-instruct`, `llava1_6-vicuna-13b-instruct` and other llava-hf models. For best practices, refer to here.
- 🔥2024.06.29: Support eval-scope & open-compass for evaluation! We now support over 50 eval datasets such as `BoolQ, ocnli, humaneval, math, ceval, mmlu, gsm8k, ARC_e`; please check our Eval Doc to begin! In the next sprint we will support multi-modal and agent evaluation, remember to follow us : )
More
- 🔥2024.06.28: Support for the Florence series of models! See the document.
- 🔥2024.06.28: Support for Gemma2 series models: gemma2-9b, gemma2-9b-instruct, gemma2-27b, gemma2-27b-instruct.
- 🔥2024.06.18: Support DeepSeek-Coder-v2 series models! Use model_type `deepseek-coder-v2-instruct` and `deepseek-coder-v2-lite-instruct` to begin.
- 🔥2024.06.16: Support KTO and CPO training! See the document to start training!
- 2024.06.11: Support tool-calling agent deployment that conforms to the OpenAI interface. You can refer to the Agent deployment best practice.
- 🔥2024.06.07: Support the Qwen2 series LLMs, including Base and Instruct models of 0.5B, 1.5B, 7B, and 72B, as well as the corresponding quantized versions gptq-int4, gptq-int8, and awq-int4. The best practice for self-cognition fine-tuning, inference, and deployment of Qwen2-72B-Instruct using dual 80GiB A100 cards can be found here.
- 🔥2024.06.05: Support the glm4 series LLMs and the glm4v-9b-chat MLLM. You can refer to the glm4v best practice.
- 🔥2024.06.01: Support SimPO training! See the document to start training!
- 🔥2024.06.01: Support for deploying large multimodal models, please refer to the Multimodal Deployment Documentation for more information.
- 2024.05.31: Support the Mini-InternVL models. Use model_type `mini-internvl-chat-2b-v1_5` and `mini-internvl-chat-4b-v1_5` to train.
- 2024.05.24: Support the Phi3-vision model. Use model_type `phi3-vision-128k-instruct` to train.
- 2024.05.22: Support DeepSeek-V2-Lite series models; the model_type values are `deepseek-v2-lite` and `deepseek-v2-lite-chat`.
- 2024.05.22: Support the TeleChat-12B-v2 model and its quantized version; the model_type values are `telechat-12b-v2` and `telechat-12b-v2-gptq-int4`.
- 🔥2024.05.21: Inference and fine-tuning support for MiniCPM-Llama3-V-2_5 are now available. For more details, please refer to minicpm-v-2.5 Best Practice.
- 🔥2024.05.20: Support for inferencing and fine-tuning cogvlm2-llama3-chinese-chat-19B, cogvlm2-llama3-chat-19B. you can refer to cogvlm2 Best Practice.
- 🔥2024.05.17: Support peft==0.11.0, along with 3 new tuners: `BOFT`, `Vera` and `Pissa`. Use `--sft_type boft/vera` to use BOFT or Vera, and use `--init_lora_weights pissa` with `--sft_type lora` to use Pissa.
- 2024.05.16: Support Llava-Next (Stronger) series models. For best practice, you can refer to here.
- 🔥2024.05.13: Support Yi-1.5 series models. Use `--model_type yi-1_5-9b-chat` to begin!
- 2024.05.11: Support qlora training and quantized inference using hqq and eetq. For more information, see the LLM Quantization Documentation.
- 2024.05.10: Support splitting a sequence across multiple GPUs to reduce memory usage. Enable this feature with `pip install .[seq_parallel]`, then add `--sequence_parallel_size n` to your DDP script to begin!
- 2024.05.08: Support the DeepSeek-V2-Chat model; you can refer to this script. Support the InternVL-Chat-V1.5-Int8 model; for best practice, you can refer to here.
- 🔥2024.05.07: Support ORPO training! See the document to start training!
- 2024.05.07: Support the Llava-Llama3 model from xtuner; the model_type is `llava-llama-3-8b-v1_1`.
- 2024.04.29: Support inference and fine-tuning of the InternVL-Chat-V1.5 model. For best practice, you can refer to here.
- 🔥2024.04.26: Support LISA and unsloth training! Specify `--lisa_activated_layers=2` to use LISA (reducing memory cost to 30 percent!), or specify `--tuner_backend unsloth` to use unsloth to train a huge model (full or LoRA) with less memory (30 percent or less) and faster speed (5x)!
- 🔥2024.04.26: Support fine-tuning and inference of the Qwen1.5-110B and Qwen1.5-110B-Chat models; use this script to start training!
- 2024.04.24: Support inference and fine-tuning of Phi3 series models, including phi3-4b-4k-instruct and phi3-4b-128k-instruct.
- 2024.04.22: Support for inference, fine-tuning, and deployment of chinese-llama-alpaca-2 series models. This