Self-Rewarding Language Model
Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI
They really took the title of the DPO paper to heart.
This repository also contains an implementation of SPIN, which Teknium of Nous Research has expressed optimism about.
Acknowledgements
- Thank you to the A16Z Open Source AI Grant Program and 🤗 Huggingface for the generous sponsorships, as well as my other sponsors, for affording me the independence to open source current artificial intelligence research
Install
$ pip install self-rewarding-lm-pytorch
Usage
import torch
from torch import Tensor
from self_rewarding_lm_pytorch import (
    SelfRewardingTrainer,
    create_mock_dataset
)
from x_transformers import TransformerWrapper, Decoder
transformer = TransformerWrapper(
    num_tokens = 256,
    max_seq_len = 1024,
    attn_layers = Decoder(
        dim = 512,
        depth = 1,
        heads = 8
    )
)
sft_dataset = create_mock_dataset(100, lambda: (torch.randint(0, 256, (256,)), torch.tensor(1)))
prompt_dataset = create_mock_dataset(100, lambda: 'mock prompt')
def decode_tokens(tokens: Tensor) -> str:
    decode_token = lambda token: str(chr(max(32, token)))
    return ''.join(list(map(decode_token, tokens)))

def encode_str(seq_str: str) -> Tensor:
    return Tensor(list(map(ord, seq_str)))
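# hypothetical sanity check, not part of the library: the character-level
# tokenizer above should round-trip plain ASCII strings
assert decode_tokens(encode_str('hello').long()) == 'hello'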
trainer = SelfRewardingTrainer(
    transformer,
    finetune_configs = dict(
        train_sft_dataset = sft_dataset,
        self_reward_prompt_dataset = prompt_dataset,
        dpo_num_train_steps = 1000
    ),
    tokenizer_decode = decode_tokens,
    tokenizer_encode = encode_str,
    accelerate_kwargs = dict(
        cpu = True
    )
)
trainer(overwrite_checkpoints = True)
# checkpoints after each finetuning stage will be saved to ./checkpoints
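Once a run finishes, the saved checkpoints can be loaded back into the transformer for inspection or further training. The sketch below is only a guess at the layout, assuming each file under ./checkpoints is a plain state_dict; inspect the folder after a run to see the actual file names and format.

import torch

# assumption: the checkpoint file is a plain state_dict; the filename below is illustrative
state_dict = torch.load('./checkpoints/checkpoint.pt', map_location = 'cpu')
transformer.load_state_dict(state_dict)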
SPIN can be trained on its own as follows - it can also be added to the fine-tuning pipeline, as shown in the final example of the readme.
import torch
from self_rewarding_lm_pytorch import (
    SPINTrainer,
    create_mock_dataset
)
from x_transformers import TransformerWrapper, Decoder
transformer = TransformerWrapper(
    num_tokens = 256,
    max_seq_len = 1024,
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8
    )
)
sft_dataset = create_mock_dataset(100, lambda: (torch.randint(0, 256, (256,)), torch.tensor(1)))
spin_trainer = SPINTrainer(
    transformer,
    max_seq_len = 16,
    train_sft_dataset = sft_dataset,
    checkpoint_every = 100,
    spin_kwargs = dict(
        λ = 0.1,
    ),
)
spin_trainer()
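For intuition, SPIN optimizes a DPO-style logistic loss in which the ground-truth SFT responses play the role of the preferred completions and the model's own generations play the role of the rejected ones, with λ controlling the regularization strength. Below is a minimal sketch of that objective over sequence-level log-probabilities; the function and argument names are illustrative, not the library's internals.

import torch
import torch.nn.functional as F

def spin_loss(
    policy_real_logprob,       # log-prob of the real SFT response under the policy being trained
    policy_generated_logprob,  # log-prob of the model's own generated response under the policy
    ref_real_logprob,          # same two quantities under the frozen reference copy of the model
    ref_generated_logprob,
    λ = 0.1
):
    # prefer the real data over the model's own generations, measured as
    # log-ratios against the frozen reference model (logistic, DPO-style loss)
    real_logratio = policy_real_logprob - ref_real_logprob
    generated_logratio = policy_generated_logprob - ref_generated_logprob
    return -F.logsigmoid(λ * (real_logratio - generated_logratio)).mean()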
Say you want to experiment with your own reward prompt (instead of LLM-as-Judge). First you need to import RewardConfig, then pass it to the trainer as reward_prompt_config.
# first import
from self_rewarding_lm_pytorch import RewardConfig

# then say you want to try asking the transformer nicely
# reward_regex_template is the string that will be looked for in the LLM response, for parsing out the reward, where {{ reward }} is defined as a number
trainer = SelfRewardingTrainer(
    transformer,
    ...,
    self_reward_prompt_config = RewardConfig(
        prompt_template = """
        Please kindly rate the following user prompt and response
        User: {{ prompt }}
        Response: {{ response }}
        Format your rating as follows:
        Rating: <rating as an integer from 0 to 10>
        """,
        reward_regex_template = """
        Rating: {{ reward }}
        """
    )
)
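To make the parsing step concrete, the sketch below shows one way a reward_regex_template of this shape can be applied to a judge response: the {{ reward }} placeholder is swapped for a numeric capture group and the result is matched against the LLM's output. This is only an illustration of the mechanism, not the library's internal parser.

import re

# illustrative only - not the library's internal implementation
reward_regex_template = "Rating: {{ reward }}"
reward_regex = reward_regex_template.replace('{{ reward }}', r'(\d+(?:\.\d+)?)')

llm_response = "Rating: 7"
match = re.search(reward_regex, llm_response)
reward = float(match.group(1)) if match is not None else None

assert reward == 7.0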
Finally, if you would like to experiment with arbitrary orders of fine-tuning, you also have that flexibility, by passing in FinetuneConfig instances as a list to finetune_configs.
For example, say you want to carry out research on interleaving SPIN, external rewarding, and self-rewarding.
This idea originated from Teknium in a private Discord channel.
# import the configs
from self_rewarding_lm_pytorch import (
    SFTConfig,
    SelfRewardDPOConfig,
    ExternalRewardDPOConfig,
    SelfPlayConfig,
)
trainer = SelfRewardingTrainer(
    model,
    finetune_configs = [
        SFTConfig(...),
        SelfPlayConfig(...),
        ExternalRewardDPOConfig(...),
        SelfRewardDPOConfig(...),
        SelfPlayConfig(...),
        SelfRewardDPOConfig(...)
    ],
    ...
)
trainer()
# checkpoints after each finetuning stage will be saved to ./checkpoints
Todo
- generalize the sampling so that it can progress at different positions in the batch, fix all sampling to be batched. also allow for left-padded sequences, in case some people have transformers with relative positions that allow for that
- handle eos
- show an example of using your own reward prompt instead of the default LLM-as-Judge
- allow for different strategies for sampling the pairs
- early stopper
  - handle the break signal on the main process once all are done
  - accept an eval module, which could be validation loss or something more sophisticated; it returns a scalar tensor or a single int / float
- any order of sft, spin, self-rewarding dpo, dpo with external reward model
- allow a validation function on the rewards (say the reward must be an integer, a float, within some range, etc.)
- figure out how best to handle different kv cache implementations; go without for now
- environment flag that auto-clears all checkpoint folders
Citations
@misc{yuan2024selfrewarding,
title = {Self-Rewarding Language Models},
author = {Weizhe Yuan and Richard Yuanzhe Pang and Kyunghyun Cho and Sainbayar Sukhbaatar and Jing Xu and Jason Weston},
year = {2024},
eprint = {2401.10020},
archivePrefix = {arXiv},
primaryClass = {cs.CL}
}
@article{Chen2024SelfPlayFC,
title = {Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author = {Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
journal = {ArXiv},
year = {2024},
volume = {abs/2401.01335},
url = {https://api.semanticscholar.org/CorpusID:266725672}
}
@article{Rafailov2023DirectPO,
title = {Direct Preference Optimization: Your Language Model is Secretly a Reward Model},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Stefano Ermon and Christopher D. Manning and Chelsea Finn},
journal = {ArXiv},
year = {2023},
volume = {abs/2305.18290},
url = {https://api.semanticscholar.org/CorpusID:258959321}
}
@inproceedings{Guo2024DirectLM,
title = {Direct Language Model Alignment from Online AI Feedback},
author = {Shangmin Guo and Biao Zhang and Tianlin Liu and Tianqi Liu and Misha Khalman and Felipe Llinares and Alexandre Rame and Thomas Mesnard and Yao Zhao and Bilal Piot and Johan Ferret and Mathieu Blondel},
year = {2024},
url = {https://api.semanticscholar.org/CorpusID:267522951}
}