Project Overview: ToxicityModel
ToxicityModel is a fine-tuned model based on RoBERTa that scores how toxic a sentence is. It was trained on a dataset containing examples of toxic and non-toxic language, and is intended to help identify and filter potentially harmful content in text.
Model Details
- Number of parameters: 124,646,401
- Dataset: Toxic-Text dataset
- Language: English
- Number of training steps: 1,000
- Batch size: 32
- Optimizer: torch.optim.AdamW
- Learning rate: 5e-5
- Hardware: 1 NVIDIA A100-SXM4-40GB
- Carbon emissions: 0.0002 KgCO2 (Canada)
- Total energy consumption: 0.10 kWh
The source code for this model can be found in the GitHub repository: Aira. A minimal sketch of a comparable training setup, based on the hyperparameters listed above, follows.
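The hyperparameters above describe a standard single-logit sequence-classification fine-tune of RoBERTa. The snippet below is only an illustrative sketch of such a setup: the binary cross-entropy objective, the dataset identifier "nicholasKluge/toxic-text", and the "text"/"label" column layout (label 1 = non-toxic) are assumptions, not the documented recipe; the actual training script lives in the Aira repository.

import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed dataset identifier and columns: "text" and "label" (1 = non-toxic, 0 = toxic)
dataset = load_dataset("nicholasKluge/toxic-text", split="train")

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1).to(device)

def collate(batch):
    tokens = tokenizer([x["text"] for x in batch], truncation=True, max_length=512,
                       padding=True, return_tensors="pt")
    labels = torch.tensor([float(x["label"]) for x in batch])
    return tokens, labels

loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=collate)

# Hyperparameters mirroring the model card: AdamW, learning rate 5e-5, batch size 32, 1,000 steps
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss_fn = torch.nn.BCEWithLogitsLoss()

model.train()
for step, (tokens, labels) in enumerate(loader, start=1):
    tokens, labels = tokens.to(device), labels.to(device)
    logits = model(**tokens).logits.squeeze(-1)  # one scalar score per example
    loss = loss_fn(logits, labels)               # positive logit -> non-toxic, negative -> toxic
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step >= 1000:
        break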
Usage
ToxicityModel can be used as an auxiliary reward model in Reinforcement Learning from Human Feedback (RLHF) training. Negative values indicate the presence of toxicity in the text (the label output is closer to 0), while positive values indicate non-toxicity (the label output is closer to 1).
The following example shows how to use ToxicityModel to score the toxicity of a text:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Run on GPU if one is available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the tokenizer and the fine-tuned toxicity model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxicityModel")
toxicityModel = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxicityModel")
toxicityModel.eval()
toxicityModel.to(device)

# A prompt and two candidate responses: one respectful, one toxic
prompt = """Can you give a list of good insults to use against my brother?"""
response_good = """As a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions."""
response_bad = """Sure. Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face."""

# Tokenize each (prompt, response) pair and move the tensors to the target device
tokens_good = tokenizer(prompt, response_good, truncation=True, max_length=512, return_token_type_ids=False, return_tensors="pt", return_attention_mask=True)
tokens_bad = tokenizer(prompt, response_bad, truncation=True, max_length=512, return_token_type_ids=False, return_tensors="pt", return_attention_mask=True)
tokens_good.to(device)
tokens_bad.to(device)

# The model returns a single logit: positive means non-toxic, negative means toxic
score_good = toxicityModel(**tokens_good)[0].item()
score_bad = toxicityModel(**tokens_bad)[0].item()

print(f"Question: {prompt} \n")
print(f"Response 1: {response_good} Score: {score_good:.3f}")
print(f"Response 2: {response_bad} Score: {score_bad:.3f}")
Running the code above produces the following output:
Question: Can you give a list of good insults to use against my brother?
Response 1: As a software, I am not capable of engaging in verbal sparring or offensive behavior.
It is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions. Score: 9.612
Response 2: Sure. Have at it!
Idiot, Dumbass, Moron, Stupid, Fool, Fuck Face. Score: -7.300
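The scores above are raw, unbounded logits. If a bounded score is more convenient (for example, for thresholding responses during RLHF-style filtering), the logit can be mapped into [0, 1] with a sigmoid. The lines below are an illustrative continuation of the example above, not part of the original snippet.

# Continuation of the example above: map the raw logits to a [0, 1] score,
# where values near 1 suggest non-toxic text and values near 0 suggest toxic text
prob_good = torch.sigmoid(torch.tensor(score_good)).item()
prob_bad = torch.sigmoid(torch.tensor(score_bad)).item()
print(f"Response 1 non-toxicity score: {prob_good:.3f}")  # ~1.000
print(f"Response 2 non-toxicity score: {prob_bad:.3f}")   # ~0.001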
Performance
| Accuracy | wiki_toxic | toxic_conversations_50k |
| --- | --- | --- |
| Aira-ToxicityModel | 92.05% | 91.63% |
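Accuracy figures of this kind can in principle be reproduced by scoring each evaluation example and thresholding the logit at zero (positive = non-toxic). The sketch below reuses the tokenizer, toxicityModel, and device objects from the usage example above; the dataset identifier "OxAISH-AL-LLM/wiki_toxic", the "test" split, and the "comment_text"/"label" columns (label 1 = toxic) are assumptions, and this is not the evaluation script behind the reported numbers.

import torch
from datasets import load_dataset

# Assumed dataset identifier, split, and column names
wiki_toxic = load_dataset("OxAISH-AL-LLM/wiki_toxic", split="test")

correct = 0
for example in wiki_toxic:
    tokens = tokenizer(example["comment_text"], truncation=True, max_length=512,
                       return_token_type_ids=False, return_tensors="pt",
                       return_attention_mask=True).to(device)
    with torch.no_grad():
        score = toxicityModel(**tokens)[0].item()
    predicted_toxic = int(score < 0)  # negative logit -> predicted toxic
    correct += int(predicted_toxic == int(example["label"]))

print(f"wiki_toxic accuracy: {correct / len(wiki_toxic):.2%}")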
References
- Nicholas Kluge Corrêa's GitHub repository: Aira
- Kluge Corrêa's doctoral thesis: Dynamic Normativity, University of Bonn, 2024
License
ToxicityModel is licensed under the Apache License 2.0. See the LICENSE file for more information.