Project Introduction: twitter-roberta-large-2022-154m
Project Background
twitter-roberta-large-2022-154m is a RoBERTa-large model trained on 154 million tweets posted up to the end of December 2022. It was developed to better understand and analyze natural language as it is used on Twitter. A base-sized model trained on the same data is also provided as part of the same project.
The training data was collected via the Twitter Academic API as monthly samples of tweets from January 2018 through December 2022. After filtering, the corpus was reduced from 220 million tweets to 154 million; the filtering and preprocessing steps are described in detail in the TimeLMs paper.
Data Preprocessing
Before feeding text to the model, usernames and links should be replaced with placeholders: usernames become "@user" and URLs become "http". Users who want to keep verified account names can consult the list of users provided with the project. This preprocessing step helps reduce unnecessary noise during analysis and prediction.
def preprocess(text):
    # Replace usernames with '@user' and links with 'http'; other tokens are kept unchanged.
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t
            t = 'http' if t.startswith('http') else t
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
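As a quick illustration of the replacement rules above (the sample tweet here is made up for this write-up, not taken from the original project):

example = "@user1 loved this article https://example.com 😊"
print(preprocess(example))
# expected output: "@user loved this article http 😊"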
Example: Masked Language Modeling
The model can be used for fill-mask tasks through the Transformers pipeline interface. The example below shows how to run masked-token prediction and print the top candidates.
from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-large-2022-154m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i + 1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]
for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)
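By default the fill-mask pipeline returns five candidates per mask, which is why pprint is called with n=5. If more candidates are wanted, the pipeline's top_k argument can be raised at call time, for example:

candidates = fill_mask(t, top_k=10)
pprint(candidates, 10)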
Example: Tweet Embeddings
The model can also be used to compute tweet embeddings, which can then be compared with cosine similarity. The code below ranks a set of tweets by their similarity to a query tweet.
from transformers import AutoTokenizer, AutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):
    # Naive sentence embedding: mean of the token vectors from the last hidden layer.
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    return np.mean(features[0], axis=0)

MODEL = "cardiffnlp/twitter-roberta-large-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx + 1, sim, tweet))
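When many tweets are encoded at once, padding tokens would skew the simple mean used above. Below is a minimal batched variant, added here as a sketch rather than part of the original card; get_embedding_batch is a hypothetical helper that mean-pools only over non-padding positions.

import torch

def get_embedding_batch(texts):
    # Hypothetical batched variant: mean-pool over non-padding tokens using the attention mask.
    texts = [preprocess(t) for t in texts]
    encoded = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**encoded)[0]                         # (batch, seq_len, hidden)
    mask = encoded['attention_mask'].unsqueeze(-1).float()   # (batch, seq_len, 1)
    summed = (hidden * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1)
    return (summed / counts).cpu().numpy()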
Example: Feature Extraction
Beyond the tasks above, the model can also be used as a feature extractor, letting researchers run their own custom analyses on the resulting representations.
from transformers import AutoTokenizer, AutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-large-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

text = "Good night 😊"
text = preprocess(text)

# PyTorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
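features_mean is a single vector per input text; for a RoBERTa-large checkpoint such as this one its dimensionality should be 1024 (the model's hidden size), which can be checked directly:

print(features_mean.shape)   # expected: (1024,)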
Citation
If you use this model in your research, please cite the following paper.
@article{loureiro2023tweet,
  title={Tweet Insights: A Visualization Platform to Extract Temporal Insights from Twitter},
  author={Loureiro, Daniel and Rezaee, Kiamehr and Riahi, Talayeh and Barbieri, Francesco and Neves, Leonardo and Anke, Luis Espinosa and Camacho-Collados, Jose},
  journal={arXiv preprint arXiv:2308.02142},
  year={2023}
}