Generative Recommenders
This repository hosts the code for the paper "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152, to appear at ICML'24).
It currently contains only the code needed to reproduce the public experiments in the paper (Section 4.1.1) and the Triton kernels for the forward pass (Section 4.2). We plan to add HSTU integration code and additional kernels later to enable throughput/performance benchmarking.
Getting started
Public experiments
To reproduce the public experiments in the paper (the traditional sequential recommendation setting, Section 4.1.1), follow these steps:
Install dependencies
Install PyTorch following the official instructions. Then,
pip3 install gin-config absl-py scikit-learn scipy matplotlib numpy apex hypothesis pandas fbgemm_gpu iopath
Download and preprocess data
mkdir -p tmp/ && python3 preprocess_public_data.py
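The exact logic of preprocess_public_data.py is not shown here, but preprocessing for sequential recommendation typically sorts each user's ratings by timestamp and collapses them into an ordered item sequence. A minimal sketch of that idea with pandas — the column names and toy data are illustrative, not the script's actual schema:

```python
import pandas as pd

# Toy stand-in for MovieLens-style ratings: (user_id, item_id, rating, timestamp).
# The real data comes from preprocess_public_data.py; this frame is illustrative only.
ratings = pd.DataFrame(
    {
        "user_id": [1, 1, 1, 2, 2],
        "item_id": [10, 20, 30, 10, 40],
        "rating": [4, 5, 3, 5, 4],
        "timestamp": [100, 50, 200, 10, 20],
    }
)

# Sort by time within each user, then collapse to one item sequence per user.
sequences = (
    ratings.sort_values(["user_id", "timestamp"])
    .groupby("user_id")["item_id"]
    .apply(list)
)

print(sequences.to_dict())  # {1: [20, 10, 30], 2: [10, 40]}
```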
Run model training
Most datasets require a GPU with 24GB or more HBM.
CUDA_VISIBLE_DEVICES=0 python3 main.py --gin_config_file=configs/ml-1m/hstu-sampled-softmax-n128-large-final.gin --master_port=12345
configs/ml-1m, configs/ml-20m, and configs/amzn-books contain additional configurations to make reproducing these experiments easier.
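To sweep several of the provided configs, one simple approach is to generate one launch command per .gin file. The helper below is hypothetical (not part of the repo); only the flag names mirror the single-run command shown above:

```python
# Hypothetical helper (not part of the repo): build one training command per
# config file, assigning a distinct GPU and master port to each run. The flag
# names mirror the single-run command shown above.
def launch_commands(config_paths, base_port=12345):
    cmds = []
    for i, cfg in enumerate(config_paths):
        cmds.append(
            f"CUDA_VISIBLE_DEVICES={i} python3 main.py "
            f"--gin_config_file={cfg} --master_port={base_port + i}"
        )
    return cmds

# The first path appears earlier in this README; the second is illustrative —
# check configs/ml-20m/ for the actual file names.
configs = [
    "configs/ml-1m/hstu-sampled-softmax-n128-large-final.gin",
    "configs/ml-20m/hstu-sampled-softmax-n128-large-final.gin",
]
for cmd in launch_commands(configs):
    print(cmd)
```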
Verify results
By default, experiment logs are written to exps/. We can launch tensorboard with commands similar to the following:
tensorboard --logdir ~/generative-recommenders/exps/ml-1m-l200/ --port 24001 --bind_all
tensorboard --logdir ~/generative-recommenders/exps/ml-20m-l200/ --port 24001 --bind_all
tensorboard --logdir ~/generative-recommenders/exps/amzn-books-l50/ --port 24001 --bind_all
With the provided configuration (.gin) files, you should be able to reproduce the following results (verified as of April 15, 2024):
MovieLens-1M (ML-1M):
Method | HR@10 | NDCG@10 | HR@50 | NDCG@50 | HR@200 | NDCG@200 |
---|---|---|---|---|---|---|
SASRec | 0.2853 | 0.1603 | 0.5474 | 0.2185 | 0.7528 | 0.2498 |
BERT4Rec | 0.2843 (-0.4%) | 0.1537 (-4.1%) | ||||
GRU4Rec | 0.2811 (-1.5%) | 0.1648 (+2.8%) | ||||
HSTU | 0.3097 (+8.6%) | 0.1720 (+7.3%) | 0.5754 (+5.1%) | 0.2307 (+5.6%) | 0.7716 (+2.5%) | 0.2606 (+4.3%) |
HSTU-large | 0.3294 (+15.5%) | 0.1893 (+18.1%) | 0.5935 (+8.4%) | 0.2481 (+13.5%) | 0.7839 (+4.1%) | 0.2771 (+10.9%) |
MovieLens-20M (ML-20M):
Method | HR@10 | NDCG@10 | HR@50 | NDCG@50 | HR@200 | NDCG@200 |
---|---|---|---|---|---|---|
SASRec | 0.2889 | 0.1621 | 0.5503 | 0.2199 | 0.7661 | 0.2527 |
BERT4Rec | 0.2816 (-2.5%) | 0.1703 (+5.1%) | ||||
GRU4Rec | 0.2813 (-2.6%) | 0.1730 (+6.7%) | ||||
HSTU | 0.3273 (+13.3%) | 0.1895 (+16.9%) | 0.5889 (+7.0%) | 0.2473 (+12.5%) | 0.7952 (+3.8%) | 0.2787 (+10.3%) |
HSTU-large | 0.3556 (+23.1%) | 0.2098 (+29.4%) | 0.6143 (+11.6%) | 0.2671 (+21.5%) | 0.8074 (+5.4%) | 0.2965 (+17.4%) |
Amazon Reviews (Books):
Method | HR@10 | NDCG@10 | HR@50 | NDCG@50 | HR@200 | NDCG@200 |
---|---|---|---|---|---|---|
SASRec | 0.0306 | 0.0164 | 0.0754 | 0.0260 | 0.1431 | 0.0362 |
HSTU | 0.0416 (+36.4%) | 0.0227 (+39.3%) | 0.0957 (+27.1%) | 0.0344 (+32.3%) | 0.1735 (+21.3%) | 0.0461 (+27.7%) |
HSTU-large | 0.0478 (+56.7%) | 0.0262 (+60.7%) | 0.1082 (+43.7%) | 0.0393 (+51.2%) | 0.1908 (+33.4%) | 0.0517 (+43.2%) |
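The percentages in parentheses are relative improvements over the SASRec baseline in the same table. For example, HSTU's HR@10 on ML-1M:

```python
def rel_improvement(value, baseline):
    """Relative improvement over the baseline, in percent."""
    return (value / baseline - 1.0) * 100.0

# HSTU vs. SASRec, HR@10 on ML-1M (numbers from the first table).
print(round(rel_improvement(0.3097, 0.2853), 1))  # 8.6
```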
For the three tables above, the "SASRec" rows are based on Self-Attentive Sequential Recommendation (SASRec), with the original binary cross-entropy loss replaced by the sampled softmax loss proposed in Revisiting Neural Retrieval on Accelerators. These rows can be reproduced with configs/*/sasrec-*-final.gin. The "BERT4Rec" and "GRU4Rec" rows are based on the results reported in Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec? — with the caveat that the comparison slightly favors these two methods, as they use full negatives while the other rows use 128/512 sampled negatives. The "HSTU" and "HSTU-large" rows are based on Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations; in particular, the HSTU rows use configurations identical to SASRec's. The "HSTU" and "HSTU-large" results can be reproduced with configs/*/hstu-*-final.gin.
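The evaluation code lives in the repo, but for reference, HR@K and NDCG@K in the standard leave-one-out setting (one held-out ground-truth item per user) reduce to the short formulas below. This sketch is a generic restatement of the metrics, not the repository's implementation:

```python
import math

def hit_rate_at_k(rank, k):
    """HR@K: 1 if the ground-truth item's rank (1-based) is within the top K."""
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank, k):
    """NDCG@K with a single relevant item: 1/log2(rank + 1) inside the top K, else 0.
    The ideal DCG is 1 (relevant item at rank 1), so no extra normalization is needed."""
    return 1.0 / math.log2(rank + 1.0) if rank <= k else 0.0

# A user whose held-out item is ranked 3rd by the model:
print(hit_rate_at_k(3, 10))        # 1.0
print(round(ndcg_at_k(3, 10), 4))  # 0.5
```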
Efficiency experiments
"ops/triton" currently contains the Triton kernels needed for the efficiency experiments (forward pass). More code, including integration glue code, will be added later. If this is urgent, feel free to submit a PR.
License
This codebase is Apache 2.0 licensed, as detailed in the LICENSE file.
Contributors
The overall project was made possible thanks to the joint efforts of many technical contributors (listed in alphabetical order):
Adnan Akhundov, Bugra Akyildiz, Shabab Ayub, Alex Bao, Renqin Cai, Jennifer Cao, Xuan Cao, Guoqiang Jerry Chen, Lei Chen, Sean Chen, Xianjie Chen, Huihui Cheng, Weiwei Chu, Ted Cui, Shiyan Deng, Nimit Desai, Fei Ding, Shilin Ding, Francois Fagan, Lu Fang, Leon Gao, Zhaojie Gong, Fangda Gu, Liang Guo, Liz Guo, Jeevan Gyawali, Yuchen Hao, Daisy Shi He, Michael Jiayuan He, Samuel Hsia, Jie Hua, Yanzun Huang, Hongyi Jia, Rui Jian, Jian Jin, Rahul Kindi, Changkyu Kim, Yejin Lee, Fu Li, Hong Li, Shen Li, Rui Li, Wei Li, Zhijing Li, Lucy Liao, Xueting Liao, Emma Lin, Hao Lin, Jingzhou Liu, Xing Liu, Xingyu Liu, Kai Londenberg, Yinghai Lu, Liang Luo, Linjian Ma, Matt Ma, Yun Mao, Bert Maher, Ajit Mathews, Matthew Murphy, Satish Nadathur, Min Ni, Jongsoo Park, Jing Qian, Lijing Qin, Alex Singh, Timothy Shi, Yu Shi, Dennis van der Staay, Xiao Sun, Colin Taylor, Shin-Yeh Tsai, Rohan Varma, Omkar Vichare, Alyssa Wang, Pengchao Wang, Shengzhi Wang, Wenting Wang, Xiaolong Wang, Yueming Wang, Zhiyong Wang, Wei Wei, Bin Wen, Carole-Jean Wu, Yanhong Wu, Eric Xu, Bi Xue, Hong Yan, Zheng Yan, Chao Yang, Junjie Yang, Wen-Yun Yang, Zimeng Yang, Chunxing Yin, Daniel Yin, Yiling You, Jiaqi Zhai, Keke Zhai, Yanli Zhao, Zhuoran Zhao, Hui Zhang, Jingjing Zhang, Lu Zhang, Lujia Zhang, Na Zhang, Rui Zhang, Xiong Zhang, Ying Zhang, Zhiyun Zhang, Charles Zheng, Erheng Zhong, Xin Zhuang.
For the initial paper describing the generative recommendation problem formulation and the HSTU architecture, please refer to "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152, ICML'24), [poster](https://tinyurl.com/gr-icml24), slides (to be added). More documentation, including an extended technical report, will follow.