transfomers-silicon-research

Research Progress on Hardware Implementation of Transformer Models

This project collects research materials on hardware implementation of Transformer models, including BERT and its optimized variants. It covers techniques such as algorithm-hardware co-design, neural network accelerators, quantization, and pruning. The project provides a detailed list of papers spanning FPGA implementations, power optimization, and other areas, giving a broad view of recent progress in Transformer hardware acceleration.

Transformer Models Silicon Research

Research and Materials on Hardware Implementation of Transformer Models

How to Contribute

You can add new papers via pull requests. Please check data/papers.yaml; if your paper is not in the list, add an entry as the last item and create a pull request.

Transformer and BERT Model

  • BERT is a method of pre-training language representations, meaning that we train a general-purpose language understanding model on a large text corpus (like Wikipedia) and then use that model for downstream NLP tasks.

  • BERT was created and published in 2018 by Jacob Devlin and his colleagues from Google. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks.

  • BERT is a Transformer-based model.
    • The architecture of BERT is similar to the original Transformer model, except that BERT uses only the Transformer encoder stack: its self-attention layers condition on both left and right context in every layer, rather than reading the text in a single left-to-right or right-to-left direction.
    • The output is the sequence of hidden states produced by the final Transformer layer. The model is pre-trained on a large corpus of unlabeled text, and the pre-training task is a simple and straightforward masked language modeling objective: randomly masked tokens are predicted from their surrounding context.
    • The pre-trained BERT model can then be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications (a minimal fine-tuning sketch follows this list).
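
To make the fine-tuning recipe concrete, the sketch below adds a single classification layer on top of BERT's pooled output. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the class name, label count, and example sentence are illustrative, not taken from the papers listed here.

```python
import torch
from transformers import BertModel, BertTokenizer

class BertClassifier(torch.nn.Module):
    """BERT encoder plus one task-specific output layer, as in the BERT fine-tuning setup."""

    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # The only task-specific addition: a single linear output layer.
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # pooler_output is the final-layer [CLS] hidden state passed through a tanh projection.
        return self.classifier(outputs.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier(num_labels=2)
batch = tokenizer(["BERT needs only one extra output layer for classification."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (1, num_labels)
```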

Reference Papers

1. Attention Is All You Need

DOI-Link PDF-Download

Code-Link Code-Link

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
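
The core operation the paper introduces is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, applied in parallel across multiple heads. Below is a minimal NumPy sketch of that formula; the array shapes and masking convention are illustrative, not taken from a reference implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V for arrays of shape (seq_len, dim)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (seq_q, seq_k) query-key similarities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # hide positions a query may not attend to
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ V                         # (seq_q, d_v) weighted sum of value vectors

# Self-attention over a toy sequence of 3 tokens with model dimension 4
x = np.random.randn(3, 4)
out = scaled_dot_product_attention(x, x, x)
```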

2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

DOI-Link PDF-Download Code-Link Code-Link

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
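
The "jointly conditioning on both left and right context" above is what masked language modeling exercises directly: a masked token is predicted from the words on both sides. A minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (the example sentence is illustrative):

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Both the left context ("The accelerator maps") and the right context
# ("onto the FPGA fabric") inform the prediction at the [MASK] position.
inputs = tokenizer("The accelerator maps [MASK] onto the FPGA fabric.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # BERT's guess for the masked token
```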

Hardware Research

2018

Algorithm-Hardware Co-Design of Single Shot Detector for Fast Object Detection on FPGAs

DOI-Link

SparseNN: An energy-efficient neural network accelerator exploiting input and output sparsity

DOI-Link PDF-Link


2019

A Power Efficient Neural Network Implementation on Heterogeneous FPGA and GPU Devices

DOI-Link

A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning

DOI-Link

An Evaluation of Transfer Learning for Classifying Sales Engagement Emails at Large Scale

DOI-Link

MAGNet: A Modular Accelerator Generator for Neural Networks

DOI-Link PDF-Link

mRNA: Enabling Efficient Mapping Space Exploration for a Reconfiguration Neural Accelerator

DOI-Link PDF-Link

Pre-trained BERT-GRU model for relation extraction

DOI-Link

Q8BERT: Quantized 8Bit BERT

DOI-Link PDF-Link

Structured pruning of a BERT-based question answering model

DOI-Link PDF-Link

Structured pruning of large language models

DOI-Link PDF-Link

TinyBERT: Distilling BERT for Natural Language Understanding

DOI-Link PDF-Link


2020

A Low-Cost Reconfigurable Nonlinear Core for Embedded DNN Applications

DOI-Link

A Multi-Neural Network Acceleration Architecture

DOI-Link

A Primer in BERTology: What We Know About How BERT Works

DOI-Link

A Reconfigurable DNN Training Accelerator on FPGA

DOI-Link

A^3: Accelerating Attention Mechanisms in Neural Networks with Approximation

DOI-Link

Emerging Neural Workloads and Their Impact on Hardware

DOI-Link

Accelerating event detection with DGCNN and FPGAs

DOI-Link

An Empirical Analysis of BERT Embedding for Automated Essay Scoring

DOI-Link

An investigation on different underlying quantization schemes for pre-trained language models
