GPU-Benchmarks-on-LLM-Inference

Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? 🧐

Description

Use llama.cpp to test the inference speed of the LLaMA 3 models on different GPUs (rented on RunPod) and on Apple Silicon machines: a 13-inch M1 MacBook Air, a 14-inch M1 Max MacBook Pro, an M2 Ultra Mac Studio, and a 16-inch M3 Max MacBook Pro.

Overview

Average speed (tokens/s) of generating 1024 tokens with LLaMA 3 on each GPU. Higher is better.

| GPU | 8B Q4_K_M | 8B F16 | 70B Q4_K_M | 70B F16 |
| --- | --- | --- | --- | --- |
| 3070 8GB | 70.94 | OOM | OOM | OOM |
| 3080 10GB | 106.40 | OOM | OOM | OOM |
| 3080 Ti 12GB | 106.71 | OOM | OOM | OOM |
| 4070 Ti 12GB | 82.21 | OOM | OOM | OOM |
| 4080 16GB | 106.22 | 40.29 | OOM | OOM |
| RTX 4000 Ada 20GB | 58.59 | 20.85 | OOM | OOM |
| 3090 24GB | 111.74 | 46.51 | OOM | OOM |
| 4090 24GB | 127.74 | 54.34 | OOM | OOM |
| RTX 5000 Ada 32GB | 89.87 | 32.67 | OOM | OOM |
| 3090 24GB * 2 | 108.07 | 47.15 | 16.29 | OOM |
| 4090 24GB * 2 | 122.56 | 53.27 | 19.06 | OOM |
| RTX A6000 48GB | 102.22 | 40.25 | 14.58 | OOM |
| RTX 6000 Ada 48GB | 130.99 | 51.97 | 18.36 | OOM |
| A40 48GB | 88.95 | 33.95 | 12.08 | OOM |
| L40S 48GB | 113.60 | 43.42 | 15.31 | OOM |
| RTX 4000 Ada 20GB * 4 | 56.14 | 20.58 | 7.33 | OOM |
| A100 PCIe 80GB | 138.31 | 54.56 | 22.11 | OOM |
| A100 SXM 80GB | 133.38 | 53.18 | 24.33 | OOM |
| H100 PCIe 80GB | 144.49 | 67.79 | 25.01 | OOM |
| 3090 24GB * 4 | 104.94 | 46.40 | 16.89 | OOM |
| 4090 24GB * 4 | 117.61 | 52.69 | 18.83 | OOM |
| RTX 5000 Ada 32GB * 4 | 82.73 | 31.94 | 11.45 | OOM |
| 3090 24GB * 6 | 101.07 | 45.55 | 16.93 | 5.82 |
| 4090 24GB * 8 | 116.13 | 52.12 | 18.76 | 6.45 |
| RTX A6000 48GB * 4 | 93.73 | 38.87 | 14.32 | 4.74 |
| RTX 6000 Ada 48GB * 4 | 118.99 | 50.25 | 17.96 | 6.06 |
| A40 48GB * 4 | 83.79 | 33.28 | 11.91 | 3.98 |
| L40S 48GB * 4 | 105.72 | 42.48 | 14.99 | 5.03 |
| A100 PCIe 80GB * 4 | 117.30 | 51.54 | 22.68 | 7.38 |
| A100 SXM 80GB * 4 | 97.70 | 45.45 | 19.60 | 6.92 |
| H100 PCIe 80GB * 4 | 118.14 | 62.90 | 26.20 | 9.63 |
| M1 7-Core GPU 8GB | 9.72 | OOM | OOM | OOM |
| M1 Max 32-Core GPU 64GB | 34.49 | 18.43 | 4.09 | OOM |
| M2 Ultra 76-Core GPU 192GB | 76.28 | 36.25 | 12.13 | 4.71 |
| M3 Max 40-Core GPU 64GB | 50.74 | 22.39 | 7.53 | OOM |

Average prompt evaluation speed (tokens/s) for a 1024-token prompt with LLaMA 3 on each GPU.

| GPU | 8B Q4_K_M | 8B F16 | 70B Q4_K_M | 70B F16 |
| --- | --- | --- | --- | --- |
| 3070 8GB | 2283.62 | OOM | OOM | OOM |
| 3080 10GB | 3557.02 | OOM | OOM | OOM |
| 3080 Ti 12GB | 3556.67 | OOM | OOM | OOM |
| 4070 Ti 12GB | 3653.07 | OOM | OOM | OOM |
| 4080 16GB | 5064.99 | 6758.90 | OOM | OOM |
| RTX 4000 Ada 20GB | 2310.53 | 2951.87 | OOM | OOM |
| 3090 24GB | 3865.39 | 4239.64 | OOM | OOM |
| 4090 24GB | 6898.71 | 9056.26 | OOM | OOM |
| RTX 5000 Ada 32GB | 4467.46 | 5835.41 | OOM | OOM |
| 3090 24GB * 2 | 4004.14 | 4690.50 | 393.89 | OOM |
| 4090 24GB * 2 | 8545.00 | 11094.51 | 905.38 | OOM |
| RTX A6000 48GB | 3621.81 | 4315.18 | 466.82 | OOM |
| RTX 6000 Ada 48GB | 5560.94 | 6205.44 | 547.03 | OOM |
| A40 48GB | 3240.95 | 4043.05 | 239.92 | OOM |
| L40S 48GB | 5908.52 | 2491.65 | 649.08 | OOM |
| RTX 4000 Ada 20GB * 4 | 3369.24 | 4366.64 | 306.44 | OOM |
| A100 PCIe 80GB | 5800.48 | 7504.24 | 726.65 | OOM |
| A100 SXM 80GB | 5863.92 | 681.47 | 796.81 | OOM |
| H100 PCIe 80GB | 7760.16 | 10342.63 | 984.06 | OOM |
| 3090 24GB * 4 | 4653.93 | 5713.41 | 350.06 | OOM |
| 4090 24GB * 4 | 9609.29 | 12304.19 | 898.17 | OOM |
| RTX 5000 Ada 32GB * 4 | 6530.78 | 2877.66 | 541.54 | OOM |
| 3090 24GB * 6 | 5153.05 | 5952.55 | 739.40 | 927.23 |
| 4090 24GB * 8 | 9706.82 | 11818.92 | 1336.26 | 1890.48 |
| RTX A6000 48GB * 4 | 5340.10 | 6448.85 | 539.20 | 792.23 |
| RTX 6000 Ada 48GB * 4 | 9679.55 | 12637.94 | 714.93 | 1270.39 |
| A40 48GB * 4 | 4841.98 | 5931.06 | 263.36 | 900.79 |
| L40S 48GB * 4 | 9008.27 | 2541.61 | 634.05 | 1478.83 |
| A100 PCIe 80GB * 4 | 8889.35 | 11670.74 | 978.06 | 1733.41 |
| A100 SXM 80GB * 4 | 7782.25 | 674.11 | 539.08 | 1834.16 |
| H100 PCIe 80GB * 4 | 11560.23 | 15612.81 | 1133.23 | 2420.10 |
| M1 7-Core GPU 8GB | 87.26 | OOM | OOM | OOM |
| M1 Max 32-Core GPU 64GB | 355.45 | 418.77 | 33.01 | OOM |
| M2 Ultra 76-Core GPU 192GB | 1023.89 | 1202.74 | 117.76 | 145.82 |
| M3 Max 40-Core GPU 64GB | 678.04 | 751.49 | 62.88 | OOM |

Model

Thanks to shawwn for LLaMA model weights (7B, 13B, 30B, 65B): llama-dl. Access LLaMA 2 from Meta AI. Access LLaMA 3 from Meta Llama 3 on Hugging Face or my Hugging Face repos: Xiongjie Dai.

Usage

Build

  • For NVIDIA GPUs, this provides BLAS acceleration using the CUDA cores of your GPU:

    !make clean && LLAMA_CUBLAS=1 make -j

  • For Apple Silicon, Metal is enabled by default:

    !make clean && make -j
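
The Makefile commands above match the llama.cpp revision used for these benchmarks. Note that newer llama.cpp releases build with CMake and have renamed the CUDA switch (LLAMA_CUBLAS was superseded by GGML_CUDA); on a recent checkout the rough equivalent is:

    !cmake -B build -DGGML_CUDA=ON && cmake --build build --config Release -j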

Text Completion

Use the argument -ngl 0 to run inference on the CPU only, or -ngl 10000 to make sure all layers are offloaded to the GPU.

!./main -ngl 10000 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf --color --temp 1.1 --repeat_penalty 1.1 -c 0 -n 1024 -e -s 0 -p """\
First Citizen:\n\n\
Before we proceed any further, hear me speak.\n\n\
\n\n\
All:\n\n\
Speak, speak.\n\n\
\n\n\
First Citizen:\n\n\
You are all resolved rather to die than to famish?\n\n\
\n\n\
All:\n\n\
Resolved. resolved.\n\n\
\n\n\
First Citizen:\n\n\
First, you know Caius Marcius is chief enemy to the people.\n\n\
\n\n\
All:\n\n\
We know't, we know't.\n\n\
\n\n\
First Citizen:\n\n\
Let us kill him, and we'll have corn at our own price. Is't a verdict?\n\n\
\n\n\
All:\n\n\
No more talking on't; let it be done: away, away!\n\n\
\n\n\
Second Citizen:\n\n\
One word, good citizens.\n\n\
\n\n\
First Citizen:\n\n\
We are accounted poor citizens, the patricians good. What authority surfeits on would relieve us: if they would yield us but the superfluity, \
while it were wholesome, we might guess they relieved us humanely; but they think we are too dear: the leanness that afflicts us, the object of \
our misery, is as an inventory to particularise their abundance; our sufferance is a gain to them Let us revenge this with our pikes, \
ere we become rakes: for the gods know I speak this in hunger for bread, not in thirst for revenge.\n\n\
\n\n\
"""

Note: For Apple Silicon, check the recommendedMaxWorkingSetSize reported in the output to see how much memory can be allocated on the GPU while maintaining performance. Currently only about 70% of unified memory can be allocated to the GPU on the 32GB M1 Max, and around 78% is expected to be usable by the GPU on machines with more memory. (Source: https://developer.apple.com/videos/play/tech-talks/10580/?time=346) To utilize the whole memory, use -ngl 0 so inference runs on the CPU only. (Thanks to: https://github.com/ggerganov/llama.cpp/pull/1826)
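
As a rough illustration of those ratios (a sketch only; the 70% share is the figure quoted above, and hw.memsize is the standard macOS sysctl key for total memory):

    !sysctl -n hw.memsize | awk '{printf "Unified memory: %.0f GiB, approx. GPU-allocatable at 70%%: %.0f GiB\n", $1/2^30, $1*0.70/2^30}'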

Chat template for LLaMA 3 🦙🦙🦙

!./main -ngl 10000 -m ./models/8B-v3-instruct/ggml-model-Q4_K_M.gguf --color -c 0 -n -2 -e -s 0 --mirostat 2 -i --no-display-prompt --keep -1 \
-r '<|eot_id|>' -p '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
--in-prefix '<|start_header_id|>user<|end_header_id|>\n\n' --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
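
In this command, -r '<|eot_id|>' stops generation at LLaMA 3's end-of-turn token, --in-prefix and --in-suffix wrap each interactive user message in the LLaMA 3 chat-template headers, and --keep -1 keeps the initial prompt in the context when it fills up.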

Benchmark

!./llama-bench -p 512,1024,4096,8192 -n 512,1024,4096,8192 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf
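
Here -p sets the prompt lengths to benchmark (prompt processing) and -n the numbers of tokens to generate (text generation); llama-bench reports the average tokens/s for each combination.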

Total VRAM Requirements

| Model | Quantized size (Q4_K_M) | Original size (f16) |
| --- | --- | --- |
| 8B | 4.58 GB | 14.96 GB |
| 70B | 39.59 GB | 131.42 GB |

You can estimate the VRAM requirement using this tool: LLM RAM Calculator
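
As a back-of-the-envelope check of the table above (a sketch assuming roughly 2 bytes per weight for F16 and about 4.8 bits per weight for Q4_K_M, ignoring the KV cache and compute buffers):

    !awk 'BEGIN { p = 70e9; printf "70B F16: ~%.0f GiB, 70B Q4_K_M: ~%.0f GiB\n", p*2/2^30, p*4.8/8/2^30 }'

The result lands near the 70B figures above; the memory actually needed at runtime is higher once the context (KV cache) is allocated.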

Perplexity table on LLaMA 3 70B

Lower perplexity is better. (Credit: dranger003)

| Quantization | Size (GiB) | Perplexity (wiki.test) | Delta (FP16) |
| --- | --- | --- | --- |
| IQ1_S | 14.29 | 9.8655 +/- 0.0625 | 248.51% |
| IQ1_M | 15.60 | 8.5193 +/- 0.0530 | 201.94% |
| IQ2_XXS | 17.79 | 6.6705 +/- 0.0405 | |