microservices-framework-benchmark

Microservice framework performance benchmarks: a comprehensive multi-language comparison

This project runs comprehensive performance benchmarks against microservice frameworks in Java, Go, Node.js, and other languages. The tests cover key metrics such as throughput and latency, with detailed test data and configuration for each framework. The results show that frameworks such as Light-4j and Go FastHttp perform exceptionally well. The data gives developers an objective reference for choosing a microservice framework and for understanding the trade-offs of different technology stacks.

Stack Overflow | Google Group | Gitter Chat | Subreddit | Youtube Channel | Documentation | Contribution Guide

| Framework | Language | Max Throughput (req/s) | Avg Latency | Transfer |
| --- | --- | --- | --- | --- |
| Go FastHttp | Go | 1,396,685.83 | 99.98ms | 167.83MB |
| Light-4j | Java | 1,344,512.65 | 2.36ms | 169.25MB |
| ActFramework | Java | 945,429.13 | 2.22ms | 136.15MB |
| Go Iris | Go | 828,035.66 | 5.77ms | 112.92MB |
| Vertx | Java | 803,311.31 | 2.37ms | 98.06MB |
| Node-uws | Node/C++ | 589,924.44 | 7.22ms | 28.69MB |
| Dotnet | .Net | 486,216.93 | 2.93ms | 57.03MB |
| Jooby/Undertow | Java | 362,018.07 | 3.95ms | 47.99MB |
| SeedStack-Filter | Java | 343,416.33 | 4.41ms | 51.42MB |
| Spring Boot Reactor | Java | 243,240.17 | 7.44ms | 17.86MB |
| Pippo-Undertow | Java | 216,254.56 | 9.80ms | 31.35MB |
| Spark | Java | 194,553.83 | 13.85ms | 32.47MB |
| Pippo-Jetty | Java | 178,055.45 | 15.66ms | 26.83MB |
| Play-Java | Java | 177,202.75 | 12.15ms | 21.80MB |
| Go HttpRouter | Go | 171,852.31 | 14.12ms | 21.14MB |
| Go Http | Go | 170,313.02 | 15.01ms | 20.95MB |
| JFinal | Java | 139,467.87 | 11.89ms | 29.79MB |
| Akka-Http | Java | 132,157.96 | 12.21ms | 19.54MB |
| RatPack | Java | 124,700.70 | 13.45ms | 10.82MB |
| Pippo-Tomcat | Java | 103,948.18 | 23.50ms | 15.29MB |
| Bootique + Jetty/Jersey | Java | 65,072.20 | 39.08ms | 11.17MB |
| SeedStack-Jersey2 | Java | 52,310.11 | 26.88ms | 11.87MB |
| Baseio | Java | 50,361.98 | 22.20ms | 6.39MB |
| NinjaFramework | Java | 47,956.43 | 55.76ms | 13.67MB |
| Play 1 | Java | 44,491.87 | 10.73ms | 18.75MB |
| Spring Boot Undertow | Java | 44,260.61 | 38.94ms | 6.42MB |
| Nodejs Express | Node | 42,443.34 | 22.30ms | 9.31MB |
| Dropwizard | Java | 33,819.90 | 98.78ms | 3.23MB |
| Spring Boot Tomcat | Java | 33,086.22 | 82.93ms | 3.98MB |
| Node-Loopback | Node | 32,091.95 | 34.42ms | 11.51MB |
| Payara Micro | Java | 24,768.69 | 118.86ms | 3.50MB |
| WildFly Swarm | Java | 21,541.07 | 59.77ms | 2.83MB |

We use pipeline.lua to generate more requests per second; the script is located at microservices-framework-benchmark/pipeline.lua.
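The exact contents of pipeline.lua are not reproduced here, but the idea behind HTTP pipelining is simply to pack several requests into each socket write. A minimal Python sketch of what a pipelined batch of depth 16 looks like on the wire (the function name and structure are illustrative, not the actual Lua script):

```python
def build_pipelined_batch(path="/", host="localhost:8080", depth=16):
    """Concatenate `depth` GET requests into a single buffer.

    wrk's pipeline script does the same thing in Lua: it pre-builds
    one payload containing many requests, so each socket write
    carries `depth` requests instead of one.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    )
    return (request * depth).encode("ascii")

batch = build_pipelined_batch(depth=16)
print(batch.count(b"GET / HTTP/1.1"))  # 16 requests per write
```

The trailing `-- / 16` on the wrk command line below is how the path and pipeline depth are passed through to the script.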

Here is the light-java server performance with the same command line used for the other frameworks.

```
wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   774.30us    1.77ms  40.31ms   91.51%
    Req/Sec     0.93M    81.46k    1.06M    87.83%
  Latency Distribution
     50%  286.00us
     75%  529.00us
     90%    1.92ms
     99%    0.00us
  110603088 requests in 30.01s, 12.88GB read
Requests/sec: 3685234.33
Transfer/sec:    439.31MB
```
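As a sanity check, wrk's headline number can be approximately reproduced from the totals it reports; the small discrepancy from the printed 3,685,234.33 comes from wrk timing the run more precisely than the rounded 30.01s:

```python
# Totals from the run above: "110603088 requests in 30.01s"
total_requests = 110_603_088
duration_s = 30.01

requests_per_sec = total_requests / duration_s
print(f"{requests_per_sec:,.0f}")  # ~3,685,541 req/s, close to wrk's 3,685,234.33
```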

Here is another test for light-java that pushes even more requests per connection; at this level some of the other frameworks start returning server errors.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 50
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.35ms    2.80ms  54.23ms   89.24%
    Req/Sec   550.10k    64.34k    1.22M    74.54%
  Latency Distribution
     50%    1.56ms
     75%    2.87ms
     90%    5.41ms
     99%   22.17ms
  65803650 requests in 30.10s, 6.50GB read
Requests/sec: 2186203.44
Transfer/sec:    221.00MB
```

Here is the spring-boot-tomcat (tomcat embedded) performance.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    82.93ms  108.77ms   1.58s    89.45%
    Req/Sec     8.40k     3.68k   22.19k    68.54%
  Latency Distribution
     50%   45.66ms
     75%  101.59ms
     90%  197.72ms
     99%  542.87ms
  995431 requests in 30.09s, 119.79MB read
Requests/sec:  33086.22
Transfer/sec:      3.98MB
```

Here is the spring-boot-undertow (undertow embedded) performance.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    38.94ms   39.29ms 456.82ms   89.28%
    Req/Sec    11.21k     4.97k   28.16k    68.14%
  Latency Distribution
     50%   27.58ms
     75%   49.62ms
     90%   80.73ms
     99%  201.87ms
  1331312 requests in 30.08s, 192.98MB read
Requests/sec:  44260.61
Transfer/sec:      6.42MB
```

Basically, light-4j is about 44 times faster than Spring Boot with embedded Tomcat, just for the raw performance of serving "Hello World!".

For a closer comparison, I created another project, spring-boot-undertow, with the embedded Undertow servlet container (light-java uses only the Undertow core), and the performance improved somewhat. Light-Java is still about 33 times faster than Spring Boot with embedded Undertow.
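The multipliers quoted above can be cross-checked against the summary table. Using the table's peak-throughput figures gives slightly lower ratios (roughly 41x and 30x), so the 44x/33x figures presumably come from the particular runs shown, but the order of magnitude is the same:

```python
# Peak throughput figures (req/s) taken from the summary table above.
light_4j = 1_344_512.65
spring_boot_tomcat = 33_086.22
spring_boot_undertow = 44_260.61

print(f"vs embedded Tomcat:   {light_4j / spring_boot_tomcat:.1f}x")
print(f"vs embedded Undertow: {light_4j / spring_boot_undertow:.1f}x")
```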

At the community's request, I have added Node.js and Go examples; here are the test results.

Node Express framework. To start the server:

```
cd node-express
npm install
node server.js
```

The test result.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    22.30ms   24.35ms 592.24ms   49.18%
    Req/Sec    10.70k     0.87k   11.95k    94.82%
  Latency Distribution
     50%   47.94ms
     75%    0.00us
     90%    0.00us
     99%    0.00us
  1274289 requests in 30.02s, 279.51MB read
Requests/sec:  42443.34
Transfer/sec:      9.31MB
```

Go Standard Http

To start the server:

```
cd go-http
go run server.go -prefork
```

The test result:

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.01ms   15.35ms 180.11ms   87.10%
    Req/Sec    42.80k     5.46k   62.49k    70.58%
  Latency Distribution
     50%   10.03ms
     75%   19.96ms
     90%   34.55ms
     99%   72.99ms
  5123194 requests in 30.08s, 630.28MB read
Requests/sec: 170313.02
Transfer/sec:     20.95MB
```

Go FastHttp

To start the server:

```
cd go-fasthttp
go run server.go -prefork
```

The test result:

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    99.98ms  127.12ms 653.72ms   82.12%
    Req/Sec   351.24k    46.23k  525.74k    77.09%
  Latency Distribution
     50%   30.76ms
     75%  175.44ms
     90%  299.14ms
     99%  476.20ms
  41989168 requests in 30.06s, 4.93GB read
Requests/sec: 1396685.83
Transfer/sec:    167.83MB
```
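With this many frameworks under test, it is convenient to scrape the wrk summaries mechanically rather than copy numbers by hand. A minimal Python sketch (the function and regexes are illustrative; this repo does not ship such a parser):

```python
import re

def parse_wrk_summary(output: str) -> dict:
    """Pull the headline numbers out of a wrk run's text output."""
    summary = {}
    m = re.search(r"Requests/sec:\s*([\d.]+)", output)
    if m:
        summary["requests_per_sec"] = float(m.group(1))
    m = re.search(r"Transfer/sec:\s*([\d.]+)(\w+)", output)
    if m:
        summary["transfer_per_sec"] = (float(m.group(1)), m.group(2))
    m = re.search(r"(\d+) requests in ([\d.]+)s", output)
    if m:
        summary["total_requests"] = int(m.group(1))
        summary["duration_s"] = float(m.group(2))
    return summary

sample = """  41989168 requests in 30.06s, 4.93GB read
Requests/sec: 1396685.83
Transfer/sec:    167.83MB
"""
print(parse_wrk_summary(sample))
```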

After I posted this online, one of the Spring developers recommended testing against Spring Boot with Reactor, which is Netty-based with no servlet container. I am very new to this and might have missed something; everyone is welcome to submit a pull request to enhance this project.

Here is the test result.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:3000 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:3000
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.44ms   12.88ms 285.71ms   94.23%
    Req/Sec    61.44k    12.25k   88.29k    79.23%
  Latency Distribution
     50%    4.62ms
     75%    8.11ms
     90%   15.03ms
     99%   42.60ms
  7305649 requests in 30.03s, 536.48MB read
Requests/sec: 243240.17
Transfer/sec:     17.86MB
```

Added a Spark test case; here is the result. It is much better than most frameworks that run inside servlet containers.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:4567 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:4567
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.85ms   30.58ms 687.89ms   96.29%
    Req/Sec    49.10k    16.55k  107.63k    73.41%
  Latency Distribution
     50%    7.28ms
     75%   13.96ms
     90%   24.71ms
     99%  158.21ms
  5855187 requests in 30.10s, 0.95GB read
Requests/sec: 194553.83
Transfer/sec:     32.47MB
```

Added a Jooby test case; here is the result. It is better than Spring Boot, as it uses Netty directly.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.57ms   25.97ms 431.00ms   94.64%
    Req/Sec    30.50k     8.66k   77.57k    75.68%
  Latency Distribution
     50%    9.93ms
     75%   16.49ms
     90%   27.27ms
     99%  123.80ms
  3613008 requests in 30.05s, 392.80MB read
Requests/sec: 120232.86
Transfer/sec:     13.07MB
```

@jknack submitted a pull request for Jooby to switch from Netty to Undertow; here is the updated result.

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.22ms   20.07ms 439.26ms   94.77%
    Req/Sec    32.96k     7.71k   52.00k    78.38%
  Latency Distribution
     50%    9.06ms
     75%   14.98ms
     90%   23.88ms
     99%
```