Stack Overflow | Google Group | Gitter Chat | Subreddit | Youtube Channel | Documentation | Contribution Guide |
Framework | Language | Max Throughput (req/s) | Avg Latency | Transfer/sec |
---|---|---|---|---|
Go FastHttp | Go | 1,396,685.83 | 99.98ms | 167.83MB |
Light-4j | Java | 1,344,512.65 | 2.36ms | 169.25MB |
ActFramework | Java | 945,429.13 | 2.22ms | 136.15MB |
Go Iris | Go | 828,035.66 | 5.77ms | 112.92MB |
Vertx | Java | 803,311.31 | 2.37ms | 98.06MB |
Node-uws | Node/C++ | 589,924.44 | 7.22ms | 28.69MB |
Dotnet | .Net | 486,216.93 | 2.93ms | 57.03MB |
Jooby/Undertow | Java | 362,018.07 | 3.95ms | 47.99MB |
SeedStack-Filter | Java | 343,416.33 | 4.41ms | 51.42MB |
Spring Boot Reactor | Java | 243,240.17 | 7.44ms | 17.86MB |
Pippo-Undertow | Java | 216,254.56 | 9.80ms | 31.35MB |
Spark | Java | 194,553.83 | 13.85ms | 32.47MB |
Pippo-Jetty | Java | 178,055.45 | 15.66ms | 26.83MB |
Play-Java | Java | 177,202.75 | 12.15ms | 21.80MB |
Go HttpRouter | Go | 171,852.31 | 14.12ms | 21.14MB |
Go Http | Go | 170,313.02 | 15.01ms | 20.95MB |
JFinal | Java | 139,467.87 | 11.89ms | 29.79MB |
Akka-Http | Java | 132,157.96 | 12.21ms | 19.54MB |
RatPack | Java | 124,700.70 | 13.45ms | 10.82MB |
Pippo-Tomcat | Java | 103,948.18 | 23.50ms | 15.29MB |
Bootique + Jetty/Jersey | Java | 65,072.20 | 39.08ms | 11.17MB |
SeedStack-Jersey2 | Java | 52,310.11 | 26.88ms | 11.87MB |
Baseio | Java | 50,361.98 | 22.20ms | 6.39MB |
NinjaFramework | Java | 47,956.43 | 55.76ms | 13.67MB |
Play 1 | Java | 44,491.87 | 10.73ms | 18.75MB |
Spring Boot Undertow | Java | 44,260.61 | 38.94ms | 6.42MB |
Nodejs Express | Node | 42,443.34 | 22.30ms | 9.31MB |
Dropwizard | Java | 33,819.90 | 98.78ms | 3.23MB |
Spring Boot Tomcat | Java | 33,086.22 | 82.93ms | 3.98MB |
Node-Loopback | Node | 32,091.95 | 34.42ms | 11.51MB |
Payara Micro | Java | 24,768.69 | 118.86ms | 3.50MB |
WildFly Swarm | Java | 21,541.07 | 59.77ms | 2.83MB |
We use pipeline.lua to send pipelined requests and generate more requests per second; the script is located at microservices-framework-benchmark/pipeline.lua.
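The repository's pipeline.lua is not reproduced here, but wrk pipelining scripts generally follow the shape sketched below: the `init` hook pre-builds `depth` copies of the request (the path and pipeline depth come from the extra arguments after `--`, e.g. `-- / 16`), and `request` returns the whole batch so each socket write carries many pipelined requests. This is a hedged reconstruction under those assumptions, not the exact contents of the repo's script:

```lua
-- Sketch of a wrk pipelining script (assumed shape, not the exact repo file).
-- Usage: wrk -t4 -c128 -d30s <url> -s pipeline.lua --latency -- <path> <depth>
init = function(args)
   local path  = args[1] or "/"
   local depth = tonumber(args[2]) or 1
   local r = {}
   for i = 1, depth do
      -- wrk.format builds a raw HTTP request string for the given method/path
      r[i] = wrk.format("GET", path)
   end
   req = table.concat(r)
end

request = function()
   return req  -- send <depth> requests per write
end
```

Because every write carries `depth` requests, the reported Requests/sec is much higher than with one request per round trip, and the per-request latency distribution becomes less meaningful (note the 0.00us percentiles in some runs below).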
Here is the light-4j server performance, tested with the same command line used for the other frameworks:
```
wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   774.30us    1.77ms  40.31ms   91.51%
    Req/Sec     0.93M    81.46k    1.06M    87.83%
  Latency Distribution
     50%  286.00us
     75%  529.00us
     90%    1.92ms
     99%    0.00us
  110603088 requests in 30.01s, 12.88GB read
Requests/sec: 3685234.33
Transfer/sec:    439.31MB
```
Here is another test for light-4j, pushing more requests per pipeline; at this depth some of the other frameworks return server errors.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 50
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.35ms    2.80ms  54.23ms   89.24%
    Req/Sec   550.10k    64.34k    1.22M    74.54%
  Latency Distribution
     50%    1.56ms
     75%    2.87ms
     90%    5.41ms
     99%   22.17ms
  65803650 requests in 30.10s, 6.50GB read
Requests/sec: 2186203.44
Transfer/sec:    221.00MB
```
Here is the spring-boot-tomcat (tomcat embedded) performance.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    82.93ms  108.77ms    1.58s    89.45%
    Req/Sec     8.40k     3.68k   22.19k    68.54%
  Latency Distribution
     50%   45.66ms
     75%  101.59ms
     90%  197.72ms
     99%  542.87ms
  995431 requests in 30.09s, 119.79MB read
Requests/sec: 33086.22
Transfer/sec:      3.98MB
```
Here is the spring-boot-undertow (undertow embedded) performance.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    38.94ms   39.29ms  456.82ms   89.28%
    Req/Sec    11.21k     4.97k   28.16k    68.14%
  Latency Distribution
     50%   27.58ms
     75%   49.62ms
     90%   80.73ms
     99%  201.87ms
  1331312 requests in 30.08s, 192.98MB read
Requests/sec: 44260.61
Transfer/sec:      6.42MB
```
Basically, light-4j is 44 times faster than Spring Boot with embedded Tomcat in raw performance, just serving "Hello World!".

To get a closer comparison, I have created another project, spring-boot-undertow, with the embedded Undertow servlet container (light-4j uses only Undertow core), and the performance gets a little better: light-4j is about 33 times faster than Spring Boot with embedded Undertow.
Upon requests from the community, I have added Node.js and Golang examples; here are the test results.
Node Express framework. To start the server:

```
cd node-express
npm install
node server.js
```
The test result.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    22.30ms   24.35ms  592.24ms   49.18%
    Req/Sec    10.70k     0.87k   11.95k    94.82%
  Latency Distribution
     50%   47.94ms
     75%    0.00us
     90%    0.00us
     99%    0.00us
  1274289 requests in 30.02s, 279.51MB read
Requests/sec: 42443.34
Transfer/sec:      9.31MB
```
Go Standard Http
To start the server:

```
cd go-http
go run server.go -prefork
```
The test result:

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.01ms   15.35ms  180.11ms   87.10%
    Req/Sec    42.80k     5.46k   62.49k    70.58%
  Latency Distribution
     50%   10.03ms
     75%   19.96ms
     90%   34.55ms
     99%   72.99ms
  5123194 requests in 30.08s, 630.28MB read
Requests/sec: 170313.02
Transfer/sec:     20.95MB
```
Go FastHttp
To start the server:

```
cd go-fasthttp
go run server.go -prefork
```
The test result:

```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    99.98ms  127.12ms  653.72ms   82.12%
    Req/Sec   351.24k    46.23k  525.74k    77.09%
  Latency Distribution
     50%   30.76ms
     75%  175.44ms
     90%  299.14ms
     99%  476.20ms
  41989168 requests in 30.06s, 4.93GB read
Requests/sec: 1396685.83
Transfer/sec:    167.83MB
```
After I posted this online, one of the Spring developers recommended testing against Spring Boot with Reactor, which is Netty-based and runs without a servlet container. I am very new to this and might have missed something; everyone is welcome to submit a pull request to enhance this project.
Here is the test result.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:3000 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:3000
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.44ms   12.88ms  285.71ms   94.23%
    Req/Sec    61.44k    12.25k   88.29k    79.23%
  Latency Distribution
     50%    4.62ms
     75%    8.11ms
     90%   15.03ms
     99%   42.60ms
  7305649 requests in 30.03s, 536.48MB read
Requests/sec: 243240.17
Transfer/sec:     17.86MB
```
Added a Spark test case; here is the result. It is much better than most frameworks that run on servlet containers.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:4567 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:4567
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.85ms   30.58ms  687.89ms   96.29%
    Req/Sec    49.10k    16.55k  107.63k    73.41%
  Latency Distribution
     50%    7.28ms
     75%   13.96ms
     90%   24.71ms
     99%  158.21ms
  5855187 requests in 30.10s, 0.95GB read
Requests/sec: 194553.83
Transfer/sec:     32.47MB
```
Added a Jooby test case; here is the result. It is better than Spring Boot as it uses Netty directly.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.57ms   25.97ms  431.00ms   94.64%
    Req/Sec    30.50k     8.66k   77.57k    75.68%
  Latency Distribution
     50%    9.93ms
     75%   16.49ms
     90%   27.27ms
     99%  123.80ms
  3613008 requests in 30.05s, 392.80MB read
Requests/sec: 120232.86
Transfer/sec:     13.07MB
```
@jknack submitted a pull request for Jooby to switch from Netty to Undertow; here is the updated result.
```
steve@joy:~/tool/wrk$ wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.22ms   20.07ms  439.26ms   94.77%
    Req/Sec    32.96k     7.71k   52.00k    78.38%
  Latency Distribution
     50%    9.06ms
     75%   14.98ms
     90%   23.88ms
     99%
```