I’ve been conducting HTTP/3 performance tests using NGINX 1.27.5 and 1.27.4, and my results differ significantly from those presented in the official blog post:
Thank you for the clarification regarding the MTU and http3_stream_buffer_size settings.
We’ve double-checked our test environment, and the MTU is correctly set to 1500 on both NGINX 1.27.4 and 1.27.5 hosts, as shown below:
# ifconfig | grep mtu
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
The network interface name on both hosts is enp0s3.
The http3_stream_buffer_size directive is also explicitly set to 50m in our nginx.conf.
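For reference, a minimal fragment showing how the directive is set in our config (the server block details here are illustrative; certificate and other directives are omitted):

```nginx
# Illustrative fragment: only http3_stream_buffer_size is the actual
# tested value; the listener setup is a typical HTTP/3 arrangement.
http {
    http3_stream_buffer_size 50m;

    server {
        listen 443 quic reuseport;  # HTTP/3 over QUIC
        listen 443 ssl;             # TCP/TLS fallback
        # ssl_certificate / ssl_certificate_key omitted here
    }
}
```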
Even with these key parameters aligned with the blog post, we are still observing significant performance degradation in 1.27.5 compared to 1.27.4, under both normal and 1% packet loss conditions.
May I ask if there are any other system-level tunings, NGINX directives, or testing tool configurations (e.g., specific gtlsclient flags or QUIC settings) that were used in the original tests but not mentioned in the blog post?
Any additional insights would be greatly appreciated. Thanks again for your support!
Performance Regression Observed in NGINX HTTP/3 After Switching to QUIC Cubic Congestion Control?
We conducted HTTP/3 performance tests across NGINX versions 1.25.5, 1.27.4, 1.27.5, 1.28.0, and 1.29.0 under consistent network emulation conditions shown above. Starting from version 1.27.5, NGINX switched its QUIC congestion control algorithm from reno to cubic. Interestingly, we observed a significant drop in performance in versions using cubic, especially under packet loss conditions. While we are unsure whether this is a coincidence or the root cause, we suggest that future NGINX releases provide a configuration parameter allowing users to select the QUIC congestion control algorithm based on their deployment needs.
(New Test Result for nginx-1.29.1) NGINX HTTP/3 Performance Evaluation
We conducted HTTP/3 performance tests across NGINX versions 1.25.5, 1.27.4, 1.27.5, 1.28.0, 1.29.0, and the newly released 1.29.1 under consistent network emulation conditions. Starting from version 1.27.5, NGINX switched its QUIC congestion control algorithm from reno to cubic. We observed that versions using cubic generally showed lower performance, especially under packet loss conditions. While we cannot confirm whether this is the root cause, we suggest that future NGINX releases provide a configuration parameter allowing users to select the QUIC congestion control algorithm based on their deployment needs.
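One possible shape for such a knob is sketched below. To be clear, this directive name is hypothetical and does not exist in any current NGINX release; it is only meant to illustrate the kind of configuration parameter we are suggesting:

```nginx
# Hypothetical directive, NOT part of any NGINX release:
# syntax: quic_congestion_control reno | cubic;
http {
    quic_congestion_control reno;  # e.g. fall back to reno where cubic regresses
}
```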
Test Results (average of 100 runs per version, in seconds)
| Version      | No Packet Loss (sec) | 1% Packet Loss (sec) |
|--------------|----------------------|----------------------|
| nginx-1.25.5 | 12.329               | 21.339               |
| nginx-1.27.4 | 13.592               | 24.173               |
| nginx-1.27.5 | 27.251               | 221.720              |
| nginx-1.28.0 | 19.456               | 195.452              |
| nginx-1.29.0 | 19.498               | 186.282              |
| nginx-1.29.1 | 20.957               | 187.009              |
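To put the regression in ratio terms: relative to 1.27.4 (the last reno-based version we tested), 1.27.5 is roughly 2x slower with no loss and over 9x slower with 1% loss. A quick check of that arithmetic:

```python
# Average transfer times in seconds, taken from the table above.
results = {
    "nginx-1.27.4": (13.592, 24.173),   # last reno-based release tested
    "nginx-1.27.5": (27.251, 221.720),  # first cubic-based release tested
}

# Slowdown factors of 1.27.5 relative to 1.27.4.
no_loss_ratio = results["nginx-1.27.5"][0] / results["nginx-1.27.4"][0]
loss_ratio = results["nginx-1.27.5"][1] / results["nginx-1.27.4"][1]
print(f"no loss: {no_loss_ratio:.2f}x slower")
print(f"1% loss: {loss_ratio:.2f}x slower")
```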
Test Configuration Summary
- nginx.conf setting: `http3_stream_buffer_size 50m;`
- Network emulation (tc + netem): `tc qdisc add dev lo root netem limit 6000 delay 50ms` and, for the loss scenario, `tc qdisc add dev lo root netem limit 6000 delay 50ms loss 1%`
- Test file size: 47 MB
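For anyone reproducing the setup, the emulation can be applied, verified, and reverted between scenarios like this (run as root; `lo` is the loopback device used in our tests):

```shell
# No-loss scenario: 50 ms delay with a 6000-packet queue on loopback
tc qdisc add dev lo root netem limit 6000 delay 50ms

# Verify the qdisc is in place
tc qdisc show dev lo

# Remove it before switching scenarios
tc qdisc del dev lo root

# 1% packet loss scenario
tc qdisc add dev lo root netem limit 6000 delay 50ms loss 1%
```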
We welcome any feedback or suggestions from the community regarding these findings.