Seeking Clarification on HTTP/3 Performance Differences Between NGINX 1.27.5 and 1.27.4

Hi everyone,

I’ve been conducting HTTP/3 performance tests using NGINX 1.27.5 and 1.27.4, and my results differ significantly from those presented in the official blog post:

Reference:
NGINX Blog – Congestion Control Enhancements for QUIC in NGINX

Test Environment:

  • Isolated network setup
  • HTTP/3 client VM testing against HTTP/3 server VM
  • Server VM specs: 2 vCPU / 3GB RAM / Oracle Linux 8.9
  • Both NGINX versions installed via Pre-Built Packages
  • Tools used: gtlsclient (same as in the blog)
  • nginx.conf includes: http3_stream_buffer_size 50m;
  • Test file size: 47MB (approx. one-tenth of the file size used in the blog)

Network Emulation Parameters:

# Condition 1: 50 ms delay, no loss
tc qdisc add dev lo root netem limit 6000 delay 50ms
# Condition 2: 50 ms delay with 1% loss (run "tc qdisc del dev lo root" first, since a second "add" fails while condition 1 is installed)
tc qdisc add dev lo root netem limit 6000 delay 50ms loss 1%

Test Results (Average of 100 runs per version, in seconds):

Condition        | NGINX 1.27.4 (s) | NGINX 1.27.5 (s)
No packet loss   | 13.59            | 27.25
1% packet loss   | 24.17            | 221.72
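As a rough sanity check on the 1% loss numbers, the classic Mathis model for loss-based congestion control gives a back-of-the-envelope throughput ceiling. This is a generic AIMD approximation, not a model of NGINX's implementation; the 1460-byte MSS (derived from a 1500 MTU) and ~100 ms RTT are assumptions:

```python
import math

def mathis_throughput(mss, rtt, loss):
    """Approximate steady-state throughput (bytes/s) of a loss-based
    AIMD flow, per the Mathis model: rate ~= (MSS / RTT) * C / sqrt(p)."""
    c = math.sqrt(3.0 / 2.0)  # constant for periodic loss
    return (mss / rtt) * c / math.sqrt(loss)

if __name__ == "__main__":
    # Assumed parameters: MSS 1460 bytes, RTT 100 ms, 1% loss
    rate = mathis_throughput(mss=1460, rtt=0.1, loss=0.01)
    size = 47e6  # 47 MB test file
    print(f"model throughput: {rate / 1e3:.0f} kB/s, "
          f"47 MB transfer: {size / rate:.0f} s")
```

Under these assumptions the model predicts a few minutes for a 47 MB file, so transfer times in the hundreds of seconds are not implausible for a conservative loss response at this RTT; the interesting question is why 1.27.4 does so much better.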

Questions:

  • Were there any additional test details not mentioned in the blog?
  • Specific versions or configurations of the testing tools/scripts?
  • Particular load patterns or request types used?
  • Any system-level tuning or additional NGINX parameters?

Any feedback or suggestions would be greatly appreciated. Thanks in advance!


Thanks for your feedback. The important details are MTU 1500 and http3_stream_buffer_size 50m.


Thank you for the clarification regarding the MTU and http3_stream_buffer_size settings.

We’ve double-checked our test environment, and the MTU is correctly set to 1500 on both NGINX 1.27.4 and 1.27.5 hosts, as shown below:

# ifconfig | grep mtu
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

The network interface name on both hosts is enp0s3.

The http3_stream_buffer_size directive is also explicitly set to 50m in our nginx.conf.

Even with these key parameters aligned with the blog post, we still observe significant performance degradation in 1.27.5 compared to 1.27.4, under both no-loss and 1% packet loss conditions.

May I ask if there are any other system-level tunings, NGINX directives, or testing tool configurations (e.g., specific gtlsclient flags or QUIC settings) that were used in the original tests but not mentioned in the blog post?

Any additional insights would be greatly appreciated. Thanks again for your support!

Performance Regression Observed in NGINX HTTP/3 After Switching to QUIC Cubic Congestion Control?

We conducted HTTP/3 performance tests across NGINX versions 1.25.5, 1.27.4, 1.27.5, 1.28.0, and 1.29.0 under the same network emulation conditions described above. Starting with version 1.27.5, NGINX switched its QUIC congestion control algorithm from Reno to CUBIC. We observed a significant drop in performance in the versions using CUBIC, especially under packet loss. While we cannot yet tell whether the algorithm change is a coincidence or the root cause, we suggest that future NGINX releases provide a configuration parameter that lets users select the QUIC congestion control algorithm based on their deployment needs.

Test Results

Version       | No packet loss (s) | 1% packet loss (s)
nginx-1.25.5  | 12.329             | 21.339
nginx-1.27.4  | 13.592             | 24.173
nginx-1.27.5  | 27.251             | 221.720
nginx-1.28.0  | 19.456             | 195.452
nginx-1.29.0  | 19.498             | 186.282
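To illustrate how CUBIC and Reno can diverge after a loss event, here is a toy post-loss congestion-window model using the standard RFC 8312 CUBIC formula W(t) = C(t - K)^3 + W_max alongside a simple Reno-style AIMD ramp. This is an illustrative sketch only (window measured in segments, RFC default constants C = 0.4 and beta = 0.7, 100 ms RTT assumed), not NGINX's implementation:

```python
def reno_cwnd(t, w_loss, rtt):
    """Reno-style recovery: halve the window at the loss event, then grow
    roughly one segment per RTT (window in segments, t in seconds)."""
    return w_loss * 0.5 + t / rtt

def cubic_cwnd(t, w_max, c=0.4, beta=0.7):
    """RFC 8312 CUBIC window t seconds after a loss at window w_max:
    W(t) = C * (t - K)^3 + W_max, with K = cbrt(W_max * (1 - beta) / C)."""
    k = (w_max * (1.0 - beta) / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max

if __name__ == "__main__":
    # Assumptions: loss at a 100-segment window, 100 ms RTT (2 x 50 ms netem delay)
    w_max, rtt = 100.0, 0.1
    for t in (0.0, 1.0, 2.0, 4.0, 8.0):
        print(f"t={t:3.0f}s  reno={reno_cwnd(t, w_max, rtt):7.1f}  "
              f"cubic={cubic_cwnd(t, w_max):7.1f}")
```

In this model CUBIC backs off to beta * W_max and plateaus near W_max before growing again, while Reno halves and climbs linearly. Whether that explains the regression depends on implementation details such as loss detection and recovery, which this sketch does not capture.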

NGINX HTTP/3 Performance Evaluation (new test results for nginx-1.29.1)

We repeated the HTTP/3 performance tests with the newly released nginx-1.29.1, alongside versions 1.25.5 through 1.29.0, under the same network emulation conditions. The 1.29.1 results are consistent with our earlier findings: the versions using CUBIC (1.27.5 onward) generally show lower throughput, especially under packet loss. While we cannot confirm that the congestion control change is the root cause, we again suggest that future NGINX releases provide a configuration parameter allowing users to select the QUIC congestion control algorithm.

Test Results (average of 100 runs per version, in seconds)

Version       | No packet loss (s) | 1% packet loss (s)
nginx-1.25.5  | 12.329             | 21.339
nginx-1.27.4  | 13.592             | 24.173
nginx-1.27.5  | 27.251             | 221.720
nginx-1.28.0  | 19.456             | 195.452
nginx-1.29.0  | 19.498             | 186.282
nginx-1.29.1  | 20.957             | 187.009

Test Configuration Summary

  • nginx.conf setting: http3_stream_buffer_size 50m;

  • Network emulation (tc + netem): tc qdisc add dev lo root netem limit 6000 delay 50ms (no loss) and tc qdisc add dev lo root netem limit 6000 delay 50ms loss 1% (1% loss)

  • Test file size: 47MB
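For anyone trying to reproduce the averages above, a minimal timing harness might look like the following. The client command here is a placeholder, since the exact gtlsclient invocation and flags used in the blog post are not documented:

```python
import statistics
import subprocess
import time

def average_download_time(cmd, runs=100):
    """Run `cmd` `runs` times and return the mean wall-clock time in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

if __name__ == "__main__":
    # Placeholder command -- substitute the actual gtlsclient invocation here.
    cmd = ["true"]
    print(f"average over 100 runs: {average_download_time(cmd):.3f} s")
```

Sharing a harness like this (with the real client command filled in) alongside the results would make it easier for others to compare numbers across environments.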

We welcome any feedback or suggestions from the community regarding these findings.
