Seeking Clarification on HTTP/3 Performance Differences Between NGINX 1.27.5 and 1.27.4

Hi everyone,

I’ve been conducting HTTP/3 performance tests using NGINX 1.27.5 and 1.27.4, and my results differ significantly from those presented in the official blog post:

Reference:
NGINX Blog – Congestion Control Enhancements for QUIC in NGINX

Test Environment:

  • Isolated network setup
  • HTTP/3 client VM testing against HTTP/3 server VM
  • Server VM specs: 2 vCPUs / 3 GB RAM / Oracle Linux 8.9
  • Both NGINX versions installed from the official pre-built packages
  • Tools used: gtlsclient (same as in the blog)
  • nginx.conf includes: http3_stream_buffer_size 50m;
  • Test file size: 47MB (approx. one-tenth of the file size used in the blog)

Network Emulation Parameters:

# Scenario 1: 50 ms delay, no packet loss
tc qdisc add dev lo root netem limit 6000 delay 50ms
# Scenario 2: 50 ms delay plus 1% loss ("replace" rather than a second
# "add", which would fail because a root qdisc is already installed)
tc qdisc replace dev lo root netem limit 6000 delay 50ms loss 1%
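A quick way to confirm the emulation is in effect before each run (netem on `lo` applies the 50 ms delay in both directions, so local RTT should come out at roughly 100 ms):

```shell
tc qdisc show dev lo   # should list the netem qdisc with its parameters
ping -c 3 127.0.0.1    # RTT should reflect the added delay (~100 ms)
```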

Test Results (Average of 100 runs per version, in seconds):

Packet Loss Condition    NGINX 1.27.4 (sec)    NGINX 1.27.5 (sec)
No packet loss           13.59                 27.25
1% packet loss           24.17                 221.72
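For completeness, the timing loop we used is essentially the sketch below. The gtlsclient invocation is an assumption (the blog does not publish its exact flags), so treat the URL and arguments as placeholders:

```shell
#!/bin/sh
# Hypothetical measurement harness; the exact gtlsclient invocation used
# in the blog is not documented, so URL and flags here are placeholders.
URL="https://192.0.2.10/test-47mb.bin"

# Mean of newline-separated numbers on stdin, two decimals.
avg() {
  awk '{ s += $1; n++ } END { if (n) printf "%.2f\n", s / n }'
}

# Wall-clock seconds for a single HTTP/3 download (assumes gtlsclient
# accepts the URL as its argument; adjust to your build's flags).
run_once() {
  start=$(date +%s.%N)
  gtlsclient "$URL" >/dev/null 2>&1
  end=$(date +%s.%N)
  echo "$end - $start" | bc
}

# Run 100 transfers and report the mean, e.g.: sh harness.sh run
if [ "${1:-}" = "run" ]; then
  for i in $(seq 1 100); do run_once; done | avg
fi
```

Invoked once per NGINX version; the single number printed is the mean of the 100 transfer times reported in the table above.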

Questions:

  • Were there any additional test details not mentioned in the blog?
  • Specific versions or configurations of the testing tools/scripts?
  • Particular load patterns or request types used?
  • Any system-level tuning or additional NGINX parameters?

Any feedback or suggestions would be greatly appreciated. Thanks in advance!


Thanks for your feedback. The important details are MTU 1500 and http3_stream_buffer_size 50m.


Thank you for the clarification regarding the MTU and http3_stream_buffer_size settings.

We’ve double-checked our test environment, and the MTU is correctly set to 1500 on both NGINX 1.27.4 and 1.27.5 hosts, as shown below:

# ifconfig | grep mtu
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

The network interface name on both hosts is enp0s3.

The http3_stream_buffer_size directive is also explicitly set to 50m in our nginx.conf.
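For reference, the relevant part of our configuration is, in outline, the minimal sketch below; the listen and certificate lines are placeholders rather than our actual values:

```nginx
server {
    # HTTP/3 over QUIC, plus TCP fallback on the same port
    listen 443 quic reuseport;
    listen 443 ssl;

    ssl_certificate     /path/to/cert.pem;   # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    # Directive called out in the blog post
    http3_stream_buffer_size 50m;

    # Advertise HTTP/3 to clients on the TCP listener
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```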

With these key parameters matching the blog post, we are still seeing a significant regression in 1.27.5 compared to 1.27.4, under both the no-loss and the 1% packet-loss conditions.

May I ask if there are any other system-level tunings, NGINX directives, or testing tool configurations (e.g., specific gtlsclient flags or QUIC settings) that were used in the original tests but not mentioned in the blog post?

Any additional insights would be greatly appreciated. Thanks again for your support!