Please use this template for troubleshooting questions.
My issue:
I’m a beginner who recently started learning about the HTTP protocol. While studying, I encountered something I couldn’t understand, so I’m posting here. To directly experience the difference between HTTP/2 and HTTP/3, I set up NGINX servers using Docker on an AWS instance. I added 1000 emojis and one video to a test page, then used Chrome DevTools’ Network tab to compare loading times. I expected HTTP/3 to be faster, but surprisingly, HTTP/2 consistently performed better. Additionally, under server load, responses would fall back to HTTP/2 instead of HTTP/3, which I don’t fully understand.
How I encountered the problem:
At first, I suspected that the instance specs were the issue, so I upgraded from t2.micro to t2.medium, but the results remained the same. I then changed the cache policy — from disabling caching to enabling browser caching via the Cache-Control header. This slightly improved HTTP/3 performance, but it was still slower than HTTP/2. The issue of HTTP/3 falling back to HTTP/2 under load also persisted.
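For reference, the caching change was along these lines (a sketch only — the file extensions and max-age value here are illustrative, not the exact values from my config):

```nginx
# Illustrative only: enable browser caching for static assets
location ~* \.(png|jpg|svg|gif|mp4|webm)$ {
    expires 7d;
    add_header Cache-Control "public, max-age=604800";
}
```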
Solutions I’ve tried:
Upgraded AWS instance specs
Modified cache policy
Version of NGINX or NGINX adjacent software (e.g. NGINX Gateway Fabric):
Using this custom Docker image:
Hey @ehgkals! Can you share your NGINX config? I had a quick look at the GitHub org you linked and whilst I think I did find your config it would probably be best for debugging and visibility if you share it in here directly.
My contribution to this discussion is going to be more high level, with some info for your consideration. In theory, stream multiplexing in the QUIC transport protocol is supposed to address what I think you're illustrating here, which is head-of-line blocking due to TCP's sequential delivery. However, I have seen several articles testing this and getting inconsistent results, much like what you're seeing.
The reasons they attribute this to vary widely, the most plausible being that HTTP/2 server-side optimizations are more mature than those in HTTP/3. To put this in perspective: Akamai, one of the largest CDN providers in existence, has spent years testing and tuning HTTP/3.
This is a fantastic topic, but it leads me to a clarifying question: are you looking for recommendations on server-side optimizations specifically via NGINX, or something else entirely?
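In the meantime, for anyone following along, a minimal HTTP/3-enabled NGINX server block typically looks something like this (a sketch with placeholder certificate paths — not the actual config from the post):

```nginx
server {
    # HTTP/3 runs over QUIC (UDP); keep TCP listeners for HTTP/2 fallback
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder path

    # Advertise HTTP/3 availability so clients can upgrade on later requests
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Note that because discovery happens via the Alt-Svc header, a browser's first request usually goes over HTTP/2 before it switches to HTTP/3, which is worth keeping in mind when benchmarking.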
Hi, I’m part of the team that ran the HTTP version-specific performance tests with @ehgkals.
We found that HTTP/2 is faster than HTTP/3, as described in the original post. I looked this up and found information suggesting that HTTP/2 is currently better optimized than HTTP/3.
Below is the nginx configuration code:
Hey @bellmin! If you could provide your benchmarking environment details and confirm you are using the NGINX configs provided in the original post we might be able to look further into it!
It’s also worth noting that NGINX’s HTTP/3 implementation is still experimental and is being actively developed and optimized, which might explain some of your results.
Heya! I have shared this topic internally, and one of the main suggestions is to increase the value of http3_stream_buffer_size to a much bigger value, even as big as 10m. There are also some HTTP/3 improvements in the backlog that should help with these use cases.
As I understand it, http3_stream_buffer_size is the size of the temporary buffer that a single HTTP/3 stream uses when passing data to or from the network.
If this value is increased, will it increase the amount of data that is sent at once?
Would this speed up the processing speed when there is a lot of traffic?
Increasing http3_stream_buffer_size should increase the amount of data that can be processed at once, so it should indeed improve processing speed under load.
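Concretely, the suggested change would look something like this in the http context (10m is the experimental value suggested above, not a tuned recommendation — the directive’s default is 64k):

```nginx
http {
    # Per-stream buffer for HTTP/3; a larger value lets NGINX pass more
    # data to/from the QUIC stack at once (experimental suggestion)
    http3_stream_buffer_size 10m;
}
```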