About NGINX HTTP/2 & HTTP/3

Please use this template for troubleshooting questions.

My issue:
I’m a beginner who recently started learning about the HTTP protocol. While studying, I encountered something I couldn’t understand, so I’m posting here. To directly experience the difference between HTTP/2 and HTTP/3, I set up NGINX servers using Docker on an AWS instance. I added 1000 emojis and one video to a test page, then used Chrome DevTools’ Network tab to compare loading times. I expected HTTP/3 to be faster, but surprisingly, HTTP/2 consistently performed better. Additionally, under server load, responses would fall back to HTTP/2 instead of HTTP/3, which I don’t fully understand.

How I encountered the problem:
At first, I suspected that the instance specs were the issue, so I upgraded from t2.micro to t2.medium. However, the results remained the same. I then changed the cache policy—from disabling cache to enabling browser caching using the cache-control header. This slightly improved HTTP/3 performance, but it was still slower than HTTP/2. Also, the issue of HTTP/3 falling back to HTTP/2 under load still persisted.

Solutions I’ve tried:

  • Upgraded AWS instance specs
  • Modified cache policy
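
For context, the cache-policy change amounted to enabling browser caching through a Cache-Control response header in NGINX. A minimal sketch (the max-age value and placement here are illustrative, not necessarily the exact settings used in the tests):

```nginx
server {
    # Enable short-lived browser caching; clients revalidate once max-age expires.
    # The max-age value is illustrative -- tune it per asset type.
    add_header Cache-Control "public, max-age=10, must-revalidate";
}
```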

Version of NGINX or NGINX adjacent software (e.g. NGINX Gateway Fabric):
Using this custom Docker image:

https://hub.docker.com/r/macbre/nginx-http3

Deployment environment:
Server

  • AWS EC2: t2.micro & t2.medium (Ubuntu)
  • Docker: 27.5.1
  • Docker Compose: 1.29.2

Client

  • OS: Windows 11
  • Browser: Chrome

Minimal NGINX config to reproduce your issue
Our config lives in our Git organization (linked).


Hey @ehgkals! Can you share your NGINX config? I had a quick look at the GitHub org you linked and whilst I think I did find your config it would probably be best for debugging and visibility if you share it in here directly.


My contribution to this discussion is going to be more high level, with some information for your consideration. In theory, stream multiplexing in the QUIC transport protocol is supposed to address what I think you're illustrating here, namely head-of-line blocking due to TCP's sequential delivery. However, I have seen several articles that tested this and got inconsistent results, much like what you're seeing.

The reasons they attribute this to vary widely, the most plausible being that HTTP/2 server-side optimizations are more mature than those for HTTP/3. To put this in perspective: Akamai, one of the largest CDN providers in existence, has spent years testing and tuning HTTP/3.

This is a fantastic topic, but it leads me to a question/clarification: are you looking for recommendations for server-side optimizations, specifically via NGINX, or something else entirely?


Hi, I’m part of the team that ran the HTTP version-specific performance tests with @ehgkals.
We found that HTTP/2 was faster than HTTP/3, as described above. Looking into it,
I found information suggesting that HTTP/2 is better optimized than HTTP/3.
Below is the NGINX configuration.

I also used https://hub.docker.com/r/macbre/nginx-http3 for the Docker image.

HTTP/3 NGINX configuration

worker_processes  2;
error_log  /var/log/nginx/error.log warn;
# pid        /var/run/nginx.pid;

events {
    worker_connections 65535;  # connections per worker
    multi_accept on;           # accept multiple connections at once when possible
    use epoll;                 # use epoll on Linux (better performance)
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  quic  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$http3"';

    access_log  /var/log/nginx/access.log  quic;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    # security, reveal less information about ourselves
    server_tokens off; # disables emitting nginx version in error messages and in the “Server” response header field
    more_clear_headers 'Server';
    more_clear_headers 'X-Powered-By';

    # prevent clickjacking attacks
    more_set_headers 'X-Frame-Options: SAMEORIGIN';

    # help to prevent cross-site scripting exploits
    more_set_headers 'X-XSS-Protection: 1; mode=block';

    # help to prevent Cross-Site Scripting (XSS) and data injection attacks
    # https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
    more_set_headers "Content-Security-Policy: object-src 'none'; frame-ancestors 'self'; form-action 'self'; block-all-mixed-content; sandbox allow-forms allow-same-origin allow-scripts allow-popups allow-downloads; base-uri 'self';";

    # enable response compression
    gzip on;

    # https://github.com/google/ngx_brotli#configuration-directives
    brotli on;
    brotli_static on;

    # https://github.com/tokers/zstd-nginx-module#directives
    zstd on;
    zstd_static on;

    # maximum number of requests that can be open concurrently on one QUIC connection
    http3_max_concurrent_streams 1000;

    # stream I/O buffer size
    http3_stream_buffer_size 64k;

    # limit on the number of active connection IDs
    quic_active_connection_id_limit 4;

    # maximum QUIC payload size
    # quic_max_recv_buffer_size 16m;

    server {
        server_name www.httplab.shop;

        # http/3
        listen 443 quic reuseport;
        listen 443 ssl http2;

        ssl_certificate /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;
        ssl_protocols TLSv1.3;
        
        # 0-RTT QUIC connection resumption
        ssl_session_timeout 1d;
        ssl_session_cache   shared:SSL:50m;
        ssl_session_tickets on;
        ssl_early_data on;

        add_header alt-svc 'h3=":30443"; ma=86400'; # Add Alt-Svc header to negotiate HTTP/3.
        add_header Cache-Control "public, max-age=10, must-revalidate";
        add_header QUIC-Status $http3; # Sent when QUIC was used
        add_header Accept-Ranges bytes; # for stream playback

        root /var/www/html;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
        
        # video file mapping
        location /video/ {
            alias /var/www/html/video/;    # /videos/ → /var/www/html/video/
            mp4;                           # enable the mp4 module (optional)
            mp4_buffer_size     1m;
            mp4_max_buffer_size 5m;

            expires 30d;
            add_header Cache-Control "public, immutable";
        }
    }
}


HTTP/2 NGINX configuration

worker_processes  2;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections 65535;  # connections per worker
    multi_accept on;           # accept multiple connections at once when possible
    use epoll;                 # use epoll on Linux (better performance)
}

http {

    include     /etc/nginx/mime.types;
    sendfile    on;

    # maximum number of concurrent streams per connection
    http2_max_concurrent_streams 1000;

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    
    server {
        # http/2.0
        listen 443 ssl http2;
        server_name www.httplab.shop;

        ssl_protocols TLSv1.3;

        ssl_certificate /etc/letsencrypt/live/www.httplab.shop/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.httplab.shop/privkey.pem;

        root /var/www/html;
        index index.html;

        add_header Cache-Control "public, max-age=10, must-revalidate";

        location / {
            try_files $uri $uri/ =404;
        }

        # map requests under /videos/ to /var/www/videos/
        location /videos/ {
            alias /var/www/videos/;       # note: alias takes a directory path and must end with a slash!

            # frame-accurate MP4 streaming (requires ngx_http_mp4_module)
            mp4;
            mp4_buffer_size       1m;
            mp4_max_buffer_size   5m;

            # allow HTTP Range requests (seeking / partial playback)
            add_header  Accept-Ranges  bytes;

            # cache control (browser / CDN)
            expires      30d;
            add_header   Cache-Control  "public, immutable";
        }
    }
}


Thank you for your thoughtful response.

What our team is curious about is why HTTP/2 was always faster than HTTP/3 in our tests.

In the request to fetch 1000 images, HTTP/2 fetched them all in parallel, while HTTP/3 fetched them sequentially, one by one.

We would like to know whether this is simply due to less mature optimization on the server side, or whether our test scenario is flawed.


Thank you for this; your reply makes it much clearer what you're looking to understand. I appreciate the quick and concise response.

It may take a bit to arrive at a thorough response, please bear with us while we look into it.


Hey @bellmin! If you could provide your benchmarking environment details and confirm you are using the NGINX configs provided in the original post we might be able to look further into it!

It’s also worth noting that the HTTP/3 implementation is still experimental and is being actively developed and optimized, which might explain some of your results.


Hello @alessandro. Here is the environment where I ran the HTTP tests.

1. AWS EC2 t2.medium

2. nginx

3. docker

I used the Docker image published at https://hub.docker.com/r/macbre/nginx-http3. Is there any more information you need?

Heya! I have shared this topic internally, and one of the main suggestions is to increase http3_stream_buffer_size to a much bigger value, even as big as 10m. There are also some HTTP/3 improvements in the backlog that should help with these use cases.
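
Applied to the configuration posted earlier in this thread, that suggestion would look roughly like the following (a sketch; 10m is the upper value mentioned above, and the best size for a given workload and memory budget may well be smaller):

```nginx
http {
    # Larger per-stream I/O buffer for HTTP/3 (the config above used 64k).
    # 10m is the upper bound suggested; tune to your workload and memory budget.
    http3_stream_buffer_size 10m;
}
```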


@alessandro

Thanks for the quick answer!

My understanding is that http3_stream_buffer_size is the size of the temporary buffer that a single HTTP/3 stream uses when reading from or writing to the network.

If this value is increased, will more data be sent at once?

Would that speed up processing when there is a lot of traffic?


Increasing http3_stream_buffer_size should increase the amount of data that can be processed at once, so it should indeed improve processing speed.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.