Express server with NGINX reverse proxy: handling 502 when streaming file uploads

My setup & use-case scenario:

I have an Express.js server with NGINX as a reverse proxy. I am working on streaming file uploads, where files are streamed directly to S3. The reason I'm opting for streaming is that I don't want to overload the server.

NGINX server config: proxy_request_buffering is disabled.

On receiving the file buffer while streaming on my server, I have to achieve the following checks:

  1. File size limit (not relying on NGINX's file size limit for this, because the limit is specific to each client using the API)
  2. File type check (I'm accepting application/octet-stream in the Content-Type of the API request, so I accumulate the first few buffers to run a magic-byte check)

Once validated, we will stream the file to S3.
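For context, the two checks could be sketched roughly like this (a minimal sketch with hypothetical helper names; the magic-byte signatures shown cover only PNG, JPEG, and PDF as examples):

```javascript
// Hypothetical helpers for the two streaming checks.
// Only a few example signatures are listed; a real check would cover more types.
const SIGNATURES = [
  { type: 'image/png', bytes: [0x89, 0x50, 0x4e, 0x47] },
  { type: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },
  { type: 'application/pdf', bytes: [0x25, 0x50, 0x44, 0x46] }, // "%PDF"
];

// Returns a detected MIME type from the leading bytes, or null if unknown.
function detectFileType(buf) {
  for (const sig of SIGNATURES) {
    if (buf.length >= sig.bytes.length &&
        sig.bytes.every((b, i) => buf[i] === b)) {
      return sig.type;
    }
  }
  return null;
}

// Accumulates chunk sizes; push() returns false once the limit is exceeded.
class SizeLimiter {
  constructor(limitBytes) {
    this.limit = limitBytes;
    this.total = 0;
  }
  push(chunk) {
    this.total += chunk.length;
    return this.total <= this.limit;
  }
}
```

In the upload handler, each incoming chunk would be fed through `SizeLimiter.push()`, and `detectFileType()` would run once enough leading bytes have been buffered; only then does the stream get piped on to S3.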

**My issue & how I encountered the problem:**

  1. While uploading a 20MB file, I can come up with two scenarios that cause the client to run into a 502 Bad Gateway error before it can receive the erroneous response.
    1. To check the file size limit, I accumulate the sizes of the chunks I receive and check whether my limit is reached; when it is, I throw an error and send a response back to the proxy.
    2. To check the file type, I accumulate bytes to run my magic-byte check; if the file type is not acceptable, I send the same kind of error response back to the proxy/client.

Solutions I’ve tried:

  1. If I don't send back the error response, I have to wait for my server to drain the remaining chunks and only then send the response. However, this won't work for large files with low size limits.
  2. NGINX custom routing for error responses: 502 errors can be caught and served a custom page instead of the default 502 error page. This won't work because we have two checks (could be more later) with different error codes.

Version of NGINX or NGINX adjacent software (e.g. NGINX Gateway Fabric): nginx/1.27.5

Deployment environment: Local

Minimal NGINX config to reproduce your issue (preferably running on https://tech-playground.com/playgrounds/nginx for ease of debugging, and if not as a code block): (Tip → Run nginx -T to print your entire NGINX config to your terminal.)

upstream backends {
    server host.docker.internal:3000;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server host.docker.internal:24678;
}

proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;
large_client_header_buffers 4 16k;

server {
    listen       443 ssl http2;
    server_name  *.local.myclear.io;

    ssl_certificate      /etc/mc/ssl/certs/app.crt;
    ssl_certificate_key  /etc/mc/ssl/certs/private.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    error_log /dev/stdout info;

    location /static/images {
        root /usr/share/nginx/html;
    }

    location / {
        proxy_read_timeout 90s;
        proxy_pass http://backends;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_request_buffering off;
        client_max_body_size 0;
    }

}

NGINX access/error log: (Tip → You can usually find the logs in the /var/log/nginx directory.)

nginx-1    | 2025/12/18 09:50:17 [error] 22#22: *35 writev() failed (32: Broken pipe) while sending request to upstream, client: 172.21.0.1, server: , request: "POST /upload HTTP/1.1", upstream: "http://172.21.0.2:3000/upload", host: "localhost"

Hi @aayushmau5,
I suggest reading about this directive: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size. This error can happen when NGINX is unable to create a temporary file for the request body.
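For example, the buffer size and the temp-file location can be tuned per location (the values and paths below are illustrative only, adjust to your workload):

```nginx
location /upload {
    # Buffer this much of the body in memory before spilling to a temp file.
    client_body_buffer_size 1m;
    # Where NGINX writes the temp files; the worker must be able to write here.
    client_body_temp_path /var/cache/nginx/client_temp;
    proxy_pass http://backends;
}
```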

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.