[HTTP Upgraded Connection] : Memory not getting released after connection termination

My issue:
We are building an app with a front-facing HTTP server that performs a proxy pass to a STREAM server, thereby delivering a stream service over HTTP, much like a WebSocket connection (it will be a long-lived upgraded connection).
The actual Problem: Over a period of time we see memory build up as clients connect/reconnect, and even after these clients disconnect, we do not see the memory being freed by the nginx worker processes. Are there any known caveats when using an HTTP server in front of a STREAM server?

How I encountered the problem:
Upon multiple client connects/reconnects we observed memory build-up in the nginx worker processes, even after these clients disconnected.

Solutions I’ve tried:
We need to know if this is a known issue, and if not, how to debug and track the memory being held by the HTTP and STREAM servers.
What are the best practices to follow to debug and overcome this problem?

Version of NGINX or NGINX adjacent software (e.g. NGINX Gateway Fabric):
1.26.3

Deployment environment:
CentOS 7.9

Minimal NGINX config to reproduce your issue (preferably running on https://tech-playground.com/playgrounds/nginx for ease of debugging, and if not as a code block): (Tip → Run nginx -T to print your entire NGINX config to your terminal.)

Below is the snippet of config we are using:

http {
    server {
    ...
        location /my-stream {
            proxy_pass http://$my_upstream:7890;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Client-IP $remote_addr;
        }
    }
}

stream {
    server {
        # Listen for connection from nginx https server
        listen 7890 reuseport;
        listen [::]:7890 reuseport;
        proxy_pass $upstream_server;
    }
}

NGINX access/error log: (Tip → You can usually find the logs in the /var/log/nginx directory.)

We captured the logs below, corresponding to memory operations, from the nginx debug log for the use case where we connect and then disconnect a client.

When client connected:
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59284410:1024
2025/07/09 04:59:17 [debug] 1041#0: *32 free: 59284410
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59284410:1024
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59294F40:4096
2025/07/09 04:59:17 [debug] 1041#0: *32 free: 59294F40
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59294CE8:4096
2025/07/09 04:59:17 [debug] 1041#0: *32 free: 59294CE8
2025/07/09 04:59:17 [debug] 1041#0: *32 free: 59294E20, unused: 88
2025/07/09 04:59:17 [debug] 1041#0: *35 malloc: 59297FA8:16384
2025/07/09 04:59:17 [debug] 1041#0: *36 malloc: 5929BFB0:16384
2025/07/09 04:59:17 [debug] 1041#0: *35 free: 59294B80, unused: 216
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59297FA8:4096
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 5929FFB8:16384
2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 5929A738:4096

When client disconnected:
2025/07/09 04:59:37 [debug] 1041#0: *36 free: 59295250, unused: 216
2025/07/09 04:59:37 [debug] 1041#0: *32 free: 592942B0, unused: 88

After about 1-2 minutes:
2025/07/09 05:01:37 [debug] 1041#0: slab free: F0071000
2025/07/09 05:01:37 [debug] 1041#0: slab free: F0071100
2025/07/09 05:01:37 [debug] 1041#0: *37 malloc: 592A79F0:1024
2025/07/09 05:01:37 [debug] 1041#0: *37 malloc: 592A89B8:16384
2025/07/09 05:01:37 [debug] 1041#0: *37 free: 592A5340, unused: 0
2025/07/09 05:01:37 [debug] 1041#0: *37 free: 5929B740, unused: 3308
2025/07/09 05:01:37 [debug] 1041#0: *37 free: 592A89B8
2025/07/09 05:01:37 [debug] 1041#0: *37 free: 592A79F0
2025/07/09 05:01:37 [debug] 1041#0: *37 free: 592950B0, unused: 24
2025/07/09 05:01:37 [debug] 1041#0: *37 free: 5928EF10, unused: 136
2025/07/09 05:01:37 [debug] 1041#0: *38 malloc: 592A7670:1024
2025/07/09 05:01:37 [debug] 1041#0: *38 malloc: 592AA0D8:4096
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592AA0D8
2025/07/09 05:01:37 [debug] 1041#0: *38 malloc: 592AC0E8:16384
2025/07/09 05:01:37 [debug] 1041#0: *38 malloc: 592B00F0:4096
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592B00F0
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 00000000
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592A5340, unused: 4
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592AB0E0, unused: 1760
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592B1110, unused: 2032
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592A7670
2025/07/09 05:01:37 [debug] 1041#0: *38 hc free: 00000000
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592AC0E8
2025/07/09 05:01:37 [debug] 1041#0: *38 malloc: 592A7670:1024
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592A7670
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 00000000
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 592950B0, unused: 24
2025/07/09 05:01:37 [debug] 1041#0: *38 free: 5928F420, unused: 136
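For excerpts like the ones above it is easy to miss a leak by eye across thousands of reconnects, so one way to cross-check is a short script over the full debug log. This is only a sketch (`unmatched_mallocs` is a hypothetical helper): it tracks nothing but nginx's own `malloc: ADDR:SIZE` / `free: ADDR` debug events, so worker RSS can still stay high even when every malloc here is matched, because glibc may keep freed memory cached in its arenas.

```python
import re

# Cross-check nginx debug-log "malloc: ADDR:SIZE" / "free: ADDR" events
# and report allocations that never see a matching free.
MALLOC = re.compile(r"malloc: ([0-9A-F]+):(\d+)")
FREE = re.compile(r"free: ([0-9A-F]+)")

def unmatched_mallocs(lines):
    live = {}  # address -> size of the most recent allocation there
    for line in lines:
        m = MALLOC.search(line)
        if m:
            live[m.group(1)] = int(m.group(2))
            continue
        f = FREE.search(line)
        if f:
            live.pop(f.group(1), None)  # addresses get reused, so pop quietly
    return live

sample = [
    "2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59284410:1024",
    "2025/07/09 04:59:17 [debug] 1041#0: *32 free: 59284410",
    "2025/07/09 04:59:17 [debug] 1041#0: *32 malloc: 59294F40:4096",
]
print(unmatched_mallocs(sample))  # {'59294F40': 4096}
```

If every allocation is matched over a full connect/disconnect cycle but RSS still grows, that would point at allocator caching rather than an nginx-level leak.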


My issue:
We are building an app with a front-facing HTTP server that performs a proxy pass to a STREAM server, thereby delivering a stream service over HTTP, much like a WebSocket connection (it will be a long-lived upgraded connection).

The actual Problem:
When the client disconnects, we observe that the socket the HTTP server holds towards the client stays in the CLOSE_WAIT state for a long time, while the sockets between the HTTP server and the STREAM server are closed.

How I encountered the problem:
Once the client is connected to the HTTP server and then initiates a disconnect towards it, the disconnect is a control packet meant for the STREAM server.
The HTTP server receives that packet and proxy-passes it to the STREAM server, which understands it and triggers a connection close towards the HTTP server. The HTTP server then also closes its socket towards the STREAM server.
But the HTTP server does not close the connection towards the client, so that socket remains in the CLOSE_WAIT state.

Solutions I’ve tried:
We tried the nginx config directive proxy_ignore_client_abort on;, which did not help.

Version of NGINX or NGINX adjacent software (e.g. NGINX Gateway Fabric):
1.26.3

Deployment environment:
CentOS 7.9

Minimal NGINX config to reproduce your issue (preferably running on https://tech-playground.com/playgrounds/nginx for ease of debugging, and if not as a code block): (Tip → Run nginx -T to print your entire NGINX config to your terminal.)
Below is the snippet of config we are using:

http {
    server {
    …
        location /my-stream {
            proxy_pass http://$my_upstream:7890;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Client-IP $remote_addr;
        }
    }
}

stream {
    server {
        # Listen for connection from nginx https server
        listen 7890 reuseport;
        listen [::]:7890 reuseport;
        proxy_pass $upstream_server;
    }
}

NGINX access/error log: (Tip → You can usually find the logs in the /var/log/nginx directory.)
Below is the strace output we captured for this problem:

On the HTTP server:
08:29:36.026292 accept4(13, {sa_family=AF_INET, sin_port=htons(64949), sin_addr=inet_addr("10.34.10.29")}, [112->16], SOCK_NONBLOCK) = 8
Hence, fd 8 is assigned to the client connection.
 
 
08:29:36.056501 socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 22
08:29:36.056574 write(3, "2025/07/08 08:29:36 [debug] 11871#0: *1 stream socket 22\n", 57) = 57
08:29:36.056628 ioctl(22, FIONBIO, [1]) = 0
08:29:36.056677 write(3, "2025/07/08 08:29:36 [debug] 11871#0: *1 epoll add connection: fd:22 ev:80002005\n", 80) = 80
08:29:36.056717 epoll_ctl(18, EPOLL_CTL_ADD, 22, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=4010071104, u64=18441883927087207488}}) = 0
08:29:36.056762 write(3, "2025/07/08 08:29:36 [debug] 11871#0: *1 connect to 10.34.11.44:7890, fd:22 #2\n", 78) = 78
...
08:29:36.057875 getsockname(22, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("0.0.0.0")}, [16]) = 0
08:29:36.057929 bind(22, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("10.34.11.44")}, 16) = 0
08:29:36.057991 connect(22, {sa_family=AF_INET, sin_port=htons(7890), sin_addr=inet_addr("10.34.11.44")}, 16) = -1 EINPROGRESS (Operation now in progress)
Hence, fd 22 opens to stream server
 
 
So on HTTP server:
fd 8: client <-> nginx
fd 22: nginx <-> stream server (localhost:7890)
 
------------------------------
 
On the Stream server:
08:29:36.061303 write(3, "2025/07/08 08:29:36 [debug] 11871#0: accept on 0.0.0.0:7890, ready: 0\n", 70) = 70
08:29:36.061342 accept4(15, {sa_family=AF_INET, sin_port=htons(50893), sin_addr=inet_addr("10.34.11.44")}, [112->16], SOCK_NONBLOCK) = 23
 
 
accept4(15, {sa_family=AF_INET, sin_port=htons(44343), sin_addr=inet_addr("10.34.11.44")}, [112->16], SOCK_NONBLOCK) = 26
...
getsockname(26, {sa_family=AF_INET, sin_port=htons(7890), sin_addr=inet_addr("10.34.11.44")}, [16]) = 0
...
tcp_nodelay
setsockopt(26, SOL_TCP, TCP_NODELAY, [1], 4) = 0
...
send(26, ...)
recv(26, ...)
 
So on stream server:
fd 23: HTTP server<-> stream server (i.e., stream server's view of the socket corresponding to HTTP server's fd 22)
fd 26: stream server <-> backend resource
 
 
Now when close is triggered, we see below logs:
 
08:29:36.066227 write(3, "finalize stream session: 200\n", 69) = 69
08:29:36.066270 write(3, "stream log handler\n", 59) = 59
08:29:36.066309 write(3, "close stream connection: 23\n", 68) = 68
08:29:36.066348 close(23) = 0
 
...and later...
 
08:29:40.008809 write(3, "close stream connection: 26\n", 68) = 68
08:29:40.008853 close(26) = 0
 
Hence, the stream side (fd 23 and fd 26) is closed.
 
Now on HTTP server side (fd 22 and fd 8):
08:29:40.009395 write(3, "close http upstream connection: 22\n", 75) = 75
08:29:40.009463 close(22) = 0
 
Hence, nginx (the HTTP server) closed its fd towards the stream server.
 
There is NO close on fd 8, which is towards the client.
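For reference, the state seen on fd 8 is easy to reproduce in isolation: a socket whose peer has sent FIN, but which the application never close()s, sits in CLOSE_WAIT indefinitely. A minimal sketch with plain Python sockets on Linux (nothing nginx-specific; state codes read from /proc/net/tcp):

```python
import socket
import time

# Reproduce the state in isolation: the peer sends FIN, but the holder of
# the socket never calls close(), so the kernel parks it in CLOSE_WAIT.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

local_port = conn.getsockname()[1]
remote_port = conn.getpeername()[1]

cli.close()                    # peer sends FIN
time.sleep(0.2)
assert conn.recv(1024) == b""  # EOF is delivered to the application...

# ...but until conn.close() runs, Linux reports the socket as state 08
# (CLOSE_WAIT) in /proc/net/tcp.
state = None
with open("/proc/net/tcp") as f:
    for line in f.readlines()[1:]:
        fields = line.split()
        if (int(fields[1].split(":")[1], 16) == local_port
                and int(fields[2].split(":")[1], 16) == remote_port):
            state = fields[3]
print(state)  # 08 == CLOSE_WAIT
```

If nginx were acting on the client's FIN it would close fd 8 promptly, so the interesting question is why the read event on fd 8 is not being acted on for the upgraded connection.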

Related Open ticket
We have opened a separate ticket on the same setup that covers the memory build-up issue:


Heya! I’m merging both topics since I suspect the core issue might be the same.

Per your comments, seems like memory starts to be freed up after 1-2 mins and you have some pending connections between the client and NGINX that are not getting promptly closed. Could you share your entire NGINX config? Do you define an upstream block anywhere in your config? There are some default NGINX keepalive settings that by default make connections last 60s.