NGINX does not drop connections on bad requests with empty host headers

My issue:
I have configured a default server for port 80 containing only a “return 444;” statement. I expected nginx to close every connection matched by this server, but it doesn’t: if the client sends a bad request, nginx returns the bad request page instead of dropping the connection.

How I encountered the problem:
The logs show mostly 400 responses and only a few 444s. The correct server block does seem to be matched, since the requests are logged to its dedicated log file.

Solutions I’ve tried:
I have the return statement directly in the server block, and I’ve also tried it inside a location / block, but it still does not trigger as expected.
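
For reference, the location variant looked roughly like this (just a sketch, not my exact file):

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;

        location / {
            # same intent: drop the connection for anything that matches
            return 444;
        }
    }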

My config:

http {
    include       mime.types;
    default_type  application/octet-stream;
    index         index.html index.htm;

    log_format  main  '$host $remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/httpblock-default.log  main;

    sendfile           on;
    tcp_nopush         on;
    keepalive_timeout  65;
    server_tokens      off;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        access_log   logs/server80-default.log  main;

        return 444;
    }

    server {
        listen          80;
        listen          [::]:80;
        server_name     example.com www.example.com;
        access_log      logs/server80-domain.log  main;

        return 301 https://$host$request_uri;
    }
}

Log examples:

# Remove host: header from request
curl -v -H 'Host:' http://127.0.0.1
# Result:
_ 127.0.0.1 - - [16/Apr/2025:12:48:59 +0100] "GET / HTTP/1.1" 400 150 "-" "curl/8.10.1" "-"

# Set blank host: header on request
curl -v -H 'Host: ' http://127.0.0.1
# Result (user agent suddenly not logged):
_ 127.0.0.1 - - [16/Apr/2025:12:49:08 +0100] "GET / HTTP/1.1" 400 150 "-" "-" "-"

# Requesting a host that doesn't exist works as expected:
curl -v -H 'Host: wrong-host' http://127.0.0.1
# Result ok:
wrong-host 127.0.0.1 - - [16/Apr/2025:12:49:31 +0100] "GET / HTTP/1.1" 444 0 "-" "curl/8.10.1" "-"

Can you try the example listed here:

server {
    listen      80;
    server_name "";
    return      444;
}

Thanks for the reply! I tried just now, and it did not seem to have any effect unfortunately.

I tried with this setup first:

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        access_log   logs/server80-default.log  main;

        return 444;
    }

    server {
        listen      80;
        server_name "";
        access_log  logs/server80-nohost.log  main;
        return      444;
    }

curl -v -H 'Host:' http://127.0.0.1 was matched by the second server block according to the logs, but returned the 400 error as before:
127.0.0.1 - - [28/Apr/2025:20:29:17 +0100] "GET / HTTP/1.1" 400 150 "-" "curl/8.10.1" "-"

Sending a blank Host header with curl -v -H 'Host: ' http://127.0.0.1 matched the default server block and also returned the 400 error:
_ 127.0.0.1 - - [28/Apr/2025:20:31:35 +0100] "GET / HTTP/1.1" 400 150 "-" "-" "-"

After removing the default server block, both requests are matched by the nohost server, but they still return 400 errors as before:

 127.0.0.1 - - [28/Apr/2025:20:35:14 +0100] "GET / HTTP/1.1" 400 150 "-" "curl/8.10.1" "-"
 127.0.0.1 - - [28/Apr/2025:20:35:32 +0100] "GET / HTTP/1.1" 400 150 "-" "-" "-"

Heya! This behaviour is due to NGINX adhering to the HTTP/1.1 spec, RFC 7230 (Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing). Section 5.4 of that RFC requires a server to respond with 400 (Bad Request) to any HTTP/1.1 request that lacks a Host header field or has an invalid one. So even if you try to force a different behaviour in the config, NGINX will still adhere to the actual spec.

Thanks for the reply! I reported this in general, but only added examples manipulating Host: headers as that was easy to reproduce with curl.

I see various other invalid requests where nginx disregards the “return 444” and responds with http 400 bad request. For instance, what I presume to be bots sending binary data to the port nginx listens on:

[15/Jul/2025:21:30:12 +0000] "\x12\xEA\x8E8\x97\xB9a\x84\xDF\xC8Z\xBE\xA8\xA1b\x06Ez\xEDp\x1Ek/|\xBF\x06nY#\xA5\xC2\xC8\x12\xF3mV\x98\x11\xEE\x88 B\xBA9\xDF\xEE\xDD\xC4\xDB\xBD\xA1d,\xB9a5\xA8#\xEA\x16\xB2\xE8o\x1AG\xA1\x8F\xDE\xC2\xB0an\x9EM\x04\xFB\xE4\x8Ej(P\xD0<\x7F\xCFb" 400 150 "-" "-" "-"

I really would like nginx to just drop this connection as it obviously isn’t an http request, and I thought the “return 444” could be used for that.

Given the strict adherence to various RFCs, I would assume that any case where you get a 400 by default, no matter what, is probably due to some official spec :slightly_smiling_face:

Hmm, ok. Then I guess I didn’t understand the purpose of the “return 444”. Given a valid HTTP request, valid host and so on, I’m happy to return 403, 404 or similar. But if the request isn’t valid, I thought it would make the most sense to just drop the connection and not talk to the client any further, since they don’t “speak HTTP” properly.

But it seems that only when I get a valid request can I choose to drop the connection, while for an invalid request there will always be a forced “400 Bad Request” response.

It just seems a little backwards to me.

I can’t speak for all scenarios with invalid HTTP requests, since there are more RFCs/specs out there than I have the bandwidth to dig through, I am not one of the core NGINX developers, and I am not that familiar with the actual NGINX code base. However, based on what I have been able to find, it seems most RFCs mandate a 400 Bad Request response if an HTTP request is invalid for whatever reason. Any web server that allows you to do something different would therefore be non-compliant with the official RFCs.

I understand that nginx is an HTTP server, but having an open port on the internet means it can receive random stuff that is not HTTP, which mine does, and quite a lot of it. So my thinking was that it should not reply with HTTP when what it receives isn’t HTTP, but I guess that might be hard to detect.

In any case, what started this issue is that I tried a variant of the configuration listed in the documentation here:

After the reply from a colleague of yours I tried the example exactly as specified, and it still did not work, and now you say it shouldn’t work according to the RFCs. I guess that example should be removed from the documentation then, since it isn’t in line with what is actually possible to achieve?

That example actually covers requests without a host header. Check this example to see it in action: https://tech-playground.com/snippet/hopeful-versatile-sponge/.

Clarification edit: The example listed in how_to_prevent_undefined_server_names works as intended when using HTTP 0.9/1.0.

Interesting site, thanks for sharing!

However, if you add “-v” to the curl command you will see that curl sends a Host: header with the IP address as its value, and in that scenario nginx closes the connection as expected.

Please see the first curl example in my original post. That command removes the Host: header completely, and it doesn’t work as you describe but returns the bad request page, which is not in line with the documentation.

Ah, my bad! I’ll dig a bit more into it.

I am back with some info! The example works as intended when using HTTP < 1.1. E.g. if you use HTTP 1.0 the config will work as expected (see https://tech-playground.com/snippet/hopeful-versatile-sponge/). I would suggest opening an issue in the nginx.org docs repo to get the wording slightly adjusted :slightly_smiling_face:
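
For example, something along these lines should show the connection being closed without any response when the request is forced to HTTP/1.0 (the exact curl output may vary by version):

# Force HTTP/1.0 and remove the Host: header entirely
curl -v --http1.0 -H 'Host:' http://127.0.0.1
# Expected: no 400 page; curl should report something like
# curl: (52) Empty reply from server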

Hmm, okay. This has quite a narrow use case then, or at least a lot narrower than I thought and hoped when I found it. Even in the return statement documentation there is no mention of any ifs and buts: Module ngx_http_rewrite_module

“Stops processing and returns the specified code to a client. The non-standard code 444 closes a connection without sending a response header.”

I think I can be excused for thinking this would close all connections immediately, as it says “stops processing”. I read that as “if this server block matches, close the connection, end”, while in reality there seems to be a lot of processing going on before the return statement is even evaluated, so it might never actually be reached.

My experiments made me wonder whether the return 444 statement was working as intended, as the documentation seemed quite straightforward: “if matched, close the connection”, or at least that is how I read it.

This is not the biggest issue and not the end of the world; it just stumped me for a bit. I see your activity here, so I want to take the opportunity to thank you for your time, responses and effort! Others might simply have responded with “it is what it is, accept it or move on”, but you took it seriously, dug in and provided answers, so thank you for that!

I’ll see about creating an issue for the documentation to have it clarified a little.


Thank you for the kind words! I am happy I was able to clarify this issue for you and hopefully any future NGINX users that run into this or a similar issue/question :blush:

If you do end up opening an issue, let me know and I can try to help get the right folks to take a look at it!


I actually found the solution! Experimenting with error_page to set up custom error pages, I came across this gem:

error_page 404 =200 /empty.gif;

This maps one response code to another; in this example, instead of returning 404 Not Found, nginx returns 200 OK with /empty.gif as the content.

Changing to

error_page 400 =444 /nonexistent.html;

makes nginx drop the connection instead of returning the 400 bad request page, which is what I’ve been after all along!

I don’t know how strict the RFCs you referenced are, but I guess this means nginx can be coerced into breaking compliance with them after all.

You will have to set up one error_page line for each error code the clients manage to trigger. So far I’ve seen them trigger 405 and 406 as well, so my full, working example for HTTP is this:

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        access_log   logs/server80-default.log  main;

        error_page 400 =444 /nonexistent.html;
        error_page 405 =444 /nonexistent.html;
        error_page 406 =444 /nonexistent.html;

        return 444;
    }
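
With that in place, re-running the earlier test should show the connection being dropped instead of the 400 page (roughly; the exact output depends on the curl version):

# Same request that previously got the 400 bad request page
curl -v -H 'Host:' http://127.0.0.1
# Now curl reports something like:
# curl: (52) Empty reply from server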

Then for HTTPS, I found I can close the connection when a client sends a plain HTTP request to the HTTPS port by mapping code 497, which is documented on the Wikipedia page “List of HTTP status codes”.
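
As a rough sketch, my HTTPS default server now looks something like this (the certificate paths and log name are just placeholders for my self-signed-cert setup):

    server {
        listen       443 ssl default_server;
        listen       [::]:443 ssl default_server;
        server_name  _;
        access_log   logs/server443-default.log  main;

        # Placeholder paths for the self-signed certificate
        ssl_certificate      certs/selfsigned.crt;
        ssl_certificate_key  certs/selfsigned.key;

        error_page 400 =444 /nonexistent.html;
        # 497: "plain HTTP request sent to the HTTPS port"
        error_page 497 =444 /nonexistent.html;

        return 444;
    }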

So this is how to make nginx drop the connection on bad requests.

Oh hey, looks like you found a workaround! The RFCs are quite strict, which is why NGINX returns a 400 error code when there is no Host header. But as you figured out, you can then tell NGINX to treat a 400 as a 444 instead :upside_down_face:

FWIW, after some internal conversations the consensus seems to be that it might be better to return something like a 400 Bad Request than to drop the connection entirely with a 444. The reasoning is that most clients nowadays will automatically try to reconnect if the connection drops, whereas a 400 Bad Request is treated as a completed exchange and will not cause the client to retry. This obviously depends on your environment and clients, so dropping connections might still make sense for you – I figured I’d let you know just in case!

I think we may be talking about slightly different things here. To clarify, I reported this issue with the title “Nginx does not drop connections on bad requests”. I meant that in general, for any bad request received. The only examples I could easily reproduce at the time involved the missing Host header, and I guess that is why you changed the title of my post? I’ll take note of that and try to provide a better variety of examples should I ask about another issue here.

If you take a look at the configuration I provided in the first post, the return 444 is in the default_server block. Below it I have an actual domain server block which redirects to HTTPS. For HTTPS I have a similar setup: one default_server with return 444 and a self-signed certificate, and then the actual servers for the domains I am hosting, with real certificates, that serve content. I didn’t include those as I didn’t think it necessary.

With the setup described above, people using clients like web browsers, curl or whatever will make a request based on a domain; that is what they type into their client. The client then does a DNS lookup, finds my IP, connects, and sends the domain in the Host: header. NGINX matches them to the corresponding server block, and all is fine. I am confident you know all of this, probably better than me; I just include it to make sure we are on the same page.

Those that match the default_server block, however, will be bots connecting directly to my IP, probing to see what is there. They are not people or “real” clients, but various kinds of scanners and possibly bad actors. I base that on the fact that they don’t send any of the domains I host in the Host header, but rather variants of my IP, 0.0.0.0, 127.0.0.1, no Host header at all, and so on. I also provided an example of random binary data sent to my server, which isn’t even an HTTP request. I don’t know what it is, whether they are trying to exploit something to break in, crash the server or what, but it is just binary, not text, so not a valid HTTP request.

I consider it highly unlikely that a person enters my IP into their browser, decides to continue despite the “no SSL” warning browsers now show very clearly, and is then surprised and retries when there isn’t a response to the request.

So I quite strongly disagree with your consensus that a 400 bad request page should be returned to these bots. They don’t care about that. They persist, and I think especially so when they get a response. Then they know something is there, and they continue probing.

My experience with this comes from another server running other software. While I returned 403 Forbidden to these requests for months and months, nothing seemed to change; they kept probing for various domains and whatnot. Then I started dropping connections, and eventually the bots probing for domains at least stopped trying. I do still get the binary rubbish sent to the server, CONNECT attempts looking for open proxies and so on, and I see absolutely no reason to respond to that; again, they don’t care, and a bad request page doesn’t deter them.

So I disagree, and actually think responding to these bots’ rubbish connections is the wrong approach.

Actual clients will request by domain and get the content if it is there; the rest of the connections are, I think, best just dropped.

I am happy I can finally do that with NGINX now. :slight_smile:


And that’s why I said the feedback might depend on your environment/setup! Thanks for going in depth on your setup! I am sure future users checking this topic will find it useful :blush:
