`limit_req` directive is not allowed inside an if condition under `server` block

Hi Team,

Getting below error when I start nginx,

bash-3.2$ docker compose up
running [docker compose up]
/Users/e059244/IdeaProjects/e059244/nginx-proxy/nginx-java-script-rate-limiter-solution
WARN[0000] Found orphan containers ([nginx-java-script-rate-limiter-solution-email-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 3/3
✔ Network nginx-java-script-rate-limiter-solution_default C… 0.0s
✔ Container nginx-java-script-rate-limiter-solution-nginx-1 Created 0.0s
✔ Container nginx-java-script-rate-limiter-solution-hello-1 Created 0.0s
Attaching to hello-1, nginx-1
hello-1 | 2025/09/15 17:47:03 [INFO] server is listening on :5678
nginx-1 | 2025/09/15 17:47:03 [emerg] 1#1: "limit_req" directive is not allowed here in /etc/nginx/nginx.conf:137
nginx-1 | nginx: [emerg] "limit_req" directive is not allowed here in /etc/nginx/nginx.conf:137
nginx-1 exited with code 1

nginx.conf file content:

#In the main context of the nginx.conf file (outside of http, server, or location blocks),
#add the load_module directive, specifying the path to your module's shared object file.
#You can use a relative path if the module is in the default modules directory, or a full path.
load_module modules/ngx_http_js_module.so;

#The worker_processes directive in Nginx controls the number of worker processes that Nginx will spawn to handle
#incoming client connections and requests
worker_processes  auto;

#The error_log directive is typically placed in the nginx.conf file and can be specified in various contexts: main,
#http, stream, server, and location. Settings in lower-level contexts override those inherited from higher levels.
error_log  /var/log/nginx/error.log notice;

#NGINX operates with a master process and multiple worker processes. The PID file specifically stores the ID of the
#master process, which is responsible for managing the worker processes and handling signals
#(e.g., for graceful shutdown, reloading configuration).
pid        /var/run/nginx.pid;

#In Nginx, the "events" context is a crucial part of the configuration that defines how Nginx handles connections
#and manages its worker processes. It is a top-level context within the nginx.conf file, residing within the "main"
#context.
events {
    #The worker_connections directive in NGINX specifies the maximum number of simultaneous connections that
    #a single NGINX worker process can handle. This limit includes all types of connections, such as connections
    #with clients, proxied servers, and other backend services.
    worker_connections  1024;
}

http {
    #The include directive in NGINX is used to incorporate content from other configuration files into the main
    #nginx.conf file or other configuration contexts. This promotes modularity and organization within the NGINX
    #configuration, making it easier to manage large or complex setups.
    include       /etc/nginx/mime.types;

    #Defines the default MIME type of a response. Mapping of file name extensions to MIME types can be set
    #with the types directive.
    default_type  application/octet-stream;

    #Specifies log format
    log_format client_dn_log '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent" '
                             '$ssl_client_s_dn';

    #Sets the path, format, and configuration for a buffered log write. Several logs can be specified
    #on the same configuration level
    access_log  /var/log/nginx/access.log client_dn_log;

    #In this configuration, sendfile() is called with the SF_NODISKIO flag which causes it not to block on disk I/O,
    #but, instead, report back that the data are not in memory. nginx then initiates an asynchronous data load by
    #reading one byte
    sendfile        on;

    #The first parameter sets a timeout during which a keep-alive client connection will stay open on the server side.
    #The zero value disables keep-alive client connections. The optional second parameter sets a value in the
    #“Keep-Alive: timeout=time” response header field.
    keepalive_timeout  65;

    #Nginx java script import
    js_path "/etc/nginx/njs/";
    js_import utils.js;
    js_import appNameFinder from http/findAppName.js;
    js_import appZoneNameFinder from http/findAppZoneName.js;

    #js_set $rate main.findAppName;
    #js_var $rate "1r/s";

    # The ngx_http_limit_req_module module (0.7.21) is used to limit the request processing rate per a defined key,
    # in particular, the processing rate of requests coming from a single IP address. The limitation is done using the
    # “leaky bucket” method. (https://nginx.org/en/docs/http/ngx_http_limit_req_module.html)
    # Default rate limiting configurations
    # !!! dynamic variables are not allowed as value for 'zone' and 'rate' variables
    #limit_req_zone "$ssl_client_s_dn" zone=rate-limiting-zone:10m rate="$rate";
    limit_req_zone "$app_name" zone=one:10m rate=1r/s;
    limit_req_zone "$app_name" zone=two:10m rate=2r/s;
    limit_req_zone "$app_name" zone=three:10m rate=3r/s;
    limit_req_zone "$app_name" zone=four:10m rate=4r/s;
    limit_req_zone "$app_name" zone=five:10m rate=5r/s;

    #Sets configuration for a virtual server. There is no clear separation between IP-based (based on the IP address)
    #and name-based (based on the “Host” request header field) virtual servers. Instead, the listen directives describe
    #all addresses and ports that should accept connections for the server, and the server_name directive lists all
    #server names. Example configurations are provided in the “How nginx processes a request” document.
    server {
      #Sets the address and port for IP, or the path for a UNIX-domain socket on which the server will accept requests.
      #Both address and port, or only address or only port can be specified. An address may also be a hostname
      listen 443 ssl;

      #Sets names of a virtual server
      server_name hello.com;

      #Specifies a file with the certificate in the PEM format for the given virtual server.
      #If intermediate certificates should be specified in addition to a primary certificate,
      #they should be specified in the same file in the following order: the primary certificate comes first,
      #then the intermediate certificates. A secret key in the PEM format may be placed in the same file
      ssl_certificate /certs/cert.pem;

      #Specifies a file with the secret key in the PEM format for the given virtual server.
      ssl_certificate_key /certs/key.pem;

      #Enables verification of client certificates. The verification result is stored in the $ssl_client_verify variable.
      ssl_verify_client on;

      # CA chain for client certs
      ssl_client_certificate /certs/cert.pem;

      # Sets the verification depth in the client certificates chain.
      ssl_verify_depth 2;

      js_set $app_name appNameFinder.find;
      js_set $app_zone appZoneNameFinder.find;

      #Sets configuration depending on a request URI.
      location /api {

           if ($request_method = "GET") {
               limit_req zone=one;
           }

           if ($request_method = "POST") {
               limit_req zone=two;
           }

          #Sets the shared memory zone and the maximum burst size of requests. If the requests rate exceeds the rate
          #configured for a zone, their processing is delayed such that requests are processed at a defined rate.
          #Excessive requests are delayed until their number exceeds the maximum burst size in which case the request
          #is terminated with an error. By default, the maximum burst size is equal to zero. For example, the directives
          #limit_req zone=$app_zone;
          #limit_req zone=one;

          #Sets the desired logging level for cases when the server refuses to process requests due to rate exceeding,
          #or delays request processing. Logging level for delays is one point less than for refusals; for example,
          #if “limit_req_log_level notice” is specified, delays are logged with the info level.
          limit_req_log_level error;

          #Sets the status code to return in response to rejected requests.
          limit_req_status 429;

          #Sets the protocol and address of a proxied server and an optional URI to which a location should be mapped.
          #As a protocol, “http” or “https” can be specified. The address can be specified as a domain name
          #or IP address, and an optional port:
          proxy_pass http://hello:5678;

          #The proxy_set_header directive in NGINX is used to modify or add header fields to requests that are proxied
          #to an upstream server. When NGINX acts as a reverse proxy, it typically modifies the "Host" and "Connection"
          #headers by default and removes any header fields with empty values. The proxy_set_header directive allows for
          #customization of this behavior.
          proxy_set_header X-Client-DN $ssl_client_s_dn;

          #Add response header (to resolve $rate variable)
          #add_header custom_header $rate;

          #Add response header (to resolve $app_name & $app_zone variable)
          add_header app_name $app_name;
          add_header app_zone $app_zone;
      }
    }
}

Questions:

In the above code, I have added the limit_req directive inside an if block. Is that not allowed?

I’m definitely not an NGINX guru but I think you have spotted the problem.

The error message nginx: [emerg] "limit_req" directive is not allowed here is specific. It indicates an invalid use of the limit_req directive within your nginx.conf file.

There are a number of directives that aren’t allowed inside if. (Keep in mind that if doesn’t work like the common if-then-else you might be expecting.)

Take a look at the map directive in the ngx_http_map_module module.

Also, some of the docs might help:
ngx_http_limit_req_module: the "Context" line for each directive will tell you where it can be used.
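
Something like this (untested, and the zone name and burst value are just illustrative) is what the docs mean by the allowed contexts: limit_req sits directly inside http, server or location, never inside an if:

http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

    server {
        location /api {
            #Allowed: limit_req placed directly in the location context.
            #Not allowed: the same directive nested inside an "if" block here.
            limit_req zone=perip burst=5;
        }
    }
}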

Hope this helps

davemc

Yes, it is not allowed. The allowed contexts are http, server, and location.

Instead, you need to define both limit_req directives with no if around them. To enable only one limit at a time, you can derive each limit key from $app_name and $request_method: when the request is a GET, the key for zone two should be empty, and when the request is a POST, the key for zone one should be empty. Requests with an empty key are not counted against that zone, so only one limit applies at a time; see the sketch below.
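
A rough sketch of that approach (untested; it assumes your js_set variables keep working as in the config above, and relies on the documented behaviour that requests with an empty key are not accounted):

#In the http context: derive one key per method. The key is empty for the
#"other" method, so that zone simply ignores the request.
map $request_method $get_key {
    default "";
    GET     $app_name;
}

map $request_method $post_key {
    default "";
    POST    $app_name;
}

limit_req_zone $get_key  zone=one:10m rate=1r/s;
limit_req_zone $post_key zone=two:10m rate=2r/s;

server {
    #listen, ssl_*, js_set, etc. stay exactly as in your existing server block

    location /api {
        #Both directives are always present (an allowed context), but only the
        #zone whose key is non-empty actually counts the request.
        limit_req zone=one;
        limit_req zone=two;

        proxy_pass http://hello:5678;
    }
}

This way no if is needed: for a GET request $post_key is empty, so zone two never sees it, and vice versa for POST.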


Heya! Moving this topic over to the NGINX category since limit_req is not NGINX Plus specific 🙂