How to Implement Email Rate‑Limiting (100 Emails/Hour) on NGINX Mail Proxy

Hello, I am looking for guidance on whether the NGINX Mail Proxy (SMTP/IMAP/POP3) can enforce any form of message-based throttling.

Requirement:

We want to restrict an internal application so that it cannot send more than 100 emails per hour. The application routes SMTP traffic through our NGINX Mail Proxy Server, and we want the limit enforced at the proxy layer rather than modifying the application itself.

Current setup:

- NGINX Mail Proxy using the mail {} module

- SMTP AUTH is handled by an external auth server

- Application sends outbound email through NGINX → backend relay

- Basic timeouts and SSL are already configured

What I found so far:

- The mail module does not support limit_req or rate-limiting similar to HTTP

- I only see options for limit_conn (concurrent connections) and limit_rate (bandwidth throttling)

- I cannot find any native method to enforce a limit such as “100 messages per hour per IP or per authenticated user”
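For reference, the closest native controls I did find look like the following sketch (hostnames and ports are placeholders). This caps concurrent connections and bandwidth per client IP at the stream layer, not messages per hour, and proxying raw TCP this way would bypass the mail module's auth_http handling:

```
stream {
    # one shared-memory zone keyed on client IP
    limit_conn_zone $binary_remote_addr zone=smtp_conn:10m;

    server {
        listen 2525;
        limit_conn smtp_conn 5;          # at most 5 concurrent connections per IP
        proxy_download_rate 102400;      # bandwidth throttling, bytes/second
        proxy_upload_rate   102400;
        proxy_pass backend.relay.example:25;
    }
}
```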

My question:

Is there a native way in the NGINX mail module (or stream module) to rate-limit SMTP messages per hour?

If not, what is the recommended workaround?

- External policy server?

- Custom module?

- Implementing the rate limit on the backend SMTP server instead?

- Third-party modules that can enforce SMTP command throttling?

Any advice, examples, or configuration guidance would be greatly appreciated.

Thank you.

While the proxy could limit the number of SMTP connections made to the relay (server), this would not necessarily limit the message sending rate, which is the action you actually want to limit.

The reason for this is that it’s only the connection to the mail relay (server) which is proxied. Once connected, a client can send effectively unlimited messages, each addressed to effectively unlimited recipients at unlimited domains.
The ability to limit actual delivery attempts - especially to a specific domain - can only be managed by the SMTP relay.
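For instance, if the backend relay is (or were replaced by) Postfix, its built-in anvil rate counters can enforce exactly this kind of per-client message limit. A sketch for the 100-messages-per-hour figure from this thread (assumes a Postfix relay; other MTAs have equivalents):

```
# main.cf sketch
anvil_rate_time_unit = 3600s
smtpd_client_message_rate_limit = 100

# Clients in $mynetworks are exempt from these limits by default;
# clear the exception list if internal applications should be limited too.
smtpd_client_event_limit_exceptions =
```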

To intercept TLS, interpret the SMTP dialogue, track recipients and sending rates, manage a queue, and perform all the other steps necessary to implement such a feature would mean coding a substantial portion of a mail relay.

Hi AJCzZ0,

Thanks for your reply.

Let me explain our requirement in detail.

We have application teams hosting their applications on the public cloud (Google Cloud Platform). Whenever such applications need to send email from the GCP network, they route it through an NGINX mail proxy server hosted within the GCP environment.

Before traffic reaches the NGINX mail proxy server, it is routed through a load balancer, which ensures availability, scalability, and failover support. The mail proxy server acts as an SMTP relay, accepting outgoing emails from clients and passing them to a backend mail server for actual delivery.

Currently, the backend mail server enforces a limit of 2000 emails per hour. However, we recently experienced an incident where a rogue application sent over 6000 emails within minutes, which caused the backend mail server to block the entire mail service for that hour.

To prevent such incidents, we want to enforce a stricter limit at the NGINX mail proxy level: specifically, 100 emails per hour per IP address.

Could you please confirm if this is possible to implement? If yes, we would appreciate your guidance on the technical steps, as I am not an expert in NGINX configuration.

Please let me know if you need any further details.

Since I’ve not used the Nginx SMTP proxy in any capacity worth mentioning, someone familiar with its capabilities should weigh in.

One application running on one host could make a single SMTP connection and attempt to relay one email to a bazillion recipients at each of the major mail hosting services, which would then blacklist your network.

While I’m sure that was a pain to handle as an incident, this sounds close to the ideal handling of such a scenario. The problem was isolated to your infrastructure, protecting your reputation and deliverability.

If you really need to protect your backend mail server from your application hosts, then consider replacing (or supplementing) the Nginx SMTP proxy with something like a Postfix relay. This will give you all the controls you need at the correct protocol level.
How you divide responsibilities between the two relays will depend on your local specifics.
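To illustrate the "external policy server" option from the original question: with a Postfix relay in place, you could also delegate the decision to a small policy service via check_policy_service. Below is a minimal, hypothetical sketch of the counting logic only (the limit, window, and attribute handling mirror the Postfix policy-delegation protocol, but this is an illustration, not a production daemon):

```python
# Hypothetical policy-service logic for "100 messages per hour per client IP".
# Postfix would reach a full daemon via e.g.:
#   smtpd_recipient_restrictions = check_policy_service inet:127.0.0.1:10031, ...
import time
from collections import defaultdict, deque

LIMIT = 100     # messages allowed per window
WINDOW = 3600   # window length in seconds (one hour)

class RateLimiter:
    """Sliding-window counter keyed on client IP address."""

    def __init__(self, limit=LIMIT, window=WINDOW):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)   # client_address -> recent timestamps

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        q = self.hits[client]
        while q and now - q[0] >= self.window:
            q.popleft()                  # drop timestamps outside the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

def handle_request(raw, limiter, now=None):
    """Parse one policy request (attribute=value lines, blank-line terminated)
    and return the action line in the policy-delegation reply format."""
    attrs = dict(line.split("=", 1)
                 for line in raw.strip().splitlines() if "=" in line)
    client = attrs.get("client_address", "unknown")
    if limiter.allow(client, now):
        return "action=DUNNO\n\n"        # no decision; later restrictions apply
    return "action=DEFER_IF_PERMIT rate limit exceeded\n\n"
```

Deferring (rather than rejecting) lets well-behaved senders retry later from their own queue, which suits the "protect the backend relay from a rogue app" goal.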