
How to Configure Nginx Rate Limit and Whitelist

In today’s interconnected world, web servers face constant threats from malicious actors and high-volume traffic. Nginx Rate Limit is a powerful tool that allows you to control the number of requests your server processes within a given time frame. This tutorial will explore configuring Nginx Rate Limit to enhance your server’s security and performance.


Prerequisites

Before diving into the configuration, ensure you have the following:

  • A web server running Nginx
  • Basic knowledge of Nginx configuration files (nginx.conf)
  • Administrative access to make changes to the server’s configuration

Obtaining the Real Client IP

Obtaining the real client IP address is crucial to apply rate-limiting rules accurately. By default, Nginx obtains client IP information from the connecting TCP socket. However, if your server is behind a reverse proxy or load balancer, you must configure Nginx to extract the real client IP from the headers.

To retrieve the real client IP, add the following configuration directive within the http block of your nginx.conf file:

http {
  real_ip_header X-Forwarded-For;
  set_real_ip_from <ip_address_of_proxy>;
}
Replace <ip_address_of_proxy> with the actual IP address of your reverse proxy or load balancer.

Managing Security Concerns

While obtaining the real client IP simplifies rate-limiting configuration, it introduces some security concerns. Attackers can forge or modify headers, bypassing rate-limiting rules by pretending to be a different IP. To mitigate this risk, it is essential to enable trusted proxy validation.

Add the following directive inside the server block of your nginx.conf file:

server {
  set_real_ip_from <ip_address_of_proxy>;
  set_real_ip_from <another_trusted_proxy>;  # optional: one directive per trusted proxy
  real_ip_header X-Real-IP;
  real_ip_recursive on;
}

Replace <ip_address_of_proxy> with the actual IP address of your reverse proxy or load balancer. To trust additional proxies, repeat the set_real_ip_from directive once per address or CIDR block.

Let’s take a closer look at set_real_ip_from.

Understanding the set_real_ip_from Directive

The set_real_ip_from directive in Nginx allows you to specify the IP address or addresses from which Nginx should trust the real client IP information. This directive is essential when using Nginx behind a reverse proxy or load balancer, as it ensures that Nginx accurately identifies the originating client IP address for rate limiting and logging purposes.

By default, Nginx obtains client IP information from the connecting TCP socket, which may be the IP address of a reverse proxy or load balancer rather than the actual client. Through the set_real_ip_from directive, you tell Nginx which source addresses to trust; for connections coming from those addresses, Nginx replaces the client IP with the one extracted from the header provided by the proxy server.
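As a quick illustration (the log format name and addresses below are examples, not from the original), the realip module also exposes the $realip_remote_addr variable, which preserves the original peer address after the substitution:

```nginx
# Hypothetical log format showing both the substituted and original address.
log_format realip_demo '$remote_addr (peer was $realip_remote_addr) "$request"';
access_log /var/log/nginx/access.log realip_demo;

# With set_real_ip_from 10.0.0.1; and real_ip_header X-Forwarded-For;,
# a request proxied through 10.0.0.1 carrying
#   X-Forwarded-For: 203.0.113.7
# would log: 203.0.113.7 (peer was 10.0.0.1) "GET / HTTP/1.1"
```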


The syntax for set_real_ip_from is as follows:

set_real_ip_from <address> | <CIDR_block> | unix:;
  • <address>: Specifies a single IPv4 or IPv6 address.
  • <CIDR_block>: Specifies an IP address block using CIDR notation.
  • unix:: Trusts all connections arriving over UNIX-domain sockets.

Multiple IP Addresses

You can trust multiple IP addresses or blocks by repeating the directive, one per address. For example (addresses are illustrative):

set_real_ip_from 192.168.1.0/24;
set_real_ip_from 10.0.0.1;

In this case, Nginx will trust the client IP information supplied by any address within the specified ranges.

Trailing Semicolon

As with every Nginx directive, each set_real_ip_from line must end with a semicolon (;). A missing semicolon is a common cause of configuration errors when reloading Nginx.

set_real_ip_from 10.0.0.1;  # the trailing semicolon is required

No Variable Values

Note that set_real_ip_from does not accept variables: it expects a literal IP address, a CIDR block, or unix:, and the set of trusted addresses is fixed when the configuration is loaded. If you need per-request logic based on headers, combine the map or geo directives with other parts of your configuration; the trusted-proxy list itself cannot be computed dynamically.

Overlapping Directives

It’s important to note that multiple set_real_ip_from directives are cumulative: Nginx builds a single list of trusted addresses from all of them, and an incoming connection is trusted if its source matches any entry. If directives overlap (for example, a single IP that also falls inside a listed CIDR block), the address is simply trusted; there is no conflict to resolve.
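A short sketch (all addresses are illustrative) of cumulative, overlapping directives:

```nginx
set_real_ip_from 10.0.0.0/8;   # trust the whole internal range
set_real_ip_from 10.1.2.3;     # redundant: already inside 10.0.0.0/8
set_real_ip_from 192.0.2.50;   # an additional standalone proxy
# A connection from any of these sources is trusted; the overlap causes no conflict.
```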

Let’s summarize before moving on.

The set_real_ip_from directive is a crucial configuration option in Nginx when dealing with reverse proxies or load balancers. By specifying the IP addresses or blocks Nginx should trust to extract the real client IP, you ensure accurate identification for rate limiting, logging, and other purposes.

Understanding how to use set_real_ip_from and its various syntax options empowers you to configure Nginx correctly in your specific environment, allowing for precise management of incoming requests based on real client IP addresses.

Should You Turn real_ip_recursive On or Off?

When configuring your NGINX server, one feature you might consider is real_ip_recursive. This directive allows you to process proxy servers defined recursively. It can be a helpful tool within the right context, but it’s not necessary or ideal for every situation. Here are some factors to take into consideration to decide if you should turn real_ip_recursive on or off.

Understanding Real IP Recursive

Before we delve into the specifics of when and why it may be necessary, let’s provide a simple explanation of what real_ip_recursive does.

The Real IP module in Nginx determines the original client IP address when one or more load balancers or proxies sit in front of the server, so the real address can be used for logging, rate limiting, and any application logic that benefits from knowing the client’s IP.

The real_ip_recursive directive is part of this module and is turned off by default. When enabled, it instructs NGINX to replace the client address with the last non-trusted address found in the configured header.

When to Use Real IP Recursive

Turning this option on is particularly useful if you have chained proxies in front of your server. Each proxy appends its IP to the X-Forwarded-For field; with real_ip_recursive set to off, Nginx simply takes the last (rightmost) address in the header as the client’s real address, which could still be an intermediate proxy.

If you enable real_ip_recursive instead, Nginx walks the X-Forwarded-For field from right to left, skipping trusted addresses, and uses the last non-trusted address as the client’s real address.

So, if you’re dealing with multiple chained proxies, setting real_ip_recursive to on may better serve your needs, as it lets Nginx recursively parse the X-Forwarded-For header to find the real client IP.
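To make the difference concrete, here is a sketch (all addresses are illustrative) of how the two settings resolve the same header chain:

```nginx
set_real_ip_from 10.0.0.0/8;       # trusted proxy range (assumed)
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# Connection arrives from 10.0.0.5 carrying:
#   X-Forwarded-For: 203.0.113.7, 198.51.100.9, 10.0.0.2
#
# real_ip_recursive off: client IP = 10.0.0.2     (last address, still a proxy)
# real_ip_recursive on:  client IP = 198.51.100.9 (last non-trusted address)
```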

When Not To Use Real IP Recursive

If your setup doesn’t include multiple proxies (reverse or chained), enabling real_ip_recursive is unnecessary.

It’s also essential to remember that turning on this feature without understanding the appropriate use cases, or without a correct list of trusted proxies, could lead to misuse and end up trusting an unsecured or spoofed source IP address.


In conclusion, whether you should turn real_ip_recursive on or off depends on your configuration and needs:

  • Turn on: If you’re working with multiple chained proxies where intermediate hops sit between the client (browser) and the Nginx server.
  • Turn off: For simpler setups without multi-level proxy networks.

As with any server configuration decision, careful consideration and testing should always come first. Reading NGINX’s official documentation can help reduce the confusion surrounding complex directives like real_ip_recursive.

Configuring Nginx Rate Limit

In this tutorial, we will walk you through configuring rate limits in Nginx. Rate limiting allows you to control the number of requests a client can make within a specified period.

Step 1: Define Rate Limit Parameters

To define the rate limit, you use two key directives:

  • limit_req_zone: This directive sets up the shared memory zone and rate for rate limiting. It is typically defined in the http block of the Nginx configuration file.
http {
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;
}

Here, we create a shared memory zone called “mylimit” with a size of 10 megabytes and a limit of 1 request per second per client’s IP address.
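Rates can also be expressed per minute, which is useful for limits below one request per second. A variant of the same zone (the name and rate here are examples):

```nginx
# ~30 requests per minute per client IP; 10 MB of state holds
# roughly 160,000 tracked addresses.
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=30r/m;
```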

  • limit_req: This directive is used within a specific location block to apply rate limiting. You reference the zone and define how bursts of requests should be handled.
location /api {
    limit_req zone=mylimit burst=5 nodelay;
}

In this example, we apply rate limiting to the /api location. The burst parameter specifies how many requests may exceed the rate before further requests are rejected. The nodelay parameter makes Nginx serve those burst requests immediately instead of delaying them to match the configured rate.
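On Nginx 1.15.7 and later, the delay parameter offers a middle ground between delaying and not delaying excess requests (the numbers below are illustrative):

```nginx
location /api {
    # Two-stage limiting: the first 5 excess requests are served
    # immediately, the next 5 are delayed to match the configured rate,
    # and anything beyond the burst of 10 is rejected.
    limit_req zone=mylimit burst=10 delay=5;
}
```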

Step 2: Customize Error Messages (Optional)

If a client exceeds the rate limit, Nginx will return a 503 error by default. However, you can customize this error message and its appearance to provide more meaningful feedback to your users.

error_page 503 @ratelimit;

location @ratelimit {
    return 429 "Too Many Requests";
    # You can also serve an HTML page or redirect to another endpoint
}

Here, we define an error_page directive to catch the 503 error. Within the @ratelimit location, we return a 429 status code and the message “Too Many Requests“. You can modify this response as needed.
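If you only need to change the rejection status code, without a custom body, the limit_req_status directive is a simpler alternative:

```nginx
location /api {
    limit_req zone=mylimit burst=5 nodelay;
    limit_req_status 429;   # reject with 429 instead of the default 503
}
```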

Step 3: Test and Monitor

After configuring the rate limit, testing and monitoring its effectiveness is important. Send requests to your server and check if the rate-limiting rules are correctly enforced.

You can use tools like curl or specialized load-testing tools to simulate different traffic scenarios and verify that the rate limits are functioning as expected.
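Rejected and delayed requests are also recorded in the error log, and the log level can be tuned, which helps when verifying the configuration:

```nginx
# Rejections are logged at the configured level; delayed requests are
# logged one level lower.
limit_req_log_level warn;
```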

Defining a Whitelist for VPN Users

Sometimes, you may want to exclude certain IP addresses or IP ranges from rate-limiting rules. For example, if your organization uses a VPN service that assigns the same egress IP to multiple users, you can create a whitelist to exempt them from rate limiting.

To define a whitelist, add the following configuration inside the http block of your nginx.conf file:

http {

    geo $whitelist {
        default 0;
        <vpn_egress_ip_range> 1;
        2a03:eec0:1415::/48 1;
    }

    map $whitelist $limit {
        0 $binary_remote_addr;
        1 "";
    }

    limit_req_zone $limit zone=global-zone:10m rate=10r/s;
    limit_req zone=global-zone burst=5 nodelay;
    limit_req_status 429;
}

Replace IPs with the IP range assigned to your VPN service. Then, utilize the $limit variable in rate limiting rules to exclude VPN users from restrictions.


The configuration above only works if you are using the real_ip_header and set_real_ip_from directives.

That is because the ngx_http_realip_module extracts the real client IP from a header, like X-Forwarded-For, and substitutes it into $binary_remote_addr.

This means that if you are not using real_ip_header and set_real_ip_from, $binary_remote_addr will hold the proxy’s IP rather than the client’s.

Also, if you are using a WAF, like Imperva, you must define all of the WAF’s IPs as trusted sources so Nginx looks at X-Forwarded-For to determine the client IP. Because the WAF sits in front of your application, it will populate the X-Forwarded-For header with the client IP.

Besides the WAF, you may have a load balancer; you must also specify its IP or IP range as a trusted source (set_real_ip_from) so that the client IP (real_ip) can be determined.

Second and Most Important

limit_req_zone $http_x_forwarded_for zone=global-zone:10m rate=10r/s;

The line of code provided sets a request rate limit using the limit_req_zone directive in the Nginx configuration file.

This line specifically says that the limit request zone is determined by $http_x_forwarded_for, which usually contains the client’s IP address when the request was forwarded to nginx via a reverse proxy. Herein lies the potential security issue.

If an attacker is aware that you’re using $http_x_forwarded_for to set your rate limit, they could potentially spoof this value with any IP address they choose. They can bypass the rate limitations by changing their IP address for every request. This may allow an attacker to send more requests per second than the defined limit of 10 requests per second (10r/s).

The zone=global-zone:10m part declares a shared memory zone to keep the states of all incoming requests, holding up to 10MB of data (enough for about 160 thousand IP addresses). If an attacker can spoof their IP address consistently, this could fill up the allocated memory zone and prevent legitimate users from accessing your web resources.

In addition, if you have multiple proxies in front of Nginx, $http_x_forwarded_for may contain a comma-separated list of IP addresses. Used as a key without parsing, the whole list is taken verbatim, and its client-supplied portion is easily spoofed.

For securing your setup, it’s recommended to apply limitations on values malicious actors cannot manipulate, such as $binary_remote_addr, which will use the direct remote address even if requests pass through multiple proxies. Also, trust each proxy involved in forwarding HTTP requests to Nginx and configure them securely.

To make your configuration more secure and mitigate the risks of IP spoofing associated with using $http_x_forwarded_for, you can use the real_ip_header and set_real_ip_from directives instead.
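Putting the pieces together, a sketch of the safer setup (the proxy address is an assumption for illustration):

```nginx
http {
    set_real_ip_from 10.0.0.1;          # your reverse proxy's IP (assumed)
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    # $binary_remote_addr now reflects the real client IP resolved above,
    # and cannot be varied freely by a client spoofing the header.
    limit_req_zone $binary_remote_addr zone=global-zone:10m rate=10r/s;
}
```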

Always check official nginx documentation and consult with security professionals when configuring such important parameters.

Define Rate Limit by API Key

Besides limiting the rate by the client IP address, Nginx can also limit the rate by using an API key. This method offers more precise management of API usage and simplifies the tracking of usage patterns.

You can use the limit_req_zone directive with the $http_api_key variable in your nginx.conf file to rate limit by API key:

http {
    limit_req_zone $http_api_key zone=api_zone:10m rate=1r/s;
}

Here, we create a shared memory zone called api_zone with a limit of 1 request per second per API key.

Next, you can use the limit_req directive within a specific location block to apply the rate limit:

location /api {
    limit_req zone=api_zone burst=5 nodelay;
}

In this example, we apply rate limiting to the /api location using the api_zone shared memory zone. The burst parameter sets how many requests may exceed the rate before being rejected, and the nodelay parameter serves those burst requests immediately instead of spacing them out.

Clients must include an API key in their requests to use this approach. You can provide the API key to authorized clients and track their usage to ensure they do not exceed their allotted rate limit.
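One caveat: requests with an empty key are not counted against the limit, so clients that omit the API key header would bypass the zone entirely. A map can fall back to the client IP in that case (the $http_api_key variable follows the example above; the variable name $rate_key is an illustration):

```nginx
map $http_api_key $rate_key {
    default $http_api_key;        # limit per API key when one is sent
    ""      $binary_remote_addr;  # fall back to per-IP when no key is sent
}

limit_req_zone $rate_key zone=api_zone:10m rate=1r/s;
```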


By configuring Nginx Rate Limit, you can protect your server from high-volume traffic and potential DDoS attacks. Remember to obtain the real client IP by configuring Nginx to extract it from headers and enable trusted proxy validation to safeguard against header forgery. Use whitelists to exempt specific IP addresses or ranges from rate limiting.

Monitor your server’s performance and adjust rate-limiting settings to balance security and user experience. Stay vigilant against emerging threats and regularly update your web applications to maintain a robust defense.

Additionally, rate limiting based on API keys gives you more granular control over API usage and makes usage patterns easier to track.
