Nginx Concepts and Basic Functionality


  • 1 Getting Started
    • 1-1 Starting Stopping and Reloading Configuration
    • 1-2 Configuration Files Structure
  • 2 Basic Functionality
    • 2-1 Web Server
      • 2-1-1 Setting Up Virtual Servers
      • 2-1-2 Configuring Location
      • 2-1-3 Using Variables
      • 2-1-4 Returning Specific Status Codes
      • 2-1-5 Rewriting URIs in Requests
      • 2-1-6 Rewriting HTTP Response
      • 2-1-7 Handling Errors
    • 2-2 Serving static content
      • 2-2-1 Root Directory and Index Files
      • 2-2-2 Trying Several Options
      • 2-2-3 Optimizing Nginx Speed for Serving Content
        • 2-2-3-1 Enabling sendfile
        • 2-2-3-2 Enabling tcp_nopush
        • 2-2-3-3 Enabling tcp_nodelay
        • 2-2-3-4 Optimizing the Backlog Queue
          • Measuring the Listen Queue
          • Tuning the Operating System
          • Tuning Nginx
    • 2-3 Reverse Proxy
      • 2-3-1 Introduction
      • 2-3-2 Passing a Request to a Proxied Server
      • 2-3-3 Passing Request Headers
      • 2-3-4 Configuring Buffers
      • 2-3-5 Choosing an Outgoing IP Address
    • 2-4 Compression and Decompression
      • 2-4-1 Enabling Compression
      • 2-4-2 Enabling Decompression
      • 2-4-3 Sending Compressed Files
    • 2-5 Web Content cache
      • 2-5-1 Enabling the Caching of Responses
      • 2-5-2 NGINX Processes Involved in Caching
      • 2-5-3 Specifying Which Requests to Cache
      • 2-5-4 Limiting or Bypassing Caching
      • 2-5-5 Purging Content From The Cache
        • 2-5-5-1 Configuring Cache Purge
        • 2-5-5-2 Sending the Purge Command
        • 2-5-5-3 Restricting Access to the Purge Command
        • 2-5-5-4 Completely Removing Files from the Cache
        • 2-5-5-5 Cache Purge Configuration Example
      • 2-5-6 Byte-Range Caching
      • 2-5-7 Combined Configuration Example

1) Getting Started

1-1) Starting, Stopping, and Reloading Configuration

The way nginx and its modules work is determined in the configuration file. By default, the configuration file is named nginx.conf and placed in the directory /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx.

To start nginx, run the executable file. Once nginx is started, it can be controlled by invoking the executable with the -s parameter. Use the following syntax:

nginx -s signal

Where signal may be one of the following:

  • stop - fast shutdown
  • quit - graceful shutdown
  • reload - reloading the configuration file
  • reopen - reopening the log files

For example, to stop nginx processes while waiting for the worker processes to finish serving current requests, the following command can be executed:

nginx -s quit

This command should be executed under the same user that started nginx.

For more information on sending signals to nginx, see Controlling nginx

1-2) Configuration File’s Structure

Nginx consists of modules which are controlled by directives specified in the configuration file. Directives are divided into simple directives and block directives. A simple directive consists of the name and parameters separated by spaces and ends with a semicolon (;). A block directive has the same structure as a simple directive, but instead of the semicolon it ends with a set of additional instructions surrounded by braces ({ and }). If a block directive can have other directives inside its braces, it is called a context (examples: events, http, server, and location).

Directives placed in the configuration file outside of any contexts are considered to be in the main context. The events and http directives reside in the main context, server in http, and location in server.

The rest of a line after the # sign is considered a comment.

To make the configuration file easier to maintain, we recommend that you split it into a set of feature-specific files stored in the /etc/nginx/conf.d directory and use the include directive in the main nginx.conf file to reference the contents of the feature-specific files.

include conf.d/http;
include conf.d/stream;
include conf.d/exchange-enhanced;
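As a sketch of this layout, the main nginx.conf then reduces to a few global settings plus the include lines; each included file is assumed to contain the complete context for its traffic type (the user and worker settings here are illustrative):

```nginx
# Sketch of a main nginx.conf that delegates to feature-specific files
user  nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

# Each included file is assumed to contain the full http or stream
# context for its traffic type
include conf.d/http;
include conf.d/stream;
```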

A few top-level directives, referred to as contexts, group together the directives that apply to different traffic types:

  • events - General connection processing
  • http - HTTP traffic
  • mail - Mail traffic
  • stream - TCP traffic

Directives placed outside of these contexts are said to be in the main context.

In each of the traffic-handling contexts, you include one or more server contexts to define virtual servers that control the processing of requests. The directives you can include within a server context vary depending on the traffic type.

For HTTP traffic (the http context), each server directive controls the processing of requests for resources at particular domains or IP addresses. One or more location contexts in the server context define how to process specific sets of URIs.

For mail and TCP traffic (the mail and stream contexts) the server directives each control the processing of traffic arriving at a particular TCP port or UNIX socket.

The following configuration illustrates the use of contexts:

user nobody; # a directive in the 'main' context

events {
    # configuration of connection processing
}

http {
    # Configuration specific to HTTP and affecting all virtual servers
    server {
        # configuration of HTTP virtual server 1
        location /one {
            # configuration for processing URIs with '/one'
        }
        location /two {
            # configuration for processing URIs with '/two'
        }
    }
    server {
        # configuration of HTTP virtual server 2
    }
}

stream {
    # Configuration specific to TCP and affecting all virtual servers
    server {
        # configuration of TCP virtual server 1
    }
}

2) Basic Functionality

At a high level, configuring NGINX Plus as a web server is a matter of defining which URLs it handles and how it processes HTTP requests for resources at those URLs. At a lower level, the configuration defines a set of virtual servers that control the processing of requests for particular domains or IP addresses.

Each virtual server for HTTP traffic defines special configuration instances called locations that control the processing of specific sets of URIs. Each location defines its own scenario of what happens to requests that are mapped to this location. NGINX Plus provides full control over this process. Each location can proxy the request or return a file. In addition, the URI can be modified, so that the request is redirected to another location or virtual server. Also, a specific error code can be returned, and you can configure a specific page to correspond to each error code.

2-1) Web Server

2-1-1) Setting Up Virtual Servers

The NGINX Plus configuration file must include at least one server directive to define a virtual server. When NGINX Plus processes a request, it first selects the virtual server that will serve the request.

A virtual server is defined by a server directive in the http context, for example:

http {
    server {
        # Server configuration
    }
}

It is possible to add multiple server directives into the http context to define multiple virtual servers.

The server configuration block usually includes a listen directive to specify the IP address and port (or Unix domain socket and path) on which the server listens for requests. Both IPv4 and IPv6 addresses are accepted; enclose IPv6 addresses in square brackets.

The example below shows configuration of a server that listens on IP address 127.0.0.1 and port 8080:

server {
    listen 127.0.0.1:8080;
    # The rest of server configuration
}

If a port is omitted, the standard port is used. Likewise, if an address is omitted, the server listens on all addresses. If the listen directive is not included at all, the “standard” port is 80/tcp or the “default” port is 8000/tcp, depending on superuser privileges.
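As a sketch of the variants described above, a listen directive can name just a port, an IPv6 address in square brackets, or a UNIX domain socket and path (the addresses below are illustrative):

```nginx
server {
    listen 8080;                      # all IPv4 addresses, port 8080
    listen [::1]:8080;                # IPv6 address, enclosed in square brackets
    listen unix:/var/run/nginx.sock;  # UNIX domain socket and path
    # The rest of server configuration
}
```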

If there are several servers that match the IP address and port of the request, NGINX Plus tests the request's Host header field against the server_name directives in the server blocks. The parameter to server_name can be a full (exact) name, a wildcard, or a regular expression. A wildcard is a character string that includes the asterisk (*) at its beginning, end, or both; the asterisk matches any sequence of characters. NGINX Plus uses the Perl syntax for regular expressions; precede them with the tilde (~). This example illustrates an exact name:

server {
    listen      80;
    server_name example.org www.example.org;
    ...
}
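For comparison, wildcard and regular-expression server names (the latter introduced by the tilde) look like this; the domains are illustrative:

```nginx
server {
    listen      80;
    server_name *.example.org;            # wildcard: matches any subdomain of example.org
}

server {
    listen      80;
    server_name ~^www\d+\.example\.org$;  # Perl-style regex: www1.example.org, www2.example.org, ...
}
```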

If several names match the Host header, NGINX Plus selects one by searching for names in the following order and using the first match it finds:
1. Exact name
2. Longest wildcard starting with an asterisk, such as *.example.org
3. Longest wildcard ending with an asterisk, such as mail.*
4. First matching regular expression (In order of appearance in the configuration file)

If the Host header field does not match a server name, NGINX Plus routes the request to the default server for the port on which the request arrived. The default server is the first one listed in the nginx.conf file, unless you include the default_server parameter to the listen directive to explicitly designate a server as the default:

server {
    listen 80 default_server;
    ...
}

2-1-2) Configuring Location

NGINX can send traffic to different proxies or serve different files based on the request URIs. These blocks are defined using the location directive placed within a server directive.

For example, you can define three location blocks to instruct the virtual server to send some requests to one proxied server, send other requests to a different proxied server, and serve the rest of the requests by delivering files from the local file system.

There are two types of parameter to the location directive: prefix strings (pathnames) and regular expressions. For a request URI to match a prefix string, it must start with the prefix string.

The following sample location with a pathname parameter matches request URIs that begin with /some/path/, such as /some/path/document.html. (It does not match /my-site/some/path because /some/path does not occur at the start of that URI.)

location /some/path/ {
    ...
}

A regular expression is preceded with the tilde (~) for case-sensitive matching, or the tilde-asterisk (~*) for case-insensitive matching. The following example matches URIs that include the string .html or .htm in any position:

location ~ \.html? {
    ...
}

To find the location that best matches a URI, NGINX first compares the URI to the locations defined with prefix strings. It then searches the locations defined with regular expressions.

Higher priority is given to regular expressions, unless the ^~ modifier is used. Among the prefix strings NGINX selects the most specific one (that is, the longest and most complete string). The exact logic for selecting a location to process a request is given below:

  1. Test the URI against all prefix strings.
  2. The = (equals sign) modifier defines an exact match between the URI and a prefix string. If the exact match is found, the search stops.
  3. If the ^~ (caret-tilde) modifier prepends the longest matching prefix string, the regular expressions are not checked.
  4. Store the longest matching prefix string.
  5. Test the URI against regular expressions.
  6. Break on the first matching regular expression and use the corresponding location.
  7. If no regular expression matches, use the location corresponding to the stored prefix string.

A typical use case for the = modifier is requests for / (forward slash). If requests for / are frequent, specifying = / as the parameter to the location directive speeds up processing, because the search for matches stops after the first comparison.

location = / {
    ...
}

A location context can contain directives that define how to resolve a request – either serve a static file or pass the request to a proxied server. In the following example, requests that match the first location context are served files from the /data directory and the requests that match the second are passed to the proxied server that hosts content for the www.example.com domain.

server {
    location /images/ {
        root /data;
    }
    location / {
        proxy_pass http://www.example.com;
    }
}

The root directive specifies the file system path in which to search for the static files to serve. The request URI associated with the location is appended to the path to obtain the full name of the static file to serve. In the example above, in response to a request for /images/example.png, NGINX Plus delivers the file /data/images/example.png.

The proxy_pass directive passes the request to the proxied server accessed with the configured URL. The response from the proxied server is then passed back to the client. In the example above, all requests with URIs that do not start with /images/ are passed to the proxied server.

2-1-3) Using Variables

You can use variables in the configuration file to have NGINX Plus process requests differently depending on defined circumstances. Variables are named values that are calculated at runtime and are used as parameters to directives. A variable is denoted by the $ (dollar) sign at the beginning of its name. Variables define information based upon NGINX’s state, such as the properties of the request being currently processed.

There are a number of predefined variables, such as the core HTTP variables, and you can define custom variables using the set, map, and geo directives. Most variables are computed at runtime and contain information related to a specific request. For example, $remote_addr contains the client IP address and $uri holds the current URI value.
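As a brief sketch, a custom variable can be set directly with set or derived from another variable with map; the variable names and values below are illustrative:

```nginx
http {
    # Derive a hypothetical $conn_label variable from the request scheme
    map $scheme $conn_label {
        default "insecure";
        https   "secure";
    }

    server {
        listen 8080;
        location / {
            set $greeting "hello";                  # custom variable via set
            return 200 "$greeting: $conn_label\n";  # variables used as directive parameters
        }
    }
}
```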

2-1-4) Returning Specific Status Codes

Some website URIs require immediate return of a response with a specific error or redirect code, for example when a page has been moved temporarily or permanently. The easiest way to do this is to use the return directive. For example:

location /wrong/url {
    return 404;
}

The first parameter of return is a response code. The optional second parameter can be the URL of a redirect (for codes 301, 302, 303, and 307) or the text to return in the response body. For example:

location /permanently/moved/url {
    return 301 http://www.example.com/moved/here;
}

The return directive can be included in both the location and server contexts.

2-1-5) Rewriting URIs in Requests

A request URI can be modified multiple times during request processing through the use of the rewrite directive, which has one optional and two required parameters. The first (required) parameter is the regular expression that the request URI must match. The second parameter is the URI to substitute for the matching URI. The optional third parameter is a flag that can halt processing of further rewrite directives or send a redirect (code 301 or 302). For example:

location /users/ {
    rewrite ^/users/(.*)$ /show?user=$1 break;
}

As this example shows, the replacement URI uses the $1 variable to capture part of the original URI through the matching of regular expressions.

You can include multiple rewrite directives in both the server and location contexts. NGINX executes the directives one by one in the order they occur. The rewrite directives in a server context are executed once when that context is selected.

The following example shows rewrite directives in combination with a return directive.

server {
    ...
    rewrite ^(/download/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
    rewrite ^(/download/.*)/audio/(.*)\..*$ $1/mp3/$2.ra  last;
    return  403;
    ...
}

This example configuration distinguishes between two sets of URIs. URIs such as /download/some/media/file are changed to /download/some/mp3/file.mp3. Because of the last flag, the subsequent directives (the second rewrite and the return directive) are skipped, but NGINX continues processing the request, which now has a different URI. Similarly, URIs such as /download/some/audio/file are replaced with /download/some/mp3/file.ra. If a URI doesn't match either rewrite directive, NGINX returns the 403 error code to the client.

There are two flags that interrupt processing of rewrite directives:

  • last - Stops execution of the rewrite directives in the current server or location context, but NGINX searches for locations that match the rewritten URI, and any rewrite directives in the new location are applied (meaning the URI can be changed again).
  • break - Like the break directive, stops processing of rewrite directives in the current context and cancels the search for locations that match the new URI. The rewrite directives in the new location are not executed.
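The difference can be sketched as follows: with last, the rewritten URI is matched against locations again, while break keeps processing in the current location. The paths below are illustrative:

```nginx
server {
    location /docs/ {
        # 'last': NGINX searches for a location matching /files/... and uses it
        rewrite ^/docs/(.*)$ /files/$1 last;
    }

    location /media/ {
        root /data;
        # 'break': the rewritten URI is served from this location's root,
        # with no new location search
        rewrite ^/media/(.*)$ /archive/$1 break;
    }

    location /files/ {
        root /var/www;
    }
}
```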

2-1-6) Rewriting HTTP Response

Sometimes you need to rewrite or change the content in an HTTP response, substituting one string for another. You can use the sub_filter directive to define the rewrite to apply. The directive supports variables and chains of substitutions, making more complex changes possible.

For example, you can change absolute links that refer to a server other than the proxy:

location / {
    sub_filter      /blog/ /blog-staging/;
    sub_filter_once off;
}

Another example changes the scheme from http:// to https:// and replaces the localhost address with the host name from the request header field. The sub_filter_once directive tells NGINX to apply sub_filter directives consecutively within a location:

location / {
    sub_filter 'href="http://127.0.0.1:8080/'    'href="https://$host/';
    sub_filter 'img src="http://127.0.0.1:8080/' 'img src="https://$host/';
    sub_filter_once on;
}

Note that the part of the response already modified with the sub_filter will not be replaced again if another sub_filter match occurs.

2-1-7) Handling Errors

With the error_page directive, you can configure nginx to return a custom page along with an error code, substitute a different error code in the response, or redirect the browser to a different URI. In the following example, the error_page directive specifies the page (/404.html) to return with the 404 error code.

error_page 404 /404.html;

Note that this directive does not mean that the error is returned immediately (the return directive does that), but simply specifies how to treat errors when they occur. The error code can come from a proxied server or occur during processing by NGINX (for example, the 404 results when NGINX can't find the file requested by the client).

In the following example, when NGINX cannot find a page, it substitutes code 301 for code 404, and redirects the client to http://example.com/new/path.html. This configuration is useful when clients are still trying to access a page at its old URI. The 301 code informs the browser that the page has moved permanently, and it needs to replace the old address with the new one automatically upon return.

location /old/path.html {
    error_page 404 =301 http://example.com/new/path.html;
}

The following configuration is an example of passing a request to the back end when a file is not found. Because there is no status code specified after the equals sign in the error_page directive, the response to the client has the status code returned by the proxied server (not necessarily 404).

server {
    ...
    location /images/ {
        # Set the root directory to search for the file
        root /data/www;

        # Disable logging of errors related to file existence
        open_file_cache_errors off;

        # Make an internal redirect if the file is not found
        error_page 404 = /fetch$uri;
    }

    location /fetch/ {
        proxy_pass http://backend/;
    }
}

The error_page directive instructs NGINX Plus to make an internal redirect when a file is not found. The $uri variable in the final parameter to the error_page directive holds the URI of the current request, which gets passed in the redirect.

For example, if /images/some/file is not found, it is replaced with /fetch/images/some/file and a new search for a location starts. As a result, the request ends up in the second location context and is proxied to http://backend/.

The open_file_cache_errors directive prevents writing an error message if a file is not found. This is not necessary here since missing files are correctly handled.

2-2) Serving static content

This section describes how to use NGINX to serve static content, the ways to define the paths that are searched to find requested files, and how to set up index files.

2-2-1) Root Directory and Index Files

The root directive specifies the root directory that will be used to search for a file. To obtain the path of a requested file, NGINX appends the request URI to the path specified by the root directive. The directive can be placed at any level within the http, server, or location contexts. In the example below, the root directive is defined for a virtual server. It applies to all location blocks where the root directive is not included to explicitly redefine the root.

server {
    root /www/data;

    location / {
    }

    location /images/ {
    }

    location ~ \.(mp3|mp4) {
        root /www/media;
    }
}

Here, NGINX searches for a URI that starts with /images/ in the /www/data/images/ directory on the file system. But if the URI ends with the .mp3 or .mp4 extension, NGINX instead searches for the file in the /www/media/ directory because it is defined in the matching location block.

If a request ends with a slash, Nginx treats it as a request for a directory and tries to find an index file in the directory. The index directive defines the index file’s name (the default value is index.html). To continue with the example, if the request URI is /images/some/path/, Nginx delivers the file /www/data/images/some/path/index.html if it exists. If it does not, nginx returns HTTP code 404 (Not Found) by default. To configure nginx to return an automatically generated directory listing instead, include the on parameter to the autoindex directive:

location /images/ {
    autoindex on;
}

You can list more than one filename in the index directive. Nginx searches for files in the specified order and returns the first one it finds.

location / {
    index index.$geo.html index.htm index.html;
}

The $geo variable used here is a custom variable set through the geo directive. The value of the variable depends on the client’s IP address.
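A minimal sketch of such a geo block, with illustrative networks and values, might look like this:

```nginx
# The $geo value is chosen by matching the client IP address
# against these networks (longest prefix wins)
geo $geo {
    default        default;
    192.168.1.0/24 local;
    10.0.0.0/8     corp;
}
```

With this block in the http context, a client at 10.1.2.3 would get $geo set to "corp", so NGINX would look for index.corp.html first.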

To return the index file, nginx checks for its existence and then makes an internal redirect to the URI obtained by appending the name of the index file to the base URI. The internal redirect results in a new search of a location and can end up in another location as in the following example:

location / {
    root /data;
    index index.html index.php;
}

location ~ \.php {
    fastcgi_pass localhost:8000;
    ...
}

Here, if the URI in a request is /path/, and /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is mapped to the second location. As a result, the request is proxied.

2-2-2) Trying Several Options

The try_files directive can be used to check whether the specified file or directory exists and make an internal redirect, or return a specific status code if it doesn't. For example, to check the existence of a file corresponding to the request URI, use the try_files directive and the $uri variable as follows:

server {
    root /www/data;

    location /images/ {
        try_files $uri /images/default.gif;
    }
}

The file is specified in the form of a URI, which is processed using the root or alias directive set in the context of the current location or virtual server. In this case, if the file corresponding to the original URI doesn't exist, NGINX makes an internal redirect to the URI specified in the last parameter, returning /www/data/images/default.gif.

The last parameter can also be a status code (directly preceded by the equals sign) or the name of a location. In the following example, a 404 error code is returned if none of the parameters to the try_files directive resolves to an existing file or directory.

location / {
    try_files $uri $uri/ $uri.html =404;
}

In the next example, if neither the original URI nor the URI with the appended trailing slash resolves to an existing file or directory, the request is redirected to the named location, which passes it to a proxied server.

location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://backend.example.com;
}

2-2-3) Optimizing Nginx Speed for Serving Content

Loading speed is a crucial factor in serving any content. Making minor optimizations to your NGINX configuration may boost productivity and help reach optimal performance.

2-2-3-1) Enabling sendfile

By default, NGINX handles file transmission itself and copies the file into a buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and enables direct copying of data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can limit the amount of data transferred in a single sendfile() call by defining the sendfile_max_chunk directive:

location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
    ...
}

2-2-3-2) Enabling tcp_nopush

Use the tcp_nopush option together with sendfile on;. The option enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by sendfile().

location /mp3 {
    sendfile   on;
    tcp_nopush on;
    ...
}

2-2-3-3) Enabling tcp_nodelay

The tcp_nodelay option allows overriding Nagle's algorithm, originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a 200 ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay would also affect online applications (ssh, online games, online trading). By default, the tcp_nodelay directive is set to on, which means that Nagle's algorithm is disabled. The option is used only for keepalive connections:

location /mp3 {
    tcp_nodelay       on;
    keepalive_timeout 65;
    ...
}

2-2-3-4) Optimizing the Backlog Queue

One of the important factors is how fast NGINX can handle incoming connections. The general rule is that when a connection is established, it is put into the “listen” queue of a listen socket. Under normal load, there is either a small queue or no queue at all. But under high load, the queue may grow dramatically, which may result in uneven performance, dropped connections, and latency.

Measuring the Listen Queue

Let’s measure the current listen queue. Run the command:

netstat -Lan

The command output may be as follows:

Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
0/0/128         *.12345
10/0/128        *.80
0/0/128         *.8080

The command shows that there are 10 unaccepted connections in the listen queue on port 80, while the connection limit is 128 connections; this situation is normal.

However, the command output may be as follows:

Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
0/0/128         *.12345
192/0/128       *.80
0/0/128         *.8080

The command output shows 192 unaccepted connections, which exceeds the limit of 128 connections. This is quite common when a web site experiences heavy traffic. To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX in both your operating system and the NGINX configuration.

Tuning the Operating System

Increase the value of the somaxconn key from its default value (128) to a value high enough to handle a high burst of traffic:

  • For FreeBSD, run the command:

sudo sysctl kern.ipc.somaxconn=4096

  • For Linux:

1. Run the command:

sudo sysctl -w net.core.somaxconn=4096

2. Open the file /etc/sysctl.conf:

vi /etc/sysctl.conf

3. Add the following line to the file and save the file:

net.core.somaxconn = 4096

Tuning Nginx

If you set the somaxconn key to a value greater than 512, change the backlog parameter of the NGINX listen directive to match:

server {
    listen 80 backlog=4096;
    # The rest of server configuration
}

2-3) Reverse Proxy

This article describes the basic configuration of a proxy server. You will learn how to pass a request from NGINX to proxied servers over different protocols, modify client request headers that are sent to the proxied server, and configure buffering of responses coming from the proxied servers.

2-3-1) Introduction

Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.

2-3-2) Passing a Request to a Proxied Server

When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol. Supported protocols include FastCGI, uwsgi, SCGI, and memcached.

To pass a request to an HTTP proxied server, the proxy_pass directive is specified inside a location. For example:

location /some/path/ {
    proxy_pass http://www.example.com/link/;
}

This example configuration results in passing all requests processed in this location to the proxied server at the specified address. This address can be specified as a domain name or an IP address. The address may also include a port:

location ~ \.php {
    proxy_pass http://127.0.0.1:8000;
}

Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here a request with the /some/path/page.html URI is proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of the URI to be replaced, the full request URI is passed (possibly modified).

To pass a request to a non-HTTP server, the appropriate **_pass directive should be used:

  • fastcgi_pass - passes a request to a FastCGI server
  • uwsgi_pass - passes a request to a uwsgi server
  • scgi_pass - passes a request to an SCGI server
  • memcached_pass - passes a request to a memcached server
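For example, a sketch of passing PHP requests to a FastCGI server such as php-fpm; the socket path and document root here are illustrative:

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;   # FastCGI server over a UNIX socket

    # Tell the FastCGI server which script to run
    fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;

    include fastcgi_params;                    # standard FastCGI parameter set
}
```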

2-3-3) Passing Request Headers

By default, NGINX redefines two header fields in proxied requests, “Host” and “Connection”, and eliminates the header fields whose values are empty strings. “Host” is set to the $proxy_host variable and “Connection” is set to close.

To change these settings, as well as modify other header fields, use the proxy_set_header directive. This directive can be specified in a location or higher. It can also be specified in a particular server context or in the http block. For example:

location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}

To prevent a header field from being passed to the proxied server, set it to an empty string as follows:

location /some/path/ {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://localhost:8000;
}

2-3-4) Configuring Buffers

By default, NGINX buffers responses from the proxied server. A response is stored in internal buffers and is not sent to the client until the whole response is received. Buffering helps to optimize performance with slow clients, which could waste proxied server time if the response were passed from the proxied server to the client synchronously. With buffering enabled, NGINX allows the proxied server to process responses quickly, while NGINX stores the responses for as much time as the clients need to download them.

The directive that is responsible for enabling and disabling buffering is proxy_buffering. By default it is set to on and buffering is enabled.

The proxy_buffers directive controls the size and number of buffers allocated for a request. The first part of the response from a proxied server is stored in a separate buffer, the size of which is set with the proxy_buffer_size directive. This part usually contains a comparatively small response header and can be made smaller than the buffers for the rest of the response.

In the following example, the default number of buffers is increased and the size of the buffer for the first portion of the response is made smaller than the default.

location /some/path/ {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:8000;
}

If buffering is disabled, the response is sent to the client synchronously, as nginx receives it from the proxied server. This behavior may be desirable for fast interactive clients that need to start receiving the response as soon as possible.

To disable buffering in a specific location, place the proxy_buffering directive in the location with the off parameter, as follows:

location /some/path/ {
    proxy_buffering off;
    proxy_pass http://localhost:8000;
}

In this case NGINX uses only the buffer configured by proxy_buffer_size to store the current part of a response.

A common use of a reverse proxy is to provide load balancing.

2-3-5) Choosing an Outgoing IP Address

If your proxy server has several network interfaces, sometimes you might need to choose a particular source IP address for connecting to a proxied server or an upstream. This may be useful if a proxied server behind nginx is configured to accept connections from a particular IP network or range of IP addresses.

Specify the proxy_bind directive and the IP address of the necessary network interface:

location /app1/ {
    proxy_bind 127.0.0.1;
    proxy_pass http://example.com/app1/;
}

location /app2/ {
    proxy_bind 127.0.0.2;
    proxy_pass http://example.com/app2/;
}

The IP address can also be specified with a variable. For example, the $server_addr variable passes the IP address of the network interface that accepted the request:

location /app3/ {
    proxy_bind $server_addr;
    proxy_pass http://example.com/app3/;
}

2-4) Compression and Decompression

This section describes how to configure compression and decompression of responses, as well as sending compressed files.

Compressing responses often significantly reduces the size of transmitted data. However, since compression happens at runtime it can also add considerable processing overhead which can negatively affect performance. Nginx performs compression before sending responses to clients, but does not “double compress” responses that are already compressed (for example, by a proxied server).

2-4-1) Enabling Compression

To enable compression, include the gzip directive with the on parameter:

gzip on;

By default, nginx compresses only responses with the MIME type text/html. To compress responses with other MIME types, include the gzip_types directive and list the additional types:

gzip_types text/plain application/xml;

To specify the minimum length of the response to compress, use the gzip_min_length directive. The default is 20 bytes (here adjusted to 1000):

gzip_min_length 1000;

By default, nginx does not compress responses to proxied requests (requests that come from a proxy server). The fact that a request comes from a proxy server is determined by the presence of the Via header field in the request.

To configure compression of these responses, use the gzip_proxied directive. The directive has a number of parameters specifying which kinds of proxied requests nginx should compress. For example, it is reasonable to compress responses only to requests that will not be cached on the proxy server. For this purpose the gzip_proxied directive has parameters that instruct nginx to check the Cache-Control header field in a response and compress the response if the value is no-cache, no-store, or private. In addition, you must include the expired parameter to check the value of the Expires header field. These parameters are set in the following example, along with the auth parameter, which checks for the presence of the Authorization header field (an authorized response is specific to the end user and is not typically cached):

gzip_proxied no-cache no-store private expired auth;

As with most other directives, the directives that configure compression can be included in the http context or in a server context or location configuration block.

The overall configuration of gzip compression might look like this:

server {
    gzip on;
    gzip_types      text/plain application/xml;
    gzip_proxied    no-cache no-store private expired auth;
    gzip_min_length 1000;
    ...
}

2-4-2) Enabling Decompression

Some clients do not support responses with the gzip encoding method. At the same time, it might be desirable to store compressed data, or to compress responses on the fly and store them in the cache. To successfully serve both clients that do and clients that do not accept compressed data, nginx can decompress data on the fly when sending it to the latter type of client.

To enable runtime decompression, use the gunzip directive:

location /storage/ {
    gunzip on;
    ...
}

The gunzip directive can be specified in the same context as the gzip directive:

server {
    gzip on;
    gzip_min_length 1000;
    gunzip on;
    ...
}

Note that this directive is defined in a separate module that might not be included in an open source NGINX build by default.
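If the module is available, gunzip can be combined with the gzip_static directive described in the next section, so that pre-compressed files are served as-is to clients that accept gzip and decompressed on the fly for clients that do not (a sketch, assuming .gz versions of the files exist under /storage/):

```
location /storage/ {
    gzip_static on;   # serve /storage/foo.gz when the client accepts gzip
    gunzip      on;   # otherwise decompress it on the fly
}
```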

2-4-3) Sending Compressed Files

To send a compressed version of a file to the client instead of the regular one, set the gzip_static directive to on within the appropriate context.

location / {
    gzip_static on;
}

In this case, to service a request for /path/to/file, nginx tries to find and send the file /path/to/file.gz. If the file does not exist, or the client does not support gzip, nginx sends the uncompressed version of the file.


Note that the gzip_static directive does not enable on-the-fly compression. It merely uses a file compressed beforehand by any compression tool. To compress content (and not only static content) at runtime, use the gzip directive.
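The pre-compressed files can be produced with any gzip tool ahead of time; a minimal sketch (the file paths here are placeholders, not part of any real deployment):

```shell
# Create a sample file and a .gz version alongside it,
# so gzip_static could serve demo.txt.gz for requests to demo.txt
printf 'hello, gzip_static' > /tmp/demo.txt
gzip -c /tmp/demo.txt > /tmp/demo.txt.gz
```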

2-5) Web Content cache

This section describes how to enable and configure caching of responses received from proxied servers.

When caching is enabled, Nginx saves responses in a disk cache and uses them to respond to clients without having to proxy requests for the same content every time.

2-5-1) Enabling the Caching of Responses

To enable caching, include the proxy_cache_path directive in the top-level http context. The mandatory first parameter is the local filesystem path for cached content, and the mandatory keys_zone parameter defines the name and size of the shared memory zone that is used to store metadata about cached items:

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
}

Then include the proxy_cache directive in the context (protocol type, virtual server, or location) for which you want to cache server responses, specifying the zone name defined by the keys_zone parameter of the proxy_cache_path directive (in this case, one):

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;

    server {
        proxy_cache one;
        location / {
            proxy_pass http://localhost:8000;
        }
    }
}

Note that the size defined by the keys_zone parameter does not limit the total amount of cached response data. Cached responses themselves are stored with a copy of the metadata in specific files on the filesystem. To limit the amount of cached response data, include the max_size parameter in the proxy_cache_path directive. (But note that the amount of cached data can temporarily exceed this limit, as described in the following section.)
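For example, a sketch capping the cache at 10 gigabytes (the size here is an arbitrary choice, not a recommendation):

```
proxy_cache_path /data/nginx/cache keys_zone=one:10m max_size=10g;
```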

2-5-2) NGINX Processes Involved in Caching

There are two additional nginx processes involved in caching:

  • The cache manager is activated periodically to check the state of the cache. If the cache size exceeds the limit set by the max_size parameter to the proxy_cache_path directive, the cache manager removes the data that was accessed least recently. As previously mentioned, the amount of cached data can temporarily exceed the limit during the time between cache manager activations.

  • The cache loader runs only once, right after nginx starts. It loads metadata about previously cached data into the shared memory zone. Loading the whole cache at once could consume sufficient resources to slow nginx performance during the first few minutes after startup. To avoid this, configure iterative loading of the cache by including the following parameters to the proxy_cache_path directive:

    • loader_threshold - Duration of an iteration, in milliseconds (by default, 200)
    • loader_files - Maximum number of items loaded during one iteration (by default, 100)
    • loader_sleep - Delay between iterations, in milliseconds (by default, 50)

In the following example, iterations last 300 milliseconds or until 200 items have been loaded:

proxy_cache_path /data/nginx/cache keys_zone=one:10m loader_threshold=300 loader_files=200;

2-5-3) Specifying Which Requests to Cache

By default, nginx can cache all responses to requests made with the HTTP GET and HEAD methods the first time such responses are received from a proxied server. As the key (identifier) for a request, nginx uses the request string. If a request has the same key as a cached response, nginx sends the cached response to the client. You can include various directives in the http, server, or location context to control which responses are cached.

To change the request characteristics used in calculating the key, include the proxy_cache_key directive:

proxy_cache_key "$host$request_uri$cookie_user";

To define the minimum number of times that a request with the same key must be made before the response is cached, include the proxy_cache_min_uses directive:

proxy_cache_min_uses 5;

To cache responses to requests with methods other than GET and HEAD, list them along with GET and HEAD as parameters to the proxy_cache_methods directive:

proxy_cache_methods GET HEAD POST;

2-5-4) Limiting or Bypassing Caching

By default, responses remain in the cache indefinitely. They are removed only when the cache exceeds the maximum configured size, and then in order by length of time since they were last requested. You can set how long cached responses are considered valid, or even whether they are used at all, by including directives in the http, server, or location context.

To limit how long cached responses with specific status codes are considered valid, include the proxy_cache_valid directive:

proxy_cache_valid 200 302 10m;
proxy_cache_valid 404      1m;

In this example, responses with code 200 or 302 are considered valid for 10 minutes, and responses with code 404 are valid for 1 minute. To define the validity time for responses with all status codes, specify any as the first parameter:

proxy_cache_valid any 5m;

To define conditions under which nginx does not send cached responses to clients, include the proxy_cache_bypass directive. Each parameter defines a condition and consists of a number of variables. If at least one parameter is not empty and does not equal “0” (zero), nginx does not look up the response in the cache, but instead forwards the request to the backend server immediately.

proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;

To define conditions under which NGINX Plus does not cache a response at all, include the proxy_no_cache directive, defining parameters in the same way as for the proxy_cache_bypass directive.

proxy_no_cache $http_pragma $http_authorization;
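Putting the two directives together, a location block might look like this sketch (the backend address and variables follow the examples above):

```
location / {
    proxy_cache one;
    # Skip the cache lookup when a "nocache" cookie or argument is present
    proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
    # Do not store responses to requests carrying Pragma or Authorization headers
    proxy_no_cache $http_pragma $http_authorization;
    proxy_pass http://localhost:8000;
}
```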

2-5-5) Purging Content From The Cache

Nginx makes it possible to remove outdated cached files from the cache. This is necessary to prevent serving old and new versions of web pages at the same time. The cache is purged upon receiving a special “purge” request that contains either a custom HTTP header or the PURGE HTTP method.

2-5-5-1) Configuring Cache Purge

Let’s set up a configuration that identifies requests that use the PURGE HTTP method and deletes matching URLs.

1 On the http level, create a new variable, for example, $purge_method, that will depend on the $request_method variable:

http {
    ...
    map $request_method $purge_method {
        PURGE 1;
        default 0;
    }
}

2 In the location where caching is configured, include the proxy_cache_purge directive to specify a condition for cache purge requests. In our example, it is the $purge_method variable configured in the previous step:

server {
    listen      80;
    server_name www.example.com;

    location / {
        proxy_pass  https://localhost:8002;
        proxy_cache mycache;
        proxy_cache_purge $purge_method;
    }
}

2-5-5-2) Sending the Purge Command

When the proxy_cache_purge directive is configured, you need to send a special cache purge request to purge the cache. You can issue a purge request using a range of tools, for example, the curl command:

$ curl -X PURGE -D - "https://www.example.com/*"
HTTP/1.1 204 No Content
Server: nginx/1.5.7
Date: Sat, 01 Dec 2015 16:33:04 GMT
Connection: keep-alive

In the above example, the resources that have a common URL part (specified by the asterisk wildcard) are removed. However, such cache entries are not removed completely from the cache: they remain on disk until they are deleted either for inactivity (the inactive parameter of proxy_cache_path), or by the cache purger process, or when a client attempts to access them.

2-5-5-3) Restricting Access to the Purge Command

It is recommended that you configure a limited number of IP addresses allowed to send a cache purge request:

geo $purge_allowed {
   default         0;  # deny from all other addresses
   10.0.0.1        1;  # allow from 10.0.0.1
   192.168.0.0/24  1;  # allow from the 192.168.0.0/24 network
}

map $request_method $purge_method {
   PURGE   $purge_allowed;
   default 0;
}

In this example, Nginx checks if the PURGE method is used in a request, and if so, analyzes the client IP address. If the IP address is whitelisted, the $purge_method variable is set to the value of $purge_allowed: “1” permits purging, “0” denies it.

2-5-5-4) Completely Removing Files from the Cache

To completely remove cache files that match the wildcard, you will need to activate a special cache purger process that permanently iterates through all cache entries and deletes the entries that match the wildcard key. On the http level, add the purger parameter to the proxy_cache_path directive:

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m purger=on;

2-5-5-5) Cache Purge Configuration Example

http {
    ...
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m purger=on;

    geo $purge_allowed {
       default         0;
       10.0.0.1        1;
       192.168.0.0/24  1;
    }

    map $request_method $purge_method {
       PURGE   $purge_allowed;
       default 0;
    }

    server {
        listen      80;
        server_name www.example.com;

        location / {
            proxy_pass        https://localhost:8002;
            proxy_cache       mycache;
            proxy_cache_purge $purge_method;
        }
    }
}

2-5-6) Byte-Range Caching

Sometimes, the initial cache fill operation may take some time, especially for large files. When the first request starts downloading a part of a video file, subsequent requests will have to wait for the entire file to be downloaded and put into the cache.

With the cache slice module, nginx makes it possible to cache such range requests and gradually fill the cache. The file is divided into small “slices”. Each range request chooses the particular slices that cover the requested range and, if this range is still not cached, puts them into the cache. All other requests for these slices take the response from the cache.

To enable byte-range caching:
1 Make sure your nginx is compiled with the slice module

2 Specify the size of the slice with the slice directive:

location / {
    slice 1m;
}

The slice size should be adjusted reasonably to make slice downloads fast. Too small a size may result in excessive memory usage and a large number of open file descriptors while processing the request; too large a value may result in latency.

3 Include the $slice_range variable in the cache key:

proxy_cache_key $uri$is_args$args$slice_range;

4 Enable caching of responses with the 200 and 206 status codes:

proxy_cache_valid 200 206 1h;

5 Enable passing range requests to the proxied server by passing the $slice_range variable in the Range header field:

proxy_set_header  Range $slice_range;

Byte-range caching example:

location / {
    slice             1m;
    proxy_cache       cache;
    proxy_cache_key   $uri$is_args$args$slice_range;
    proxy_set_header  Range $slice_range;
    proxy_cache_valid 200 206 1h;
    proxy_pass        http://localhost:8000;
}

Note that if slice caching is turned on, the initial file should not be changed.

2-5-7) Combined Configuration Example

The following sample configuration combines some of the caching options described above:

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m loader_threshold=300
                     loader_files=200 max_size=200m;

    server {
        listen 8080;
        proxy_cache one;

        location / {
            proxy_pass http://backend1;
        }

        location /some/path {
            proxy_pass http://backend2;
            proxy_cache_valid any 1m;
            proxy_cache_min_uses 3;
            proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
        }
    }
}