LOAD BALANCING WITH NGINX

Load Balancing in Nginx

       ——《Nginx From Beginner to Pro》

Now that you have learned about the basics of load balancing and advantages of using a software load balancer, let’s move forward and work on the Nginx servers you already created in the previous chapters.
Clean Up the Servers
Before setting up anything new, clean up the previous applications so that you can start afresh. This keeps things simpler. You will be setting up applications in different ways in the upcoming sections of this chapter. The idea is to give you information about different scenarios from a practical perspective.
1. Log on to the WFE1 using ssh -p 3026 user1@127.0.0.1
2. Remove everything from the Nginx home directory.
sudo rm -rf /usr/share/nginx/html/*
3. Reset your configuration ( sudo vi /etc/nginx/nginx.conf ) to the following:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user - [$time_local] - $document_root - $document_uri - '
'$request - $status - $body_bytes_sent - $http_referer';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
index index.html index.htm;
include /etc/nginx/conf.d/*.conf;
}
4. Now, remove the entries in conf.d by using the following command:
sudo rm -f /etc/nginx/conf.d/*.conf
5. Repeat the steps for WFE2.

Create Web Content
Let's create some content so that it is easy to identify which server served the request. In practical situations, the content on WFE1 and WFE2 will be the same for the same application. Run the following command on
both WFE1 and WFE2:
uname -n | sudo tee /usr/share/nginx/html/index.html
This command is pretty straightforward. It uses the output of uname -n and dumps it in a file called index.html in the default root location of Nginx. View the content and ensure that the output is different on both the servers.
$ cat /usr/share/nginx/html/index.html
wfe1.localdomain

Configure WFE1 and WFE2
The content is available on both servers now, but since you have already cleaned up the configuration you will need to re-create the configuration file by using the following command:
sudo cp /etc/nginx/conf.d/default.template /etc/nginx/conf.d/main.conf

The command will create a copy of the configuration for a default website. If you recall, default.template contained the following text:
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
• Restart the service: sudo systemctl restart nginx.
• Repeat the steps on WFE2.
• Once done, you should be able to execute curl localhost on both servers, and you should get wfe1.localdomain and wfe2.localdomain respectively as output. Notice that even though the request is the same ( curl localhost ), the output is different. In practice, the output will be the same from both servers.
Set Up NLB Server
Setting up an NLB server is no different than setting up a regular web server. The installation steps are similar to what you have learned already. The configuration, however, is different and you will learn about it in the upcoming sections.
1. Create a new virtual machine called NLB.
2. Set up a NAT configuration as you have learned in previous chapters. It should look similar to Figure 8-4 .

3. Install Nginx (refer to chapter 2 ) on the NLB server.
4. Since it is a new server, when you execute curl localhost , you will see the
default welcome page. You can ignore it for the time being.
5. Open the configuration file ( /etc/nginx/conf.d/default.conf ) and make the
changes as follows:
upstream backend{
server 10.0.2.6;
server 10.0.2.7;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
6. Restart the service.
7. Try the following command a few times and notice how it gives you output from
WFE1 and WFE2 in an alternate fashion.
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe2.localdomain
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe2.localdomain

So, what just happened? Basically, you have set up a load balancer using Nginx and what you saw was the load balancer in action. It was extremely simple, right? There are a couple of directives at play here.
• upstream directive: The upstream directive defines a group of servers. Each server directive points to an upstream server. The servers can listen on different ports if needed, and you can also mix TCP and UNIX-domain sockets if required (see the sketch after this list). You will learn more about this in the upcoming scenarios.
• proxy_pass directive: This directive sets the address of a proxied server. Notice that in this case, the address was defined as backend, which in turn contained multiple servers. By default, if a domain resolves to several addresses, all of them will be used in a round-robin fashion.
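As a hedged illustration of that flexibility, here is a minimal sketch of an upstream group mixing TCP ports and a UNIX-domain socket (the non-default port and the socket path are hypothetical, not part of this chapter's lab setup):
upstream backend {
server 10.0.2.6:8080; # TCP, non-default port
server 10.0.2.7; # TCP, port 80 by default
server unix:/var/run/app.sock; # UNIX-domain socket local to the NLB host
}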
Load Balancing Algorithms
When a load balancer is configured, you need to think about various factors. It helps if you know the application and its underlying architecture. Once you have these details, you will need to configure some Nginx parameters so that you can route the traffic accordingly. There are various algorithms that you can use based on your needs. You will learn about them next.
Round Robin
This is the default configuration. When the algorithm is not defined, the requests are served in round-robin fashion. At a glance, it might appear way too simple to be useful. But it is actually quite powerful. It ensures that your servers are equally balanced and each one is working equally hard.
Let's assume that you have two servers, and due to the nature of your application you would like three requests to go to the first server (WFE1) for every one request to the second server (WFE2). This way, you can route traffic in a specific ratio to multiple servers. To achieve this, you can assign weights to your server definitions in the configuration file as follows.
upstream backend{
server 10.0.2.6 weight=3;
server 10.0.2.7 weight=1;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
Reload the Nginx configuration and try executing curl localhost multiple times. Note that three requests went to the WFE1 server, whereas one request went to WFE2.
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe2.localdomain

In scenarios where you cannot easily determine the ratio or weight, you can simply use the least-connected algorithm: the request will be routed to the server with the least number of active connections. This often leads to good load-balanced performance. To configure this, you can use the configuration file like so:
upstream backend{
least_conn;
server 10.0.2.6 weight=1;
server 10.0.2.7 weight=1;
}
Without a load testing tool, it will be hard to observe the behavior from the command line. But the idea is fairly simple: apart from the least number of active connections, you can also apply weights to the servers, and it will work as expected.
IP Hash
There are quite a few applications that maintain state on the server, especially dynamic ones like PHP, Node, ASP.NET, and so on. To give a practical example, let's say the application creates a temporary file for a specific client and updates the client about the progress. If you use one of the round-robin algorithms, a subsequent request might land on another server, and the new server might have no clue about the file processing that started on the previous server. To avoid such scenarios, you can make the session sticky, so that once a request from a specific client has reached a server, Nginx continues to route traffic from that client to the same server. To achieve this, you must use the ip_hash directive like so:
upstream backend{
ip_hash;
server 10.0.2.6;
server 10.0.2.7;
}
The configuration above ensures that requests from a given client always reach the same server, based on a hash of the client's IP address. The only exception is when that server is down, in which case the request can land on another server.
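One related detail worth knowing: if you need to take an upstream server out of rotation temporarily while ip_hash is in use, mark it with the down parameter so that the hash mapping of the remaining clients is preserved. A minimal sketch, reusing the same IPs:
upstream backend {
ip_hash;
server 10.0.2.6;
server 10.0.2.7 down; # temporarily removed; other clients keep their server mapping
}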
Generic Hash
The generic hash algorithm is conceptually similar to an IP hash. The difference is that for each request the load balancer calculates a hash based on a combination of text and Nginx variables that you specify, and it sends all requests with that hash to a specific server. Take a look at the following configuration, where the hash algorithm is used with the variables $scheme (http or https) and $request_uri (URI of the request):
upstream backend{
hash $scheme$request_uri;
server 10.0.2.6;
server 10.0.2.7;
}

Bear in mind that a hash algorithm will most likely not distribute the load evenly. The same is true for an IP hash. The reason you might still end up using it is your application's requirement for a sticky session. Nginx PLUS offers more sophisticated configuration options when it comes to session persistence. The best use case for hash is probably when you have a dynamic page that performs data-intensive operations that are cacheable. In this case, requests for that dynamic page go to one server only, which caches the result and keeps serving the cached result, saving the effort required at the database side and on all the other servers.
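As a side note, the hash directive also accepts an optional consistent parameter, which switches to ketama consistent hashing so that adding or removing a server remaps only a fraction of the keys. That behavior is friendlier to the caching use case just described. A minimal sketch:
upstream backend {
hash $scheme$request_uri consistent; # consistent (ketama) hashing
server 10.0.2.6;
server 10.0.2.7;
}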
Least Time (Nginx PLUS), Optionally Weighted
Nginx PLUS has an additional algorithm that can be used. It is called the least time method where the load balancer mathematically combines two metrics for each server—the current number of active connections and a weighted average response time for past requests —and sends the request to the server with the lowest value. This is a smarter and more effective way of doing load balancing with heuristics.
You can choose a parameter on the least_time directive so that either the time to receive the response header or the time to receive the full response is considered. The configuration looks like so:
upstream backend{
least_time (header | last_byte);
server 10.0.2.6 weight=1;
server 10.0.2.7 weight=1;
}
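The (header | last_byte) notation above means you pick exactly one of the two values. For example, to balance on the time to receive the response header, the directive would read:
upstream backend {
least_time header; # use time-to-first-byte; last_byte would use full response time
server 10.0.2.6 weight=1;
server 10.0.2.7 weight=1;
}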
Most Suitable Algorithm
There is no silver bullet or straightforward method to tell you which method will suit you best. There are plenty of variables that need to be carefully determined before you choose the most suitable method. In general, least connections and least time are considered the best choices for the majority of workloads.
Round robin works best when the servers have about the same capacity, host the same content, and the requests are pretty similar in nature. If the traffic volume pushes every server to its limit, round robin might push all the servers over the edge at roughly the same time, causing outages.
You should use load testing tools and various tests to figure out which algorithm works best for you. One thing that often helps you make a good decision is knowledge of the application's underlying architecture.

If you are well aware of the application and its components, you will be more comfortable doing appropriate capacity planning.
You will learn about load testing tools, performance, and benchmarking in the upcoming chapters.
Load Balancing Scenarios
So far in this chapter you have seen an Nginx load balancer routing to back-end Nginx servers. This is not a mandatory requirement. You can use Nginx to route traffic to any other web server. As a matter of fact, that is what is mostly done in practical scenarios, and as long as the request is HTTP based, it will just work.
Nginx routes the request based on the mapped URI. You can easily use Nginx to front-end PHP, ASP.NET, Node.js, or any other application for that matter, and enjoy the benefits of Nginx, as you will see in the upcoming scenarios.

Nginx Routing Request to Express/Node.js
If you recall, in the previous chapter you configured Nginx for the MEAN stack. Assuming WFE1 and WFE2 are hosting applications based on the MEAN stack and the application is running on port 3000, your NLB server's configuration will look like the following:
upstream nodeapp {
server 10.0.2.6:3000;
server 10.0.2.7:3000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://nodeapp;
}
}
A common mistake is forgetting to open the additional ports in the firewall. Ensure that the ports are opened explicitly by using the following commands on both WFE1 and WFE2:
[user1@wfe1 ~]$ sudo firewall-cmd --permanent --add-port=3000/tcp
success
[user1@wfe1 ~]$ sudo firewall-cmd --reload
success
Once you have opened the ports, Nginx will start routing the requests successfully. Note that the opened ports are not exposed to the Internet; they are open only to the Nginx server that is load balancing the requests.
Passing the HOST Header
Since everything has been working in these simple demos, it might mislead you into thinking that all you need to pass to the back-end server is the URI. For real world applications you might have additional information in request headers that—if missed—will break the functionality of the application. In other words, the request coming from Nginx to the back-end servers will look different than a request coming directly from the client. This is because Nginx makes some adjustments to headers that it receives from the client. It is important that you are aware of these nuances.
• Nginx gets rid of any empty headers for performance reasons.
• Any header that contains an underscore is considered invalid and is eventually dropped from the headers collection. You can override this behavior by explicitly setting underscores_in_headers on;
• The "HOST" header is set to the value of $proxy_host, which is a variable containing the domain name or IP address taken from the proxy_pass definition. In the configuration that follows, it will be backend.
• Connection header is added and set to close.

You can tweak the header information before passing it on by using the proxy_set_header directive.
Consider the following configuration in the NLB:
upstream backend{
server 10.0.2.6;
server 10.0.2.7;
}
server {
listen 80;
location / {
proxy_set_header HOST $host;
proxy_pass http://backend;
}
}
In the previous configuration, an explicit HOST header has been set using the proxy_set_header directive.
To view the effect, follow these steps:
• Ensure that your NLB configuration appears as the previous configuration block.
Restart Nginx service.
• On WFE1, change the nginx.conf ( sudo vi /etc/nginx/nginx.conf ) such that the
log_format has an additional field called $host as follows:
log_format main '$host - $remote_addr - $remote_user - [$time_local] - $document_root - $document_uri - $request - $status - $body_bytes_sent - $http_referer';
• Save the file and exit. Restart Nginx service.
• Switch back to NLB and make a few requests using curl localhost
• View the logs on the WFE1 using sudo tail /var/log/nginx/access.log -n 3.
[user1@wfe1 ~]$ sudo tail /var/log/nginx/access.log -n 3
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
• As you can see, the requests had localhost as the hostname; this is because you have used proxy_set_header HOST $host.
• To view what the result would have looked like without this header change, comment the line in NLB’s configuration:
location / {
# proxy_set_header HOST $host;
proxy_pass http://backend;
}

• Restart Nginx on NLB and retry curl localhost a few times.
• If you view the logs on WFE1 using the tail command, you should see an output
similar to this:
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
backend - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
backend - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
• Notice the last couple of lines, where the hostname appears as backend. This is the default behavior of Nginx if you don't set the HOST header explicitly. Based on your application, you might need to set this header explicitly or ignore it in the NLB configuration.
Forwarding IP Information
Since the requests are forwarded to the back end, the back-end servers have no information about where the requests actually came from; to them, the NLB is the client. There are scenarios where you might want to log information about the actual visitors. To do that, you can use proxy_set_header just as you did in the previous example, but with different variables like so:
location / {
proxy_set_header HOST $proxy_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://backend;
}
In this configuration apart from setting HOST header, you are also setting the following headers:
• X-Real-IP is set to the $remote_addr variable, which contains the actual client IP.
• X-Forwarded-For is another header set here, to $proxy_add_x_forwarded_for. This variable contains the client IP ($remote_addr) appended, comma separated, to any X-Forwarded-For header already present in the incoming request.
• To log the actual client IP, you should now modify the log_format to include the $http_x_real_ip variable, which contains the real client IP information (see the sketch after this list).
• By default, the X-Real-IP value is available in $http_x_real_ip. You can change this behavior by using real_ip_header X-Forwarded-For; in your http, location, or server block in order to use the value of the X-Forwarded-For header instead of the X-Real-IP header.

Buffering
As you can guess, with an NLB between the client and the back-end server, there are two hops for every request. This may adversely affect the client's experience. If buffers are not used, data sent from the back-end server is immediately transmitted to the client. If the clients are fast, they can consume the data immediately and buffering can be turned off. For practical purposes, though, clients will typically not be as fast as the server at consuming data. In that case, turning buffering on tells Nginx to hold the back-end data temporarily and feed it to the client at the client's pace. This feature allows the back ends to be freed up quickly, since they simply hand the data to the Nginx NLB. By default, buffering is on in Nginx
and controlled using the following directives:
• proxy_buffering: Default value is on, and it can be set in http, server, and location blocks.
• proxy_buffers number size : proxy_buffers directive allows you to set the number of
buffers along with its size for a single connection. By default, the size is equal to one
memory page, and is either 4K or 8K depending on the platform.
• proxy_buffer_size size : The headers of the response are buffered separately from the
rest of the response. This directive sets that size, and defaults to proxy_buffers size.
• proxy_max_temp_file_size size : If the response is too large, it can be stored in a
temporary file. This directive sets the maximum size of the temporary file.
• proxy_temp_file_write_size size : This directive governs the size of data written to the
file at a time. If you use 0 as the value, it disables writing temporary files completely.
• proxy_temp_path path : This directive defines the directory where temporary files are
written.
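Tying these directives together, a hedged example of a location block on the NLB might look like this (the sizes are illustrative, not tuning recommendations):
location / {
proxy_buffering on; # the default; shown here for clarity
proxy_buffers 8 8k; # 8 buffers of 8K each per connection
proxy_buffer_size 8k; # separate buffer for the response headers
proxy_max_temp_file_size 1m; # cap the temporary file for oversized responses
proxy_pass http://backend;
}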
Nginx Caching
Buffering in Nginx helps the back-end servers by offloading data transmission to the clients. But the request still reaches the back-end server to begin with. Quite often, you will have static content, like third-party JavaScript libraries, CSS, images, PDFs, etc., that doesn't change at all, or rarely changes. In these cases, it makes sense to make a copy of the data on the NLB itself, so that subsequent requests can be served directly from the NLB instead of fetching the data every time from the back-end servers. This process is called caching.
To achieve this, you can use the proxy_cache_path directive in the http block like so:
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m;
Before you use this directive, create the path as follows and set appropriate permissions:
mkdir -p /data/nginx/cache
chown nginx /data/nginx/cache
chmod 700 /data/nginx/cache
• levels defines the number of subdirectory levels Nginx will create to hold the cached files. Having a large number of files in one flat directory slows down access, so it is recommended to have at least a two-level directory hierarchy.
• keys_zone defines the area in memory that holds information about cached file keys. In this case a 10MB zone is created, and it should be able to hold roughly 80,000 keys.

• max_size allocates 10GB of space for the cached files. If the size grows beyond this, the cache manager process trims it down by removing the least recently used files.
• inactive=60m implies the number of minutes the cache can remain valid in case it is not used. Effectively, if the file is not used for 60 minutes, it will be purged from the cache automatically.
By default, Nginx caches all responses to requests made with the HTTP GET and HEAD methods. You can also cache dynamic content, where the data is fetched from a content management system but changes less frequently, using fastcgi_cache. You will learn about caching in detail in chapter 12.
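Note that proxy_cache_path only defines the cache; to actually use it, you reference the zone with the proxy_cache directive in a server or location block. A minimal sketch based on the my_cache zone defined above (the validity windows are illustrative):
location / {
proxy_cache my_cache; # reference the keys_zone defined earlier
proxy_cache_valid 200 302 10m; # cache successful responses for 10 minutes
proxy_cache_valid 404 1m; # cache 404s briefly
proxy_pass http://backend;
}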
Server Directive Additional Parameters
The server directive has more parameters that come in handy in certain scenarios. The parameters are fairly straightforward to use and simply require the following format:
server address [parameters]
You have already seen the server address in use with weight. Let's learn about some additional parameters; a combined example follows the list.
• max_fails=number: Sets the number of unsuccessful attempts before considering the server unavailable for a duration. If this value is set to 0, it disables the accounting of
attempts.
• fail_timeout=time: Sets the duration in which max_fails should happen. For example, if max_fails parameter is set to 3, and fail_timeout is set to 10 seconds, it would imply that there should be 3 failures in 10 seconds so that the server could be considered unavailable.
• backup: Marks the server as a backup server. It will be passed requests when the primary servers are unavailable.
• down: Marks the server as permanently unavailable.
• max_conns=number: Limits the maximum number of simultaneous active connections. Default value of 0 implies no limit.
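Here is a hedged example combining several of these parameters in one upstream block (the values, and the third server, are illustrative):
upstream backend {
server 10.0.2.6 weight=2 max_fails=3 fail_timeout=10s;
server 10.0.2.7 max_conns=100;
server 10.0.2.8 backup; # hypothetical spare, used only when the others are unavailable
}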
Configure Nginx (PLUS) for Health Checks
The free version of Nginx lacks an important directive called health_check. This feature is available in Nginx PLUS, and enabling it gives you a lot of options related to the health of the upstream servers.
• interval=time : Sets the interval between two health checks. The default value is 5
seconds and it implies that the server checks the upstream servers every 5 seconds.
• fails=number : If the upstream server fails x number of times, it will be considered
unhealthy. The default value is 1.
• passes=number : Once considered unhealthy, the upstream server needs to pass the
test x number of times before it could be considered healthy. The default value is 1.
• uri=path: Defines the URI used in health check requests. The default value is /.
• match=name: You can specify a block with the expected output in order for the test to succeed. In the following configuration, the test ensures that the output has a status code of 200 and the body contains "Welcome to nginx!":
http {
server {
location / {
proxy_pass http://backend;
health_check match=welcome;
}
}
match welcome {
status 200;
header Content-Type = text/html;
body ~ “Welcome to nginx!”;
}
}
• If you specify multiple checks, any single failure will cause the server to be considered unhealthy.
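Note that active health checks in Nginx PLUS also require a shared memory zone in the upstream group, so that all worker processes share the group's state. A minimal sketch, assuming Nginx PLUS:
upstream backend {
zone backend 64k; # shared memory for upstream state; required for health_check
server 10.0.2.6;
server 10.0.2.7;
}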
Activity Monitoring in Nginx (PLUS)
Nginx PLUS includes a real-time activity monitoring interface that provides load and performance metrics.
It uses a RESTful JSON interface, and hence it is very easy to customize. There are plenty of third-party monitoring tools that take advantage of the JSON interface and provide you with a comprehensive dashboard for performance monitoring.
You can also use the following configuration block to configure Nginx PLUS for status monitoring.
server {
listen 8080;
root /usr/share/nginx/html;
# Redirect requests for / to /status.html
location = / {
return 301 /status.html;
}
location = /status.html { }
location /status {
allow x.x.x.x/16; # permit access from local network
deny all; # deny access from everywhere else
status;
}
}

Status is a special handler in Nginx PLUS. The configuration here is using port 8080 to view the detailed status of Nginx requests. To give you a better idea of the console, the Nginx team has set up a live demo page that can be accessed at http://demo.nginx.com/status.html .
Summary
In this chapter, you have learned about the basic fundamentals of high availability and why it matters. You should also be comfortable with the basic concepts about hardware and software load balancing. Nginx is an awesome product for software load balancing and you have learned about how easily you can set it up in your web farm. The architecture of Nginx allows you to have a very small touch point for front-end servers, and the flexibility ensures that you can customize it precisely based on your requirements. You can scale out your farm easily with Nginx, and use Nginx PLUS to achieve even more robustness in your production farm when the need arises.

Nginx Load Balancing

tar xzvf nginx-1.7.7.tar.gz
cd nginx-1.7.7
./configure --with-http_realip_module --with-http_flv_module --with-http_sub_module --with-http_mp4_module --with-http_stub_status_module
make && make install

Configuration file:
user nobody;
worker_processes 8;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

pid logs/nginx.pid;

events {
worker_connections 20480;
}

http {
include mime.types;
default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

#access_log logs/access.log main;
access_log /dev/null main;

sendfile on;
#tcp_nopush on;

#keepalive_timeout 0;
keepalive_timeout 65;

#gzip on;
upstream test1 {
server 192.168.1.202:80 weight=1 max_fails=2 fail_timeout=20s;
server 192.168.1.203:80 weight=1 max_fails=2 fail_timeout=20s;
server 192.168.1.204:80 weight=1 max_fails=2 fail_timeout=20s;
server 192.168.1.205:80 weight=1 max_fails=2 fail_timeout=20s;

}

server {
listen 80;
server_name test1.com;

#charset koi8-r;

#access_log logs/host.access.log main;

location / {
proxy_pass http://test1/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
}

location /nginx_status {
stub_status on;
access_log off;
allow 192.168.0.0/16;
deny all;
}
#error_page 404 /404.html;

# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}

}

server {
listen 80;
server_name test3.com;
rewrite ^/(.*) http://www.baidu.com/$1 permanent;

}

server {
listen 80;
server_name test4.com;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
proxy_pass http://www.163.com;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;

}

access_log logs/_access.log main;

}

}

Top 20 Nginx WebServer Best Security Practices

Nginx is a lightweight, high-performance web server/reverse proxy and e-mail (IMAP/POP3) proxy. It runs on UNIX, GNU/Linux, BSD variants, Mac OS X, Solaris, and Microsoft Windows. According to Netcraft, 6% of all domains on the Internet use the nginx webserver. Nginx is one of a handful of servers written to address the C10K problem. Unlike traditional servers, Nginx doesn't rely on threads to handle requests. Instead it uses a much more scalable event-driven (asynchronous) architecture. Nginx powers several high-traffic web sites, such as WordPress, Hulu, Github, and SourceForge. This page collects hints on how to improve the security of nginx web servers running on Linux or UNIX-like operating systems.
Default Config Files and Nginx Port

/usr/local/nginx/conf/ – The nginx server configuration directory; /usr/local/nginx/conf/nginx.conf is the main configuration file.
/usr/local/nginx/html/ – The default document location.
/usr/local/nginx/logs/ – The default log file location.
Nginx HTTP default port : TCP 80
Nginx HTTPS default port : TCP 443

You can test nginx configuration changes as follows:
# /usr/local/nginx/sbin/nginx -t
Sample outputs:

the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
configuration file /usr/local/nginx/conf/nginx.conf test is successful

To load config changes, type:
# /usr/local/nginx/sbin/nginx -s reload
To stop server, type:
# /usr/local/nginx/sbin/nginx -s stop
#1: Turn On SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for supporting access control security policies and offers great protection. It can stop many attacks before your system is rooted. See how to turn on SELinux for CentOS / RHEL based systems.
Do Boolean Lockdown

Run the getsebool -a command and lock down the system:

getsebool -a | less
getsebool -a | grep off
getsebool -a | grep on

To secure the machine, look at settings which are set to 'on' and change them to 'off' if they do not apply to your setup, with the help of the setsebool command. Set the correct SELinux booleans to maintain functionality and protection. Please note that SELinux adds 2-8% overhead to a typical RHEL or CentOS installation.
#2: Allow Minimal Privileges Via Mount Options

Serve all your webpages / html / php files from a separate partition. For example, create a partition called /dev/sda5 and mount it at /nginx. Make sure /nginx is mounted with the noexec, nodev, and nosuid options. Here is my /etc/fstab entry for mounting /nginx:

LABEL=/nginx /nginx ext3 defaults,nosuid,noexec,nodev 1 2

Note you need to create a new partition using fdisk and mkfs.ext3 commands.
#3: Linux /etc/sysctl.conf Hardening

You can control and configure Linux kernel and networking settings via /etc/sysctl.conf.

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1

# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0

# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1

# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1

# Optimization for port use for LBs
# Increase system file descriptor limit
fs.file-max = 65535

# Allow for more PIDs (to reduce rollover problems); may break some programs 32768
kernel.pid_max = 65536

# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608

# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1

See also:

Linux Tuning The VM (memory) Subsystem
Linux Tune Network Stack (Buffers Size) To Increase Networking Performance

#4: Remove All Unwanted Nginx Modules

You need to minimize the number of modules that are compiled directly into the nginx binary. This minimizes risk by limiting the capabilities allowed by the webserver. You can configure and install nginx using only the required modules. For example, to disable the SSI and autoindex modules you can type:
# ./configure --without-http_autoindex_module --without-http_ssi_module
# make
# make install
Type the following command to see which modules can be turned on or off while compiling nginx server:
# ./configure –help | less
Disable nginx modules that you don’t need.
(Optional) Change Nginx Version Header

Edit src/http/ngx_http_header_filter_module.c, enter:
# vi +48 src/http/ngx_http_header_filter_module.c
Find the lines:

static char ngx_http_server_string[] = "Server: nginx" CRLF;
static char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;

Change them as follows:

static char ngx_http_server_string[] = "Server: Ninja Web Server" CRLF;
static char ngx_http_server_full_string[] = "Server: Ninja Web Server" CRLF;

Save and close the file. Now you can compile the server. Add the following to nginx.conf to turn off the nginx version number displayed on all auto-generated error pages:

server_tokens off;

#5: Use mod_security (only for backend Apache servers)

mod_security provides an application level firewall for Apache. Install mod_security for all backend Apache web servers. This will stop many injection attacks.
#6: Install SELinux Policy To Harden The Nginx Webserver

By default SELinux will not protect the nginx web server. However, you can install and compile protection as follows. First, install required SELinux compile time support:
# yum -y install selinux-policy-targeted selinux-policy-devel
Download targeted SELinux policies to harden the nginx webserver on Linux servers from the project home page:
# cd /opt
# wget 'http://downloads.sourceforge.net/project/selinuxnginx/se-ngix_1_0_10.tar.gz?use_mirror=nchc'
Untar the same:
# tar -zxvf se-ngix_1_0_10.tar.gz
Compile the same
# cd se-ngix_1_0_10/nginx
# make
Sample outputs:

Compiling targeted nginx module
/usr/bin/checkmodule: loading policy configuration from tmp/nginx.tmp
/usr/bin/checkmodule: policy configuration loaded
/usr/bin/checkmodule: writing binary representation (version 6) to tmp/nginx.mod
Creating targeted nginx.pp policy package
rm tmp/nginx.mod.fc tmp/nginx.mod

Install the resulting nginx.pp SELinux module:
# /usr/sbin/semodule -i nginx.pp
#7: Restrictive Iptables Based Firewall

The following firewall script blocks everything and only allows:

Incoming HTTP (TCP port 80) requests
Incoming ICMP ping requests
Outgoing ntp (port 123) requests
Outgoing smtp (TCP port 25) requests

#!/bin/bash
IPT="/sbin/iptables"

#### IPS ######
# Get server public ip
SERVER_IP=$(ifconfig eth0 | grep 'inet addr:' | awk -F'inet addr:' '{ print $2}' | awk '{ print $1}')
LB1_IP="204.54.1.1"
LB2_IP="204.54.1.2"

# Do some smart logic so that we can use the same script on LB2 too
OTHER_LB=""
[[ "$SERVER_IP" == "$LB1_IP" ]] && OTHER_LB="$LB2_IP" || OTHER_LB="$LB1_IP"
[[ "$OTHER_LB" == "$LB2_IP" ]] && OPP_LB="$LB1_IP" || OPP_LB="$LB2_IP"

### IPs ###
PUB_SSH_ONLY="122.xx.yy.zz/29"

#### FILES #####
BLOCKED_IP_TDB=/root/.fw/blocked.ip.txt
SPOOFIP="127.0.0.0/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16 0.0.0.0/8 240.0.0.0/4 255.255.255.255/32 168.254.0.0/16 224.0.0.0/4 240.0.0.0/5 248.0.0.0/5 192.0.2.0/24"
BADIPS=$( [[ -f ${BLOCKED_IP_TDB} ]] && egrep -v "^#|^$" ${BLOCKED_IP_TDB})

### Interfaces ###
PUB_IF="eth0" # public interface
LO_IF="lo" # loopback
VPN_IF="eth1" # vpn / private net

### start firewall ###
echo "Setting LB1 $(hostname) Firewall..."

# DROP and close everything
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP

# Unlimited lo access
$IPT -A INPUT -i ${LO_IF} -j ACCEPT
$IPT -A OUTPUT -o ${LO_IF} -j ACCEPT

# Unlimited vpn / pnet access
$IPT -A INPUT -i ${VPN_IF} -j ACCEPT
$IPT -A OUTPUT -o ${VPN_IF} -j ACCEPT

# Drop sync
$IPT -A INPUT -i ${PUB_IF} -p tcp ! --syn -m state --state NEW -j DROP

# Drop Fragments
$IPT -A INPUT -i ${PUB_IF} -f -j DROP

$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL FIN,URG,PSH -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL ALL -j DROP

# Drop NULL packets
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " NULL Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -j DROP

$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,RST SYN,RST -j DROP

# Drop XMAS
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " XMAS Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP

# Drop FIN packet scans
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " Fin Packets Scan "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -j DROP

$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

# Log and get rid of broadcast / multicast and invalid
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j LOG --log-prefix " Broadcast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j DROP

$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j LOG --log-prefix " Multicast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j DROP

$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j LOG --log-prefix " Invalid "
$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j DROP

# Log and block spoofed ips
$IPT -N spooflist
for ipblock in $SPOOFIP
do
$IPT -A spooflist -i ${PUB_IF} -s $ipblock -j LOG --log-prefix " SPOOF List Block "
$IPT -A spooflist -i ${PUB_IF} -s $ipblock -j DROP
done
$IPT -I INPUT -j spooflist
$IPT -I OUTPUT -j spooflist
$IPT -I FORWARD -j spooflist

# Allow ssh only from selected public ips
for ip in ${PUB_SSH_ONLY}
do
$IPT -A INPUT -i ${PUB_IF} -s ${ip} -p tcp -d ${SERVER_IP} --destination-port 22 -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -d ${ip} -p tcp -s ${SERVER_IP} --sport 22 -j ACCEPT
done

# allow incoming ICMP ping pong stuff
$IPT -A INPUT -i ${PUB_IF} -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -m limit --limit 30/sec -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p icmp --icmp-type 0 -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow incoming HTTP port 80
$IPT -A INPUT -i ${PUB_IF} -p tcp -s 0/0 --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --sport 80 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

# allow outgoing ntp
$IPT -A OUTPUT -o ${PUB_IF} -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT

# allow outgoing smtp
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT

### add your other rules here ####

#######################
# drop and log everything else
$IPT -A INPUT -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " DEFAULT DROP "
$IPT -A INPUT -j DROP

exit 0

#8: Controlling Buffer Overflow Attacks

Edit nginx.conf:
# vi /usr/local/nginx/conf/nginx.conf
Set the buffer size limitations for all clients as follows:

## Start: Size Limits & Buffer Overflows ##
client_body_buffer_size 1K;
client_header_buffer_size 1k;
client_max_body_size 1k;
large_client_header_buffers 2 1k;
## END: Size Limits & Buffer Overflows ##

Where,

client_body_buffer_size 1k – (default is 8k or 16k) This directive specifies the client request body buffer size.
client_header_buffer_size 1k – This directive sets the header buffer size for the request header from the client. For the overwhelming majority of requests a buffer size of 1K is sufficient. Increase this if you have a custom header or a large cookie sent from the client (e.g., a WAP client).
client_max_body_size 1k – This directive assigns the maximum accepted body size of a client request, indicated by the Content-Length line in the request header. If the size is greater than the given one, the client gets the error "Request Entity Too Large" (413). Increase this when you accept file uploads via the POST method.
large_client_header_buffers 2 1k – This directive assigns the maximum number and size of buffers for reading large headers from the client request. By default the size of one buffer is equal to the page size, either 4K or 8K depending on the platform; if the connection converts to the keep-alive state at the end of the request, these buffers are freed. 2x1k will accept a 2 kB data URI. This will also help combat bad bots and DoS attacks.

You also need to control timeouts to improve server performance and cut off idle clients. Edit as follows:

## Start: Timeouts ##
client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;
## End: Timeouts ##

client_body_timeout 10; – This directive sets the read timeout for the request body from the client. The timeout applies only if the body is not obtained in one read step. If the client sends nothing within this time, nginx returns the error "Request time out" (408). The default is 60.
client_header_timeout 10; – This directive sets the timeout for reading the client request header. The timeout applies only if the header is not obtained in one read step. If the client sends nothing within this time, nginx returns the error "Request time out" (408).
keepalive_timeout 5 5; – The first parameter assigns the timeout for keep-alive connections with the client. The server will close connections after this time. The optional second parameter assigns the time value in the Keep-Alive: timeout=time response header. This header can convince some browsers to close the connection, so that the server does not have to. Without this parameter, nginx does not send a Keep-Alive header (though this is not what makes a connection "keep-alive").
send_timeout 10; – This directive assigns the response timeout to the client. The timeout is established not for the entire transfer of the answer, but only between two read operations; if the client takes nothing within this time, nginx shuts down the connection.

#9: Control Simultaneous Connections

You can use the NginxHttpLimitZone module to limit the number of simultaneous connections for the assigned session or, as a special case, from one IP address. Edit nginx.conf:

### Directive describes the zone, in which the session states are stored i.e. store in slimits. ###
### 1m can handle 32000 sessions with 32 bytes/session, set to 5m x 32000 session ###
limit_zone slimits $binary_remote_addr 5m;

### Control maximum number of simultaneous connections for one session i.e. ###
### restricts the amount of connections from a single ip address ###
limit_conn slimits 5;

The above limits remote clients to no more than 5 concurrently open connections per remote IP address.
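A note of caution: limit_zone was deprecated in nginx 1.1.8 in favor of the ngx_http_limit_conn_module. On a modern nginx, the equivalent configuration (a sketch using the same 5m zone and 5-connection cap) would be:

## http block ##
limit_conn_zone $binary_remote_addr zone=slimits:5m;

## server or location block ##
limit_conn slimits 5;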
#10: Allow Access To Our Domain Only

If a bot is just making random scans of servers across all domains, deny it. You must only allow requests for your configured virtual domains or reverse proxy requests. You don't want the server to respond to requests made using a bare IP address:

## Only requests to our Host are allowed i.e. nixcraft.in, images.nixcraft.in and www.nixcraft.in
if ($host !~ ^(nixcraft.in|www.nixcraft.in|images.nixcraft.in)$ ) {
return 444;
}
##

#11: Limit Available Methods

GET and POST are the most common methods on the Internet. Web server methods are defined in RFC 2616. If a web server does not require the implementation of all available methods, they should be disabled. The following will filter and only allow GET, HEAD and POST methods:

## Only allow these request methods ##
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
return 444;
}
## Do not accept DELETE, SEARCH and other methods ##

More About HTTP Methods

The GET method is used to request a document, such as http://www.cyberciti.biz/index.php.
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
The POST method may involve anything, like storing or updating data, ordering a product, or sending e-mail by submitting a form. This is usually processed using server-side scripting such as PHP, PERL, Python, and so on. You must allow it if you want to upload files and process forms on the server.

#12: How Do I Deny Certain User-Agents?

You can easily block user-agents i.e. scanners, bots, and spammers who may be abusing your server.

## Block download agents ##
if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
return 403;
}
##

Block robots called msnbot and scrapbot:

## Block some robots ##
if ($http_user_agent ~* msnbot|scrapbot) {
return 403;
}

#12: How Do I Block Referral Spam?

Referer spam is dangerous. It can harm your SEO ranking via web logs (if published), as the referer field refers to the spammers' site. You can block access to referer spammers with these lines.

## Deny certain Referers ###
if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) )
{
# return 404;
return 403;
}
##

#13: How Do I Stop Image Hotlinking?

Image or HTML hotlinking means someone links to one of your images but displays it on their own site. The end result: you end up paying the bandwidth bills, and the content looks like part of the hijacker's site. This is usually done on forums and blogs. I strongly suggest you block and stop image hotlinking at your server level itself.

# Stop deep linking or hot linking
location /images/ {
valid_referers none blocked www.example.com example.com;
if ($invalid_referer) {
return 403;
}
}

Example: Rewrite And Display Image

Another example, rewriting requests for hotlinked images to a banned-notice image:

valid_referers blocked www.example.com example.com;
if ($invalid_referer) {
rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.examples.com/banned.jpg last;
}

See also:

HowTo: Use nginx map to block image hotlinking. This is useful if you want to block tons of domains.

#14: Directory Restrictions

You can set access control for a specified directory. All web directories should be configured on a case-by-case basis, allowing access only where needed.
Limiting Access By IP Address

You can limit access to the /docs/ directory by IP address:

location /docs/ {
## block one workstation
deny 192.168.1.1;
## allow anyone in 192.168.1.0/24
allow 192.168.1.0/24;
## drop rest of the world
deny all;
}

Password Protect The Directory

First create the password file and add a user called vivek:
# mkdir /usr/local/nginx/conf/.htpasswd/
# htpasswd -c /usr/local/nginx/conf/.htpasswd/passwd vivek
Edit nginx.conf and protect the required directories as follows:

### Password Protect /personal-images/ and /delta/ directories ###
location ~ /(personal-images/.*|delta/.*) {
auth_basic "Restricted";
auth_basic_user_file /usr/local/nginx/conf/.htpasswd/passwd;
}

Once a password file has been generated, subsequent users can be added with the following command:
# htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName
#15: Nginx SSL Configuration

HTTP is a plain text protocol and it is open to passive monitoring. You should use SSL to encrypt your content for users.
Create an SSL Certificate

Type the following commands:
# cd /usr/local/nginx/conf
# openssl genrsa -des3 -out server.key 1024
# openssl req -new -key server.key -out server.csr
# cp server.key server.key.org
# openssl rsa -in server.key.org -out server.key
# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Edit nginx.conf and update it as follows:

server {
server_name example.com;
listen 443;
ssl on;
ssl_certificate /usr/local/nginx/conf/server.crt;
ssl_certificate_key /usr/local/nginx/conf/server.key;
access_log /usr/local/nginx/logs/ssl.access.log;
error_log /usr/local/nginx/logs/ssl.error.log;
}

Restart the nginx:
# /usr/local/nginx/sbin/nginx -s reload
See also:

For more information, read the Nginx SSL documentation.

#16: Nginx And PHP Security Tips

PHP is one of the popular server-side scripting languages. Edit /etc/php.ini as follows:

# Disallow dangerous functions
disable_functions = phpinfo, system, mail, exec

## Try to limit resources ##

# Maximum execution time of each script, in seconds
max_execution_time = 30

# Maximum amount of time each script may spend parsing request data
max_input_time = 60

# Maximum amount of memory a script may consume (8MB)
memory_limit = 8M

# Maximum size of POST data that PHP will accept.
post_max_size = 8M

# Whether to allow HTTP file uploads.
file_uploads = Off

# Maximum allowed size for uploaded files.
upload_max_filesize = 2M

# Do not expose PHP error messages to external users
display_errors = Off

# Turn on safe mode
safe_mode = On

# Only allow access to executables in isolated directory
safe_mode_exec_dir = php-required-executables-path

# Limit external access to PHP environment
safe_mode_allowed_env_vars = PHP_

# Restrict PHP information leakage
expose_php = Off

# Log all errors
log_errors = On

# Do not register globals for input data
register_globals = Off

# Minimize allowable PHP post size
post_max_size = 1K

# Ensure PHP redirects appropriately
cgi.force_redirect = 0

# Disallow uploading unless necessary
file_uploads = Off

# Enable SQL safe mode
sql.safe_mode = On

# Avoid Opening remote files
allow_url_fopen = Off

See also:

PHP Security: Limit Resources Used By Script
PHP.INI settings: Disable exec, shell_exec, system, popen and Other Functions To Improve Security

#17: Run Nginx In A Chroot Jail (Containers) If Possible

Putting nginx in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. You can use traditional chroot kind of setup with nginx. If possible use FreeBSD jails, XEN, or OpenVZ virtualization which uses the concept of containers.
#18: Limits Connections Per IP At The Firewall Level

A webserver must keep an eye on connections and limit connections per second. This is serving 101. Both pf and iptables can throttle end users before accessing your nginx server.
Linux Iptables: Throttle Nginx Connections Per Second

The following example will drop incoming connections if an IP makes more than 15 connection attempts to port 80 within 60 seconds:

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP
service iptables save

BSD PF: Throttle Nginx Connections Per Second

Edit your /etc/pf.conf and update it as follows. The following limits the maximum number of connections per source to 100. 15/5 rate limits the number of connections to 15 in a 5-second span. If anyone breaks the rules, they are added to the abusive_ips table and blocked from making any further connections. Finally, the flush keyword kills all states created by the matching rule that originate from the host exceeding these limits.

webserver_ip="202.54.1.1"
table <abusive_ips> persist
block in quick from <abusive_ips>

pass in on $ext_if proto tcp to $webserver_ip port www flags S/SA keep state (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_ips> flush)

Please adjust all values as per your requirements and traffic (browsers may open multiple connections to your site). See also:

Sample PF firewall script.
Sample Iptables firewall script.

#19: Configure Operating System to Protect Web Server

Turn on SELinux as described above. Set correct permissions on the /nginx document root. Nginx runs as a user named nginx. However, the files in the DocumentRoot (/nginx or /usr/local/nginx/html) should not be owned or writable by that user. To find files with wrong ownership, use:
# find /nginx -user nginx
# find /usr/local/nginx/html -user nginx
Make sure you change file ownership to root or another user. A typical set of permissions for /usr/local/nginx/html/:
# ls -l /usr/local/nginx/html/
Sample outputs:

-rw-r--r-- 1 root root 925 Jan 3 00:50 error4xx.html
-rw-r--r-- 1 root root 52 Jan 3 10:00 error5xx.html
-rw-r--r-- 1 root root 134 Jan 3 00:52 index.html

You must delete unwanted backup files created by vi or other text editors:
# find /nginx -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'
# find /usr/local/nginx/html/ -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

Pass the -delete option to the find command and it will get rid of those files too; for example:
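A hedged sketch targeting just the editor backup patterns (test without -delete first, since find's -or operator precedence makes combined expressions easy to get wrong):

# find /usr/local/nginx/html/ -name '*~' -delete
# find /usr/local/nginx/html/ -name '*.bak*' -delete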
#20: Restrict Outgoing Nginx Connections

Crackers will download files locally on your server using tools such as wget. Use iptables to block outgoing connections from the nginx user. The ipt_owner module attempts to match various characteristics of the packet creator for locally generated packets. It is only valid in the OUTPUT chain. In this example, allow the vivek user to connect outside using port 80 (useful for RHN access or to grab CentOS updates via repos):

/sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

Add above rule to your iptables based shell script. Do not allow nginx web server user to connect outside.
Bonus Tip: Watching Your Logs & Auditing

Check the log files. They will give you some understanding of what attacks are thrown against the server and allow you to check whether the necessary level of security is present.
# grep "/login.php??" /usr/local/nginx/logs/access_log
# grep "...etc/passwd" /usr/local/nginx/logs/access_log
# egrep -i "denied|error|warn" /usr/local/nginx/logs/error_log
The auditd service is provided for system auditing. Turn it on to audit SELinux events, authentication events, file modifications, account modifications, and so on. As usual, disable all the services you do not need and follow our "Linux Server Hardening" security tips.
Conclusion

Your nginx server is now properly hardened and ready to serve webpages. However, you should consult further resources for your web application's security needs. For example, WordPress or any other third-party app has its own security requirements.

http://www.cyberciti.biz/tips/linux-unix-bsd-nginx-webserver-security.html