# Gist by @carlessistare, created September 2, 2013.
# Tuning Nginx for heavy load with Node.js as upstream: short requests and high concurrency.
# This number should be, at maximum, the number of CPU cores on your system.
# (since nginx doesn't benefit from more than one worker per CPU.)
worker_processes 8;
# Determines how many clients will be served by each worker process.
# (Max clients = worker_connections * worker_processes)
# "Max clients" is also limited by the number of socket connections available on the system (~64k)
# run `ss -s` and you'll see a timewait counter
# The reason for TIME_WAIT is to handle the case of packets arriving after the socket is closed.
# This can happen when packets are delayed, or when the other side doesn't yet know that the socket has been closed.
#
# From the point of view of the client (nginx)
# sysctl net.ipv4.ip_local_port_range
# sysctl net.ipv4.tcp_fin_timeout
# Result:
# net.ipv4.ip_local_port_range = 32768 61000
# net.ipv4.tcp_fin_timeout = 60 (in other words, this is how long a closed socket lingers in TIME_WAIT)
# (61000 - 32768) / 60 = ~470 new outbound sockets per second, sustained, before running out of ports
# You can tune these values in order to get more sockets available at a time
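# For example, widening the port range and shortening the FIN timeout (hypothetical
# values, applied via `sysctl -w` or /etc/sysctl.conf):
# net.ipv4.ip_local_port_range = 1024 65000
# net.ipv4.tcp_fin_timeout = 15
# (65000 - 1024) / 15 = ~4265 new sockets per second, sustained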
#
# Another option would be:
# net.ipv4.tcp_tw_recycle = 1
# net.ipv4.tcp_tw_reuse = 1
# These allow sockets in TIME_WAIT state to be reused for new connections.
# (Beware: tcp_tw_recycle breaks clients behind NAT and was removed in Linux 4.12; tcp_tw_reuse is the safer of the two.)
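# For example, to enable reuse at runtime:
# sysctl -w net.ipv4.tcp_tw_reuse=1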
#
# From the point of view of the server (node process)
# sysctl net.core.somaxconn
# It limits the maximum number of pending connections queued on a listen socket. You can increase it.
# The value of somaxconn is the size of the listen queue.
# Once the connection is established it is no longer in the listen queue and this number doesn't matter.
# If the listen queue fills up due to too many simultaneous connection requests, additional connections will be refused.
# Defaults to 128. The value should be raised substantially to support bursts of connections.
# For example, to support a burst of 1024 pending connections, set somaxconn to 1024.
# net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog also bound these queues and may need raising in step.
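# For example, raising the queue at runtime and passing a matching backlog on the node
# side (the port and backlog values here are illustrative):
# sysctl -w net.core.somaxconn=1024
# and in node: server.listen(3000, '0.0.0.0', 1024);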
worker_connections 4000;
# essential for linux, optimized to serve many clients with each thread (efficient method used on Linux 2.6+)
use epoll;
# Accept as many connections as possible, after nginx gets notification about a new connection.
# May flood worker_connections, if that option is set too low.
multi_accept on;
# Open files
# How to know how many open files you consume?
# ulimit -n # open files limit per process
# lsof | grep nginx | wc -l # count how many open files an app is taking
# cat /proc/sys/fs/file-max # get max open files allowed
# Number of file descriptors used for Nginx. This is set in the OS with 'ulimit -n 200000'
# or using /etc/security/limits.conf
# Edit /etc/security/limits.conf in order to increase hard and soft opened files allowed
# * hard nofile 200000
# * soft nofile 200000
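# After editing limits.conf, log in again and verify with `ulimit -n`. A hypothetical
# check against a running master process:
# grep 'open files' /proc/$(pgrep -o nginx)/limits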
worker_rlimit_nofile 200000;
# Caches information about open FDs, frequently accessed files.
# Changing this setting, in my environment, brought performance up from 560k req/sec, to 904k req/sec.
# I recommend using some variant of these options, though not the specific values listed below.
open_file_cache max=200000 inactive=5s;
open_file_cache_valid 15s;
open_file_cache_min_uses 1;
open_file_cache_errors off;
# Buffer log writes to speed up IO, or disable them altogether
#access_log /var/log/nginx/access.log main buffer=16k;
access_log off;
# Sendfile copies data between one FD and other from within the kernel.
# More efficient than read() + write(), which require transferring data to and from user space.
sendfile on;
# Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
# instead of using partial frames. This is useful for prepending headers before calling sendfile,
# or for throughput optimization.
tcp_nopush on;
# don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
tcp_nodelay on;
# Timeout for keep-alive connections. Server will close connections after this time.
keepalive_timeout 3;
# Number of requests a client can make over the keep-alive connection. This is set high for testing.
keepalive_requests 100;
# allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
reset_timedout_connection on;
# send the client a "request timed out" if the body is not loaded by this time. Default 60.
client_body_timeout 10;
# If the client stops reading data, free up the stale client connection after this much time. Default 60.
send_timeout 2;
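# Note: in a real nginx.conf these directives live in different contexts; a minimal sketch:
# worker_processes 8;
# worker_rlimit_nofile 200000;
# events {
#     use epoll;
#     multi_accept on;
#     worker_connections 4000;
# }
# http {
#     sendfile on;
#     tcp_nopush on;
#     # ... the remaining http-level directives shown above ...
# }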