refer:
Traditional process- or thread-based models of handling concurrent connections handle each connection with a separate process or thread and block on network or input/output operations. Depending on the application, this can be very inefficient in terms of memory and CPU consumption.
From the very beginning, nginx was meant to be a specialized tool to achieve more performance, density and economical use of server resources while enabling dynamic growth of a website, so it has followed a different model. It was actually inspired by the ongoing development of advanced event-based mechanisms in a variety of operating systems. What resulted is a modular, event-driven, asynchronous, single-threaded, non-blocking architecture which became the foundation of nginx code.
nginx uses multiplexing and event notifications heavily, and dedicates specific tasks to separate processes. Connections are processed in a highly efficient run-loop in a limited number of single-threaded processes called workers. Within each worker nginx can handle many thousands of concurrent connections and requests per second.
Some basic directives:
user nobody;  # a directive in the 'main' context

events {
    # configuration of connection processing
}

http {
    # Configuration specific to HTTP and affecting all virtual servers
    server {
        # configuration of HTTP virtual server 1
        location /one {
            # configuration for processing URIs starting with '/one'
        }
        location /two {
            # configuration for processing URIs starting with '/two'
        }
    }
    server {
        # configuration of HTTP virtual server 2
    }
}

stream {
    # Configuration specific to TCP/UDP and affecting all virtual servers
    server {
        # configuration of TCP virtual server 1
    }
}
if (!-e $request_filename) { rewrite ^ /path/index.php last; }
⇒ This checks whether there is a file, directory, or symbolic link matching $request_filename. If no such file is found, the request is rewritten to /path/index.php.
location / { try_files $uri $uri/ $uri.html =404; }
location / { try_files $uri $uri/ /test/index.html; }
⇒ how to read this:
location / => matches all requests.
try_files $uri => try $uri first; for example, for http://example.com/images/image.jpg nginx checks whether there is a file called image.jpg inside /images and serves it if found.
$uri/ => if $uri is not found as a file, try the URI as a directory.
If none of the listed entries exist, the last parameter is used as a fallback: =404 returns a 404 error, while /test/index.html triggers an internal redirect to that URI.
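A common variant of this pattern, shown as a minimal sketch assuming a PHP front controller at /index.php (the path is an assumption, not taken from the configs above), sends every request that is not an existing file or directory to the application:

location / {
    # serve an existing file or directory, otherwise hand the request to index.php
    try_files $uri $uri/ /index.php?$args;
}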
refer:
nginx uses a fixed number of workers, each of which handles incoming requests. The general rule of thumb is that you should have one worker for each CPU-core your server contains.
You can count the CPUs available to your system by running:
$ grep ^processor /proc/cpuinfo | wc -l
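Equivalently, on most Linux systems nproc prints the same count:

$ nproc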
With a quad-core processor this would give you a configuration that started like so:
# One worker per CPU-core.
worker_processes 4;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

worker_rlimit_nofile 40000;

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;

    # Your content here ..
}
Here we've also raised the worker_connections setting, which specifies how many connections each worker process can handle.
The maximum number of connections your server can process is the result of:
worker_processes * worker_connections (= 32768 in our example).
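Each connection also consumes at least one file descriptor, which is why worker_rlimit_nofile is set higher than worker_connections above. A quick sanity check, assuming a Linux host with nginx already running, to confirm the limit actually applied to a worker process:

$ cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"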
One of the first things that many people try to do is to enable the gzip compression module available with nginx. The intention here is that the objects the server sends to requesting clients will be smaller, and thus faster to send → good for clients with low network bandwidth.
However, this involves a trade-off common to tuning: performing the compression takes CPU resources from your server, which frequently means that you'd be better off not enabling it at all.
Generally the best approach with compression is to only enable it for large files, and to avoid compressing things that are unlikely to be reduced in size (such as images, executables, and similar binary files).
To serve pre-compressed static content (.gz files generated in advance), nginx must be built with the module --with-http_gzip_static_module.
With the above in mind, the following is a sensible configuration. Edit /usr/local/nginx/conf/nginx.conf:
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_comp_level 2;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
This enables compression for files that are over 10k, aren't being requested by early versions of Microsoft's Internet Explorer, and only attempts to compress text-based files.
To check that gzip is enabled, inspect the response headers in the web browser (we use HttpFox to check the headers); a compressed response contains Content-Encoding: gzip.
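The same check can be done from the command line; a sketch with a placeholder URL (pick a text file larger than the 10k gzip_min_length configured above):

$ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://example.com/main.css | grep -i "content-encoding"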
If you want to allow users to upload files (or to upload something yourself) over HTTP, you may need to increase the maximum request body size. This is done with the client_max_body_size directive inside http {…}. By default it is 1MB, but it can be set, for example, to 20MB; you can also increase the body buffer size with the following configuration:
client_max_body_size 20m;
client_body_buffer_size 128k;
If you get the following error, then you know that client_max_body_size is too low:
“Request Entity Too Large” (413)
Turn off the access log for static files and have browsers cache them for 360 days:
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    access_log off;
    log_not_found off;
    expires 360d;
}
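To verify that the caching headers are being sent, fetch any matching static file and look at Expires and Cache-Control; a sketch with a placeholder URL:

$ curl -s -o /dev/null -D - http://example.com/logo.png | grep -iE "^(expires|cache-control)"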
Check for slow CSS, JS or images and run a benchmark:
ab -n 20 -c 4 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js
ab -n 20 -c 4 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js

Percentage of the requests served within a certain time (ms)
  50%  16166
  66%  17571
  75%  17925
  80%  18328
  90%  22019
  95%  26190
  98%  26190
  99%  26190
 100%  26190 (longest request)
ab -n 100 -c 20 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js

Percentage of the requests served within a certain time (ms)
  50%   8
  66%   8
  75%   8
  80%   8
  90%   8
  95%  13
  98%  13
  99%  14
 100%  14 (longest request)
⇒ with the above benchmark for the static file ab -n 20 -c 4 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js (size 400k), we can see that the performance problem is caused by the network or bandwidth.
refer: https://blog.nginx.org/blog/creating-nginx-rewrite-rules
# -----------------------------------------------------------------------------
# ~   : Enable regex mode for location (in regex, ~ means case-sensitive match)
# ~*  : case-insensitive match
# |   : Or
# ()  : Match group or evaluate the content of ()
# $   : the expression must be at the end of the evaluated text
#       (no char/text after the match); $ is usually used at the end of a regex
#       location expression.
# ?   : Check for zero or one occurrence of the previous char, ex. jpe?g
# ^~  : The match must be at the beginning of the text; note that nginx will not
#       perform any further regular expression match even if another match is
#       available (check the table above); ^ indicates that the match must be at
#       the start of the URI text, while ~ indicates a regular expression match
#       mode. Example: (location ^~ /realestate/.*)
#       nginx evaluates this exactly as: don't check regexp locations if this
#       location is the longest prefix match.
# =   : Exact match, no sub folders (location = /)
# ^   : Match the beginning of the text (opposite of $). By itself, ^ is a
#       shortcut for all paths (since they all have a beginning).
# .*  : Match zero, one or more occurrences of any char
# \   : Escape the next char
# .   : Any char
# *   : Match zero, one or more occurrences of the previous char
# !   : Not (negative look-ahead)
# {}  : Match a specific number of occurrences, ex. [0-9]{3} matches 342 but not 32;
#       {2,4} matches lengths of 2, 3 and 4
# +   : Match one or more occurrences of the previous char
# []  : Match any char inside
# -----------------------------------------------------------------------------
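A minimal sketch putting several of these modifiers together (the paths are hypothetical, not taken from the configs above):

server {
    location = / {
        # exact match: only the URI "/" itself
    }
    location ^~ /static/ {
        # prefix match; regex locations are not checked for URIs under /static/
    }
    location ~* \.(jpe?g|png|gif)$ {
        # case-insensitive regex: any URI ending in a common image extension
    }
    location / {
        # plain prefix match: everything else
    }
}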
refer:
global config for all pools:
[global]
; Error log file
; If it's set to "syslog", log is sent to syslogd instead of being written
; in a local file.
; Note: the default prefix is /usr/local/php/var
; Default Value: log/php-fpm.log
;error_log = log/php-fpm.log

; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
;log_level = notice

; The maximum number of processes FPM will fork. This has been design to control
; the global number of processes when using dynamic PM within a lot of pools.
; Use it with caution.
; Note: A value of 0 indicates no limit
; Default Value: 0
;process.max = 128
pool www config:
; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static   - a fixed number (pm.max_children) of child processes;
;   dynamic  - the number of child processes are set dynamically based on the
;              following directives. With this process management, there will be
;              always at least 1 children.
;              pm.max_children      - the maximum number of children that can
;                                     be alive at the same time.
;              pm.start_servers     - the number of children created on startup.
;              pm.min_spare_servers - the minimum number of children in 'idle'
;                                     state (waiting to process). If the number
;                                     of 'idle' processes is less than this
;                                     number then some children will be created.
;              pm.max_spare_servers - the maximum number of children in 'idle'
;                                     state (waiting to process). If the number
;                                     of 'idle' processes is greater than this
;                                     number then some children will be killed.
;   ondemand - no children are created at startup. Children will be forked when
;              new requests will connect. The following parameter are used:
;              pm.max_children         - the maximum number of children that
;                                        can be alive at the same time.
;              pm.process_idle_timeout - The number of seconds after which
;                                        an idle process will be killed.
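As a sketch, a rarely used pool could run with the ondemand manager so it holds no idle children (the pool name, port and values are assumptions, not from the configs below):

[lowtraffic]
listen = 127.0.0.1:9010
user = nobody
group = nobody
; no children at startup; fork on demand and kill a child after 10s idle
pm = ondemand
pm.max_children = 10
pm.process_idle_timeout = 10s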
Set up emergency_restart_threshold, emergency_restart_interval and process_control_timeout. By default these options are completely off, but I think it's better to use them, for example as follows in php-fpm.conf (they can be turned off for performance):
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
What does this mean? If 10 PHP-FPM child processes exit with SIGSEGV or SIGBUS within 1 minute, then PHP-FPM restarts automatically. This configuration also sets a 10-second time limit for child processes to wait for a reaction to signals from the master. (In some cases the PHP-FPM child processes fill up memory and can't process requests; these settings will restart the child processes automatically.)
By default PHP-FPM uses the [www] pool configuration for all sites. For more advanced setups, it's possible to use different pools for different sites and allocate resources very precisely, and even to use different users and groups for every pool. The following is just an example configuration-file structure for PHP-FPM pools for three different sites (or actually three different parts of the same site):
/etc/php-fpm.d/site.conf
/etc/php-fpm.d/blog.conf
/etc/php-fpm.d/forums.conf
Or configure the include in php-fpm.conf:
; Relative path can also be used. They will be prefixed by:
;  - the global prefix if it's been set (-p argument)
;  - /onec/php otherwise
;include=etc/fpm.d/*.conf
(Create the directory /onec/php/etc/fpm.d/.) Example configurations for every pool:
[www]
; Per pool prefix
; It only applies on the following directives:
; - 'access.log'
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or /onec/php) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = nobody
group = nobody

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific IPv4 address on
;                            a specific port;
;   '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all IPv4 addresses on a
;                            specific port;
;   '[::]:port'            - to listen on a TCP socket to all addresses
;                            (IPv6 and IPv4-mapped) on a specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 127.0.0.1:9000
[site]
listen = 127.0.0.1:9000
user = site
group = site
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/slowlog-site.log
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 5
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200
listen.backlog = -1
pm.status_path = /status
request_terminate_timeout = 120s
rlimit_files = 131072
rlimit_core = unlimited
catch_workers_output = yes
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
⇒ pool [site] uses port 9000
[blog]
listen = 127.0.0.1:9001
user = blog
group = blog
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/slowlog-blog.log
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 200
listen.backlog = -1
pm.status_path = /status
request_terminate_timeout = 120s
rlimit_files = 131072
rlimit_core = unlimited
catch_workers_output = yes
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
⇒ pool [blog] uses port 9001
[forums]
listen = 127.0.0.1:9002
user = forums
group = forums
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/slowlog-forums.log
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 400
listen.backlog = -1
pm.status_path = /status
request_terminate_timeout = 120s
rlimit_files = 131072
rlimit_core = unlimited
catch_workers_output = yes
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
⇒ pool [forums] uses port 9002
So this is just an example of how to configure multiple pools of different sizes.
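On the nginx side, each site's server block then points fastcgi_pass at its own pool. A minimal sketch (the server name and document root are hypothetical):

server {
    server_name blog.example.com;
    root /var/www/blog;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # the [blog] pool listens on 127.0.0.1:9001 (see above)
        fastcgi_pass 127.0.0.1:9001;
    }
}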
Example Config:
process.max = 15
pm.max_children = 100
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 1000
process.max: the maximum number of processes FPM will fork. This has been designed to control the global number of processes when using dynamic PM within a lot of pools.
The configuration variable pm.max_children controls the maximum number of FPM processes that can run at the same time. This value can be calculated like this:
pm.max_children = (total RAM - 500MB) / average process memory
ps -ylC php-fpm --sort=rss | awk '!/RSS/ { s+=$8 } END { printf "%s\n", "Total memory used by PHP-FPM child processes: "; printf "%dM\n", s/1024 }'
⇒ this gets the total memory used by all php-fpm processes, based on the basic command ps -ylC php-fpm. Then get the number of php-fpm processes:
ps -ylC php-fpm --sort=rss | grep php-fpm | wc -l
And the average process memory:
Avg memory = total memory / number of processes
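A quick worked example with assumed numbers (hypothetical figures, not measurements from this server): with 4096MB of RAM, 500MB reserved for nginx, MySQL and the OS, and an average of 40MB per php-fpm child:

pm.max_children = (4096MB - 500MB) / 40MB ≈ 89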
Other configs:
pm.start_servers = (pm.max_spare_servers + pm.min_spare_servers)/2
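With the example values above (pm.min_spare_servers = 5, pm.max_spare_servers = 15) this gives pm.start_servers = (15 + 5) / 2 = 10, which matches the example config.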
refer: