nginx architecture
Overview of nginx Architecture
Traditional process- or thread-based models of handling concurrent connections involve handling each connection with a separate process or thread, and blocking on network or input/output operations. Depending on the application, it can be very inefficient in terms of memory and CPU consumption:
- Spawning a separate process or thread requires preparation of a new runtime environment, including allocation of heap and stack memory, and the creation of a new execution context.
- Additional CPU time is also spent creating these items, which can eventually lead to poor performance due to thread thrashing on excessive context switching.
- All of these complications manifest themselves in older web server architectures like Apache's. This is a tradeoff between offering a rich set of generally applicable features and optimized usage of server resources.
From the very beginning, nginx was meant to be a specialized tool to achieve more performance, density and economical use of server resources while enabling dynamic growth of a website, so it has followed a different model. It was actually inspired by the ongoing development of advanced event-based mechanisms in a variety of operating systems. What resulted is a modular, event-driven, asynchronous, single-threaded, non-blocking architecture which became the foundation of nginx code.
nginx uses multiplexing and event notifications heavily, and dedicates specific tasks to separate processes. Connections are processed in a highly efficient run-loop in a limited number of single-threaded processes called workers. Within each worker nginx can handle many thousands of concurrent connections and requests per second.
Nginx Vs Apache
- Nginx is based on an event-driven architecture, while Apache is based on a process-driven architecture. Interestingly, Apache's earliest releases had no multitasking architecture; the Apache MPM (multi-processing module) was added later to provide it.
- Nginx does not create a new process for each request, whereas Apache (in its traditional prefork model) creates a new process per request.
- Nginx's memory consumption is very low when serving static pages, while Apache's practice of creating a new process for each request increases memory consumption.
- Several benchmarking results indicate that Nginx is extremely fast at serving static pages compared to Apache.
- Nginx development started only in 2002, whereas Apache's initial release was in 1995.
- For complex setups, Apache can be easier to configure because it ships with a large number of configuration features covering a wide range of requirements.
- Apache's documentation is more extensive than Nginx's.
- In general, Nginx has fewer components for adding extra features, while Apache has tons of features and provides far more functionality than Nginx.
- Nginx does not support operating systems such as OpenVMS and IBM i, while Apache supports a much wider range of operating systems.
- Since Nginx ships with only the core features a web server requires, it is lightweight compared to Apache.
- Nginx's performance and scalability do not depend heavily on hardware resources, whereas Apache's performance and scalability are tied to underlying hardware resources such as memory and CPU.
Optimize nginx configuration for performance and benchmark
General Tuning
nginx uses a fixed number of workers, each of which handles incoming requests. The general rule of thumb is that you should have one worker for each CPU-core your server contains.
You can count the CPUs available to your system by running:
$ grep ^processor /proc/cpuinfo | wc -l
With a quad-core processor this would give you a configuration that started like so:
# One worker per CPU-core.
worker_processes 4;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

worker_rlimit_nofile 40000;

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;

    # Your content here ..
}
Here we've also raised the worker_connections setting, which specifies how many connections each worker process can handle.
The maximum number of connections your server can process is the result of:
worker_processes * worker_connections (= 4 * 8192 = 32768 in our example).
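As a quick sanity check, this arithmetic can be reproduced in the shell (the values are taken from the example configuration above):

```shell
# Maximum concurrent connections = workers * connections per worker.
worker_processes=4
worker_connections=8192
echo $((worker_processes * worker_connections))   # prints 32768
```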
Compress static content
One of the first things many people try is enabling the gzip compression module that ships with nginx. The idea is that the objects the server sends to requesting clients will be smaller, and thus faster to deliver — which especially helps clients on low-bandwidth connections.
However, this involves the trade-off common to all tuning: performing the compression takes CPU resources on your server, which frequently means you'd be better off not enabling it at all.
Generally the best approach with compression is to only enable it for large files, and to avoid compressing things that are unlikely to be reduced in size (such as images, executables, and similar binary files).
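To see why this advice holds, compare gzip's effect on repetitive text versus random (already high-entropy) data such as images or executables. This is a local illustration using the command-line gzip tool, not an nginx-specific check:

```shell
# Repetitive text compresses very well:
yes "some repetitive log line" | head -n 1000 | wc -c            # 25000 bytes
yes "some repetitive log line" | head -n 1000 | gzip -c | wc -c  # a tiny fraction of that

# High-entropy data (like JPEGs or binaries) barely shrinks, and can even grow:
head -c 10000 /dev/urandom | wc -c            # 10000 bytes
head -c 10000 /dev/urandom | gzip -c | wc -c  # roughly 10000 bytes, sometimes more
```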
Note that serving pre-compressed static files requires nginx to be built with the --with-http_gzip_static_module option; the on-the-fly gzip module used below is included by default.
With the above in mind, the following is a sensible configuration. Edit /usr/local/nginx/conf/nginx.conf:
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_comp_level 2;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
This enables compression for files that are over 10k, aren't being requested by early versions of Microsoft's Internet Explorer, and only attempts to compress text-based files.
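The gzip_min_length 10240 threshold above exists because gzip adds a fixed header and trailer (around 20 bytes) to every stream; for very small responses the "compressed" version is actually bigger than the original:

```shell
# A 2-byte body grows once the gzip framing is added.
printf 'hi' | wc -c            # 2
printf 'hi' | gzip -c | wc -c  # larger than the original
```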
To check that gzip is enabled, inspect the response headers in the web browser (we use the HttpFox extension to view the headers).
Nginx Request / Upload Max Body Size (client_max_body_size)
If you want to allow users (or yourself) to upload files over HTTP, you may need to increase the maximum request body size. This is controlled by the client_max_body_size directive inside the http {…} block. The default is 1 MB; the example below raises it to 20 MB and also increases the body buffer size:
client_max_body_size 20m;
client_body_buffer_size 128k;
If you get the following error, you know that client_max_body_size is too low:
“Request Entity Too Large” (413)
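One way to confirm the limit is to POST a payload just over it. The snippet below generates a file of an exact size with dd; the upload URL is a placeholder you would replace with your own endpoint:

```shell
# Create a 2 MB test file (2 * 1024 * 1024 = 2097152 bytes),
# just over the 1 MB default client_max_body_size.
dd if=/dev/zero of=upload.test bs=1M count=2 2>/dev/null
wc -c < upload.test   # 2097152

# Hypothetical upload; with the default limit, nginx answers 413:
# curl -i -X POST --data-binary @upload.test http://example.com/upload
```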
Nginx Cache Control for Static Files (Browser Cache Control Directives)
Disable access logging for static files and have browsers cache them for 360 days:
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    access_log off;
    log_not_found off;
    expires 360d;
}
Run benchmarks to check the effect of the optimizations
Check for slow CSS, JS, or image requests and run a benchmark:
ab -n 20 -c 4 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js
- Benchmark from Windows:

ab -n 20 -c 4 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js

Percentage of the requests served within a certain time (ms)
  50%  16166
  66%  17571
  75%  17925
  80%  18328
  90%  22019
  95%  26190
  98%  26190
  99%  26190
 100%  26190 (longest request)
- Benchmark from the local Linux host (the web hosting server):

ab -n 100 -c 20 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js

Percentage of the requests served within a certain time (ms)
  50%      8
  66%      8
  75%      8
  80%      8
  90%      8
  95%     13
  98%     13
  99%     14
 100%     14 (longest request)
- Benchmark load speed from other countries using http://www.webpagetest.org/
⇒ From the benchmarks above for the static file ab -n 20 -c 4 http://shop.babies.vn/media/js/af1ae2e07ff564e3d1499f7eb7aecdf9.js (size 400k), we can see that the performance problem is caused by network bandwidth, not by the server.
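ab's percentile table is simply derived from the sorted list of per-request times. The same kind of figure can be recomputed from raw timings with standard tools; the latencies below are made-up sample values, and the index convention int(N * p) is one common choice that may differ slightly from ab's exact method:

```shell
# Ten hypothetical latencies in milliseconds.
latencies="8 8 8 8 8 8 8 8 13 14"

# 90th percentile: sort the samples and take the value at index int(N * 0.9).
echo "$latencies" | tr ' ' '\n' | sort -n | awk '
    { v[NR] = $1 }
    END { idx = int(NR * 0.9); if (idx < 1) idx = 1; print v[idx] }'
# prints 13
```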
Nginx Security
refer:
- List nginx security issues: http://nginx.org/en/security_advisories.html