I am having performance problems with my website. My setup is a 1 GB VPS running WordPress with nginx and PHP-FPM on Ubuntu 11.04. The bottleneck is the time the browser spends waiting for the first byte from the server: it takes 4-6 seconds after the connection is initiated before the first response arrives. The site is new and currently receives very little traffic, about 50-150 visits/day. Below is my nginx configuration; I hope it helps in understanding where the problem is. I want to know if there is anything in this configuration that could be optimized. I would also appreciate recommendations for profiling/analysis tools that suit this setup.
Note: I replaced my username with myusername and my domain with mydomain.com.
nginx.conf
user myusername;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    index index.php index.html index.htm;

    sendfile on;
    # tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 5;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    client_max_body_size 50m;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
sites-enabled/default
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /home/myusername/www;

    # Make site accessible from http://localhost/
    server_name mydomain.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ /index.php;
    }

    location /doc {
        # root /usr/share;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }

    location /images {
        # root /usr/share;
        autoindex off;
    }

    error_page 404 = @wordpress;
    log_not_found off;

    location @wordpress {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_NAME /index.php;
    }

    location ^~ /files/ {
        rewrite /files/(.+) /wp-includes/ms-files.php?file=$1 last;
    }

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/www;
    #}

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files $uri @wordpress;
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }

    location ^~ /blogs.dir/ {
        internal;
        root /home/myusername/www/wp-content;
    }
}
This looks like a WordPress site, so I'd lean more towards it being a performance problem there than with the nginx config itself.
Some recommendations:
1 – Make sure you have APC installed and enabled.
2 – Install a server-side caching plugin (W3 Total Cache or WP Super Cache) and configure it to use APC as a backing store (and turn on all of the caching layers); see the sketch after this list.
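If it helps, here is a rough sketch of what step 1 could look like on an Ubuntu 11.04-era PHP 5 stack. The package name, ini path and shm_size value are assumptions based on typical defaults, so adjust them to your system:

# Install the APC opcode/user cache for PHP (package name on older Ubuntu releases)
sudo apt-get install php-apc

# Give APC a bit more shared memory (64M is an illustrative value; very old APC
# versions may expect a plain number of megabytes instead of the "M" suffix)
sudo tee -a /etc/php5/conf.d/apc.ini > /dev/null <<'EOF'
apc.enabled=1
apc.shm_size=64M
EOF

# Restart PHP-FPM so the extension and settings are picked up
sudo service php5-fpm restart

# Verify that APC is loaded
php -m | grep -i apc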
As far as profilers go, I'm a huge fan of New Relic: their Pro level is free for the first 2 weeks (usually long enough to find the hot spots), and the basic performance information stays free forever.
It seems your nginx configuration does have room for improvement. Nginx is already very efficient in how it uses CPU and memory, but several parameters can be tuned to the kind of workload you plan to serve: if you primarily serve static files, the workload profile is less CPU intensive and more disk oriented. In principle your nginx.conf shouldn't be a problem, since nginx is geared toward performance by nature, but as you say, you're not getting good performance at all.
I also run a 1 GB, 1-core VPS with a fresh LEMP install (Ubuntu 14.04, nginx, MySQL, php5-fpm) and nothing else that one would consider memory hungry, such as cPanel, ZPanel and the like; no phpMyAdmin either (I use the MySQL Workbench app instead). I have a WordPress site up and running without any cache plugins or even an APC/memcached scheme (I'm still researching the approach that best fits my needs), and performance has always been excellent.
Anyway, the nginx.conf setup below is still a fairly basic set of adjustments to increase nginx performance. It is a copy of the nginx.conf I currently use to serve my own website, shared here only as a reference. You can tweak it further based on your own research, but I believe you'll notice an overall improvement after trying it out.
So let’s go through it…
TUNING nginx
Determine Nginx worker_processes and worker_connections
The single-threaded worker processes can be set anywhere from one per CPU core up to 1.5-2x the number of cores when the workload is disk bound, to take advantage of disk bandwidth (IOPS).
Make sure you use a sensible worker_processes value in your /etc/nginx/nginx.conf. A good starting point is the number of CPU cores shown in the output of the command below (run it in a terminal):
cat /proc/cpuinfo | grep processor
In my case the result below shows only one processor:

processor : 0

So my machine has only 1 processor available, and I therefore set worker_processes 1; in /etc/nginx/nginx.conf.
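Putting that together with the connection limit, a minimal sketch of the relevant top-level section of nginx.conf on a single-core VPS looks like this (the worker_connections and multi_accept values are illustrative starting points, not tested recommendations):

worker_processes 1;            # one worker per CPU core on a 1-core VPS

events {
    worker_connections 1024;   # maximum simultaneous connections per worker
    multi_accept on;           # accept as many pending connections as possible at once
}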
I've commented most of the important parts that should be tweaked; again, you should do your own research and build a configuration that fits your production environment. This does not cover any caching techniques or serving the site over an SSL (HTTPS) connection, just plain basic nginx configuration.
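As a reference, the sketch below illustrates the kind of http-level tuning described above; every value is an illustrative assumption meant as a starting point for your own testing, not a copy of my exact file:

http {
    sendfile on;
    tcp_nopush on;                 # send response headers and the start of a file in one packet
    tcp_nodelay on;
    keepalive_timeout 15;          # drop idle keep-alive connections fairly quickly
    keepalive_requests 100;

    client_max_body_size 50m;

    # Cache open file descriptors for frequently served static files
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Compress text responses, but skip tiny ones
    gzip on;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}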
I hope it helps you to get started. Good luck.