I'm preparing a site with many users (100,000+). Since the output depends on the current user's relation to some custom post types and taxonomies, a static cache won't help much.
Besides using a separate table for the users and serving static files from a cookieless domain, what else can I do for performance?
For example: Should I use InnoDB or MyISAM? Hints on indexes?
Update
Obviously, I wasn't clear enough. Sorry.
All users are logged in. Always. No one else can see more than the start page. The site offers paid material for online courses.
I'm looking for tips related to a large user base only. Basic general performance optimizations like compression, lazy loading of scripts, sprites, etc. are useful, but that's not what I'm after.
You can use W3 Total Cache, which isn't a static file cache system: it uses techniques such as opcode caching, Memcached, and object caching to decrease page load time. APC, or another opcode cache, would be a good addition to your server, as would a lightweight httpd instead of the heavier Apache.
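For illustration, here is a minimal sketch of caching an expensive per-user lookup through WordPress's object cache API, which a persistent backend (such as the Memcached or APC one W3 Total Cache sets up) keeps across requests. The function names get_user_course_access() and expensive_course_access_query() are hypothetical:

    <?php
    // Hypothetical example: cache a user's course-access lookup in the
    // WordPress object cache. With a persistent backend this survives
    // across requests; on stock WordPress it only lasts for the
    // current request.
    function get_user_course_access( $user_id ) {
        $key    = 'course_access_' . $user_id;
        $access = wp_cache_get( $key, 'courses' ); // 'courses' is an arbitrary cache group

        if ( false === $access ) {
            // The expensive query against custom post types/taxonomies goes here.
            $access = expensive_course_access_query( $user_id ); // hypothetical helper
            wp_cache_set( $key, $access, 'courses', 300 );       // keep for 5 minutes
        }
        return $access;
    }

Without a persistent backend installed, the default object cache is per-request only, so this pattern is harmless but does nothing extra on stock WordPress.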
Forcing GZIP on users is also considered a good idea, since most clients that don't announce GZIP support can actually handle it; the Accept-Encoding request header often gets stripped by firewalls, proxies, and the like.
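If you decide to force compression even when Accept-Encoding is missing, a bare-bones PHP sketch looks like this. Note that ob_gzhandler won't do it, because it honours the (possibly stripped) header; compressing the buffer yourself ignores it. This is an assumption-heavy approach: the rare client that genuinely can't decode gzip will receive unreadable output.

    <?php
    // Sketch: force gzip output regardless of the Accept-Encoding header.
    // Assumption: your audience can decode gzip.
    ob_start( function ( $buffer ) {
        header( 'Content-Encoding: gzip' );
        return gzencode( $buffer, 6 ); // compression level 6 of 9
    } );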
However, roughly 80% of page load time is generally spent on the front end, so that's where you'll want to focus. W3 Total Cache does concatenation of CSS and JavaScript as well as minification of the files. That works best if you've properly set up your JavaScript and CSS files to load only on the pages where they are needed; most sites haven't, so the extra configuration it requires is nothing but annoying. Minification also tends to break things, so I stick to concatenation.
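Getting assets to load only where they're needed can be done by dequeuing them conditionally; a sketch, where 'course', 'my-plugin-script', and 'my-plugin-style' are placeholders for your own post type and handles:

    <?php
    // Sketch: only load a plugin's assets on the pages that use them,
    // so there is less for concatenation/minification to break.
    add_action( 'wp_enqueue_scripts', function () {
        if ( ! is_singular( 'course' ) ) {
            wp_dequeue_script( 'my-plugin-script' );
            wp_dequeue_style( 'my-plugin-style' );
        }
    }, 100 ); // priority 100: run after the plugin has enqueued its files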
Serving static files from a cookieless domain will save a few milliseconds, but for real savings in page load a CDN will save roughly 100 ms per item. Using multiple domains to serve the files (domain sharding) also speeds up page load in older browsers, which limit how many concurrent requests they make per domain.
You may also want to look into using http://smush.it to save on the size of images without loss in quality (https://github.com/icambridge/filesmush is a script for running local files through Smush.it; https://github.com/tylerhall/Autosmush does the same for images on S3).

InnoDB should be used if your comments vastly outnumber your posts. Otherwise MyISAM may actually be faster.
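If you do decide the write-heavy tables belong on InnoDB, converting an engine is a single statement per table; a sketch assuming the default wp_ table prefix and a database named wordpress:

    -- Check which engine each table currently uses
    -- (adjust the schema name to match your install).
    SELECT TABLE_NAME, ENGINE
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'wordpress';

    -- Convert the write-heavy tables to InnoDB.
    ALTER TABLE wp_comments ENGINE = InnoDB;
    ALTER TABLE wp_usermeta ENGINE = InnoDB;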
Maybe you can switch to MySQL 5.5, where InnoDB is the default storage engine.
It's also an option to use a load balancer and to split the user tables off into another database on an extra server.
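One concrete way to split tables across servers in WordPress is HyperDB, Automattic's drop-in replacement for wpdb. This is only a rough sketch of its db-config.php; the hostnames, credentials, and the 'users' dataset are placeholders, and the exact key names and callback API should be verified against the HyperDB version you install:

    <?php
    // Rough sketch of a HyperDB db-config.php (verify against the
    // HyperDB README; names below are placeholders).
    $wpdb->add_database( array(
        'host'     => 'db1.example.com', // main server: posts, taxonomies, ...
        'user'     => 'wp',
        'password' => 'secret',
        'name'     => 'wordpress',
    ) );

    $wpdb->add_database( array(
        'host'     => 'db2.example.com', // extra server for the user tables
        'user'     => 'wp',
        'password' => 'secret',
        'name'     => 'wordpress_users',
        'dataset'  => 'users',
    ) );

    // Route wp_users and wp_usermeta to the 'users' dataset.
    $wpdb->add_callback( function ( $query, $wpdb ) {
        if ( preg_match( '/\bwp_user(s|meta)\b/', $wpdb->table ) ) {
            return 'users';
        }
    } );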
Agree with what @Backie has written.
WP Super Cache also has experimental support for object caching using APC, plus CDN support. I have found that object caching can result in odd behaviour from (badly?) written plugins.
MyISAM will typically be quicker unless you have a lot of writes (comments, mainly): MyISAM locks the whole table on every write, while InnoDB locks only the rows affected.
Make sure you turn on MySQL slow query logging, check where your actual bottlenecks are, and then use EXPLAIN to work out whether you need to add indexes, etc. This page is a pretty good intro if you've not used EXPLAIN before.
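For example, the slow query log can be switched on at runtime and the logged queries fed to EXPLAIN; the usermeta lookup below is a made-up example of the sort of query a membership site tends to run:

    -- Turn on the slow query log at runtime (persist it in my.cnf too,
    -- so it survives a restart).
    SET GLOBAL slow_query_log = 1;
    SET GLOBAL long_query_time = 1; -- log anything slower than one second

    -- Then feed a logged query to EXPLAIN. 'type: ALL' (a full table
    -- scan) on a big table usually points to a missing index.
    EXPLAIN SELECT user_id, meta_value
    FROM wp_usermeta
    WHERE meta_key = 'course_access';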
The mysqltuner script can also be helpful in working out where you might need to tune your MySQL config.