Our team is working on the third in a series of large WP sites for a client. Contrary to most “typical” WordPress setups, our projects generally see lots of requests, data pulls, and general content – if you saw our site, you would be hard [word]pressed to know it was even a WP site. I’ve been examining performance, particularly at the database level, and we’ve implemented some basic caching for larger data sets.
My question is – would we gain much in performance by taking some steps to improve the database in terms of structure? Such steps would include:
- converting tables to InnoDB (our default is MyISAM)
- setting up proper foreign-key relationships, such as wp_postmeta.post_id => wp_posts.ID
Those are the only ones I’ve come up with so far, but there may be others. I’ve worked with various SQL engines, but I’m by no means a master of database-level optimization.
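To make the two steps concrete, here is a small sketch that emits the DDL they imply. The table list and the `wp_` prefix assume a default single-site install, and the constraint name `fk_postmeta_post` is just an illustrative choice; adjust both for your setup.

```python
# Sketch: generate the DDL for the two proposed steps.
# Assumes the default "wp_" table prefix on a single-site install.

TABLES = ["wp_posts", "wp_postmeta", "wp_options", "wp_users", "wp_usermeta"]

def innodb_statements(tables):
    """Convert each table's storage engine to InnoDB."""
    return [f"ALTER TABLE {t} ENGINE=InnoDB;" for t in tables]

def postmeta_fk_statement():
    """Foreign key from wp_postmeta.post_id to wp_posts.ID.

    Both tables must already be InnoDB, and any orphaned postmeta
    rows must be cleaned up first or the ALTER will fail.
    """
    return (
        "ALTER TABLE wp_postmeta "
        "ADD CONSTRAINT fk_postmeta_post "  # hypothetical constraint name
        "FOREIGN KEY (post_id) REFERENCES wp_posts (ID) "
        "ON DELETE CASCADE;"
    )

for sql in innodb_statements(TABLES) + [postmeta_fk_statement()]:
    print(sql)
```

One caveat worth knowing before adding the foreign key: core and many plugins write postmeta freely, so test the `ON DELETE CASCADE` behavior on a staging copy before applying it in production.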
I can’t comment on foreign keys, but moving to InnoDB would be a good idea at large scale.
Although it can be a little slower than MyISAM for some workloads, it does row-level locking rather than table-level locking. As a result, saving a post or running a lengthy operation or query won’t lock the entire posts table, freezing the whole site while it waits for the table to unlock.
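The locking difference can be illustrated with a toy simulation (this is not MySQL itself, just an analogy): with one shared lock standing in for a MyISAM table lock, writers to different rows serialize; with a lock per row standing in for InnoDB row locks, they proceed in parallel.

```python
# Toy illustration: table-level vs row-level locking.
# Each "writer" updates a different row and holds its lock for 0.1s.
import threading
import time

def run(locks, n_rows):
    """Run one writer per row; return total elapsed wall-clock time."""
    def worker(row):
        with locks[row]:
            time.sleep(0.1)  # simulate the write while holding the lock
    threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_rows)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

table_lock = threading.Lock()
# "MyISAM": every row maps to the same lock, so writers queue up.
serialized = run({r: table_lock for r in range(4)}, 4)
# "InnoDB": each row has its own lock, so writers overlap.
parallel = run({r: threading.Lock() for r in range(4)}, 4)

print(f"table-level: {serialized:.2f}s, row-level: {parallel:.2f}s")
```

The serialized run takes roughly four times as long, which is the behavior the answer describes: one long write under a table lock stalls every other writer, even those touching unrelated rows.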
For very large multisites it may be worth investigating HyperDB, which is what WordPress.com uses.
Not DB related, but for AJAX endpoints you may want to look into the `SHORTINIT` constant (`define('SHORTINIT', true);`), which tells WordPress to load only a minimal subset of core, skipping plugins and themes.