My WPMS site is hosted on an 8-core / 32 GB RAM server, but the response time is very high.
We have about 1,000 blogs (35,000+ tables in a single database) and 70,000 pageviews per month.
I think I can bring the response time down by moving the blogs with the most pageviews into a separate DB and
splitting all blogs into 100 blogs per DB with the HyperDB plugin (roughly along the lines of the db-config.php sketch below). What do you think?
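Something like this is what I have in mind for HyperDB's db-config.php. The hostnames, database names and the 100-blog cutoff are placeholders, not a tested configuration:

    // db-config.php sketch (HyperDB). Hostnames and DB names are placeholders.
    // Default database: global tables and any blog not routed elsewhere.
    $wpdb->add_database( array(
        'host'     => 'db0.example.com',
        'user'     => DB_USER,
        'password' => DB_PASSWORD,
        'name'     => 'wpms_global',
    ) );

    // A separate database for the busiest blogs.
    $wpdb->add_database( array(
        'host'     => 'db1.example.com',
        'user'     => DB_USER,
        'password' => DB_PASSWORD,
        'name'     => 'wpms_shard1',
        'dataset'  => 'shard1',
    ) );

    // Route each blog's tables (wp_<blog_id>_*) to a dataset by blog id.
    function route_blog_tables( $query, $wpdb ) {
        if ( preg_match( "/^{$wpdb->base_prefix}(\d+)_/", $wpdb->table, $m ) ) {
            $blog_id = (int) $m[1];
            if ( $blog_id <= 100 ) { // example cutoff: first 100 blogs
                return 'shard1';
            }
        }
        // No return value: HyperDB falls back to the default dataset.
    }
    $wpdb->add_callback( 'route_blog_tables' );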
Yes, you are long past the point where you need to split the database. 😉
70k page views per day seems rather trivial considering your hardware. Even if the bulk of it lands within a 10-hour window each day, you’re dealing with (roughly) a page or two served per second. Your hardware should deal with that without a hiccup. (Then again, it is WP…)
IMO, before you start splitting sites across multiple databases, install memcached and an object cache. Doing so will bring down the number of DB queries.
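For instance, with the stock Memcached Object Cache drop-in (object-cache.php copied into wp-content/), the only configuration needed in wp-config.php is along these lines; the server address is a placeholder:

    // wp-config.php -- sketch assuming the Memcached Object Cache drop-in
    // (object-cache.php) has been copied into wp-content/.
    global $memcached_servers;
    $memcached_servers = array(
        'default' => array( '127.0.0.1:11211' ), // placeholder memcached address
    );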
Then, optionally add Batcache or the Semiologic Cache (which has its own object-cache implementation), i.e. something that implements memcached-based caching (the first caches pages served to guests; the second caches pages served to guests plus key queries for all users). If you think you’ll eventually outgrow a single server, you don’t want any static, file-based caching. If you think you won’t, Total Cache or even Super Cache are good choices as well.
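If you go the Batcache route, the knobs live in wp-config.php. A minimal sketch, assuming the advanced-cache.php drop-in is in wp-content/ and memcached is already running; the values are typical defaults, adjust to taste:

    // wp-config.php -- Batcache sketch; assumes the advanced-cache.php drop-in
    // is in wp-content/ and a memcached object cache is already in place.
    define( 'WP_CACHE', true );

    $batcache = array(
        'times'   => 2,    // cache a page once it has been requested this many times...
        'seconds' => 120,  // ...within this many seconds
        'max_age' => 300,  // then serve the cached copy for up to 5 minutes
    );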
Last but not least, have you configured mod_deflate (which is built into Sem Cache too) and asset concatenation (also done by Sem Cache, as well as by the likes of Total Cache)? These significantly reduce the perceived load time.
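If mod_deflate isn’t set up yet, a bare-bones Apache/.htaccess snippet along these lines covers the text assets; the MIME types listed are the usual suspects, not an exhaustive list:

    # .htaccess / vhost sketch -- compress HTML, CSS and JS responses
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/css
        AddOutputFilterByType DEFLATE application/javascript application/x-javascript
    </IfModule>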
Edit: sorry, I just re-read the question and you say 70k per month, not per day, which is an even more ridiculous server load: 70,000 ÷ 30 ≈ 2,300 page views a day, or roughly 2 pages served per minute, i.e. your CPU should be idle nearly all of the time.
Try db-cache first. If your queries start being cached, you will see a 5-6x improvement. WordPress does not ship with a persistent cache. Can someone tell me why? All the metadata and user data comes from the database on every page load. Nonsense…
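To illustrate the point: the core object cache API looks like the sketch below, but with the default backend the cached value only lives for the current request; only with a persistent drop-in (memcached, APC, db-cache and the like) does the next page load skip the query. The key, group and query here are made-up examples:

    // Sketch of the core object cache API (wp_cache_get / wp_cache_set).
    // Key, group and query are hypothetical; only a persistent object-cache
    // drop-in makes the cached value survive across page loads.
    global $wpdb;
    $count = wp_cache_get( 'blog_count', 'example_group' );
    if ( false === $count ) {
        $count = $wpdb->get_var( "SELECT COUNT(*) FROM {$wpdb->blogs}" ); // placeholder query
        wp_cache_set( 'blog_count', $count, 'example_group', 300 );      // cache for 5 minutes
    }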