PROBLEM:
I have a scalable instance of WordPress (not a VM but Microsoft Azure’s preconfigured WordPress web application instance) set up on Azure using ClearDB for the MySQL database. I have tested the Azure BLOB Storage WordPress plugin, and it works to display data from my CDN, where my Azure storage is set up.
I want the ability to have my WordPress instance scale up to meet demand in different regional locations. Currently our setup (not in Azure, but on 4 VMs hosted by Ultima in different geographic data centers) has a staging server (in the US) and 3 regional servers in the EU, AU and US. As demand increases in different regions (more than 3 is fine, and if we can get down to specific countries, great), I want an instance of my WordPress site to spin up in that region and direct the user making the request to the server closest to their region.
- How can I use Traffic Manager to define regions and tell the single server instance to spin up another instance node (not a VM) in that region?
- How can I make sure that the node in that region uses a copy of ClearDB closest to that region? (I don’t want all web front ends accessing a DB in a single location in the US region.)
- How can I use Traffic Manager to make sure that data pulled from the CDN (Azure Storage) is served from a node closest to the requested region?
- How can I set up a staging server and publish to a public node that then scales up for demand in regionally specific locations?
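For the Traffic Manager side of these questions, a Performance routing profile sends each user to the endpoint with the lowest network latency from their location. A minimal sketch with the modern Azure CLI (the profile, resource group, and app names here are hypothetical, and it assumes one regional Web App already exists per region):

```shell
# Create a Traffic Manager profile that routes by lowest network latency
az network traffic-manager profile create \
  --name contoso-wp-tm \
  --resource-group contoso-rg \
  --routing-method Performance \
  --unique-dns-name contoso-wp

# Register a regional Web App as an endpoint; Traffic Manager then
# resolves the profile's DNS name to whichever endpoint is closest
az network traffic-manager endpoint create \
  --name eu-endpoint \
  --profile-name contoso-wp-tm \
  --resource-group contoso-rg \
  --type azureEndpoints \
  --target-resource-id "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.Web/sites/contoso-wp-eu"
```

Note that Traffic Manager only routes DNS queries; it does not itself spin up new instances. Scaling within a region is handled separately by autoscale on each regional App Service plan.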
My questions will make more sense if you read how we currently have our global WordPress failover setup.
Current Configuration & Backstory
Currently our WordPress site runs globally on Ultima-hosted VMs in 3 regional locations, as follows.
4 Servers
- Staging – Local staging server VM near our office in the US
- EU – European server hosted by Ultima in the EU region. It is a Hyper-V instance of the staging server
- AU – Australian server hosted by Ultima in the AU region. It is a Hyper-V instance of the staging server
- US – North American server hosted by Ultima in the US region. It is a Hyper-V instance of the staging server
WordPress: Version 4.0.1 is installed on all servers (NOT MULTISITE)
I have edited the WordPress wp-config.php to detect the requested regional server (switch/case) based on the requesting IP, and then set up profiles for each server to define the following:
- DB_NAME
- DB_USER
- DB_PASSWORD
- DB_HOST
- WP_HOME
- WP_SITEURL
Example: a request for our EU server, i.e. eu.contoso.com, would hit these values in the matching case statement:
// $ip and $hostname are assumed to be populated earlier in wp-config.php
switch (true) { // case expressions are booleans, so switch on true, not $ip
    case (strstr($ip, '255.255.255.255') !== false): // EU server request, set to port 3306
        $port = '3306';
        break;
}
$host = $ip . ':' . $port;

switch ($hostname) {
    case 'eu.contoso.com': // This is the EU server
        define('DB_NAME', 'main_wp_website');
        /** MySQL database username */
        define('DB_USER', 'main_website_user');
        /** MySQL database password */
        define('DB_PASSWORD', 'main_website_password');
        /** MySQL hostname */
        define('DB_HOST', $host);
        define('WP_HOME', 'http://eu.contoso.net');
        define('WP_SITEURL', 'http://eu.contoso.net');
        break;
}
REASON: This allows a user to enter the WordPress site from different request points, properly sets the DB access to the DB in that region, and rewrites the URLs with the region-specific root domain. All profiles for all cases are present in every server’s wp-config.php file, so the files are identical across servers.
MySQL: MySQL Server 5.6 set up in master/slave replication (in my.ini on each server) from staging to the global regional servers
- Staging – Set up as the master server, using GTID to do one-way replication sync to the global regional servers
- EU, AU, US – All set up as slaves in read-only mode. They accept replication updates from the staging server but cannot publish locally submitted data back
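For reference, this GTID master/slave arrangement boils down to a few lines of MySQL 5.6 configuration (server IDs and log names here are placeholders; GTID mode in 5.6 also requires binary logging and log-slave-updates on every server):

```ini
# my.ini on the staging (master) server
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log-slave-updates        = ON
gtid-mode                = ON
enforce-gtid-consistency = ON

# my.ini on each regional (slave) server
[mysqld]
server-id                = 2          # unique per slave
log-bin                  = mysql-bin
log-slave-updates        = ON
gtid-mode                = ON
enforce-gtid-consistency = ON
read-only                = ON         # reject locally submitted writes
```

With GTIDs enabled, each slave is pointed at the master once with `CHANGE MASTER TO ... MASTER_AUTO_POSITION = 1` and then tracks it automatically.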
File Storage: This is the website directory and the WordPress content directory (plugins, themes, uploads, etc.)
- We are using a product called Syncovery to monitor these folders, detect changes, and automatically sync the files via FTP to the other servers’ corresponding folders
- We use Beyond Compare for manual synchronization when Syncovery takes too long to detect the changes
Traffic Routing: We use a DNS service called EdgeDirector, which detects the region from the requesting IP and routes the user to the server closest to their region.
WHAT I WANT TO PREVENT
Currently we have syncing issues with our files (database replication is fine and has no issues). Our syncing program has to constantly scan and sync hundreds of thousands of files; when a file change is detected, a scan is set off (which takes time) and then the detected file differences are copied across (which also takes time, depending on file size). If, after a scan is fired off, another file or folder changes, we have to wait for the first scan/file copy to finish. This leads to long delays in pushing published content from our staging server to the public cluster.
I also don’t want to have 4 different instances/servers to maintain. I would like at most a staging instance and a public instance that we publish to, which scales up/down based on demand in regionally specific locations.
- How does a staging/public instance setup work in Azure?
- How do I configure both the staging instance and the public instance? I assume they would be exactly the same except for the server name and DB name.
- How would I execute a change to publish from staging to public?
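On Azure App Service, the staging/public split in these questions maps naturally to deployment slots: you publish to a staging slot, test it, then swap it into production. A sketch with the modern Azure CLI (the app and resource group names are hypothetical):

```shell
# Create a staging slot alongside the production site
az webapp deployment slot create \
  --name contoso-wp \
  --resource-group contoso-rg \
  --slot staging

# Deploy and test against the staging slot, then promote it;
# the swap exchanges staging and production without copying content
az webapp deployment slot swap \
  --name contoso-wp \
  --resource-group contoso-rg \
  --slot staging \
  --target-slot production
```

A swap also lets you roll back quickly: swapping again returns the previous production code.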
Is there an easier way to make a scalable WordPress site that is globally available in different regions based on demand, as I have described above? I am new to Azure, and I am finding that older methodologies are not necessarily the best in the new cloud environments.
I have a good idea of how to get a WordPress site going on Azure and already have a working instance using ClearDB and Azure BLOB Storage. I really just want to know how to scale it regionally like our current setup, or a new way to meet the demand of our users in regionally diverse areas by ensuring they are always accessing the closest server, which is automatically provisioned and scaled up when the need arises.
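On the "automatically provisioned and scaled up" part: within one region, App Service autoscale can add and remove instances based on load, so you do not manage individual servers. A sketch with the modern Azure CLI (the plan, group, and setting names are hypothetical):

```shell
# Attach an autoscale setting to a regional App Service plan
az monitor autoscale create \
  --resource-group contoso-rg \
  --resource contoso-wp-eu-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name eu-autoscale \
  --min-count 1 --max-count 5 --count 1

# Add an instance when average CPU stays above 70% for 5 minutes
az monitor autoscale rule create \
  --resource-group contoso-rg \
  --autoscale-name eu-autoscale \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1
```

Combined with a Traffic Manager Performance profile in front of one Web App per region, this gives region-local scaling without maintaining a fixed fleet of VMs.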