I was doing something like this where I
had nginx in front of wordpress, with lsyncd replicating data from
a master server to the read-only slave I had. If you don't want a
release process and just want to be able to update a cluster of
wordpress servers as easily as updating one server then you can
force admin traffic to SSL in your wp-config.php:
define('FORCE_SSL_ADMIN', true);

Then you need to route https traffic to only one designated master, and synchronize off of that to the slaves -- I used lsyncd to make pushing updates practically real-time (there's a sketch of the nginx routing and the lsyncd config at the end of this message). That worked, but for my personal wordpress site it was overkill and I went back to taking backups and automating builds rather than maintaining constant high availability.

So that should make it so that you can do wordpress software and plugin updates in the admin console, and they will then be replicated to the slaves via lsyncd, which is very snappy. You could probably replace this with git if you wanted to. If you want to test updates first, make the master not take external traffic, and instead of lsyncd use git/rsync after you've QA'd the update and decided to push to the slaves. You could even use a hybrid approach where you have one master production server that uses lsyncd to the rest of the prod servers, and you push to that master via git or rsync or chef or whatever release mechanism you like.

s3fs supports rsync, so you could rsync from the master up to s3, then rsync down to the clients. That might be 'nice' in that it also gives you a backup copy of your data in s3 (and you could timestamp copies so that you could roll back, etc.). I don't think you'd run into too many i/o problems with S3, but I don't know how big your dataset is. You probably want to spend a little time on monitoring and write some automatic recovery scripts to deal with sick s3fs mountpoints, since I've seen that happen -- that could be as simple as a script that runs once a minute and touches a file on s3fs; if that errors out, it tries to remount the filesystem, and if that fails it kills the software and raises an alert or something (also sketched at the end of this message). I don't know what the compelling reasons would be to use git vs. rsync-over-s3fs or vice versa.

I'd also most likely keep the scripts that do the synchronization managed and pushed with chef, but external to chef, so that they can be executed outside of a chef-client run -- that makes it easy to hit them with knife ssh for quick pushes, and then also hit them within chef-client to ensure that all servers converge on a schedule. I also used lsyncd across the whole root of wordpress to sync the software as well, and that required quite a bit of hacking up the opensource wordpress cookbook, since it tends to assume that you're running on one server, likes to install a specific version of the software, and likes to think it owns the database (or at least it did -- I haven't looked at it in months...).

On 10/9/12 1:27 PM, Morgan Blackthorne wrote:
> I'm currently running a WordPress cluster on AWS for livemocha.com. This
> has been interesting (especially overlapping it with the existing legacy
> site), but one of the big problems that I've been facing is trying to
> keep the themes/plugins/etc consistent across the EC2 nodes. WordPress
> gets unhappy if you activate a new plugin on one node and then can't
> find it on another node. The plugin activation state is stored in the
> DB, not the local fs, so when it finds it missing it de-activates it
> across the board... not helpful.
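
Here are the sketches I mentioned. First, the nginx routing: FORCE_SSL_ADMIN pushes wp-admin/wp-login onto https, so if the front-end nginx only sends https to the writable master, all admin activity lands on one box. This is just a minimal sketch -- the upstream IPs, server_name, and cert paths are made up, adapt to however your front end is set up:

    # front-end nginx (illustrative only; IPs and hostnames are placeholders)
    upstream wp_slaves { server 10.0.0.11; server 10.0.0.12; }   # read-only slaves
    upstream wp_master { server 10.0.0.10; }                     # the one writable master

    server {
        listen 80;
        server_name blog.example.com;
        # ordinary reader traffic can hit any slave
        location / { proxy_pass http://wp_slaves; }
    }

    server {
        listen 443 ssl;
        server_name blog.example.com;
        ssl_certificate     /etc/nginx/ssl/blog.crt;
        ssl_certificate_key /etc/nginx/ssl/blog.key;
        # FORCE_SSL_ADMIN means wp-admin/wp-login arrive here, so it all goes to the master
        location / { proxy_pass http://wp_master; }
    }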
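
For the lsyncd piece, the config is just Lua; something along these lines, with one sync block per slave (paths, the hostname, and the delay are assumptions):

    settings {
        logfile    = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd-status.log",
    }

    -- replicate the whole wordpress root from the master to a slave over ssh+rsync
    sync {
        default.rsyncssh,
        source    = "/var/www/wordpress",
        host      = "slave1.example.com",
        targetdir = "/var/www/wordpress",
        delay     = 1,   -- batch filesystem events for at most a second
    }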
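
And the s3fs babysitter could be as dumb as this, cron'd every minute. The mountpoint, the remount approach, and the "kill the software / raise an alert" commands are all assumptions -- swap in whatever fits your setup:

    #!/usr/bin/env python
    # Touch a canary file on the s3fs mount; remount on failure; if that
    # still doesn't work, stop taking traffic and make some noise.
    import os
    import subprocess
    import sys
    import time

    MOUNTPOINT = "/mnt/s3"                                # hypothetical s3fs mountpoint
    CANARY = os.path.join(MOUNTPOINT, ".s3fs-canary")

    def canary_ok():
        """True if we can write and delete a small file on the mount."""
        try:
            with open(CANARY, "w") as f:
                f.write(str(time.time()))
            os.remove(CANARY)
            return True
        except (IOError, OSError):
            return False

    def remount():
        """Lazy-unmount and remount; assumes an /etc/fstab entry for the mountpoint."""
        subprocess.call(["umount", "-l", MOUNTPOINT])
        return subprocess.call(["mount", MOUNTPOINT]) == 0

    if canary_ok():
        sys.exit(0)
    if remount() and canary_ok():
        sys.exit(0)
    # Still sick: take the node out of service and page somebody.
    subprocess.call(["service", "nginx", "stop"])
    subprocess.call(["logger", "-p", "daemon.crit",
                     "s3fs mount %s is unhealthy and would not remount" % MOUNTPOINT])
    sys.exit(1)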