- From: Joseph Holsten
- To: Bryan Taylor
- Cc: Chef Dev
- Subject: [chef-dev] Re: Re: Idiom for adding a node to a Cluster
- Date: Wed, 16 Oct 2013 22:13:59 +0000
If you can convince the community to agree on a One True Way to back up, I
will buy you many beverages.
Say we're talking about mysql; you've got issues:
creating the backup
- mysqldump?
- mysqlhotcopy?
- percona xtrabackup?
- filesystem (xfs, zfs) snapshot?
backup archive style
- differential?
- full?
frequency and rotation
transfer protocol & storage
- fibre channel (lol)
- iscsi
- rsync
- ftp
- s3/swift
I can't get my team to agree on what the best way is, much less the internet.
So anyway, let me know when I can buy you those beverages.
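
For concreteness, one hypothetical way a cookbook could stay agnostic is to push the whole choice above into attributes and dispatch on them; every name below is made up rather than taken from an existing cookbook:

    # attributes/default.rb (hypothetical names)
    default['mysql_backup']['method']      = 'mysqldump'   # or 'xtrabackup', 'snapshot', ...
    default['mysql_backup']['style']       = 'full'        # or 'differential'
    default['mysql_backup']['transport']   = 'rsync'       # or 's3', 'ftp', ...
    default['mysql_backup']['destination'] = 'rsync://backup.example.com/mysql'

    # recipes/backup.rb (hypothetical) - include whichever implementation was chosen
    include_recipe "mysql_backup::#{node['mysql_backup']['method']}"

Each per-method recipe then only has to agree on where it leaves the resulting archive.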
On 2013-10-16, at 20:42, Bryan Taylor wrote:
> I think I've arrived at the point of your 2nd paragraph. It really just
> comes down to how an Opscode community cookbook sets a reasonable
> default for the backup location. It's easy enough to have the master set up
> a cron to do a local backup and then copy those files up to this location.
>
> The problem is: where? I see three options:
> 1) rsync backup to the chef server. It exists. Otherwise, yuck
> 2) provision a node explicitly for this purpose. Also, yuck
> 3) Use one of the db nodes for this purpose. Also yuck
>
> Which one sucks least and would be accepted in a pull request? Or is there
> another way? My assumption is that anybody doing this for real would
> immediately override the backup location with a "real" location that
> doesn't suck.
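
For illustration, a minimal sketch of the cron-plus-copy default being described, with the destination left as an overridable attribute; the attribute name, schedule, script, and paths are hypothetical:

    # recipes/backup.rb (hypothetical) - dump locally, then push to the configured destination
    cron 'nightly-db-backup' do
      minute '0'
      hour   '3'
      user   'root'
      command "mysqldump --all-databases | gzip > /var/backups/db-$(date +\\%F).sql.gz && " \
              "rsync -a /var/backups/ #{node['db']['backup_destination']}/"
    end

Whatever ugly default node['db']['backup_destination'] ships with is exactly what a real deployment would override.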
>
> On 10/16/2013 03:16 PM, Joseph Holsten wrote:
> > If you're using something with autoclustering, adding node addresses to
> > config files and rolling restarts is safe to do with chef. Use role/tag
> > search to find nodes and populate host lists, notify service restart when
> > the config file is changed, and bob's your uncle. We do this for
> > elasticsearch and hazelcast.
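
A minimal sketch of that search-and-notify pattern, using elasticsearch as the example; the role name and template are placeholders, not from any particular cookbook:

    # find the other cluster members via role search
    members = search(:node, "role:es_cluster AND chef_environment:#{node.chef_environment}")
    hosts   = members.map { |n| n['ipaddress'] }.sort

    # rewrite the config with the current host list; restart only when it actually changes
    template '/etc/elasticsearch/elasticsearch.yml' do
      source 'elasticsearch.yml.erb'
      variables(hosts: hosts)
      notifies :restart, 'service[elasticsearch]'
    end

    service 'elasticsearch' do
      action [:enable, :start]
    end

Because each node converges on its own schedule, the restarts tend to roll through the cluster rather than happen all at once.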
> >
> > If you're setting up slaves/replicas, you can probably set up a run-once
> > resource to bootstrap the server from a backup, authenticate itself with
> > the master, and turn on replication. We did this for free-ipa (ldap).
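
A hypothetical sketch of such a run-once bootstrap, guarded by a flag file; the role name, script, and paths are invented for illustration:

    # find the current master via search (hypothetical role name)
    master = search(:node, 'role:db_master').first

    unless master.nil?
      execute 'bootstrap-replica-from-master' do
        # restore a backup from the master, register with it, enable replication,
        # then drop a flag file so this never runs again
        command "/usr/local/sbin/bootstrap_replica.sh #{master['ipaddress']} && " \
                "touch /var/lib/db/.bootstrapped"
        creates '/var/lib/db/.bootstrapped'
      end
    end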
> >
> > If you're running something that needs stonith-style singletons, doesn't
> > handle split-brain on its own, &c, you need automation designed for that.
> > Pacemaker & corosync are old school; things built on zookeeper, doozer, or
> > etcd are what the cool kids are doing. Everything I've heard of actually
> > being in production does this out of band from chef, typically with a
> > command-and-control tool like capistrano, fabric, mcollective, rundeck,
> > &c. We use this approach for most things, notably mysql.
> >
> > If you're looking for a magic bullet, etcd-chef
> > <https://github.com/coderanger/etcd-chef> has that hard consistency in its
> > data store and supports triggers on config changes, so (if you're daring)
> > that might meet your needs perfectly. I'm hoping to spike some work on it
> > as soon as I migrate my entire company into Rackspace Chicago. But I doubt
> > I'll be doing a production master failover via etcd-chef in the immediate
> > future.
> >
> > In a broader sense I think our industry's terms for clusters are lacking,
> > and our tools suffer for it.
> > --
> > ~j
> > info janitor @ simply measured
> >
> > On 2013-10-15, at 22:20, Bryan Taylor wrote:
> >
> >> I'm wondering what the chef idioms are for a certain problem that comes
> >> up a lot when expanding a cluster. Let's say I have some kind of
> >> persistence store and I want to enable replication, or add a new node
> >> with replication to an already running cluster. The replication will
> >> communicate on some custom protocol, but in order to work, I have to move
> >> stateful data, like db logs or whatever, from the master to a new node.
> >> The master is "the master right now", so it needs to be dynamically
> >> discovered, and accessed via rsync or scp, say, to pull the files down.
> >> I'm thinking for this I should just provision every cluster node with a
> >> fixed static public/private key.
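
A hypothetical sketch of that idea: discover whoever is the master right now via search, then pull its state over rsync/ssh with a key every cluster node carries. The data bag, key path, and flag file are all invented:

    master = search(:node, "role:db_master AND chef_environment:#{node.chef_environment}").first

    unless master.nil?
      # the shared private key every cluster node is provisioned with
      file '/root/.ssh/cluster_sync_key' do
        content data_bag_item('keys', 'cluster_sync')['private_key']
        mode '0600'
      end

      execute 'seed-from-current-master' do
        command "rsync -a -e 'ssh -i /root/.ssh/cluster_sync_key' " \
                "root@#{master['ipaddress']}:/var/lib/db/ /var/lib/db/ && " \
                "touch /var/lib/db/.seeded"
        creates '/var/lib/db/.seeded'   # crude run-once guard; a real one would check replica state
      end
    end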