- From: Joseph Holsten
- To: Bryan Taylor
- Cc: Chef Dev
- Subject: [chef-dev] Re: Idiom for adding a node to a Cluster
- Date: Wed, 16 Oct 2013 20:16:03 +0000
If you're using something with autoclustering, adding node addresses to
config files and doing rolling restarts is safe to do with Chef. Use a
role/tag search to find nodes and populate host lists, notify a service
restart when the config file changes, and Bob's your uncle. We do this for
Elasticsearch and Hazelcast.
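A rough sketch of that recipe pattern, assuming a hypothetical elasticsearch
role, template name, and attribute layout (yours will differ):

    # Find peers by role; fall back to this node's own address so the
    # first node in the cluster still converges cleanly.
    peers = search(:node, 'role:elasticsearch').map { |n| n['ipaddress'] }.sort
    peers = [node['ipaddress']] if peers.empty?

    service 'elasticsearch' do
      action [:enable, :start]
    end

    # Render the host list into the config; any change to the rendered file
    # triggers a delayed restart, so converging nodes a few at a time keeps
    # the restart rolling.
    template '/etc/elasticsearch/elasticsearch.yml' do
      source 'elasticsearch.yml.erb'
      variables(hosts: peers)
      notifies :restart, 'service[elasticsearch]', :delayed
    end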
If you're setting up slaves/replicas, you can probably set up a run-once
resource to bootstrap the server from a backup, authenticate itself with the
master, and turn on replication. We did this for FreeIPA (LDAP).
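A hedged sketch of the run-once idea, using an execute resource with a guard
file; the script path and marker file are made up, and MySQL stands in for
whatever you're replicating:

    service 'mysql' do
      action [:enable, :start]
    end

    # Runs once: the guard file (written by the script when it succeeds)
    # keeps later chef-client runs from bootstrapping again.
    execute 'bootstrap-replica' do
      # hypothetical script: restores the backup, authenticates with the
      # current master, then enables replication
      command '/usr/local/sbin/bootstrap_replica.sh'
      creates '/var/lib/mysql/.replica_bootstrapped'
      notifies :restart, 'service[mysql]', :delayed
    end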
If you need stonith-style singletons, something that doesn't handle
split-brain on its own, &c, you need automation designed for that. Pacemaker
& Corosync are old school; things built on ZooKeeper, Doozer, or etcd are
what the cool kids are doing. Everything I've heard of actually running in
production does this out of band from Chef, typically with a
command-and-control tool like Capistrano, Fabric, MCollective, Rundeck, &c.
We use this approach for most things, notably MySQL.
If you're looking for a magic bullet, etcd-chef
<https://github.com/coderanger/etcd-chef> has hard consistency in its data
store and supports triggers on config changes, so (if you're daring) that
might meet your needs perfectly. I'm hoping to spike some work on it as soon
as I migrate my entire company into Rackspace Chicago. But I doubt I'll be
doing a production master failover via etcd-chef in the immediate future.
In a broader sense I think our industry's terms for clusters are lacking, and
our tools suffer for it.
--
~j
info janitor @ simply measured
On 2013-10-15, at 22:20, Bryan Taylor wrote:

> I'm wondering what the chef idioms are for a certain problem that comes up
> a lot when expanding a cluster. Let's say I have some kind of persistence
> store and I want to enable replication, or add a new node with replication
> to an already running cluster. The replication will communicate on some
> custom protocol, but in order to work, I have to move stateful data, like
> db logs or whatever, from the master to a new node. The master is "the
> master right now", so it needs to be dynamically discovered, and accessed
> via rsync or scp, say, to pull the files down. I'm thinking for this I
> should just provision every cluster node with a fixed static public/private
> key.