[chef] Re: Re: Re: Re: reconfiguring server vs continuous upgrades


  • From: Andrew Gross
  • To: chef
  • Subject: [chef] Re: Re: Re: Re: reconfiguring server vs continuous upgrades
  • Date: Fri, 12 Apr 2013 16:57:43 -0400

Hey Spike,

Here's how we get around storing custom information in node attributes:

1. Chef search: Our Redis slaves are launched with a 'redis-slave' role that adds nothing beyond the regular redis role. It exists only so we can search for nodes with that role and apply config changes appropriately (see the sketch after this list).

2. Data bags: This is where we store API keys and other secrets. Bonus points for using the encrypted versions.
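
For concreteness, a rough sketch of what both of those can look like in a recipe. Cookbook, template, and data bag names below are invented for illustration:

    # 1. Find the nodes carrying the marker role and feed them into a template:
    slaves = search(:node, 'role:redis-slave AND chef_environment:production')

    template '/etc/redis/slaves.conf' do
      source 'slaves.conf.erb'
      variables(slave_ips: slaves.map { |n| n['ipaddress'] })
    end

    # 2. Pull secrets out of an encrypted data bag instead of node attributes
    #    (assumes the decryption key file is already configured in client.rb):
    scout = Chef::EncryptedDataBagItem.load('secrets', 'scout')
    node.default['scout']['key'] = scout['api_key']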


On Fri, Apr 12, 2013 at 3:54 PM, Daniel DeLeo wrote:

On Friday, April 12, 2013 at 12:31 PM, Spike Grobstein wrote:

Hey Andrew (and everyone else replying as I compose this),

Thanks for the info. A lot of solid points.

I have a lot of data stored in the node's attributes (and edited via `knife node edit <nodename>`). I guess this is where it would be better to use data bags? We have things like Scout (scoutapp.com) API keys in there. I guess this could also be solved by hitting the API and pulling down the key at configure time.

So this opens up some more questions... There are cases where I'll configure a node with a postgres role. I then use the node's attributes to configure whether it's a master or a slave and, if it's either, which node it will replicate from/to. In the case where I'd be reconfiguring one of those but want to retain that configuration, what would be the best way to do it? Specific roles for each of those cases with the required attributes? Or some data bag trick?
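
One way to keep that configuration out of per-node attribute edits is to push it into role files, with any per-cluster details (like which master a slave follows) kept in a data bag. A rough sketch of such a role; the attribute names are invented rather than taken from any particular postgresql cookbook:

    # roles/postgres-slave.rb
    name 'postgres-slave'
    run_list 'role[postgres]'
    default_attributes(
      'postgresql' => {
        'replication' => {
          'role'   => 'slave',
          'master' => 'db001.internal.example.com'
        }
      }
    )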

I've got some other details I need to work out now, too, but I should be able to handle those on my own. Namely, how to deal with our internal DNS changes. I have straight-up File resources for the BIND configs that I modify when I add new nodes, and we name the nodes serially based on role (e.g. app001, app002, resque001, db001, db002), so I'll have to figure out whether that was a solid choice and whether there's a better way to do it.
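
One common alternative to hand-edited File resources is to render the zone file from a template driven by Chef search, so new nodes show up in DNS on the next run. A rough sketch, with the zone file path, template name, and domain invented:

    # Assumes an ERB template that iterates over @records to emit A records.
    service 'bind9'

    nodes = search(:node, '*:*').sort_by { |n| n.name }

    template '/etc/bind/db.internal.example.com' do
      source 'db.internal.erb'
      variables(records: nodes.map { |n| [n.name, n['ipaddress']] })
      notifies :reload, 'service[bind9]'
    end
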
We generate hostnames from $PRIMARY_ROLE-$SLUG.$DOMAIN where:

$PRIMARY_ROLE comes from a case statement in a recipe. It looks at node[:roles] and picks the "most important" role. This is just for convenience, so you can see what kind of machine you're on from the hostname in your prompt.

$SLUG is the cloud instance ID when running in a cloud, or a generated UUID otherwise.

The only tricky bit is that you need to configure Chef with a static node_name setting instead of relying on the FQDN, and if you want this to match the hostname, you need a script that generates the slug before Chef runs.
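
A rough sketch of that scheme in recipe code; the role ordering, slug file path, and domain below are invented, and the ec2 check stands in for whatever cloud you happen to use:

    primary_role = case
                   when node['roles'].include?('db')     then 'db'
                   when node['roles'].include?('resque') then 'resque'
                   when node['roles'].include?('app')    then 'app'
                   else 'misc'
                   end

    slug = if node.attribute?('ec2')
             node['ec2']['instance_id']
           else
             # written by a bootstrap script before the first Chef run, so the
             # static node_name in client.rb can match the eventual hostname
             IO.read('/etc/chef/node-slug').strip
           end

    node.default['internal_fqdn'] = "#{primary_role}-#{slug}.internal.example.com"

    # client.rb would then carry: node_name "app-i-0abc123.internal.example.com"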

In any case, I find sequential integers in hostnames to be a PITA in an automated environment so I'd recommend migrating to a different scheme.

-- 
Daniel DeLeo



