[chef] Re: Re: Re: How do you manage multiple data centers


  • From: Maxime Brugidou
  • Subject: [chef] Re: Re: Re: How do you manage multiple data centers
  • Date: Sat, 11 May 2013 11:54:01 +0200

The network_location plugin you describe looks promising. Our homemade plugin is very similar, albeit less generic, which is why I can't share it with the community. But it basically checks network subnets for the main IP address.

However, we get VLAN data and other info through LLDP (via another plugin, which is available somewhere), which is more automated than the YAML file used by network_location.

Anyway, the usage pattern seems to be that the datacenter should just be an attribute of the node (populated by Ohai if possible).
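
To give a rough idea (this is not our actual plugin; the DC names and subnets below are invented), a subnet-based Ohai plugin boils down to something like this old-style (pre-Ohai 7) plugin, distributed via a cookbook:

# Illustrative only: DC names and CIDR blocks are made up.
provides "datacenter"
require_plugin "network"

require "ipaddr"

subnets = {
  "dc1" => IPAddr.new("10.10.0.0/16"),
  "dc2" => IPAddr.new("10.20.0.0/16"),
}

# Match the node's main IP address against the known per-DC subnets.
match = subnets.find { |_dc, net| net.include?(ipaddress) }
datacenter(match ? match.first : "unknown")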

On May 10, 2013 7:19 PM, "steve ." wrote:
Surprised no one here's mentioned the as-seen-at-ChefConf network_location plugin.  (At work we're picking our way through the discussion with Legal about what we can open-source, but I think the most likely things are network_location and the Artifactory cookbook.)

Our approach has been to let the plugin handle presenting the physical/network topological location of the node and leave node.chef_environment to refer to the business's idea of what environment the node is in.  If the business says a node is a dev node, it doesn't matter to us what data center it's in -- it's going to act like a dev node.

To recap for those who weren't in the room:

network_location is a simple Ohai plugin (distributed via cookbook in the community-acknowledged method) that loads after the Ohai 'network' plugin.  It loads up a YAML file (currently distributed with the plugin in the cookbook, which is gross) that contains IPv4 CIDR blocks with key-value pairs associated with them.  The plugin then walks the 'network' attribute tree looking for IPv4 addresses and tries to match each one with a CIDR block in the file.  If it gets a match, it dumps the KVPs underneath the matched IP address.  Once it's done, it also goes back and puts the matching KVPs for node[:ipaddress] up in node[:network] ...
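
In sketch form (this is not the actual plugin source; the YAML path and keys below are just for illustration), the logic is roughly:

# Sketch only, not the real network_location code.
provides "network_location"
require_plugin "network"

require "yaml"
require "ipaddr"

# The YAML maps IPv4 CIDR blocks to arbitrary key-value pairs, e.g.:
#   "10.10.0.0/16": { facility: "dc1", vlan: "prod-app" }
blocks = YAML.load_file("/etc/ohai/network_location.yml").map do |cidr, kvps|
  [IPAddr.new(cidr), kvps]
end

lookup = lambda do |addr|
  found = blocks.find { |net, _kvps| net.include?(addr) }
  found && found.last
end

# Walk every IPv4 address under the 'network' attribute tree and attach
# the matching KVPs underneath that address.
network[:interfaces].each_value do |iface|
  (iface[:addresses] || {}).each do |addr, props|
    next unless props[:family] == "inet"
    kvps = lookup.call(addr)
    props[:network_location] = kvps if kvps
  end
end

# Then promote the KVPs for node[:ipaddress] up into node[:network],
# defaulting to "Unknown" so the keys always exist.
primary = lookup.call(ipaddress) || { "facility" => "Unknown", "vlan" => "Unknown" }
primary.each { |k, v| network[k] = v }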

Fairly recently, someone had the brilliant idea of sticking "Unknown" values in if no match was found for this stuff.

So we get "node[:network][:facility]" and "node[:network][:vlan]" values that we can always test for, along with a default value that we can assume means you're not someplace we care about.
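
That means recipes can always branch on those keys safely; e.g. something like this (the template and attribute names here are invented):

# Example usage in a recipe; resource and file names are made up.
if node[:network][:facility] == "Unknown"
  log "not in a known facility, using generic defaults"
else
  template "/etc/myapp/location.conf" do
    source "location.conf.erb"
    variables(
      facility: node[:network][:facility],
      vlan:     node[:network][:vlan]
    )
  end
end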

I would have suggested we open-source this sooner but I find the idea of distributing flat files kind of embarrassing.  There are a variety of places and methods here for people to store information about their networks, unfortunately.

Anyone have any interest in a plugin like this that sources from netdot?  I might need to write something like that soon and I don't think any such plugin exists yet in the wild.





On Fri, May 10, 2013 at 6:47 AM, Steffen Gebert wrote:
Hi Maxime,

> Currently the plugin is purely based on the network subnet since we have
> clear separated subnets for each DC.

would you mind sharing this plugin? I'm not too deep into Ruby, so
coding it on my own would cost me some effort, but I'd like to have
such functionality, too.

Yours
Steffen

On 5/8/13 1:01 PM, Maxime Brugidou wrote:
> Currently the plugin is purely based on the network subnet since we have
> clear separated subnets for each DC.
>
> We are adding additional location info like room/rack/plane using LLDP
> (based on the physical network topology which matches the physical
> location).
> On May 8, 2013 10:26 AM, "Jesse Nelson" wrote:
>
>> Maxime, I agree that the fact that a node resides in a certain location
>> shouldn't be prescribed; it should be discovered. What does your ohai plugin
>> do to discover the datacenter the node is in?
>>
>>
>> On Wed, May 8, 2013 at 12:53 AM, Maxime Brugidou wrote:
>>
>>> We run Chef in 7 datacenters and get the node's datacenter from an ohai
>>> plugin.
>>>
>>> This is actually the cleanest thing to do: you don't have to manually set
>>> the node's DC anywhere since it's auto-discovered. When we need specific
>>> attributes per data center, we use "wrapper" cookbooks that dynamically
>>> define attributes according to the DC.
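>>>
>>> For example (the cookbook and attribute names here are invented), a wrapper
>>> cookbook's attributes file can just switch on the discovered DC:
>>>
>>> # attributes/default.rb of a hypothetical wrapper cookbook
>>> case node["datacenter"]
>>> when "dc1"
>>>   default["myapp"]["ntp_servers"] = ["ntp1.dc1.example.com", "ntp2.dc1.example.com"]
>>> when "dc2"
>>>   default["myapp"]["ntp_servers"] = ["ntp1.dc2.example.com", "ntp2.dc2.example.com"]
>>> else
>>>   default["myapp"]["ntp_servers"] = ["pool.ntp.org"]
>>> end
>>>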
>>> On May 8, 2013 7:19 AM, "Torben Knerr" wrote:
>>>
>>>> Hey guys,
>>>>
>>>> this is theoretical; I haven't been through it in practice yet:
>>>>
>>>> From a conceptual point of view, I'd argue for definitely using
>>>> environments rather than roles for keeping the datacenter-specific
>>>> attributes (each datacenter being a different environment).
>>>>
>>>> From gut feeling, I would have started with something like 'prod_dc1' and
>>>> 'prod_dc2' environments, etc.
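>>>>
>>>> E.g. a sketch (the attribute names are invented):
>>>>
>>>> # environments/prod_dc1.rb
>>>> name "prod_dc1"
>>>> description "Production nodes in DC1"
>>>> default_attributes(
>>>>   "myapp" => { "datacenter" => "dc1" }
>>>> )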
>>>>
>>>> Did I get it conceptually wrong, or are there other practical reasons why
>>>> you are all using a single 'prod' env and managing the DC-specific stuff in
>>>> roles instead?
>>>>
>>>> Cheers, Torben
>>>> On May 8, 2013 1:40 AM, "Jesse Nelson" wrote:
>>>>
>>>>> I've used internal DNS to denote locality. We use a contrived 4-letter
>>>>> TLD and put the datacenter in the first label, e.g. 356sf.myorg,
>>>>> 365ny.myorg.  I know it violates the cardinal no-metadata-in-names rule that
>>>>> I try to abide by, but this makes it pretty easy to handle datacenter-
>>>>> specific needs via node.domain, without the need for locality-specific
>>>>> roles. A single cook/role (or role cook) denotes all the specifics for
>>>>> each datacenter, and individual cooks can easily override if needed.
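>>>>>
>>>>> E.g. a recipe (or the role cook) can key off the first label of
>>>>> node.domain; a rough sketch, with invented attribute names:
>>>>>
>>>>> dc = node["domain"].to_s.split(".").first
>>>>> node.default["myapp"]["syslog_host"] = case dc
>>>>>                                        when "356sf" then "syslog.356sf.myorg"
>>>>>                                        when "365ny" then "syslog.365ny.myorg"
>>>>>                                        else "syslog.myorg"
>>>>>                                        end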
>>>>>
>>>>
>>
>





