[chef] Re: I think I'm doing it wrong: DNS as an example


  • From: Daniel DeLeo < >
  • To:
  • Subject: [chef] Re: I think I'm doing it wrong: DNS as an example
  • Date: Mon, 25 Mar 2013 11:42:27 -0700


On Monday, March 25, 2013 at 5:04 AM, Brian Akins wrote:

Let's take DNS (with route53) as an example:

Each node uses an LWRP (based on HW's route53 cookbook) to check
route53 and add itself to DNS if needed. This seems like a common
pattern and is all good.
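The per-node pattern described above might look roughly like this in a recipe. This is a sketch only: the `route53_record` resource name and its properties are modeled loosely on the community route53 cookbook and are not confirmed against any particular version.

```ruby
# Hedged sketch: each node registers itself in Route 53 on every Chef run.
# Resource and property names are illustrative, based loosely on the
# community route53 cookbook, not a confirmed API.
route53_record "register #{node['fqdn']}" do
  name    "#{node['fqdn']}."        # trailing dot: fully qualified record name
  value   node['ipaddress']
  type    'A'
  zone_id node['route53']['zone_id']
  action  :create
end
```

Run on every converge, this is the "LWRP every chef run" pattern Brian is asking about: simple, but it means one API interaction per node per run.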

However, what about when you have, say, 5000 nodes? It just seems
absolutely silly to have each node do this every hour. While it does
make sure that new nodes get added to DNS right away - it just seems
unnecessary to do this every chef run.

Now, imagine the above but with 3 or 4 services - API calls for
monitoring, load balancing, etc. The "LWRP every chef run" is easy
and makes sense when you have relatively few nodes.

How are other large installs handling this?

I was thinking that a script that once every x minutes scraped route53
and chef and just applied the "diff" would be more suitable for
"large" installs.

Or am I just fretting over nothing?

--Brian
At Opscode, we use Dyn DNS. At first we took the naive approach of setting the DNS record on every run, but eventually we ran into API throttling problems. For this case it was simple enough to verify that the host already had the CNAME it wanted and skip the API call.
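The check-then-skip guard described here can be sketched as follows. The `fetch_cname`/`set_cname` methods and the in-memory stub client are hypothetical stand-ins for a real DNS provider API, used only to make the idea concrete:

```ruby
# Minimal in-memory stub standing in for a real DNS API client
# (hypothetical; a real client would call the provider's HTTP API).
class StubDnsClient
  def initialize(records = {})
    @records = records
  end

  def fetch_cname(fqdn)
    @records[fqdn]
  end

  def set_cname(fqdn, target)
    @records[fqdn] = target
  end
end

# The guard: read first, and only write when the record differs, so a
# correctly registered node makes zero write calls per run. This avoids
# burning write-API quota on nodes that are already converged.
def ensure_cname(client, fqdn, target)
  return :skipped if client.fetch_cname(fqdn) == target  # already correct
  client.set_cname(fqdn, target)
  :updated
end
```

Note that this still costs one read per node per run; it avoids the (typically more aggressively throttled) write calls, which was enough in our case.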

Re: suggestions to use ZooKeeper: go for it if you get enough value to justify the management overhead.


-- 
Daniel DeLeo



