[chef] Re: Re: app deployments w/ chef


Chronological Thread 
  • From: "steve ." < >
  • To: " " < >
  • Subject: [chef] Re: Re: app deployments w/ chef
  • Date: Thu, 10 Oct 2013 13:40:44 -0700

We like RunDeck for deployment orchestration, though obviously we are interested in trying pushy.  (Or push.  Or whatever it's called this week.)  In most cases our RunDeck jobs trigger a Chef run on a specific scope of Chef nodes within an environment, in series or in parallel ... though we have some folks using a deployment pattern where RunDeck updates a data bag before or after running Chef, and/or triggers some other post-deploy process.

Could you do all that in a Chef run?  It's probably possible, but I'm not sure a Chef handler running on the node is going to be the right place for every service-validation mechanism.  You end up potentially adding many more gem dependencies into Chef's Ruby install when you do that, which is perhaps not desirable in production.

So maybe what you do is write a lightweight Chef report handler that hits a web service saying, "This node's just finished deploying version Y of Product X and needs to be validated."  That web service could be, say, a CI service or something else capable of running tests against a parameterized endpoint and making pretty graphs of the result.
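A handler along those lines could stay quite small.  Here's a rough sketch in plain Ruby (the class name, endpoint, and payload fields are all invented for illustration; a real implementation would subclass Chef::Handler and pull the node name and run data from run_status inside #report):

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical lightweight notifier: tells a validation service that a
# deploy just finished and needs checking.  Nothing here is Chef-specific,
# which keeps gem dependencies out of Chef's Ruby install.
class DeployNotifier
  def initialize(endpoint)
    @endpoint = URI(endpoint)
  end

  # Build the JSON payload describing what was just deployed.
  def payload(node_name, product, version)
    { node: node_name, product: product, version: version,
      status: 'needs_validation' }.to_json
  end

  # POST the payload to the validation service.  In a Chef::Handler
  # subclass, this body would live in #report.
  def notify(node_name, product, version)
    req = Net::HTTP::Post.new(@endpoint,
                              'Content-Type' => 'application/json')
    req.body = payload(node_name, product, version)
    Net::HTTP.start(@endpoint.host, @endpoint.port) { |h| h.request(req) }
  end
end
```

The web service on the other end (CI or whatever) then owns the actual test-running and graphing.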

Then *that* thing would be responsible for deciding your app was ready to go into the production pool on the LB.  (Maybe it hits the load balancer's API directly to put the node into service, maybe it sets a "validated" flag on the Chef node object or in a data bag somewhere and Chef takes care of it?)
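Either way, the gating logic itself is cheap.  A sketch of the pool-membership check, assuming made-up "deploy/validated" and "deploy/version" node attributes that the validation service would set:

```ruby
# Hypothetical check: a node only goes into the LB pool when the
# validation service has flagged it AND it's running the desired version.
def in_production_pool?(node_attrs, desired_version)
  deploy = node_attrs['deploy']
  !deploy.nil? &&
    deploy['validated'] == true &&
    deploy['version'] == desired_version
end
```

Whether Chef or the LB-facing service evaluates this is a design choice; the point is that the flag, not the deploy itself, controls pool membership.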

And all this assumes that you don't have to do any destructive DB migrations.  (You should follow Graham's advice and then you won't need to solve for that scenario :D )


On Thu, Oct 10, 2013 at 12:32 PM, Graham Christensen < > wrote:

On Thursday, October 10, 2013 at 2:25 PM, Wes Morgan wrote:

2. When running chef-client on nodes that already have some version of the app(s) running, make sure that they all run the same version and upgrade simultaneously and atomically (or as close to that as possible). So, for example, if you were storing a git rev in a data bag as the "current version I want deployed" and then one node kicked off its regular chef-client run and upgraded to that, the other nodes running that app would then still be on an old version of that code. That shouldn't happen, but it would with the simplest use of the Chef deploy resource and the splayed, regular interval chef-client runs.
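For reference, the "simplest use" being described might look like this recipe fragment (data bag name, item name, and repo URL are all hypothetical) -- each node deploys whatever rev is pinned in the data bag whenever its own splayed chef-client run happens to fire, which is exactly how the fleet ends up on mixed versions between runs:

```ruby
# Pin the desired rev in a data bag; each node's regular chef-client
# run picks it up independently, so nodes converge at different times.
rev = data_bag_item('deploys', 'myapp')['revision']

deploy '/srv/myapp' do
  repo 'git@github.com:example/myapp.git'
  revision rev
  action :deploy
end
```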
 The only way I've sanely been able to handle this is to do breaking deployments in two parts, so the old version and the new CAN coexist in production for some period of time. This is what I do, for example, for DB migrations:

1. Creating a release which adds new columns to the database and writes to them, as well as the old columns, while still reading from the old columns.
2. Waiting for the deployment to be consistent
3. Creating a second deployment which reads from the new columns and drops the old columns.
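Sketched as SQL strings (the table and column names are invented), the two phases above might look like:

```ruby
# Phase one ships with a release that writes BOTH the old columns and
# the new one, while still reading from the old columns.
PHASE_ONE = [
  "ALTER TABLE users ADD COLUMN full_name TEXT",
].freeze

# Phase two ships only after phase one is consistent everywhere; the
# accompanying release reads full_name exclusively, so the old columns
# can be dropped safely.
PHASE_TWO = [
  "ALTER TABLE users DROP COLUMN first_name",
  "ALTER TABLE users DROP COLUMN last_name",
].freeze
```

Nothing in phase one is destructive, so old and new app versions can run side by side until phase two.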

Graham





Archive powered by MHonArc 2.6.16.
