[chef] Re: Re: Re: Re: Policyfiles and chef provision


  • From: Christine Draper
  • Subject: [chef] Re: Re: Re: Re: Policyfiles and chef provision
  • Date: Thu, 23 Jul 2015 13:02:52 -0500

Dan,

I guess 'chef provision' is more like 'chef policy provision' than a general interface for people wanting a simpler on-ramp to provisioning?

The way I currently pass arguments into Chef Provisioning is with the chef-client -j option, i.e. I treat them as node attributes on the provisioning node. I guess the --no-policy option would let me specify them on the command line and access them through the opts object without storing them as node attributes, which could be useful.
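
For concreteness, a minimal sketch of what I do today (the file, attribute and recipe names are made up):

    # provision-args.json
    { "cluster": { "env_name": "qa1", "dbserver_count": 1 } }

    # run the provisioning recipe locally, feeding the arguments in as node attributes
    chef-client -z -o 'recipe[provisioner::cluster]' -j provision-args.json

    # inside provisioner/recipes/cluster.rb, read them back
    env_name = node['cluster']['env_name']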

I'm still struggling to get a mental picture of using policyfiles with provisioning, even for simple multi-node systems. Say something as simple as an appserver and a dbserver that I want to provision multiple times when a tester needs them. They need different runlists and attributes, but they should be using the same set of cookbook versions. I want to bring up the dbserver first so I can configure the appserver with its IP address. My natural inclination is to write a provisioning recipe that brings up the machines and sets their attributes/runlists and environment (to control cookbook versions).  What would the policyfile version of this scenario be?
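
For concreteness, I could imagine something like one Policyfile per machine role (cookbook and file names made up):

    # Policyfile_dbserver.rb
    name 'dbserver'
    default_source :supermarket
    run_list 'myapp_db::default'
    cookbook 'myapp_db', path: '../cookbooks/myapp_db'

    # Policyfile_appserver.rb
    name 'appserver'
    default_source :supermarket
    run_list 'myapp_app::default'
    cookbook 'myapp_app', path: '../cookbooks/myapp_app'

But each policy gets its own lockfile, so I don't see what keeps the shared cookbook versions in sync, or how runtime data like the dbserver's IP reaches the appserver, which is really what I'm asking.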

Regards,
Christine  



On Thu, Jul 23, 2015 at 12:01 PM, Daniel DeLeo wrote:
On Wednesday, July 22, 2015 at 5:57 PM, Christine Draper wrote:
> I had the same reaction re the chef provision command... it seems targeted at letting me provision one or more 'identicalish' nodes, whereas my use of provisioning is typically to set up a variety of nodes that form a working system or solution.


I did think about clustering scenarios when I was writing `chef provision`, but it turns out that it can be complicated depending on what the exact use case is. Do you want a “throwaway” cluster to integration-test your cookbooks as a whole? How do you keep different developers’ throwaway clusters from conflicting with each other? How do you know what machines (or other resources) to destroy when the user requests to destroy stuff? How much responsibility should be placed on the user if they update their provisioning code and then run the destroy operation (which could leave a stray EBS volume, for example)? None of these problems are insurmountable, but it will take a lot of thinking to do this in a way that provides a great experience.

So, for the policyfile focused stuff, I opted to make it work more like `knife bootstrap` and `knife cloud create`, except using Chef Provisioning, so hopefully you don’t have the CLI options issues (i.e., you need 15 options you can never remember) that those commands have.

That said, I also felt that, regardless of whether you’re using Policyfiles or not, it wasn’t easy to pass “argument” type information into Chef Provisioning with any of the existing methods (environment variables being the easiest, but they have some unhelpful failure modes when you typo things). So I added the `chef provision --no-policy` mode of operation which loads up Chef and Chef Provisioning and then just runs your recipes. If you’re already happily creating clusters with Chef Provisioning, this might be the best way for you to use `chef provision` right now.
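
Using only the invocations that come up in this thread, the two modes look roughly like this:

    # policyfile mode: provision machines against a named policy in a policy group
    chef provision POLICY_GROUP --policy-name POLICY_NAME

    # recipe-only mode: just load Chef + Chef Provisioning and run your provisioning recipes
    chef provision --no-policy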

>
> Is there a way within a chef provisioning recipe to say 'set this machine up using this policyfile'? I couldn't see one. Maybe we need a policyfile resource (to load policies) and a policy attribute on the machine resource.

What `chef provision` does right now is add `policy_name` and `policy_group` settings to the client.rb file, via Chef Provisioning’s built-in option for this. You can do this manually as well.

At the moment this is the only way to tell chef-client which policyfile you want, because the node object doesn’t yet have these fields. In a future release of Chef Client and Chef Server, we will add these fields to the node object. After that’s done, it will be possible to add these to Chef Provisioning.
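
If you want to do it manually, here is a rough, untested sketch of a provisioning recipe, assuming your chef-provisioning version’s convergence_options accept a chef_config string that gets appended to the node’s client.rb (check your driver’s docs), with made-up machine and policy names:

    require 'chef/provisioning'

    # (driver configuration, e.g. with_driver, omitted)
    machine 'appserver' do
      # extra client.rb content for the provisioned node
      machine_options(
        convergence_options: {
          chef_config: "policy_name 'appserver'\npolicy_group 'dev'\n"
        }
      )
      converge true
    end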

Policyfiles will be a bit trickier, since the process for generating the JSON and uploading everything to the server involves a lot of things (dependency resolution, local caching of cookbooks from supermarket, multiple cookbook uploads and then finally uploading the policy JSON) and there are some imperative operations involved (i.e., the user has to decide if they want to update dependencies or just take the lockfile as-is). I’m not ruling it out, but right now my thinking is there’s a bit of an impedance mismatch between the Chef Provisioning way of thinking and the way Policyfiles work otherwise. Maybe some more insight into user expectations here would be helpful.
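
Roughly, those imperative steps map onto the existing Policyfile commands (policy file and group names here are made up):

    # resolve dependencies, cache cookbooks locally, write the corresponding .lock.json
    chef install Policyfile_appserver.rb

    # re-resolve dependencies only when you decide to; otherwise the lock is taken as-is
    chef update Policyfile_appserver.rb

    # upload the cookbooks and the compiled policy JSON to the server under a policy group
    chef push dev Policyfile_appserver.rb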

>
> Regards,
> Christine
>
> On Wed, Jul 22, 2015 at 1:16 PM, Maxime Brugidou wrote:
> > Hey,
> > Sorry, I might be wrong, but from what I understand a Policyfile-based workflow is pretty much like versioning a role with an associated Berksfile.lock.
> > I am not sure exactly how things are intended to be used, but the associated provision cookbook should probably provision nodes from the given Policy name and nothing else. Each Policyfile would have its own provision cookbook (and maybe even its own git repo, since they are versioned separately). This seems a bit extreme to me, but it could be greatly improved if we leverage named run lists in the Policyfile: then we can have multiple run_lists under the same Policyfile. I haven't tested that yet.
> > Maxime
> > On Jul 22, 2015 7:30 PM, "Chris Sibbitt" wrote:
> > > I've been experimenting with policyfile support lately, and I'm hoping someone can clarify some thinking around policyfile support in the "chef provision" command.
> > >
> > > `chef provision POLICY_GROUP --policy-name POLICY_NAME` lets me specify ONE policyfile and run a provisioning recipe. Policyfiles define a run_list, but one of my typical chef-provisioning recipes contains multiple machines with different run_lists.
> > >
> > > I'm not sure whether to take this as a suggestion that provisioning recipes should only do one machine each, or whether the tooling is just not quite meshing yet (I'm aware it's all very new and beta), or whether there is something conceptual missing from my thinking.
> > >
> > > Anyone else experimenting with this combination yet?
I addressed this above: you can use the `--no-policy` option to skip the policyfile part, which is probably the best way to do clusters. You’d have to manage uploading the policies and such yourself via the command line.
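
For a multi-machine cluster today that might look roughly like this (made-up policy file names and group):

    # upload each machine's policy yourself
    chef push dev Policyfile_dbserver.rb
    chef push dev Policyfile_appserver.rb

    # then run your cluster provisioning recipe without the single-policy assumption
    chef provision --no-policy

with each machine pointed at its policy via client.rb, as described earlier.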




--
Daniel DeLeo






--
ThirdWave Insights, LLC | (512) 971-8727 | www.ThirdWaveInsights.com | P.O. Box 500134 | Austin, TX 78750


