[chef] Re: Re: RE: RE: Re: RE: Re: Proper Berksfile, Berksfile.lock usage?


  • From: Ranjib Dey < >
  • To: < >
  • Subject: [chef] Re: Re: RE: RE: Re: RE: Re: Proper Berksfile, Berksfile.lock usage?
  • Date: Thu, 21 May 2015 16:39:05 -0700

Policyfiles address similar requirements.
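For anyone who hasn't looked at them yet, a rough Policyfile.rb sketch (cookbook and policy-group names here are invented for illustration) that locks one artifact set and promotes the same lock between groups:

    # Policyfile.rb
    name 'webserver'
    default_source :supermarket
    run_list 'base::default', 'myapp::default'
    cookbook 'myapp', path: '../cookbooks/myapp'

    # resolve + lock once, then promote the same lock through policy groups:
    #   chef install                  # writes Policyfile.lock.json
    #   chef push dev Policyfile.rb
    #   chef push qa  Policyfile.rb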

If you go down the path where every cookbook version has to be locked explicitly for every node, it's going to become a pain sooner or later. Cloning a git repo on the node has its own merits and disadvantages: you are now forced to share your git credentials and development history (and everything else that's in the repo but not needed by chef-client) with every node. Code != artifact. An artifact is a numerically versioned, independent entity, while code in SCM (like git) is coupled with its own history (i.e. deltas) and is not numerically versioned. As you mentioned, in smaller deployments you might find git cloning appealing because you can't get Berkshelf to fit your workflow, but it's not clear how you'll update the git repo on each node, or how you'll maintain node metadata (what's the equivalent of `knife status`?). In the Puppet world this was very popular, but that's largely because Puppet uses a git repo internally, which Chef does not.

I use chef-solo / chef-client -z extensively for volatile, on-demand infrastructure, or where I hand off the servers after initial provisioning (i.e. they are never updated or reconfigured afterwards). Even in those cases I find the git-clone style a pain. You can create a Debian or RPM installer of your cookbooks with a single command using FPM, which gives you versioning, metadata management (via dpkg or rpm), etc.
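For example, something along these lines (package name and paths are purely illustrative):

    berks vendor build/cookbooks            # vendors exactly what Berksfile.lock pins
    fpm -s dir -t deb -n site-cookbooks -v 1.0.0 \
        --prefix /var/chef/cookbooks -C build/cookbooks .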

I think there are a lot of scenarios where chef-client -z / chef-solo shines, but a git clone on the node itself is a bad move; there are plenty of other, easier ways to distribute a fixed set of cookbooks along with the Berksfile.lock.
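One such route, for instance, is letting Berkshelf build a tarball from the lockfile and unpacking that on the node (a sketch only; file names and paths are assumptions):

    berks package cookbooks-20150521.tar.gz       # tarball built from the Berksfile.lock
    # ship it however you like (apt/yum repo, S3, scp), then on the node:
    tar -xzf cookbooks-20150521.tar.gz -C /var/chef
    chef-client -z -c /var/chef/client.rb -o 'recipe[myapp]'   # client.rb just sets cookbook_path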

On Thu, May 21, 2015 at 3:24 PM, Torben Knerr < > wrote:
This doesn't add much to the discussion, but I just wanted to drop my +1 to what Nico said,
as I have a quite similar setup for most of my Chef projects :-)

I would go one step further and use chef-zero / chef-client -z instead of chef-solo though,
which gives you compatibility with cookbooks that use search.
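A rough sketch of what that looks like in practice (paths and run list here are illustrative, not an exact setup):

    # /var/chef/client.rb (assumed layout):
    #   cookbook_path '/var/chef/cookbooks'
    #   node_path     '/var/chef/nodes'      # node JSON dropped here is what search() returns
    #   data_bag_path '/var/chef/data_bags'
    chef-client -z -c /var/chef/client.rb -o 'recipe[myapp]'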

Question whether you really need a Chef Server as part of your workflow. Depending on what you
want to achieve you may well need it; if you don't, leaving it out can make your
workflow much simpler.

HTH, Torben



On Thu, May 21, 2015 at 7:17 PM, Nico Kadel-Garcia < > wrote:

This is why I do not use a chef server. I use chef-solo, and Berksfile.lock is in a git repository on the local host. That gives me complete control, on each host, of exactly which versions of the cookbooks are in play, and I don’t have to manually deduce and re-deduce dependency updates.
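Presumably that boils down to something like this on each host (the paths here are guesses, not the exact layout):

    cd /opt/chef-repo && git pull              # repo has Berksfile + Berksfile.lock checked in
    berks vendor /opt/chef-repo/cookbooks      # installs exactly the locked versions
    chef-solo -c /opt/chef-repo/solo.rb -j /opt/chef-repo/node.json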

 

I don’t get Chef ‘search’ functionality, but that’s fine for small or dynamic development environments.

 

 

From: Fouts, Chris [mailto: ]
Sent: Wednesday, May 20, 2015 3:22 PM
To: 
Subject: [chef] RE: Re: RE: Re: Proper Berksfile, Berksfile.lock usage?

 

Been there done that, it failed for us.

 

Chris

 

From: Yoshi Spendiff [mailto: ]
Sent: Wednesday, May 20, 2015 3:05 PM
To: chef
Subject: [chef] Re: RE: Re: Proper Berksfile, Berksfile.lock usage?

 

But you're not setting attributes in the environment, you're setting cookbook version constraints. These are taken from your Berksfile.lock, which should be in source control as part of an environment cookbook. Running berks apply with the Berksfile.lock from a specific version of that cookbook rolls the environment back to the version locks recorded in that cookbook version.
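In other words, roughly (tag and environment names are illustrative):

    git checkout v1.4.2     # the environment-cookbook version whose Berksfile.lock you want back
    berks install           # honours the committed Berksfile.lock
    berks apply production  # pins the 'production' environment to exactly those versions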

In any case, once an updated cookbook version has been applied and there's a problem, the damage is usually already done; rolling back to a previous cookbook version isn't necessarily going to fix it.

So instead of having multiple berks uploads to different orgs, you just upload once to one org and then control the release of those cookbooks to different environments.

Berkshelf is a dependency management tool with a few extra features. If you've chosen a workflow where you have to move objects between organisations, it's not going to help you with that any more than it already does (which is pulling in the dependencies, as you say, at each step).

 

On Wed, May 20, 2015 at 11:52 AM, Fouts, Chris < > wrote:

Chef environments and roles are problematic (they have been for us) since they are not versioned in the Chef server the way cookbooks are. We’ve therefore avoided roles completely and use environments only sparingly.

 

So ideally we upload cookbooks to the Dev organization more frequently than to the QA organization. It’s a promotion process.

 

Chris

 

From: Yoshi Spendiff [mailto: ]
Sent: Wednesday, May 20, 2015 2:42 PM
To: chef
Subject: [chef] Re: Proper Berksfile, Berksfile.lock usage?

 

I would suggest using different environments for, well, environments, not organisations.

You can use cookbook version locks on environments to control updates to cookbooks. These locks can be applied easily using either A) berks apply, B) berkflow, or C) Policyfiles, which are newer and integrated into Chef itself.
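Either way, the locks end up as version constraints on the environment itself, e.g. (cookbook names and versions invented):

    # environments/qa.rb -- or let `berks apply qa` write the equivalent for you
    name "qa"
    description "QA environment"
    cookbook "myapp", "= 1.4.2"
    cookbook "nginx", "= 2.7.6"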

That way you have one build process to upload your cookbook versions and then another process to control the release of those to environments.

The way you're doing it now is just work duplication; there's no need to have orgs split like that.

 

On Wed, May 20, 2015 at 9:00 AM, Fouts, Chris < > wrote:

I have Berkshelf working now, so that is NOT the issue. Nor is this about the Berks workflow blogged by Seth Vargo. I just don’t think I’m using it properly, since it feels cumbersome the way I’m using it, so it must be the way I’m using it.

 

I have the following setup:

 

Git server: http://git-server.domain.com

BerksAPI server: http://chef-server.domain.com:26200

Chef dev organization: https://chef-server.domain.com:443/organization/dev

Chef QA organization: https://chef-server.domain.com:443/organization/qa

 

Step 1:

When I submit cookbooks and cookbook updates to my git server, I run a Jenkins job that fetches HEAD (for now) and does a berks install and berks upload to my BerksAPI server. I use a Berksfile/metadata.rb that lists ALL my cookbooks; I have 50 cookbooks in these files. This generates a Berksfile.lock file.
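For illustration, that Step 1 Berksfile presumably looks something like this (cookbook names invented), with the Jenkins job running berks install && berks upload against it:

    # Berksfile -- all ~50 cookbooks, resolved against the internal Berks API server
    source "http://chef-server.domain.com:26200"

    cookbook "base"
    cookbook "myapp"
    cookbook "nginx"
    # ...and so on for the rest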

 

Step 2:

Now I want to fetch my cookbooks from the BerksAPI server and upload them to my Chef dev organization. I use a different set of Berksfile/metadata.rb files that ONLY contain the top-level role cookbooks, to take advantage of Berkshelf’s transitive cookbook dependency resolution. So I may only have 10 cookbooks in these files, and these 10 cookbooks, via Berkshelf dependency management, will eventually install/upload all 50 cookbooks to my Chef server. This process generates a Berksfile.lock file too.
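So the Step 2 Berksfile presumably only names the role cookbooks and lets dependency resolution do the rest (names invented):

    # Berksfile -- only the ~10 top-level role cookbooks
    source "http://chef-server.domain.com:26200"

    cookbook "role-webserver"
    cookbook "role-database"
    # ...their metadata dependencies pull in the remaining ~40 cookbooks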

 

Step 3:

Now I want to fetch my cookbooks from the BerksAPI server and upload them to my Chef QA organization. I use a different set of Berksfile/metadata.rb files that ONLY contain the top-level role cookbooks, to take advantage of Berkshelf’s transitive cookbook dependency resolution. So I may only have 10 cookbooks in these files, and these 10 cookbooks, via Berkshelf dependency management, will eventually install/upload all 50 cookbooks to my Chef server. This process generates a Berksfile.lock file too.
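If Steps 2 and 3 really are identical apart from the target organization, the only thing that needs to differ is the organization URL Berkshelf uploads to, e.g. a per-org Berkshelf config (the layout below is a guess, not the actual setup):

    # .berkshelf/config-qa.json -- identical to the dev one except for the organization URL
    {
      "chef": {
        "chef_server_url": "https://chef-server.domain.com:443/organization/qa",
        "node_name":       "jenkins",
        "client_key":      "/etc/chef/jenkins.pem"
      }
    }

    berks upload -c .berkshelf/config-qa.json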

 

So it seems like I’m repeating the SAME process in all 3 steps, or at least in Steps 2 and 3, and I feel like I shouldn’t have to? I feel like I’m missing the point of the generated Berksfile.lock file?

 

Can you fine ladies and gentlemen elaborate on your process?

 

Chris




