- From: Greg Symons <>
- To: <>
- Subject: [chef] Berkshelf Frustrations, or, Am I Just Doing It Wrong?
- Date: Wed, 27 Mar 2013 17:03:15 -0500
- Organization: DrillingInfo, Inc
So, I've been struggling with getting Berkshelf to do exactly what I
want it to do, to the extent that I'm starting to wonder if what I'm
trying to do is the Wrong Thing(tm). It seems to work great for cookbook
development, but where I'm running into trouble is when I try to
assemble all of our disparate cookbook sources (community, github,
internal git repos) into a chef repo and upload to the server. I'm
currently using Berkshelf 1.2.1.
The workflow I'd like to have is this:
1. Cookbooks have their own release cycles and are developed independently,
tagging released versions as they go. Their dependencies are
resolved first from a central chef server, then from the community API.
The cookbooks (hopefully) properly list their dependencies in their
metadata, and use pessimistic constraints where necessary (see the
metadata sketch after this list).
2. For each platform (here defined nebulously as a set of applications
and services that form a coherent whole), there is a chef repository
that contains (mostly) only roles, data bags, and environments, and a
Berksfile.
3. When the time comes to make a platform release, we update cookbooks
to the latest versions as determined by Berkshelf. The platform
Berksfile contains mostly top-level cookbooks (i.e. cookbooks that are
directly mentioned in role run lists; ideally application and role
cookbooks, but we're not there yet), sourced from the community site
where possible, and from git with specific refs where it's not.
Internally developed cookbooks are always given a version constraint in
the Berksfile; external cookbooks only when necessary (see the Berksfile
sketch after this list).
4. After resolving the dependencies, we generate a set of environment
cookbook constraints that pin the test environment to the resolved
versions (see the environment sketch after this list). We upload the
cookbooks and then upload the new environment to allow the test platform
to converge to the new configuration. The new configuration is given a
"thorough" acceptance test; when it passes, we approve the configuration
for deployment.
5. When the configuration is approved, we modify the production
environment to have the same constraints as test, the production nodes
happily converge to their new configurations, and celebrations are had
by all.
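To make step 1 concrete, here's the shape of metadata I mean (cookbook
names and versions invented for illustration):

```ruby
# metadata.rb for a hypothetical internal cookbook
name        'myapp'
version     '1.3.0'

# Pessimistic constraints where we know we need a particular release line
depends 'nginx',      '~> 1.4'
depends 'postgresql', '~> 3.0'
```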
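And here's a trimmed-down sketch of the kind of platform Berksfile from
step 3 (again, the names, git URL, and refs are made up):

```ruby
# Platform Berksfile (Berkshelf 1.x syntax)
site :opscode

# Community cookbooks, constrained only when necessary
cookbook 'nginx'
cookbook 'postgresql', '~> 3.0'

# Internally developed cookbook, pinned to a tag in our git repo
cookbook 'myapp', '~> 1.3',
  :git => 'git@git.internal:cookbooks/myapp.git', :ref => 'v1.3.0'
```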
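The environment constraints from step 4 then end up looking something
like this, with the exact pins being whatever Berkshelf resolved:

```ruby
# environments/test.rb (sketch; versions are illustrative)
name 'test'
description 'Test platform, pinned to the last resolved cookbook set'

cookbook_versions(
  'nginx'      => '= 1.4.4',
  'postgresql' => '= 3.0.5',
  'myapp'      => '= 1.3.0'
)
```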
The problem I'm running into is between steps 3 and 4. I'm having
trouble getting repeatable results out of the cookbook resolution. I'm
OK if things change between executions of `berks update`. The problem is
that if I do a `berks update` on my machine, commit both the Berksfile
and the Berksfile.lock, and then do a `berks install` on another
machine, I don't necessarily get the same cookbooks on the second
machine as I did on the first: if the cookbooks were unconstrained in
the Berksfile, they may be updated. I see the same problem if I do a
`berks upload`, even on the same machine, since it re-resolves the
dependencies before doing the upload. I've even tried
specifying Berksfile.lock as the Berksfile (i.e. `berks upload
--berksfile Berksfile.lock`) and I see the same result.
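In case it matters, the lock file we're committing is just
Berksfile-style DSL with pinned versions; as I understand the 1.x
format, it looks roughly like this (names and versions made up):

```ruby
# Berksfile.lock as generated by Berkshelf 1.2
cookbook 'nginx', :locked_version => '1.4.4'
cookbook 'postgresql', :locked_version => '3.0.5'
cookbook 'myapp', :locked_version => '1.3.0',
  :git => 'git@git.internal:cookbooks/myapp.git', :ref => 'abc1234'
```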
Does Berksfile.lock simply not work? Is the only way to get consistency
to do a `berks install --path PATH` and commit the results? Is my idea
for the workflow completely wrong?
Thanks, anyone who can help,
Greg