On 28.03.2013 16:26, "Torben Knerr" wrote:

Hi Greg,
sounds like a sane workflow. We are doing it similarly, but with chef solo and thus without a chef server.
In addition to what you described:
- we use application (a.k.a. role or toplevel) cookbooks, not roles
- for library cookbooks we use no version constraints, or if necessary pessimistic version constraints (e.g. ~> 1.1), in metadata.rb
- for application cookbooks we use strict versioning (e.g. = 1.1.0) for all dependencies in the graph (i.e. including the transitive ones) in metadata.rb. We do this because we want stable application cookbooks, and since the chef server does not care about Cheffile/Berksfile.lock, we lock the transitive deps in metadata.rb as well
- with such an approach your environment version constraints become very simple => you only need to lock the app cookbook's version there
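To make the constraint style concrete, here is a minimal metadata.rb sketch (the cookbook names and version numbers are invented for illustration):

    # metadata.rb of a library cookbook: no constraints, or pessimistic ones
    name    'my-library'
    version '0.2.0'
    depends 'apt'
    depends 'nginx', '~> 1.1'

    # metadata.rb of an application cookbook: strict constraints on the whole
    # graph, transitive deps included, because the chef server ignores
    # Cheffile/Berksfile.lock
    name    'my-app'
    version '1.1.0'
    depends 'nginx',           '= 1.1.2'
    depends 'build-essential', '= 1.3.4'   # e.g. a transitive dep, locked too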
Finally, we treat dependency resolution for app cookbooks a bit differently:
- rather than a toplevel Cheffile/Berksfile in the chef repository, we have a custom yml file which contains the app cookbooks + version + git location/ref
- for each app cookbook in the yml file, we:
1. Clone the cookbook ref to /tmp/<cb>
2. cd /tmp/<cb> && berks install --path <chefrepo>/cookbooks/<cb>-<version>
- this isolates the dependencies per application cookbook, i.e. you can have different versions of library cookbook x for app cookbooks a and b, even though both live in the same chef repo
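In script form this resolution step would look roughly like the following (a sketch only; the yml file name and layout are invented for illustration):

    #!/usr/bin/env ruby
    # Vendor each application cookbook with its own dependency set into
    # the chef repo, one directory per app cookbook + version.
    require 'yaml'
    require 'fileutils'

    # app_cookbooks.yml (hypothetical layout):
    #   my-app:
    #     version: 1.1.0
    #     git: git@github.example.com:ops/my-app.git
    #     ref: v1.1.0
    chef_repo = File.expand_path(File.dirname(__FILE__))

    YAML.load_file('app_cookbooks.yml').each do |name, spec|
      tmp = "/tmp/#{name}"
      FileUtils.rm_rf(tmp)
      system("git clone --branch #{spec['ref']} #{spec['git']} #{tmp}") or abort "clone failed for #{name}"
      Dir.chdir(tmp) do
        # resolves this app cookbook's deps in isolation and vendors them
        system("berks install --path #{chef_repo}/cookbooks/#{name}-#{spec['version']}") or abort "berks install failed for #{name}"
      end
    end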
This works quite well for us so far.

HTH, Torben
On 27.03.2013 23:03, "Greg Symons" wrote:

So, I've been struggling with getting Berkshelf to do exactly what I want it to do, to the extent that I'm starting to wonder if what I'm trying to do is the Wrong Thing(tm). It seems to work great for cookbook development, but where I'm running into trouble is when I try to assemble all of our disparate cookbook sources (community, github, internal git repos) into a chef repo and upload them to the server. I'm currently using Berkshelf 1.2.1.
The workflow I'd like to have is this:
1. Cookbooks have their own release cycles and are developed independently, tagging released versions as they progress. Their dependencies are resolved first from a central chef server, then from the community API. The cookbooks (hopefully) properly list their dependencies in metadata, and use pessimistic constraints where necessary.
2. For each platform (here defined nebulously as a set of applications and services that form a coherent whole), there is a chef repository that contains (mostly) only roles, data bags, and environments, and a Berksfile.
3. When the time comes to make a platform release, we update cookbooks to the latest versions as determined by Berkshelf. The platform Berksfile contains mostly top-level cookbooks (i.e. cookbooks that are directly mentioned in role runlists; ideally application and role cookbooks, but we're not there yet), sourced from the community site where possible, and from git with specific refs where it's not. Internally developed cookbooks are always given a version constraint in the Berksfile, external cookbooks only when necessary.
4. After resolving the dependencies, we generate a set of environment cookbook constraints that pin the test environment to the resolved versions (see the sketch after this list). We upload the cookbooks and then upload the new environment so the test platform can converge to the new configuration. The new configuration is given a "thorough" acceptance test. When the acceptance test passes, we approve the new configuration for deployment.
5. When the configuration is approved, we modify the production environment to have the same constraints as test, the production nodes happily converge to their new configurations, and celebrations are had by all.
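For concreteness, here's roughly what I mean in steps 3 and 4 (a sketch only; the cookbook names, versions, and sources are invented for illustration). The platform Berksfile would look something like:

    # Berksfile in the platform repo: top-level cookbooks only
    site :opscode

    cookbook 'nginx', '~> 1.1'            # community, constrained only where necessary
    cookbook 'our-webapp', '= 2.3.0',
      git: 'git@github.example.com:ops/our-webapp.git',
      ref: 'v2.3.0'                       # internal, always constrained

and the generated test environment something like:

    # environments/test.rb, generated from the resolved versions
    name 'test'
    description 'Pinned to the cookbook set resolved for this platform release'
    cookbook 'nginx',      '= 1.1.4'
    cookbook 'our-webapp', '= 2.3.0'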
The problem I'm running into is between steps 3 and 4. I'm having trouble getting repeatable results out of the cookbook resolution. I'm OK if things change between executions of `berks update`. The problem is that if I do a `berks update` on my machine, commit both the Berksfile and the Berksfile.lock, and then do a `berks install` on another machine, I don't necessarily get the same cookbooks on the second machine as I did on the first machine... if the cookbooks were unconstrained in the Berksfile, they may be updated. I see the same problem if I do a `berks upload` even on the same machine, since it resolves the dependencies before doing the upload. I've even tried specifying Berksfile.lock as the Berksfile (i.e. `berks upload --berksfile Berksfile.lock`) and I see the same result.
Does Berksfile.lock simply not work? Is the only way to get consistency to do a `berks install --path PATH` and commit the results? Is my idea for workflow completely wrong?
Thanks, anyone who can help,
Greg