- From: "Fouts, Chris" <>
- To: "" <>
- Subject: [chef] RE: question around chef/berkshelf workflow
- Date: Tue, 28 Oct 2014 14:14:11 +0000
- Accept-language: en-US
With Chef 12, you will only need one server since it has the built-in concept
of "Organizations" and cookbooks in each organization are segregated from
each other.
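(For illustration only: with Chef 12 organizations, segregation shows up as per-org URL paths on a single server. Host and org names below are placeholders, not from this thread.)

```ruby
# knife.rb fragment for the CI-managed "stable" organization:
chef_server_url "https://chef.example.com/organizations/stable"

# knife.rb fragment for the free-for-all "dev" organization:
chef_server_url "https://chef.example.com/organizations/dev"
```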
Chris
________________________________________
From: Tara Hernandez
Sent: Monday, October 27, 2014 9:07 PM
To:
Subject: [chef] question around chef/berkshelf workflow
Hi folks,
Right now we’re still largely using an old style workflow, with ":site
opscode" in the Berksfile and "path: ../blah" nonsense to refer to our local
cookbooks we store in a git repo. This is bad because then obviously you
can’t do things like chef version tagging in local dev, you have to have ALL
local cookbooks on your filesystem, and it’s just generally messy.
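For context, the old-style Berksfile described above might look roughly like this sketch (Berkshelf 2.x syntax; cookbook names and relative paths are illustrative placeholders):

```ruby
# Old-style Berksfile -- every local cookbook must be checked out on disk.
site :opscode                                 # resolve community deps from the old community site

cookbook 'our_base', path: '../our_base'      # local checkout, relative path
cookbook 'our_app',  path: '../our_app'       # ditto -- all local deps on the filesystem
cookbook 'nginx'                              # community cookbook
```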
So, instead I want to engage more actively with our internal chef servers.
To do this I want to be able to have a berkshelf api service running with the
following endpoints (and in this order):
1. “stable” internal chef server: source of truth for cookbooks, data bags,
and roles, written to only via the CI system (this is in place now)
2. The Chef marketplace (for any community cookbook dependencies)
3. A “dev” internal chef server, that anybody in the org can upload
whatever to as part of the dev/testing process.
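A berkshelf-api config wiring up those three endpoints in that order might look like the sketch below. The URLs, client name, and key path are placeholders; the `chef_server` and `opscode` endpoint types are from berkshelf-api's config format.

```json
{
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://stable-chef.example.com/",
        "client_name": "berks-api",
        "client_key": "/etc/berkshelf-api/berks-api.pem"
      }
    },
    {
      "type": "opscode",
      "options": {
        "url": "https://supermarket.chef.io/api/v1"
      }
    },
    {
      "type": "chef_server",
      "options": {
        "url": "https://dev-chef.example.com/",
        "client_name": "berks-api",
        "client_key": "/etc/berkshelf-api/berks-api.pem"
      }
    }
  ]
}
```

Note the ordering matters here: berkshelf-api resolves against endpoints in the order listed, which is what makes "stable" win over "dev" for a given cookbook version.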
However, this doesn’t work for us because we use different credentials
between our two internal chef servers and even though it seems like we’re
providing the berkshelf API service itself with credentials to use, it only
ever seems to use what the user has locally.
Am I totally insane for thinking this is a good idea? Is there a better
approach here? Looking for wisdom…
Thanks!
-Tara
PS - By the way, I’m trying to hire another body to help us work on this and
generally help build a cool deployment pipeline infrastructure if anybody’s
looking:
http://www.lithium.com/company/careers/job-listing?jvi=ol5HZfwQ,Job