[chef] Re: question around chef/berkshelf workflow


  • From: Damien Roche < >
  • To:
  • Subject: [chef] Re: question around chef/berkshelf workflow
  • Date: Fri, 31 Oct 2014 20:53:07 +0000

Ohai Tara

That should work fine; all your Berkshelf clients just need to use credentials that are valid on both your internal endpoints.
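Something along these lines is what I mean (hostname, node name and key paths are just examples). The Berksfile on each developer machine sources everything through your internal Berkshelf API service:

    # Berksfile -- all cookbook resolution goes through the internal
    # Berkshelf API service (hostname is made up)
    source "https://berks-api.internal.example.com"

And in each developer's ~/.berkshelf/config.json, the client key has to be accepted by both internal Chef servers, because as far as I can tell Berkshelf downloads cookbooks from chef_server-type endpoints with the client's own credentials rather than the ones you gave the API service:

    {
      "chef": {
        "node_name": "tara",
        "client_key": "/home/tara/.chef/tara.pem"
      }
    }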

On 28 Oct 2014 01:08, "Tara Hernandez" < > wrote:
Hi folks,

Right now we’re still largely using an old-style workflow, with "site :opscode" in the Berksfile and "path: ../blah" nonsense to refer to the local cookbooks we store in a git repo.  This is bad because you obviously can’t do things like chef version tagging in local dev, you have to have ALL the local cookbooks on your filesystem, and it’s just generally messy.
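For reference, the current Berksfile looks roughly like this (cookbook names are made up, but the shape is what I described above):

    # Old-style Berksfile: community cookbooks come from the public site,
    # internal cookbooks are referenced by relative path into our git checkout
    site :opscode

    cookbook 'apache2'                     # community dependency
    cookbook 'base',   path: '../base'     # local cookbook, no version pinning
    cookbook 'webapp', path: '../webapp'   # has to exist on the local filesystem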

So, instead I want to engage more actively with our internal chef servers.  To do this I want to have a berkshelf API service running with the following endpoints, in this order (rough config sketch after the list):
  1. A “stable” internal chef server, the source of truth for cookbooks, data bags, and roles; only written to via the CI system (this is in place now)
  2. The Chef Supermarket (for any community cookbook dependencies)
  3. A “dev” internal chef server that anybody in the org can upload whatever they like to as part of the dev/testing process.
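If I’m reading the berkshelf-api docs right, the config would look something like this (hostnames and key paths are placeholders, and I may have some option names slightly off):

    {
      "endpoints": [
        {
          "type": "chef_server",
          "options": {
            "url": "https://chef-stable.internal.example.com",
            "client_name": "berkshelf-api",
            "client_key": "/etc/berkshelf-api/stable-client.pem"
          }
        },
        {
          "type": "opscode",
          "options": {
            "url": "https://supermarket.getchef.com/api/v1"
          }
        },
        {
          "type": "chef_server",
          "options": {
            "url": "https://chef-dev.internal.example.com",
            "client_name": "berkshelf-api",
            "client_key": "/etc/berkshelf-api/dev-client.pem"
          }
        }
      ]
    }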
However, this doesn’t work for us because we use different credentials for our two internal chef servers, and even though we seem to be providing the berkshelf API service itself with credentials to use, it only ever seems to use what the user has locally.

Am I totally insane for thinking this is a good idea?  Is there a better approach here?  Looking for wisdom…

Thanks!
-Tara

PS - By the way, I’m trying to hire another body to help us work on this and generally help build a cool deployment pipeline infrastructure if anybody’s looking:





