[chef] Re: Re: Re: Re: Re: Bridging the gap between node and artifact


  • From: Danny Hadley
  • Subject: [chef] Re: Re: Re: Re: Re: Bridging the gap between node and artifact
  • Date: Mon, 21 Jul 2014 16:05:36 -0400

Yep yep, that build pipeline is what we’ve implemented too, but I meant this to be more a discussion about that last step - “passing in the ID”. Based on what you suggested, I still don’t think we have a solution that is robust and scales well with a growing team size and development/release pace - it seems like the only way to deploy a specific build version of an artifact is to override the attribute on the node during its bootstrapping. The question here is: “How can I get my recently built artifact from a specific commit onto a provisioned node without making any changes to our Chef code base?” Is the answer to that question the bootstrap command?
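For reference, the attribute-override approach under discussion amounts to handing the node a JSON attribute at bootstrap time. A minimal sketch, assuming a hypothetical `app/build_id` attribute path (the real cookbook would define its own):

```ruby
require 'json'

# Hypothetical attribute layout; a real cookbook would pick its own path.
def bootstrap_json_attributes(build_id)
  JSON.generate('app' => { 'build_id' => build_id })
end

# The build pipeline would pass this payload to knife at provision time, e.g.:
#   knife bootstrap 192.168.0.4 -j '{"app":{"build_id":"12af"}}'
puts bootstrap_json_attributes('12af')
```

The objection in this thread is exactly that this string has to be threaded through every bootstrap, rather than living in a mapping the node can query.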

Do you want to write, operate, debug, and maintain a mapping layer of nodes and artifacts in your artifact server? If that serves the wrong artifact to a node for some reason, how do you debug that (probably need tooling and APIs to query the mapping)? 

I’m still thinking about this at a conceptual level. How this feature gets implemented would be a challenge; my point is that there should be a way to manage these mappings without manipulating the bootstrap command. What form that takes is not really my main focus.

How long does it take to build your artifacts from a given commit, and are there tests you want to run locally before you build a VM?
How do new nodes get created (and how sophisticated are the users creating them)?
How do VMs get destroyed, and what other resources get cleaned up when you do that?  

These questions are more directed at the pipeline. Let’s assume the pipeline you described in your message is what we’ve implemented. (Actually, instead of provisioning new nodes in our development environment, we have a pool of nodes that can be restored to a checkpoint and then re-provisioned with Chef; this saves the time of creating a whole new node every time we need to get a machine to an engineer.)

If you need to pull down an artifact to a different machine for testing (or maybe the old VM just died) how do you do that? 

You would still be able to download artifacts via their build “identifier”: 

http://smart-artifact-server.com/artifacts/zz?build=12af

All I’m suggesting is that there is an additional way to request that artifact: via its node name.

http://smart-artifact-server.com/artifacts/zz?node=danny-workstation

Those two URLs would present you with the same artifact (in a pipeline that was building commit 12af for danny).
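The “same artifact, two lookups” behavior only requires the server to keep a node-to-build mapping and resolve both query styles through one function. A minimal sketch, with the mapping and names taken from the example URLs above (everything else illustrative):

```ruby
# Illustrative node-name -> build-id mapping the artifact server would keep.
NODE_TO_BUILD = { 'danny-workstation' => '12af' }.freeze

# Resolve either ?build=... or ?node=... to one canonical artifact URL;
# the node lookup is just an indirection through the mapping table.
def artifact_url(build: nil, node: nil)
  build ||= NODE_TO_BUILD.fetch(node)
  "http://smart-artifact-server.com/artifacts/zz?build=#{build}"
end
```

With that mapping in place, `artifact_url(build: '12af')` and `artifact_url(node: 'danny-workstation')` return the same URL, which is the whole point of the indirection.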

You should also think about how agnostic your cookbook code will be to different versions. What happens when a new version of your app needs different Chef code?

Are you asking what happens when my code needs to change the recipe that deploys it onto a machine? As long as the recipe is still downloading a file from an artifact server, that’s not really what I’m concerned with right now.

On July 16, 2014 at 5:09:57 AM, Thom May wrote:

We're playing around with exactly the pipeline Dan suggests, using chef-metal to automatically provision nodes as the last stage of a Jenkins job. The job excludes origin/master but builds every other branch, pushes it to the artifact server and then just kicks a chef run. Simples.
-T


On Tue, Jul 15, 2014 at 11:33 PM, Daniel DeLeo wrote:


On Tuesday, July 15, 2014 at 1:32 PM, Danny Hadley wrote:

> Do you think there is a calling for a better solution though? That -j param would be pretty beefy with the number of artifacts needed by an enterprise system (I worked for a company that deployed around 20 in-house-made artifacts to their application servers). I was thinking of a chef-server/artifactory plugin or sister application called something like “Tailor” that would allow the downloading of artifacts “tailored” to the node that requested them. It could even be fancy and actually build the artifacts at request time too; it could then be called TinkerTailor, pretty neat IMO. All it really needs to match is:
>
> node name -> commit hash
>
> in some sort of relational table, which can be done by massaging the artifactory/chef server APIs. Am I crazy or does that tool seem like it would be really useful? Devops guys could go:
>
> okay bob, I’m going to give you server “200app-bob (192.168.0.4)” for your feature branch, it’ll have your latest commit (12a4).
> Then they would log into the “tailor” web interface and just tie the strings “200app-bob” and 12a4 together, and with their recipe configured to download the file from
> “http://tailor.somecompany.com/zz?nn=#{node.node_name}”
> tailor would serve up the artifact that was built from commit 12a4! Personally, I think it would be way more exciting in a build system, but this was the simplest example I could whip up.
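The recipe side of that “tailor” idea could be as small as a single resource. A hypothetical cookbook fragment, not from the thread (paths and parameter names are illustrative; note that in Chef's DSL the node's name is exposed as `node.name`):

```ruby
# Hypothetical recipe fragment: every node asks "tailor" for its own artifact,
# so no build id ever appears in the Chef code base.
remote_file '/opt/app/zz.tar.gz' do
  source "http://tailor.somecompany.com/zz?nn=#{node.name}"
  action :create
end
```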

I think you’d do best to take a step back and think about how you want this to look operationally. Do you want to write, operate, debug, and maintain a mapping layer of nodes and artifacts in your artifact server? If that serves the wrong artifact to a node for some reason, how do you debug that (probably need tooling and APIs to query the mapping)? How long does it take to build your artifacts from a given commit, and are there tests you want to run locally before you build a VM? How do new nodes get created (and how sophisticated are the users creating them)? If you need to pull down an artifact to a different machine for testing (or maybe the old VM just died) how do you do that? How do VMs get destroyed, and what other resources get cleaned up when you do that? You should also think about how agnostic your cookbook code will be to different versions. What happens when a new version of your app needs different Chef code?

If you’re comfortable writing the code, then you can do whatever you want, and maybe this custom artifact server thing is really the best option, but it’s definitely more complicated than having your build process just pass in a build identifier via attribute to chef-client, and the chef-client code would be no more complicated. Personally, I would do a CI pipeline of:

  local build -> tests (this could be 2 stages for fast tests -> slow tests) -> store artifact -> provision server, passing in artifact ID
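That ordering can be sketched as plain data, with the artifact ID produced by the build stage threaded through to the provisioning stage (stage names, the artifact naming scheme, and the attribute path are placeholders):

```ruby
# Each stage gets a shared context hash and must succeed before the next runs.
STAGES = [
  ['local build',    ->(ctx) { ctx[:artifact_id] = "app-#{ctx[:commit]}" }],
  ['fast tests',     ->(ctx) { true }],  # placeholder for the real test run
  ['slow tests',     ->(ctx) { true }],
  ['store artifact', ->(ctx) { ctx[:stored] = ctx[:artifact_id] }],
  # Last stage: provision the server, passing the artifact ID in as an attribute.
  ['provision',      ->(ctx) { ctx[:bootstrap_json] = %({"app":{"build_id":"#{ctx[:artifact_id]}"}}) }],
].freeze

def run_pipeline(commit)
  ctx = { commit: commit }
  STAGES.each { |name, stage| stage.call(ctx) or raise "stage failed: #{name}" }
  ctx
end
```

The point of the sketch is that the build identifier is created once, early, and flows forward; no mapping service has to reconstruct it later.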

--
Daniel DeLeo




