Thanks a lot for your answer Tyler.
I'm going to take a look at the cookbook you pointed me to.
Regarding the deprecation of fog-aws, that makes me quite sad ;-)
Basically I'd like to see fog-ec2 kept alive in order to be able to use it against providers that have an EC2-compatible API, like QStack.
chef-provisioning-aws doesn't seem to support changing the EC2 endpoint, for example, so unfortunately it will be unusable for us.
Just my 2 cents... I hope you take this into consideration when deciding whether to deprecate fog-aws (which, as I mentioned, might be better named fog-ec2).
Kind regards,
Stefan Freyr.
From: Tyler Ball
Sent: Tuesday, April 14, 2015 6:51 PM
To: Stefán Freyr Stefánsson
Subject: Re: [chef] chef-provisioning best practices for supporting multiple configurations
Hey Stefan,
The setup recipe sets `with_driver` and `with_machine_options` from attributes. So using that single cluster-provision recipe you could provision machines in AWS or machines in Azure using only different attributes (which you could set in an
environment, or policy file, etc.).
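To make that concrete, here is a minimal plain-Ruby sketch of the idea; the attribute names (`provisioning`, `driver`, `machine_options`) are my own assumptions, not taken from the actual setup cookbook:

```ruby
# Hypothetical attribute layout: the driver string and machine options live in
# attribute data, so switching clouds is just a matter of switching attribute
# sets (per environment, policy file, etc.).
attributes = {
  'provisioning' => {
    'driver'          => 'aws::eu-west-1',
    'machine_options' => { bootstrap_options: { instance_type: 'm3.medium' } }
  }
}

driver          = attributes['provisioning']['driver']
machine_options = attributes['provisioning']['machine_options']

# Inside the setup recipe these values would feed:
#   with_driver driver
#   with_machine_options machine_options
```

Swap in `'azure'` (or any other driver string) and a matching machine_options hash in a different environment's attributes, and the same cluster recipe provisions somewhere else.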
I see something similar fulfilling your needs (as I understand them - I could still be misunderstanding). By `machine building ruby script` do you mean the script which contacts AWS and provisions the machine? Or do you mean a script that installs
Chef once the machine is provisioned?
In the first case, I think you should convert that script to a recipe which leverages chef-provisioning to provision machines. In the second case, I would need to understand the use case better.
You are right - we need some minimum viable product cookbooks and documentation which show the right way to use chef-provisioning.
Finally, yes, AWS via chef-provisioning-fog is going to be deprecated. The reason is that Fog doesn’t provide as many AWS resources as direct interaction with the AWS SDK does.
Unfortunately the best way to find information about chef-provisioning is to ask questions in the gitter or here on the mailing list. We’re moving so fast on this stuff that it is
hard to keep any documentation current. The examples folder in the cookbook contains examples that we run manually, so they should be the most-up-to-date examples.
-T
On Apr 14, 2015, at 10:51 AM, Stefán Freyr Stefánsson wrote:
Hi.
Thanks for the answer but there seems to be a little misunderstanding.
What I was mostly complaining about was that I want to separate the AWS specific things from the definition of what machines I want to build.
In other words, I want to be able to build a 7-machine cluster and, with relative ease, do that on more than one environment (say AWS, our private EC2-compliant cloud, Rackspace, etc).
In your example, you've fixed the driver to aws inside the "machine definition ruby script".
I'm going to play around with what you said about passing attributes into these scripts. If I do that, I can probably have a set of attributes for each driver type. But I'm not quite sure how to proceed,
since my machine-building ruby script isn't actually a recipe; it's just a ruby file that's not part of a cookbook. Maybe that's part of my problem: should this be an actual recipe?
At any rate, I think the time has come for some documentation about the proper way to use chef-provisioning. At least I haven't been able to find any.
But I'm curious: why should I use the aws_driver instead of fog? Is fog being deprecated? I haven't seen anything about that.
-Stefan Freyr.
From: Tyler Ball
Sent: Monday, April 13, 2015 4:25 PM
Subject: [chef] Re: chef-provisioning best practices for supporting multiple configurations
For creating different types of machines, you can directly set `machine_options` as an attribute on `machine`:
require 'chef/provisioning/aws_driver'

with_driver 'aws::eu-west-1'

machine_batch 'my cluster' do
  machine 'small' do
    machine_options bootstrap_options: {
      instance_type: 'm3.medium'
    }
  end
  machine 'large' do
    machine_options bootstrap_options: {
      instance_type: 'c4.large'
    }
  end
end
If you are provisioning in AWS, you should also be using chef-provisioning-aws instead of chef-provisioning-fog.
The hash you set on :bootstrap_options can come from an attribute, set at a role or environment level.
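For what it's worth, a minimal sketch of pulling that hash from attribute data; the attribute layout (`cluster`, `sizes`) is hypothetical:

```ruby
# Hypothetical attribute layout: per-size bootstrap options, e.g. set at the
# role or environment level, looked up when declaring each machine.
node_attrs = {
  'cluster' => {
    'sizes' => {
      'small' => { 'instance_type' => 'm3.medium' },
      'large' => { 'instance_type' => 'c4.large' }
    }
  }
}

# Build the machine_options value for a given size name.
def options_for(attrs, size)
  { bootstrap_options: {
      instance_type: attrs['cluster']['sizes'][size]['instance_type']
  } }
end
```

Then in the recipe, `machine_options options_for(node, 'small')` replaces the hard-coded hash, and a role or environment override changes the instance type without touching the recipe.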
Your provisioning run_list looks fine to me - what about running multiple recipes (a base recipe and different recipes for different machine sizes) makes you uncomfortable?
-T
On Apr 10, 2015, at 6:47 AM, Gabriel Rosendorf wrote:
I'm also starting to look into chef-provisioning, and specifically chef-provisioning-aws. One thing that I've sort of struggled with is what that workflow is going to look like. The direction that I'm heading is this:
- Each deployment (collection of vpc, subnets, instances, etc) will reside in its own chef repo, separate from where we store our normal cookbooks, environments, etc.
- Each region that we'll deploy to is represented as an environment, e.g. us-east-1.json.
- When we converge, we point chef-client to a cookbook and an environment, e.g. chef-client -z -o provisioning-cookbook -E us-west-2
This allows us to control attributes per region, subnet CIDRs for example. Also, since the outputs are stored as data bags, this method of using separate chef repos per deployment type keeps those separated in source control.
This is my first crack at chef-provisioning, so this may be a horrible idea for reasons that I haven't encountered yet, but for now it seems to make sense. I'd love to hear how other folks are doing it!
Best,
Gabriel
On Thu, Apr 9, 2015 at 10:39 PM Christine Draper wrote:
That's an interesting question. I've also been struggling with that separation.
Perhaps if the provisioning recipes were in a cookbook, then the machine_options to use for each node could be defined using attributes, and looked up and set on each machine. But I haven't tried it (so far I've only been running with recipe files rather than a run list).
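A rough sketch of that lookup-per-machine idea (the machine names and hash layout are invented for illustration, not from anyone's actual cookbook):

```ruby
# Map each machine name to its machine_options via attribute-style data, so
# the provisioning recipe itself stays cloud-agnostic.
machine_opts = {
  'web1' => { bootstrap_options: { instance_type: 't2.small' } },
  'db1'  => { bootstrap_options: { instance_type: 'm3.large' } }
}

# In a cookbook recipe this data would drive the machine resources:
#   machine_opts.each do |name, opts|
#     machine name do
#       machine_options opts
#     end
#   end
sizes = machine_opts.map { |name, opts| [name, opts[:bootstrap_options][:instance_type]] }.to_h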
Regards,
Christine