[chef] Re: Re: Re: Re: Re: Re: Re: Re: Chef on Amazon EC2 with auto-scaling


  • From: Aaron Abramson < >
  • To: < >
  • Subject: [chef] Re: Re: Re: Re: Re: Re: Re: Re: Chef on Amazon EC2 with auto-scaling
  • Date: Sat, 23 Jul 2011 10:22:50 -0500

What about scp'ing the pem off a "static" instance's internal address that is locked down with a security group?  Only your instances can access it...  Any SSH key you use would be available in the user data, but it adds another layer of security.
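Roughly, as a user-data fragment (the internal address, user and key paths here are made-up placeholders):

    #!/bin/bash
    # Hypothetical user-data fragment: pull validation.pem from a "static"
    # internal host whose security group only allows SSH from our instances.
    set -e
    mkdir -p /etc/chef

    # A low-privilege key shipped in the user-data; it is still readable via
    # instance metadata, so this is an extra layer, not real secrecy.
    cat > /tmp/bootstrap_key <<'EOF'
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
    EOF
    chmod 600 /tmp/bootstrap_key

    scp -i /tmp/bootstrap_key -o StrictHostKeyChecking=no \
        chef-bootstrap@10.0.0.10:/srv/chef/validation.pem /etc/chef/validation.pem
    rm -f /tmp/bootstrap_key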



On Jul 23, 2011, at 3:24 AM, Avishai Ish-Shalom wrote:

You are almost correct. User data can be modified after the instance is up using ec2-modify-instance-attribute; however, this is cumbersome and requires the instance to be stopped first (and naturally only works with EBS-backed AMIs). Sigh.
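For reference, the stop/modify/start dance with the classic EC2 API tools looks roughly like this (the instance id is made up; double-check the user-data flag name against your version of the tools):

    ec2-stop-instances i-0123abcd
    # ...wait for the instance to reach the "stopped" state...
    ec2-modify-instance-attribute i-0123abcd --user-data-file new-user-data.txt
    ec2-start-instances i-0123abcd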

The secure, convenient option would be a signed S3 link set to expire in 15 minutes; this, however, forces you to generate user-data using templates (I use Erubis) and doesn't work with autoscaling.
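The instance side of that is just a fetch; the launch tooling renders a template along these lines and substitutes a pre-signed URL that expires shortly after boot (bucket name and query string below are placeholders):

    #!/bin/bash
    # Fetch the validator via a short-lived signed S3 URL baked into user-data.
    mkdir -p /etc/chef
    curl -sf -o /etc/chef/validation.pem \
      'https://my-bucket.s3.amazonaws.com/validation.pem?AWSAccessKeyId=...&Expires=...&Signature=...'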

In short, if you want a method that works with autoscaling and doesn't require bundling an AMI, you're screwed security-wise, unless someone on this list figures out how to do it, of course...  I've considered IP-bound one-time tokens but decided against implementing yet another security layer. I bake AMIs with validation.pem when I can and take my chances when I don't have the time.

Regards,
Avishai


On 23/07/11 03:38, Bryan Brandau wrote:

" type="cite">Avishai,

Passing in the validation.pem with the user-data becomes a security concern.  It will always be present as the instance runs.  Once a client gets its client.pem you should be removing the validation.pem.  You won't be able to do this when it's passed in.
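To make that concrete (paths assumed): removing the file is the easy part, but when the key came in via user-data any process on the box can keep re-reading it from the metadata service for the life of the instance:

    # Usual cleanup once registration has succeeded:
    [ -f /etc/chef/client.pem ] && rm -f /etc/chef/validation.pem

    # ...but the user-data copy is still served by the metadata endpoint:
    curl -s http://169.254.169.254/latest/user-data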

On Mon, Jul 18, 2011 at 8:25 AM, Avishai Ish-Shalom wrote:
I've been using Chef with autoscaling quite often over the last two years.
I've found the most versatile approach is to have a minimal user-data
script take care of bootstrapping Chef and let Chef do the rest. The
validation.pem is included in the user-data.
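A minimal sketch of such a user-data script (server URL, run list and the embedded key are placeholders; install Chef however you prefer):

    #!/bin/bash
    set -e

    # Install the Chef client (full-stack installer; a gem or package install
    # works just as well).
    curl -L https://www.opscode.com/chef/install.sh | bash

    mkdir -p /etc/chef

    cat > /etc/chef/validation.pem <<'EOF'
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
    EOF

    cat > /etc/chef/client.rb <<'EOF'
    chef_server_url        "http://chef.example.com:4000"
    validation_client_name "chef-validator"
    EOF

    echo '{ "run_list": [ "role[base]" ] }' > /etc/chef/first-boot.json

    # First run registers the node and converges it; Chef does the rest.
    chef-client -j /etc/chef/first-boot.json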

Another, somewhat unrelated piece is a cleanup daemon: it periodically
scans the list of EC2 instances and updates Chef nodes with the status of
the related instance (e.g. node[:ec2][:status] = "running", matched on
instance-id). This allows filtering search results for servers that are
dead/stopped/etc. The daemon also removes nodes and clients after they
have been dead for a while.
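A stripped-down sketch of the removal half of that daemon, run from cron with knife and the EC2 API tools (the status-attribute update would also go through knife exec or the REST API; the search and grace period are simplified here):

    #!/bin/bash
    # List every instance id EC2 still knows about (any state).
    ec2-describe-instances | awk '/^INSTANCE/ { print $2 }' | sort > /tmp/ec2-ids

    # Emit "node_name instance_id" for every Chef node registered from EC2.
    knife exec -E 'nodes.all { |n| puts "#{n.name} #{n[:ec2][:instance_id]}" if n[:ec2] }' |
    while read node_name instance_id; do
      if ! grep -q "^${instance_id}$" /tmp/ec2-ids; then
        # The real daemon would wait until the instance has been gone a while.
        knife node delete "$node_name" -y
        knife client delete "$node_name" -y
      fi
    done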

Pros of my method:

* no AMI maintenance, you can use any community AMI

* works very well with high rates of recycling nodes

* simple, easy to extend and modify cluster functionality

* cleanup doesn't depend on proper node shutdown


Cons (those I thought of, at least):

* servers take longer to get to "production ready" status

* the Chef server and recipes become major points of failure for autoscaling

* a little more load on the Chef server

Regards,
Avishai


On 18/07/11 15:46, Edward Sargisson wrote:

> Hi,
> Please forgive me for directing you to my own blog but here is my post
> on how I did it [1] (which Opscode kindly link to). This method
> (provided to me on this list) uses Ubuntu's cloud-init to bootstrap
> Chef onto the image and then gets Chef to do the rest.
>
> Re: OS upgrades. If you mean package upgrades then write a cookbook
> that does it. There is an apt cookbook for ubuntu that updates the
> package list but doesn't run the upgrade for you.
>
> If you want to actually upgrade the OS (i.e. Ubuntu Maverick to Natty)
> then Chef doesn't do this directly. In EC2 these images are pre-baked
> so, with Chef, instead of starting with the Maverick image you start
> with the Natty image. Chef will then install everything else you need
> and you just need to test to make sure it worked.
>
> [1] http://www.trailhunger.com/blog/technical/2011/05/28/keeping-an-amazon-elastic-compute-cloud-ec2-instance-up-with-chef-and-auto-scaling/
>
> On Mon, Jul 18, 2011 at 5:16 AM, Bryan McLellan wrote:
>> On Jul 18, 2011 6:32 AM, "Oliver Beattie" wrote:
>>> * As I originally mentioned, what is the procedure for managing these
>>> servers? Would I just be able to run commands via knife to all my servers?
>>> How does it keep track of nodes joining (or more importantly leaving) my
>>> "cluster"?
>> Knife uses the Chef server API to talk to the server. Since all nodes
>> register with the server (both a node object for the data and a client
>> object for authentication) knife node list produces a list of all nodes
>> registered with the server. Knife doesn't know about nodes itself. When you
>> use knife to create a new system, via ec2 server create or bootstrap, the
>> node still registers itself with the chef server, not knife.
>>
>>> * Another (somewhat unrelated question) I had is how does Chef manage OS
>>> upgrades? Does it manage them at all? For instance, how would I say "go run
>>> aptitude upgrade on all my production servers"?
>> knife ssh "name:*" "sudo aptitude upgrade -y"
>>
>> Or you can create a cookbook to do this if you trust upstream to produce
>> non-breaking changes.
>>
>> Chef itself doesn't manage OS upgrades, but it certainly can. Remember that
>> Chef is a tool designed to help you automate your systems. A hammer doesn't
>> pound nails alone.
>>
>> Bryan
>>


