[chef] Re: Re: Best practices for autoscaling nodes?


  • From: Morgan Blackthorne
  • Subject: [chef] Re: Re: Best practices for autoscaling nodes?
  • Date: Mon, 23 Jul 2012 13:27:34 -0700

I did some preliminary testing on this, because I found that my node cleanup script had pruned a real node. So I created this script:

#!/bin/sh

# Wipe any stale client identity so chef-client re-registers on first run.
rm -f /etc/chef/client.pem

# Rewrite client.rb from scratch.
cat - >/etc/chef/client.rb <<EOF
environment        "prod"
log_level          :info
log_location       '/var/log/chef/chef-client.log'
ssl_verify_mode    :verify_none
signing_ca_user    "chef"
chef_server_url    "<sanitized>"
file_cache_path    "/var/cache/chef"
file_backup_path   "/var/lib/chef/backup"
pid_file           "/var/run/chef/client.pid"
cache_options      ({ :path => "/var/cache/chef/checksums", :skip_expires => true})
validation_client_name 'chef-validator'
Mixlib::Log::Formatter.show_time = true
EOF

# Register (if needed) and converge with the bootstrap run list.
/usr/bin/chef-client -j /etc/chef/bootstrap.json

It bootstrapped the node correctly, but when I re-ran it, I got this error:

:~# /etc/rc.local
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    28  100    28    0     0   2758      0 --:--:-- --:--:-- --:--:--  3111
[Mon, 23 Jul 2012 20:13:06 +0000] INFO: *** Chef 10.12.0 ***
[Mon, 23 Jul 2012 20:13:06 +0000] INFO: Client key /etc/chef/client.pem is not present - registering
[Mon, 23 Jul 2012 20:13:06 +0000] INFO: HTTP Request Returned 409 Conflict: Client already exists
[Mon, 23 Jul 2012 20:13:06 +0000] INFO: HTTP Request Returned 403 Forbidden: You are not allowed to take this action.
[Mon, 23 Jul 2012 20:13:06 +0000] FATAL: Stacktrace dumped to /var/cache/chef/chef-stacktrace.out
[Mon, 23 Jul 2012 20:13:06 +0000] FATAL: Net::HTTPServerException: 403 "Forbidden"
:~#

Basically, I have a base build host that I make changes on, and I create the AMI from it. If I set this script to run from /etc/rc.local, then each time I rebuild the AMI the build host reboots and the script runs again on the build host itself, which fails because the node hasn't been deleted from the Chef server.
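For what it's worth, the 409/403 pair in the log above suggests the client object still exists server-side, so one workaround (run from a knife-configured workstation, not the node itself; the helper name and "build-host" are made-up examples) would be to delete both the node and client objects before re-registering:

```shell
# Hypothetical helper that forgets a host so its replacement can
# re-register cleanly. KNIFE is overridable purely so the logic can be
# exercised without a real Chef server.
chef_forget() {
    name="$1"
    "${KNIFE:-knife}" node delete "$name" -y
    "${KNIFE:-knife}" client delete "$name" -y
}

# e.g.: chef_forget build-host
```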

Is there an easy way to automate this to be idempotent, so that it can configure itself if it's a new node, and re-provision itself if it's an old node?
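One rough sketch of an idempotent variant (untested; paths and flags assumed, and the client.rb rewrite is elided for brevity): /etc/chef/client.pem only exists once a node has registered, so its presence can distinguish an old node from a new one.

```shell
#!/bin/sh
# Sketch only: decide at boot whether this host is a fresh instance or an
# already-registered node. Variables are parameterized so the decision
# logic can be exercised outside /etc.
CHEF_DIR="${CHEF_DIR:-/etc/chef}"
CHEF_CLIENT="${CHEF_CLIENT:-/usr/bin/chef-client}"

bootstrap_or_converge() {
    if [ ! -f "$CHEF_DIR/client.pem" ]; then
        # Fresh instance: register via the validation key and run the
        # bootstrap run list.
        "$CHEF_CLIENT" -j "$CHEF_DIR/bootstrap.json"
    else
        # Already-registered node (e.g. the build host rebooting):
        # converge normally instead of failing with the 409/403 above.
        "$CHEF_CLIENT"
    fi
}

# rc.local would simply call:
# bootstrap_or_converge
```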

~*~ StormeRider ~*~

"With hearts immortal, we stand before our lives
A soul against oblivion, forever asking why
This upside down symphony in a paradox called life
Our hearts immortal
What you give to love, will never die"


On Mon, Jul 23, 2012 at 12:23 PM, Morgan Blackthorne wrote:
Thanks, I just read that.

So basically, you set it up to bootstrap Chef each time using the userdata. When customizing an AMI at an ops level, wouldn't it make more sense to do something along the lines of:
  • install chef-client and get it configured to talk to Chef, with your validation key
  • disable chef-client from running at boot time
  • create a script that runs from rc.local to delete & reconfigure the client.rb file, and then start chef-client with the bootstrap.json file
  • build the AMI
I'm newish to this, but I'm thinking that the less you need to change when spinning up an instance, the better. And doing it this way, if you need to push user-data to the instance (which I do for our nightly Selenium testing), it doesn't interfere with that at all.
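For reference, the bootstrap.json in step 3 would just pin the first-run run list; something like the following, where the role name is a made-up example:

```json
{ "run_list": [ "role[cms]" ] }
```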

Thoughts? I want to make sure I'm not missing something before I go down this road...

~*~ StormeRider ~*~

"With hearts immortal, we stand before our lives
A soul against oblivion, forever asking why
This upside down symphony in a paradox called life
Our hearts immortal
What you give to love, will never die"



On Mon, Jul 23, 2012 at 6:00 AM, Edward Sargisson wrote:
I wrote a blog post on how to do this *way back*. I still use the technique today.

The short answer is that you use your OS' AWS bootstrap code to write your node_name to /etc/chef/client.rb based on the AWS instance id.
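A sketch of what that might look like (untested; everything except the standard EC2 metadata URL is an assumption, including the "cms-" naming convention):

```shell
#!/bin/sh
# Sketch: derive a stable node_name from the EC2 instance id at boot,
# before chef-client runs. EC2 serves instance metadata at
# http://169.254.169.254/latest/meta-data on every instance.

write_node_name() {
    metadata_url="$1"   # metadata base URL
    client_rb="$2"      # path to the client.rb to append to

    instance_id=$(curl -s "$metadata_url/instance-id")
    # One Chef node per instance id; the "cms-" prefix is a made-up
    # naming convention.
    echo "node_name \"cms-${instance_id}\"" >> "$client_rb"
}

# On a real instance, rc.local would call:
# write_node_name http://169.254.169.254/latest/meta-data /etc/chef/client.rb
```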

Cheers,
Edward


On Sun, Jul 22, 2012 at 8:21 PM, Morgan Blackthorne wrote:
I was wondering what the best practice is for dealing with nodes on AWS that are set up to autoscale. Right now I have a CMS cluster that autoscales and all of the nodes identify themselves to the Chef server as the same node (since the contents of /etc/chef are identical on all of the nodes).

I'm guessing that this isn't best practice, but I'm not sure how to set up the nodes so that they auto-register with the Chef server and request a given role when doing so (I'm still getting up to speed on this install, which my former co-sysadmin configured). Ideally I'd like to be able to use "knife ssh" against the various client nodes down the road, and if multiple nodes are appearing as one, that's not really going to have the desired effect.

Thanks,

Morgan.






Archive powered by MHonArc 2.6.16.
