- From: Eric Heydrick
- Subject: [chef] Re: Chef Deployment System for Swift - a proposed design - feedback?
- Date: Thu, 28 Apr 2011 00:01:45 -0700
On Wed, Apr 27, 2011 at 4:45 PM, Judd Maltin wrote:

> Hi Folks, (sorta cross posted with )
>
> I've been hacking away at creating an automated deployment system for Swift
> using Chef. I'd like to drop a design idea on you folks (most of which I've
> already implemented) and get feedback from this esteemed group.
>
> My end goal is to have a "manifest" (apologies to Puppet) which will define
> an entire swift cluster, deploy it automatically, and allow edits to the
> ingredients to manage the cluster. In this case, a "manifest" is a
> combination of a chef databag describing the swift settings and a
> spiceweasel infrastructure.yaml file describing the OS configuration.
>
> Ingredients:
>
> - swift cookbook with base, proxy and storage recipes. Proxy nodes also
>   (provisionally) contain auth services; storage nodes handle object,
>   container and account services. (A sketch of the ring-management piece
>   of these recipes follows this list.)
> -- Base recipe handles common package install and OS user creation, and
>    sets up keys.
> -- Proxy recipe handles proxy nodes: network config, package install,
>    memcache config, proxy and auth package config, user creation, ring
>    management (including builder file backup), user management.
> -- Storage recipe handles storage nodes: network config, storage device
>    config, package install, ring management.
>
> - chef databag that describes a swift cluster (eg: mycluster_databag.json;
>   a rough cut of this also follows the list):
> -- proxy config settings
> -- memcached settings
> -- settings for all rings and devices
> -- basic user settings
> -- account management
>
> - chef "spiceweasel" file that auto-vivifies the infrastructure (eg:
>   mycluster_infra.yaml; also sketched below):
> -- uploads cookbooks
> -- uploads roles
> -- uploads the cluster's databag
> -- kicks off node provisioning by requesting the following from the
>    infrastructure API (ec2 or what have you):
> --- chef roles applied (role[swift:proxy] or role[swift:storage])
> --- server flavor
> --- storage device configs
> --- hostname
> --- proxy and storage network details
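>
> To make the ring management concrete, the recipe logic boils down to
> something like this (attribute and databag names are placeholders, not
> the final code):
>
>     # Rough sketch: build the three ring builder files from databag
>     # settings. swift-ring-builder only needs to run when the builder
>     # file doesn't exist yet, hence the creates guard.
>     swift = data_bag_item('swift', 'mycluster')
>
>     %w{object container account}.each do |ring|
>       r = swift['rings'][ring]
>       execute "create-#{ring}-builder" do
>         command "swift-ring-builder #{ring}.builder create " \
>                 "#{r['part_power']} #{r['replicas']} #{r['min_part_hours']}"
>         cwd "/etc/swift"
>         creates "/etc/swift/#{ring}.builder"
>       end
>     end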
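>
> And a first cut of the databag, trimmed way down (all values here are
> illustrative):
>
>     {
>       "id": "mycluster",
>       "proxy": { "bind_port": 8080, "workers": 8 },
>       "memcached": { "servers": ["10.1.1.10:11211", "10.1.1.11:11211"] },
>       "rings": {
>         "object":    { "part_power": 18, "replicas": 3, "min_part_hours": 1 },
>         "container": { "part_power": 18, "replicas": 3, "min_part_hours": 1 },
>         "account":   { "part_power": 18, "replicas": 3, "min_part_hours": 1 }
>       },
>       "devices": [
>         { "host": "storage1", "device": "sdb1", "zone": 1, "weight": 100.0 }
>       ],
>       "users": [
>         { "account": "system", "user": "admin", "password": "CHANGEME", "admin": true }
>       ]
>     }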
>
> By calling this spiceweasel file, the infrastructure can leap into
> existence.
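>
> The spiceweasel file itself would look roughly like this (knife options
> abbreviated, everything illustrative; I've spelled the roles with a dash
> here since colons aren't legal in chef role names):
>
>     cookbooks:
>     - swift:
>
>     roles:
>     - swift-proxy:
>     - swift-storage:
>
>     data bags:
>     - swift:
>       - mycluster
>
>     nodes:
>     - ec2 1:
>       - role[swift-proxy]
>       - -f m1.large
>     - ec2 3:
>       - role[swift-storage]
>       - -f m1.large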
>
> I'm more or less done with all this stuff - and I'd really appreciate
> conceptual feedback before I take out all the nonsense code I have in the
> files and publish.
>
> Many thanks! Happy spring, northern hemispherians!
> -judd
>
> --
> Judd Maltin
> T: 917-882-1270
> F: 501-694-7809
> A loving heart is never wrong.

I think that's a great idea and I've been working on a similar tool
for provisioning environments from scratch. I use it to provision prod
and test environments and eventually will hand it to my devs to spin
up their own environments, devops goodness and all.
My tool glues together cloud provisioning, chef, DNS, and some other
stuff like talking to load balancers and attaching EBS volumes, with
the end goal of spinning up a completely functioning environment with
one command. The input to the tool is a json spec file where you list
out the nodes to build along with attributes like the EC2 instance
size, chef run list, and DNS aliases. The tool takes the spec file and
generates a custom cloud-init firstboot script from an erb template,
which is passed to each instance via user-data when it's launched.
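
A spec file looks roughly like this (trimmed down, and the field names
here are illustrative rather than the tool's exact schema):

    {
      "environment": "staging",
      "nodes": [
        {
          "name": "web1",
          "instance_type": "m1.small",
          "run_list": ["role[base]", "role[webapp]"],
          "dns_aliases": ["web1.staging.example.com"],
          "lb_pool": "web"
        }
      ]
    }
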
When the instance starts it installs chef, applies the runlist, adds
itself to DNS using DDNS, deploys applications with chef, and adds the
node to the load balancer if it's supposed to be in a pool. Every node
gets a friendly CNAME so you don't have to remember the cloud-assigned
name.
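
The firstboot template boils down to something like this (heavily
abridged sketch; the real one also handles the chef client config,
DNS registration, and LB bits):

    #!/bin/bash
    # firstboot.sh.erb - rendered per node from the spec file and
    # passed to the instance as EC2 user-data.
    hostname <%= name %>
    # install chef (gem or package install, depending on distro)
    gem install chef --no-ri --no-rdoc
    # hand chef-client the node's run list from the spec
    cat > /etc/chef/first-boot.json <<EOF
    { "run_list": <%= run_list.to_json %> }
    EOF
    chef-client -j /etc/chef/first-boot.json
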
I add nodes to the LB via a little Sinatra-based webapp that talks to
the LB's API. Having nodes add themselves to the LB means one less
operation outside of the provisioning system. Our application settings
are kept in per-environment databags, and the tool can generate new
databags from a template and upload them to chef.
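
The webapp is tiny; the guts are basically this (the LB client is
stubbed out here since that part is vendor-specific):

    require 'sinatra'
    require 'json'

    # Stand-in for the vendor's API client; the real app calls the
    # load balancer's API here.
    class LBClient
      def add_member(pool, host, port)
        puts "adding #{host}:#{port} to pool #{pool}"
      end
    end

    LB = LBClient.new

    # Nodes POST themselves here from the firstboot script.
    post '/pools/:pool/members' do
      member = JSON.parse(request.body.read)
      LB.add_member(params[:pool], member['host'], member['port'])
      status 201
    end
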
I think whole-stack provisioning is the way to go, and tools like these
and CloudFormation make the process so much better than launching
individual instances and configuring them one by one. Ultimately I can
envision doing this with chef itself; after all, chef is a systems
integration framework.
-Eric