[chef] Re: Re: Re: General questions about provision/kickstart!


  • From: "Eric G. Wolfe" < >
  • To:
  • Subject: [chef] Re: Re: Re: General questions about provision/kickstart!
  • Date: Thu, 22 Dec 2011 12:36:17 -0500

On 12/22/2011 7:17 AM, Sven Sternberger wrote:
Hello!

On Wed, 2011-12-21 at 14:06 -0500, Eric G. Wolfe wrote:
Can you tell us a bit more about what role your existing CM
infrastructure plays?  How are you provisioning systems with existing
"legacy" solutions in place?
1. Register a host with our self-made provisioning system
(http://www-it.desy.de/systems/services/wboom/). MAC, IP, and DHCP/PXE
template are also stored in an enterprise system (VitalQIP). The
data is stored in AFS.

2. Depending on group assignment, hardware type, and some flags, the
provisioning system creates kickstart files for all supported variants
of Scientific Linux and config files for pxelinux.
The kickstart files bring an adjusted partition schema and extra
packages. In the %post part we mount AFS and start our legacy CM
(http://www-it.desy.de/systems/services/salad/).
The pxelinux config sets the OS version to install.

3. Based on the data from the registration, we run our CM (shell
scripts) at fixed intervals.
These scripts get their parameters from AFS and handle extra packages,
updates, NFS, setting the root password, automount config, access
rights ...

What I'm still missing in all the Cobbler, Foreman, Puppet, Chef stuff
is a central place to register a host and store the metadata. It looks
like I have several places where a host has metadata.
So, for example, I give a set of workgroup servers from one department
the same partition scheme, and I want the same automount configuration
for all workgroup servers. The first setting is for Cobbler, the second
for Chef, but I have to configure it in both Cobbler and Chef?
One of the principles of using Chef is managing infrastructure as code. The point is to be able to restore your IT services from a source code repository and a data backup. I let Kickstart handle the JeOS (Just enough Operating System): installing baseline packages, setting the root password, partitioning and formatting volumes, configuring network services, and turning off unnecessary services. Every single RHEL 5 or RHEL 6 box spun up by my provisioning system looks exactly the same as a JeOS: a generic server with the bare minimum running. Kickstart is flexible enough to let you handle partitioning predictably. The only variance in my JeOS image is volume partitioning, depending on application requirements.
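
As a rough illustration, a stripped-down JeOS kickstart along those
lines might look like the fragment below. The mirror URL, password
hash, partition sizes, and the chef-client bootstrap in %post are all
placeholders, not a definitive config:

    # RHEL 6 style kickstart sketch -- every value here is an example
    install
    url --url=http://mirror.example.com/rhel6/os/x86_64
    rootpw --iscrypted $1$changeme$exampleHashOnly
    selinux --enforcing
    firewall --enabled
    services --disabled=cups,bluetooth

    # predictable partitioning; the only piece that varies per application
    clearpart --all --initlabel
    part /boot --fstype=ext4 --size=512
    part pv.01 --size=1 --grow
    volgroup vg0 pv.01
    logvol / --vgname=vg0 --name=lv_root --fstype=ext4 --size=8192
    logvol swap --vgname=vg0 --name=lv_swap --size=2048

    %packages --nobase
    @core
    %end

    %post
    # hand everything past JeOS to Chef: install the client, drop off
    # client.rb and validation.pem, then run the first convergence
    rpm -Uvh http://mirror.example.com/chef/chef-client.rpm
    mkdir -p /etc/chef
    # (copy client.rb and validation.pem into /etc/chef here)
    chef-client
    %end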

Everything else after the initial deployment gets managed by Chef. You start breaking up all of your services into manageable pieces. You might grab a community cookbook for NFS (http://ckbk.it/nfs), make a few changes as needed, and abstract your company-specific hostnames and exports into a role. Autofs is a service in its own right, so you cook up a recipe for that and then abstract the company-specific bits into another role. Access rights could probably be managed with the sudo (http://ckbk.it/sudo) and users (http://ckbk.it/users) cookbooks: you would create a data bag for your users, drop off JSON objects for each user, and then assign sudo access by a role. If you have specific file (http://wiki.opscode.com/display/chef/Resources#Resources-File) or directory (http://wiki.opscode.com/display/chef/Resources#Resources-Directory) permission requirements, those might fit best into an application-specific cookbook. The same goes for package requirements (http://wiki.opscode.com/display/chef/Resources#Resources-Package); these usually fit best in an application-specific cookbook. For example, it makes sense for an Apache cookbook (http://ckbk.it/apache2) to install the httpd or apache2 package and any dependent pieces for that application.
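
To sketch the shape of that (the recipe names and attribute keys below
are invented for illustration; the real ones come from each cookbook's
README), a company-specific role might look like:

    # roles/workgroup.rb -- hypothetical role; names are examples only
    name "workgroup"
    description "Company-specific NFS mounts, automounter, and access rights"
    run_list(
      "recipe[nfs]",
      "recipe[autofs]",
      "recipe[users::sysadmins]",
      "recipe[sudo]"
    )
    default_attributes(
      "company" => {
        "automount_home" => "nfs01.example.com:/export/home"
      }
    )

and a users data bag item (say, data_bags/users/svens.json; the field
names follow my reading of the users cookbook's schema, so double-check
its README), loaded with "knife data bag from file users svens.json":

    {
      "id": "svens",
      "uid": 2001,
      "shell": "/bin/bash",
      "comment": "Sven Sternberger",
      "groups": ["sysadmin"]
    }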

If you have common requirements across all your machines, you might develop a baseline role (https://gist.github.com/1295668) which describes what every server should look like in your environment, regardless of its purpose. By stacking up these roles in a run_list, you will likely end up storing less host-specific metadata and more generalized role-specific metadata. If you have edge cases which don't conform to the baseline, you can override portions of the baseline in a secondary role which is then added to your node's run_list. I would recommend provisioning a generic, "just enough" environment with Cobbler, or your existing provisioning system, and handing off the heavyweight configuration to something like Chef. It's not flexible to have hundreds of post-install shell scripts run in your provisioning phase; I've been there, and that gets unmanageable and error-prone. It is very flexible to have modularized roles which can be chosen for any given application you wish to deploy in the post-install/configuration phase.
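
For reference, a baseline role along the lines of that gist might look
something like this; the cookbook picks here are examples, not a
prescription:

    # roles/baseline.rb -- illustrative; substitute your own cookbooks
    name "baseline"
    description "What every server in the environment looks like"
    run_list(
      "recipe[ntp]",
      "recipe[openssh]",
      "recipe[postfix]",
      "recipe[sudo]",
      "recipe[users::sysadmins]"
    )

A node then stacks roles in its run_list, e.g. (hostname and the
workgroup role from the earlier sketch are hypothetical):

    knife node run_list add wgs01.example.com 'role[baseline]'
    knife node run_list add wgs01.example.com 'role[workgroup]'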
At this point we will have to code the glue between something like
Foreman and Chef (and it looks like the integration with Puppet is
already there for free),
or
we will configure Chef with the metadata from our legacy system.

regards!

sven




