- From: Brad Knowles <>
- To:
- Cc: Brad Knowles <>
- Subject: [chef] Re: Re: Re: Installing large numbers of packages
- Date: Wed, 31 Aug 2011 19:34:05 -0500
On Aug 31, 2011, at 7:08 PM, Matt Palmer wrote:
> Well, chef-solo doesn't do databags, and chef's server looks like such
> a hairball I'm going to avoid it for as long as I possibly can.
I can't speak for chef-solo, but I did do a chef repo/chef-client install for
Hosted Chef, and I can tell you that with the omnibus installer, that process
was about as painless as any install I've ever done. The only remaining issue
I have with it is outlined in ticket CHEF-2578.
So far as I know, the omnibus installer is intended for use with all types
of Chef installations, whether that be chef-solo, chef-client, chef-server,
Hosted Chef, etc.
> There's also resilience issues, a strong aversion to centralisation,
> and too many painful memories of Puppet scaling nightmares to get
> over.
Well, all CM systems are about centralization, regularization,
categorization, and management of information, so I don't think you're going
to get away from that. In the case of Chef, what you're doing is trying to
get all this information about your internal systems & network infrastructure
recorded into a reliable and version-controlled CM system.
> Even without that, though, I'm having trouble working out how it's
> better to have a list of packages in one place, and a resource
> specification that installs those packages somewhere else. I can
> almost convince myself that putting attributes in an external JSON
> file makes sense for roles (although I think it's codifying the same
> mistake that practically everyone makes using Puppet, where you define
> a pile of global variables and cross your fingers that everything
> works, rather than having locally-passed parameters that define how
> you want to use something Here and Now), but making a list somewhere
> external just so I can avoid having to walk an array is insane. My
> recipe says "this is how you configure a workstation", and the list of
> packages you have to install in order to do that should be in that
> recipe.
I think it comes down to a separation of "code" from "data". You should put
your "code" into a code repository, but when the data that the code operates
on needs to change, you shouldn't necessarily have to change the code just to
accommodate the change in the data.
The installation script should be simple and easy to read, regardless of how
many packages are being installed -- that information should come from the
database. And when all that is changing is the data, it should work as you
want with the existing code that is already in place on all your nodes.
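To make that concrete, here's a minimal sketch of the split in Chef's own
recipe DSL. The attribute name ('workstation'/'packages') and the package
list are illustrative assumptions on my part, not something from this thread:

```ruby
# attributes/default.rb -- the "data": a plain list that can be edited,
# or overridden per role/environment/node, without touching any code.
default['workstation']['packages'] = %w(vim git tmux)

# recipes/default.rb -- the "code": reads whatever list the data
# provides and stays the same no matter how many packages it names.
node['workstation']['packages'].each do |pkg|
  package pkg
end
```

With that in place, adding or dropping a package is a one-line data change;
the recipe on every node keeps working unmodified.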
When you're talking about a small number of nodes to be managed, I'm not sure
that this makes such a difference. But as you try to scale up, it's going to
become more and more important that you keep this separation between code &
data.
So, do you want to learn the right way from Day One, or do you want to learn
a single-file method that you will have to unlearn as you try to scale up?
Maybe your problem with CM systems isn't with the systems themselves, but
with the way you're trying to use them -- or maybe misuse them?
Anyway, that's just food for thought from the peanut gallery. I've been down
the scaling road before, but not with Chef. It's going to be interesting to
see how this works out.
--
Brad Knowles <>