Brainstorming session ahead. You can try:

- forcing Ruby to GC more frequently by patching the chef-client in key places (a sketch of one approach follows below)
- running the chef-client with ulimit

- Jay Feldblum
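One way to read the first suggestion without forking Chef is a report handler that forces a full GC sweep at the end of every converge. A minimal sketch, assuming MRI and handler registration in /etc/chef/client.rb; the ForceGC class name is invented for illustration, not stock Chef:

    # /etc/chef/client.rb -- hypothetical sketch, not stock Chef behavior.
    # Registers a handler that asks MRI for a full GC pass after every
    # converge, so memory is reclaimed between daemonized runs.
    require 'chef/handler'

    class ForceGC < Chef::Handler
      def report
        GC.start # full mark-and-sweep; frees objects retained from the run
      end
    end

    report_handlers << ForceGC.new     # fires after successful runs
    exception_handlers << ForceGC.new  # fires after failed runs too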
On Thu, Dec 8, 2011 at 5:17 PM, Erik Hollensbe wrote:

You could also schedule chef-client runs only when they're necessary, e.g. by designing hooks for controlled deployments.
-Erik
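If deploys already go through a tool like Capistrano (an assumption here), such a hook can be a task that converges a node only when code actually ships. The chef:converge task below is a made-up name, not a stock Capistrano or Chef task:

    # Hypothetical Capistrano 2 hook: run chef-client as part of a deploy
    # instead of keeping a daemon resident on every node.
    after "deploy:restart", "chef:converge"

    namespace :chef do
      task :converge, :roles => :app do
        # --once cancels any interval/splay so the run exits when finished
        run "sudo chef-client --once"
      end
    end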
On Dec 8, 2011, at 1:46 PM, Alex Soto wrote:
> Maybe run the client via cron instead of as a daemon, so the memory is only used during client runs.
>
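A sketch of the cron approach, managed by Chef itself; the 30-minute interval, the paths, and the ulimit cap (tying in Jay's earlier suggestion) are all assumptions to tune locally:

    # Hypothetical recipe snippet: replace the resident daemon with a
    # cron entry so chef-client only consumes memory while it runs.
    cron "chef-client" do
      minute "*/30"
      # ulimit -v caps the process's virtual memory, in KB
      command "ulimit -v 524288; /usr/bin/chef-client --once " \
              "--logfile /var/log/chef/client.log"
    end

    # Stop and disable the daemonized client once cron takes over.
    service "chef-client" do
      action [:stop, :disable]
    end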
> On Dec 8, 2011, at 1:43 PM, Chris wrote:
>
>> My company is pretty late to the Chef party, only getting things started about 6 months ago (after a year of asking for it), but now that we have things up and running we've run into a bit of a problem. The client consumes a fairly large amount of memory, between 175 and 250 MB per server. This has caused a lot of concern from the Operations team, since that amount * N VMs can get quite expensive.
>>
>> I've been doing some research into this and noticed that the amount of resident memory can depend on how many recipes are loaded on a node, and Opscode docs seem to confirm this. Right now these cookbooks are loaded into a single base role and added to each node for ease of use. They're all OS-level recipes that manage host files, resolv.conf, etc. There are 20 in total. We also have application roles that can add another 3 or 4 recipes.
>> I've hacked around a bit on the Samba cookbook and removed all the code used to create users, which has lowered the memory footprint to a steady 192 MB, but I fear this won't be enough to convince my ops team to keep Chef. They want to dump it and go back to using shell and Perl scripts for everything.
>>
>> My question is, does anyone have any tips for reducing the memory usage? I'd like to be able to keep Chef around.
>>
>> Thanks!
>>
>>
>