[chef] Re: Re: compile vs execution phase


  • From: Bill Warner
  • Subject: [chef] Re: Re: compile vs execution phase
  • Date: Sun, 5 Jan 2014 18:50:43 -0700

I was thinking of extending the directory resource with an LWRP that can take a wildcard.  I'm not sure it will do what I want, but if I could have an LWRP that creates directories based on a path like /data/*/cassandra/data, evaluated at execution time, it could create all the data dirs for me.
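
Roughly what I have in mind is sketched below (just a sketch: the cassandra_data_dirs name, its glob/suffix attributes, and the mountpoint check are all made up, and it assumes Chef 11's use_inline_resources).  It globs the mount points rather than the full /data/*/cassandra/data path, since a glob can only match directories that already exist:

# resources/data_dirs.rb -- hypothetical LWRP interface
actions :create
default_action :create

attribute :glob,   :kind_of => String, :name_attribute => true   # e.g. "/data/*"
attribute :suffix, :kind_of => String, :default => "cassandra/data"
attribute :owner,  :kind_of => String, :default => "cassandra"
attribute :group,  :kind_of => String, :default => "cassandra"
attribute :mode,   :default => 0750

# providers/data_dirs.rb -- hypothetical provider
use_inline_resources

action :create do
  # Dir.glob runs at converge time, so it sees whatever the disks cookbook mounted
  Dir.glob(new_resource.glob).each do |mount|
    next unless system("/bin/mountpoint -q #{mount}")
    directory ::File.join(mount, new_resource.suffix) do
      owner new_resource.owner
      group new_resource.group
      mode new_resource.mode
      recursive true
      action :create
    end
  end
end

# usage in the cassandra recipe
cassandra_data_dirs "/data/*"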

I think the idea of Attributes being the desired state of the node, then the execution phase getting it to that state is something I still need to figure out how to work with.

Is there something I'm missing about the Chef architecture?  One recipe that depends on another recipe executing before its attributes can even be inferred doesn't seem to me like an obscure use case.  Is a multi-pass convergence something I should expect to be necessary?

Thanks again


On Sun, Jan 5, 2014 at 1:46 PM, Lamont Granquist wrote:

I think this is worse than you think it is: all attributes files are parsed before any recipes are evaluated, so your Dir.foreach runs before the disks recipe has even been compiled, let alone converged.  Your trouble here goes beyond compile-vs-converge ordering issues.

You may want to move both your attribute-default-setting code and your directory construction into an LWRP.  That will, however, mean the attributes are only set correctly during the converge phase, after the LWRP has run.  If you have more code that expects to consume the node[:cassandra][:data_dirs] attribute at compile time, then you'll probably need to modify or wrap your cassandra cookbook so it does its work at compile time and then updates the attributes after it has done that work.
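
For example, a converge-time ruby_block is one way to do the "update the attributes after the work is done" part (a sketch only; the resource name and the mountpoint test are illustrations, not anything your cookbooks already have):

# wrapper recipe sketch -- runs after the disks cookbook in the run_list
ruby_block "discover cassandra data dirs" do
  block do
    mounts = Dir.glob("/data/*").select { |d| system("/bin/mountpoint -q #{d}") }
    node.default[:cassandra][:data_dirs] = mounts.map { |d| "#{d}/cassandra/data" }
  end
end

Anything that reads node[:cassandra][:data_dirs] after that still has to read it lazily, or from inside another converge-time resource, because the value won't exist at compile time.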


On 1/3/14 10:46 PM, Bill Warner wrote:
I've been working on learning how to write cookbooks using chef-solo.  I have created two cookbooks: one evaluates the physical array controllers, then configures, partitions, LUKS-encrypts, runs mkfs on, and mounts the drives to /data/1, /data/2, /data/3, etc.  This works really well on a new machine and puts the hardware in the state we're looking for, no matter how many drives there are or what their sizes are.

I also have a simple cassandra cookbook that installs an fpm-built version of cassandra, renders a template of cassandra.yaml, and creates the data directories depending on which drives are mounted to /data/1, /data/2, /data/3, etc.
It's done similar to this:

#attributes file
default[:cassandra][:data_dirs] = []
Dir.foreach("/data") do |drives|
  # skip the . and .. entries
  next if drives.match(/^\.\.?$/)
  if system("/bin/mountpoint /data/#{drives} >/dev/null")
    default[:cassandra][:data_dirs].push("/data/#{drives}/cassandra/data")
  end
end

#recipe
node[:cassandra][:data_dirs].each do |data_dir|
  directory data_dir do
    owner "cassandra"
    group "cassandra"
    mode 0750
    action :create
    recursive true
  end
end


Of course this gets generated at compile time, when nothing is in /data yet.  The disks cookbook then runs and populates /data/#; however, the cassandra cookbook won't work unless there is a way to regenerate the recipe after the disks cookbook has completed.  Running chef-solo a second time gets things into the proper state.
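
One workaround might be to push the whole loop into a ruby_block so the glob happens at converge time, after the disks cookbook has mounted everything (a sketch only; it trades the directory resource for plain FileUtils calls, and assumes a mountpoint check like the one in my attributes file):

# recipe -- everything inside the block runs at converge time
ruby_block "create cassandra data dirs" do
  block do
    require "fileutils"
    Dir.glob("/data/*").each do |drive|
      next unless system("/bin/mountpoint -q #{drive}")
      dir = "#{drive}/cassandra/data"
      FileUtils.mkdir_p(dir)
      FileUtils.chown("cassandra", "cassandra", dir)
      FileUtils.chmod(0750, dir)
    end
  end
end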

Is there a better way to create the directories?  I think I can use lazy attribute evaluation to make my template work, but the directory creation is giving me hell.
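
For the template I'm picturing something like this (a sketch; the paths and variable names are just examples, and it assumes a Chef version with lazy attribute evaluation):

template "/etc/cassandra/cassandra.yaml" do
  source "cassandra.yaml.erb"
  owner "cassandra"
  group "cassandra"
  mode 0644
  # lazy defers reading the attribute until converge time
  variables lazy { { :data_dirs => node[:cassandra][:data_dirs] } }
end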

Thanks for any help
--
Bill Warner




--
Bill Warner


