[chef] Re: Re: Re: Re: compile vs execution phase


Chronological Thread 
  • From: Bill Warner
  • To:
  • Subject: [chef] Re: Re: Re: Re: compile vs execution phase
  • Date: Sun, 5 Jan 2014 22:43:28 -0700

I was trying to keep the disk setup somewhat generic, since we have other clusters that will use the same base and then set up Hadoop, Solr, and other data directories for other environments/clusters.  If I merge the disk setup into the Cassandra cookbook, I'd have to do the same for all the others, and that seems like a lot of code duplication.

I'll have to chew on this for a bit and see if I understand where you're coming from.


On Sun, Jan 5, 2014 at 10:05 PM, Lamont Granquist wrote:
On 1/5/14 6:50 PM, Bill Warner wrote:
I was thinking of extending the directory resource with an LWRP that can take a wildcard.  I'm not sure it will do what I want, but if I had an LWRP that could create directories from a path like /data/*/cassandra/data, evaluating the glob at execution time and creating all the data dirs, it might help.

I think the idea of Attributes being the desired state of the node, then the execution phase getting it to that state is something I still need to figure out how to work with.

Is there something I'm missing in the Chef architecture?  One recipe that depends on another recipe executing before its attributes can even be inferred doesn't seem to me like an obscure use case.  Is multi-pass convergence something I should expect as necessary?
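The wildcard idea quoted above could be sketched in plain Ruby (names and paths here are hypothetical, not from any existing cookbook).  In a real Chef run this logic would live inside a ruby_block or LWRP provider action so the glob is evaluated at execution time, after the disk-setup resources have mounted the drives:

```ruby
require 'fileutils'
require 'tmpdir'

# Converge-time logic such an LWRP might run: expand the wildcard at
# call time, then create the subpath under each matching base directory.
def create_data_dirs(pattern, subpath)
  Dir.glob(pattern).sort.map do |base|
    dir = File.join(base, subpath)
    FileUtils.mkdir_p(dir)
    dir
  end
end

# Simulate two mounted data disks under a temporary root.
root = Dir.mktmpdir
%w[disk1 disk2].each { |d| FileUtils.mkdir_p(File.join(root, 'data', d)) }

created = create_data_dirs(File.join(root, 'data', '*'), 'cassandra/data')
created.each { |d| puts d }
```

Because the glob runs when the block executes rather than when the recipe is compiled, it sees whatever directories earlier resources created in the same run.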
Attributes should really /be/ the desired state of the node, but you're making those attributes dynamic and dependent on prior resources having executed.  I'm not quite sure why you need this layer of indirection; it seems like it'd be better to have the same inputs that feed the Cassandra cookbook and set up the drives in the /data subdirectory also drive creating the subdirectories.
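One way to read that suggestion: make a single attribute list the source of truth for both the disk setup and the per-service directories, so nothing has to be discovered at converge time.  A minimal plain-Ruby sketch, with hypothetical attribute and path names (in a real cookbook the list would come from something like node attributes rather than a constant):

```ruby
# Hypothetical shared attribute: one list of disks feeds both the
# disk-setup recipe and each service cookbook, so neither side has to
# glob the filesystem to find out what the other did.
DATA_DISKS = %w[sdb sdc sdd].freeze

# Disk-setup side: where each drive gets mounted.
def mount_points(disks)
  disks.map { |d| "/data/#{d}" }
end

# Service side: the same attribute derives the Cassandra data dirs,
# known at compile time, with no dependence on prior resource execution.
def cassandra_data_dirs(disks)
  mount_points(disks).map { |mp| "#{mp}/cassandra/data" }
end

puts cassandra_data_dirs(DATA_DISKS)
```

The design point is that both cookbooks consume the same declared input, which keeps the attributes a static description of desired state instead of something computed mid-run.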





--
Bill Warner



Archive powered by MHonArc 2.6.16.
