- From: "John E. Vincent (lusis)" <
>
- To:
- Subject: [chef] Re: Re: Re: Concurrency node creation issue
- Date: Tue, 6 Sep 2011 23:55:52 -0400
That would be awesome ;)
And AJ brings up a pretty good point: the engine that stores the
information is fairly irrelevant. The trick is simply some logic in
the cookbook that holds up the run until some external source says
"Hey, this node over here says it's done." You could block by
continually polling redis, noah, thlayli, mysql or whatever until some
record is there.
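
A rough, untested sketch of that blocking logic, assuming the redis gem
is available on the node and a made-up "myapp/master" key that the
master writes once it has finished converging:

  ruby_block "wait_for_master" do
    block do
      require "redis"
      # Hypothetical coordination host; swap in noah/thlayli/mysql here.
      r = Redis.new(:host => "coordinator.example.com")
      # Poll until the master's record shows up; give up after ~5 minutes.
      60.times do
        break if r.get("myapp/master")
        Chef::Log.info("no master registered yet, sleeping 5s")
        sleep 5
      end
      raise "timed out waiting for a master to register" if r.get("myapp/master").nil?
    end
  end
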
On Tue, Sep 6, 2011 at 11:36 PM, AJ Christensen wrote:
> Yo,
>
> I've got some secret syrupy sizzauce coordination LWRP cookbook that
> can kick this master election scenario (and a few others); it has
> providers for Noah and an in-house (Cloudscaling [0]) tuple space
> solution similar to the Linda coordination language [1] called
> Thlayli, written by Zed Shaw (not yet FOSS, TBA).
>
> I've used it for hanging the distributed components of OpenStack
> together for the day job, but I'm considering making the Noah portions
> of it available for the cookbook contest / world domination.
>
> Is anyone else doing any work in this field, specifically distributed
> systems coordination? MPI? =)
>
> --AJ
>
> [0] http://cloudscaling.com/
> [1] http://en.wikipedia.org/wiki/Linda_(coordination_language)
>
> On 7 September 2011 15:23, Joshua Timberman wrote:
> > Hello,
> >
> > As Andrew points out, this is a race condition.
> >
> > What we typically do is the specificity route: the single system that
> > is to be the master is assigned a role. For example, the "database
> > master" for an application will have a role like "appname_database_master",
> > and any other nodes that are slaves get "appname_database_slave". Our
> > "database" cookbook and its companion "application" cookbook follow
> > this pattern and behave accordingly.
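> >
> > A rough, untested sketch of that pattern (the role, cookbook, and
> > attribute names here are hypothetical):
> >
> >   # Shared recipe; the operator assigns the master role up front,
> >   # so there is no election race at converge time.
> >   if node.role?("appname_database_master")
> >     node.set['appname']['db_role'] = "master"
> >   else
> >     node.set['appname']['db_role'] = "slave"
> >     masters = search(:node, "role:appname_database_master")
> >     node.set['appname']['db_master'] = masters.first.name unless masters.empty?
> >   end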
> >
> > However, I'd love to see a solution that utilizes Noah or ZooKeeper to
> > solve the problem more dynamically.
> >
> > On Sep 4, 2011, at 3:57 PM, Daniel Cukier wrote:
> >
> >> Hi,
> >>
> >> I have in my infrastructure a topology where there's one master node
> >> and many slave nodes. What I want to do is automatically detect
> >> whether a node is a master or a slave. The rule is:
> >> 1) If there is no master node yet, the next node becomes the master.
> >> 2) If there is already a master, the next node becomes a slave, and
> >> its master is the existing master.
> >>
> >> The problem occurs when I have an empty infrastructure (zero nodes)
> >> and I try to create the first 2 nodes simultaneously. When both nodes
> >> get provisioned, each checks that there is no master, and both are
> >> set to be master, which is a wrong configuration. This is a very
> >> common synchronization problem, but I don't know how to deal with it
> >> in the Chef environment.
> >>
> >> Here's the recipe to configure a node:
> >>
> >> @@master = node
> >>
> >> search(:node, 'role:myrole') do |n|
> >>   if n['myrole']['container_type'] == "master"
> >>     @@master = n
> >>     if n.name != node.name
> >>       node.set['myrole']['container_type'] = "slave"
> >>     end
> >>   end
> >> end
> >>
> >> if @@master == node
> >>   node.set['myrole']['container_type'] = "master"
> >> end
> >>
> >> template "#{node['myrole']['install_dir']}/#{ZIP_FILE.gsub('.zip', '')}/conf/topology.xml" do
> >>   source "topology.xml.erb"
> >>   owner "root"
> >>   group "root"
> >>   mode "0644"
> >>   variables({:master => @@master})
> >> end
> >>
> >> How can I avoid this problem?
> >>
> >> Thanks a lot
> >>
> >> Daniel Cukier
> >
> > --
> > Opscode, Inc
> > Joshua Timberman, Director of Training and Services
> > IRC, Skype, Twitter, Github: jtimberman