That makes sense, I suppose, but the problem is that there's nothing that can be used as a reasonable default for the file drop location. The FTP solution certainly works, but how do you implement it in a community cookbook?
From: Aniket Sharad
Date: Wednesday, October 16, 2013 12:10 AM
To: Bryan Taylor
Cc: Graham Christensen, Chef Dev
Subject: Re: [chef-dev] Re: Idiom for adding a node to a Cluster
I think it is not recommended practice to make the Chef server a file server and upload/download huge files as part of cookbooks.
I can think of two solutions:
1. Use an FTP server and procure files from it during each chef-client run (the remote_file resource can be used for moving the zips; a rough sketch follows below).
2. Use torrents to move the files - I don't know if there is a provider for that already.
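To make option 1 concrete, a minimal sketch of what the recipe could look like; the FTP host and both paths are placeholders, not a real convention:

  # Pull a snapshot from an FTP host during the chef-client run.
  # ftp.example.com and the paths are hypothetical.
  remote_file "#{Chef::Config[:file_cache_path]}/snapshot.tar.gz" do
    source "ftp://ftp.example.com/backups/snapshot.tar.gz"
    mode "0644"
    action :create
  end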
Regards,
Aniket
On 16 October 2013 10:10, Bryan Taylor wrote:
I like it. I hadn't thought of pushing the state data to some external location. That solves all the complexities around finding the master or dealing with down/hung nodes.
Can this process be made generic enough, though, to be fit for use in a community Opscode cookbook? Where can the "backup location" be defaulted to? It'd be nice if the Chef server could always be used as a file server.
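To put the question in cookbook terms, a sketch with a made-up attribute name:

  # attributes/default.rb -- hypothetical attribute a community cookbook
  # could expose; the open question is what to default it to.
  default['mycluster']['backup_location'] = nil

  # recipes/default.rb -- with no sensible default, all we can do is fail early.
  if node['mycluster']['backup_location'].nil?
    raise "Set node['mycluster']['backup_location']; there is no usable default"
  end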
From: Graham Christensen
Date: Tuesday, October 15, 2013 6:47 PM
To: Bryan Taylor
Cc: Chef Dev
Subject: Re: [chef-dev] Idiom for adding a node to a Cluster
I currently have a node attribute identifying the cluster it is part of. Every X hours a backup/snapshot is made, and a new node auto-detects and imports the latest snap from the cluster by name (foocluster-201310151830.tar.gz, for example), which includes all the info needed to join and catch up to the cluster.
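Roughly like this in recipe terms; the attribute name and download host are invented, and I've left out how the latest snapshot name gets discovered:

  # Sketch of the snapshot-join idea.
  cluster = node['myapp']['cluster']          # e.g. "foocluster"
  snap    = "#{cluster}-201310151830.tar.gz"  # latest snapshot name, discovered elsewhere
  local   = "#{Chef::Config[:file_cache_path]}/#{snap}"

  remote_file local do
    source "https://backups.example.com/#{cluster}/#{snap}"
    notifies :run, "execute[import-snapshot]", :immediately
  end

  execute "import-snapshot" do
    command "tar -xzf #{local} -C /var/lib/myapp"
    action :nothing
  end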
Hope that helps.
PS - this is really easy on AWS et al., a bit trickier on bare metal.
Graham
On Tuesday, October 15, 2013, Bryan Taylor wrote:
I'm wondering what the Chef idioms are for a certain problem that comes up a lot when expanding a cluster. Let's say I have some kind of persistence store and I want to enable replication, or add a new node with replication to an already running cluster. The replication communicates over some custom protocol, but for it to work I have to move stateful data, like db logs or whatever, from the master to the new node. The master is "the master right now", so it needs to be dynamically discovered and then accessed via rsync or scp, say, to pull the files down. I'm thinking for this I should just provision every cluster node with a fixed static public/private keypair.
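In recipe terms, something like the following is what I have in mind; the role name, paths, and key location are all made up:

  # Find "the master right now" via Chef search, then rsync its state
  # down using a pre-provisioned key.
  master = search(:node, "role:db_master AND chef_environment:#{node.chef_environment}").first

  if master
    execute "pull-state-from-master" do
      command "rsync -az -e 'ssh -i /etc/mycluster/id_rsa' " \
              "#{master['ipaddress']}:/var/lib/db/ /var/lib/db/"
      not_if { ::File.exist?("/var/lib/db/.seeded") }
    end
  end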