We tend to recommend against this, as you are usually leaking both data and control across failure domains.
Think about it this way: when it fails, do you really want the added latency on top? What happens to data replication when you are split-brained? How do you fail back to running in multiple datacenters? Is one primary and the other passive?
The alternative is to treat each datacenter as an isolated failure domain, make each one HA, and solve the consistency problem at the data-delivery layer. It works much, much better.
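To make that concrete, here is a minimal sketch of solving consistency at the delivery layer: push the same cookbooks to each datacenter's independent Chef server from one place. The per-datacenter knife config file names below are assumptions for illustration, not something from this thread.

```ruby
# push_cookbooks.rb -- a minimal sketch, assuming one knife config per
# datacenter, each pointing at that datacenter's independent Chef server.
# The config paths are hypothetical.
DATACENTER_CONFIGS = %w[.chef/knife-dc1.rb .chef/knife-dc2.rb]

DATACENTER_CONFIGS.each do |config|
  # Upload every cookbook in the repo to this datacenter's server.
  ok = system('knife', 'cookbook', 'upload', '--all', '--config', config)
  abort("cookbook upload failed for #{config}") unless ok
end
```

Each server stays an isolated failure domain; the only coupling between them is this upload step, which you can run from CI or by hand.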
Best,
Adam
From: Mark Pimentel
Date: Thursday, February 14, 2013 9:56 AM
Subject: [chef] Re: Re: RE: Re: Chef11 HA

Say this scenario is configured across sites, with each Chef server serving different data centers. Would the keys be the same for both servers?
This would be used in a scenario where we have a main deployment Chef server through which we would control all objects, with the complementary servers replicating cookbook data as well as user and node information. The other servers would simply replicate their node information back.
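For illustration only, the "replicate node information back" part could be approximated with a small script that exports node objects from a remote server and loads them into the main one. This is a rough sketch, not an established replication mechanism, and the knife config paths are hypothetical.

```ruby
# sync_nodes_back.rb -- rough sketch of copying node objects from a remote
# datacenter's Chef server to the main deployment server.
require 'open3'

remote_config = '.chef/knife-remote-dc.rb'   # hypothetical: a remote DC's server
main_config   = '.chef/knife-main.rb'        # hypothetical: the main server

# Ask the remote server for the names of the nodes it knows about.
names, status = Open3.capture2('knife', 'node', 'list', '--config', remote_config)
abort('could not list nodes on the remote server') unless status.success?

names.split.each do |name|
  # Export the node as JSON from the remote server...
  json, st = Open3.capture2('knife', 'node', 'show', name,
                            '--format', 'json', '--config', remote_config)
  next unless st.success?

  file = "#{name}.json"
  File.write(file, json)

  # ...and load it into the main server's object store.
  system('knife', 'node', 'from', 'file', file, '--config', main_config)
end
```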
On Thu, Feb 14, 2013 at 12:01 PM, Adam Jacob wrote:

Using DRBD for this is a good idea. If you share /var/opt/chef-server via …

Thanks,
Mark