Thank you for the elaborate answer. As you said, it’s a battle at this stage to prove this concept of automation, but we’re getting there. Last year we had one person at ChefConf; for ChefConf 2013 there were already three, and I believe use within our company will grow once people see the obvious potential in other projects and fields. I think I’ll give HA a go; DRBD and PostgreSQL are things I have done before, I just need to figure out solr. That’s basically what I thought regarding the only components I need to replicate.

Baruch

From: Jesse Campbell
In the past, the official answer has been that the paid Private Chef offering comes with HA out of the box. To build HA yourself, you’ll need to take a look inside the current installer. There are multiple back-end store components (solr, bookshelf, postgres) which all need replication or clustering. Then there are middle-tier services, such as the chef expander (or whatever it is called now) and the message queue (it used to be rabbitmq), plus the Chef server API itself, all of which need to be deployed in multiple places hitting those replicated back ends (the MQ might want to be treated like a back-end component). Then you’ll want load balancing between the server API endpoints, and you’ll want the webui, knife, and chef-client all pointing at the load balancer. For multiple datacenters, you’ll want some kind of reliable replication for the back-end components (solr, bookshelf, postgres), and separate copies of the front and middle tiers in each DC pointing to the replicated back end. It isn’t an easy problem to solve, which is why Opscode is hoping you’ll pay for it :)

-Jesse

On Feb 15, 2013 9:16 AM, "Baruch Shpirer" wrote:
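The load-balancing layer Jesse describes could be sketched with something like HAProxy in front of two API front ends. A minimal sketch only; the hostnames, IPs, and ports are illustrative assumptions, not details from this thread:

```
# Hypothetical HAProxy fragment: TCP pass-through to two Chef server
# API front ends, each already configured against the replicated
# back-end stores (solr, bookshelf, postgres).
frontend chef_api
    bind *:443
    mode tcp
    default_backend chef_api_servers

backend chef_api_servers
    mode tcp
    balance roundrobin
    server fe1 10.0.0.11:443 check
    server fe2 10.0.0.12:443 check
```

knife, chef-client, and the webui would then all be pointed at this balancer’s address rather than at any individual front end.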
" target="_blank">
> wrote: How would you go about creating the HA pair? Some docs/drafts/pointers? From: Adam
Jacob [mailto:
" target="_blank">
]
Yes – have an HA pair (or at least HA back ends, with multiple API front ends) in each failure domain. Make each failure domain highly available, and make the system partition tolerant by enforcing that no writes ever need to cross the boundary.

Adam

From:
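One way to enforce “no writes ever cross the boundary” is purely in client configuration: every node in a datacenter talks only to its local HA deployment. A hypothetical `/etc/chef/client.rb` for a node in DC1 (the URL and paths are assumptions for illustration):

```ruby
# /etc/chef/client.rb on a node in datacenter 1 (illustrative values).
# chef_server_url points at the *local* failure domain's API endpoint,
# e.g. the load balancer in front of that DC's HA pair, never at a
# Chef server in another datacenter.
chef_server_url        "https://chef-dc1.example.com"
validation_client_name "chef-validator"
validation_key         "/etc/chef/validation.pem"
```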
Baruch Shpirer

Can you define “to treat each as an isolated failure domain, make them HA”?

From: Adam Jacob
We tend to recommend against this, as you are usually leaking both data and control across failure domains. Think about it this way: when it fails, do you really want to add the increased latency? What about data replication when you are split-brained? How do you fail back to being in multiple datacenters? Is one primary and the other passive? The alternative is to treat each as an isolated failure domain, make each one HA, and solve the consistency problem at the delivery-of-data level. It works much, much better.

Best,
Adam

From: Mark Pimentel
Say this scenario is configured across sites, with each Chef server serving a different data center. Would the keys be the same for both servers? This would be used in a scenario where we have a main deployment Chef server through which we control all objects, with the complementary servers replicating cookbook data as well as user and node information. The other servers would simply replicate their node information back.

On Thu, Feb 14, 2013 at 12:01 PM, Adam Jacob wrote:

Using DRBD for this is a good idea. If you share /var/opt/chef-server via
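The truncated message above is about mirroring /var/opt/chef-server between two back ends with DRBD. A minimal resource definition might look like the following sketch; the hostnames, disks, and addresses are assumptions, not values from the thread:

```
# /etc/drbd.d/chef.res (illustrative): synchronously mirror the block
# device backing /var/opt/chef-server between two back-end nodes.
resource chef {
  protocol C;                 # synchronous replication
  on chef-be1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.21:7788;
    meta-disk internal;
  }
  on chef-be2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.22:7788;
    meta-disk internal;
  }
}
```

The primary node mounts /dev/drbd0 at /var/opt/chef-server; on failover, the surviving node is promoted with `drbdadm primary` and mounts the same device.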
Archive powered by MHonArc 2.6.16.