[chef] Re: Re: RE: Re: RE: Re: Re: Re: RE: Re: Chef11 HA


  • From: Mark Pimentel < >
  • To:
  • Subject: [chef] Re: Re: RE: Re: RE: Re: Re: Re: RE: Re: Chef11 HA
  • Date: Fri, 15 Feb 2013 13:27:21 -0500

Much appreciated and many thanks.  While we still face something of an uphill battle to acquire Private Chef ourselves, we do what we can by preaching its benefits in the org.  We are continually trying to prove its worth and hope that someday we will be in a position to acquire the Private Chef offering.  That said, we do contribute where we can: I have also authored a cookbook and will be attending ChefConf along with one of the workshops.

I must also say how helpful you guys have been to the community at large.

The support is excellent.  

Thank you very much.



On Fri, Feb 15, 2013 at 12:46 PM, Adam Jacob < > wrote:
The Private Chef documentation has some background on what we do architecturally. The configuration is different (we support HA topologies out of the box in Private Chef, and you need to assemble it yourself with Open Source Chef), but the architecture is one we've used to great success with many customers.
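
Roughly, the HA stanza in /etc/opscode/private-chef.rb looks like the sketch below; the hostnames and addresses are placeholders, and the exact keys should be double-checked against the current Private Chef docs rather than taken from this sketch.

# Rough sketch of an HA topology in /etc/opscode/private-chef.rb
# (placeholder hostnames/addresses; verify key names against the docs)
cat >> /etc/opscode/private-chef.rb <<'EOF'
topology "ha"

server "be1.example.com", :ipaddress => "10.0.0.10",
       :role => "backend", :bootstrap => true
server "be2.example.com", :ipaddress => "10.0.0.11",
       :role => "backend"
server "fe1.example.com", :ipaddress => "10.0.0.20",
       :role => "frontend"

backend_vip "backend.example.com", :ipaddress => "10.0.0.5"
EOF

private-chef-ctl reconfigure   # re-run on each node after editing the file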


Best,
Adam


How would you go about creating the HA pair?

Some docs/drafts/pointers?

 

From: Adam Jacob [mailto: ]
Sent: Thursday, February 14, 2013 20:13
To:
Subject: [chef] Re: RE: Re: Re: Re: RE: Re: Chef11 HA

 

Yes – have an HA pair (or at least HA Backends, with multiple API front-ends) in each failure domain. Make each failure domain highly available, and make the system partition tolerant by enforcing that no writes ever need to cross the boundary.
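
Concretely, "no writes ever cross the boundary" just means every node talks only to the Chef server in its own datacenter. A minimal sketch, with chef.dc1.example.com standing in for the local front-end VIP (the names are placeholders):

# Each node's client.rb points only at the local datacenter's API VIP,
# so chef-client runs never read or write across the WAN.
cat > /etc/chef/client.rb <<'EOF'
chef_server_url        "https://chef.dc1.example.com"
validation_client_name "chef-validator"
validation_key         "/etc/chef/validation.pem"
EOF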

 

Adam

 

 

Can you define “to treat each as an isolated failure domain, make them HA”?

 

 

From: Adam Jacob [mailto: ]
Sent: Thursday, February 14, 2013 20:05
To:
Subject: [chef] Re: Re: Re: RE: Re: Chef11 HA

 

We tend to recommend against this, as you are usually leaking both data and control across failure domains.

 

Think about it this way: when it fails, do you really want to add the increased latency? What about data replication when you are split-brained? How do you fail back to running in multiple datacenters? Is one primary and the other passive?

 

The alternative is to treat each as an isolated failure domain, make them HA, and solve the consistency problem at the data-delivery level. It works much, much better.
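
One sketch of "solving consistency at the data-delivery level": keep a single source-of-truth server, and push copies of its objects to each datacenter's server out of band with the knife download/upload commands that ship in Chef 11. The knife config file names below are placeholders for per-server credentials.

# Pull the canonical objects from the master server, then push identical
# copies to each datacenter's server; the servers never talk to each other.
knife download cookbooks roles environments data_bags -c .chef/knife-master.rb
knife upload   cookbooks roles environments data_bags -c .chef/knife-dc1.rb
knife upload   cookbooks roles environments data_bags -c .chef/knife-dc2.rb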

 

Best,

Adam

 

 

Say this scenario is configured across sites, with each Chef server serving a different data center.  Would the keys be the same for both servers?

 

This would be used in a scenario where we have a main deployment Chef server from which we would control all objects, with the complementary servers replicating cookbook data as well as user and node information.  The other servers would simply replicate their node information back.

 

On Thu, Feb 14, 2013 at 12:01 PM, Adam Jacob < > wrote:

Using DRBD for this is a good idea. If you share /var/opt/chef-server via
DRBD, you can use the normal mechanisms for starting/stopping the cluster,
and be certain you will have identical data.
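
For the assemble-it-yourself case, a manual failover of such a DRBD pair
could look roughly like the sketch below; the resource name "chef" and the
device are placeholders, and in practice Pacemaker/Keepalived or similar
would normally drive these steps rather than a human.

# On the node giving up the primary role (if it is still reachable):
chef-server-ctl stop
umount /var/opt/chef-server
drbdadm secondary chef

# On the node taking over:
drbdadm primary chef
mount /dev/drbd0 /var/opt/chef-server
chef-server-ctl start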

Private Chef supports this configuration out of the box, fwiw, but it's
equally possible with Open Source Chef.

Best,
Adam



On 2/13/13 8:45 PM, "Baruch Shpirer" < > wrote:

>Is there any draft of the HA procedure/setup?
>
>Also, if I configure PostgreSQL for master-master replication
>and use DRBD for the bookshelf folder,
>does that mean I get 2 identical servers in async mode?
>Will clients be using the same validation public key in both sites?
>
>Baruch
>
>-----Original Message-----
>From: Seth Falcon [mailto: ]
>Sent: Monday, February 11, 2013 17:35
>To: < >
>Subject: [chef] Re: Chef11 HA
>
>
>On Feb 11, 2013, at 1:29 PM, Vaidas Jablonskis wrote:
>
>> This might be slightly unrelated to this conversation, but I wonder
>>what is stored in the Postgres database?
>
>All of the Chef object data is stored in the db. You can explore the
>schema a bit like this:
>
> :~# su - opscode-pgsql
>$ bash
> :~$ which psql
>/opt/chef-server/embedded/bin/psql
> :~$ psql opscode_chef
>psql (9.2.1)
>Type "help" for help.
>
>opscode_chef=# \d
>                         List of relations
> Schema |             Name              |   Type   |     Owner
>--------+-------------------------------+----------+---------------
> public | checksums                     | table    | opscode-pgsql
> public | clients                       | table    | opscode-pgsql
> public | cookbook_version_checksums    | table    | opscode-pgsql
> public | cookbook_version_dependencies | view     | opscode-pgsql
> public | cookbook_versions             | table    | opscode-pgsql
> public | cookbook_versions_by_rank     | view     | opscode-pgsql
> public | cookbooks                     | table    | opscode-pgsql
> public | cookbooks_id_seq              | sequence | opscode-pgsql
> public | data_bag_items                | table    | opscode-pgsql
> public | data_bags                     | table    | opscode-pgsql
> public | environments                  | table    | opscode-pgsql
> public | joined_cookbook_version       | view     | opscode-pgsql
> public | nodes                         | table    | opscode-pgsql
> public | osc_users                     | table    | opscode-pgsql
> public | roles                         | table    | opscode-pgsql
> public | sandboxed_checksums           | table    | opscode-pgsql
> public | schema_info                   | table    | opscode-pgsql
>(17 rows)
>
>And find the script used to initialize the schema here:
>https://github.com/opscode/chef_db/blob/master/priv/pgsql_schema.sql
>
>> What happens when this database gets corrupted or data is lost, for
>>instance?
>
>Bad things happen. If the db data is lost or corrupted, so is your Chef
>Server.
>
>+ seth
>
>
>
>




 

--
Thanks,

Mark






