We have a product made up of 12-25 nodes running a mix of RHEL and Windows OSes. Each node's identity is dictated by the set of *.msi and *.rpm packages we install onto it. We can have several deployments of this product throughout our labs, say 5 in the dev lab, 9 in the QA lab, 4 in the Perf lab, etc. So if at one time we have 20 deployed products, that makes 240-500 nodes we may be configuring at any given time.
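For context, here is roughly what I mean by a node's identity being dictated by its package set. This is only a minimal recipe sketch; the package names and the MSI path are made up for illustration, not our real ones.

    # Install the package set that defines this node, picking RPMs on RHEL
    # and MSIs on Windows. Names and paths below are placeholders.
    if platform_family?('rhel')
      %w(acme-core acme-agent).each do |pkg|
        package pkg
      end
    elsif platform_family?('windows')
      windows_package 'acme-core' do
        source 'C:\\installers\\acme-core.msi'
      end
    end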
We have been exploring two approaches to using Chef to configure our nodes.

Option 1: A single Chef server that contains all our cookbooks and that all nodes talk to. I understand the need to segregate cookbooks under development from those in test or production, and I also understand we may need to make this server highly available, so that if it fails we have a standby.
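The sort of segregation I have in mind under Option 1 would be along the lines of Chef environments that pin cookbook versions per lab. A rough sketch, with made-up cookbook names and versions:

    # environments/qa.rb -- pin cookbook versions for the QA lab
    # on the shared server (names and versions are illustrative only)
    name 'qa'
    description 'QA lab deployments'
    cookbook_versions(
      'acme-core'  => '= 2.1.0',
      'acme-agent' => '= 1.4.2'
    )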
Option 2: Each product deployment is configured with its own Chef server, so deploying a product involves first creating a Chef server and then deploying that product's nodes via it. In other words, if we had 20 products deployed at once, we would need 20 Chef servers, one per product.

Currently we orchestrate our product deployment via Jenkins.

Any pros/cons to each approach?

Chris