In the long term, option 1 is likely where we are headed; if not, we'll throw away the MySQL server entirely and replace it with data bags. But in the short term, turning the templater into a web service greatly simplifies the process (and removes the backticks and the filesystem-location dependence).
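A minimal sketch of that web-service idea, assuming a hypothetical HTTP endpoint that serves the rendered JSON (the host, path, and helper names here are made up for illustration, not the real service):

```ruby
require 'net/http'
require 'json'
require 'uri'

# Sketch: fetch the templater's rendered data bag JSON over HTTP
# instead of shelling out with backticks. The endpoint is hypothetical.
def parse_templater_response(body)
  JSON.parse(body)
end

def fetch_data_bag(host, path)
  response = Net::HTTP.get_response(URI("http://#{host}#{path}"))
  raise "templater service returned #{response.code}" unless response.is_a?(Net::HTTPSuccess)
  parse_templater_response(response.body)
end

# e.g. data_bag_raw = fetch_data_bag("syncmaster.example", "/templater/chefdatabag.json")
```

Compared with the backtick call, a failing templater shows up as an explicit exception here rather than as a JSON parse error on empty output.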
1) Ruby has its own ways to talk with MySQL (the mysql2 gem, for example) and its own template engines.
2) You can create a Perl web service to talk to. It would simplify tasks like caching of templated data.

2012/4/21 Jesse Campbell:
I'm in the process of migrating from a Perl Template Toolkit and MySQL
based config management system to Chef.
We have too much to migrate to be able to do it all in one go, so I'd
like a way to push data from one system into the other.
One way I've come up with is a quick recipe that does this:
data_bag_raw = Chef::JSONCompat.from_json(`perl ~syncmaster/synchronizer/templater.pl -r -t chefdatabag.json`)
sam_config_item = data_bag_item("sam-config", node["domain"])
sam_config_item.raw_data = data_bag_raw
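For what it's worth, that snippet assumes the templater output parses as a JSON object, and a Chef data bag item also needs an "id" key matching the item name (here node["domain"]). A hedged sketch of that check, with a hypothetical helper name:

```ruby
require 'json'

# Hypothetical helper mirroring the recipe above: parse the templater
# output and make sure it carries the "id" key data bag items require.
def build_bag_item(raw_json, item_id)
  item = JSON.parse(raw_json)
  raise "templater output is not a JSON object" unless item.is_a?(Hash)
  item["id"] ||= item_id
  item
end

# e.g. build_bag_item(templater_output, node["domain"])
```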
The Template Toolkit generates chefdatabag.json on the fly from
the MySQL database; it ends up making about 15 stored procedure calls for
my test stack, but close to 2000 calls for the production stack.
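On the caching idea from the reply above: one way to avoid paying those ~2000 stored procedure calls on every run would be a simple TTL cache around the rendered JSON. A sketch only, with paths and TTL as assumptions:

```ruby
require 'json'

# Hedged sketch: keep the rendered data bag JSON on disk and only
# re-run the templater (the block passed in) when the cached copy is
# older than the TTL. Cache path and TTL are assumptions.
def cached_json(cache_path, ttl_seconds)
  if File.exist?(cache_path) && (Time.now - File.mtime(cache_path)) < ttl_seconds
    return JSON.parse(File.read(cache_path))
  end
  fresh = yield                       # e.g. run the templater here
  File.write(cache_path, JSON.generate(fresh))
  fresh
end
```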
I'm okay with this, but I'm wondering...
Is there a better way?
I don't want every node in the stacks pulling from the MySQL database
directly; I want to push the data into a data bag.
Should I be using an execute resource instead of backticks to call out to
my template library?
Just looking for random ideas here...
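On the execute-vs-backticks question: an execute resource doesn't easily hand its output back to the recipe, but Ruby's stdlib Open3 does, and unlike backticks it surfaces the exit status and stderr. A sketch, using the templater command from the snippet above:

```ruby
require 'open3'
require 'json'

# Sketch: run the templater via Open3 so a non-zero exit raises an
# error instead of silently handing empty output to the JSON parser.
def run_templater(*cmd)
  stdout, stderr, status = Open3.capture3(*cmd)
  raise "templater failed: #{stderr}" unless status.success?
  JSON.parse(stdout)
end

# e.g. run_templater("perl", File.expand_path("~syncmaster/synchronizer/templater.pl"),
#                    "-r", "-t", "chefdatabag.json")
```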