Hi,
I have a deployment scenario that I'd much appreciate some feedback on from the Chef user community.
Context Overview:
The context is a server running a Node.js web app which connects to a mongolab.com database instance.
What I'm looking at is using Jenkins CI to build a docker image after each successful master branch build.
Probably on the same machine as the Jenkins CI server, I'd run a private docker registry.
Jenkins would build a Node.js app image and push it to that local private docker registry.
Let's say I have five Node.js web app servers behind a load balancer.
A Chef production deployment run would then pull that image from the private docker registry onto each of the five web app servers.
Question:
My question is about the point where Jenkins builds the new docker image after a successful master branch build.
Jenkins is building from a git repo, but that git repo currently does not contain sensitive production config data.
Config data like the API keys for external services such as mongolab, datadog, loggly.com, etc.
What I'm thinking is having a step where a Jenkins task hits something like an etcd server to obtain that production config data and updates the Node.js web app's config file with it before the docker build.
Does this sound like a good option?
For example, this is what is pulled from the git repo:
"loggly": {
"inputToken": "your-lggoy-token",
"auth": {
"username": "your-loggly-username",
"password": "your-loggly-password"
}
}
This needs to be updated before the image is built, so that when a container is fired up in production the application can connect to the external loggly resource.
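Roughly what I have in mind for that Jenkins step is a small script run just before the docker build. This is only a sketch: the etcd URL, the v2 key paths and the config.json location are all made up, and it only handles the loggly token to keep it short:

// fetch-config.js -- run as a Jenkins build step before "docker build".
// Assumes an etcd v2 HTTP API at ETCD_URL and keys like
// /production/loggly/inputToken (both made up -- adjust to your layout).
var fs = require('fs');
var http = require('http');

var ETCD_URL = process.env.ETCD_URL || 'http://etcd.internal:2379';

// Read one value from etcd's v2 keys API.
function etcdGet(key, callback) {
  http.get(ETCD_URL + '/v2/keys' + key, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var parsed = JSON.parse(body);
      if (!parsed.node) { return callback(new Error('no value for ' + key)); }
      callback(null, parsed.node.value);
    });
  }).on('error', callback);
}

var config = JSON.parse(fs.readFileSync('config.json', 'utf8'));

etcdGet('/production/loggly/inputToken', function (err, token) {
  if (err) { throw err; }
  config.loggly.inputToken = token;
  // Repeat for the other keys (loggly auth, mongolab, datadog, ...).
  fs.writeFileSync('config.json', JSON.stringify(config, null, 2));
  console.log('config.json updated with production values');
});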
Or should the container instance instead query etcd at the point it starts up?
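If I went that route, I imagine the app (or an entrypoint script) would resolve the values itself when the container boots. Again just a sketch with made-up key paths, falling back to whatever is baked into config.json if etcd can't be reached:

// config.js -- resolve secrets when the container starts, not at build time.
// Assumes etcd v2 at ETCD_URL; the key paths are placeholders.
var fs = require('fs');
var http = require('http');

var ETCD_URL = process.env.ETCD_URL || 'http://etcd.internal:2379';
var config = JSON.parse(fs.readFileSync(__dirname + '/config.json', 'utf8'));

// Fetch a single key from etcd, falling back to a default on any failure.
function etcdGet(key, fallback, callback) {
  http.get(ETCD_URL + '/v2/keys' + key, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var value = fallback;
      try { value = JSON.parse(body).node.value; } catch (e) {}
      callback(value);
    });
  }).on('error', function () { callback(fallback); });
}

// Call this once at startup, before the web server starts listening.
module.exports = function loadConfig(done) {
  etcdGet('/production/loggly/inputToken', config.loggly.inputToken, function (value) {
    config.loggly.inputToken = value;
    done(config);
  });
};

The app would call loadConfig() once at boot; the image itself then never contains the real keys, but every container needs to be able to reach etcd when it starts.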
I hope I explained all that OK.
Any feedback would be muchly appreciated :-)
Thanks!