> With Jenkins, there are tools to copy workspaces from one job to another, to copy workspace data from one system to another, to archive artifacts, etc. You should be making use of those kinds of tools so that you never need to go out to a specific slave to find out what went wrong with a given job.

And if we lived in a perfect world, I'm sure we'd never need to ssh anywhere and could just sit in the middle of an interactive matrix like Tony Stark. Regardless, this is an active development environment and we are in and out of Jenkins slaves and development team VMs, and just generally being the diagnostic fixit guys. The information doesn't always come to you. Sometimes you have to go to the information. And while I get a lot of information from the Jenkins GUI, sometimes there's no substitute for going out and looking at something.

Also, the decision was made (before I came along) to have all the dev teams use their own chef servers, so there is no unified environment in dev. Don't get me started on the discrepancies between this arena and other environments. I'm hoping to move toward homogenization. But we have hundreds of developers and something like 50 Openstack tenants (all dev teams). The infrastructure started small with a couple of teams and then grew to include a large portion of the enterprise rather quickly.

Or possibly I'm the only person who doesn't work in a perfect world. Everyone else, back to your regularly scheduled Ironman Matrix.

Sascha Bates
612 850 0444
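As a concrete illustration of pulling job output without ssh'ing to a slave: archived artifacts are served by the Jenkins controller at `<base>/job/<job>/<build>/artifact/<path>`, so a small helper can build those URLs for whatever tooling fetches them. This is only a sketch; the host, job, and file names below are invented for illustration.

```python
def artifact_url(base, job, build, path):
    """Build the URL Jenkins serves an archived artifact from:
    <base>/job/<job>/<build>/artifact/<path>.

    `build` can be a build number or a symbolic name such as
    'lastSuccessfulBuild'. All names here are examples, not real hosts.
    """
    return f"{base.rstrip('/')}/job/{job}/{build}/artifact/{path}"

# Example (hypothetical server and job):
#   artifact_url("http://jenkins:8080", "nightly",
#                "lastSuccessfulBuild", "logs/build.log")
```

The returned URL can then be handed to curl, wget, or urllib, so the "go look at the slave" step becomes an HTTP fetch from the controller instead.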
On 10/21/12 7:57 PM, Brad Knowles wrote:
> On Oct 21, 2012, at 5:35 PM, Sascha Bates wrote:
>
>> How are people working with these servers that have no useful labels? I really want to know. We ssh all over the place in our infrastructure and if I had to go out and look up a special snowflake ec2-blah-blah or IP every time I wanted to get to a server, I'd probably take a baseball bat to my infrastructure.
>
> IMO, if you're manually ssh'ing around in a large infrastructure, you're doing it wrong.
>
> If you've got a small infrastructure, and you can keep all the hostnames in your head at the same time, and you can manage your IP address space with simple /etc/hosts files, then manually ssh'ing around may be required on occasion and is probably okay.
>
> However, if you've got a larger infrastructure, then you should have better tools to get the information you need and put it in a place where you can get to it, without you having to manually ssh around. Those tools might end up having to use ssh to get the information, but that part of the process should be hidden from you.
>
>> For example, we run a Jenkins setup with 20 slaves. The slaves are rebuilt often using openstack and chef. I'm currently writing a knife plugin to set up ssh aliases for nodes in a role that make it easy for us to do things like 'ssh slave20', as we are constantly helping people figure out stuff about their jobs on the slaves or diagnosing VM issues (I'd actually really like to take a baseball bat to our Openstack infra).
>
> With Jenkins, there are tools to copy workspaces from one job to another, to copy workspace data from one system to another, to archive artifacts, etc. You should be making use of those kinds of tools so that you never need to go out to a specific slave to find out what went wrong with a given job.
>
> --
> Brad Knowles
> LinkedIn Profile: <http://tinyurl.com/y8kpxu>
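The ssh-alias idea described in the thread (the knife plugin that lets you type `ssh slave20`) boils down to rendering ssh_config Host blocks from node data. A minimal sketch of that rendering step, assuming the (alias, address) pairs have already been pulled from the Chef server (e.g. via `knife search node 'role:jenkins-slave'`); the aliases, addresses, and user below are hypothetical:

```python
def ssh_config_blocks(nodes, user="jenkins"):
    """Render ssh_config Host blocks for a list of (alias, address) pairs.

    In the setup described in the thread, the pairs would come from a Chef
    node search; here they are passed in directly. The default user is an
    assumption, not something from the original message.
    """
    blocks = []
    for alias, address in nodes:
        blocks.append(
            f"Host {alias}\n"
            f"    HostName {address}\n"
            f"    User {user}\n"
        )
    return "\n".join(blocks)

# Example with made-up nodes; the output can be appended to ~/.ssh/config
# (or better, a file pulled in via ssh_config's Include directive):
#   print(ssh_config_blocks([("slave20", "10.0.0.20"), ("slave01", "10.0.0.1")]))
```

Once the rendered blocks are in ssh's config path, `ssh slave20` resolves the snowflake hostname for you, which is exactly the lookup the plugin is meant to eliminate.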
Archive powered by MHonArc 2.6.16.