[chef] Re: Re: new nodes and known_hosts


  • From: "Leo Dirac (SR)" < >
  • To:
  • Subject: [chef] Re: Re: new nodes and known_hosts
  • Date: Wed, 17 Oct 2012 21:22:07 -0700

Thanks for the tip on building a more secure system.  That's overkill for what I'm working on right now, and, I expect, for a lot of other Chef users.

Erasing the entire known_hosts file is clearly inappropriate -- I wasn't suggesting that. I was only suggesting removing the single offending line, which refers to a system we know no longer exists, based on the IaaS provider's response to the request to create a new instance.
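
For reference, OpenSSH can already do that surgical removal in one step; a minimal sketch, with placeholder addresses:

    # drop the stale known_hosts entry for one recycled address
    ssh-keygen -R 203.0.113.7
    # works for hostnames too, including hashed (HashKnownHosts) entries
    ssh-keygen -R old-node.example.com

That removes exactly the single offending line, without hand-editing the file.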

On Wed, Oct 17, 2012 at 7:06 PM, AJ Christensen < > wrote:
For one of our clients deployed to AWS via CloudFormation, we've taken
to generating an OpenSSL CA (PKI) for the stack at "stack creation
time", used by all systems that run SSH. We pre-distribute the CA from
an S3 bucket (each machine's host keys go back into the same bucket)
and allow machines to generate their own "server" SSH keys. We also
generate a keypair for the VPN bastion host with this same methodology.
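
The same trust model can be sketched with OpenSSH's native certificate
support (our setup uses an OpenSSL CA, so treat this as an illustrative
variant, not our exact tooling; all names below are placeholders):

    # once per stack: create the host CA keypair
    ssh-keygen -t rsa -f host_ca -C "stack host CA"

    # per instance: sign the machine's host key with the CA
    ssh-keygen -s host_ca -I web01 -h -n web01.example.com \
        /etc/ssh/ssh_host_rsa_key.pub

    # then point sshd at the result in sshd_config:
    #   HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub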

OpenSSH on workstations can be configured to verify remote machines
against a known chain, so all in all this allows for a much more
secure (and technically correct) authentication and authorization
model. Being standard OpenSSL authentication, this approach is more
cryptographically sane than nuking known_hosts entries, and it meets
the level required by NIST MODERATE.
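
On the workstation side, with the native-certificate variant sketched
above, that verification boils down to a single known_hosts line
(domain pattern and key are placeholders):

    # trust any host certificate signed by the stack CA for this domain;
    # paste the contents of host_ca.pub after the key type
    @cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2E...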

Hope this helps,

AJ

On 18 October 2012 13:36, Leo Dirac (SR) < > wrote:
> I'm starting to use Google Compute Engine and repeatedly running into a
> problem with known_hosts.  GCE recycles public IP addresses pretty
> frequently, probably because its usage is still pretty low.  When this
> happens, SSH on my workstation gets concerned that the signature of the
> machine at that IP has changed -- it gives a nice warning if I try to
> connect directly, but from within Ruby it just throws a Net::SSH exception
> without explanation.  Now I know I need to go edit ~/.ssh/known_hosts when
> this happens, but it's a gotcha.
>
> This has to come up with other cloud providers too, but probably
> infrequently enough not to be a big deal.  It seems to me the right thing
> to do would be to have knife clear the known_hosts entry for the specific
> IP when a new node is being created (a sketch follows below).  Conceptually
> I'd rather not have knife messing with my local security credentials, but
> in this case it seems like it really is the right thing to do.  Thoughts?
>
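
Until knife does this itself, the workflow I'm proposing can be
approximated with a small wrapper, assuming only ssh-keygen and standard
knife bootstrap options (address, user, and node name are placeholders):

    # clear any stale host key before knife touches the new node
    IP=203.0.113.7                  # address the provider assigned
    ssh-keygen -R "$IP"             # no-op if there is no existing entry
    knife bootstrap "$IP" -x ubuntu --sudo -N my-new-node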



