For one of our clients deployed to AWS via CloudFormation, we've taken
to generating an OpenSSL CA (PKI) for the stack at stack creation time.
The CA is used by all systems that run SSH: it is pre-distributed from
an S3 bucket, each machine generates its own "server" SSH host key, and
the signed host keys go back into the S3 bucket. We also generate a
keypair for the VPN bastion host with this same methodology.
OpenSSH on workstations can be configured to verify remote machines
against a known chain, so all in all this allows for a much more secure
(and technically correct) authentication and authorization model. Being
standard OpenSSL authentication, this approach is more cryptographically
sane than nuking known_hosts, and it also meets the level required by
NIST MODERATE.
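As a rough sketch of that workflow using OpenSSH's own certificate
tooling (ssh-keygen): the paths, domain, and host name below are
placeholders, and the S3 upload/download steps are elided.

```shell
# Stack creation time: generate the CA keypair. The private half stays
# with the signing infrastructure; the public half is what gets
# pre-distributed to workstations.
ssh-keygen -q -t ed25519 -N '' -f /tmp/stack_host_ca -C 'stack-host-ca'

# On each instance: generate a "server" host key, then have it signed
# by the CA (-h marks it as a host certificate; web01.internal is a
# hypothetical principal).
ssh-keygen -q -t ed25519 -N '' -f /tmp/ssh_host_ed25519_key
ssh-keygen -q -s /tmp/stack_host_ca -I 'web01.internal' -h \
  -n 'web01.internal' /tmp/ssh_host_ed25519_key.pub

# On each workstation: trust the CA for the whole stack domain, so host
# certificates verify no matter which IP a machine comes up on.
echo "@cert-authority *.internal $(cat /tmp/stack_host_ca.pub)" \
  >> /tmp/known_hosts_ca
```

With the @cert-authority line in place, a recycled IP never triggers a
host-key-changed warning, because trust follows the certificate rather
than the individual key.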
Hope this helps,
AJ
On 18 October 2012 13:36, Leo Dirac (SR) wrote:
> I'm starting to use Google Compute Engine and repeatedly running into a
> problem with known_hosts. GCE recycles public IP addresses fairly
> frequently, probably because its usage is still pretty low. When this
> happens, SSH on my workstation gets concerned that the signature of the
> machine at that IP has changed -- it gives a nice warning if I try to
> connect directly, but from within Ruby it just throws a Net::SSH
> exception without explanation. Now I know I need to go edit
> ~/.ssh/known_hosts when this happens, but it's a gotcha.
>
> This has to come up with other cloud providers too, but probably
> infrequently enough not to be a big deal. It seems the right thing to
> do would be to have knife clear known_hosts for the specific IP when a
> new node is being created. Conceptually I'd rather not have knife
> messing with my local security credentials, but in this case it seems
> like it really is the right thing to do. Thoughts?
>
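For the recycled-IP problem quoted above, OpenSSH can also drop a single
stale entry without hand-editing the file; knife could shell out to
something like this (the file path and host name are hypothetical, and
the demo entry is built just so the removal has something to act on):

```shell
# Build a throwaway known_hosts with one entry for a hypothetical host.
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_host_key
printf 'old.host %s\n' "$(cat /tmp/demo_host_key.pub)" > /tmp/known_hosts_demo

# Remove only the entry for that host/IP; other entries are untouched.
# (Without -f, ssh-keygen -R operates on ~/.ssh/known_hosts.)
ssh-keygen -q -R old.host -f /tmp/known_hosts_demo
```

ssh-keygen -R also leaves a backup of the original file with a .old
suffix, so the operation is reversible if it removes the wrong entry.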