35

I have an AWS EC2 Ubuntu instance for pet projects. When I tried logging in one day, I got this error:

~$ ssh -i"/home/kona/.ssh/aws_kona_id" kona@server.akona.me -p22 
Enter passphrase for key '/home/kona/.ssh/aws_kona_id': 
Received disconnect from [IP address] port 22:2: Too many authentication failures
Disconnected from [IP address] port 22
~$

kona is the only account enabled on this server.

I've tried rebooting the server, changing my IP address, and waiting.

EDIT:

kona@arcticjieer:~$ ssh -o "IdentitiesOnly yes" -i"/home/kona/.ssh/aws_kona_id" -v kona@ec2-3-17-146-113.us-east-2.compute.amazonaws.com -p22 
OpenSSH_8.1p1 Debian-1, OpenSSL 1.1.1d  10 Sep 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to ec2-3-17-146-113.us-east-2.compute.amazonaws.com [3.17.146.113] port 22.
debug1: Connection established.
debug1: identity file /home/kona/.ssh/aws_kona_id type -1
debug1: identity file /home/kona/.ssh/aws_kona_id-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1p1 Debian-1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug1: Authenticating to ec2-3-17-146-113.us-east-2.compute.amazonaws.com:22 as 'kona'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:D3sIum9dMyyHNjtnL7Pr4u5DhmP5aQ1jaZ8Adsdma9E
debug1: Host 'ec2-3-17-146-113.us-east-2.compute.amazonaws.com' is known and matches the ECDSA host key.
debug1: Found key in /home/kona/.ssh/known_hosts:41
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
debug1: Will attempt key: /home/kona/.ssh/aws_kona_id  explicit
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/kona/.ssh/aws_kona_id
Enter passphrase for key '/home/kona/.ssh/aws_kona_id': 
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
kona@ec2-3-17-146-113.us-east-2.compute.amazonaws.com: Permission denied (publickey).
kona@arcticjieer:~$ 
Arctic Kona
  • Having a backup? As I see it, you are in trouble. – yagmoth555 Oct 28 '19 at 18:59
  • Is this after the first attempted authentication? If so, the error you're receiving might be the most telling: 'Too many authentication failures'. Is this SSH server accessible to the Internet? If so, it's probably in the process of being brute forced. – thepip3r Oct 28 '19 at 19:01
  • Add `-o "IdentitiesOnly yes"` to your command line. Keep in mind that `ssh` throws *all* the SSH keys to the server if you do not specify `IdentitiesOnly`. So if you have a lot of SSH keys your logins will fail. Specifying an explicit key in the command line is not enough to avoid this behaviour. Moreover, the key specified on the command line need not be used first (which might be what most people would expect maybe...) – Giacomo Alzetta Oct 29 '19 at 12:43
  • @GiacomoAlzetta I believe this is the right answer, so please write an answer, not just a comment :). – Edheldil Oct 29 '19 at 13:53
  • Do you have CloudWatch enabled? Was the instance left running for a while? I wonder if people were trying to break into it and Amazon just blocked ssh access. – raubvogel Oct 29 '19 at 14:49
  • Might be obvious to you, but has this SSH session ever worked for you ? – Criggie Oct 30 '19 at 10:22
  • Secondly - are you positive that IP address is correct for your instance? Confirm it in the AWS console, you might be trying to SSH to someone else's host. – Criggie Oct 30 '19 at 10:22
  • From your edit you seem to have a different problem. Your client is correctly using that key, but the key simply doesn't work. You should access your instance from the AWS console and ensure that your `authorized_keys` are set correctly and have the correct permissions. – Giacomo Alzetta Oct 31 '19 at 15:00

7 Answers

71

This error usually means that you’ve got too many keys loaded in your ssh-agent.

Explanation: your ssh client will attempt to use all the keys from ssh-agent one by one before it gets to the key specified with -i aws_kona_id. Yes, it's a bit counter-intuitive. Each such attempt counts as an authentication failure, and since the SSH server by default allows only a handful of attempts (MaxAuthTries, 6 in stock OpenSSH), you are getting the error you see: Too many authentication failures.

You can view the identities (keys) attempted with ssh -v.
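If you want to see exactly which identities get offered before your explicit key, the debug output can be filtered, for example like this (a quick sanity check, using the hostname from the question; the debug messages go to stderr, hence the redirect):

ssh -v kona@server.akona.me 2>&1 | grep -E 'Offering public key|Will attempt key'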

The solution is to tell ssh to only use the identities specified on the command line:

ssh -o "IdentitiesOnly yes" -i ~/.ssh/aws_kona_id -v kona@server.akona.me

If it doesn't help, post the output of that command here.
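If you would rather make this the default than remember the option every time, one approach (a sketch, not the only way) is to set it globally in ~/.ssh/config. Note that with IdentitiesOnly yes only keys named explicitly (via -i or a per-host IdentityFile) will be offered:

# ~/.ssh/config -- applies to every host unless overridden by a more specific Host block
Host *
    IdentitiesOnly yes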

MLu
  • To elaborate: this happens because your client tries private key 1, which fails because the server doesn't know it, then it tries private key 2, which fails because the server doesn't know it, then it tries private key 3, which fails because the server doesn't know it, then it gives up and never gets to the 5th or 6th key which is the correct one. – user253751 Oct 29 '19 at 11:24
  • I don't think it happens even after using the `-i` – hjpotter92 Oct 29 '19 at 12:05
  • @hjpotter92 `-i` does **not** limit the keys ssh will use; moreover, the key specified by `-i` need not be tried first. That's why if you want to use multiple keys (like one key per host) you really need to use the `.ssh/config` and in the host configuration always specify `IdentitiesOnly yes`. – Giacomo Alzetta Oct 29 '19 at 13:11
  • @GiacomoAlzetta thanks for the comments, added your input to the answer. – MLu Oct 30 '19 at 20:35
  • I'm using 1password and had this issue with around 20 keys stored. I archived some unused keys and now it works. Thank you !! – maxime Aug 01 '23 at 15:20
30

I think MLu's answer is possibly correct in this case. The way to validate this is to run an ssh command from the command line, specifying the correct key for the server.

ssh -i "keyfile.pem" ec2-user@1.2.3.4

If that doesn't work, and in the general case of "I've been locked out of my server, help!", the generally recommended approach is to mount the volume to another instance as a data volume.

  1. Stop the EC2 server.
  2. Detach its root volume and attach it to a new instance as a data volume (see the AWS CLI sketch after this list).
  3. Do any investigation or repairs required (look at logs, add keys, etc.). This can include creating new users and new keys, changing files on the file system, etc.
  4. Reattach the volume as the root volume on the original instance and start it again.

Repeat until you have access. If you can't regain login access, this at least gets you access to your data.
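For reference, the stop/detach/attach shuffle from steps 1, 2 and 4 can be done with the AWS CLI roughly like this (a sketch only; the instance IDs, volume ID and device names are placeholders you would replace with your own, and the root device name depends on the AMI):

# Stop the locked-out instance (placeholder IDs throughout)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Detach its root volume and attach it to a rescue instance as a data volume
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device /dev/sdf

# ...mount it on the rescue instance, fix authorized_keys, unmount, then reverse the process
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0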

Tim
  • This is overkill in this case and probably won't help anyway, because the problem lies on the client side, which is trying too many keys for authentication. – MLu Oct 29 '19 at 22:41
  • You could be right @MLu, this is a general procedure when you don't have a key. The problem can be diagnosed by specifying the correct key on the ssh command line. If it's that, great, your answer is perfect. This one can stay as a more general "I can't access my EC2 server" answer. – Tim Oct 29 '19 at 23:30
29

SSH by default tries all available SSH keys. It does so in a "random" order. Specifying the -i option simply tells SSH to add that keyfile to the list of keys to try.

It does not:

  • limit SSH to use only that key
  • tell SSH to try that key first

What ends up happening (quite often if you use many keys) is that SSH tries a couple of random keys that don't work and the server stops accepting authentication attempts from your client.

If you want to tell SSH to "use only this key" you must specify the IdentitiesOnly yes option:

ssh -o "IdentitiesOnly yes" -i"/home/kona/.ssh/aws_kona_id" kona@server.akona.me -p22 

IdentitiesOnly yes tells SSH only to use the explicitly specified keys (in this case only the key specified using -i).

This is why, when I use custom keys for different hosts, I always define the host configuration in .ssh/config. This lets me use a simple alias and, more importantly, specify IdentitiesOnly yes and which key to use, avoiding this kind of mistake:

Host kona.server
    Hostname server.akona.me
    IdentityFile ~/.ssh/aws_kona_id
    IdentitiesOnly yes
    Port 22
    User kona

With the above in your .ssh/config you should be able to log in to your server with simply:

$ ssh kona.server
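If you want to confirm which settings the alias actually resolves to, the client can print its effective configuration without connecting (ssh -G is available in OpenSSH 6.8 and newer):

# Show the resolved options for the alias; nothing is sent to the server
ssh -G kona.server | grep -iE 'hostname|user|port|identityfile|identitiesonly'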
Giacomo Alzetta
  • It's not a bug, it's a feature :D – Hakaishin Oct 30 '19 at 14:35
  • Do you know the reason as to why ssh tries the identities in 'random' order? Is there a reason why SSH doesn't at least try the explicitly provided identity file first? I'm just curious. – Paul Belanger Oct 30 '19 at 15:52
  • This right here is the real answer---solves the problem, prevents it from happening again, supports as many servers as the user needs. – DoubleD Oct 30 '19 at 18:42
5

The verbose output you've just added shows that you get Permission denied for ~/.ssh/aws_kona_id.

That's a completely different problem than Too many authentication failures.

Perhaps your aws_kona_id isn't the right key for the user (and that's why it kept trying all the other identities from the ssh-agent) or you should use the default EC2 user account, e.g. ec2-user or ubuntu or what have you.

Try those accounts, or try to find the right key for the kona user.
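One way to check whether aws_kona_id is actually the key the kona account expects (a sketch; the server-side check assumes you can reach the file some other way, e.g. via the default account or a rescue volume):

# On the client: print the fingerprint of the key you are offering
ssh-keygen -lf ~/.ssh/aws_kona_id

# On the server: list the fingerprints kona will accept and compare
ssh-keygen -lf /home/kona/.ssh/authorized_keys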

MLu
2

The too-many-keys-being-tried issue is a problem on the local machine, not the server. Just clear the keys from your local ssh-agent's memory with:

ssh-add -D

or if you know which key is mis-associated with a server, use

ssh-add -d <identity>

That might require you to re-enter passphrases for other key/server combinations the next time you connect, but it will get rid of the login block on the EC2 machine.
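If you'd rather see what the agent holds before deleting anything, a quick check looks like this (the key path in the middle line is a hypothetical example of a key you might drop):

ssh-add -l                       # list loaded keys with fingerprints and comments
ssh-add -d ~/.ssh/unused_key     # hypothetical example: remove one specific key
ssh-add -l                       # confirm it is gone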

HugoDesRo
1

Just run your ssh-agent:

eval "$(ssh-agent -s)"

then add your RSA key:

ssh-add id_rsa

then try to connect to your server again.
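In this question the key is aws_kona_id rather than the default id_rsa, so (assuming you want to go the agent route at all) you would add that key explicitly; the -t option just limits how long it stays loaded:

ssh-add -t 3600 ~/.ssh/aws_kona_id   # keep the key in the agent for an hour

Keep in mind, though, that the more keys the agent holds, the more likely you are to hit Too many authentication failures, as the other answers explain.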

1

Ubuntu EC2 instances come with an ubuntu user account that has your SSH key installed.

If you haven't removed this account, you can still connect with:

ssh -i "/home/kona/.ssh/aws_kona_id" ubuntu@server.akona.me

Then fix your account problem after sudo -i by investigating /home/kona/.ssh/authorized_keys.
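Once logged in as ubuntu, a rough sketch of repairing kona's key looks like this (the .pub path is a placeholder for whatever public key matches aws_kona_id; strict ownership and permissions matter because sshd ignores authorized_keys files with overly permissive settings):

sudo -i
mkdir -p /home/kona/.ssh
cat /path/to/aws_kona_id.pub >> /home/kona/.ssh/authorized_keys   # placeholder path
chown -R kona:kona /home/kona/.ssh
chmod 700 /home/kona/.ssh
chmod 600 /home/kona/.ssh/authorized_keys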

profy