82

I set up an SSH server that is publicly accessible to anyone, so I get a lot of connections from IPs all over the world. Weirdly, none of them actually try to authenticate to open a session. I can connect and authenticate myself without any problem.

From time to time, I get the error: kex_exchange_identification: Connection closed by remote host in the server logs. What causes that?

Here is 30 minutes of SSH logs (public IPs have been redacted):

# journalctl SYSLOG_IDENTIFIER=sshd -S "03:30:00" -U "04:00:00"
-- Logs begin at Fri 2020-01-31 09:26:25 UTC, end at Mon 2020-04-20 08:01:15 UTC. --
Apr 20 03:39:48 myhostname sshd[18438]: Connection from x.x.x.207 port 39332 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:39:48 myhostname sshd[18439]: Connection from x.x.x.207 port 39334 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:39:48 myhostname sshd[18438]: Connection closed by x.x.x.207 port 39332 [preauth]
Apr 20 03:39:48 myhostname sshd[18439]: Connection closed by x.x.x.207 port 39334 [preauth]
Apr 20 03:59:36 myhostname sshd[22186]: Connection from x.x.x.83 port 34876 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:59:36 myhostname sshd[22186]: error: kex_exchange_identification: Connection closed by remote host

And here is my SSH configuration:

# ssh -V
OpenSSH_8.2p1, OpenSSL 1.1.1d  10 Sep 2019
# cat /etc/ssh/sshd_config 
UsePAM yes
AddressFamily any
Port 22
X11Forwarding no
PermitRootLogin prohibit-password
GatewayPorts no
PasswordAuthentication no
ChallengeResponseAuthentication no
PrintMotd no # handled by pam_motd
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2 /etc/ssh/authorized_keys.d/%u
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
LogLevel VERBOSE
UseDNS no
AllowUsers root
AuthenticationMethods publickey
MaxStartups 3:100:60

After searching the web, I found references to MaxStartups suggesting it could be the reason for this error. But after changing the default value (as shown in my sshd_config above) and attempting more than 3 connections, the server unambiguously reports that problem:

Apr 20 07:26:59 myhostname sshd[31468]: drop connection #3 from [x.x.x.226]:54986 on [10.0.0.11]:22 past MaxStartups
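
For reference, the three colon-separated values follow the start:rate:full form described in sshd_config(5): sshd refuses a new unauthenticated connection with probability rate/100 once start of them are pending, and refuses all of them once full are pending. So with my setting:

# MaxStartups start:rate:full
# 3:100:60 means: drop 100% of new unauthenticated
# connections as soon as 3 are already pending
MaxStartups 3:100:60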

So, what causes error: kex_exchange_identification: Connection closed by remote host?

Tom Newton
soliz
  • The error means the connection is established and then dropped without any good reason. This can happen if: 1. sshd consumes too many resources at some point, 2. a firewall is dropping too many connections, 3. tcpd is doing it, or 4. kernel limits are hit (like ephemeral port exhaustion). First, look in the firewall settings for any limits; if none are found, try running sshd with its debugging option (see the example after these comments). – kab00m May 04 '20 at 10:25
  • For me it was simply trying to connect to the wrong port – a1300 Apr 17 '21 at 13:22
  • I just restarted the daemon with `service sshd restart` and it worked. – Matheus Frik Feb 22 '23 at 19:23
  • Rebooting the server solved the problem – younes zeboudj Feb 27 '23 at 12:44
  • In my case rebooting the damn router solved the problem. – Tiago Apr 05 '23 at 09:17
  • In my case my config changes in `/etc/ssh/sshd_config` broke the server silently. Reverting them made it work again. – Sridhar Sarnobat Apr 23 '23 at 01:26
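
A minimal way to follow the debugging suggestion above (the port number is an arbitrary free port) is to start a second sshd instance in the foreground:

# Run a debug instance in the foreground on a spare port;
# -d prints the full protocol exchange for each connection,
# -p avoids clashing with the production daemon on 22
/usr/sbin/sshd -d -p 2222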

15 Answers

35

Weirdly, none actually try to authenticate to open a session.

Some spiders and services like Shodan scan public IPv4 addresses for open services, e.g. Salt masters, FTP servers, RDP, and also SSH. These spiders usually only connect to the service without performing any valid authentication steps.

I get the error: kex_exchange_identification: Connection closed by remote host in the server logs. What causes that?

I haven't found a conclusive answer to that, so... time to browse the source then.

In the OpenSSH source code, kex_exchange_identification is a function to exchange server and client identification (duh), and the specified error happens if the socket connection between the OpenSSH server and client is interrupted (see EPIPE), i.e. the client has already closed its connection.
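
You can reproduce this on your own server (a sketch; the address is a placeholder, and nc behaviour differs slightly between variants) by opening a TCP connection to port 22 and closing it before sending any client identification string:

# Connect to the SSH port, send nothing, and hang up after
# one second; sshd never receives a client banner and logs
# kex_exchange_identification: Connection closed by remote host
timeout 1 nc x.x.x.x 22 < /dev/null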

mforsetti
  • Related to this: I installed ntopng recently and network discovery was turned on. This caused these messages to appear. – aman207 Mar 09 '22 at 15:36
  • I had my netmask as `255.255.255.255` and it needs to be `255.255.255.0`; two days over this, ughh. – JREAM Dec 29 '22 at 21:23
11

I've just had this exact issue, and the cause was a port translation happening internally to the load balancer, meaning that my SSH connections were reaching the host on port 80 instead of port 22.

The host was then rightly terminating the connections, and the error message returned to my terminal was as follows:

~/Documents/Projects$ ssh -vvvvA dave@xx.xx.xx.250
OpenSSH_8.1p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/dave/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 47: Applying options for *
debug2: resolve_canonicalize: hostname xx.xx.xx.250 is address
debug2: ssh_connect_direct
debug1: Connecting to xx.xx.xx.250 [xx.xx.xx.250] port 22.
debug1: Connection established.
debug1: identity file /Users/dave/.ssh/id_rsa type 0
debug1: identity file /Users/dave/.ssh/id_rsa-cert type -1
debug1: identity file /Users/dave/.ssh/id_dsa type -1
debug1: identity file /Users/dave/.ssh/id_dsa-cert type -1
debug1: identity file /Users/dave/.ssh/id_ecdsa type -1
debug1: identity file /Users/dave/.ssh/id_ecdsa-cert type -1
debug1: identity file /Users/dave/.ssh/id_ed25519 type -1
debug1: identity file /Users/dave/.ssh/id_ed25519-cert type -1
debug1: identity file /Users/dave/.ssh/id_xmss type -1
debug1: identity file /Users/dave/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1
debug1: kex_exchange_identification: banner line 0: HTTP/1.1 400 Bad Request
debug1: kex_exchange_identification: banner line 1: Server: nginx/1.14.0 (Ubuntu)
debug1: kex_exchange_identification: banner line 2: Date: Fri, 20 Nov 2020 09:30:23 GMT
debug1: kex_exchange_identification: banner line 3: Content-Type: text/html
debug1: kex_exchange_identification: banner line 4: Content-Length: 182
debug1: kex_exchange_identification: banner line 5: Connection: close
debug1: kex_exchange_identification: banner line 6:
debug1: kex_exchange_identification: banner line 7: <html>
debug1: kex_exchange_identification: banner line 8: <head><title>400 Bad Request</title></head>
debug1: kex_exchange_identification: banner line 9: <body bgcolor="white">
debug1: kex_exchange_identification: banner line 10: <center><h1>400 Bad Request</h1></center>
debug1: kex_exchange_identification: banner line 11: <hr><center>nginx/1.14.0 (Ubuntu)</center>
debug1: kex_exchange_identification: banner line 12: </body>
debug1: kex_exchange_identification: banner line 13: </html>
kex_exchange_identification: Connection closed by remote host

Fixed the internal port translation, and now the problem has gone away.
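
A quick way to check what is actually answering on the port your client reaches (host and timeout below are placeholders) is to read the first line the server sends: a real SSH daemon announces itself with an SSH-2.0-... banner, while anything else, like the nginx response above, means the traffic is being steered to the wrong backend:

# An SSH daemon answers with something like "SSH-2.0-OpenSSH_8.1";
# an HTTP response here means a web server got the connection
nc -w 3 xx.xx.xx.250 22 | head -n 1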

Dave Rix
  • Similar for me: port 80 was taken up by httpd, and the SSH server did not report an error. – Jake Mar 01 '21 at 03:03
  • This helped me. In my case, I had a docker container set up to direct port 2222 to the host... but I was running sshd in the container on the default port by accident. So docker was saying "yes, I'm here but nothing is happening back there in the container". There was no sshd listening on 2222. – rfay Jul 11 '21 at 21:32
  • How do you fix the internal port translation? Could you provide some details or references? – sirius Mar 24 '22 at 11:13
  • same here, just needed to specify the correct port and the problem was gone – Rainer Glüge May 14 '22 at 08:40
1

I resolved my issue with 'kex_exchange_identification: Connection closed by remote host' when I noticed I was trying to connect using the Server IP when I should have been using the Private IP.

My setup may be worlds apart from yours; I just thought to pass on my own discovery.

EDIT:

With some hosting providers you will have two IPs: one public, one private. The private one is the one you should use in this instance.

You either know this applies to you or you don't. I appreciate it will not apply to everyone, which is why I say my setup may be different.

No other answers worked for me until I used the private IP.

Doopz
1

In my case, an update of openssh-server seemed to have changed the default settings. Explicitly specifying PermitRootLogin in /etc/ssh/sshd_config solved it.

To answer the initial question: logging in as root without a key (i.e. with a password) may generate this error with your config.
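
As a sketch (pick the value appropriate for your setup), pinning the option explicitly removes any dependence on compiled-in defaults:

# /etc/ssh/sshd_config: state the policy explicitly instead of
# relying on the package's compiled-in default
PermitRootLogin prohibit-password

Remember to reload sshd afterwards, e.g. with systemctl reload sshd.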

1

I had this issue on a dedicated server with many services on it and a lot of traffic, with ~100 IPs attached.

Because of too many login attempts (even with PasswordAuthentication off), this error appeared randomly, causing, for example, rsync backups to fail.

One solution could be using a non-standard port, but that would mean going and changing all the scripts that connect to the server.

I instead added ListenAddress directives (two: one for IPv4 and one for IPv6) so that sshd listens only on my main server IP, which is not used by any live site. That caused login attempts to drop by more than 99%.
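
For illustration, the directives look like this (the addresses are documentation placeholders, not real ones):

# Bind sshd to one management address per family instead of
# every address on the machine
ListenAddress 203.0.113.10
ListenAddress 2001:db8::10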

the_nuts
0

In my case, I used manual /etc/hosts entries and proxied through a bastion. The bastion didn't have the same /etc/hosts entries, so it refused the tunnel.
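
To avoid this, the name must resolve identically on the client and the bastion, or you can bypass name resolution on the bastion entirely; a sketch with placeholder names and addresses:

# /etc/hosts entry needed on BOTH the client and the bastion
10.0.0.11   target.internal

# ...or jump via the bastion using the raw IP, so only the
# client has to resolve anything
ssh -J bastion.example.com user@10.0.0.11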

Shay
0

In my case, I was creating the SSH key from a protected variable in GitLab CI. I had to remove the protection on that variable to get it working.
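
For context, a common pattern for loading a key from a CI variable looks like the sketch below (the variable name SSH_PRIVATE_KEY is a placeholder). If the variable is marked protected, it is simply absent on non-protected branches, so the key that gets loaded is empty:

# Run inside the CI job (e.g. in before_script); the variable
# must be visible to the branch the job runs on
eval "$(ssh-agent -s)"
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -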

0

You might just be connecting to the wrong port.

Verify the exact port!

You can specify a custom port using ssh -p port user@host
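
If the non-standard port is permanent, a per-host entry in ~/.ssh/config (host name and port below are examples) saves typing -p every time:

# ~/.ssh/config: make the custom port the default for this host
Host myserver
    HostName myserver.example.com
    Port 2222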

assayag.org
0

In my case I was trying to ssh to an Ubuntu VM running on VirtualBox. I had neglected to install openssh-server on the VM.

sudo apt-get install openssh-server

fixed it.
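
It is also worth confirming the daemon is running and bound to the port (commands assume a systemd-based Ubuntu guest):

# On Ubuntu the service is called "ssh", not "sshd"
systemctl status ssh
# Check that something is listening on port 22
ss -tlnp | grep ':22'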

Red Cricket
  • I had this happen after I updated to FreeBSD 13.1; for me, reinstalling `ssh-tools` fixed this. – lbutlr Jun 18 '22 at 11:21
0

I had the non-standard port specified with -p, keys added to the server and my credentials manager, and I was getting this error.

I found that, for my specific issue, I needed to whitelist my IP in the hosting service's server control panel for my SSH login ID.

0

Some other reasons are:

  1. If your web server or app server listens on a different port, this issue can also happen.
  2. Check the client system's firewall, to see whether the outbound connection is being established.
  3. A basic troubleshooting step is telnet: check with telnet your-host-server port, e.g. telnet 192.10.10.1 6594 (a sample session follows this list).
  4. If you get no response from the server on that port, check your client system's firewall, and then whether the server sits behind a firewall or any other NAC / network controller.
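
For reference, a successful check against an SSH port looks roughly like this (address and version string are illustrative):

$ telnet 192.10.10.1 22
Trying 192.10.10.1...
Connected to 192.10.10.1.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.2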


0

I'm having a similar issue. When I first got the error, I edited my ~/.ssh/known_hosts file and deleted the entry for that server. It then worked just fine, but I logged out, tried to get back in about 5 minutes later, and got the error again.

I've seen this happen on other servers as well, but didn't think anything of it, since our update cycle is frequent enough that I assumed it was just a new version of ssh and the keys needing to be updated.

So as a temporary fix, deleting the entry in your ~/.ssh/known_hosts will get you back in, but the error will come back the next time you try to log in.
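
Rather than editing the file by hand, ssh-keygen can remove the stale entry (the hostname is a placeholder):

# Delete all known_hosts entries for one host; a backup is
# written to ~/.ssh/known_hosts.old
ssh-keygen -R myserver.example.com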

0

In my case, I got this sporadically with AWS EC2. The root cause was that the security group was not properly configured: it was set to allow ingress traffic only on ports 0-65000, leaving out the rest of the range.
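
For comparison, opening SSH properly in a security group with the AWS CLI looks like this (the group ID and source CIDR are placeholders):

# Allow inbound TCP/22 from a specific source range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24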

Eytan Naim
0

This would happen randomly when connecting to our servers. After looking at the SSH server logs in /var/log/secure, we saw a burst of incomplete SSH connection attempts by some hackers/scanners, about 10 within a couple of seconds. Our kex_exchange_identification error looks like it happened at the same time.

We use fail2ban to block bad IPs, so we are going to add some more filter rules to catch this behavior and block it.
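
A sketch of such a jail, assuming fail2ban 0.10+ where the bundled sshd filter accepts mode = aggressive to also match pre-auth disconnects:

# /etc/fail2ban/jail.local (excerpt)
[sshd]
enabled  = true
# aggressive mode also matches pre-auth probes and disconnects
mode     = aggressive
port     = ssh
maxretry = 5
bantime  = 3600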

AngularNerd
0

One possible cause for this issue is that ~/.ssh/authorized_keys, or something around it, has the wrong permissions.
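
With StrictModes enabled (the default), sshd refuses to use key files whose permissions are too open; the usual safe settings are:

# Neither the home directory, ~/.ssh, nor authorized_keys may be
# writable by group or others
chmod go-w ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys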

The Quantum Physicist