
I have a server that prints several environment variables after every ssh login, and I need to disable that printing. E.g.

server2:~ # ssh root@server1
This is the banner.  It resides in /etc/banner
Password:
Environment:
  USER=root
  LOGNAME=root
  HOME=/root
  PATH=/usr/bin:/bin:/usr/sbin:/sbin
  MAIL=/var/mail/root
  SHELL=/bin/bash
  SSH_CLIENT=192.168.0.3 57287 22
  SSH_CONNECTION=192.168.0.3 57287 192.168.0.1 22
  SSH_TTY=/dev/pts/3
  TERM=xterm
server1:~ #

The printing of the variables is not done by the banner: I added a banner and its text appears before the password prompt, while the variables are printed after the password is entered. The sshd service is not running in debug mode, which I've confirmed with ps aux | grep sshd:

server1:~ # ps aux | grep sshd
root       647  0.0  0.0   3844   340 ?        Ss   16:33   0:00 monitord: sshd
root       648  0.0  0.0  53996  2568 ?        S    16:33   0:01 /usr/sbin/sshd -D
root       650  0.0  0.0   3844   336 ?        Ss   16:33   0:00 monitord: sshd_internal                                                    
root       651  0.0  0.0  53996  2544 ?        S    16:33   0:00 /usr/sbin/sshd -D -f /etc/ssh/sshd_config_internal

There are no files in /root/.ssh/ that would do this:

server1:~ # ls -a /root/.ssh
.  ..  authorized_keys  id_rsa  id_rsa.pub  known_hosts

And I haven't found anything in /etc/ssh/sshd_config that I would expect to affect this. I was able to disable the "Last login" line, which appears immediately before the variables, by setting PrintLastLog to no, and I've also set PrintMotd to no. I also tried the sshd binary from a server that does not print the variables, and on this server that binary prints them too. So it's not the sshd binary itself but some configuration on the server; I'm just at a loss for what else could be printing those variables.

And if it helps: when I run a command directly via ssh from another node, the command's output appears after the variables. E.g.

server2:~ # ssh server1 ls
This is a test.  I reside in /root/testBanner
Password:
Environment:
  USER=root
  LOGNAME=root
  HOME=/root
  PATH=/usr/bin:/bin:/usr/sbin:/sbin
  MAIL=/var/mail/root
  SHELL=/bin/bash
  SSH_CLIENT=192.168.0.3 57335 8022
  SSH_CONNECTION=192.168.0.3 57335 192.168.0.1 8022
file1
file2    <---- output of ssh command "ls" occurs after variables
file3
server2:~ #

I've even tried creating a new user with no ~/.bashrc, ~/.profile, or other configuration files, and when ssh'ing in as that user, the variables are displayed as well.

Also, I'm running SUSE Linux Enterprise Server 11:

server1:~ # cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 1

The sshd version:

OpenSSH_5.1p1, OpenSSL 0.9.8j-fips 07 Jan 2009
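
For reference, the "Environment:" block is printed by code in OpenSSH's session.c that is guarded by debug_flag (see the source links in the answer's comments below). Here is a minimal, self-contained mirror of that guard, to show what triggers the printout; the names and layout are paraphrased, not copied exactly:

#include <stdio.h>

extern char **environ;          /* the environment, as sshd passes it to the child */
static int debug_flag = 1;      /* in sshd this is set only by the -d option */

int main(void)
{
        /* Mirrors the guard in session.c: the "Environment:" block is
         * written to stderr only when debug_flag is non-zero. */
        if (debug_flag) {
                fprintf(stderr, "Environment:\n");
                for (int i = 0; environ[i] != NULL; i++)
                        fprintf(stderr, "  %.200s\n", environ[i]);
        }
        return 0;
}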

Here is the init script:

# cat /etc/init.d/sshd
#!/bin/bash
#
# /etc/init.d/lde-sshd: start/stop ssh daemon
#

### BEGIN INIT INFO
# Provides:       sshd
# Required-Start: $network $syslog
# Required-Stop: $network $syslog
# Should-Start: lde
# Default-Start:  3
# Default-Stop:
# Description:    Secure shell daemon
### END INIT INFO

. /usr/lib/lde/lde.functions

check_node_type control payload detached standalone

lde_init_status_init lde-sshd

# Collect the suffix of every extra /etc/ssh/sshd_config_* file; each
# suffix gets its own sshd instance in the start/stop/status cases below.
ADDITIONAL=$(
cd /etc/ssh/
shopt -s nullglob
for i in sshd_config_*; do
        echo ${i#sshd_config_};
done
)

case $1 in
        start)
                echo -n "Starting SSH daemon "
                if ! /usr/bin/monitord -n sshd -c "/usr/sbin/sshd -D"; then
                        panic "Failed to start SSH daemon"
                fi
                lde_init_status_start $?
                for i in $ADDITIONAL; do
                        echo -n "Starting $i SSH daemon "
                        if ! /usr/bin/monitord -n sshd_$i -c "/usr/sbin/sshd -D -f /etc/ssh/sshd_config_$i"; then
                                panic "Failed to start $i SSH daemon"
                        fi
                        lde_init_status_start $?
                done
                ;;
        stop)
                echo -n "Stopping SSH daemon "
                /usr/bin/monitord -n sshd -k
                lde_init_status_stop $?
                for i in $ADDITIONAL; do
                        echo -n "Stopping $i SSH daemon "
                        /usr/bin/monitord -n sshd_$i -k
                        lde_init_status_stop $?
                done
                ;;
        restart)
                $0 stop
                $0 start
                lde_init_status_silent $?
                ;;
        status)
                echo -n "Checking SSH daemon "
                lde_init_srv_status_check /usr/bin/monitord -n sshd -s
                for i in $ADDITIONAL; do
                        echo -n "Checking $i SSH daemon "
                        lde_init_srv_status_check /usr/bin/monitord -n sshd_$i -s
                done
                ;;
        *)
                echo "usage: $0 [start|stop|restart|status]"
                lde_init_error_unknown_option
                ;;
esac

lde_init_status_report

# End of file

As commented below, I discovered that the issue is with AppArmor using the same variable name (debug_flag): when AppArmor sets its own debug_flag to 1, sshd's debug_flag becomes 1 as well. I didn't initially see how the two programs could share the same variable/address space, but I stepped through the code with gdb and saw that the AppArmor code and sshd were using the same address for their debug_flags. Perhaps this issue was already resolved in later versions of apparmor or sshd.
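
For anyone wondering how two programs can end up sharing a variable: if the AppArmor code runs inside the sshd process (for example as a linked library or loaded module) and both define a global symbol named debug_flag, the ELF dynamic linker can bind every reference to a single address, normally the executable's copy. A minimal sketch of that collision; the file names and build flags here are illustrative, and it assumes the host executable exports its symbols (e.g. via -Wl,--export-dynamic), not that the SUSE packages were built exactly this way:

/* flag_lib.c -- stands in for the AppArmor side.
 * Build: gcc -shared -fPIC -o libflag.so flag_lib.c */
int debug_flag = 0;                  /* same global name as the host uses */

void lib_enable_debug(void)
{
        debug_flag = 1;              /* means to set the library's own flag */
}

/* host.c -- stands in for sshd.
 * Build: gcc -o host host.c -Wl,--export-dynamic -L. -lflag -Wl,-rpath,.
 * Run:   ./host */
#include <stdio.h>

int debug_flag = 0;                  /* the host's own global flag */

void lib_enable_debug(void);         /* provided by libflag.so */

int main(void)
{
        lib_enable_debug();
        /* Prints 1: the library's write landed on the host's variable,
         * because the dynamic linker bound both symbols to one address. */
        printf("host debug_flag = %d\n", debug_flag);
        return 0;
}

Whether this exact linkage applies to the SUSE builds would need checking, but it is consistent with gdb showing one address for both flags.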


1 Answer


This means the server is running in debug mode. Have a look at the init script, systemd units, or whatever your SUSE uses to start sshd, and remove the -d option.

The service should start with options like these (example from Fedora):

/usr/sbin/sshd -D $OPTIONS

Also check how $OPTIONS, or any other environment variables appended to the command line, are defined.
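
For context, the only thing that sets debug_flag in sshd is the -d option (see the sshd.c link in the comments below). A small self-contained paraphrase of that option handling; it mirrors the logic rather than quoting the source, and log_level stands in for sshd's options.log_level:

#include <stdio.h>
#include <unistd.h>

static int debug_flag = 0;      /* what the Environment: dump is keyed on */
static int log_level = 0;       /* stand-in for sshd's options.log_level */

int main(int argc, char **argv)
{
        int opt;
        while ((opt = getopt(argc, argv, "d")) != -1) {
                if (opt == 'd') {
                        if (debug_flag == 0) {
                                debug_flag = 1;   /* first -d enables debug mode */
                                log_level = 1;
                        } else if (log_level < 3) {
                                log_level++;      /* further -d adds verbosity */
                        }
                }
        }
        printf("debug_flag=%d log_level=%d\n", debug_flag, log_level);
        return 0;
}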

Jakuje
  • It's not running in debug mode. See the output added to the original post – Rusty Lemur Dec 11 '15 at 18:17
  • Please post that output in an edit to the question; it is not readable like this. – Jakuje Dec 11 '15 at 18:18
  • I checked the code, and `debug_flag` is triggered only by the `-d` option to `sshd`, and only this flag triggers printing of the environment variables, unless you are using some ancient `ssh` version that does it differently. – Jakuje Dec 11 '15 at 18:39
  • I tested running sshd on another server with the -d option, and it does do the exact same printout of variables. But the -d option isn't being specified in the failing system, so how else could it be triggered? – Rusty Lemur Dec 11 '15 at 19:52
  • I asked about the openssh version and you didn't answer, so once more: what openssh version are you using? You also didn't post the init scripts or systemd units used to start `sshd` (though the `ps` output *should* show the same thing). Anyway, in the current version, the environment variables are written [here](https://github.com/openssh/openssh-portable/blob/master/session.c#L1320), based on `debug_flag`, which is set *only* by the option `-d`, [here](https://github.com/openssh/openssh-portable/blob/master/sshd.c#L1529) (though older versions may differ). – Jakuje Dec 12 '15 at 12:14
  • Thank you for your comments. I've added the sshd version at the end of the edited post. Where can I find the init scripts or systemd units? – Rusty Lemur Dec 14 '15 at 18:16
  • Just checked the source, and the logic is [the same](https://github.com/openssh/openssh-portable/blob/V_5_1_P1/session.c#L1290). Init scripts will probably be in `/etc/init.d/sshd` or similar. – Jakuje Dec 14 '15 at 18:56
  • Thank you very much for your help in looking into this. Although I haven't exactly pinpointed the issue, I have discovered that it is apparmor that is causing the flag to change. It seems that our versions of apparmor and sshd both use a "debug_flag" variable (apparmor's is globally scoped), and apparmor is setting its own debug_flag to 1. However, sshd's debug_flag has the same memory address, so it is also set to 1, and thus enabled. – Rusty Lemur Dec 21 '15 at 21:24
  • That is really weird; two applications should never be so closely connected. But I am not very knowledgeable about `apparmor`, so I am not sure how much it touches the "protected" applications. Is it really the case that turning on apparmor debug makes these messages appear? Do you also see debug messages from `sshd` in the system logs? I am very interested in this, so please share if you find more info. – Jakuje Dec 21 '15 at 21:30