SSH - loose notes


This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Auth in scripts

Specifying a port

Note that ssh uses lowercase -p, while scp uses uppercase -P (its lowercase -p means 'preserve times'), and rsync just hands options through to the ssh it runs:


ssh -p 1234


scp -P 1234


rsync -e 'ssh -p 1234'

On speed

On bandwidth, and high-bandwidth networks

Single-connection speed limit

Various SSH implementations have historically been too conservative about the size of some internal buffers.

The effect is the same as having a small TCP window size: not much data may be in flight at one time, so on links with high latency you see single connections being slow even if the link itself can go much faster.

This is mostly an implication of the Bandwidth-Delay Product (BDP), and as such won't be an issue on sub-millisecond-latency LANs, but will be on non-local links with dozens of milliseconds of latency, like most of the internet, where you may get just 2MByte/s on a link that, bandwidth-wise, can easily do ten times that.

Since this is a per-connection issue, and usually only bothersome when transferring files, one workaround is using multiple copy commands and therefore multiple connections, yet this is often a little cumbersome to set up each time, and there are more structural fixes.
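As a sketch of that multiple-connection workaround, the pattern below uses plain cp standing in for scp so it can run anywhere; for real use you would swap the cp for something like scp file host:dest/ (hostnames and paths being yours):

```shell
# Make a scratch source and destination with 8 small files
src=$(mktemp -d); dst=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8; do echo "data$i" > "$src/file$i"; done

# xargs -P runs up to 4 copies at once, i.e. 4 independent processes;
# with scp instead of cp, that would be 4 independent connections
ls "$src" | xargs -P 4 -I{} cp "$src/{}" "$dst/{}"
```

The same -P trick works for any per-file copy command, which is why it also helps with the encryption-load issue below.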

Encryption load

Encryption takes CPU, and most encryption is single-threaded for simplicity and security. You are likely to max out a core before maxing out a single network link.

This is most noticeable in high-throughput, low-latency network links, such as ≥gigabit LAN.

Again, multiple copies solve this, now because multiple independent transfers will be scheduled to different cores.

Note that when using ssh for interoperable convenience more than for strongest security, you can just use a simpler, faster cipher for a particular connection.

For example, when I was testing on a host-to-host gBit line:

  • ...where I can get 116MB/s from an unencrypted TCP transfer
  • ...I get ~34MB/s using scp with its default (apparently aes128-cbc)
  • ...I get ~54MB/s when using -c arcfour
  • ...I get ~95MB/s peaks with two arcfour transfers
  • ...I get ~95MB/s consistently with three arcfour transfers

People who have run such tests seem to end up on arcfour. Don't treat this as a constant - in particular, CPUs with AES instructions (various post-2010, see [1]) should make options like AES-GCM faster.

Note that you can prefer different ciphers via config, see #Ciphers
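For example, a client-side config sketch preferring faster AEAD ciphers for one bulk-transfer host (the host name is made up; check ssh -Q cipher for what your build actually supports):

```
# ~/.ssh/config - prefer fast ciphers for one bulk-transfer host
Host bulkhost
    Ciphers aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-ctr
```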

SSH updates

Apparently OpenSSH improved the buffer size part of this around version 4.7 [2] (~2007). Note that both ends need to be updated.

Further optimizations were experimented with, see e.g. HPN. It can offer more speed in many situations, but requires both sides to be patched. Changes:

  • has larger buffers. This is often the largest improvement in higher-BDP situations
  • has multi-threaded cipher implementations, so you can use more than one CPU core for a single connection
  • allows disabling encryption (without disabling integrity checks)


Compression

When link speed or encryption overhead is your bottleneck, and your data is compressible, compression is quite useful.

...though note that when the link is as fast as your disks (e.g. easily true on gigabit LAN), or your data is not very compressible, compression ends up being overhead that doesn't gain you any speed.

To control per connection:

  • -C means 'compress everything'
  • -o Compression=no
  • -o Compression=delayed, with delayed meaning 'yes, but only start compressing after auth', which seems to be a precaution so that potential zlib exploits can't become pre-auth ssh exploits(verify)

You can set it in config, but may not want to.

On setup latency


Tips and tricks

Go to directory on remote host

I've worked on a cluster where you sometimes want to go to a node or a fileserver -- and would like to start in the same (network-mounted) directory you were in.

I came up with:

function fileserver  {
  local gotodir="$(pwd)"
  ssh -Y -t fileserver "cd '$gotodir' ; bash -l"
  # Actually, I use the following to minimize IO stolen from our NFS server:
  #ssh -Y -t fileserver "cd '$gotodir' ; ionice -c3 bash -l"
}

Copying through tar and ssh


That is, (ab)using ssh's forwarding of stdio as a data pipe, and using it to carry tar data (instead of running a shell).

Doing this instead of using file-aware copying can be

  • faster when you deal with many small files, largely because a continuous TCP stream will spend less time in overhead than many individual transfers (scp, sftp)
  • more convenient (and less IO) because you stream the archive out as you create it, instead of storing it on disk first


tar czf - sourcedir/ | ssh host "tar xzf - -C ~/gohere"
tar czf - sourcedir/ | ssh host "(cd ~/gohere; tar xzf - )"

In other words: Compress to stdout, pipe that to ssh, and on the other end make ssh hand this pipe to a tar that decompresses from its stdin -- in a particular directory.
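As a sanity check of the pipeline's shape, here is the same thing run locally with the ssh hop removed, against scratch directories so it is safe to execute:

```shell
# Local stand-in for:  tar czf - sourcedir/ | ssh host "tar xzf - -C ~/gohere"
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sourcedir"
echo hello > "$src/sourcedir/a.txt"

# pack on one side, unpack on the other; -C switches directory before extracting
( cd "$src" && tar czf - sourcedir/ ) | tar xzf - -C "$dst"
```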


  • You want to watch the directory structure you're packing, and where exactly it will go on the remote end.
  • -C is 'change to this directory', so the above two examples are basically the same.
  • In some cases it makes sense to add a mkdir, but it may be easier to make the created directory part of the tar, doing the work on the sending side
  • This example uses compression via tar.
    • z for gzip. You might want j for bzip2, but only when the tars on both ends support it
    • for uncompressable data, compression will just make things slower
    • if you use ssh with compression anyway, you probably don't want tar to compress too
  • you can preserve permission and ownership information (to the degree it is communicable)
    • use -p on the receiving end's tar; preserving ownership additionally requires extracting as root

Other config notes

Notes on clients

On security

Check whether people brute-force you

To see the brute force attempts, look at your sshd logs.

An attempt at shell-fu that does automatic summaries: (location of your logs can vary. It may e.g. be /var/log/auth.log and you may need to consider log rotation)

Amount per source IP: (probably at least dozens)

cat /var/log/sshd/* /var/log/auth*[^z] /var/log/auth.log | grep Invalid | \
 sed -r 's/([^\ ]+ [^\ ]+) .*(nvalid user|for) (.*) from ([0-9.]+) port.*/\1\t\3\t\4/' | \
 cut -f 3 | sort | uniq -c | sort -n | less

Amount per day: (probably at least hundreds)

cat /var/log/sshd/* /var/log/auth*[^z] /var/log/auth.log | grep Invalid | tr -s ' ' | \
 sed -r 's/([^\ ]+ [^\ ]+) .*(nvalid user|for) (.*) from ([0-9.]+) port.*/\1\t\3\t\4/' | \
 cut -f 1 | sort | uniq -c | sort -n | less

The usernames they try: (generally things like 'admin', 'test', 'test1', 'testuser', 'httpd', 'guest', 'customerservice', 'pi', 'ubuntu', 'hadoop', but also seeing usernames derived from hostnames)

cat /var/log/sshd/* /var/log/auth*[^z] /var/log/auth.log | grep Invalid | \
 sed -r 's/([^\ ]+ [^\ ]+) .*(nvalid user|for) (.*) from ([0-9.]+) port.*/\1\t\3\t\4/' | \
 cut -f 2 | sort | uniq -c | sort -n | less

The source IPs of accepted logins, to see if there are any that may not be you:

cat /var/log/sshd/* /var/log/auth*[^z] /var/log/auth.log | \
 egrep 'Accepted (keyboar|publi|pass)[^\ ]+ for' | \
 sed -r 's/(.*from[\ ])([0-9.]+)([\ ]port.*)/\2'/ | sort | uniq -c | sort -n

The same IPs, without count, but with hostname lookup:

cat /var/log/sshd/* /var/log/auth*[^z] /var/log/auth.log | \
 egrep 'Accepted (keyboar|publi|pass)[^\ ]+ for' | \
 sed -r 's/(.*from[\ ])([0-9.]+)([\ ]port.*)/\2'/ | sort | uniq | xargs -n 1 host
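To sanity-check what that sed extracts, you can feed it a single made-up log line (the line below is invented, in the usual auth.log shape):

```shell
line='Jan 12 03:04:05 myhost sshd[1234]: Invalid user admin from 203.0.113.5 port 51234'
# Produces three tab-separated fields: date ("Jan 12"), username, source IP
echo "$line" | \
 sed -r 's/([^\ ]+ [^\ ]+) .*(nvalid user|for) (.*) from ([0-9.]+) port.*/\1\t\3\t\4/'
```

The cut -f 1 / 2 / 3 in the pipelines above then select the date, username, and IP respectively.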

Preventive policies

At network level

You can try a firewall with some state, such as iptables with its recent module: you can use that to automatically reject connections from sources that open many new connections to a port in a very short period of time.
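A sketch of such rules (this assumes the standard iptables recent match; run as root, and adapt the port and thresholds to taste):

```
# Record each new connection to port 22 per source IP...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --set --name SSH
# ...and drop sources that opened 4 or more within the last 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```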

But: Not very effective against botnet brute-forcing.


Another thing you could do is look at SSHD's output, particularly the failed login attempts, and block any IP which is trying a lot of them (by putting them into hosts.deny or iptables).

Somewhat like the recent thing above, but the setup is done for you, and has longer memory and more configuration.

But: Not very effective against botnet brute-forcing.

At SSH level

Possible policies include:

Allow only specific users to ever log in (AllowUsers in sshd_config) to avoid access via some common accounts you didn't create or really know about (databases and such).

  • people need to bother the sysadmin for each change
  • potential users usually need to go through slow official channels, typically discouraging use

Only allow specific hosts, through firewall or SSHD

  • people need to bother sysadmin for each change
  • ...for each IP, meaning you block their ability to work from even slightly unusual places. Somewhat reasonable if you have a job-specific VPN, otherwise largely defeats the point

Don't allow root logins via SSH (often the default)

  • While it doesn't prevent much, it generally makes sense to use a regular account for admin (preferably not called something obvious like 'admin'), and use sudo or sudo su when you want something superuser-done

Allow only key-based logins

  • Bothersome/impossible to log in from unusual places.
  • Only really safe if users understand the importance of the security of their key
  • ...but still, it does block brute forcing.

fail2ban and denyhosts

Fail2ban watches sshd logs, mainly for failed login attempts, and updates the firewall to block the IPs they came from. By default it only watches SSH;

there are further plugins, like webserver- and mail-related ones - these are disabled by default.
You can make it look at further services, and make it take additional actions.


Denyhosts is similar, specific to just SSH.

It also allows you to sync with a centralized list of blocked servers.


Whitelisting IPs

In fail2ban you can whitelist IPs in /etc/fail2ban/jail.conf, by editing the ignoreip line in the default section and restarting fail2ban (you can also do it at runtime(verify)). For example (values here are just an illustration):

ignoreip = 127.0.0.1/8 192.0.2.10

In denyhosts, it seems whitelisting is impossible, and removing things is somewhat manual.

See the denyhosts FAQ, "How can I remove an IP address that DenyHosts blocked?", where WORK_DIR is often /var/lib/denyhosts/


Dropped idle connections

Some modems, routers, and access points will drop TCP connections that have not sent anything for a few minutes - sometimes shorter (1 or 2 minutes), sometimes much longer (say, an hour).

For many types of connections, that amount of time sending absolutely 0 bytes probably means a connection that somehow never got actively closed (and there are TCP states without timeouts), so this makes some sense as a "don't keep collecting entries in the modem's limited memory until its next reboot".

Yet SSH is an exception: a login sitting at a prompt is legitimately silent, so it will be disconnected quickly, which is annoying if you are doing work or admin stuff on it.

To avoid this, you can ensure the connection is never silent for that long.

(There are also a few reasons you might not want overly quick checking)

Manual / quick and dirty

There are hackish workarounds that are enough whenever the situation is an exception rather than the rule.

Consider running a talkative command, such as top or watch -n 15 date.

More structural

For permanently configured, SSH-based solutions, see the rest of this section:

SSH v2 can poll via the encrypted channel - and will itself drop the connection if it gets no response.

You can configure both sides to do such polling. Both are disabled by default.

This feature allows a few polls to go unanswered, to be flexible about routes that may be down or congested for a few seconds. The allowed count defaults to 3, which is usually more than enough.

From the server side

The SSH server can ask the client to do this, typically by editing sshd_config to have something like:

ClientAliveInterval 60
ClientAliveCountMax 3

You want this polling to happen faster than the timeout of the connection-dropping culprit. A value on the order of 60 (seconds) covers most cases. Slower is often fine too.

From the client side

Each client can ask a server to do this, by using

ServerAliveInterval 60
ServerAliveCountMax 3

in the client config. It's probably easiest to put it under Host * to make it apply to all.

You can also hand it in on the command line of a specific connection, e.g.

ssh -o ServerAliveInterval=15


If you tell the sshd config TCPKeepAlive yes, the OS will occasionally probe each connection at TCP level.

When an idle connection has disappeared at the remote end without mentioning it (e.g. because that side crashed), this behaviour ensures it will be noticed sooner rather than later (or never), and lets us reclaim local resources.

For most of these, see man ssh_config and man sshd_config

Keys not being picked up

In theory, ssh will pick up identities from your ~/.ssh/ directory.

It may be that the .ssh-related file permissions are not secure enough for ssh's liking.

If you're not comfortable with permission details and want to check whether this is your problem, you can try adding StrictModes no to your sshd_config and restarting sshd. If that fixes it, the proper fix is to revert that, and set your permissions properly.
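The permissions ssh usually wants, sketched here against a scratch directory so it is safe to run as-is; point SSHDIR at your real ~/.ssh to apply it (your homedir should also not be group/world-writable):

```shell
# Scratch directory standing in for ~/.ssh
SSHDIR="${SSHDIR:-$(mktemp -d)/.ssh}"
mkdir -p "$SSHDIR"
touch "$SSHDIR/authorized_keys" "$SSHDIR/id_ed25519" "$SSHDIR/id_ed25519.pub"

chmod 700 "$SSHDIR"                     # only the owner may enter .ssh
chmod 600 "$SSHDIR"/id_*                # private keys owner-only...
chmod 644 "$SSHDIR"/*.pub               # ...public keys may be world-readable
chmod 600 "$SSHDIR/authorized_keys"     # authorized_keys owner-only
```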

Non-shell uses of ssh

...such as use from cron.

Consider which user it runs as: root, or the user the crontab belongs to (not all crons have per-user crontabs).

Since you want things to work without interaction, you will need to set up a keypair login, and either have it be passphrase-less, or perhaps use an agent, to get a one-interaction-per-bootup deal.

Shell-less use of SSH seems to not pick up the keys from the user's ~/.ssh/ directory (why? when?), in which case it will probably fall back to interactive login and fail because there is no terminal. If you suspect this, you can check it using ssh -v in the same place.

You can explicitly hand in a key via -i
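A sketch of what such a job's ssh invocation might look like (the key path and host name are made up); ssh -G, which prints the effective client options without connecting, is also handy for debugging this kind of setup:

```shell
# What the cron job would run; BatchMode makes it fail fast rather than
# sit waiting at a password prompt when the key is not accepted:
#   ssh -i /home/backup/.ssh/id_backup -o BatchMode=yes backuphost run-backup

# Preview the effective client config without connecting (OpenSSH 6.8+):
ssh -G -o BatchMode=yes backuphost | grep -i '^batchmode'
```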

Slow login


Logging in with a -v (or -vv or even -vvv) may help explain what it's doing. You may e.g. discover that

  • making config more specific to hosts helps
  • GSSAPI is a good idea when you need it - but when not used, or not correctly set up, it may stall while adding nothing
    • so if you don't use it for a specific host, or at all, tell your client GSSAPIAuthentication no for those hosts (or even all)

Slow to ask for password (i.e. time between ssh command and getting a login prompt)

  • misconfigured DNS on the client side (meaning a slow connect at IP level to the SSH server)
    • to check, see if the name resolves quickly (and keep in mind caches may make that test unrepeatable; it being randomly slow and fast may be a hint towards this issue for the same reason)
    • it can also help to see what part it pauses on in a ssh -vvv
  • misconfigured IPv6
    • to avoid IPv6, use AddressFamily inet in config, or -4 on a command

Slow to show prompt (i.e. between entering passphrase and having the shell load)

  • If only sometimes, the server may be heavily loaded.
  • initial login needing work, e.g.
quota on NFS


  • SSH daemon doing reverse DNS lookups
it's used for
hostname-based host auth (and not even key-based host auth)
matchin hostnames in from= entries in authorized_keys
basically, unless your security guys / you know why you need this, you don't need this.
also, most clients will not have reverse DNS entry
so usually you might as well disable this with UseDNS no
may be enabled or disabled by default on your install (and changed in openssh versions)

Immediate session close

I had a case where I was logging in successfully, but was immediately disconnected. The client side saw:

session closed for user username

and increasing client verbosity didn't add anything.

The problem was that the account I was logging into had /bin/false as its shell (/bin/false was the default for newly adduser'd accounts), which ssh ran without error, but which of course means immediate termination.
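To check which shell an account actually gets, look at field 7 of its passwd entry (root is used below only because it exists everywhere; the account name in the fix is hypothetical):

```shell
# Which shell does the account get on login?
getent passwd root | cut -d: -f7

# If it is /bin/false or /usr/sbin/nologin, give it a real one, e.g.:
#   sudo chsh -s /bin/bash someuser
```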


"wrong authorisation protocol attempted"

In my case, my ~/.Xauthority file had somehow become owned by root.

The simplest fix is to rm ~/.Xauthorit* and log in again.


Moderately generic (OpenSSH?) error, frequently related to PuTTY and key exchange (possibly not understanding the offering of newer keys?).

In that case, a newer PuTTY should fix it.

Tweaking key exchange details also seems to help.

refused connect from IP

This seems to often be sshd adhering to hosts.deny (e.g. due to denyhosts / fail2ban).

Connection closed by IP [preauth] in server logs seems to mean the client realized it has run out of authentication options. (verify)

e.g. seen when auth methods require a password/passphrase but are non-interactive, or where a keypair is rejected.

Unable to negotiate with hostname port 22: no matching cipher found

There is no match between the list of ciphers the server offers, and the list the client accepts.

Can be caused by:

  • updates disabling weaker ciphers (usually only a problem if one side hasn't been updated in years)
  • you've configured specific ciphers via the Ciphers option
    • in my case I had tried to force faster ciphers, which have since been disabled

drop connection #number from [CLIENTIP]:PORT on [SERVERIP]:PORT past MaxStartups=

sshd limits the amount of concurrent not-yet-authenticated connections (the MaxStartups option, default 10:30:100 in recent OpenSSH), and drops new ones past that. Seeing this a lot usually points at brute forcing, or very busy automation.

"protocol version mismatch -- is your shell clean?"


Can mean:

  • attempting to rsync over a shell-restricted SSH -- because it can't start the remote rsync process
  • login scripts (.bashrc and the like) printing output, which ends up in, and corrupts, the protocol stream - which is what the 'is your shell clean' part refers to

"Too many authentication failures for username"


MaxAuthTries was exceeded, within a single login attempt.

Has a few possible causes - but the one I've run into most is offering many keys before offering the one that actually applies, usually because I had a bunch of host/account-specific ones that were actually configured as "try on any host".

If you can make the login verbose somehow (-vv on CLI ssh), this should be simple to diagnose.

IdentitiesOnly yes

  • ...on the CLI means 'use only the key I specify here', none from config or agent (verify)
    • you may actually need it when using -i
  • ...in the config file means only use the ones in the relevant host section, and none from the agent (verify)

Not using it at all means all identities from agent and config (even those in host sections) will be offered, without much control over the order (beyond that agent entries go first).

So yes, if you use keys a lot, you will often want IdentitiesOnly.

If you e.g. have some general identities, and they'd run some hosts out of attempts before auth gets to interactive methods, you may want:

PreferredAuthentications keyboard-interactive,password
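For example, a client config sketch combining the two (host names are hypothetical):

```
# ~/.ssh/config
Host work
    IdentityFile ~/.ssh/id_work
    IdentitiesOnly yes     # offer only this key, not everything the agent holds

Host legacybox
    PreferredAuthentications keyboard-interactive,password
```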


X11 connection rejected because of wrong authentication


  • couldn't write to ~/.Xauthority, which can e.g. be due to
    • bad permissions on the homedir
    • bad permissions on ~/.Xauthority (removing it is easiest)
    • a full disk (or quota)
  • not being the user you logged in as (verify)
    • e.g. after a su, because that does not fully switch you to the other user (verify)
  • X11Forwarding not enabled (verify)

rexec line something: Deprecated option something

These are complaints about your sshd_config.

In theory you should go through them and see whether your configuration actually does something different now,

...but in most cases you can just remove them - because most of these will have appeared around 2016, when openssh dropped SSH1 support, and with it all options that only applied to SSH1. (apparently including KeyRegenerationInterval, ServerKeyBits, RSAAuthentication, RhostsRSAAuthentication, and UsePrivilegeSeparation) (verify)

Host Key verification failed
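This means the host key the server presented does not match the one remembered in known_hosts. That can simply mean the host was reinstalled or its key rotated, but could in principle also be an intercepted connection, so check before blindly accepting the new key. Once you have, the usual fix is removing the stale entry; sketched here on a scratch known_hosts file (drop the -f to act on the real ~/.ssh/known_hosts):

```shell
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/key"    # throwaway key to fake an entry
printf 'oldhost %s\n' "$(cut -d' ' -f1-2 "$tmp/key.pub")" > "$tmp/known_hosts"

# Remove the remembered key for that host (keeps a .old backup):
ssh-keygen -R oldhost -f "$tmp/known_hosts"
```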



SFTP

SFTP is a file transfer protocol designed as an extension of SSH.

SFTP is not FTP over SSH, and is unrelated to FTPS (plain FTP over TLS/SSL).

SFTP is probably mostly used because you have an existing SSH setup, so it conveniently inherits that setup's authentication and encryption.

SFTP also has a few more features than SCP (Secure Copy, a similar extension of SSH), including more filesystem operations (list, remove) and download resuming.

...enough features that there is software to mount SFTP as if it were a local drive.

You may care to know that OpenSSH can itself do a chrooted file transfer - see SSH_jail#SFTP_copying_only


Ciphers

See man ssh_config for the Ciphers entry.

The cipher that gets used is negotiated per connection, because it depends on which ciphers are supported, and which are enabled, on both sides - and that changes over time.

For example, openssh more recently started to consider most -cbc variants deprecated in favour of -ctr variants, and did the same for arcfour and blowfish.

Listing SSH connections

The simplest estimation is probably noticing entries with IPs/hostnames in:


Or perhaps:

last | grep 'still logged in'

If you want to see port numbers and tunnels and such, look at: (note difference between ssh and sshd entries)

sudo lsof -i -n | egrep 'ssh'

See also