One of the main functions of OpenSSH is that of accessing and running programs on other systems. That is, after all, one of the main purposes of the program. There are several ways to expand upon that, either interactively or as part of unattended scripts. So in addition to an interactive login, ssh(1) can be used to simply execute a program or script. Logout is automatic when the program or script has run its course. Some combinations are readily obvious. Others require more careful planning. Sometimes it is enough of a clue just to know that something can be done, at other times more detail is required. A number of examples of useful combinations of using OpenSSH to run remote tasks follow.
Run a Remote Process
An obvious use of ssh(1) is to run a program on the remote system and then exit. Often this is a shell, but it can be any program available to the account. For feedback, when the remote process completes, ssh(1) terminates and passes on the exit value of the last remote process to finish. In this way, it can be used in scripts, and the outcome of the remote processes can drive local decisions.
The following will return success, 0, on the local system where ssh(1) was run.
$ ssh -l fred server.example.org /bin/true
$ echo $?
The following will return failure, 1, on the local system where ssh(1) was run.
$ ssh -l fred server.example.org /bin/false
$ echo $?
If any other value, from 0 to 255, is returned, ssh(1) will pass it back from the remote host to the local host.
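The propagated exit status makes it easy to branch on the outcome of a remote task in a script. The following is a minimal sketch; the helper name run_remote is hypothetical, and sh -c stands in for the actual ssh(1) invocation so the branching logic can be shown without a network connection.

```shell
#!/bin/sh
# Sketch: branch on the exit status of a remote command.
# run_remote is a hypothetical wrapper; in real use its body would be
#   ssh -l fred server.example.org "$1"
# but "sh -c" stands in here so no network is needed.
run_remote() {
    sh -c "$1"
}

if run_remote 'true'; then
    echo "remote check passed"
fi

run_remote 'exit 42'
echo "remote exit status: $?"
```

The same pattern works unchanged with a real ssh(1) call in the wrapper, since ssh(1) relays the remote exit status.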
Run a Remote Process and Capture Output Locally
Output from programs run on the remote machine can be saved locally using a normal redirect. Here we run dmesg(8) on the remote machine:
$ ssh -l fred server.example.org dmesg > dmesg.from.server.log
Interactive processes will be difficult or impossible to operate in that manner because no output will be seen. For interactive processes requiring any user input, output can be piped through tee(1) instead to send the output both to the file and to stdout. This example runs an anonymous FTP session remotely and logs the output locally.
$ ssh -l fred server.example.org "ftp -a anotherserver" | tee ftp.log
It may be necessary to force pseudo-TTY allocation to get both input and output to be properly visible.
$ ssh -t -l fred server.example.org "ftp -a anotherserver" | tee /home/fred/ftp.log
The simplest way to read data on one machine and process it on another is to use pipes.
$ ssh email@example.com 'cat /etc/ntpd.conf' | diff /etc/ntpd.conf -
Run a Local Process and Capture Remote Data
Data can be produced on one system and used on another. This is different from tunneling X, where both the program and the data reside on the other machine and only the graphical interface is displayed locally. Again, the simplest way to read data on one machine and use it on another is to use pipes.
$ cat /etc/ntpd.conf | ssh firstname.lastname@example.org 'diff /etc/ntpd.conf -'
In the case where the local program expects to read a file from the remote machine, a named pipe can be used in conjunction with a redirect to transfer the data. In the following example, a named pipe is created as a transfer point for the data. Then ssh(1) is used to launch a remote process that sends its output to stdout; a redirect on the local machine captures that output and sends it to the named pipe, where a local program can read it.
In this particular example, it is important to add a filter rule to tcpdump(8) itself to prevent an infinite feedback loop if ssh(1) is connecting over the same interface as the data being collected. This loop is prevented by excluding either the SSH port, the host used by the SSH connection, or the corresponding network interface.
$ mkfifo -m 600 netdata
$ ssh -fq -l fred -i /home/fred/.ssh/key_rsa server.example.org \
      'sudo tcpdump -lqi eth0 -w - "not port 22"' > netdata
$ wireshark -k -i netdata &
Any sudo(8) privileges for tcpdump(8) also need to operate without an interactive password, so great care and precision must be exercised to spell out in /etc/sudoers exactly which program and parameters are to be permitted and nothing more. The authentication for ssh(1) must also occur non-interactively, such as with a key and key agent. Once the configurations are set, ssh(1) is run and sent to the background after connecting. With ssh(1) in the background the local application is launched, in this case wireshark(1), a graphical network analyzer, which is set to read the named pipe as input.
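The FIFO hand-off itself can be sketched entirely locally: a background producer writes into the named pipe while a consumer reads from it, just as the backgrounded ssh(1) and tcpdump(8) feed wireshark(1) above. All paths and commands here are local stand-ins, not the real capture pipeline.

```shell
#!/bin/sh
# Local sketch of the FIFO pattern: producer -> named pipe -> consumer.
pipe=$(mktemp -u) || exit 1
mkfifo -m 600 "$pipe"

# Producer: stands in for "ssh -fq ... 'sudo tcpdump -w -' > pipe".
printf 'packet one\npacket two\n' > "$pipe" &

# Consumer: stands in for "wireshark -k -i pipe"; here it just counts
# the lines that arrived through the pipe.
lines=$(($(wc -l < "$pipe")))
echo "read $lines lines from the pipe"

wait
rm -f "$pipe"
```

Note that a FIFO blocks until both ends are open, which is why the producer is backgrounded before the consumer reads.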
On some systems, process substitution can be used to simplify the transfer of data between the two machines. Doing process substitution requires only a single line.
$ wireshark -k -i <( ssh -fq -l fred -i /home/fred/.ssh/key_rsa server.example.org \
      'sudo tcpdump -lqi eth0 -w - "not port 22"' )
However, process substitution is not POSIX compliant and thus not portable across platforms. It is found in shells such as bash(1) and zsh(1), but not in minimal POSIX shells. So, for portability, use a named pipe.
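As a minimal local illustration of what process substitution does (assuming bash(1) is installed), each <( ... ) form expands to a file name whose contents are that command's output, which is exactly what lets wireshark(1) treat the ssh(1) pipeline as an input file above.

```shell
#!/usr/bin/env bash
# Process substitution: each <( ... ) expands to a readable file name
# holding that command's output, so diff can compare two streams.
if diff <(printf 'a\nb\n') <(printf 'a\nb\n') >/dev/null; then
    same=yes
    echo "outputs identical"
fi
```

Running this under a strictly POSIX shell such as dash(1) would fail with a syntax error, which is the portability concern noted above.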
Run a Remote Process While Either Connected or Disconnected
There are several different ways to leave a process running on the remote machine. If the intent is to come back to the process and check on it periodically then a terminal multiplexer is probably the best choice. For simpler needs there are other approaches.
Run a Remote Process in the Background While Disconnected
Many routine tasks can be set in motion and then left to complete on their own without needing to stay logged in. When running a remote process in the background, it is useful to spawn a shell just for that task.
$ ssh -t -l fred server.example.org 'sh -c "tar zcf /backup/usr.tgz /usr/" &'
Another way is to use a terminal multiplexer. An advantage with them is being able to reconnect and follow the progress from time to time, or simply to resume work in progress when a connection is interrupted such as when traveling. Here tmux(1) reattaches to an existing session or else, if there is none, then creates a new one.
$ ssh -t -l fred server.example.org "tmux a -d || tmux"
On older systems, screen(1) is often available. Here it is launched remotely to create a new session if none exists, or to re-attach to a session that is already running.
$ ssh -t -l fred server.example.org "screen -d -R"
Once a screen(1) session is running, it is possible to detach it and close the SSH connection without disturbing the background processes it may be running. That can be particularly useful when hosting certain game servers on a remote machine. The terminal session can then be reattached in progress with the same two options.
$ ssh -t -l fred server.example.org "screen -d -R"
Keeping Authentication Tickets for a Remote Process After Disconnecting
Authentication credentials are often deleted upon logout and thus any remaining processes no longer have access to whatever the authentication tokens were used for. In such cases, it is necessary to first create a new credential cache sandbox to run an independent process in before disconnecting.
$ pagsh $ /usr/local/bin/a-slow-script.sh
Kerberos and AFS are two examples of services that require valid, active tickets. Using pagsh(1) is one solution for those environments.
Automatically Reconnect and Restore an SSH Session Using tmux(1) or screen(1)
Active, running sessions can be restored after either an intentional or accidental break by using a terminal multiplexer. Here ssh(1) is configured to give up and exit after 15 seconds (three tries of five seconds each) of not being able to reach the server. Then the tmux(1) session is reattached or, if absent, created.
$ while ! ssh -t email@example.com -o 'ServerAliveInterval 5' \
      'tmux attach -d || tmux new-session'; do
      true
  done
Each time ssh(1) exits, the shell tries to connect again, and once the connection succeeds it looks for a tmux(1) session to attach to. That way, if the TCP or SSH connection is broken, none of the applications or sessions running inside the terminal multiplexer stop. Here is an equivalent example for screen(1) on older systems.
$ while ! ssh -t firstname.lastname@example.org -o 'ServerAliveInterval 5' \
      'screen -d -R'; do
      true
  done
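The retry logic of both loops can be sketched locally with a stand-in for the ssh(1) command that "fails" twice before "connecting", showing that the while ! loop keeps retrying until a connection succeeds. fake_ssh is a hypothetical stand-in, not a real command.

```shell
#!/bin/sh
# Local sketch of the reconnect loop: fake_ssh stands in for
# "ssh -t ... 'tmux attach -d || tmux new-session'" and succeeds
# only on its third invocation.
attempts=0
fake_ssh() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

while ! fake_ssh; do
    true    # in real use, the shell immediately retries ssh here
done
echo "connected after $attempts attempts"
```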
The above examples are deliberately simple: at their most basic they resume a shell where it left off after the TCP connection was broken. Both tmux(1) and screen(1) are capable of much more and are worth exploring, especially for travelers and telecommuters.
See also the section on "Public Key Authentication" to integrate keys into the process of automatically reconnecting.
Sharing a Remote Shell
Teaching, team programming, supervision, and creating documentation are some examples of when it can be useful for two people to share a shell. There are several options for read-only viewing as well as for multiple parties being able to read and write.
Read-only Monitoring or Logging
Pipes and redirects are a quick way to save output from an SSH session or to allow additional users to follow along read-only.
One sample use-case is when a router needs to be reconfigured and is available via serial console. Say the router is down and a consultant must log in via another user's laptop to access the router's serial console, and it is necessary to supervise what is done or to help at certain stages. The same technique is also very useful for documenting various activities, including configuration or installation.
Read-only Using tee(1)
Capture shell activity to a log file and, optionally, use tail(1) to watch it in real time. The utility tee(1), like a t-joint in plumbing, is used here to send output to two destinations, both stdout and a file.
$ ssh email@example.com | tee /tmp/session.log
Force Serial Session with Remote Logging Using tee(1)
The tee(1) utility can capture output from any program that can write to stdout. It is very useful for walking someone at a remote site through a process, supervising, or building documentation.
This example uses chroot(8) to keep the choice of actions as limited as possible. Actually building the chroot jail is a separate task. Once built, the guest user is made a member of the group 'cconsult'. The serial connection for the test is on device ttyUSB0, a USB-to-serial converter, and cu(1) is used for the connection. tee(1) takes the output from cu(1) and saves a copy to a file for logging while the program is in use. The following would go in sshd_config(5):
Match Group cconsult
    ChrootDirectory /var/chroot-test
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand cu -s 19200 -l /dev/ttyUSB0 | tee /var/tmp/cu.log
$ tail -f /var/tmp/cu.log
It is possible to automate some of the connection. Make a script, such as /usr/local/bin/screeners, then use that script with the ForceCommand directive. Here is an example of a script that tries to reconnect to an existing session. If no sessions already exist, then a new one is created and automatically establishes a connection to a serial device.
#!/bin/sh
# try attaching to an existing screen session,
# or if none exist, make a new screen session
/usr/bin/screen -d -R || \
    /usr/bin/screen \
    /bin/sh -c "/usr/bin/cu -s 19200 -l /dev/ttyUSB0 | \
        /usr/bin/tee /tmp/consultant.log"
Interactive Sharing Using a Terminal Multiplexer
If the same account is going to be sharing the session, then it's rather easy. In the first terminal, start tmux(1) where 'sessionname' is the session name:
$ tmux new-session -s sessionname
Then in the second terminal:
$ tmux attach-session -t sessionname
That's all that's needed if the same account is logged in from different locations and will share a session. For different users, you have to set the permissions on the tmux(1) socket so that both users can read and write it. That will first require a group which has both users as members.
Then after both accounts are in the shared group, in the first terminal, the one with the main account, start tmux(1) as before but also assign a name for the session's socket. Here 'sessionname' is the session name and 'sharedsocket' is the name of the socket:
$ tmux -S /tmp/shareddir/sharedsocket new-session -s sessionname
Then change the group of the socket and the socket's directory to a group that both users share in common. Make sure that the socket permissions allow the group to write the socket. In this example the shared group is 'foo' and the socket is /tmp/shareddir/sharedsocket.
$ chgrp foo /tmp/shareddir/
$ chgrp foo /tmp/shareddir/sharedsocket
$ chmod u=rwx,g=rx,o= /tmp/shareddir/
$ chmod u=rw,g=rw,o= /tmp/shareddir/sharedsocket
Finally, have the second account log in and attach to the designated session using the shared socket.
$ tmux -S /tmp/shareddir/sharedsocket attach-session -t sessionname
At that point, either account will be able to both read and write to the same session.
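The permission scheme above can be verified with a quick local sketch, using a temporary directory and the current user's own group as stand-ins for /tmp/shareddir and the shared group 'foo'; an ordinary file stands in for the tmux socket.

```shell
#!/bin/sh
# Apply the shared-socket permission scheme to stand-in paths and
# capture the resulting modes for inspection.
dir=$(mktemp -d)
: > "$dir/sharedsocket"

# In real use the group would be the shared group 'foo'; here the
# current user's primary group stands in so no root access is needed.
chgrp "$(id -gn)" "$dir" "$dir/sharedsocket"
chmod u=rwx,g=rx,o= "$dir"
chmod u=rw,g=rw,o= "$dir/sharedsocket"

dirmode=$(ls -ld "$dir" | cut -c1-10)
sockmode=$(ls -l "$dir/sharedsocket" | cut -c1-10)
echo "$dirmode $sockmode"

rm -rf "$dir"
```

The directory must be group-executable so the second user can reach the socket, and the socket itself must be group-writable for the shared session to work.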
If the same account is going to share a screen(1) session, then it's an easy procedure. In the one terminal, start a new session and assign a name to it. In this example, 'sessionname' is the name of the session:
$ screen -S sessionname
In the other terminal, attach to that session:
$ screen -x sessionname
If two different accounts are going to share the same screen(1) session, then the following extra steps are necessary. The first user does this when initiating the session:
$ screen -S sessionname
^A :multiuser on
^A :acladd user2
Then the second user does this:
$ screen -x user1/sessionname
In screen(1), if more than one user account is used, the aclchg command can remove write access for the other user:

^A :aclchg user -w "#"

Note that screen(1) must be installed SUID for multiuser support. If it is not set, an error message will appear when the second user tries to connect. It might also be necessary to set the permissions of /var/run/screen to 755.
Display Remote Graphical Programs Locally Using X11 Forwarding
It is possible to run graphical programs on the remote machine and have them displayed locally by forwarding X11, the current implementation of the X Window system. X11 is used to provide the graphical interface on many systems. See the website www.X.org for its history and technical details. It is built into most desktop operating systems. It is even distributed as part of Macintosh OS X, though there it is not the default method of graphical display.
X11 forwarding is off by default and must be enabled on both the SSH client and server if it is to be used.
X11 uses a client-server architecture: the X server is the part that does the actual display for the end user, while the various programs act as clients that connect to the server. Thus, by putting the client and server on different machines and forwarding the X11 connections, it is possible to run programs on other computers yet have them displayed and available as if they were running on the user's own computer.
A note of caution is warranted. Allowing the remote machine to forward X11 connections gives it and its applications access to many devices and resources on the machine hosting the X server, namely whatever the user account itself can access. So forwarding should only be done when the other machine, that user account, and its applications are trusted.
On the server side, to enable X11 forwarding by default, put the line below in sshd_config(5), either in the main block or a Match block:

X11Forwarding yes
On the client side, forwarding of X11 is also off by default, but can be enabled in any of three ways: in ssh_config(5), or with either the -X or -Y run-time option.
$ ssh -l fred -X desk.example.org
The connection may be slow, however. If responsiveness is a factor, consider a SOCKS proxy instead, or some other technology altogether, such as FreeNX.
Using ssh_config(5) to Specify X11 Forwarding
X11 forwarding can be enabled in /etc/ssh_config for all accounts, either for all outgoing SSH connections or for just specific hosts.
Settings in ~/.ssh/config apply to just one account; there, forwarding can be enabled by default for an individual host, identified by hostname or IP address.
Host desk.example.org
    ForwardX11 yes
And here it is enabled for a specific machine by IP address:
Host 192.168.111.25
    ForwardX11 yes
Likewise, use limited pattern matching to allow forwarding for a subdomain or a range of IP addresses. Here it is enabled for any host in the pool.example.org domain, any host from 192.168.100.100 to 192.168.100.109, and any host from 192.168.123.1 through 192.168.123.254:
Host *.pool.example.org
    ForwardX11 yes

Host 192.168.100.10?
    ForwardX11 yes

Host 192.168.123.*
    ForwardX11 yes
Again, X11 is built into most desktop systems. For OS X, which has its roots in NeXTSTEP, it is an optional add-on. X11 support may be missing from some particularly outdated legacy platforms, but even there it is often possible to retrofit it using the right tools, one example being Xming.
Locking Down a Restricted Shell
A restricted shell sets up a more controlled environment than what is normally provided by a standard interactive shell. Though it behaves almost identically to a standard shell, only whitelisted capabilities are available, with the rest disabled. The restrictions include, but are not limited to, the following:
- The SHELL, ENV, and PATH variables cannot be changed.
- Programs can't be run with absolute or relative paths.
- Redirections that create files can't be used (specifically >, >|, >>, <>).
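The first two restrictions can be observed directly, assuming bash(1) is installed; bash -r starts a restricted shell in the same way invoking it as rbash does.

```shell
#!/bin/sh
# Restricted bash refuses command names containing a slash and refuses
# to change PATH; both attempts should report "blocked".
bash -rc '/bin/true' 2>/dev/null && slash=allowed || slash=blocked
bash -rc 'PATH=/tmp'  2>/dev/null && path=allowed  || path=blocked
echo "slash path: $slash"
echo "PATH change: $path"
```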
Even with these restrictions, there are several ways by which it is trivial to escape from a restricted shell: if normal shells are available anywhere in the path, they can be launched instead; if regular programs in the available path provide shell escapes, those can be used to reach a full shell; and if sshd(8) is configured to allow arbitrary programs to be run independently of the shell, a full shell can be launched that way. So there is more to safely using restricted shells than just setting the account's shell to /bin/rbash and calling it a day. Several steps are needed to make escaping the restrictions as difficult as possible, especially over SSH.
(The following steps assume familiarity with the appropriate system administration tools and their use. Their selection and use are not covered here.)
First, create a directory containing a handful of symbolic links that point to white-listed programs. The links point to the programs that the account should be able to run when the directory is added to the PATH environment variable. These programs should have no shell escape capabilities and, obviously, they should not themselves be unrestricted shells.
If you want to prevent exploration of the system at large, remember to also lock the user into a chroot or jail. Even without programs like ls(1) and cat(1), exploration is still possible (see below: "ways to explore without ls(1) and cat(1)").
Symbolic links are used because the originals are, hopefully, maintained by package management software and should not be moved. Hard links cannot be used if the original and whitelisted directories are in separate filesystems. Hard links are necessary if you set up a chroot or jail that excludes the originals.
$ ls -l /usr/local/rbin/
total 8
lrwxr-xr-x  1 root  wheel    22 Jan 17 23:08 angband -> /usr/local/bin/angband
lrwxr-xr-x  1 root  wheel     9 Jan 17 23:08 date -> /bin/date
-rwxr-xr-x  1 root  wheel  2370 Jan 17 23:18 help
lrwxr-xr-x  1 root  wheel    12 Jan 17 23:07 man -> /usr/bin/man
lrwxr-xr-x  1 root  wheel    13 Jan 17 23:09 more -> /usr/bin/more
lrwxr-xr-x  1 root  wheel    28 Jan 17 23:09 nethack -> /usr/local/bin/nethack-3.4.3
...
Next, create a minimal .profile for that account. Set its owner to 'root'. Do this for its parent directory too (which is the user's home directory). Then, allow the account's own group to read both the file and the directory.
$ cd /home/fred
$ cat .profile
PATH=/usr/games:/usr/local/rbin
export PATH HOME TERM
$ ls -ld . .profile
drwxr-xr-x  3 root  fred  512 Jan 17 23:20 .
-rw-r--r--  1 root  fred   48 Jan 17 23:20 .profile
Next, create a group for the locked down account(s) and populate it. Here the account is in the group games and will be restricted through its membership in that group.
$ groups fred
fred games
Next, lock down SSH access for that group or account with a ForceCommand directive in the server configuration, applied to the selected group. This is necessary to prevent trivial circumvention through the SSH client by calling a shell directly, such as with 'ssh -t firstname.lastname@example.org /bin/sh' or similar. Remember to disable forwarding if it is not needed. For example, the following can be appended to sshd_config(5) so that any account in the group 'games' gets a restricted shell no matter what it tries with the SSH client.
Match Group games
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand rksh -l
Note that the restricted shell is invoked with the -l option by the ForceCommand so that it will be a login shell that reads and executes the contents of /etc/profile and $HOME/.profile if they exist and are readable. This is necessary to set the custom PATH environment variable. Again, be sure that $HOME/.profile is not in any way editable or overwritable by the restricted account. Also note that this disables SFTP access by that account, which prevents quite a bit of additional mischief.
Last but not least, set the account's login shell to the restricted shell. Include the full path to the restricted shell. It might also be necessary to add it to the list of approved shells found in /etc/shells first.
Beware: ways to explore without ls(1) and cat(1)
# To see a list of files in the current working directory:
echo *

# To see the contents of a text file:
while read j; do echo "$j"; done < .profile