1. Using SSH with Public Key authentication
We want our master node to have SSH access to all of the compute nodes.
During the default kickstart installation a root password (cobbler) is set. Any connection attempt to one of the nodes will therefore prompt us for this password:
[root@master ~]# ssh c01
root@c01's password:
Last login: Tue Feb 26 11:46:47 2019 from 192.168.16.1
Password authentication is error-prone and not secure. For cluster administration it is common and recommended to disable password authentication completely and replace it with public key authentication.
For this kind of authentication each client needs to create a key pair:
A public key, which you share and install on SSH-enabled servers. SSH servers use this key to verify your identity.
A private key, which must be kept secret and that is optionally protected by a password. This key must not be shared with anyone and must be protected. It is used to authenticate yourself with any SSH server that knows your public key.
1.1. Creating a new key pair
Key pairs are created using the ssh-keygen command on your SSH client. By default, the key created will be placed in $HOME/.ssh/ and have a name based on the algorithm used. Currently the default algorithm on CentOS 7 is RSA with 2048 bit strength. Other systems might use different defaults. This creates id_rsa for the private key and id_rsa.pub for the public key. Create a key pair without a password.
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:2cxsFdPydgyCOEvVK/PGJc9kVCkdzBjNnY45TwooduE root@clm5
The key's randomart image is:
+---[RSA 2048]----+
| o.oooOo*|
| +.. +=+O.|
| ..oo .*=o |
| o.Eooo+*oo|
| . S *=.O=. |
| . +.o. |
| . |
| |
| |
+----[SHA256]-----+
If you look at the permissions of these two files, you will notice that the private key is only readable by root. Only the public key is world readable. SSH will refuse to use any keys that are too liberal with their permissions.
[root@master ~]# ls -lah ~/.ssh/id_rsa*
-rw------- 1 root root 1.7K Feb 26 06:44 /root/.ssh/id_rsa
-rw-r--r-- 1 root root 391 Feb 26 06:44 /root/.ssh/id_rsa.pub
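Should a key file ever end up with looser permissions, for example after copying it from another machine, it can be tightened again with chmod:
# restrict the private key to its owner; the public key may stay world readable
[root@master ~]# chmod 600 ~/.ssh/id_rsa
[root@master ~]# chmod 644 ~/.ssh/id_rsa.pub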
1.2. Copying a public key to an SSH server
To authenticate using your key pair you first have to copy the public key to the SSH server you want to authenticate with.
There are two possible ways of doing this:
Using the ssh-copy-id command
Manually appending the contents of your public key to the user’s $HOME/.ssh/authorized_keys file (an example is shown further below)
When using the ssh-copy-id command, you have to specify which key you want to install. It will then try to log in to the server, usually falling back to password authentication. If your public key has not been added already, it is appended to $HOME/.ssh/authorized_keys.
[root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa c01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'c01'"
and check to make sure that only the key(s) you wanted were added.
After this completes successfully, you will be able to connect directly.
[root@master ~]# ssh c01
Last login: Tue Feb 26 11:47:41 2019 from 192.168.16.1
[root@c01 ~]#
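The manual method from the list above works without ssh-copy-id; a minimal sketch, assuming password authentication is still enabled on the node:
# append the public key to root's authorized_keys on c01 and fix the permissions
[root@master ~]# cat ~/.ssh/id_rsa.pub | ssh c01 "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"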
If you set a passphrase for your private key, you will be asked for this passphrase instead. A private key can be unlocked once and kept unlocked for a longer period of time, so you do not have to retype the passphrase for every connection; this makes it both more convenient and more secure than a regular password.
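One common way to keep a passphrase-protected key unlocked is an SSH agent, which holds the decrypted key in memory for the rest of your session; a brief sketch:
# start an agent for the current shell and unlock the key once
[root@master ~]# eval $(ssh-agent)
[root@master ~]# ssh-add ~/.ssh/id_rsa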
1.3. Disable Password Authentication
Warning
Disabling password authentication before installing your public key is an effective way of locking yourself out of your own system. :-)
Note
Do this part only on the compute nodes!
Open /etc/ssh/sshd_config to modify the available authentication methods. Set PasswordAuthentication to no.
PasswordAuthentication no
Double-check that you are able to authenticate with your public key. Then reload the sshd service and try again. It is recommended to keep at least one session open while testing.
[root@c01 ~]# systemctl reload sshd
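Since this change has to be made on every compute node, it can also be pushed from the master in one go; a minimal sketch, assuming the nodes c01 and c02 and that your public key is already installed on them:
# disable password authentication on each compute node and reload sshd there
[root@master ~]# for node in c01 c02; do ssh $node "sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config && systemctl reload sshd"; done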
1.4. Disable Root Login
Note
Do this part only on the compute nodes!
To further lock down SSH servers, root logins are often completely disabled or limited to a specific subnet. Before attempting this, please make sure to install your root public key on all compute nodes.
Edit /etc/ssh/sshd_config and set PermitRootLogin to no globally. Afterwards you can modify this rule for a particular subnet by using a Match block.
# global setting
PermitRootLogin no
# locally allow root logins with public key from internal SSH subnet
# !!! this has to be at the end of the file !!!
Match Address 192.168.16.0/20
PermitRootLogin without-password
Finally reload the configuration.
[root@c01 ~]# systemctl reload sshd
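To verify the effective setting, sshd can evaluate its configuration for a hypothetical connection; the user, host and address below are examples matching this cluster's internal network:
# show the effective PermitRootLogin value for root connecting from the internal subnet
[root@c01 ~]# sshd -T -C user=root,host=master.hpc,addr=192.168.16.1 | grep -i permitrootlogin
For a connection matching the Match block this should report without-password (or its synonym prohibit-password), while an address outside the internal subnet should report no.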
1.4.1. Moving around the cluster as root without a password
One final piece of SSH configuration that is very useful in practice is being able to move freely as root between nodes. With a single SSH key pair this would require your root private key to be copied onto each compute node.
This, however, would mean that one compromised system could expose further systems outside of the cluster that are reachable with that particular key. To contain the damage within the cluster, it is useful to create a separate key pair that is only used for accessing cluster nodes internally as root.
For this, create a second key pair without a password using ssh-keygen and give the file a different name, such as id_rsa_internal. Each compute node should have a copy of these files in /root/.ssh/. Add the public key to the authorized_keys file as well.
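Creating such a key pair non-interactively might look like this (-N "" sets an empty passphrase):
# create a second, password-less key pair that is only used inside the cluster
[root@master ~]# ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa_internal -N ""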
Finally, to make the SSH client on each compute node aware of the second key, edit /etc/ssh/ssh_config and define Host rules for the .hpc subnet. Keys defined with IdentityFile in the SSH configuration are used for authentication.
Note
The changes in /etc/ssh/ssh_config are needed both on the master and on all compute nodes.
Host *.hpc
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_rsa_internal
StrictHostKeyChecking yes
Host 192.168.*
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_rsa_internal
StrictHostKeyChecking yes
Host *
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_rsa_internal
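To check which identities the client would offer for a particular destination, ssh can print the configuration it would apply (supported by the OpenSSH version shipped with CentOS 7):
# list the identity files used when connecting to c01.hpc
[root@master ~]# ssh -G c01.hpc | grep -i identityfile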
We also have to add the id_rsa_internal.pub public key to the /root/.ssh/authorized_keys file on each compute node. For existing compute nodes you can use ssh-copy-id, but this is only a temporary fix.
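For the compute nodes that are already running, a small loop on the master can take care of both steps; a sketch assuming the nodes c01 and c02:
# install the internal public key and copy the internal key pair to each node
[root@master ~]# for node in c01 c02; do ssh-copy-id -i ~/.ssh/id_rsa_internal.pub $node; scp -p ~/.ssh/id_rsa_internal ~/.ssh/id_rsa_internal.pub $node:/root/.ssh/; done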
1.4.2. Add root keys during cobbler installation
Create a kickstart snippet in /var/lib/cobbler/snippets which automatically adds both root keys during installation and updates/replaces /etc/ssh/ssh_config with the above configuration to add the internal key.
That means the contents of both id_rsa.pub and id_rsa_internal.pub should be added to the /root/.ssh/authorized_keys file. You will also have to copy the private key id_rsa_internal to /root/.ssh. Ensure the permissions of the private key are 600.
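A minimal sketch of such a snippet is shown below. The file name is arbitrary, the key material is a placeholder that has to be replaced with the real contents of your id_rsa.pub, id_rsa_internal.pub and id_rsa_internal, and only the .hpc block of the client configuration is appended as an example:
# /var/lib/cobbler/snippets/root_ssh_keys -- included in the %post section of the kickstart
mkdir -p /root/.ssh
chmod 700 /root/.ssh
# install both root public keys
cat >> /root/.ssh/authorized_keys << 'EOF'
ssh-rsa AAAA... (contents of id_rsa.pub)
ssh-rsa AAAA... (contents of id_rsa_internal.pub)
EOF
# install the internal private key
cat > /root/.ssh/id_rsa_internal << 'EOF'
-----BEGIN RSA PRIVATE KEY-----
(contents of id_rsa_internal)
-----END RSA PRIVATE KEY-----
EOF
chmod 600 /root/.ssh/authorized_keys /root/.ssh/id_rsa_internal
# make the SSH client offer the internal key for cluster-internal hosts
cat >> /etc/ssh/ssh_config << 'EOF'
Host *.hpc
    IdentityFile ~/.ssh/id_rsa
    IdentityFile ~/.ssh/id_rsa_internal
    StrictHostKeyChecking yes
EOF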
Add this snippet to your compute node kickstart and re-image at least one of them.
1.4.3. Synchronizing known hosts
Every time you connect to a host, SSH checks /etc/ssh/ssh_known_hosts and $HOME/.ssh/known_hosts to see whether that server is to be trusted. If it cannot be found there, SSH will ask whether you trust this server. If you accept, an entry is added to your current user’s known_hosts file. That means that without a valid entry in one of these known hosts files, SSH will always ask you to confirm. This might not seem like a problem now, but once you have a few hundred compute nodes and automated tools that require SSH, it can quickly turn into a nightmare.
What we are going to do is register all hosts in our master’s /etc/ssh/ssh_known_hosts and then synchronize this file to all compute nodes. This way all nodes know who to trust.
To generate this file, use ssh-keyscan to populate /etc/ssh/ssh_known_hosts. Note that we add the short name, the fully qualified host name, and the IP address. This way the host key is known no matter how we refer to the host.
# add host keys to known hosts file
[root@master ~]# ssh-keyscan -H -t ecdsa c01 >> /etc/ssh/ssh_known_hosts
[root@master ~]# ssh-keyscan -H -t ecdsa c01.hpc >> /etc/ssh/ssh_known_hosts
[root@master ~]# ssh-keyscan -H -t ecdsa 192.168.17.1 >> /etc/ssh/ssh_known_hosts
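With many compute nodes this is easier in a loop; a sketch assuming the nodes c01 and c02, using getent to look up the matching IP address:
# collect short name, FQDN and IP entries for every compute node
[root@master ~]# for node in c01 c02; do ssh-keyscan -H -t ecdsa $node $node.hpc $(getent hosts $node | awk '{print $1}') >> /etc/ssh/ssh_known_hosts; done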
Should a key already exist or conflict for some reason, it is recommended to remove it first. This can be done with the ssh-keygen command.
# remove host keys from file
[root@master ~]# ssh-keygen -R c01 -f /etc/ssh/ssh_known_hosts
[root@master ~]# ssh-keygen -R c01.hpc -f /etc/ssh/ssh_known_hosts
[root@master ~]# ssh-keygen -R 192.168.17.1 -f /etc/ssh/ssh_known_hosts
1.4.4. Keeping keys consistent and synchronizing known hosts
During installation, the CentOS installer generates a unique host key pair for the system. That means that after every reinstall, the host key you have stored and use to validate the machine’s identity will have changed. The SSH client will warn you and refuse to connect to this potentially malicious server. In other words, our /etc/ssh/ssh_known_hosts becomes invalid.
Instead of patching up our known_hosts, we keep a copy of all host keys of our cluster on the master. If we are forced to reinstall a compute node, we simply copy the correct files back after installation.
Create a script which copies all the /etc/ssh/ssh_host_* files into a directory structure on the master
Create a second script which copies the SSH host keys back to a newly installed system. You will need to disable StrictHostKeyChecking for this (see the sketch below).
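A minimal sketch of both scripts; the node list and the backup directory /root/hostkeys are assumptions to adapt to your cluster:
#!/bin/bash
# save_host_keys.sh -- run on the master: archive the host keys of every compute node
for node in c01 c02; do
    mkdir -p /root/hostkeys/$node
    scp -p "$node:/etc/ssh/ssh_host_*" /root/hostkeys/$node/
done
The restore script takes the node name as its only argument and pushes the saved keys back:
#!/bin/bash
# restore_host_keys.sh <node> -- run on the master after a node has been re-imaged
# StrictHostKeyChecking is disabled because the freshly installed node presents a new,
# unknown host key; UserKnownHostsFile=/dev/null avoids a conflict with the old entry
node=$1
scp -p -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null /root/hostkeys/$node/ssh_host_* root@$node:/etc/ssh/
# restart sshd so it picks up the restored host keys
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$node "systemctl restart sshd"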