3. Network directories
Clusters use network directories to share common locations across all systems. This typically includes the software installation and the user home directories. In our mini-cluster we will set up our master node as a network file server using the NFS protocol.
We will store all user home directories in /data/home and install software in /data/opt.
3.1. Setting up an NFS server
3.1.1. Installation
Setting up an NFS server requires only a single package, which will install all other dependencies.
[root@master ~]# yum install nfs-utils
Enable and start the nfs-server service.
[root@master ~]# systemctl enable nfs-server
[root@master ~]# systemctl start nfs-server
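Optionally, you can confirm that the service came up before continuing:
[root@master ~]# systemctl status nfs-server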
3.1.2. Exporting a folder via NFS
Start by creating a folder on the master node that we want to share:
[root@master ~]# mkdir -p /data/opt
Create a text file inside this folder:
[root@master ~]# echo Hello NFS! > /data/opt/README
The generated file should have the following permissions:
[root@master ~]# ls -lah /data/opt/README
-rw-r--r-- 1 root root 11 Feb 25 13:13 /data/opt/README
We now want to share this folder with the rest of the cluster. For this, we need to tell our NFS server to export it.
NFS shares are controlled by the /etc/exports file. You can also create files ending with .exports in /etc/exports.d instead. Each line in such a file defines one folder shared to one or a collection of clients. Add the following line to /etc/exports:
/data/opt 192.168.17.1(ro)
To apply this setting, run exportfs -a:
[root@master ~]# exportfs -a
You can verify that this export is now active by simply running exportfs:
[root@master ~]# exportfs
/data/opt 192.168.17.1
This shares the /data/opt folder as a read-only filesystem with the host at IP 192.168.17.1.
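If you also want to see the options that are in effect for each export (including the ro we just configured), exportfs can be run in verbose mode; the exact output varies slightly between versions:
[root@master ~]# exportfs -v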
3.2. Setting up an NFS client
Similar to the server setup, we first have to install the necessary nfs-utils package on each client.
[root@c01 ~]# yum install nfs-utils
There are several ways to mount an NFS share:
1. Manually, using mount
2. Permanently during boot, using /etc/fstab
3. Automatically, using autofs
Option 1 is usually good for testing only. Option 2 is a solution for permanent mounts, while option 3 gives us the flexibility to create additional mounts on the fly later.
For testing, create the folder where you want to mount the share and use the mount command:
# create folder
[root@c01 ~]# mkdir -p /data/opt
# mount network folder
[root@c01 ~]# mount -t nfs master.hpc:/data/opt /data/opt
You can verify how your NFS share is mounted using mount:
[root@c01 ~]# mount | grep data
master.hpc:/data/opt on /data/opt type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.17.1,local_lock=none,addr=192.168.16.1)
View the README file.
[root@c01 ~]# ls -lah /data/opt/
total 8.0K
drwxr-xr-x 2 root root 4.0K Feb 25 13:13 .
drwxr-xr-x 3 root root 17 Feb 25 19:32 ..
-rw-r--r-- 1 root root 11 Feb 25 13:13 README
[root@c01 ~]# cat /data/opt/README
Hello NFS!
You will notice that although the file is shown as owned by root, it is not possible to change it from your client. And root will not be able to create a file in your NFS folder either.
[root@c01 ~]# touch /data/opt/TEST
touch: cannot touch ‘/data/opt/TEST’: Permission denied
Well, we did specify it as ro (read-only) on the server. Let’s try to change this in the server’s /etc/exports file:
/data/opt 192.168.17.1(rw)
Rerun exportfs -a to apply the change.
[root@c01 ~]# touch /data/opt/TEST
touch: cannot touch ‘/data/opt/TEST’: Permission denied
On the server, create a new folder /data/opt/test with global write permissions.
# this has to be your NFS server
[root@master ~]# mkdir -p /data/opt/test
[root@master ~]# chmod 777 /data/opt/test
On the client, try writing into this new folder.
[root@c01 ~]# touch /data/opt/test/TEST
You will notice that it has worked. Let’s look at what was written:
[root@c01 ~]# ls -lah /data/opt/test/
total 8.0K
drwxrwxrwx 2 root root 4.0K Feb 25 14:50 .
drwxr-xr-x 3 root root 4.0K Feb 25 14:40 ..
-rw-r--r-- 1 nfsnobody nfsnobody 0 Feb 25 14:49 TEST
Our files were created not as root, but as user nfsnobody and group nfsnobody. This is because root on one system is not root on another system by default. This is called root squash. Any access to files on the network folder will be interpreted as access from a user with uid nfsnobody and gid nfsnobody.
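If you want to see which numeric IDs hide behind nfsnobody, you can look them up with id; on CentOS 7 this is typically uid and gid 65534:
[root@c01 ~]# id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)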
You can disable this behavior by adding the no_root_squash option in your server exports file. However, it is discouraged.
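For completeness, such an entry would look like the following sketch; use it only if you fully trust root on every client that can reach the export:
/data/opt 192.168.17.1(rw,no_root_squash)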
The general format of an entry in an exports file is:
PATH HOST_OR_SUBNET(OPTIONS) HOST_OR_SUBNET(OPTIONS) HOST_OR_SUBNET(OPTIONS)
Typical options can be found in the following table; they are further explained in the man page of exports.
| Option | Description |
|---|---|
| ro | read-only |
| rw | read-write |
| sync | forces the NFS server to only reply once changes are committed to storage. This is the default. |
| async | allows the NFS server to reply before changes are written. Improves performance, but at the cost of potential data loss. |
| root_squash | by default, root on the client and root on the server are not the same. Instead it uses uid and gid 65534 (nfsnobody). |
Install NFS on a second compute node and try to access the server. You will quickly notice that any attempt will fail. Instead of adding each server individually to the exports file, you can also specify a subnet.
/data/opt 192.168.16.0/20(ro)
Multiple host entries with different options can be added to the same line. This allows you to customize access for each client.
/data/opt 192.168.16.1(rw) 192.168.16.0/20(ro)
This configuration will give 192.168.16.1 read-write access, while everyone else on that subnet only gets read-only access.
Unmount the share on all your clients now.
[root@c01 ~]# umount /data/opt
3.2.1. Mounting with /etc/fstab
Instead of specifying options for the mount command, you can also define such a mount in /etc/fstab.
Warning
Be careful editing this file, as mistakes can result in an unbootable system!
Add a new line to /etc/fstab to mount the NFS directory with default options and as a read-only filesystem.
#
# /etc/fstab
# Created by anaconda on Mon Feb 25 19:11:45 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_c01-root / xfs defaults 0 0
UUID=ea3938d5-bcdc-48aa-9315-613346a56104 /boot xfs defaults 0 0
/dev/mapper/centos_c01-home /home xfs defaults 0 0
/dev/mapper/centos_c01-swap swap swap defaults 0 0
# NFS configuration
master.hpc:/data/opt /data/opt nfs defaults,ro 0 0
Thanks to this extra line, you can now mount this folder by specifying only its mount point.
[root@c01 ~]# mount /data/opt
Verify the mount works:
[root@c01 ~]# mount | grep data
master.hpc:/data/opt on /data/opt type nfs4 (ro,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.17.1,local_lock=none,addr=192.168.16.1)
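To reduce the risk of an unbootable system mentioned in the warning above, network mounts in /etc/fstab are often given the nofail and _netdev options, so that booting continues even if the NFS server is unreachable. The line above could, for example, be extended like this:
master.hpc:/data/opt /data/opt nfs defaults,ro,nofail,_netdev 0 0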
3.2.2. Configuring automount
The automounter is a service that works together with the kernel to mount filesystems on demand. This is especially useful for mounting home directories of individual users.
Mounting directories and caching their data and metadata in RAM (something the kernel does for you automatically) uses up resources.
With automounter we create text files which list the mapping of
directories to their matching network filesystem location. autofs
will then mount them when they are accessed. It also automatically
unmounts directories if they aren’t used for some time.
1. Installation
Start by installing autofs on a compute node.
[root@c01 ~]# yum install autofs
[root@c01 ~]# systemctl enable autofs
[root@c01 ~]# systemctl start autofs
2. Remove /data/opt
Ensure /data/opt is not mounted from before and remove the empty /data/opt and /data folders from your compute node.
3. Edit the file /etc/auto.master
The /etc/auto.master file is the starting point for creating mappings for your system.
/data /etc/auto.data --timeout=1200
This configuration means that all mounts in /data are controlled by the /etc/auto.data file and will time out (be unmounted) after 1200 seconds of inactivity. autofs will create an empty read-only folder /data at startup.
4. Create the new /etc/auto.data file
Add the following configuration:
opt -fstype=nfs4,ro,async,ac,nfsvers=4,wsize=8192,rsize=8192,bg,hard master.hpc:/data/opt
This maps the folder /data/opt to the NFS directory master.hpc:/data/opt. It also illustrates how to use mount options.
5. Reload the autofs service
[root@c01 ~]# systemctl reload autofs
6. Try to access first /data and then /data/opt. You will notice a short lag when accessing opt for the first time; this is due to the mount. Afterwards the filesystem will be available.
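As mentioned above, automount is especially useful for per-user home directories. A minimal sketch of such a setup (assuming the master also exports /data/home) adds another map to /etc/auto.master and uses the autofs wildcard syntax, where * matches the requested directory name and & substitutes it on the server side:
# additional line in /etc/auto.master (assumes /data/home is exported by master.hpc)
/data/home /etc/auto.home --timeout=1200
# /etc/auto.home
* -fstype=nfs4,rw,hard master.hpc:/data/home/&
With this in place, accessing /data/home/<username> would mount master.hpc:/data/home/<username> on demand.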