2. Synchronize time across the cluster
chrony is a flexible Network Time Protocol (NTP) implementation designed for synchronizing system clocks.
2.1. Chrony server installation
The master node will serve as the NTP server for the rest of the cluster.
By default, chrony is installed on AlmaLinux, but check that it is there:
[root@master ~]# yum install chrony
Edit the /etc/chrony.conf file. By default, chrony does not allow other machines to synchronize their time with it. To make your master node serve as a time source for your cluster, you need to allow access from your cluster’s subnets:
# Allow NTP client access from local network.
allow 192.168.0.0/20
allow 192.168.16.0/20
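If firewalld happens to be enabled on the master node, NTP traffic (UDP port 123) must also be permitted before clients can reach the server. A minimal sketch, assuming the standard firewalld service definition for NTP:
[root@master ~]# firewall-cmd --permanent --add-service=ntp
[root@master ~]# firewall-cmd --reload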
The following lines ensure that the NTP server uses the Global NTP Pool Project servers as reference clocks.
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
For the Temple HPC training cluster, your master nodes are not directly connected to the internet. Instead, add the following line and comment out all other server lines:
server 172.16.1.1 prefer
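To confirm which time sources remain active after editing, you can list the uncommented server and pool directives; assuming all other lines have been commented out, only the new one should appear:
[root@master ~]# grep -E '^(server|pool)' /etc/chrony.conf
server 172.16.1.1 prefer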
To serve time even when the master node is not synchronized with upstream NTP servers, uncomment the “local stratum” line:
# Serve time even if not synchronized to a time source.
local stratum 10
This makes the server act as a stratum 10 time source when it can’t reach its upstream servers.
Enable/start or restart the chronyd service. First check if it is already running:
[root@master ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2025-05-29 04:15:26 EDT; 3min 33s ago
If it is running, as in the previous example, restart it with the command:
[root@master ~]# systemctl restart chronyd
Otherwise, if it is not running, enable and start the service:
[root@master ~]# systemctl enable chronyd
[root@master ~]# systemctl start chronyd
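On systemd-based distributions such as AlmaLinux, the two commands can also be combined into one:
[root@master ~]# systemctl enable --now chronyd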
Check that chrony is operating correctly and is synchronized with its time sources.
Use the chronyc tracking command to show detailed information about the synchronization status:
[root@master ~]# chronyc tracking
Reference ID    : AC100101 (_gateway)
Stratum         : 4
Ref time (UTC)  : Wed Apr 09 03:23:29 2025
System time     : 0.000009234 seconds slow of NTP time
Last offset     : -0.000012674 seconds
RMS offset      : 0.000044795 seconds
Frequency       : 27.100 ppm slow
The Reference ID AC100101 is the hexadecimal encoding of 172.16.1.1, the upstream server configured above.
You can also check which time sources your server is using with the chronyc sources command:
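[root@master ~]# chronyc sources
Its output format is the same as shown for the compute nodes in the client section below.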
2.2. Chrony client installation
Each compute node will act as a Chrony client. The clients run the same daemon as the server, but use it only to synchronize their clocks with our own local Chrony server.
Warning
Before you can do any software installation, delete all AlmaLinux repo files in /etc/yum.repos.d/:
[root@c01 ~]# rm /etc/yum.repos.d/Almalinux*
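To confirm the deletion, list what remains in the directory:
[root@c01 ~]# ls /etc/yum.repos.d/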
Check if the chrony daemon needs to be installed:
[root@c01 ~]# yum install chrony
Edit the /etc/chrony.conf file and add the following line:
server ntp.hpc iburst
And remove or comment out all other server or pool lines (see the scripted sketch below).
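Both edits can also be scripted, which is convenient when configuring many compute nodes. A minimal sketch, assuming the stock AlmaLinux /etc/chrony.conf in which the existing time sources are plain server or pool lines:
# Comment out every existing server/pool directive, then append our local server.
[root@c01 ~]# sed -i -E 's/^(server|pool)/#\1/' /etc/chrony.conf
[root@c01 ~]# echo "server ntp.hpc iburst" >> /etc/chrony.conf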
Warning
Verify that you can resolve the name ntp.hpc, e.g. with host ntp.hpc. If the name is not resolvable, you need to edit your DNS configuration to add it. Also remember that the host command is provided by bind-utils; install it using yum.
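For example, on a compute node:
[root@c01 ~]# yum install bind-utils
[root@c01 ~]# host ntp.hpc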
Enable and start (or restart) the chronyd daemon, as you did on the master node; see the commands below. Then verify that your NTP client is synchronized with the reference clock from ntp.hpc.
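The systemctl commands are the same as on the server; for example:
[root@c01 ~]# systemctl enable chronyd
[root@c01 ~]# systemctl restart chronyd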
Use the chronyc sources command to query the current status of your system:
[root@c01 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master.hpc                    4   6   377     7   -141us[ -154us] +/-   13ms
From the master node, it is possible to check the list of NTP clients using the chronyc clients command:
[root@master ~]# chronyc clients
Hostname                      NTP   Drop Int IntL Last     Cmd   Drop Int  Last
===============================================================================
c02.hpc                      9764      0   6   -    40       0      0   -     -
c03.hpc                      9324      0   6   -    57       0      0   -     -
c01.hpc                      9394      0   6   -    31       0      0   -     -