Himrod cluster

Principal Investigators

Ashraf Aboulnaga, Hans De Sterck, Ken Salem

Cluster hardware overview

| system | count | cpu | memory | disk | interconnect | other |
| himrod-big-[1-4] | 4 | 2x Intel E5-2670 @ 2.60GHz (16 physical cores total) | 512GB | 6x 600GB 10k RPM HDD + 1x 200GB SSD | 10GbE | |
| himrod-[1-23] | 23 | 2x Intel E5-2670 @ 2.60GHz (16 physical cores total) | 256GB | 6x 600GB 10k RPM HDD + 1x 200GB SSD | 10GbE | |
| himrod.cs | 1 | 2x Intel E5-2670 @ 2.60GHz (16 physical cores total) | 32GB | Dell PERC RAID | 2x 10GbE | 10GbE UW uplink |
| himrod-storage | 1 | 2x Intel E5-2620 v2 @ 2.00GHz (12 physical cores total) | 64GB | Dell PERC RAID | 2x 10GbE | 23TB user homes, 40TB scratch via NFS; 2x 10GbE links LACP bonded |
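As a sanity check, the aggregate compute capacity implied by the table can be totalled (compute nodes only; the head node and storage node are excluded):

```python
# Aggregate capacity of the 27 compute nodes, using the figures from
# the hardware table above (not measured values).
nodes = {
    "himrod-big": {"count": 4,  "cores": 16, "ram_gb": 512},
    "himrod":     {"count": 23, "cores": 16, "ram_gb": 256},
}

total_cores = sum(n["count"] * n["cores"] for n in nodes.values())
total_ram_gb = sum(n["count"] * n["ram_gb"] for n in nodes.values())

print(total_cores)   # 432 physical cores across the cluster
print(total_ram_gb)  # 7936 GB of RAM in total
```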

Drive configuration

Each compute system has five mechanical hard drives and a 2TB SATA SSD for data, plus a 200GB SAS SSD for the OS. Details as follows:

| Mount point | Logical device | Physical device | Capacity | Protocol |
| / | sda1 | Pliant LB206M | 200GB | SAS2 6Gbps |
| /localdisk0 | sdb1 | SEAGATE ST600MM0006 2.5" 10k RPM | 600GB | SAS2 6Gbps |
| /localdisk1 | sdc1 | SEAGATE ST600MM0006 2.5" 10k RPM | 600GB | SAS2 6Gbps |
| /localdisk2 | sdd1 | SEAGATE ST600MM0006 2.5" 10k RPM | 600GB | SAS2 6Gbps |
| /localdisk3 | sde1 | SEAGATE ST600MM0006 2.5" 10k RPM | 600GB | SAS2 6Gbps |
| /localdisk4 | sdf1 | SEAGATE ST600MM0006 2.5" 10k RPM | 600GB | SAS2 6Gbps |
| /localdisk5 | sdg1 | Samsung 860 EVO 2.5" | 2000GB | SATA3 6Gbps |
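Because the data disks are mounted as six independent filesystems rather than a single array, applications that want aggregate local-disk throughput must spread their files across the mount points themselves. A minimal sketch (mount-point names are from the table above; the hash-based placement policy is purely illustrative):

```python
import os

# The six per-node data mounts from the table above.
DATA_MOUNTS = [f"/localdisk{i}" for i in range(6)]

def placement(filename: str) -> str:
    """Pick a mount point for a file by hashing its name.

    Illustrative policy only: real workloads may instead balance by
    free space or assign one mount per worker thread.
    """
    idx = hash(filename) % len(DATA_MOUNTS)
    return os.path.join(DATA_MOUNTS[idx], filename)
```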

Network configuration

| System | Interface | Address | Configuration |
| himrod-[1-23] | bond0 | 192.168.224.10+[1-23] | eno1 |
| himrod-big-[1-4] | bond0 | 192.168.224.20+[1-4] | eno1 |
| himrod.cs | bond0 | 192.168.224.1 | eno1 |

Password changes

The cluster uses a Samba4 Active Directory system to manage users and system access. Passwords can be changed when logged into the head node of the cluster by issuing the following command:

samba-tool user password

You will be prompted for your current password and then for a new password (which must meet complexity requirements), twice:

Password for [LOCAL-DOMAIN\ldpaniak]:
New Password: 
Retype Password: 

Memory Characteristics: Topology, Bandwidth and Latency

Using the Intel Memory Latency Checker (mlc), available here:

https://software.intel.com/en-us/articles/intelr-memory-latency-checker

I get the following results:

Notes:

- "remote" bandwidth (~19 GB/s) nearly saturates the QPI link on these systems

- the difference in memory speeds is visible: 1333MHz in the "big" systems vs 1600MHz in the "small" systems
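The NUMA penalty in the idle-latency and bandwidth matrices works out to roughly 1.5x higher latency and roughly 60% of local bandwidth when crossing sockets. Illustrative arithmetic on the himrod-big figures (values copied from the mlc output below):

```python
# Idle latency (ns) and read-only bandwidth (MB/s) NUMA matrices for
# himrod-big, copied from the mlc transcript below.
latency = [[92.6, 141.9],
           [141.9, 93.0]]
bandwidth = [[29542.7, 17774.2],
             [17955.9, 29180.6]]

lat_penalty = latency[0][1] / latency[0][0]      # remote vs local latency
bw_fraction = bandwidth[0][1] / bandwidth[0][0]  # remote vs local bandwidth

print(f"remote latency is {lat_penalty:.2f}x local")
print(f"remote bandwidth is {bw_fraction:.0%} of local")
```

This is why NUMA-aware placement (e.g. pinning a process and its memory to one socket) matters for memory-bound jobs on these nodes.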



himrod-big-2:

root@himrod-big-2:~/Linux# ./mlc
Intel(R) Memory Latency Checker - v3.5
Measuring idle latencies (in ns)...
Numa node
Numa node      0     1 
       0     92.6  141.9 
       1    141.9   93.0 

Measuring Peak Injection Memory Bandwidths for the system
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using all the threads from each core if Hyper-threading is enabled
Using traffic with the following read-write ratios
ALL Reads        : 59043.8
3:1 Reads-Writes : 55713.0
2:1 Reads-Writes : 54781.6
1:1 Reads-Writes : 54648.4
Stream-triad like: 52317.5

Measuring Memory Bandwidths between nodes within system 
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using all the threads from each core if Hyper-threading is enabled
Using Read-only traffic type
Numa node
Numa node      0       1 
       0   29542.7  17774.2 
       1   17955.9  29180.6 

Measuring Loaded Latencies for the system
Using all the threads from each core if Hyper-threading is enabled
Using Read-only traffic type
Inject  Latency  Bandwidth
Delay   (ns)     MB/sec
==========================
 00000 258.64  59238.5
 00002 258.54  59223.7
 00008 258.20  59336.8
 00015 258.86  59254.7
 00050 255.43  59511.2
 00100 244.16  59472.9
 00200 124.59  51916.2
 00300 105.50  36428.1
 00400 99.76  27986.9
 00500 96.99  22749.0
 00700 96.22  16607.4
 01000 94.89  11925.7
 01300 94.20   9388.8
 01700 93.99   7368.4
 02500 94.12   5242.8
 03500 93.89   3948.4
 05000 93.94   2971.5
 09000 94.07   1954.8
 20000 94.20   1253.7


Measuring cache-to-cache transfer latency (in ns)...
Local Socket L2->L2 HIT  latency 27.2
Local Socket L2->L2 HITM latency 30.6
Remote Socket L2->L2 HITM latency (data address homed in writer socket)
Reader Numa Node
Writer Numa Node     0      1
            0        -     152.5 
            1      153.0     - 

Remote Socket L2->L2 HITM latency (data address homed in reader socket)
Reader Numa Node
Writer Numa Node     0       1
            0        -      97.8 
            1      100.2     - 



himrod-1:

root@himrod-1:~/Linux# ./mlc
Intel(R) Memory Latency Checker - v3.5
Measuring idle latencies (in ns)...
Numa node
Numa node      0     1 
       0     81.8  128.9 
       1    130.3   82.3 

Measuring Peak Injection Memory Bandwidths for the system
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using all the threads from each core if Hyper-threading is enabled
Using traffic with the following read-write ratios
ALL Reads        : 83745.2
3:1 Reads-Writes : 78666.5
2:1 Reads-Writes : 77641.8
1:1 Reads-Writes : 78861.7
Stream-triad like: 71649.8

Measuring Memory Bandwidths between nodes within system 
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using all the threads from each core if Hyper-threading is enabled
Using Read-only traffic type
Numa node
Numa node      0       1 
       0    41875.4  19632.7 
       1    19726.5  41469.1 

Measuring Loaded Latencies for the system
Using all the threads from each core if Hyper-threading is enabled
Using Read-only traffic type
Inject  Latency  Bandwidth
Delay   (ns)     MB/sec
==========================
 00000 180.00  83983.7
 00002 180.00  83989.8
 00008 179.57  84094.9
 00015 179.63  84172.9
 00050 176.62  84289.6
 00100 138.43  82620.5
 00200 96.10  52684.3
 00300 90.29  36569.2
 00400 87.58  28131.7
 00500 86.67  22862.7
 00700 85.05  16718.4
 01000 84.23  12025.1
 01300 83.81   9482.1
 01700 83.50   7460.3
 02500 83.41   5334.7
 03500 83.31   4038.2
 05000 83.27   3061.3
 09000 83.28   2044.3
 20000 83.28   1343.2

Measuring cache-to-cache transfer latency (in ns)...
Local Socket L2->L2 HIT  latency 26.7
Local Socket L2->L2 HITM latency 30.4
Remote Socket L2->L2 HITM latency (data address homed in writer socket)
Reader Numa Node
Writer Numa Node     0      1
            0        -    144.3 
            1      142.1    -
 
Remote Socket L2->L2 HITM latency (data address homed in reader socket)
Reader Numa Node
Writer Numa Node     0      1
            0        -    97.1 
            1      99.6     - 

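The gap between the "big" and "small" nodes lines up with the memory-speed note above, though the rated DIMM-speed ratio alone does not fully account for the bandwidth difference (DIMM population and ranks presumably also play a role). Comparing the peak all-reads bandwidth and local idle latency copied from the two transcripts:

```python
# Peak all-reads bandwidth (MB/s) and local idle latency (ns), copied
# from the two mlc transcripts above.
big   = {"peak_bw": 59043.8, "local_lat": 92.6}  # himrod-big (512GB, 1333MHz)
small = {"peak_bw": 83745.2, "local_lat": 81.8}  # himrod (256GB, 1600MHz)

bw_ratio   = small["peak_bw"] / big["peak_bw"]      # small nodes' advantage
lat_ratio  = big["local_lat"] / small["local_lat"]  # big nodes' penalty
dimm_ratio = 1600 / 1333                            # rated DIMM speeds

print(f"small/big bandwidth ratio: {bw_ratio:.2f}")
print(f"big/small latency ratio:   {lat_ratio:.2f}")
print(f"DIMM speed ratio:          {dimm_ratio:.2f}")
```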
VPN configuration

Himrod offers OpenVPN connectivity for cluster users. Authentication is via cluster credentials (username and password). A working .ovpn configuration file, compatible with most OpenVPN clients, is available here.

Topic revision: r1 - 2021-07-08 - HarshRoghelia