
Hardware Description


History

  • The initial platform was acquired in 2011
  • An extension (additional nodes and storage) was added in 2014
  • In October 2017, the 2 login servers, the administration platform, and the storage solution were renewed
  • In February 2019, the first generation of nodes was renewed
  • In November 2020, a new AMD chassis was added

Compute nodes

Number of cores for computation:

(102 * 20) + 32 + (40 * 40) + (128 * 4) = 4184 cores (a theoretical peak performance of approximately 170.8 TFlops)

Generation LICALLO 2014: 41.4 TFlops

Generation LICALLO 2019: 120 TFlops

Generation LICALLO 2020: 9.4 TFlops

In addition, one node is dedicated to visualization


Generation | # Nodes | Processors | RAM | GPUs | InfiniBand
Licallo 2011 | 74 | 2 x Intel Xeon X5660 @ 2.8 GHz (2 x 6 cores) | 48 GB | - | QDR 40 Gb/s
Licallo 2011 | 8 | 2 x Intel Xeon E5620 @ 2.40 GHz (2 x 4 cores) | 24 GB | 2 x NVIDIA Tesla M2050 | QDR 40 Gb/s
Licallo 2014 | 102 | 2 x Intel Xeon E5-2670 v2 (Ivy Bridge) @ 2.50 GHz (2 x 10 cores) | 64 GB | - | FDR 56 Gb/s
Licallo 2014 | 1 | 4 x Intel Xeon E5-4650L @ 2.60 GHz (4 x 8 cores) | 1 TB | - | FDR 56 Gb/s
Licallo 2019 | 40 | PowerEdge C6420, 2 x Intel Xeon Gold 6148 @ 2.40 GHz (2 x 20 cores) | 192 GB | - | FDR 56 Gb/s
Licallo 2019 | 1 | PowerEdge R740, 2 x Intel Xeon Gold 6148 @ 2.40 GHz (2 x 20 cores) | 384 GB | 2 x Quadro P4000 (1792 CUDA cores) | FDR 56 Gb/s
Licallo 2020 | 1 | PowerEdge C6400-R, 2 x AMD EPYC 7702 @ 2 GHz (2 x 64 cores) | 1 TB | - | FDR 56 Gb/s


  • The nodes from the 2014 generation are grouped in pairs within a blade, occupying 2 slots in a B510 chassis

The bullx R428-E3 server is a 2U form-factor server

  • The nodes acquired in 2019 are grouped in sets of 4 within Dell PowerEdge C6400 enclosures
  • The node acquired in 2020 is housed in a Dell PowerEdge C6400-R enclosure, which can hold up to 4 nodes

The 2 front-end nodes, Castor and Pollux, are PowerEdge R630 servers (1U)

These nodes are dedicated to user connections via SSH

  • 2 Intel Xeon E5-2640 v4 processors at 2.4 GHz (10 cores each)
  • 64 GB of RAM
  • 2 SAS HDDs of 600 GB in RAID 1
  • Mellanox dual-port InfiniBand card

The administration node Gemini is a PowerEdge R630

  • 2 Intel Xeon E5-2640 v4 processors at 2.4 GHz (10 cores each)
  • 64 GB of RAM
  • 2 SAS HDDs of 600 GB in RAID 1 for the OS
  • 3 SAS HDDs of 1.2 TB in RAID 5 for the applications
  • 2 x 1 Gb/s RJ45 network ports + 2 x 10 Gb/s SFP network ports
  • Mellanox dual-port InfiniBand card

The storage space

The storage solution architecture is based on BeeGFS technology

It includes:

  • 2 Dell PowerEdge R730xd servers managing the disk arrays
  • 2 Dell PowerVault MD1400 disk arrays (286 TB raw capacity)

iRODS prototype

A bullx R423-E3 server is connected to a NetApp E2600 disk shelf via a dual 6 Gb/s SAS connection.

20 disks of 4 TB each, totaling 80 TB raw capacity in 2 RAID 6 configurations (7.2 TB usable).


Software configuration

  • All machines have been updated to CentOS Linux release 7.4.1708 (Core).
  • The latest version of the Intel compiler is available, including the MPI library (a minimal MPI compile-and-run sketch is given at the end of this section)
  • Environment
      • Usage of modules
  • Available applications
      • List of software (updated 2019)
  • Job Submission
      • Use of the Slurm job scheduler
  • Best practices
      • Do not request more cores than necessary
      • Do not request more time than necessary
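
As an illustration of the Intel compiler and MPI library mentioned above, here is a minimal MPI test program. This is only a sketch: the module name, the mpiicc wrapper, the srun/mpirun commands, and the #SBATCH values in the comments are illustrative assumptions, and the exact names and recommended launch method on LICALLO may differ.

    /*
     * hello_mpi.c -- minimal MPI check (illustrative sketch, not an official example).
     *
     * Assumed build and run steps (names not confirmed by this page):
     *   module load intel            # load the Intel compiler and MPI environment
     *   mpiicc hello_mpi.c -o hello_mpi
     *   srun -n 4 ./hello_mpi        # or: mpirun -np 4 ./hello_mpi
     *
     * In a Slurm batch script, follow the best practices above and request only
     * what you need, e.g. "#SBATCH --ntasks=4" and "#SBATCH --time=00:10:00"
     * (illustrative values).
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                   /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of ranks */
        MPI_Get_processor_name(name, &name_len);  /* compute node hosting this rank */

        printf("Rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();                           /* shut down the MPI runtime */
        return 0;
    }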