
Currently, the IDUN cluster consists of:

Compute Nodes


Node                      | Type         | #Processors | Processor             | #Cores | Memory [GB] | #GPUs | GPU               | Topology
compute-1-0-[1-27]        | Dell PE630   | 2           | Intel Xeon E5-2630 v4 | 20     | 128         | -     | -                 | -
compute-3-0-[25-31,33,35] | Dell PEC6420 | 2           | Intel Xeon Gold 6132  | 28     | 192         | -     | -                 | -
compute-2-0-[1-8]         | Dell PE730   | 2           | Intel Xeon E5-2695 v4 | 36     | 128         | 2     | NVIDIA Tesla P100 | -
compute-3-0-[1-19]        | Dell PE730   | 2           | Intel Xeon E5-2650 v4 | 24     | 128         | 2     | NVIDIA Tesla P100 | -
compute-3-0-[20-24]       | Dell PE740   | 2           | Intel Xeon Gold 6132  | 28     | 768         | 2     | NVIDIA Tesla V100 | -
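For a quick sense of scale, the per-partition figures above can be aggregated. A minimal sketch in Python; the node counts are derived by hand from the bracket ranges in the node names (e.g. [1-27] is 27 nodes, [25-31,33,35] is 9):

```python
# Each row: (name, node count, cores per node, GPUs per node),
# transcribed from the compute-node table above.
rows = [
    ("compute-1-0-[1-27]",        27, 20, 0),
    ("compute-3-0-[25-31,33,35]",  9, 28, 0),
    ("compute-2-0-[1-8]",          8, 36, 2),
    ("compute-3-0-[1-19]",        19, 24, 2),
    ("compute-3-0-[20-24]",        5, 28, 2),
]

total_nodes = sum(n for _, n, _, _ in rows)
total_cores = sum(n * c for _, n, c, _ in rows)
total_gpus  = sum(n * g for _, n, _, g in rows)

print(total_nodes, total_cores, total_gpus)  # -> 68 1676 64
```

So the compute partition comes to 68 nodes, 1676 cores, and 64 GPUs in total.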

Login Nodes

Node                    | Type       | #Processors | Processor             | #Cores | Memory [GB] | #GPUs | GPU
idun-login1.hpc.ntnu.no | Dell PE630 | 2           | Intel Xeon E5-2630 v4 | 20     | 64          | -     | -
idun-login2.hpc.ntnu.no | Dell PE630 | 2           | Intel Xeon E5-2630 v4 | 20     | 64          | -     | -
idun-login3.hpc.ntnu.no | Dell PE730 | 2           | Intel Xeon E5-2695 v4 | 36     | 128         | 1     | NVIDIA Tesla P100

Legacy Nodes

#Nodes | Type       | #Processors | Processor             | #Cores | Memory [GB] | #GPUs | GPU
1      | Dell PE730 | 2           | Intel Xeon E5-2697 v3 | 28     | 256         | -     | -
1      | Dell PE730 | 2           | Intel Xeon E5-2660 v3 | 20     | 256         | 2     | NVIDIA K2
1      | Dell PE730 | 2           | Intel Xeon E5-2697 v3 | 28     | 256         | 2     | NVIDIA TITAN X
1      | SGI UV20   | 4           | Intel Xeon E5-4627 v2 | 24     | 384         | 1     | NVIDIA K6000

These nodes are not part of the queuing system; access is granted only on request.

Admin Nodes

  • 1 admin/provisioning node: Dell PE620
  • 2 Samba servers (idun-samba1, idun-samba2)


Network

  • 3 Mellanox passive FDR switches for interconnect/storage on the general part of the cluster
  • 2 Mellanox passive EDR switches for interconnect/storage on the GPU part of the cluster
  • 3 Gigabit Ethernet switches for the provisioning and admin network