# Technical specifications

## In a nutshell

Sherlock features over 1,700 compute nodes, 55,600+ CPU cores, and 700+ GPUs, for a total computing power of more than 5.4 petaflops, which would rank it in the Top500 list of the most powerful supercomputers in the world.

The cluster currently extends across two InfiniBand fabrics (EDR and HDR). A 9.7 PB parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth, provides scratch storage for more than 7,200 users and 1,100 PI groups.

## Resources

The Sherlock cluster was initiated in January 2014 with a base of freely available computing resources (about 2,000 CPU cores) and the accompanying networking and storage infrastructure (about 1 PB of shared storage).

Since then, it has been constantly expanding, spawning multiple cluster generations, with numerous contributions from research groups across campus.

> **Cluster generations**
>
> For more information about Sherlock's ongoing evolution and expansion, please see the Cluster generations page.

## Interface

| Type | Qty | Details |
| --- | --- | --- |
| login nodes | 12 | `sherlock.stanford.edu` (load-balanced) |
| data transfer nodes | 3 | dedicated bandwidth for large data transfers |
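
As a quick illustration of how these entry points are typically used: interactive logins go through the load-balanced alias, while large transfers should target the data transfer nodes. This is a minimal sketch; the `<sunetid>` placeholder and the `dtn.sherlock.stanford.edu` hostname are assumptions for illustration, not specifications from this page.

```sh
# Interactive login through the load-balanced alias
# (lands on one of the 12 login nodes)
ssh <sunetid>@sherlock.stanford.edu

# Large data transfers: use a data transfer node to benefit from its
# dedicated bandwidth (hostname assumed here for illustration)
rsync -av --progress ./dataset/ <sunetid>@dtn.sherlock.stanford.edu:dataset/
```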

## Computing

> **Access to computing resources**
>
> Computing resources marked as *free* in the table below are freely available to every Sherlock user. Resources marked as *owners* are only accessible to Sherlock owners and their research teams.

| Type | Partition | Access | Nodes | CPU cores | Details |
| --- | --- | --- | --- | --- | --- |
| compute nodes | `normal` | free | 193 | 5,172 | 57x 20-core (Intel E5-2640v4), 128 GB RAM, EDR IB<br>40x 24-core (Intel 5118), 191 GB RAM, EDR IB<br>26x 32-core (AMD 7543), 256 GB RAM, HDR IB<br>70x 32-core (AMD 7502), 256 GB RAM, HDR IB |
| development nodes | `dev` | free | 4 | 104 | 2x 20-core (Intel E5-2640v4), 128 GB RAM, EDR IB<br>2x 32-core (AMD 7543P), 256 GB RAM, HDR IB<br>32x Tesla A30_MIG-1g.6gb GPUs |
| large memory nodes | `bigmem` | free | 9 | 504 | 4x 24-core (Intel 5118), 384 GB RAM, EDR IB<br>1x 32-core (Intel E5-2697Av4), 512 GB RAM, EDR IB<br>1x 56-core (Intel E5-4650v4), 3,072 GB RAM, EDR IB<br>1x 64-core (AMD 7502), 4,096 GB RAM, HDR IB<br>2x 128-core (AMD 7742), 1,024 GB RAM, HDR IB |
| GPU nodes | `gpu` | free | 26 | 748 | 1x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P100 PCIe<br>1x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P40<br>3x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla V100_SXM2<br>1x 24-core (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100_SXM2<br>2x 24-core (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100 PCIe<br>16x 32-core (AMD 7502P), 256 GB RAM, HDR IB, 4x GeForce RTX 2080 Ti<br>2x 32-core (AMD 7502P), 256 GB RAM, HDR IB, 4x Tesla V100S PCIe |
| privately-owned nodes | `owners` | owners | 1,493 | 48,648 | 40 different node configurations, including GPU and bigmem nodes |
| **Total** | | | **1,731** | **55,600** | **792 GPUs** |
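
The partition names above map directly to scheduler partitions: jobs on Sherlock are submitted through Slurm, and a batch script selects a partition by name. Below is a minimal sketch of a GPU job; the job name, resource amounts, and `train.py` workload are placeholders, not taken from this page.

```sh
#!/bin/bash
#SBATCH --job-name=example        # placeholder name
#SBATCH --partition=gpu           # one of: normal, dev, bigmem, gpu, owners
#SBATCH --gpus=1                  # GPU nodes in the gpu partition carry 4 GPUs each
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=01:00:00

# Placeholder workload: replace with your actual application
srun python train.py
```

Submitting the script with `sbatch` and inspecting partitions with `sinfo` are the usual next steps; note that jobs in the `owners` partition additionally require membership in an owner group.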

## Storage

> **More information**
>
> For more information about storage options on Sherlock, please refer to the Storage section of the documentation.

Sherlock is architected around shared storage components, meaning that users find the same files and directories on all of the Sherlock nodes:

- Highly available NFS filesystem for user and group home directories (with hourly snapshots and off-site replication)
- High-performance Lustre scratch filesystem (9.7 PB parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth)
- Direct access to SRCC's Oak long-term research data storage system (51.3 PB)
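
As an illustration of how these shared filesystems typically surface in a session, the sketch below uses the `$SCRATCH` and `$OAK` environment variables. Those names follow common Sherlock conventions but are assumptions here, not specifications from this page; only `$HOME` is universal.

```sh
# Home (NFS): snapshotted and replicated -- keep code, configs, small files
cd $HOME

# Scratch (Lustre): high-bandwidth parallel filesystem -- stage large
# datasets and job I/O here ($SCRATCH assumed, per common convention)
mkdir -p "$SCRATCH"/run1
cp -r $HOME/inputs "$SCRATCH"/run1/

# Oak: long-term research data storage ($OAK assumed, typically set for
# groups with an Oak allocation)
ls "$OAK"
```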