# Technical specifications

## In a nutshell

Sherlock features over 1,500 compute nodes, 37,000+ CPU cores and 600+ GPUs, for a total computing power of more than 2.6 petaflops, which would place it in the Top500 list of the world's most powerful supercomputers.

The cluster currently extends across 3 InfiniBand fabrics (EDR, FDR, HDR). A 6.1 PB parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth, provides scratch storage for more than 5,200 users and 800 PI groups.
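
As a rough sanity check on the quoted figure, aggregate peak performance can be estimated as cores × clock rate × FLOPs per cycle for the CPUs, plus a per-GPU peak for the accelerators. The sketch below is a back-of-the-envelope calculation only; the per-core and per-GPU rates are assumed averages, not Sherlock's actual hardware specifications.

```python
# Back-of-the-envelope peak-FLOPS estimate (illustrative only).
# The per-core and per-GPU rates are assumed averages for a mixed fleet,
# not measured or published figures for Sherlock's hardware.
cpu_cores = 37_000               # from the summary above
gpus = 600                       # from the summary above

flops_per_core = 2.5e9 * 16      # assumed ~2.5 GHz x 16 FP64 FLOPs/cycle (AVX2 FMA)
flops_per_gpu = 2.0e12           # assumed ~2 TFLOPS FP64, averaged over mixed GPU models

peak = cpu_cores * flops_per_core + gpus * flops_per_gpu
print(f"~{peak / 1e15:.1f} PFLOPS")   # ~2.7 PFLOPS, the same order as the 2.6 PF figure
```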

## Resources

The Sherlock cluster was initiated in January 2014 with a base of freely available computing resources (about 2,000 CPU cores) and the accompanying networking and storage infrastructure (about 1 PB of shared storage).

Since then, it has been expanding continuously, spawning multiple cluster generations, with numerous contributions from research groups across campus.

Cluster generations

For more information about Sherlock's ongoing evolution and expansion, please see Cluster generations.

## Interface

| Type                | Qty | Details                                      |
| ------------------- | --- | -------------------------------------------- |
| login nodes         | 16  | sherlock.stanford.edu (load-balanced)        |
| data transfer nodes | 4   | dedicated bandwidth for large data transfers |
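
Because the login alias is load-balanced, connections to sherlock.stanford.edu may land on any of the 16 login nodes. A minimal sketch of how one could inspect the alias from a client machine, assuming only that the name resolves publicly (the addresses returned depend on how the load balancer is configured):

```python
# List the addresses behind the load-balanced login alias (illustrative sketch).
import socket

addrs = {info[4][0] for info in socket.getaddrinfo("sherlock.stanford.edu", 22)}
for addr in sorted(addrs):
    print(addr)
```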

## Computing

Access to computing resources

Computing resources marked "free" in the Access column below are freely available to every Sherlock user. Resources marked "owners" are only accessible to Sherlock owners and their research teams.

| Type | Access | Nodes | CPU cores | Details |
| ---- | ------ | ----: | --------: | ------- |
| compute nodes (normal partition) | free | 156 | 4,288 | 56x 20 (Intel E5-2640v4), 128 GB RAM, EDR IB<br>28x 24 (Intel 5118), 191 GB RAM, EDR IB<br>70x 32 (AMD 7502), 256 GB RAM, HDR IB<br>2x 128 (AMD 7742), 1024 GB RAM, HDR IB |
| development nodes (dev partition) | free | 2 | 40 | 2x 20 (Intel E5-2640v4), 128 GB RAM, EDR IB |
| large memory nodes (bigmem partition) | free | 3 | 152 | 1x 32 (Intel E5-2697Av4), 512 GB RAM, EDR IB<br>1x 56 (Intel E5-4650v4), 3072 GB RAM, EDR IB<br>1x 64 (AMD 7502), 4096 GB RAM, HDR IB |
| GPU nodes (gpu partition) | free | 26 | 748 | 1x 20 (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P100 PCIe<br>1x 20 (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P40<br>3x 20 (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla V100 SXM2<br>1x 24 (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100 SXM2<br>2x 24 (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100 PCIe<br>16x 32 (AMD 7502P), 256 GB RAM, HDR IB, 4x GeForce RTX 2080 Ti<br>2x 32 (AMD 7502P), 256 GB RAM, HDR IB, 4x Tesla V100S PCIe |
| privately-owned nodes (owners partition) | owners | 1,387 | 31,756 | 44 different node configurations, including GPU and bigmem nodes |
| Total | | 1,579 | 37,088 | |
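
The Nodes and CPU cores columns are sums of the per-configuration counts listed in the Details column (node count × cores per node). A short sketch reproducing the freely-available partition totals from those figures:

```python
# Recompute per-partition node and core counts from the configurations above.
# Each tuple is (number of nodes, CPU cores per node), taken from the table.
partitions = {
    "normal": [(56, 20), (28, 24), (70, 32), (2, 128)],
    "dev":    [(2, 20)],
    "bigmem": [(1, 32), (1, 56), (1, 64)],
    "gpu":    [(1, 20), (1, 20), (3, 20), (1, 24), (2, 24), (16, 32), (2, 32)],
}

for name, configs in partitions.items():
    nodes = sum(n for n, _ in configs)
    cores = sum(n * c for n, c in configs)
    print(f"{name:>7}: {nodes:4d} nodes, {cores:5d} cores")
# normal: 156 nodes, 4288 cores -- dev: 2, 40 -- bigmem: 3, 152 -- gpu: 26, 748
```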

## Storage

More information

For more information about storage options on Sherlock, please refer to the Storage section of the documentation.

Storage components are common to both cluster generations, meaning users can find the same files and directories from Sherlock 1.0 and Sherlock 2.0 nodes alike.

- Highly-available NFS filesystem for user and group home directories (with hourly snapshots and off-site replication)
- High-performance Lustre scratch filesystem (6.1 PB parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth)
- Direct access to SRCC's Oak long-term research data storage system (24.8 PB)
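
From a login session or a job, the capacity and current usage of these filesystems can be checked with standard tools. The sketch below assumes, purely for illustration, that the home, scratch and Oak locations are exposed through $HOME, $SCRATCH and $OAK environment variables; the actual paths depend on your account and group setup.

```python
# Report capacity and usage for reachable storage systems (illustrative sketch).
# $SCRATCH and $OAK are assumed variable names, used here for illustration only.
import os
import shutil

for var in ("HOME", "SCRATCH", "OAK"):
    path = os.environ.get(var)
    if not path or not os.path.isdir(path):
        continue
    total, used, _free = shutil.disk_usage(path)
    print(f"{var:8s} {path:30s} {used / 1e12:7.1f} TB used of {total / 1e12:7.1f} TB")
```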