# Technical specifications
## In a nutshell
Sherlock features over 1,700 compute nodes, 57,000+ CPU cores, and 800+ GPUs, for a total computing power of more than 6.1 petaflops, enough to rank it on the Top500 list of the most powerful supercomputers in the world.
The cluster currently extends across 3 InfiniBand fabrics (EDR, HDR, and NDR). A 9.7 PB parallel, distributed filesystem, delivering over 600 GB/s of I/O bandwidth, provides scratch storage for more than 7,900 users and 1,100 PI groups.
## Resources
The Sherlock cluster was launched in January 2014 with a base of freely available computing resources (about 2,000 CPU cores) and the accompanying networking and storage infrastructure (about 1 PB of shared storage).
Since then, it has been constantly expanding, spawning multiple cluster generations, with numerous contributions from research groups across campus.
> **Cluster generations**
> For more information about Sherlock's ongoing evolution and expansion, please see Cluster generations.
### Interface
| Type | Qty | Details |
| --- | --- | --- |
| login nodes | 20 | `sherlock.stanford.edu` (load-balanced) |
| data transfer nodes | 7 | dedicated bandwidth for large data transfers |
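
In practice, interactive sessions start on a login node, while large transfers should go through the data transfer nodes. Here is a minimal sketch, assuming a `<SUNetID>` placeholder and a `dtn.sherlock.stanford.edu` DTN alias (confirm the exact hostname and destination path in the data transfer documentation):

```sh
# Interactive work: connect to a load-balanced login node.
ssh <SUNetID>@sherlock.stanford.edu

# Bulk transfers: use a data transfer node to keep login nodes responsive.
# The DTN alias and destination path below are assumptions; adjust to your setup.
rsync -avP ./dataset/ <SUNetID>@dtn.sherlock.stanford.edu:~/dataset/
```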
### Computing
> **Access to computing resources**
> Computing resources marked as *free* in the Access column below are freely available to every Sherlock user. Resources marked as *owners* are only accessible to Sherlock owners and their research teams.
| Type | Access | Nodes | CPU cores | Details |
| --- | --- | --- | --- | --- |
| compute nodes<br>(`normal` partition) | free | 211 | 5,700 | 57x 20-core (Intel E5-2640v4), 128 GB RAM, EDR IB<br>40x 24-core (Intel 5118), 191 GB RAM, EDR IB<br>14x 24-core (AMD 8224P), 192 GB RAM, NDR IB<br>28x 32-core (AMD 7543), 256 GB RAM, HDR IB<br>70x 32-core (AMD 7502), 256 GB RAM, HDR IB<br>2x 64-core (AMD 9384X), 384 GB RAM, NDR IB |
| development nodes<br>(`dev` partition) | free | 4 | 104 | 2x 20-core (Intel E5-2640v4), 128 GB RAM, EDR IB<br>2x 32-core (AMD 7543P), 256 GB RAM, HDR IB, 32x Tesla A30_MIG-1g.6gb |
| large memory nodes<br>(`bigmem` partition) | free | 11 | 824 | 4x 24-core (Intel 5118), 384 GB RAM, EDR IB<br>1x 32-core (Intel E5-2697Av4), 512 GB RAM, EDR IB<br>1x 56-core (Intel E5-4650v4), 3,072 GB RAM, EDR IB<br>1x 64-core (AMD 7502), 4,096 GB RAM, HDR IB<br>1x 64-core (Intel 8462Y+), 4,096 GB RAM, NDR IB<br>2x 128-core (AMD 7742), 1,024 GB RAM, HDR IB<br>1x 256-core (AMD 9754), 1,536 GB RAM, NDR IB |
| GPU nodes<br>(`gpu` partition) | free | 33 | 1,068 | 1x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P100 PCIe<br>1x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P40<br>3x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla V100_SXM2<br>1x 24-core (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100_SXM2<br>2x 24-core (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100 PCIe<br>16x 32-core (AMD 7502P), 256 GB RAM, HDR IB, 4x Geforce RTX_2080Ti<br>2x 32-core (AMD 7502P), 256 GB RAM, HDR IB, 4x Tesla V100S PCIe<br>4x 32-core (Intel 6426Y), 256 GB RAM, NDR IB, 4x Tesla L40S<br>2x 64-core (Intel 8462Y+), 1,024 GB RAM, NDR IB, 4x Tesla H100_SXM5<br>1x 64-core (Intel 8462Y+), 2,048 GB RAM, NDR IB, 8x Tesla H100_SXM5 |
| privately-owned nodes<br>(`owners` partition) | owners | 1,496 | 48,808 | 40 different node configurations, including GPU and bigmem nodes |
| **Total** | | **1,761** | **57,040** | **836 GPUs** |
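
To make the partition names above concrete, here is a minimal Slurm batch sketch targeting the `gpu` partition. The partition names come from the table; the resource amounts, time limit, and `train.py` workload are purely illustrative:

```sh
#!/bin/bash
#SBATCH --partition=gpu       # also: normal, dev, bigmem, owners (if eligible)
#SBATCH --gpus=1              # request one GPU on a gpu-partition node
#SBATCH --cpus-per-task=4     # illustrative CPU count
#SBATCH --mem=32G             # illustrative memory request
#SBATCH --time=01:00:00

# Hypothetical workload; replace with your own program.
srun python train.py
```

Save this as `job.sh` and submit it with `sbatch job.sh`; jobs in the `owners` partition additionally require membership in an owner group.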
### Storage
> **More information**
> For more information about storage options on Sherlock, please refer to the Storage section of the documentation.
Sherlock is architected around shared storage components, meaning that users see the same files and directories on all of the Sherlock nodes.
- Highly-available NFS filesystem for user and group home directories (with hourly snapshots and off-site replication)
- High-performance Lustre scratch filesystem (9.7 PB parallel, distributed filesystem, delivering over 600 GB/s of I/O bandwidth)
- Direct access to Stanford Research Computing's Oak long-term research data storage system (195.4 PB)
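
Since all of these filesystems are mounted cluster-wide, the same paths resolve on every node. Below is a small sketch of the usual staging pattern, assuming Sherlock's conventional `$SCRATCH` and `$OAK` environment variables (confirm the exact names and purge policies in the Storage docs):

```sh
echo $HOME     # NFS home directory (snapshotted, replicated off-site)
echo $SCRATCH  # per-user Lustre scratch space (high bandwidth, subject to purge)
echo $OAK      # Oak long-term storage (only set for groups with an Oak allocation)

# Stage inputs on scratch for I/O-heavy jobs, then archive results back to Oak.
cp -r "$OAK"/inputs "$SCRATCH"/
```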