# Technical specifications

## In a nutshell
Sherlock features over 1,600 compute nodes, 51,900+ CPU cores, and 700+ GPUs, for a total computing power of more than 4.7 petaflops, which would rank it among the most powerful supercomputers in the Top500 list.
The cluster currently spans two InfiniBand fabrics (EDR and HDR). A 7.0 PB parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth, provides scratch storage for more than 6,600 users and 1,000 PI groups.
## Resources
The Sherlock cluster was initiated in January 2014 with a base of freely available computing resources (about 2,000 CPU cores) and the accompanying networking and storage infrastructure (about 1 PB of shared storage).
Since then, it has expanded continuously across multiple cluster generations, with numerous contributions from research groups on campus.
> **Cluster generations**
>
> For more information about Sherlock's ongoing evolution and expansion, please see the Cluster generations page.
### Interface

| Type | Qty | Details |
| --- | --- | --- |
| login nodes | 12 | `sherlock.stanford.edu` (load-balanced) |
| data transfer nodes | 3 | dedicated bandwidth for large data transfers |
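To connect, point an SSH client at the load-balanced login alias from the table above. Below is a minimal sketch, assuming SUNetID credentials and a `dtn.sherlock.stanford.edu` alias for the data transfer nodes (both are assumptions here, not confirmed by this page):

```sh
# Log in to one of the 12 load-balanced login nodes
ssh <sunetid>@sherlock.stanford.edu

# Route large transfers through a data transfer node rather than a login node
# (the DTN hostname below is an assumed alias; check the documentation)
rsync -avP ./dataset/ <sunetid>@dtn.sherlock.stanford.edu:dataset/
```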
### Computing

> **Access to computing resources**
>
> Nodes in the `normal`, `dev`, `bigmem`, and `gpu` partitions below are freely available to every Sherlock user. Nodes in the `owners` partition are only accessible to Sherlock owners and their research teams.
| Type | Access | Nodes | CPU cores | Details |
| --- | --- | --- | --- | --- |
| compute nodes<br>(`normal` partition) | free | 179 | 4,820 | 57x 20-core (Intel E5-2640v4), 128 GB RAM, EDR IB<br>28x 24-core (Intel 5118), 191 GB RAM, EDR IB<br>24x 32-core (AMD 7543), 256 GB RAM, HDR IB<br>70x 32-core (AMD 7502), 256 GB RAM, HDR IB |
| development nodes<br>(`dev` partition) | free | 4 | 104 | 2x 20-core (Intel E5-2640v4), 128 GB RAM, EDR IB<br>2x 32-core (AMD 7543P), 256 GB RAM, HDR IB, 32x Tesla A30_MIG-1g.6gb |
| large memory nodes<br>(`bigmem` partition) | free | 9 | 504 | 4x 24-core (Intel 5118), 384 GB RAM, EDR IB<br>1x 32-core (Intel E5-2697Av4), 512 GB RAM, EDR IB<br>1x 56-core (Intel E5-4650v4), 3,072 GB RAM, EDR IB<br>1x 64-core (AMD 7502), 4,096 GB RAM, HDR IB<br>2x 128-core (AMD 7742), 1,024 GB RAM, HDR IB |
| GPU nodes<br>(`gpu` partition) | free | 26 | 748 | 1x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P100 PCIe<br>1x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla P40<br>3x 20-core (Intel E5-2640v4), 256 GB RAM, EDR IB, 4x Tesla V100_SXM2<br>1x 24-core (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100_SXM2<br>2x 24-core (Intel 5118), 191 GB RAM, EDR IB, 4x Tesla V100 PCIe<br>16x 32-core (AMD 7502P), 256 GB RAM, HDR IB, 4x GeForce RTX 2080 Ti<br>2x 32-core (AMD 7502P), 256 GB RAM, HDR IB, 4x Tesla V100S PCIe |
| privately-owned nodes<br>(`owners` partition) | owners | 1,441 | 45,416 | 40 different node configurations, including GPU and bigmem nodes |
| **Total** | | 1,663 | 51,952 | 736 GPUs |
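The partition names above map directly to Slurm partitions, so jobs target a node class with `--partition`. Here is a minimal batch-script sketch with illustrative resource values (the exact GPU `gres` identifiers on Sherlock are an assumption, not taken from this page):

```sh
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=gpu       # normal, dev, bigmem, gpu, or owners (if eligible)
#SBATCH --gres=gpu:1          # request one GPU on a gpu-partition node
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=01:00:00

# Report where the job landed and which GPU was allocated
hostname
nvidia-smi
```

Submit with `sbatch job.sh`, or start an interactive session on a development node with `srun --partition=dev --pty bash`.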
### Storage

> **More information**
>
> For more information about storage options on Sherlock, please refer to the Storage section of the documentation.
Sherlock is architected around shared storage components, meaning that users find the same files and directories on every Sherlock node.
- Highly available NFS filesystem for user and group home directories (with hourly snapshots and off-site replication)
- High-performance Lustre scratch filesystem (7.0 PB parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth)
- Direct access to SRCC's Oak long-term research data storage system (51.3 PB)
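In practice, these filesystems are usually reached through per-user paths. A minimal sketch follows, assuming conventional `$HOME`, `$SCRATCH`, and `$OAK` environment variables (the variable names are assumptions here; check `env` on a login node):

```sh
# Home: highly available NFS, snapshotted and replicated -- keep code and configs here
echo "$HOME"

# Scratch: high-performance Lustre -- stage large, transient job data here
echo "$SCRATCH"

# Move results off scratch to Oak for long-term retention
rsync -avP "$SCRATCH/results/" "$OAK/results/"
```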