Sherlock facts

as of March 2024

Users

  • 7,207 user accounts

  • 1,126 PI groups

    from all of Stanford's seven Schools, SLAC, Stanford Institutes, and more

  • 201 owner groups

Interfaces

  • 12 login nodes

  • 3 data transfer nodes (DTNs)

Computing

  • 5.44 PFLOPs (FP64)

    19.61 PFLOPs (FP32)

  • 55,600 CPU cores

    4 CPU generations (13 CPU models)

  • 792 GPUs

    4 GPU generations (12 GPU models)
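
Counts like these can be tallied straight from the scheduler. The sketch below is a minimal example, not the method used to produce the figures above: it assumes the Slurm client tools (sinfo) are on your PATH, and since GRES strings are site-dependent, the GPU-counting regex is an assumption that may need adjusting.

```python
#!/usr/bin/env python3
"""Tally nodes, CPU cores, and GPUs across a Slurm cluster.

A minimal sketch: assumes `sinfo` is available. GRES string layouts
vary by site, so the GPU regex below is an assumption.
"""
import re
import subprocess

def cluster_totals():
    # One line per node per partition: hostname, CPU count, generic resources.
    out = subprocess.run(
        ["sinfo", "-h", "-N", "-o", "%n %c %G"],
        capture_output=True, text=True, check=True,
    ).stdout

    seen, cpus, gpus = set(), 0, 0
    for line in out.splitlines():
        node, ncpu, gres = line.split(maxsplit=2)
        if node in seen:        # -N repeats nodes that sit in several partitions
            continue
        seen.add(node)
        cpus += int(ncpu)
        # GRES typically looks like "gpu:4" or "gpu:model_name:4(S:0-1)".
        for m in re.finditer(r"gpu(?::[\w.]+)?:(\d+)", gres):
            gpus += int(m.group(1))
    return len(seen), cpus, gpus

if __name__ == "__main__":
    nodes, cpus, gpus = cluster_totals()
    print(f"{nodes} nodes, {cpus:,} CPU cores, {gpus:,} GPUs")
```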

Hardware

  • 1,731 compute nodes

    19 server models (from 3 different manufacturers)

  • 37 racks

    1,186 rack units

Energy

  • 599.04 kW

    total power usage

  • 57 PDUs

Storage

  • 9.7 PB $SCRATCH

    a parallel, distributed filesystem delivering over 200 GB/s of I/O bandwidth (see the usage sketch after this list)

  • 51.3 PB $OAK

    long-term research data storage
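
Filesystem-level numbers like these can be read back from any node. A minimal sketch, assuming the SCRATCH environment variable is set (as it is in a standard Sherlock shell); note that this reports filesystem-wide totals, not a per-user quota:

```python
"""Report overall usage of the $SCRATCH filesystem (minimal sketch)."""
import os
import shutil

scratch = os.environ["SCRATCH"]                 # raises KeyError if unset
total, used, free = shutil.disk_usage(scratch)  # filesystem-wide, in bytes

PB = 1000 ** 5                                  # petabyte, decimal convention
print(f"{scratch}: {used / PB:.2f} PB used of {total / PB:.2f} PB")
```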

Networking

  • 104 InfiniBand switches

    across 2 InfiniBand fabrics (EDR, HDR)

  • 5,732 InfiniBand cables

    spanning about 30.14 km, or roughly 5.3 m per cable on average

  • 53 Ethernet switches

Scheduler

  • 179 Slurm partitions

  • 1,334,400 CPU-hours/day

    over 152 years of computing delivered in a single day (see the worked check after this list)

  • $2,779,841/month

    to run the same workload on t2.large on-demand cloud instances
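
The CPU-hours figure follows directly from the core count. A worked check, assuming all 55,600 cores from the Computing section run around the clock:

```python
# Worked check of the figures above; pure arithmetic, no cluster access needed.
CORES = 55_600               # CPU cores, from the Computing section
HOURS_PER_YEAR = 365 * 24    # 8,760

cpu_hours_per_day = CORES * 24                      # 1,334,400 CPU-hours/day
years_per_day = cpu_hours_per_day / HOURS_PER_YEAR  # ~152.3

print(f"{cpu_hours_per_day:,} CPU-hours/day "
      f"= {years_per_day:.1f} years of single-core computing, every day")
```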