Sherlock facts

as of March 2025

Users

  • 7,968 user accounts

  • 1,199 PI groups

    from all of Stanford's seven Schools, SLAC, Stanford Institutes, etc.

  • 210 owner groups

Interfaces

  • 20 login nodes

  • 7 data transfer nodes (DTNs)

Computing

  • 11.41 PFLOPs (FP64)

    37.74 PFLOPs (FP32); see the sketch after this list

  • 71,184 CPU cores

    7 CPU generations (18 CPU models)

  • 1,024 GPUs

    5 GPU generations (13 GPU models)
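
For context on the peak figures above: an aggregate peak is the sum, over every CPU and GPU model in the fleet, of unit count × clock × floating-point operations per cycle. Here is a minimal sketch of that arithmetic, with entirely hypothetical unit counts, clocks, and per-cycle widths, since the actual per-model breakdown is not listed on this page:

```python
# How an aggregate peak-FLOPs figure is computed: for each unit type,
# multiply unit count x clock (GHz) x FP64 operations per cycle, then sum.
# All counts and clocks below are hypothetical, for illustration only.
unit_types = [
    # (units, clock in GHz, FP64 ops per cycle per unit)
    (50_000, 2.5, 16),    # hypothetical AVX-512 cores: 2 FMA ports x 8 doubles
    (500, 1.4, 3_456),    # hypothetical data-center GPUs with wide FP64 pipelines
]

peak_gflops = sum(n * ghz * ops for n, ghz, ops in unit_types)
print(f"theoretical FP64 peak: {peak_gflops / 1e6:.2f} PFLOPs")
```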

Hardware

  • 2,015 compute nodes

    25 server models (from 4 different manufacturers)

  • 53 racks

    1,493 rack units

Energy

  • 819.93 kW

    total power usage (see the efficiency sketch after this list)

  • 89 PDUs
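
Combining the power draw with the FP64 peak from the Computing section gives a rough upper bound on energy efficiency; real workloads rarely run at peak, so this is a best case:

```python
peak_pflops = 11.41   # FP64 peak, from the Computing section
power_kw = 819.93     # total power usage

# 1 PFLOPs = 1e6 GFLOPs and 1 kW = 1e3 W
gflops_per_watt = peak_pflops * 1e6 / (power_kw * 1e3)
print(f"~{gflops_per_watt:.1f} GFLOPs/W at FP64 peak")
```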

Storage

  • 9.7 PB $SCRATCH

    parallel, distributed filesystem, delivering over 600 GB/s of I/O bandwidth (see the sketch after this list)

  • 74.4 PB $OAK

    long-term research data storage
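
A quick sense of scale for the $SCRATCH figures, assuming the full aggregate bandwidth could be sustained end to end (a best case; real transfers contend with other users):

```python
scratch_pb = 9.7        # $SCRATCH capacity, PB
bandwidth_gb_s = 600    # aggregate I/O bandwidth, GB/s

seconds = scratch_pb * 1e6 / bandwidth_gb_s   # 1 PB = 1e6 GB
print(f"~{seconds / 3600:.1f} hours to write $SCRATCH end to end")
```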

Networking

  • 136 InfiniBand switches

    across 3 InfiniBand fabrics (EDR, HDR, NDR)

  • 6,496 InfiniBand cables

    spanning about 31.51 km

  • 71 Ethernet switches

Scheduler

  • 189 Slurm partitions

  • 46,679 CPU-hours/day

    the equivalent of over 5 years of single-core computing delivered every day (see the sketch below)

  • $3,118,909/month

    to run the same workload on t2.large on-demand cloud instances
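
The "over 5 years" equivalence is a direct unit conversion from the daily CPU-hours figure:

```python
cpu_hours_per_day = 46_679

# CPU-hours delivered per day, expressed as years of single-core computing
years = cpu_hours_per_day / (24 * 365)
print(f"~{years:.1f} years of computing delivered every day")
```

The monthly dollar figure prices that same usage at on-demand t2.large rates (2 vCPUs per instance); the exact total depends on the AWS pricing in effect at the time.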