Sherlock facts

as of March 2026

Users

  • 8,740 user accounts

  • 1,283 PI groups

    from all of Stanford's seven Schools, SLAC, Stanford Institutes, etc.

  • 220 owner groups

Interfaces

  • 20 login nodes

  • 7 data transfer nodes (DTNs)

Computing

  • 15.97 PFLOPs (FP64)

    52.52 PFLOPs (FP32)

  • 75,472 CPU cores

    7 CPU generations (18 CPU models)

  • 1,188 GPUs

    5 GPU generations (14 GPU models)

Hardware

  • 2,063 compute nodes

    25 server models (from 3 different manufacturers)

  • 57 racks

    1,633 rack units

Energy

  • 925.69 kW

    total power usage

  • 97 PDUs

Storage

  • 14.6 PB $SCRATCH

    parallel, distributed filesystem, delivering over 600 GB/s of I/O bandwidth

  • 77.4 PB $OAK

    long-term research data storage

Networking

  • 140 InfiniBand switches

    across 3 InfiniBand fabrics (EDR, HDR, NDR)

  • 6,699 InfiniBand cables

    spanning about 31.88 km

  • 75 Ethernet switches

Scheduler

  • 200 Slurm partitions

  • 54,525 CPU-hours/day

    over 6 years of computing in a single day

  • $3,643,157/month

    to run the same workload on t2.large on-demand cloud instances
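The two derived figures above can be sanity-checked with simple arithmetic. A minimal sketch follows; the hourly rate and the 2-vCPU t2.large sizing are illustrative assumptions, since the page does not state the rate, region, or pricing model behind its dollar figure:

```python
# "over 6 years of computing in a single day":
cpu_hours_per_day = 54_525
hours_per_year = 24 * 365.25                 # average year, including leap days
years = cpu_hours_per_day / hours_per_year
print(f"{years:.1f} years of computing per day")   # ≈ 6.2 years

# Cloud-cost sketch. Assumptions (not from the page): a t2.large
# provides 2 vCPUs, and hourly_rate is a hypothetical on-demand
# price per instance-hour.
cpu_cores = 75_472            # total Sherlock CPU cores
vcpus_per_instance = 2        # t2.large size (assumption)
hourly_rate = 0.0928          # USD per instance-hour (illustrative only)
hours_per_month = 730         # ~24 * 365 / 12

instances = cpu_cores / vcpus_per_instance
monthly_cost = instances * hours_per_month * hourly_rate
print(f"~${monthly_cost:,.0f}/month at ${hourly_rate}/instance-hour")
```

The first calculation reproduces the "over 6 years" claim; the second only shows the shape of the estimate, and the actual monthly figure depends entirely on the instance pricing in effect when the comparison was made.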