Sherlock facts

as of February 2026

Users

  • 8,553 user accounts

  • 1,263 PI groups

    from all of Stanford's seven Schools, SLAC, Stanford Institutes, etc.

  • 220 owner groups

Interfaces

  • 20 login nodes

  • 7 data transfer nodes (DTNs)

Computing

  • 15.92 PFLOPs (FP64)

    49.68 PFLOPs (FP32)

  • 74,964 CPU cores

    7 CPU generations (18 CPU models)

  • 1,168 GPUs

    5 GPU generations (14 GPU models)

Hardware

  • 2,060 compute nodes

    25 server models (from 3 different manufacturers)

  • 57 racks

    1,618 rack units

Energy

  • 806.46 kW

    total power usage

  • 97 PDUs (power distribution units)

Storage

  • 9.7 PB $SCRATCH

    a parallel, distributed filesystem delivering over 600 GB/s of I/O bandwidth

  • 77.4 PB $OAK

    long-term research data storage

Networking

  • 140 InfiniBand switches

    across 3 InfiniBand fabrics (EDR, HDR, NDR)

  • 6,695 InfiniBand cables

    totaling about 31.88 km

  • 75 Ethernet switches

Scheduler

  • 199 Slurm partitions

  • 51,469 CPU-hours/day

    over 5 years of serial computing delivered in a single day (see the sketch after this list)

  • $3,439,013/month

    estimated cost of running the same workload on on-demand t2.large cloud instances
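
Both scheduler figures are simple unit conversions. Below is a minimal Python sketch of that arithmetic; the CPU-hours/day value is taken from this page, while the t2.large vCPU count and the days-per-month factor are assumptions, and the on-demand hourly rate is left as a parameter since AWS pricing varies by region and changes over time.

```python
# A minimal sketch of the unit conversions behind the scheduler figures.
# The CPU-hours/day value comes from this page; the vCPU count, days per
# month, and any hourly rate are illustrative assumptions, not official
# figures.

CPU_HOURS_PER_DAY = 51_469        # CPU-hours Sherlock delivers per day
HOURS_PER_YEAR = 24 * 365         # 8,760 hours

# "over 5 years of computing in a single day": one day of cluster output,
# expressed as serial compute time on a single core.
years_per_day = CPU_HOURS_PER_DAY / HOURS_PER_YEAR
print(f"{years_per_day:.1f} years of serial computing per day")  # ~5.9


def cloud_monthly_cost(cpu_hours_per_day: float,
                       usd_per_instance_hour: float,
                       vcpus_per_instance: int = 2,     # t2.large: 2 vCPUs
                       days_per_month: float = 30.44) -> float:
    """Monthly cost of matching the same CPU-hours with cloud instances."""
    instance_hours = cpu_hours_per_day / vcpus_per_instance * days_per_month
    return instance_hours * usd_per_instance_hour
```

Plugging the current on-demand t2.large rate for a given region into cloud_monthly_cost yields a monthly comparison figure like the one quoted above.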