
Sherlock facts

as of July 2025

Users

  • 8,046 user accounts

  • 1,214 PI groups

    from all seven of Stanford's Schools, SLAC, Stanford Institutes, etc.

  • 212 owner groups

Interfaces

  • 20 login nodes

  • 7 data transfer nodes (DTNs)

Computing

  • 11.58 PFLOPs (FP64)

    38.07 PFLOPs (FP32) (see the sketch after this list)

  • 71,932 CPU cores

    7 CPU generations (18 CPU models)

  • 1,028 GPUs

    5 GPU generations (13 GPU models)
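
How such peak figures add up: the cluster totals are sums of per-model theoretical peaks, each computed as execution units x clock rate x FLOPs per cycle. A minimal sketch in Python; the core count, clock, and FLOPs/cycle below are illustrative assumptions, not Sherlock's actual hardware mix:

```python
def peak_flops(units: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: execution units x clock rate x FLOPs per cycle."""
    return units * clock_hz * flops_per_cycle

# Hypothetical CPU model: 10,000 cores at 2.5 GHz with AVX-512 FMA
# (two 512-bit FMA units per core -> 32 FP64 FLOPs per cycle).
fp64 = peak_flops(10_000, 2.5e9, 32)
fp32 = peak_flops(10_000, 2.5e9, 64)  # FP32 packs twice as many lanes
print(f"{fp64 / 1e15:.2f} PFLOPs FP64, {fp32 / 1e15:.2f} PFLOPs FP32")
```

The cluster-wide FP32 peak being more than 3x the FP64 peak likely reflects the GPU share of the machine, where FP32 throughput is typically a multiple of the FP64 rate.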

Hardware

  • 2,025 compute nodes

    25 server models (from 4 different manufacturers)

  • 55 racks

    1,498 rack units

Energy

  • 823.29 kW

    total power usage (see the efficiency sketch after this list)

  • 97 PDUs (power distribution units)
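
A back-of-envelope efficiency figure follows from the Computing and Energy numbers. Note it divides theoretical FP64 peak by total measured draw (which also covers storage and networking gear), so it is a rough indicator only:

```python
peak_fp64_flops = 11.58e15  # FP64 peak, from the Computing section
power_watts = 823.29e3      # total power usage, from this section

# Cluster-wide peak FLOPs per watt
print(f"{peak_fp64_flops / power_watts / 1e9:.1f} GFLOPs/W")  # ~14.1
```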

Storage

  • 9.7 PB $SCRATCH

    parallel, distributed filesystem delivering over 600 GB/s of I/O bandwidth (see the sketch after this list)

  • 69.8 PB $OAK

    long-term research data storage
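
For a sense of scale, streaming the entire $SCRATCH filesystem once at the quoted aggregate bandwidth takes a few hours. A minimal sketch, assuming decimal units (1 PB = 10^15 bytes, 1 GB/s = 10^9 bytes/s):

```python
scratch_bytes = 9.7e15  # 9.7 PB $SCRATCH, decimal units assumed
bandwidth_bps = 600e9   # 600 GB/s aggregate I/O bandwidth

print(f"{scratch_bytes / bandwidth_bps / 3600:.1f} hours "
      "to stream $SCRATCH once")  # ~4.5 hours
```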

Networking

  • 140 InfiniBand switches

    across 3 InfiniBand fabrics (EDR, HDR, NDR)

  • 6,552 InfiniBand cables

    spanning about 31.61 km (see the sketch after this list)

  • 75 Ethernet switches
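
Dividing the total cable length by the cable count gives the average InfiniBand run, suggesting most links are short, rack- or row-scale hops:

```python
total_cable_m = 31.61e3  # about 31.61 km of InfiniBand cabling
cables = 6_552

print(f"{total_cable_m / cables:.1f} m average cable length")  # ~4.8 m
```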

Scheduler

  • 191 Slurm partitions

  • 56,775 CPU.hours/day

    over 6 years of computing in a single day (see the check at the end of this list)

  • $3,793,523/month

    to run the same workload on t2.large on-demand cloud instances
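
The "over 6 years" equivalence follows directly from the daily CPU-hour total; a quick check using a 365-day year:

```python
cpu_hours_per_day = 56_775
hours_per_year = 24 * 365  # 8,760

print(f"{cpu_hours_per_day / hours_per_year:.2f} years "
      "of serial computing per day")  # ~6.48 years
```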