Sherlock facts

as of September 2023

Users

  • 6,956 user accounts

  • 1,094 PI groups

    from all of Stanford's seven Schools, SLAC, Stanford Institutes, etc.

  • 199 owner groups

Interfaces

  • 12 login nodes

  • 3 data transfer nodes (DTNs)

Computing

  • 5.00 PFLOPs (FP64)

    18.73 PFLOPs (FP32)

  • 53,488 CPU cores

    4 CPU generations (13 CPU models)

  • 756 GPUs

    4 GPU generations (12 GPU models)

Hardware

  • 1,693 compute nodes

    19 server models (from 3 different manufacturers)

  • 37 racks

    1,147 rack units

Energy

  • 558.83 kW

    total power usage

  • 57 power distribution units (PDUs)

Storage

  • 7.9 PB $SCRATCH

    parallel, distributed filesystem delivering over 200 GB/s of I/O bandwidth

  • 51.3 PB $OAK

    long-term research data storage

Networking

  • 104 InfiniBand switches

    across 2 InfiniBand fabrics (EDR and HDR)

  • 5,654 InfiniBand cables

    spanning about 30.11 km

  • 53 Ethernet switches

Scheduler

  • 174 Slurm partitions

  • 37,228 CPU·hours/day

    over 4 years of computing in a single day (see the first sketch below)

  • $2,487,465/month

    to run the same workload on t2.large on-demand cloud instances (see the second sketch below)
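
A quick sanity check on the "over 4 years" claim: the only inputs are the CPU·hours figure above and the 8,760 hours in a year. A minimal Python sketch:

    # One day of delivered CPU time, expressed as years of serial computing.
    cpu_hours_per_day = 37_228       # CPU·hours/day, figure quoted above
    hours_per_year = 24 * 365        # 8,760 hours in a non-leap year

    years = cpu_hours_per_day / hours_per_year
    print(f"{years:.2f} years")      # ~4.25, i.e. "over 4 years"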
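
And a rough cross-check on the cloud figure, assuming a t2.large on-demand rate of $0.0928/hour (a commonly quoted us-east-1 price; the rate behind the figure above is not stated) and a 720-hour month:

    # How many t2.large instance-months the quoted monthly cost buys
    # at the assumed on-demand rate.
    t2_large_hourly = 0.0928         # USD/hour, assumed us-east-1 rate
    hours_per_month = 24 * 30        # 720-hour month
    monthly_cost = 2_487_465         # USD/month, figure quoted above

    instance_months = monthly_cost / (t2_large_hourly * hours_per_month)
    print(f"~{instance_months:,.0f} t2.large instance-months")  # ~37,229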