Sherlock facts

as of May 2024

Users

  • 7,253 user accounts

  • 1,132 PI groups

    from all of Stanford's seven Schools, SLAC, Stanford Institutes, etc.

  • 202 owner groups

Interfaces

  • 12 login nodes

  • 3 data transfer nodes (DTNs)

Computing

  • 5.44 PFLOPs (FP64)

    19.76 PFLOPs (FP32)

  • 55,632 CPU cores

    4 CPU generations (13 CPU models)

  • 796 GPUs

    4 GPU generations (12 GPU models)

Hardware

  • 1,732 compute nodes

    19 server models (from 3 different manufacturers)

  • 37 racks

    1,188 rack units

Energy

  • 577.55 kW

    total power usage

  • 58 PDUs

Storage

  • 9.7 PB $SCRATCH

    parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth

  • 58.3 PB $OAK

    long-term research data storage

Networking

  • 104 InfiniBand switches

    across 2 InfiniBand fabrics (EDR, HDR)

  • 5,733 InfiniBand cables

    spanning about 30.13 km

  • 53 Ethernet switches
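
As a rough sanity check, the cable figures above imply an average cable length, using only the numbers listed on this page:

```python
# Average InfiniBand cable length implied by the Networking figures above.
total_length_m = 30_130  # ~30.13 km of cabling (from this page)
cable_count = 5_733      # number of InfiniBand cables (from this page)

avg_length_m = total_length_m / cable_count
print(f"average cable length: {avg_length_m:.2f} m")  # ≈ 5.26 m
```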

Scheduler

  • 179 Slurm partitions

  • 42,605 CPU.hours/day

    over 4 years of computing in a single day

  • $2,846,701/month

    to run the same workload on t2.large on-demand cloud instances
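
The "years of computing in a single day" claim follows from a simple conversion (a sketch using only the daily CPU-hours figure listed above):

```python
# Converting daily CPU-hours into equivalent years of serial computing.
cpu_hours_per_day = 42_605  # CPU.hours delivered per day (from this page)
hours_per_year = 24 * 365   # 8,760 hours in a non-leap year

years_per_day = cpu_hours_per_day / hours_per_year
print(f"{years_per_day:.2f} years of serial computing per day")  # ≈ 4.86, i.e. "over 4 years"
```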