Sherlock facts#

as of June 2023

Users#

  • 6,601 user accounts

  • 1,071 PI groups

    from all seven of Stanford's Schools, SLAC, Stanford Institutes, etc.

  • 191 owner groups

Interfaces#

  • 12 login nodes

  • 3 data transfer nodes (DTNs)

Computing#

  • 4.74 PFLOPs (FP64)

    18.21 PFLOPs (FP32)

  • 51,952 CPU cores

    4 CPU generations (13 CPU models)

  • 736 GPUs

    4 GPU generations (12 GPU models)

Hardware#

  • 1,663 compute nodes

    19 server models (from 3 different manufacturers)

  • 37 racks

    1,123 rack units

Energy#

  • 578.37 kW

    total power usage

  • 57 power distribution units (PDUs)

Storage#

  • 7.0 PB $SCRATCH

    parallel, distributed filesystem, delivering over 200 GB/s of I/O bandwidth

  • 51.3 PB $OAK

    long-term research data storage

Networking#

  • 104 InfiniBand switches

    across 2 InfiniBand fabrics (EDR, HDR)

  • 5,571 InfiniBand cables

    spanning about 29.78 km

  • 53 Ethernet switches

Scheduler#

  • 169 Slurm partitions

  • 47,798 CPU-hours/day

    over 5 years of computing in a single day (see the quick check below)

  • $3,193,693/month

    to run the same workload on t2.large on-demand cloud instances
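
A quick back-of-the-envelope check of the "over 5 years" claim, as a minimal Python sketch; it uses only the 47,798 CPU-hours/day figure listed above, plus the assumption of 24-hour days and non-leap 365-day years.

```python
# Back-of-the-envelope check: how many years of serial computing
# fit into one day of Sherlock's throughput?
cpu_hours_per_day = 47_798        # from the Scheduler section above
hours_per_year = 24 * 365         # assuming non-leap years

years_per_day = cpu_hours_per_day / hours_per_year
print(f"{years_per_day:.2f} years of computing delivered per day")
# -> 5.46 years, i.e. "over 5 years of computing in a single day"
```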