Phoenix HPC System


Cluster Overview

Basic Information

Cluster Type: HPC
Status: Warning
Total Racks: 5
Location: Datacenter US-EAST-1, Hall A
Description: Traditional HPC cluster for scientific computing workloads

Cluster Utilization

5.2% / 91.0% • 874 active jobs

Power Consumption

43.7 MW • PUE: 2.0 • 85.3 kW/node
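The per-node figure above is simply total facility power divided by node count, and the PUE lets you back out the IT-only load. A minimal sketch using the values from this report (variable names are illustrative):

```python
# Values as reported on this dashboard.
total_power_mw = 43.7   # total facility power draw
node_count = 512        # total compute nodes
pue = 2.0               # power usage effectiveness (facility power / IT power)

# Per-node share of total facility power (dashboard shows 85.3 kW/node)
kw_per_node = total_power_mw * 1000 / node_count

# PUE = facility power / IT power, so the IT-only load is facility / PUE
it_power_mw = total_power_mw / pue

print(f"{kw_per_node:.1f} kW/node, {it_power_mw:.2f} MW IT load")
```

At a PUE of 2.0, only half of the 43.7 MW actually reaches compute hardware; the rest goes to cooling and power delivery overhead.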

GPU Health

N/A (all GPUs operational)

Compute Performance

873.3K TFLOPS • 437 jobs queued
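The aggregate throughput figure can be sanity-checked against the GPU count listed under GPU Configuration; a quick consistency check with the numbers as reported:

```python
# Consistency check: aggregate FLOPS vs. GPU count, both taken from this report.
total_tflops = 873.3e3   # 873.3K TFLOPS aggregate compute performance
gpu_count = 8735         # total GPUs listed under GPU Configuration

per_gpu_tflops = total_tflops / gpu_count
print(f"{per_gpu_tflops:.1f} TFLOPS per GPU")  # works out to roughly 100 TFLOPS per device
```

Roughly 100 TFLOPS per device suggests the aggregate is derived from the GPU inventory rather than measured independently.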

Cluster Specifications

Compute Resources

Total Nodes: 512
CPU Cores: 32,768
Memory: 128 TB
Storage: 10 PB

GPU Configuration

Total GPUs: 8,735
GPU Model: Max 1550
Topology: CUSTOM
Interconnect: SLINGSHOT
GPU Utilization: 91%

Network Configuration

Compute Fabric: SLINGSHOT
Topology: MESH
Bandwidth: 88 Tbps
Latency: 4,373 ns
Management Subnet: 10.1.42.0/24
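The /24 management subnet can be inspected with Python's standard `ipaddress` module; the sample host address below is hypothetical, not taken from this report:

```python
import ipaddress

# Management subnet as listed in the network configuration above
mgmt = ipaddress.ip_network("10.1.42.0/24")

print(mgmt.num_addresses)                       # 256 addresses in a /24
print(mgmt.network_address, mgmt.broadcast_address)

# Hypothetical BMC address: check whether it falls inside the management subnet
bmc = ipaddress.ip_address("10.1.42.17")
print(bmc in mgmt)  # True
```

A /24 leaves 254 usable host addresses, which comfortably covers management interfaces for the 5 racks listed below.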


Rack Composition

Rack   Type        Power         Cooling    Inlet → Outlet  Space
R1-1   MANAGEMENT  10.7 / 35 kW  rear door  14°C → 42°C     16/48U (32U free)
R1-2   MANAGEMENT  11.5 / 35 kW  rear door  10°C → 50°C     16/48U (32U free)
R1-3   MANAGEMENT  12.3 / 35 kW  rear door  26°C → 22°C     16/48U (32U free)
R1-4   MANAGEMENT  13.1 / 35 kW  rear door  22°C → 30°C     16/48U (32U free)
R1-5   MANAGEMENT  13.9 / 35 kW  rear door  18°C → 37°C     16/48U (32U free)
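The per-rack figures above can be rolled up into total power and space headroom. A small sketch using the numbers exactly as listed (the data structure is illustrative):

```python
# Per-rack (used_kw, cap_kw, free_u) tuples, copied from the rack list above.
racks = {
    "R1-1": (10.7, 35, 32),
    "R1-2": (11.5, 35, 32),
    "R1-3": (12.3, 35, 32),
    "R1-4": (13.1, 35, 32),
    "R1-5": (13.9, 35, 32),
}

used_kw = sum(u for u, _, _ in racks.values())
cap_kw = sum(c for _, c, _ in racks.values())
free_u = sum(f for _, _, f in racks.values())

print(f"power: {used_kw:.1f}/{cap_kw} kW used, {cap_kw - used_kw:.1f} kW headroom")
print(f"space: {free_u}U free across {len(racks)} racks")
```

With well under half of the rack power budget drawn and 32U free in every rack, the listed racks have substantial expansion headroom.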

Workload Scheduler

Type: CUSTOM
Endpoint: https://dead-grouper.info/
Version: 0.3.8
Jobs Running: 874
Jobs Queued: 437

Configuration

Auto Scaling: Disabled
Power Capping: Disabled
Maintenance Window: Sat 20:00 (7h)

Metadata

Created: 7/5/2025, 9:26:52 PM
Last Updated: 7/5/2025, 9:26:52 PM
Tags: staging