Brookfield Admin Control


Clusters

Clusters are the physical hardware that runs your workloads.


Total Clusters: 6
Total GPUs: 448
Average GPU Utilization: 88.4% (9.3% vs last week)
Job Failure Rate: 4.5% (1.3% vs last week)

Titan AI Training Pod (Healthy)
AI Training • ASIA-PAC-1 - Hall D
Nodes: 32
GPUs: 256 H100 SXM
Compute: 356.4K TFLOPS
Memory: 64 TB
Storage: 2 PB
Utilization: 86.9%
Jobs: 356 running, 178 queued
Power: 17.8 MW (1.00 PUE)
Inlet Temp: 22.0°C (26.0°C max)

Mercury Inference Cluster (Healthy)
AI Inference • US-EAST-1 - Hall A
Nodes: 32
GPUs: 128 L4, L40S
Compute: 390.0K TFLOPS
Memory: 16 TB
Storage: 0.5 PB
Utilization: 87.1%
Jobs: 390 running, 195 queued
Power: 19.5 MW (1.00 PUE)
Inlet Temp: 23.0°C (26.0°C max)

Phoenix HPC System (Warning)
HPC • US-EAST-1 - Hall A
Nodes: 512
Compute: 873.3K TFLOPS
Memory: 128 TB
Storage: 10 PB
Utilization: 91.0%
Jobs: 874 running, 437 queued
Power: 43.7 MW (2.00 PUE)
Inlet Temp: 33.0°C (38.0°C max)

Atlas Container Platform (Healthy)
Kubernetes • EU-CENTRAL-1 - Hall C
Nodes: 64
Compute: 606.7K TFLOPS
Memory: 32 TB
Storage: 1 PB
Utilization: 88.9%
Jobs: 607 running, 303 queued
Power: 30.4 MW (2.00 PUE)
Inlet Temp: 27.0°C (31.0°C max)

Nimbus Compute Farm (Healthy)
General Compute • EU-CENTRAL-1 - Hall C
Nodes: 128
Compute: 800.1K TFLOPS
Memory: 64 TB
Storage: 5 PB
Utilization: 90.4%
Jobs: 800 running, 400 queued
Power: 40.0 MW (2.00 PUE)
Inlet Temp: 31.0°C (36.0°C max)

Radeon AI Accelerator (Healthy)
AI Training • US-EAST-1 - Hall A
Nodes: 8
GPUs: 64 MI300X
Compute: 238.3K TFLOPS
Memory: 16 TB
Storage: 0.5 PB
Utilization: 85.9%
Jobs: 238 running, 119 queued
Power: 11.9 MW (1.00 PUE)
Inlet Temp: 20.0°C (23.0°C max)
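The fleet-level totals on this page can be sanity-checked against the per-cluster figures. A minimal sketch, assuming clusters with no GPUs entry (Phoenix, Atlas, Nimbus) count as zero GPUs and that the fleet average utilization is an unweighted mean across clusters:

```python
# Per-cluster figures transcribed from the cluster cards above.
clusters = {
    "Titan AI Training Pod":     {"gpus": 256, "utilization": 86.9},
    "Mercury Inference Cluster": {"gpus": 128, "utilization": 87.1},
    "Phoenix HPC System":        {"gpus": 0,   "utilization": 91.0},  # no GPUs listed
    "Atlas Container Platform":  {"gpus": 0,   "utilization": 88.9},  # no GPUs listed
    "Nimbus Compute Farm":       {"gpus": 0,   "utilization": 90.4},  # no GPUs listed
    "Radeon AI Accelerator":     {"gpus": 64,  "utilization": 85.9},
}

# Sum of per-cluster GPU counts: 256 + 128 + 64.
total_gpus = sum(c["gpus"] for c in clusters.values())

# Unweighted mean of per-cluster utilization.
avg_util = sum(c["utilization"] for c in clusters.values()) / len(clusters)

print(total_gpus)           # 448
print(round(avg_util, 1))   # 88.4
```

Both results match the "Total GPUs" (448) and "Average GPU Utilization" (88.4%) summary cards, which suggests the dashboard aggregates per cluster rather than per GPU.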