Professional AI Platforms

AI Development at the Deskside.

Build, fine-tune, and deploy AI models locally with workstation-class systems powered by NVIDIA Grace Blackwell architecture. Designed to bring data-centre-grade acceleration, memory coherence, and AI performance directly to the deskside.

Grace Blackwell Architecture

Unified CPU-GPU design delivers exceptional performance for LLMs, multimodal AI, and large-scale inference.

Pre-Installed NVIDIA AI Software

Ready-to-run NVIDIA AI software stack including CUDA®, cuDNN, TensorRT™, and containerised frameworks to accelerate development from day one.

From Prototype to Production

Develop, validate, and deploy AI workloads locally before scaling to edge or data centre environments, using the same NVIDIA platform.

Why Run AI Locally?

Keep Your Data Local

Sensitive datasets, prompts, and outputs remain fully in your environment under your security policies.

Predictable Costs

Avoid cloud egress fees, unpredictable billing, and variable runtime consumption.

Low‑Latency Iteration

Train and iterate without network delays, making local hardware ideal for active development cycles.

Compliance Ready

Ideal for healthcare, finance, R&D, defence, and government workloads that must stay on‑prem.

Ultra‑Small Desktop AI Supercomputer

ASUS GX10

The ASUS Ascent GX10 places a true AI supercomputer on your desktop with NVIDIA GB10 Grace Blackwell architecture, combining a high‑performance 20‑core Arm CPU, an integrated Blackwell GPU, and 128 GB of LPDDR5X unified system memory for powerful local AI workloads.

AI Performance: ~1 petaFLOP (FP4) for inference and fine‑tuning
CPU: 20‑core Arm v9.2‑A (GB10)
GPU: integrated NVIDIA Blackwell GPU (GB10)
Unified Memory: 128 GB LPDDR5X shared system memory
Storage: 1/2/4 TB M.2 NVMe SSD options
Networking: NVIDIA ConnectX‑7 SmartNIC & 10G Ethernet
Connectivity: Wi‑Fi 7 & Bluetooth 5.4, USB‑C & HDMI
Footprint: 150 × 150 × 51 mm | ~1.48 kg
Warranty: 12, 24, or 36 months onsite
From A$6,363 ex GST*
Pre‑Configured AI Cluster Bundle

Dual ASUS GX10 AI Cluster

A scalable 2‑node cluster of ASUS GX10 AI supercomputers linked via high‑speed NVIDIA ConnectX‑7 networking. Ideal for teams, labs, and dev groups needing high‑throughput local AI inference and fine‑tuning. Two nodes deliver higher aggregate performance and capacity than a single unit, while remaining desk‑friendly.

Interconnect (In‑Node): NVLink‑C2C links the CPU and GPU within each GX10 for fast, low‑latency data sharing
AI Performance: up to ~2 petaFLOPs (FP4) across two nodes, ~1 petaFLOP (FP4) per node
CPU: two 20‑core Arm v9.2‑A CPU complexes (GB10), one per node
GPU: two integrated NVIDIA Blackwell GPUs (GB10), one per node
Unified Memory: 2 × 128 GB LPDDR5X (256 GB total), allocated per node
Storage: up to 4 TB NVMe per node (up to 8 TB across the cluster)
Networking: NVIDIA ConnectX‑7 node‑to‑node link, plus integrated 10G LAN for external connectivity
Cluster Benefits: runs larger AI models and datasets locally, up to Llama 3.1 (405B parameters)
Warranty: 36‑month onsite coverage, with options for extended support and protection
From A$13,453 ex GST*
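The 405B figure above can be sanity‑checked with back‑of‑envelope arithmetic. A minimal sketch, assuming FP4 weights at 0.5 bytes per parameter and ignoring KV‑cache and activation overhead (which reduce usable headroom in practice):

```python
# Back-of-envelope memory check for the dual-GX10 cluster (assumptions:
# FP4 weights at 0.5 bytes/parameter; KV cache and activations not counted).

def fp4_weight_gb(params: float) -> float:
    """Approximate weight memory, in GB, for a model quantised to FP4."""
    return params * 0.5 / 1e9

llama_405b_gb = fp4_weight_gb(405e9)   # 202.5 GB of weights
cluster_gb = 2 * 128                   # two GX10 nodes, 128 GB unified memory each

print(f"Llama 3.1 405B @ FP4: ~{llama_405b_gb:.1f} GB")
print(f"Fits in {cluster_gb} GB of cluster memory: {llama_405b_gb < cluster_gb}")
```

At FP16 (2 bytes per parameter) the same model would need ~810 GB, which is why the 405B claim is specific to 4‑bit inference.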
Superior Deskside AI Supercomputer

ASUS ExpertCenter Pro ET900N G3

A deskside AI system built to deliver data‑centre‑class performance for local development, training, and research. Powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip with NVLink‑C2C, it brings unified memory and up to 20 PFLOPS FP4 of compute directly to your workspace.

AI Performance: up to 20 PFLOPS (FP4) next‑generation AI compute
Superchip: NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip
Unified Memory: up to 775 GB coherent CPU–GPU memory
NVLink‑C2C Interconnect: high‑bandwidth, low‑latency CPU–GPU chip‑to‑chip link
Tensor Cores: 5th‑generation NVIDIA Tensor Cores for FP4 acceleration
Software Stack: NVIDIA DGX OS and NVIDIA AI software stack
Use Cases: LLM training, multimodal AI, deep learning research, simulation workloads
NVIDIA MIG Technology: Multi‑Instance GPU partitioning so multiple developers or teams can run isolated workloads in parallel on one system
Expandability & PCIe: cluster‑ready interconnects and three full‑length PCIe x16 slots for additional GPUs or accelerators
Notify me when available

Run big models — really big

  • Single GX10: ~1 PFLOP FP4, 128 GB unified memory.
  • Dual GX10: ~2 PFLOPS FP4 across two linked nodes.
  • ET900N (GB300): up to 20 PFLOPS FP4, 775 GB memory.
  • Why it matters: Faster iteration, no egress fees.
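The comparison above can be turned into a rough sizing helper: given a parameter count and bytes per parameter, pick the smallest listed system whose unified memory holds the weights. A sketch under the same weights‑only assumption (`smallest_fit` is a hypothetical helper, not a real sizing tool):

```python
# Pick the smallest system (by the unified-memory figures quoted above)
# whose memory holds a model's weights. Sketch only: ignores KV cache,
# activations, and framework overhead, all of which reduce usable capacity.
SYSTEMS = [
    ("Single GX10", 128),         # GB unified memory
    ("Dual GX10 cluster", 256),
    ("ET900N (GB300)", 775),
]

def smallest_fit(params: float, bytes_per_param: float = 0.5):
    """Return the name of the smallest system that fits the weights, or None."""
    need_gb = params * bytes_per_param / 1e9
    for name, mem_gb in SYSTEMS:
        if need_gb <= mem_gb:
            return name
    return None

print(smallest_fit(70e9))                      # 35 GB    -> Single GX10
print(smallest_fit(405e9))                     # 202.5 GB -> Dual GX10 cluster
print(smallest_fit(405e9, bytes_per_param=2))  # 810 GB   -> None (exceeds 775 GB)
```

Real deployments should budget additional memory for KV cache and runtime overhead, so treat the boundary cases conservatively.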

Fast connections, smoother workflows

  • Single GX10: ConnectX-7 + 10G + Wi-Fi 7.
  • Dual GX10: High-bandwidth shared workloads.
  • ET900N (GB300): ConnectX-8 up to 800 Gb/s.

We Help You Get Started

Sizing & Architecture

We design compute, storage, and networking tuned to your AI workloads and production goals.

Onsite Install & Validation

We configure and benchmark NVIDIA hardware and software so your team can begin immediately.

Local Support & Warranty

ANZ-based support with warranty options up to 5 years ensures fast response and peace of mind.

Scale-Out & Clustering

We build ConnectX‑7/8 networks for multi-node training, distributed inference, and private AI clusters.

Data & Storage Integration

We integrate TrueNAS, Seagate, QNAP and enterprise storage to streamline AI data pipelines.

* Prices shown are exclusive of GST and may change without prior notice. Product images are for illustration purposes only. Final product appearance and specifications may vary.

Ready to Elevate Your Workflow?

Get personalised recommendations from our workstation specialists. We'll help you find the perfect system for your needs.

Request a Consultation