Professional AI Platforms
Build, fine-tune, and deploy AI models locally with workstation-class systems powered by NVIDIA Grace Blackwell architecture. These systems bring data-centre-grade acceleration, memory coherence, and AI performance directly to the deskside.
Unified CPU-GPU design delivers exceptional performance for LLMs, multimodal AI, and large-scale inference.
Ready-to-run NVIDIA AI software stack including CUDA®, cuDNN, TensorRT™, and containerised frameworks to accelerate development from day one.
Develop, validate, and deploy AI workloads locally before scaling to edge or data centre environments, using the same NVIDIA platform.
Sensitive datasets, prompts, and outputs remain fully in your environment under your security policies.
Avoid cloud egress fees, unpredictable billing, and metered runtime charges.
Train and iterate instantly without network delays, perfect for active development cycles.
Ideal for healthcare, finance, R&D, defence, and government workloads that must stay on‑prem.
Small, quiet, powerful — great for dev teams.
Deskside AI supercomputer for big models.
The ASUS Ascent GX10 places a true AI supercomputer on your desktop with NVIDIA GB10 Grace Blackwell architecture — combining a high‑performance 20‑core ARM CPU, integrated Blackwell GPU, and 128 GB LPDDR5X unified system memory for powerful local AI workloads.
A scalable 2‑node cluster of ASUS GX10 AI supercomputers linked via high‑speed NVIDIA ConnectX‑7 networking. Ideal for teams, labs, and dev groups needing high‑throughput local AI inference and fine‑tuning. Two nodes deliver higher aggregate performance and capacity than a single unit, while remaining desk‑friendly.
A deskside AI system built to deliver data‑centre‑class performance for local development, training, and research. Powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip with NVLink‑C2C, it brings unified memory and up to 20 PFLOPS FP4 of compute directly to your workspace.
Run big models — really big
Fast connections, smoother workflows
We design compute, storage, and networking tuned to your AI workloads and production goals.
We configure and benchmark NVIDIA hardware and software so your team can begin immediately.
ANZ-based support with warranty options up to 5 years ensures fast response and peace of mind.
We build ConnectX‑7/8 networks for multi-node training, distributed inference, and private AI clusters.
We integrate TrueNAS, Seagate, QNAP, and other enterprise storage platforms to streamline AI data pipelines.
Yes — two GX10 nodes linked via ConnectX‑7 can serve larger models like Llama 3.1 405B on‑prem. The ET900N (GB300) goes further with more compute and much larger unified memory.
A single GX10 delivers roughly 1 petaFLOP of FP4 compute with 128 GB of unified memory, plus built‑in ConnectX‑7 and 10G LAN, making it ideal for private prototyping, fine‑tuning, and local inference.
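For a rough sense of what fits in 128 GB of unified memory, the sketch below applies a rule of thumb that is our assumption, not a vendor figure: about 0.5 bytes per parameter for 4‑bit weights, plus roughly 30% headroom for KV cache and runtime buffers.

```python
# Rule-of-thumb estimate (assumption, not a vendor figure): can a model's
# 4-bit-quantised weights plus working memory fit in 128 GB of unified memory?
def fits_in_unified_memory(params_billion: float,
                           memory_gb: float = 128.0,
                           bytes_per_param: float = 0.5,   # ~FP4/INT4 weights
                           overhead: float = 1.3) -> bool:  # KV cache, buffers
    needed_gb = params_billion * bytes_per_param * overhead
    return needed_gb <= memory_gb

for size_b in (8, 70, 120, 405):
    print(f"{size_b}B params -> fits on one GX10: {fits_in_unified_memory(size_b)}")
```

On those assumptions a 405B‑parameter model exceeds a single node, which lines up with the two‑node answer above.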
Yes — it refers to FP4 AI performance on the GB300 Grace Blackwell Ultra platform used in the ET900N, designed for high‑throughput LLM training and inference.
Inside each system, CPU↔GPU uses NVLink‑C2C for low‑latency shared memory access; between systems, GX10 uses ConnectX‑7 and ET900N uses ConnectX‑8 for high‑bandwidth clustering.
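To see how the accelerators and NICs in a particular box are wired together, the driver's standard topology report is the quickest check. A minimal Python sketch, assuming the NVIDIA driver and nvidia-smi are installed:

```python
import subprocess

# Print the interconnect topology (GPUs, NICs, NVLink/PCIe paths) as reported
# by the NVIDIA driver. The exact matrix depends on the system you run it on.
result = subprocess.run(["nvidia-smi", "topo", "-m"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```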
Local systems avoid unpredictable egress and cross‑AZ data charges and reduce iteration latency — you spend more time training and less time waiting (or paying for bandwidth).
ET900N ships with NVIDIA DGX OS and the NVIDIA AI software stack; GX10/DGX Spark comes ready for local model work with NVIDIA’s preinstalled stack.
Yes — on NVIDIA GPUs that support MIG (Multi‑Instance GPU) you can carve a physical GPU into multiple, hardware‑isolated GPU instances (each with dedicated memory, compute, and cache). This lets multiple users or services run in parallel without noisy‑neighbour effects, simplifies quota management, and improves utilisation. Teams can target specific MIG instances via CUDA device IDs; in clusters, Kubernetes can schedule MIG profiles using the NVIDIA device plugin.
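A minimal sketch of pinning a process to one MIG instance from Python follows; the UUID is a placeholder and PyTorch is assumed to be installed (any CUDA‑aware framework behaves the same). In Kubernetes, you would instead request a MIG profile resource exposed by the NVIDIA device plugin.

```python
import os
import subprocess

# List physical GPUs and their MIG instances; each MIG device is reported
# with a "MIG-..." UUID (requires MIG mode to be enabled on the GPU).
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)

# Pin this process to one MIG instance by UUID (placeholder value below;
# replace it with a UUID printed by the command above). This must be set
# before any CUDA context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-<uuid-from-nvidia-smi-L>"

import torch  # assumed installed; the process now sees only that MIG slice
if torch.cuda.is_available():
    print("Visible device:", torch.cuda.get_device_name(0))
```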
* Prices shown are exclusive of GST and may change without prior notice. Product images are for illustration purposes only. Final product appearance and specifications may vary.
Get personalised recommendations from our workstation specialists. We'll help you find the perfect system for your needs.
Request a Consultation