GIGABYTE has introduced the AI TOP ATOM, a compact desktop AI supercomputer with 1 petaFLOP of AI performance that is very similar to the NVIDIA DGX Spark. That’s because it’s also built around the NVIDIA Grace Blackwell GB10 Superchip. Housed in a 1-liter chassis, it’s designed to run generative AI, large language model, and machine learning workloads directly on the desktop.
The AI TOP ATOM features 128 GB of LPDDR5x unified memory and supports up to 4 TB of PCIe Gen5 SSD storage. Its 1,000 TOPS (1 petaFLOP) of FP4 AI compute enables it to handle models with up to 200 billion parameters, or 405 billion in a dual-system configuration. Connectivity includes 10GbE networking, Wi-Fi 7, Bluetooth 5.3, HDMI 2.1a, and multiple USB 3.2 Gen 2×2 Type-C ports.
GIGABYTE AI TOP ATOM specifications:
- SoC – NVIDIA GB10
- CPU – 20-core Armv9 processor with 10x Cortex-X925 cores and 10x Cortex-A725 cores
- Architecture – NVIDIA Grace Blackwell
- GPU – Blackwell Architecture
- CUDA Cores – Blackwell Generation
- 5th Gen Tensor cores
- 4th Gen RT (Ray Tracing) cores
- Tensor Performance – 1000 AI TOPS (FP4)
- VPU – 1x NVENC, 1x NVDEC
- System Memory – 128 GB 256-bit LPDDR5x memory (273 GB/s memory bandwidth)
- Storage – Up to 4 TB PCIe Gen5 SSD
- Display – HDMI 2.1a port
- Audio – HDMI multichannel audio output
- Networking
- 10GbE RJ45 port
- ConnectX-7 Smart NIC to connect two systems together at speeds of up to 200 Gbps
- WiFi 7 and Bluetooth 5.3
- USB
- 1x USB 3.2 Gen 2×2 Type-C port (PD IN)
- 3x USB 3.2 Gen 2×2 Type-C ports (up to 20Gbps)
- Power Consumption – 240W external adapter
- Dimensions – 150 x 150 x 50.5 mm
- Weight – 1.2 kg

The device is also scalable through the NVIDIA ConnectX-7 SmartNIC, which provides high-bandwidth, low-latency communication between systems. A single unit supports AI models with up to 200 billion parameters, while connecting two systems boosts capacity to 405 billion parameters, allowing users to handle larger generative AI workloads.
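As a quick sanity check on those figures (my own back-of-the-envelope math, not GIGABYTE's numbers), a 200-billion-parameter model quantized to FP4 occupies roughly half a byte per weight, which is why it fits in 128 GB of unified memory while a 405B model needs two linked units. The ~20% overhead factor below is an assumed allowance for KV cache, activations, and runtime buffers:

```python
def model_footprint_gb(params_billion: float, bits_per_weight: float,
                       overhead: float = 1.2) -> float:
    """Approximate LLM weight memory in GB.

    The 20% overhead is an assumed allowance for KV cache,
    activations, and runtime buffers, not a measured figure.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

print(f"200B @ FP4: {model_footprint_gb(200, 4):.0f} GB")  # ~120 GB, fits in 128 GB
print(f"405B @ FP4: {model_footprint_gb(405, 4):.0f} GB")  # ~243 GB, needs two units
```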

In terms of software, the device supports NVIDIA DGX OS and Ubuntu Linux. Additionally, it fully integrates the NVIDIA AI software stack for local AI development, training, and deployment. GIGABYTE also provides AI TOP Utility, which simplifies AI workflows with built-in tools for model download, inference, RAG, and machine learning.
The GIGABYTE AI TOP ATOM is available on Newegg, starting at $3,499.99 for the 1TB PCIe 4.0 SSD model, while the 4TB PCIe 4.0 and 4TB PCIe 5.0 versions are priced at $3,899.99 and $3,999.99, respectively. More details are available on the product page and the press release.
Debashis Das is a technical content writer and embedded engineer with over five years of experience in the industry. With expertise in Embedded C, PCB design, and SEO optimization, he effectively distills difficult technical topics into clear communication.
Am I the only one to think that they’re using the exact same motherboard and only changed the enclosure? The only difference I can find with the DGX Spark is that they’re offering the 1TB version to save $500. I think that a stripped-down version of this system without the 100GbE NIC and with only 32GB RAM could make a very nice Arm development workstation, but as it is, it’s still far too expensive for a general-purpose device and will only target those who need to work intensively with AI.
I think the NICs are an inherent part of the design because they allow you to link multiple devices. Having 2 links also means you can extend this to more than 2 units without an expensive 25Gbps switch. AFAIK these links also support RDMA, which I guess is very useful for splitting AI loads.
If you just want a (slightly immature) Arm workstation, try the Orion O6, or its cheaper variants (although they use less powerful cores than the Spark, and half the memory).
I do have an Orion O6 (which is my AI box BTW), and what I’m seeing here is a more than twice as powerful machine, which is why I’m saying it could make a nice workstation.
does the O6 work for AI? cix never released any SDK related to the NPU
I’m using it with LLMs on CPU only. Here, what matters most is the DRAM bandwidth. It’s not huge due to internal limitations in the SoC (about 45GB/s), but it’s better than in the other PCs I have around.
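The commenter's point can be sketched with a simple bound: single-stream LLM decoding is typically memory-bound, since each generated token requires roughly one full pass over the model weights, so throughput is capped at bandwidth divided by model size. The model size below is a hypothetical example, not a measurement:

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical upper bound on single-stream decode speed for a
    memory-bound LLM: one full weight pass per generated token."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 8B-class model quantized to ~4.5 GB:
print(decode_tokens_per_sec(45, 4.5))    # Orion O6-class, ~45 GB/s
print(decode_tokens_per_sec(273, 4.5))   # GB10-class, 273 GB/s
```

Real throughput lands below these ceilings, but the ratio between the two machines holds, which is why DRAM bandwidth dominates CPU-only LLM inference.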
more than twice, at 8x the price? No thanks.
This is just an OEM version of the Nvidia DGX Spark, just like all the other OEM versions that were announced by others like Dell, HP, etc.
Still very expensive compared to all the mini-PCs you can get based on the AMD Ryzen AI Max+ 395 with 128GB RAM, which can almost perform similarly.
These machines are specifically designed for AI performance. The AI Max+ 395 does about 126 TOPS out of the box. These do 1000 (> 8x). It’s not even close. If you’re not focused on AI workloads, there is no reason to buy a GB10 based workstation but if you are, these provide a rather ideal balance of performance, memory, power consumption (240W Peak), and price. There simply is no out-of-the-box competition right now.
First things first, you’re comparing FP4 TOPS to INT8 TOPS. Marketing got you.
Even more, it compares sparse FP4 vs ~BF16, and at half the price.
To be fair, for the AI Max+ 395, AMD adds up CPU (8 TFLOPS BF16) + GPU (59 TFLOPS BF16) + NPU (55? TOPS BF16-block).
The AI performance has been divided in half again, pray we don’t reduce it further:
Article on VideoCardz: “John Carmack says NVIDIA DGX Spark runs at half of the rated power and delivers half the quoted performance”
> which can almost perform similarly.
1000 TOPS?
If not, and you don’t need the TOPS, these devices are … not for you?
Sadly, CPU makers do make their consumer CPUs for Windows, and since Windows now advertises Copilot here and there, they will pack NPUs into their CPUs without asking whether you need them or not. I hope they find out not everything is AI and release some good CPUs without NPU capabilities for the people who want to stay away completely from AI c**p.
[ Maybe; I’m thinking about the rate of updating the local LLM to keep its output current for AI-related results. That’s a huge amount of data through the networks for current LLMs (~60/150GB-700&GB on disk (~GPT3.5/4; GPT4 ~USD 78M training cost) for about 200-billion-parameter LLMs?), and the 1000 TOPS are FP4(?)
and, yes, impressive development (thx) ]
Wow, cool machine, and very nice price.
Does someone maintain Arm binaries for all the key AI libraries?
I’ll just stick to my PS5 and 1st gen Amazon echo thank you very much.
Here’s an interesting post about bugs in the DGX Spark that might also be relevant here:
https://publish.obsidian.md/aixplore/Practical+Applications/dgx-lab-benchmarks-vs-reality-day-4