MILK-V Shenzhen Technology has just unveiled the Jupiter 2, the first RVA23-compliant RISC-V SBC powered by a 2.4 GHz SpacemiT K3 octa-core X100 CPU with up to 60 TOPS of AI performance, up to 32GB LPDDR5, 256GB UFS, and PCIe Gen3 x4 NVMe SSD support.
Designed by SpacemiT themselves, the board also features an eDP connector, a 10GbE SFP+ cage, a Gigabit Ethernet RJ45 port, built-in WiFi 6 and Bluetooth 5.2 wireless connectivity, two USB Type-C connectors, four USB 2.0 ports, an M.2 Key-B socket coupled with a NanoSIM card slot for 4G LTE or 5G cellular connectivity, and more.
Jupiter 2 specifications (preliminary):
- System-on-Module – K3-CoM260 (aka Jupiter 2 NX)
- SoC – SpacemiT K3
- CPU
- 8x 64-bit RISC-V X100 “big” cores clocked up to 2.4 GHz, RVA23-compliant; 130 KDMIPS performance (similar to the RK3588)
- 8x RISC-V A100 AI Cores with support for up to 1024-bit RVV1.0 parallel computing, optimized for matrix operations.
- GPU – Imagination Technologies BXM4-64-MC1 GPU with Vulkan 1.3, OpenCL 3.0, and OpenGL ES 1.1/2.0/3.2 support
- VPU
- Video decoder – H.265, H.264, VP9 up to 4K @ 120 FPS
- Video encoder – H.265, H.264 up to 4K @ 60 FPS
- AI – Up to 60 TOPS (INT4) of AI performance using dedicated TCM and DMA acceleration channels
- System Memory – Up to 32GB LPDDR5 @ 6400 MT/s (51GB/s bandwidth)
- Storage
- Up to 256GB UFS storage
- SPI NOR flash
- MicroSD card slot (yes, on the module)
- Host interface – 260-pin SO-DIMM edge connector
- Storage – M.2 Key-M 2280 (PCIe Gen3 x4) socket for NVMe SSD
- Display Interface
- eDP connector
- 1x USB-C port with DP 1.2
- Audio – Audio connector
- Networking
- 10GbE SFP+ cage via Realtek RTL8127 controller
- Gigabit Ethernet RJ45 port via Realtek RTL8211 controller
- WiFi 6 and Bluetooth 5.2 module with two IPEX antenna connectors
- Optional 4G LTE/5G cellular via M.2 Key-B socket
- USB
- 2x USB-C ports
- 1x USB 3.2 port with DP 1.2 Alt. mode and USB PD
- 1x USB 3.2 OTG port
- 4x USB 2.0 Type-A ports
- Expansion
- M.2 M-Key 2280 (PCIe Gen3 x4) socket
- M.2 B-Key 2242/3052 (PCIe x2 + USB 2.0) socket
- 2x “RTI” FPC connectors supporting EtherCAT, CAN-FD, and other interfaces for microsecond-level motion control and robotics
- EC-IO connector for the Embedded Controller managing fan, I2C, GPIO, button, and LED
- Misc
- 3x buttons
- RTC battery connector
- 4-pin SYS connector
- Power Supply up to 65W
- 12V DC up to 7A via 2-pin ATX connector
- USB PD via USB-C port
- Dimensions
- Board: 100 x 86 mm (Pico-ITX Plus form factor)
- With heatsink: 103 x 90.5 x 35mm
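As an aside, the 51GB/s memory bandwidth figure ties out with the 6400 MT/s rating if we assume a 64-bit memory interface; the bus width is an assumption here, as SpacemiT has not published it:

```python
# Quick check of the quoted figure: 6400 MT/s on an assumed 64-bit
# (8 bytes per transfer) LPDDR5 interface. The bus width is an
# assumption, not a published SpacemiT number.
transfers_per_second = 6400e6
bytes_per_transfer = 8
bandwidth_gbs = transfers_per_second * bytes_per_transfer / 1e9
print(bandwidth_gbs)  # 51.2
```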

The specifications are still a work in progress since the company didn’t release the full details. I was initially surprised to find out the SpacemiT K3 is advertised as an octa-core processor, since tests on a remote K3 RISC-V platform showed 16 cores in Linux (Ubuntu 26.04), albeit with only eight cores usable [Update: That’s because there are eight X100 cores and eight A100 “AI” cores; the specs have been updated; check the comments and block diagram below for details].

On the software front, the Jupiter 2 board will support Bianbu 3.0 OS, Ubuntu 26.04 (thanks to RVA23 compatibility), OpenHarmony 6.0, OpenKylin 2.0, Deepin 25, and Fedora. The company highlights support for RV Hypervisor 1.0, AIA, and RV IOMMU extensions, as well as hardware virtualization for CPU, memory, and I/Os. The system-on-module appears to be compatible with NVIDIA Jetson Orin Nano/NX carrier boards based on the photo below.

Early benchmarks for the SpacemiT K3 indicate Rockchip RK3588 levels of multi-core performance, and single-core results slightly lower than a Raspberry Pi 5’s. What you get is much faster storage, networking, a proper video processing unit with 4K decoding and encoding, and a built-in 60 TOPS NPU. Memory bandwidth should also be significantly better, but early memset/memcpy results are only marginally better than those of a Raspberry Pi 5. Some data was also shared to show the performance delta between the SpacemiT K1 and K3 SoCs when using the Jupiter NX and Jupiter 2 NX modules.

The Jupiter 2 RISC-V SBC is expected to ship in April 2026, and MILK-V has yet to provide price information, but they’ve already launched pre-orders on Arace, where users can spend $5 to get a $50 discount once the board becomes available. Everything is super confusing, as there appear to be three products: the SpacemiT K3 system-on-module, the Jupiter 2 SBC, and a Jupiter 2 NX devkit with the Radxa C200 carrier board and K3-CoM260 SoM. [Update: prices listed on Arace: $199 for the Jupiter 2, $199 for the Jupiter 2 NX, and $239 for the Jupiter 2 NX Devkit. See comments section, price may differ depending on your location]. Additional details may also be found on the product page and in the announcement on X.

Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.

12V via a 2-pin ATX connector… so the SoC may not be usable for portable devices? Maybe with underclocking? But is the RK3588 used in portable devices?
> I was first surprised to find out the SpacemiT K3 is an octa-core processor, since tests on a remote K3 RISC-V platform showed 16 cores in Linux (Ubuntu 26.04), albeit with only eight cores usable.
I’ve been using the remote test machine for several weeks and we just in the last couple of days got the information on how to run Linux programs (the same ones as for the X100 cores) on the A100 cores.
In general the A100 cores are around 40% to 60% of the speed of the X100 cores, but that’s enough to provide a meaningful boost to normal workloads, not just “AI” ones. For example, just using the 8 “AI” cores alone builds a Linux kernel in 39m23s, which is faster than the 42m12s of the previous fastest cheap(ish) SBC, the Milk-V Megrez with EIC7700X (which is just quad-core). The eight X100s manage this in 16m56s, and using distcc to combine both X100 and A100 cores (which adds a lot of preprocessing / compression / networking overhead) drops this to 14m26s, a full 3 times faster than the Megrez.
On my own (single-core) primes test, a 2.0 GHz A100 core comes in at 12.0 seconds, just a little faster than the original 1.5 GHz Pi 4 at 12.1s and a little slower than the 1.85 GHz C910-cored TH1520 (LPi4A, Meles). And that’s the slow cores! The 2.4 GHz X100 cores do it in 6.9s, faster than a Core 2 Duo with the same clock speed (7.55s), and just a little slower than a Graviton 2 (Neoverse N1) at 6.53s.
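For reference, the quoted kernel build times do support the headline ratio:

```python
# Check the "3 times faster than the Megrez" claim using the kernel
# build times quoted above.
def to_seconds(minutes: int, seconds: int) -> int:
    return minutes * 60 + seconds

megrez_eic7700x = to_seconds(42, 12)  # 42m12s on the quad-core Megrez
k3_x100_a100 = to_seconds(14, 26)     # 14m26s with distcc across both core types
speedup = megrez_eic7700x / k3_x100_a100
print(round(speedup, 2))  # 2.92
```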
The prices are on this page
https://arace.tech/collections/milk-v-jupiter-2-series
$275 Milk-V Jupiter 2
$275 Milk-V Jupiter 2 NX
$330 Milk-V Jupiter 2 Dev Kit
Thanks. I’ve updated the post. I have $199, $199, and $239 here, but I guess it depends on whether VAT and/or tariffs are applied. I assume those are the models with 8GB RAM.
Milk-V Jupiter 2
€170,95
Out of stock
Milk-V Jupiter 2 Dev Kit
€204,95
Out of stock
Milk-V Jupiter 2 NX
€170,95
Out of stock
So lower numbers in the EU than in USA. Because of Trump Taxes? And/or lower dollar?
Those EU prices don’t include VAT, while the US prices include the tariffs. With 20% VAT, it should be around 205 Euros (about $245 US) for the Jupiter 2.
Really odd to label the new one Milk-V Jupiter “2” when it’s not the same mini-ITX form factor as the first one…
“I was first surprised to find out the SpacemiT K3 is an octa-core processor, since tests on a remote K3 RISC-V platform showed 16 cores in Linux (Ubuntu 26.04), albeit with only eight cores usable.”
With a single command you can use the 8 higher cores: https://github.com/sanderjo/SpacemiT-K3-X100-A100/blob/main/processes_on_higher_cores.md
But I think it’s safer to position and market it as an 8-core CPU plus 8 AI cores.
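For reference, the approach in the linked page amounts to setting CPU affinity. A minimal Python sketch follows; the core numbering (X100 cores as CPUs 0-7, A100 cores as CPUs 8-15) is an assumption to verify against /proc/cpuinfo on real hardware:

```python
import os

# Hypothetical core numbering: X100 "big" cores as CPUs 0-7 and A100
# "AI" cores as CPUs 8-15 -- check /proc/cpuinfo on real hardware.
AI_CORES = {8, 9, 10, 11, 12, 13, 14, 15}

def pin_to_ai_cores(pid: int = 0) -> set:
    """Restrict a process (0 = the calling process) to the A100 cores,
    falling back to the currently available CPUs when not on a 16-core K3."""
    available = os.sched_getaffinity(pid)
    target = (AI_CORES & available) or available
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

print(sorted(pin_to_ai_cores()))
```

From a shell, `taskset -c 8-15 <command>` achieves the same one-time placement.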
They are marketing it as an 8-core chip with high potential for AI (it seems to target automation, industrial, robotics, etc.).
The additional 8 cores are different (they lack the hypervisor extension, the vector length is different, the cache sizes are different, some AI block seems to be shared between two cores, etc.).
It looks like you probably can run 16 cores (still, asymmetric 8 + 8) if you run RV64GC. If you use vectors (e.g. IFUNCs in glibc for optimized mem*/str* functions) you probably need to make sure those aren’t migrated between cores with different vector lengths.
Think of it like Esperanto (ET-SoC-1, 1000+ vector optimized cores + 4 big OoO cores) or like Tenstorrent (Blackhole, 16 SiFive X280 cores + 120 Tensix cores [also RISC-V]). The main difference here is that those 8 AI cores are still kinda big and not optimized fully for AI. The above products use RISC-V, but those cores are highly optimized for vectors and matrix operations.
Yes.
With distcc I was able to use the 8 lower and the 8 higher cores at the same time.
See https://github.com/sanderjo/SpacemiT-K3-X100-A100/blob/main/processes_on_higher_cores.md
“It looks like you probably can run 16-cores (still, asymmetric 8 + 8) if you run RV64GC. If you use vectors (e.g. IFUNCs in glibc for optimized mem* str* functions) you probably need to make sure those aren’t migrated between cores with different vector length.”
The current software NEVER migrates a process between core types. You can force a one-time move to an A100 core, which is safe if you do it early in a statically-linked asm program. I’ve written such a program, in 40 instructions, which I’ve made available as /usr/local/aix on the SpacemiT test machine, along with a tiny shell script wrapper to do PATH lookup (tedious in asm without using libraries). So e.g. you can type “aix /usr/bin/gcc -O hello.c -o hello”, which switches to an A100 core and then execve’s “/usr/bin/gcc -O hello.c -o hello” in the same process. Overhead is almost exactly 1ms. Or you can use the shell wrapper and type “aix gcc -O hello.c -o hello”, which does PATH lookup, then exec’s aix, and adds 1ms more overhead.
Note that the AI cores by themselves build a Linux kernel 7% faster than my previous fastest machine, a Milk-V Megrez with EIC7700X. Using both X100 and A100 cores it’s 3x faster than the Megrez.
You can use the full ISA, not just RV64GC. The only ISA difference is H, which does not concern User mode programs. All you have to do is make sure you switch to the AI core **before** starting to use the V extension. My aix helper uses only RV64IMC.
Why do other people’s messages keep paragraph breaks, but mine get all run together?
Ohhh … cancel that … it’s only because my messages are long enough to get a “READ MORE”
60 TOPS (INT4), guys. It is not clearly written anywhere. You might think INT8, that it will give similar performance to the Jetson Orin Nano, but THAT’S NOT TRUE!!! The price is the same as the Jetson Orin Nano (250 USD for Korea) but HALF THE PERFORMANCE! The carrier board is also extra money. No CUDA, Chinese vendor, low documentation, no community. It is RISC-V, not Arm, so power consumption will be much more than the JETSON. So if someone finds a single reason to buy this over the Jetson Orin Nano, please inform me, because I don’t see a single reason to buy this board. (Note: I support all small hardware companies, but I am against scammers.) Nvidia is already pricing everything crazy; how can you be less powerful and more expensive? It’s just a scam.
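For context, the INT4 point can be made concrete with rough arithmetic, under the common but unverified assumption that NPU throughput halves when operand precision doubles:

```python
# Back-of-envelope only: dense NPU throughput commonly halves when the
# operand width doubles (an assumption, not a published K3 figure), so
# a 60 TOPS INT4 rating suggests roughly 30 TOPS at INT8.
k3_tops_int4 = 60
k3_tops_int8_estimate = k3_tops_int4 / 2
orin_nano_tops_int8 = 40  # NVIDIA's commonly cited (sparse) Orin Nano figure
print(k3_tops_int8_estimate, orin_nano_tops_int8)  # 30.0 40
```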
You can’t estimate performance just by looking at the advertised TOPS. It’s mostly irrelevant. A 3 TOPS accelerator can work just as well as a 24 TOPS one on specific workloads. This needs to be tested.
I agree, but we’re talking on paper; the device is not released yet.
On paper, more TOPS gives you better FPS.
Nvidia has a better ecosystem and better pricing compared to this one.
For Korea, this device is 300 USD (not including a fan).
And from my experience, ARM microprocessors always have better power consumption than RISC-V microprocessors.
So: more expensive (even than NVIDIA), less powerful AI (on paper), more power consumption, less documentation. I think I made my point very clear. If you have any specific application for this device, I would like to know. I can’t believe RISC-V companies have become more expensive than ARM and the giant tech companies.
No argument here. RISC-V can’t compete against Arm in the application processor space right now. It’s a long-term process.
It’s getting closer! This is competitive with Graviton 1 launched in November 2018. I believe the K5 later this year is likely to be roughly comparable to Graviton 2 (deployed on AWS in 2020). There are some other considerably faster chips expected this year too.
Besides raw performance, there’s also the performance/price ratio. Right now, most RISC-V platforms are quite expensive compared to their performance. I suppose a RISC-V SoC with Rockchip RK3588 levels of performance would work if boards were priced accordingly, but that’s not really the case for now.
The RISC-V chips are generally a bit smaller than equivalent Arm ones (or certainly no larger) which makes them cheaper to make.
The issue is that the selling price depends almost entirely on spreading the fixed many millions of dollars initial cost of making a mask set — not to mention the NRE — over however many chips are sold.
If RISC-V SoCs and SBCs are more expensive than Arm ones it’s 99% because there simply aren’t very many sold.
In their presentation SpacemiT said they’d sold 150,000 K1s. That’s TINY. And I think they said they currently have firm orders for 30,000 K3s.
There can’t be price parity until those numbers are in the millions.
The first Raspberry Pis and Odroids used unsold chips made for mass-market set-top boxes (Pi, Odroid C2) or phones (Odroid XU3/XU4).
RISC-V isn’t yet in those markets because the performance level required to do so is much higher now than it was in 2010 and RISC-V SoCs are just now reaching performance levels for those markets today.
If someone wants to order 10 million K3s then I’m sure SpacemiT will give them a very nice price.
Perhaps we have reached the point where RISC-V won’t ever beat Arm in compute, but it’s good enough for some tasks where it can replace Arm, such as routers, AI accelerators, IoT, etc. But for mass production, I bet the price difference would be marginal against a comparable Arm chip. And the situation is not quite comparable to MIPS vs Arm 25-30 years ago either.
I’m confident RISC-V will beat Arm and x86 long term because it’s open to anyone in the world with a brilliant idea for a new µarch or a new specialised accelerator for the latest application area to step in and get something designed and made. It’s not only a few engineers at Intel/AMD or Arm and its very few architecture licencees (Apple, Qualcomm, NVIDIA, Google, Amazon, Samsung, Broadcom).
Already almost all security research is done using RISC-V. Everyone doing AI accelerators is basing them on RISC-V. Anything new and exciting in future is very likely to be done on RISC-V — **even** by companies that hold Arm licences.
Often the highest performance implementations will be proprietary, but there are also already respectably high performance open source implementations. In the Xiangshan project, for example.
The SpacemiT K3 just being launched is the overall best RISC-V SoC available so far from any company [1] and SpacemiT acknowledges building off open source work such as the OpenC910 core from THead.
Note that SpacemiT is a startup founded in 2021 and currently has around 150 employees.
They are able to stand on the shoulders of giants.
RISC-V is behind at the moment. K1 was 2000-level PC e.g. Pentium 3 or PPC G4. K3 is 2010 level Core 2. K5 later this year will I think be around Skylake level. And multiple US companies such as Tenstorrent and Akeana have 2020 Apple M1 level RISC-V architectures taping out as we speak (though they will initially be at slightly lower clock speed).
Tenstorrent is expecting parity with Arm and x86 in their design for 2028 release. Even if that slips to 2030 the gap will be minimal.
[1] the SG2042 has a lot of slower cores and a lot of cache and PCIe but significant bugs and performance problems — and much higher price. The UltraRISC UR-DP1000 is not bad, but lacks RVA23 and RVV support and, like the K3, has been announced and orders are being taken, but has not shipped yet.
I wish your predictions resist the test of time, but I’m less optimistic. This platform is extremely fragmented, resulting in a difficult choice for software integrators regarding what should constitute the common base to be supported. Arm went through that in the armv4/v5 era. Most software vendors would consider the ubiquitous arm926ej-s as the base, but when armv7 arrived (and was compatible with it), software was totally sub-optimal (not using floats nor various extensions). Even in the armv7 era, not all would enable Neon (e.g. Marvell’s PJ1/PJ4/PJ4B didn’t have it and had a limited vfp3-d16 instead), but they had the precious integer divide instruction that many others lacked. In the end, portable software used only the common base and was totally sub-optimal. Arm improved the situation later by enforcing mandatory features in architecture levels. Nowadays if you build for armv8.1-a you have Neon, divide, CRC32, LSE atomics, etc., so it’s much easier to benefit from most new features at once because you know that all compliant chips will be compatible with your code. At the moment, RISC-V is even worse than what armv5 was; it has more extensions than letters in the alphabet, and the common base (rv64gc) is just not great yet IMHO. One just has to count the number of instructions for a given function and compare it to generic armv8 code; it’s between +20% and +50% instruction count depending on the code.
There is no such RISC-V fragmentation.
If you are making software for an embedded device then you choose the CPU/chip with the features you need, you know what you chose, and you tell your compiler exactly what extensions it has. Easy.
If you are making software for the mass market then you build it for RVA23, which is an excellent baseline roughly equal to x86_64-v3 (Haswell/Excavator) or ARMv9.0-A. That’s what essentially all new SBCs will support from this year on (with very few exceptions, such as the Milk-V Titan), certainly by next year, as well as Android, laptops, set-top boxes, servers, whatever. It will be in everything with Haswell/Skylake or better performance (and the Core 2-level performance of the K3).
Optional: you could consider making an RV64GV version for the at most few tens of thousands of low performance SBCs out there. Test for RVV at runtime if you want and use that for functions that will benefit, but that’s really only the K1, or add in the C906&C910 chips if you want to include an RVV draft 0.7 version too.
That’s **it**.
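The runtime RVV test mentioned above can be done without assembly by parsing the isa line of /proc/cpuinfo on RISC-V Linux. A sketch, assuming the conventional `rv64imafdcv_zba...` format of that line:

```python
def has_rvv(cpuinfo_text: str) -> bool:
    """Return True if the 'isa' line advertises the V extension.
    Assumes the conventional RISC-V Linux format, e.g.
    'isa : rv64imafdcv_zicsr_zba'."""
    for line in cpuinfo_text.splitlines():
        if line.strip().lower().startswith("isa"):
            isa = line.split(":", 1)[1].strip().lower()
            single_letter, _, multi = isa.partition("_")
            # Skip the 'rv64'/'rv32' prefix before scanning
            # the single-letter extensions.
            return "v" in single_letter[4:] or "v" in multi.split("_")
    return False

print(has_rvv("isa\t: rv64imafdcv_zicsr"))  # True
print(has_rvv("isa\t: rv64imafdc"))         # False
```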
The K3 supports RVA23. It will be the RISC-V standard for years. It has roughly the same feature set as ARMv9 or x86-64-v4. As the first RVA23 chip on the market, this is literally the chip that makes your comment obsolete.
> As the first RVA23 in the market, this is literally the chip that makes your comment obsolete.
And when will it become the default one supported by all active distros? Because that’s what conditions a vendor’s ability to support their products for such a platform. On x86_64, you can just build for opteron/haswell and it simply works *everywhere*, benefiting from newer CPUs’ optimizations. A few arch-specific optimizations are handled by the libc (use of SSE/AVX for memcpy, for example). In the ARM world, you target the A53 and it just works everywhere, albeit not optimally if you heavily use threads, since newer armv8.1 cores (A76 and above, as well as the A55) support LSE atomics. But even then, gcc-10+ provides outline atomics which automatically switch to the best one. Thus, again, for the vast majority of applications you just don’t care and you emit portable code for armv8-a. For RISC-V, you’re saying “target RVA23”. OK, but it will not work on currently supported distros nor hardware, only on *new* hardware (and it’s still not even supported by any released version of gcc as far as I can tell).
That’s just not yet usable for new applications, but will be in 2-3 years when new operating systems ship with support for it by default. And maybe by then RVA23 will be considered ridiculously old, like RV64GC currently does, while a year or two ago it was still considered a reasonably portable common target. This is what I’m explaining about fragmentation.
If RVA23 turns out to be as complete as (say) Skylake, most likely it will remain a reasonable common target for many years to come. Just like RV64GC currently is to some extents.
What is this claim that Arm is not fragmented? Even to this day that’s simply not true. Just look at the software support for current consumer-level hardware. Even for the brand new Nvidia Spark it’s not feasible to take standard Ubuntu and install it without some hackery of replacing the mainline kernel with Nvidia’s binary-only version. Forget attempting Debian on an Apple M5 or Qualcomm laptop, to say the least. What about taking the Raspberry Pi OS image for the Pi 500+ keyboard computer and using it for a Thelio Astra Ampere computer from System76? Surprise, it does NOT work! Go look at Armbian’s download list. It spans pages because there are different images for each device, all at different levels of usability from each other.
Add in Arm phones and tablets to the mix, as well as server-class CPUs like Graviton, Cobalt, and Axion, and you have the land of fragmentation. How are all these proof of a supposedly standardized Arm ecosystem?
Further, to criticize RISC-V and extensions but not mention that Arm itself allows vendors to customize and add their own incompatible “extensions” is purposely misleading and ignores the true reality.
You obviously mistake ISA fragmentation for system fragmentation.
Hmmm, so Apple being able to add some x86 instructions into their own silicon to bolster poor Rosetta software performance as one example. That is not deviating from the standard? With ARM’s permission?
Sipeed is also taking pre-orders for the exact same board, SoM, and devkit.
https://sipeed.com/k3
Banana Pi calls it the BPI-SM10 SoM/Core board https://docs.banana-pi.org/en/BPI-SM10/BananaPi_BPI-SM10
I’m not sure whether getting so many distributors will help or just bring confusion.
OpenGL 3.0 –> OpenCL 3.0
We need an AI benchmark. CPU-wise it’s worse than the RK3588 for sure.