We first noted the UltraRISC UR-DP1000-powered Milk-V Titan mini-ITX motherboard when we wrote an article about three high-performance RISC-V processors to watch in H2 2025. There have been some delays, as there often are, but the Titan board finally appears to be in stock, so it’s probably a good time to have a closer look.
Powered by a 2 GHz UR-DP1000 octa-core RISC-V CPU, the Titan mini-ITX motherboard supports up to 64GB DIMM memory and M.2 NVMe storage (PCIe Gen4 x4), and features a PCIe Gen4 x16 slot for a graphics card or other expansion, Gigabit Ethernet, four USB 3.0 ports, a BMC, and more.
Milk-V Titan specifications:
- CPU – UltraRISC UR-DP1000
- 8x 64-bit RISC-V UR-CP100 “RV64GCBHX” cores up to 2.0 GHz
- Two clusters of four cores with 4MB of L3 cache each, and a total of 16MB of cache
- Fully RVA22 compliant, and “Compliant with RVA23 excluding V extension.”
- Supports Hardware Virtualization, RISC-V RV64 ISA H(v1.0) Extension
- Memory – Up to 64GB at 3200 MT/s via 2x DDR4 DIMM slots; ECC support
- Storage – M.2 NVMe SSD via M.2 M-Key (PCIe Gen4 x4) socket
- Networking – Gigabit Ethernet RJ45 port
- USB
- 4x USB 3.0 Type-A ports (5 Gbps)
- 1x USB 2.0 via front USB header
- 1x USB Type-C debug port
- Expansion
- PCIe 4.0 x16 slot (with PCIe 4.0 x16 signalling) for graphics cards or computing cards
- M.2 Key-M socket (PCIe Gen4 x4) for NVMe SSD
- BMC
- 100Mbps RJ45 port for remote control
- USB 2.0 Type-A port for BMC Storage
- USB Type-C port
- Debugging
- 3-pin UART for the CPU
- USB Type-C Debug connector
- Misc
- Power, Reset, BMC Reset, and BMC Boot buttons
- Front panel header for Power Button / Reset Button / Status LED / Power LED
- PWM fan connector
- RTC battery socket (CR1220)
- Power Supply
- 12V DC via 5.5/2.5mm power barrel jack
- 24-pin ATX power connector
- Power Consumption (no peripherals, 64GB DDR4, 128GB SSD)
- Idle – ~14W (12V/1.2A)
- Full load – ~30W (12V/2.5A)
- Dimensions – 170 x 170 mm (mini-ITX form factor)
- Compliance – FCC/CE
You’ll notice there’s no video output, so users will need to add a graphics card if they need a display. Four models have been tested according to the board’s documentation: AMD RX 9070 XT, RX 580, RX 550, and R5 230. Alternatively, the board could be used for networking and/or storage applications with a suitable PCIe card. Another remark is that idle power consumption is not particularly low at about 14 Watts with 64GB of RAM and a 128GB SSD.
The company mentions support for Ubuntu (preferred OS), Debian, and Fedora, but software-related documentation is still a work in progress. Other RISC-V OS images should be supported thanks to UEFI support with ACPI, CPPC, and SMBIOS. In our article about the UR-DP1000 CPU last July, we also noted that mainline Linux support was expected by Q4 2026. As a side note, I was quite pleased with the progress made on the software side of the RISC-V ecosystem in my recent reviews of the MUSE Book laptop and VisionFive 2 Lite SBC, even though the performance and the price/performance ratio are not there yet.

In terms of performance, we still only have the SPEC CPU2006 figures (single-core INT at 10.4/GHz and single-core FP at 12/GHz), but the UR-DP1000 SoC also shows up in Geekbench 5.5.1 with almost 30% higher single-core performance and over twice the multi-core performance of the ESWIN EIC7702X-based (8x RISC-V cores @ 1.8 GHz) DeepComputing FML13V03 laptop, better known as the “DC-ROMA RISC-V Mainboard Gen II for Framework”.

It looks promising at first, but Jeff Geerling’s review of the DeepComputing laptop shows its single-core performance is quite a bit lower than that of a Raspberry Pi 4 (178 vs 286 in Geekbench 6), while multi-core performance is about equivalent (640 vs 653) despite the EIC7702X having twice as many cores. If we extrapolate those results, we can roughly estimate the Titan’s performance: single-core should be close to that of a Raspberry Pi 4, and multi-core closer to that of a Raspberry Pi 5, albeit still under it. The Titan motherboard still wins when it comes to PCIe interfaces and out-of-the-box graphics card support, but you cannot expect a miracle when it comes to CPU performance, even though software optimizations may have improved the results since the Geekbench 5.5.1 run was performed (July 2025).
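As a back-of-the-envelope check, the extrapolation above can be sketched in a few lines of Python, using the Geekbench 6 figures from Jeff Geerling’s review and the relative deltas from the Geekbench 5.5.1 run. Note that Geekbench 5 and 6 scores are not directly comparable, so this is only a rough sanity check, not a prediction:

```python
# Rough extrapolation of the Titan's likely Geekbench 6 scores.
# Caveat: Geekbench 5 and 6 scores do not scale linearly, so treat
# this as an order-of-magnitude estimate only.

# DC-ROMA Gen II (EIC7702X) Geekbench 6 scores from Jeff Geerling's review
dc_roma_single = 178
dc_roma_multi = 640

# Raspberry Pi 4 reference scores quoted above
pi4_single = 286
pi4_multi = 653

# UR-DP1000 deltas reported in the Geekbench 5.5.1 run:
# ~30% higher single-core, over 2x multi-core vs the EIC7702X
titan_single = dc_roma_single * 1.30   # ~231, approaching the Pi 4's 286
titan_multi = dc_roma_multi * 2.0      # ~1280, about twice the Pi 4's 653

print(f"Estimated Titan single-core: ~{titan_single:.0f} (Pi 4: {pi4_single})")
print(f"Estimated Titan multi-core:  ~{titan_multi:.0f} (Pi 4: {pi4_multi})")
```

This puts the Titan’s single-core score in the neighborhood of a Raspberry Pi 4, and its multi-core score below a Raspberry Pi 5 (which scores roughly 1600–1700 in Geekbench 6 multi-core, depending on cooling).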
The Titan mini-ITX motherboard can now be ordered on Arace for $329, a bit higher than earlier pre-orders ($279). The Arace website is confusing, as it says “in stock”, but the product title still describes it as a “pre-order shipping within 45 days”. You’ll also need to purchase a UDIMM RAM module and an NVMe SSD for storage to get started, as well as an optional PCIe card for graphics, networking, or storage if you want to make use of the PCIe Gen4 x16 slot.
Thanks to Teka for the tip.

Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.





But will the documentation ever be finished?
Recent Kernels?
Current Distributions?
As an owner of a milk-v jupiter and nanorv I highly doubt it.
@CNXSoft “NVIDIA RX 9070 XT, RX 580, RX 550, and R5 230” I think those are all AMD cards not Nvidia.
Oops! Fixed!
Milk-V told us that they had some “small problem” during aging tests, so they would delay it a little bit.
The board can be interesting to those who want to develop natively for RISC-V, however it comes with a price! It’s nice to note that it supports standard DDR4 with two channels, but the bad news is that the price of this RAM will have to be added to the price of the board. Same for the GPU if one wants a display output, though maybe thanks to the BMC some users won’t care.
Of course it’s far from being fast. The CPU cores are roughly equivalent to those of a core2duo at the same frequency, so that’s about like having two core2quad on the board. The hardware is slowly catching up, it’s only just 20 years late now. Most likely in 20 years it will only be 10 years late.
I clearly don’t see this being used as a primary machine (except by those in love for retrocomputing), but it might be OK as a machine to SSH into in order to build stuff natively and run tests. It might be an acceptable option for distros who build natively for example.
Finally, an affordable RISC-V board supporting regular 64GB memory for development. Hopefully more of the other minor Linux distros can now generate their binary packages into their respective repos and join the open ecosystem. With the latest Linux kernel, Radeon provides out-of-the-box graphics card support, without having to wait for a company like NVIDIA to provide RISC-V packages. Enough so that the box64 x86_64-to-RISC-V real-time translator is able to run various Steam games. So just stick to mature Radeon.
Wouldn’t it still be faster to emulate RISC-V instead of buying one of these?
It’s cool, but holy crap the performance is terrible.
Probably, based on this: https://cloud-v.co/blog/risc-v-1/benchmarking-risc-v-qemu-user-mode-emulator-with-spec-cpu2017-singlecore-intrate-14
It might be interesting to see what platform could emulate RISC-V the fastest. I’m not sure if ARM would have more of an advantage over x86 here.
How else do you move a HARDWARE platform forward without using a real product to debug real-world behavior or test compatibility with other devices? A perfect example is the graphics card support mentioned in the article. You think plug-and-play with the latest AMD cards, as easy as it is now, simply happened overnight? No, that took years of open-source enthusiasts toiling away at it, basically ever since the first SiFive boards came out with a physical PCIe slot to allow testing with actual Radeon cards, and iterations of various vendor boards to continually troubleshoot all that time to get the hardware kinks worked out to this point. And because of this readily available RISC-V hardware, there are Linux ports that already include graphical desktops and major GUI apps like office suites and browsers working at almost the same level of usability (not talking about performance metrics here) as the x64 arch. With hardware, you have to start somewhere with an actual product. And the RISC-V founders were smart: they got viable hardware out to the public as early and often as possible to prove their design principles (again, nothing to do with performance) and get the community to help out instead of trying to do everything internally themselves, which accelerated adoption further.
There are multiple areas that need to be worked on. Hardware compatibility can indeed only reasonably be tested on hardware (e.g. one may need to debug a PCIe controller, etc.), and not that many people work on hardware support. But for a hardware platform to be adopted, it also needs running software, and for the many software packagers who only need to discover build options, see ./configure fail to detect the platform, etc., emulation is often equally fine (at least it helps make a lot of forward progress). And when they have a bunch of software to port, build time very quickly does count. As someone who has used the VF2 board for such work, I can say that it doesn’t take long to be discouraged from building your software on it given how slow it is, and to leave the board powered off on a shelf forever. In the end, most of the RISC-V porting work I’ve done happened on x86_64 with a cross-compiler building small images for the lichee-rv-nano.
Not sure what you are getting at there, but the VisionFive and Lichee RV are tiny boards limited to 8GB max of soldered RAM. The Titan here has 8 cores, supports 64GB of RAM in a mini-ITX form factor, and can take a more powerful ATX supply. It also comes with a PCIe slot that allows easily inserting a 4-port (or more) SATA card, along with connecting cheaper 30TB 3.5-inch spinning drives for massive storage. So it is basically a development PC/server. Now any person or distro team can set up automated jobs and scripts simply directed at their source repo to do parallel compiles of all their packages, which can self-install the new executables and libraries for immediate use in compiling further downstream, AS WELL as implement inline CI testing of the resulting executables to validate functionality in real time on actual hardware.
There is a point where cross-compiling is a necessity or more practical, such as situations where SBCs were the only thing available to people. But with new boards like the Titan offering an expanded hardware feature set at more affordable prices, those limitations are gone, and the hardware has passed the threshold where it can be used directly to compile and run programs at the same time. With such options available, even general software development via cross-compiling becomes cumbersome and inefficient, or was not even feasible in the first place for programming environments like Python that do not support cross-compiling binaries to different archs and require more convoluted workarounds if packages are needed for them.
My point was to respond to the previous participant who insisted on the hardware aspect of the solution: hardware is not everything, and I had great examples of totally incapable hardware like the VF2. RAM size is not the limiting factor there; the CPU cores and DRAM latency+bandwidth are. This Titan board is clearly much better on the specs and, as I mentioned above, it should have roughly the perfs of a core2quad, which is weak for a desktop system but sufficient to perform native builds in batches (e.g. a distro build). So overall it will likely be adopted by some developers, most likely distro packagers who would benefit from something faster than running QEMU on a regular PC or using a VF2. But it’s not your latest desktop machine.
Here I thought I was the weirdo, being satisfied with GigE on my boards when everyone else seems to be moving to 10Gb or at least multi-gig. Evidently I’m not alone!
GbE is perfectly fine for plenty of use cases. 100M was definitely not, and it is finally dead, though it took quite a long time. However, it’s also a matter of price tag. A GbE-capable device with 2 ports like the E20C starts around $25 nowadays. Some might expect that for $300-400 (RAM included), the extra $3 needed to turn the 1G into 2.5/5G would have been invested from the beginning. But this is no big deal, and whatever the bit rate, someone will complain (i.e. at 10G it heats too much, at 1G it’s too slow). 2.5/5G is reasonable to efficiently use one PCIe lane (Gen2 or Gen3), however.