MetaComputing AI PC with Framework Laptop 13 features CIX P1 12-core Arm processor with up to 45 TOPS

MetaComputing, a Switzerland-based company founded in 2024, has announced the “MetaComputing AI PC with Framework Laptop 13” featuring the same CIX P1 12-core Cortex-A720/A520 SoC with up to 45 TOPS found in SBCs such as the Radxa Orion O6 and Orange Pi 6 Plus, as well as the MINISFORUM MS-R1 mini PC.

As far as I know, it’s the first laptop based on the CIX P1. It features 16GB or 32GB of RAM and a 1TB SSD. The “Pro” model ships in the Framework Laptop 13 chassis, while the “Standard” model comes as a mini PC with a co-branded Framework and Cooler Master case.

CIX P1 Framework laptop

MetaComputing AI PC specifications (preliminary):

  • SoC – CIX P1 (CP8180)
    • 12-core DynamIQ processor
      • 4x Cortex‑A720 big cores @ up to 2.6 GHz
      • 4x Cortex‑A720 medium cores
      • 4x Cortex‑A520 LITTLE cores
    • Cache – 12MB shared L3 cache
    • GPU – Arm Immortalis G720 MC10 with hardware ray-tracing support, graphics APIs: Vulkan 1.3, OpenGL ES 3.2, OpenCL 3.0
    • VPU
      • Video Decoder – Up to 8Kp60 AV1, H.265, H.264, VP9, VP8, H.263, MPEG‑4, MPEG‑2
      • Video Encoder – Up to 8Kp30 H.265, H.264, VP9, VP8
    • AI accelerator – Up to 28.8 TOPS Neural Processing Unit (NPU) with support for INT4/INT8/INT16, FP16/BF16, and TF32; up to 45 TOPS with CPU+GPU+NPU
    • Manufacturing Process – TSMC 6nm
    • TDP – 28 Watts
  • Memory – 16GB or 32GB RAM
  • Storage – 1TB NVMe SSD
  • Display (Pro model only) – 13.5-inch display with 2256×1504 resolution, 60Hz refresh rate, 400nit brightness, and 100% sRGB color gamut
  • Video Output (Standard model only) – HDMI expansion card
  • Camera (Pro model only) – 1080p60 webcam
  • Wireless – Intel AX210 module with Wi-Fi 6E and Bluetooth 5.3
  • USB
    • Standard model – USB-C expansion card
    • Pro model – 2x USB-C expansion cards
  • User input (Pro model only) – QWERTY keyboard and touchpad
  • Misc – Fingerprint reader (Pro model only)
  • Power Supply – Via USB-C port
  • Battery – 55Wh (Pro model only)
  • Dimensions & Weight
    • Standard model – TBD
    • Pro model – 296.63 x 228.98 x 15.85mm | 1.3kg

The specs are preliminary since the company provided limited details; I derived the specifications from those details plus information from the Framework Laptop 13 and previous CIX P1 boards.

MetaComputing AI PC
MetaComputing AI PC with COOLER MASTER case

The system comes preinstalled with Ubuntu 25.04, and the company claims that it provides “a stable and developer-friendly environment optimized for Arm and AI workloads” and that “users can easily run AI frameworks, development tools, and edge applications out of the box”.

If the case feels familiar, it’s because it’s the same design used for the DeepComputing DC-ROMA mainboard based on a StarFive JH7110 RISC-V SoC. The names DeepComputing and MetaComputing also sound similar, and a quick search shows both were founded by Yuning Liang, but they are separate entities, with the former established in Hong Kong, while the latter is newer and headquartered in Switzerland.

Framework Laptop 13 modular design
The Framework Laptop 13 has a modular design

It looks great on paper, but I have several concerns about the solution. First, as noted above, the technical specifications are rather light on details, and I could not see any demo of the laptop or mini AI PC, so I’m not even sure whether the hardware is ready or still under development. I also have serious concerns about using the CIX P1 in a battery-powered laptop, as we’ve measured high idle power consumption on the Radxa Orion O6 (16-17W), and this was further confirmed by people who reviewed the MINISFORUM MS-R1 mini PC.

DeepComputing devices are often richly priced, and the MetaComputing AI PC with Framework Laptop 13 is also fairly expensive at $999 (16GB RAM) and $1,099 (32GB RAM). The “Standard” mini PC goes for $549/$649 in the same RAM configurations.




19 Replies to “MetaComputing AI PC with Framework Laptop 13 features CIX P1 12-core Arm processor with up to 45 TOPS”

  1. When seeing the title, my immediate reaction was “will they manage to address the idle power issue?” Otherwise, the device is doomed.

  2. As far as I understand, Linux doesn’t even properly handle/schedule Intel CPUs with performance/efficiency cores. It seems even less likely with this niche product. But I would be happy to hear I’m wrong 🙂

    1. Linux has done a good job with big.LITTLE for a few years now, which is part of why Linux kicked Windows’ tail with Alder Lake and newer, but now Windows isn’t doing so badly either.

      1. As I understand it, EAS is a bit half baked, but I’m going off of one comment I read here:

        https://old.reddit.com/r/linuxhardware/comments/1m3zfiy/linux_support_of_honor_magicbook_art_14_and_honor/nqcvmnj/

        .. schedutil is terrible for battery saving; it was designed for servers and gamers with only performance in mind. So it will push your frequencies to the top whether you need it or not, because it has a naive iowait boost logic. This logic assumes that if your server has some iowait, your disk subsystem will soon return some data that needs to be processed. And the longer your system waits for disks, the higher the frequency goes, because it does not just boost once to a medium level but keeps boosting more and more. So most likely, if you have schedutil activated, your frequencies will be much higher than with hardware-managed powersave (intel_pstate=active). There is a patch for that by Rafael Wysocki, maintainer of the PM tree, that allows EAS with hardware P-states, but it has not landed yet, even in the -next kernel tree.
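To see which governor a given machine is actually running, the standard cpufreq sysfs files can be inspected. A minimal sketch (assuming a Linux system, with fallbacks for environments where cpufreq isn’t exposed, such as VMs and containers):

```python
# Read the active cpufreq governor and current per-policy frequencies from
# the standard sysfs paths. This is a sketch; the files only exist on Linux
# systems with cpufreq enabled, hence the fallbacks.
from pathlib import Path

def read_sysfs(path: str, fallback: str = "not available") -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else fallback

# Governor driving CPU0 (e.g. schedutil, powersave, ondemand)
print("governor:", read_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"))

base = Path("/sys/devices/system/cpu/cpufreq")
if base.is_dir():
    for policy in sorted(base.glob("policy*")):
        # Frequencies pinned near the maximum while the machine is idle hint
        # at aggressive boosting or a missing/incomplete energy model.
        print(policy.name, read_sysfs(str(policy / "scaling_cur_freq"), "?"), "kHz")
```

On an affected board, policies stuck at their maximum frequency while idle would corroborate the behaviour described above.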

          1. Wow, what a mess! Thank you so much for the link outlining the situation. I found it difficult to find a “from the basics” explanation of the whole EAS and scheduler details.

            It’s always surprising just how much functionality and configuration there is in the kernel that is simply not exposed to the user in any desktop environment: scheduler, fan control, ZRAM, OOM managers, etc. And when it is exposed somewhat (e.g. networking stack, power management), it’s in such a half-assed, undebuggable manner.

            I guess there is some parallel here: the CPU exposes a bunch of capability, and it’s just completely left on the floor.

            As you say.. in the final link in the chain:

            Seems nobody gives a sh*t about such stuff?

          2. I think we will never get anything serious in this area for a simple reason: performance and efficiency are the exact two metrics companies fight on through marketing. Have you seen how every single Cortex-A or Cortex-X is systematically 20% faster and 20% more power efficient than the previous one? And how home-made designs are all at least 50% faster than Arm’s? If you start to put measured values in OPP tables, you’ll either have to pick the marketing team’s numbers and get something totally inaccurate (especially when mixing an in-house design with a standard one for the little core), or put in measured values, but then someone will instantly point to them saying “this comes from $VENDOR’s engineering team so it must be true and the public specs are false”.

            The only solution IMHO would be to make it easy for the user to set these values, and also to fill in the total measured power consumption of the machine (including DRAM, GPU, etc.), because for certain tasks you’ll prefer to run to completion (e.g. compiling a kernel at an airport while waiting for your plane) in order to power off your laptop once finished, while in other cases you’ll want to use minimal power for a task you cannot speed up (e.g. watching a video).

        1. Just had a quick look at Cix’s latest code drop in Radxa’s kernel repo: Cix only populates the capacity-dmips-mhz properties, with a value of 1024 for all A720 cores and 403 for all the A55 cores, and lacks individual ‘energy-costs’ DT nodes.

          But maybe all of the real scheduler stuff happens in some firmware BLOB anyway? Haven’t booted my Cix P1 thingy for over half a year due to the comically high idle consumption of this platform and the crappy ‘information policy’ all around Cix.
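For context, the scheduler-relevant part of such a device tree is small. A populated big.LITTLE description looks roughly like the fragment below (illustrative values only, not taken from the Cix tree; note that mainline EAS builds its energy model from the ‘dynamic-power-coefficient’ property and OPP tables rather than the old out-of-tree ‘energy-costs’ nodes):

```dts
/* Illustrative fragment only -- not Cix's actual DT. The scheduler derives
 * relative core capacity from capacity-dmips-mhz scaled by each core's
 * maximum frequency, and the energy model comes from
 * dynamic-power-coefficient combined with the OPP tables. */
cpu0: cpu@0 {
    compatible = "arm,cortex-a520";
    device_type = "cpu";
    capacity-dmips-mhz = <403>;
    dynamic-power-coefficient = <90>;
};
cpu4: cpu@400 {
    compatible = "arm,cortex-a720";
    device_type = "cpu";
    capacity-dmips-mhz = <1024>;
    dynamic-power-coefficient = <350>;
};
```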

  3. Here’s a comparison against the i5-1145G7. I recently grabbed a used Dell Latitude with this CPU and 32GB of RAM for about US$220 off Amazon. The Cix is about 30% slower in single-core and about 5% faster in multi-core. The Cix will have about double the AI inferencing performance for smaller models, but with 32GB expandable to 64GB, the Dell will run circles around the Cix with larger models.

    https://browser.geekbench.com/v6/cpu/compare/15215693?baseline=10409724

    1. > The Cix is about 30% slower in single-core and about 5% faster in multi-core

      Though Geekbench 6 is not able to properly measure anything Arm with more than 2 cores, so the ‘multi scores’ this ‘benchmark’ spits out have to be taken with a huge pile of salt.

      1. All this time I thought it was just the microscopic LLC and poor thermal design. Historically ARM has needed more cores for similar multicore performance. Marry that with GB always having struggled with multicore scaling, and you have a point. But GB6 seems to represent Snapdragon Elite and Apple M-series reasonably well.

        1. Maybe you should add a bandwidth scaling test to your SoC benchmark, and use it to predict how well multi-core performance can scale with and without the efficiency cores. I think you’ll find many boards suffer from cores-for-marketing-purposes-only, and that’s the true reason their performance fails to scale; i.e., where you have 4 performance cores and the platform only supports 3/4 of their memory bandwidth capacity, adding the low-cache efficiency cores to the mix will only slow things down unless the data is embarrassingly local. AnandTech reported on that back in 2016. For the common 8-core set-top-box SoCs, I’m sure this is still the case today. You might even get better performance disabling one of the performance cores as well, lol.
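A crude version of such a bandwidth scaling test is easy to sketch. The hypothetical code below times large sequential copies, which should be DRAM-bound once the per-worker buffer exceeds the last-level cache, and reports aggregate throughput as workers are added:

```python
# Hypothetical sketch of a bandwidth scaling test (not from any existing
# benchmark suite): each worker performs a large sequential copy, and we
# check whether aggregate throughput keeps growing as workers are added.
import time
from multiprocessing import get_context

BUF_MB = 64  # per-worker buffer size; pick something well above LLC size

def copy_worker(_):
    src = bytearray(BUF_MB * 1024 * 1024)
    dst = bytearray(len(src))
    start = time.perf_counter()
    dst[:] = src  # large sequential copy
    return len(src) / (time.perf_counter() - start)  # bytes/s in this worker

def aggregate_bandwidth(nproc):
    """Rough aggregate copy throughput with nproc parallel workers.
    Wall time includes process start-up, so treat the result as indicative."""
    ctx = get_context("fork")  # fork keeps the sketch simple on Linux
    start = time.perf_counter()
    with ctx.Pool(nproc) as pool:
        pool.map(copy_worker, range(nproc))
    wall = time.perf_counter() - start
    return nproc * BUF_MB * 1024 * 1024 / wall

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(f"{n} workers: {aggregate_bandwidth(n) / 1e9:.2f} GB/s aggregate")
```

If aggregate throughput flattens well before the core count is reached, the extra cores are competing for the same DRAM bus rather than adding performance.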

          1. You have that on RPi which has too narrow a DRAM bus for all cores. I’ve shown that when I long ago published the “bogolocs” metric (lines of code per second in compilation speed) for my build farm, which was directly proportional to DRAM bandwidth and almost not at all to CPU performance.

            In a related topic, I long ago noticed how the RK3399 had difficulties boosting the DMC frequency when only using little cores, and it was needed to run at least one big core to massively boost the performance by switching the DMC from 200 to 928 MHz. This is an indication that the vendor considers the little cores worthless for any meaningful job, and they’re only there for marketing…

        2. > GB6 seems to represent Snapdragon Elite and Apple M-series reasonably well

          I doubt it. Jeff Geerling tested a 192 core ARM setup a while ago: https://github.com/geerlingguy/sbc-reviews/issues/52#issuecomment-2452250408

          With GB6, 192 cores are 11.6 times ‘faster’ than a single core; with GB5, the very same setup is 84 times faster multi-threaded and consumes twice as much power while being ‘benchmarked’.
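Spelled out with the figures quoted above, the difference in multi-core scaling efficiency is stark:

```python
# Scaling efficiency = measured multi-core speedup / core count,
# using the 192-core results mentioned above.
cores = 192
for name, speedup in (("GB6", 11.6), ("GB5", 84.0)):
    print(f"{name}: {speedup}x over one core -> "
          f"{speedup / cores:.1%} scaling efficiency")
# GB6 works out to roughly 6% efficiency, GB5 to roughly 44%.
```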

          TL;DR: GB6 is crap.

  4. Looking forward to a thorough review of this product – what works and what doesn’t. Pretty excited about that Armv9.2 core with MTE and various other features. IIRC, isn’t it the first “open” Armv9.2 laptop?

    1. I’m not sure I’ll get a sample for review. It’s more likely some YouTubers will review it instead. I think Jeff Geerling might get a sample since he already reviewed the DC-ROMA II.

