Particle Tachyon Review – A Qualcomm QCM6490 Edge AI and 5G cellular SBC tested with Ubuntu

The Particle Tachyon.

Hello, today I’m going to review the Particle Tachyon, an SBC designed for high-performance edge AI, IoT, and connectivity applications. It is powered by the Qualcomm QCM6490 platform with an octa-core Kryo CPU, an Adreno GPU, and a Hexagon DSP, and it also integrates robust wireless options, including 5G, Wi-Fi 6E, and Bluetooth 5.2.

The Particle Tachyon adopts the Raspberry Pi form factor and provides various I/O interfaces, such as a 40-pin GPIO header compatible with Raspberry Pi HATs, along with expansion options for sensors and peripherals. It also includes a Qwiic connector for SparkFun and Adafruit integrations, as well as MIPI-CSI/DSI connectors for cameras and displays.

Particle Tachyon Unboxing

The parcel was shipped from Hong Kong and arrived with all the expected components. Inside the package, I found a single-cell 3.7 V LiPo battery with a 3-pin JST-PH connector, the main Tachyon board, a small welcome card, and an additional microphone audio board.

Received package.

The following photo compares the Particle Tachyon, BeagleY-AI, Raspberry Pi 5, and Raspberry Pi 4 Model B boards.

Size comparison of the Particle Tachyon, BeagleY-AI, Raspberry Pi 5, and Raspberry Pi 4 Model B boards.

Device setup

To set up the device, I followed the steps on the official Setting up your Tachyon page. I first connected the LiPo battery to the JST connector on the board, then plugged the board into my laptop using a USB Type-C cable. The red LED lit up immediately, confirming that the board was powered correctly. After that, I installed the Particle CLI tool on my laptop and updated it with the particle update-cli command. Running particle --version confirmed the installation, returning version 3.38.1. The next step was to enter programming mode by pressing and holding the main button for about three seconds until the LED switched to flashing yellow. I then installed the USB driver using the Zadig tool, as recommended.

I first tried the desktop setup by running the CLI command particle tachyon setup and following the guided process. At the software selection step, I chose the desktop variant, which includes a full GUI environment. According to the official documentation, both Ubuntu 20.04 (legacy support) and Ubuntu 24.04 (current development) are supported. At the time of this review, the setup tool installed Ubuntu 20.04. The OS download and installation completed smoothly in under 20 minutes.


After restarting the device and reconnecting the keyboard and mouse, the system booted into the “Welcome to Tachyon” dialog. Steps 1 through 3 passed smoothly, but step 4 was never completed, no matter how many times I retried. When I checked the Particle Console, I could see the device ID listed correctly under the Devices page, but the handshake process never finished.

Welcome to Tachyon dialog.
Organize your Particle device step.
The Particle Console was waiting for the device to connect.

Based on community feedback suggesting the headless setup instead of the GUI desktop, I next tried the headless option. This installation also completed smoothly, and the main LED transitioned from blinking green to magenta, indicating that the Wi-Fi connection was established. However, the LED never turned cyan, which should indicate a successful cellular connection. Nevertheless, this time the Particle Console successfully completed the handshake and established a connection with the board.

Wi-Fi connection was established.

Next, I tested some remote command-line operations, such as running htop remotely from the Terminal panel without any issues. The following images show additional tests using the Particle Console to check the device’s status.

The Particle Console successfully completed the handshake with the Tachyon.
Remote execution of the htop command.
Viewing network status in the Particle Console.
Checking eSIMs status in the Particle Console.

Checking Particle Tachyon board’s information with inxi

The inxi log from the Particle Tachyon board shows it running Ubuntu 20.04.6 LTS (Focal Fossa) with kernel 5.4.219 on an ARM-based Qualcomm SoC. The CPU is identified as an 8-core Kryo cluster clocked between 300 MHz and 2.7 GHz, consistent with the advertised Qualcomm QCM6490 platform used in the Tachyon. Graphics output is handled via the msm-dai-q6-hdmi driver with display at 1920×1080 @ 60Hz, but rendering falls back to llvmpipe (LLVM 12, Mesa 21.2.6) instead of direct Adreno GPU acceleration. Battery monitoring works as expected, and thermal readings were stable with the CPU at 28 °C.

In terms of connectivity, the log lists the Qualcomm CNSS PCI Wi-Fi interface with wlan0 active, along with multiple virtual and cellular-style interfaces. Local storage is reported as a 116 GB KM8L9001JM-B624 eMMC module. Memory usage shows 3.96 GB in use out of 7.1 GB of RAM. Overall, the log confirms that the reported hardware matches the official Tachyon specifications, though graphics support currently defaults to software rendering rather than exposing the Adreno GPU through drivers.

Benchmarking with sbc-bench

When I tried running sbc-bench, it failed: the script consistently returned the error “failed to set pid xxxx's affinity: Invalid argument”, as shown in the log below. However, I was still able to run sbc-bench -m to monitor some CPU and basic system information, as shown in the following log. We posted an issue on GitHub, but could not find a solution in a timely manner. Since I did not have time to analyze the error and wanted to move on to testing other features of the Particle Tachyon, I stopped the sbc-bench testing at this point.


This is the result of the sbc-bench -m command.

Testing the network performance with iperf

I tested the wireless network communication speeds using iperf3 over both 2.4 GHz and 5 GHz Wi-Fi, connected through my home router. All of the following results were obtained without any optimization; other devices, such as TVs and mobile phones, may have been using the Wi-Fi at the same time. The router was located about 6–7 meters away.

For the test setup, I configured my Windows 11 laptop as the server using the iperf3 -s command. The iperf3 results between the Particle Tachyon and the host computer show a clear difference in performance between 2.4 GHz and 5 GHz Wi-Fi, as shown below.
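For reference, here is a minimal sketch of how such a run can be automated from Python on the Tachyon by wrapping the iperf3 client and parsing its JSON output. The server address 192.168.1.10 and the test duration are placeholders for illustration, not the actual values I used.

import json
import subprocess

SERVER = "192.168.1.10"  # hypothetical address of the iperf3 server (the Windows 11 laptop)

def run_iperf3(reverse=False, duration=30):
    """Run one iperf3 TCP test and return (sent, received) throughput in Mbit/s."""
    cmd = ["iperf3", "-c", SERVER, "-t", str(duration), "-J"]
    if reverse:
        cmd.append("-R")  # reverse mode: the server sends, the Tachyon receives
    report = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)
    sent = report["end"]["sum_sent"]["bits_per_second"] / 1e6
    received = report["end"]["sum_received"]["bits_per_second"] / 1e6
    return sent, received

if __name__ == "__main__":
    print("send:    %.1f Mbit/s" % run_iperf3(reverse=False)[0])
    print("receive: %.1f Mbit/s" % run_iperf3(reverse=True)[1])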

Testing data communication speed over 2.4 GHz Wi-Fi

Sending:


Receiving:


Bidirectional:


On the 2.4 GHz band, throughput in send mode averaged 13.4 Mbit/s, while receive mode was slightly higher at 23.5 Mbit/s. In bidirectional mode, performance dropped further, averaging 8.2 Mbit/s for transmit and 10.8 Mbit/s for receive.

Testing data communication speed over 5 GHz Wi-Fi

Sending:


Receiving:


Bidirectional:


Switching to 5 GHz Wi-Fi significantly improved performance. In send mode, the Tachyon achieved a stable throughput of around 102.0 Mbit/s, while receive mode reached 87.3 Mbit/s. Bidirectional mode showed a good balance, with transmit averaging 41.4 Mbit/s and receive about 49.4 Mbit/s.

Overall, the Tachyon’s wireless performance is solid on the 5 GHz band, although some instabilities, such as occasional packet loss and retransmissions, were visible in the logs. These may have been caused by the host computer, signal interference, or the router itself. In addition, I ran iperf3 on a Windows 11 host machine, which may not deliver the same level of performance as a Linux-based system for network benchmarking.

Benchmarking the eMMC flash with iozone

The iozone benchmark results on the Particle Tachyon board demonstrate strong I/O performance with a 512 MB test file and record sizes of 1024 KB and 16384 KB. Sequential write speeds ranged from about 516 to 548 MB/s, with rewrite speeds slightly lower at around 513 to 523 MB/s. Read and reread operations were faster, reaching approximately 793 MB/s at 1 MB record sizes and up to 950 MB/s at 16 MB. Random read performance was also efficient, measuring about 709 MB/s for 1 MB records and about 933 MB/s for 16 MB records, while random write speeds remained steady at around 517 to 520 MB/s.


Overall, the results highlight a well-optimized storage subsystem on the Particle Tachyon board, with performance scaling positively at larger block sizes, particularly for reads. The consistent write speeds and high sequential read performance suggest that the onboard storage and controller are tuned for balanced workloads.

Testing web browsers with Speedometer 3.1

I tested Speedometer 3.1 using both the default Chromium web browser included with the OS and a newly installed Firefox browser on the Particle Tachyon. The average score on Firefox reached 4.80, noticeably higher than Chromium’s 3.45. The following analysis combines both logs for comparison.

Some of the results from the Chromium test.


Some of the results from the Firefox test.


Chromium achieved a score of 3.49 in Speedometer 3.1.

Firefox achieved a score of 4.96 in Speedometer 3.1.

On Chromium, frameworks such as Vue, Svelte, and Backbone completed tasks in the 100–200 ms range, showing smooth handling of synchronous operations and moderate latency for asynchronous ones. Heavier frameworks like React and Angular performed core tasks in the mid-200 ms range, while jQuery and the ES5 baselines struggled the most, often exceeding 300 ms and in some cases reaching over one second.

On Firefox, performance was consistently better across nearly all frameworks. Vue and Svelte were the most responsive, completing add and delete tasks more quickly than on Chromium. React and Angular also showed improvement, keeping latencies below their Chromium counterparts. Even jQuery, while still the slowest, exhibited reduced lag compared to the Chromium results.

Testing YouTube video playback on the Particle Tachyon board

I tested YouTube playback on the Particle Tachyon using the Norway 4K video at multiple resolutions, ranging from 240p up to 2160p. The video was viewed in full screen with my desktop set to 1080p, so resolutions above or below that were scaled to fit the display.

At the lower resolutions of 240p, 360p, and 480p, the board had no trouble at all. Playback was smooth with only minor frame drops, and buffer health stayed stable throughout, which shows that light video streaming workloads are handled reliably.

Resolution: 426×240
Resolution: 640×360
Resolution: 854×480
Resolution: 1280×720
Resolution: 1920×1080
Resolution: 2560×1440
Resolution: 3840×2160

Moving up to 720p and 1080p, playback was still quite usable, though I did notice the occasional dropped frame, especially at full HD, where the chipset clearly had to work harder. Pushing beyond that to 1440p and 2160p exposed the limits. The video did play, but frame drops were frequent, and buffer health fluctuated as the connection speed spiked to keep up with the higher bitrate.

YouTube playback performance.

Testing WebGL rendering on web browsers

My next test focused on browser-based 3D rendering using the WebGL Aquarium demo on Chromium. With just a single fish on screen, the frame rate averaged around 5 fps. Increasing the count to 100 or 500 fish kept performance at about 4 fps, and at 1000 to 5000 fish, it remained in the 3–4 fps range. At heavier loads, such as 10,000 to 25,000 fish, the frame rate dropped further to between 2 fps and 1 fps.

Overall, while the 3D rendering performance on Chromium was quite low, likely because the default configuration of the desktop OS image is not yet optimized for GPU acceleration, it did not place heavy demand on the CPU. As a result, other GUI applications remained responsive and did not suffer from the kind of lag I experienced on other SBC boards I had previously tested.

Fish=1, FPS=5
Fish=100, FPS=4
Fish=500, FPS=4
Fish=1000, FPS=4
Fish=5000, FPS=3
Fish=10000, FPS=2
Fish=15000, FPS=2
Fish=20000, FPS=2

Testing 3D graphics rendering with glmark2

Next, I tested 3D graphics rendering with glmark2, which produced an overall score of 62. The tool reported that the system was running on Mesa llvmpipe (LLVM 12.0.0) rather than the Adreno GPU. Other simpler tests, such as texture filtering and basic effects, achieved over 100 fps, while more complex shading and lighting tasks dropped into the 40–50 fps range. The heaviest workloads, including terrain rendering and refraction, showed the greatest limitations, with frame rates of just 4 fps and 10 fps, respectively.

Testing glmark2.

Testing AI performance

Preparations

During this review, I noticed that there were no clear end-to-end tutorials for installing or testing AI on the Particle Tachyon board, so I relied on the resources available from the Qualcomm AI Hub. After creating an account to obtain the required API_TOKEN, I set up a new virtual environment with venv to keep the installation clean. My first attempt with Python 3.10 failed during testing, but switching to Python 3.9 worked much more smoothly.

Next, I installed the Qualcomm AI Hub package using pip3 install qai-hub. Setup continued by logging in and configuring with qai-hub configure --api_token API_TOKEN, which linked my environment to the Qualcomm AI Hub account. From there, I confirmed device availability using qai-hub list-devices. The toolkit also provides useful commands such as qai-hub list-models and qai-hub list-exports, which makes it easy to check supported models. Overall, the process required some trial and error, but once configured, the tools worked reliably for testing AI performance on the Particle Tachyon.
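As a quick sanity check after configuration, the same information can also be queried from the qai-hub Python package. The snippet below is a minimal sketch that lists the available cloud devices and filters for the QCS6490 proxy used later in this review; the exact device names and attributes returned depend on what Qualcomm AI Hub currently exposes.

import qai_hub as hub

# List all devices currently exposed by Qualcomm AI Hub
devices = hub.get_devices()

# Filter for the QCS6490 proxy, a stand-in for the Tachyon's QCM6490
for device in devices:
    if "QCS6490" in device.name:
        print(device.name, device.attributes)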

Basic tests

Several AI models, such as MobileNet, FFNet, and YOLO, are available at https://github.com/quic/ai-hub-models. For my first test, I followed the real-time selfie segmentation example from the getting started page. I installed the model with pip3 install "qai-hub-models[ffnet_40s]" and ran it using python -m qai_hub_models.models.ffnet_40s.demo. The following image shows the segmentation result.


 

Input image used for real-time selfie segmentation testing.
Output result from the real-time selfie segmentation test.

I also tested YOLO by installing YOLOv7 with pip install "qai_hub_models[yolov7]". After that, I ran the object detection demo on a local image by specifying the image path with --image ../particle_test_logs/data/CrossWalk_640.jpg.

Input image
Output result from YOLOv7.
Output result from YOLOv8n.
Output result from YOLOv11n.

Next, I tested AI inferencing in Python using an example script from the Qualcomm AI Hub, and it ran smoothly.
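The exact script is available from the Qualcomm AI Hub documentation, but a minimal sketch of such a cloud-hosted workflow looks roughly like the following. The tiny traced model, the input shape, and the "QCS6490 (Proxy)" device name are assumptions for illustration (check qai-hub list-devices for the real names), not the script I actually ran.

import numpy as np
import torch
import qai_hub as hub

# A tiny stand-in model; in practice this would be e.g. a MobileNet or YOLO variant
class TinyNet(torch.nn.Module):
    def forward(self, image):
        return torch.nn.functional.avg_pool2d(image, 4)

traced = torch.jit.trace(TinyNet().eval(), torch.rand(1, 3, 224, 224))
device = hub.Device("QCS6490 (Proxy)")  # assumed device name

# Compile the model for the target device, then run a cloud-hosted inference job
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),
)
inference_job = hub.submit_inference_job(
    model=compile_job.get_target_model(),
    device=device,
    inputs=dict(image=[np.random.rand(1, 3, 224, 224).astype(np.float32)]),
)
print(list(inference_job.download_output_data().keys()))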


Inference Job Results in Qualcomm AI Hub.

Real-time object detection with a USB camera

The official Particle Tachyon web page notes that the board supports two camera modules through the CSI1 and DSI/CSI2 connectors, as shown in the image below. In theory, this should allow two cameras to be used simultaneously. However, I tested this with my Raspberry Pi Camera Module 3 and Raspberry Pi AI Camera, but neither was detected by the board. I found that Raspberry Pi cameras are not currently supported on the Particle Tachyon due to their closed-source firmware stack. As a result, the following results were obtained using a USB webcam instead.

Camera connectors

To prepare the performance evaluation, image frames were captured from a USB webcam using OpenCV at a resolution of 640×480. The YOLOv8 detection model was installed via pip install "qai-hub-models[yolov8-det]", following the guidelines provided in the qai-hub-models repository.

To export the quantized model with the provided script, I first checked whether the QCM6490 chipset appeared in the supported device list, but it was not available. As an alternative, I selected the QCS6490, which offers nearly identical hardware components. The model was then exported from the Qualcomm AI Hub using the qai_hub_models.models.yolov8_det.export script, targeting the Qualcomm QCS6490 proxy chipset. The export configuration used the TensorFlow Lite (TFLite) runtime with an output resolution of 512×512. Since no quantization parameters were specified, the default setup was applied. The exported files were saved in the designated directory for subsequent testing.

A Python 3.9 virtual environment was created, with the TFLite runtime and NumPy 1.x installed to ensure compatibility. Within this setup, the YOLOv8n model was tested across multiple square input resolutions—512, 256, 128, and 64 pixels—to compare unquantized and quantized performance.
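To illustrate how the pre-processing and inference times were separated, here is a minimal sketch of the capture-and-inference loop, assuming an exported file named yolov8_det.tflite and a 512×512 input. It omits box decoding and NMS, and a quantized export may expect (u)int8 input rather than float32, so treat it as a timing skeleton rather than a complete detector.

import time
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "yolov8_det.tflite"  # hypothetical exported model file
INPUT_SIZE = 512                  # tested at 512, 256, 128, and 64

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Pre-processing: resize, BGR -> RGB, normalize, add batch dimension
    t0 = time.time()
    img = cv2.cvtColor(cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE)), cv2.COLOR_BGR2RGB)
    tensor = np.expand_dims(img.astype(np.float32) / 255.0, axis=0)
    t1 = time.time()

    # Inference with the TFLite runtime
    interpreter.set_tensor(input_details["index"], tensor)
    interpreter.invoke()
    outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
    t2 = time.time()

    print(f"pre={t1 - t0:.3f}s  inference={t2 - t1:.3f}s")

cap.release()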


Testing real-time object detection with YOLOv8n.

Performance of YOLOv8n on the Particle Tachyon.
Particle Tachyon YOLOv8 performance comparison.

The YOLOv8n performance test on the Particle Tachyon, illustrated in the graph, shows a clear correlation between input resolution and processing time. For the unquantized model, inference dominates the total runtime at all resolutions. At 512×512, the model required ~0.69 seconds, but reducing the input size to 256×256 nearly halved the runtime (~0.37s). Further reductions to 128×128 and 64×64 lowered latency to ~0.19s and ~0.16s, respectively. Preprocessing and postprocessing times remained minimal across all sizes, confirming that inference is the primary performance bottleneck.

By contrast, the quantized model significantly accelerated execution. At 512×512, runtime dropped from ~0.69s to ~0.19s (3.7× faster), while at 256×256 it fell from ~0.37s to ~0.05s (almost 8× faster). Even at the smallest tested size, 64×64, the quantized version reduced latency to ~0.01s per frame, approaching real-time performance. These results demonstrate that the Tachyon’s Qualcomm QCM6490 SoC benefits greatly from quantized models, enabling efficient object detection across resolutions suitable for robotics, surveillance, and other edge AI applications where low latency is critical.
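To put those latencies in frames-per-second terms (inference only, ignoring capture and drawing), a quick conversion of the reported averages gives roughly 1.4 FPS for the unquantized 512×512 model and around 5, 20, and 100 FPS for the quantized model at 512×512, 256×256, and 64×64 respectively, as the small calculation below shows.

# Approximate FPS from the measured per-frame latencies reported above
latencies = {
    "unquantized 512x512": 0.69,
    "quantized 512x512":   0.19,
    "quantized 256x256":   0.05,
    "quantized 64x64":     0.01,
}
for name, seconds in latencies.items():
    print(f"{name}: ~{1.0 / seconds:.1f} FPS")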

Measure power consumption and heat distribution

My final test focused on measuring power consumption and heat distribution. I measured the board’s power consumption using a USB power metering dongle under three conditions: idle state, playing a 1080p YouTube video in windowed mode, and running a full CPU load with WebGL (10,000 fish) alongside a 4K YouTube video. In all cases, the readings from the USB meter were fairly stable at around 5.32 W. No significant power spikes were observed, which may be because the board primarily draws power from the USB cable to charge the battery.

CPU usage in idle state.
CPU usage while playing a 1080p YouTube video in windowed mode.
CPU usage while playing a 4K YouTube video and running WebGL Aquarium in windowed mode.
Average power consumption of the Particle Tachyon.

To observe the thermal distribution, I used my FLIR E4 thermal camera to capture images while the board was remotely controlled through the Particle Console’s Terminal panel. It was a rainy morning in my hometown, with the room temperature at around 27–28 °C. The first thermal image was taken while running the htop command, where the maximum temperature observed was about 39.7 °C. The heat was distributed fairly evenly, with the hottest spots concentrated around the processor area. The second image was recorded while I remotely ran the sbc-bench script for several minutes. Although the script failed, it still drew more CPU power than the previous case, and the temperature rose to approximately 57–58 °C. In this case, the processor region is clearly much hotter. Interestingly, the PCB seems to run hotter than the two chips in both cases.

Thermal distribution of the Particle Tachyon board in an idle state.
Thermal image of the Particle Tachyon board under heavy CPU load.

Conclusions

In conclusion, I personally like the overall design of the board, and its performance is quite good. The setup process is straightforward in both desktop OS and headless modes. However, I did encounter some issues: the main limitations were the lack of comprehensive documentation and examples for AI testing on the Particle Tachyon website, the requirement to keep the battery connected at all times, and the absence of audio output from the desktop OS.

For those interested, the Particle Tachyon is available at its official store for $299 (8GB RAM and 128GB Flash), or $249 (4GB RAM and 64GB Flash).




10 Replies to “Particle Tachyon Review – A Qualcomm QCM6490 Edge AI and 5G cellular SBC tested with Ubuntu”

  1. This looks to be a highly outdated operating system. What is the current state of mainline Linux support?
    Which parts, apart from the Wi-Fi chip, probably need closed-source firmware files to run?

      1. There’s a VERY big functional difference between the QCM6490 and the QCS6490; same performance, sure, but calling it the same chip…

        The Tachyon is targeted at projects requiring cellular/GPS.

        That being said, with the Radxa having Ethernet, a full HDMI port, and an NVMe slot vs the Tachyon’s single USB-C for both power and display, and a poorly supported DSI…

        They are very different products.

    1. As I understand it, the main problem is that 5G does not work on Ubuntu 24.04, and that’s mainly why they still default to Ubuntu 20.04.

      1. Could you explain in more detail why 5G needs an outdated operating system? How is the modem connected? The same 6490 SoC in a phone (Fairphone 5) works on mainline Linux with 5G.
        That is why this especially needs more information on why it does not work here.

  2. I’m a simple man… how does this compare to any of the RK3588 based boards?

    They should be fairly similar and significantly cheaper, no?

    1. The CPU of the QCM6490 will be somewhat faster, and AI should be too, with 12 TOPS advertised. But as things stand, software support for the RK3588 is currently much better, as, for instance, GPU acceleration is not working on this Qualcomm board.

      The Tachyon is mostly of interest to people using 5G, as this feature adds significant cost to the board. If you don’t use 5G or the Particle Cloud, the RK3588 boards will offer better value.

      If you want to try a similar Qualcomm platform without 5G, the Radxa Dragon Q6A, based on the QCS6490, will be a better solution. It doesn’t seem to be available for sale just now, but some people have already received samples for testing.
