Hello, today I’m going to review the Particle Tachyon, an SBC designed for high-performance edge AI, IoT, and connectivity applications. It is powered by the Qualcomm QCM6490 platform with an octa-core Kryo CPU, an Adreno GPU, and a Hexagon DSP. The board also integrates robust wireless options, including 5G, Wi-Fi 6E, and Bluetooth 5.2.
The Particle Tachyon adopts the Raspberry Pi form factor and provides various I/O interfaces, such as a 40-pin GPIO header compatible with Raspberry Pi HATs, along with expansion options for sensors and peripherals. It also includes a Qwiic connector for SparkFun and Adafruit integrations, as well as MIPI-CSI/DSI connectors for cameras and displays.
Particle Tachyon Unboxing
The parcel was shipped from Hong Kong and arrived with all the expected components. Inside the package, I found a single-cell 3.7 V LiPo battery with a 3-pin JST-PH connector, the main Tachyon board, a small welcome card, and an additional microphone audio board.
The following photo compares the Particle Tachyon, BeagleY-AI, Raspberry Pi 5, and Raspberry Pi 4 Model B boards.

Device setup
To set up the device, I followed the steps from the official Setting up your Tachyon page. I first connected the LiPo battery to the JST connector on the board, then plugged the board into my laptop using a USB Type-C cable. The red LED lit up immediately, confirming that the board was powered correctly. After that, I installed the Particle CLI tool on my laptop and updated it with the particle update-cli command. Running particle --version confirmed the installation, returning version 3.38.1. The next step was to enter programming mode by pressing and holding the main button for about three seconds until the LED switched to flashing yellow. I then installed the USB driver using the Zadig tool, as recommended.
I first tried the desktop setup by running the CLI command particle tachyon setup and following the guided process. At the software selection step, I chose the desktop variant, which includes a full GUI environment. According to the official documentation, both Ubuntu 20.04 (legacy support) and Ubuntu 24.04 (current development) are supported. At the time of this review, the setup tool installed Ubuntu 20.04. The OS download and installation completed smoothly in under 20 minutes.
=================================================================================== Particle Tachyon Setup Command =================================================================================== Welcome to the Particle Tachyon setup! This interactive command: - Flashes your Tachyon device - Configures it (password, WiFi credentials etc...) - Connects it to the internet and the Particle Cloud! What you'll need: 1. Your Tachyon device 2. The Tachyon battery 3. A USB-C cable Important: - This tool requires you to be logged into your Particle account. - For more details, check out the documentation at: https://part.cl/setup-tachyon =================================================================================== Step 1: Okay—first up! Checking if you're logged in... ...All set! You're logged in as xxxxxxxx@xxxxxxxx and ready to go! =================================================================================== Step 2: Now let's get the device info Starting Process. See logs at: \Users\kumpe\.particle\logs\tachyon_flash_xxxxxxx.log Device info: - Device ID: xxxxxxxx - Region: RoW - OS Version: Ubuntu 20.04 - USB Version: 3.0 =================================================================================== Step 3: Now let's capture some information about how you'd like your device to be configured when it first boots. First, pick a password for the root account on your Tachyon device. This same password is also used for the "particle" user account. ? Enter a password for the root and particle accounts: [hidden] ? Re-enter the password for the root and particle accounts: [hidden] ... ... =================================================================================== Step 11: All done! Your Tachyon device is now booting into the operating system and will automatically connect to Wi-Fi. It will also: - Activate the built-in 5G modem - Connect to the Particle Cloud - Run all system services, including battery charging For more information about Tachyon, visit our developer site at: https://developer.particle.io! View your device on the Particle Console at: https://console.particle.io/testproduct-xxxx/devices/yyyyyy |
After restarting the device and reconnecting the keyboard and mouse, the system booted into the “Welcome to Tachyon” dialog. Steps 1 through 3 passed smoothly, but step 4 was never completed, no matter how many times I retried. When I checked the Particle Console, I could see the device ID listed correctly under the Devices page, but the handshake process never finished.



Based on community feedback suggesting the headless setup instead of the GUI desktop, I next tried the headless option. This installation also completed smoothly, and the main LED transitioned from blinking green to magenta, indicating that the Wi-Fi connection was established. However, the LED never turned cyan, which should indicate a successful cellular connection. Nevertheless, this time the Particle Console successfully completed the handshake and established a connection with the board.

Next, I tested some remote command-line operations, such as running htop remotely from the Terminal panel without any issues. The following images show additional tests using the Particle Console to check the device’s status.




Checking the Particle Tachyon board’s information with inxi
The inxi log from the Particle Tachyon board shows it running Ubuntu 20.04.6 LTS (Focal Fossa) with kernel 5.4.219 on an ARM-based Qualcomm SoC. The CPU is identified as an 8-core Kryo cluster clocked between 300 MHz and 2.7 GHz, consistent with the advertised Qualcomm QCM6490 platform used in the Tachyon. inxi reports the display device as msm-dai-q6-hdmi with output at 1920×1080 @ 60 Hz, but rendering falls back to llvmpipe (LLVM 12, Mesa 21.2.6) instead of direct Adreno GPU acceleration. Battery monitoring works as expected, and thermal readings were stable with the CPU at 28 °C.
In terms of connectivity, the log lists the Qualcomm CNSS PCI Wi-Fi interface with wlan0 active, along with multiple virtual and cellular (rmnet) interfaces. Local storage is reported as a 116 GB KM8L9001JM-B624 flash module exposed as /dev/sd* devices. Memory usage shows 3.96 GB in use out of 7.1 GB of RAM. Overall, the log confirms that the reported hardware matches the official Tachyon specifications, though graphics support currently defaults to software rendering rather than exposing the Adreno GPU through proper drivers.
System: Host: tachyon-7c1f4061 Kernel: 5.4.219 aarch64 bits: 64 Desktop: Gnome 3.36.9 Distro: Ubuntu 20.04.6 LTS (Focal Fossa) Machine: Type: ARM Device System: Particle Tachyon details: N/A Battery: ID-1: battery charge: 94% condition: N/A CPU: Topology: 8-Core (3-Die) model: N/A variant: kryo bits: 64 type: MCP MCM Speed: 806 MHz min/max: 300:691:806/1958:2400:2707 MHz Core speeds (MHz): 1: 691 2: 691 3: 691 4: 691 5: 691 6: 691 7: 691 8: 806 Graphics: Device-1: msm-dai-q6-hdmi driver: msm_dai_q6_hdmi v: N/A Display: wayland server: X.Org 1.20.13 driver: msm_dai_q6_hdmi resolution: 1920x1080~60Hz OpenGL: renderer: llvmpipe (LLVM 12.0.0 128 bits) v: 4.5 Mesa 21.2.6 Audio: Device-1: msm-audio-apr driver: audio_apr Device-2: msm-dai-q6-hdmi driver: msm_dai_q6_hdmi Device-3: audio-ref-clk driver: audio_ref_clk Device-4: audio-ref-clk driver: audio_ref_clk Device-5: audio-ref-clk driver: audio_ref_clk Device-6: audio-ref-clk driver: audio_ref_clk Device-7: usb-audio-qmi-dev driver: uaudio_qmi Device-8: audio-ref-clk driver: audio_ref_clk Device-9: audio-ref-clk driver: audio_ref_clk Device-10: audio-ref-clk driver: audio_ref_clk Device-11: audio-ref-clk driver: audio_ref_clk Device-12: msm-audio-ion-cma driver: msm_audio_ion Device-13: msm-audio-ion driver: msm_audio_ion Device-14: q6core-audio driver: q6core_audio Sound Server: ALSA v: k5.4.219 Network: Device-1: Qualcomm driver: cnss_pci IF: p2p0 state: down mac: ea:8d:a6:f1:4d:e3 Device-2: ipa-smmu-wlan-cb driver: ipa IF-ID-1: bridge0 state: down mac: 22:2c:6a:1e:74:21 IF-ID-2: docker0 state: down mac: 02:42:f6:6c:a2:0e IF-ID-3: dummy0 state: down mac: 76:12:a9:e6:01:75 IF-ID-4: erspan0 state: down mac: 00:00:00:00:00:00 IF-ID-5: gre0 state: down mac: 00:00:00:00 IF-ID-6: gretap0 state: down mac: 00:00:00:00:00:00 IF-ID-7: ip6_vti0 state: down mac: 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 IF-ID-8: ip6gre0 state: down mac: 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 IF-ID-9: ip6tnl0 state: down mac: 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 IF-ID-10: ip_vti0 state: down mac: 00:00:00:00 IF-ID-11: rmnet_data0 state: down mac: N/A IF-ID-12: rmnet_data1 state: down mac: N/A IF-ID-13: rmnet_data10 state: down mac: N/A IF-ID-14: rmnet_data11 state: down mac: N/A IF-ID-15: rmnet_data12 state: down mac: N/A IF-ID-16: rmnet_data13 state: down mac: N/A IF-ID-17: rmnet_data14 state: down mac: N/A IF-ID-18: rmnet_data15 state: down mac: N/A IF-ID-19: rmnet_data16 state: down mac: N/A IF-ID-20: rmnet_data2 state: down mac: N/A IF-ID-21: rmnet_data3 state: down mac: N/A IF-ID-22: rmnet_data4 state: down mac: N/A IF-ID-23: rmnet_data5 state: down mac: N/A IF-ID-24: rmnet_data6 state: down mac: N/A IF-ID-25: rmnet_data7 state: down mac: N/A IF-ID-26: rmnet_data8 state: down mac: N/A IF-ID-27: rmnet_data9 state: down mac: N/A IF-ID-28: rmnet_ipa0 state: unknown speed: N/A duplex: N/A mac: N/A IF-ID-29: sit0 state: down mac: 00:00:00:00 IF-ID-30: tailscale0 state: unknown speed: 10 Mbps duplex: full mac: N/A IF-ID-31: tunl0 state: down mac: 00:00:00:00 IF-ID-32: wlan0 state: up mac: e8:8d:a6:6b:4d:e3 Drives: Local Storage: total: 118.88 GiB used: 10.77 GiB (9.1%) ID-1: /dev/sda model: KM8L9001JM-B624 size: 116.73 GiB ID-2: /dev/sdb model: KM8L9001JM-B624 size: 8.0 MiB ID-3: /dev/sdc model: KM8L9001JM-B624 size: 8.0 MiB ID-4: /dev/sdd model: KM8L9001JM-B624 size: 128.0 MiB ID-5: /dev/sde model: KM8L9001JM-B624 size: 128.0 MiB ID-6: /dev/sdf model: KM8L9001JM-B624 size: 144.0 MiB ID-7: /dev/sdg model: KM8L9001JM-B624 size: 1.75 GiB Partition: 
ID-1: / size: 110.21 GiB used: 10.57 GiB (9.6%) fs: ext4 dev: /dev/sda11 Sensors: System Temperatures: cpu: 28.0 C mobo: N/A Fan Speeds (RPM): N/A Info: Processes: 519 Uptime: 11m Memory: 7.13 GiB used: 3.96 GiB (55.6%) Shell: bash inxi: 3.0.38 |
Benchmarking with sbc-bench
Briefly, the full sbc-bench run failed: the script consistently returned failed to set pid xxxx's affinity: Invalid argument errors and eventually aborted with a division-by-zero error, as shown in the log below. I posted an issue on GitHub but could not find a solution in a timely manner. I was still able to run sbc-bench -m to monitor CPU clocks and basic system information, as shown in the second log. Since I did not have time to analyze the error further and wanted to move on to testing other features of the Particle Tachyon, I stopped the sbc-bench testing at this point.
Starting to examine hardware/software for review purposes... WARNING: dmesg output does not contain early boot messages which help in identifying hardware details. It is recommended to reboot now and then execute the benchmarks. Press [ctrl]-[c] to stop or [enter] to continue. Average load and/or CPU utilization too high (too much background activity). Waiting... ... sbc-bench v0.9.72 Installing needed tools: apt-get -f -qq -y install sysstat links mmc-utils smartmontools stress-ng p7zip, tinymembench, ramlat, mhz.., cpufetch (can't build cpuminer) Done. Checking cpufreq OPP...taskset: failed to set pid 26174's affinity: Invalid argument taskset: failed to set pid 26176's affinity: Invalid argument taskset: failed to set pid 26198's affinity: Invalid argument taskset: failed to set pid 26200's affinity: Invalid argument taskset: failed to set pid 26220's affinity: Invalid argument ... Done. Executing tinymembench. Done. Executing RAM latency tester...taskset: failed to set pid 28505's affinity: Invalid argument taskset: failed to set pid 28514's affinity: Invalid argument Done. Executing OpenSSL benchmark. Done. Executing 7-zip benchmark. Done. Throttling test: heating up the device, 5 more minutes to wait. Done. Checking cpufreq OPP again...taskset: failed to set pid 34832's affinity: Invalid argument taskset: failed to set pid 34834's affinity: Invalid argument taskset: failed to set pid 34866's affinity: Invalid argument taskset: failed to set pid 34868's affinity: Invalid argument Done (13 minutes elapsed). ./sbc-bench.sh: line 4595: 100 * MeasuredClockspeedStart / 0 : division by 0 (error token is "0 ") ./sbc-bench.sh: line 1: kill: (32094) - No such process |
This is the output of the sbc-bench -m monitoring command.
Snapdragon 497 rev 1.0, Kernel: aarch64, Userland: arm64

CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                 cpufreq   min    max
 CPU    cluster  policy   speed  speed   core type
  0        0        0      300   1958    Cortex-A55 / r2p0
  1        0        0      300   1958    Cortex-A55 / r2p0
  2        0        0      300   1958    Cortex-A55 / r2p0
  3        0        0      300   1958    Cortex-A55 / r2p0
  4        1        4      691   2400    Cortex-A78 / r1p1
  5        1        4      691   2400    Cortex-A78 / r1p1
  6        1        4      691   2400    Cortex-A78 / r1p1
  7        2        7      806   2707    Cortex-A78 / r1p1

Time        cpu0/cpu4/cpu7    load %cpu %sys %usr %nice %io %irq   Temp
10:33:56: 1958/2400/2707MHz  2.07  11%   1%   9%   0%   0%   0%     °C
10:34:01: 1958/2400/2707MHz  2.14   2%   1%   0%   0%   0%   0%     °C
10:34:06: 1958/2400/2707MHz  2.21   2%   1%   0%   0%   0%   0%     °C
10:34:11: 1958/2400/2707MHz  2.27   1%   1%   0%   0%   0%   0%     °C
Testing the network performance with iperf3
I tested the wireless network communication speeds using iperf3 over both 2.4 GHz and 5 GHz Wi-Fi, connected through my home router. All of the following results were obtained without any optimization; other devices, such as TVs and mobile phones, may have been using the Wi-Fi at the same time. The router was located about 6–7 meters away.
For the test setup, I configured my Windows 11 laptop as the server using the iperf3 -s command. The iperf3 results between the Particle Tachyon and the host computer show a clear difference in performance between 2.4 GHz and 5 GHz Wi-Fi, as shown below.
Testing data communication speed over 2.4 GHz Wi-Fi
Sending:
iperf3 -c 192.168.1.8 -t 60 -i 10 Connecting to host 192.168.1.8, port 5201 [ 5] local 192.168.1.11 port 39964 connected to 192.168.1.8 port 5201 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5] 0.00-10.00 sec 15.2 MBytes 12.7 Mbits/sec 16 33.9 KBytes [ 5] 10.00-20.00 sec 17.5 MBytes 14.7 Mbits/sec 12 33.9 KBytes [ 5] 20.00-30.00 sec 14.1 MBytes 11.8 Mbits/sec 20 28.3 KBytes [ 5] 30.00-40.00 sec 14.3 MBytes 12.0 Mbits/sec 18 43.8 KBytes [ 5] 40.00-50.00 sec 17.1 MBytes 14.3 Mbits/sec 14 46.7 KBytes [ 5] 50.00-60.00 sec 17.8 MBytes 14.9 Mbits/sec 14 90.5 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bitrate Retr [ 5] 0.00-60.00 sec 96.0 MBytes 13.4 Mbits/sec 94 sender [ 5] 0.00-60.02 sec 95.4 MBytes 13.3 Mbits/sec receiver iperf Done. |
Receiving:
iperf3 -c 192.168.1.8 -t 60 -i 10 -R Connecting to host 192.168.1.8, port 5201 Reverse mode, remote host 192.168.1.8 is sending [ 5] local 192.168.1.11 port 38422 connected to 192.168.1.8 port 5201 [ ID] Interval Transfer Bitrate [ 5] 0.00-10.00 sec 18.5 MBytes 15.5 Mbits/sec [ 5] 10.00-20.00 sec 19.9 MBytes 16.7 Mbits/sec [ 5] 20.00-30.00 sec 28.8 MBytes 24.2 Mbits/sec [ 5] 30.00-40.00 sec 28.6 MBytes 24.0 Mbits/sec [ 5] 40.00-50.00 sec 36.3 MBytes 30.4 Mbits/sec [ 5] 50.00-60.00 sec 36.4 MBytes 30.5 Mbits/sec - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bitrate [ 5] 0.00-60.02 sec 169 MBytes 23.6 Mbits/sec sender [ 5] 0.00-60.00 sec 168 MBytes 23.5 Mbits/sec receiver iperf Done. |
Bidirectional:
iperf3 -c 192.168.1.8 -t 60 -i 10 --bidir Connecting to host 192.168.1.8, port 5201 [ 5] local 192.168.1.11 port 39962 connected to 192.168.1.8 port 5201 [ 7] local 192.168.1.11 port 39976 connected to 192.168.1.8 port 5201 [ ID][Role] Interval Transfer Bitrate Retr Cwnd [ 5][TX-C] 0.00-10.00 sec 10.9 MBytes 9.13 Mbits/sec 87 15.6 KBytes [ 7][RX-C] 0.00-10.00 sec 6.53 MBytes 5.48 Mbits/sec [ 5][TX-C] 10.00-20.00 sec 6.40 MBytes 5.37 Mbits/sec 31 24.0 KBytes [ 7][RX-C] 10.00-20.00 sec 11.6 MBytes 9.76 Mbits/sec [ 5][TX-C] 20.00-30.00 sec 11.7 MBytes 9.80 Mbits/sec 28 21.2 KBytes [ 7][RX-C] 20.00-30.00 sec 16.9 MBytes 14.2 Mbits/sec [ 5][TX-C] 30.00-40.00 sec 9.88 MBytes 8.29 Mbits/sec 39 17.0 KBytes [ 7][RX-C] 30.00-40.00 sec 12.9 MBytes 10.8 Mbits/sec [ 5][TX-C] 40.00-50.00 sec 9.82 MBytes 8.24 Mbits/sec 38 24.0 KBytes [ 7][RX-C] 40.00-50.00 sec 17.5 MBytes 14.7 Mbits/sec [ 5][TX-C] 50.00-60.00 sec 10.1 MBytes 8.44 Mbits/sec 43 35.4 KBytes [ 7][RX-C] 50.00-60.00 sec 11.6 MBytes 9.73 Mbits/sec - - - - - - - - - - - - - - - - - - - - - - - - - [ ID][Role] Interval Transfer Bitrate Retr [ 5][TX-C] 0.00-60.00 sec 58.7 MBytes 8.21 Mbits/sec 266 sender [ 5][TX-C] 0.00-60.01 sec 58.0 MBytes 8.11 Mbits/sec receiver [ 7][RX-C] 0.00-60.00 sec 77.2 MBytes 10.8 Mbits/sec sender [ 7][RX-C] 0.00-60.01 sec 77.1 MBytes 10.8 Mbits/sec receiver iperf Done. |
On the 2.4 GHz band, throughput in send mode averaged 13.4 Mbit/s, while receive mode was noticeably higher at 23.5 Mbit/s. In bidirectional mode, performance dropped, averaging 8.2 Mbit/s for transmit and 10.8 Mbit/s for receive.
Testing data communication speed over 5 GHz Wi-Fi
Sending:
Connecting to host 192.168.1.8, port 5201 [ 5] local 192.168.1.11 port 56378 connected to 192.168.1.8 port 5201 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5] 0.00-10.00 sec 126 MBytes 106 Mbits/sec 2 724 KBytes [ 5] 10.00-20.00 sec 122 MBytes 103 Mbits/sec 5 400 KBytes [ 5] 20.00-30.00 sec 123 MBytes 104 Mbits/sec 2 421 KBytes [ 5] 30.00-40.00 sec 123 MBytes 103 Mbits/sec 4 311 KBytes [ 5] 40.00-50.00 sec 120 MBytes 101 Mbits/sec 0 532 KBytes [ 5] 50.00-60.00 sec 118 MBytes 99.1 Mbits/sec 2 516 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bitrate Retr [ 5] 0.00-60.00 sec 733 MBytes 102 Mbits/sec 15 sender [ 5] 0.00-60.04 sec 733 MBytes 102 Mbits/sec receiver |
Receiving:
iperf3 -c 192.168.1.8 -t 60 -i 10 -R Connecting to host 192.168.1.8, port 5201 Reverse mode, remote host 192.168.1.8 is sending [ 5] local 192.168.1.11 port 34518 connected to 192.168.1.8 port 5201 [ ID] Interval Transfer Bitrate [ 5] 0.00-10.00 sec 98.0 MBytes 82.2 Mbits/sec [ 5] 10.00-20.00 sec 103 MBytes 86.3 Mbits/sec [ 5] 20.00-30.00 sec 101 MBytes 84.8 Mbits/sec [ 5] 30.00-40.00 sec 106 MBytes 89.0 Mbits/sec [ 5] 40.00-50.00 sec 109 MBytes 91.5 Mbits/sec [ 5] 50.00-60.00 sec 107 MBytes 90.0 Mbits/sec - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bitrate [ 5] 0.00-60.01 sec 625 MBytes 87.4 Mbits/sec sender [ 5] 0.00-60.00 sec 624 MBytes 87.3 Mbits/sec receiver iperf Done. |
Bidirectional:
Connecting to host 192.168.1.8, port 5201 [ 5] local 192.168.1.11 port 55836 connected to 192.168.1.8 port 5201 [ 7] local 192.168.1.11 port 55844 connected to 192.168.1.8 port 5201 [ ID][Role] Interval Transfer Bitrate Retr Cwnd [ 5][TX-C] 0.00-10.00 sec 35.8 MBytes 30.0 Mbits/sec 39 247 KBytes [ 7][RX-C] 0.00-10.00 sec 61.0 MBytes 51.1 Mbits/sec [ 5][TX-C] 10.00-20.00 sec 42.9 MBytes 36.0 Mbits/sec 2 245 KBytes [ 7][RX-C] 10.00-20.00 sec 66.5 MBytes 55.8 Mbits/sec [ 5][TX-C] 20.00-30.00 sec 53.3 MBytes 44.7 Mbits/sec 2 460 KBytes [ 7][RX-C] 20.00-30.00 sec 57.6 MBytes 48.3 Mbits/sec [ 5][TX-C] 30.00-40.00 sec 49.4 MBytes 41.4 Mbits/sec 13 267 KBytes [ 7][RX-C] 30.00-40.00 sec 51.2 MBytes 42.9 Mbits/sec [ 5][TX-C] 40.00-50.00 sec 60.1 MBytes 50.4 Mbits/sec 11 372 KBytes [ 7][RX-C] 40.00-50.00 sec 59.9 MBytes 50.3 Mbits/sec [ 5][TX-C] 50.00-60.00 sec 54.5 MBytes 45.7 Mbits/sec 2 393 KBytes [ 7][RX-C] 50.00-60.00 sec 56.5 MBytes 47.4 Mbits/sec - - - - - - - - - - - - - - - - - - - - - - - - - [ ID][Role] Interval Transfer Bitrate Retr [ 5][TX-C] 0.00-60.00 sec 296 MBytes 41.4 Mbits/sec 69 sender [ 5][TX-C] 0.00-60.06 sec 295 MBytes 41.2 Mbits/sec receiver [ 7][RX-C] 0.00-60.00 sec 353 MBytes 49.4 Mbits/sec sender [ 7][RX-C] 0.00-60.06 sec 353 MBytes 49.3 Mbits/sec receiver iperf Done. |
Switching to 5 GHz Wi-Fi significantly improved performance. In send mode, the Tachyon achieved a stable average of around 102 Mbit/s, while receive mode reached 87.3 Mbit/s. Bidirectional mode was reasonably balanced, with transmit averaging 41.4 Mbit/s and receive about 49.4 Mbit/s.
Overall, the Tachyon’s wireless performance is solid on the 5 GHz band, although the logs show some instability, such as occasional packet loss and retransmissions. These may have been caused by the host computer, signal interference, or the router itself. In addition, I ran iperf3 on a Windows 11 host machine, which may not deliver the same level of performance as a Linux-based system for network benchmarking.
Benchmarking the onboard flash storage with iozone
The iozone benchmark results on the Particle Tachyon board demonstrate strong I/O performance with a 512 MB test file and record sizes of 1024 KB and 16384 KB. Sequential write speeds ranged from about 516 to 548 MB/s, with rewrite speeds slightly lower at around 513 to 523 MB/s. Read and reread operations were faster, reaching approximately 793 MB/s at 1 MB record sizes and up to 950 MB/s at 16 MB. Random read performance was also efficient, measuring about 709 MB/s for 1 MB records and about 933 MB/s for 16 MB records, while random write speeds remained steady at around 517 to 520 MB/s.
iozone -e -I -a -s 512M -r 1024k -r 16384k -i 0 -i 1 -i 2
    Iozone: Performance Test of File I/O
            Version $Revision: 3.489 $
    Compiled for 64 bit mode.
    Build: linux

                                                         random    random      bkwd    record    stride
              kB  reclen    write  rewrite     read   reread       read     write      read   rewrite      read    fwrite  frewrite    fread   freread
          524288    1024   516731   513034   792809   795314    708912    517244
          524288   16384   548228   523277   942690   950215    933300    520009

iozone test complete.
Overall, the results highlight a well-optimized storage subsystem on the Particle Tachyon board, with performance scaling positively at larger block sizes, particularly for reads. The consistent write speeds and high sequential read performance suggest that the onboard storage and controller are tuned for balanced workloads.
Testing web browsers with Speedometer 3.1
I tested Speedometer 3.1 using both the default Chromium web browser included with the OS and a newly installed Firefox browser on the Particle Tachyon. The average score on Firefox reached 4.80, noticeably higher than Chromium’s 3.45. The following analysis combines both logs for comparison.
Some of the results from the Chromium test.
{ ... "Score": { "name": "Score", "unit": "score", "description": "Scaled inverse of the Geomean", "mean": 3.490126014335767, "delta": 0.08525368411588982, "percentDelta": 2.4427107722101864, "sum": 34.90126014335767, "min": 3.2505296853945795, "max": 3.671860859944475, "values": [ 3.2505296853945795, 3.528285683765244, 3.602685103205795, 3.3430617694265514, 3.5040605622261314, 3.475742816788889, 3.5150877298778496, 3.671860859944475, 3.495627857605704, 3.514318075122452 ] } } |
Some of the results from the Firefox test.
{ ... "Score": { "name": "Score", "unit": "score", "description": "Scaled inverse of the Geomean", "mean": 4.95512761866093, "delta": 0.14967231208370088, "percentDelta": 3.020554133056783, "sum": 49.55127618660929, "min": 4.387574109889527, "max": 5.099536229934388, "values": [ 4.387574109889527, 4.86512346366237, 5.017082525331689, 5.05246500141165, 5.040888596011321, 5.006502278815123, 5.042243557016601, 4.977395230281133, 5.099536229934388, 5.062465194255494 ] } } |


On Chromium, frameworks such as Vue, Svelte, and Backbone completed tasks in the 100–200 ms range, showing smooth handling of synchronous operations and moderate latency for asynchronous ones. Heavier frameworks like React and Angular performed core tasks in the mid-200 ms range, while jQuery and the ES5 baselines struggled the most, often exceeding 300 ms and in some cases reaching over one second.
On Firefox, performance was consistently better across nearly all frameworks. Vue and Svelte were the most responsive, completing add and delete tasks more quickly than on Chromium. React and Angular also showed improvement, keeping latencies below their Chromium counterparts. Even jQuery, while still the slowest, exhibited reduced lag compared to the Chromium results.
Testing YouTube video playback on the Particle Tachyon board
I tested YouTube playback on the Particle Tachyon using the Norway 4K video at multiple resolutions ranging from 240p up to 2160p. The video was viewed in full screen with my desktop set to 1080p, so lower resolutions were upscaled and higher resolutions downscaled to fit.
At the lower resolutions of 240p, 360p, and 480p, the board had no trouble at all. Playback was smooth with only minor frame drops, and buffer health stayed stable throughout, which shows that light video streaming workloads are handled reliably.







Moving up to 720p and 1080p, playback was still quite usable, though I did notice the occasional dropped frame, especially at full HD, where the chipset clearly had to work harder. Pushing beyond that to 1440p and 2160p exposed the limits. The video did play, but frame drops were frequent, and buffer health fluctuated as the connection speed spiked to keep up with the higher bitrate.

Testing WebGL rendering on web browsers
My next test focused on browser-based 3D rendering using the WebGL Aquarium demo on Chromium. With just a single fish on screen, the frame rate averaged around 5 fps. Increasing the count to 100 or 500 fish kept performance at about 4 fps, and at 1000 to 5000 fish, it remained in the 3–4 fps range. At heavier loads, such as 10,000 to 25,000 fish, the frame rate dropped further to between 2 fps and 1 fps.
Overall, while the 3D rendering performance on Chromium was quite low, likely because the default configuration of the desktop OS image is not yet optimized for GPU acceleration, it did not place heavy demand on the CPU. As a result, other GUI applications remained responsive and did not suffer from the kind of lag I experienced on other SBC boards I had previously tested.
Fish=1, FPS=5
Fish=100, FPS=4
Fish=500, FPS=4
Fish=1000, FPS=4
Fish=5000, FPS=3
Fish=10000, FPS=2
Fish=15000, FPS=2
Fish=20000, FPS=2
Testing 3D graphics rendering with glmark2
Next, I tested 3D graphics rendering with glmark2, which produced an overall score of 62. The tool reported that the system was rendering on Mesa llvmpipe (LLVM 12.0.0) rather than the Adreno GPU. Simpler tests, such as texture filtering and basic effects, achieved over 100 fps, while more complex shading and lighting tasks dropped into the 40–50 fps range. The heaviest workloads, including terrain rendering and refraction, showed the greatest limitations, with frame rates of just 4 fps and 10 fps, respectively.
======================================================= glmark2 2021.02 ======================================================= OpenGL Information GL_VENDOR: Mesa/X.org GL_RENDERER: llvmpipe (LLVM 12.0.0, 128 bits) GL_VERSION: 3.1 Mesa 21.2.6 ======================================================= [build] use-vbo=false: FPS: 68 FrameTime: 14.706 ms [build] use-vbo=true: FPS: 67 FrameTime: 14.925 ms [texture] texture-filter=nearest: FPS: 113 FrameTime: 8.850 ms [texture] texture-filter=linear: FPS: 118 FrameTime: 8.475 ms [texture] texture-filter=mipmap: FPS: 113 FrameTime: 8.850 ms [shading] shading=gouraud: FPS: 48 FrameTime: 20.833 ms [shading] shading=blinn-phong-inf: FPS: 44 FrameTime: 22.727 ms [shading] shading=phong: FPS: 40 FrameTime: 25.000 ms [shading] shading=cel: FPS: 40 FrameTime: 25.000 ms [bump] bump-render=high-poly: FPS: 24 FrameTime: 41.667 ms [bump] bump-render=normals: FPS: 113 FrameTime: 8.850 ms [bump] bump-render=height: FPS: 108 FrameTime: 9.259 ms [effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 111 FrameTime: 9.009 ms [effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 103 FrameTime: 9.709 ms [pulsar] light=false:quads=5:texture=false: FPS: 121 FrameTime: 8.264 ms [desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 19 FrameTime: 52.632 ms [desktop] effect=shadow:windows=4: FPS: 53 FrameTime: 18.868 ms [buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 37 FrameTime: 27.027 ms [buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 38 FrameTime: 26.316 ms [buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 38 FrameTime: 26.316 ms [ideas] speed=duration: FPS: 49 FrameTime: 20.408 ms [jellyfish] <default>: FPS: 36 FrameTime: 27.778 ms [terrain] <default>: FPS: 4 FrameTime: 250.000 ms [shadow] <default>: FPS: 28 FrameTime: 35.714 ms [refract] <default>: FPS: 10 FrameTime: 100.000 ms [conditionals] fragment-steps=0:vertex-steps=0: FPS: 66 FrameTime: 15.152 ms [conditionals] fragment-steps=5:vertex-steps=0: FPS: 63 FrameTime: 15.873 ms [conditionals] fragment-steps=0:vertex-steps=5: FPS: 66 FrameTime: 15.152 ms [function] fragment-complexity=low:fragment-steps=5: FPS: 66 FrameTime: 15.152 ms [function] fragment-complexity=medium:fragment-steps=5: FPS: 63 FrameTime: 15.873 ms [loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 64 FrameTime: 15.625 ms [loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 65 FrameTime: 15.385 ms [loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 65 FrameTime: 15.385 ms ======================================================= glmark2 Score: 62 ======================================================= |
Testing AI performance
Preparations
During this review, I noticed that there were no clear end-to-end tutorials for installing or testing AI on the Particle Tachyon board, so I relied on the resources available from the Qualcomm AI Hub. After creating an account to obtain the required API_TOKEN, I set up a new virtual environment with venv to keep the installation clean. My first attempt with Python 3.10 failed during testing, but switching to Python 3.9 worked much more smoothly.
Next, I installed the Qualcomm AI Hub package using pip3 install qai-hub. Setup continued by logging in and configuring with qai-hub configure --api_token API_TOKEN, which linked my environment to my Qualcomm AI Hub account. From there, I confirmed device availability using qai-hub list-devices. The toolkit also provides useful commands such as qai-hub list-models and qai-hub list-exports, which make it easy to check supported models. Overall, the process required some trial and error, but once configured, the tools worked reliably for testing AI performance on the Particle Tachyon.
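As a quick sanity check that the API token is configured correctly, the same device list can also be queried from Python through the qai_hub client. The minimal sketch below only assumes that qai-hub has been installed and configured as described above.

# List the cloud-hosted devices that jobs can be submitted to
# (the Python equivalent of the qai-hub list-devices command).
import qai_hub as hub

for device in hub.get_devices():
    print(device.name, device.os, device.attributes)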
Basic tests
Several AI models, such as MobileNet, FFNet, and YOLO, are available at https://github.com/quic/ai-hub-models. For my first test, I followed the FFNet-40S semantic segmentation example (trained on the Cityscapes street-scene dataset) from the getting started page. I installed the model with pip3 install "qai-hub-models[ffnet_40s]" and ran it using python -m qai_hub_models.models.ffnet_40s.demo. The following image shows the segmentation result.
(venv310) particle@tachyon-7c1f4061:~/Documents/particle_test_logs$ python -m qai_hub_models.models.ffnet_40s.demo Downloading data at https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/cityscapes_segmentation/v1/cityscapes_like_demo_2048x1024.jpg to /home/particle/.qaihm/models/cityscapes_segmentation/v1/cityscapes_like_demo_2048x1024.jpg 100%|█████████████████████████████████████████████████████████████████████████████| 443k/443k [05:22<00:00, 1.37kB/s] Done Downloading data at https://github.com/quic/aimet-model-zoo/releases/download/torch_segmentation_ffnet/ffnet40S_dBBB_cityscapes_state_dict_quarts.pth to /home/particle/.qaihm/models/ffnet/v1/ffnet40S/ffnet40S_dBBB_cityscapes_state_dict_quarts.pth 100%|███████████████████████████████████████████████████████████████████████████| 55.8M/55.8M [00:05<00:00, 9.72MB/s] Done cityscapes_segmentation requires repository https://github.com/Qualcomm-AI-research/FFNet.git . Ok to clone? [Y/n] y Cloning https://github.com/Qualcomm-AI-research/FFNet.git to /home/particle/.qaihm/models/cityscapes_segmentation/v2/Qualcomm-AI-research_FFNet_git... Done Loading pretrained model state dict from /home/particle/.qaihm/models/ffnet/v1/ffnet40S/ffnet40S_dBBB_cityscapes_state_dict_quarts.pth Initializing ffnnet40S_dBBB_mobile weights /home/particle/.qaihm/models/cityscapes_segmentation/v2/Qualcomm-AI-research_FFNet_git/models/ffnet_blocks.py:599: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. pretrained_dict = torch.load( Running Inference on 0 samples Displaying predicted image |


I also tested YOLO by installing YOLOv7 with pip install "qai_hub_models[yolov7]". After that, I ran the object detection demo on a local image by specifying the image path with --image ../particle_test_logs/data/CrossWalk_640.jpg.
Input image
YOLOv7
YOLOv8n
YOLOv11n
Next, I tested AI inferencing in Python using an example script from the Qualcomm AI Hub documentation, which submits an inference job to a cloud-hosted device (a Samsung Galaxy S23 family phone in this example), and it ran smoothly.
# https://app.aihub.qualcomm.com/docs/hub/inference_examples.html#running-inference-with-a-tflite-model
import numpy as np
import qai_hub as hub

sample = np.random.random((1, 224, 224, 3)).astype(np.float32)

inference_job = hub.submit_inference_job(
    model="models/SqueezeNet10.tflite",
    device=hub.Device("Samsung Galaxy S23 (Family)"),
    inputs=dict(x=[sample]),
)
assert isinstance(inference_job, hub.InferenceJob)
inference_job.download_output_data()

Real-time object detection with a USB camera
The official Particle Tachyon web page notes that the board supports two camera modules through the CSI1 and DSI/CSI2 connectors, as shown in the image below, so in theory two cameras can be used simultaneously. However, neither my Raspberry Pi Camera Module 3 nor my Raspberry Pi AI Camera was detected by the board; Raspberry Pi cameras are not currently supported on the Particle Tachyon due to their closed-source firmware stack. As a result, the following results were obtained using a USB webcam instead.
To prepare the performance evaluation, image frames were captured from a USB webcam using OpenCV at a resolution of 640×480. The YOLOv8 detection model was installed via pip install "qai-hub-models[yolov8-det]", following the guidelines provided in the qai-hub-models repository.
To export the quantized model with the provided script, I first checked whether the QCM6490 chipset appeared in the supported device list, but it was not available. As an alternative, I selected the QCS6490, which offers nearly identical hardware. The model was then exported from the Qualcomm AI Hub using the qai_hub_models.models.yolov8_det.export script, targeting the Qualcomm QCS6490 proxy chipset. The export configuration used the TensorFlow Lite (TFLite) runtime with an output resolution of 512×512. Since no quantization parameters were specified, the default setup was applied. The exported files were saved in the designated directory for subsequent testing.
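Under the hood, the export script submits a compile job to the Qualcomm AI Hub service. Purely as an illustration (not the exact commands used here), the equivalent direct qai_hub API call could look like the sketch below; the ONNX file name, input name, output path, and exact proxy device name are assumptions.

# Rough sketch of compiling a model for the QCS6490 proxy target through the
# qai_hub Python API. File names and the input spec are illustrative only.
import qai_hub as hub

compile_job = hub.submit_compile_job(
    model="models/yolov8n.onnx",               # assumed ONNX export of YOLOv8n
    device=hub.Device("QCS6490 (Proxy)"),      # proxy device standing in for the QCM6490
    input_specs=dict(image=(1, 3, 512, 512)),  # 512x512 input, as used in this test
    options="--target_runtime tflite",         # request a TFLite artifact
)
target_model = compile_job.get_target_model()
target_model.download("exported_models/yolov8_det.tflite")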
A Python 3.9 virtual environment was created, with the TFLite runtime and NumPy 1.x installed to ensure compatibility. Within this setup, the YOLOv8n model was tested across multiple square input resolutions—512, 256, 128, and 64 pixels—to compare unquantized and quantized performance.
import cv2
import numpy
import qai_hub as hub
import torch
from ultralytics import YOLO
import time


def main():
    print("Loading YOLO model")
    model = YOLO("models/yolov8n.pt")
    print("Model loaded successfully!")

    #target_img_size = (64, 64)
    #target_img_size = (128, 128)
    #target_img_size = (256, 256)
    #target_img_size = (512, 512)
    target_img_size = (1024, 1024)

    print("Opening camera")
    cap = cv2.VideoCapture('/dev/video0')
    print("Camera opened")

    preprocess_time_min = 999999.99
    preprocess_time_max = -preprocess_time_min
    inference_time_min = preprocess_time_min
    inference_time_max = preprocess_time_max
    postprocess_time_min = preprocess_time_min
    postprocess_time_max = preprocess_time_max
    total_time_min = preprocess_time_min
    total_time_max = preprocess_time_max

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            continue

        # Perform object detection on the image frame
        results = model.predict(frame, imgsz=target_img_size, conf=0.5, verbose=False)
        annotated_frame = results[0].plot()

        speed_ms = results[0].speed  # This is a dictionary in ms
        preprocess_time = speed_ms['preprocess']
        inference_time = speed_ms['inference']
        postprocess_time = speed_ms['postprocess']
        total_time = preprocess_time + inference_time + postprocess_time

        if preprocess_time < preprocess_time_min:
            preprocess_time_min = preprocess_time
        if preprocess_time > preprocess_time_max:
            preprocess_time_max = preprocess_time
        if inference_time < inference_time_min:
            inference_time_min = inference_time
        if inference_time > inference_time_max:
            inference_time_max = inference_time
        if postprocess_time < postprocess_time_min:
            postprocess_time_min = postprocess_time
        if postprocess_time > postprocess_time_max:
            postprocess_time_max = postprocess_time
        if total_time < total_time_min:
            total_time_min = total_time
        if total_time > total_time_max:
            total_time_max = total_time

        print("---------------------------------")
        print(f"Pre-processing: {preprocess_time:.2f}, {preprocess_time_min:.2f} - {preprocess_time_max:.2f}ms")
        print(f"Inference: {inference_time:.2f}, {inference_time_min:.2f} - {inference_time_max:.2f}")
        print(f"Post-processing: {postprocess_time:.2f}, {postprocess_time_min:.2f} - {postprocess_time_max:.2f}")
        print(f"Total Time: {total_time:.2f}, {total_time_min:.2f} - {total_time_max:.2f}")
        print("---------------------------------")

        cv2.imshow("annotated_frame", annotated_frame)
        key = cv2.waitKey(1)
        if key == ord('q'):
            break

    # done
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
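For completeness, here is a minimal sketch of how the exported TFLite model can be timed with the TFLite runtime on the same webcam frames. The model path is a placeholder for the file produced by the export step, and the detection decoding/NMS is omitted, so treat it as an outline of the timing method rather than a full detector.

# Minimal sketch: time the exported (quantized) YOLOv8 TFLite model on webcam
# frames with tflite_runtime. The model path is a placeholder, and the raw
# output decoding is omitted for brevity.
import time
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="exported_models/yolov8_det.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]  # e.g. 1 x 512 x 512 x 3 (NHWC)

cap = cv2.VideoCapture("/dev/video0")
for _ in range(100):  # time 100 frames
    ret, frame = cap.read()
    if not ret:
        continue

    # Pre-process: resize, convert BGR to RGB, scale to [0, 1]
    img = cv2.resize(frame, (in_w, in_h))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    img = np.expand_dims(img, axis=0)

    # Fully quantized models expect uint8 input instead of float32
    if input_details[0]["dtype"] == np.uint8:
        scale, zero_point = input_details[0]["quantization"]
        img = (img / scale + zero_point).astype(np.uint8)

    start = time.perf_counter()
    interpreter.set_tensor(input_details[0]["index"], img)
    interpreter.invoke()
    outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Inference: {elapsed_ms:.1f} ms, outputs: {[o.shape for o in outputs]}")

cap.release()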



The YOLOv8n performance test on the Particle Tachyon, illustrated in the graph, shows a clear correlation between input resolution and processing time. For the unquantized model, inference dominates the total runtime at all resolutions. At 512×512, the model required ~0.69 seconds, but reducing the input size to 256×256 nearly halved the runtime (~0.37s). Further reductions to 128×128 and 64×64 lowered latency to ~0.19s and ~0.16s, respectively. Preprocessing and postprocessing times remained minimal across all sizes, confirming that inference is the primary performance bottleneck.
By contrast, the quantized model significantly accelerated execution. At 512×512, runtime dropped from ~0.69s to ~0.19s (3.7× faster), while at 256×256 it fell from ~0.37s to ~0.05s (almost 8× faster). Even at the smallest tested size, 64×64, the quantized version reduced latency to ~0.01s per frame, approaching real-time performance. These results demonstrate that the Tachyon’s Qualcomm QCM6490 SoC benefits greatly from quantized models, enabling efficient object detection across resolutions suitable for robotics, surveillance, and other edge AI applications where low latency is critical.
Measuring power consumption and heat distribution
My final test focused on measuring power consumption and heat distribution. I measured the board’s power consumption using a USB power metering dongle under three conditions: idle state, playing a 1080p YouTube video in windowed mode, and running a full CPU load with WebGL (10,000 fish) alongside a 4K YouTube video. In all cases, the readings from the USB meter were fairly stable at around 5.32 W. No significant power spikes were observed at the USB input, possibly because the USB cable mainly charges the battery at a steady rate while the battery supplies any load peaks.




To observe the thermal distribution, I used my FLIR E4 thermal camera to capture images while the board was remotely controlled through the Particle Console’s Terminal panel. It was a rainy morning in my hometown, with the room temperature around 27–28 °C. The first thermal image was taken while running the htop command, where the maximum temperature observed was about 39.7 °C. The heat was distributed fairly evenly, with the hottest spots concentrated around the processor area. The second image was recorded while I remotely ran the sbc-bench script for several minutes. Although the script eventually failed, it still loaded the CPU more heavily than the previous case, and the temperature rose to approximately 57–58 °C, with the processor region clearly much hotter. Interestingly, in both cases the PCB around the chips appears hotter than the two chips themselves.


Conclusions
In conclusion, I personally like the overall design of the board, and its performance is quite good. The setup process is straightforward in both the desktop OS and headless modes. However, I did encounter some issues: the main limitations were the lack of comprehensive documentation and examples for AI testing on the Particle Tachyon website, the requirement to keep the battery connected at all times, and the absence of audio output from the desktop OS.
For those interested, the Particle Tachyon is available at its official store for $299 (8GB RAM and 128GB Flash), or $249 (4GB RAM and 64GB Flash).

Comments
This SBC has AI rated at 12 TOPS
This looks to be a highly outdated operating system. What is the current state of mainline Linux support?
Which parts, apart from the Wi-Fi chip presumably, need closed-source firmware files to run?
While I don’t know about this board, Radxa’s upcoming Q6A uses the same chip and apparently runs (almost) unmodified mainline. The one I received runs 6.16.7.
There’s a VERY big functional difference between the QCM6490 and the QCS6490. Same performance, sure, but calling it the same chip…
The Tachyon is targeted at projects requiring cellular/GPS.
That being said, with the Radxa having Ethernet, a full HDMI port, and an NVMe slot vs. the Tachyon’s single USB-C for both power and display, and a poorly supported DSI…
They are very different products.
As I understand it, the main problem is that 5G does not work on Ubuntu 24.04, and that’s mainly why they still default to Ubuntu 20.04.
Could you explain in more detail why 5G needs an outdated operating system? How is the modem connected? The same QCM6490 SoC in a phone (Fairphone 5) works on mainline Linux with 5G.
That is why more information is especially needed on why it does not work here.
I don’t know the answer. The documentation related to Ubuntu 24.04 progress and access to the source code can be found at https://developer.particle.io/tachyon/software/ubuntu_24_04/overview#why-ubuntu-2404-for-tachyon
I’m a simple man… how does this compare to any of the RK3588 based boards?
They should be fairly similar and significantly cheaper, no?
The CPU of the QCM6490 will be somewhat faster, and AI should be too, with 12 TOPS advertised. But as things stand, software support for the RK3588 is currently much better, as, for instance, GPU acceleration is not working on this Qualcomm board.
The Tachyon is mostly of interest to people using 5G, as this feature adds significant cost to the board. If you don’t use 5G or the Particle Cloud, the RK3588 boards will offer better value.
If you want to try a similar Qualcomm platform without 5G, the Radxa Dragon Q6A, based on the QCS6490, will be a better solution. It doesn’t seem to be available for sale just yet, but some people have already received samples for testing.
Please test power consumption across different states without the battery attached. Thank you