Earlier this month, I started the review of the Intel-based UP AI development kits with an unboxing of the UP TWL, UP Squared Pro TWL, and UP Xtreme ARL single board computers. I’ve now had time to test the first model, the credit card-sized, Intel Processor N150-based UP TWL SBC with 64GB eMMC flash preloaded with Ubuntu 24.04.
As usual, I’ll run a few benchmarks and test the board’s key hardware features, but I’ll then focus on the AI part since that’s what the kit is for. Note that the UP TWL AI Dev Kit is an entry-level solution, and all AI workloads run on the CPU or the integrated GPU, since there’s no dedicated AI accelerator, nor an M.2 slot to add one on this model. In the next parts of the review, the UP Squared Pro TWL adds a Hailo-8L AI accelerator, and the UP Xtreme ARL delivers up to 83 TOPS through a 14-core Intel Core Ultra 5 225H “Arrow Lake” processor.
UP TWL SBC system information
The UP TWL quad-core SBC comes preloaded with Ubuntu 24.04.3 LTS installed on a 64GB (62.6GB) eMMC flash, and the system also features 8GB of RAM.
We can get more information with the inxi utility:
devkit@devkit-UP-TWL01:~$ sudo inxi -Fc0
System:
  Host: devkit-UP-TWL01 Kernel: 6.14.0-32-generic arch: x86_64 bits: 64
  Console: pty pts/2 Distro: Ubuntu 24.04.3 LTS (Noble Numbat)
Machine:
  Type: Desktop Mobo: AAEON model: UP-TWL01 v: V1.0 serial: 250163462
    UEFI: American Megatrends LLC. v: UPTWAM10 date: 03/31/2025
CPU:
  Info: quad core model: Intel N150 bits: 64 type: MCP cache: L2: 2 MiB
  Speed (MHz): avg: 700 min/max: 700/3600 cores: 1: 700 2: 700 3: 700 4: 700
Graphics:
  Device-1: Intel Alder Lake-N [Intel Graphics] driver: i915 v: kernel
  Device-2: Sunplus Innovation FHD Camera driver: snd-usb-audio,uvcvideo
    type: USB
  Display: server: X.org v: 1.21.1.11 with: Xwayland v: 23.2.6 driver:
    gpu: i915 tty: 80x24 resolution: 1920x1080
  API: EGL v: 1.5 drivers: iris,swrast platforms: gbm,surfaceless,device
  API: OpenGL v: 4.6 compat-v: 4.5 vendor: mesa v: 25.0.7-0ubuntu0.24.04.2
    note: console (EGL sourced) renderer: Mesa Intel Graphics (ADL-N),
    llvmpipe (LLVM 20.1.2 256 bits)
Audio:
  Device-1: Intel Alder Lake-N PCH High Definition Audio
    driver: snd_hda_intel
  Device-2: Sunplus Innovation FHD Camera driver: snd-usb-audio,uvcvideo
    type: USB
  API: ALSA v: k6.14.0-32-generic status: kernel-api
Network:
  Device-1: Realtek RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet
    driver: r8169
  IF: enp1s0 state: up speed: 1000 Mbps duplex: full mac: 00:07:32:c9:bc:3e
  IF-ID-1: docker0 state: down mac: d2:f7:7f:d5:38:a3
Drives:
  Local Storage: total: 58.32 GiB used: 12.75 GiB (21.9%)
  ID-1: /dev/mmcblk0 model: TY2964 size: 58.32 GiB type: Removable
Partition:
  ID-1: / size: 56.07 GiB used: 12.74 GiB (22.7%) fs: ext4
    dev: /dev/mmcblk0p2
  ID-2: /boot/efi size: 1.05 GiB used: 6.1 MiB (0.6%) fs: vfat
    dev: /dev/mmcblk0p1
Swap:
  ID-1: swap-1 type: file size: 4 GiB used: 0 KiB (0.0%) file: /swap.img
Sensors:
  Src: lm-sensors+/sys Message: No sensor data found using
    /sys/class/hwmon or lm-sensors.
Info:
  Memory: total: 8 GiB available: 7.51 GiB used: 1.26 GiB (16.8%)
    igpu: 60 MiB
  Processes: 218 Uptime: 15m Init: systemd target: graphical (5)
  Shell: Sudo inxi: 3.3.34
All main features seem to be detected properly, including Gigabit Ethernet and the USB camera I connected to the board.
Benchmarks
Since the performance of the Intel Processor N150 and other Intel Alder Lake-N/Twin Lake processors at large is well known, I’ve just run sbc-bench.sh in this review:
devkit@devkit-UP-TWL01:~$ sudo ./sbc-bench.sh -r
Starting to examine hardware/software for review purposes...

sbc-bench v0.9.72

Installing needed tools: apt-get -f -qq -y install powercap-utils links mmc-utils smartmontools stress-ng, p7zip 16.02, tinymembench, ramlat, mhz, cpufetch, cpuminer. Done.
Checking cpufreq OPP. Done.
Executing tinymembench. Done.
Executing RAM latency tester. Done.
Executing OpenSSL benchmark. Done.
Executing 7-zip benchmark. Done.
Throttling test: heating up the device, 5 more minutes to wait. Done.
Checking cpufreq OPP again. Done (12 minutes elapsed).

Results validation:

* Measured clockspeed not lower than advertised max CPU clockspeed
* No swapping
* Background activity (%system) OK
* Too much other background activity: 5% avg, 26% max -> https://tinyurl.com/mr2wy5uv
* Powercap detected. Details: "sudo powercap-info -p intel-rapl" -> https://tinyurl.com/4jh9nevj

# AAEON UP-TWL01 V1.0 / N150

Tested with sbc-bench v0.9.72 on Sun, 09 Nov 2025 08:58:05 +0100.

### General information:

Information courtesy of cpufetch:

Name:                Intel(R) N150
Microarchitecture:   Alder Lake
Technology:          10nm
Max Frequency:       3.600 GHz
Cores:               4 cores
AVX:                 AVX,AVX2
FMA:                 FMA3
L1i Size:            64KB (256KB Total)
L1d Size:            32KB (128KB Total)
L2 Size:             2MB
L3 Size:             6MB

N150, Kernel: x86_64, Userland: amd64

CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                 cpufreq   min    max
 CPU    cluster  policy   speed  speed   core type
  0        0        0      700    3600   -
  1        0        1      700    3600   -
  2        0        2      700    3600   -
  3        0        3      700    3600   -

7688 KB available RAM

### Policies (performance vs. idle consumption):

Status of performance related policies found below /sys:

/sys/module/pcie_aspm/parameters/policy: [default] performance powersave powersupersave

### Clockspeeds (idle vs. heated up):

Before at 48.0°C: cpu0: OPP: 3600, Measured: 2983 (-17.1%)
After at 44.0°C: cpu0: OPP: 3600, Measured: 3581

### Performance baseline

* memcpy: 3491.0 MB/s, memchr: 5190.3 MB/s, memset: 4130.4 MB/s
* 16M latency: 203.5 151.3 200.2 146.6 190.7 149.6 132.7 138.7
* 128M latency: 212.6 182.1 212.4 180.5 207.3 210.6 181.6 166.3
* 7-zip MIPS (3 consecutive runs): 6603, 6906, 6920 (6810 avg), single-threaded: 3144
* `aes-256-cbc 741172.07k 1056420.27k 1143840.09k 1174898.35k 1181171.71k 1185300.48k`
* `aes-256-cbc 763600.11k 1068320.73k 1157893.89k 1180891.14k 1186084.18k 1191701.16k`

### PCIe and storage devices:

* Realtek RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet: Speed 2.5GT/s, Width x1, driver in use: r8169
* 58.3GB "Kingston TY2964" HS400 Enhanced strobe eMMC 5.1 card as /dev/mmcblk0: date 05/2025, manfid/oemid: 0x000070/0x0100, hw/fw rev: 0x0/0x5b00000000000000
* Winbond W25Q256JW 32MB SPI NOR flash, drivers in use: spi-nor/intel-spi

### Swap configuration:

* /swap.img on /dev/mmcblk0p2: 4.0G (0K used) on MMC storage

### Software versions:

* Ubuntu 24.04.3 LTS (noble)
* Compiler: /usr/bin/gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 / x86_64-linux-gnu
* OpenSSL 3.0.13, built on 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024)

### Kernel info:

* `/proc/cmdline: BOOT_IMAGE=/boot/vmlinuz-6.14.0-32-generic root=UUID=eaca6cca-80e9-4aab-9a74-10fa0e135c4a ro quiet splash vt.handoff=7`
* Vulnerability Reg file data sampling: Mitigation; Clear Register File
* Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
* Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
* Kernel 6.14.0-32-generic / CONFIG_HZ=1000

Waiting for the device to cool down...................................... 31.0°C^C
The UP TWL runs super cool thanks to its built-in fansink, with the heatsink staying cool to the touch at all times. It is, however, rather noisy, like most other hardware platforms from AAEON. The company designs hardware for the industrial market, where low noise may not be as important as it is in consumer devices.
In terms of performance, the UP TWL SBC achieved 6,810 MIPS on average in the 7-zip benchmark, which compares to 9,730 MIPS on the Zimaboard 2 fitted with the same Intel N150 SoC despite its rather poor cooling.
The PL1/PL2 power limits can explain the difference:
devkit@devkit-UP-TWL01:~$ sudo powercap-info -p intel-rapl
enabled: 1
Zone 0
  name: package-0
  enabled: 1
  max_energy_range_uj: 262143328850
  energy_uj: 9205598137
  Constraint 0
    name: long_term
    power_limit_uw: 6000000
    time_window_us: 27983872
    max_power_uw: 6000000
  Constraint 1
    name: short_term
    power_limit_uw: 25000000
    time_window_us: 2440
    max_power_uw: 0
  Constraint 2
    name: peak_power
    power_limit_uw: 78000000
    max_power_uw: 0
PL1 is set to 6W and PL2 to 25W, which compares to 12W/20W for the Zimaboard 2 and 6W/12W for the Intel N100-based MINIX Z100 0dB fanless mini PC. That alone can explain the wildly different results. I ran the test again, this time without any background activity detected, and the results were the same. So I tried changing PL1 to 12W in the BIOS…
… and started again:
devkit@devkit-UP-TWL01:~$ sudo ./sbc-bench.sh -r
Starting to examine hardware/software for review purposes...

sbc-bench v0.9.72

Installing needed tools: distro packages already installed. Done.
Checking cpufreq OPP. Done.
Executing tinymembench. Done.
Executing RAM latency tester. Done.
Executing OpenSSL benchmark. Done.
Executing 7-zip benchmark. Done.
Throttling test: heating up the device, 5 more minutes to wait. Done.
Checking cpufreq OPP again. Done (11 minutes elapsed).

Results validation:

* Measured clockspeed not lower than advertised max CPU clockspeed
* No swapping
* Background activity (%system) OK
* Powercap detected. Details: "sudo powercap-info -p intel-rapl" -> https://tinyurl.com/4jh9nevj

# AAEON UP-TWL01 V1.0 / N150

Tested with sbc-bench v0.9.72 on Sun, 16 Nov 2025 11:41:18 +0100.

### General information:

Information courtesy of cpufetch:

Name:                Intel(R) N150
Microarchitecture:   Alder Lake
Technology:          10nm
Max Frequency:       3.600 GHz
Cores:               4 cores
AVX:                 AVX,AVX2
FMA:                 FMA3
L1i Size:            64KB (256KB Total)
L1d Size:            32KB (128KB Total)
L2 Size:             2MB
L3 Size:             6MB

N150, Kernel: x86_64, Userland: amd64

CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                 cpufreq   min    max
 CPU    cluster  policy   speed  speed   core type
  0        0        0      700    3600   Alder Lake
  1        0        1      700    3600   Alder Lake
  2        0        2      700    3600   Alder Lake
  3        0        3      700    3600   Alder Lake

7687 KB available RAM

### Policies (performance vs. idle consumption):

Status of performance related policies found below /sys:

/sys/module/pcie_aspm/parameters/policy: [default] performance powersave powersupersave

### Clockspeeds (idle vs. heated up):

Before at 41.0°C: cpu0: OPP: 3600, Measured: 3586
After at 60.0°C: cpu0: OPP: 3600, Measured: 3586

### Performance baseline

* memcpy: 8114.0 MB/s, memchr: 13553.3 MB/s, memset: 7975.7 MB/s
* 16M latency: 151.8 120.6 152.6 121.1 151.1 123.5 110.6 118.4
* 128M latency: 183.2 145.0 183.6 144.9 182.1 176.8 146.7 136.9
* 7-zip MIPS (3 consecutive runs): 12706, 11158, 11208 (11690 avg), single-threaded: 3775
* `aes-256-cbc 940630.29k 1246782.57k 1288999.85k 1299980.29k 1302861.14k 1301763.41k`
* `aes-256-cbc 955755.29k 1246943.87k 1289035.35k 1300072.45k 1301504.00k 1303341.74k`

### PCIe and storage devices:

* Realtek RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet: Speed 2.5GT/s, Width x1, driver in use: r8169
* 931.5GB "JMicron JMS583" as /dev/sda: USB, Driver=uas, 10Gbps (capable of 12Mbps, 480Mbps, 5Gbps, 10Gb/s Symmetric RX SuperSpeedPlus, 10Gb/s Symmetric TX SuperSpeedPlus)
* 58.3GB "Kingston TY2964" HS400 Enhanced strobe eMMC 5.1 card as /dev/mmcblk0: date 05/2025, manfid/oemid: 0x000070/0x0100, hw/fw rev: 0x0/0x5b00000000000000
* Winbond W25Q256JW 32MB SPI NOR flash, drivers in use: spi-nor/intel-spi

### Challenging filesystems:

The following partitions are NTFS: sda2 -> https://tinyurl.com/mv7wvzct

### Swap configuration:

* /swap.img on /dev/mmcblk0p2: 4.0G (0K used) on MMC storage

### Software versions:

* Ubuntu 24.04.3 LTS (noble)
* Compiler: /usr/bin/gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 / x86_64-linux-gnu
* OpenSSL 3.0.13, built on 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024)

### Kernel info:

* `/proc/cmdline: BOOT_IMAGE=/boot/vmlinuz-6.14.0-35-generic root=UUID=eaca6cca-80e9-4aab-9a74-10fa0e135c4a ro quiet splash vt.handoff=7`
* Vulnerability Reg file data sampling: Mitigation; Clear Register File
* Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
* Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
* Vulnerability Vmscape: Mitigation; IBPB before exit to userspace
* Kernel 6.14.0-35-generic / CONFIG_HZ=1000

Waiting for the device to cool down.......... 34.0°C^C
11,690 MIPS in 7-zip is more like it. Note that all other tests below were done with PL1 set to its default 6W rather than 12W. It’s not the first time AAEON has set one of its boards to a conservative PL1 value, potentially for improved stability in high-temperature environments, while letting customers select an optimal value in the BIOS as needed.
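For reference, the same limits are exposed at runtime through the Linux powercap sysfs interface (the data powercap-info prints), so PL1 can also be inspected, and, with root, overridden, without a trip to the BIOS. Here’s a minimal Python sketch; it is demonstrated on a stub directory, since on the board itself the zone would be /sys/class/powercap/intel-rapl:0:

```python
from pathlib import Path

def read_rapl_constraints(zone: Path) -> dict:
    """Return {constraint name: power limit in microwatts} for one RAPL zone."""
    limits = {}
    for name_file in sorted(zone.glob("constraint_*_name")):
        idx = name_file.name.split("_")[1]
        name = name_file.read_text().strip()
        limits[name] = int((zone / f"constraint_{idx}_power_limit_uw").read_text())
    return limits

# On the UP TWL, writing constraint_0_power_limit_uw of intel-rapl:0 as root
# changes PL1 at runtime. Demo on a stub directory mimicking the sysfs layout:
zone = Path("/tmp/rapl_demo")
zone.mkdir(exist_ok=True)
(zone / "constraint_0_name").write_text("long_term\n")
(zone / "constraint_0_power_limit_uw").write_text("6000000\n")
print(read_rapl_constraints(zone))  # → {'long_term': 6000000}
```

On real hardware, `powercap-set -p intel-rapl -z 0 -c 0 -l 12000000` from powercap-utils (which sbc-bench already installs) does the same write.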
UP TWL SBC features testing
I’ve also checked the key hardware features of the UP TWL SBC as follows:
- HDMI – Video OK, Audio OK
- Storage – eMMC flash OK: 317 MB/s sequential reads, 230 MB/s sequential writes.
devkit@devkit-UP-TWL01:~$ iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
                                                      random    random      bkwd    record    stride
              kB  reclen    write  rewrite     read   reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
          102400       4    57547    62208    41194    40885     37905     60357
          102400      16   131062   131451   117222   116387     84337    126055
          102400     512   202151   205965   256896   257999    251971    198250
          102400    1024   216926   214023   283977   283782    281250    212573
          102400   16384   230982   229794   317011   317484    317168    229200
- Gigabit Ethernet – OK (iperf3 DL: 942 Mbps, UL: 942 Mbps, full-duplex: 938/935 Mbps)
- USB ports tested with an ORICO NVMe SSD enclosure (EXT-4 partition), USB mouse, RF dongle for a wireless keyboard, and USB camera
- USB 3.0 combo jack
  - Top – 10 Gbps; tested up to 999 MB/s with iozone3
  - Bottom – 10 Gbps; tested up to 1,007 MB/s with iozone3
- USB 3.0 port on Ethernet combo jack – 10 Gbps; tested up to 1,009 MB/s with iozone3
- RTC – OK
devkit@devkit-UP-TWL01:~$ sudo apt install util-linux-extra
devkit@devkit-UP-TWL01:~$ timedatectl
               Local time: Sun 2025-11-16 04:52:59 CET
           Universal time: Sun 2025-11-16 03:52:59 UTC
                 RTC time: Sun 2025-11-16 03:52:59
                Time zone: Europe/Amsterdam (CET, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
devkit@devkit-UP-TWL01:~$ sudo hwclock -r
2025-11-16 04:53:08.509478+01:00
- GPIOs – OK – Also see the 40-pin GPIO header layout for all UP boards.
devkit@devkit-UP-TWL01:~$ ls /dev/gpiochip*
/dev/gpiochip0  /dev/gpiochip1
devkit@devkit-UP-TWL01:~$ sudo apt install libgpiod-dev gpiod
devkit@devkit-UP-TWL01:~$ sudo gpioinfo 0
gpiochip0 - 360 lines:
        line   0:      unnamed       unused   input  active-high
        line   1:      unnamed       unused   input  active-high
        line   2:      unnamed       unused   input  active-high
        line   3:      unnamed       unused   input  active-high
        line   4:      unnamed       unused   input  active-high
        line   5:      unnamed       unused   input  active-high
        line   6:      unnamed       unused   input  active-high
        line   7:      unnamed       unused   input  active-high
...
devkit@devkit-UP-TWL01:~$ sudo gpioinfo 1
gpiochip1 - 28 lines:
        line   0:      unnamed       unused   input  active-high
        line   1:      unnamed       unused   input  active-high
        line   2:      unnamed       unused   input  active-high
        line   3:      unnamed       unused   input  active-high
        line   4:      unnamed       unused   input  active-high
        line   5:      unnamed       unused   input  active-high
        line   6:      unnamed       unused   input  active-high
        line   7:      unnamed       unused   input  active-high
...
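For readers who want to go beyond gpioinfo, here’s a sketch of driving one of these lines from Python with the libgpiod v1 binding (python3-libgpiod on Ubuntu 24.04). The chip and line offset (gpiochip1, line 7) are arbitrary examples I picked for illustration; map the offset to the 40-pin header first, and note the code deliberately skips itself when no GPIO hardware (or binding) is present:

```python
# Toggle a GPIO line once via libgpiod v1; purely illustrative, gpiochip1
# line 7 is NOT a verified mapping to a specific 40-pin header pin.
status = "skipped"
try:
    import gpiod
    chip = gpiod.Chip("gpiochip1")
    line = chip.get_line(7)
    line.request(consumer="demo", type=gpiod.LINE_REQ_DIR_OUT)
    line.set_value(1)
    line.set_value(0)
    line.release()
    status = "toggled"
except Exception as exc:  # missing module, missing chip, or no permission
    print(f"GPIO demo skipped: {exc}")
print(status)
```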
Everything works as expected from those tests.
AI testing on the UP TWL Intel N150 SBC
Since it’s an AI development kit, I ran several AI workloads on the system using Network Optix Nx Meta and the AAEON UP AI toolkit.
Network Optix Nx Meta
I started with the Nx AI Certification Test. Let’s install it first:
sudo apt dist-upgrade
sudo apt install python3-pip python3-venv
mkdir nxai_test
cd nxai_test
wget https://artifactory.nxvms.dev/artifactory/nxai_open/NXAITest/nxai_test.tgz
tar -xvf nxai_test.tgz
python3 -m venv ./
source ./bin/activate # activate python venv
pip3 install -r requirements.txt
./Utilities/install_nxai_manager.sh
python3 Utilities/install_acceleration_library.py
python3 Utilities/download_models.py
The last command will download models (about 3.5GB of data) and may take a while. We can now run all the tests:
python3 all_suites.py
Everything happens within the terminal, and there’s no visualization. There are two main parts: benchmarks and stability tests.
Here’s the end of the AI benchmarks log:
...
###################################################
All model benchmarks completed.
Benchmark results:
Model-Yolov8s-[640x640]: 2.02 FPS
Model-ViT-Tiny: 32.84 FPS
Model-Yolov4-[1280x1280]: 0.98 FPS
Model-Yolov9-e-[640x640]: 0.60 FPS
Model-Yolov9-e-converted-[640x640]: 0.32 FPS
Model-Emotion-Recognizer: 301.84 FPS
Model-Yolov9-c-[640x640]: 0.38 FPS
Postprocessor-Illegal-Dumping: 4.05 FPS
Model-Yolov7-Tiny-[1280x1280]: 1.05 FPS
Pipeline-Feature-Extraction: 79.20 FPS
Model-Face-Locator: 91.45 FPS
80-classes-object-detector[640x640]: 3.91 FPS
Model-Yolov4-[320x320]: 15.25 FPS
Model-Resnet-50: 8.77 FPS
80-classes-object-detector[320x320]: 15.24 FPS
Model-Yolo5su-[640x640]: 2.35 FPS
Quantized-INT8: 9.29 FPS
Model-Resnet-18: 19.32 FPS
Model-Regnet-Y: 69.17 FPS
Model-Yolov7x-[1280x1280]: 0.10 FPS
Model-Yolov8l-[640x640]: 0.44 FPS
Model-Yolo5su-[1280x1280]: 0.57 FPS
Model-Yolov4-[128x128]: 68.55 FPS
Model-Yolov7-Tiny-[640x640]: 4.12 FPS
Quantized-FP32: 9.09 FPS
Model-Yolo5su-[256x256]: 14.10 FPS
Model-Densenet: 0.33 FPS
Pipeline-Direct: 12.75 FPS
Model-Yolov9-m-converted-[640x640]: 0.79 FPS
Model-Clip: 6.28 FPS
Model-Yolov4-[640x640]: 3.90 FPS
Multi-Model: 19.08 FPS
Quantized-FP16: 6.24 FPS
Model-Mobilenet-V3: 48.45 FPS
Empty-Small: 789.06 FPS
Pipeline-Conditional: 189.31 FPS
Model-Yolov9-[640x640]: 0.61 FPS
Model-Yolov9-converted-[640x640]: 0.61 FPS
Model-Yolov7x-[640x640]: 0.36 FPS
Model-Yolov9-m-[640x640]: 0.66 FPS
Empty-Large: 103.27 FPS
Model-PPE: 9.37 FPS
postprocessor-python-example: 14.98 FPS
postprocessor-c-example: 15.73 FPS
postprocessor-python-image-example: 14.77 FPS
postprocessor-c-image-example: 14.75 FPS
Some of the tests run at an acceptable speed, while others struggle below 1 FPS. This performance data will serve as a baseline to compare against the Intel N150 + Hailo-8L and Intel Core Ultra 5 225H results in the next parts of the review. You can check the full benchmarks log if interested.
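To make that comparison easier, the saved logs can be parsed into a table sorted by throughput. A small sketch assuming the `name: X.XX FPS` line format shown above, with a few sample lines inlined:

```python
import re

# A few lines copied from the Nx AI benchmark log above, for demonstration.
log = """\
Model-Yolov8s-[640x640]: 2.02 FPS
Model-ViT-Tiny: 32.84 FPS
Model-Yolov7x-[1280x1280]: 0.10 FPS
"""

# Extract "name: value FPS" pairs, then print them slowest-first.
results = {m[1]: float(m[2]) for m in re.finditer(r"(\S+): ([\d.]+) FPS", log)}
for name, fps in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{fps:8.2f}  {name}")
```

Running two logs through the same parser and joining on the model name gives a side-by-side speedup table.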
The stability test was successful:
...
---------------------------------------------------
Running test: multi_model_multi_stream_test
Creating backup settings file...
Loading test settings...
Creating Unix socket server...
Starting Edge AI Manager
Sending socket input
Messages: 413503
Rate 0: 5.66
Rate 1: 109.2
Sclblmod Memory: 8.32 MB Relative to start: 100%
Sclbld Memory: 108.18 MB Relative to start: 101%
Stopping Edge AI Manager
Terminated AI Manager: 0
Test completed succesfully.
Restoring backup settings file...
Test completed without unhandled exception.
---------------------------------------------------
-------------------------------------------------------
Tests passed: 6 / 6
Again, I saved the full log for that part.
AAEON UP AI toolkit demos
In the second part of the AI demos, I’ll use the UP AI toolkit examples available on GitHub.
Those are the steps to install and launch the AAEON UP AI toolkit:
cd ~
git clone https://github.com/up-division/up-ai/
cd up-ai
chmod +x prepare.sh start_app.sh
./prepare.sh
sudo reboot
cd ~/up-ai
./start_app.sh
It didn’t go exactly smoothly for me, as the first time, the prepare command ended as follows:
raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
Error setting up workers: /home/devkit/up-ai/demos/edge-ai-sizing-tool/workers/dlstreamer/venv/bin/python -m pip install -r requirements.txt exited with code 2
An error occurred: Command failed with exit code 1
npm notice
npm notice New major version of npm available! 10.9.4 -> 11.6.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.6.2
npm notice To update run: npm install -g npm@11.6.2
npm notice
Running driver installation script...
[sudo] password for devkit:
Driver is already installed !
Enviroment Installation is Complete! Please Reboot!
A download error occurred, but if you only read the last few lines, it looks like the installation was successful when it was not (OpenVino was not installed). So I had to run the command again, and the second time it progressed further, but I encountered another timeout (twice) when the script attempted to download TensorFlow. Switching to a pip mirror might help, but I could not find one in Thailand. Nevertheless, on the fourth attempt, the installation was (almost) successful:
v, nvidia-cusolver-cu12, matplotlib, jsonschema-specifications, IPython, torch, seaborn, pymoo, keras, jsonschema, fastapi, ultralytics-thop, torchvision, tensorflow, nncf, ultralytics
ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device
Error setting up workers: /home/devkit/up-ai/demos/edge-ai-sizing-tool/workers/object-detection/venv/bin/python -m pip install -r requirements.txt exited with code 1
An error occurred: Command failed with exit code 1
Running driver installation script...
It turns out all those AI demos take a lot of space, and a 64GB eMMC flash is a bit tight:
devkit@devkit-UP-TWL01:~$ du -h --max-depth 1 | grep ai
3.2G    ./nxai_test
25G     ./up-ai
So I deleted the nxai_test directory and some cached pip files, and the installation finally completed:
Skipping npm install (node_modules already exists)
Skipping npm run build (build directory already exists)
Skipping setup-workers (venv folders already exist)
Checking for existing PM2 EAST application...
false
Starting EAST application with PM2...
pm2 [ 'start', 'npm', '--name', '"EAST"', '--', 'start' ]
[PM2] Starting /usr/bin/npm in fork_mode (1 instance)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name               │ mode     │ ↺    │ status    │ cpu      │ memory   │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0  │ "EAST"             │ fork     │ 0    │ online    │ 0%       │ 35.3mb   │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
Running driver installation script...
Driver is already installed !
Enviroment Installation is Complete! Please Reboot!
The up-ai directory takes up 33GB once the installation is complete:
devkit@devkit-UP-TWL01:~$ du -h --max-depth 1 | grep ai
33G     ./up-ai
This does not include the several GB needed for pip packages. As I used the AI demos, I ran into more storage capacity issues, and eventually connected a USB SSD to temporarily offload some of the files so I could complete this review. 64GB of flash is clearly not enough for an AI development kit.
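Given how tight storage got, it may be worth checking free space before running prepare.sh. A quick sketch; the 40GB threshold is my own rough margin based on the 33GB up-ai directory plus pip’s cache, not an official requirement:

```python
import shutil

# Rough pre-flight check before installing the UP AI toolkit. The 40 GB
# figure is an assumption based on the observed 33GB install plus pip cache.
NEEDED_GB = 40
free_gb = shutil.disk_usage("/").free / 1024**3
if free_gb < NEEDED_GB:
    print(f"Only {free_gb:.1f} GB free, ~{NEEDED_GB} GB recommended: "
          "run `pip cache purge` or move data to external storage first")
else:
    print(f"{free_gb:.1f} GB free, should be enough")
```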
After a reboot, we can try the application by running the command:
cd ~/up-ai
./start_app.sh
I was expecting a menu here, but instead it launched Firefox and opened localhost:8080, giving us access to the UP Edge AI Sizing Tool.
It is a zero-code configuration dashboard that allows users to easily set up AI applications by selecting inputs, accelerators, performance modes, and AI models right from a web browser. To get started, click the Add demo button in the left panel. We can then select a Computer Vision, Natural Language, or Audio demo. I went with a Computer Vision demo for Object Detection using the UP USB camera: I added one Yolov8s demo using the Intel N150 CPU, and another identical demo relying on the “Intel Graphics (GPU)”.
We can click on the demo in the left panel to start it, and the camera output will show up with boxes to highlight detected objects. The dashboard also reports the frame rate, about 1.25 FPS on the CPU.
We can also check the CPU, GPU, and memory usage as the AI workload is running. We’re close to 100% CPU usage here, and in the screenshot below, the frame rate dropped further to 1.04 FPS.
Switching to GPU-accelerated object detection improves the inference speed to about 6-7 FPS.
CPU usage is still close to 100%, and the GPU is now being used a bit more. Memory usage is about the same at 39% (of ~8GB).
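Under the hood, the sizing tool’s workers presumably select the inference device through OpenVINO’s device string, which is what the CPU/GPU choice in the dashboard maps to. A guarded sketch (pip install openvino) that simply lists what OpenVINO can see, and skips itself where OpenVINO is not installed:

```python
# List the inference devices OpenVINO can see on this machine.
devices = []
try:
    import openvino as ov
    devices = ov.Core().available_devices
    print("Available devices:", devices)
except ImportError:
    print("openvino not installed here")
# A worker would then compile its model for one of these names, e.g.
# core.compile_model(model, "GPU") for the N150 iGPU, "CPU" for the cores,
# or "AUTO" to let OpenVINO pick.
```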
After looking at the script code, I realized I needed to add a parameter (any will do) to get a menu:
./start_app.sh menu
=============================
       Function Menu
=============================
1. Object detect -- Video
2. Object detect -- Camera
3. Chatbot
0. Exit
=============================
Please input :
The first option uses videos stored in the eMMC flash, but since we have the camera connected, I went with option 2 to run object detection with the USB camera. It should be similar to the demo above, just not in a web browser.
I was told OpenVino Object Detect was already installed, but since I had never run that test before, I asked the script to delete and reinstall it. Once done, we are asked to select the hardware (Intel Device was the only option) and whether we want to run it on the CPU or GPU; I went with the latter.
A window will open with the camera output, detection boxes, and red text with inference time (38.8ms) and frame rate (25.9 FPS). That’s much faster than in the web browser…
We can also press the “a” key to see CPU and memory usage.
I went back to the menu to install the Chatbot. I was again told it was already installed, so I went ahead without requesting a new install, and it failed due to a missing demo environment…
So I reinstalled OpenVino Chatbot, selected Intel Device, and tiny-llama-1b-chat, the only options I was offered.
We now have a Chatbot up and running in the web browser. The speed is not too bad since it’s only a 1-billion-parameter model, but you can’t expect it to know too much… It can still be useful for small, custom language models.
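The chatbot demo presumably wraps an OpenVINO LLM pipeline such as OpenVINO GenAI’s LLMPipeline; a heavily hedged sketch (pip install openvino-genai), where the model directory name is a placeholder of my own, not the toolkit’s actual layout:

```python
# Run a short generation with an OpenVINO-exported TinyLlama model.
# "TinyLlama-1.1B-Chat-ov" is a placeholder model directory, and the code
# skips itself when the package or model is unavailable.
reply = None
try:
    import openvino_genai
    pipe = openvino_genai.LLMPipeline("TinyLlama-1.1B-Chat-ov", "GPU")
    reply = pipe.generate("What is a single board computer?", max_new_tokens=64)
    print(reply)
except Exception as exc:  # missing package or model directory
    print(f"chatbot sketch skipped: {exc}")
```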
Power consumption
Since it’s often requested, I also measured the power consumption of the development kit using a wall power meter:
- Power off – 1.6 – 1.7 Watts
- Idle – 5.2 – 5.5 Watts (fan active at all times)
- Stress test (stress -c 4)
- First few seconds – 18.2 – 18.6 Watts
- Longer runs – 16.7 – 16.8 Watts
- Object detection – Camera + GPU – 17.6 – 18.1 Watts
Remarks: PL1 was set to 12W. An HDMI monitor (Eazeye Radiant), an Ethernet cable, a USB mouse, and a wireless USB dongle were connected to the board. I also added the UP USB camera for the object detection test.
Conclusion
The UP TWL AI Dev Kit is an entry-level artificial intelligence development kit relying only on the CPU and GPU of the Intel Processor N150 Twin Lake SoC. It’s clearly not an AI powerhouse, but it can be suitable for some AI workloads. The system ships with Network Optix Nx Meta support, as well as the AAEON UP AI toolkit, to easily experiment with AI workloads. The main downside I found is that the 64GB eMMC flash fills up pretty quickly, and the UP TWL SBC offers no storage expansion options except via its USB 3.2 (10 Gbps) ports.
Otherwise, everything works as expected, including Gigabit Ethernet, a relatively fast eMMC flash, HDMI video and audio output, all three 10 Gbps USB ports, the RTC, GPIOs, etc… As usual, the company is rather conservative with power limits, and you may extract more performance by changing the power limits in the BIOS, as we showed in this review.
Next up is the UP Squared Pro TWL AI Dev Kit, also based on an Intel N150 SoC coupled with 8GB RAM and a 64GB eMMC flash, but shipping with a Hailo-8L M.2 AI accelerator module for higher AI performance. I’ll also make sure to install an M.2 NVMe SSD to avoid the storage issues I had with the UP TWL AI Dev Kit.
I’d like to thank AAEON for sending the UP TWL AI Dev Kit for review. It can be purchased for $279 on the UP shop. The kit includes the board, a 12V/5A power supply, and a USB camera.

Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.