
Hello, today I am going to review the reComputer AI Industrial R2135-12 from Seeed Studio. This is an industrial edge computer built around the Raspberry Pi Compute Module 5 platform. The model is configured with 8 GB LPDDR4 memory and 32 GB eMMC storage. It provides a rich set of I/O options, including dual Gigabit Ethernet, USB 3.0/USB 2.0, HDMI output, and industrial interfaces such as RS-485/RS-232, CAN, and GPIO, along with Wi-Fi and Bluetooth support and a wide DC power input range suitable for industrial environments.
In addition to the standard checks and benchmarks, I will also include a hands-on demo application in which the system runs an AI model for real-time people detection from a USB camera, then sends detection results to an external ESP32 microcontroller to drive LED matrices that visually highlight the locations of detected people. I made the following YouTube video to quickly demonstrate the AI performance of the reComputer R2135-12. Continue reading to find out how this was implemented.
For this review, I kept the system close to its factory state. Apart from installing basic benchmarking tools like inxi, sbc-bench, and Geekbench, I used only the preinstalled software, with no system updates or extra packages, to see how ready the device is right out of the box.
reComputer Industrial R2135-12 unboxing
The parcel was shipped from China to Thailand and arrived in about 10 days in a sturdy cardboard box, with the contents well-protected using brown shredded paper as filler. Inside the box, all components were neatly packed and securely cushioned, preventing any noticeable movement during transit. Below is the complete list of the received components.
- reComputer Industrial R2135-12
- Mounting brackets
- Bracket screws
- DIN-rail clip
- DC power female jack to screw terminal adapter
- 12V/3A power adapter (with 4 interchangeable adapter plugs)
- 15*2-pin terminal block connector (male)
- 120Ω resistors
- Wi-Fi/BLE antenna
- User manual (not in the photo)


First-Time Setup and Use
The device weighs about 1.3 kg, and when I first picked it up, it felt a bit heavier than I expected. The main enclosure is made of aluminum, which serves both as a sturdy protective case and a passive heatsink. On the bottom panel, a label identifies the unit as the reComputer Industrial R2135-12.


Based on the naming convention described in the official documentation, this confirms that the unit I received is built around the Raspberry Pi Compute Module 5 and comes equipped with 8 GB of RAM and 32 GB of onboard eMMC storage. It also includes wireless networking support and an integrated Hailo-8 AI accelerator.
Teardown
I disassembled the device to inspect the internal components by removing the four screws on the bottom panel, which allowed the side panels to be detached. I then carefully lifted the top panel to look inside. However, the Wi-Fi antenna was firmly attached to the panel, and there appeared to be thermal paste between the bottom panel and some components on the main board. To avoid disturbing these connections, I left everything untouched, as shown in the images below.




Overall, the internal layout looks well-organized, with all key components securely installed. Aside from a small amount of excess silicone adhesive around a few capacitors, the assembly appears clean and solid.

Powering up the device
According to the user manual, the device supports two power input options: a DC terminal input (9–36 V DC) and a 30 W PoE port. The DC terminal input is recommended when powering high-demand peripherals or multiple external devices. For this review, I powered the unit using the provided 2-pin terminal block connected to the included 12V/3A power adapter, as shown in the image below. The device does not include a physical power button and automatically boots as soon as power is applied, so it should always be shut down properly through the operating system.

The device I received comes preinstalled with a ready-to-use Raspberry Pi OS system image, allowing it to be used straight out of the box. The images below show the default operating system desktop and the output from htop, which displays some of the default running processes.


According to the documentation, users can re-burn the system image using the image files provided in the official GitHub repository by selecting the reComputer-R2x-arm64 option. The user manual provides detailed, step-by-step instructions for using rpiboot together with Raspberry Pi Imager to reflash the system. It also covers installing Ubuntu on the reComputer Industrial R2135-12 by flashing an image downloaded by following the official Install Ubuntu on a Raspberry Pi guide.
Checking common features using the command line
The user manual includes several command-line test instructions, such as querying GPIO pins, scanning for Wi-Fi networks, and toggling the user LED. I ran a selection of these commands, and they all worked as expected. The output below shows the result of the GPIO mapping query.
cat /sys/kernel/debug/gpio
...
gpiochip0: GPIOs 569-622, parent: platform/1f000d0000.gpio, pinctrl-rp1:
 gpio-569 (ID_SDA  |spi2 CS0 ) out hi ACTIVE LOW
 gpio-570 (ID_SCL  )
 gpio-571 (GPIO2   )
 gpio-572 (GPIO3   )
 gpio-573 (GPIO4   |spi3 CS0 ) out hi ACTIVE LOW
 gpio-574 (GPIO5   )
 gpio-575 (GPIO6   )
 gpio-576 (GPIO7   )
 gpio-577 (GPIO8   )
 gpio-578 (GPIO9   |sysfs    ) in  hi
 gpio-579 (GPIO10  )
 gpio-580 (GPIO11  )
 gpio-581 (GPIO12  )
 gpio-582 (GPIO13  )
 gpio-583 (GPIO14  )
 gpio-584 (GPIO15  )
 gpio-585 (GPIO16  )
 gpio-586 (GPIO17  )
 gpio-587 (GPIO18  )
 gpio-588 (GPIO19  )
 gpio-589 (GPIO20  )
 gpio-590 (GPIO21  )
 gpio-591 (GPIO22  )
 gpio-592 (GPIO23  )
 gpio-593 (GPIO24  |spi2 CS1 ) out hi ACTIVE LOW
 gpio-594 (GPIO25  |spi3 CS1 ) out hi ACTIVE LOW
 gpio-595 (GPIO26  )
 gpio-596 (GPIO27  )
...
Next, I tested the user RGB LED by turning it purple, which was achieved by enabling the red and blue LEDs using the following commands.
recomputer@reComputer-R2x:/sys/class/leds $ ls
ACT         input21::capslock  input21::kana     input21::scrolllock  led-green  mmc0    mmc1::
default-on  input21::compose   input21::numlock  led-blue             led-red    mmc0::  PWR
echo 1 > /sys/class/leds/led-red/brightness
echo 1 > /sys/class/leds/led-blue/brightness
Wi-Fi scanning also worked as expected.
recomputer@reComputer-R2x:~ $ sudo iwlist wlan0 scan
wlan0     Scan completed :
          Cell 01 - Address: 9C:63:5B:CD:36:ED
                    Channel:157
                    Frequency:5.785 GHz
                    Quality=37/70  Signal level=-73 dBm
                    Encryption key:on
                    ESSID:"JUDA_TP_5GHz"
                    Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s
                              36 Mb/s; 48 Mb/s; 54 Mb/s
                    Mode:Master
                    ...
          Cell 02 - Address: 9C:63:5B:FD:36:EC
                    Channel:9
                    Frequency:2.452 GHz (Channel 9)
                    Quality=51/70  Signal level=-59 dBm
                    Encryption key:on
                    ESSID:"JUDA_TP"
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s
                              18 Mb/s; 36 Mb/s; 54 Mb/s
                    Bit Rates:6 Mb/s; 12 Mb/s; 24 Mb/s; 48 Mb/s
                    Mode:Master
                    ...
          Cell 03 - Address: 9E:63:5B:FD:36:ED
                    Channel:157
                    Frequency:5.785 GHz
                    Quality=36/70  Signal level=-74 dBm
                    Encryption key:on
                    ESSID:""
                    Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s
                              36 Mb/s; 48 Mb/s; 54 Mb/s
                    Mode:Master
                    ...
The Bluetooth scanning results are shown below.
sudo bluetoothctl
[bluetooth]# scan on
Discovery started
[CHG] Controller 88:A2:9E:31:39:96 Discovering: yes
[NEW] Device 8C:DF:2C:AA:BF:CA vivo Y27 5G
[CHG] Device 8C:DF:2C:AA:BF:CA RSSI: -51
The final test in this section focuses on verifying the functionality of the DO (digital output). This was done by simply using a multimeter to measure the voltage at GPIO638 (DO1), which correctly switched between 0 V and approximately 1 V as the output level was toggled between low and high using the following command-line instructions.
echo 638 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio638/direction
echo 1 > /sys/class/gpio/gpio638/value
echo 0 > /sys/class/gpio/gpio638/value
System information
Next, I installed and ran the inxi command-line tool to check the basic system information, with the output shown below.
recomputer@reComputer-R2x:~ $ inxi -F
System:
  Host: reComputer-R2x Kernel: 6.12.34+rpt-rpi-2712 arch: aarch64 bits: 64
  Desktop: LabWC Distro: Debian GNU/Linux 12 (bookworm)
Machine:
  Type: ARM System: Raspberry Pi Compute Module 5 Rev 1.0 details: N/A
  rev: d04180 serial: d2f02181ab20c2b1
CPU:
  Info: quad core model: N/A variant: cortex-a76 bits: 64 type: MCP
  cache: L2: 2 MiB
  Speed (MHz): avg: 2400 min/max: 1500/2400 cores: 1: 2400 2: 2400 3: 2400
  4: 2400
Graphics:
  Device-1: bcm2712-hdmi0 driver: vc4_hdmi v: N/A
  Device-2: bcm2712-hdmi1 driver: vc4_hdmi v: N/A
  Display: wayland server: X.org v: 1.21.1.7 with: Xwayland v: 22.1.9
  compositor: LabWC driver: gpu: vc4-drm,vc4_crtc,vc4_dpi,vc4_dsi,
  vc4_firmware_kms,vc4_hdmi,vc4_hvs,vc4_txp,vc4_v3d,vc4_vec
  resolution: 1920x1080~60Hz
  API: OpenGL v: 3.1 Mesa 24.2.8-1~bpo12+rpt3 renderer: V3D 7.1.10.2
Audio:
  Device-1: bcm2712-hdmi0 driver: vc4_hdmi
  Device-2: bcm2712-hdmi1 driver: vc4_hdmi
  API: ALSA v: k6.12.34+rpt-rpi-2712 status: kernel-api
  Server-1: PipeWire v: 1.2.7 status: active
Network:
  Device-1: Raspberry Pi RP1 PCIe 2.0 South Bridge driver: rp1
  IF: wlan0 state: down mac: 88:a2:9e:31:39:95
  Device-2: Microchip (formerly SMSC) SMSC9512/9514 Fast Ethernet Adapter
  type: USB driver: smsc95xx
  IF: eth1 state: down mac: 2c:f7:f1:22:ee:3d
  IF-ID-1: can0 state: down mac: N/A
  IF-ID-2: can1 state: down mac: N/A
  IF-ID-3: eth0 state: up speed: 1000 Mbps duplex: full mac: 88:a2:9e:31:39:94
Bluetooth:
  Device-1: bcm7271-uart driver: bcm7271_uart
  Report: hciconfig ID: hci0 state: up address: 88:A2:9E:31:39:96 bt-v: 3.0
Drives:
  Local Storage: total: 29.12 GiB used: 7.8 GiB (26.8%)
  ID-1: /dev/mmcblk0 type: Removable vendor: Samsung model: BJTD4R
  size: 29.12 GiB
Partition:
  ID-1: / size: 28.08 GiB used: 7.72 GiB (27.5%) fs: ext4 dev: /dev/mmcblk0p2
Swap:
  ID-1: swap-1 type: file size: 512 MiB used: 0 KiB (0.0%) file: /var/swap
Sensors:
  System Temperatures: cpu: 47.9 C mobo: N/A
  Fan Speeds (RPM): N/A
Info:
  Processes: 270 Uptime: 11m Memory: 7.88 GiB used: 2.21 GiB (28.1%)
  gpu: 8 MiB Shell: Bash inxi: 3.3.26
The report confirms that the system is based on the Raspberry Pi Compute Module 5 Rev 1.0, running Debian GNU/Linux 12 (bookworm) with a 6.12.34+rpt-rpi-2712 kernel on a 64-bit ARM (aarch64) architecture. The quad-core Cortex-A76 CPU scales between 1.5 and 2.4 GHz, and all four cores were running at the maximum frequency at the time of the report. The system includes 7.88 GiB of RAM, with about 28% in use shortly after boot, and 29.12 GiB of local storage, of which about 27% was already utilized. A CPU temperature of around 48°C suggests stable thermal behavior under light load.
On the graphics side, the device uses the Broadcom BCM2712 (VC4/V3D) GPU with Mesa 24.2.8, supporting OpenGL 3.1 and driving a 1920 × 1080 @ 60 Hz display over HDMI. The desktop runs under a Wayland (LabWC) environment with Xwayland enabled. For networking, a Gigabit Ethernet (eth0) interface was active during the test, while additional Ethernet, Wi-Fi, CAN, and Bluetooth interfaces were present but inactive at the time. Audio output is provided over HDMI using ALSA, with PipeWire running in the background.
Overall, these reported hardware and software details are consistent with the advertised specifications of the reComputer Industrial R2135-12, including the Compute Module 5 platform, quad-core Cortex-A76 CPU, and 8 GB memory configuration.
Benchmarking
SBC-Bench
Next, I installed and ran sbc-bench, which completed successfully with all checks passing. The results showed no CPU throttling or swapping, stable clock frequencies reaching the advertised maximum, and acceptable background activity throughout the test.
Memory performance was consistent across all Cortex-A76 cores, with memcpy throughput of around 5.1 GB/s and memset throughput of about 8.5 GB/s. Memory latency remained very low within cache ranges (approximately 1.7 ns) and increased smoothly as buffer sizes grew, reaching roughly 120–135 ns for very large buffers.
...
Memory performance (all 8 CPU clusters measured individually):
memcpy: 5149.2 MB/s (Cortex-A76)
memset: 8584.0 MB/s (Cortex-A76)
memcpy: 5148.8 MB/s (Cortex-A76)
memset: 8577.6 MB/s (Cortex-A76)
memcpy: 5143.4 MB/s (Cortex-A76)
memset: 8584.4 MB/s (Cortex-A76)
memcpy: 5145.6 MB/s (Cortex-A76)
memset: 8585.5 MB/s (Cortex-A76)
memcpy: 5154.8 MB/s (Cortex-A76)
memset: 8585.6 MB/s (Cortex-A76)
memcpy: 5133.8 MB/s (Cortex-A76)
memset: 8577.9 MB/s (Cortex-A76)
memcpy: 5149.0 MB/s (Cortex-A76)
memset: 8576.4 MB/s (Cortex-A76)
memcpy: 5149.6 MB/s (Cortex-A76)
memset: 8590.0 MB/s (Cortex-A76)
...
     32k: 1.691 1.690 1.690 1.690 1.690 1.691 1.692 3.296
     64k: 1.701 1.697 1.701 1.697 1.700 1.700 1.702 3.303
    128k: 5.072 5.071 5.071 5.073 5.071 5.769 7.212 12.81
    256k: 5.426 5.259 5.251 5.216 5.251 5.837 7.288 12.82
    512k: 7.258 7.734 7.469 7.737 7.159 8.220 9.185 15.00
   1024k: 17.89 17.10 17.78 17.10 17.68 17.87 20.00 29.25
   2048k: 19.21 18.72 18.64 18.72 18.74 19.88 23.22 31.64
   4096k: 76.75 78.28 77.24 78.45 76.25 89.41 120.7 159.3
   8192k: 119.0 102.7 104.4 103.1 102.8 109.7 142.4 193.9
  16384k: 113.6 112.0 113.4 112.9 112.9 118.7 143.9 162.7
  32768k: 126.9 123.9 125.8 123.5 125.4 126.6 131.4 141.6
  65536k: 129.4 126.8 128.7 126.7 128.7 127.8 130.5 133.9
 131072k: 129.8 128.5 129.6 128.5 129.6 128.5 129.8 132.5
...
For compute workloads, the 7-Zip benchmark reported multi-core total scores of roughly 11,100 across three consecutive runs and a single-threaded score of 3,121, showing stable and repeatable compression and decompression performance.
...
7-zip total scores (3 consecutive runs): 11110,11136,11131, single-threaded: 3121
...
In cryptographic tests, OpenSSL results were strong and consistent across all cores, with AES-128-CBC throughput close to 1.88 GB/s, AES-192-CBC around 1.57 GB/s, and AES-256-CBC approximately 1.35 GB/s at larger block sizes.
...
OpenSSL results (all 8 CPU clusters measured individually):
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-128-cbc    594976.14k  1290786.28k  1694330.03k  1827406.51k  1881746.09k  1886972.59k (Cortex-A76)
aes-128-cbc    595541.14k  1291237.40k  1693894.83k  1828485.12k  1881284.61k  1887262.04k (Cortex-A76)
aes-128-cbc    596013.89k  1289307.41k  1694745.86k  1827424.94k  1881426.60k  1887447.72k (Cortex-A76)
aes-128-cbc    595018.13k  1293503.04k  1694060.89k  1826374.66k  1881770.67k  1887300.27k (Cortex-A76)
aes-128-cbc    598141.49k  1291072.23k  1694859.86k  1827430.40k  1881655.98k  1887485.95k (Cortex-A76)
aes-128-cbc    598797.65k  1289548.05k  1694564.52k  1826965.50k  1881718.78k  1887327.57k (Cortex-A76)
aes-128-cbc    594819.94k  1290364.86k  1696595.46k  1827517.78k  1881795.24k  1887431.34k (Cortex-A76)
aes-128-cbc    595902.32k  1289898.33k  1695651.93k  1827775.15k  1881655.98k  1887316.65k (Cortex-A76)
aes-192-cbc    562560.68k  1124640.34k  1432901.29k  1518641.83k  1569901.23k  1573453.82k (Cortex-A76)
aes-192-cbc    562473.23k  1124936.36k  1432863.06k  1518676.99k  1569764.69k  1573306.37k (Cortex-A76)
aes-192-cbc    564201.12k  1124473.24k  1433254.57k  1518724.44k  1569559.89k  1573650.43k (Cortex-A76)
aes-192-cbc    563271.49k  1125458.18k  1432898.47k  1518941.18k  1569434.28k  1573557.59k (Cortex-A76)
aes-192-cbc    561526.78k  1123707.39k  1433080.75k  1518962.69k  1569876.65k  1573759.66k (Cortex-A76)
aes-192-cbc    563839.60k  1124952.77k  1432805.12k  1518688.60k  1569884.84k  1573726.89k (Cortex-A76)
aes-192-cbc    561921.38k  1124819.56k  1432981.25k  1518673.92k  1569901.23k  1573765.12k (Cortex-A76)
aes-192-cbc    563001.52k  1124441.58k  1432818.69k  1517664.60k  1569540.78k  1573694.12k (Cortex-A76)
aes-256-cbc    548336.69k   998419.99k  1242847.57k  1316564.65k  1346516.31k  1349380.78k (Cortex-A76)
aes-256-cbc    547503.40k   998072.36k  1242349.57k  1316630.87k  1346502.66k  1347960.83k (Cortex-A76)
aes-256-cbc    547646.89k   998284.86k  1241602.13k  1316511.40k  1346497.19k  1349074.94k (Cortex-A76)
aes-256-cbc    539631.60k   992004.89k  1239950.42k  1315208.19k  1345888.26k  1349211.48k (Cortex-A76)
aes-256-cbc    547980.23k   998398.10k  1242483.54k  1316303.19k  1346377.05k  1349167.79k (Cortex-A76)
aes-256-cbc    547453.64k   999679.94k  1242614.87k  1316233.22k  1346546.35k  1349326.17k (Cortex-A76)
aes-256-cbc    547551.88k   998091.35k  1242746.37k  1316634.28k  1346527.23k  1349266.09k (Cortex-A76)
aes-256-cbc    547312.89k   998043.54k  1241990.14k  1316261.21k  1346543.62k  1349413.55k (Cortex-A76)
...
Overall, the results show that the device delivers stable CPU behavior, solid memory bandwidth, predictable latency scaling, and reliable integer and cryptographic performance.
Benchmarking filesystem with iozone
I also ran filesystem benchmarking using iozone to evaluate storage I/O performance under direct I/O conditions. The test was configured with a 512 MB file size and large record sizes of 1 MB and 16 MB, and the results are shown below.
...
	Include fsync in write timing
	O_DIRECT feature enabled
	Auto Mode
	File size set to 524288 kB
	Record Size 1024 kB
	Record Size 16384 kB
	Command line used: iozone -e -I -a -s 512M -r 1024k -r 16384k -i 0 -i 1 -i 2
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                            random    random
            kB  reclen    write  rewrite     read   reread     read     write
        524288    1024    83964    95051   319753   320757   319815     78884
        524288   16384    81408    77665   334355   332469   333798     76082
...
Sequential read performance was consistent across both record sizes, measuring approximately 320–334 MB/s, and re-read and random read results fell within the same range, indicating stable read throughput under repeated access patterns. Sequential write performance reached approximately 80–84 MB/s, with rewrite results of about 78–95 MB/s and random write throughput around 76–79 MB/s. Overall, these results indicate high and consistent read bandwidth, along with moderate and repeatable write performance when tested with direct I/O enabled.
Benchmarking with Geekbench 6.5
Next, I installed and ran Geekbench 6.5.0. The system achieved a single-core score of 865 and a multi-core score of 1,982, reflecting its overall CPU performance. The full benchmark results can be viewed via this Geekbench result link.
The Geekbench 6 single-core test produced an overall score of 865, with most sub-tests falling in the mid-800 range. Stronger single-core performance was observed in Clang compilation, Horizon Detection, Navigation, and PDF Rendering, while general workloads such as Text Processing, HDR, HTML5 browsing, Ray Tracing, and Structure from Motion stayed close to the average. More demanding vision-related tasks, particularly Object Detection and Object Remover, showed noticeably lower scores.

The multi-core results reflect how well the system scales across all CPU cores under parallel workloads. High scores were achieved in Asset Compression, Ray Tracing, and Clang compilation, indicating efficient multi-core utilization for compute-heavy tasks. Other workloads, such as PDF Rendering, Structure from Motion, Navigation, Background Blur, Photo Library, and HDR, also benefited from parallel execution, while tasks like Text Processing, HTML5 browsing, and Object Detection showed more moderate scaling.

Overall, the multi-core score is about 2.3× higher than the single-core result, clearly showing the benefit of scaling workloads across multiple CPU cores. Tasks that score in the mid-800 range in single-core mode typically exceed 2,000 in multi-core tests, with some surpassing 3,000, while workloads such as Object Detection and Object Remover improve but remain relatively lower due to their per-core performance limits.
Benchmarking Web Browser Performance with Speedometer 3.1
Next, I used Speedometer 3.1 to benchmark the two web browsers that come preinstalled with the OS image. Both browsers delivered very similar Speedometer scores.
For Chromium, the Speedometer test reported a mean score of around 4.16, with results tightly clustered across runs and a geometric mean execution time of roughly 240 ms, indicating stable browser performance. Most of the runtime was spent in JavaScript- and DOM-heavy tasks, such as complex UI updates and chart rendering.
Firefox delivered very similar results with comparably low run-to-run variation, again corresponding to a geometric mean execution time in the same ~240 ms range.


The small difference between Chromium and Firefox falls within normal benchmark variation and does not indicate a meaningful performance gap; overall, both browsers deliver effectively equivalent web performance on this system.
Benchmarking YouTube Video Playback Performance
Next, I tested YouTube playback by playing 4K videos in full-screen mode on a 4K monitor (3840 × 2160) with YouTube’s Stats for Nerds enabled. Playback was smooth and stable from 144p up to 1080p, with no dropped frames observed. However, resolution options above 1080p were not available.
At lower resolutions (144p–360p), playback ran at modest bitrates and consistently maintained long buffer health, often around 120 seconds, indicating ample decoding and network headroom even when scaled to a large 4K viewport. At higher resolutions (480p–1080p), playback remained reliable within the same display setup, with connection speeds scaling appropriately and buffer health staying in a comfortable range, typically between 30 and 120 seconds. Even at 1080p/30, the system sustained smooth playback without dropped frames, demonstrating stable video decoding and buffering behavior across all commonly used YouTube resolutions.






Benchmarking Web Browser 3D Rendering Performance Using WebGL
Next, I evaluated web browser 3D rendering performance using the WebGL Aquarium demo in the Chromium web browser.
With the canvas fixed at 1024 × 1024, the results showed clear and predictable performance scaling as scene complexity increased. At low object counts, performance was excellent: the scene ran at a stable 60 fps with 1–100 fish, remained close to real time at 500 fish (~54 fps), and still delivered acceptable smoothness at 1,000 fish (~48 fps). As the number of fish increased further, the frame rate dropped steadily, reaching around 29 fps at 5,000 fish, where animation began to feel noticeably less smooth.

Under heavier loads, GPU limitations became more apparent. At 10,000 fish, the frame rate dropped to roughly 16 fps, decreasing further to about 11 fps at 15,000 fish and 8 fps at 20,000 fish. Extremely dense scenes with 25,000–30,000 fish reduced performance to approximately 6–7 fps, clearly beyond real-time rendering. Overall, these results indicated that Chromium could handle moderate WebGL workloads smoothly, but performance degraded rapidly as draw calls and fragment workload increased, highlighting GPU-bound behavior rather than browser instability. The final graph below compares the frame rates across these scenarios.


Testing the Hailo-8 AI Accelerator
My reComputer Industrial R2135-12 was equipped with a preinstalled Hailo-8 AI accelerator, with all required libraries and packages already installed and ready to use. To test the Hailo Raspberry Pi 5 examples, I simply sourced the setup script at /mnt/hailo-rpi5-examples/setup_env.sh, which activated the predefined virtual environment and configured all the necessary paths for the Hailo-8 runtime. Once the environment was enabled, the system was immediately ready to run Hailo-8 example applications and inference workloads without any additional setup. All of the provided Python example scripts were built on GStreamer and its internal pipelines, which handled video streaming, video processing, and AI inference.

To test these examples, I started with the simple_detection.py script, a lightweight detection demo designed to minimize CPU load. Useful command-line options include --input, which selects the input source (such as a video file or a connected USB webcam), and --show-fps, which overlays frame rate information on the video output. With the default configuration provided in the Hailo-8 examples, the script ran on a sample video file with a predefined frame rate limit, and the console output displayed detection details such as confidence scores for each detected object. Overall, the demo ran smoothly at approximately 30 FPS. The following image shows the results of running the simple_detection.py script.

The image below shows the result of running full detection using the detection.py example script.

Here are the results of running the instance_segmentation.py script, which performed as expected, achieving a similar 30 FPS with the default configuration.

The depth_estimation.py example script is based on the SCDepthV3 depth estimation model and also performed very well.

The final example I tested was the pose estimation demo, which returned 17 keypoints (HAILO_LANDMARKS) for each detected person. These included landmarks for the nose, eyes, ears, shoulders, elbows, wrists, hips, knees, and ankles. The example script also worked as expected and produced stable, consistent pose estimation results.

Use case demonstration – Detecting persons’ location in a delimited zone
In this section, I demonstrated the AI performance of the reComputer Industrial R2135-12 by detecting people in a video stream and estimating each person’s location in real-world coordinates. The detected positions were then highlighted on an external 8×16 LED matrix driven by an ESP32-based development board, effectively acting as a simple LED “floor plan” to visualize where people were located.
It’s worth noting that, in a real-time setup, accurate position estimation would normally require an interactive GUI that allows users to click on the video frame to collect reference points for calibration. Due to time constraints, I skipped this step and used a prerecorded video instead, focusing on showcasing the raw computing and AI capabilities of the reComputer itself.
Background theory
The following figure briefly illustrates the concept of recovering real-world coordinates from image points. In this example, we assume that the ground plane is known and that the real-world coordinates of two reference points, A and B, are given. Their corresponding locations in the image can also be identified, denoted as point-a and point-b. With this information, and by knowing the camera frustum (shown as the yellow triangle in the figure), the position and orientation of the camera can be estimated. Based on this setup, if an object lies on the ground plane and its image location can be identified (for example, point-c), its real-world coordinates can be recovered by back-projecting a ray (shown in orange) from the camera center through point-c in the image and finding the intersection of this ray with the ground plane.
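The geometry described above can be written compactly. Assuming the ground plane is expressed as n · X + d₀ = 0 (with n the plane normal) and the estimated camera pose is (R, t), the camera center and the back-projected ray through an image point c are:

```latex
% Camera center in world coordinates, recovered from the pose (R, t)
C = -R^{\top} t
% Viewing ray through the homogeneous image point \tilde{c}, expressed in world coordinates
X(s) = C + s\,d, \qquad d = \frac{R^{\top} K^{-1} \tilde{c}}{\lVert R^{\top} K^{-1} \tilde{c} \rVert}
% Intersection with the ground plane n \cdot X + d_{0} = 0
s^{*} = -\frac{n \cdot C + d_{0}}{n \cdot d}, \qquad X^{*} = C + s^{*} d
```

Here K is the camera intrinsic matrix obtained from calibration; the same expressions appear later in the Python implementation of the ray construction and ray–plane intersection.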

The following image shows an example of how the tripod base coordinates could be estimated when the camera intrinsics were known, together with four ground control points. In this setup, the reference origin (0, 0) was defined at the center point among points A–D, with each floor tile measuring 60 × 60 cm. The actual location of the tripod base was approximately (0, −60, 0) in world coordinates. Based on this configuration, the estimated tripod position (0.3, -59.4, 0.0) closely matched the expected real-world location, demonstrating the accuracy of this simple coordinate recovery approach.


Setting up the scene and estimating the camera pose
First, I calibrated the camera using a set of chessboard images and OpenCV’s camera calibration functions to obtain the intrinsic parameters (fx, fy, cx, cy) along with the lens distortion coefficients (k1, k2, p1, p2, k3).
fx = 3238.6635331929683
fy = 3241.0613554172223
cx = 2042.009749715308
cy = 925.2763732253052
k1 = 0.2523496972079927
k2 = -1.4002772998133683
p1 = 0.002290276484551306
p2 = -0.0014385064745750323
k3 = 2.7815419705574613
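OpenCV functions such as cv2.solvePnP and cv2.undistortPoints expect these values packed into a 3×3 camera matrix and a distortion coefficient vector. Here is a minimal sketch of that packing, using the calibrated values above (NumPy only; the variable names are my own):

```python
import numpy as np

# Pack the calibrated intrinsics into the 3x3 camera matrix K and the
# distortion vector in OpenCV's (k1, k2, p1, p2, k3) order.
fx, fy = 3238.6635331929683, 3241.0613554172223
cx, cy = 2042.009749715308, 925.2763732253052

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

dist_coeffs = np.array([0.2523496972079927, -1.4002772998133683,
                        0.002290276484551306, -0.0014385064745750323,
                        2.7815419705574613])

print(K.shape, dist_coeffs.shape)   # (3, 3) (5,)
```

K and dist_coeffs in this form can be passed directly to the pose estimation and back-projection functions shown later in this section.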
Next, I prepared the testing scene as shown in the image below, where the blue tape marked the reference origin (0, 0). The +X axis was defined along the A–B direction, while the +Y axis was defined along the A–D direction. I then recorded a video of the scene and manually extracted a frame to identify several reference points. Using this information, I estimated the camera transformation with the help of Python functions based on OpenCV’s Perspective-n-Point (PnP) pose estimation methods.


Below is the function I used to estimate the camera pose.
def estimate_camera_poses(pts_img, pts_world, camera_matrix, dist_coeffs):
    poses = []
    # Reshape points to the (N, 1, 3) / (N, 1, 2) layout expected by OpenCV
    objp = pts_world.reshape(-1, 1, 3)
    imgp = pts_img.reshape(-1, 1, 2)
    success, rvec, tvec = cv2.solvePnP(
        objp, imgp, camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    pose = numpy.hstack((R, tvec))    # 3x4 [R|t] pose matrix
    poses.append({
        'rotation_vector': rvec,
        'translation_vector': tvec,
        'rotation_matrix': R,
        'pose_matrix': pose
    })
    return poses
This is the estimated camera rotation and translation information.
R = [[ 0.02303737 -0.99858007  0.04803245]
     [-0.31316605 -0.05283524 -0.94822754]
     [ 0.94941893  0.00680254 -0.31393856]]
t = [-211.98, 101.39, 121.58]
The rotation matrix was more difficult to interpret directly, while the translation vector was much easier to understand. In this case, it indicated that the camera was positioned approximately 2.1 m away from point A along the negative X-axis (toward the bottom of the image), about 1 m along the Y-axis, and at a height of roughly 1.2 m above the ground plane relative to point A.
Foot location estimation
The next step was to estimate the 3D position of a detected person’s foot. For simplicity, I used the Hailo-8 pose_estimation.py example to detect human body landmarks and extracted the 2D image coordinates of the left and right ankles. These image coordinates were then converted into world coordinates using the back-projection technique described earlier.
The following two functions were used to construct a ray from the camera center and compute its intersection with the ground plane.
def create_a_ray(u, v, K, distCoeffs, R, t):
    # Undistort the pixel (u, v) into normalized camera coordinates
    pts = numpy.array([[[u, v]]], dtype=numpy.float64)
    undist = cv2.undistortPoints(pts, K, distCoeffs)
    x_n, y_n = undist[0, 0]
    d_cam = numpy.array([x_n, y_n, 1.0], dtype=numpy.float64)
    C = -R.T @ t                # camera center in world coordinates
    d_world = R.T @ d_cam       # ray direction in world coordinates
    d_world = d_world / numpy.linalg.norm(d_world)
    return C.reshape(3), d_world.reshape(3)

def intersect_ray_with_plane(C, d, n, d0):
    # Plane: n . X + d0 = 0; ray: X = C + s * d
    denom = n @ d
    if numpy.abs(denom) < 1e-9:   # ray parallel to the plane
        return None, None
    s = -(n @ C + d0) / denom
    if s < 0:                     # intersection behind the camera
        return None, None
    X = C + s * d
    return X, s
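As a quick sanity check of the intersection logic, here is a standalone toy example (the intersection function is repeated verbatim so the snippet runs on its own, without the OpenCV-dependent ray construction; the camera placement is an invented example, not from my test scene):

```python
import numpy as np

def intersect_ray_with_plane(C, d, n, d0):
    # Same math as above: solve n . (C + s*d) + d0 = 0 for s
    denom = n @ d
    if abs(denom) < 1e-9:
        return None, None
    s = -(n @ C + d0) / denom
    if s < 0:
        return None, None
    return C + s * d, s

# Synthetic check: camera 1 m above the ground plane z = 0 (units in cm),
# looking forward along +Y and downward.
C = np.array([0.0, 0.0, 100.0])
d = np.array([0.0, 0.6, -0.8])     # already unit length
n = np.array([0.0, 0.0, 1.0])      # ground plane normal, d0 = 0
X, s = intersect_ray_with_plane(C, d, n, 0.0)
print(X)   # X == [0, 75, 0]: the ray hits the floor 75 cm ahead of the camera
```

The intersection lands on z = 0 as expected, 75 cm ahead of a camera mounted 1 m above the floor, which matches the geometry sketched in the background theory section.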
Based on this approach, the estimated foot position was (202.0, 44.2, 0.0) in the world reference frame. This result was reasonable, as I was standing very close to point B, whose known coordinate is (240, 0).

Displaying the detected foot location using an ESP32 and LED matrices
The final step was to visualize the detected person’s location using two 8 × 8 LED matrices. The LED row and column indices were calculated by linearly scaling the estimated world coordinate range (approximately 480.0 × 240.0 cm) to match the 8 × 16 LED grid. These indices were then sent to an ESP32 over a USB connection.
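The scaling step can be sketched as follows. This is a minimal reconstruction of the mapping I described, not the exact code I ran; in particular, the assumption that world X spans [0, 480) cm across 16 columns and world Y spans [0, 240) cm across 8 rows (with the origin at one corner) is mine:

```python
# Assumed world extent (cm) and LED grid size for this sketch
WORLD_X_CM, WORLD_Y_CM = 480.0, 240.0
COLS, ROWS = 16, 8

def world_to_led(x_cm, y_cm):
    """Linearly scale a world coordinate (cm) to an LED (row, col) index."""
    col = int(x_cm / WORLD_X_CM * COLS)
    row = int(y_cm / WORLD_Y_CM * ROWS)
    # Clamp so points on the far edge still light the last LED
    return min(max(row, 0), ROWS - 1), min(max(col, 0), COLS - 1)

# The estimated foot position from the previous section
print(world_to_led(202.0, 44.2))   # (1, 6)
```

The resulting (row, col) pair is what gets sent over the USB serial link to the ESP32, which then lights the corresponding LED on the 8×16 grid.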
For this demonstration, I used a KidBright 32 V1.3 board, which includes an onboard HT16K33 LED matrix controller, making it convenient for quick prototyping. The image below shows the system running in real time with input from a USB webcam, where the illuminated LEDs correspond to the estimated person location. A video of this demo is included at the top of this review, and it can also be viewed directly via this YouTube link.

Temperature and heat distribution
The final test focused on temperature behavior and heat distribution. I first ran the device in an idle state for 5 minutes and then captured thermal images using a FLIR E4 thermal camera.

Under idle conditions, the average surface temperature was around 33 °C, with an ambient temperature of approximately 25 °C, as shown below.

Next, I tested the device under a full workload by running the WebGL Aquarium demo with 30,000 fish, playing a 1080p YouTube video, and running an AI example application simultaneously. The system was left running in this state for 10 minutes before capturing the thermal images shown below.

Based on the thermal images, the device exhibited a consistent and well-distributed heat profile across different operating states. Under heavier workloads, the enclosure surface temperature peaked at approximately 35–37 °C, with the warmest areas concentrated on the top panel and upper side surfaces. This indicated that the internal heat sources—most likely the Compute Module and the AI accelerator—were efficiently transferring heat to the aluminum enclosure, which functioned as a passive heatsink. Notably, no localized hot spots were observed, suggesting effective internal thermal coupling and heat spreading.

Conclusions
The reComputer AI Industrial R2135-12 offers a wide range of wired and wireless connectivity options. The pre-flashed system image and preinstalled software, including the Hailo-8 AI acceleration examples, saved me a significant amount of setup time. Its AI performance averaged around 30 FPS, which is more than sufficient for many of my research applications. In this review, I encountered only one minor issue: the official product page was somewhat confusing, as the images shown on the website differed from the unit I received, which initially led me to consult the wrong user manual.
We’d like to thank Seeed Studio for sending the reComputer AI Industrial R2135-12 for review. It is available for purchase from Seeed Studio for $279.00 for the 8 GB RAM / 32 GB eMMC (26 TOPS) version reviewed here, or you can select the 16 GB RAM variant for $339 on the same page. The Raspberry Pi CM5 Edge AI PC may eventually become available on the company’s Amazon and AliExpress stores.

My main research areas are digital image/audio processing, digital photogrammetry, AI, IoT, and UAV. I am open to other subjects as well.




