reComputer Industrial R2135-12 review – A Raspberry Pi CM5-powered fanless Edge AI PC with Hailo-8 AI accelerator

The reComputer AI Industrial R2135-12.

Hello, today I am going to review the reComputer AI Industrial R2135-12 from Seeed Studio. This is an industrial edge computer built around the Raspberry Pi Compute Module 5 platform. The model is configured with 8 GB LPDDR4 memory and 32 GB eMMC storage. It provides a rich set of I/O options, including dual Gigabit Ethernet, USB 3.0/USB 2.0, HDMI output, and industrial interfaces such as RS-485/RS-232, CAN, and GPIO, along with Wi-Fi and Bluetooth support and a wide DC power input range suitable for industrial environments.

In addition to the standard checks and benchmarks, I will also include a hands-on demo application in which the system runs an AI model for real-time people detection from a USB camera, then sends the detection results to an external ESP32 microcontroller that drives LED matrices to visually highlight the locations of detected people. I made the following YouTube video to quickly demonstrate the AI performance of the reComputer R2135-12. Continue reading to find out how this was implemented.

For this review, I kept the system close to its factory state. Apart from installing basic benchmarking tools like inxi, sbc-bench, and Geekbench, I used only the preinstalled software, with no system updates or extra packages, to see how ready the device is right out of the box.

reComputer Industrial R2135-12 unboxing

The parcel was shipped from China to Thailand and arrived in about 10 days in a sturdy cardboard box, with the contents well-protected using brown shredded paper as filler. Inside the box, all components were neatly packed and securely cushioned, preventing any noticeable movement during transit. Below is the complete list of the received components.

  • reComputer Industrial R2135-12
  • Mounting brackets
  • Bracket screws
  • DIN-rail clip
  • DC power female jack to screw terminal adapter
  • 12V/3A power adapter (with 4 interchangeable adapter plugs)
  • 2×15-pin terminal block connector (male)
  • 120Ω resistors
  • Wi-Fi/BLE antenna
  • User manual (not in the photo)
The reComputer Industrial R2135-12 packaging.
Received components.

First-Time Setup and Use

The device weighs about 1.3 kg, and when I first picked it up, it felt a bit heavier than I expected. The main enclosure is made of aluminum, which serves both as a sturdy protective case and a passive heatsink. On the bottom panel, a label identifies the unit as the reComputer Industrial R2135-12.

Label at the bottom panel.
The reComputer naming convention.

Based on the naming convention described in the official documentation, this confirms that the unit I received is built around the Raspberry Pi Compute Module 5 and comes equipped with 8 GB of RAM and 32 GB of onboard eMMC storage. It also includes wireless networking support and an integrated Hailo-8 AI accelerator.

Teardown

I disassembled the device to inspect the internal components by removing the four screws on the bottom panel, which allowed the side panels to be detached. I then carefully lifted the top panel to look inside. However, the Wi-Fi antenna was firmly attached to the panel, and there appeared to be thermal paste between the bottom panel and some components on the main board. To avoid disturbing these connections, I left everything untouched, as shown in the images below.

Removing the side panel.
Removing the panel on the opposite side.
From left to right, reset button, HDMI connectors, and 2×15 terminal block.
From left to right, grounding points, power terminal block, LEDs, reset switch, USB-C, Nano SIM, and Ethernet ports.

Overall, the internal layout looks well-organized, with all key components securely installed. Aside from a small amount of excess silicone adhesive around a few capacitors, the assembly appears clean and solid.

Top view.

Powering up the device

According to the user manual, the device supports two power input options: a DC terminal input (9–36 V DC) and a 30 W PoE port. The DC terminal input is recommended when powering high-demand peripherals or multiple external devices. For this review, I powered the unit using the provided 2-pin terminal block connected to the included 12V/3A power adapter, as shown in the image below. The device does not include a physical power button and automatically boots as soon as power is applied, so it should always be shut down properly through the operating system.

Powering the device via the 2-pin terminal block connector.

The device I received comes preinstalled with a ready-to-use Raspberry Pi OS system image, allowing it to be used straight out of the box. The images below show the default operating system desktop and the output from htop, which displays some of the default running processes.

Default desktop.
Checking running processes with htop.

According to the documentation, users can reflash the system image using the image files provided on the official GitHub repository by selecting the reComputer-R2x-arm64 option. Detailed, step-by-step instructions on using rpiboot together with Raspberry Pi Imager to reflash the system are available in the user manual. Users can also install Ubuntu on the reComputer Industrial R2135-12 by following the official Install Ubuntu on a Raspberry Pi guide to flash the downloaded image; the user manual covers the corresponding installation steps for the reComputer as well.

Checking common features using the command line

The user manual includes several command-line test instructions, such as querying GPIO pins, scanning for Wi-Fi networks, and toggling the user LED. I ran a selection of these commands, and they all worked as expected. The output below shows the result of the GPIO mapping query.


Next, I tested the user RGB LED by turning it purple, which was achieved by enabling the red and blue LEDs using the following command.

Testing the user LED.


Wi-Fi scanning also worked as expected.


The Bluetooth scanning results are shown below.


The final test in this section focuses on verifying the functionality of the DO (digital output). This was done by simply using a multimeter to measure the voltage at GPIO638 (DO1), which correctly switched between 0 V and approximately 1 V as the output level was toggled between low and high using the following command-line instructions.

System information

Next, I installed and ran the inxi command-line tool to check the basic system information, with the output shown below.


The report confirms that the system is based on the Raspberry Pi Compute Module 5 Rev 1.0, running Debian GNU/Linux 12 (bookworm) with a 6.12.34+rpt-rpi-2712 kernel on a 64-bit ARM (aarch64) architecture. The quad-core Cortex-A76 CPU operates at up to 2.4 GHz, scaling between 1.5 and 2.4 GHz, and all four cores were running at the maximum frequency at the time of the report. The system includes 7.88 GB of RAM, with roughly 23–28% in use shortly after boot, and 29.12 GB of local storage, of which about 27% was already utilized. A CPU temperature of around 47–48 °C suggests stable thermal behavior under light load.

On the graphics side, the device uses the Broadcom BCM2712 (VC4/V3D) GPU with Mesa 24.2.8, supporting OpenGL 3.1 and driving a 1920 × 1080 @ 60 Hz display over HDMI. The desktop runs under a Wayland (LabWC) environment with Xwayland enabled. For networking, a Gigabit Ethernet (eth0) interface was active during the test, while additional Ethernet, Wi-Fi, CAN, and Bluetooth interfaces were present but inactive at the time. Audio output is provided over HDMI using ALSA, with PipeWire running in the background.

Overall, these reported hardware and software details are consistent with the advertised specifications of the reComputer Industrial R2135-12, including the Compute Module 5 platform, quad-core Cortex-A76 CPU, and 8 GB memory configuration.

Benchmarking

SBC-Bench

Next, I installed and ran sbc-bench, which completed successfully with all checks passing. The results showed no CPU throttling or swapping, stable clock frequencies reaching the advertised maximum, and acceptable background activity throughout the test.

Memory performance was consistent across all Cortex-A76 cores, with memcpy throughput of around 5.1 GB/s and memset throughput of about 8.5 GB/s. Memory latency remained very low within cache ranges (approximately 1.7 ns) and increased smoothly as buffer sizes grew, reaching roughly 120–135 ns for very large buffers.


For compute workloads, the 7-Zip benchmark reports multi-core total scores of roughly 11,100 across repeated runs and a single-threaded score of about 3,121, showing stable and repeatable compression and decompression performance.


In cryptographic tests, OpenSSL results are strong and consistent across all cores, with AES-128-CBC throughput close to 1.88 GB/s, AES-192-CBC around 1.57 GB/s, and AES-256-CBC approximately 1.35 GB/s at larger block sizes.


Overall, the results show that the device delivers stable CPU behavior, solid memory bandwidth, predictable latency scaling, and reliable integer and cryptographic performance.

Benchmarking filesystem with iozone

I also ran filesystem benchmarking using iozone to evaluate storage I/O performance under direct I/O conditions. The test was configured with a 512 MB file size and large record sizes of 1 MB and 16 MB, and the results are shown below.


Sequential read performance was consistent across both record sizes, measuring approximately 320–337 MB/s. Re-read and random read results fell within the same range, indicating stable read throughput under repeated access patterns. Sequential write performance reached approximately 100–111 MB/s for both record sizes, with random write throughput showing similar values. Overall, these results indicate high and consistent read bandwidth, along with moderate and repeatable write performance when tested with direct I/O enabled.

Benchmarking with Geekbench 6.5

Next, I installed and ran Geekbench 6.5.0. The system achieved a single-core score of 865 and a multi-core score of 1,982, reflecting its overall CPU performance. The full benchmark results can be viewed via this Geekbench result link.

The Geekbench 6 single-core test produced an overall score of 865, with most sub-tests falling in the mid-800 range. Stronger single-core performance was observed in Clang compilation, Horizon Detection, Navigation, and PDF Rendering, while general workloads such as Text Processing, HDR, HTML5 browsing, Ray Tracing, and Structure from Motion stayed close to the average. More demanding vision-related tasks, particularly Object Detection and Object Remover, showed noticeably lower scores.

Geekbench single-core score.

The multi-core results reflect how well the system scales across all CPU cores under parallel workloads. High scores were achieved in Asset Compression, Ray Tracing, and Clang compilation, indicating efficient multi-core utilization for compute-heavy tasks. Other workloads, such as PDF Rendering, Structure from Motion, Navigation, Background Blur, Photo Library, and HDR, also benefited from parallel execution, while tasks like Text Processing, HTML5 browsing, and Object Detection showed more moderate scaling.

Geekbench multi-core score.

Overall, the multi-core score is about 2.3× higher than the single-core result, clearly showing the benefit of scaling workloads across multiple CPU cores. Tasks that score in the mid-800 range in single-core mode typically exceed 2,000 in multi-core tests, with some surpassing 3,000, while workloads such as Object Detection and Object Remover improve but remain relatively lower due to their per-core performance limits.

Benchmarking Web Browser Performance with Speedometer 3.1

Next, I used Speedometer 3.1 to benchmark the two web browsers that come preinstalled with the OS image. Both browsers delivered very similar Speedometer scores.

For Chromium, the Speedometer test reported a mean score of around 4.16, with results tightly clustered across runs and a geometric mean execution time of roughly 240 ms, indicating stable browser performance. Most of the runtime was spent in JavaScript- and DOM-heavy tasks, such as complex UI updates and chart rendering.

Firefox delivered very similar results with comparably low run-to-run variation, again corresponding to a geometric mean execution time in the same ~240 ms range.

Speedometer 3.1 score on Firefox.
Speedometer 3.1 score on Chromium.

The small difference between Chromium and Firefox falls within normal benchmark variation and does not indicate a meaningful performance gap; overall, both browsers deliver effectively equivalent web performance on this system.

Benchmarking YouTube Video Playback Performance

Next, I tested YouTube playback by playing 4K videos in full-screen mode on a 4K monitor (3840 × 2160) with YouTube’s Stats for Nerds enabled. Playback was smooth and stable from 144p up to 1080p, with no dropped frames observed. However, resolution options above 1080p were not available.

At lower resolutions (144p–360p), playback ran at modest bitrates and consistently maintained long buffer health, often around 120 seconds, indicating ample decoding and network headroom even when scaled to a large 4K viewport. At higher resolutions (480p–1080p), playback remained reliable within the same display setup, with connection speeds scaling appropriately and buffer health staying in a comfortable range, typically between 30 and 120 seconds. Even at 1080p/30, the system sustained smooth playback without dropped frames, demonstrating stable video decoding and buffering behavior across all commonly used YouTube resolutions.

YouTube Stats for Nerds at 144p.
YouTube Stats for Nerds at 240p.
YouTube Stats for Nerds at 360p.
YouTube Stats for Nerds at 480p.
YouTube Stats for Nerds at 720p.
YouTube Stats for Nerds at 1080p.

Benchmarking Web Browser 3D Rendering Performance Using WebGL

Next, I evaluated web browser 3D rendering performance using the WebGL Aquarium demo in the Chromium web browser.

With the canvas fixed at 1024 × 1024, the results showed clear and predictable performance scaling as scene complexity increased. At low object counts, performance was excellent: the scene ran at a stable 60 fps with 1–100 fish, remained close to real time at 500 fish (~54 fps), and still delivered acceptable smoothness at 1,000 fish (~48 fps). As the number of fish increased further, the frame rate dropped steadily, reaching around 29 fps at 5,000 fish, where animation began to feel noticeably less smooth.

FPS for the WebGL Aquarium demo with 1, 100, 500, 1000, and 5000 fish.

Under heavier loads, GPU limitations became more apparent. At 10,000 fish, the frame rate dropped to roughly 16 fps, decreasing further to about 11 fps at 15,000 fish and 8 fps at 20,000 fish. Extremely dense scenes with 25,000–30,000 fish reduced performance to approximately 6–7 fps, clearly beyond real-time rendering. Overall, these results indicated that Chromium could handle moderate WebGL workloads smoothly, but performance degraded rapidly as draw calls and fragment workload increased, highlighting GPU-bound behavior rather than browser instability. The final graph below compares the frame rates across these scenarios.

FPS for the WebGL Aquarium demo with 10000, 15000, 20000, 25000, and 30000 fish.
Comparison of WebGL Aquarium 3D rendering performance at different fish counts.

Testing the Hailo-8 AI Accelerator

My reComputer Industrial R2135-12 was equipped with a preinstalled Hailo-8 AI accelerator, with all required libraries and packages already installed and ready to use. To test the Hailo Raspberry Pi 5 examples, I simply sourced the setup script at /mnt/hailo-rpi5-examples/setup_env.sh, which activated the predefined virtual environment and configured all the necessary paths for the Hailo-8 runtime. Once the environment was enabled, the system was immediately ready to run Hailo-8 example applications and inference workloads without any additional setup. All of the provided Python example scripts were built on GStreamer and its internal pipelines, which handled video streaming, video processing, and AI inference.

Sourcing the Hailo-8 environment setup script.

To test these examples, I started with the simple_detection.py script, a lightweight detection demo designed to minimize CPU load. Some useful command-line options included --input, which could be used to select the input source (such as a video file or a connected USB webcam), and --show-fps, which overlaid frame rate information on the video output. With the default configuration provided in the Hailo-8 examples, the script runs on a sample video file with a predefined frame rate limit. The console output displays detection details such as confidence scores for each detected object. Overall, the demo ran smoothly at approximately 30 FPS. The following image shows the results of running the simple_detection.py script.

Running the simple detection example.

The image below shows the result of running full detection using the detection.py example script.

Running the full detection example.

Here are the results of running the instance_segmentation.py script, which performed as expected, achieving a similar 30 FPS with the default configuration.

Running the Instance Segmentation example.

The depth_estimation.py example script is based on the SCDepthV3 depth estimation model and also performed very well.

Running the depth estimation example.

The final example I tested was the pose estimation demo, which returned 17 keypoints (HAILO_LANDMARKS) for each detected person. These included landmarks for the nose, eyes, ears, shoulders, elbows, wrists, hips, knees, and ankles. The example script also worked as expected and produced stable, consistent pose estimation results.

Running the pose estimation example.

Use case demonstration – Detecting people's locations in a delimited zone

In this section, I demonstrated the AI performance of the reComputer Industrial R2135-12 by detecting people in a video stream and estimating each person’s location in real-world coordinates. The detected positions were then highlighted on an external 8×16 LED matrix driven by an ESP32-based development board, effectively acting as a simple LED “floor plan” to visualize where people were located.

It’s worth noting that, in a real-time setup, accurate position estimation would normally require an interactive GUI that allows users to click on the video frame to collect reference points for calibration. Due to time constraints, I skipped this step and used a prerecorded video instead, focusing on showcasing the raw computing and AI capabilities of the reComputer itself.

Background theory

The following figure briefly illustrates the concept of recovering real-world coordinates from image points. In this example, we assume that the ground plane is known and that the real-world coordinates of two reference points, A and B, are given. Their corresponding locations in the image can also be identified, denoted as point-a and point-b. With this information, and by knowing the camera frustum (shown as the yellow triangle in the figure), the position and orientation of the camera can be estimated. Based on this setup, if an object lies on the ground plane and its image location can be identified (for example, point-c), its real-world coordinates can be recovered by back-projecting a ray (shown in orange) from the camera center through point-c in the image and finding the intersection of this ray with the ground plane.

Concept of back-projection.

The following image shows an example of how the tripod base coordinates could be estimated when the camera intrinsics were known, together with four ground control points. In this setup, the reference origin (0, 0) was defined at the center point among points A–D, with each floor tile measuring 60 × 60 cm. The actual location of the tripod base was approximately (0, −60, 0) in world coordinates. Based on this configuration, the estimated tripod position (0.3, -59.4, 0.0) closely matched the expected real-world location, demonstrating the accuracy of this simple coordinate recovery approach.

Example of tripod base coordinate estimation.

Setting up the scene and estimating the camera pose

First, I calibrated the camera using a set of chessboard images and OpenCV’s camera calibration functions to obtain the intrinsic parameters (fx, fy, cx, cy) along with the lens distortion coefficients (k1, k2, p1, p2, k3).
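For reference, here is a minimal sketch of how such a chessboard-based calibration can be done with OpenCV. The pattern size, square size, and image folder below are illustrative assumptions rather than the exact values used for this review.

import glob
import cv2
import numpy as np

PATTERN = (9, 6)        # inner chessboard corners (assumed)
SQUARE_SIZE = 2.5       # square size in cm (assumed)

# 3D positions of the chessboard corners on the board's own Z = 0 plane
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.jpg"):   # assumed folder of chessboard photos
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K contains fx, fy, cx, cy; dist contains k1, k2, p1, p2, k3
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)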


Next, I prepared the testing scene as shown in the image below, where the blue tape marked the reference origin (0, 0). The +X axis was defined along the A–B direction, while the +Y axis was defined along the A–D direction. I then recorded a video of the scene and manually extracted a frame to identify several reference points. Using this information, I estimated the camera transformation with the help of Python functions based on OpenCV’s Perspective-n-Point (PnP) pose estimation methods.

Setting up the testing environment.
Reference points.

Below is the function I used to estimate the camera pose.


This is the estimated camera rotation and translation information.


The rotation matrix was more difficult to interpret directly, while the translation vector was much easier to understand. In this case, it indicated that the camera was positioned approximately 2.1 m away from point A along the negative X-axis (toward the bottom of the image), about 1 m along the Y-axis, and at a height of roughly 1.2 m above the ground plane relative to point A.
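For readers who want to reproduce this step, below is a minimal sketch of a PnP-based pose estimation function, not the exact code used in the demo. It assumes the intrinsic matrix K and distortion coefficients dist from the calibration, reference points given in cm on the Z = 0 ground plane, and their corresponding pixel coordinates.

import cv2
import numpy as np

def estimate_camera_pose(world_pts, image_pts, K, dist):
    # world_pts: Nx3 ground reference points (Z = 0); image_pts: Nx2 pixel coordinates
    world_pts = np.asarray(world_pts, dtype=np.float64)
    image_pts = np.asarray(image_pts, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)     # 3x3 rotation matrix
    cam_pos = -R.T @ tvec          # camera centre expressed in world coordinates
    return R, tvec, cam_pos

Computing -R.T @ tvec gives the camera centre in world coordinates, which is a convenient way to sanity-check the estimated pose against the physical camera placement.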

Foot location estimation

The next step was to estimate the 3D position of a detected person’s foot. For simplicity, I used the Hailo-8 pose_estimation.py example to detect human body landmarks and extracted the 2D image coordinates of the left and right ankles. These image coordinates were then converted into world coordinates using the back-projection technique described earlier.
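As a small illustration, assuming the pose model returns its 17 keypoints as (x, y) pixel coordinates in the standard COCO ordering, the ground-contact point could be taken as the midpoint between the two ankles, roughly as follows (the data structure here is an assumption, not the actual Hailo output format).

LEFT_ANKLE, RIGHT_ANKLE = 15, 16   # COCO keypoint indices (assumed ordering)

def foot_pixel(keypoints):
    # keypoints: list of 17 (x, y) pixel coordinates for one detected person
    lx, ly = keypoints[LEFT_ANKLE]
    rx, ry = keypoints[RIGHT_ANKLE]
    return (lx + rx) / 2.0, (ly + ry) / 2.0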

The following two functions were used to construct a ray from the camera center and compute its intersection with the ground plane.
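A minimal sketch of what such functions could look like, reusing K, dist, R, and tvec from the pose estimation above, is shown here for reference; this is an illustrative reconstruction, not the original code.

import cv2
import numpy as np

def pixel_to_world_ray(u, v, K, dist, R, tvec):
    # Undistort the pixel and convert it to normalized camera coordinates
    pt = cv2.undistortPoints(np.array([[[u, v]]], dtype=np.float64), K, dist)
    d_cam = np.array([pt[0, 0, 0], pt[0, 0, 1], 1.0])   # ray direction in the camera frame
    origin = (-R.T @ tvec).ravel()                       # camera centre in world coordinates
    direction = R.T @ d_cam                              # ray direction in world coordinates
    return origin, direction / np.linalg.norm(direction)

def intersect_ground_plane(origin, direction, z=0.0):
    # Intersect the ray with the horizontal plane Z = z (the floor)
    t = (z - origin[2]) / direction[2]
    return origin + t * direction

Feeding the ankle pixel coordinates through these two functions yields a point on the floor expressed in world coordinates.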


Based on this approach, the estimated foot position was (202.0, 44.2, 0.0) in the world reference frame. This result was reasonable, as I was standing very close to point B, whose known coordinate is (240, 0).

Result of foot location estimation.

Displaying the detected foot location using an ESP32 and LED matrices

The final step was to visualize the detected person’s location using two 8 × 8 LED matrices. The LED row and column indices were calculated by linearly scaling the estimated world coordinate range (approximately 480.0 × 240.0 cm) to match the 8 × 16 LED grid. These indices were then sent to an ESP32 over a USB connection.
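A short sketch of this mapping and the serial transfer is shown below. The serial port name, baud rate, message format, and the assignment of the 480 cm axis to the 16 LED columns are assumptions made for illustration, since the actual firmware protocol is not detailed here.

import serial   # pyserial

GRID_ROWS, GRID_COLS = 8, 16     # two 8x8 matrices side by side
AREA_X, AREA_Y = 480.0, 240.0    # monitored floor area in cm (assumed axis mapping)

def world_to_led(x, y):
    # Linearly scale world coordinates (cm) to LED row/column indices, clamped to the grid
    col = min(GRID_COLS - 1, max(0, int(x / AREA_X * GRID_COLS)))
    row = min(GRID_ROWS - 1, max(0, int(y / AREA_Y * GRID_ROWS)))
    return row, col

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)   # assumed port and baud rate
row, col = world_to_led(202.0, 44.2)                      # foot position from the example above
ser.write(f"{row},{col}\n".encode())                      # assumed message format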

For this demonstration, I used a KidBright 32 V1.3 board, which includes an onboard HT16K33 LED matrix controller, making it convenient for quick prototyping. The image below shows the system running in real time with input from a USB webcam, where the illuminated LEDs correspond to the estimated person location. A video of this demo is included at the top of this review, and it can also be viewed directly via this YouTube link.

Running the demo application.

Temperature and heat distribution

The final test focused on temperature behavior and heat distribution. I first ran the device in an idle state for 5 minutes and then captured thermal images using a FLIR E4 thermal camera.

Running the device under full load.

Under idle conditions, the average surface temperature was around 33 °C, with an ambient temperature of approximately 25 °C, as shown below.

Temperature in idle state.

Next, I tested the device under a full workload by running the WebGL Aquarium demo with 30,000 fish, playing a 1080p YouTube video, and running an AI example application simultaneously. The system was left running in this state for 10 minutes before capturing the thermal images shown below.

Temperature under high load.

Based on the thermal images, the device exhibited a consistent and well-distributed heat profile across different operating states. Under heavier workloads, the enclosure surface temperature peaked at approximately 35–37 °C, with the warmest areas concentrated on the top panel and upper side surfaces. This indicated that the internal heat sources—most likely the Compute Module and the AI accelerator—were efficiently transferring heat to the aluminum enclosure, which functioned as a passive heatsink. Notably, no localized hot spots were observed, suggesting effective internal thermal coupling and heat spreading.

Temperature under high load from different view angles.

Conclusions

The reComputer AI Industrial R2135-12 offers a wide range of wired and wireless connectivity options. The pre-flashed system image and preinstalled software, including the Hailo-8 AI acceleration examples, saved me a significant amount of setup time. Its AI performance averaged around 30 FPS, which is more than sufficient for many of my research applications. In this review, I encountered only one minor issue: the official product page was somewhat confusing, as the images shown on the website differed from the unit I received, which initially led me to consult the wrong user manual.

We’d like to thank Seeed Studio for sending the reComputer AI Industrial R2135-12 for review. It is available for purchase from Seeed Studio for $279.00 for the 8 GB RAM / 32 GB eMMC (26 TOPS) version reviewed here, or you can select the 16 GB RAM variant for $339 on the same page. The Raspberry Pi CM5 Edge AI PC may eventually become available on the company’s Amazon and AliExpress stores.
