docker News - CNX Software - Embedded Systems News

CamThink NeoEyes NE301 – An ultra-low-power, STM32N6-based Edge AI camera

CamThink NeoEyes 301

CamThink NeoEyes NE301 is an ultra-low-power Edge AI camera built around the STM32N6 Arm Cortex-M55 MCU with a Neural-ART NPU that “offers significantly enhanced features and performance” compared to the company’s earlier ESP32-S3-based NeoEyes NE101. The camera ships with a 4MP MIPI CSI camera sensor by default, but USB camera sensors are also supported. It also features 64MB PSRAM, 128MB HyperFlash, WiFi 6 and Bluetooth 5.4 wireless connectivity, optional support for a 4G LTE module (global or US), audio wafer connectors, USB-C and UART debug, a 16-pin GPIO header, and support for USB, battery, or PoE power.

CamThink NeoEyes NE301 specifications:

- MCU – STMicro STM32N6
  - Core – Arm 32-bit Cortex-M55 @ up to 800 MHz with the Arm Helium (MVE) vector extension
  - GPU – Neo-Chrom 2.5D GPU, Chrom-ART Accelerator (DMA2D)
  - NPU – ST Neural-ART accelerator @ 1 GHz, up to 600 GOPS; 3 TOPS/W enabling fanless operation
  - BPU – Hardware-accelerated H.264 […]

Axelera Metis M.2 Max Edge AI module doubles LLM and VLM processing speed

Metis M.2 Max

Axelera AI’s Metis M.2 Max is an M.2 module based on an upgraded Metis AI processing unit (AIPU) delivering twice the memory bandwidth of the current Metis M.2 module for compute-intensive Edge AI inference applications such as large language models (LLMs) and vision language models (VLMs). The new Metis M.2 Max also offers a slimmer profile, advanced thermal management features, and additional security capabilities. It is equipped with up to 16 GB of memory, and versions for both a standard operating temperature range (-20°C to +70°C) and an extended operating temperature range (-40°C to +85°C) will be offered. These enhancements make the Metis M.2 Max ideal for applications in industrial manufacturing, retail, security, healthcare, and public safety.

Axelera AI Metis M.2 Max specifications and host requirements:

- Accelerator – Metis AIPU
- System Memory – 1GB, 4GB, 8GB, or 16GB
- Host Interface – M.2 2280 M-key edge connector with PCIe Gen 3.0 […]
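Doubled memory bandwidth maps almost directly to LLM throughput because autoregressive decoding streams every weight from memory once per generated token, making it bandwidth-bound rather than compute-bound. A back-of-envelope sketch with illustrative numbers (not Axelera’s actual figures):

```python
# Back-of-envelope: why doubling memory bandwidth roughly doubles LLM decode
# throughput. Per-token decode reads all weights once, so the upper bound is
# tokens/s ≈ effective bandwidth / model size. All numbers below are
# illustrative assumptions, not Axelera specifications.

def decode_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Bandwidth-bound upper limit on autoregressive decode rate."""
    return bandwidth_gb_s / model_size_gb

model_gb = 3.5  # e.g. a 7B-parameter model quantized to ~4 bits/weight
base = decode_tokens_per_second(50.0, model_gb)     # hypothetical baseline bandwidth
doubled = decode_tokens_per_second(100.0, model_gb)  # 2x bandwidth
print(f"baseline ~{base:.1f} tok/s, doubled bandwidth ~{doubled:.1f} tok/s")
```

The same scaling is why extra on-module memory matters: it lets larger models stay resident instead of paging weights over the host interface.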

Xerxes Pi – A Raspberry Pi CM4/CM5 carrier board with a rack-friendly design (Crowdfunding)

Xerxes Pi A cross vendor compute module carrier board

Designed by Rapid Analysis in Australia, the Xerxes Pi is a cross-vendor compute module carrier board that fits into a 1U rack and supports Raspberry Pi CM4/CM5, Radxa CM5, Banana Pi CM4/CM5, and Orange Pi CM4/CM5 compute modules. At just one-third the size of a Nano-ITX board (120 × 40 mm), it’s ideal for home labs and small businesses looking for a low-cost way to run Docker containers and other open-source software. For storage, the carrier board includes a microSD card slot, while an M.2 E-key slot adds support for accelerators or peripherals. Additionally, it features an I²C/SPI header and optional PoE via add-on boards or splitters. The thermally efficient design includes ventilated enclosures, optional PLA or metal heatsinks, and open-source 3D-printable rack cases (single- or multi-board). With open schematics and 3D files, the Xerxes Pi targets DIY electronics, clustered computing, edge servers, […]
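As a concrete example of the kind of self-hosted workload such a board targets, a minimal Docker Compose file for an arm64-compatible service could look like the sketch below; the service and image choices are illustrative, not from Rapid Analysis:

```yaml
# Hypothetical docker-compose.yml for a home-lab service on a CM4/CM5
# carrier board. nginx:alpine is published as a multi-arch image with
# arm64 builds, so it runs unmodified on these compute modules.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"          # host port 8080 -> container port 80
    restart: unless-stopped # survive reboots of an always-on edge server
```

Brought up with `docker compose up -d`, this serves a page on port 8080 of the board.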

Compulab MCM-iMX95 – A solder-down NXP i.MX 95 SoM

NXP i.MX 95 SoM QFN SMD package

Compulab MCM-iMX95 is yet another NXP i.MX 95 system-on-module (SoM), whose main selling point is being offered as a solder-down QFN package with SMD pads. The hexa-core Cortex-A55 Edge AI module ships with 4GB to 16GB LPDDR5 memory, 16GB to 128GB eMMC flash, an NXP PF0900 PMIC, and an RTC. All I/Os are exposed through 180 QFN SMD pads, including LVDS and MIPI DSI display interfaces, two MIPI CSI camera interfaces, two Gigabit plus one 10 Gbps Ethernet MACs, two PCIe Gen3 x1 interfaces, and more.

Compulab MCM-iMX95 specifications:

- SoC – NXP i.MX 95
  - CPU – Up to 6x Arm Cortex-A55 cores @ up to 1.8 GHz
  - Real-time co-processors – Arm Cortex-M7 @ 800MHz and Cortex-M33 @ 250MHz
  - 2D/3D graphics acceleration
    - 3D – Arm Mali GPU with OpenGL ES 3.2, Vulkan 1.2, OpenCL 3.0
    - 2D GPU
  - Video Encode/Decode – 4Kp30 H.265 and H.264
  - AI/ML – 2 TOPS eIQ Neutron NPU
- System […]

Huginn is a self-hosted, open-source alternative to IFTTT and Zapier

Huginn open source automation tool

IFTTT and Zapier automation tools enable users to create automated workflows connecting various apps, services, and devices. They are relatively easy to use, but their free tiers are now rather limited, and you have to rely on the cloud. Huginn is a self-hosted, open-source alternative to IFTTT or Zapier that can work on your own network without cloud connectivity. Andrew Cantino released the first version of the project 12 years ago (in 2013), and it now has a large community of developers and users. Somehow, I only found out about Huginn when XDA Developers wrote about it earlier this week. Let’s have a look. Developers describe Huginn as a system for building agents that perform automated tasks for you online, and they view it as a hackable version of IFTTT or Zapier hosted on the user’s server with full control over the data. Here are some of the […]
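Huginn itself is commonly self-hosted with Docker. A minimal Compose sketch, assuming the project’s published `huginn/huginn` image and its default web UI port 3000 (verify both against the Huginn repository’s Docker documentation before deploying):

```yaml
# Minimal sketch for self-hosting Huginn with Docker Compose.
# The all-in-one huginn/huginn image bundles the app and a database,
# which is convenient for trying it out on a home network.
services:
  huginn:
    image: huginn/huginn
    ports:
      - "3000:3000"        # web UI at http://<host>:3000
    restart: unless-stopped
```

For production use, the project recommends splitting out the database and setting proper secrets; this fragment is only a quick way to evaluate the tool locally.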

Firefly’s CSB1-N10 series AI cluster servers can deliver up to 1000 TOPS of AI power with Rockchip or NVIDIA Jetson Modules

CSB1 N10 series AI cluster servers

Firefly has recently introduced the CSB1-N10 series AI cluster servers designed for applications such as natural language processing, robotics, and image generation. These 1U rack-mounted servers are ideal for data centers, private servers, and edge deployments. The servers have multiple computing nodes, featuring either energy-efficient processors (Rockchip RK3588, RK3576, or SOPHON BM1688) or high-performance NVIDIA Jetson modules (Orin Nano, Orin NX). With 60 to 1000 TOPS of AI performance, the CSB1-N10 servers can handle the demands of large AI models, including language models like Gemma-2B and Llama3, as well as visual models like EfficientVIT and Stable Diffusion.

CSB1-N10 series specifications – All CSB1-N10 AI servers have the same interfaces, and the only differences are the CPU, memory, storage, multimedia, AI capabilities, and related software support. So it’s likely Firefly has made Rockchip system-on-modules compatible with the NVIDIA Jetson SO-DIMM form factor, and indeed we previously noted that Firefly designed Core-1688JD4, Core-3576JD4, or Core-3588JD4 […]

Firefly ROC-RK3576-PC low-profile Rockchip RK3576 SBC supports AI models like Gemma-2B, LlaMa2-7B, ChatGLM3-6B

Firefly ROC RK3576 PC SBC

Firefly ROC-RK3576-PC is a low-power, low-profile SBC built around the Rockchip RK3576 octa-core Cortex-A72/A53 SoC, which we also find in the Forlinx FET3576-C, the Banana Pi BPI-M5, and the Mekotronics R57 Mini PC. In terms of power and performance, this SoC falls in between the Rockchip RK3588 and RK3399 SoCs and can be used for AIoT applications thanks to its 6 TOPS NPU. Termed a “mini computer” by Firefly, this SBC supports up to 8GB LPDDR4/LPDDR4X memory and 256GB of eMMC storage. Additionally, it offers Gigabit Ethernet, WiFi 5, and Bluetooth 5.0 for connectivity. An M.2 2242 PCIe/SATA socket and a microSD card slot can be used for storage, and the board also offers HDMI and MIPI DSI display interfaces, two MIPI CSI camera interfaces, a few USB ports, and a 40-pin GPIO header.

Firefly ROC-RK3576-PC specifications:

- SoC – Rockchip RK3576
  - CPU – 4x Cortex-A72 cores @ 2.2 GHz, 4x Cortex-A53 cores @ 1.8 GHz, Arm Cortex-M0 MCU @ 400 MHz
  - GPU […]

Testing AI and LLM on Rockchip RK3588 using Mixtile Blade 3 SBC with 32GB RAM

mixtile blade 3 review RK3588 AI LLM

We were interested in testing artificial intelligence (AI) and specifically large language models (LLMs) on the Rockchip RK3588 to see how the GPU and NPU could be leveraged to accelerate them and what kind of performance to expect. We had read that LLMs can be compute- and memory-intensive, so we looked for a Rockchip RK3588 SBC with 32GB of RAM, and Mixtile – a company that develops hardware solutions for various applications including IoT, AI, and industrial gateways – kindly offered us a sample of their Mixtile Blade 3 pico-ITX SBC with 32GB of RAM for this purpose. While the review focuses on using the RKNPU2 SDK with computer vision samples running on the 6 TOPS NPU, and a GPU-accelerated LLM test (since the NPU implementation is not ready yet), we also went through an unboxing to check out the hardware and a quick guide showing how to get started […]
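To see why 32GB of RAM matters for LLM experiments, a quick sketch of the memory needed just to hold model weights at common quantization levels (weights only; the KV cache and runtime overhead come on top of this):

```python
# Approximate memory footprint of LLM weights at different quantization
# levels: bytes = parameters * bits_per_weight / 8. This covers weights
# only; KV cache, activations, and runtime overhead are extra.

def weights_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Gigabytes needed to store model weights at the given precision."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B parameters @ {bits}-bit: ~{weights_gb(7, bits):.1f} GB")
# A 7B model fits comfortably at any of these precisions on a 32GB board,
# leaving headroom for the OS, KV cache, and larger models.
```

At FP16 a 7B-parameter model already needs about 14 GB for weights alone, which is why 8GB and 16GB boards push users toward aggressive quantization.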

Banana Pi BPI-R4 Pro networking SBC