Allwinner V853 Arm Cortex-A7 + RISC-V SoC comes with 1 TOPS NPU for AI Vision applications

Allwinner V853

Allwinner V853 SoC combines an Arm Cortex-A7 core with a Xuantie E907 RISC-V core, and a 1 TOPS NPU for cost-sensitive AI Vision applications such as smart door locks, smart access control, AI webcams, tachographs, and smart desk lamps. Manufactured with a 22nm process, the SoC comes with an ISP (image signal processor) and the Allwinner Smart video engine capable of up to 5M @ 30fps H.265/H.264 encoding and 5M @ 25fps H.264 decoding, and offers parallel CSI and MIPI CSI camera interfaces, as well as MIPI DSI and RGB display interfaces.

Allwinner V853 specifications:
- CPU
  - Arm Cortex-A7 CPU core @ 1 GHz with 32 KB I-cache, 32 KB D-cache, and 128 KB L2 cache
  - Alibaba Xuantie E907 RISC-V core with 16 KB I-cache and 16 KB D-cache
- NPU (Neural-network Processing Unit) – Up to 1 TOPS for V853 and 0.8 TOPS for V853S, embedded 128KB internal buffer, support for TensorFlow, Caffe, […]

Axiomtek AIE900-XNX – A 5G connected fanless Edge AI system for AMR, AGV, and computer vision

Axiomtek AIE900-XNX

Axiomtek AIE900-XNX is a fanless Edge AI computing system powered by an NVIDIA Jetson Xavier NX system-on-module and designed for autonomous mobile robots (AMR), automated guided vehicles (AGV), and other computer vision applications. The system delivers up to 21 TOPS thanks to the 6-core NVIDIA Carmel Armv8.2 (64-bit) processor, NVDLA accelerators, and 384-core NVIDIA Volta architecture GPU found in the Jetson Xavier NX module. The AIE900-XNX Edge AI computer also comes with a 5G module for high-speed cellular connectivity and supports SerDes, PoE, and MIPI CSI cameras for video processing.

Axiomtek AIE900-XNX specifications:
- NVIDIA Jetson Xavier NX system-on-module with
  - CPU – 6-core NVIDIA Carmel Armv8.2 64-bit CPU with 6 MB L2 + 4 MB L3 cache
  - GPU – 384-core NVIDIA Volta GPU with 48 Tensor Cores
  - AI Accelerator – 2x NVDLA
  - System Memory – 8GB 128-bit LPDDR4x onboard
  - Storage – 16GB eMMC flash
- Storage – M.2 Key M 2280 with PCIe […]

Orbbec Persee+ 3D AI camera runs Ubuntu or Android on Amlogic A311D processor

Orbbec Persee+ 3D AI camera

Orbbec Persee+ is a 3D depth camera running Linux with AI capabilities thanks to an Amlogic A311D hexa-core processor equipped with a 5 TOPS NPU (Neural-network Processing Unit). The Persee+ is designed to help researchers, engineers, and hobbyists implement advanced 3D imaging applications. Orbbec has been around for several years, with the first product we covered here being the Orbbec Persee 3D depth camera running Ubuntu or Android on a Rockchip RK3288 processor, unveiled in 2015. Last year, the company introduced the Zora P1 Amlogic A311D development board for Orbbec 3D cameras, so in a way, the Orbbec Persee+ is born from the work done on the Persee camera and Zora P1 over the years.

Orbbec Persee+ 3D AI camera specifications:
- SoC – Amlogic A311D hexa-core processor with 4x Cortex-A73 cores, 2x Cortex-A53 cores, Arm Mali-G52 MP4 GPU, 5 TOPS NPU
- System Memory – 4GB RAM
- Storage – 8GB (specs) / […]

Arm Cortex-M85 is faster than Cortex-M7, offers higher ML performance than Cortex-M55

Arm Cortex M85

Arm has introduced the Cortex-M85, a new MCU-class core that offers higher integer performance than the Cortex-M7 and higher machine learning performance than the Helium-equipped Cortex-M55. The new Cortex-M85 core is designed for developers requiring increased performance for their Cortex-M powered products without going to Cortex-A cores, while keeping important features such as determinism, short interrupt latencies, and advanced low-power management modes found in all Cortex-M cores.

Arm Cortex-M85 key features and specifications:
- Architecture – Armv8.1-M
- Performance efficiency – 6.28 CoreMark/MHz and 3.13/4.52/8.76 DMIPS/MHz (1. “ground rules” in the Dhrystone documentation, 2. inlining of functions, 3. simultaneous (“multi-file”) compilation)
- Bus interfaces
  - AMBA 5 AXI 64-bit main system bus (compatible with AXI4 IPs)
  - AMBA 5 AHB 32-bit peripheral bus
  - AMBA 5 AHB 64-bit TCM access bus (subordinate port)
- Pipeline – 7-stage (for main integer pipeline)
- Security
  - Arm TrustZone technology
  - PACBTI extension (Pointer Authentication, Branch Target Identification) helps […]

reServer Jetson-50-1-H4 is an AI Edge server powered by NVIDIA Jetson AGX Orin 64GB

Jetson AGX Orin 64GB AI inference server

reServer Jetson-50-1-H4 is an AI inference edge server powered by the Jetson AGX Orin 64GB module with up to 275 TOPS of AI performance, and based on the same form factor as Seeed Studio’s reServer 2-bay multimedia NAS introduced last year with an Intel Core Tiger Lake single board computer. The 12-core Arm server comes with 32GB LPDDR5, a 256GB NVMe SSD pre-loaded with the JetPack SDK and the open-source Triton Inference Server, two SATA bays for 2.5-inch and 3.5-inch drives, up to 10 Gbps Ethernet, dual 8K video output via HDMI and DisplayPort, USB 3.2 ports, and more.

reServer Jetson-50-1-H4 (preliminary) specifications:
- SoM – Jetson AGX Orin module with
  - CPU – 12-core Arm Cortex-A78AE v8.2 64-bit processor with 3MB L2 + 6MB L3 cache
  - GPU / AI accelerators
    - NVIDIA Ampere architecture GPU with 2048 NVIDIA CUDA cores and 64 Tensor Cores @ 1.3 GHz
    - DL Accelerator – 2x NVDLA v2.0
    - Vision Accelerator […]
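Since the SSD ships with the Triton Inference Server pre-installed, a client on the network can send inference requests over Triton's standard HTTP/REST (KServe v2) API. Below is a minimal Python sketch of that workflow; the default HTTP port 8000, the model name "my_model", and its input tensor layout are illustrative assumptions, not details published by Seeed Studio.

```python
# Hedged sketch: talk to a Triton Inference Server over its HTTP/REST (KServe v2) API.
# Assumes Triton listens on the default port 8000 and that a hypothetical model
# named "my_model" with a single FP32 input of shape [1, 4] is loaded.
import requests

TRITON_URL = "http://localhost:8000"

# 1. Check that the server is up and ready to serve requests
ready = requests.get(f"{TRITON_URL}/v2/health/ready")
print("Server ready:", ready.status_code == 200)

# 2. Send a simple inference request to the hypothetical "my_model"
payload = {
    "inputs": [
        {
            "name": "input_0",        # input tensor name defined by the model's config
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],  # row-major tensor contents
        }
    ]
}
resp = requests.post(f"{TRITON_URL}/v2/models/my_model/infer", json=payload)
print(resp.json())                    # model outputs come back under the "outputs" key
```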

NVIDIA NVDLA AI accelerator driver submitted to mainline Linux

NVDLA

A large patchset has been submitted to mainline Linux for an NVIDIA NVDLA AI accelerator Direct Rendering Manager (DRM) driver, accompanied by an open-source user mode driver. The NVDLA (NVIDIA Deep Learning Accelerator) can be found in recent Jetson modules such as the Jetson AGX Xavier and Jetson AGX Orin, and since NVDLA was made open-source hardware in 2017, it can also be integrated into third-party SoCs such as the StarFive JH7100 Vision SoC and Allwinner V831 processor. I actually assumed everything was open-source already, since we were told that NVDLA was a “complete solution with Verilog and C-model for the chip, Linux drivers, test suites, kernel- and user-mode software, and software development tools all available on Github’s NVDLA account”, and the inference compiler was open-sourced in September 2019. But apparently not, as developer Cai Huoqing submitted a patchset with 23 files changed, 13,243 insertions, and the following short description: The NVIDIA Deep […]

ROC-RK3588S-PC is the first Rockchip RK3588S SBC, supports up to 32GB RAM

ROC-RK3588S-PC

Rockchip RK3588S processor, a cost-down version of the Rockchip RK3588 SoC with fewer interfaces, has made its way into the Firefly ROC-RK3588S-PC SBC (single board computer), about the size of a credit card and equipped with up to 32GB RAM. The compact SBC also comes with up to 128GB eMMC flash, and offers support for NVMe storage, up to four video outputs through HDMI, USB-C, and MIPI DSI interfaces, Gigabit Ethernet, USB 3.0, two MIPI CSI camera interfaces, and more.

Firefly ROC-RK3588S-PC specifications:
- SoC – Rockchip RK3588S octa-core processor with 4x Cortex-A76 cores @ up to 2.4 GHz, 4x Cortex-A55 cores, Arm Mali-G610 MP4 quad-core GPU with OpenGL ES 3.2 / OpenCL 2.2 / Vulkan 1.1 support, 6 TOPS NPU, an 8Kp60 H.265/VP9/AVS2 video decoder, a 4Kp60 decoder, and an 8Kp30 H.265/H.264 video encoder
- System Memory – 4GB, 8GB, 16GB, or 32GB LPDDR4/LPDDR4x/LPDDR5
- Storage
  - 16GB, 32GB, 64GB, or 128GB eMMC flash
  - M.2 (PCIe 2.0) socket for […]

FOMO (Faster Objects, More Objects) enables real-time object detection on low-end embedded systems

FOMO face detection

FOMO used to stand for “Fear Of Missing Out” in my corner of the Internet, but Edge Impulse’s FOMO is completely different, as the “Faster Objects, More Objects” model is designed to lower the footprint and improve the performance of object detection on resource-constrained embedded systems. The company says FOMO is 30x faster than MobileNet SSD and works on systems with less than 200KB of RAM available. Edge Impulse explains the FOMO model sits between basic image classification (e.g. is there a face in the image?) and more complex object detection (how many faces are in the image, if any, and where and what size are they?). It is essentially a simplified form of object detection that reports the position of each object in the image, but not its size. So instead of seeing the usual bounding boxes while the model is running, the face position will be […]
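To make the "positions, not sizes" idea concrete, here is a minimal Python sketch of FOMO-style post-processing, under the assumption that the model outputs a coarse per-cell probability grid at a fraction of the input resolution: confident cells are grouped and reduced to centroids. This only illustrates the concept; it is not Edge Impulse's actual implementation, and the grid shape, threshold, and cell size are assumed values.

```python
# Hedged sketch: turn a coarse per-cell class-probability grid into object centroids
# (positions only, no bounding-box sizes), as an illustration of the FOMO-style output.
import numpy as np
from scipy import ndimage

def centroids_from_heatmap(heatmap, threshold=0.5, cell_size=8):
    """heatmap: (H, W) probabilities for one class on a grid `cell_size` times smaller
    than the input image. Returns (x, y) centroids in input-image pixel coordinates."""
    mask = heatmap >= threshold                      # keep confident cells only
    labels, n = ndimage.label(mask)                  # group adjacent cells into objects
    centers = ndimage.center_of_mass(heatmap, labels, list(range(1, n + 1)))
    # center_of_mass returns (row, col) in grid units; convert to image pixels
    return [((c + 0.5) * cell_size, (r + 0.5) * cell_size) for r, c in centers]

# Example: a fake 12x12 grid (96x96 input at 1/8 resolution) with two detected "faces"
grid = np.zeros((12, 12))
grid[2, 3] = 0.9
grid[8, 9] = grid[8, 10] = 0.8
print(centroids_from_heatmap(grid))  # ≈ [(28.0, 20.0), (80.0, 68.0)] as (x, y) pixels
```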
