Lyra V2 open-source audio codec gets faster, offers higher quality, and supports more platforms

Lyra V2 vs Opus

Lyra V2 is an update to the open-source Lyra audio codec introduced by Google last year, with a new architecture that offers scalable bitrates, better performance, higher-quality audio, and support for more platforms. Under the hood, Lyra V2 is based on SoundStream, an end-to-end neural audio codec with a “residual vector quantizer” (RVQ) sitting before and after the transmission channel, which can change the audio bitrate at any time by selecting the number of quantizers to use. Three bitrates are supported: 3.2 kbps, 6 kbps, and 9.2 kbps. Lyra V2 leverages artificial intelligence, and a TensorFlow Lite model enables it to run on Android phones and Linux, as well as macOS and Windows, although support for the latter two is experimental. iOS and other embedded platforms are not supported at this time, but this may change in the future. It gets more interesting once we start to […]
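The bitrate scalability comes from how a residual vector quantizer works: each stage quantizes the residual left over by the previous stage, so the encoder can simply drop trailing stages to lower the bitrate. A minimal NumPy sketch of the idea follows; the random (unlearned) codebooks and the zero “pass-through” codeword are my own simplifications for illustration, not Lyra V2’s actual design, where the codebooks are trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebooks(num_stages, codebook_size, dim):
    """Hypothetical random codebooks; in SoundStream these are learned.
    Entry 0 of each codebook is the zero vector, so a stage can pass
    its residual through unchanged rather than make it worse."""
    books = []
    for _ in range(num_stages):
        cb = rng.normal(size=(codebook_size, dim))
        cb[0] = 0.0
        books.append(cb)
    return books

def rvq_encode(x, codebooks, n_active):
    """Encode with only the first n_active quantizers: each stage
    picks the codeword nearest its residual and subtracts it."""
    residual = x.copy()
    indices = []
    for cb in codebooks[:n_active]:
        i = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(i)
        residual = residual - cb[i]
    return indices

def rvq_decode(indices, codebooks):
    # Reconstruction is just the sum of the selected codewords.
    return sum(cb[i] for cb, i in zip(codebooks, indices))

dim, cb_size = 8, 256
codebooks = build_codebooks(4, cb_size, dim)
frame = rng.normal(size=dim)  # stand-in for one frame of encoder embeddings

# Using more quantizers costs more bits but cannot increase the error.
errors = [float(np.linalg.norm(frame - rvq_decode(rvq_encode(frame, codebooks, n), codebooks)))
          for n in (1, 2, 4)]
bits_per_frame = [n * int(np.log2(cb_size)) for n in (1, 2, 4)]  # 8 bits per stage
```

Because later stages only refine what earlier stages already sent, a receiver can decode any prefix of the index stream, which is what lets the codec switch bitrates mid-stream.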

Ztachip open-source RISC-V AI accelerator performs up to 50 times faster

Ztachip RISC-V AI accelerator

Ztachip is an open-source RISC-V accelerator for vision and AI edge applications running on low-end FPGA devices or custom ASICs. It is said to perform 20 to 50 times faster than non-accelerated RISC-V implementations, and to also outperform RISC-V cores with vector extensions (no numbers were provided here). Ztachip, pronounced “zeta-chip”, is not tied to a particular architecture, but the example code features a RISC-V core based on the VexRiscv implementation, and can accelerate common computer vision tasks such as edge detection, optical flow, motion detection, and color conversion, as well as TensorFlow AI models without retraining. The open-source AI accelerator has been tested on the Digilent Arty A7-100T FPGA board in combination with a PMOD VGA module to connect to a display and an OV7670 VGA camera module. You can then build the sample found on GitHub with the free Xilinx Vivado WebPACK edition and flash it to the board […]

mini PCIe module features Rockchip RK1808K SoC with 3.0 TOPS NPU

Toybrick RK1808 mPCIe AI accelerator card

Rockchip RK1808 SoC with a built-in 3.0 TOPS AI accelerator has been around since 2019, and we’ve seen it in USB compute sticks, SBCs, and even in the Pine64 SoEdge-RK1808 SO-DIMM module, but somehow never in the more widely used M.2 or mPCIe form factors. Toybrick TB-RK1808M0 changes that and offers the Rockchip RK1808K SoC coupled with 1GB RAM and an 8GB eMMC flash in a mini PCIe module that exposes USB 3.0, USB 2.0, UART, and GPIO signals.

Toybrick TB-RK1808M0 specifications:

- SoC – Rockchip RK1808K
  - CPU – Dual-core Cortex-A35 processor @ up to 1.4 GHz
  - AI Accelerator – 3.0 TOPS NPU for INT8 inference (300 GOPS for INT16, 100 GFLOPS for FP16)
  - VPU – 1080p60 H.264 decode, 1080p30 H.264 encode
- System Memory – 1GB DDR
- Storage – 8GB eMMC flash
- Host interface – Mini PCIe edge connector with USB 3.0, USB 2.0, UART, and GPIO
- Misc – Heatsink for cooling
- Supply […]

Allwinner V853 Arm Cortex-A7 + RISC-V SoC comes with 1 TOPS NPU for AI Vision applications

Allwinner V853

Allwinner V853 SoC combines an Arm Cortex-A7 core with a Xuantie E907 RISC-V core and a 1 TOPS NPU for cost-sensitive AI Vision applications such as smart door locks, smart access control, AI webcams, tachographs, and smart desk lamps. Manufactured with a 22nm process, the SoC comes with an ISP image processor and the Allwinner Smart video engine capable of up to 5M @ 30fps H.265/H.264 encoding and 5M @ 25fps H.264 decoding, and offers parallel CSI and MIPI CSI camera interfaces, as well as MIPI DSI and RGB display interfaces.

Allwinner V853 specifications:

- CPU
  - Arm Cortex-A7 CPU core @ 1 GHz with 32 KB I-cache, 32 KB D-cache, and 128 KB L2 cache
  - Alibaba Xuantie E907 RISC-V core with 16 KB I-cache and 16 KB D-cache
- NPU (Neural network Processing Unit) – Up to 1 TOPS for V853 and 0.8 TOPS for V853S, embedded 128KB internal buffer, support for TensorFlow, Caffe, […]

reServer Jetson-50-1-H4 is an AI Edge server powered by NVIDIA Jetson AGX Orin 64GB

Jetson AGX Orin 64GB AI inference server

reServer Jetson-50-1-H4 is an AI inference edge server powered by the Jetson AGX Orin 64GB module with up to 275 TOPS of AI performance, and based on the same form factor as Seeed Studio’s reServer 2-bay multimedia NAS introduced last year with an Intel Core Tiger Lake single board computer. The 12-core Arm server comes with 32GB LPDDR5, a 256GB NVMe SSD pre-loaded with the JetPack SDK and the open-source Triton Inference Server, two SATA bays for 2.5-inch and 3.5-inch drives, up to 10 Gbps Ethernet, dual 8K video output via HDMI and DisplayPort, USB 3.2 ports, and more.

reServer Jetson-50-1-H4 (preliminary) specifications:

- SoM – Jetson AGX Orin module with
  - CPU – 12-core Arm Cortex-A78AE v8.2 64-bit processor with 3MB L2 + 6MB L3 cache
  - GPU / AI accelerators
    - NVIDIA Ampere architecture with 2048 NVIDIA CUDA cores and 64 Tensor Cores @ 1.3 GHz
    - DL Accelerator – 2x NVDLA v2.0
    - Vision Accelerator […]

Codasip L31 and L11 RISC-V cores for AI/ML support TFLite Micro, customizations

Codasip L31 L11

Codasip has announced the L31 and L11 low-power embedded RISC-V processor cores, optimized for customization in AI/ML IoT edge applications with power and size constraints. The company further explains the new L31/L11 RISC-V cores can run Google’s TensorFlow Lite for Microcontrollers (TFLite Micro) and can be optimized for specific applications through the Codasip Studio RISC-V design tools. As I understand it, this can be done by the customers themselves thanks to a full architecture license, as stated by Codasip CTO Zdeněk Přikryl:

“Licensing the CodAL description of a RISC-V core gives Codasip customers a full architecture license enabling both the ISA and microarchitecture to be customized. The new L11/31 cores make it even easier to add features our customers were asking for, such as edge AI, into the smallest, lowest power embedded processor designs.”

The ability to customize the cores is important for AI and ML applications since the data types, […]

Coral Dev Board Micro combines NXP i.MX RT1176 MCU with Edge TPU in Pi Zero form factor

Coral Dev Board Micro

Coral Dev Board Micro is the latest iteration of Google’s Edge AI devkit with an NXP i.MX RT1176 Cortex-M7/M4 crossover processor/microcontroller coupled with the company’s 4 TOPS Edge TPU, a camera, and a microphone on a board that’s about the size of a Raspberry Pi Zero SBC. The new board follows the original NXP i.MX 8M-based Coral Dev Board introduced in 2019 and the Coral Dev Board Mini based on the MediaTek MT8167S processor launched in 2020, and keeps with the trend of providing more compact solutions with lower-end host processors for edge AI.

Coral Dev Board Micro specifications:

- MCU – NXP i.MX RT1176 processor with an Arm Cortex-M7 core @ up to 1 GHz, a Cortex-M4 core @ up to 400 MHz, 2MB internal SRAM, and 2D graphics accelerators
- System Memory – 512 Mbit (64 MB) RAM
- Storage – 1 Gbit (128 MB) flash memory
- ML accelerator – Coral Edge TPU coprocessor […]

$499 BrainChip AKD1000 PCIe board enables AI inference and training at the edge

Brainchip AKD1000 mini PCIe board

BrainChip has announced the availability of Akida AKD1000 (mini) PCIe boards based on the company’s neuromorphic processor of the same name, which relies on spiking neural networks (SNN) to deliver real-time inference in a way that is much more efficient than “traditional” AI chips based on CNN (convolutional neural network) technology. The mini PCIe card was previously found in development kits based on a Raspberry Pi or an Intel (x86) mini PC to let partners, large enterprises, and OEMs evaluate the Akida AKD1000 chip. The news today is simply that the card can now be purchased in single units or in quantity for integration into third-party products.

BrainChip AKD1000 PCIe card specifications:

- AI accelerator – Akida AKD1000 with Arm Cortex-M4 real-time core @ 300MHz
- System Memory – 256Mbit x 16 bytes LPDDR4 SDRAM @ 2400MT/s
- Storage – Quad SPI 128Mb NOR flash @ 12.5MHz
- Host interface – 5GT/s PCI […]
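For context on why spiking approaches can be cheaper than CNNs: an SNN neuron only does work when a spike event arrives, whereas a CNN unit performs multiply-accumulates on every input. The toy leaky integrate-and-fire neuron below illustrates that event-driven behavior; the parameter values and model are purely illustrative and are not BrainChip’s actual Akida neuron model.

```python
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    each step, accumulates weighted input spikes, and emits an output
    spike (then resets) when it crosses the threshold."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = leak * v + weight * s  # decay, then integrate the event
        if v >= threshold:
            out.append(1)          # fire and reset
            v = 0.0
        else:
            out.append(0)
    return out

# A sparse binary spike train in produces an even sparser train out;
# downstream neurons only compute on the steps where a spike occurs.
spikes_in = [1, 0, 1, 1, 0, 0, 1, 1]
spikes_out = lif_neuron(spikes_in)
```

The efficiency argument is that activity, and therefore computation, stays proportional to the number of spikes rather than to the number of connections.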