Archive

Posts Tagged ‘gstreamer’

Amlogic A111, A112 & A113 Processors are Designed for Audio Applications, Smart Speakers

September 9th, 2017 6 comments

Amlogic processors are mostly found in TVs and TV boxes, but the company is now apparently entering a new market with its A111, A112, and A113 audio processors. I first became aware of these new processors through the Buildroot OpenLinux Release Notes V20170831.pdf document posted on the company's Open Linux website, where two boards based on Amlogic A113D and A113X are shown.

S400 Version 03 Board

First, the S400 board with the following key features/specifications:

  • SoC – Amlogic A113D CPU
  • System Memory – 1GB DDR3
  • Storage – 512MB SLC NAND flash
  • Display I/F – MIPI interface
  • Connectivity – Gigabit Ethernet SDIO WiFi/BT (AP6356S)
  • Audio
    • SPDIF_IN/SPDIF_OUT
    • LINE_IN/LINE_OUT
    • 2x Audio headers (MIC_Connector & SPK_Connector)
  • USB – 1x USB 2.0 OTG
  • Expansion – 2x PCIe ports
  • Misc – 6x ADC Keys, IR_IN/IR_OUT, UART Interface (RS232)

The second board, S420, is based on the A113X SoC, and comes with fewer features (no display, no Ethernet, no PCIe…) and less memory:

  • SoC – Amlogic A113X CPU
  • System Memory – 512 MB DDR3
  • Storage – 512MB SLC NAND flash
  • Connectivity – SDIO WiFi/BT (AP6356S)
  • Audio
    • SPDIF_IN
    • LINE_IN/LINE_OUT
    • 2x Audio headers (MIC_Connector & SPK_Connector)
  • USB – 1x USB 2.0 OTG
  • Misc – 6x ADC Keys, IR_IN/IR_OUT, UART Interface (RS232)

The document also explains how to build Linux with Buildroot (you’ll need an Amlogic account), and how to play audio via applications or frameworks such as aplay, GStreamer, alsaplayer, shairport (AirPlay), VLC, DLNA, etc…
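For reference, testing audio on such a Buildroot image usually boils down to a few commands like the ones below. Device numbers and file names are my own assumptions, not taken from the document; check /proc/asound/cards on the actual board.

```shell
# List ALSA playback devices to find the right card/device numbers
aplay -l

# Play a WAV file on card 0, device 0 (hypothetical numbering)
aplay -D hw:0,0 test.wav

# Record 5 seconds from the microphone input at 48 kHz, 16-bit stereo
arecord -D hw:0,0 -f S16_LE -r 48000 -c 2 -d 5 capture.wav

# The same playback through a GStreamer pipeline
gst-launch-1.0 filesrc location=test.wav ! wavparse ! audioconvert ! alsasink
```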

Information about the Amlogic A113X/A113D processors is lacking on the web, but I eventually found that Amlogic has a YouTube account, now with a whopping two subscribers (including yours truly), and one of its two videos is an Alexa Voice Service demo on Amlogic A113 with what looks like a microphone array inserted on top of the board.

Further research led me to a page in Chinese discussing Amlogic A111, A112, and A113 audio processors, and revealing that the Xiaomi AI smart speaker is based on the Amlogic A112 quad-core Cortex-A53 processor, which also shows up in Geekbench running Android 6.0. They also report that A113 features the same four Cortex-A53 cores, but has better audio capabilities with 8x PDM interfaces and 16x I2S interfaces. I also found a page about a microphone array designed for Amlogic S905/S912/A112, based on Knowles SPH0645LM4H-B miniature microphones.

Finally, I decided to go directly to Amlogic’s website, and they do have pages for the A111 and A112 SoCs, which strangely have not been indexed by search engines so far.

Amlogic A111 key features:

  • CPU – Quad-core ARM Cortex-A5
  • Audio Interface
    • 2-channel I2S input and output
    • TDM/PCM input and output, up to 8 channels
    • S/PDIF output
  • Video Interface – LVDS and MIPI-DSI panel output
  • Security – Supports secure boot and secure OS
  • Ethernet – 10/100/1000M MAC
  • IP License (Optional) – Dolby Digital, Dolby Digital Plus, DTS Digital Surround, DTS HD, DTS Express
  • Process – 28nm HKMG

Amlogic A112 key features:

  • CPU – Quad-core ARM Cortex-A53
  • Audio Interface
    • 8-channel I2S and S/PDIF input and output
    • TDM/PCM input and output, up to 8 channels
    • 2-channel PDM input
  • Video Interface – RGB888 output
  • Security – Supports secure boot and secure OS
  • Ethernet – 10/100M MAC+PHY
  • IP License(Optional) – Dolby Digital, Dolby Digital Plus, DTS Digital Surround, DTS HD, DTS Express
  • Process – 28nm HKMG

If you are interested in evaluating / playing with those processors, and cannot get hold of Amlogic boards (since they only deal with companies), one solution is to get the Xiaomi AI smart speaker, available for pre-order/arrival notice on sites like GearBest or GeekBuying, and expected to ship on October 1st.

Thanks to vertycall for the tip.

e-con Systems Introduces a 360° Camera Kit for NVIDIA Jetson TX1/TX2 Development Boards

September 7th, 2017 No comments

e-con Systems has previously launched MIPI cameras for the Jetson TX1/TX2 development kit, but the company has now announced e-CAM30_HEXCUTX2, a kit with an adapter board and six synchronized HD cameras that can be used for video surveillance, or for robots requiring a 360° or “720°” field of view.

The kit is comprised of the following elements:

  • e-CAMHEX_TX2ADAP adapter board for connecting six cameras through the Jetson board’s J22 connector, supporting up to 6x 2-lane MIPI CSI-2 cameras
  • 6x 3.4MP MIPI CSI-2 low light camera boards with interchangeable S-mount lenses, featuring ON Semiconductor AR0330 color CMOS image sensor; each camera supports VGA to 1080p/3MP resolutions at up to 30 fps
  • 6x 30cm custom micro coaxial cable

The kit operates at 5V, and requires between 5.33 and 8.10 watts, the latter while streaming six cameras on Jetson TX2. Software support is implemented through a Linux camera driver (V4L2) on top of NVIDIA’s JetPack 2.3/3.0, and e-con Systems has also developed demos such as their GStreamer-based hexcam app to manage the six synchronized cameras, as well as e-CAM_TK1 GUVCView for single streams, as showcased in the embedded video below.
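The announcement does not detail the hexcam app’s pipelines, but a hand-rolled equivalent for previewing the cameras might look as follows. Device nodes and caps are assumptions; the e-con driver may expose different formats.

```shell
# Preview a single camera (UYVY caps are a guess for this sensor/ISP combo)
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  "video/x-raw, format=UYVY, width=1280, height=720, framerate=30/1" ! \
  videoconvert ! autovideosink

# Crude side-by-side view of two of the six cameras using the compositor element
gst-launch-1.0 compositor name=mix sink_1::xpos=1280 ! videoconvert ! autovideosink \
  v4l2src device=/dev/video0 ! "video/x-raw, width=1280, height=720" ! mix. \
  v4l2src device=/dev/video1 ! "video/x-raw, width=1280, height=720" ! mix.
```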


The e-CAM30_HEXCUTX2 camera kit is available now. The system is rather pricey at $1,499 without the Jetson TX1/TX2 development kit, but you save $200 if you order before September 14, 2017. You’ll find a purchase link, and access to software and hardware documentation, on the product page.

Embedded Linux Conference & Open Source Summit Europe 2017 Schedule

August 27th, 2017 3 comments

The Embedded Linux Conference & IoT Summit 2017 took place in the US earlier this year in February, but a similar event, the Embedded Linux Conference & Open Source Summit Europe 2017, will soon take place on October 23 – 25 in Prague, Czech Republic, and the Linux Foundation has just published the schedule. It’s always useful to find out what is being discussed during such events, even if you are not going to attend, so I went through the different sessions and composed my own virtual schedule with some of the ones I find most interesting.

Monday, October 23

  • 11:15 – 11:55 – An Introduction to SPI-NOR Subsystem – Vignesh Raghavendra, Texas Instruments India

Modern day embedded systems have dedicated SPI controllers to support NOR flashes. They have many hardware level features to increase the ease and efficiency of accessing SPI NOR flashes and also support different SPI bus widths and speeds.

In order to support such advanced SPI NOR controllers, SPI-NOR framework was introduced under Memory Technology Devices (MTD). This presentation aims at providing an overview of SPI-NOR framework, different types of NOR flashes supported (like SPI/QSPI/OSPI) and interaction with SPI framework. It also provides an overview of how to write a new controller driver or add support for a new flash device.

The presentation then covers generic improvements done and proposed while working on improving QSPI performance on a TI SoC, challenges associated when using DMA with these controllers and other limitations of the framework.

  • 12:05 – 12:45 – Free and Open Source Software Tools for Making Open Source Hardware – Leon Anavi, Konsulko Group

The open source hardware movement is becoming more and more popular. But is it worth making open source hardware if it has been designed with expensive proprietary software? In this presentation, Leon Anavi will share his experience how to use free and open source software for making high-quality entirely open source devices: from the designing the PCB with KiCAD through making a case with OpenSCAD or FreeCAD to slicing with Cura and 3D printing. The talk will also provide information about open source hardware licenses, getting started guidelines, tips for avoiding common pitfalls and mistakes. The challenges of prototyping and low-volume manufacturing with both SMT and THT will be also discussed.

  • 14:20 – 15:00 – Introduction to SoC+FPGA – Marek Vašut, DENX Software Engineering GmbH

In this talk, Marek introduces the increasingly popular single-chip SoC+FPGA solutions. At the beginning, the diverse chip offerings from multiple vendors are introduced, ranging from the smallest IoT-grade solutions all the way to large industrial-level chips with focus on their software support. Mainline U-Boot and Linux support for such chips is quite complete, and already deployed in production. Marek demonstrates how to load and operate the FPGA part in both U-Boot and Linux, which recently gained FPGA manager support. Yet to fully leverage the potential of the FPGA manager in combination with Device Tree (DT) Overlays, patches are still needed. Marek explains how the FPGA manager and the DT Overlays work, how they fit together and how to use them to obtain a great experience on SoC+FPGA, while pointing out various pitfalls.
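As a rough sketch of what this looks like from user space, on a kernel carrying the FPGA manager plus the not-yet-mainline configfs DT overlay patches Marek refers to, loading might be done along these lines. The attribute paths and file names are assumptions that vary per vendor/patched kernel.

```shell
# Program the FPGA; the bitstream must live in /lib/firmware.
# A writable sysfs 'firmware' attribute like this only exists on some
# patched/vendor kernels; mainline exposes state read-only.
echo top.rbf > /sys/class/fpga_manager/fpga0/firmware
cat /sys/class/fpga_manager/fpga0/state

# Apply a DT overlay describing the newly loaded IP through the
# configfs interface added by the overlay patch set
mkdir /sys/kernel/config/device-tree/overlays/myip
echo myip.dtbo > /sys/kernel/config/device-tree/overlays/myip/path
```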

  • 15:10 – 15:50 – Cheap Complex Cameras – Pavel Machek, DENX Software Engineering GmbH

Cameras in phones are different from webcams: their main purpose is to take high-resolution still pictures. Running preview in high resolution is not feasible, so a resolution switch is needed just before taking the final picture. There are currently no applications for still photography that work with the mainline kernel (Pavel is working on… two, but both have some limitations). libv4l2 does its internal processing in 8-bit, which is not enough for digital photography: cell phones have 10 to 12-bit sensors, and some DSLRs do 14-bit depth.

Differences do not end here. A cell phone camera can produce a reasonable picture, but it needs complex software support. Auto-exposure / auto-gain is a must for producing anything but completely black or completely white frames. Users expect auto-focus, and it is necessary for reasonable pictures at macro range, requiring real-time processing.

  • 16:20 – 17:00 – Bluetooth Mesh with Zephyr OS and Linux – Johan Hedberg, Open Source Technology Center, Intel

Bluetooth Mesh is a new standard that opens a whole new wave of low-power wireless use cases. It extends the range of communication from a single peer-to-peer connection to a true mesh topology covering large areas, such as an entire building. This paves the way for both home and industrial automation applications. Typical home scenarios include things like controlling the lights in your apartment or adjusting the thermostat. Although Bluetooth 5 was released at the end of last year, Bluetooth Mesh can be implemented on any device supporting Bluetooth 4.0 or later. This means that we’ll likely see very rapid market adoption of the feature.

The presentation will give an introduction to Bluetooth Mesh, covering how it works and what kind of features it provides. The talk will also give an overview of Bluetooth Mesh support in Zephyr OS and Linux and how to create wireless solutions with them.

  • 17:10 – 17:50 – printk() – The Most Useful Tool is Now Showing its Age – Steven Rostedt, VMware

printk() has been the tool for debugging the Linux kernel, and its display mechanism, for as long as Linux has been around. It is there from the first thing one sees as the life of the kernel begins, the kernel banner, to the last message at shutdown. It’s critical when people take pictures of a kernel oops to send to the kernel developers to fix a bug, or to post on social media when that oops happens on the monitor on the back of the airplane seat in front of you.

But printk() is not a trivial utility. It serves many functionalities and some of them can be conflicting. Today with Linux running on machines with hundreds of CPUs, printk() can actually be the cause of live locks. This talk will discuss all the issues that printk() has today, and some of the possible solutions that may be discussed at Kernel Summit.
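Age aside, printk()’s user-facing knobs are worth knowing; the usual ones look like this (all standard Linux interfaces, though most writes require root):

```shell
# Show console log levels: current, default, minimum, boot-time default
cat /proc/sys/kernel/printk

# Quiet the console down to emergency messages only
dmesg -n 1

# Inject a message at warning level (<4>) from user space via /dev/kmsg
echo "<4>hello from user space" > /dev/kmsg

# Read it back from the kernel ring buffer
dmesg | tail -n 1
```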

  • 18:00 – 18:45 – BoF: Embedded Linux Size – Michael Opdenacker, Free Electrons

This “Birds of a Feather” session will start with a quick update on available resources and recent efforts to reduce the size of the Linux kernel and the filesystem it uses.

An ARM based system running the mainline kernel with about 3 MB of RAM will also be demonstrated. If you are interested in the size topic, please join this BoF and share your experience, the resources you have found and your ideas for further size reduction techniques!

Tuesday, October 24

  • 10:55 – 11:35 – Introducing the “Lab in a Box” Concept – Patrick Titiano & Kevin Hilman, BayLibre

Continuous Integration (CI) has been a hot topic for a long time. With the growing number of architectures and boards, it has become impossible for maintainers to validate a patch on all configurations, making it harder and harder to keep the same quality level without leveraging CI and test automation. Recent initiatives like LAVA, KernelCI.org, Fuego, (…) have started providing a first answer; however, the learning curve remains high, and the HW setup part is not covered.

BayLibre, already involved in KernelCI.org, decided, as part of the AGL project, to go one step further in CI automation, and has developed a turnkey solution for developers and companies willing to instantiate a LAVA lab; called “Lab in a Box”, it aims at simplifying the configuration of a board farm (HW, SW).

Motivations, challenges, benefits and results will be discussed, with a demo of a first “Lab in a Box” instantiation.

  • 11:45 – 12:25 – Protecting Your System from the Scum of the Universe – Gilad Ben-Yossef, Arm Holdings

Linux based systems have a plethora of security related mechanisms: DM-Crypt, DM-Verity, Secure Boot, the new TEE subsystem, FScrypt and IMA are just a few examples. This talk will describe these various mechanisms and provide a practical walkthrough of how to mix and match them and design them into a Linux based embedded system in order to strengthen the system’s resilience to various nefarious attacks, whether the system in question is a mobile phone, a tablet, a network attached DVR, a router, or an IoT hub, in a way that makes maximum use of the sometimes limited hardware resources of such systems.

  • 14:05 – 14:45 – Open Source Neuroimaging: Developing a State-of-the-Art Brain Scanner with Linux and FPGAs – Danny Abukalam, Codethink

Neuroimaging is an established medical field which is helping us to learn more about how the human brain, the most complex human organ, works. This talk aims to cover neuroimaging systems, from hobbyist to professional, and how open source has been used to build state-of-the-art systems. We’ll have a look at the general problem area, why open source was a good fit, and some examples of solutions, including a commercial effort that we have been involved in bringing to market. Typically these solutions consist of specialist hardware, a bespoke software stack, and a suite to manage and process the vast amounts of data generated during the scan. Other points of interest include how we approached building a maintainable and upgradeable system from the outset. We’ll also talk about future plans for neuroimaging, future ideas for hardware, and discuss areas lacking good open source solutions.

  • 14:55 – 15:35 – More Robust I2C Designs with a New Fault-Injection Driver – Wolfram Sang, Renesas

Writing code for certain error paths in I2C bus drivers is challenging because these errors usually don’t happen on the bus, and special I2C bus testers are expensive. In this talk, a new GPIO based driver will be presented which acts on the same bus as the bus master driver under inspection. A live demonstration will be given, as well as hints on how to handle bugs which might have been found. The scope and limitations of this driver will be discussed. Since what actually happens on the wires will also be analyzed, this talk also serves as a case study in how to snoop buses with only Free Software and Open Hardware (i.e. sigrok).
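For a flavor of the sigrok side, decoding I2C traffic captured with a cheap logic analyzer can be done with sigrok-cli along these lines. The driver name and channel mapping are assumptions for a typical fx2lafw-based device; adjust to your hardware.

```shell
# Capture 1M samples at 1 MHz and run the I2C protocol decoder,
# printing only address and data annotations
sigrok-cli -d fx2lafw --config samplerate=1m --samples 1000000 \
  -P i2c:scl=D0:sda=D1 \
  -A i2c=address-read:address-write:data-read:data-write
```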

  • 16:05 – 16:45 – GStreamer for Tiny Devices – Olivier Crête, Collabora

GStreamer is a complete Open Source multimedia framework, and it includes hundreds of plugins, including modern formats like DASH, HLS or the first ever RTSP 2.0 implementation. The whole framework is almost 150MB on my computer, but what if you only have 5 megs of flash available? Is it a viable choice? Yes it is, and I will show you how.

Starting with simple tricks like only including the necessary plugins, all the way to statically compiling only the functions that are actually used, to produce the smallest possible footprint.

  • 16:55 – 17:35 – Maintaining a Linux Kernel for 13 Years? You Must be Kidding Me. We Need at Least 30? – Agustin Benito Bethencourt, Codethink Ltd

Industrial grade solutions have a life expectancy of 30+ years. Maintaining a Linux kernel for such a long time in the open has not been done. Many claim that is not sustainable, but corporations that build power plants, railway systems, etc. are willing to tackle this challenge. This talk will describe the work done so far on the kernel maintenance and testing front at the CIP initiative.

During the talk, it will be explained how we decide which parts of the kernel to cover, reducing the amount of work to be done and the risk of being unable to maintain the claimed support. The process of reviewing and backporting fixes that might be needed on an older branch will be briefly described. CIP is taking a different approach from many other projects when it comes to testing the kernel. The talk will go over it, as well as the coming steps.

Wednesday, October 25

  • 11:05 – 11:45 – HDMI 4k Video: Lessons Learned – Hans Verkuil, Cisco Systems Norway

So you want to support HDMI 4k (3840×2160) video output and/or video capture for your new product? Then this is the presentation for you! I will describe the challenges involved in 4k video from the hardware level, the HDMI protocol level and up to the kernel driver level. Special attention will be given to what to watch out for when buying 4k capable equipment and accessories such as cables and adapters since it is a Wild, Wild West out there.

  • 11:55 – 12:35 – Linux Powered Autonomous Arctic Buoys – Satish Chetty, Hera Systems 

In my talk/presentation, I cover the technical, and design challenges in developing an autonomous Linux powered Arctic buoy. This system is a low cost, COTS based, extreme/harsh environment, autonomous sensor data gathering platform. It measures albedo, weather, water temperature and other parameters. It runs on a custom embedded Linux and is optimized for efficient use of solar & battery power. It uses a variety of low cost, high accuracy/precision sensors and satellite/terrestrial wireless communications.

I talk about using Linux in this embedded environment, and how I address and solve various issues including building a custom kernel, Linux drivers, frame grabbing issues and results from cameras, limited power challenges, clock drifts due to low temperature, summer melt challenges, failure of sensors, intermittent communication issues and various other h/w & s/w challenges.

  • 14:15 – 14:55 – Linux Storage System Bottleneck for eMMC/UFS – Bean Huo & Zoltan Szubbocsev, Micron

The storage device is often considered a bottleneck to system I/O performance, and this thinking drives the need for faster storage device interfaces. Commonly used flash based storage interfaces support high throughputs, e.g. eMMC 400 MB/s, UFS 1 GB/s. Traditionally, advanced embedded systems focused on CPU and memory speeds, and these outpaced advances in storage speed. In this presentation, we explore the parameters that impact I/O performance. We describe at a high level how Linux manages I/O requests coming from user space. Specifically, we look into system performance limitations in the Linux eMMC/UFS subsystem and expose bottlenecks caused by the software through Ftrace. We show existing challenges in getting maximum performance out of flash-based high-speed storage devices. With this presentation, we want to motivate future optimization work on the existing storage stack.
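The kind of Ftrace analysis mentioned above can be reproduced with the standard mmc tracepoints, roughly like this (the block device name is an assumption; requires root and debugfs mounted):

```shell
# Enable the mmc request tracepoints
cd /sys/kernel/debug/tracing
echo 1 > events/mmc/enable
echo 1 > tracing_on

# Generate some I/O against the eMMC, bypassing the page cache
dd if=/dev/mmcblk0 of=/dev/null bs=1M count=64 iflag=direct

# Stop tracing and inspect request start/completion timestamps
echo 0 > tracing_on
head -n 40 trace
```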

  • 15:05 – 15:45 – New GPIO Interface for User Space – Bartosz Golaszewski

Since Linux 4.8 the GPIO sysfs interface is deprecated. Due to its many drawbacks and bad design decisions a new user space interface has been implemented in the form of the GPIO character device which is now the preferred method of interaction with GPIOs which can’t otherwise be serviced by a kernel driver. The character device brings in many new interesting features such as: polling for line events, finding GPIO chips and lines by name, changing & reading the values of multiple lines with a single ioctl (one context switch) and many more. In this presentation, Bartosz will showcase the new features of the GPIO UAPI, discuss the current state of libgpiod (user space tools for using the character device) and tell you why it’s beneficial to switch to the new interface.
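For the curious, the libgpiod command-line tools give a quick feel for the character device. The chip name and line numbers below are hypothetical; run gpiodetect on your board first.

```shell
# Enumerate GPIO chips, then show the names/flags of a chip's lines
gpiodetect
gpioinfo gpiochip0

# Read line 4, then drive line 17 high
gpioget gpiochip0 4
gpioset gpiochip0 17=1

# Block until three edge events arrive on line 4, something the old
# sysfs interface could only approximate with poll()
gpiomon --num-events 3 gpiochip0 4
```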

  • 16:15 – 16:55 – Replace Your Exploit-Ridden Firmware with Linux – Ronald Minnich, Google

With the WikiLeaks release of the vault7 material, the security of the UEFI (Unified Extensible Firmware Interface) firmware used in most PCs and laptops is once again a concern. UEFI is a proprietary and closed-source operating system, with a codebase almost as large as the Linux kernel, that runs when the system is powered on and continues to run after it boots the OS (hence its designation as a “Ring -2 hypervisor”). It is a great place to hide exploits since it never stops running, and these exploits are undetectable by kernels and programs.

Our answer to this is NERF (Non-Extensible Reduced Firmware), an open source software system developed at Google to replace almost all of UEFI firmware with a tiny Linux kernel and initramfs. The initramfs file system contains an init and command line utilities from the u-root project, which are written in the Go language.

  • 17:05 – 17:45 – Unikernelized Real Time Linux & IoT – Tiejun Chen, VMware

Unikernel is a novel software technology that links an application with an OS in the form of a library, and packages them into a specialized image that facilitates direct deployment on a hypervisor. But why have existing unikernels yet to gain broad popularity? I’ll talk about the challenges unikernels are facing, and discuss whether and how we could turn Linux into a unikernel. IoT could be a valuable use case, since a smaller size & footprint are good for resource-constrained IoT platforms. Existing unikernels are not designed to address IoT characteristics like power consumption and real-time requirements, and they don’t support versatile architectures either: most just focus on x86/ARM. A paravirtualized unikernelized Linux, especially Unikernelized Real Time Linux, may be what makes unikernels succeed.


If you’d like to attend the real thing, you’ll need to register and pay a registration fee:

  • Early Registration Fee: US$800 (through August 27, 2017)
  • Standard Registration Fee: US$950 (August 28, 2017 – September 17, 2017)
  • Late Registration Fee: US$1100 (September 18, 2017 – Event)
  • Academic Registration Fee: US$200 (Student/Faculty attendees will be required to show a valid student/faculty ID at registration.)
  • Hobbyist Registration Fee: US$200 (only if you are paying for yourself to attend this event and are currently active in the community)

There’s also another option, the Hall Pass Registration ($150), if you just want to network or visit with sponsors onsite, but do not plan to attend any sessions or keynotes.

e-con Systems Launches e-CAM130_CUTX1 Ultra HD Camera for Nvidia Jetson TX1 Development Board

January 20th, 2017 No comments

e-con Systems, an embedded camera solutions company, has just announced the launch of the e-CAM130_CUTX1 MIPI camera board for the NVIDIA Jetson Tegra X1 development kit. The 13MP camera is based on ON Semiconductor AR1820 CMOS image sensor, connects to the TX1 board via its 4-lane MIPI CSI-2 connector, and supports up to 3840 x 2160 @ 30 fps and 13MP @ 20 fps video streaming in uncompressed YUV format.

Jetson TX1 Board fitted with e-CAM130_CUTX1 camera module

e-CAM130_CUTX1 4K camera board features & specifications:

  • Sensor – 1/2.3″ Optical form factor AR1820HS sensor with on-board high performance ISP.
  • Focus Type – Fixed focus
  • Resolution – 13MP on e-CAM130_CUTX1 (the sensor is capable of 18MP)
  • Pixel size – 1.25μm pixel with Aptina / ON Semiconductor A-PixHS with BSI technology and advanced pixel architecture
  • Sensor Active Area – 4912(H) x 3684(V)
  • Responsivity – 0.62 V/lux-sec (545nm); SNR: 36.3 dB; Dynamic Range: 65.8 dB
  • Output Format – Uncompressed YUV422 format and compressed MJPEG format. YUV422 resolutions:
    • VGA @ 60 fps
    • HD (720p) @ 72 fps
    • Full HD (1080p) @ 72 fps
    • 4K/Ultra HD @ 30 fps
    • 13MP @ 20 fps
  • Shutter type – Electronic Rolling Shutter
  • DFOV : 13M – 74°, 4K/1080p/720p – 69°, VGA – 72°
  • Interface – High-speed 4-lane MIPI CSI-2 interface
  • Operating Voltage – 5V +/- 5%, Current – 450mA
  • Dimensions – 75.03 mm x 40.18 mm x 25.6 mm (without lens)
  • Weight – 20 grams without lens, 26.5 grams with lens

The board comes with an S-mount (M12) lens mount that enables customers to choose a lens of their choice.

The company provides a standard V4L2 driver for the camera board, which also supports GStreamer 1.0 for video recording and network streaming, and the camera can be controlled with programs such as GUVCView, as demonstrated in Ubuntu 16.04 in the video below.
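e-con’s documentation will have the exact pipelines, but typical GStreamer invocations on the TX1 might look like this. The device node, caps, and use of NVIDIA’s omxh264enc/nvvidconv elements are my assumptions based on JetPack-era L4T, not taken from the announcement.

```shell
# Preview the camera at 1080p
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  "video/x-raw, format=UYVY, width=1920, height=1080, framerate=72/1" ! \
  videoconvert ! autovideosink

# Record ~10 seconds of Ultra HD to MP4 using the TX1 hardware H.264 encoder
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! \
  "video/x-raw, format=UYVY, width=3840, height=2160, framerate=30/1" ! \
  nvvidconv ! omxh264enc ! qtmux ! filesink location=uhd.mp4
```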

The e-CAM130_CUTX1 4K camera module is available now for $249 via e-con Systems’ product page, where you’ll also find documentation (free email registration required) such as the datasheet, a getting started guide, various usage guides, and a developer’s guide.

AVC8000nano mini PCIe Frame Grabber Captures up to 8 D1 Videos

February 25th, 2016 1 comment

There are plenty of solutions to stream or capture multiple video streams from cameras, for example for security purposes, but the equipment is usually relatively large and heavy. Advanced Micro Peripherals’ AVC8000nano mini PCIe capture card miniaturizes all that thanks to its small form factor and its 8 u.FL connectors, used to capture eight D1 videos at full frame rate.

AVC8000nano Connected to Gateworks Ventana SBC and 8 Analog Cameras

AVC8000nano features:

  • Video Inputs
    • 8x Live NTSC/PAL video inputs with 8x 10-bit ADC and anti-aliasing filters
    • 8x D1 size capture at full frame rate
    • Formats – NTSC-M, NTSC-Japan, NTSC (4.43), RS-170, PAL-B,G,N, PAL-D, PAL-H, PAL-I, PAL-M, PAL-CN, PAL-60 SECAM
    • Adjustments – Contrast, saturation, hue (or chroma phase), and brightness. Software adjustable Sharpness, Gamma and noise suppression
  • Video Capture Formats – RGB555, RGB565, YCbCr 4:2:2, YCbCr 4:1:1
  • Windows support with drivers and DirectShow/DirectDraw
  • Linux support with drivers and Video4Linux
  • Form factor – Full height mini PCI Express
  • Temperature Range – Commercial: 0°C to 60°C; Extended: –40°C to +85°C

AVC8000nano Block Diagram

The specifications also mention the hardware requirements: “x86 PC-Compatible with mini PCI Express socket”. But as you can see in the first picture, Gateworks managed to make the card work on their Ventana single board computers, powered by Freescale/NXP i.MX6 and featuring one or more PCIe connectors, so it’s also suitable for ARM platforms. The company has also updated their Wiki to show how to use the card on their boards with Linux (built with Yocto Project 1.8) using the AVC8000nano drivers, GStreamer, and optionally OpenCV if you want to stitch multiple inputs together.

Stitching with OpenCV

Such solutions can be used for vehicle-based video capture, real-time situational awareness, law enforcement, remote video surveillance, traffic monitoring and control, video acquisition & analytics, UAVs, and more.

You may want to visit the AVC8000nano product page for more details. Although the card was launched in 2013, I could not find price information for it.

iMX6 TinyRex Module and Development Board Support HDMI Input in Linux (Video Demo)

December 2nd, 2015 2 comments

A couple of years ago, I wrote about the iMX6 Rex open source hardware project, combining a Freescale i.MX6 SoM and baseboard, that aimed at teaching hardware design (schematics and PCB layout). I had not followed the project very closely since then, until I watched a video showcasing HDMI input capabilities in Linux using the new version of the module and baseboard, called i.MX6 TinyRex.


i.MX6 Tiny Rex module specifications:

  • SoC – Freescale i.MX6 processor, up to 1.2 GHz and 4 cores
  • System Memory – Up to 4GB DDR3-1066 (533MHz)
  • Storage – EEPROM
  • Connectivity – 10/100/1000 Mbps Ethernet PHY
  • I/Os via 3 board to board connectors:
    • Display / Video Output
      • 1x HDMI (up to QXGA 2048×1536)
      • 1x LVDS (up to WUXGA 1920×1200)
      • 1x 20-bit parallel LCD display (up to WXGA 1366×768) or 1x Video Input (CSI)
      • 1x MIPI DSI differential display output (up to XVGA 1024×768)
    • Video Input
      • 1x 20-bit parallel video input CSI (up to 8192×4096)
      • 1x MIPI differential camera input
    • Storage – 1x SATA; 1x NAND Flash or 1x MMC (8bit); 2x SD (2x 4bit or optional 4 & 8bit)
    • 1x PCIe
    • 2x USB
    • 5x UART, 3x I2C, 2x SPI, 1x CAN
    • Digital audio
    • 2x GPIO, 2x GPIO or PWM
    • System signals – Reset in/out, Boot mode, Power OK, User button
  • Misc – User LED, power LED, JTAG on testpoints
  • Dimensions – 38 x 38 x 4.8 mm
  • Power – 2.7 to 5.5V DC, single +3.3V and +5V

iMX6 TinyRex Module Block Diagram

The company provides Linux support via the Yocto Project. Bear in mind that, contrary to OpenRex, TinyRex is not open source hardware. To complement the module, the iMX6 TinyRex baseboard Lite has also been designed by Fedevel, and manufactured by Voipac.


Baseboard specifications:

  • Storage – 1x SATA port, 1x micro SD card slot, up to 128Mbit on-board SPI Flash
  • Video
    • 1x HDMI Output with Audio
    • 1x micro HDMI input with audio (e.g. from GoPro camera) via ADV7610 HDMI receiver.
    • 1x MIPI-CSI camera input (compatible with Raspberry Pi)
  • Connectivity – 1x Gigabit Ethernet
  • USB –  1x USB (Optional: 2x USB ), 1x micro USB OTG port
  • Expansion
    • 1x PCIE mini card socket (PCIE & USB)
    • Headers with 4x UART, 1x SPI, 1x CAN (CMOS), 3x I2C, 2x PWM, 8x GPIO
  • Debugging – 1x UART debug console header (compatible with FTDI cable)
  • Misc – Reset & user buttons, power and user LEDs
  • Power Supply – 3.2 to 5.5V DC via power barrel
  • Dimensions – 90 x 80 mm (with four holes for heatsink)

The schematics for the baseboard are available on request, and software documentation can be found on the imx6rex website, including one part showing how to use the HDMI input with the Yocto-built image, which uses Video4Linux2 (V4L2), the adv7610 driver, and GStreamer. The demo below shows how to output the HDMI input to an HDMI monitor. It’s not very useful by itself, unless you do some processing or use the video stream as part of an application, but it shows the system works, and could be modified for live video streaming for example.
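For illustration, looping the captured HDMI input back out to a display with GStreamer is a one-liner of roughly this shape. The device node and caps are assumptions; check what the capture device actually offers with v4l2-ctl --all on the board.

```shell
# Capture 720p60 from the ADV7610 HDMI receiver and display it
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  "video/x-raw, format=UYVY, width=1280, height=720, framerate=60/1" ! \
  videoconvert ! autovideosink
```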

I understand the iMX6 TinyRex module and baseboard should be available by the end of the year, or Q1 2016, with the module starting at 59 Euros for 1k orders. Further details can be found on the iMX6 TinyRex SoM and Baseboard Lite product pages.

Embedded Linux Conference 2013 Schedule

January 24th, 2013 2 comments

The Embedded Linux Conference (ELC 2013) will take place on February 20 – 22, 2013 at Park 55 Hotel in San Francisco, California.

ELC consists of 3 days of presentations, tutorials and sessions. There will be over 50 sessions during those 3 days. I’ll highlight a few sessions that I find particularly interesting, and that did not get presented at ELCE 2012 (AFAICR).

February 20

We are now two years into the new maintainer model for ARM platforms, and we have settled into a workflow that maintainers have adjusted to well. Still, when new platforms arrive, or when maintainership changes hands, there’s sometimes a bit of ramp-up in getting used to how we organize our git tree and how we prefer to see code submitted to fit that model.

This presentation will give an overview of how we have chosen to organize and maintain the arm-soc tree, and advice to developers and maintainers on best practices to help organize your code in a way that makes life easier for everybody involved. The main audience for this presentation is developers working on upstream kernels for ARM platforms, including platform maintainers.

The Yocto Project was announced slightly more than two years ago at ELC-E Cambridge, and in the OpenEmbedded e.V. General Assembly the day after the conference, I proposed to embrace and adopt the Yocto Project as the core for OpenEmbedded.

In the past 2 years the ecosystem has seen tremendous growth, but not always in sane directions. This presentation will detail how the Yocto Project, the OpenEmbedded Project, the community and the companies involved evolved during that time.

The Angstrom Distribution and the BeagleBoard will be used as examples, since those were the first OE-classic targets to be publicly converted to the new world order.

This presentation will also try to clear up the confusion about what people actually mean when they say “this runs Yocto” 🙂

LTSI is the Linux Foundation CE workgroup project that creates and maintains long-term stable kernels for industry use. LTSI-3.4 was recently released, and it is committed to being maintained for as long as the community applies bug and security fixes to LTS-3.4; the community LTS maintainer, Greg Kroah-Hartman, has stated that this will last at least until May 2014. This dramatically reduces your own effort to collect such important patches. Furthermore, the Yocto Project, which provides recipes for custom Linux BSP creation, will add support for the LTSI kernel from this release. Given this significant improvement, I want to help LTSI users start working with it. In this session, I will introduce the specification of LTSI-3.4 (enhancements over the community kernel) and how to write a Yocto recipe to collect your own enhancement patches on top of the official LTSI-3.4 kernel.
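A patch-collecting recipe extension of the sort described might look like this hypothetical .bbappend fragment (the recipe name, layer path, and patch file names are illustrative assumptions, not taken from the talk):

```
# linux-ltsi_3.4.bbappend -- hypothetical fragment; recipe and file names are illustrative
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# Patches are applied on top of the official LTSI-3.4 sources, in SRC_URI order
SRC_URI += " \
    file://0001-board-specific-fix.patch \
    file://0002-enable-vendor-driver.patch \
"
```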

The common clock framework, which was included in the 3.4 kernel at the beginning of 2012, is now mandatory to support all new ARM SoCs. It is also part of the “one zImage to run them all” big plan of the ARM architecture in the Linux kernel. After an introduction on why we needed this framework and on the problems it solves, we will go through the implementation details of this framework. Then, with real examples, we will focus on how to use this framework to add clock support to a new ARM SoC. We will also show how the device tree is used in this process. The last part of the talk will review how device drivers use this framework, using examples taken from various parts of the kernel.

Multi-core processors are now the rule rather than the exception in high-end applications. But, as we try to port our legacy applications to multi-core platforms, what pitfalls lie in wait? This presentation will outline the conditions that lead to multi-core race conditions and the techniques for identifying and redesigning code to successfully function in a multi-core world.

GStreamer is the leading multimedia framework for various OS platforms, notably Linux systems. A variety of multimedia applications can be constructed with well-implemented plugins, which have versatile functions such as image scaling, cropping, color conversion, and video decoding. However, in the case of embedded systems, they should require further system integration to utilize specialized hardware acceleration engines in SoC for optimal performance.

This presentation shows the case-study experience of integrating video plugins with a Renesas SoC platform. It will discuss how to access hardware inside a plugin, assign buffer memory suited to the hardware, and eliminate ‘memcpy’ calls. The audience will learn essential techniques for integrating GStreamer into an embedded system. An understanding of the basics of video codecs and color formats is required.

February 21

This BoF is intended to bring together anybody that tests the Linux kernel to share best practices and brainstorm new ideas. Topics may range from .config testing, module/built-in drivers, test methods and tools for testing specific driver subsystems, VM/scheduler/interrupt stress testing, and beyond.

The discussion is targeted at Linux kernel developers, test engineers, and embedded Linux product teams/consultants with the common task of testing Linux kernel integrity. Attendees should have a firm grasp of building and deploying the kernel as well as kernel/userspace APIs.

The LLVM project is an extensive compiler technology suite which is becoming commonplace in many industries. Technology built with LLVM is already shipped in millions of Linux devices as part of Android/Renderscript. Increasingly, it is becoming a big part of the development process for embedded projects, all the way up to high-performance computing clusters. This session will provide an update on the status of the LLVM Linux project, which is cooperating with both the Linux kernel and LLVM communities to build the Linux kernel with Clang/LLVM. This talk is for experienced developers who are interested in toolchain technology and Linux kernel programming.

In 2003 I decided to replace twenty-two GNU packages in Linux From Scratch (everything except the compiler, kernel, and libc) with BusyBox, and then rebuild the result under itself. This didn’t remotely work, so I started testing and improving BusyBox until it did, putting in so much work on BusyBox that its maintainer handed the project over to me. In 2006 I handed BusyBox off to a new maintainer and started over from scratch on a fresh implementation, Toybox. In 2011 Tim Bird (founder of CELF) convinced me to repurpose Toybox as a new BSD-licensed, POSIX-2008-compliant command line for Android.

This panel explains what’s in the “standard” Linux command line, drawing commands from POSIX, LSB, Android Toolbox, Linux From Scratch, and more: how to determine what should be in the base system, how to know what to exclude, and why the “standards” aren’t enough.

Closed-source binary drivers and libraries are endemic in embedded, with binary blobs essential on many modern boards to use the on-board 2D, 3D, or video acceleration. Recently there has been progress in open drivers from manufacturers for various platforms including Intel, from 3D acceleration with OpenGL to hardware video decode/encode with VA-API. This presentation will explain why open drivers are better than closed ones, discuss the options available, and describe what is available in the Yocto Project BSPs for you to use. The audience for this talk is expected to be developers and architects interested in the state of open graphics in Linux. Knowledge of this field will be assumed.

Performance is an important aspect when developing mobile applications, as it affects both the interactive user experience and the device battery life. This presentation will introduce techniques and tools (e.g. profilers) useful for creating high-performance code, starting at the high-level design stage (code organisation, data layout, etc.) and following through to implementation considerations. Specific instruction sets (e.g. NEON) will not be a primary focus, the goal rather being to enable efficient use of these without delving into details, thus giving the presentation a broader applicability. The target audience is developers of compute-intensive (native) applications or libraries who need to achieve the best possible performance. No special expertise beyond general familiarity with userspace Linux programming is assumed.

As costs have come down and the power of embedded platforms has increased, the hacker/maker community is playing an increasingly critical role in the creation of disruptive technologies. The “Next Big Thing” will likely start out as a hacker project using a commodity embedded hardware platform. Intel’s Atom-based offerings continue to grow while targeting new niches in embedded applications. This talk will outline exciting new developments with Atom processors in the embedded space, and how hackers can make best use of these advantages. This talk will be relevant to hackers, hobbyists, and people interested in developing embedded products based on Atom, and is open to all technical experience levels.

February 22

The ‘In Kernel Switcher’ (IKS) is a solution developed by Linaro and ARM to support ARM’s new big.LITTLE implementation. It pairs an A7 (LITTLE) and an A15 (big) processor into a logical entity that is then presented to the kernel as one CPU. From there, the solution seeks to achieve optimal performance and power consumption by switching between the big and the LITTLE core based on system usage. This session will present the IKS solution. After giving an overview of the big.LITTLE processor, we will present the solution itself, how frequencies are masqueraded to the cpufreq core, the steps involved in doing a “switch” between cores, and some of the optimisations made to the interactive governor.

The session will conclude by presenting the results that we obtained as well as a brief overview of Linaro’s upstreaming plan.

Always Innovating has announced a new product, the MeCam, a self-video nano copter to point-and-shoot yourself. The MeCam launches from the palm of a hand and hovers instantly. This talk will review the lessons learned during the design of this product:

  1. hardware – CPU: the choice and the different trade-offs involved with this selection.
  2. hardware – sensors: the complete list of the 14 sensors, their advantages and drawbacks.
  3. software – core: the architecture of the Linux-based system and the key challenges.
  4. software – stabilization algorithm: the experience gained tuning the different algorithms participating in the self-hovering.

This talk targets developers with good expertise in both hardware and software. No deep knowledge of a specific field is mandatory, but a serious understanding of ARM and the Linux kernel is a plus.

Since the Completely Fair Scheduler (CFS), the default scheduler of the mainline Linux kernel, was introduced in kernel 2.6.23, its remarkable performance has meant we have paid little attention to improving the scheduler. In this presentation, we will show the limitations of CFS, namely unsatisfactory fairness among cores and long response times for interactive tasks, through some experimental results. We will then explain an example scenario for addressing this weak point in multicore environments.

Sometimes you may encounter a segmentation fault in malloc or free. It looks like a bug in the malloc library, but in most cases it is not: some other part of the program has corrupted the heap management area. It is very hard to tell which code actually corrupts the heap if the process is large and uses many libraries and threads.
In this session I will show some tips for troubleshooting heap problems:

  1. tips for the malloc library in glibc
  2. how to hook and replace malloc
  3. using mspace in dlmalloc to separate memory spaces

The expected audience is developers who write code in C/C++ and want to solve problems related to heap memory.

Summary of the proposal:

This talk describes the presenter’s experience with using the Yocto Project, along with various open source layers, to build a digital signage solution from scratch. The presenter covers how various components are used from the oe-core, meta-web-kiosk, meta-security, meta-virtualization, and meta-nuc layers to get a working solution for digital signage. The talk provides a live demo of the solution, along with access to the source code & build environment.

Targeted Audience:

This talk is targeted to the open source development community. The audience can expect to get more knowledge about how they can build their own digital signage solution with the help of the Yocto Project and various open source layers.

olibc is derived from the Bionic libc used in Android, which was initially derived from the NetBSD libc. olibc is expected to merge the enhancements made by several SoC vendors and partners, such as Qualcomm, TI, Linaro, etc., which is the major difference from glibc, uClibc, and other traditional C library implementations. Typically, the code size of the olibc runtime should be about 300 KB. For ARM targets, olibc would benefit from ARMv7-specific features like NEON, Thumb-2, and VFPv3/VFPv4, and from the latest compiler optimization techniques. Also, olibc is released under the BSD License.

Those are just my choices among over 50 sessions. You can check the full schedule to find out which sessions suit you best.

You can register for ELC 2013 online.

There are two types of fees:

  • Professional Fee (If your company is paying for you to attend this event): 550 USD
  • Hobbyist Fee: 100 USD (up from $70 last year, who said there’s no inflation?)

Prior to ELC 2013, you can also attend the Android Builders Summit on February 18 & 19 for $200 extra, and/or Yocto Project Developer Day on February 19 at no additional cost.

Video4Linux: Current Status and Future Work – ELCE 2012

January 17th, 2013 No comments

Hans Verkuil, R&D software engineer at Cisco Systems Norway, talks about Video4Linux status, progress, and plans at the embedded Linux Conference in Barcelona, Spain, on November 7, 2012.

Abstract:

Video4Linux is a fast-changing subsystem where a lot of work is done to support the complex video hardware of embedded systems. This presentation will give an overview of the developments in the past year and the work that is planned for the near future.

Hans covers SoC video devices support, core, control, and videobuf2 frameworks, HDTV timings & events API, video connector support, media controllers, codec & flash support, and more…

You can also download the slides for this presentation. For further details about development, subscribe to linux-media mailing lists or chat on #v4l IRC channel on freenode.