
Posts Tagged ‘opencv’

A Day at Chiang Mai Maker Party 4.0

December 6th, 2017 6 comments

The Chiang Mai Maker Party 4.0 is taking place until December 9, and I went there today, as I was especially interested in the scheduled NB-IoT talk and workshop to find out the status of LPWA in Thailand. But there are many other activities planned, and if you happen to be in Chiang Mai in the next few days, you may want to check out the schedule on the event page or Facebook.

I’m going to go through what I did today to give you a better idea about the event, and even the maker movement in Thailand.


The booths and activity area should be the same over the 4 days, but the talks, open activities, and workshops will be different each day. Today, people could learn how to solder in the activity area.
The event was not really big, with manufacturers/sellers like ThaiEasyElec, INEX, or Gravitech closer to the entrance…


… and slightly higher up in a different zone, companies and makers were showcasing their products or projects. I still managed to spend five interesting hours at the event attending talks and checking out the various projects.

I started my day with a talk entitled “Maker Movement in South East Asia” presented by William Hooi, previously a teacher, who founded One Maker Group and set up the first makerspace in Singapore, as well as helped introduce the Maker Faire to Singapore from 2012 onwards.


There were three parts to the talk: a history of the Maker movement (worldwide), the maker movement in Singapore, and whether making should be integrated into school curricula.
He explained that at first the government did not know about makers, so it was difficult to get funding, but eventually they jumped on the bandwagon, and are now pouring money into maker initiatives. One thing that surprised me in the talk is that makers used to hide their hobby for fear of being mocked by others, for example one person making an LED jacket, and another working on an Iron Man suit. The people around them would not understand why they would waste their time on such endeavors, but the makerspace and Maker Faire helped them find like-minded people. Some of the micro:bit boards apparently ended up in Singapore, and when I say some, I mean 100,000 units. Another thing that I learned is the concept of a “digital retreat for kids”, where parents send their kids to make things with their hands – for example soldering – without using smartphones or tablets at all, since they are already so accustomed to those devices.

Once I was done with the talk, I walked around, so I’ll report on some of the interesting projects I came across. I may write more detailed posts about some of the items later on.


Falling object detection demo using OpenCV on the software side, a webcam connected to…


an ASUS Tinker Board to handle the fall detection, and an NVIDIA Jetson board for artificial intelligence. If a fall is detected, an alert is sent to the tablet, and the system also interfaces with the Xiaomi Mi Band 2.
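The exact pipeline was not detailed at the booth, but a classic way to prototype this kind of fall detection with OpenCV is background subtraction followed by a bounding box aspect ratio check. The sketch below only illustrates that idea (OpenCV 3.x API, with the thresholds and alert logic being my own placeholder assumptions, not necessarily how the demo was implemented):

// Minimal fall detection sketch: background subtraction + aspect ratio heuristic.
// Thresholds and the "alert" step are placeholders, not the booth demo's code.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                                    // webcam input
    cv::Ptr<cv::BackgroundSubtractorMOG2> bg =
        cv::createBackgroundSubtractorMOG2(500, 16, false);     // model the static scene
    cv::Mat frame, fgmask;

    while (cap.read(frame)) {
        bg->apply(frame, fgmask);                               // extract the moving foreground
        cv::erode(fgmask, fgmask, cv::Mat());                   // clean up noise
        cv::dilate(fgmask, fgmask, cv::Mat(), cv::Point(-1, -1), 2);

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(fgmask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (size_t i = 0; i < contours.size(); ++i) {
            if (cv::contourArea(contours[i]) < 5000) continue;  // ignore small blobs (placeholder threshold)
            cv::Rect box = cv::boundingRect(contours[i]);
            if (box.width > 1.3 * box.height) {                 // a person lying down is wider than tall
                std::cout << "Possible fall detected" << std::endl;
                // a real system would push an alert to the tablet here
            }
        }
    }
    return 0;
}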

Katunyou has also made a more compact product, still based on the Tinker Board, for nursing homes, or private homes where an elderly person may live alone. The person at the stand also organizes Raspberry Pi 3 workshops in Chiang Mai.

I found yet another product based on the Raspberry Pi 3 board. SRAN is a network security device made by Global Tech that reports threats from devices accessing your network using machine learning.


Nordic Technology House showcased a magic mirror based on a Raspberry Pi 3 and a webcam to detect your dance moves, but their actual product shown above is a real-time indoor air quality monitoring system that reports temperature, humidity, CO2, and PM2.5 levels, and sends alerts via LINE if thresholds are exceeded.

One booth had some drones, including the larger one above designed to spray insecticides for the agriculture market.


There was also a large area dedicated to sewing machines, including some smarter ones where you can design embroidery on a tablet before sewing.

There were also a few custom ESP8266 or ESP32 boards, but I forgot to take photos.

The Maker Party is also a good place to go if you want to buy some boards or smart home devices.


Besides the Raspberry Pi Zero W / 3, ESP8266 boards and the ASUS Tinker Board seem to be popular items in Thailand. I could also spot Sonoff wireless switches, and an Amazon Echo Dot, although I could confirm only English is supported, not Thai.

BBC Micro:bit board and accessories can also be bought at the event.


M5Stack modules, and Raspberry Pi 3 Voice Kit were also for sale.


Books about ESP32, Raspberry Pi 3, IoT, and so on are also available in Thai.


But if you can’t read Thai, there was also a choice of books in English about Raspberry Pi, Arduino, Linux for makers, IoT, and so on. I then attended the second talk of the day: “NB-IoT” by AIS, one of the top telco companies in Thailand. Speakers included Phuchong Charoensub, IoT Marketing Specialist, and Pornsak Hanvoravongchai, Device Innovation Manager, among others. They went through various parts including a presentation of AIS’ current M2M business, what IoT will change (e.g. bringing in startups and makers), some technical details about NB-IoT, and the company’s offering for makers.

I’ll go into more details in a separate post tomorrow, but if you want to get started, the good news is that it’s now possible to pre-order a 1,990 THB Arduino shield ($61) between December 6-9, and get it shipped on February 14, 2018. NB-IoT connectivity is free for one year, and will then cost 350 Baht (around $10) per year per device. However, there’s a cost to enable NB-IoT on LTE base stations, so AIS will only enable NB-IoT at some universities and maker spaces, meaning, for example, that I would most certainly not be able to use such a kit from home. An AIS representative told me there’s no roadmap for deployment; it will depend on the business demand for such services.

If you are lucky you may even spot one or two dancing dinosaurs at the event.

JeVois-A33 Linux Computer Vision Camera Review – Part 2: Setup, Guided Tour, Documentation & Customization

November 22nd, 2017 4 comments

Computer Vision, Artificial Intelligence, Machine Learning, etc… are all terms we hear frequently these days. The JeVois-A33 smart machine vision camera, powered by an Allwinner A33 quad core processor, was launched last year on Kickstarter to bring such capabilities to low power, small form factor devices, for example for use in robotics projects.

The company has improved the software since the launch of the project, and has now sent me their tiny Linux camera developer kit for review. I’ve already checked out the hardware and accessories in the first post, and I’ve now had time to test the camera, so I’ll explain how to set it up, test some of the key features via the provided guided tour, and show how it’s possible to customize the camera to your needs with one example.

Getting Started with JeVois-A33

In theory, you could just get started by inserting the micro SD card provided with the camera, connecting it to your computer via the USB cable, and following the other instructions on the website. But to make sure you have the latest features and bug fixes, you’d better download the latest firmware (jevois-image-latest-8G.zip), and flash it to the micro SD card with the multi-platform Etcher tool.

You could also use your own micro SD card, as long as it has 8GB or more capacity. Once this is done, insert the micro SD card into the camera with the fan of the camera and the golden contacts of the micro SD card both facing upwards. Connect the camera to your computer with the provided mini USB to USB cable. I also added the USB power meter to monitor the power consumption for the different use cases, and the USB serial cable to check out output from the console. At least that was the plan, but I got no lights from the camera, and the voltage was reported to be only 4V. Then I read the guide a little more carefully, and found out I had to use a USB 3.0 port, or two USB 2.0 ports for power.

Once I switched to using two USB 2.0 ports from a powered USB 2.0 hub, I could see output from the serial console…

and both green and orange/red LEDs were lit. The instructions to use the JeVois camera are mostly OS agnostic, except for the video capture software. If you are using Windows you can use the free OBS Studio or AMCap programs, and on Mac, select either PhotoBooth or OBS Studio. I’m an Ubuntu user, so instead I installed guvcview:

and ran it using a 640×360 resolution and YUYV format as instructed in the getting started guide:

But then I got no output at all in the app:

The last line above would repeat in a loop. The kernel log (dmesg) also reported a crash linked to guvcview:

Another person had the same problem a few months ago, and it was suggested it may be a USB problem. So I connected the camera directly to two of the USB ports on my tower, and it worked…


The important settings are in the Video Controls tab, where we can change the resolution and frame rate to switch between camera modes, as we’ll see later on.
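Since each module is tied to a specific output resolution and frame rate (as listed in the guided tour), you can also select a mode programmatically rather than through guvcview. A minimal OpenCV 3.x sketch requesting the 640×360 mode used in the getting started guide could look like this:

// Grab frames from the JeVois camera with OpenCV instead of guvcview.
// Requesting a given resolution/framerate selects the matching module
// (here the 640x360 mode mentioned in the getting started guide).
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                      // JeVois enumerates as a UVC webcam
    if (!cap.isOpened()) { std::cerr << "Cannot open camera" << std::endl; return 1; }

    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 360);
    cap.set(cv::CAP_PROP_FPS, 30);

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("JeVois output", frame);       // demo image + overlays drawn by the camera
        if (cv::waitKey(1) == 27) break;          // ESC to quit
    }
    return 0;
}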

But since my tower is under the desk, the USB cable is a bit too short, and the program crashed with the same error message a few minutes later. So I went with my Ubuntu 16.04 laptop instead. Powering the camera via the USB 3.0 port worked until I started deep learning modes, where the camera would stop, causing guvcview to gray out. Finally, I connected the camera to both my USB 3.0 port, and the power bank part of the kit, and the system was then much more stable.


I contacted the company about the issues I had, but they replied this problem was not often reported:

… we have only received very few reports like that but we were able to confirm here using front panel ports on one machine. On my desktop I have a hub too, but usb3 and rated for fast charging (60W power supply for 7+2 ports) and it works ok with jevois. A single usb3 port on my mac laptop is also ok.

So maybe it’s just me with all my cheap devices and accessories…

So three main points to get started:

  1. Update the firmware
  2. Install the camera software
  3. Check power in case of issues / crashes (Both LEDs should be on if the camera is working)

JeVois-A33 Guided Tour

Now that we have the camera running, we can try the different features, and the best way to do so is to download the JeVois Guided Tour (PDF), which will give you an overview of the camera and how it works, as well as examples.


As shown above, the PDF includes information for each module with the name, a link to the documentation, an introduction, an explanation of the display, and, on the top right, the resolution/framerate that can be used to launch a given module. On the following pages, there are example pictures that you can point the camera at.

Some of the modules include:

  • Visual attention – finding interesting things
  • Face and handwritten digit recognition
  • QR-codes and other tags
  • Road detection
  • Object matching
  • Object recognition with deep neural networks
  • Color-based object tracking
  • Moving object detection
  • Record video to the microSD card inside JeVois
  • Motion flow detection
  • Eye tracking
  • and more…

You could print the guide with a color printer, but the easiest way is probably to use two screens, one with the PDF guide open, and the other running the camera application (guvcview, OBS Studio…). I’ve gone through some of the examples in the guided tour in the video below, with the PDF shown on a TV box, and the camera application output shown in the corner of the laptop screen.

That’s a lot of fun, and everything works pretty well most of the time. Some of the tests are quite demanding for such a low power device; for example, the Darknet based “Deep neural scene analysis” module, using 1280×480 @ 15 fps with the ability to recognize multiple object types, would only refresh the results every 2.7 seconds or so.

Documentation & Customization of Salient SURF Module

If you’ve gone through the guided tour, you should now have a good understanding of what the camera is capable of. So now, let’s take one of the modules, and try to adjust it to our needs. I picked the SaliencySURF module, with the documentation available here, for this section of the review. The introduction for the module reads:

Trained by default on blue iLab logo, point JeVois to it and adjust distance so it fits in an attention box.
Can easily add training images by just copying them to microSD card.
Can tune number and size of salient regions, can save regions to microSD to create a training set

So let’s take a few other images (Tux logo), copy them to the micro SD card in the camera, and tune some of the settings.

Ideally, the camera should also be detected as a storage device, so that we can easily copy files and edit parameters. On my computer, it was shown as a UVC camera, a USB ACM device, and a USB storage device when I connected it:

But for some reason, I could not see the /dev/sdb storage after that:

[Update: We can use the jevois-usbsd script to access the camera storage from the host computer / board:

]

So instead I had to take the micro SD card out of the camera, and copy the files to the /modules/JeVois/SaliencySURF/images/ directory in the JEVOIS partition.

The module will process those photos when we start it, and return the name of the file when a match is detected.

We can go back to the SaliencySURF directory to edit the params.cfg file, and change some parameters to determine how strict a match should be, bearing in mind that stricter matching may mean the object is not detected, while looser matching may produce false positives. But this is where it gets a little more complicated, as we’ll see from a subset of the list of parameters.


I cannot understand what half of the parameters are supposed to do. That’s where you can click on the SaliencySURF / Saliency links to access the base documentation, find out how the module works, learn more about each parameter, and easily access the source code for the functions used by the module. That type of documentation is available for all modules used in the JeVois C++ framework, and it’s a very good learning tool for people wanting to know more about computer vision. You’ll have to be familiar with C++ to understand the code, and what it really does, besides learning the jargon and acronyms specific to computer vision or machine learning.

By default, the params.cfg file includes just two lines:

Those are parameters for the ObjectMatcher module, with goodpts corresponding to the range of good matches considered, and distthresh being the maximum distance for a match to be considered good.

I’ve set looser settings in params.cfg:

I saved the file, put the micro SD card back into the camera, and launched guvcview with a 320×288 @ 30 fps resolution/framerate to enter SaliencySURF mode.


Oops, it’s seeing Tux logos everywhere, even when there are none whatsoever, so our settings are clearly too loose. So I went back to the default settings, but the results were still similar, so since the distance was shown to be 0.30 in my first attempt, I reduced distthresh to 0.2. False positives are now mostly gone, except for very short periods of time, and it’s now detecting the CNX Tux logo accurately. Note that the green square is for object detection, and the white squares are for saliency zones.

However, it struggles to detect my third Tux logo repeatedly, often falling back to the CNX Tux logo.

But as you can see from the green square, the detection was done on the left flap of the penguin. That’s because SaliencySURF detection is done in a fixed size zone (64×64 pixels by default), so camera distance, or the size of the zone, matters. You can change the size of the salient regions with the SaliencySURF rsiz parameter, which defines the height and width of the square in pixels. When I did the test, I first tried to detect it from the list of Tux images from a DuckDuckGo search, but it was too small and blurry. After switching to a bigger photo, the cable was too short to cover the logo, so instead I copied it to GIMP and resized it so that it could fit in the 64×64 square while using the camera, and in this case detection worked reasonably well.

The more you use the camera, the better you’ll be at understanding how it works, and leverage its capabilities.

Final Words

The JeVois-A33 camera is an inexpensive way to get started with computer vision and deep learning, with excellent documentation, and if you put in the effort, you’ll even understand how it works at the source code level. It’s also fun to use, with many different modules to try. I have not tried it in this review due to time limitations, but you could also connect the camera to an Arduino board controlling a robot (cat chasing bot anyone?) via the serial interface.

The main challenges you may face while getting started are:

  1. Potential crashes due to power issues, but that’s solvable, and a power issues troubleshooting guide has even been published
  2. For robotics projects, you have to keep in mind there will be some lag for some modules, for example from 500ms (single object) to 3 seconds (YOLO test with multiple object types) for deep learning algorithms. Other modules, such as ArUco marker detection, offer close to real-time performance however.

Bear in mind all processing is done by the Allwinner A33 CPU cores, as the Mali-400MP GPU is not suitable for GPGPU. As more affordable SoCs with OpenCL/Vulkan capable GPUs (e.g. Mali-T720), and in some cases even an NNA (Neural Network Accelerator), are launched, we’ll be able to get similar low power smart cameras, but with much better computer vision performance.

The JeVois-A33 can be purchased for $49, but to avoid wasting time with power issues, and to give you more options, I’d recommend going with the JeVois-A33 Developer/Robotics Kit reviewed here, which goes for $99.99 on Amazon, RobotShop, or the JeVois Store.

Dragonwally is a Stereoscopic Computer Vision Mezzanine for 96Boards CE Boards

October 11th, 2017 No comments

Hardware based on the 96Boards specifications may not sell in the same numbers as Raspberry Pi or Orange Pi boards, but it is heavily used by Linaro members and other developers working on bleeding edge software. More and more companies are designing boards compliant with the standard, and several new mezzanine expansion boards, such as Secure96, were showcased at Linaro Connect SFO 2017, and have yet to show up on the 96Boards Mezzanine page.

Another 96Boards mezzanine expansion board in development is Dragonwally, designed for stereoscopic computer vision, currently used with the DragonBoard 410c board, and targeting applications such as object recognition, people counting, access control, or driver identification and safety.

DragonWally DW0 board specifications:

  • MIPI DSI interface with high speed connector
  • 2x 5MP cameras
  • 1x USB port
  • 96Boards CE compliant

The two Brazilian developers working on the project interfaced it with a DragonBoard 410c running Linaro Debian, using OpenCV and Python for computer vision development. To demonstrate the capabilities of the board, they added a touchscreen display for a demo leveraging the Amazon Rekognition API for face recognition and camera distance estimation.

The DragonWally board does not seem to be available yet, nor does the source code for the demo above. If you’d like more information, visit the DragonWally website, or join 96Boards OpenHours #74 tomorrow.

Getting Started with OpenCV for Tegra on NVIDIA Tegra K1, CPU vs GPU Computer Vision Comparison

May 24th, 2017 No comments

This is a guest post by Leonardo Graboski Veiga, Field Application Engineer, Toradex Brasil

Introduction

Computer vision (CV) is everywhere – from cars to surveillance and production lines, the need for efficient, low power consumption yet powerful embedded systems is nowadays one of the bleeding edge scenarios of technology development.

Since this is a very computationally intensive task, running computer vision algorithms in an embedded system CPU might not be enough for some applications. Developers and scientists have noticed that the use of dedicated hardware, such as co-processors and GPUs – the latter traditionally employed for graphics rendering – can greatly improve CV algorithms performance.

In the embedded scenario, things usually are not as simple as they look. Embedded GPUs tend to be different from desktop GPUs, thus requiring many workarounds to get extra performance from them. A good example of a drawback of embedded GPUs is that they are hardly supported by OpenCV – the de facto standard library for computer vision – thus requiring a big effort from the developer to achieve some performance gains.

The silicon manufacturers are paying attention to the growing need for graphics and CV-oriented embedded systems, and powerful processors are being released. This is the case with the NVIDIA Tegra K1, which has a built-in GPU using the NVIDIA Kepler architecture, with 192 cores and a processing power of 325 GFLOPS. In addition, this is one of the very few embedded GPUs in the market that supports CUDA, a parallel computing platform from NVIDIA. The good news is that OpenCV also supports CUDA.

And this is why Toradex has decided to develop a System on Module (aka Computer on Module) – the Apalis TK1 – using this processor. In it, the K1 SoC Quad Core ARM Cortex-A15 CPU runs at up to 2.2GHz, interfaced to 2GB DDR3L RAM memory and a 16GB 8-bit eMMC. The full specification of the CoM can be found here.

The purpose of this article is to install the NVIDIA JetPack on the Apalis TK1 System on Module, thus also installing OpenCV for Tegra, and trying to assess how much effort is required to code some simple CV application accelerated by CUDA. The public OpenCV is also tested using the same examples, to determine if it is a viable alternative to the closed-source version from NVIDIA.

Hardware

The hardware employed in this article consists of the Apalis TK1 System on Module and the Apalis Evaluation Board. The main features of the Apalis TK1 have been presented in the introduction, and regarding the Apalis Evaluation Board, we will use the DVI output to connect to a display and the USB ports to interface a USB camera and a keyboard. The Apalis TK1 is presented in figure 1 and the Apalis Evaluation Board in figure 2:

Figure 1 – Apalis TK1 – Click to Enlarge

Figure 2 – Apalis Evaluation Board – Click to Enlarge

System Setup

NVIDIA already provides an SDK package – the NVIDIA JetPack – that comes with all tools that are supported for the TK1 architecture. It is an easy way to start developing applications with OpenCV for Tegra support. JetPack also provides many source code samples for CUDA, VisionWorks, and GameWorks. It also installs the NVIDIA Nsight, an IDE that is based on Eclipse and can be useful for debugging CPU and GPU applications.

OpenCV for Tegra is based on version 2.4.13 of the public OpenCV source code. It is closed-source but free to use and benefits from NEON and multicore optimizations that are not present in the open-source version; on the other hand, the non-free libraries are not included. If you want or need the open-source version, you can find more information on how to build OpenCV with CUDA support here – these instructions were followed and the public OpenCV 2.4.13 was also tested during this article’s development.

Toradex provides an article in the developer website with concise information describing how to install JetPack on the Apalis TK1.

Regarding hardware, it is recommended that you have a USB webcam connected to the Apalis Evaluation Board, because the samples tested in this article often need a video source as input.

OpenCV for Tegra

After you have finished installing the NVIDIA JetPack, OpenCV for Tegra will already be installed on the system, as well as the toolchain required for compilation on the target. You must have access to the serial terminal by means of a USB to RS-232 adapter or an SSH connection.

If you want to run Python code, an additional step on the target is required:

The easiest way to check that everything works as expected is to compile and run some samples from the public OpenCV repository since it already has the Cmake configuration files as well as some source code for applications that make use of CUDA:

We can begin testing a Python sample, for instance, the edge detector. The running application is displayed in figure 3.

Figure 3 – running Python edge detector sample – Click to Enlarge

After the samples are compiled, you can try some of them. A nice choice is the “background/foreground segmentation” samples, since they are available with and without GPU support. You can run them from the commands below, as well as see the results in figures 4 and 5.

Figure 4 – running bgfg_segm CPU sample – Click to Enlarge

Figure 5 – running bgfg_segm GPU sample – Click to Enlarge

By running both samples it is possible to subjectively notice the performance difference. The CPU version has more delay.
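The difference between the two samples essentially boils down to where the frames live and which class performs the segmentation. A stripped-down, OpenCV 2.4-style comparison (an illustrative sketch, not the actual bgfg_segm sample code) looks roughly like this:

// Background/foreground segmentation, CPU vs GPU, OpenCV 2.4 style.
// Illustrative sketch only; the real samples add display, timing and options.
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, fgmask;

    cv::BackgroundSubtractorMOG2 mog2_cpu;   // CPU implementation
    cv::gpu::MOG2_GPU mog2_gpu;              // CUDA implementation
    cv::gpu::GpuMat d_frame, d_fgmask;

    while (cap.read(frame)) {
        mog2_cpu(frame, fgmask);             // CPU: everything stays in host memory

        d_frame.upload(frame);               // GPU: copy the frame to device memory first
        mog2_gpu(d_frame, d_fgmask);         // segmentation runs on the Kepler GPU
        d_fgmask.download(fgmask);           // copy the mask back for display/use
    }
    return 0;
}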

Playing Around

After having things set up, the question comes: how easy is it to port an application from CPU to GPU, or even start developing with GPU support? It was decided to play around a little with the Sobel application that is well described in the Sobel Derivatives tutorial.

The purpose is to check if it’s possible to benefit from CUDA out-of-the-box, therefore only the function getTickCount from OpenCV is employed to measure the execution time of the main loop of the Sobel implementations. You can use the NVIDIA Nsight for advanced remote debugging and profiling.
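The timing pattern itself is straightforward; here is a minimal, self-contained illustration of the getTickCount approach (with a stand-in workload, not the code from the article’s repository):

// Measuring a processing loop with OpenCV's tick counter.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat img(480, 640, CV_8UC1), out;
    cv::randu(img, 0, 255);                                    // stand-in for a captured frame

    const int iterations = 500;
    int64 start = cv::getTickCount();
    for (int i = 0; i < iterations; ++i)
        cv::GaussianBlur(img, out, cv::Size(3, 3), 1.0);       // stand-in for the Sobel pipeline
    double total_ms = (cv::getTickCount() - start) * 1000.0 / cv::getTickFrequency();

    std::cout << "average per iteration: " << total_ms / iterations << " ms" << std::endl;
    return 0;
}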

The Code

The first code is run completely on the CPU, while in the first attempt to port to GPU (the second code, which will be called CPU-GPU), the goal is to try to find functions analogous to the CPU ones, but with GPU optimization. In the last attempt to port, some improvements are made, such as creating filter engines, which reduces buffer allocation, and finding a way to replace the CPU function convertScaleAbs with GPU accelerated functions.

A diagram describing the loop for the three examples is provided in figure 6.

Figure 6 – CPU / CPU-GPU / GPU main loop for Sobel implementations

The main loop for the three applications tested is presented below. You can find the full source code for them on Github:

  • CPU only code:
  • CPU-GPU code:
  • GPU code
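Since the listings themselves are hosted on GitHub and not reproduced here, the following condensed sketch only gives an idea of what the CPU pipeline and a first, naive GPU port of the Sobel Derivatives tutorial look like with the OpenCV 2.4 API (my own approximation, not the repository code; the optimized GPU version additionally reuses buffers and filter engines):

// Condensed Sobel gradient pipeline, CPU version vs a naive GPU port (OpenCV 2.4 API).
// Grayscale conversion is done before the blur so the GPU filter only sees
// single-channel images; this is an illustrative sketch, not the article's code.
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

void sobel_cpu(const cv::Mat& src, cv::Mat& grad) {
    cv::Mat gray, blurred, gx, gy, agx, agy;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(3, 3), 1.0);
    cv::Sobel(blurred, gx, CV_16S, 1, 0);          // d/dx
    cv::Sobel(blurred, gy, CV_16S, 0, 1);          // d/dy
    cv::convertScaleAbs(gx, agx);                  // back to 8-bit
    cv::convertScaleAbs(gy, agy);
    cv::addWeighted(agx, 0.5, agy, 0.5, 0, grad);
}

void sobel_gpu_naive(const cv::Mat& src, cv::Mat& grad) {
    cv::gpu::GpuMat d_src(src);                    // host-to-device copy
    cv::gpu::GpuMat d_gray, d_blur, d_gx, d_gy, d_absx, d_absy, d_agx, d_agy, d_grad;
    cv::gpu::cvtColor(d_src, d_gray, CV_BGR2GRAY);
    cv::gpu::GaussianBlur(d_gray, d_blur, cv::Size(3, 3), 1.0);
    cv::gpu::Sobel(d_blur, d_gx, CV_16S, 1, 0);
    cv::gpu::Sobel(d_blur, d_gy, CV_16S, 0, 1);
    // There is no gpu convertScaleAbs in 2.4: take the absolute value,
    // then convert back to 8-bit.
    cv::gpu::abs(d_gx, d_absx);
    cv::gpu::abs(d_gy, d_absy);
    d_absx.convertTo(d_agx, CV_8U);
    d_absy.convertTo(d_agy, CV_8U);
    cv::gpu::addWeighted(d_agx, 0.5, d_agy, 0.5, 0, d_grad);
    d_grad.download(grad);                         // device-to-host copy is not free
}

int main(int argc, char** argv) {
    cv::Mat src = cv::imread(argc > 1 ? argv[1] : "input.jpg");
    if (src.empty()) return 1;
    cv::Mat grad_cpu, grad_gpu;
    sobel_cpu(src, grad_cpu);
    sobel_gpu_naive(src, grad_gpu);
    cv::imwrite("grad_cpu.jpg", grad_cpu);
    cv::imwrite("grad_gpu.jpg", grad_gpu);
    return 0;
}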

The Tests

  • Each of the three examples is executed using a random picture in jpeg format as input.
  • The input pictures dimensions in pixels that were tested are: 3483×2642, 2122×1415, 845×450 and 460×290.
  • The main loop is being iterated 500 times for each run.
  • All of the steps described in figure 6 have their execution time measured. This section will present the results.
  • Therefore there are 12 runs total.
  • The numbers presented in the results are the average values of the 500 iterations for each run.

The Results

The results presented are the total time required to execute the main loop – with and without image capture and display time, available in tables 1 and 2 – and the time each task takes to be executed, which is described in figures 7, 8, 9 and 10. If you want to have a look at the raw data or reproduce the tests, everything is in the aforelinked GitHub repository.

Table 1 – Main loop execution time, in milliseconds

Table 2 – Main loop execution time, discarding read and display image times, in milliseconds

Figure 7 – execution time by task – larger image (3483×2642 pixels) – Click to Enlarge

Figure 8 – execution time by task – large image (2122×1415 pixels) – Click to Enlarge

Figure 9 – execution time by task – small image (845×450 pixels) – Click to Enlarge

Figure 10 – execution time by task – smaller image (460×290 pixels) – Click to Enlarge

The Analysis

Regarding OpenCV for Tegra in comparison to the public OpenCV, the results point out that OpenCV for Tegra has been optimized, mostly for some CPU functions. Even when discarding the image read – which takes a long time to be executed, and has approximately a 2x gain – and display frame execution times, OpenCV for Tegra still bests the open-source version.

When considering only OpenCV for Tegra, from the tables, it is possible to see that using GPU functions without care might even make the performance worse than using only the CPU. Also, it is possible to notice that, for these specific implementations, the GPU is better for large images, while the CPU is best for small images. When there is a tie, it would be nice to have a power consumption comparison, which hasn’t been done, and to also consider the fact that this GPU code is not optimized as well as it could be.

Looking at the figures 7 to 10, it can be seen that the Gaussian blur and scale conversion from 16 bits to 8 bits had a big boost when running on GPU, while conversion of the original image to grayscale and the Sobel derivatives had their performance degraded. Another point of interest is the fact that transferring data from/to the GPU has a high cost, and this is, in part, one of the reasons why the first GPU port was unsuccessful – it had more copies than needed.

Regarding image size, it can be noticed that the image read and display have an impact in overall performance that might be relevant depending on the complexity of the algorithm being implemented, or how the image capture is being done.

There are probably many ways to try and/or make this code more optimized, be it by only using OpenCV; by combining custom CUDA functions with OpenCV; by writing the application fully in CUDA; or by using another framework or tool such as VisionWorks.

Two points that might be of interest regarding optimization still in OpenCV are the use of streams – asynchronous execution of code on the CPU/GPU – and zero-copy or shared memory, since the Tegra K1 has CPU and GPU shared memory supported by CUDA (see this NVIDIA presentation from GPU Technology Conference and this NVIDIA blog post for reference).
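For reference, a gpu::Stream in OpenCV 2.4 is used roughly as below (a minimal sketch; truly asynchronous copies additionally require page-locked host memory, and not every function accepts a stream argument):

// Queueing copies and a kernel on a cv::gpu::Stream (OpenCV 2.4).
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main() {
    cv::Mat frame = cv::imread("input.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (frame.empty()) return 1;

    cv::gpu::Stream stream;
    cv::gpu::GpuMat d_src, d_dst;
    cv::Mat result(frame.size(), frame.type());

    stream.enqueueUpload(frame, d_src);                       // host-to-device copy on the stream
    cv::gpu::threshold(d_src, d_dst, 128, 255,
                       cv::THRESH_BINARY, stream);            // kernel queued on the same stream
    stream.enqueueDownload(d_dst, result);                    // device-to-host copy on the stream
    // The CPU is free to do other work here while the GPU stream executes.
    stream.waitForCompletion();                               // block until the result is ready

    cv::imwrite("thresholded.jpg", result);
    return 0;
}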

Conclusion

In this article, the installation of the NVIDIA JetPack SDK and deployment on the Toradex Apalis TK1 have been presented. Having this tool installed, you are able to use OpenCV for Tegra, thus benefiting from all of the optimizations provided by NVIDIA. The JetPack SDK also provides many other useful contents, such as CUDA, VisionWorks and GameWorks samples, and the NVIDIA Nsight IDE.

In order to assess how easy it is for a developer freshly introduced to the CV and GPU concepts to take advantage of CUDA, purely using OpenCV optimized functions, a CPU to GPU port of a Sobel filter application was written and tested. From this experience, some interesting results were found, such as the fact that the GPU indeed improves performance, and that the magnitude of this improvement depends on a series of factors: the size of the input image, the quality of the implementation (or developer experience), the algorithms being used, and the complexity of the application.

Having a myriad of sample source code, it is easy to start developing your own applications, although care is required in order to make the Apalis TK1 System on Module yield its best performance. You can find more development information in the NVIDIA documentation, as well as the OpenCV documentation. Toradex also provides documentation about Linux usage in its developer website, and has a community forum. Hope this information was helpful, see you next time!

Embedded Systems Conference 2017 Schedule – May 3-4

April 5th, 2017 No comments

The Embedded Systems Conference 2017 will take place over two days in Boston, US on May 3-4, and the organizers have published the schedule of the event. Even if you’re not going to attend, you’ll often learn something or find new information by just checking out the talks and abstracts, so I’ve created my own virtual schedule with some of the most interesting sessions.

Wednesday, May 3rd

  • 08:00 – 08:45 – Combining OpenCV and High Level Synthesis to Accelerate your FPGA / SoC EV Application by Adam Taylor, Adiuvo Engineering & Training Ltd

This session will demonstrate how you can combine commonly used open source frameworks such as OpenCV with High Level Synthesis to generate an embedded vision system using an FPGA / SoC. The combination of OpenCV and HLS allows for a much faster algorithm development time and consequently a faster time to market for the end application.

  • 09:00 – 09:45 – Understanding the ARM Processor Roadmap by Bob Boys,   Product Manager, ARM

In 2008, the ARM processor ranged from the 32-bit ARM7 to the Cortex-A9. There were only three Cortex-M processors. Today the roadmap has extended up to the huge 64-bit Cortex-A72, down to the tiny Cortex-M0, and out to include, in winter 2016, the new TrustZone for ARMv8-M.

The ARM roadmap, in order to effectively service many markets, has grown rather complicated. This presentation will explain the ARM roadmap and offer insights into its features. Questions answered include where processors should be used and sometimes where it makes more sense to use a different processor as well as different instruction and core feature sets.

This will start at ARM 7 TDMI and how and why ARM turned into the Cortex family. Each of the three components: Application (Cortex-A), Real-Time (Cortex-R) and Microcontroller (Cortex-M) will be explained in turn.

  • 10:00 – 10:45 – Mixed Signal Analysis: digital, analog and RF by Mike Borsch,  Application Engineer, Rohde & Schwarz

Embedded systems increasingly employ both digital, analog and RF signals. Debugging and analyzing these systems can be challenging in that one needs to measure a number of different signals in one or more domains simultaneously and with tight time synchronization. This session will discuss how a digital oscilloscope can be used to effectively debug these systems, and some of the instrumentation challenges that go along with this.

  • 11:00 – 11:45 – Panel Discussion: The Extinction of the Human Worker? – The Future Role of Collaborative Robots in Smart Manufacturing
  • 12:00 – 12:45 – How Will MedTech Fare in our New Public Policy Environment by Scott Whittaker, President & Chief Executive Officer, Advanced Medical Technology Association (AdvaMed)
  • 13:00 – 13:45 – Embedded Systems Safety & Security: Dangerous Flaws in Safety-Critical Device Design by Michael Barr, Co-founder and CTO, Barr Group

When safety-critical devices come online, it is imperative that the devices are not only safe but also secure. Considering the many security concerns that exist in the IoT landscape, attacks on connected safety-critical devices are to be expected and the results could be deadly. By failing to design security into dangerous devices, too many engineers are placing life and limb at risk. Join us for a look at related industry trends and a discussion of how we can work together to put future embedded systems on a more secure path.

  • 14:00 – 14:45 – Intel EPID: An IoT ID Standard for Device Authentication & Privacy by Jennifer Gilburg, Director IoT Identity, Intel Platform Security Division

Approved as a TCG & ISO direct anonymous attestation method and open sourced by Intel—EPID (Enhanced Privacy ID) is a proven solution that has been shipped in over 2.5 billion processors since 2008. EPID authenticates platform identity through remote attestation using asymmetric cryptography with security operations protected in the processors isolated trusted execution environment. With EPID, a single public key can have multiple private keys (typically millions). Verifiers authenticate the device as an anonymous member of the larger group, which protects the privacy of the user and prevents attack maps that can be created from traditional PKI authentication. Learn how to utilize or embed EPID in a device and discover the wide range of use cases EPID enables for IoT including 0 touch secure onboarding to IoT control platforms.

  • 15:00 – 15:45 – Building A Brain With Raspberry Pi and Zulu Embedded JVM by Simon Ritter, Deputy CTO, Azul Systems

Machine and deep learning are very hot topics in the world of IT at the moment with many projects focusing on analyzing big data to make ‘intelligent’ decisions.

In this session, we’ll use a cluster of Raspberry Pis running Azul’s Zulu embedded JVM to build our very own brain. This will use a variety of programming techniques and open source libraries to emulate a brain in learning and adapting to data that is provided to it to solve problems. Since the Raspberry Pi makes connecting sensors straightforward we’ll include some of these to provide external stimulus to our artificial brain.

We’ll conclude with a demonstration of our brain in action learning and adapting to a variety of input data.

  • 16:00 – 16:45 – Vulnerabilities in IoT: Insecure Design Patterns and Steps to Improving Device Security by M. Carlton, VP of Research, Senrio

This talk will explore vulnerabilities resulting from insecure design patterns in internet-connected embedded devices using real-world examples. In the course of our research, we have observed a pattern of vendors incorporating remote configuration services, neglecting tamper proofing, and rampantly re-using code. We will explore how these design flaws resulted in vulnerabilities in a remote power supply, a web camera, and a router. This talk is intended for a wide audience, as these insecure design patterns exist across industries and market segments. Attendees will get an inside view into how attackers operate and walk away with an understanding of what must be done to improve the security of embedded devices.

Thursday, May 4th

  • 08:00 – 08:45 – Heterogeneous Software Architecture with OpenAMP by Shaun Purvis, Embedded Systems Specialist, Hardent

Single, high-performance embedded processors are often not adequate to meet today’s system-on-chip (SoC) demands for sustained high-performance and efficiency. As a result, chips increasingly feature multiple processor types to deliver flexible compute power, real-time features and energy conservation requirements. These so called heterogeneous multiprocessor devices yield an extremely robust SoC, but also require a more complex software architecture capable of orchestrating multiple dissimilar processors.

This technical session introduces the OpenAMP software framework designed to facilitate asynchronous multiprocessing (AMP) in a vendor agnostic manner. OpenAMP can be leveraged to run different software platforms concurrently, such as Linux and an RTOS, on different processors within the same SoC whether homogeneous (multi-core), or heterogeneous (multi-processor), or a combination of both.

  • 09:00 – 09:45 – How to Build Products Using Open Platform Firmware by Brian Richardson,  Technical Evangelist, Intel Corporation

Open hardware platforms are great reference designs, but they’re often not considered “product ready” due to debug features built into the firmware… but a few firmware changes can turn an open hardware board into a production-quality platform.

This session demonstrates how to optimize firmware for product delivery, using the MinnowBoard Max as a practical example, by disabling debug interfaces and optimizing the platform for an embedded software payload. Examples are also given for enabling signed firmware updates and secure firmware recovery, based on industry standard UEFI firmware.

  • 10:00 – 10:45 – Understanding Modern Flash Memory Systems by Thomas McCormick, Chief Engineer/Technologist, Swissbit

This session presents an in-depth look at the internals of modern flash memory systems. Specific focus is given to technologies that enable current generations of flash memory, both SLC and MLC, using < 30 nm process technologies to provide reliable code and data storage in embedded computer applications.

  • 11:00 – 11:45 – Implementing Secure Software Systems on ARMv8-M Microcontrollers by Chris Shore,  Director, Technical Marketing, ARM

Microcontrollers incorporating ARM TrustZone technology for ARMv8-M are here! Now, software engineers developing on ARM Cortex-M processors have access to a level of hardware security which has not been available before. These features provide a clear separation between secure and non-secure code, and between secure and non-secure data.

This presentation shows how software developers can write secure code which takes advantage of new hardware features in the architecture, drastically reducing the attack surface. Writing software carefully builds on those hardware features, avoiding bugs and/or holes which could compromise the system.

  • 12:00 – 12:30 – Keynote: State of the Medical Device Industry by Frost & Sullivan
  • 13:00 – 13:45 – Enabling the Next Era of Human Space Exploration by Jason Crusan, Director of the Advanced Exploration Systems Division within the Human Exploration and Operations Mission Directorate, NASA

Humankind is making plans to extend its reach further into the solar system than ever before. As human spaceflight moves beyond low Earth orbit, NASA’s Advanced Exploration Systems is developing innovative tools to drive these new efforts and address the challenges that arise. Innovative technologies, simulations and software platforms related to crew and robotic autonomous operations, logistics management, vehicle systems automation, and life support systems management are being developed. This talk will outline the pioneering approaches that AES is using to develop prototype systems, advance key capabilities, and validate operational concepts for future human missions beyond Earth orbit.

  • 14:00 – 14:45 – Common Mistakes by Embedded System Designers: What They Are and How to Fix Them by Craig Hillman, CEO, DfR Solutions

Embedded system design is a multilevel engineering exercise. It requires synergy between software, electrical and mechanical engineers with the goal to create a system that meets customer requirements while remaining within budget and on time.

The propagation of embedded systems has been extremely successful. Many appliances today contain embedded systems. As an example, many fuel pumps contain single board computers whose sole purpose is credit transactions. Some companies doing positive train control (PTC) use ARM/RISC and ATOM based computer modules. And embedded systems are currently dominating the Internet of Things (IoT) space (ex. mobile gateways).

However, all of this success can tend to mask the challenges of designing a successful embedded system. These challenges are expected to increase dramatically with the integration of embedded systems into IoT applications, where environments can be much more severe than standard home / office installations.

This course presents the fundamentals of designing a reliable embedded device and the most common pitfalls encountered by the system designer.

  • 15:00 – 15:45 – Porting to 64-bit on ARM by Chris Shore, Director, Technical Marketing, ARM

The ARMv8-A architecture introduces 64-bit capability to the most widely used embedded architecture in the world today. Products built to this architecture are now mainstream and widely available. While they are capable of running legacy 32-bit software without recompilation, clearly developers will want to make maximum use of the increased and expanded capability offered by these processors.

This presentation examines the steps necessary in porting current 32-bit ARM software to the new 64-bit execution state. I will cover C porting, assembly language porting and implementation of hand-coded SIMD routines.


If you want to attend ESC ’17, you’ll need to register. The EXPO pass is free if you book in advance, and gives you access to the design and manufacturing suppliers booths, but won’t allow you to attend most of the talks (except sponsored ones), while the conference pass gives you access to all sessions including workshops and tutorials, as well as complimentary lunch vouchers.

Pass type                                   Conference Pass    EXPO Pass
Super Early Bird (ends March 31st, 2017)    $949               Free
Standard (ends May 2nd, 2017)               $1,149             Free
Regular / Onsite                            $1,299             $75

Open Source ARM Compute Library Released with NEON and OpenCL Accelerated Functions for Computer Vision, Machine Learning

April 3rd, 2017 12 comments

GPU compute promises to deliver much better performance than CPU compute for applications such as computer vision and machine learning, but the problem is that many developers may not have the right skills or time to leverage APIs such as OpenCL. So ARM decided to write their own ARM Compute Library, and has now released it under an MIT license.

The functions found in the library include:

  • Basic arithmetic, mathematical, and binary operator functions
  • Color manipulation (conversion, channel extraction, and more)
  • Convolution filters (Sobel, Gaussian, and more)
  • Canny Edge, Harris corners, optical flow, and more
  • Pyramids (such as Laplacians)
  • HOG (Histogram of Oriented Gradients)
  • SVM (Support Vector Machines)
  • H/SGEMM (Half and Single precision General Matrix Multiply)
  • Convolutional Neural Networks building blocks (Activation, Convolution, Fully connected, Locally connected, Normalization, Pooling, Soft-max)

The library works on Linux, Android, or bare metal on the armv7a (32-bit) or arm64-v8a (64-bit) architectures, and makes use of NEON, OpenCL, or NEON + OpenCL. You’ll need an OpenCL capable GPU for the OpenCL path, so Mali-4xx GPUs won’t be fully supported, and you need an SoC with a Mali-T6xx, T7xx, T8xx, or G71 GPU to make full use of the library, except for the NEON-only functions.
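I have not tested the library myself, but going by the examples bundled with the release, using one of the NEON functions follows a configure-then-allocate-then-run pattern along these lines (a sketch assuming the NEGaussian5x5 function from the initial release; not validated on hardware):

// Sketch of the ARM Compute Library configure/run pattern (NEON path).
// Assumes the NEGaussian5x5 function and the Tensor/TensorInfo runtime API
// from the initial open source release.
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"
#include "arm_compute/core/Types.h"

using namespace arm_compute;

int main() {
    Tensor src, dst;

    // Describe two 640x480 single-channel (U8) images.
    src.allocator()->init(TensorInfo(640, 480, Format::U8));
    dst.allocator()->init(TensorInfo(640, 480, Format::U8));

    // Configure the function first: this validates shapes and selects kernels.
    NEGaussian5x5 gauss;
    gauss.configure(&src, &dst, BorderMode::UNDEFINED);

    // Allocate the tensor backing memory only after configuration.
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src with image data here (e.g. copied from a captured frame) ...

    gauss.run();   // executes the NEON kernels

    return 0;
}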

In order to showcase their new library, ARM compared its performance to the OpenCV library on a Huawei Mate 9 smartphone with a HiSilicon Kirin 960 processor and an ARM Mali-G71MP8 GPU.

ARM Compute Library vs OpenCV, single-threaded, CPU (NEON)

Even with some NEON acceleration in OpenCV, convolutions and SGEMM functions are around 15 times faster with the ARM Compute Library. Note that ARM selected a hardware platform with one of their best GPUs, so while it should still be faster on other OpenCL capable ARM GPUs, the difference will be smaller, though still significant, i.e. several times faster.

ARM Compute Library vs OpenCV, single-threaded, CPU (NEON)

The performance boost for other functions is not quite as impressive, but the Compute Library is still 2x to 4x faster than OpenCV.

While the open source release was just about three weeks ago, the ARM Compute Library had already been utilized by several embedded, consumer and mobile silicon vendors and OEMs before it was open sourced, for applications such as 360-degree camera panoramic stitching, computational camera, virtual and augmented reality, segmentation of images, feature detection and extraction, image processing, tracking, stereo and depth calculation, and several machine learning based algorithms.

JeVois-A33 is a Small Quad Core Linux Camera Designed for Computer Vision Applications (Crowdfunding)

December 27th, 2016 8 comments

The JeVois Neuromorphic Embedded Vision Toolkit – developed at iLab at the University of Southern California – is an open source software framework to capture and process images through a machine vision algorithm, primarily designed to run on embedded camera hardware, but also supporting Linux boards such as the Raspberry Pi. A compact Allwinner A33 based camera has now been designed to run the software, for use in robotics and other projects requiring a lightweight and/or battery powered camera with computer vision capabilities.

JeVois-A33 camera specifications:

  • SoC – Allwinner A33  quad core ARM Cortex A7 processor @ 1.35GHz with  VFPv4 and NEON, and a dual core Mali-400 GPU supporting OpenGL-ES 2.0.
  • System Memory – 256MB DDR3 SDRAM
  • Storage – micro SD slot for firmware and data
  • 1.3MP camera capable of video capture at
    • SXGA (1280 x 1024) up to 15 fps (frames/second)
    • VGA (640 x 480) up to 30 fps
    • CIF (352 x 288) up to 60 fps
    • QVGA (320 x 240) up to 60 fps
    • QCIF (176 x 144)  up to 120 fps
    • QQVGA (160 x 120) up to 60 fps
    • QQCIF (88 x 72) up to 120 fps
  • USB – 1x mini USB port for power and act as a UVC webcam
  • Serial – 5V or 3.3V (selected through VCC-IO pin) micro serial port connector to communicate with Arduino or other MCU boards.
  • Power – 5V (3.5 Watts) via USB port requires USB 3.0 port or Y-cable to two USB 2.0 ports
  • Misc
    • Integrated cooling fan
    • 1x two-color LED: Green: power is good. Orange: power is good and camera is streaming video frames.
  • Dimensions –  28 cc or 1.7 cubic inches (plastic case included with 4 holes for secure mounting)

The camera runs Linux with drivers for the camera, the JeVois C++17 video capture, processing & streaming framework, OpenCV 3.1, and toolchains. You can either connect it to a host computer’s USB port to check out the camera output (actual image + processed image), or to an MCU board such as Arduino via the serial interface to use machine vision to control robots, drones, or others (see the illustrative Arduino sketch after the list below). Currently three modes of operation are available:

  • Demo/development mode – the camera outputs a demo display over USB that shows the results of its analysis, potentially along with simple data over serial port.
  • Text-only mode – the camera provides no USB output, but only text strings, for example, commands for a pan/tilt controller.
  • Pre-processing mode – The smart camera outputs video that is intended for machine consumption, and potentially processed by a more powerful system.
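On the Arduino side, the text-only mode only requires parsing short text lines from the serial port. A minimal pan/tilt skeleton could look like the sketch below, assuming, for illustration, target messages of the form “T2 x y” with standardized coordinates – the exact message formats and value ranges should be checked against the JeVois documentation:

// Arduino-side skeleton driving a pan/tilt mount from JeVois serial messages.
// The "T2 <x> <y>" format and the -1000..1000 coordinate range are assumptions
// for illustration; check the JeVois documentation for the actual formats.
#include <Servo.h>

Servo pan, tilt;

void setup() {
  Serial.begin(115200);       // hardware serial wired to the JeVois micro serial port
  pan.attach(9);
  tilt.attach(10);
  pan.write(90);
  tilt.write(90);
}

void loop() {
  if (!Serial.available()) return;
  String line = Serial.readStringUntil('\n');
  if (!line.startsWith("T2 ")) return;    // ignore anything but 2D target reports

  int firstSpace = line.indexOf(' ');
  int secondSpace = line.indexOf(' ', firstSpace + 1);
  int x = line.substring(firstSpace + 1, secondSpace).toInt();
  int y = line.substring(secondSpace + 1).toInt();

  // Nudge the servos toward the reported target position.
  pan.write(constrain(map(x, -1000, 1000, 180, 0), 0, 180));
  tilt.write(constrain(map(y, -1000, 1000, 0, 180), 0, 180));
}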

The smart camera can detect motion, track faces and eyes, detect & decode ArUco markers & QR codes, detect & follow lines for autonomous cars, and more. Since the framework is open source, you’ll also be able to add your own algorithms and modify the firmware. Some documentation has already been posted on the project’s website. The best way to see the capabilities of the camera and software is to watch the demo video below.

The project launched on Kickstarter a few days ago with the goal of raising $50,000. A $45 “early backer” pledge should get you a JeVois camera with a micro serial connector with 15cm pigtail leads, while a $55 pledge adds an 8GB micro SD card pre-loaded with the JeVois software, and a 24/28 AWG mini USB Y cable. Shipping is free to the US, but adds $10 to Canada, and $15 to the rest of the world. Delivery is planned for February and March 2017.

AVC8000nano mini PCIe Frame Grabber Captures up to 8 D1 Videos

February 25th, 2016 1 comment

There are plenty of solutions to stream or capture multiple video streams from cameras, for example for security purposes, but usually the equipment is relatively large and heavy. Advanced Micro Peripherals’ AVC8000nano mini PCIe capture card miniaturizes all that thanks to its form factor and its 8 u.FL connectors, used to capture eight D1 videos at full frame rate.


AVC8000nano Connected to Gateworks Ventana SBC and 8 Analog Cameras

AVC8000nano features:

  • Video Inputs
    • 8x Live NTSC/PAL video inputs with 8x 10-bit ADC and anti-aliasing filters
    • 8x D1 size capture at full frame rate
    • Formats – NTSC-M, NTSC-Japan, NTSC (4.43), RS-170, PAL-B,G,N, PAL-D, PAL-H, PAL-I, PAL-M, PAL-CN, PAL-60 SECAM
    • Adjustments – Contrast, saturation, hue (or chroma phase), and brightness. Software adjustable Sharpness, Gamma and noise suppression
  • Video Capture Formats – RGB555, RGB565, YCbCr 4:2:2, YCbCr 4:1:1
  • Windows support with Drivers and DirectShow/DirectDraw
  • Linux with drivers and Video4Linux
  • Form factor – Full height mini PCI Express
  • Temperature Range – Commercial: 0°C to 60°C; Extended: –40°C to +85°C

AVC8000nano Block Diagram

The specifications also mention hardware requirements: “x86 PC-Compatible with mini PCI Express socket”. But as you can see in the first picture, Gateworks managed to make the card work on their Ventana single board computers, powered by Freescale/NXP i.MX6 processors and featuring one or more PCIe connectors, so it’s also suitable for ARM platforms. The company also updated their wiki to show how to use it on their boards with Linux (built with the Yocto Project 1.8) using the AVC8000nano drivers, GStreamer, and optionally OpenCV if you want to stitch multiple inputs together.
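The stitching part is fairly simple to prototype with OpenCV’s high-level Stitcher class; here is a minimal OpenCV 2.4-style sketch (the device indices, camera count, and use of single frames rather than a live pipeline are my own simplifications):

// Stitch frames from several capture inputs into one panorama with OpenCV's
// high-level Stitcher class. Device indices and camera count are placeholders.
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <iostream>
#include <vector>

int main() {
    const int num_inputs = 4;                       // e.g. 4 of the 8 D1 inputs
    std::vector<cv::VideoCapture> caps;
    for (int i = 0; i < num_inputs; ++i)
        caps.push_back(cv::VideoCapture(i));        // /dev/video0 .. /dev/video3

    std::vector<cv::Mat> frames(num_inputs);
    for (int i = 0; i < num_inputs; ++i)
        if (!caps[i].read(frames[i])) { std::cerr << "capture " << i << " failed" << std::endl; return 1; }

    cv::Mat pano;
    cv::Stitcher stitcher = cv::Stitcher::createDefault(false);   // false: CPU-only stitching
    if (stitcher.stitch(frames, pano) != cv::Stitcher::OK) {
        std::cerr << "stitching failed (not enough overlap between views?)" << std::endl;
        return 1;
    }
    cv::imwrite("panorama.jpg", pano);
    return 0;
}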


Stitching with OpenCV

Such solutions can be used for vehicle-based Video Capture, real-time situational awareness, law enforcement, remote video surveillance, traffic monitoring and control, video acquisition & analytics, UAVs,  and more.

You may want to visit the AVC8000nano product page for more details. Although it was launched in 2013, I could not find price information for the capture card.