Archive

Posts Tagged ‘drone’

A Day at Chiang Mai Maker Party 4.0

December 6th, 2017 6 comments

The Chiang Mai Maker Party 4.0 is taking place until December 9, and I went there today, as I was especially interested in the scheduled NB-IoT talk and workshop to find out about the status of LPWA in Thailand. But there are many other activities planned, and if you happen to be in Chiang Mai in the next few days, you may want to check out the schedule on the event page or Facebook.

I’m going to go through what I did today to give you a better idea about the event, or even the maker movement in Thailand.

Click to Enlarge

Booths and the activity area should be the same over the 4 days, but the talks, open activities, and workshops will be different each day. Today, people could learn how to solder in the activity area.
The event was not really big, with manufacturers/sellers like ThaiEasyElec, INEX, or Gravitech closer to the entrance…


… and slightly higher up in a different zone, companies and makers were showcasing their products or projects. I still managed to spend 5 interesting hours at the event, attending talks and checking out the various projects.

I started my day with a talk entitled “Maker Movement in South East Asia” presented by William Hooi, previously a teacher, who founded One Maker Group and set up the first makerspace in Singapore, as well as helped introduce the Maker Faire to Singapore from 2012 onwards.


There were three parts to the talk: a history of the Maker movement (worldwide), the maker movement in Singapore, and whether making should be integrated into school curricula.
He explained that at first the government did not know about makers, so it was difficult to get funding, but they eventually jumped on the bandwagon, and are now pouring money into maker initiatives. One thing that surprised me in the talk is that makers used to hide their hobby for fear of being mocked by others, for example one person making an LED jacket, and another working on an Iron Man suit. The people around them would not understand why they would waste their time on such endeavors, but the makerspace and Maker Faire helped them find like-minded people. Some of the micro:bit boards apparently ended up in Singapore, and when I say some, I mean 100,000 units. Another thing that I learned is the concept of a “digital retreat for kids”, where parents send their kids to make things with their hands – for example soldering – and not use smartphones or tablets at all, since they are already so accustomed to those devices.

Once I was done with the talk, I walked around, so I’ll report on some of the interesting projects I came across. I may write more detailed posts about some of the items later on.

Click to Enlarge

Falling object detection demo using OpenCV on the software side, with a webcam connected to…

Click to Enlarge

an ASUS Tinker Board handling fall detection, and an NVIDIA Jetson board for artificial intelligence. If a fall is detected, an alert is sent to the tablet, and the system also interfaces with the Xiaomi Mi Band 2.
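Out of curiosity, here’s a minimal sketch of how this kind of fall detection could be prototyped with OpenCV: background subtraction plus a bounding-box aspect ratio check. This is my guess at the general approach, not Katunyou’s actual algorithm, and the thresholds are purely illustrative.

```python
# Hypothetical fall-detection sketch (OpenCV 4.x): a blob that becomes much wider
# than it is tall may be a person lying on the floor. Thresholds are illustrative.
import cv2

cap = cv2.VideoCapture(0)                          # webcam
bgsub = cv2.createBackgroundSubtractorMOG2()       # learn the static background
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgsub.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 5000:              # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w > 1.3 * h:                            # wider than tall: possible fall
            print("Possible fall detected")        # real system alerts a tablet
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break

cap.release()
```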

Katunyou has also made a more compact product, still based on the Tinker Board, for nursing homes, or private homes where an elderly person may live alone. The person at the booth also organizes Raspberry Pi 3 workshops in Chiang Mai.

I found yet another product based on the Raspberry Pi 3 board. SRAN is a network security device made by Global Tech that reports threats from devices accessing your network, using machine learning.

Click to Enlarge

Nordic Technology House showcased a magic mirror based on a Raspberry Pi 3 and a webcam to detect your dance moves, but their actual product, shown above, is a real-time indoor air quality monitoring system that reports temperature, humidity, CO2, and PM2.5 levels, and can send alerts via LINE if thresholds are exceeded.
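Incidentally, sending that kind of threshold alert to LINE only takes a few lines with the LINE Notify HTTP API. The sketch below shows the general idea, assuming you have a LINE Notify access token; the token, the threshold, and the sensor-reading stub are placeholders, not the vendor’s actual implementation.

```python
# Hypothetical PM2.5 alert via the LINE Notify API; the token and threshold are
# placeholders, and read_pm25_sensor() is a stub for a real sensor read.
import requests

LINE_TOKEN = "YOUR_LINE_NOTIFY_TOKEN"
PM25_THRESHOLD = 50  # µg/m³, illustrative value

def read_pm25_sensor():
    return 62.0  # stub; replace with an actual sensor reading

def send_line_alert(message):
    requests.post(
        "https://notify-api.line.me/api/notify",
        headers={"Authorization": "Bearer " + LINE_TOKEN},
        data={"message": message},
        timeout=10,
    )

pm25 = read_pm25_sensor()
if pm25 > PM25_THRESHOLD:
    send_line_alert("PM2.5 is at %.1f µg/m³, above %d!" % (pm25, PM25_THRESHOLD))
```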

One booth had some drones, including the larger one above, which sprays insecticides for the agriculture market.

Click to Enlarge

There was also a large area dedicated to sewing machines, including some smarter ones where you can design the embroidery on a tablet before sewing.

There were also a few custom ESP8266 or ESP32 boards, but I forgot to take photos.

The Maker Party is also a good place to go if you want to buy some boards or smart home devices.

Click to Enlarge

Besides the Raspberry Pi Zero W / 3, ESP8266 boards and the ASUS Tinker Board seem to be popular items in Thailand. I could also spot Sonoff wireless switches, and an Amazon Echo Dot, although I could only confirm English is supported, not Thai.

BBC Micro:bit board and accessories can also be bought at the event.


M5Stack modules and the Raspberry Pi 3 Voice Kit were also for sale.

Click to Enlarge

Books about ESP32, Raspberry Pi 3, IoT, and so on, are also available in Thai.

Click to Enlarge

But if you can’t read Thai, there was also a choice of books in English about RPi, Arduino, Linux for Makers, IoT, and so on.

I then attended the second talk of the day: “NB-IoT” by AIS, one of the top telco companies in Thailand. Speakers included Phuchong Charoensub, IoT Marketing Specialist, and Pornsak Hanvoravongchai, Device Innovation Manager, among others. They went through various parts including a presentation of AIS’ current M2M business, what IoT will change (e.g. bring in startups and makers), some technical details about NB-IoT, and the company’s offering for makers.

I’ll go into more detail in a separate post tomorrow, but if you want to get started, the good news is that it’s now possible to pre-order a 1,990 THB Arduino shield ($61) between December 6-9, and get it shipped on February 14, 2018. NB-IoT connectivity is free for one year, and will then cost 350 Baht (around $10) per year per device. However, there’s a cost to enable NB-IoT on LTE base stations, so AIS will only enable NB-IoT at some universities and maker spaces, meaning, for example, that I would most certainly not be able to use such a kit from home. An AIS representative told me they have no roadmap for deployment; it will depend on the business demand for such services.
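I don’t have the shield yet, but NB-IoT modules are normally driven with standard 3GPP AT commands over a serial port, so a first smoke test from a computer might look like the sketch below. The port name, baud rate, and the module’s exact command set are assumptions to check against the shield’s documentation.

```python
# Hypothetical NB-IoT smoke test using standard 3GPP AT commands over serial.
# Port, baud rate, and module behavior are assumptions; check the shield's docs.
import serial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # assumed port and baud

def at(cmd):
    port.write((cmd + "\r\n").encode())
    return port.read(256).decode(errors="replace")

print(at("AT"))         # basic liveness check, expect "OK"
print(at("AT+CSQ"))     # signal quality
print(at("AT+CGATT?"))  # attach status: "+CGATT: 1" means attached to the network
port.close()
```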

If you are lucky you may even spot one or two dancing dinosaurs at the event.

Cheap Evil Tech – WiFi Deauther V2.0 Board and Autonomous Mini Killer Drones

November 24th, 2017 10 comments

Most technological advances usually improve people’s lives, and with costs coming down dramatically over the years, become available to more people. But technology can also be used for bad, for example by governments and some hackers. Today, I’ve come across two cheap hardware devices that could be considered evil. The first one is actually pretty harmless and can be used for education, but disconnects you from your WiFi, which may bring severe psychological trauma to some people, but should not be life threatening, while the other is downright scary with cheap targeted killing machines.

WiFi Deauther V2.0 board

Click to Enlarge

Specifications for this naughty little board:

  • Wireless Module based on ESP8266 WiSoC
  • USB – 1x micro USB port (connector type changed in V2.0, more stable)
  • Expansion – 17-pin header with 1x ADC, 10x GPIOs, power pins
  • Misc – 1x power switch, battery status LEDs
  • Power Supply
    • 5 to 12V via micro USB port
    • Support for 18650 battery with charging circuit (Over-charge protection, over-discharge protection)
  • Dimensions – 10×3 cm

The board is pre-flashed with the open source ESP8266 deauther firmware, which allows you to perform a deauthentication attack with an ESP8266 against selected networks. You can select a target device, and the board will then disconnect that node constantly, either blocking the connection or slowing it down. You don’t need to be connected to the access point, or know its password, for the attack to work. You’ll find more details on how it works on the aforelinked Github page. Note: The project is a proof of concept for testing and educational purposes.

WiFi Deauther V2.0 board can be purchased on Tindie or Aliexpress for $10.80 plus shipping.

A.I. Powered Mini Killer Drones

The good news is that those do not exist yet (at least not for civilians), but the video shows what could happen once people, companies, or governments weaponize face recognition and drone technology to design mini drones capable of targeted killings. You could fit the palm-sized drones with a few grams of explosives (or lethal poison), tell them who to target, and once they found the victim, they would land on the skull and trigger the explosive for an instant kill. Organizations or governments could also have armies of those drones for killings based on metadata obtained from phone records, social media posts, etc… The fictional video shows how those drones could work, and what may happen to society as a consequence.

The technology is already here for such devices. Currently, you could probably get a $400+ DJI Spark drone to handle face recognition, and considering inexpensive $20+ miniature drones and $50 smart cameras are available (but not quite good enough right now), sub-$100 drones with face recognition should be up for sale in a couple of years. The explosive and triggering mechanism would still need to be added, but I’m not privy to those costs… Nevertheless, it should be technically possible to manufacture such machines, even for individuals, for a few hundred dollars. Link to fictitious StratoEnergetics company.

The Future of Life Institute has published an open letter against autonomous weapons that has been signed by various A.I. and robotics researchers, and other notable endorsers, stating that “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”, but it will likely be tough to keep the genie in the bottle.

UFS 3.0 Embedded Flash to Support Full-Duplex 2.4GB/s Transfer Speeds

September 10th, 2017 3 comments

All my devices still rely on eMMC flash for storage, but premium smartphones, for example, make use of UFS 2.0/UFS 2.1 flash storage with performance similar to SSD, with Samsung UFS 2.0 storage achieving up to 850MB/s read speed, 260 MB/s write speed, and 50K/30K R/W IOPS. UFS 3.0 promises to roughly double the performance of UFS 2.0/2.1 with transfer rates of up to 2.4 GB/s, and separately, the UFS Card v2.0 standard should deliver UFS 2.1 performance on removable storage.

Image Source: Next Generation of Mobile Storage: UFS and UFS Card – Click to Enlarge

Several Chinese and Taiwanese websites, including CTimes and Benchlife, have reported that companies have started getting UFS 3.0 & UFS Card v2.0 licenses from JEDEC, and Phison is working on a controller to support both new standards, and scheduled to launch in 2018.

Premium smartphone SoCs are only expected to support UFS 3.0 in 2019 and beyond, and hopefully by that time eMMC will have been replaced by UFS 2.0/2.1 in entry-level and mid-range devices. The outlook for UFS cards is less clear, as I’ve yet to see a product equipped with a UFS card slot.

Click to Enlarge

Based on a recent presentation at the Flash Memory Summit, (typical) embedded storage capacity will also increase to 32GB for IoT / multimedia applications, 256GB for smart home products and drones, 512GB for mobile devices, and over 1TB for automotive applications.

Via Liliputing

Kudrone Nano Drone Shoots “4K” Videos, Follows You With GPS (Crowdfunding)

March 24th, 2017 6 comments

Kudrone is a palm-sized drone equipped with a 4K camera that can follow you around for up to 8 minutes thanks to its 650 mAh battery, by tracking your smartphone location via GPS. You can also take matters into your own hands by piloting the drone with your smartphone.

The drone also includes various sensors such as an accelerometer, a gyroscope, a magnetic compass, a sonar, and a vision positioning sensor enabling features such as auto hovering. Some of the specifications include:

  • Storage – Up to 64GB (micro SD card)
  • Connectivity – 802.11 b/g/n WiFi up to 80 meters
  • GNSS – GPS / GLONASS
  • Camera
    • Sony CMOS 1/3.2″ image sensor (13MP)
    • F/2.8 lens with 100° horizontal, 78.5° vertical, and 120° diagonal field of view
    • Image resolution up to 3280 x 2464
    • Video resolution 4K, 2.7K, 1080p, 720p
  • Flight Parameters – Max altitude – 30 meters; hovering accuracy: +/- 0.1 meter
  • Battery – 650 mAh 1S LiPo battery good for up to 8 minutes of flight (less when the camera and GPS are on)
  • Dimensions – 174 x 174 x 43 mm

It’s a little odd that it records 4K videos while image resolution is limited to 3280×2464: the sensor is not even 3840 pixels wide, so 4K frames must involve some interpolation, and the video quality is unlikely to match what most people would consider “4K”. You can see a video shot with the drone – but apparently not while flying – here, and it is limited to 1080p60 on YouTube.

The Kudrone team provided a comparison pitting their drone against some other cheap nano drones, and some higher-end drones by DJI and Parrot.

iPhone and Android mobile apps will allow you to control the drone, enable/disable features, and sync your photos and videos with your smartphone. The live preview is shown at 720p with a 160 ms delay.

The drone launched on Indiegogo several days ago, and has been pretty popular, having raised close to $700,000 with 21 days to go. All very early bird rewards at $99 are gone, but you can still get the drone for $109 with two propeller sets, a charger, a 16GB micro SD card, two batteries, and a pair of propeller protectors. Shipping adds $9 to the US or China, and $25 to the other countries I checked. Delivery is scheduled for July 2017. The drone is made by a company called Fujian Ruiven Technology, and Kudrone is not their first drone. You may want to check out the update section on Indiegogo to see pictures and video samples, as well as videos of the drone in action, to get a better idea of the drone’s current capabilities.


$80 BeagleBone Blue Board Targets Robots & Drones, Robotics Education

March 14th, 2017 3 comments

Last year, we reported that BeagleBoard.org was working with the University of California San Diego on the BeagleBone Blue board for robotics educational kits such as the EduMiP self-balancing robot, and the EduRover four-wheel robot. The board has finally launched, so we know the full details, and it can be purchased for about $80 on the Mouser, Element14, or Arrow websites.

Click to Enlarge

BeagleBone Blue specifications:

  • SiP (System-in-Package) – Octavo Systems OSD3358 with TI Sitara AM3358 ARM Cortex-A8 processor @ up to 1 GHz, 2×32-bit 200 MHz programmable real-time units (PRUs), PowerVR SGX530 GPU, PMIC, and 512MB DDR3
  • Storage – 4GB eMMC flash, micro SD slot
  • Connectivity – WiFi 802.11 b/g/n, Bluetooth 4.1 LE (TI Wilink 8) with two antennas
  • USB – 1x USB 2.0 client and host port
  • Sensors – 9 axis IMU, barometer
  • Expansion
    • Motor control – 8x 6V servo out, 4x DC motor out, 4x quadrature encoder in
    • Other interfaces – GPIOs, 5x UARTs, 2x SPI, 1x I2C, 4x ADC, CAN bus
  • Misc – Power, reset and 2x user buttons; power, battery level & charger LEDs; 6x user LEDs; boot select switch
  • Power Supply – 9-18V DC input via power barrel; 5V via micro USB port; 2-cell LiPo support with balancing
  • Dimensions & Weight – TBD

The board ships pre-loaded with Debian, but it also supports the Robot Operating System (ROS) & Ardupilot, as well as graphical programming via the Cloud9 IDE on Node.js. You’ll find more details, such as documentation, hardware design files, and example projects, on the BeagleBone Blue product page and GitHub.

The board was formally launched at Embedded World 2017, and Jason Kridner, Open Platforms Technologist/Evangelist at Texas Instruments, and co-founder and board member at the BeagleBoard.org Foundation, uploaded a video starting with a demo of various robotics and UAV projects, before giving a presentation & demo of the board at the 2:10 mark using the Cloud9 IDE.


If you attend Embedded World 2017, you should be able to check out the board and demos at Hall 3A Booth 219a.

FOSDEM 2017 Open Source Meeting Schedule

January 31st, 2017 4 comments

FOSDEM (Free and Open Source Software Developers’ European Meeting) is a free 2-day event for software developers to meet, share ideas, and collaborate that happens on the first weekend of February, meaning it will take place on February 4 & 5, 2017 this year. FOSDEM 2017 will feature 608 speakers, 653 events, and 54 tracks, with 6 main tracks namely: Architectures, Building, Cloud, Documentation, Miscellaneous, and Security & Encryption.
I won’t be there, but it’s always interesting to look at the schedule, and I made my own virtual schedule focusing especially on talks from “Embedded, mobile and automotive” and “Internet of Things” devrooms.

Saturday, February 4, 2017

  • 11:00 – 11:25 – Does your coffee machine speaks Bocce; Teach your IoT thing to speak Modbus and it will not stop talking, by Yaacov Zamir

There are many IoT dashboards out on the web, most of which require a network connection to a server far far away, and use non-standard protocols. We will show how to combine free software tools and protocols from the worlds of IT monitoring, industrial control, and IoT to create simple yet robust dashboards.

Modbus is a serial communication protocol developed in 1979 for use with programmable logic controllers (PLCs). In simple terms, it is a method used for transmitting information over serial lines between electronic devices. It’s openly published, royalty-free, simple and robust.

Many industrial controllers can speak Modbus, and we can also teach “hobby” devices like Arduino boards and the ESP8266 to speak Modbus. A reliable, robust and simple free software Modbus client will be used to acquire the metrics from our device, then the metrics will be collected and sent to Hawkular and Grafana to store and visualize our data.
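To make the talk’s premise concrete, reading a metric from a Modbus RTU device takes only a few lines of Python with the minimalmodbus library (not necessarily the client used in the talk; the serial port, slave address, and register number below are made up for illustration):

```python
# Minimal Modbus RTU read; port, slave address, and register are illustrative.
import minimalmodbus

instrument = minimalmodbus.Instrument("/dev/ttyUSB0", 1)  # port, slave address
instrument.serial.baudrate = 9600

# Read holding register 0 with one decimal place, e.g. a temperature of 23.5
temperature = instrument.read_register(0, 1)
print("Temperature:", temperature)
```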

  • 11:30 – 11:55 – Playing with the lights; Control LIFX WiFi-enabled light bulbs, by Louis Opter

In this talk we’ll take a close look at one of the “smart” (WiFi-connected) light bulbs available on the market today. The bulbs expose a small API over UDP that I used to run an interface on a programmable button array. We will see how topics like reverse engineering, security, licensing, “self-hosting” and user experience came into play.

monolight is a user interface to control LIFX WiFi-enabled light bulbs. monolight runs on a programmable button array; it is written in Python 3.6 (to have type annotations and asyncio), and it interfaces with the bulbs through a more complex daemon written in C: lightsd.

This talk will start with a live demo of the button grid remotely controlling the light bulbs. We will then explore how it works and some of the motivations behind it (network isolation, trying not to depend on the “cloud”, reliability, user experience). Finally, we will look into what kind of opportunities even more open IoT products could bring, and then leave the floor to Q&A and discussion.
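To give an idea of how small that UDP API is, here is a rough Python sketch of broadcasting a LIFX LAN protocol SetPower packet, based on my reading of the published protocol documentation (36-byte header, message type 21); treat the field layout as illustrative rather than authoritative:

```python
# Rough sketch: switch on every LIFX bulb on the LAN with a broadcast SetPower
# message. Header layout per my reading of the LIFX LAN protocol docs; untested.
import socket
import struct

def set_power_packet(on):
    payload = struct.pack("<H", 65535 if on else 0)            # power level
    size = 36 + len(payload)                                   # header + payload
    frame = struct.pack("<HHI", size, 0x3400, 0)               # tagged broadcast
    frame_addr = struct.pack("<Q6sBB", 0, b"\x00" * 6, 0, 0)   # target 0 = all
    proto_hdr = struct.pack("<QHH", 0, 21, 0)                  # type 21 = SetPower
    return frame + frame_addr + proto_hdr + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(set_power_packet(True), ("255.255.255.255", 56700))  # LIFX UDP port
```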

  • 12:00 – 12:30 – Creating the open connected car with GENIVI, by Zeeshan Ali, GENIVI Development Platform (GDP) technical lead

A number of new components have matured in GENIVI to provide a true connected car experience. A couple of them are key connectivity components; namely SOTA (Software Over the Air) and RVI (Remote Vehicle Interface). This talk will discuss both these components, how they work together, the security work done on them and their integration into the GENIVI Development Platform.

This talk will also run down the overall status of GENIVI’s development platform and how it can enable an automotive stack to speak not just with the cloud, but with IoT devices via Iotivity interface.

  • 12:30 – 13:00 – Making Your Own Open Source Raspberry Pi HAT; A Story About Open Source Harware and Open Source Software, by Leon Anavi

This presentation will provide guidelines how to create an open source hardware add-on board for the most popular single board computer Raspberry Pi using free and open source tools from scratch. Specifications of Raspberry Pi Foundation for HAT (Hardware Attached on Top) will be revealed in details. Leon Anavi has been developing an open source Raspberry Pi HAT for IoT for more than a year and now he will share his experience, including the common mistakes for a software engineer getting involved in hardware design and manufacturing. The presentation is appropriate for anyone interested in building entirely open source products that feature open source hardware and open source software. No previous experience or hardware knowledge is required. The main audience are developers, hobbyists, makers, and students. Hopefully the presentation will encourage them to grab a soldering iron and start prototyping their DIY open source device.

  • 13:00 – 13:25 – Building distributed systems with Msgflo; Flow-based-programming over message queues, by Jon Nordby

MsgFlo is a tool to build systems that span multiple processes and devices, for instance IoT sensor networks. Each device acts as a black-box component with input and output ports, mapped to MQTT message queues. One then constructs a system by binding the queues of the components together. Focus on components exchanging data gives good composability and testability, both important in IoT. We will program a system with MsgFlo using Flowhub, a visual live-programming IDE, and test using fbp-spec.

In MsgFlo each process/device is an independent participant, receiving data on input queues, and sending data on output queues. A participant does not know where the data comes from, nor where (if anywhere) the data will go. This strong encapsulation gives good composability and testability. MsgFlo uses a standard message queue protocol (MQTT or AMQP). This makes it easy to use with existing software. As each participant is its own process and communicates over the network, they can be implemented in any programming language. Convenience libraries exist for C++, Python, Arduino, Node.js and Rust. On top of the message queue protocol, a simple discovery mechanism is added. For existing devices without native Msgflo support, the discovery messages can be sent by a dedicated tool.
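The participant model maps directly onto plain MQTT, so a component can be sketched with nothing but paho-mqtt (rather than the MsgFlo convenience libraries): read from an input queue, transform, publish to an output queue. The topic names and conversion below are made up for illustration.

```python
# MsgFlo-style participant sketch with plain paho-mqtt (1.x API): one input queue,
# one output queue, no knowledge of producers or consumers. Topics are made up.
import paho.mqtt.client as mqtt

IN_TOPIC = "sensor/raw"        # illustrative input queue
OUT_TOPIC = "sensor/celsius"   # illustrative output queue

def on_message(client, userdata, msg):
    raw = float(msg.payload)          # e.g. a raw ADC reading
    celsius = raw * 0.1 - 40.0        # made-up conversion for illustration
    client.publish(OUT_TOPIC, str(celsius))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)     # assumes a local MQTT broker
client.subscribe(IN_TOPIC)
client.loop_forever()
```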

  • 13:30 – 13:55 – 6LoWPAN in picoTCP, and how to support new Link Layer types, by Jelle De Vleeschouwer

6LoWPAN enables, as the name implies, IPv6 communication over Low-power Wireless Personal Area Networks, e.g. IEEE 802.15.4. A lot of resources are available to allow 6LoWPAN over IEEE 802.15.4, but how can one extend the 6LoWPAN feature set for use with other link layer types? This talk will cover the details of a generic implementation that should work with every link layer type, and how one can provide support for one’s own custom wireless network. The goal is to give quite a technical and detailed talk, with finally a discussion about when 6LoWPAN is actually useful and when it is not.

Last year, as a summer project, a generic 6LoWPAN adaptation layer was implemented in picoTCP, an open source embedded TCP/IP stack developed by Altran Intelligent Systems with an eye on the IoT. The layer should also be able to allow multiple link-layer extensions, for post-network-layer processing. This could be used for mesh-under routing, link layer security, whatever you want. This talk will cover how one can take advantage of these features, and the caveats that come with them.

  • 14:00 – 15:00 – Groking the Linux SPI Subsystem by Matt Porter

The Serial Peripheral Interconnect (SPI) bus is a ubiquitous de facto standard found in many embedded systems produced today. The Linux kernel has long supported this bus via a comprehensive framework which supports both SPI master and slave devices. The session will explore the abstractions that the framework provides to expose this hardware to both kernel and userspace clients. The talk will cover which classes of hardware are supported, and which use cases are outside the scope of the subsystem today. In addition, we will discuss subtle features of the SPI subsystem that may be used to satisfy hardware and performance requirements in an embedded Linux system.
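On the userspace side, the subsystem’s spidev interface is the usual entry point. A minimal Python example of a full-duplex transfer is shown below; the bus/device numbers and the bytes sent are arbitrary examples.

```python
# Minimal userspace SPI transfer through the kernel's spidev interface.
# Bus 0, chip select 0, and the bytes sent are arbitrary examples.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                  # /dev/spidev0.0
spi.max_speed_hz = 500000
spi.mode = 0                    # SPI clock polarity/phase mode 0

rx = spi.xfer2([0x9F, 0x00, 0x00, 0x00])  # e.g. JEDEC ID read on a SPI flash
print(rx)
spi.close()
```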

  • 15:00 – 15:25 – Frosted Embedded POSIX OS; a free POSIX OS for Cortex-M embedded systems, by Brabo Silvius

FROSTED is an acronym that means “FRee Operating System for Tiny Embedded Devices”. The goal of this project is to provide a free kernel for embedded systems, which exposes a POSIX-compliant system call API. In this talk I aim to explain why we started this project and the approach we took to separate the kernel and user space on Cortex-M CPUs without an MMU, and showcase the latest improvements in networking and supported applications.

  • 15:30 – 16:00 – How to Build an Open Source Embedded Video Player, by Michael Tretter

Video playback for embedded devices such as infotainment systems and media centers demands hardware accelerators to achieve reasonable performance. Unfortunately, vendors provide the drivers for the accelerators only as binary blobs. We demonstrate how we built a video playback system that uses hardware acceleration on i.MX6 by using solely open source software including Gstreamer, Qt QML, the etnaviv GPU driver, and the coda video decoder driver.

The Qt application receives the video streams from a Gstreamer pipeline (using playbin). The Gstreamer pipeline contains a v4l2 decoder element, which uses the coda v4l2 driver for the CODA 960 video encoder and decoder IP core (VPU in the Freescale/NXP Reference Manual), and a sink element to make the frames available to the Qt application. The entire pipeline, including the Gstreamer to Qt handover, uses dma_bufs to avoid copies in software. This example shows how to use open source drivers to ease the development of video and graphics applications on embedded systems.
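For reference, driving such a playbin pipeline from Python takes only a handful of lines with GStreamer’s introspected bindings; the media URI below is a placeholder, and on i.MX6 the decoder described above gets picked automatically when the drivers are in place.

```python
# Minimal GStreamer playback using playbin; the media URI is a placeholder.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch("playbin uri=file:///path/to/video.mp4")
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or error, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```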

  • 16:00 – 16:25 – Project Lighthouse: a low-cost device to help blind people live independently, by David Teller

The World Health Organization estimates that more than 250 million people suffer from vision impairment, 36 million of them being entirely blind. In many cases, their impairment prevents them from living independently. To complicate things further, about 90% of them are estimated to live in low-income situations.

Project Lighthouse was started by Mozilla to try and find low-cost technological solutions that can help vision-impaired people live and function on their own. To this date, we have produced several prototypes designed to aid users in a variety of situations. Let’s look at some of them. This will be a relatively low-tech presentation.

  • 16:30 – 16:55 – Scientific MicroPython for Microcontrollers and IoT, IoT programming with Python, by Roberto Colistete Jr

MicroPython is an implementation of Python 3 optimised to run on microcontrollers, created in 2013 by the physicist Damien P. George. The MicroPython boards run MicroPython on the bare metal, giving a low-level Python operating system running an interactive prompt or scripts.

The MicroPython boards currently use 32-bit microcontrollers clocked at MHz speeds and with RAM limited to tens or hundreds of kilobytes. These are the microcontroller boards with official MicroPython support at the beginning of 2017: Pyboard, Pyboard Lite, WiPy 1/2, ESP8266, BBC Micro:bit, LoPy, SiPy, FiPy. They cost between USD3-40, are very small and light, from a few to tens of mm in each dimension and about 5-10 g, and have low power consumption, so MicroPython boards are affordable and can be embedded in almost anything, almost anywhere.

Some hints will be given to the FOSS community to be open-minded about MicroPython: be aware that MicroPython exists, MicroPython is a better programming option than Arduino in many ways, MicroPython boards are available and affordable, more Python 3 scientific modules should be ported to MicroPython, and MicroPython combines well with IoT.
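As a flavor of how little code a MicroPython board needs, here is a classic blink-and-read loop as it would run on an ESP8266 board; GPIO2 drives the on-board LED on many ESP8266 modules, but pin numbers are board-specific.

```python
# MicroPython on ESP8266: blink the on-board LED and read the single ADC channel.
# GPIO2 is the on-board LED on many ESP8266 modules; check your board's pinout.
from machine import Pin, ADC
import time

led = Pin(2, Pin.OUT)
adc = ADC(0)            # the ESP8266 has a single ADC channel

while True:
    led.value(not led.value())         # toggle the LED
    print("ADC reading:", adc.read())  # 0-1023
    time.sleep(1)
```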

  • 17:00 – 17:25 – Iotivity from devices to cloud; how to make IoT ideas to real using FLOSS, by Philippe Coval & Ziran Sun (Samsung)

The OCF/IoTivity project aims to answer interoperability issues in the IoT world in many different contexts, to accommodate a huge range of devices from microcontrollers to consumer electronics such as Tizen wearables or your powerful GNU/Linux system. The vision of IoTivity is not restricted to ad hoc environments: devices can also be connected to the Internet, making their services easily accessible by other parties. With cloud access in place, usage scenarios for IoT devices can be enriched immensely.

In this talk we walk through the steps of practically handling IoT use cases tailored to various topologies. To introduce the approach used in IoTivity, we first give a detailed background introduction to the IoTivity framework. Then we will present a demo that shows a few examples, from setting up a basic smart home network to accessing the IoT resource via a third-party online service. Challenges and solutions will be addressed from development and implementation aspects for each step of the demo.

We hope this talk will inspire developers to create new IoT prototypes using FLOSS.

  • 17:30 – 17:55 – Open Smart Grid Platform presentation, an Open source IoT platform for large infrastructures, by Jonas van den Bogaard

The Open Smart Grid Platform is an open source IoT platform. The open smart grid platform is a generic IoT platform, built for organizations that manage and/or control large-scale infrastructures. The following use cases are now readily available: smart lighting, smart metering, tariff switching, and microgrids. Furthermore, the following use cases are in development: distribution automation, load management, and smart device management. The architecture of the open smart grid platform is modular and consists of multiple layers.

The open smart grid platform is highly unique for embracing the open source approach and the following key features:

  • Suitable for scalable environments delivering high performance
  • High availability and multitenant architectures
  • Built with security by design and regularly tested.
  • It has a generic architecture. More use cases and domains are easily added to the platform.
  • The open smart grid platform is based on open standards where possible.

We believe the platform is interesting for developers who have interest in working on use-cases for Smart Cities, Utility Companies and other large-scale infrastructure companies.

  • 18:00 – 19:00 – AGL as a generic secured industrial embedded Linux; factory production line controllers requirements are not that special, by Dominig ar Foll

There is no de facto secured embedded Linux distro, while the requirement is becoming more and more critical with the rise of IoT in industrial domains. When looking under the hood of the Yocto-built AGL (Automotive Grade Linux) project, it is obvious that it can fit 95% of the most common requirements as a secured embedded Linux. We will look at how non-automotive industries can easily reuse the AGL code and tools to build their own industrial products, and why it’s a safer bet than building it internally.

Industrial IoT cannot be successful without a serious improvement of the security coverage. Unfortunately there is, as of today, no off-the-shelf offer, and the skills required to create such a solution are at best rare, more often out of reach. AGL has created a customizable embedded Linux distro which is nicely designed for reuse in many domains outside of Automotive. During the presentation we will see how to:

  • start your development with boards readily available on the Net
  • change the BSP and add peripherals using Yocto layers or projects like MRAA
  • integrate secure boot in your platform
  • add your middleware and your application without breaking the maintained core OS
  • develop a UI on the integrated screen and/or an HTML remote browser
  • update the core OS and your add-ons
  • get support and influence the project

Sunday, February 5, 2017

  • 10:00 – 11:00 – How I survived to a SoC with a terrible Linux BSP; Working with jurassic vendor kernels, missing pieces and buggy code, by Luca Ceresoli

In this talk Luca will share some of his experiences with such vendor BSPs, featuring jurassic kernels, non-working drivers, non-existing bootloaders, code of appallingly bad quality, ineffective customer support and Windows-only tools. You will discover why he spent weeks in understanding, fixing and working around BSPs instead of just using them. The effects on the final product quality will be described as well. Luca will also discuss what the options are when you face such a BSP, and what both hackers and vendors can do to improve the situation for everybody’s benefit.

  • 11:00-12:00 – Open Source Car Control, by Josh Hartung

This fall my team launched the Open Source Car Control (OSCC) project, a by-wire control kit that makes autonomous vehicle development accessible and collaborative to developers at every level. In this presentation, we discuss the project and its implications on the development of autonomous cars in a vertically integrated and traditionally closed industry.

A primary barrier to entry in autonomous vehicle development is gaining access to a car that can be controlled with an off-the-shelf computer. Purchasing from an integrator can cost upwards of $100K, and DIY endeavors can result in unreliable and unsafe solutions. The OSCC project acts as a solution to these problems. OSCC is a kit of open hardware and software (based on Arduino) that can be used to take control of the throttle, brake, and steering in modern cars. The result is a fully by-wire test car that can be built for about $10K (USD), including the vehicle. In this discussion, we unpack the impetus and development of the OSCC project, challenges we encountered during development, and the role projects like OSCC have in a necessary “flattening” of the automotive industry.

  • 12:00 – 13:00 – Kernel DLC Metrics, Statistic Analysis and Bug-Patterns, by Nicholas Mc Guire

SIL2LinuxMP strives to qualify a defined GNU/Linux subset for use in safety-related systems by “assessment of non-compliant development”. To demonstrate that the kernel has achieved suitable reliability and correctness properties, basic metrics of such properties and their statistical analysis can be used as part of the argument. Linux has a wealth of analytical tools built into it which allow extracting information on compliance and robustness of development, as well as basic metrics on complexity or correctness with respect to defined properties. While IEC 61508 Ed 2 always pairs testing and analysis, we believe that for a high-complexity system traditional testing is of relatively low effectiveness, and analytical methods need to be the primary path. To this end, we outline some approaches taken:

  • Bug-age analysis
  • Bug-rates and trend analysis
  • Code-complexity/bug relationship
  • Brain-dead correctness analysis
  • Interface and type-correctness analysis
  • API compliance analysis
  • Analysis of build-bot data

While much of the data points to robust and mature code, there are also some areas where problems popped up. In this talk we outline the methods used and give examples as well as key findings. FLOSS development has reached a quite impressive maturity; to go substantially beyond it, we think the use of quantitative process and code metrics will be needed – these results from SIL2LinuxMP may be a starting point.

  • 13:00 – 14:00 – Loco Positioning: An OpenSource Local Positioning System for robotics, presentation with a demo of autonomous Crazyflie 2.0 quadcopter, by Arnaud Taffanel

Positioning in robotics has always been a challenge. Outdoors, GPS solves most of the practical problems for robots, but indoors, precise localization is still done using expensive proprietary systems mainly based on arrays of cameras.

In this talk, I will present the loco positioning system: an open source Ultra Wide Band radio-based local positioning system, why we need it and how it works. I will also speak about its usage with the Crazyflie 2.0 open source nano quadcopter, of course ending with an autonomous flying demo.
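For those who want to try the Crazyflie side at home, the open source cflib Python library makes a scripted autonomous hop quite short. A sketch along these lines should work, with the caveats that the radio URI is an example and that indoor flight like the demo assumes a positioning system such as Loco is set up:

```python
# Hypothetical scripted hop with a Crazyflie 2.0 using the open source cflib.
# The radio URI is an example; autonomous indoor flight assumes positioning.
import time
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = "radio://0/80/2M"   # example Crazyradio URI

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    with MotionCommander(scf) as mc:   # takes off on enter, lands on exit
        mc.up(0.3)                     # climb 0.3 m
        time.sleep(2)
        mc.forward(0.5)                # move 0.5 m forward
        time.sleep(2)
        # landing happens automatically when the context manager exits
```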

  • 14:00 14:50 – Free Software For The Machine, by Keith Packard

The Machine is a hardware project at Hewlett Packard Enterprise which takes a new look at computer architecture. With many processors and large amounts of directly addressable storage, The Machine program has offered an equally large opportunity for developing new system software. Our team at HPE has spent the better part of two years writing new software and adapting existing software to expose the capabilities of the hardware to application developers.

As directly addressable storage is such a large part of the new hardware, this presentation will focus on a couple of important bits of free software which expose that to applications, including our Librarian File System and Managed Data Structures libraries. Managed Data Structures introduces a new application programming paradigm where the application works directly on the stable storage form for data structures, eliminating serialization and de-serialization operations.

Finally, the presentation will describe how the hardware is managed, from sequencing power to a rack full of high-performance computing hardware, through constructing custom Linux operating systems for each processor and managing all of them as parts of a single computing platform.

  • 15:00 – 15:25 – Diving into the KiCad source code, by Maciej Sumiński

Let’s be sincere, all of us would love to change something in KiCad. I bet you have an idea for a new tool or another killer feature that would make your life so much easier.

You know what? You are free to do so! Even more, you are welcome to contribute to the project, and it is not as difficult as one may think. Those who have browsed the source code might find it overwhelming at first, but the truth is: you do not have to know everything to create useful extensions.

I would like to invite you for a walk through the KiCad source code to demonstrate how easy it is to add this tool you have always been dreaming about.

  • 15:30 – 16:00 – Testing with volcanoes – Fuego+LAVA, embedded testing going distributed, by Jan-Simon Möller

LAVA and Fuego are great tools individually already. Combining and extending them allows for a much broader test coverage than each tool alone can provide.

The focus of this talk is to share the experiences made and lessons learned so people can integrate such tools better in their own environment. It also raises the pain-points and open issues when setting up a distributed environment.

Especially for Automotive, Long-Term-Support, CIP or Consumer Electronics, advancing the Test-harness is essential to raise the bar and strengthen the confidence in our embedded platforms. Automated testing can improve our ecosystem from two sides: during development (feature does work and does not break things) and during maintenance (no regressions through backports).

  • 16:00 – 16:30 – Adding IEEE 802.15.4 and 6LoWPAN to an Embedded Linux Device, by Stefan Schmidt

Adding support for IEEE 802.15.4 and 6LoWPAN to an embedded Linux board opens up new possibilities to communicate with tiny, IoT type of, devices.

Bringing IP connectivity to devices, like sensors, with just a few kilobytes of RAM and limited battery power is an interesting IoT challenge. With the Linux-wpan and 6LoWPAN subsystems we get Linux ready to support the needed wireless standards, as well as protocols that connect these tiny devices to the wider Internet, making Linux a practical border router or smart home hub for such networks.

This talk will show how to add the needed transceiver hardware to existing hardware, and how to enable and configure the Linux-wpan and 6LoWPAN mainline subsystems to use it. The demonstration will also include setting up communication between Linux and other popular IoT operating systems like RIOT or Contiki.

  • 16:30 – 17:00 – OpenPowerlink over Xenomai, by Pierre Ficheux

Industrial Ethernet is a successor of classic field buses such as CAN, MODBUS or PROFIBUS. POWERLINK was created by B&R Automation and provides performance and real-time capabilities based on standard Ethernet hardware. openPOWERLINK is open source and runs on lots of platforms such as Linux, Windows, various RTOS and dedicated hardware (FPGA). We will explain how to use openPOWERLINK on top of Xenomai 3, a powerful real-time extension for the Linux kernel based on co-kernel technology.

FOSDEM 2017 will take place at the ULB Solbosch Campus in Brussels, Belgium, and no registration is required, you just need to show up in order to attend the event.

JeVois-A33 is a Small Quad Core Linux Camera Designed for Computer Vision Applications (Crowdfunding)

December 27th, 2016 8 comments

The JeVois Neuromorphic Embedded Vision Toolkit – developed at iLab at the University of Southern California – is an open source software framework to capture and process images through machine vision algorithms, primarily designed to run on embedded camera hardware, but also supporting Linux boards such as the Raspberry Pi. A compact Allwinner A33 camera has now been designed to run the software, for use in robotics and other projects requiring a lightweight and/or battery-powered camera with computer vision capabilities.

JeVois-A33 camera specifications:

  • SoC – Allwinner A33 quad core ARM Cortex-A7 processor @ 1.35GHz with VFPv4 and NEON, and a dual core Mali-400 GPU supporting OpenGL-ES 2.0.
  • System Memory – 256MB DDR3 SDRAM
  • Storage – micro SD slot for firmware and data
  • 1.3MP camera capable of video capture at
    • SXGA (1280 x 1024) up to 15 fps (frames/second)
    • VGA (640 x 480) up to 30 fps
    • CIF (352 x 288) up to 60 fps
    • QVGA (320 x 240) up to 60 fps
    • QCIF (176 x 144)  up to 120 fps
    • QQVGA (160 x 120) up to 60 fps
    • QQCIF (88 x 72) up to 120 fps
  • USB – 1x mini USB port for power, also acting as a UVC webcam port
  • Serial – 5V or 3.3V (selected through VCC-IO pin) micro serial port connector to communicate with Arduino or other MCU boards.
  • Power – 5V (3.5 Watts) via the mini USB port; requires a USB 3.0 port or a Y-cable to two USB 2.0 ports
  • Misc
    • Integrated cooling fan
    • 1x two-color LED: Green: power is good. Orange: power is good and camera is streaming video frames.
  • Dimensions – 28 cc or 1.7 cubic inches (plastic case included with 4 holes for secure mounting)

The camera runs Linux with drivers for the camera, the JeVois C++17 video capture, processing & streaming framework, OpenCV 3.1, and toolchains. You can either connect it to a host computer’s USB port to check out the camera output (actual image + processed image), or to an MCU board such as an Arduino via the serial interface, to use machine vision to control robots, drones, or others. Currently three modes of operation are available:

  • Demo/development mode – the camera outputs a demo display over USB that shows the results of its analysis, potentially along with simple data over serial port.
  • Text-only mode – the camera provides no USB output, but only text strings, for example, commands for a pan/tilt controller.
  • Pre-processing mode – The smart camera outputs video that is intended for machine consumption, and potentially processed by a more powerful system.

The smart camera can detect motion, track faces and eyes, detect & decode ArUco markers & QR codes, detect & follow lines for autonomous cars, and more. Since the framework is open source, you’ll also be able to add your own algorithms and modify the firmware. Some documentation has already been posted on the project’s website. The best way to see the capabilities of the camera and software is to watch the demo video below.
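In text-only mode the serial messages are meant for an MCU, but they are just as easy to inspect from a Linux host with pyserial. The sketch below assumes the camera shows up as /dev/ttyACM0 and emits one message per line; the exact message format is documented on the JeVois website, so treat the port name and parsing as assumptions.

```python
# Illustrative dump of JeVois text-only serial output from a Linux host.
# Port name and message format are assumptions; see the JeVois documentation.
import serial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

while True:
    line = ser.readline().decode(errors="replace").strip()
    if line:
        print("JeVois says:", line)  # e.g. a target position for a pan/tilt head
```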

The project launched on Kickstarter a few days ago with the goal of raising $50,000. A $45 “early backer” pledge should get you a JeVois camera with a micro serial connector with 15cm pigtail leads, while a $55 pledge adds an 8GB micro SD card pre-loaded with JeVois software, and a 24/28 AWG mini USB Y-cable. Shipping is free to the US, but adds $10 to Canada, and $15 to the rest of the world. Delivery is planned for February and March 2017.

Parrot S.L.A.M Dunk is a Ubuntu & ROS Computer with 3D Depth Cameras for Drones & Robots

September 26th, 2016 No comments

Parrot and Canonical have partnered to develop the Parrot S.L.A.M.dunk development kit for the design of applications for autonomous navigation, obstacle avoidance, indoor navigation and 3D mapping for drones and robots, running both Ubuntu 14.04 and ROS. The name of the kit is derived from its Simultaneous Localization and Mapping (S.L.A.M) algorithm, allowing for localization without a GPS signal.


Parrot S.L.A.M Dunk preliminary specifications:

  • SoC – NVIDIA Tegra K1 processor
  • Camera – Fish-eye stereo camera with a 1500×1500 resolution at 60fps
  • Sensors – Inertial-measurement unit (IMU), ultrasound sensor up to 15 meters range, magnetometer, barometer
  • Video Output – micro HDMI
  • USB – 1x micro USB 2.0 port, 1x USB 3.0/2.0 port
  • Weight – 140 grams

Parrot S.L.A.M dunk can be fitted to various drones and robotic platforms, such as quadcopters and fixed-wings, rolling robots, and articulated arms, using mounting kits. The computer module is then connected to the host platform via a 3.5mm jack cable and a USB cable in order to send and receive commands and data.

This morning I wrote about the SoftKinetic 3D sensing camera based on time-of-flight technology, but Parrot S.L.A.M Dunk relies on the more commonly used stereo vision cameras. The micro HDMI port allows developers to connect the computer to a monitor in order to develop their applications for Ubuntu and ROS.
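Since the kit runs ROS, application code should boil down to standard ROS nodes. A minimal Python sketch of subscribing to a depth image topic might look like this; the topic name is hypothetical, since Parrot has not published the interface details yet:

```python
# Minimal ROS node subscribing to a depth image topic. The topic name is
# hypothetical, as Parrot has not published the S.L.A.M Dunk interface yet.
import rospy
from sensor_msgs.msg import Image

def on_depth(msg):
    rospy.loginfo("Depth frame: %dx%d, encoding %s",
                  msg.width, msg.height, msg.encoding)

rospy.init_node("depth_listener")
rospy.Subscriber("/depth/image", Image, on_depth)  # hypothetical topic name
rospy.spin()
```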

Parrot S.L.A.M Dunk will be available in Q4 2016 at an undisclosed price. More information should eventually be found on the Parrot Developer website.