Archive

Posts Tagged ‘medical’

EduExo DIY Robotic Exoskeleton Kit is Arduino Powered, 3D Printable, Designed for STEM Education (Crowdfunding)

May 12th, 2017 1 comment

Robotic exoskeletons are used for medical purposes such as helping with the rehabilitation of stroke patients or enabling paraplegics to walk again, as well as in the workplace to assist people lifting heavy objects. While it’s possible to learn the theory behind exoskeleton technology, practical experience may help grasp the concepts better. However, there are not many courses available, and exoskeletons are usually expensive, so Volker Bartenbach, a PhD at ETH in Zürich, has decided to create the EduExo robotic exoskeleton kit for educational purposes.

The EduExo hardware is based on off-the-shelf components like an Arduino UNO board, a motor, and a force sensor, as well as a rigid exoskeleton structure and cuff interfaces. The latter is optional: you can get the kit without it and instead receive the STL files to 3D print the parts yourself.

There’s also a handbook to help you get started in several steps:

  1. Exoskeleton Introduction
  2. Mechanics and Anatomy – Theory + instructions to assemble the kit
  3. Electronics and Software – Theory + instructions to connect electronic components and write basic software with the Arduino IDE
  4. Control Systems – Theory explaining the behavior of the exoskeleton, and step-by-step instructions to implement and test the control systems with the kit
  5. Virtual Reality and Video Games – Learn how to create a computer game, connect the exoskeleton to your computer (Windows PC), and use it as a game controller. The demo relies on the Unity 3D engine
  6. The Muscle Control Extension – Reproduce your arm movements with the kit using an electromyography (EMG) sensor (sold separately)
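Step 4’s control loop boils down to reading the joint angle, computing an error, and commanding the motor. A minimal Python simulation of such a proportional controller is sketched below; the function names, gain, and the simplistic “command moves the joint directly” update are assumptions for illustration, not the kit’s actual Arduino code:

```python
# Toy simulation of a proportional position controller for a single
# elbow joint, of the kind implemented on the Arduino in step 4.
# Names and gains are illustrative assumptions.

def p_controller_step(target_deg, current_deg, kp=0.5):
    """Return a motor command proportional to the position error."""
    error = target_deg - current_deg
    return kp * error

def simulate(target_deg, start_deg, steps=50):
    """Crude discrete simulation: the command directly moves the joint."""
    angle = start_deg
    for _ in range(steps):
        angle += p_controller_step(target_deg, angle)
    return angle

final = simulate(target_deg=90.0, start_deg=0.0)
print(round(final, 2))  # converges toward 90.0
```

On the real hardware the command would drive the motor through a driver, and the angle would come back from a sensor rather than the model, but the shrinking-error behavior is the same.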

Once you’ve gone through the handbook, you should understand the basics of exoskeletons, and can maybe try to develop your own algorithms or programs. Note that it’s just an educational device; it’s not powerful enough to provide any kind of support.

EduExo has been launched on Kickstarter with an 8,000 CHF ($7,939 US) funding goal. A 15 CHF pledge will get you the e-handbook only. If you have a 3D printer and most of the components, 30 CHF should get you the handbook, 3D STL files, and the components list. A full kit with all parts and a printed handbook requires a 165 CHF pledge (early bird). If you want to play with the Muscle Control Extension, you’d need to spend $250 for the full kit plus the EMG sensor. You may also learn more about the educational kit and exoskeletons in general on the EduExo website.

Via Arduino blog

Qualcomm Tricorder XPRIZE Selects Two Winners for Commercial Medical Tricorders

April 18th, 2017 No comments

Healthcare accounts for around 10% of worldwide GDP, and while in some cases an increase in the healthcare-to-GDP ratio means better care for people, in other cases it may lead to a decrease in the population’s living standards. There are political, business, and legal issues involved in the costs, but over time I’m confident that technology can both improve care and lower costs, in some instances dramatically, especially if open source designs become more common. There’s already some work in that respect with open source projects for prosthetics, an ophthalmoscope, and even surgical robots. Some commercial projects also aim(ed) to lower the cost of diagnosis tools, such as Sia Lab’s medical lab dongle or the Scanadu medical tricorder. The latter sadly did not manage to pass FDA approval, and the company will stop supporting it on May 15, 2017, but that does not mean others have given up on developing a Star Trek-like tricorder: Qualcomm Tricorder XPRIZE – which aims at diagnosing 13 disease states – has selected two winners for the competition: Final Frontier Medical Devices and Dynamical Biomarkers Group.

Final Frontier Medical Devices DxtER Tricorder

Click to Enlarge

Final Frontier Medical Devices is a US-based team of engineers and medical professionals that realized 90% of patients going to emergency services just want a diagnosis for their problem, and decided to work on the DxtER tricorder, which “includes a group of non-invasive sensors that are designed to collect data about vital signs, body chemistry and biological functions. This information is then synthesized in the device’s diagnostic engine to make a quick and accurate assessment”.

Final Frontier Medical Devices got $2.5 million for their achievement, not bad considering they worked part-time on the project. The video below explains a little more about the team, their work, and the problem they are trying to solve, but does not give many details about the actual product and the different sensors used.

DxtER cannot identify all 13 conditions from the XPRIZE challenge, but their algorithms are said to be able to diagnose 34 health conditions including diabetes, atrial fibrillation, chronic obstructive pulmonary disease, urinary tract infection, sleep apnea, leukocytosis, pertussis, stroke, tuberculosis, and pneumonia.

There aren’t many more details on DxtER’s product page for now.

Dynamical Biomarkers Group Tricorder

Click to Enlarge

Dynamical Biomarkers Group is a 39-person team based in Taiwan, supported by HTC Research, and led by Harvard Medical School Associate Professor Chung-Kang Peng. The team got the second prize, still a cool $1 million, for their tricorder prototype comprised of three modules:

  • Smart Vital-Sense Monitor – Temperature, heart rate, blood pressure, respiration, and oxygen saturation.
  • Smart Blood-Urine-Breath Test Kit – Analyze fluids or breath dynamics to diagnose conditions such as urinary tract infection, diabetes, and COPD
  • Smart Scope Module – Bluetooth enabled magnifying camera to obtain high-resolution images of the skin and tympanic (ear) membrane. Used for diseases such as melanoma or otitis media.

These modules allow “physiologic signal analysis, image processing, biomarker detection”, and have been designed to be easy to use through a smartphone app that guides the patient through specific tests to generate a diagnosis.

The video below again does not give many details about the product itself, but presents the team and explains their motivations, such as controlling the cost of medical resources in Taiwan, and especially providing quality healthcare in rural areas of Greater China.

From the video, they seem to have greater resources for development than the US-based team. Some more details about the tricorder can be found on the Center for Dynamical Biomarkers’ (DBIOM) XPRIZE page.

Via Liliputing

Embedded Systems Conference 2017 Schedule – May 3-4

April 5th, 2017 No comments

The Embedded Systems Conference 2017 will take place over two days in Boston, US on May 3-4, and the organizers have published the schedule of the event. Even if you’re not going to attend, you’ll often learn something or find new information by just checking out the talks and abstracts, so I’ve created my own virtual schedule with some of the most interesting sessions.

Wednesday, May 3rd

  • 08:00 – 08:45 – Combining OpenCV and High Level Synthesis to Accelerate your FPGA / SoC EV Application by Adam Taylor, Adiuvo Engineering & Training Ltd

This session will demonstrate how you can combine commonly used open source frameworks such as OpenCV with High Level Synthesis to generate an embedded vision system using an FPGA / SoC. The combination of OpenCV and HLS allows for a much faster algorithm development time and consequently a faster time to market for the end application.

  • 09:00 – 09:45 – Understanding the ARM Processor Roadmap by Bob Boys, Product Manager, ARM

In 2008, the ARM processor range extended from the 32-bit ARM7 to the Cortex-A9, and there were only three Cortex-M processors. Today the roadmap has extended up to the huge 64-bit Cortex-A72, down to the tiny Cortex-M0, and out to include, since winter 2016, the new TrustZone for ARMv8-M.

The ARM roadmap, in order to effectively serve many markets, has grown rather complicated. This presentation will explain the ARM roadmap and offer insights into its features, answering questions such as where each processor should be used, when it makes more sense to use a different processor, and how the instruction sets and core feature sets differ.

It will start with the ARM7TDMI and how and why ARM evolved into the Cortex family. Each of the three profiles, Application (Cortex-A), Real-Time (Cortex-R), and Microcontroller (Cortex-M), will be explained in turn.

  • 10:00 – 10:45 – Mixed Signal Analysis: digital, analog and RF by Mike Borsch, Application Engineer, Rohde & Schwarz

Embedded systems increasingly employ digital, analog, and RF signals. Debugging and analyzing these systems can be challenging in that one needs to measure a number of different signals in one or more domains simultaneously and with tight time synchronization. This session will discuss how a digital oscilloscope can be used to effectively debug these systems, and some of the instrumentation challenges that go along with this.

  • 11:00 – 11:45 – Panel Discussion: The Extinction of the Human Worker? – The Future Role of Collaborative Robots in Smart Manufacturing
  • 12:00 – 12:45 – How Will MedTech Fare in our New Public Policy Environment by Scott Whittaker, President & Chief Executive Officer, Advanced Medical Technology Association (AdvaMed)
  • 13:00 – 13:45 – Embedded Systems Safety & Security: Dangerous Flaws in Safety-Critical Device Design by Michael Barr, Co-founder and CTO, Barr Group

When safety-critical devices come online, it is imperative that the devices are not only safe but also secure. Considering the many security concerns that exist in the IoT landscape, attacks on connected safety-critical devices are to be expected and the results could be deadly. By failing to design security into dangerous devices, too many engineers are placing life and limb at risk. Join us for a look at related industry trends and a discussion of how we can work together to put future embedded systems on a more secure path.

  • 14:00 – 14:45 – Intel EPID: An IoT ID Standard for Device Authentication & Privacy by Jennifer Gilburg, Director IoT Identity, Intel Platform Security Division

Approved as a TCG & ISO direct anonymous attestation method and open sourced by Intel, EPID (Enhanced Privacy ID) is a proven solution that has shipped in over 2.5 billion processors since 2008. EPID authenticates platform identity through remote attestation using asymmetric cryptography, with security operations protected in the processor’s isolated trusted execution environment. With EPID, a single public key can have multiple private keys (typically millions). Verifiers authenticate the device as an anonymous member of the larger group, which protects the privacy of the user and prevents the attack maps that can be created from traditional PKI authentication. Learn how to utilize or embed EPID in a device and discover the wide range of use cases EPID enables for IoT, including zero-touch secure onboarding to IoT control platforms.

  • 15:00 – 15:45 – Building A Brain With Raspberry Pi and Zulu Embedded JVM by Simon Ritter, Deputy CTO, Azul Systems

Machine and deep learning are very hot topics in the world of IT at the moment with many projects focusing on analyzing big data to make ‘intelligent’ decisions.

In this session, we’ll use a cluster of Raspberry Pis running Azul’s Zulu embedded JVM to build our very own brain. This will use a variety of programming techniques and open source libraries to emulate a brain in learning and adapting to data that is provided to it to solve problems. Since the Raspberry Pi makes connecting sensors straightforward we’ll include some of these to provide external stimulus to our artificial brain.

We’ll conclude with a demonstration of our brain in action learning and adapting to a variety of input data.

  • 16:00 – 16:45 – Vulnerabilities in IoT: Insecure Design Patterns and Steps to Improving Device Security by M. Carlton, VP of Research, Senrio

This talk will explore vulnerabilities resulting from insecure design patterns in internet-connected embedded devices using real-world examples. In the course of our research, we have observed a pattern of vendors incorporating remote configuration services, neglecting tamper proofing, and rampantly re-using code. We will explore how these design flaws resulted in vulnerabilities in a remote power supply, a web camera, and a router. This talk is intended for a wide audience, as these insecure design patterns exist across industries and market segments. Attendees will get an inside view into how attackers operate and walk away with an understanding of what must be done to improve the security of embedded devices.

Thursday, May 4th

  • 08:00 – 08:45 – Heterogeneous Software Architecture with OpenAMP by Shaun Purvis, Embedded Systems Specialist, Hardent

Single, high-performance embedded processors are often not adequate to meet today’s system-on-chip (SoC) demands for sustained high performance and efficiency. As a result, chips increasingly feature multiple processor types to deliver flexible compute power, real-time features, and energy conservation. These so-called heterogeneous multiprocessor devices yield an extremely robust SoC, but also require a more complex software architecture capable of orchestrating multiple dissimilar processors.

This technical session introduces the OpenAMP software framework, designed to facilitate asymmetric multiprocessing (AMP) in a vendor-agnostic manner. OpenAMP can be leveraged to run different software platforms concurrently, such as Linux and an RTOS, on different processors within the same SoC, whether homogeneous (multi-core), heterogeneous (multi-processor), or a combination of both.

  • 09:00 – 09:45 – How to Build Products Using Open Platform Firmware by Brian Richardson,  Technical Evangelist, Intel Corporation

Open hardware platforms are great reference designs, but they’re often not considered “product ready” due to debug features built into the firmware… but a few firmware changes can turn an open hardware board into a production-quality platform.

This session demonstrates how to optimize firmware for product delivery, using the MinnowBoard Max as a practical example, by disabling debug interfaces and optimizing the platform for an embedded software payload. Examples are also given for enabling signed firmware updates and secure firmware recovery, based on industry standard UEFI firmware.

  • 10:00 – 10:45 – Understanding Modern Flash Memory Systems by Thomas McCormick, Chief Engineer/Technologist, Swissbit

This session presents an in-depth look at the internals of modern flash memory systems. Specific focus is given to technologies that enable current generations of flash memory, both SLC and MLC, using < 30 nm process technologies to provide reliable code and data storage in embedded computer applications.

  • 11:00 – 11:45 – Implementing Secure Software Systems on ARMv8-M Microcontrollers by Chris Shore,  Director, Technical Marketing, ARM

Microcontrollers incorporating ARM TrustZone technology for ARMv8-M are here! Now, software engineers developing on ARM Cortex-M processors have access to a level of hardware security which has not been available before. These features enforce a clear separation between secure and non-secure code, and between secure and non-secure data.

This presentation shows how software developers can write secure code which takes advantage of the new hardware features in the architecture, drastically reducing the attack surface. Carefully written software builds on those hardware features, avoiding bugs and/or holes which could compromise the system.

  • 12:00 – 12:30 – Keynote: State of the Medical Device Industry by Frost & Sullivan
  • 13:00 – 13:45 – Enabling the Next Era of Human Space Exploration by Jason Crusan, Director of the Advanced Exploration Systems Division within the Human Exploration and Operations Mission Directorate, NASA

Humankind is making plans to extend its reach further into the solar system than ever before. As human spaceflight moves beyond low Earth orbit, NASA’s Advanced Exploration Systems is developing innovative tools to drive these new efforts and address the challenges that arise. Innovative technologies, simulations, and software platforms related to crew and robotic autonomous operations, logistics management, vehicle systems automation, and life support systems management are being developed. This talk will outline the pioneering approaches that AES is using to develop prototype systems, advance key capabilities, and validate operational concepts for future human missions beyond Earth orbit.

  • 14:00 – 14:45 – Common Mistakes by Embedded System Designers: What They Are and How to Fix Them by Craig Hillman, CEO, DfR Solutions

Embedded system design is a multilevel engineering exercise. It requires synergy between software, electrical and mechanical engineers with the goal to create a system that meets customer requirements while remaining within budget and on time.

The proliferation of embedded systems has been extremely successful. Many appliances today contain embedded systems. As an example, many fuel pumps contain single board computers whose sole purpose is credit transactions. Some companies doing positive train control (PTC) use ARM/RISC and Atom-based computer modules. And embedded systems currently dominate the Internet of Things (IoT) space (e.g. mobile gateways).

However, all of this success can tend to mask the challenges of designing a successful embedded system. These challenges are expected to increase dramatically with the integration of embedded systems into IoT applications, where environments can be much more severe than standard home / office installations.

This course presents the fundamentals of designing a reliable embedded device and the most common pitfalls encountered by the system designer.

  • 15:00 – 15:45 – Porting to 64-bit on ARM by Chris Shore, Director, Technical Marketing, ARM

The ARMv8-A architecture introduces 64-bit capability to the most widely used embedded architecture in the world today. Products built to this architecture are now mainstream and widely available. While they are capable of running legacy 32-bit software without recompilation, clearly developers will want to make maximum use of the increased and expanded capability offered by these processors.

This presentation examines the steps necessary in porting current 32-bit ARM software to the new 64-bit execution state. I will cover C porting, assembly language porting and implementation of hand-coded SIMD routines.


If you want to attend ESC ’17, you’ll need to register. The EXPO pass is free if you book in advance, and gives you access to the design and manufacturing suppliers’ booths, but won’t allow you to attend most of the talks (except sponsored ones), while the conference pass gives you access to all sessions including workshops and tutorials, as well as complimentary lunch vouchers.

  • Super Early Bird (ends March 31st, 2017) – Conference pass: $949; EXPO pass: free
  • Standard (ends May 2nd, 2017) – Conference pass: $1,149; EXPO pass: free
  • Regular/Onsite – Conference pass: $1,299; EXPO pass: $75

NXP Introduces Kinetis K27/K28 MCU, QorIQ Layerscape LS1028A Industrial SoC, and i.MX 8X Cortex A35 SoC Family

March 15th, 2017 3 comments

NXP pushed out several press releases with the start of Embedded World 2017 in Germany, including three new microcontroller/processor families addressing different market segments: the Kinetis K27/K28 Cortex-M4 MCU family, the QorIQ Layerscape LS1028A industrial applications processor, and the i.MX 8X SoC family for display and audio applications, 3D graphic display clusters, telematics, and V2X (vehicle-to-everything).

NXP Kinetis K27/K28 MCU

Click to Enlarge

The NXP Kinetis K27/K28 MCU family is based on an ARM Cortex-M4 core clocked at up to 150 MHz with FPU, includes up to 1MB embedded SRAM and 2MB flash, and especially targets portable display applications.

Kinetis K27/K28 MCUs share the following main features:

  • 2x I2S interfaces, 2x USB Controllers (High-Speed with integrated High-Speed PHY and Full-Speed) and mainstream analog peripherals
  • 32-bit SDRAM memory controller and QuadSPI interface supporting eXecution-In-Place (XiP)
  • True Random Number Generator, Cyclic Redundancy Check, Memory Mapped Cryptographic Acceleration Unit

The K28 supports 3 input supply voltage rails (1.2V, 1.8V and 3V) plus a separate VBAT domain, implements a Power Management Controller supporting Core Voltage Bypass, can be powered by an external PMIC, and is available in 169 MAPBGA (9×9 mm, 0.65mm pitch) and 210 WLCSP (6.9×6.9 mm, 0.4mm pitch) packages.

The K27 supports a 1.71V to 3.6V input voltage plus a separate VBAT domain, and is offered in a 169 MAPBGA (9×9 mm, 0.65mm pitch) package only.

Click to Enlarge

The FRDM-K28F development board will allow you to play with the new MCUs’ capabilities. It features a Kinetis K28F microcontroller, on-board discrete power management, an accelerometer, QuadSPI serial flash, a USB high-speed connector, and full-speed USB OpenSDA. Optional add-on boards allow for USB Type-C, Bluetooth Low Energy (BLE) connectivity, and a 5” LCD display with capacitive touch.

Software development can be done through MCUXpresso SDK with system startup code, peripheral drivers, USB and connectivity stacks, middleware, and real-time operating system (RTOS) kernels.

The Kinetis K27/K28 MCU family will start selling in April 2017. Visit the NXP K2x USB page for more information.

QorIQ Layerscape LS1028A

LS1028A Block Diagram

The NXP QorIQ Layerscape LS1028A SoC comes with two 64-bit ARMv8 cores, and supports real-time processing for industrial control as well as virtual machines for edge computing in the IoT. It also integrates a GPU and LCD controller to enable Human Machine Interface (HMI) systems, plus Time-Sensitive Networking (TSN) capabilities based on the IEEE 802.1 standards with a four-port TSN switch and two separate TSN Ethernet controllers.

The processor especially targets “Factory 4.0” automation, process automation, programmable logic controllers, motion controllers, industrial IoT gateway, and Human Machine Interface (HMI).

OEMs can start developing TSN-enabled systems using the LS1021ATSN reference design platform, based on the previous LS1021A processor, in order to quicken time-to-market. The reference design provides four switched Gigabit Ethernet TSN ports, and ships with an open source, industrial Linux SDK with real-time performance. Applications written for the LS1021ATSN will be compatible with the LS1028A SoC since the API calls won’t change.

It’s unclear when the LS1028A will become available, but it will be supported for 15 years after launch, and you’ll find a few more details on the product page. You could also visit NXP’s booth (4A-220) at Embedded World 2017 to see the reference design in action.

NXP i.MX 8X ARM Cortex-A35 Processors

Block Diagram of NXP i.MX 8X family

The last announcement will not really be news to regular readers of CNX Software, since we covered the i.MX 8X processors last year based on an NXP presentation. As previously known, the i.MX 8X family comes with two to four 64-bit ARMv8-A Cortex-A35 cores, as well as a Cortex-M4F core, a Tensilica HiFi 4 DSP, Vivante hardware-accelerated graphics and video engines, advanced image processing, an advanced SafeAssure display controller, LPDDR4 and DDR3L memory support, and a set of peripherals. The processors have been designed to drive up to three simultaneous displays (2x 1080p screens and one parallel WVGA display), and three models have been announced:

  • i.MX 8QuadXPlus with four Cortex-A35 cores, a Cortex-M4F core, a 4-shader GPU, a multi-format VPU and a HiFi 4 DSP
  • i.MX 8DualXPlus with two Cortex-A35 cores, a Cortex-M4F core, a 4-shader GPU, a multi-format VPU and a HiFi 4 DSP
  • i.MX 8DualX with two Cortex-A35 cores, a Cortex-M4F core, a 2-shader GPU, a multi-format VPU and a HiFi 4 DSP

The processors are expected to be used in automotive applications such as infotainment and cluster, industrial control and vehicles, robotics, healthcare, mobile payments, handheld devices, and so on.

The i.MX 8QuadXPlus and 8DualXPlus application processors will sample in Q3 2017 to selected partners. More details may be found on the NXP i.MX 8X product page.

Open Surgery Initiative Aims to Build DIY Surgical Robots

February 7th, 2017 No comments

Medical equipment can be really expensive because of the R&D involved and resulting patents, low manufacturing volumes, government regulations, and so on. Developed countries can normally afford those higher costs, but for many others it may just be prohibitively expensive. The Open Surgery initiative aims to mitigate the costs by “investigating whether building DIY surgical robots, outside the scope of healthcare regulations, could plausibly provide an accessible alternative to the costly professional healthcare services worldwide”.

DIY Surgical Robot – Click to Enlarge

The project is composed of members from the medical, software, hardware, and 3D printing communities, is not intended for (commercial) application, and currently serves only academic purposes.

Commercial surgical robots can cost up to $2,000,000, but bring benefits like smaller incisions, reduced risks of complications and readmissions, and shorter hospital stays thanks to a faster recovery process. There have already been several attempts within the robotics community to come up with cheaper and more portable surgical robots, such as the RAVEN II surgical robot, initially developed with funding from the US military to create a portable telesurgery device for battlefield operations, and valued at $200,000. The software used to control the RAVEN II has been made open source, so other people can improve on it.

The system is currently only used by researchers in universities to experiment with robotic surgery, and it can’t be used on humans, as it lacks the required safety and quality control systems. This is a step in the right direction, but the price still puts it out of reach for most medical hacker communities. So Frank Kolkman, who set up the Open Surgery initiative, has spent several months trying to build a DIY surgical robot for around $5,000, with the help of the community, using as many off-the-shelf parts as possible along with prototyping techniques such as laser cutting and 3D printing.

Three major challenges to designing a surgical robot (theoretically) capable of performing laparoscopic surgery have been identified:

  1. The number and size of tools: during a single operation a surgeon will switch between various types of tools, so a robot would either have to carry many of them or they should be interchangeable. The instruments are also extremely small, and difficult to make
  2. Anything that comes into contact with the human body has to be sterile to reduce the risk of infection. Most existing tools are made of stainless steel so that they can be sterilized in an autoclave, which may not be easily accessible to many people
  3. The type of motion a surgical robot should be able to make, whereby a fixed point of rotation in space is created where the tool enters the body through an entry port – or ‘trocar’. The trocar needs to be stationary so as to avoid tissue damage

He solved the first issue by finding laparoscopic instruments on Alibaba, as well as a camera, CO2 insufflation pumps, and other items. For the second hurdle, he realized a domestic oven set to 160 degrees Celsius for 4 hours could be an alternative to an autoclave. The mechanical design was the most complicated part, as it required many iterations, and he ended up with some 3D printed parts and DC servo motors. The software was written using the Processing open source language. You can see the results in the short video below.
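The trocar constraint described in point 3 above is essentially geometric: whatever pose the arm takes, the tool shaft must stay on the line through the fixed entry point. A small illustrative computation (function and variable names are assumptions for illustration, not taken from the project’s code):

```python
import math

def tool_pose(trocar, tip):
    """Given a fixed trocar (entry) point and a desired tool-tip point,
    return the unit direction of the tool shaft and the insertion depth.
    The shaft must lie on the line through both points so the entry
    port stays stationary and tissue damage is avoided."""
    vec = [b - a for a, b in zip(trocar, tip)]
    depth = math.sqrt(sum(c * c for c in vec))
    direction = [c / depth for c in vec]
    return direction, depth

# Tip 5 cm straight below the entry port: shaft points straight down
direction, depth = tool_pose(trocar=(0.0, 0.0, 0.0), tip=(0.0, 0.0, -5.0))
print(direction, depth)  # [0.0, 0.0, -1.0] 5.0
```

The mechanical linkage (or the software driving the servos) has to enforce this constraint for every commanded tip position, which is what made the mechanical design the hardest part of the build.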

While attempting surgery with the design would not be recommended just yet, a $5,000 DIY surgical robot appears to be feasible. Maybe it could be evaluated by one or more trained surgeons first, then tested on animals that need surgery, before eventually and potentially being used on humans who would not get the treatment otherwise.

While there’s “Open” in “Open Surgery”, and the initial intent was to make the project open source, it turned out to be almost impossible to design surgical robots without infringing on patents. That’s no problem as long as you make parts for private use; however, Frank explains that sharing files could cause problems, and the legality of doing so requires some more research.

FOSDEM 2017 Open Source Meeting Schedule

January 31st, 2017 4 comments

FOSDEM (Free and Open Source Software Developers’ European Meeting) is a 2-day free event for software developers to meet, share ideas, and collaborate that happens on the first weekend of February, meaning it will take place on February 4 & 5, 2017 this year. FOSDEM 2017 will feature 608 speakers, 653 events, and 54 tracks, with 6 main tracks, namely: Architectures, Building, Cloud, Documentation, Miscellaneous, and Security & Encryption.
I won’t be there, but it’s always interesting to look at the schedule, and I made my own virtual schedule focusing especially on talks from the “Embedded, mobile and automotive” and “Internet of Things” devrooms.

Saturday, February 4, 2017

  • 11:00 – 11:25 – Does your coffee machine speaks Bocce; Teach your IoT thing to speak Modbus and it will not stop talking, by Yaacov Zamir

There are many IoT dashboards out on the web; most require a network connection to a server far far away, and use non-standard protocols. We will show how to combine free software tools and protocols from the worlds of IT monitoring, industrial control, and IoT to create simple yet robust dashboards.

Modbus is a serial communication protocol developed in 1979 for use with programmable logic controllers (PLCs). In simple terms, it is a method used for transmitting information over serial lines between electronic devices. It’s openly published, royalty-free, simple, and robust.

Many industrial controllers can speak Modbus, and we can also teach “hobby” devices like Arduino boards and the ESP8266 to speak Modbus. A reliable, robust, and simple free software Modbus client will be used to acquire the metrics from our device; the metrics will then be collected and sent to Hawkular and Grafana to store and visualize our data.
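The RTU framing such devices exchange is simple enough to sketch. Below is the standard Modbus CRC-16 in plain Python, applied to a typical read-holding-registers request (the example frame is chosen for illustration):

```python
def modbus_crc16(data: bytes) -> int:
    """Compute the Modbus RTU CRC-16 (poly 0xA001 reflected, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Frame: slave 1, function 3 (read holding registers), start 0, count 10
frame = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x0A])
crc = modbus_crc16(frame)
# The CRC is appended low byte first; recomputing over the full frame
# then yields zero, a quick way to validate a received frame.
full = frame + bytes([crc & 0xFF, crc >> 8])
print(hex(modbus_crc16(full)))  # 0x0
```

Libraries on both the PLC and the monitoring side implement exactly this check, which is part of why the protocol is so easy to bring up on an Arduino or ESP8266.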

  • 11:30 – 11:55 – Playing with the lights; Control LIFX WiFi-enabled light bulbs, by Louis Opter

In this talk we’ll take a close look at one of the “smart” (WiFi-connected) light bulbs available on the market today. The bulbs expose a small API over UDP that I used to build an interface on a programmable button array. We will see how topics like reverse engineering, security, licensing, “self-hosting”, and user experience came into play.

monolight is a user interface to control LIFX WiFi-enabled light bulbs. monolight runs on a programmable button array; it is written in Python 3.6 (to have type annotations and asyncio), and it interfaces with the bulbs through a more complex daemon written in C: lightsd.

This talk will start with a live demo of the button grid remotely controlling the light bulbs. We will then explore how it works and some of the motivations behind it (network isolation, trying not to depend on the “cloud”, reliability, user experience). Finally, we will look into what kind of opportunities even more open IoT products could bring, and leave the floor open to Q&A and discussion.
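To give a taste of what such a small UDP API looks like, here is a hedged sketch of building a LIFX-style “SetPower” datagram in Python. The 36-byte header layout follows LIFX’s published LAN protocol documentation, but the field values and the helper name are assumptions for illustration, and the packet has not been verified against real bulbs:

```python
import struct

def set_power_packet(level: int, sequence: int = 0) -> bytes:
    """Build an illustrative LIFX LAN 'SetPower' datagram (type 21).
    level: 0 = off, 65535 = on. Field values are assumptions."""
    payload = struct.pack("<H", level)
    size = 36 + len(payload)
    # protocol=1024, addressable bit, tagged bit (broadcast to all bulbs)
    frame_flags = 1024 | (1 << 12) | (1 << 13)
    header = struct.pack(
        "<HHI8s6sBBQHH",
        size, frame_flags, 2,      # size, flags, source id (arbitrary)
        b"\x00" * 8, b"\x00" * 6,  # target (all bulbs), reserved
        1, sequence,               # res_required flag, sequence number
        0, 21, 0,                  # reserved, type 21 = SetPower, reserved
    )
    return header + payload

packet = set_power_packet(65535)
print(len(packet))  # 38
# To actually send (bulbs listen on UDP port 56700):
#   sock.sendto(packet, ("255.255.255.255", 56700))
```

This is the kind of low-level framing that lightsd hides behind a friendlier interface for monolight.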

  • 12:00 – 12:30 – Creating the open connected car with GENIVI, by Zeeshan Ali, GENIVI Development Platform (GDP) technical lead

A number of new components have matured in GENIVI to provide a true connected car experience. A couple of them are key connectivity components; namely SOTA (Software Over the Air) and RVI (Remote Vehicle Interface). This talk will discuss both these components, how they work together, the security work done on them and their integration into the GENIVI Development Platform.

This talk will also run through the overall status of GENIVI’s development platform and how it can enable an automotive stack to speak not just with the cloud, but with IoT devices via an IoTivity interface.

  • 12:30 – 13:00 – Making Your Own Open Source Raspberry Pi HAT; A Story About Open Source Hardware and Open Source Software, by Leon Anavi

This presentation will provide guidelines on how to create, from scratch, an open source hardware add-on board for the Raspberry Pi, the most popular single board computer, using free and open source tools. The Raspberry Pi Foundation’s specifications for HATs (Hardware Attached on Top) will be covered in detail. Leon Anavi has been developing an open source Raspberry Pi HAT for IoT for more than a year, and he will share his experience, including the common mistakes a software engineer makes when getting involved in hardware design and manufacturing. The presentation is appropriate for anyone interested in building entirely open source products that feature open source hardware and open source software. No previous experience or hardware knowledge is required. The main audience is developers, hobbyists, makers, and students. Hopefully the presentation will encourage them to grab a soldering iron and start prototyping their own DIY open source devices.

  • 13:00 – 13:25 – Building distributed systems with Msgflo; Flow-based-programming over message queues, by Jon Nordby

MsgFlo is a tool to build systems that span multiple processes and devices, for instance IoT sensor networks. Each device acts as a black-box component with input and output ports, mapped to MQTT message queues. One then constructs a system by binding the queues of the components together. The focus on components exchanging data gives good composability and testability, both important in IoT. We will program a system with MsgFlo using Flowhub, a visual live-programming IDE, and test it using fbp-spec.

In MsgFlo each process/device is an independent participant, receiving data on input queues and sending data on output queues. A participant does not know where the data comes from, nor where (if anywhere) the data will go. This strong encapsulation gives good composability and testability. MsgFlo uses a standard message queue protocol (MQTT or AMQP), which makes it easy to use with existing software. As each participant is its own process and communicates over the network, participants can be implemented in any programming language. Convenience libraries exist for C++, Python, Arduino, Node.js and Rust. On top of the message queue protocol, a simple discovery mechanism is added. For existing devices without native MsgFlo support, the discovery messages can be sent by a dedicated tool.
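The participant model is easy to prototype without a broker by replacing the MQTT queues with in-memory queues while keeping the same port discipline. A toy sketch of the pattern (the class and function names here are hypothetical, not the actual MsgFlo API):

```python
from collections import deque

class Participant:
    """A black-box component: consumes from an input queue, produces to an output queue."""
    def __init__(self, process):
        self.process = process        # payload -> payload (or None to drop the message)
        self.inqueue = deque()
        self.outqueue = deque()

    def run(self):
        """Drain the input queue, applying the processing function to each message."""
        while self.inqueue:
            result = self.process(self.inqueue.popleft())
            if result is not None:
                self.outqueue.append(result)

def bind(src: Participant, dst: Participant):
    """Bind src's output queue to dst's input queue, as MsgFlo binds message queues."""
    while src.outqueue:
        dst.inqueue.append(src.outqueue.popleft())

# a tiny two-component system: double a sensor reading, then threshold it
doubler = Participant(lambda x: x * 2)
alarm = Participant(lambda x: "ALERT" if x > 10 else "ok")
```

Because each participant only sees its own queues, either component can be tested in isolation by pushing messages into `inqueue` and inspecting `outqueue`, which is the composability property the talk highlights.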

  • 13:30 – 13:55 – 6LoWPAN in picoTCP, and how to support new Link Layer types, by Jelle De Vleeschouwer

6LoWPAN enables, as the name implies, IPv6 communication over Low-power Wireless Personal Area Networks, e.g. IEEE 802.15.4. A lot of resources are available for running 6LoWPAN over IEEE 802.15.4, but how can one extend the 6LoWPAN feature set for use with other link-layer types? This talk will cover the details of a generic implementation that should work with every link-layer type, and how one can provide support for one’s own custom wireless network. The goal is to give a quite technical and detailed talk, ending with a discussion about when 6LoWPAN is actually useful and when it is not.

Last year, as a summer project, a generic 6LoWPAN adaptation layer was implemented in picoTCP, an open source embedded TCP/IP stack developed by Altran Intelligent Systems with an eye on IoT. The layer also allows multiple link-layer extensions for post-network-layer processing, which could be used for mesh-under routing, link-layer security, or whatever you want. This talk will cover how one can take advantage of these features, and the caveats that come with them.

  • 14:00 – 15:00 – Groking the Linux SPI Subsystem by Matt Porter

The Serial Peripheral Interface (SPI) bus is a ubiquitous de facto standard found in many embedded systems produced today. The Linux kernel has long supported this bus via a comprehensive framework which supports both SPI master and slave devices. The session will explore the abstractions that the framework provides to expose this hardware to both kernel and userspace clients. The talk will cover which classes of hardware are supported, and which use cases are outside the scope of the subsystem today. In addition, we will discuss subtle features of the SPI subsystem that may be used to satisfy hardware and performance requirements in an embedded Linux system.
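Conceptually, an SPI transfer is a full-duplex shift-register ring: each clock moves one bit from master to slave and one bit back, so after eight clocks the two sides have exchanged bytes. A toy model of that exchange (this illustrates the bus semantics only, not the kernel framework’s API):

```python
def spi_exchange(master_reg: int, slave_reg: int) -> tuple:
    """Simulate one 8-bit SPI transfer: both shift registers rotate MSB-first,
    each shifting in the bit the other side just shifted out."""
    for _ in range(8):
        mosi = (master_reg >> 7) & 1   # master drives its MSB onto MOSI
        miso = (slave_reg >> 7) & 1    # slave drives its MSB onto MISO
        master_reg = ((master_reg << 1) & 0xFF) | miso
        slave_reg = ((slave_reg << 1) & 0xFF) | mosi
    return master_reg, slave_reg
```

After eight clocks each side holds the other’s original byte, which is why SPI reads and writes always happen simultaneously: to read, the master must clock out (possibly dummy) data.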

  • 15:00 – 15:25 – Frosted Embedded POSIX OS; a free POSIX OS for Cortex-M embedded systems, by Brabo Silvius

FROSTED is an acronym for “FRee Operating System for Tiny Embedded Devices”. The goal of this project is to provide a free kernel for embedded systems which exposes a POSIX-compliant system call API. In this talk I aim to explain why we started this project and the approach we took to separate kernel and user space on Cortex-M CPUs without an MMU, and to showcase the latest improvements in networking and supported applications.

  • 15:30 – 16:00 – How to Build an Open Source Embedded Video Player, by Michael Tretter

Video playback on embedded devices such as infotainment systems and media centers demands hardware accelerators to achieve reasonable performance. Unfortunately, vendors provide the drivers for these accelerators only as binary blobs. We demonstrate how we built a video playback system that uses hardware acceleration on i.MX6 using solely open source software, including GStreamer, Qt QML, the etnaviv GPU driver, and the coda video decoder driver.

The Qt application receives the video streams from a GStreamer pipeline (using playbin). The GStreamer pipeline contains a v4l2 decoder element, which uses the coda v4l2 driver for the CODA 960 video encoder and decoder IP core (the VPU in the Freescale/NXP reference manual), and a sink element to make the frames available to the Qt application. The entire pipeline, including the GStreamer-to-Qt handover, uses dma_bufs to avoid copies in software. This example shows how to use open source drivers to ease the development of video and graphics applications on embedded systems.
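For experimentation, a similar decode path can be assembled by hand with gst-launch-1.0. Note that the decoder element name (`v4l2h264dec` here), the demuxer, and the input file are assumptions that depend on the GStreamer version, kernel, and stream format, so treat this as a sketch rather than the talk’s exact pipeline:

```shell
# decode an H.264 file through the v4l2 (coda) decoder and display via KMS
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! \
    v4l2h264dec ! kmssink
```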

  • 16:00 – 16:25 – Project Lighthouse: a low-cost device to help blind people live independently, by David Teller

The World Health Organization estimates that more than 250 million people suffer from vision impairment, 36 million of them being entirely blind. In many cases, their impairment prevents them from living independently. To complicate things further, about 90% of them are estimated to live in low-income situations.

Project Lighthouse was started by Mozilla to try and find low-cost technological solutions that can help vision-impaired people live and function on their own. To date, we have produced several prototypes designed to aid users in a variety of situations. Let’s look at some of them. This will be a relatively low-tech presentation.

  • 16:30 – 16:55 – Scientific MicroPython for Microcontrollers and IoT, IoT programming with Python, by Roberto Colistete Jr

MicroPython is an implementation of Python 3 optimised to run on microcontrollers, created in 2013 by the physicist Damien P. George. MicroPython boards run MicroPython on bare metal, providing a low-level Python operating system that runs an interactive prompt or scripts.

The MicroPython boards currently use 32-bit microcontrollers clocked at tens to hundreds of MHz, with RAM limited to tens or hundreds of kilobytes. The microcontroller boards with official MicroPython support as of early 2017 are: Pyboard, Pyboard Lite, WiPy 1/2, ESP8266, BBC Micro:bit, LoPy, SiPy, and FiPy. They cost between US$3 and US$40, are very small and light (from a few mm to tens of mm in each dimension, and about 5-10 g), and have low power consumption, so MicroPython boards are affordable and can be embedded in almost anything, almost anywhere.
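Since MicroPython is ordinary Python 3, simple application code runs unchanged on a desktop interpreter and on these boards; only the board-specific setup (pins, ADC) differs and is omitted here. As a sketch, a small exponential moving average of the kind often used to smooth noisy sensor readings:

```python
class SmoothedSensor:
    """Exponential moving average over noisy readings.
    Works on CPython and MicroPython alike; feed it raw ADC values."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha    # smoothing factor: higher reacts faster
        self.value = None

    def update(self, raw):
        if self.value is None:
            self.value = raw  # seed the filter with the first sample
        else:
            self.value = self.alpha * raw + (1 - self.alpha) * self.value
        return self.value
```

On a board, `raw` would come from something like an ADC read in a loop; on the desktop, the same class can be unit-tested with plain numbers.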

Some hints will be given to the FOSS community on being open-minded about MicroPython: be aware that MicroPython exists, that it is a better programming option than Arduino in many ways, that MicroPython boards are available and affordable, that more Python 3 scientific modules could be ported to MicroPython, and that MicroPython combines well with IoT.

  • 17:00 – 17:25 – Iotivity from devices to cloud; how to make IoT ideas to real using FLOSS, by Philippe Coval & Ziran Sun (Samsung)

The OCF/IoTivity project aims to answer interoperability issues in the IoT world in many different contexts, accommodating a huge range of devices from microcontrollers to consumer electronics such as Tizen wearables or your powerful GNU/Linux system. The vision of IoTivity is not restricted to ad hoc environments: devices can also be connected to the Internet, making their services easily accessible to other parties. With cloud access in place, usage scenarios for IoT devices can be enriched immensely.

In this talk we walk through the steps of practically handling IoT use cases tailored to various topologies. To introduce the approach used in IoTivity, we first give a detailed background introduction to the IoTivity framework. Then we will present a demo that shows a few examples, from setting up a basic smart home network to accessing an IoT resource via a third-party online service. Challenges and solutions will be addressed from development and implementation aspects for each step of the demo.

We hope this talk will inspire developers to create new IoT prototypes using FLOSS.

  • 17:30 – 17:55 – Open Smart Grid Platform presentation, an Open source IoT platform for large infrastructures, by Jonas van den Bogaard

The Open Smart Grid Platform is a generic, open source IoT platform built for organizations that manage and/or control large-scale infrastructures. The following use cases are readily available today: smart lighting, smart metering, tariff switching, and microgrids. Furthermore, the following use cases are in development: distribution automation, load management, and smart device management. The architecture of the open smart grid platform is modular and consists of multiple layers.

The open smart grid platform is unique in its embrace of the open source approach, and offers the following key features:

  • Suitable for scalable environments, delivering high performance
  • High availability and multi-tenant architectures
  • Built with security by design, and regularly tested
  • A generic architecture: more use cases and domains are easily added to the platform
  • Based on open standards where possible

We believe the platform is interesting for developers interested in working on use cases for smart cities, utility companies, and other large-scale infrastructure companies.

  • 18:00 – 19:00 – AGL as a generic secured industrial embedded Linux; factory production line controllers requirements are not that special, by Dominig ar Foll

There is no de facto secured embedded Linux distro, while the requirement is becoming more and more critical with the rise of IoT in industrial domains. When looking under the hood of the Yocto-built AGL (Automotive Grade Linux) project, it is obvious that it can fit 95% of the most common requirements for a secured embedded Linux. We will look at how non-automotive industries can easily reuse the AGL code and tools to build their own industrial products, and why that is a safer bet than building one internally.

Industrial IoT cannot be successful without a serious improvement in security coverage. Unfortunately, there is as of today no off-the-shelf offering, and the skills required to create such a solution are rare at best, and more often out of reach. AGL has created a customizable embedded Linux distro which is nicely designed for reuse in many domains outside of automotive. During the presentation we will see how to:

  • start your development with boards readily available on the net
  • change the BSP and add peripherals using Yocto layers or projects like MRAA
  • integrate secure boot into your platform
  • add your middleware and your applications without breaking the maintained core OS
  • develop a UI on the integrated screen and/or a remote HTML browser
  • update the core OS and your add-ons
  • get support and influence the project

Sunday, February 5, 2017

  • 10:00 – 11:00 – How I survived to a SoC with a terrible Linux BSP; Working with jurassic vendor kernels, missing pieces and buggy code, by Luca Ceresoli

In this talk Luca will share some of his experiences with such vendor BSPs, featuring jurassic kernels, non-working drivers, non-existent bootloaders, code of appallingly bad quality, ineffective customer support and Windows-only tools. You will discover why he spent weeks understanding, fixing and working around BSPs instead of just using them. The effects on final product quality will be described as well. Luca will also discuss what the options are when you face such a BSP, and what both hackers and vendors can do to improve the situation for everybody’s benefit.

  • 11:00 – 12:00 – Open Source Car Control, by Josh Hartung

This fall my team launched the Open Source Car Control (OSCC) project, a by-wire control kit that makes autonomous vehicle development accessible and collaborative for developers at every level. In this presentation, we discuss the project and its implications for the development of autonomous cars in a vertically integrated and traditionally closed industry.

A primary barrier to entry in autonomous vehicle development is gaining access to a car that can be controlled with an off-the-shelf computer. Purchasing from an integrator can cost upwards of $100K, and DIY endeavors can result in unreliable and unsafe solutions. The OSCC project acts as a solution to these problems. OSCC is a kit of open hardware and software (based on Arduino) that can be used to take control of the throttle, brake, and steering in modern cars. The result is a fully by-wire test car that can be built for about $10K (USD), including the vehicle. In this discussion, we unpack the impetus and development of the OSCC project, challenges we encountered during development, and the role projects like OSCC have in a necessary “flattening” of the automotive industry.

  • 12:00 – 13:00 – Kernel DLC Metrics, Statistic Analysis and Bug-Patterns, by Nicholas Mc Guire

SIL2LinuxMP strives to qualify a defined GNU/Linux subset for use in safety-related systems by “assessment of non-compliant development”. To demonstrate that the kernel has achieved suitable reliability and correctness properties, basic metrics of such properties and their statistical analysis can be used as part of the argument. Linux has a wealth of analytical tools built into it which allow extracting information on compliance and robustness of development, as well as basic metrics on complexity or correctness with respect to defined properties. While IEC 61508 Ed 2 always pairs testing and analysis, we believe that for a high-complexity system traditional testing is of relatively low effectiveness, and analytical methods need to be the primary path. To this end, we outline some of the approaches taken:

  • Bug-age analysis
  • Bug-rates and trend analysis
  • Code-complexity/bug relationship
  • Brain-dead correctness analysis
  • Interface and type-correctness analysis
  • API compliance analysis
  • Analysis of build-bot data

While much of the data points to robust and mature code, there are also some areas where problems popped up. In this talk we outline the methods used and give examples as well as key findings. FLOSS development has reached a quite impressive maturity; to go substantially beyond it, we think the use of quantitative process and code metrics will be needed, and these results from SIL2LinuxMP may be a starting point.
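To make one of the listed metrics concrete, bug age is simply the interval between the commit that introduced a defect (e.g. found with `git blame` on the fixed lines) and the commit that fixed it, both recoverable from git history. A minimal sketch on hypothetical data, not the project’s actual tooling:

```python
from datetime import date

def bug_ages_days(bugs):
    """bugs: iterable of (introduced, fixed) date pairs; returns age of each bug in days."""
    return [(fixed - introduced).days for introduced, fixed in bugs]

def mean_bug_age(bugs):
    """Average bug age; trend analysis would track this statistic per release."""
    ages = bug_ages_days(bugs)
    return sum(ages) / len(ages)
```

Plotting such ages per kernel release, or correlating them with the complexity of the files involved, gives the bug-age and code-complexity/bug relationships mentioned above.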

  • 13:00 – 14:00 – Loco Positioning: An OpenSource Local Positioning System for robotics, presentation with a demo of autonomous Crazyflie 2.0 quadcopter, by Arnaud Taffanel

Positioning in robotics has always been a challenge. Outdoors, GPS solves most practical problems, but indoors, precise localization is still done using expensive proprietary systems, mainly based on arrays of cameras.

In this talk, I will present the loco positioning system: an open source Ultra Wide Band radio-based local positioning system, why we need it and how it works. I will also speak about its usage with the Crazyflie 2.0 open source nano quadcopter, of course ending with an autonomous flying demo.

  • 14:00 – 14:50 – Free Software For The Machine, by Keith Packard

The Machine is a hardware project at Hewlett Packard Enterprise which takes a new look at computer architecture. With many processors and large amounts of directly addressable storage, The Machine program has offered an equally large opportunity for developing new system software. Our team at HPE has spent the better part of two years writing new software and adapting existing software to expose the capabilities of the hardware to application developers.

As directly addressable storage is such a large part of the new hardware, this presentation will focus on a couple of important bits of free software which expose that to applications, including our Librarian File System and Managed Data Structures libraries. Managed Data Structures introduces a new application programming paradigm where the application works directly on the stable storage form for data structures, eliminating serialization and de-serialization operations.

Finally, the presentation will describe how the hardware is managed, from sequencing power to a rack full of high-performance computing hardware, through constructing custom Linux operating systems for each processor and managing all of them as parts of a single computing platform.

  • 15:00 – 15:25 – Diving into the KiCad source code, by Maciej Sumiński

Let’s be sincere, all of us would love to change something in KiCad. I bet you have an idea for a new tool or another killer feature that would make your life so much easier.

You know what? You are free to do so! Even more, you are welcome to contribute to the project, and it is not as difficult as one may think. Those who have browsed the source code might find it overwhelming at first, but the truth is: you do not have to know everything to create useful extensions.

I would like to invite you for a walk through the KiCad source code to demonstrate how easy it is to add this tool you have always been dreaming about.

  • 15:30 – 16:00 – Testing with volcanoes – Fuego+LAVA, embedded testing going distributed, by Jan-Simon Möller

LAVA and Fuego are already great tools individually. Combining and extending them allows for much broader test coverage than either tool alone can provide.

The focus of this talk is to share the experiences made and lessons learned so people can integrate such tools better in their own environment. It also raises the pain-points and open issues when setting up a distributed environment.

Especially for automotive, long-term support, CIP, or consumer electronics, advancing the test harness is essential to raise the bar and strengthen confidence in our embedded platforms. Automated testing can improve our ecosystem from two sides: during development (the feature works and does not break things) and during maintenance (no regressions through backports).

  • 16:00 – 16:30 – Adding IEEE 802.15.4 and 6LoWPAN to an Embedded Linux Device, by Stefan Schmidt

Adding support for IEEE 802.15.4 and 6LoWPAN to an embedded Linux board opens up new possibilities to communicate with tiny IoT-type devices.

Bringing IP connectivity to devices like sensors, with just a few kilobytes of RAM and limited battery power, is an interesting IoT challenge. With the Linux-wpan and 6LoWPAN subsystems, Linux is ready to support the needed wireless standards as well as the protocols that connect these tiny devices to the wider Internet, making Linux a practical border router or smart-home hub for such networks.

This talk will show how to add the needed transceiver hardware to an existing board, and how to enable and configure the mainline Linux-wpan and 6LoWPAN subsystems to use it. The demonstration will also include setting up communication between Linux and popular IoT operating systems such as RIOT or Contiki.
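As an illustration of what such configuration typically involves on mainline Linux, a wpan interface bring-up might look like the following, using `iwpan` from wpan-tools and `ip`. The PAN ID and channel are example values, and the exact sequence can vary with kernel and tool versions:

```shell
# configure the IEEE 802.15.4 interface
iwpan dev wpan0 set pan_id 0xbeef
iwpan phy phy0 set channel 0 26      # page 0, channel 26 (2.4 GHz band)
# stack a 6LoWPAN interface on top and bring both up
ip link add link wpan0 name lowpan0 type lowpan
ip link set wpan0 up
ip link set lowpan0 up
```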

  • 16:30 – 17:00 – OpenPowerlink over Xenomai, by Pierre Ficheux

Industrial Ethernet is a successor to classic fieldbuses such as CAN, MODBUS or PROFIBUS. POWERLINK was created by B&R Automation and provides performance and real-time capabilities based on standard Ethernet hardware. openPOWERLINK is open source and runs on many platforms such as Linux, Windows, various RTOSes, and dedicated hardware (FPGA). We will explain how to use openPOWERLINK on top of Xenomai 3, a powerful real-time extension for the Linux kernel based on co-kernel technology.

FOSDEM 2017 will take place at the ULB Solbosch Campus in Brussels, Belgium, and no registration is required, you just need to show up in order to attend the event.

Sensors Predicting The Future – Elderly Persons Fall Prediction and Detection with Kinect, Webcams and Microphones

September 9th, 2016 No comments

Wearables can be used by young children or elderly persons to monitor their location or health, and one use case, especially for elderly persons, is fall detection. However, it’s quite possible that they don’t like wearing one and/or won’t always wear it, so the Center for Eldercare and Technology of the University of Missouri designed a system based on a Microsoft Kinect, two webcams, and microphones in order to detect falls, and even predict them by analyzing gait, i.e. the pattern of movement of the limbs.

The picture above shows at least part of the hardware setup, with the Kinect, a webcam, and a PC tower doing the processing, stored in a cupboard.

Fall detection algorithms rely on the microphone array, the Microsoft Kinect depth camera, and a two-webcam system used to extract silhouettes from orthogonal views and construct a 3D voxel model for analysis. Passive gait analysis algorithms, for their part, take data from the Kinect and the two-webcam system. The system was installed in 10 apartments, with data gathered over a period of 2 years, and the researchers found that a gait speed decline of 5 cm/s was associated with an 86.3% probability of falling within the following three weeks, and that a shortened stride length was associated with a 50.6% probability of falling within the next three weeks.

You can see Gait detection in action in the video below.

More details about the studies and links to research papers can be found on Active Heterogeneous Sensing for Fall Detection and Fall Risk Assessment page on the University of Missouri website.

Via Electronics Weekly

Project OWL Open Source Hardware Ophthalmoscope is 25 Times Cheaper than Commercial Products

August 12th, 2016 4 comments

Medical-grade equipment is usually very expensive, partly because of its complexity, but also because of certifications, legal reasons, and low manufacturing volumes. That’s where open source hardware can make a big difference, and there have been several open source hardware prosthetic hands and arms, such as the Openbionics hand. Ebin Philip and his team have tackled another issue with Project OWL, an open indirect ophthalmoscope (OIO) designed for screening retinal diseases. Such a device normally costs between $10,000 and $25,000, but their open source hardware design can be put together for about $400.


The design features a Raspberry Pi 2 board connected to a WaveShare 5″ touchscreen LCD, a Raspberry Pi IR camera (M12 lens mount) with a 16mm FL M12 lens, a 3-watt Luxeon LED, two 50x50mm mirrors, a linear polarizer sheet, a 20-dioptre disposable lens, and various passive components.


OIO (OWL) Prototype development

While the Raspberry Pi board is not open source hardware itself, Ebin has shared the CAD files for the design, as well as the schematics and Gerber files for the RPi shield used in the project, on Hackaday.io, where you’ll also find the project logs. Assembly instructions are currently missing, however. On the software side, the images are processed with OpenCV to remove the background and reflections.
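The project relies on OpenCV for that processing step; the underlying idea of masking out pixels that match a reference frame can be sketched in dependency-free Python on grayscale images represented as nested lists (the threshold value is an arbitrary example, not the project’s actual parameters):

```python
def foreground_mask(image, background, threshold=40):
    """Return a binary mask: 1 where a pixel differs from the reference
    background frame by more than `threshold`, else 0 (static background
    and constant reflections cancel out in the difference)."""
    return [
        [1 if abs(p - b) > threshold else 0 for p, b in zip(img_row, bg_row)]
        for img_row, bg_row in zip(image, background)
    ]
```

In OpenCV the same idea is a one-liner with `cv2.absdiff` plus a threshold, operating on real camera frames instead of toy arrays.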

The main goal of the project is to detect retina problems on diabetic patients in rural areas:

Currently there are over 422 million people worldwide suffering from diabetes. 28.5% of them suffer from Diabetic Retinopathy. 50% of diabetics are unaware about the risk of losing their vision. The number of cases of diabetic retinopathy increased from 4 million in 2000 to 7.69 million in 2010 in US alone. Early detection and Treatment can help prevent loss of vision in most cases.

Detection of Diabetic Retinopathy requires expensive devices for Retinal Imaging, even the cheapest of them costing more than $9000 each. This makes good quality eyecare expensive and inaccessible to the less privileged. The key idea in the development of OIO (code-named Project OWL) is to provide an affordable solution to help identify DR and hence prevent cases of “avoidable blindness”.

I’m unclear on whether this tool is also appropriate for other tests, such as a dilated fundus examination, or checking the optic nerve in glaucoma patients, but if it can be used or adapted for such purposes, the implications would be even greater.