Archive

Posts Tagged ‘programming’

XOD is a Visual Programming Language for Arduino, Raspberry Pi, and other Maker Boards

June 2nd, 2017 14 comments

When you think about visual programming on the Raspberry Pi or an Arduino board, Scratch may come to mind, but a team of developers has decided to create their own visual programming language that works with Arduino, Raspberry Pi, and other boards. Meet XOD, pronounced “ksəud”.


The developers explain they used “functional reactive programming principles and added graphical functionality”. XOD is comprised of “nodes” that represent either a physical device – like a sensor, motor, or relay – or an operation such as addition, comparison, or text concatenation. You link nodes together through their inputs and outputs to create a program, and the XOD IDE compiles the resulting diagram and uploads the binary program to the Arduino, Raspberry Pi, etc… You can also convert a XOD diagram into a node with its own inputs and outputs for use in another diagram, so the language is scalable.

The developers are now looking for testers to play with the XOD private alpha, but the XOD language, IDE, and library source code will be released on GitHub once launched. You can register with your email address on their website if you are interested in participating in the test program.

Top Programming Languages & Operating Systems for the Internet of Things

May 19th, 2017 3 comments

The Eclipse Foundation has recently run its IoT Developer Survey, answered by 713 developers, asking about IoT programming languages, cloud platforms, IoT operating systems, messaging protocols (MQTT, HTTP), IoT hardware architectures, and more. The results have now been published, so let’s have a look at some of the slides, especially with regard to programming languages and operating systems, bearing in mind that IoT is a general term that may apply to sensors, gateways, and the cloud, so the survey correctly separated languages for different segments of the IoT ecosystem.


C and C++ are still the preferred languages for constrained devices, and developers normally use more than one language, as the totals add up to well over 100%.


IoT gateways are more powerful hardware with more resources (memory/storage), so it’s no surprise that higher-level languages like Java and Python join C and C++, with Java being the most used language, selected by 40.8% of respondents.


When it comes to the cloud, with virtually unlimited resources and, in most cases, no need to interface with hardware, higher-level languages like Java, JavaScript, Node.js, and Python take the lead.


When it comes to operating systems on constrained IoT devices, Linux takes the lead with 44.1%, in front of bare metal (27.6%) and FreeRTOS (15.0%). Windows also shows up in fourth place, probably as a mix of Windows IoT Core, Windows Embedded, and WinCE.


Linux is the king of IoT gateways with 66.9% of respondents using it, far ahead of Windows in second place with 20.5%. There is no chart for the cloud, probably because users don’t run their own cloud servers, but rely on providers instead. The survey did ask specifically about the Linux distributions used for IoT projects, and the results are a bit surprising, with Raspbian taking the lead at 45.5%, followed closely by Ubuntu Core at 44.4%.


Maybe Raspbian has been used during the prototyping phase or for evaluation, as most developers (84%) have been using cheap development boards like Arduino, BeagleBone or Raspberry Pi. 20% also claim to have deployed such boards in IoT solutions.


Those are only a few slides from the survey results, and you’ll find more details about Intel/ARM hardware share, messaging & industrial protocols, cloud solutions, wireless connectivity, and more in the full slide deck.

Via Ubuntu Insights

Android Studio 3.0 Preview Release with Support for Kotlin Programming Language, Android O Preview Images

May 18th, 2017 No comments

Most Android apps used to be programmed in Java with the Eclipse IDE. Google then introduced Android Studio in 2013, which has since replaced Eclipse, and with the release of the Android Studio 3.0 Canary 1 preview, the company now offers developers the option to program apps in the Kotlin language instead of Java.


The Kotlin programming language is 100% interoperable with Java, and you can even mix Kotlin and Java in your code. Kotlin can make your code much simpler when declaring classes, and it has a few other improvements over Java. Android Studio also includes a Java-to-Kotlin converter. The language has already been used by Expedia, Flipboard, Pinterest, Square, and others.
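To illustrate the point about class declarations, here’s a quick hedged sketch (the class and property names are made up for the example, not taken from any of those companies’ code): a model class that would need a constructor, getters, equals(), hashCode(), and toString() in Java collapses to a single line in Kotlin.

```kotlin
// A single data class declaration generates equals(), hashCode(),
// toString(), and copy() automatically.
data class User(val name: String, val email: String, val premium: Boolean = false)

fun main() {
    val user = User("Jane", "jane@example.com")
    println(user)                              // User(name=Jane, email=..., premium=false)
    val upgraded = user.copy(premium = true)   // non-destructive update
    println(upgraded == user)                  // structural equality: false
}
```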

Android Studio 3.0 also brings many other improvements, such as performance profiling tools for CPU, memory, and network usage showing your app’s performance in real-time, and faster Gradle builds for large app projects.

Android Studio 3.0 also brings changes specific to Android platform development, such as:

  • Support for Instant App development
  • Inclusion of the Google Play Store in the Android O emulator system images
  • Font resources management
  • New wizards for Android O development, etc.

The video below gives a good overview of the many changes in Android Studio 3.0.

You can download Android Studio 3.0 Canary 1 for Linux, Windows, or Mac to give it a try. It’s also a good way to try Android O, if you don’t own a recent Nexus or Pixel device, or don’t want to flash a beta image to your phone.

Self-hosted OpenGL ES Development on Ubuntu Touch

January 15th, 2017 5 comments

Blu wrote the BQ Aquaris M10 Ubuntu Edition review – from a developer’s perspective – last year, and he’s now back with a new post explaining how to develop and deploy OpenGL ES applications directly on the Ubuntu Touch tablet.

Ever since I started using a BQ M10 for console apps development on the go I’ve been wanting to get something, well, flashier going on that tablet. Since I’m a graphics developer by trade and by heart, GLES was the next step on the Ubuntu Touch for me. This article is about writing, building and deploying GLES code on Ubuntu Touch itself, sans a desktop PC. Keep that in mind if some procedure seems unrefined or straight primitive to you – for one, I’m a primitive person, but some tools available on the desktop are, in my opinion, impractical on the Touch itself. That means no QtCreator today, nor Qt, for that matter.

The display of any contemporary Ubuntu Touch device is powered by Mir – a modern compositor/surface manager taking care of all (rectangular-ish) things eventually appearing on screen. We won’t be delving much into Mir beyond obtaining an EGL context (EGL being the binding layer between GLES and the native windowing system). But enough ado – let’s get to work.

Preparations for doing GLES on a Ubuntu Touch box:
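The package list from the original post didn’t survive the page conversion; judging by the next paragraph (gcc/g++-4.9, make, gdb-7.9, and Mir headers), it was presumably something along these lines (the exact package names are an assumption):

```
sudo apt-get update
# toolchain and debugger, plus the Mir client development headers
sudo apt-get install build-essential gdb libmirclient-dev
```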

The above, as of the time of this writing, should provide you with gcc/g++-4.9, make and gdb-7.9, among other things. The last package and its dependencies provide you with up-to-date Mir headers. Git comes out of the box, IIRC, but if it’s missing just apt-get it.

We need a primer to step on, so here’s my adaptation of Don Bright’s Mir/GLES adaptation of Joe Groff’s OpenGL tutorials, using Daniel van Vugt’s Mir/EGL examples (yes, that’s quite a chain of adaptations):

I’ve taken the liberty of expanding on the work of those gentlemen by bringing the Mir integration up to date, handling the Touch’s novel Desktop Mode, and throwing in my own dusty GLES sample code for good measure.

To build and install the primer, just do:
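The actual command is missing from the extracted post; assuming the build script in the repo is simply called build.sh, it would be along the lines of:

```
./build.sh
```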

That will provide you with the original police-car flashing-lights primer. An alternative primer featuring tangent-space bump-mapping can be built by passing the arg ‘guest’ to the build script:
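Again, assuming the script name build.sh:

```
./build.sh guest
```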

Both versions of the primer use a fundamentally identical interface – a resource-initialization procedure and a frame-drawing procedure – so it’s not much of an effort to use the respective routines from either primer in the framework of the host app hello.cpp, and thus get a running render loop.

A few words about the peculiarities of GLES development for Ubuntu Touch. It took me some time to show anything on screen, despite the fact I had a valid draw context and a render loop soon after the primer was building successfully. The reason is that Unity8 on the Touch will not simply let you run a window-painting app from the terminal – you would get your Mir and EGL contexts alright, but the target surface will never be composited to the screen of the device upon eglSwapBuffers() unless you take certain actions. You have two alternatives here:

  • Produce a valid Click package from your app and subsequently install that to the Apps pane (what our build script does), where you can launch from an icon, or…
  • Use a launcher app to start your window app (info courtesy of Daniel van Vugt):

Unfortunately the second (much quicker and more convenient) approach is not currently usable due to a bug, so we’ll stick with the first. Any command-line args we’d want to pass to the app will need to be written to the app’s .desktop file, which can be found at the official app location after installation.

In that file, set the desired args on the ‘Exec’ line, like this:
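The example line didn’t survive extraction; a hypothetical Exec entry (binary name and flag invented for illustration) could look like:

```
Exec=hello --fullscreen
```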

Another peculiarity was that in Desktop Mode the app window does a classical ‘zoom to full size’ animation at start. Nothing extraordinary in that, if not for the fact that the Mir surface itself resizes along with the window. Now, a default viewport in a GLES context spans the geometry of the target surface at the time of its creation, which, in our case, is the start of the window-zoom animation, with its tiny surface geometry. One needs to wait for the zoom animation to finish, and then set the viewport geometry to the final geometry of the Mir surface, or live with postage-stamp-sized output in the lower left corner of the window, if the viewport is left unchanged.
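As a minimal sketch of that viewport fix, assuming the Mir client API of that era (mir_surface_get_parameters() did exist in mir_toolkit at the time) and that the app can tell the animation has settled, e.g. on a surface resize event, it boils down to re-querying the surface geometry:

```c
#include <mir_toolkit/mir_client_library.h>
#include <GLES2/gl2.h>

/* Once the window-zoom animation has finished, re-read the Mir surface
 * geometry and stretch the GL viewport over the final surface size. */
static void fix_viewport(MirSurface *surface)
{
    MirSurfaceParameters params;
    mir_surface_get_parameters(surface, &params);
    glViewport(0, 0, params.width, params.height);
}
```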

Once we get past those teething hurdles, we actually get quite a nicely behaving full-screen app on our hands – it composites smoothly with all the other Ubuntu Touch desktop elements, like the Launcher tab at the desktop’s left edge and the pull-down Indicator pane on the right (see screenshot). Our app even does live output to the Scopes selector screen (i.e. the task-switching screen) – behold the miracles of modern-day screen compositors! ; )


But hey, don’t just take my word for it – try out GLES coding on a Ubuntu Touch device – you have the basics covered:

  • App’s rendering loop and the entirety of the flashing-screen primer are in hello.cpp
  • Mir context creation and subsequent EGL context binding are in eglapp.cpp
  • Bump-mapping primer is entirely in app_sphere.cpp
  • Various helpers are spread across util_* TUs and hello.cpp
  • All files necessary for the generation of the Click package are in the resource folder.

In conclusion, self-sustained development on the Ubuntu Touch is a perfectly viable scenario (take that, iOS!). Moreover, the GPU in the BQ M10 turned out to have a very nice modern GLES3 (3.1) stack – see excerpts from the app logs below. Actually, this is my first portable device with a GLES 3.1 stack, so I haven’t started using it properly yet — the GLES2 primer above doesn’t make use of the new functionality.

If I have to complain about something from the development of this primer, it’d be that I couldn’t use my arm64 code on the primer, since there are only armhf (32-bit) EGL/GLES libraries available for the Touch. So 64-bit code on the Ubuntu Touch remains in console land for now.

Excerpts from the primer logs:

egl version, vendor, extensions:

1.4 Android META-EGL
Android
EGL_KHR_get_all_proc_addresses EGL_ANDROID_presentation_time EGL_KHR_image EGL_KHR_image_base EGL_KHR_gl_texture_2D_image EGL_KHR_gl_texture_cubemap_image EGL_KHR_gl_renderbuffer_image EGL_KHR_fence_sync EGL_KHR_create_context EGL_ANDROID_image_native_buffer EGL_KHR_wait_sync EGL_ANDROID_recordable EGL_HYBRIS_native_buffer2 EGL_HYBRIS_WL_acquire_native_buffer EGL_WL_bind_wayland_display

gl version, vendor, renderer, glsl version, extensions:

OpenGL ES 2.0 (OpenGL ES 3.1)
ARM
Mali-T720
OpenGL ES GLSL ES 3.10
GL_EXT_debug_marker GL_ARM_rgba8 GL_ARM_mali_shader_binary GL_OES_depth24 GL_OES_depth_texture GL_OES_depth_texture_cube_map GL_OES_packed_depth_stencil GL_OES_rgb8_rgba8 GL_EXT_read_format_bgra GL_OES_compressed_paletted_texture GL_OES_compressed_ETC1_RGB8_texture GL_OES_standard_derivatives GL_OES_EGL_image GL_OES_EGL_image_external GL_OES_EGL_sync GL_OES_texture_npot GL_OES_vertex_half_float GL_OES_required_internalformat GL_OES_vertex_array_object GL_OES_mapbuffer GL_EXT_texture_format_BGRA8888 GL_EXT_texture_rg GL_EXT_texture_type_2_10_10_10_REV GL_OES_fbo_render_mipmap GL_OES_element_index_uint GL_EXT_shadow_samplers GL_OES_texture_compression_astc GL_KHR_texture_compression_astc_ldr GL_KHR_texture_compression_astc_hdr GL_KHR_debug GL_EXT_occlusion_query_boolean GL_EXT_disjoint_timer_query GL_EXT_blend_minmax GL_EXT_discard_framebuffer GL_OES_get_program_binary GL_OES_texture_3D GL_EXT_texture_storage GL_EXT_multisampled_render_to_texture GL_OES_surfaceless_context GL_OES_texture_stencil8 GL_EXT_shader_pixel_local_storage GL_ARM_shader_framebuffer_fetch GL_ARM_shader_framebuffer_fetch_depth_stencil GL_ARM_mali_program_binary GL_EXT_sRGB GL_EXT_sRGB_write_control GL_EXT_texture_sRGB_decode GL_KHR_blend_equation_advanced GL_OES_texture_storage_multisample_2d_array GL_OES_shader_image_atomic

GL_MAX_TEXTURE_SIZE: 8192
GL_MAX_CUBE_MAP_TEXTURE_SIZE: 4096
GL_MAX_VIEWPORT_DIMS: 8192, 8192
GL_MAX_RENDERBUFFER_SIZE: 8192
GL_MAX_VERTEX_ATTRIBS: 16
GL_MAX_VERTEX_UNIFORM_VECTORS: 1024
GL_MAX_VARYING_VECTORS: 15
GL_MAX_FRAGMENT_UNIFORM_VECTORS: 1024
GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS: 48
GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS: 16
GL_MAX_TEXTURE_IMAGE_UNITS: 16

A Look at Three Options to Develop Real-Time Linux Systems on Application Processors – HMP, Real-Time Linux and Xenomai

October 15th, 2016 6 comments

This is a guest post written by Guilherme Fernandes, Raul Muñoz, Leonardo Veiga, and Brandon Shibley, all working for Toradex.

Introduction

Application processor usage continues to broaden. System-on-chips, usually powered by ARM Cortex-A cores, are taking over several spaces where small ARM Cortex-M and other microcontroller devices have traditionally dominated. This trend is driven by several factors, such as:

  • The strong requirements for connectivity, often related to IoT, not only from a hardware point of view, but also related to software, protocols, and security;
  • The need for highly interactive interfaces such as multi-touch, high-resolution screens, and elaborate graphical user interfaces;
  • The decreasing price of SoCs, as a consequence of volume gains and new production capabilities.

Typical cases exemplifying this trend are the customers we see every day starting a product redesign to upgrade from a microcontroller to a microprocessor. This move brings new challenges, as the hardware design is more complicated and the operating system abstraction layer is much more complex. The difficulty of hardware design using an application processor is overcome by the use of reference designs and off-the-shelf alternatives like computer-on-modules or single board computers. On the operating system layer, the use of embedded Linux distributions is widespread in the industry. An immense world of open source tools is available, simplifying the development of complex and feature-rich embedded systems. Such development would be very complicated and time-consuming using microcontrollers. Despite all the benefits, the use of an operating system like Linux still raises a lot of questions and distrust when determinism and real-time control applications are addressed.

A common approach adopted by developers is the strategy of separating time-critical tasks and regular tasks onto different processors. Hence, a Cortex-A processor, or similar, is typically selected for multimedia and connectivity features while a microcontroller is still employed to handle real-time, determinism-critical tasks. The aim of this article is to present some options developers may consider when developing real-time systems with application processors. We present three possible solutions to provide real-time capability to application processor based designs.

Heterogeneous Multicore Processing

The Heterogeneous Multicore Processing (HMP) approach is a hardware solution. Application processors like the NXP i.MX7 series, the NXP i.MX6SoloX, and the upcoming NXP i.MX8 series offer a variety of cores with different purposes. If you consider the i.MX7S, you will see a dual-core processor composed of a Cortex-A7 core @ 800 MHz side-by-side with a Cortex-M4 core @ 200 MHz. The basic idea is that the user interface and high-speed connectivity are implemented on an abstracted OS like Linux running on the Cortex-A core while, independently and in parallel, control tasks execute on a real-time OS, like FreeRTOS, on the Cortex-M core. Both cores are able to share access to memory and peripherals, allowing flexibility and freedom when defining which tasks are allocated to each core/OS. Refer to Figure 1.

Figure 1 – NXP i.MX7 Block Diagram

Some of the advantages of using the HMP approach are:

  • Legacy software from microcontrollers can be more easily reused;
  • Firmware updates (M4 core) are simplified, as the firmware may be a file in the filesystem of the Cortex-A OS;
  • Increased flexibility in choosing which peripherals will be handled by each core. Since this is software defined, future changes can be made without changing the hardware design.

More information on developing applications for HMP-based processors is available in these two articles:

Toradex, Antimicro, and The Qt Company collaboratively built a robot showcasing this concept. The robot – named TAQ – is an inverted-pendulum balancing robot designed with the Toradex Colibri iMX7 computer-on-module. The user interface is built upon Linux with the Qt framework running on the Cortex-A7, and the balancing/motor control is deployed on the Cortex-M4. Inter-core communication is used to remote-control the robot and animate its face, as seen in the short video below.
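To give an idea of what that inter-core communication can look like on the Linux side, here is a minimal hedged sketch assuming NXP’s RPMsg TTY channel; the /dev/ttyRPMSG device name comes from NXP’s BSP, while the command string is purely hypothetical (the robot’s actual protocol isn’t published in this post):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Send a command string from Linux (Cortex-A7) to the firmware
 * running on the Cortex-M4, over the RPMsg TTY channel. */
int main(void)
{
    int fd = open("/dev/ttyRPMSG", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open /dev/ttyRPMSG");
        return 1;
    }

    const char *cmd = "face smile\n";   /* hypothetical command */
    if (write(fd, cmd, strlen(cmd)) < 0)
        perror("write");

    close(fd);
    return 0;
}
```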

Real-Time Linux

The second approach we present in this article is software related. Linux is not a real-time operating system, but there are some initiatives which have greatly improved the determinism and timeliness of Linux. One of these efforts is the Real-Time Linux project. Real-Time Linux is a series of patches (PREEMPT_RT) aimed at adding new preemption options to the Linux kernel, along with other features and tools to improve its suitability for real-time tasks. You can find documentation on applying the PREEMPT_RT patch to the Linux kernel and developing applications for it at the official Real-Time Linux wiki.

We did some tests using the PREEMPT_RT patches on a Colibri iMX6DL to exemplify the improvement in real-time performance. The documentation on preparing the Toradex Linux image to deploy the PREEMPT_RT patch is available at this link. We developed a simple application which toggles a GPIO at 2.5 kHz (200 µs high / 200 µs low). The GPIO output is connected to a scope where we measure the resulting square wave and evaluate the real output timings. The histograms below show the comparison between the tests on a standard Linux kernel configured for voluntary preemption (top) and a PREEMPT_RT patched Linux kernel configured for real-time preemption (bottom). The x-axis represents the measured period of each square wave sample, and the y-axis represents the number of samples measured with that period. The table below the chart presents the worst-case and average data.
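The source of the test application isn’t included here, but a cyclic task of this kind conventionally looks roughly like the sketch below; the GPIO sysfs path and the priority value are assumptions, and a real measurement would more likely poke the GPIO registers directly to keep the toggling overhead out of the picture:

```c
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define NSEC_PER_SEC   1000000000L
#define HALF_PERIOD_NS 200000L    /* 200 µs high + 200 µs low = 2.5 kHz */

int main(void)
{
    /* Real-time FIFO scheduling and locked memory keep the scheduler
     * and the pager from adding latency to the loop. */
    struct sched_param sp = { .sched_priority = 80 };  /* assumed priority */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");
    mlockall(MCL_CURRENT | MCL_FUTURE);

    int fd = open("/sys/class/gpio/gpio45/value", O_WRONLY);  /* assumed pin */
    if (fd < 0) {
        perror("gpio");
        return 1;
    }

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    int level = 0;

    for (;;) {
        /* Advance the absolute deadline by half a period and sleep to it;
         * absolute deadlines prevent drift from accumulating. */
        next.tv_nsec += HALF_PERIOD_NS;
        if (next.tv_nsec >= NSEC_PER_SEC) {
            next.tv_nsec -= NSEC_PER_SEC;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        level = !level;
        pwrite(fd, level ? "1" : "0", 1, 0);
    }
}
```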

Figure 2: Histogram of the square wave generated using the standard kernel (top) and PREEMPT_RT kernel (bottom)

Description       | Samples | Smallest (µs) | Worst Case for 99% of Samples (µs) | Worst Case (µs) | Median (µs) | Average (µs)
Default Kernel    | 694,780 | 36            | 415                                | 4,635           | 400         | 400
PREEMPT_RT Kernel | 683,593 | 369           | 407                                | 431             | 400         | 400

Table 1: Comparison between Default Kernel and real-time Kernel when generating a square wave.

An example software system using the PREEMPT_RT patch is provided by Codesys Solutions. They rely on the Real-Time Linux kernel, together with the OSADL (Open Source Automation Development Lab), to deploy their software PLC solution, which is already widespread throughout the automation industry across thousands of devices. The video below presents the solution running on an Apalis iMX6Q.

Xenomai

Xenomai is another popular framework for making Linux a real-time system. Xenomai achieves this by adding a co-kernel to the Linux kernel. The co-kernel handles time-critical operations and has higher priority than the standard kernel. To use the real-time capabilities of Xenomai, the real-time APIs (aka libcobalt) must be used to interface user-space applications with the Cobalt core, which is responsible for ensuring real-time performance.


Figure 3: Dual Core Xenomai Configuration

Documentation on how to install Xenomai on your target device can be found on the Xenomai website. Additionally, a variety of embedded hardware is known to work, as indicated in the hardware reference list, which includes the whole NXP i.MX SoC series.

To validate the use of Xenomai on the i.MX6 SoC, we also developed a simple experiment. The target device was the Colibri iMX6DL by Toradex. We ran the same test approach as described above for the Real-Time Linux extension. Some parts of the application code used to implement the test are presented below to highlight the use of the Xenomai APIs.
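Those code excerpts didn’t survive the page conversion, so here is a hedged reconstruction of what such a periodic task looks like with Xenomai 3’s Alchemy API; the task name, priority, and GPIO path are assumptions, and sysfs I/O would demote the task to secondary mode, so a real benchmark would toggle the GPIO registers directly:

```c
#include <alchemy/task.h>
#include <alchemy/timer.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static RT_TASK toggle_task;

/* Periodic task released by the Cobalt co-kernel every 200 µs,
 * producing the 2.5 kHz square wave on the GPIO. */
static void toggle(void *arg)
{
    int fd = open("/sys/class/gpio/gpio45/value", O_WRONLY); /* assumed pin */
    int level = 0;

    /* Period is given in nanoseconds, starting now. */
    rt_task_set_periodic(NULL, TM_NOW, 200000);

    for (;;) {
        rt_task_wait_period(NULL);
        level = !level;
        pwrite(fd, level ? "1" : "0", 1, 0);
    }
}

int main(void)
{
    /* Priority 99 is the highest; Cobalt schedules this task ahead of
     * anything running under the regular Linux kernel. */
    rt_task_create(&toggle_task, "square_wave", 0, 99, T_JOINABLE);
    rt_task_start(&toggle_task, &toggle, NULL);
    rt_task_join(&toggle_task);
    return 0;
}
```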

The results comparing Xenomai against a standard Linux kernel are presented in the chart below. Once again, the real-time solution provides a clear advantage – this time with even greater distinction – over the time-response of the standard Linux kernel.

Figure 4: Histogram of the square wave generated using the standard kernel (top) and Xenomai (bottom)

Description            | Samples   | Smallest (µs) | Worst Case for 99% of Samples (µs) | Worst Case (µs) | Median (µs) | Average (µs)
Default Kernel         | 694,780   | 36            | 415                                | 4,635           | 400         | 400
Xenomai Implementation | 1,323,521 | 386           | 402                                | 414             | 400         | 400

Table 2: Comparison between Default Kernel and Xenomai implementation when generating a square wave.

Conclusion

This article presented a brief overview of some solutions available to develop real-time systems on application processors running Linux as the target operating system. This is a starting point for developers who are aiming to use microprocessors and are concerned about real-time control and determinism.

We presented one hardware-based approach, using Heterogeneous Multicore Processing SoCs, and two software-based approaches, namely the PREEMPT_RT patch and Xenomai. The results presented are not intended to compare operating systems or real-time techniques. Each of them has strong and weak points and may be more or less suitable depending on the use case.

The primary takeaway is that several feasible solutions exist for utilizing Linux with application processors in reliable real-time applications.

Top 10 Programming Languages in 2016 for Embedded Software Development

July 27th, 2016 6 comments

IEEE Spectrum has published a list of the top programming languages in 2016 for the Web, Mobile, Enterprise, and Embedded sectors, with rankings created by weighting and combining 12 metrics from 10 sources. So I thought it would be fun to have a look at the top 10 languages used for embedded software, and the results are:

Top 10 Embedded Programming Languages 2016

As expected, C and C++ are at the top, but I’m quite surprised that “Arduino” is now considered a programming language, as it is simply based on C/C++. When I worked as an embedded software engineer a few years ago, I personally used C and Assembly, and to a lesser extent C++ and VHDL. I only recently started to play with Arduino code, and while I’ve heard of most other languages in the list, it’s the first time I’ve ever seen Ladder Logic, probably because it’s designed to program PLCs in industrial control applications.

The methodology used is interesting, as the company did not survey actual engineers or developers, but instead used data from technical, social, and job search websites, such as GitHub, Twitter, and CareerBuilder, to generate the rankings.

Google Summer of Code 2015 is Now Open for Student Applications

March 19th, 2015 2 comments

Google has now announced that student applications for the Google Summer of Code (GSoC) are open. Students can get paid up to $5,500 to work on one of the various open source projects selected for the event.

Fewer companies have been accepted this year, and even big names like the Linux Foundation and Mozilla had their applications rejected. There are still over 137 open source projects to work on, including:

  • MinnowBoard project – Potential software projects for the Intel Atom embedded board include making low-speed I/O buses more accessible via intermediate open source libraries (e.g. SMBus/PMBus/Wiring libraries), and improving the open source firmware.
  • lowRISC SoC project – Potential projects: Schematic Viewer for Netlists (SVG/JavaScript), open source FPGA compilation flow using Yosys, accessing the OpenCores ecosystem, etc…
  • BeagleBoard.org – Lots of project ideas relying on the BeagleBone Black board, dealing with Linux kernel support for embedded devices and interfaces, ARM processor support in open source operating systems and libraries, Heterogeneous co-processor (PRU) support in open source operating systems and libraries, and more.

Interested students can browse the projects and submit their own proposals, whether based on the “idea pages” or not, before Friday, March 27 at 19:00 UTC on the application page.

Students who are accepted will work on an actual open source software project over the summer, be paired with a mentor, and get paid for their work. You need to be at least 18 years old and enrolled in an accredited academic institution anywhere in the world. You don’t necessarily need to follow a Computer Science or Electronics Engineering program to apply, as past students have also come from disciplines such as Ecology, Medicine, and Music. Getting accepted to GSoC and having worked on an open source project for several weeks is certainly something nice to have on your CV. Good luck!

Thanks to Alex (lowRISC) for the tip.

Learn How to Write a Driver for Linux 3.x With The Linux Driver Template

November 14th, 2012 No comments

A Linux Driver Template (LDT) has been published to help new Linux kernel developers write hardware device drivers.

Constantine Shulyupin posted the Linux Driver Template (LDT) on the Linux mailing list in order to merge it into the mainline Linux kernel. The code can be used as a starting point for new drivers, and shows how to use several Linux facilities such as modules, platform drivers, file operations (read/write, mmap, ioctl, blocking and non-blocking modes, polling), kfifo, completion, interrupts, tasklets, work queues, kthreads, timers, a simple misc device, multiple char devices, the Device Model, configfs, UART, hardware loopback, software loopback, and ftrace.
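To give a flavor of the simplest item on that list, here is a minimal, hedged sketch of a misc character device for a 3.x-era kernel; the device name and the canned read behavior are invented for illustration and are not taken from LDT itself:

```c
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

/* Hand a short fixed message to any reader of the device node. */
static ssize_t hello_read(struct file *file, char __user *buf,
                          size_t count, loff_t *ppos)
{
	static const char msg[] = "hello from a misc device\n";

	return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations hello_fops = {
	.owner = THIS_MODULE,
	.read  = hello_read,
};

/* Registered as /dev/hello_ldt with a dynamically allocated minor number. */
static struct miscdevice hello_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "hello_ldt",
	.fops  = &hello_fops,
};

static int __init hello_init(void)
{
	return misc_register(&hello_dev);
}

static void __exit hello_exit(void)
{
	misc_deregister(&hello_dev);
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```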

This sample has been added to the other device driver samples on eLinux.org. And if you want to learn more, there’s always the Linux driver bible, “Linux Device Drivers, Third Edition”, which can be downloaded for free as a PDF, although it targets the 2.6.10 kernel and many parts may not be up to date.

Via: Phoronix