Archive

Posts Tagged ‘qa’

Lab in a Box Concept Embeds x86 Server and 6 ARM Boards into a PC Case for Automated Software Testing

November 3rd, 2017 7 comments

The Linux kernel now has about 20 million lines of code. Arm has hundreds of licensees making thousands of processors and microcontrollers, which end up in perhaps hundreds of thousands of different designs. Many of those don't run Linux, but for the ones that do, Linux must be tested to make sure it works. The same holds true for any large piece of software used across multiple hardware platforms.

Manual testing is one way to do it, but it's time-consuming and expensive, so there are software and hardware continuous integration solutions that automate testing, such as Linaro LAVA (Linaro Automated Validation Architecture), KernelCI automated Linux kernel testing, and Automotive Grade Linux CIAT, which automatically test incoming patch series.

Both CIAT and KernelCI focus on Linux and rely on LAVA, with KernelCI leveraging hardware contributed by the community. The approach has proven effective: since it was implemented, the number of failed build configs dropped from 51 with Linux 3.14 to zero today. However, setting up the hardware and LAVA can be complicated and messy with all the different boards lying around, so BayLibre engineers worked on an affordable “Lab in a Box” concept to simplify administration and duplication of such systems, in the hope of getting more people involved.
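To give an idea of the software side before looking at the hardware, here is a minimal sketch of submitting a test job to a LAVA instance over its standard XML-RPC API; the hostname, user, token and job file below are placeholders, not the actual Lab in a Box setup.

import xmlrpc.client

# Placeholders: replace with your own LAVA instance, user name and API token.
LAVA_HOST = "lava.example.com"
LAVA_USER = "ci-bot"
LAVA_TOKEN = "secret-token"

# LAVA exposes an XML-RPC endpoint at /RPC2; credentials go into the URL.
server = xmlrpc.client.ServerProxy(
    "https://%s:%s@%s/RPC2" % (LAVA_USER, LAVA_TOKEN, LAVA_HOST))

# A LAVA job definition is a YAML document describing the device type,
# boot method and tests to run on the DUT.
with open("healthcheck-beaglebone-black.yaml") as f:
    job_definition = f.read()

job_id = server.scheduler.submit_job(job_definition)
print("Submitted LAVA job", job_id)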


They ended up with a nicely packaged system that fits into a desktop PC tower and includes:

  • ASRock Q1900B-ITX motherboard based on Intel Celeron J1900 with 8GB RAM and 120GB SSD running LAVA master and dispatcher
  • Devices Under Test (DUTs) will vary depending on your needs, but the demo system includes:
    • Renesas R-Car M3 Starter Kit
    • DragonBoard 410C
    • AML-S905-CC (LePotato) board
    • BeagleBone Black
    • Raspberry Pi 3
    • NXP SABRE Light
  • Connectivity / wiring
    • Network switch
    • USB hub
    • For each DUT board: power cable, serial debug cable, Ethernet cable
  • ACME Cape + Probes + BeagleBone Black to measure power consumption and control the DUTs
  • Power Supply – 530 Watt ATX power supply with +12V and +5V


The system has been proven to work, with a complete continuous integration setup fitted into a single PC case and costing about 400 Euros excluding the DUTs. Software installation has also been simplified with partially automated installation scripts (work in progress). However, there is still room for improvement: building a unit takes a long time, partly because each DUT requires custom wiring, boards need to accept either 5V or 12V input, and DUT power consumption must be limited to 4A per pair of wires. The system also only supports boards that fit into such a case, and it's not really scalable, since using a larger case with more boards may lead to excessive internal wiring. The Lab in a Box concept could be improved with a more powerful power supply and support for larger boards, and better documentation will also be provided. BayLibre may also work on a professional-grade “Lab in a Box” that fits into a rack.
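As a rough illustration of those wiring and power constraints, the sketch below checks a set of DUTs against the 4A-per-wire-pair limit and per-rail budgets; the current figures and rail capacities are made-up placeholders, not BayLibre's actual numbers.

# name: (rail voltage in V, worst-case current draw in A) -- illustrative figures only
DUTS = {
    "renesas-rcar-m3":  (12, 2.0),
    "dragonboard-410c": (12, 1.5),
    "lepotato":         (5,  1.0),
    "beaglebone-black": (5,  0.5),
    "raspberrypi3":     (5,  2.5),
    "sabre-lite":       (5,  2.0),
}

MAX_CURRENT_PER_PAIR = 4.0          # A per pair of wires (limit mentioned above)
RAIL_BUDGET = {5: 20.0, 12: 30.0}   # A per ATX rail -- assumed, check your PSU label

def check_budget(duts):
    rail_load = {5: 0.0, 12: 0.0}
    for name, (volts, amps) in duts.items():
        if amps > MAX_CURRENT_PER_PAIR:
            print(f"{name}: {amps} A exceeds the {MAX_CURRENT_PER_PAIR} A wiring limit")
        rail_load[volts] += amps
    for volts, load in rail_load.items():
        status = "OK" if load <= RAIL_BUDGET[volts] else "OVER BUDGET"
        print(f"+{volts}V rail: {load:.1f} A of {RAIL_BUDGET[volts]:.1f} A ({status})")

check_budget(DUTS)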

Watch “Introducing the Lab in a Box Concept” by Patrick Titiano & Kevin Hilman, BayLibre for further details.

If you are short on time, you can also just read the presentation slides.

As a side note, all Embedded Linux Conference Europe 2017 videos have been uploaded to YouTube.

USBCEE Tiny-PAT Board Helps Testing USB-C Power Adapters (Crowdfunding)

September 13th, 2017 No comments

USB power delivery allows for up to 100W charging using 20V @ 5A through a USB Type-C port, and the specifications also mandate support for various voltages between 5V and 20V. However, some USB-C power adapters may not be fully compliant with the specifications, potentially risking damage to your device. The USBCEE Tiny-PAT board has been created to test such power adapters and make sure they are compliant with the USB PD 2.0/3.0 specifications.

Tiny-PAT board features and specifications:

  • Supported USB Spec Version – PD 2.0 / PD 3.0
  • Max Voltage – 24 V
  • Max Current – 5 A
  • Max Power – 100 W
  • USB Type-C receptacle
  • Misc – Fail and Pass LEDs, S4 mode button, through holes for VBUS & GND
  • Power Consumption – ~10 mA (may vary based on voltage)
  • Dimensions – 35 x 20 mm

By default, the board will test all power rules advertised by the power adapter, measure the voltage (VBUS), and show whether the test failed or passed with the LEDs on the board.

USB PD 3.0 Power Ratings, Voltages and Currents – Source: Texas Instruments
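For reference, each power rule advertised by the adapter corresponds to a Power Data Object (PDO) in the USB PD Source_Capabilities message. The sketch below decodes a Fixed Supply PDO and checks a measured VBUS against an assumed ±5% tolerance; it is a simplified illustration of the kind of check Tiny-PAT automates, not the board's actual firmware.

def decode_fixed_pdo(pdo):
    """Decode a 32-bit Fixed Supply PDO (USB PD): bits 31:30 must be 00,
    bits 19:10 hold the voltage in 50 mV units, bits 9:0 the max current
    in 10 mA units."""
    if (pdo >> 30) & 0x3 != 0:
        raise ValueError("not a Fixed Supply PDO")
    voltage_mv = ((pdo >> 10) & 0x3FF) * 50
    current_ma = (pdo & 0x3FF) * 10
    return voltage_mv, current_ma

def vbus_within_tolerance(measured_mv, nominal_mv, tolerance=0.05):
    """Pass/fail check of the measured VBUS against the advertised voltage,
    using a simple +/-5% tolerance (an assumption, not the exact PD limits)."""
    return abs(measured_mv - nominal_mv) <= nominal_mv * tolerance

# Example: a 9 V / 3 A rule encodes as (180 << 10) | 300 = 0x2D12C
volts_mv, amps_ma = decode_fixed_pdo(0x2D12C)
print(volts_mv, "mV,", amps_ma, "mA")          # 9000 mV, 3000 mA
print(vbus_within_tolerance(9120, volts_mv))   # True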

The S4 button is used to switch to manual mode, where you can step through each power rule and verify the voltage(s) with a multimeter, external load, or oscilloscope. In that mode, Tiny-PAT can also be used as a variable power supply: for example, you can select 5 V/3 A, 9 V/3 A, 15 V/3 A or 20 V/4.35 A with Apple's 87 W USB-C power adapter, or 5 V/3 A, 7 V/3 A, 8 V/3 A, 9 V/2.7 A, or 12 V/2 A with a Verizon USB charger. The company promises to release the schematics under an open license.

USBCEE has launched a Crowd Supply campaign to raise funds for mass production of the board. A pledge of $40 should get you a Tiny-PAT board shipped at the end of November. Shipping is free to the US, and adds $7 for the rest of the world.

Categories: Hardware, Video Tags: crowdsupply, power, qa, usb

Embedded Linux Conference & Open Source Summit Europe 2017 Schedule

August 27th, 2017 4 comments

The Embedded Linux Conference & IoT Summit 2017 took place in the US earlier this year in February, but a similar event, the Embedded Linux Conference & Open Source Summit Europe 2017, will take place on October 23 – 25 in Prague, Czech Republic, and the Linux Foundation has just published the schedule. It's always useful to find out what is being discussed at such events, even if you are not going to attend, so I went through the different sessions and composed my own virtual schedule with some of the ones I find most interesting.

Monday, October 23

  • 11:15 – 11:55 – An Introduction to SPI-NOR Subsystem – Vignesh Raghavendra, Texas Instruments India

Modern-day embedded systems have dedicated SPI controllers to support NOR flashes. They have many hardware-level features to increase the ease and efficiency of accessing SPI NOR flashes, and also support different SPI bus widths and speeds.

In order to support such advanced SPI NOR controllers, the SPI-NOR framework was introduced under Memory Technology Devices (MTD). This presentation aims to provide an overview of the SPI-NOR framework, the different types of NOR flashes supported (such as SPI/QSPI/OSPI), and the interaction with the SPI framework. It also provides an overview of how to write a new controller driver or add support for a new flash device.

The presentation then covers generic improvements done and proposed while working on improving QSPI performance on a TI SoC, challenges associated when using DMA with these controllers and other limitations of the framework.

  • 12:05 – 12:45 – Free and Open Source Software Tools for Making Open Source Hardware – Leon Anavi, Konsulko Group

The open source hardware movement is becoming more and more popular. But is it worth making open source hardware if it has been designed with expensive proprietary software? In this presentation, Leon Anavi will share his experience of using free and open source software to make high-quality, entirely open source devices: from designing the PCB with KiCAD, through making a case with OpenSCAD or FreeCAD, to slicing with Cura and 3D printing. The talk will also provide information about open source hardware licenses, getting-started guidelines, and tips for avoiding common pitfalls and mistakes. The challenges of prototyping and low-volume manufacturing with both SMT and THT will also be discussed.

  • 14:20 – 15:00 – Introduction to SoC+FPGA – Marek Vašut, DENX Software Engineering GmbH

In this talk, Marek introduces the increasingly popular single-chip SoC+FPGA solutions. At the beginning, the diverse chip offerings from multiple vendors are introduced, ranging from the smallest IoT-grade solutions all the way to large industrial-level chips with focus on their software support. Mainline U-Boot and Linux support for such chips is quite complete, and already deployed in production. Marek demonstrates how to load and operate the FPGA part in both U-Boot and Linux, which recently gained FPGA manager support. Yet to fully leverage the potential of the FPGA manager in combination with Device Tree (DT) Overlays, patches are still needed. Marek explains how the FPGA manager and the DT Overlays work, how they fit together and how to use them to obtain a great experience on SoC+FPGA, while pointing out various pitfalls.

  • 15:10 – 15:50 – Cheap Complex Cameras – Pavel Machek, DENX Software Engineering GmbH

Cameras in phones are different from webcams: their main purpose is to take high-resolution still pictures. Running preview in high resolution is not feasible, so a resolution switch is needed just before taking the final picture. There are currently no applications for still photography that work with the mainline kernel. (Pavel is working on… two, but both have some limitations). libv4l2 does its internal processing in 8-bit, which is not enough for digital photography. Cell phones have 10 to 12-bit sensors, and some DSLRs do 14-bit depth.

Differences do not end here. A cell phone camera can produce a reasonable picture, but it needs complex software support. Auto-exposure / auto-gain is a must for producing anything but completely black or completely white frames. Users expect auto-focus, and it is necessary for reasonable pictures in the macro range, requiring real-time processing.

  • 16:20 – 17:00 – Bluetooth Mesh with Zephyr OS and Linux – Johan Hedberg, Open Source Technology Center, Intel

Bluetooth Mesh is a new standard that opens a whole new wave of low-power wireless use cases. It extends the range of communication from a single peer-to-peer connection to a true mesh topology covering large areas, such as an entire building. This paves the way for both home and industrial automation applications. Typical home scenarios include things like controlling the lights in your apartment or adjusting the thermostat. Although Bluetooth 5 was released at the end of last year, Bluetooth Mesh can be implemented on any device supporting Bluetooth 4.0 or later. This means that we'll likely see very rapid market adoption of the feature.

The presentation will give an introduction to Bluetooth Mesh, covering how it works and what kind of features it provides. The talk will also give an overview of Bluetooth Mesh support in Zephyr OS and Linux and how to create wireless solutions with them.

  • 17:10 – 17:50 – printk() – The Most Useful Tool is Now Showing its Age – Steven Rostedt, VMware

printk() has been the tool for debugging the Linux kernel, and the display mechanism for Linux, for as long as Linux has been around. It is there from the kernel banner, the first thing one sees as the kernel comes to life, to the last message at shutdown. It's critical as people take pictures of a kernel oops to send to the kernel developers to fix a bug, or to display on social media when that oops happens on the monitor on the back of the airplane seat in front of you.

But printk() is not a trivial utility. It serves many functionalities and some of them can be conflicting. Today with Linux running on machines with hundreds of CPUs, printk() can actually be the cause of live locks. This talk will discuss all the issues that printk() has today, and some of the possible solutions that may be discussed at Kernel Summit.

  • 18:00 – 18:45 – BoF: Embedded Linux Size – Michael Opdenacker, Free Electrons

This “Birds of a Feather” session will start with a quick update on available resources and recent efforts to reduce the size of the Linux kernel and the filesystem it uses.

An ARM based system running the mainline kernel with about 3 MB of RAM will also be demonstrated. If you are interested in the size topic, please join this BoF and share your experience, the resources you have found and your ideas for further size reduction techniques!

Tuesday, October 24

  • 10:55 – 11:35 – Introducing the “Lab in a Box” Concept – Patrick Titiano & Kevin Hilman, BayLibre

Continuous Integration (CI) has been a hot topic for a long time. With the growing number of architectures and boards, it becomes impossible for maintainers to validate a patch on all configurations, making it harder and harder to keep the same quality level without leveraging CI and test automation. Recent initiatives like LAVA, KernelCI.org, Fuego, (…) started providing a first answer; however, the learning curve remains high, and the HW setup part is not covered.

BayLibre, already involved in KernelCI.org, decided, as part of the AGL project, to go one step further in CI automation and has developed a turnkey solution for developers and companies willing to instantiate a LAVA lab. Called “Lab in a Box”, it aims at simplifying the configuration of a board farm (HW, SW).

Motivations, challenges, benefits and results will be discussed, with a demo of a first “Lab in a Box” instantiation.

  • 11:45 – 12:25 – Protecting Your System from the Scum of the Universe – Gilad Ben-Yossef, Arm Holdings

Linux-based systems have a plethora of security-related mechanisms: DM-Crypt, DM-Verity, Secure Boot, the new TEE subsystem, FScrypt and IMA are just a few examples. This talk will describe these various mechanisms and provide a practical walkthrough of how to mix and match them and design them into a Linux-based embedded system in order to strengthen its resilience to various nefarious attacks, whether the system in question is a mobile phone, a tablet, a network-attached DVR, a router, or an IoT hub, in a way that makes maximum use of the sometimes limited hardware resources of such systems.

  • 14:05 – 14:45 – Open Source Neuroimaging: Developing a State-of-the-Art Brain Scanner with Linux and FPGAs – Danny Abukalam, Codethink

Neuroimaging is an established medical field which is helping us learn more about how the human brain, the most complex human organ, works. This talk aims to cover neuroimaging systems, from hobbyist to professional, and how open source has been used to build state-of-the-art systems. We'll have a look at the general problem area, why open source was a good fit, and some examples of solutions, including a commercial effort that we have been involved in bringing to market. Typically these solutions consist of specialist hardware, a bespoke software stack, and a suite to manage and process the vast amounts of data generated during the scan. Other points of interest include how we approached building a maintainable and upgradeable system from the outset. We'll also talk about future plans for neuroimaging and future ideas for hardware, and discuss areas lacking good open source solutions.

  • 14:55 – 15:35 – More Robust I2C Designs with a New Fault-Injection Driver – Wolfram Sang, Renesas

Writing code for certain error paths in I2C bus drivers is challenging because these errors usually don't happen on the bus, and special I2C bus testers are expensive. In this talk, a new GPIO-based driver will be presented which acts on the same bus as the bus master driver under inspection. A live demonstration will be given, as well as hints on how to handle bugs that might have been found. The scope and limitations of this driver will be discussed. Since what actually happens on the wires will also be analyzed, this talk also serves as a case study of how to snoop buses with only Free Software and Open Hardware (i.e. sigrok).

  • 16:05 – 16:45 – GStreamer for Tiny Devices – Olivier Crête, Collabora

GStreamer is a complete Open Source multimedia framework, and it includes hundreds of plugins, including modern formats like DASH, HLS or the first ever RTSP 2.0 implementation. The whole framework is almost 150MB on my computer, but what if you only have 5 megs of flash available? Is it a viable choice? Yes it is, and I will show you how.

Starting with simple tricks like only including the necessary plugins, all the way to statically compiling only the functions that are actually used to produce the smallest possible footprint.

  • 16:55 – 17:35 – Maintaining a Linux Kernel for 13 Years? You Must be Kidding Me. We Need at Least 30? – Agustin Benito Bethencourt, Codethink Ltd

Industrial-grade solutions have a life expectancy of 30+ years. Maintaining a Linux kernel for such a long time in the open has not been done. Many claim that it is not sustainable, but corporations that build power plants, railway systems, etc. are willing to tackle this challenge. This talk will describe the work done so far on the kernel maintenance and testing front at the CIP initiative.

During the talk it will be explained how we decide which parts of the kernel to cover – reducing the amount of work to be done and the risk of being unable to maintain the claimed support. The process of reviewing and backporting fixes that might be needed on an older branch will be briefly described. CIP is taking a different approach from many other projects when it comes to testing the kernel. The talk will go over it as well as the coming steps.

Wednesday, October 25

  • 11:05 – 11:45 – HDMI 4k Video: Lessons Learned – Hans Verkuil, Cisco Systems Norway

So you want to support HDMI 4k (3840×2160) video output and/or video capture for your new product? Then this is the presentation for you! I will describe the challenges involved in 4k video from the hardware level, the HDMI protocol level and up to the kernel driver level. Special attention will be given to what to watch out for when buying 4k capable equipment and accessories such as cables and adapters since it is a Wild, Wild West out there.

  • 11:55 – 12:35 – Linux Powered Autonomous Arctic Buoys – Satish Chetty, Hera Systems 

In my talk/presentation, I cover the technical and design challenges in developing an autonomous Linux-powered Arctic buoy. This system is a low cost, COTS-based, extreme/harsh environment, autonomous sensor data gathering platform. It measures albedo, weather, water temperature and other parameters. It runs on a custom embedded Linux and is optimized for efficient use of solar & battery power. It uses a variety of low cost, high accuracy/precision sensors and satellite/terrestrial wireless communications.

I talk about using Linux in this embedded environment, and how I address and solve various issues including building a custom kernel, Linux drivers, frame grabbing issues and results from cameras, limited power challenges, clock drifts due to low temperature, summer melt challenges, failure of sensors, intermittent communication issues and various other h/w & s/w challenges.

  • 14:15 – 14:55 – Linux Storage System Bottleneck for eMMC/UFS – Bean Huo & Zoltan Szubbocsev, Micron

The storage device is considered a bottleneck to system I/O performance. This thinking drives the need for faster storage device interfaces. Commonly used flash-based storage interfaces support high throughputs, e.g. eMMC 400MB/s, UFS 1GB/s. Traditionally, advanced embedded systems were focusing on CPU and memory speeds, and these outpaced advances in storage speed. In this presentation, we explore the parameters that impact I/O performance. We describe at a high level how Linux manages I/O requests coming from user space. Specifically, we look into system performance limitations in the Linux eMMC/UFS subsystem and expose bottlenecks caused by the software through Ftrace. We show existing challenges in getting maximum performance out of flash-based high-speed storage devices. With this presentation, we want to motivate future optimization work on the existing storage stack.

  • 15:05 – 15:45 – New GPIO Interface for User Space – Bartosz Golaszewski

Since Linux 4.8 the GPIO sysfs interface is deprecated. Due to its many drawbacks and bad design decisions, a new user space interface has been implemented in the form of the GPIO character device, which is now the preferred method of interaction with GPIOs that can't otherwise be serviced by a kernel driver. The character device brings in many new interesting features such as polling for line events, finding GPIO chips and lines by name, changing & reading the values of multiple lines with a single ioctl (one context switch), and many more. In this presentation, Bartosz will showcase the new features of the GPIO UAPI, discuss the current state of libgpiod (user space tools for using the character device) and tell you why it's beneficial to switch to the new interface.
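As a rough illustration of the character device workflow described in this talk, here is a minimal sketch using the libgpiod Python bindings; the chip name, line offset and consumer label are placeholders, and the exact binding API may differ between libgpiod versions.

import gpiod

# Open a GPIO controller by name and grab one line by offset.
chip = gpiod.Chip("gpiochip0")
line = chip.get_line(24)

# Request the line as an output; the consumer string shows up in
# tools like gpioinfo so other users can see who owns the line.
line.request(consumer="lab-demo", type=gpiod.LINE_REQ_DIR_OUT)

line.set_value(1)          # drive the line high
print(line.get_value())    # read it back
line.release()             # hand the line back to the kernel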

  • 16:15 – 16:55 – Replace Your Exploit-Ridden Firmware with Linux – Ronald Minnich, Google

With the WikiLeaks release of the vault7 material, the security of the UEFI (Unified Extensible Firmware Interface) firmware used in most PCs and laptops is once again a concern. UEFI is a proprietary and closed-source operating system, with a codebase almost as large as the Linux kernel, that runs when the system is powered on and continues to run after it boots the OS (hence its designation as a “Ring -2 hypervisor”). It is a great place to hide exploits since it never stops running, and these exploits are undetectable by kernels and programs.

Our answer to this is NERF (Non-Extensible Reduced Firmware), an open source software system developed at Google to replace almost all of UEFI firmware with a tiny Linux kernel and initramfs. The initramfs file system contains an init and command line utilities from the u-root project, which are written in the Go language.

  • 17:05 – 17:45 – Unikernelized Real Time Linux & IoT – Tiejun Chen, VMware

Unikernel is a novel software technology that links an application with an OS in the form of a library and packages them into a specialized image that facilitates direct deployment on a hypervisor. But why have existing unikernels not yet gained broad popularity? I'll talk about the challenges unikernels are facing, and discuss whether and how we could turn Linux into a unikernel. IoT could be a valuable use case, because a smaller size and footprint are good for resource-constrained IoT platforms. Existing unikernels are not designed to address IoT characteristics like power consumption and real-time requirements, and they also don't support versatile architectures; most just focus on x86/ARM. A paravirtualized unikernelized Linux, especially a unikernelized real-time Linux, could really make unikernels succeed.


If you’d like to attend the real thing, you’ll need to register and pay a registration fee:

  • Early Registration Fee: US$800 (through August 27, 2017)
  • Standard Registration Fee: US$950 (August 28, 2017 – September 17, 2017)
  • Late Registration Fee: US$1100 (September 18, 2017 – Event)
  • Academic Registration Fee: US$200 (Student/Faculty attendees will be required to show a valid student/faculty ID at registration.)
  • Hobbyist Registration Fee: US$200 (only if you are paying for yourself to attend this event and are currently active in the community)

There's also another option with the Hall Pass Registration ($150) if you just want to network or visit with sponsors onsite, but do not plan to attend any sessions or keynotes.

Realtek RTL8710AF (PADI IoT Stamp) vs Espressif ESP8266 (ESP-07) WiFi RF Performance Comparison

October 27th, 2016 4 comments

After I posted about the PADI IoT Stamp kit based on the RTL8710AF ARM Cortex-M3 WiSoC yesterday, I was soon asked whether I could compare its RF performance against ESP8266 modules like ESP-12. I don't have the equipment to do this kind of test, beyond something simple like checking range with the WiFi Analyzer app, but I remembered that Pine64 told me they had some comparison data a while ago, and they agreed to share their results.

The test setup comprises a LitePoint IQ2010 multi-communication connectivity test system and PC software, as well as the devices under test (DUT): the PADI IoT Stamp (version with u.FL antenna connector) and an ESP-07 ESP8266 module, since a u.FL connector is required to connect to the test system.

They've tested 802.11b, 802.11g, and 802.11n, but for IoT projects 802.11b is the most important, as long range usually matters more than data rate. The test results below are based on CH1 input data with 1dBm compensation.

Here are the results for ESP8266…


ESP8266 802.11b Data, Spectral Mask and Constellation Diagram

… and the results for RTL8710 using an 802.11b connection.


RTL8710AF 802.11b Spectral Mask and Constellation Diagram

The tables show peak and average power, LO leakage, EVM (error vector magnitude), frequency error, and other parameters. The spectral mask and constellation diagram are also shown for each case. If you've never studied or worked with RF signals, it all looks quite complicated, but you can get some insights by reading the Practical Manufacturing Testing of 802.11 OFDM Wireless Devices white paper.

A Spectral Mask describes the distribution of signal power across each channel. When transmitting in a 20 MHz channel, the transmitted spectrum must have a 0 dBr bandwidth not exceeding 18 MHz, –20 dBr at 11 MHz frequency offset, –28 dBr at 20 MHz frequency offset, and the maximum of –45 dBr and –53 dBm/MHz at 30 MHz frequency offset and above.
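As a rough sketch of how such a transmit mask can be checked against measured data, the snippet below interpolates between the 20 MHz OFDM mask breakpoints quoted above and flags any measurement that sits above the limit; the measurement points are made up for illustration.

# Breakpoints of the 20 MHz OFDM transmit mask, as (offset in MHz, limit in dBr).
# The flat -45 dBr tail ignores the absolute -53 dBm/MHz alternative for simplicity.
MASK = [(9, 0), (11, -20), (20, -28), (30, -45)]

def mask_limit_dbr(offset_mhz):
    """Return the mask limit at a given frequency offset, interpolating
    linearly between breakpoints (offsets are taken as absolute values)."""
    f = abs(offset_mhz)
    if f <= MASK[0][0]:
        return MASK[0][1]
    for (f1, l1), (f2, l2) in zip(MASK, MASK[1:]):
        if f <= f2:
            return l1 + (l2 - l1) * (f - f1) / (f2 - f1)
    return MASK[-1][1]

# Made-up measurement: list of (offset in MHz, measured power in dBr).
measured = [(0, -1.2), (11, -22.5), (15, -24.0), (25, -40.1)]
for offset, power in measured:
    limit = mask_limit_dbr(offset)
    status = "OK" if power <= limit else "FAIL"
    print(f"{offset:>5.1f} MHz: {power:6.1f} dBr (limit {limit:6.1f} dBr) {status}")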

The constellation diagram is a representation of a signal modulated by a digital modulation scheme. It is useful for identifying some types of signal corruption. The EVM is a measure of the deviation of the actual constellation points from the ideal, error-free locations in the constellation diagram (in % RMS or dB), and you want to keep it as small as possible.
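To make the EVM definition concrete, here is a small sketch computing RMS EVM from measured versus ideal constellation points; normalization conventions vary between standards, so treat this as an illustration rather than the exact formula used by the IQ2010.

import numpy as np

def evm_rms(measured, ideal):
    """RMS EVM: error power relative to the average ideal symbol power.
    Returns (percent, dB)."""
    measured = np.asarray(measured, dtype=complex)
    ideal = np.asarray(ideal, dtype=complex)
    err_power = np.mean(np.abs(measured - ideal) ** 2)
    ref_power = np.mean(np.abs(ideal) ** 2)
    ratio = np.sqrt(err_power / ref_power)
    return 100 * ratio, 20 * np.log10(ratio)

# Made-up QPSK example: four ideal symbols and slightly distorted measurements.
ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
measured = [1.05 + 0.97j, -0.92 + 1.04j, -1.03 - 0.95j, 0.98 - 1.08j]
percent, db = evm_rms(measured, ideal)
print(f"EVM: {percent:.2f} % RMS ({db:.1f} dB)")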

In both diagrams, the signal appears cleaner on the PADI IoT Stamp, while the ESP8266 module shows more distortion. The diagrams are not quite clear enough to check the spectral mask values. I'm sure we'll get some more feedback in the comments section.

If you are interested in 802.11g and 802.11n results, you can access the rest of the report.

Test Widevine & PlayReady DRM, HDCP 1.x/2.x, 4K VP9 and H.265 in Android with Exoplayer App

October 21st, 2016 2 comments

I first heard about ExoPlayer in an Android TV overview presentation at Linaro Connect 2014, but I never really looked into it. The source code is available on Github, and I've been given ExoPlayer.apk, as it can be used to test UHD H.265 support, HDCP 1.x and HDCP 2.x compatibility, PlayReady & Widevine DRM using different formats, and so on.


ExoPlayer Demo – Click to Enlarge

So I installed it on the Beelink GT1 Android TV box which I'm currently reviewing; it only includes basic Widevine Level 3 DRM, and certainly does not support HDCP features.

There are 9 sections in the app to test various videos and DRM schemes:

  • YouTube Dash
  • Widevine Dash Policy Tests (GTS) – Widevine with or without HDCP, with or without secure video path
  • Widevine HDCP Capabilities Tests – No HDCP, HDCP 1.0, HDCP 1.1, HDCP 2.0, HDCP 2.1, HDCP 2.2, and HDCP no digital output
  • Widevine Dash MP4, H264 – Various resolutions (SD, HD, UHD) for clear or secure videos
  • Widevine Dash WebM, VP9
  • Widevine Dash MP4, H.265
  • SmoothStreaming – Super speed or Super speed (PlayReady)
  • HLS – Apple master playlist, Apple TS media playlist, Apple ID3 metadata, etc…
  • Misc – Various video & audio formats and codecs (MKV, FLV, Google Play videos…)

I ran a few of the tests, and the ones without HDCP or secure data path requirements work just fine. Widevine secure SD (MP4, H.265) also worked, but as expected, Widevine secure HD and UHD did not, only showing a black screen with audio, since Level 1 DRM is not supported by my device.

Then I switched to the Widevine HDCP 2.2 test, and to my surprise the video could play… I later found out that HDCP does not kick in immediately: if I let the video play longer, it stops after about 9 seconds, because Beelink did not get an HDCP 2.2 license for their box.

AFAICT, there's no automated testing, and each test must be started manually. But it's still useful if you are interested in the copy protection schemes supported by your Android device.

I'll complete the post with something unrelated to ExoPlayer, but still interesting for checking HDCP support if you own an Amlogic device, as there are some commands to check the status of HDCP:

  • Show whether the TV is currently working with HDCP 2.x or HDCP 1.x:

22 = HDCP2, 11 = HDCP1, off = HDCP not enabled right now

  • Check HDCP authentication status:

1=authenticated ok, 0 = failed to authenticate.

  • Check the HDCP keys of the device:

00 = no HDCP key, 14 = has HDCP1_key, 22 = has HDCP2_key

  • Check TV HDCP version

22 = TV supports HDCP2, 14 = TV supports HDCP1

  • Disable HDCP protection:

GradientOne Brings Oscilloscopes, Spectrum Analyzers, Frequency Generators… to the Cloud

July 29th, 2016 No comments

Nowadays, product development often involves working with teams spread across the world, with, for example, hardware development in the US, software development in India, and manufacturing in China. Resolving issues may require several members of the teams to gather data and work together, and besides the distance issue, you have to handle different timezones too. GradientOne may help facilitate hardware and firmware debugging by connecting test equipment such as oscilloscopes, spectrum analyzers, frequency generators and others to the cloud, so that data can easily be shared, and any member of the team can control the equipment remotely, even automating measurements if needed. It could also be useful to field application engineers who bring portable equipment to customer premises, with another engineer investigating issues remotely.

There are two ways to integrate equipment with GradientOne:

  • Web user interface – control instruments and set parameters (e.g. trigger, acquisition type, etc.) directly from the browser.

The company already did the hard work, and currently supports Tektronix MDO3000 series oscilloscopes + function generator, Tektronix MDO4000/MSO2000/DPO2000 & DPO4000 series oscilloscopes, as well as Agilent/Keysight U2000 power meters. More support is planned for Agilent 859xA/B series spectrum analyzers, Agilent 8340/1 A/B RF signal generators, Chroma 62000P series power supplies, and Agilent 34401A digital multimeters.


Customers will benefit from data storage, organization, search, reporting, collaboration, signal replay, etc… through the interface.

  • API to work with any existing test script, supporting sending test data and instrument configuration to the GradientOne cloud, as well as retrieving the data/configuration.

The HTTP(S) & JSON API is useful for adding instruments not yet supported by the web UI, and for customers who want to keep using their existing instrument scripts but securely (OAuth 2.0 authentication) store and retrieve data from the GradientOne cloud.
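I don't have access to GradientOne's API documentation, so the snippet below is only a generic illustration of what pushing a measurement to such a cloud service over HTTPS/JSON with an OAuth 2.0 bearer token might look like; the endpoint URL, token and field names are invented for the example.

import requests

# Invented endpoint and token for illustration; GradientOne's real API
# paths, payload schema and authentication flow may differ.
API_URL = "https://cloud.example.com/api/v1/measurements"
ACCESS_TOKEN = "oauth2-access-token"

payload = {
    "instrument": "Tektronix MDO3054",
    "test_name": "power_rail_ripple",
    "config": {"trigger": "edge", "acquisition": "average", "channels": [1, 2]},
    "waveform": [0.01, 0.02, 0.015, 0.018],   # trimmed sample data
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    timeout=30,
)
response.raise_for_status()
print("Stored measurement id:", response.json().get("id"))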

The promo video below quickly shows some of the features of GradientOne service.

The company also offers on-site or online (Google Hangouts) live demos to interested companies. More details can be found on the GradientOne website.

Antutu Video Tester Automatically Tests Video and Audio Codecs & Playback Quality in Android

December 1st, 2014 11 comments

When I read a review of MK808B Plus this morning, I noticed the reviewer used Antutu Video Tester to evaluate the video/audio performance of the device. Somehow I had never noticed it before, and the Antutu developers claim it can not only check whether video and audio codecs are supported, but also give an appraisal of video quality:

AnTuTu Video Test is a professional tool for testing video playback capability of Android Smart TV, set-top boxes and other devices. It integrates a few featured videos and testing algorithms that can help users judge the playback performance of the devices clearly. AnTuTu video test can not only detect the video playback formats devices support, but also can test the playback quality of devices.

So I decided to try it out on the Open Hour Chameleon Android media player based on the Rockchip RK3288 processor. The first time you click on Video Test, it downloads the video samples (155 MB), all very short files based on the Sintel video from the Blender Foundation, with different resolutions, video & audio codecs. Once the download is complete, the test runs automatically and lasts just 2 or 3 minutes, so it's much faster than manual testing.

Let’s check out the results and list of files.


Video Tester Results (Click to Enlarge)

So they test a bunch of videos in 1080p and 2160p resolutions with the most common codecs, but it's far from extensive. Based on this table, the only problem with the box is that it cannot play DTS or AC-3 files with the (stock?) video player used in the tester, so overall it does not look that bad. But I did notice some pixelated 1080p videos and/or skipped frames, and Chameleon got just 263 points, which is rather low compared to some other television sets and TV boxes, so Antutu Video Tester does indeed seem to take video playback “quality” into account, as advertised.

Himedia Q5 with a HiSilicon SoC (three models are available), Letv C1S with a dual-core processor @ 1.5 GHz / Mali-400MP2 GPU, and Kaiboer F5 featuring the Mstar MSO9180 SoC are the top three TV boxes based on this test, but unfortunately these are mostly reserved for the Chinese market.

Have you tried it on your Android media player? What's your score?

AllWinner A80 Octa Core big.LITTLE Processor CPU Usage Under Various Loads in Android 4.4 (Video)

November 23rd, 2014 4 comments

Allwinner A80 is one of the few octa-core processors featuring ARM's big.LITTLE technology currently available on the market. The processor comes with four ARM Cortex-A15 (big) cores and four ARM Cortex-A7 (LITTLE) cores, and tasks are scheduled onto either cluster depending on the load, in order to optimize power consumption on mobile devices. However, earlier big.LITTLE processors like Samsung Exynos 5410 had some serious limitations, as they only supported “cluster migration”, meaning you could only use the Cortex-A7 cluster or the Cortex-A15 cluster at any given time, so Exynos 5410 could only make use of four cores at most due to hardware limitations. There also used to be two software implementations: In-Kernel Switching (IKS) and Global Task Scheduling (GTS). The former can only handle one type of core at a time, while the latter, which I believe is now used in all new devices, can handle any combination of cores, so an octa-core big.LITTLE SoC can indeed make use of all eight cores.

To make sure this was the case with the Allwinner A80 SoC, I did a little test using the PVRMonitor app on the Tronsmart Draco AW80 mini PC, in order to check that all eight cores can be used, and to see which cores, and how many, are used under various loads such as multi-tab web browsing and gaming. The scheduler was set to Performance with the No-frills CPU Control app.

I ran Antutu, the Android stock browser with multiple tabs open, and the Beach Buggy Blitz 3D racing game in the video above. The takeaway from this short test is that Allwinner A80 can run its eight cores simultaneously, but in typical use it's rare to see more than four cores used at once. I forgot to include video playback in the video, so I tried to play 4K videos and H.265 videos with Kodi 14: normally (with hardware video decoding) only two Cortex-A15 cores are used (around 30% per core), and when software video decoding is needed (H.265), at most four cores are used, so it looks like Kodi has not yet been optimized to make full use of octa-core systems, at least on Allwinner A80.

So in Android mini PCs, there's usually very little gain from an octa-core processor over a quad-core processor, unless you run apps that can make use of all cores, such as video transcoding apps, or you want to convert it into a Linux mini PC to compile software or run a server.