Archive

Posts Tagged ‘standard’

USB 3.2 To Bring 20 Gbps Transfer Rate to Existing USB Type-C Cables

July 28th, 2017 9 comments

The USB 3.0 Promoter Group has recently announced the upcoming USB 3.2 specification that defines multi-lane operation for compatible hosts and devices, hence doubling the maximum theoretical bandwidth to 20 Gbps.

Only USB Type-C cables were designed to support multi-lane operation, so other types of USB cables will not support USB 3.2, and will stay limited to 10 Gbps. USB 3.2 will allow for two lanes of 5 Gbps, or two lanes of 10 Gbps operation, so if you want to achieve a 20 Gbps transfer rate, you’ll need a USB Type-C cable certified for SuperSpeed USB 10 Gbps, besides hosts and devices that comply with USB 3.2.

Layout of the pins in a Type-C connector

Anandtech explains that two high-speed data paths are available in the USB Type-C connector, as shown above, which are also used for alternate modes. The USB 3.1 standard uses one of those paths for 10 Gbps transfers and the other for an alternate mode, but USB 3.2 allows both paths to be used for 10 Gbps transfers, hence achieving up to 20 Gbps. If both paths are used for alternate modes, transfers are limited to USB 2.0 speeds (up to 480 Mbps).

That’s a good development, but it may further confuse consumers, as most companies do not clearly explain the capabilities of their USB Type-C interfaces and/or cables. USB Type-C cables can be made for USB 2.0 (480 Mbps), USB 3.0 / USB 3.1 Gen 1 (5 Gbps), or USB 3.1 Gen 2 (10 Gbps), and only the latter will support USB 3.2 at 20 Gbps.

Categories: Hardware Tags: standard, usb

Bluetooth Low Energy Now Supports Mesh Networking for the Internet of Things

July 19th, 2017 8 comments

The Bluetooth Special Interest Group (SIG) has announced support for mesh networking for BLE, which enables many-to-many (m:m) device communications, and is optimized for large scale device networks for building automation, sensor networks, asset tracking solutions, and other IoT solutions where up to thousands of devices need to reliably and securely communicate with one another. The standard actually specifies 32,767 unicast addresses per mesh network, but that number of nodes is not achievable right now.
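The 32,767 figure comes from how the 16-bit mesh address space is partitioned: unicast addresses are the non-zero values with the top bit clear. Here is a small Python sketch of that partitioning, with the ranges assumed from my reading of the Mesh Profile specification:

```python
def classify_mesh_address(addr: int) -> str:
    """Classify a 16-bit Bluetooth mesh address.
    Ranges assumed from the Mesh Profile 1.0 specification:
    0x0000 unassigned, 0x0001-0x7FFF unicast,
    0x8000-0xBFFF virtual, 0xC000-0xFFFF group."""
    if not 0 <= addr <= 0xFFFF:
        raise ValueError("mesh addresses are 16-bit")
    if addr == 0x0000:
        return "unassigned"
    if addr <= 0x7FFF:   # top bit clear: unicast
        return "unicast"
    if addr <= 0xBFFF:   # 0b10...: virtual address (label UUID hash)
        return "virtual"
    return "group"       # 0b11...: group address

# 0x0001..0x7FFF gives the 32,767 unicast addresses per network
print(len(range(0x0001, 0x7FFF + 1)))  # 32767
```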

Mesh networking works with Bluetooth Low Energy and is compatible with version 4.0 and higher of the specifications. It requires SDK support for the GAP Broadcaster and Observer roles to both advertise and scan for advertising packets, and the FAQ claims Mesh Networking does not require extra power, and the devices only need to wake up at least once every four days or when they have data to transmit. Mobile apps connecting to mesh networking products will use the Bluetooth mesh proxy protocol implemented on top of BLE GAP and GATT APIs.

Bluetooth Mesh Control Model – Server and Client models are also available

You can access the various parts of the Mesh Networking standard, including the Mesh Profile specification 1.0, Mesh Model specification 1.0, and Mesh Device Properties 1.0, on the Bluetooth website.

The Bluetooth SIG expects commercial products with Bluetooth mesh networking technology to become available later this year. Qualcomm – who purchased CSR – announced mesh networking support for their QCA4020 and QCA4040 BLE chips, with samples today and commercial availability in September 2017, Nordic Semi has released a Mesh SDK, and so has Silicon Labs. Since mesh networking does not appear to require hardware modifications, all companies providing BLE solutions should be able to offer it.

Thanks to Crashoverride for the tip.

USB Type-C to HDMI Cables Coming Soon thanks to HDMI Alt Mode for USB Type-C

June 29th, 2017 1 comment

Some devices already support video output over a USB Type-C connector, but they normally rely on DisplayPort over USB Type-C, so you’d either need a monitor that supports DisplayPort, or some USB Type-C to HDMI converter. A DisplayLink dock is another solution, but again it converts the video and audio signals. Soon, however, you’ll be able to use a simple USB Type-C to HDMI cable between a capable device (camera, phone, computer, TV box…) and any HDMI TV or monitor.

This is being made possible thanks to HDMI Alt Mode for USB Type-C, which supports all HDMI 1.4b features including:

  • Resolutions up to 4K (@ 30 Hz)
  • Surround sound
  • Audio Return Channel (ARC)
  • 3D (4K and HD)
  • HDMI Ethernet Channel (HEC)
  • Consumer Electronic Control (CEC)
  • Deep Color, x.v.Color, and content types
  • High Bandwidth Digital Content Protection (HDCP 1.4 and HDCP 2.2)

There’s no video or audio conversion inside the cable, but there’s still a small micro-controller to handle the messaging that negotiates which alt mode to use, which means the source device will have to specifically support the new standard.

Charbax caught up with a representative of HDMI Licensing Administrator, Inc. demonstrating a USB-C to HDMI cable with a 2-in-1 laptop connected to an HDMI monitor, as well as a camera prototype getting both an HDMI signal with CEC support and power (USB-PD) over a single cable.


The new specification is good news, and we should expect capable devices later this year. We’ll just have to hope manufacturers get serious with the logos and descriptions of the features of their USB Type-C connectors, as there are now so many optional features that it could end up being really confusing to end users. In case you wonder why HDMI 2.0b, with features like 4K @ 60 Hz and HDR, is not supported, the FAQ explains that “the HDMI Forum is responsible for the HDMI 2.0b specification and they have not made any public statements regarding the HDMI Alt Mode for the HDMI 2.0b spec”.

Categories: Hardware, Video Tags: camera, hdmi, standard, usb

Samsung & Amazon Introduce HDR10+ Standard with Dynamic Metadata & Tone Mapping

April 20th, 2017 7 comments

Most recent 4K Ultra HD televisions support high dynamic range (HDR) through standards such as HDR10, Dolby Vision, or Hybrid Log-Gamma (HLG). Samsung and Amazon have jointly introduced an update to HDR10 with HDR10+ that adds dynamic tone mapping & metadata.

The companies describe the issue with HDR10’s static metadata as follows:

The current HDR10 standard utilizes static metadata that does not change during playback despite scene specific brightness levels. As a result, image quality may not be optimal in some scenes. For example, when a movie’s overall color scheme is very bright but has a few scenes filmed in relatively dim lighting, those scenes will appear significantly darker than what was originally envisioned by the director.

HDR10+ will be able to adjust metadata for each scene, and even for each frame, hence solving the issue of darker scenes. If you already own a Samsung TV with HDR10, it’s not outdated, as all of Samsung’s 2017 UHD TVs already support HDR10+, and their 2016 UHD TVs will support HDR10+ through a firmware update.
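To see why per-scene metadata helps dark scenes, here is a toy Python illustration. It uses a deliberately simplified linear scaling, not the actual SMPTE transfer curves or the HDR10+ metadata format; all the numbers are made up for the example:

```python
def tone_map(nits, peak_content, peak_display=500.0):
    """Toy tone mapping: scale content luminance so its stated peak
    fits the display's peak brightness (linear model, for illustration)."""
    return min(nits, peak_content) * (peak_display / peak_content)

movie_max = 4000.0             # whole-movie peak (static metadata, e.g. MaxCLL)
dim_scene = [5.0, 20.0, 80.0]  # luminance samples from a dark scene, in nits

static = [tone_map(n, movie_max) for n in dim_scene]
dynamic = [tone_map(n, max(dim_scene)) for n in dim_scene]

print(static)   # [0.625, 2.5, 10.0]  -> dark scene crushed toward black
print(dynamic)  # [31.25, 125.0, 500.0] -> scene mapped to the display's range
```

With static metadata the dark scene is scaled by the bright movie’s peak and ends up nearly black; with per-scene metadata the same frames use the display’s full range, which is essentially the problem described in the quote above.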

Amazon Video will be the first streaming service to deliver HDR10+ content, and Samsung has also collaborated with other companies to integrate HDR10+ into products such as Colorfront’s Transkoder for post-production mastering, and MulticoreWare’s x265 video encoder.

HDR10 – and HDR10+ – is also said to be an open standard, but I could not find the specifications online, and only managed to find that the HDR10 Media Profile must support EOTF SMPTE ST 2084, 4:2:0 color sub-sampling, 10-bit color depth, ITU-R BT.2020 color primaries, and SMPTE ST 2086, MaxFALL, and MaxCLL metadata defined in the CTA 861.3-A standard (free preview), which you can purchase for $67. There must be some sort of CTA standard for HDR dynamic metadata extensions for HDR10+, but I could not find anything [Update: Maybe SMPTE ST 2094-20-2016?]

Samsung showcased a static vs dynamic tone mapping demo at NAB 2016 last year, but it’s quite hard to see any differences in the video.

Categories: Hardware Tags: amazon, hdr, HDR10, samsung, standard

MIPI I3C Sensor Interface is a Faster, Better, Backward Compatible Update to I2C Protocol

January 11th, 2017 6 comments

I2C (Inter-Integrated Circuit) is one of the most commonly used serial buses for interfacing sensors and other chips, and uses two signals (clock and data) to control up to 128 chips thanks to its 7-bit address scheme. After announcing it was working on a new I3C standard in 2014, the MIPI Alliance has now formally introduced the MIPI I3C (Improved Inter Integrated Circuit) Standardized Sensor Interface, a backward compatible update to I2C with lower power consumption and a higher bitrate, allowing it to be used for applications typically relying on SPI too.
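As a refresher on that 7-bit address scheme, the first byte of every I2C transaction packs the device address and a read/write flag, which is where the 128-device limit comes from (in practice fewer, since the I2C specification reserves some address ranges). A quick Python sketch; the 0x48 address is just a common example:

```python
READ, WRITE = 1, 0

def i2c_first_byte(address: int, rw: int) -> int:
    """Build the first byte of an I2C transaction:
    the 7-bit address shifted left, with the R/W flag in bit 0."""
    if not 0 <= address <= 0x7F:
        raise ValueError("I2C addresses are 7-bit (0x00-0x7F)")
    return (address << 1) | rw

# Reading from, then writing to, a device at address 0x48:
print(hex(i2c_first_byte(0x48, READ)))   # 0x91
print(hex(i2c_first_byte(0x48, WRITE)))  # 0x90

# 7 address bits -> at most 128 distinct addresses on the bus
print(2 ** 7)  # 128
```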

I3C offers four data transfer modes that, on a maximum base clock of 12.5 MHz, provide a raw bitrate of 12.5 Mbps in the baseline SDR default mode, and 25, 27.5, and 39.5 Mbps respectively in the HDR modes. After excluding transaction control bytes, the effective data bitrates achieved are 11.1, 20, 23.5, and 33.3 Mbps.

MIPI I3C vs I2C Energy Consumption and Bitrate – Click to Enlarge

The MIPI Alliance has also provided a table comparing I3C, I2C, and SPI features, advantages, and disadvantages.

Number of Lines
  • MIPI I3C: 2-wire
  • I2C: 2-wire (plus separate wires for each required interrupt signal)
  • SPI: 4-wire (plus separate wires for each required interrupt signal)

Effective Data Bitrate
  • MIPI I3C: 33.3 Mbps max at 12.5 MHz (typically 10.6 Mbps at 12 MHz SDR)
  • I2C: 3 Mbps max at 3.4 MHz (Hs); 0.8 Mbps max at 1 MHz (Fm+); 0.35 Mbps max at 400 kHz (Fm)
  • SPI: approx. 60 Mbps max at 60 MHz for conventional implementations (typically 10 Mbps at 10 MHz)

MIPI I3C advantages
  • Only two signal lines
  • Legacy I2C devices co-exist on the same bus (with some limitations)
  • Dynamic addressing, with support for static addressing for legacy I2C devices
  • I2C-like data rate messaging (SDR)
  • Optional high data rate messaging modes (HDR)
  • Multi-drop capability and dynamic addressing avoid collisions
  • Multi-master capability
  • In-band interrupt support
  • Hot-join support
  • A clear master ownership and handover mechanism is defined
  • In-band common command codes (CCC) support

I2C advantages
  • Only two signal lines
  • Flexible data transmission rates
  • Each device on the bus is independently addressable
  • Devices have a simple master/slave relationship
  • Simple implementation
  • Widely adopted in sensor applications and beyond
  • Supports multi-master and multi-drop capabilities

SPI advantages
  • Full duplex communication
  • Push-pull drivers
  • Good signal integrity and high speed below 20 MHz (higher speeds are challenging)
  • Higher throughput than I2C and SMBus
  • Not limited to 8-bit words
  • Arbitrary choice of message size, content, and purpose
  • Simple hardware interfacing
  • Lower power than I2C
  • No arbitration or associated failure modes
  • Slaves use the master’s clock
  • Slaves do not need a unique address
  • Not limited by a standard to any maximum clock speed (can vary between SPI devices)

MIPI I3C disadvantages
  • Only 7 bits are available for device addressing
  • Slower than SPI (i.e. 20 Mbps)
  • New standard, adoption needs to be proven
  • Limited number of devices on a bus, to around a dozen

I2C disadvantages
  • Only 7 bits (or 10 bits) are available for static device addressing
  • Limited communication speeds, and many devices do not support the higher speeds
  • Slaves can hang the bus, requiring a system restart
  • Slower devices can delay the operation of faster devices
  • Uses more power than SPI
  • Limited number of devices on a bus, to around a dozen
  • No clear master ownership and handover mechanism
  • Requires separate support signals for interrupts

SPI disadvantages
  • Needs more pins than I2C / MIPI I3C
  • Needs a dedicated slave select (SS) pin per slave
  • No in-band addressing
  • No slave hardware flow control
  • No hardware slave acknowledgment
  • Supports only one master device
  • No error-checking protocol is defined
  • No formal standard, so validating conformance is not possible
  • Does not support hot swapping
  • Requires separate support signals for interrupts

You’ll find more technical details by downloading the MIPI I3C specifications and/or whitepaper (free email registration required). Note that only MIPI members have access to the complete specifications.

Via Electronics Weekly

Categories: Hardware Tags: i3c, mipi, sensor, standard

Adding Plus Sign and Tag to Email Address May Help Identify Source of (Spam/Junk) Emails

November 27th, 2016 15 comments

I’ve noticed several commenters using email addresses formatted as [email protected] or [email protected] while posting comments on the CNX Software blog. I just thought they were using specific email accounts or some forwarding technique to receive emails, and did not investigate further, but by chance I came across the explanation on reddit this morning:

It’s just another character that can be in an email address. For example, [email protected], [email protected], [email protected], and [email protected] are all completely different email addresses.

However, Gmail will ignore a + and everything after it in the username portion of an email address, so [email protected], [email protected], and [email protected] will all go to [email protected]‘s inbox. This is acceptable because Google does not allow + in its login names. Many people use this property to identify the source of an email.

So I could not resist trying it out by sending myself an email with +source1 added to my username, and I did receive the email in my inbox as if I had not added the plus sign and “source1” tag.

email-address-plus-sign

I’m using Gmail for cnx-software.com emails, but I also tried with Hotmail, and it worked too. Another reddit commenter mentioned that it’s actually part of the RFC 5233 (sub-addressing) standard, but not all email providers support it.
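The normalization these providers apply is easy to sketch in code. This is a simplified model: Gmail also ignores dots in the username, which I’m not handling here, and the addresses below are just placeholders:

```python
def canonical_address(address: str) -> str:
    """Strip a '+tag' suffix from the local part of an email address,
    the way Gmail routes mail (sketch; behavior varies by provider)."""
    local, _, domain = address.partition("@")
    base, _, _tag = local.partition("+")  # drop '+' and everything after it
    return f"{base}@{domain}"

for addr in ["username+source1@gmail.com",
             "username+whatever@gmail.com",
             "username@gmail.com"]:
    print(canonical_address(addr))  # all three print username@gmail.com
```

This is also why the trick is not bulletproof: a spammer can run the exact same three lines over a stolen address list to erase the tags.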

This can be used to trace the source of an email. For example, if you’ve only ever commented on this blog with “[email protected]”, and some day you receive an email titled “Nose Enlargement Program” sent to that exact address, that will mean either that the whole purpose of the CNX Software blog was always to gather email addresses for nefarious purposes, or that the blog was somehow hacked and others took the opportunity. It’s not 100% reliable though, as spammers who want to hide their source could easily remove any “+tag” string from their email database(s).

Categories: Testing Tags: standard

Wi-Fi CERTIFIED ac Wave 2 Products Support MU-MIMO, 160 MHz Channels, and More

July 5th, 2016 1 comment

802.11ac WiFi is now found in many routers and devices, and the Wi-Fi alliance has so far certified close to 3,000 “Wi-Fi CERTIFIED ac” products. I understand that certification is not mandatory, but if you want to make sure a device works well, the certification at least means the devices have been tested for interoperability, security and application specific protocols, and found to work in a satisfactory manner.

WiFi_Certified_AC_Wave_2

Now the Wi-Fi alliance has announced Wi-Fi CERTIFIED ac Wave 2 certification program with the following new requirements:

  • MU-MIMO (Multi-User Multiple Input Multiple Output) support, in order to send data to multiple devices at once and improve overall network efficiency and throughput
  • 160 MHz channel support (not only 80 MHz), potentially doubling transmission speeds
  • Four spatial streams instead of just three
  • Extended 5 GHz channel support, adding more channels in the 5 GHz band to reduce interference and congestion
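To get a feel for what the new requirements mean for raw speed, here is a back-of-the-envelope Python sketch. It uses the well-known 802.11ac figure of 433.3 Mbps per spatial stream on an 80 MHz channel (MCS9, short guard interval), and treats the peak rate as linear in streams and channel width, which is a simplification:

```python
BASE_80MHZ_1SS = 433.3  # Mbps: 802.11ac MCS9, 80 MHz, 1 stream, short GI

def max_phy_rate(streams: int, channel_mhz: int) -> float:
    """Rough 802.11ac peak PHY rate: scales linearly with spatial streams,
    and a 160 MHz channel doubles the 80 MHz rate (simplified model)."""
    width_factor = {80: 1, 160: 2}[channel_mhz]  # only 80/160 MHz handled
    return BASE_80MHZ_1SS * streams * width_factor

print(max_phy_rate(3, 80))   # ~1300 Mbps: a typical Wave 1 3-stream router
print(max_phy_rate(4, 160))  # ~3466 Mbps: Wave 2 with 4 streams + 160 MHz
```

Note this is the link rate, not application throughput, and MU-MIMO improves aggregate efficiency across clients rather than the single-link peak shown here.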

Currently the following WiFi SoCs, routers, and reference designs are said to be certified with Wave 2 features:

  • Broadcom BCM94709R4366AC
  • Marvell Avastar 88W8964
  • MediaTek MT7615 AP Reference Design and MT6632 STA Reference Design
  • Qualcomm IPQ8065 802.11ac 4-stream Dual-band, Dual-concurrent Router
  • Quantenna QSR1000 4×4 802.11ac Wave 2 Chipset Family

According to the Wi-Fi alliance website, you can check “Wi-Fi CERTIFIED™ ac (with wave 2 features)” products by following this link, and at the time of writing there are 9 products listed. However, if you show “advanced filters” in that link, only “DL MU-MIMO” is selected, and if you start selecting “160 MHz channels” and “Extended 5 GHz Channel Support”, the number drops to two items: Broadcom BCM94709R4366AC and Mediatek MT6632. So either there’s a temporary issue with the website, or the certification does not guarantee all features are included, only MU-MIMO.

AOMedia AV1 is a Royalty-free, Open Source Video Codec Aiming to Replace VP9 and Compete with H.265

July 3rd, 2016 22 comments

The Alliance for Open Media, or AOMedia, is a new non-profit organization founded in 2015 by Amazon, Cisco, Google, Intel Corporation, Microsoft, Mozilla, and Netflix, and more recently joined by AMD, ARM, and NVIDIA, whose first project is to develop AV1 royalty-free and open video codec and format to provide an alternative to H.265 / HEVC, and a successor to VP9.

Alliance_For_Open_Media_AOMedia

The project is a team effort combining teams working on the Daala, Thor, and VP10 video codecs, and while, AFAIK, the AV1 specifications have not been released yet (target: Q1 2017), the organization has already released an early implementation of an AV1 video decoder and encoder under the combination of a BSD 2-clause license and the Alliance for Open Media Patent License 1.0, which can be found on googlesource.com.

So I had a quick try myself following the instructions, by first downloading an uncompressed YUV4MPEG sample:

and the source code:

before building it:

The last command will install the headers, and aomdec video decoder and aomenc encoder.

We also need some scripts to be placed in the path:

Now we can run the script in the directory for the sample(s):

The command will encode all y4m files in the directory at 200 kbps up to 500 kbps in 50 kbps increments. Encoding only uses one core; my machine is powered by an AMD FX8350 processor, and you can see encoding is currently very slow, well under 0.5 fps for a CIF video (352 x 288 resolution). That should be expected though, because VP9 encoding is already slow (its successor is expected to require even more processing power), and first software implementations are usually not optimized for speed; they are just meant to show the encoding works.

The test scripts will create a bunch of AV1 video files in baseline directory: husky_cif.y4m-200.av1.webm, husky_cif.y4m-250.av1.webm, etc… as well as husky_cif.y4m.stt with some statistics.
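That bitrate sweep and file naming is easy to reproduce with a short script. The Python sketch below just builds the aomenc command lines; the flags are my assumption based on the libvpx/libaom command-line tools, so check aomenc --help on the build you compiled:

```python
import glob
import subprocess

def bitrate_ladder(start=200, stop=500, step=50):
    """Bitrates used by the test scripts: 200 to 500 kbps in 50 kbps steps."""
    return list(range(start, stop + step, step))

def encode_cmd(sample: str, kbps: int) -> list:
    """Build an aomenc command line (flags assumed from the libaom tools)."""
    return ["aomenc", "--codec=av1", f"--target-bitrate={kbps}",
            "-o", f"{sample}-{kbps}.av1.webm", sample]

if __name__ == "__main__":
    for sample in glob.glob("*.y4m"):
        for kbps in bitrate_ladder():
            print(" ".join(encode_cmd(sample, kbps)))
            # subprocess.run(encode_cmd(sample, kbps), check=True)  # run it
```

The generated output names (e.g. husky_cif.y4m-200.av1.webm) match what the test scripts drop in the baseline directory.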

Decoding is much faster as it should be:

You can play back the videos with mpv using aomdec for decoding. For example:

AOmedia_AV1_Video_MPV

New video codecs normally take years to replace old ones, and even if it gains traction, AV1 will likely be used alongside VP9, H.265, and H.264 for several years. Considering software and silicon vendors, and content providers (Google/YouTube, Amazon, and Netflix) are involved in the project, I’m quite confident the AOMedia AV1 codec will become popular, and hardware decoders are likely to be implemented in ARM, Intel, and AMD SoCs in a few years.

Thanks to Ohmohm for the tip.