Archive

Posts Tagged ‘standard’

IPv10 Draft Specification Released for IPv6 <-> IPv4 Communications

September 8th, 2017 11 comments

The first time I used IPv6 was in 2000 for my final year project. For many years, we’ve been told that the 32-bit IPv4 address space was running out, and that a transition to 128-bit IPv6 addresses was necessary and would happen sooner rather than later. Fast forward to 2017: I’m still using IPv4 on my home network, and my ISP still only gives me a dynamically allocated IPv4 address each time I connect to their service. Based on data from Google, IPv6 adoption only really started in 2011-2012, and now almost 20% of users can connect over IPv6, either natively or through IPv4/IPv6 tunneling. But today, I read that an IPv10 draft specification had recently been released.

What? Surely with the slow adoption of IPv6, we certainly don’t need yet another Internet protocol… But actually, IPv10 (Internet Protocol version 10) is designed to allow IPv6 hosts to communicate with IPv4 hosts, and vice versa, which explains why it’s also called IPMix, and why it derives its name from IPv6 + IPv4 = IPv10.

IPv10 was created because it was clear that both IPv4 and IPv6 would still be in use for decades to come, and with the Internet of Things, a large number of IPv6-only nodes will come online that still need to communicate with IPv4 nodes. Existing workarounds such as native dual stack (IPv4 and IPv6), dual-stack Lite, NAT64, 464XLAT, and MAP either do not allow IPv4 to IPv6 communication, or are inefficient.

So IPv10 aims to address those shortcomings as described in the draft specs:

It solves the issue of allowing IPv6 only hosts to communicate to IPv4 only hosts and vice versa in a simple and very efficient way, especially when the communication is done using both direct IP addresses and when using hostnames between IPv10 hosts, as there is no need for protocol translations or getting the DNS involved in the communication process more than its normal address resolution function.

IPv10 allows hosts from two IP versions (IPv4 and IPv6) to be able to communicate, and this can be accomplished by having an IPv10 packet containing a mixture of IPv4 and IPv6 addresses in the same IP packet header.

From here the name of IPv10 arises, as the IP packet can contain (IPv6 + IPv4 /IPv4 + IPv6) addresses in the same layer 3 packet header.

IPv10 handles all four types of communications: IPv4 to IPv6, IPv4 to IPv4, IPv6 to IPv4, and IPv6 to IPv6. You can read the draft specification for details about the packets.
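To make the “mixed addresses in one header” idea concrete, here is a minimal sketch of how a single fixed-size layer 3 header could carry either address family. This is an illustration only, not the draft’s actual wire format: it reuses the standard IPv4-mapped IPv6 convention (::ffff:a.b.c.d) to fit IPv4 addresses into 128-bit fields.

```python
import ipaddress

def pack_mixed_addresses(src: str, dst: str) -> bytes:
    """Pack an IPv4 or IPv6 source/destination pair into two 128-bit fields.

    IPv4 addresses are carried as IPv4-mapped IPv6 (::ffff:a.b.c.d), so a
    single fixed-size header layout can mix both families. Hypothetical
    encoding for illustration; the IPv10 draft defines its own format.
    """
    def to_16_bytes(addr: str) -> bytes:
        ip = ipaddress.ip_address(addr)
        if ip.version == 4:
            ip = ipaddress.IPv6Address(f"::ffff:{addr}")
        return ip.packed

    return to_16_bytes(src) + to_16_bytes(dst)

# An IPv6 host talking to an IPv4-only host: both fit in one header
header = pack_mixed_addresses("2001:db8::1", "192.0.2.10")
print(len(header))   # 32 bytes: two 128-bit address fields
```

The receiving stack can tell the families apart by inspecting the field: a 128-bit value with the ::ffff:0:0/96 prefix denotes an embedded IPv4 address.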

IPv10 Operation Example – Click to Enlarge

Categories: Uncategorized Tags: ipv10, ipv6, networking, standard

RadioShuttle Network Protocol is an Efficient, Fast & Secure Alternative to LoRaWAN Protocol

September 6th, 2017 5 comments

LoRaWAN protocol is one of the most popular LPWAN standards used for the Internet of Things today, but some people found it “lacked efficiency, did not support direct node-to-node communication, and was too costly and far too complicated for many applications”, so they developed their own LoRa wireless protocol software called RadioShuttle, which they claim is “capable of efficiently sending messages in a fast and secure way between simple LoRa modules”.

Some of the key features of the protocol include:

  • Support for secure or insecure (less time/energy) message transmission, and multiple message transmissions in parallel
  • Unique 32-bit device ID (device number) per LoRa member, unique 16-bit app ID (program number for the communication)
  • Security – Login with SHA-256 encrypt password; AES-128 message encryption
  • Air Traffic Control – Nodes only send if no LoRa signal is active on that channel.
  • Optimized protocol –  Message delivery within 110 ms (SF7, 125 kHz, free channel provided); default LoRa bandwidth 125 kHz (125/250/500 kHz adjustable), as narrow bandwidths allow for a longer range; Automatic transmitting power adjustment
  • Operating modes
    • Station, constant power supply recommended – 12 mA in receiving mode, 20 to 100 mA in transmitting mode
    • Node Online (permanently receiving), constant power supply recommended – 12 mA in receiving mode, 20 to 100 mA in transmitting mode
    • Wireless sensor (Node Offline checking) – Node reports back regularly. 1 µA in standby mode, battery operation for years.
    • Wireless sensor (Node Offline) – Node only active if events are reported. 1 µA in standby mode, battery operation for years.
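RadioShuttle’s wire protocol is closed source, so the exact login scheme is not public. The sketch below only illustrates the kind of challenge-response login the “SHA-256 encrypted password” feature implies, where the password never travels in the clear; function names and the message layout are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical challenge-response login of the kind the feature list
# implies ("login with SHA-256 encrypted password"). RadioShuttle's
# actual protocol is closed source and will differ in detail.

def make_challenge() -> bytes:
    return os.urandom(16)  # server-side random nonce, fresh per login

def login_response(password: str, challenge: bytes) -> bytes:
    # Only SHA-256(challenge || password) crosses the air, never the password
    return hashlib.sha256(challenge + password.encode()).digest()

def verify(password: str, challenge: bytes, response: bytes) -> bool:
    expected = hashlib.sha256(challenge + password.encode()).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = make_challenge()
response = login_response("node-secret", challenge)
print(verify("node-secret", challenge, response))   # True
print(verify("wrong-pass", challenge, response))    # False
```

Because the nonce changes every time, a captured response cannot be replayed for a later login attempt.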

The RadioShuttle library has a low memory and storage footprint, with current requirements of:

  • 100 kB Flash for RadioShuttle library with SHA256 & AES
  • 10 kB RAM for Node Offline/Checking/Online mode
  • 10 kB RAM for Station Basic mode (RAM depends on the number of nodes)
  • 1 MB RAM for Station Server mode (Raspberry Pi, 10,000 LoRa nodes)

The solution supports various Arduino boards, some Arm Mbed boards (e.g. STM32L0, STM32L4), and Linux-capable boards like Raspberry Pi or Orange Pi (planned). Semtech SX1276MB1MAS and SX1276MB1LAS (SX1276-based), Murata CMWX1ZZABZ-078/091 (found in the STM32 Discovery kit for LoRaWAN), and HopeRF RFM95 transceivers are supported.

LongRa Board – Click to Enlarge

The developers have also designed their own LongRa board, compatible with Arduino Zero, based on Semtech SX1276 LoRa radio chip with a 168 dB link budget and support for 868 MHz & 915 MHz frequency. The board can be powered by its micro USB port, or by two AA batteries if you’re going to use the board as a wireless sensor node.

The RadioShuttle protocol is not open source for now, and while it supports multiple devices as stated previously, a 25 Euro license is required per device if you are not using the LongRa board.


USB 3.2 To Bring 20 Gbps Transfer Rate to Existing USB type C Cables

July 28th, 2017 10 comments

The USB 3.0 Promoter Group has recently announced the upcoming USB 3.2 specification that defines multi-lane operation for compatible hosts and devices, hence doubling the maximum theoretical bandwidth to 20 Gbps.

Only USB Type-C cables were designed to support multi-lane operation, so other types of USB cables will not support USB 3.2 and will stay limited to 10 Gbps. USB 3.2 allows for up to two lanes of 5 Gbps, or two lanes of 10 Gbps operation, so if you want to achieve a 20 Gbps transfer rate, you’ll need a USB Type-C cable certified for SuperSpeed USB 10 Gbps, besides hosts and devices that comply with USB 3.2.

Layout of the pins in a Type-C connector

Anandtech explains that two high-speed data paths are available in the USB Type-C connector as shown above, and they are also used for alternate modes. The USB 3.1 standard makes use of one of those paths for 10 Gbps transfers, and the other path for an alternate mode, but USB 3.2 allows both to be used for 10 Gbps transfers, hence achieving up to 20 Gbps. If both paths are used for alternate modes, then transfers will be limited to USB 2.0 speeds (up to 480 Mbps).
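The headline rates above are raw signaling rates; line coding eats some of that bandwidth. A quick back-of-the-envelope calculation (USB 3.0/3.1 Gen 1 uses 8b/10b coding, Gen 2 uses 128b/132b) shows what each lane configuration can actually carry:

```python
# Effective throughput for the USB 3.x lane configurations mentioned above.
# Gen 1 lanes use 8b/10b line coding (20% overhead); Gen 2 lanes use the
# more efficient 128b/132b coding (~3% overhead).

def effective_gbps(raw_gbps: float, coding: tuple, lanes: int = 1) -> float:
    payload_bits, line_bits = coding
    return raw_gbps * payload_bits / line_bits * lanes

GEN1 = (8, 10)      # 8b/10b
GEN2 = (128, 132)   # 128b/132b

print(f"USB 3.1 Gen 1 (1 x 5 Gbps):  {effective_gbps(5, GEN1):.2f} Gbps")   # 4.00
print(f"USB 3.1 Gen 2 (1 x 10 Gbps): {effective_gbps(10, GEN2):.2f} Gbps")  # 9.70
print(f"USB 3.2 (2 x 10 Gbps):       {effective_gbps(10, GEN2, 2):.2f} Gbps")  # 19.39
```

So the “20 Gbps” mode delivers roughly 19.4 Gbps of payload before protocol overhead, still almost double a single Gen 2 lane.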

That’s a good development, but it will further confuse consumers, as most companies do not clearly explain the capabilities of their USB Type-C interfaces and/or cables. USB Type-C cables can be made for USB 2.0 (480 Mbps), USB 3.0 / USB 3.1 Gen 1 (5 Gbps), or USB 3.1 Gen 2 (10 Gbps), and only the latter will support USB 3.2.

Categories: Hardware Tags: standard, usb

Bluetooth Low Energy Now Supports Mesh Networking for the Internet of Things

July 19th, 2017 8 comments

The Bluetooth Special Interest Group (SIG) has announced support for mesh networking for BLE, which enables many-to-many (m:m) device communications, and is optimized for large scale device networks for building automation, sensor networks, asset tracking solutions, and other IoT solutions where up to thousands of devices need to reliably and securely communicate with one another. The standard actually specifies 32,767 unicast addresses per mesh network, but that number of nodes is not achievable right now.

Mesh networking works with Bluetooth Low Energy and is compatible with version 4.0 and higher of the specifications. It requires SDK support for the GAP Broadcaster and Observer roles to both advertise and scan for advertising packets, and the FAQ claims Mesh Networking does not require extra power, and the devices only need to wake up at least once every four days or when they have data to transmit. Mobile apps connecting to mesh networking products will use the Bluetooth mesh proxy protocol implemented on top of BLE GAP and GATT APIs.
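Bluetooth mesh distributes messages by “managed flooding”: relay nodes rebroadcast each packet with a decremented TTL, and a message cache stops nodes from relaying the same packet twice. The toy model below illustrates just that mechanism (it is not the SIG’s reference stack, and real networks also apply relay feature flags and network-layer security):

```python
from collections import deque

def flood(adjacency: dict, source: str, ttl: int) -> set:
    """Return the set of nodes a message reaches from `source`.

    Simplified managed flooding: every node relays once (message cache),
    and relaying stops when the TTL reaches zero.
    """
    seen_cache = {source}              # per-message cache, simplified
    queue = deque([(source, ttl)])
    while queue:
        node, remaining_ttl = queue.popleft()
        if remaining_ttl == 0:
            continue                   # TTL exhausted: do not relay further
        for neighbor in adjacency[node]:
            if neighbor not in seen_cache:
                seen_cache.add(neighbor)
                queue.append((neighbor, remaining_ttl - 1))
    return seen_cache

# A chain A-B-C-D: with TTL 2 the message stops at C, with TTL 3 it reaches D
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(sorted(flood(chain, "A", ttl=2)))   # ['A', 'B', 'C']
print(sorted(flood(chain, "A", ttl=3)))   # ['A', 'B', 'C', 'D']
```

The TTL bound is why mesh can scale to large networks without broadcast storms: each message only propagates a configured number of hops.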

Bluetooth Mesh Control Model – Server and Client models are also available

You can access various parts of the Mesh Networking standard, including the Mesh Profile specification 1.0, Mesh Model specification 1.0, and Mesh Device Properties 1.0, on the Bluetooth website.

The Bluetooth SIG expects commercial products with Bluetooth mesh networking technology to become available later this year. Qualcomm – who purchased CSR – announced mesh networking support for their QCA4020 and QCA4040 BLE chips, sampling today with commercial availability in September 2017, and Nordic Semi has released a Mesh SDK, as has Silicon Labs. Since mesh networking does not require hardware modifications, all companies providing BLE solutions should be able to offer it.

Thanks to Crashoverride for the tip.

USB type C to HDMI Cables Coming Soon thanks to HDMI Alt Mode for USB Type-C

June 29th, 2017 1 comment

Some devices already support video output over a USB type C connector, but they normally rely on DisplayPort over USB type C, so you’d either need a monitor that supports DisplayPort, or some USB Type C to HDMI converter. A DisplayLink dock is another solution, but again it converts video and audio signals. But soon you’ll be able to use a simple USB type C to HDMI cable between a capable device (camera, phone, computer, TV box…) and any HDMI TV or monitor.

This is being made possible thanks to HDMI Alt Mode for USB Type-C, which supports all HDMI 1.4b features including:

  • Resolutions up to 4K (@ 30 Hz)
  • Surround sound
  • Audio Return Channel (ARC)
  • 3D (4K and HD)
  • HDMI Ethernet Channel (HEC)
  • Consumer Electronic Control (CEC)
  • Deep Color, x.v.Color, and content types
  • High Bandwidth Digital Content Protection (HDCP 1.4 and HDCP 2.2)

There’s no video or audio conversion inside the cable, but there’s still a small micro-controller to handle the messaging that negotiates the alt mode to use, which means the source device will have to specifically support the new standard.
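That negotiation runs over USB Power Delivery structured Vendor Defined Messages (VDMs), where the host discovers which alternate modes a device advertises before entering one. The walk below only illustrates the message order; the SVID value is a placeholder (the real HDMI Alt Mode SVID is assigned in the spec), and real PD stacks carry many more fields.

```python
from enum import Enum, auto

class VDMCommand(Enum):
    DISCOVER_IDENTITY = auto()
    DISCOVER_SVIDS = auto()
    DISCOVER_MODES = auto()
    ENTER_MODE = auto()

def negotiate_alt_mode(device_svids: set, wanted_svid: int) -> list:
    """Walk the USB-PD alt-mode discovery sequence; return the commands sent."""
    sent = [VDMCommand.DISCOVER_IDENTITY, VDMCommand.DISCOVER_SVIDS]
    if wanted_svid not in device_svids:
        return sent  # device never advertised the mode, stop here
    sent += [VDMCommand.DISCOVER_MODES, VDMCommand.ENTER_MODE]
    return sent

HDMI_SVID = 0x1234  # hypothetical placeholder; the real SVID comes from the spec

steps = negotiate_alt_mode({HDMI_SVID}, HDMI_SVID)
print([s.name for s in steps][-1])   # ENTER_MODE
```

A legacy source that does not implement this exchange simply never reaches ENTER_MODE, which is why existing devices cannot gain HDMI Alt Mode output through a cable alone.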

Charbax caught up with a representative of HDMI Licensing Administrator, Inc. demonstrating a USB-C to HDMI cable with a 2-in-1 laptop connected to an HDMI monitor, as well as a camera prototype getting both an HDMI signal with CEC support and power (USB-PD) over a single cable.


The new specification is good news, and we should expect capable devices later this year. We’ll just have to hope manufacturers get serious with logos and feature descriptions for their USB Type-C connectors, as there are now so many optional features that it could end up being really confusing to end users. In case you wonder why HDMI 2.0b, with features like 4K @ 60 Hz and HDR, is not supported, the FAQ explains that “the HDMI Forum is responsible for the HDMI 2.0b specification and they have not made any public statements regarding the HDMI Alt Mode for the HDMI 2.0b spec”.

Categories: Hardware, Video Tags: camera, hdmi, standard, usb

Samsung & Amazon Introduce HDR10+ Standard with Dynamic Metadata & Tone Mapping

April 20th, 2017 7 comments

Most recent 4K Ultra HD televisions support high dynamic range (HDR) through standards such as HDR10, Dolby Vision, or Hybrid Log-Gamma (HLG). Samsung and Amazon have jointly introduced an update to HDR10 with HDR10+ that adds dynamic tone mapping & metadata.

The companies describe the issues with HDR10’s static metadata as follows:

The current HDR10 standard utilizes static metadata that does not change during playback despite scene specific brightness levels. As a result, image quality may not be optimal in some scenes. For example, when a movie’s overall color scheme is very bright but has a few scenes filmed in relatively dim lighting, those scenes will appear significantly darker than what was originally envisioned by the director.

HDR10+ will be able to adjust metadata for each scene, and even for each frame, hence solving the issue of darker scenes. If you already own a Samsung TV with HDR10, it’s not already outdated, as all of Samsung’s 2017 UHD TVs already support HDR10+, and their 2016 UHD TVs will get HDR10+ through a firmware update.
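A toy calculation shows why per-scene metadata matters. With static metadata, every scene is compressed by the same curve sized for the whole title’s peak brightness; with dynamic metadata, a dim scene can be mapped against its own peak. This uses a simple linear scale for illustration, not the actual HDR10+ transfer curves:

```python
# Toy tone mapping: scale scene luminance so it fits a display's peak.
# Static metadata forces the whole title's peak; dynamic metadata can
# use each scene's own peak. Linear scaling only, for illustration.

def tone_map(scene_nits: list, peak_nits: float, display_nits: float = 500) -> list:
    scale = min(1.0, display_nits / peak_nits)
    return [round(v * scale, 1) for v in scene_nits]

title_peak = 4000.0              # one bright explosion sets the static peak
dim_scene = [10.0, 40.0, 80.0]   # candlelit scene, peaks at only 80 nits

static = tone_map(dim_scene, title_peak)        # scaled by the whole title
dynamic = tone_map(dim_scene, max(dim_scene))   # scaled by this scene alone

print(static)    # [1.2, 5.0, 10.0]  -> crushed toward black
print(dynamic)   # [10.0, 40.0, 80.0] -> scene already fits, left intact
```

This is exactly the “dim scenes appear significantly darker than the director intended” problem quoted above: the static map sacrifices the dark scene to protect one bright one.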

Amazon Video will be the first streaming service to deliver HDR10+ content, and Samsung has also collaborated with other companies to integrate HDR10+ into products such as Colorfront’s Transkoder for post-production mastering, and MulticoreWare’s x265 video encoder.

HDR10 – and HDR10+ – is also said to be an open standard, but I could not find the specifications online, and only managed to find that the HDR10 Media Profile must support EOTF SMPTE ST 2084, 4:2:0 color sub-sampling, 10-bit color depth, ITU-R BT.2020 color primaries, and SMPTE ST 2086, MaxFALL, and MaxCLL metadata defined in the CTA 861.3-A standard (free preview), which you can purchase for $67. There must be some sort of CTA standard for HDR dynamic metadata extensions for HDR10+, but I could not find anything [Update: maybe SMPTE ST 2094-20-2016?]

Samsung showcased a static vs dynamic tone mapping demo at NAB 2016 last year, but it’s quite hard to see any differences in the video.

Categories: Hardware Tags: amazon, hdr, HDR10, samsung, standard

MIPI I3C Sensor Interface is a Faster, Better, Backward Compatible Update to I2C Protocol

January 11th, 2017 6 comments

I2C (Inter-Integrated Circuit) is one of the most commonly used serial buses for interfacing sensors and other chips, and uses two signals (clock and data) to control up to 128 chips thanks to its 7-bit address scheme. After announcing it was working on a new I3C standard in 2014, the MIPI Alliance has now formally introduced the MIPI I3C (Improved Inter-Integrated Circuit) Standardized Sensor Interface, a backward compatible update to I2C with lower power consumption and a higher bitrate, allowing it to be used for applications typically relying on SPI too.
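The 128-chip limit follows directly from how I2C puts addresses on the wire: the 7-bit address and the read/write direction bit share the first byte of every transaction (and a handful of those 128 addresses are reserved by the spec). A minimal sketch:

```python
# The 7-bit I2C address shares the first wire byte with the R/W bit:
# address in bits 7..1, direction in bit 0. Hence 2**7 = 128 possible
# addresses (minus a few reserved ones).

READ, WRITE = 1, 0

def address_byte(addr7: int, direction: int) -> int:
    """Build the first byte of an I2C transaction."""
    if not 0 <= addr7 < 128:
        raise ValueError("I2C addresses are 7-bit (0..127)")
    return (addr7 << 1) | direction

# Reading from a device at the common sensor address 0x68 puts 0xD1 on
# the bus; writing to it puts 0xD0.
print(hex(address_byte(0x68, READ)))    # 0xd1
print(hex(address_byte(0x68, WRITE)))   # 0xd0
```

I3C keeps this addressing model for legacy devices while adding dynamic address assignment, which is how it avoids collisions on mixed buses.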

I3C offers four data transfer modes that, at a maximum base clock of 12.5 MHz, provide a raw bitrate of 12.5 Mbps in the baseline SDR default mode, and 25, 27.5, and 39.5 Mbps respectively in the HDR modes. After excluding transaction control bytes, the effective data bitrates achieved are 11.1, 20, 23.5, and 33.3 Mbps.

MIPI I3C vs I2C Energy Consumption and Bitrate – Click to Enlarge

The MIPI Alliance has also provided a table comparing I3C, I2C, and SPI features, advantages, and disadvantages.

MIPI I3C

  • Number of lines: 2-wire
  • Effective data bitrate: 33.3 Mbps max at 12.5 MHz (typically 10.6 Mbps at 12 MHz SDR)
  • Advantages:
    • Only two signal lines
    • Legacy I2C devices can co-exist on the same bus (with some limitations)
    • Dynamic addressing, with support for static addressing for legacy I2C devices
    • I2C-like data rate messaging (SDR)
    • Optional high data rate messaging modes (HDR)
    • Multi-drop capability, and dynamic addressing avoids collisions
    • Multi-master capability
    • In-band interrupt support
    • Hot-join support
    • A clear master ownership and handover mechanism is defined
    • In-band integrated commands (CCC) support
  • Disadvantages:
    • Only 7 bits are available for device addressing
    • Slower than SPI (i.e. 20 Mbps)
    • New standard, adoption needs to be proven
    • Limited number of devices on a bus, to around a dozen

I2C

  • Number of lines: 2-wire (plus separate wires for each required interrupt signal)
  • Effective data bitrate: 3 Mbps max at 3.4 MHz (Hs); 0.8 Mbps max at 1 MHz (Fm+); 0.35 Mbps max at 400 kHz (Fm)
  • Advantages:
    • Only two signal lines
    • Flexible data transmission rates
    • Each device on the bus is independently addressable
    • Devices have a simple master/slave relationship
    • Simple implementation
    • Widely adopted in sensor applications and beyond
    • Supports multi-master and multi-drop capability
  • Disadvantages:
    • Only 7 bits (or 10 bits) are available for static device addressing
    • Limited communication speeds, and many devices do not support the higher speeds
    • Slaves can hang the bus, requiring a system restart
    • Slower devices can delay the operation of faster devices
    • Uses more power than SPI
    • Limited number of devices on a bus, to around a dozen
    • No clear master ownership and handover mechanism
    • Requires separate support signals for interrupts

SPI

  • Number of lines: 4-wire (plus separate wires for each required interrupt signal)
  • Effective data bitrate: approx. 60 Mbps max at 60 MHz for conventional implementations (typically 10 Mbps at 10 MHz)
  • Advantages:
    • Full duplex communication
    • Push-pull drivers
    • Good signal integrity and high speed below 20 MHz (higher speeds are challenging)
    • Higher throughput than I2C and SMBus
    • Not limited to 8-bit words
    • Arbitrary choice of message size, content, and purpose
    • Simple hardware interfacing
    • Lower power than I2C
    • No arbitration or associated failure modes
    • Slaves use the master’s clock
    • Slaves do not need a unique address
    • Not limited by the standard to any maximum clock speed (can vary between SPI devices)
  • Disadvantages:
    • Needs more pins than I2C/MIPI I3C
    • Needs a dedicated slave select (SS) pin per slave
    • No in-band addressing
    • No slave hardware flow control
    • No hardware slave acknowledgment
    • Supports only one master device
    • No error-checking protocol is defined
    • No formal standard, so validating conformance is not possible
    • Does not support hot swapping
    • Requires separate support signals for interrupts

You’ll find more technical details by downloading the MIPI I3C specifications and/or whitepaper (free email registration required). Note that only MIPI members have access to the complete specifications.

Via Electronics Weekly

Categories: Hardware Tags: i3c, mipi, sensor, standard

Adding Plus Sign and Tag to Email Address May Help Identify Source of (Spam/Junk) Emails

November 27th, 2016 15 comments

I’ve noticed several commenters using email addresses formatted like john+cnx@gmail.com or jane+blogs@gmail.com while posting comments on CNX Software blog. I just thought they were using some specific email accounts or some forwarding technique to receive emails, and did not investigate further, but by chance I came across the explanation on reddit this morning:

It’s just another character that can be in an email address. For example, john@example.com, john+work@example.com, john+shopping@example.com, and john+newsletters@example.com are all completely different email addresses.

However, Gmail will ignore a + and everything after it in the username portion of an email address, so john+work@gmail.com, john+shopping@gmail.com, and john+newsletters@gmail.com will all go to john@gmail.com‘s inbox. This is acceptable because Google does not allow + in its login names. Many people use this property to identify the source of an email.

So I could not resist trying it by sending myself an email with +source1 added to my username, and I did receive the email in my inbox as if I had not added the plus sign and “source1” tag.
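The normalization described above can be sketched in a few lines: strip the “+tag” from the local part to recover the underlying mailbox. Gmail additionally ignores dots in the local part; other providers may not, so treat the `strip_dots` behavior as Gmail-specific rather than universal.

```python
# Recover the underlying mailbox from a plus-addressed email address.
# Dropping "+tag" matches the Gmail/Hotmail behavior described above;
# dot-stripping is a Gmail-only quirk, hence the opt-in flag.

def canonical_mailbox(address: str, strip_dots: bool = False) -> str:
    local, _, domain = address.partition("@")
    local = local.split("+", 1)[0]   # drop "+tag" and everything after it
    if strip_dots:                   # Gmail-specific behavior
        local = local.replace(".", "")
    return f"{local}@{domain}"

print(canonical_mailbox("user+cnx@example.com"))                     # user@example.com
print(canonical_mailbox("u.s.e.r+spam@gmail.com", strip_dots=True))  # user@gmail.com
```

This is also exactly what a spammer would run over a stolen address list, which is why the tagging trick identifies careless leaks rather than determined ones.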


I’m using Gmail for cnx-software.com emails, but I also tried with Hotmail, and it worked too. Another reddit commenter mentioned that it’s actually part of the RFC 5233 standard, but not all email providers support it.

This can be used to trace the source of an email. For example, if you’ve only ever commented on this blog with “john+cnxsoftware@gmail.com”, and some day you receive an email entitled “Nose Enlargement Program” at that exact address, that will either mean that the whole purpose of CNX Software blog was always to gather email addresses for nefarious purposes, or that the blog was somehow hacked and others took the opportunity. It’s not exactly 100% reliable, as spammers who want to hide their source could easily remove any “+tag” string from their email database(s).

Categories: Testing Tags: standard