Archive

Posts Tagged ‘facebook’

Oculus Rift Virtual Reality Development Kit 2 Becomes Open Source Hardware

October 11th, 2017

The Oculus Rift DK2 virtual reality headset and development kit started shipping in the summer of 2014. The DK2 is the kind of VR headset that connects to a more powerful computer via USB and HDMI, and it includes hardware for positional tracking, a 5″ display, and two lenses for each eye.

Since then the company has been purchased by Facebook, and they’ve now decided to make the headset fully open source hardware.

 

Exploded view of Oculus Rift DK2

The release includes schematics, board layouts, mechanical CAD, artwork, and specifications under a Creative Commons Attribution 4.0 license, as well as firmware under a “BSD+PATENT” license, all of which you’ll find on GitHub.

The release is divided into four main folders:

  • Documentation with high-level specifications for the DK2 headset, sensor, and firmware.
  • Cable with schematics and high-level specifications for the cables. The cable is a custom assembly that would be hard to recreate from source, and allegedly the most complex part of the design.
  • Sensor with electrical and mechanical CAD for the positional tracking sensor. Sadly the MCU firmware for this part has not been released, as it is not redistributable.
  • Headset with mainboard firmware, electrical and mechanical CAD for the headset, as well as artwork for the packaging.


A Galaxy Note 3 AMOLED display was used for the headset, and an STMicro STM32 microcontroller handles inertial sensor data and manages microsecond-precision timestamping for all parts of the system.

Normally, such an OSHW release would enable a willing individual to reproduce the kit themselves, but the company explains that some of the kit’s components are very hard, if not impossible, to source today.

Via Twitter, and tip from Harley.

Facebook Zstandard “zstd” & “pzstd” Data Compression Tools Deliver High Performance & Efficiency

December 19th, 2016

Ubuntu 16.04 and – I assume – other recent operating systems still use single-threaded versions of file & data compression utilities such as bzip2 or gzip by default, but I’ve recently learned that compatible multi-threaded compression tools such as lbzip2, pigz or pixz have been around for a while, and you can replace the default tools with them for much faster compression and decompression on multi-core systems. That post led to further discussion about Facebook’s Zstandard 1.0, which promises both better compression ratios and faster compression. The implementation is open source, released under a BSD license, and offers both a single-threaded zstd tool and a multi-threaded pzstd tool. So we all started to run our own little tests and were impressed by the results. Some concerns were raised about patents, and development is still a work in progress, with a few bugs here and there, including pzstd segfaulting on ARM.

Zstd vs Zlib Compression Ratio vs Speed

Zlib has 9 compression levels, while Zstd has 19, so Facebook tested the speed of all compression levels and drew the chart above, comparing compression speed to compression ratio for all test points. Zstd is clearly superior to zlib here.

They’ve also compared compression and decompression performance and compression ratio against various other competing fast algorithms, using lzbench to run the tests from memory in order to prevent an I/O bottleneck from storage devices.

Name            Ratio   C.speed (MB/s)   D.speed (MB/s)
zstd 1.0.0 -1   2.877   330              940
zlib 1.2.8 -1   2.730   95               360
brotli 0.4 -0   2.708   320              375
QuickLZ 1.5     2.237   510              605
LZO 2.09        2.106   610              870
LZ4 r131        2.101   620              3100
Snappy 1.1.3    2.091   480              1600
LZF 3.6         2.077   375              790

Again, everything is a compromise, but Zstd is faster than algorithms with a similar compression ratio, and has a higher compression ratio than faster algorithms.

But let’s not just trust Facebook, and instead try it ourselves. The latest release is version 1.1.2, so that’s what I tried on my Ubuntu 16.04 machine.
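Building it from source is straightforward; something like the following should work (the GitHub release tarball URL and default install paths are assumed here):

  # download, build and install zstd 1.1.2 from the GitHub release tarball (URL assumed)
  wget https://github.com/facebook/zstd/archive/v1.1.2.tar.gz
  tar xf v1.1.2.tar.gz
  cd zstd-1.1.2
  make
  sudo make install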

This will install the latest stable release of zstd to your system, but the multi-threaded pzstd tool is not built by default.
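pzstd lives in the contrib directory of the source tree, so it can be built separately (directory layout assumed from the zstd sources):

  # build the multi-threaded pzstd tool and copy it somewhere in the PATH
  cd contrib/pzstd
  make
  sudo cp pzstd /usr/local/bin/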

There are quite a lot of options for zstd.
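They can all be listed with the built-in help:

  # print the full list of zstd options
  zstd --help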

Since we are going to compare results with other tools, I’ll also flush the file cache before each compression and decompression run.
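A typical way to do that on Linux is to sync and drop the kernel’s page cache:

  # flush dirty pages and drop the page cache so each run starts cold
  sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"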

I’ll use the default settings to compress the Linux mainline source directory, stored on a hard drive, with tar + zstd (single thread) first.
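Assuming the kernel tree sits in a directory named linux (the archive name below is also just an example), the command looks like this:

  # tar the source tree and pipe it through single-threaded zstd
  time tar -cf - linux | zstd -o linux.tar.zst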

Then the same thing with pzstd (multiple threads).
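Here I’m assuming pzstd accepts the same basic options as zstd, with -p setting the number of worker threads:

  # same archive compressed with pzstd, using the FX8350's 8 cores (-p flag assumed)
  time tar -cf - linux | pzstd -p 8 -o linux-pzstd.tar.zst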

Bear in mind that some time is lost due to I/O on the hard drive, but I wanted to test a real use case here, and if you want to specifically compare the raw performance of the compressors you should use lzbench. Now let’s decompress the Zstandard tarballs.
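The commands below assume pzstd also mirrors zstd’s -d and -o options for decompression:

  # decompress the single-threaded archive
  time zstd -d linux.tar.zst -o linux.tar
  # decompress the pzstd archive with 8 threads (flags assumed)
  time pzstd -d -p 8 linux-pzstd.tar.zst -o linux-pzstd.tar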

My machine is based on an AMD FX8350 octa-core processor, and by comparing real and user time we can clearly see that the test is mostly I/O bound. I’ve repeated those tests with other multi-threaded tools, as shown in the summary table below.

Tool        Comp. Time (s)   Comp. "User" Time (s)   Decomp. Time (s)   Decomp. "User" Time (s)   File Size (bytes)   Compression Ratio
zstd        130.056          91.608                  45.124             21.26                     1,881,020,744       1.48
pzstd       58.929           86.56                   38.175             23.39                     1,883,697,296       1.48
lbzip2      84.216           353.84                  37.109             167.416                   1,855,837,345       1.50
pigz        61.121           121.332                 34.36              15.26                     1,903,915,372       1.47
pixz        177.596          1233.88                 36.24              78.116                    1,782,756,524       1.57
pzstd -19   275.361          1939.536                26.85              21.832                    1,794,035,552       1.56

I’ve included both “real” and “user” times, as the latter shows how much CPU time the task spent across all the cores of the system. A large user time means the task required lots of CPU power, and if a task completes in about the same “real” time but with a lower “user” time, it was likely more efficient and consumed less power. pixz is a multi-threaded implementation of the xz algorithm relying on LZMA compression, which delivers a high compression ratio at the expense of longer compression time, so I also ran pzstd with compression level 19 to compare.
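The command is the same as before, with the compression level raised to 19 (assuming pzstd takes zstd-style -# level flags):

  # highest standard compression level, still using 8 threads (flags assumed)
  time tar -cf - linux | pzstd -19 -p 8 -o linux-pzstd-19.tar.zst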

Zstandard’s compression ratio is similar to that of lbzip2 with default settings, but compression is quite a bit faster and much more power efficient. Compared to gzip, (p)zstd offers a better compression ratio, again with default settings, and somewhat comparable performance. pixz offers the best compression ratio, but takes much longer to compress, and uses more resources to decompress than Zstandard and pigz. pzstd with compression level 19 takes even more time to compress, getting close to pixz’s compression ratio, but has the advantage of being much faster to decompress.

OpenCellular is Facebook’s (soon to be) Open Source Wireless Access Platform

July 7th, 2016

A few months after Canonical and Lime Micro launched the LimeSDR open source software defined radio, which aims to serve as a development platform but also as the base for low-cost cellular or other wireless base stations, Facebook has announced its own open source wireless access platform with the OpenCellular project, whose goal is to lower the cost of Internet connectivity in remote areas where the infrastructure does not exist.

This is how Mark Zuckerberg summarizes the project:

We designed OpenCellular as an open system so anyone — from telecom operators to researchers to entrepreneurs — can build and operate wireless networks in remote places. It’s about the size of a shoe box and can support up to 1,500 people from as far as 10 kilometers away.

Along with our solar-powered aircraft Aquila and high-bandwidth laser beams, OpenCellular is the next step on our journey to provide better, more affordable connectivity to bring the world closer together.

But we can get some more details via another post by Kashif Ali, engineer at Facebook, including the following key points:

  • OpenCellular will support everything from 2G to LTE.
  • The system is composed of two main subsystems: general-purpose and base-band computing (GBC) with integrated power and housekeeping system, and radio frequency (RF) with integrated analog front-end.
  • The project will become open source over time, with hardware design, firmware and control software source code released publicly.
  • Facebook will collaborate with Telecom Infra Project (TIP) members, whose aim is to “reimagine the traditional approach to building and deploying telecom network infrastructure”.

The current GBC system supports four power sources: PoE (power-over-Ethernet), solar, DC, and external (lead acid) or internal (lithium-ion) batteries, and it also includes sensors to monitor temperature, voltage, current, and so on. Two versions of the radio system are available, based on either an SoC (fixed functionality) or an FPGA (software defined radio). The radio can be used as a full network-in-a-box when connected to the GBC, or as an access point in standalone mode.

The systems have also been designed so that a single person can install and operate them, and the enclosures are rugged enough to withstand all kinds of weather. The company has been testing the platform with 2G connectivity in its offices, and expects to release the first reference design this summer.

Thank you Nanik!

Categories: Hardware Tags: 2g, 3G, facebook, fpga, lte, sdr, sensor, solar