Red Hat Enterprise Linux 7.4 Now Fully Supports Arm servers


When hardware vendors announced Arm-based servers, they also claimed support for operating systems such as Ubuntu 16.04 LTS and Red Hat Enterprise Linux, so I assumed software support was more or less where it needed to be for Arm servers.

But apparently that was not quite the case, as Red Hat only announced full support for Arm servers in Red Hat Enterprise Linux for ARM a few days ago.

It all started with the SBSA (Server Base System Architecture) specification in 2014, which aimed to provide a single operating platform that works across all 64-bit ARMv8 server SoCs complying with it. Red Hat then released a developer preview of the OS for silicon and OEM vendors in 2015, and earlier this week, the company released Red Hat Enterprise Linux 7.4 for Arm, the first commercial release for this architecture.

RHEL 7.4 for Arm comes with a Linux 4.11 kernel and supports networking drivers from various vendors such as HiSilicon and Qualcomm. Linux 4.11 is already EOL upstream, but in this case Red Hat will be providing the updates itself. You’ll find more details about the new Arm operating system in the release notes, where you’ll also find a link to download the OS, provided you are a customer.

Thanks to ykchavan for the tip


Comments
tkaiser:

Mini remark: AKiTiO seems wrong (they make PCIe/Thunderbolt attached storage boxes); I would assume that’s meant to be Cavium instead 🙂

tkaiser:

@cnxsoft
I would fix the typo by choosing Cavium ThunderX2, since AKiTiO specializes in (pretty nice!) external storage and PCIe extender products that only work well with Thunderbolt-enabled Intel boxes.

blu:

@cnxsoft

Yep, RH finally went official.

theguyuk:

Will it support the Raspberry Pi HPC?

The High Performance Computing (HPC) division over at Los Alamos National Laboratory has lumped 750 Raspberry Pi boards together in a system designed (and built) by BitScope, consisting of five rack-mounted Pi Cluster Modules, each holding 150 boards.

tkaiser:

theguyuk: Will it support the Raspberry Pi HPC?

Red Hat is not known to support RPi Trading’s PR stunts. Besides that, I’ve never seen such bizarre cluster modules as those: only 150 RPi 3 boards in 6 rack units, which means most of the volume is wasted on useless Ethernet cabling. Using SoPine modules, for example, combined with cluster baseboards that put network switch ICs on the PCB, probably more than 1,000 quad-core A53 modules could be crammed into such a 6RU enclosure.

fossxplorer:

According to the latest Fedora release, which does support ARM, the RPi and PINE64 aren’t supported yet. Since Fedora is the upstream project for RHEL, I would assume neither the RPi nor the PINE64 is supported by RHEL 7 for ARM at the moment.

There are so many SBCs to choose from to build such ‘HPC’ server modules. Why the heck RPi?

theguyuk:

@fossxplorer
Down to sponsorship maybe? No heatsink?

zoobab:

All the ARM server blades I saw were super expensive. There was one Gigabyte motherboard that was 800 EUR, if I remember correctly.

tkaiser:

@zoobab
Well, it’s all about rack density here. Cramming low-performing cheap stuff into rack enclosures is more expensive, since you need to rent additional racks or even data centers.

blu:

@tkaiser
Re that HPC RPi contraption: SoC choice is one thing, but why they would not use the Compute Module (supposedly available since Q1’17) is an absolute mystery to me.

tkaiser:

blu: SoC choice is one thing, but why they would not use the Compute Module (supposedly available since Q1’17) is an absolute mystery to me.

That would have required designing something. This way, a company specialized in mounting systems for regular Raspberry Pis could demonstrate that these mounting systems also work in the least efficient ‘Cluster Modules’ ever. For the use case in question that might be OK, since, if I understood correctly, it’s not about HPC itself but about learning to implement HPC correctly and efficiently (e.g. adding 500 more nodes to a 500-node cluster and wondering why the performance increase is not 100% but only, say, 40%; the crappy Ethernet implementation of all Raspberries could even be an advantage here, since it better simulates scaling in large real cluster infrastructures).

A rack density of only 24 nodes per rack unit is somewhat laughable given how small Raspberries are, but that’s what you get when every Cluster Module wastes most of its space on air, Ethernet cabling, and multiple 48-port GbE switches. I wonder whether NanoPi Duos, which have real Fast Ethernet on pre-soldered pin headers, combined with custom baseboards using switch ICs, would not be much cheaper (while also showing idle consumption below 50% of an RPi 3’s).

tkaiser:

@blu
Quickly checked it. The NanoPi Duo is 25.4 x 50 mm in size, has SPI NOR flash for PXE booting, real Ethernet on pin headers, and can be powered through the pin header too. In a 2RU enclosure with 4 layers of these boards on custom baseboards using switch ICs, it should be possible to cram ~500 of them inside, proper cooling included. That’s at least 10 times the rack density of the Raspberry Pi attempt, provided Ethernet on pin headers (usually combined with a MagJack) and the ‘switch IC on cluster baseboard’ approach fit together.
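The density comparison in this comment is easy to sanity-check with a quick back-of-envelope calculation. Note that the 150-boards-in-6RU figure comes from the BitScope cluster discussion above, while the 500-boards-in-2RU figure is the commenter's rough estimate for a hypothetical NanoPi Duo enclosure, not a measured build:

```python
# Back-of-envelope rack-density comparison using the figures from the comments.
def nodes_per_rack_unit(nodes: int, rack_units: int) -> float:
    """Return how many compute nodes fit in one rack unit (1RU)."""
    return nodes / rack_units

# One BitScope Pi Cluster Module: 150 RPi 3 boards in 6 rack units.
rpi_density = nodes_per_rack_unit(150, 6)

# Hypothetical NanoPi Duo enclosure: ~500 boards in 2 rack units (estimate).
duo_density = nodes_per_rack_unit(500, 2)

print(f"RPi 3 Cluster Module: {rpi_density:.0f} nodes/RU")
print(f"NanoPi Duo estimate:  {duo_density:.0f} nodes/RU")
print(f"Density ratio:        {duo_density / rpi_density:.0f}x")
```

The ratio works out to 10x, which matches the "at least 10 times the rack density" claim, assuming the ~500-board packing estimate holds.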

tkaiser:

@cnxsoft
Never tried it myself, but reading through the Armbian forum ‘it just works’ (and has for a year now). There’s also nothing specific to the NanoPi Duo, since it’s the same as on every other H2+/H3 device that features SPI NOR flash (various Orange Pi boards, for example). Though AFAIK no board vendor currently ships with the SPI NOR flash already populated with a working u-boot config.

fossxplorer:

@theguyuk, thanks a lot. I watched all of the videos; very, very interesting to hear how closely they’ve worked with the ARM ecosystem.
@charbax, thanks a lot, you rock!