OpenWrt 21.02 released with WPA3, HTTPS, TLS enabled by default

OpenWrt 21.02 has just been released with improved security thanks to WPA3, HTTPS & TLS being enabled by default, as well as initial support for the Distributed Switch Architecture (DSA), the Linux standard for configurable Ethernet switches.

OpenWrt is the most popular open-source Linux distribution for routers and entry-level Linux-capable embedded systems, and the latest release includes over 5800 commits since the release of OpenWrt 19.07 in January 2020.

WPA3 was already supported in OpenWrt 19.07, but not enabled by default. OpenWrt 21.02 changes that, together with TLS support thanks to trusted CA certificates from Mozilla. That means the LuCI interface, wget, and the opkg package manager all support HTTPS out-of-the-box. Note that HTTPS redirection can be disabled for LuCI in the configuration files. Another security change is that SELinux is now supported by OpenWrt, but not enabled by default.
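For those who prefer plain HTTP on the LAN, the redirect is a uhttpd setting; a sketch of the relevant /etc/config/uhttpd excerpt (the option name is taken from current uhttpd documentation, shown here as an assumption rather than verified against 21.02 defaults):

```
# /etc/config/uhttpd (excerpt) -- sketch, not a full configuration
config uhttpd 'main'
        option redirect_https '0'   # '1' redirects HTTP to HTTPS, '0' disables the redirect
```

Restarting uhttpd (e.g. via its init script) is needed for the change to take effect.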

OpenWrt 21.02’s DSA implementation replaces the current swconfig system, but not all targets have been ported, so some are still using swconfig. Since the two solutions are very different, a system upgrade will not be able to convert an existing swconfig configuration to a DSA configuration.
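For targets that have been ported, switch VLANs move from swconfig’s switch_vlan sections to bridge VLANs in /etc/config/network. A rough before/after sketch (port names such as 'lan1' are device-specific and used here as placeholders):

```
# swconfig style (old)
config switch_vlan
        option device 'switch0'
        option vlan '1'
        option ports '0 1 2 3 6t'

# DSA style (new) -- ports are now regular interfaces attached to a bridge
config device
        option name 'br-lan'
        option type 'bridge'
        list ports 'lan1'
        list ports 'lan2'

config bridge-vlan
        option device 'br-lan'
        option vlan '1'
        list ports 'lan1'
        list ports 'lan2'
```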

The new release also updates the syntax of configuration files including board.json. OpenWrt 21.02 will still support the old convention and the LuCI interface can migrate your config automatically to the new syntax.

Various packages have been updated, with OpenWrt relying on Linux 5.4.143, busybox 1.33.1, and gcc 8.4.0, and the operating system switched from mbedTLS to wolfSSL as the default SSL library. Both mbedTLS and OpenSSL can still be installed manually. New hardware targets have been added for Realtek, Broadcom (bcm4908), and Rockchip RK33xx, which should be good news for Rockchip RK3328 and RK3399 boards such as NanoPi R2S, Rock Pi 4, and Pine64 RockPro64, which are already supported, and hopefully others like Orange Pi R1 Plus will be added to the list.

Getting new features and more security is always nice, but it does come at the cost of higher requirements. OpenWrt 19.07 already upped the system requirements to 32MB RAM and 4MB storage, but OpenWrt 21.02 increases that to 8 MB flash and 64 MB RAM, and the developers even recommend 16MB flash and 128MB RAM if you intend to install extra packages. It’s still possible to build OpenWrt 21.02 for systems with 4MB flash and 32MB RAM, but stability cannot be guaranteed, as stated in the 8/64 warning page:

Insufficient RAM for stable operation

32 MB RAM is already deprecated. You will run into issues with an up to date OpenWrt version.
64 MB RAM may have some issues with stability, depending on your hardware and use cases, although it is enough for basic usage
128 MB RAM or more is recommended if software past basic router/AP functionality is to be used

If you’d like to go ahead, upgrading from 19.07 to 21.02 is possible, but not from OpenWrt 18.06. Configuration files will be preserved in most cases, and if you’re using swconfig the system may refuse to upgrade due to the new DSA settings. In that case, a new installation is the only option, and you can find images for your target on the download page.
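The upgrade itself is typically done with the sysupgrade tool; a sketch, with the image filename being a placeholder rather than a real release artifact:

```
# Copy the sysupgrade image for your exact device to /tmp first, then:
sysupgrade -v /tmp/openwrt-21.02.0-<target>-<device>-squashfs-sysupgrade.bin
# Add -n to discard the existing configuration instead of keeping it.
```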

More details may be found in the announcement.

Via Linuxiac


28 Replies to “OpenWrt 21.02 released with WPA3, HTTPS, TLS enabled by default”

    1. I suppose you meant to refer to Tina Linux (OpenWrt based) instead of Melis (RT-Thread). But anyway, it may indeed take in several years as you pointed out.

  1. Installed it on my “old” and unsupported TP-Link (by the company) gear that I use as APs around the house. It’s still very much a “geeky” OS that requires that you know what you’re doing, as there are no simple “click and forget” type settings for AP/bridge type modes.
    At least the UI is better than it used to be, even though it’s still super plain and basic.

  2. As requirements continue to grow, and I tend to trust them to be really careful on RAM and flash usage, it would be nice to get an estimate of the breakdown between kernel, userland, certain libs etc that’s responsible for the increase. 20 years ago I used to deploy full-featured load balancers running on Red Hat 5.2 (not RHEL) that were taking thousands of concurrent connections in 128 MB RAM. And Red Hat was already considered quite fat an OS by then. We’ve also deployed quite a bunch of antivirus+antispam mail gateways running off a 64MB flash and 128MB RAM. Nowadays we’re saying “64 is OK to start the UI but not much more”. And all this with a significantly less user-friendly OS, so what have we lost in between? OK, having parts of the rootfs in RAM doesn’t help but it takes less than the flash size anyway. My old 386sx firewall that used to comfortably run kernel 2.2/libc5 off a 16 MB compactflash and 8 MB RAM is totally unfit for its original purpose nowadays 🙂

    1. About “thousands of concurrent connections in 128 MB RAM” – is it 2-5 MBit video streams like nowadays?
      How does your old firewall deal with a gigabit network + a 100-200 MBit internet connection? You know that if OpenWrt does not support hardware NAT offload on a router, almost all the multi-core CPU will do is software NAT?
      And then you want to add some VPN connections…

      1. > About “thousands of concurrent connections in 128 MB RAM” – is it 2-5 MBit video streams like nowadays?

        Video streams with a packet size close to Ethernet MTU compared to way more packets 20 years ago? Just look at how IMIX has been defined back then with the typical today’s packet size being not even 10% of the mix.

      2. > About “thousands of concurrent connections in 128 MB RAM” – is it 2-5 MBit video streams like nowadays?

        This is irrelevant given that only CPU is affected by bandwidth, not the RAM. Assigning only 64MB RAM to the TCP stack results in 64kB windows for 1k concurrent connections, which is sufficient for 5 Mbps per stream thus 5 Gbps of traffic. In my case it was only web traffic, up to 1k connections per second and less than 300 Mbps of traffic. Something that a modern fanless CPU would barely notice.

        Regarding my old 386 firewall, it was much more limited, it had a 10 Mbps NIC on one side and a modem on the other one. But again, that’s irrelevant to the software size! Actually nowadays the software could be smaller because I wouldn’t need pppd anymore 🙂

          1. I mention all those things to justify every piece of the system requirements increase in the new release.

          A lot of traffic = CPU usage
          OpenVPN = a big dependency list which eats a lot of flash.
          Fancy web UI eats up RAM, default HTTPS enabled also pulls a lot of dependencies.
          Also IIRC WPA3 itself pulls in the full wpad instead of wpad-mini because of SSL dependencies.

          In the original release post OpenWrt has a link to a special guide for squashing your image – if you drop the web UI, WPA3 and OpenVPN it can fit in 4MB flash freely. On one AP I even dropped the package manager 🙂

          And if you use the image builder it’s really simple to “build” your own – just a few minutes, as almost everything is already compiled and just needs to be packed into a file according to the package list provided.
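          The Image Builder flow mentioned above looks roughly like this – the profile and package names are illustrative placeholders, not a tested configuration:

```
# Unpack the Image Builder archive for your target, then:
make info                                     # list available device profiles
make image PROFILE="<your_device_profile>" \
     PACKAGES="-luci -ppp -odhcpd-ipv6only"   # a leading '-' removes a default package
```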

          1. > I mention all those things to justify every piece of system requirements increase in new release.

            Note, I’m not contesting a normal increase, I’m just saying that the announcement seems to be much more dramatic.

            > OpenVPN = big dependencies list which eat a lot of flash.

            Not that much, ~1MB uncompressed or ~400kB on the resulting squashfs image, thus no more than 400kB RAM once loaded.

            > Fancy web UI eats up RAM,

            Not that much. LuCI is written in Lua, which was a great choice as it permits extremely compact high-level code and is extremely frugal even in terms of memory (keep in mind that Lua runs applications in the 80kB of an ESP8266).

            > default HTTPS enabled also pulls a lot of dependencies.

            The SSL library is big for small systems, but it alone does not justify saying that 64MB RAM is too limited. Similarly, you can count on a few hundred kB of squashfs, not more.

            > Also IIRC WPA3 itself pulls in the full wpad instead of wpad-mini because of SSL dependencies.

            Possible. But all of this justifies 1 or 2 extra MB of rootfs, which can be critical to run on a 4MB or 8MB flash device, but is usually still OK for 16MB ones, and will not require more than 64MB RAM. Also, 64MB RAM seems to be a common limit due to the physical addressing limit of many embedded SoCs.

          2. Practically there is no flash size between 4MB and 8MB. I am not aware of any device with 6MB flash for example. In OpenWrt 19.07 the free space on the 4MB flash was almost fully used, now there are still some MBs free on a 8MB flash chip.

    2. Keep in mind that routers don’t work like computers; they load an image from the flash into RAM, so if you have an 8MB compressed OS image in flash, that can take 2-3x that in RAM, plus you still need RAM to execute things in, buffers, cache and so on.
      This means that 128MB of RAM ends up being a lot less than you think.

      1. That’s exactly how I’m making our appliances 🙂 However with squashfs nowadays the image doesn’t need to be uncompressed to RAM anymore, it’s kept compressed in RAM, which is why squashfs was adopted by everyone before even being merged in mainline!

        Fortunately I do still have machines running perfectly fine off 64MB RAM / 16MB flash but I know this is getting more limited than it used to. Having tried to fit a 5.12 kernel into a SAM9G20’s 4MB NAND partition made me realize that it’s not possible anymore to have a tiny kernel. Mine used to fit along with their rootfs on a single floppy…

    3. Just had a look at the available memory in my RE450, which is a 64MB device. ~32-33MB is in use, of which 9MB is cache and 3MB is buffers. As only 57MB is available to the system, that only leaves about 18-19MB. As this is just a “dumb” AP, it doesn’t really matter, but for those wanting to do a bit more with their device, I can see this being an issue.
      It’s also worth keeping in mind that the Linux kernel itself has gotten a lot bigger in 20 years.

      1. > It’s also worth keeping in mind that the Linux kernel itself has gotten a lot bigger in 20 years.

        Yep that was my observation as well. Over time a lot of features oriented for scalability or runtime reconfiguration got merged without the ability to turn them off when not needed. E.g. sysfs is cool, but it doesn’t come for free. A long time ago you would pass your driver’s arguments on the boot command line, and this was already considered as an improvement above patching+recompiling…

        For example the kernel regularly patches itself on the fly to insert/remove hooks in existing code. This is quite expensive as well as it requires to store a lot of pointers.

        We’d need someone to restart a long-time work like Tom Rini did 20 years ago with his “tiny” patchset. It was useful to spot some places that deserved some improvements.

        1. I think that ship has sailed. With even DDR3 starting to get phased out from new low-end products in favor of LPDDR4, and anything older than DDR3 only used in existing designs, there is near zero commercial interest in optimizing future kernels for low memory systems.

          There is still value in reducing the bloat, but the best I’d hope for is to slow it down or maybe occasionally have a kernel that is smaller than the previous version if someone has a clever idea, but not to get back to kernel sizes from several years earlier.

          1. I think this might be why some vendors have taken a 4.x kernel and basically stuck with it and will stay with it until the end of time.
            In the dashcam space it seems like there are some Cortex A53 chips with 64 or 128MB of integrated DDR. I think they’ll be running an RTOS or a 4.x kernel.

          2. I do not think that they did not upgrade the major kernel version because it needs more RAM. There is no huge increase in the last years; the kernel just needs ~5% more RAM per year. If you activate more new features it gets more. The old kernel probably already fulfilled all the customer requirements the marketing department is aware of. If the customers need a more recent SW stack they should use the next generation hardware.

          3. >I do not think that they did not upgrade the
            >major kernel version because it needs more RAM.

            There are situations where it’s impossible to upgrade as it’s no longer possible to get the kernel image into the storage anymore.

            >There is no huge increase in the last years,
            >the kernel just needs ~5% more RAM per year.

            That’s not so good, because you might have something running for years and it doesn’t just grow 5% more RAM each year.

            >If the customers need a more recent SW stack they
            >should use the next generation hardware.

            You mean we should just let devices with old insecure kernels rot out there?

          4. The flash is now shifting from SPI NOR to SPI NAND also in the low end. Parallel NAND will go away. The cheapest SPI NAND chip you can buy probably has 128MByte, I haven’t seen smaller chips. I do not know how the SPI NOR and SPI NAND prices compare, but they are probably similar. The Router SoCs also can directly boot from SPI NAND. I haven’t seen much eMMC, probably only used in very high end devices.

        2. “We’d need someone to restart a long-time work like Tom Rini did 20 years ago with his “tiny” patchset. It was useful to spot some places that deserved some improvements.”

          I sort of wish it was easier to remove all the support in drivers for hardware you don’t have. Like filtering the compatible strings in an OF match table would stop code for everything else going in.

          For example if you compile the macb ethernet driver you compile in all of the weird hacks needed for all of the different versions of it even if you only have the original really old AT91 version of it. If you are compiling a kernel for an old AT91 machine chances are you don’t have support for any other machines enabled and your kernel will never run on hardware that needs gigabit support, hacks for zynq etc.

          I think maybe the simple panel driver for simple LCDs is another good example. I think if you build that you get parameters for tens of LCD panels you’ll never ever use and I don’t think there is a way to filter them at all.

          But I can’t imagine how to do it without a ton of ugly #ifdef/#endif all over the place and we’re talking about saving a few K here and there so probably no one cares.

          1. This is a very good point. We all know that these drivers require lots of hw-specific code in plenty of areas. One of the benefits of moving from the old “board” model to the DTS one was to remove config options dedicated to certain boards and have more generic kernels. But this comes at the cost that you mention, i.e. you also get everything that you don’t need. That reminds me of the crypto subsystem that you cannot remove because it’s used here and there from various drivers and other subsystems. In the past it was still possible to build without.

            Removing support for all this unused stuff nowadays would require so much work that only the end user could do it to tailor the driver to their own hardware. But I admit that in some cases it could be nice to fall back to the “no acceleration” code (i.e. drop everything required to support SG/csum/GSO/GRO/TSO/etc in network drivers), which usually turns the code back to the default macb equivalent. It’s just not what everyone wants either :-/

          2. You can get a Wifi router with 16MB flash (SPI NOR) + 128MB RAM for 17 Euro including taxes and free shipping on Amazon.de, see “Xiaomi Mi Router 4A”. Optimizing for smaller memory does not make much sense any more. If a vendor adds 16MB flash and 128MB RAM instead of 4MB + 32MB it probably makes the product only 0.05$ more expensive or even less. The vendor SDKs also need more memory and the SoC vendor reference boards also use more memory now. Someone told me DDR3 128MB and 256MB chips have more or less the same price.

          3. >You can get a Wifi router with 16MB flash (SPI NOR)
            >+ 128MB RAM for 17 Euro including taxes
            >and free shipping on Amazon.de, see “Xiaomi Mi Router 4A”.

            Great. The problem is lots of people don’t care about that specific router. A lot of people care about whether they’ll be able to get a newer kernel into the tiny partition they have on a 16MB SPI NOR in devices they already have in the wild.

            >Optimizing for smaller memory does not make much sense any more.

            Sure it does. Making sure the kernel can still do things it’s always done without ballooning without doing anything more is a good idea. I’m not a do everything in C, what’s this python thingy? Seems like it’s for kids type. I care more about things like adding stuff to the kernel that is great for Facebook, Google etc in their data centres but is totally useless to almost everyone else yet can’t be turned off type of stuff. Fortunately I don’t think a lot of stuff like that has gone in.

            Keeping stuff tight helps everyone by the way. Here’s an example for you:

            The current device tree code reserves 128 characters for the compatible string in of_device_id that is used by drivers to advertise to the rest of the kernel what hardware they can handle.
            AFAIK there are no 128 character compatible strings. The longest is 62 characters. So for every version of some hardware your kernel supports you’re wasting ~66 bytes. Doesn’t sound like a lot but consider how many drivers there are and how every ARM vendor screws up IP blocks on purpose so need multiple custom compatible strings…
            Then in the same of_device_id structure there are two 32 byte fields for name and type that apparently nothing uses. So 64 dead bytes.
            For a bare minimum ARMv7 kernel with only the required drivers compiled in adding hacks to optimize of_device_id saves >70K.

            70K is nothing for a 4GB+ 64bit system… but surely even big systems like that don’t appreciate moving stuff into CPU caches that is never accessed.

          4. At some point the cost of smaller sizes of RAM/Flash gets more expensive than bigger because if the market demand is low for the smaller capacity then the product is more scarce. I have known devices that we had to upgrade to bigger specs than we needed just because the materials got more expensive.
            So it’s kind of the opposite, keeping to lower spec memory can start costing you money as a business if your design is obsolete.

  3. The feature I’ve been most looking forward to with this new release is that the dawn package is now available in the main package repository. Where you have multiple access points, it can kick a client so it changes to a different one when moving around the house.

  4. And this now that I just installed LEDE 17 on my old DIR-600b1?

    Actually a 4/32 device, and just needing it as a dhcp during commissioning, no internet.

    But I tried 21.02 rc3 on a Zyxel NBG-419v2 (8/64) and ended up in a boot loop; I’ll have to try the release when I’m home.

    Next adventure will be the AX3600?
