How to Reduce SD Card Firmware Images Download Size

So I’ve just received a Roseapple Pi board, and I finally managed to download the Debian and Android images from the Roseapple Pi download page. It took me nearly 24 hours to succeed, as the Debian 8.1 image is nearly 2GB in size and neither the Google Drive nor the Baidu download links were reliable, so I had to try a few times, and after several failed attempts it worked (mornings are usually better).

One way to work around this is to use more reliable hosting such as Mega, at least in my experience, but another way to reduce download time, and possibly bandwidth costs, is to provide a smaller image; in this case not a minimal image, but an image with the exact same files and functionality, just optimized for compression.

I followed three main steps to reduce the firmware size from 2GB to 1.5GB on a computer running Ubuntu 14.04, but other Linux distributions should work too:

  1. Fill unused space with zeros using sfill (or fstrim)
  2. Remove unallocated and unused space in the SD card image
  3. Use the best compression algorithm possible. The Roseapple Pi image was compressed with bzip2, but LZMA tools like 7z usually offer a better compression ratio

This can be applied to any firmware, and sfill is usually the most important part.

Let’s install the required tools first:
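
On Ubuntu 14.04, something along these lines should install everything needed, assuming sfill comes from the secure-delete package and 7z from p7zip-full:

sudo apt-get install secure-delete gdisk p7zip-full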


We’ll now check the current firmware file size, and uncompress it
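
Assuming the downloaded file is called debian_roseapple.img.bz2 (the actual file name will differ), that looks something like:

ls -lh debian_roseapple.img.bz2
bunzip2 -k debian_roseapple.img.bz2
ls -lh debian_roseapple.img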


Good, so the firmware image is 7.4GB. Since it’s an SD card image, you can check the partitions with fdisk:
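
For example, using the same hypothetical file name as above:

fdisk -l debian_roseapple.img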


Normally fdisk will show the different partitions with a start offset, which you can use to mount a loop device and run sfill. But this image is a little different, as it uses GPT. fdisk recommends using the gparted graphical tool, but I’ve found that gdisk is also an option:
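
Something like this lists the GPT partitions with their start and end sectors:

gdisk -l debian_roseapple.img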


That’s great. There are two small partitions in the image, and a larger 6.9 GB one starting at sector 139264. I mounted it, and filled the unused space with zeros once as follows:
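
A sketch of the procedure, with a hypothetical /mnt/firmware mount point; the sfill options shown here (fast mode, a single pass, zeros) are the sensible ones for this purpose, even if they may not exactly match what I originally typed:

sudo mkdir -p /mnt/firmware
sudo mount -o loop,offset=$((139264*512)) debian_roseapple.img /mnt/firmware
sudo sfill -f -l -l -z /mnt/firmware/
sudo umount /mnt/firmware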


The same procedure could be repeated on the other partitions, but since they are small, the gains would be minimal. Time to compress the firmware with 7z, using the same options I used to compress a Raspberry Pi minimal image:
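
The options were along these lines, i.e. maximum LZMA compression with a solid archive (the exact values may differ slightly from what I used back then):

7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on debian_roseapple.img.7z debian_roseapple.img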


After about 20 minutes, the result is a file about 500 MB smaller.


Now if we run gparted, we’ll find 328.02 MB of unallocated space at the end of the SD card image.
[gparted screenshot of the Debian firmware image showing the unallocated space]

Some more simple maths… The end sector of the EXT-4 partition is 14680030, which means the actual useful size is 14680030 * 512 = 7,516,175,360 bytes, but the SD card image is 7,860,125,696 bytes long. Let’s cut the fat further, and compress the image again.
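
One way to drop the unallocated tail is with truncate, using the size computed above, before running the same 7z command again (file names hypothetical):

truncate -s 7516175360 debian_roseapple.img
7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on debian_roseapple_truncated.7z debian_roseapple.img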


and now let’s see the difference:
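
For example, comparing the two archives (same hypothetical names as above):

ls -l debian_roseapple.img.7z debian_roseapple_truncated.7z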


Right… the file is indeed smaller, but it only saved a whopping 82,873 bytes, which is hardly worth it, and means the unallocated space in that SD card image must have already been filled with lots of zeros or other identical bytes.

There are also other tricks to decrease the size, such as clearing the package cache, running apt-get autoremove, and so on, but these are system specific, and do remove some existing files.
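
On a Debian-based image, that typically means running something like the following inside the target system, or in a chroot of the mounted root file system:

sudo apt-get clean
sudo apt-get autoremove --purge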


26 Replies to “How to Reduce SD Card Firmware Images Download Size”

  1. Those Chinese manufacturers should get a VPS somewhere and host those files with a simple lighttpd (or even better, FTP and rsync). I had the same problem with BananaPi and other Chinese vendors, they just don’t get it. Pan and Google Drive are free hosting, but they are not really friendly when it comes to command line downloads.

  2. Does the efficiency of the compression algo matter much once the empty space is all zeros? Surely it’s just a matter of representing one million zeros as “zero a million times” rather than 00000….

    So it seems like sfilling empty space with zeros ought to offer similar space savings, whatever the algo.

    (People may want to offer images using common but less efficient compression like zip; it would be a shame if they failed to use CNX’s method because they thought they’d HAVE to use 7zip…)

  3. @onebir
    The better compression algorithm does not help with the zeros, but with the existing files.
    So you could still use zip, rar, or whatever compression format you want. LZMA should decrease the size further.

    I have tried to compress the firmware (after running sfill) with gzip.
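
    With maximum compression, i.e. something along the lines of:

    gzip -9 -k debian_roseapple.img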

    So it’s still smaller than the original firmware (bzip2), but still ~400 MB larger than with lzma.

    with bzip2:
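
    That is, something like:

    bzip2 -9 -k debian_roseapple.img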

    A bit smaller, but still ~300MB larger. Maybe I did not use optimal settings, but it looks like it might still be worth using 7z or xz.

  4. I’ve been recommended to use fstrim instead of sfill @ https://plus.google.com/u/0/110719562692786994119/posts/ceb8FeSVPAF

    It is a little bit faster, and the resulting file is a tiny bit smaller (~2.4MB smaller).
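
    On the same loop-mounted partition as above, that would be something like this (the loop driver needs to support discard for fstrim to have any effect):

    sudo mount -o loop,offset=$((139264*512)) debian_roseapple.img /mnt/firmware
    sudo fstrim -v /mnt/firmware
    sudo umount /mnt/firmware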

  5. You should check lrzip. It can compress some types of files very well at the cost of very high RAM usage during compression.

  6. They should distribute by torrent. This is what torrent is really excellent for.

    I’m also surprised that the install is 7GB … that must be with every friggin feature enabled!

  7. Once again, before no-check-certificate there must be 2 hyphens –
    something is wrong with this comment system, because it always removes one

  8. One solution is also to use LZ4

    It can compress at very high speed (on an SSD even at 400MB/s) and the receiving user can decompress the archive file even faster (the bottleneck will only be disk speed).
    The compression ratio is much worse than LZMA, but you can compress an 8GB file in less than 30 secs with LZ4 on a machine with a SATA2 SSD (faster with SATA3), where LZMA needs about 20 minutes.
    The bottleneck for LZ4 is disk speed, and for LZMA CPU/RAM speed.

    And if you need a higher compression ratio, you can recompress the LZ4 archive with LZMA.
    This gives you a compression ratio closer to native LZMA, but with a compression time 1.5-2.0 times lower (even faster, like 3-4x, with highly compressible data such as many all-zero blocks).
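
    In practice that would be something like (file names just for illustration):

    lz4 -1 firmware.img firmware.img.lz4
    7z a -t7z -m0=lzma -mx=9 firmware.img.lz4.7z firmware.img.lz4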

    An example:

    8GB Odroid C1 OpenELEC 6.0 image from a recycled SD card, without the fill-with-zeros operation, so many unused blocks are not zeroed, and this decreases the compression ratio since the compression algorithm must compress and keep useless data.

    Original image: 7,948,206,080 bytes
    LZMA (7-Zip defaults) compressed: 2,586,505,436 bytes in 1088 secs (about 18 min)
    LZ4 (level-1) compressed: 3,762,908,774 bytes in 17-18 secs
    LZ4 (level-9) compressed: 3,488,373,848 bytes in 138 secs
    LZ4-1+LZMA compressed: 2,993,996,128 bytes in 784 secs (about 13 min, lz4 time inclusive)
    LZ4-9+LZMA compressed: 2,833,559,090 bytes in 794 secs (about 13 min, lz4 time inclusive)

  9. @wget
    It’s not ideal, but to add code / shell commands in comments or posts, use the preformatted “pre” style, for example:

    without \.

  10. @cnxsoft
    Maybe I missed something, but I assume the actual data is needed, and it stays in place after sfill or fstrim.
    The way I use dd is to mount the partition and then write a big file full of zeros. After dd dies because there is no free space left, this big file is deleted.

    But like I wrote maybe we are talking about two different things?

  11. @Peter
    Yes, we must be talking about different things.
    What I needed to do is set the unused sectors of a mounted file system to zero, so there’s both useful data (files and directories) and unused sectors (aka free space). And I need to set all that free space to zero in order to improve the compression ratio, while still keeping the files and folders in the file system.

  12. @cnxsoft
    But then we ARE talking about the same thing.

    Let me make a practical example. I make an SD card and run it, making a few customizations on it. Because of this, some of the space gets used for temporary files. After I’m done I delete all those files. The files are gone, but the sectors on the card are still occupied with the old data. And if I make an image from the card with the dd command and compress it, all those unused bytes are still carried along in the image.
    But if I create a big file full of zeros after I have removed all the unneeded files, and then delete this big file, there will not be any sign of the temporary files in the unused sectors anymore. They will only contain zeros. And the compression ratio of the final image is better.

    Some time back I also read an article (maybe here) about writing such big SD card images with some clever techniques. First a program creates the image file and actually removes all the zeros. Of course it also creates a metadata file with information about what was removed. So the final image file is actually only the useful data, meaning a 32 GB card image can be only a few GB in size. The writing program then writes only the useful data to the SD card – it skips the zeros.

  13. @Peter
    OK. I guess I now understand what you mean. So you’d type something like
    dd if=/dev/zero of=big_file

    To fill all free space with zeros, and then delete that big file. That would work too.
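
    In full, that would be something along these lines (with a hypothetical /mnt/firmware mount point):

    dd if=/dev/zero of=/mnt/firmware/big_file bs=1M
    sync
    rm /mnt/firmware/big_file

    dd will eventually stop with a “no space left on device” error, which is expected here.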

  14. I agree with @notzed: use BitTorrent, as it’s resilient against lossy connections.

    Furthermore: I really wonder why Raspi / Raspbian does not provide a 300-500MB base image

  15. Fabry :
    One solution is also to use LZ4
    It can compress at very highspeed (on SSD even at 400MB/s) and the receiving user can decompress the archive file even faster (the bottleneck will be only disk speed)
    Compression rate is much worst than LZMA but you can compress an 8GB file in less than 30 secs with LZ4 on machine with SSD Sata2 (faster with Sata3) where LZMA needs about 20 minutes.
    The bottleneck for LZ4 is disk speed and for LZMA Cpu/Ram speed.

    Combining lz4 with lzma, this is a *great* idea
    (although lz4 -9 is enough for my needs)
