GNUBee Personal Cloud 2 is a DIY NAS Supporting up to Six 3.5″ SATA Drives (Crowdfunding)

GNUBee Personal Cloud 1 is a DIY NAS powered by a MediaTek MT7621A MIPS processor that supports up to six 2.5″ SATA drives, and runs free and open source software. It was first introduced in March of this year through a CrowdSupply campaign.

The developers are now back with GNUBee Personal Cloud 2 (GB-PC2), with pretty much the same features, but instead of being designed for 2.5″ drives, it supports up to six 3.5″ drives, which should offer either more capacity, or a lower total price for an equivalent capacity.

GB-PC2 NAS specifications:

  • Processor – MediaTek MT7621A dual core, quad thread MIPS processor @ 880 MHz, overclockable to 1.2 GHz
  • System Memory – 512 MB DDR3 (the maximum supported by the MT7621)
  • Storage – SD card slot tested up to 64 GB, 6x 3.5″ SATA HDDs or SSDs (RAID 0 or 1 under LVM or MD, or Linux MD RAID 10, is recommended; see the sketch after this list)
  • Connectivity – 3x Gigabit Ethernet
  • USB – 1x USB 3.0 port, 2x USB 2.0 ports
  • Serial port – 3-pin J1 connector or 3.5 mm audio-type jack
  • Misc – 2x mainboard fans
  • Power – 12 VDC @ 8A via 5.5 mm x 2.1 mm, center-positive barrel jack
  • Dimensions – TBD
  • Weight – ~454 g (without drives)
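
For readers new to mdadm, here is a minimal sketch of the recommended Linux MD RAID 10 setup across all six bays. The device names /dev/sda through /dev/sdf are assumptions (check lsblk on your own unit first), as is the rest of the script; it simply expects mdadm to be installed, e.g. on the Debian image.

```python
#!/usr/bin/env python3
"""Minimal sketch: build a Linux MD RAID 10 array from the six SATA bays.

Assumptions (not from the article): the drives enumerate as /dev/sda
through /dev/sdf, mdadm is installed, and this runs as root.
Warning: destroys any existing data on the drives.
"""
import subprocess

DRIVES = [f"/dev/sd{c}" for c in "abcdef"]  # hypothetical device names

def create_raid10(md_dev="/dev/md0"):
    # RAID 10 over six drives: three drives' worth of usable capacity,
    # every block mirrored once and striped across the mirror pairs.
    subprocess.run(
        ["mdadm", "--create", md_dev, "--level=10",
         f"--raid-devices={len(DRIVES)}", *DRIVES],
        check=True,
    )
    # Record the array so it reassembles on boot.
    scan = subprocess.run(["mdadm", "--detail", "--scan"],
                          check=True, capture_output=True, text=True)
    with open("/etc/mdadm/mdadm.conf", "a") as conf:
        conf.write(scan.stdout)

if __name__ == "__main__":
    create_raid10()
```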

They have also added an extra Gigabit Ethernet port for a total of three, and the NAS is obviously larger and heavier than the previous model, and requires a beefier power supply. The device can currently run Debian, OpenMediaVault, LEDE, or libreCMC, with all documentation, schematics, and source code to be released on GitHub.

The new GB-PC2 model has also been launched on CrowdSupply with a $45,000 funding target. The GnuBee PC2 Starter Kit, with two anodized aluminum side plates, six threaded brackets and bracket screws, and 24 drive mount screws, requires a $249 pledge. However, you may want to spend $10 more on the Deluxe Kit (Early Bird), which adds the power supply, an SD card with the firmware image, and a USB-to-UART adapter cable. Shipping is free to the US, but adds $20 for the rest of the world, with delivery planned for December 31, 2017. Further details may be found on the GNUBee website.


20 Replies to “GNUBee Personal Cloud 2 is a DIY NAS Supporting up to Six 3.5″ SATA Drives (Crowdfunding)”

  1. Imagine having spindle HDDs with an SSD acting as cache with dm-cache or bcache. Now imagine a Beowulf cluster of these 🙂
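
    A minimal sketch of that SSD caching idea with bcache follows. Everything in it is hypothetical: /dev/md0 as an array of spinning disks (backing device) and /dev/sde1 as an SSD cache partition are made-up names, not anything from the article.

```python
#!/usr/bin/env python3
"""Sketch of an SSD cache in front of spinning disks using bcache.

Hypothetical devices: /dev/md0 as the spinning-disk backing device and
/dev/sde1 as an SSD cache partition. Needs bcache-tools installed; run
as root. Warning: destroys existing data on both devices.
"""
import subprocess

BACKING, CACHE = "/dev/md0", "/dev/sde1"  # hypothetical devices

def run(*cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

run("make-bcache", "-B", BACKING)   # format the backing device
run("make-bcache", "-C", CACHE)     # format the SSD as a cache set

# Find the cache set UUID and attach it to the new bcache device
# (udev should have created /dev/bcache0 from the backing device).
cset_uuid = next(line.split()[-1]
                 for line in run("bcache-super-show", CACHE).splitlines()
                 if line.startswith("cset.uuid"))
with open("/sys/block/bcache0/bcache/attach", "w") as f:
    f.write(cset_uuid)
```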

  2. Tesla :
    Imagine having spindle HDDs with an SSD acting as cache with dm-cache or bcache. Now imagine a Beowulf cluster of these 🙂

    I don’t know, there’s a lack of balance somehow:
    – I love dm-cache/bcache, but I doubt the CPU can cope with it that well
    – To avoid a bottleneck on SATA and keep all slots free for magnetic disks, maybe a lot more RAM could have done the trick
    – But all in all, 3x GigE (actually only 2 can be bonded) will throttle your throughput to less than half of what 6 disks could take in… So yeah, a RAM-backed cache with a bit more RAM would have been great (a bonding sketch follows this comment)

    Could be wrong or wanting too much out of it, dunno

    and two SFP+ slots, and another CPU… Yeah ok, got away from the target 🙂
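
    A rough sketch of bonding two of the three GbE ports with iproute2 follows. It assumes the ports show up as separate eth1/eth2 interfaces (hypothetical names; the MT7621's internal switch may complicate this) and a switch configured for LACP. Even bonded, roughly 2 x 117 MB/s of usable wire speed is well below the ~900 MB/s six 3.5″ disks could stream, which is the imbalance the comment points at.

```python
#!/usr/bin/env python3
"""Sketch of bonding two GbE ports with iproute2 (LACP / 802.3ad).

Assumptions: the ports appear as separate eth1/eth2 interfaces
(hypothetical names) and the switch on the other end speaks LACP.
Run as root.
"""
import subprocess

SLAVES = ["eth1", "eth2"]  # hypothetical interface names

def ip(*args):
    subprocess.run(["ip", *args], check=True)

ip("link", "add", "bond0", "type", "bond", "mode", "802.3ad")
for iface in SLAVES:
    ip("link", "set", iface, "down")            # slaves must be down to enslave
    ip("link", "set", iface, "master", "bond0")
ip("link", "set", "bond0", "up")
```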

  3. @Tesla
    Seems you have practical experience with the MT7621A and storage performance (since you're thinking about bcache and SSDs — I found CPU utilization to be somewhat of a showstopper when playing with this a while ago on a quad core ARM thingie). Unfortunately I’ve not seen any useful numbers so far (applies to both network and storage — only some shoddy single 2.5″ HDD dd numbers). Can you provide some benchmark results?

    Speaking about performance: I quickly checked the github.com/gnubee-git repos and came across an OMV installer lacking all the stuff that’s needed, at least on ARM devices, to perform ok-ish or even great as a NAS (and I would believe it’s the same with such MIPS SoCs, so maybe the GNUBee folks might want to look into github.com/armbian/config/blob/dev/softy for Samba and OMV installation tweaks, and into Armbian’s armhwinfo service for ondemand cpufreq governor and IRQ affinity tweaks)
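
    For illustration, a sketch of the kind of governor and IRQ affinity tuning alluded to above. It assumes the kernel exposes a cpufreq driver for the SoC, and the Ethernet IRQ number used here is made up; read the real one from /proc/interrupts. This mirrors the style of tuning Armbian's scripts apply, not GNUBee's actual setup.

```python
#!/usr/bin/env python3
"""Sketch of ondemand governor plus IRQ affinity tuning via sysfs/procfs.

Assumptions: a working cpufreq driver, and IRQ 22 for the Ethernet
controller is hypothetical (check /proc/interrupts). Run as root.
"""
GOVERNOR = "ondemand"
ETH_IRQ = 22      # hypothetical IRQ number
CPU_MASK = "2"    # hex bitmask: pin the IRQ to CPU1, away from CPU0

with open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w") as f:
    f.write(GOVERNOR)

with open(f"/proc/irq/{ETH_IRQ}/smp_affinity", "w") as f:
    f.write(CPU_MASK)
```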

  4. @JaXX
    Bonded Ethernet is not that common. Moreover, the 8A power supply requirement is silly. They should have provided software controlled disk spin-up. The drives might draw 8A while powering up, but once up and running, the system might use 2 to 5 A. I’ve seen some ridiculous setups with huge 1000W power supplies where in practice the system draws 100W with 12 disks.

  5. Somewhat disturbing: while checking GB-PC1 resources I found a screenshot showing OMV 3.0.26 running with a 3.10.14 kernel (OMV 3.0.26 is 4 months old, and back then the 3.10 LTS kernel was at 3.10.105 or even higher), and then I found GNUBee developers talking in GitHub issues about a new ‘firmware’ (a BLOB? Seriously?) with a 4.4.52 kernel. What about a mainline kernel and building an OS image from scratch? Seems I don’t fully understand the FLOSS aspect of the device…

  6. @Jerry
    Good luck trying staggered spin-up with the ASM1062 and average disks without hardware mods (but I would assume the MT7621 has enough free GPIOs to control the power provided to the disks with MOSFETs or something like that). But I second your concern, since almost all PSUs operated way below their maximum rating are pretty inefficient.
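
    Purely hypothetical, but if each drive's 12 V rail were switched by a MOSFET on a free GPIO (the hardware mod described above, which the board does not have), staggered spin-up could look like the sketch below. The GPIO numbers are made up and the legacy sysfs GPIO interface is assumed.

```python
#!/usr/bin/env python3
"""Sketch of software-staggered drive spin-up via GPIO-switched power.

Hypothetical hardware mod: each drive's 12 V rail behind a MOSFET driven
by a free MT7621 GPIO; legacy sysfs GPIO interface assumed enabled.
GPIO numbers are made up. Run as root.
"""
import time

DRIVE_GPIOS = [480, 481, 482, 483, 484, 485]  # one made-up GPIO per drive

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

for gpio in DRIVE_GPIOS:
    write("/sys/class/gpio/export", str(gpio))
    write(f"/sys/class/gpio/gpio{gpio}/direction", "out")
    write(f"/sys/class/gpio/gpio{gpio}/value", "1")  # power up one drive
    time.sleep(3)  # let it spin up before switching on the next one
```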

  7. @tkaiser
    NAS disks aren’t that much more expensive. Besides, they’re much better suited for NAS use. One example is failing quickly when bad blocks can’t be read. Btrfs will be screwed when it encounters errors with desktop disks.

  8. Jerry :
    Btrfs will be screwed when it encounters errors with desktop disks.

    Are you really talking about btrfs and not (md)raid?

    Anyway: my main concern with these GNUBees, as well as with designs like the (IMO better suited) Helios4, is that they encourage clueless people to do stupid things (especially playing RAID at home). I’ve been somewhat active in the OMV community over the last few months but have to stop, since it makes me too sad. Every other day there’s the usual ‘Oops! My whole array has gone! Of course I don’t do backups, so how do I get my data back?’ thread, combined with threads where users discuss ‘data loss made easy’ recipes (blindly trusting things to work as expected, adding scary additional complexity layers to an already fragile setup consisting solely of unreliable hardware, and most importantly never testing anything, especially the important stuff)

  9. Why not a proper enclosed case (with a fan to keep things cool) to keep the dust out and other things (e.g. cables, screwdrivers, screws, whatever) from falling into the array?

  10. @TonyT
    Well, I prefer the usual definition and talk about NAS in contrast to DAS (network attached vs. directly attached storage), keeping in mind that there also exists something in between that we called ‘SAN’ ages ago to differentiate it from the other two. A SAN is a network attached block device, while a NAS uses dedicated (multi-user capable) protocols to access data on a networked device, and DAS is always a directly attached block device (oversimplifying, of course).

    With DAS and SAN, the host accessing these things has to put its own filesystem on them, while with a NAS the host uses a foreign and unknown filesystem over specialized protocols abstracting this or that (unfortunately ‘this or that’ is an honest description if you want to cover everything from the SMB/CIFS of 20 years ago up to today’s reality).

    Anyway: I love backups, since I deal with data losses for a living. And I use both DAS devices and NAS boxes for this purpose. Just not multi-disk setups that encourage you to put all your data in a single place, but small ARM devices sitting here and there (at parents’ and friends’ places, in my city and others).

    On all these devices a cron job pings macbookpro-tk.local every few minutes, and if there’s an answer, Netatalk is started, the HDD spins up, and Time Machine does its job (sketched after this comment). To be honest: all those NAS boxes without ECC DRAM are only used as backup targets, since I not only love data protection/safety but also data integrity (so anything relevant gets archived from time to time on an x64 ZFS box with ECC DRAM and RAIDZ2 to fight ‘bit rot’).

    Besides that, if you had written ‘RAID is NOT backup’ I would have fully agreed. 🙂
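
    A minimal sketch of the ping-triggered cron job described above. The hostname comes from the comment itself; the assumption beyond that is Netatalk packaged as a service named 'netatalk', with the script invoked from cron every few minutes as root.

```python
#!/usr/bin/env python3
"""Sketch of the ping-triggered Time Machine target described above.

Assumptions: Netatalk installed as a 'netatalk' service; run from cron
every few minutes as root. Hostname taken from the comment.
"""
import subprocess

HOST = "macbookpro-tk.local"

def host_is_up(host):
    # A single ping with a 2 second timeout; returncode 0 means a reply.
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          capture_output=True).returncode == 0

if host_is_up(HOST):
    # Start Netatalk so Time Machine can mount its share; the HDD
    # spins up on first access.
    subprocess.run(["service", "netatalk", "start"], check=True)
```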

  11. @Jerry
    Yup, seen that on 48 disk rigs (EqualLogic and Dell MDs) with way oversized triple PSUs; hopefully some (not all) manage to start the disks in series with a fraction of a second of delay, avoiding having more than a handful spinning up at the same time.
    I think my personal old HP MicroServer can do it too.

  12. $245 is quite expensive; you can buy a retail NAS for that. It would be nicer if they would just supply a small, cheap board with SATA connectors.

  13. I have not been able to find the schematics on GitHub.

    What interface is used to connect the SATA drives to the SoC?

  14. @Miklos Marton
    Click on the first link in the first paragraph. But you shouldn’t stop once you read about PCIe, since that’s no guarantee of stellar or even just mediocre performance (at least I have not seen any useful benchmark numbers from any MT7621 device so far except the WiTi board, and there performance simply sucked).

  15. I had been looking for something like this for a long time: a place I can plug in my old hard disks and use, with UnRaid or similar, as a backup device. But this is just too expensive. Especially for 6 disks (well, still better than 4).
    I would pay up to EUR 300 for a mobo the size of a 3.5″ drive that holds up to 12 disks, with a CPU that can cope with the requirements of such a setup, a network management (IPMI) interface, >1 GbE, and support for >32 GB of ECC RAM. Such things, albeit a little more expensive, are available as Mini-ITX, but that is too large for my purpose.

