ORICO 1088USJ3 Multi-Bay SATA to USB 3.0 Enclosure Supports up to Ten 3.5″ Hard Drives

USB expansion drives and multi-bay NAS devices are both very common forms of storage with different applications, but I had never thought of, or seen, multi-bay USB expansion drives until I came across the ORICO NS200U3 “2-bay USB 3.0 hard drive dock” on GearBest (~$82), which can handle two 3.5″ hard drives.

This piqued my interest, so I went to the ORICO website looking for models supporting more hard drives, and found the ORICO 1088USJ3, a 10-bay USB 3.0 to SATA enclosure that can provide up to 80TB of local storage with 8TB drives.

ORICO 1088USJ3 specifications:

  • Storage – 10x SATA III slots up to 6 Gbps for 3.5″ HDD / SSD drive (up to 8TB per drive)
  • Output Interface – USB 3.0 device port up to 5 Gbps
  • Power Supply – 100-240V AC 50-60Hz
  • Protection – Over-current, over-voltage, short circuit, overheat, and power leakage.
  • Dimensions – 389(L) x 203(W) x 501(H)mm (aluminum enclosure)
  • Weight – 11.16kg

The box is said to work with Mac OS, Linux, Unix, and Windows 98 and above, which is to be expected, since it will likely appear as a standard USB mass storage device to the operating system. You won’t need any tools to install the drives, and hot swapping is said to be supported.

The system implements “Intelligent Dormancy”, which reduces heat and hard drive wear, and saves energy.


That’s a fairly old product, since it was announced in 2013, but I only noticed it now. I could not find any reviews of the 10-bay model, so I don’t know whether it shows up as a single drive when connected, or whether it supports RAID. The 5-bay models appear to be more popular, and it’s easier to find reviews, such as the one below for the ORICO 9558U3 model, where we can see each drive is presented independently to the system. If you’re using a Linux computer or board, you should be able to use LVM to combine all drives into one volume.
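If you want to try that, here is a rough LVM sketch for Linux. Everything in it is an assumption: the /dev/sdb–/dev/sdd device names and the das_vg/das_lv names are placeholders, so check the actual device names with lsblk first.

```shell
# Merge three USB-attached drives into one linear LVM volume (placeholder names).
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd          # mark each drive as an LVM physical volume
sudo vgcreate das_vg /dev/sdb /dev/sdc /dev/sdd   # pool them into one volume group
sudo lvcreate -l 100%FREE -n das_lv das_vg        # one logical volume spanning all drives
sudo mkfs.ext4 /dev/das_vg/das_lv                 # format it as a single filesystem
sudo mount /dev/das_vg/das_lv /mnt/das            # mount it like any other disk
```

Note that a linear LVM volume adds no redundancy: losing any one drive takes the whole volume with it.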

There’s another 5-Bay model (9558RU3) with switches on the back used to configure RAID.

The 10-bay model is pretty hard to buy now, and may even be discontinued, but you’ll find the ORICO 9558U3 5-bay DAS (Direct Attached Storage) for $159 and up on Amazon US, eBay, Aliexpress, and others. ORICO is not the only game in town: the TerraMaster D5-300C 4-bay enclosure with a USB Type-C connector is fairly popular on Amazon, and sold for $229.99. User reviews are usually positive, but there are a few complaints about noise, performance, and power supply.

tkaiser (Guest):

Don’t waste your time watching this ‘review’; the dude who put this on YouTube is clueless like hell (he constantly talks about RAID being some sort of backup, which couldn’t be more wrong) and zero information is provided. The only information at all is in the comments section, where people warn about this device (category).

For anyone out there thinking about buying such insane devices (USB DAS): consider that you need at least two of them, since each one is a huge so-called ‘Single Point of Failure’ (SPoF). Once the controller inside dies, you have no access to your data any more. Only exception: if you only use a pure JBOD (just a bunch of disks) mode and the thing implements it properly (most of these thingies don’t: you can’t, for example, access SMART data of individual disks, or you get faked data instead of correct data, there’s no way to configure individual/correct disk spindown, and so on…)

bantoto masabo sigola (Guest):

@tkaiser
You might be thinking of Drobo, which uses its own patented RAID-like method.

These “cheap” DASes do normal mirroring (RAID 1).

I have a cheap StarTech USB 2.0/eSATA 4-bay DAS with two 4TB drives in RAID 1.

I can remove one drive, connect it with a direct SATA-to-USB adapter, and read the files just fine.

Jerry (Guest):

The 5-drive version draws 78W of power (the drives need 30W max, and the two 120mm fans use 5W max), so it seems these don’t support staggered spin-up. It’s pure trash. You want encryption with such big drives, and lots of bandwidth, e.g. NBD or iSCSI over 1-10 Gbit Ethernet, not USB.

tkaiser (Guest):

bantoto masabo sigola:
You might be thinking of Drobo, which uses its own patented RAID-like method.

No, I’m talking about any of these multi-disk USB boxes, especially those that advertise RAID-5. And no, I’m not talking about RAID-1, since I consider that a horribly inefficient waste of disks in a home/SOHO environment. It ‘protects’ from almost nothing while doubling costs and consumption. Last century there were some use cases for this mode, but they were all related to availability (business continuity).

I outlined my opinion on why ‘traditional RAID-1’ is almost stupid, for example, over there: forum.openmediavault.org/index.php/Thread/18637-Home-NAS-build-FS-info/?postID=146935#post146935 (we have far better options in the meantime: ZFS zmirrors, or btrfs’ own RAID-1 implementation, which both work totally differently from anachronistic/lousy RAID-1)

Jerry (Guest):

@tkaiser
If you run an ICT business from home, or even develop remotely, RAID is nice to have, but it won’t replace backups. It’s just a must for development environments where downtime is expensive.

willy (Guest):

@Jerry
In practice, every time RAID runs unmonitored, you only discover the failure once it’s the second one and it’s too late. Most often you’ve lost a second disk in a RAID5 array, and you discover that alerts didn’t work for whatever reason while the RAID controller did its job keeping the array alive for you. So the downtime is reduced only if you detect the first fault. At least in the good old days of IDE, a faulty disk would most often hang the controller, letting you discover something was wrong. Also, with today’s capacities, it’s very common that an array rebuild takes a few days, slowing down all operations in the meantime, which is not really cool. Almost only RAID1 with 3 identical disks avoids all these issues since a new disk can be rebuilt from one of the two other ones without affecting read operations. But it starts to be expensive (and it’s what we have on our file server at work).
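For Linux md RAID, that monitoring gap is cheap to close. A minimal sketch (only the /proc/mdstat status format is standard; the cron/mail delivery in the comment is an assumption):

```shell
# Scan mdstat-formatted text for a degraded array: a healthy status field
# looks like [UU], a degraded one contains an underscore, e.g. [U_].
check_mdstat() {
  # reads mdstat-formatted text on stdin, prints DEGRADED or OK
  if grep -q '\[[U_]*_[U_]*\]'; then
    echo DEGRADED
  else
    echo OK
  fi
}

# Real use would be a cron job, something like:
#   check_mdstat < /proc/mdstat | mail -s "RAID status" admin@example.com
```

The point is that the check must run (and its delivery path must be tested) on a schedule; otherwise you are back to discovering the first failure via the second one.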

tkaiser (Guest):

willy:
Almost only RAID1 with 3 identical disks avoids all these issues since a new disk can be rebuilt from one of the two other ones without affecting read operations.

Since it’s 2018 now, even with Linux a far better concept is relying on ZFS and using zmirrors combined into a single large pool: jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ (this scales up pretty well even with large array sizes, and the more zmirrors you add, the higher the random IO performance). RAID1, when looking at the alternatives these days, is sooo lame now…
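For a concrete idea, here is a sketch of such a pool. The pool name ‘tank’ and the by-id device paths are placeholders, not anything from this thread:

```shell
# Create a pool of two mirror vdevs (placeholder device paths).
sudo zpool create tank \
  mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B \
  mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D

# Growing the pool later just means adding another mirror vdev,
# which also raises the pool's aggregate random IO performance:
sudo zpool add tank mirror /dev/disk/by-id/ata-DISK_E /dev/disk/by-id/ata-DISK_F
```

Resilvering a failed disk in a mirror vdev only stresses its partner, not every disk in the pool, which is the main argument of the linked article.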

willy (Guest):

@tkaiser
Well, frankly, I stopped using out-of-tree FS drivers for servers a long time ago. It used to be too much of a pain to fix kernel bugs and deal with incompatibilities. I used to run knfsd+reiserfs+raid on 2.2 when any combination of two of them was already not recommended 😉

Also, having had to rebuild a RAID5 array by hand (writing equivalent code in C, reversing the on-disk format) after a controller died, I really appreciate how easy it is to recover all your data from any disk using RAID1 without having to scratch your head too long. It’s obviously only doable when you have reasonable FS sizes though. I even met a guy explaining how he was using RAID1 to replicate servers; it was scary but fast 😉

Quindor (Guest):

I did a video review of the 5-bay non-RAID variant a while back. It works well with most Intel chipsets and Linux. I created a 5x10TB array using ZFS (RAIDz2) and did a complete fill & scrub without any issues. I’m currently using two “in production”, and they haven’t given me a problem in over a year of usage.

I did notice some strange behaviour using Windows 10 on an AMD Ryzen system, though; if you want to use something like that, you’re probably better off looking for an enclosure with a newer chipset.

Anyway, if anyone is interested, check out the video here: https://youtu.be/GAt9pAAwWLg

tkaiser (Guest):

@Quindor
8 days scrubbing a RAIDz2 made of 5 x 10TB? Seriously? That’s IMO approximately 7 days too long. Which JMicron chipset is used inside? Do you get faked SMART data from the drives or real values?

Quindor (Guest):

tkaiser:
@Quindor
8 days scrubbing a RAIDz2 made of 5 x 10TB? Seriously? That’s IMO approximately 7 days too long. Which JMicron chipset is used inside? Do you get faked SMART data from the drives or real values?

It depends how you look at it. I didn’t need fast storage, just a lot of it. Filling ~30TB of net space and then scrubbing the 50TB takes a long, long time. Yes, directly connected this could be done (much) faster, but for the purpose of backup storage or a NAS, where the maximum speed is 100MB/s (1Gbit) anyway, I don’t see the problem.
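As a back-of-envelope check (all numbers here are rough assumptions, not measurements): even just reading the ~30TB of used data at a steady 100MB/s aggregate takes a few days, before any random IO overhead:

```shell
# Rough floor for the scrub time: ~30 TB of used data at ~100 MB/s aggregate.
used_tb=30
rate_mb_s=100
seconds=$(( used_tb * 1000 * 1000 / rate_mb_s ))  # TB -> MB, divided by MB/s
days=$(( seconds / 86400 ))
echo "scrub floor: ${days}+ days"                 # real runs take longer due to random IO
```

So ~3.5 days is the theoretical minimum, and random IO through the USB/port-multiplier path plausibly stretches that toward the 8 days observed.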

It uses the “JMicron Technology Corp. / JMicron USA Technology Corp. JMS567 SATA 6Gb/s bridge” chipset, and I can get the SMART data using “smartctl -a”. Temperatures and other values seem to be correct.

As I mentioned, I did a lot of stress testing with 2 different Intel “servers”, and up until now (over a year) I haven’t had any issues with the external cabinet. I’m running ZFS mainly because it’s easy to use, and if any form of corruption had occurred, it would find it.

tkaiser (Guest):

Quindor:
Filling ~30TB of net space and then scrubbing the 50TB takes a long, long time.

And resilvering will then take even longer. This is the only thing I’m concerned about, since usually people do RAID for a reason: they want protection from failing disks (most of those people confuse data protection with availability, but that’s another story). So once this happens and you replace a disk, it will most probably end up as 10 full days of full stress for the remaining 4 disks. If the first disk died of age, then chances are good that another disk will soon die too (though with RAIDz2 you’re still ‘safe’), but then the next 10 days of resilvering have to start, and once two disks die within this time it’s already game over.

Please also keep in mind that ZFS on Linux (ZoL) doesn’t implement sequential resilvering but simply walks all the btrees, so if the data on your array grew over time, the resilvering process can be considered a ‘worst case random IO scenario’, which is something HDDs can’t cope with that well anyway. Maybe it’s a good idea to reference this one more time: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

I would assume the JMS567 is combined with one of their SATA port multipliers, e.g. a JMB393 (then disks should report ‘SATA II’) or a JMB595 (then they should report SATA III), and at least with the combined JMicron USB-SATA/PMs like the JMS561, people report that information queried with ‘smartctl -x’ is not correct (some information for the 2nd disk is substituted from the 1st one). Curious whether this applies here too. Can you upload ‘smartctl -x’ output from the 1st and 2nd disk to pastebin.com or something similar?
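In the meantime, a quick sanity check anyone can run (a sketch; the saved-output file names are placeholders): if the bridge substitutes disk 1’s data for disk 2, both ‘smartctl -x’ dumps typically show the same serial number.

```shell
# Compare the serial numbers in two saved `smartctl -x /dev/sdX` outputs.
# Two physical disks reporting the same serial is a symptom of bad passthrough.
same_serial() {
  s1=$(grep -m1 '^Serial Number:' "$1" | awk '{print $3}')
  s2=$(grep -m1 '^Serial Number:' "$2" | awk '{print $3}')
  if [ -n "$s1" ] && [ "$s1" = "$s2" ]; then
    echo "FAKED (both disks report serial $s1)"
  else
    echo OK
  fi
}
```

Usage would be something like `smartctl -x /dev/sda > d1.txt; smartctl -x /dev/sdb > d2.txt; same_serial d1.txt d2.txt`.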

BTW: if the disks report SATA II (JMB393), then the type of PM might be the culprit for the slow operation, since unlike the JMB595 it only supports CBS (command-based switching) port multiplier mode and not the faster FIS-based switching (which should help greatly with random IO performance when more than one disk is accessed in parallel, which describes pretty much the scrub/resilver/rebuild use case).

The above-mentioned 10-disk enclosure must use two JMB393 (since the JMB595 is too young) in cascaded mode, so this type of workload will perform even worse there.