Detailing my build of a home unRAID storage server.
Anyone who knows me can attest to the fact that I’ve got tons of data files, many of which are large media files. In the past, I’ve kept them on various hard drives. Some of them (with the more often-used data) are kept as internal drives in my 2 machines at the house. But much of it is on various internal hard drives that I essentially use as cartridges for an eSATA hot-plug dock (which, don’t get me wrong, is awesome for what it does) that’s hooked up to my main workstation. This has not only become a bit of a pain in terms of keeping track of what’s on what drive, but in the long run is an unprotected method of doing things. I’ve been a happy user of Carbonite for backing up some of the smaller and more important things on the 2 machines (I’ve got upwards of 400 gigs of stuff up at Carbonite), but that’s still only a small percentage of it all. So, I’ve spent the last few months researching my various options for data storage platforms. I wanted something flexible, reliable and cost-effective. There are a number of options out there, but I was having trouble finding just the right fit. Then I stumbled upon a storage server platform called unRAID. The more I read about it, the more it appealed to me.
Knowing how much stuff I would eventually want to be storing on this storage server, there were a number of requirements that I wanted before I would commit to a solution:
- Relatively inexpensive
- Accommodate a fairly large number of drives and total storage capacity
- Not have a statistically unlikely event be able to take out the entire array’s worth of data (such as a second drive having a problem during the rebuild of a failed drive)
- Be flexible enough to expand over time with not just new drives, but a mix of drive sizes
- Be easy to maintain and communicate with over the network
A couple of those items rule out a good majority of choices in a hurry – primarily the use of different-sized drives, expandability down the road, and not having an unlikely event take out the whole array. One tempting solution was a Drobo unit. But the high price on any of their units with a decent number of drives ruled it out almost immediately. And I still would have liked more expandability than they offered. Units from the likes of Buffalo and Netgear were very quickly ruled out for most of the reasons in the list. Other software-based solutions like FreeNAS were considered, and quite frankly were in the running for best choice until I came upon unRAID. Even after I discovered unRAID, I continued hunting for yet another month or so but never found anything else that fit my needs as perfectly.
I spent a while hunting around the forums for unRAID, seeing how good the user community was at helping out other users of the product. I was happy with the level of geek in the user community for it, as well as the support of the platform (just look through this Wiki page with “Best Of The Forums” links to particularly helpful forum posts). Once I finally committed to using the platform, I spent a while sifting through the forums for hardware recommendations to build my server. I was happy to find threads like this one, where other forum members who build a lot of unRAID servers for people had posted the various configurations of hardware they like to use. I began picking and choosing from the listed choices of well-liked components and searching for availability of said items. I was trying to stick with items I could get from TigerDirect, primarily because I could conveniently drop by their local warehouse on my way home from work and pick it all up. It was hit or miss with which items were still commonly available.
I finally found a good motherboard choice that was available, the Biostar TA785G3 HD (which cost $80). It had 6 onboard SATA II ports, a PCI-E x16 slot, and a PCI-E x1 slot, as well as onboard video and a gigabit network chipset that unRAID should like. The primary goals for the board were to be well-liked by unRAID and to provide as many SATA ports as possible. The PCI-E x16 slot can be used with a controller like this SuperMicro card to add 8 more SATA ports. And the PCI-E x1 slot can be used to add a couple more using any number of add-on cards. This would give the board the capability of getting up to 16 drives hooked up.
I then started trying to find a case I liked. I wanted something that could handle at LEAST a dozen drives, but upwards of the 16 drives the motherboard could handle was ideal. There were well-liked cases, such as the Cooler Master Centurion 590, but by the time I was hunting for it, the thing had become unavailable pretty much everywhere. After a good deal of random searching, I decided that I would drop by Fry’s Electronics and take a look at their long aisle full of cases and see which one I liked best by physically checking ’em all out.
So then I set out choosing a CPU and memory. unRAID doesn’t need too much memory and needs very little CPU power. It can run on 512mb of RAM, but at least 1gb is recommended if a number of add-ons are going to be running (which I intended to do). I decided to just put 2gb into it, and selected the Corsair XMS3 2x1gb set (which cost $30). I’m fiercely loyal to Corsair as a brand. I then chose the recommended AMD Sempron 140 CPU. I’m also fiercely loyal to AMD over Intel. It only cost $35 for the retail CPU kit, with fan and everything. Then I had to pick a power supply. Since I’m a fan of Corsair, and it was decently priced, I chose the Corsair CX500 (which was $50).
Lastly, I would need a USB flash memory drive. A 1gb unit is recommended, but I went with a 2gb unit. I just chose one they had in stock from a known brand, PNY. The cool thing about unRAID is that the entire system software boots from a USB flash drive, leaving all the SATA interfaces, hard drive slots and system power available for hard drives that will be part of the storage array.
Having put together my list of part #’s at TigerDirect (I suppose I should call them CompUSA, as they are pretty close to completely transitioning to that name now), I left the office on a Friday and stopped at the TigerDirect warehouse to pick them up. The way it works is that you put the order in with the desk in the back of the small showroom and they call your name when all the items have been brought up from the warehouse for you. While I was waiting for my name to be called, I decided to take a look at what cases they had out on display. Amazingly, amidst the limited half dozen or so cases, I found one that I was really impressed with, the Ultra m923. It was normally $150 but was discounted to $99 at the time. I physically checked out the internals, and all seemed very nice. It has good airflow design, screwless drive mountings, etc. It has 10 externally exposed drive bays and has mountings for 7 drives included (and can do 3 more with one more of their drive cages added). This would be perfect for my starting set of drives. And I could add in some “5-in-3” drive cages to further expand it to the ideal 16 drive count as I expand the array later on. So, rather than stopping at Fry’s (which was further along on my way home), I just got that Ultra case at Tiger as well. It was a bit more than other cases like the above-mentioned Cooler Master 590, but fit the bill so nicely I went for it.
I didn’t initially get any hard drives while I was there because I already had a couple I could make usable. That, and I had a local (as in so close that I don’t even have to go through any stop lights to get to it) CompUSA/TigerDirect retail store to go to for common items like hard drives, and could hop over there once I knew I was satisfied with the rest of the build. I got home and assembled all the equipment. Photos are at the bottom of the post.
Once I had finished assembling things, I hunted around the drives I currently had and decided what I could use with the new unRAID box. I had a number of SATA drives ranging from 750gb to 2tb. I moved the data from one of the 2tb drives to other various drives to free it up. I then went over to the local CompUSA location and picked up two of the Seagate ST32000542AS drives for $70 each (my choice of the drive was primarily for price reasons – and I knew they would work with the unRAID platform). I knew I had the 2tb drive (which I knew was a Seagate) I had cleared off back at the house, and this would give me a good 3 drives to start building the array. unRAID has a free version that works with up to 3 drives, and I could get that going and make sure everything worked well before paying for the unRAID Plus license, which supports up to 6 drives (which you can then upgrade to the Pro license to support up to 19 drives). The Plus license was currently $60 after a $10 discount code (with the Pro being $110 after a $10 discount code). I got those two drives home and installed them.
The initial setup process after first booting up the system off the USB flash drive was INCREDIBLY easy. I tweaked a couple things (network IP settings, etc) and then set about creating a few “user shares” – which are a pretty cool and unique way that unRAID represents and stores the data on the drives. I’ve found the way unRAID distributes data difficult to properly explain, particularly to fellow IT nerds who are used to traditional, striped RAID arrays. Here is a good page for information on how all that works, as I don’t wanna try and type up an explanation of it myself. The main concepts to get down for a user share are the split level and allocation method. See, unRAID stores all of a file on a single drive, rather than striping it across multiple drives like a traditional RAID array would. That means that after a catastrophic failure, any drive can be mounted on any machine that supports ReiserFS file systems and you can read whatever files are stored on that particular drive. It’s up to you to figure out how you want unRAID’s user shares to distribute folders of files across the various drives it has. It’s very flexible as to how you want to do it. So, even if you had more than one drive completely fail (or develop enough problems to stop working properly), all the data on the other functional drives will still be OK. And like a traditional RAID5 array, it will be able to handle any single drive failure just fine, rebuilding it via parity.
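To give a rough feel for the allocation method idea, here’s a toy sketch (this is NOT unRAID’s actual code, and the disk names, sizes, and minimum-free threshold are made up). unRAID writes each file whole to one disk, and the share’s allocation method picks which disk gets the next file; the two behaviors sketched here are simplified versions of the documented “most-free” and “fill-up” methods.

```python
# Toy illustration of unRAID-style user share allocation methods.
# Hypothetical disks/sizes; simplified "most-free" and "fill-up" logic only.

TB = 1_000_000_000_000

def pick_disk(disks, method, min_free=100 * 10**9):
    """disks: dict of disk name -> free bytes (insertion-ordered).
    Returns the name of the disk the next file should land on."""
    if method == "most-free":
        # Choose whichever disk has the most free space right now,
        # which tends to level usage across the array.
        return max(disks, key=disks.get)
    if method == "fill-up":
        # Fill disks in order, moving to the next one only once the
        # current disk drops below the minimum-free threshold.
        for name, free in disks.items():
            if free >= min_free:
                return name
        raise RuntimeError("all disks are below the minimum-free threshold")
    raise ValueError(f"unknown allocation method: {method}")

disks = {"disk1": 0.3 * TB, "disk2": 1.2 * TB, "disk3": 2.0 * TB}
print(pick_disk(disks, "most-free"))  # disk3: most free space
print(pick_disk(disks, "fill-up"))    # disk1: first disk still above 100gb free
```

The real platform also offers a “high-water” method and the split-level setting to keep folders together on one disk, but those add bookkeeping that this sketch leaves out.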
I went with this set of instructions for starting the array without the parity drive (using the 2 new drives) and copying an initial batch of data, then adding the parity drive (the one I had cleared off) and letting it build parity. This process helps speed up the copy of the initial data, as it doesn’t have to build parity during the copy process. I copied about 1tb of data to it to start with (some stuff I already had organized and ready), then let the parity build overnight. I got nice speed copying the initial batch of data to it. The parity build took a long time (I don’t recall exactly, but probably about 10 hours?). After I had the parity built, I ran the recommended secondary parity check, which ran just fine (and also took a while, perhaps a few hours). It turns out that the drive I had cleared off just so happened to be the exact same model drive as the other two I had bought, so all three were matching model drives. Not only that, but the only other 2tb drive I had sitting out to use (but not completely cleared off yet) also happened to be the same model. So, I continued copying data (from that other 2tb drive I wanted to use, and other sources) to the unRAID array and was still getting pretty nice transfer speeds. It’s recommended that the lower RPM “green” drives be used, both because of the lower power use and heat generation, as well as the fact that the network connection is going to be the main limiting factor in speed anyway. So, I got all the stuff copied off that fourth 2tb Seagate drive, and went ahead and ordered a Plus license key since everything had been working perfectly and all looked good. After getting the Plus license activated, I was able to add the fourth drive, no problem.
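The parity drive in this scheme is classic single parity: it holds the XOR of the corresponding bytes on every data drive, so the contents of any one failed drive can be rebuilt by XOR-ing the parity drive with the survivors. A minimal sketch, with drives modeled as equal-length byte strings:

```python
from functools import reduce

# Single-drive parity in miniature: parity = XOR of all data drives.
# Losing any ONE drive is recoverable; losing two is not (which is why
# a second failure during a rebuild is the scary scenario).

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def build_parity(drives):
    return reduce(xor_bytes, drives)

data = [b"\x01\x02\x03", b"\xf0\x0f\xaa", b"\x55\x55\x55"]
parity = build_parity(data)

# Simulate losing drive 1 and rebuilding it from parity + the survivors:
rebuilt = build_parity([parity, data[0], data[2]])
assert rebuilt == data[1]
print("rebuilt drive matches the original")
```

This is also why the parity build and parity check take hours: every byte of every data drive has to be read once to compute or verify the parity drive.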
That’s how the system sits right now. I think I’ve decided against adding any more of the 1.5tb or smaller drives that I also have. The 2tb drives have gotten so cheap that I’ll probably just continue getting one of those a month or so and adding them into the mix. As the 3tb and larger drives roll out, I’ll have the flexibility of using them (you just have to have the parity drive as large or larger than any of the other drives – so I’ll have to swap out the parity for a 3tb drive first and can then begin using more of them, etc). So far, I’m 100% satisfied with the server.
I added in the unMENU add-on, which adds another web interface that gives more options, can install many more add-on packages, and manages said add-on packages and settings. I added in a number of add-ons via unMENU, such as a mail notification system to notify me of problems (I also set it to send an “all’s good” notification every morning at 6:00 AM). I also put in other add-ons, like one to run a parity check every month to be sure things are still happy, etc. Then I also picked up a 550va APC UPS for power backup, and set up the APC add-on to monitor that UPS via USB connection and shut down the unRAID box as needed. Like I said, it’s a very flexible platform with plenty of nice add-ons and ways of doing things. I did have trouble making the FTP service on it work correctly. Sadly, the FTP component of unRAID seems to be an afterthought that nobody on the forums seems to use much (FWIW, it’s a somewhat recent addition to the platform). I have a need for that functionality, so I set up FileZilla Server on an existing Windows box I have at the house (and set up FTP over SSL, etc), pointing that FTP server to the SMB shares via UNC without any problems whatsoever, so that works nicely (essentially a front-end server for accessing the unRAID box from the outside world).
My challenge over the coming months (hopefully not years) is to slowly organize all the stuff I’ve got in all manner of various places. I’m taking this opportunity to get all my data organized, for once. I’m not letting ANYTHING onto the unRAID server that hasn’t been properly checked, tagged, organized or otherwise made-sense-of. It’s a slow and arduous process, but so far the new unRAID server has worked perfectly. Thus far, I’m very happy with choosing to go with it. Oh, and just FYI, at the time of this writing, I’m currently running unRAID 4.7. I started with 4.6, which was current when I did the build a few weeks ago, but upgraded to 4.7 once it was released (and the upgrade process could not have possibly been more simple).