Posted by samzenpus on October 19, 2011 @06:37PM from the line-them-up dept.
snydeq writes "InfoWorld's Desmond Fuller provides an in-depth comparison of five entry-level NAS storage servers, including cabinets from Iomega, Netgear, QNAP, Synology, and Thecus. 'With so many use cases and potential buyers, the vendors too often try to be everything to everyone. The result is a class of products that suffers from an identity crisis — so-called business storage solutions that are overloaded with consumer features and missing the ease and simplicity that business users require,' Fuller writes. 'Filled with 10TB or 12TB of raw storage, my test systems ranged in price from $1,699 to $3,799. Despite that gap, they all had a great deal in common, from core storage services to performance. However, I found the richest sets of business features — straightforward setup, easy remote access, plentiful backup options — at the higher end of the scale.'"
by larry bagina ( 561269 ) writes:
here [infoworld.com]
by beelsebob ( 529313 ) writes:
This is one segment where build-your-own is still *way* cheaper than any of these crazy setups:
Intel Pentium G620T: $83
Intel DB65AL: $85
8GB DDR3: $50
Hyper 212+ with fans removed: $20
Fractal Design Mini: $100
Corsair CX430: $40
FreeBSD: $0
Total without disks: $378
5 * Hitachi 5k3000: $700
Stick the disks in a raid-z, and wham bam, there's $1078 for 12TB of RAIDed NAS.
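If you go that route, the pool creation really is a one-liner. A minimal sketch on FreeBSD, assuming the five Hitachis show up as ada1 through ada5 (device names here are made up):

    # raidz1: one disk's worth of parity across the five drives
    zpool create tank raidz ada1 ada2 ada3 ada4 ada5
    # share the filesystem over NFS
    zfs set sharenfs=on tank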
by gmack ( 197796 ) writes:
You don't have a hot-plug enclosure in there, and pretty much all of these will hot-plug drives.
by beelsebob ( 529313 ) writes:
You're technically correct (the best kind of correct). But... While I agree a hot plug bay may be a nice idea, really, on a home NAS, what do you want to hot plug all the drives for? If you're trying to fail a drive and rebuild the array you probably aren't in a position where you want to be using it continuously through the rebuild.
What's the use case for hot plugging the drives?
by gmack ( 197796 ) writes:
And yes I have used a cheap dual drive NAS while it was rebuilding the array. It was slower but still functional.
Hot swap cases make drive management much easier. Drive 3 of 5 needs replacing? Forget tracing cables back to the motherboard just pop out drive 3 in the array and replace it. This also means I can get someone else to do it even if it means walking them through it over the phone.
These things need to be dead easy since I have been going out of my way to tell all of my non techie friends that U
by gmack ( 197796 ) writes:
Don't know how that first "And" got there even though I went out of my way to proofread that.
by BLKMGK ( 34057 ) writes:
I use drive bays in my unRAID. While not hot-swappable, there's no cable tracing to be done and it's WAAAAY cheaper than the crap these companies sell as "NAS".
by beelsebob ( 529313 ) writes:
Go look at the case involved now. The thing mounts hard drives in clip-around sleds; there's not a single screw that needs undoing when you want to swap a drive. Shut down, slide case door off, slide drive out, slide new drive in, boot. Yes, it involves shutting down, and yes, this is a shortcoming, but seriously, what home NAS needs 100.00% uptime?
by beelsebob ( 529313 ) writes:
1) I disagree about the necessity of ECC, but sure, drop in a 1220L and a C202 board instead if you like.
2) Corsair rebrands Seasonic PSUs.
by SirMasterboy ( 872152 ) writes:
I have a DS1010+ 5-bay model and absolutely love it. It's got 10TB in it right now, but I may replace the drives with 3TB models eventually. With a dual-core 1.6GHz Atom and 1GB of DDR2 RAM it easily reads and writes at 100+MB/s via a RAID5 array on my simple home gigabit network.
Also, the new NASes that are Intel-based can run most Linux CLI servers and programs, which is great. You may need to add more RAM if you run lots of heavy servers or have lots of concurrent users, but most have spare RAM slots.
The best thing I find about Synology is their ever-updating and cutting-edge Web GUI. They are already using HTML5 features to support things like dragging and dropping files right into your web browser to upload files to the NAS remotely.
by sortius_nod ( 1080919 ) writes:
I did have a NAS a while ago, but I got rid of it in favour of building up a Linux server. I found that NAS performance is slow at best, abysmal at worst, even with 1gbps networking & a decent controller. Unless you go corporate style you're always going to suffer from speed problems.
Having 3 network cards and enough space for 15 drives makes up for the few hundred extra dollars you pay for a DIY NAS. Plus, a DIY NAS has a lot more flexibility than a consumer-grade NAS.
by afidel ( 530433 ) writes:
And the homebrew NAS also costs more in time to set up, has no support, and uses probably anywhere from a few dozen to a few hundred watts more power. Basically the majority of the people interested in the tested solutions would not consider a Linux box with some storage to be a viable alternative to those boxes.
BTW, I'm very surprised at the performance of the StorCenter px6, considering before the device launched they used it at EMC World to boot 100 VDI machines in like a minute and a half with SSDs. I h
by drinkypoo ( 153816 ) writes:
Basically the majority of the people interested in the tested solutions would not consider a Linux box with some storage to be a viable alternative to those boxes.
Basically the majority of cheap NAS units are a Linux box with some storage, and anyone who doesn't consider them a viable alternative doesn't have the chops to build one.
I have a Geode GX development system here I use as a NAS (with an IEEE 1394 card in it). Under 5W, not counting the storage itself. Perhaps you are wrong.
by Kjella ( 173770 ) writes:
And the homebrew NAS also costs more in time to set up, has no support, and uses probably anywhere from a few dozen to a few hundred watts more power
If you manage to build a storage server that pulls a few hundred more watts of power, you must be doing something very wrong. Even a gaming system with a 2600k and a HD6870 draws 75 watts at idle from the wall socket, and that's roughly the worst possible setup for a storage server you can get.
If you just want 10TB of storage capacity to go with your laptop, setting up a Linux box and sharing it out is dead easy. If you want a full server then the NAS boxes don't deliver that. But I agree, there's a pretty
by DarkOx ( 621550 ) writes:
There are plenty of ARM boards out there now that draw VERY little power. You can get these in formats like Mini-ITX now that will fit in standard cases, which you probably want so you can fit enough drives. They use regular DDR memory in most cases now as well, or have the memory integrated on an SoC-type controller. Your favorite Linux distro is most likely available for ARM now as well. There are even quad-core ARM chips to choose from that are inexpensive.
There is no good reason to be building somet
by afidel ( 530433 ) writes:
Not all of those are Atom-based, and even the ones that are probably use embedded-optimized designs, not general-purpose desktop boards, and they have correctly sized power supplies. Can you build a similar project by sourcing the same components? Sure you can, but for cost and availability reasons most home NAS systems that I have seen are just desktop boards thrown in a midsized tower case with a desktop PSU.
by advid.net ( 595837 ) writes:
I did have a DIY Linux fileserver a while ago, but I got rid of it in favour of a turnkey NAS.
DIY Linux fileserver = built with standard PC spare parts.
NAS = built with specific hardware tailored for the job (for example mine has a SPARC CPU).
My Linux fileserver's idle power consumption was 68W.
My NAS's idle power consumption is 27W.
Here, the DIY Linux server used 150% more power.
(And made much more noise.)
by Kevin Stevens ( 227724 ) writes:
The consumer-grade NASes have ARM processors in them. Couple those with some low-power HDs and these things sip power. I have a Synology DS211j, and its idle power draw is about 10 watts, maxing out at about 30.
Maybe you can home-brew an ARM-based system... I wouldn't know where to start, though. Another plus for off the shelf is the form factor: my NAS is about the size of a small shoebox, small enough to fit on a shelf out of the way in my coat closet.
by lucm ( 889690 ) writes:
I did have a NAS a while ago, but I got rid of it in favour of building up a Linux server. I found that NAS performance is slow at best, abysmal at worst
I would agree with that. However, the best scenario I've tried with a Linux machine is using software RAID (or LVM) on a bunch of disks and then setting up an iSCSI target, especially convenient in a virtualized environment. Network cards are cheap so it's easy to add custom multipath.
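For the iSCSI half, a rough sketch using the Linux tgt daemon, assuming the md/LVM device is already sitting at /dev/md0 (the IQN below is invented):

    # /etc/tgt/targets.conf: export the array as an iSCSI LUN
    <target iqn.2011-10.home.nas:md0>
        backing-store /dev/md0
    </target>

Restart tgtd and any initiator on the network can log in; with a second NIC, the initiator can open a session over each interface for multipath.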
by lucm ( 889690 ) writes:
Reading about Linux's disappointingly lame MD system gave me new respect for Synology's devices, especially for the SOHO environments for which they're suited.
For a larger / business application, ZFS totally rocks, but Oracle has decimated Solaris' visibility, so one is left with no good options.
I have yet to see a typical scenario where the bottleneck is at the back-end (i.e. md). The sluggish part is always at the frontend; the higher you get on the OSI layers, the more you need a robust interface. And at this level, Linux is terrific. Having a custom storage machine allows you to put more power where you need it, something that is not possible with a COTS device.
Just look at the huge IBM SAN, the DS8000 or V7000 - basically it's a cluster of AIX servers with lots of disks, and the RAID implementa
by rikkards ( 98006 ) writes:
Totally agree. I originally picked up a Seagate BlackArmor 400 as the price seemed good. It sucked. Performance was crap, it took 12 hours to build the array, and it ended up bricking after the latest firmware update. Took it back (this was only a day or two after buying it) and got a Synology DS411, which blew me away. I am getting 50MB/s up and 100MB/s down on a single NIC. I could have built my own but decided I didn't want another computer I have to manage. I wanted something relatively turnkey.
by AliasMarlowe ( 1042386 ) writes:
Agree on the Synology recommendation. These are very nice boxes which are quite Linux-friendly, with even their initial setup running on Linux (unlike certain NAS units which use Linux internally but seem to go out of their way to make things awkward for Linux clients).
I have a DS207 which performs admirably as web server, file server, backup server, and media server (it replaced a DS101 some years ago). It will soon be accompanied by a DS211 which will be used as our main home server (files, backup, med
by ElectricTurtle ( 1171201 ) writes:
I've had two Synology NASes of different models, and I've used both DSM 2.x and 3.x, and I'll agree that the DSM 2.x interface can be rather slow at times, but the DSM 3.x interface really impressed me. It's very responsive. Neither are as bad as some NASes I've had to contend with, such as shitty Buffalo units.
I haven't tried the email features, but is your problem consistent across updates to the DSM? I find it a bit hard to believe something like that would go uncorrected for very long, and Synology up
by Tumbleweed ( 3706 ) * writes:
The prices of the two best ones, the Netgear and the QNAP, on Newegg for the diskless versions are about $230 apart - about a quarter of a difference. I think I'd go with the Netgear based on that.
The problem with these things is that Thunderbolt is almost here for everyone else (not just Macs), and with SSDs getting less expensive all the time, I think I'd rather wait for a Thunderbolt-connected version for the sake of future-proofing. Plus a version intended only for 2.5" drives would be sized better for
by failedlogic ( 627314 ) writes:
Quick and easy tip to increase storage space on a budget: buy the 3.5" model and punch a hole in the top corner. When the first side is full flip over the disk and use the other side. You will need to periodically flip the disk over and make note of what side contains the data you want.
by Tumbleweed ( 3706 ) * writes:
Quick and easy tip to increase storage space on a budget: buy the 3.5" model and punch a hole in the top corner. When the first side is full flip over the disk and use the other side. You will need to periodically flip the disk over and make note of what side contains the data you want.
Ha. I'm old enough to remember doing that to 5.25" diskettes for my Apple ][.
by whoever57 ( 658626 ) writes:
The problem with these things is that Thunderbolt is almost here for everyone else (not just Macs), and with SSDs getting less expensive all the time, I think I'd rather wait for a Thunderbolt-connected version for the sake of future-proofing
How is Thunderbolt going to provide a N[etwork]AS?
by fuzzyfuzzyfungus ( 1223518 ) writes:
The PCIe spec is flexible enough that, in theory, you could probably network with it (directly, that is, not just by hanging a gig-E chipset off each host, which would be the sane thing to do). PCIe switches are supposed to be used for fanout of a limited number of host lanes to support more peripherals, but you could likely put one in a separate box, with Thunderbolt bridges for communication off-board.
It'd be damned expensive, and I'm sure all sorts of horrible things would happen, given that host-host
by networkBoy ( 774728 ) writes:
I made a network link using SATA and a SAS HDD.
Two PCs, each with a single eSATA link to the SAS HDD.
Turn one link on and the other off, dump data on the drive, turn the first link off and the other on, read data from the HDD.
Did it just for giggles. It was actually faster than my Ethernet connection, but "temperamental" is inadequate to describe the setup.
-nB
by fuzzyfuzzyfungus ( 1223518 ) writes:
Given that, in the context of ethernet, "Jumbo frame" usually implies a whole 9000 bytes, I'd say that the HDD-based system does have the clear upper hand in potential frame size...
Pity about the latency and the being half-duplex...
by atamido ( 1020905 ) writes:
If you'd been a decade earlier, you could have done it with a SCSI drive and two host controllers, all assigned to different IDs. Then you could have had both hosts able to access the drive at the same time. I have no idea how you would have avoided trashing the file system or poisoning the file cache, but I'm sure there's a way.
by rrohbeck ( 944847 ) writes:
Haha. Great minds think alike.
Since we're only talking block transfers for IP, the file system is not a problem at all.
Actually, the disk would thrash from the bidirectional transfers, so two drives would work better. Then you could use each as a FIFO and have all the goodness of its streaming transfer rate.
by rrohbeck ( 944847 ) writes:
With parallel SCSI you don't even switch. Just access the HD from both hosts at the same time.
I did that between a PC and a MicroVAX once :P
by Trongy ( 64652 ) writes:
The newer SMB2 protocol in post-Vista versions of Windows is much more efficient in its network usage. Samba 3.6 now has SMB2 support, but the article doesn't say which (if any) of these devices support the newer protocol.
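For what it's worth, on a box that does ship Samba 3.6, SMB2 is off by default and enabling it is a one-line smb.conf change (a sketch, not something any of these vendors document):

    [global]
        max protocol = SMB2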
by Junior J. Junior III ( 192702 ) writes:
It might be listed under Doki! Doki! Panic support; that's what it was originally called in Japan.
by Anonymous Coward writes:
Holy cow! "$1,699 to $3,799" for "10TB or 12TB" of storage?
Case with 8 internal bays: $40
600 Watt Power supply: $35
MB with 8 SATA3 ports: $115
2.5gig dual core processor: $73
8 2TB drives: $800
1 Gig of RAM: $30
Total: $1093, for 16TB of storage. Yeah, yeah, you need one of them as a spare drive for redundancy, and you need an OS. You also need a few minutes to assemble and install. But for that price? Why pay twice as much? Hell yeah, roll my own, baby!
by Joe_Dragon ( 2206452 ) writes:
That PSU is too cheap; at least get a $50+ one, and don't just go for high watts.
Get 2-4GB RAM minimum; $50-$60 should get you good 8GB DDR3, and you want at least dual-channel RAM.
For 8 SATA ports you may want to get a PCIe RAID card / SATA card, maybe even SAS.
For redundancy you may want RAID 6 on a RAID card and not onboard fake RAID, and most south bridges only have 6 ports anyway.
Also, some low-end MBs only have 10/100.
by gman003 ( 1693318 ) writes:
That PSU is too cheap; at least get a $50+ one, and don't just go for high watts.
Uh, what? I can understand criticizing a specific PSU brand as being too unreliable or low-quality, but come on! Just saying "any PSU less than $__ is crap, you need to spend at least $__" makes you sound like a classic Conspicuous Consumer.
Get 2-4GB RAM minimum; $50-$60 should get you good 8GB DDR3, and you want at least dual-channel RAM.
This is a NAS, not a server. Half a gig would be sufficient, honestly - I've run some with 256MB. One gig is plenty, unless you want to keep files on a RAMdisk.
For 8 SATA ports you may want to get a PCIe RAID card / SATA card, maybe even SAS.
When you're just building a home/small office NAS, you don't need a high-performance RAID card - software RA
by Joe_Dragon ( 2206452 ) writes:
That PSU is too cheap; at least get a $50+ one, and don't just go for high watts.
Uh, what? I can understand criticizing a specific PSU brand as being too unreliable or low-quality, but come on! Just saying "any PSU less than $__ is crap, you need to spend at least $__" makes you sound like a classic Conspicuous Consumer.
OK, but don't cheap out.
Get 2-4GB RAM minimum; $50-$60 should get you good 8GB DDR3, and you want at least dual-channel RAM.
This is a NAS, not a server. Half a gig would be sufficient, honestly - I've run some with 256MB. One gig is plenty, unless you want to keep files on a RAMdisk.
OK, but for $30 you can get 2GB RAM.
For 8 SATA ports you may want to get a PCIe RAID card / SATA card, maybe even SAS.
When you're just building a home/small office NAS, you don't need a high-performance RAID card - software RAID is more than enough. Especially considering the price of those things.
Maybe, but not all boards have 8 ports; on some that's 6 from the chipset and the rest from an add-on SATA chip, and the built-in software/fake RAID likely will not work across 2 different chips like that. And even with 8 ports you still need 1 for the OS disk, or you have to mix the OS with the data drives.
For redundancy you may want RAID 6 on a RAID card and not onboard fake RAID, and most south bridges only have 6 ports anyway.
8 hard drives is not enough to justify RAID 6, unless they're EXTREMELY unreliable drives. Especially since that cuts your storage capacity down to 12TB - not that good.
RAID 6 is only needed when it's possible for a drive to fail, and then for another to fail while the array is still recovering. There's no point in doing it with only 8 drives.
8 drives in RAID 0 is a major risk. RAID 5 uses less space.
Also, some low-end MBs only have 10/100.
True. But then again, how many switches and computers are still only 10/100? Maybe you don't, but I still work daily with stuff that maxes out at Fast Ethernet.
Plus, a $115 mobo isn't "low-end", at least by my definition. It's a fair assumption that if it has 8 SATA ports, you're going to have 10/100/1000 Ethernet.
The case needs to have room for 8 HDDs + an OS disk, and good cooling.
by gman003 ( 1693318 ) writes:
OK, but for $30 you can get 2GB RAM.
Yeah? For $30 I can also add a nice SD/MicroSD card reader. And it would be just as beneficial to the system. Just because RAM is cheap doesn't mean you need to cram absolutely everything full of it.
Maybe, but not all boards have 8 ports; on some that's 6 from the chipset and the rest from an add-on SATA chip, and the built-in software/fake RAID likely will not work across 2 different chips like that. And even with 8 ports you still need 1 for the OS disk, or you have to mix the OS with the data drives.
Once X79 comes out, you'll have 10 ports, naturally. In any case, software RAID, at least under Linux, can handle disks across wildly incompatible sets of chipsets, as well as separating the OS onto a disk partition on just one drive.
8 drives in RAID 0 is a major risk. RAID 5 uses less space.
That would be relevant, if we were talking about RAID 0. RAID 5 and 6 are ident
by Joe_Dragon ( 2206452 ) writes:
X79 is the high-end chipset that needs an i7 CPU (likely $280-$300) + a $200-$250 MB; the CPU also has quad-channel RAM, so you may want at least 2 RAM sticks, maybe even 4, and you may need a low-end PCIe video card, as X79 has no built-in video, so the system may or may not boot up without one. Vs. say a lower-cost CPU and MB + a hardware RAID card at about $300, it's about the same (you do not need an i7 for that, and with the lower-end board the onboard video is OK), and hardware RAID makes it so you don't need
by BLKMGK ( 34057 ) writes:
Software RAID does NOT require anything resembling a high-end CPU. The LOWEST-end Intel CPU, undervolted and underclocked, could do it. Boot the OS from a USB stick into a RAM disk. You will NOT need more than a gig of RAM if you do it right. Do NOT use hardware RAID; when it pukes and you try to replace the hardware you'll have all sorts of "fun".
You have actually DONE this, right, not just read about it and postulated?
by BLKMGK ( 34057 ) writes:
unRAID. Boot from USB, uses a standard albeit not common FS (ReiserFS), only loses one disk to parity, and losing multiple disks doesn't kill the entire storage array. Can host a max of something like 16 disks although I've never gone past 11 on either of my systems. No OS maintenance although if you add on lots of stuff you can get into murky territory. Needs no more than maybe a gig of memory and a SLOW CPU. CPU will NOT be your bottleneck and Celeron or single core whatevers work just fine. A case that c
by Joe_Dragon ( 2206452 ) writes:
Celeron or single core + boot from USB + software RAID is not a good idea. At least boot from a SATA HDD or maybe FireWire.
by BLKMGK ( 34057 ) writes:
And WHY pray tell is that? Been doing it for well over 5 years and have gone through more than one USB stick without issue - my current stick is 2 years old. Boot takes about 2 minutes and only ever occurs when I upgrade software or a drive. The USB stick isn't written to during that time and only stores the OS to boot to RAM disk and a single config file. The image on the stick is standard from my vendor and the only thing unique I need backup anywhere is the config file - it's maybe 10K or I can print out
by pnutjam ( 523990 ) writes:
My NAS runs NX, so I can pull up a published Firefox, or do BitTorrent. Anything I surf in my published Firefox leaves no trace on the PC or the DNS servers of the site I am at. They only see an encrypted tunnel to my home PC. A full desktop gives me a lot of flexibility at little cost.
by broken_chaos ( 1188549 ) writes:
RAID 6 is only needed when it's possible for a drive to fail, and then for another to fail while the array is still recovering. There's no point in doing it with only 8 drives.
It's also extremely useful if you run into an unrecoverable read error while trying to rebuild the array.
A lot of standard mechanical drives have an unrecoverable read error rate of about 1-in-10^14 bits (or 1-in-~12TB), meaning you're getting into some pretty nasty chances of hitting an URE on at least one of your disks when you're trying to rebuild the array after a disk failure with a decently-large array. This issue is alleviated when you have storage with an URE rate of 1-in-10^15 or higher (such as so
by swalve ( 1980968 ) writes:
That's why you have cron do a raidcheck once a week. You'll know if you have a drive starting to go TU before it completely fails.
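On a stock Linux md setup, that weekly check is one cron entry. A sketch assuming the array is md0 (Debian-style installs ship a checkarray script that does much the same thing):

    # /etc/cron.d/raidcheck: scrub the array every Sunday at 3am
    0 3 * * 0 root echo check > /sys/block/md0/md/sync_action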
by afidel ( 530433 ) writes:
No, RAID6 is the only useful level of RAID for any 7200 RPM drive over ~1TB other than RAID10. The bit error rate and time to rebuild are too high for anyone who cares about their data to use anything else.
by HuguesT ( 84078 ) writes:
1- I don't know of any good-quality power supply below about $60. Good quality means Japanese capacitors, low ripple, good resistance to micro-cuts, no lead, good current on the 12V rail, at least Bronze-level efficiency, silence, and so on. Cheap no-name PSUs eventually fail, sometimes taking the whole PC with them. Most people dismiss the PSU, but it is an essential investment in a piece of equipment that runs all the time.
Read this [tomshardware.com] for instance.
2- On a homebrew NAS you want to run ZFS, you really do. In fact
by BLKMGK ( 34057 ) writes:
Look at unRAID. One drive supports the parity and the FS is standard ReiserFS. Lose a disk and a rebuild is no problem. Lose TWO disks, and be completely unable to recover with standard tools, and you lose... two disks of data, NOT the entire damned thing. In 5+ years of using this system and having gone through multiple drive failures I've never once lost data. Never once had two disks die at once either, and my systems run 24x7x365. I had one machine up to over 11 disks once but with larger disks have bro
by randallman ( 605329 ) writes:
The only real advantage "real RAID" has over "fake RAID" is the battery-backed cache, so if it doesn't have that, you're probably better off with "fake RAID". Your system CPU is faster than the CPU on the board (plenty fast for parity calculations), and with "real RAID" you have yet another OS (the board's firmware) to keep updated and hope doesn't crash and take out your file system.
I'd rather have the OS handle the disks so there's no mystery disk format and I have complete control from the OS level. ZFS an
by clarkn0va ( 807617 ) writes:
you may want RAID 6 on a RAID card
You just added hundreds or thousands of dollars to a $1000 NAS and locked yourself to a specific piece of esoteric hardware in one swoop. BIOS-based RAID is a little too pedestrian for serious storage, yes, but there's nothing to be ashamed of in a true software RAID setup (i.e., mdadm), even if it means adding SATA ports through a card.
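And the mdadm setup in question is only a couple of commands, regardless of which controller each disk hangs off. A sketch assuming eight disks at sdb through sdi (device names hypothetical):

    # RAID 6: two disks of parity, survives any two simultaneous failures
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
    mkfs.ext4 /dev/md0
    # record the array so it reassembles at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf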
by MoreDruid ( 584251 ) writes:
Get a cheap HP ML110 server with a few GB of RAM and load it up with disks. Get a bigger housing if the case is too small. Benefits: remote management (very basic iLO), server-grade chipset/CPU if you get the Xeon-specced model. I got one of these in a special offer and it runs my Linux server very well. 1.6TB RAID 1 (mdraid), off-the-shelf disks, bought half a year apart so I don't get bitten by some bug that's in one firmware and not the other. Enough CPU/RAM/disk overhead to run the occasional test VMs. I a
by pnutjam ( 523990 ) writes:
I'll bet that thing is loud; most server equipment is.
by Charliemopps ( 1157495 ) writes:
You obviously haven't been involved in enterprise-level purchases before. It may seem silly to your average techie, but the people buying this equipment need someone to blame when it fails. If you're the head of IS in your company and the little server you're suggesting goes down for 24hrs because of some obscure hardware incompatibility, what are you going to say? You built it, you maintained it, now the company has 50 people that sat at their desks pointlessly for 8hrs while you dicked around with drivers. Yo
by swalve ( 1980968 ) writes:
That's why when you home-brew, you have redundant equipment. If you have extras of everything, on the shelf and ready to go, there's no need for any of that BS.
by Neil Boekend ( 1854906 ) writes:
Well, if it's the weather's fault, due to lightning or such, you'll still get blamed.
by rrohbeck ( 944847 ) writes:
A high quality and quiet home NAS with 4GB and 15GB RAID5:
Fujitsu server $299.99 (http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=6939649), sold out now.
6x 3TB drives, $959.94, less if you buy 5400rpm drives.
----------
$1259.93 + 2 spare 250GB drives
by rrohbeck ( 944847 ) writes:
Erm, 15TB of course.
by DarthVain ( 724186 ) writes:
Yeah, I figured that out a long time ago. I figure the only people that buy that crap are people that are lazy, want a really simple solution, or do not have the expertise to do it themselves. That is to say, you take one of these pre-assembled NAS boxes, plug it into the network, do a wizard, done. For a small biz with no tech support, maybe an option. Also most of the bigger NAS boxes support hot-swappable drives, which is nice... though if you spent about $120 rather than $40 on your case you can get hot swappable drives a
by rikkards ( 98006 ) writes:
The DNS is notoriously slow. One of the reasons I discarded them before I decided to go with a 4 bay rather than 2 bay
by ElectricTurtle ( 1171201 ) writes:
Synology DS211j $200
Hitachi 1TB $65x2 = $130
$330.
by bagofbeans ( 567926 ) writes:
I got a Fractal Design Array R2 Mini-ITX NAS case, which is gorgeous and takes 6 HDDs in a small case. The MB is a Sapphire Pure Fusion Mini E350 (AMD dual-core E-350), which is very low power, with 5x SATA III + 1 eSATA II, USB 3, GbE.
FreeNAS 8 supports the hardware, and the ZFS file system is reliable.
Not enterprise level, but excellent for home use.
by bagofbeans ( 567926 ) writes:
...and you should budget for 8GB of memory to run the ZFS file system properly. Oh, and FreeNAS runs from a 2GB (min) USB stick, so it doesn't waste a HDD.
To soft-start the investment, you could buy the MB + RAM first and set it up in a cardboard box with a spare PSU and 5 any-size SATA HDDs you have kicking around.
by bigdady92 ( 635263 ) writes:
There is no mention of speed, performance, file-copy replication, or the ins and outs of each solution, just a list of features they all share and how the author went about determining them at his whim. Without metrics this article is just a sales blurb for links. Other websites do it better: StorageReview for one, SmallNetBuilder for another.
Another wretched sales brochure disguised as a review by InfoWorld.
by bryan1945 ( 301828 ) writes:
Well, there were some metrics.
But you're right, when I went to the review, Ghostery popped up 15 or so trackers. Don't think I ever saw that many on one page.
by chrb ( 1083577 ) writes:
The metrics are a bit useless since he hasn't even used the same RAID configuration - three use RAID10, and the others RAID2 and RAID5.
by jtownatpunk.net ( 245670 ) writes:
It's definitely more work to set up than a pre-built appliance and I wouldn't use it in a production environment, but it has some advantages and works well as my media server. I particularly like that multiple drives developing a few bad sectors won't render the entire array unrecoverable. That's a bit of a concern when combining multi-terabyte consumer-level drives. I currently have 20TB of fault-tolerant storage with room for another 6TB before I run out of ports. With more ports and a larger case, I could go up to 40TB.
by gorbachev ( 512743 ) writes:
Big fan of unRAID as well.
I set up a box for home this summer. 20-drive max capacity, currently running on 6.
The extensibility of the system was the biggest selling point for me.
by Jumperalex ( 185007 ) writes:
Count me another huge unRAID fan. ZFS has its pluses but the one thing it does not have is the dead simple ease with which to add storage capacity. Yes yes yes I've seen how it is done with ZFS, even played with it myself, but it is NOT the same, it is NOT as seamless, it is NOT as simple. With unRAID I add a drive and that is basically it. It just starts saving data to it as if it were a JBOD.
DROBO came out and I thought that was my solution, then I saw the price tag, the speed, the proprietary FS and
by jtownatpunk.net ( 245670 ) writes:
OMG! There's something else that's better than what I'm using! My choice is flawed and I should dismantle my array immediately even if it does everything I need! :rolleyes:
by afidel ( 530433 ) writes:
The other nice thing about ZFS is L2ARC and ZIL; throw in one or two cheap SSDs and your TBs of cheap storage start to perform like a 5-6 figure array =)
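Mechanically that's just two extra vdevs on an existing pool. A sketch assuming a pool named tank and spare SSDs at ada6-ada8 (device names invented):

    # SSD read cache (L2ARC)
    zpool add tank cache ada6
    # mirrored SSD intent log (ZIL) to absorb synchronous writes
    zpool add tank log mirror ada7 ada8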
by dnaumov ( 453672 ) writes:
I built a $500 Atom NAS over 2 years ago and it had better performance than that shown in the charts of that article. And these rigs are over $1000 today? WTF?
by juventasone ( 517959 ) writes:
Same here. Intel Desktop Board with integrated Atom. I installed XP with most services disabled, no AV, just some IP cam software. It only has LAN access (no Internet). Runs fast and has never gone down. My favorite part is the power consumption: measured at the wall under activity, it was 18W.
by juventasone ( 517959 ) writes:
Not when the IP cam software only supports Windows.
by sdguero ( 1112795 ) writes:
The metrics were using different RAID types from one solution to the next; some say RAID10, some RAID2, etc... The "Intel file copy" test was basically unexplained, and it doesn't make sense that a file copy (sequential write/read operation) would have less throughput than random reads/writes (and wtf does he talk about 256k block size in the legend instead of how big the reads/writes are?) as the other test claims to be. Also, the author calls RAID-10 and RAID-6 as modes for someone with more technical knowl
by dbIII ( 701233 ) writes:
Something with some CPU power to take requests and get them out there plus a card that can do RAID6 and still saturate a gigabit network connection (with enough drives) doesn't really cost a lot more than some of those underpowered things.
by bigtrike ( 904535 ) writes:
A Dell T710 is $900 and can take 16 2.5" drives or 8 3.5". If you're not a fan of linux software raid, toss in a PERC controller ($599) and bump the ram up to 4GB ($65) and 8 1.5TB disks at $520 and you're at $2084 for 12TB of storage, in any type of RAID you want.
by Dishwasha ( 125561 ) writes:
Use GlusterFS http://www.gluster.org/ [gluster.org] for redundancy spanned across one or more JBOD machines for a much easier hardware and data upgrade path. Use oVirt for easy set up http://www.gluster.com/community/documentation/index.php/GlusterFS_oVirt_Setup_Guide [gluster.com]. Mount GlusterFS directly to your clients or export via iSCSI target, fibrechannel target, FCoE, NBD, or traditional NFS for a more advanced shared storage solution. And you can still run more of a NAS type setup with CIFS, WebDAV, or the like.
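A minimal two-box replicated volume along those lines, with hypothetical hostnames nas1/nas2 and brick paths:

    # from nas1, join the second box to the trusted pool
    gluster peer probe nas2
    # mirror data across one brick on each machine
    gluster volume create gv0 replica 2 nas1:/export/brick0 nas2:/export/brick0
    gluster volume start gv0
    # native client mount
    mount -t glusterfs nas1:/gv0 /mnt/gv0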
by Gothmolly ( 148874 ) writes:
I stopped reading the slide show when they not only spelled out a definition for iSCSI, but got it wrong. Horrible article. Zero details, all fluff.
by Lifix ( 791281 ) writes:
Dear Slashdotters, I know that you can build a better, faster, cheaper NAS that will perform fellatio over SSH and wipe your ass for you.
But I don't care... at all. According to you, I overpaid for my two NAS devices, a Drobo FS (serving media) and a Synology DS211+ (photo backups (profoto)). But I'm exceedingly happy with them. Transfer speed is sufficient on the Drobo to serve 1080p content to 2 TVs and an iPad simultaneously, and the Synology keeps up with my image editing software just fine. I've upgraded the drives in the Drobo once so far, and just like their videos claim, everything just worked. The Drobo survived a drive failure last year, in the middle of "movie night," and video playback from the Drobo was unaffected. I'm glad that these NAS devices were reviewed, but I can't imagine why so many have come to this thread to post their server builds. The people, like myself, buying these NAS devices are buying them so we don't have to build our own servers.
by Lifix ( 791281 ) writes:
Went back through and actually read the OP. The comparison is absolute shit, there's no mention of input/output speeds on any of the devices, and no clear methodology for handing out scores... advertisement disguised as reporting.
by mrbill1234 ( 715607 ) writes:
Well said - seriously - I'm so over hacking up my own hardware. I just want to buy something, plug it in, and have it work. Maybe if I were a teenager with lots of time on my hands and no money, OK, I'd spend a few days hacking up a NAS - but I'm not. I have a job, make good money, and have a life. I'm willing to pay for convenience.
by alphatel ( 1450715 ) * writes:
It's really not that difficult, even to build your own SAN, let alone NAS. The era of convenience at a cost is probably coming to a close.
by mrbill1234 ( 715607 ) writes:
If this were the case, Apple would be out of business.
by Kevin Stevens ( 227724 ) writes:
Indeed - I have had a Synology NAS since 2007 (first a DS207, upgraded to a DS211j this year). There are tons of features right out of the box - I have been living in "the cloud" for years now. When I think of all the time I would have to spend setting up software packages for all of the features Syno provides... it makes me want to cry. I have already spent way too much time getting Serviio up and running to replace the standard crappy DLNA implementation.
Buying pre-built means it works for you, and you aren
by MukiMuki ( 692124 ) writes:
I think the only problem I have with Drobo is the horrifying warranty options. For a device that has roughly 80% markup, I think it's atrocious that it has a hardware warranty period (1 year) lower than a cheap USB hard drive from just about anyone else (2-3 years). Especially given that once you're out of warranty, a hardware failure ANYWHERE in that device isn't really recoverable from without going out and buying a new Drobo. You can argue that it's unlikely to fail, but I think that, if they really sta
by InitZero ( 14837 ) writes:
I'm entirely, completely in love with Drobo as a NAS device.
The ability to pop out a smaller drive and replace it with a larger drive is amazing - that is simply how technology is supposed to work. I have the Drobo FS at home and the DroboPro FS at work. Having used them for about a year and having tried to make them fail before I moved them into production, I'm very happy with their reliability and performance. (More on performance in a second.)
At the high end, I have used EMC and IBM solutions. At the low
by gjh ( 231652 ) writes:
My own AFP experience with QNAP was terrible, due to the dodgy FOSS stack - I forget which one - that was included. There was no useful way to authenticate (no Open Directory, no Kerberos, no way to automate user import). I ended up with iSCSI between the QNAP and the Mac OS X Server (ATTO iSCSI) and serving AFP from there, with a 5x speed improvement.
Was I doing something wrong? It doesn't seem to match the AFP figures in the article. Anyone else have similarly awful real-world AFP performance?
by MrNemesis ( 587188 ) writes:
...for my needs anyway, so hopefully I can add something to the discussion. I'm one of those traitors who traded a homemade Linux NAS for an off-the-shelf model, and I went through quite a few models before I found one I was happy with.
My initial file server was built into a cheap 4U rackmount and a couple of 3ware cards and provided sterling service. However, it was exceptionally loud and very heavy, and sucked up a fair amount of power. When you've moved house as often as I have, you start to think about whe
by geggo98 ( 835161 ) writes:
See here: Backblaze Blog: Petabytes on a budget [backblaze.com].
They use JFS on Debian. You can easily add the fileserver software of your choice (Samba, Netatalk, NFS, etc.).
On the hardware side they use an Intel mainboard with an Intel Core 2 CPU, a PCIe SATA 2 controller, and 45 SATA 2 disks (each 1.5TB). They put it in a custom enclosure; the 3D model is available here (25 MB ZIP archive) [backblaze.com]. This all costs less than 8000€ for 67TB (disks included!).
There is also an update [backblaze.com], where they get 135 TB for less than $8000.
by Ptur ( 866963 ) writes:
QNAP comes out as best in nearly all tests, yet they still recommend one of the brands that performed way worse?
(disclaimer: I run a QNAPclub community forum)
by Ephemeriis ( 315124 ) writes:
I'm no expert. We've got a NetApp where I work now, and had a Netgear ReadyNAS at my previous job. But I will say that some ability to upgrade is going to be key.
We got the ReadyNAS up and running with just a couple TB of storage because we really didn't think we'd need more than that. Within a year we were full and looking for some way to expand it.
The NetApp here was installed with a good pile of storage... But we've grown our environment so much that we've exceeded the capabilities of the chassis, an
by Kenja ( 541830 ) writes:
Drobo is nifty; got one myself. But it's a proprietary "RAID" system that they would have had to devote a fair amount of time to explaining, and it cannot be well compared to systems that use more standard RAID 0/1/5 setups.
by rikkards ( 98006 ) writes:
Whole lot cheaper if your time is worth nothing. I have spent maybe a total of 20 minutes getting my NAS set up. I would rather spend my time doing other things than configuring a server.
by Nursie ( 632944 ) writes:
And this is why you'll never be as good as the people that really know their systems because they actually enjoy learning about and tinkering with them, setting things up and gaining a little extra insight every time.
by afidel ( 530433 ) writes:
And the small and midsized businesses that are the main audience for these devices don't care; they don't want to be good at fixing obscure problems with LVM, they want a device to hold their data that requires the least amount of staff time to set up and care for.
by fuzzyfuzzyfungus ( 1223518 ) writes:
Unless you are absolutely phobic about exposure to harrowingly technical terms like "RAID5", you should approach the Drobo with nontrivial caution.
They are quite pricey for their size and performance, which has historically been pretty tepid. Probably worse than that (which is a set of vices shared with quite a few other underpowered NAS units) is that their "BeyondRAID" system makes up for some powerful features by being Just Fucking Weird in some annoying ways.
Perhaps my least favorite is the ghastly
by BLKMGK ( 34057 ) writes:
I've now met two different people who have had a Drobo fail on them. One was a port that went up in smoke - literally - and the other just had performance nosedive for no obvious reason. Both moved to unRAID, and the guy who had the bad performance found no issues with his drives. Drobo has some nice features, but at the end of the day I'm happy using something I built, so if I need to replace or repair it's easy - and drive expansion is as easy as slapping a drive on another port and adding it. I just have