Check how much data you write to your current file server on a daily basis (I'm not sure exactly how you'd measure this) - that would give you a clue as to the estimated life of the drive (Intel publish the write endurance of their SSDs). To extend that life, overprovision the drives if you can afford to: leaving spare area gives the controller more room for garbage collection and wear levelling. You lose some storage space, but reportedly gain lifespan.
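To put rough numbers on that, here's a quick back-of-envelope sketch in Python. The endurance rating, daily write volume and write amplification figures are placeholder assumptions, not from any datasheet, so swap in the numbers for your own drives:

# Rough SSD lifespan estimate from daily write volume.
# All figures below are illustrative assumptions, not measured values.
rated_endurance_tb = 60.0    # assumed rated write endurance; check the vendor datasheet
daily_writes_gb = 10.0       # assumed daily write volume on the file server
write_amplification = 2.0    # assumed; more overprovisioning generally lowers this

effective_daily_tb = daily_writes_gb * write_amplification / 1024
years_of_life = rated_endurance_tb / effective_daily_tb / 365
print(f"Estimated drive life: {years_of_life:.1f} years")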
I would think about having two servers with a single SSD in each, running DFS, and then using spinning rust for your backup server. Backups send data in large sequential chunks rather than the small random accesses your file server has to deal with, and throughput to spinning rust isn't too bad for that workload. With 4x drives in RAID10 your backup data is mirrored for safety, you get the speed benefit of striping, and you should have enough space for longer backup retention.
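As a quick sketch of the RAID10 arithmetic (the drive size and sequential speed below are assumptions for illustration, not a recommendation):

# RAID10 across 4 drives: half the raw capacity is usable, and sequential
# write throughput roughly scales with the number of mirror pairs.
drive_tb = 2.0       # assumed size of each disk
n_drives = 4
seq_mb_s = 120.0     # assumed sequential write speed of one disk

usable_tb = drive_tb * n_drives / 2       # each mirror pair stores one copy
write_mb_s = seq_mb_s * (n_drives / 2)    # one stripe per mirror pair
print(f"Usable capacity: {usable_tb} TB, sequential write ~{write_mb_s:.0f} MB/s")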
Of course, if one of your DFS servers goes down, get it back up as a priority; or have three DFS servers (possibly one in a separate building, though you need to ensure you have a decent link speed) so that having one down is less of an issue.
From the size of the drives, and as they're being used for file serving only, I don't think 10GbE would be of much benefit (how much of your current connection do you actually use?). If you were looking to use the servers as storage for Hyper-V (Server 2012 Hyper-V can use SMB storage, if I've read that correctly on the brief looks I've had at Server 2012), then 10GbE may be beneficial.
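As a sanity check on the 10GbE question, a gigabit link gives you roughly 100MB/s to share between everyone; the utilisation and client figures below are assumptions:

# Does a plain file server actually need 10GbE? Figures are assumptions.
link_gbps = 1.0       # current NIC speed
efficiency = 0.8      # assumed real-world utilisation after protocol overhead
peak_clients = 30     # assumed simultaneous active users

link_mb_s = link_gbps * 1000 / 8 * efficiency
print(f"Link ~{link_mb_s:.0f} MB/s total, ~{link_mb_s / peak_clients:.1f} MB/s per client at peak")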
Personally I'd only use enterprise-grade kit for enterprise applications, but maybe I'm just cautious. I remember seeing an EMC presentation comparing standard SSDs to storage-grade ones, showing how consumer drives wear out much more quickly in heavy read/write environments (although you'd expect them to say that!). I think it was to do with consumer drives writing in two directions while enterprise drives only use one, to save wear on the flash (although my memory is very hazy on this).
No RAID on a file server seems very risky. Yes, you can restore from backup, but would your users tolerate the downtime of losing a 600GB file server VM because one of the drives failed, and then waiting however long it takes to bring it back up? You're sacrificing one of your layers of data protection for speed; is it worth the gamble? And if this is a physical machine it'll take even longer to restore, so it's an even bigger gamble.
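The downtime is easy enough to estimate. With an assumed sustained restore rate (substitute whatever your backup software actually achieves), restoring 600GB is a couple of hours at best, before any rebuild or verification:

# Best-case restore time for a failed 600GB file server.
volume_gb = 600.0
restore_mb_s = 80.0   # assumed sustained restore rate over gigabit from disk

hours = volume_gb * 1024 / restore_mb_s / 3600
print(f"Best-case restore: {hours:.1f} hours")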
The SSD Company - STEC - CellCare Technology page talks about STEC's technology for extending the life of MLC flash (other manufacturers use other techniques), but it all depends on how much data you write to the device. EMC look at it from the point of view of a large enterprise writing several hundred GB of data each day, rather than a school that may write 5-10GB per day, which is not much for an SSD. So as long as wear levelling works correctly, the life of the drive may be longer than the manufacturer suggests; it very much depends on the actual amount of data written. As can be seen on the STEC page, enterprise devices have a much longer working life, but I know of a large number of commercial hosting and service providers who use Intel 320s in their servers.
Excellent discussion chaps, really given me some great ideas.
I've been using the Intel 320s for a while now and they are great drives. Our file server probably gets less than 5GB written to it a day, so I really don't think the write lifetime will be a problem. I'm sure I read somewhere that you'd have to write data 24/7 for something like 7 years to reach the limit.
We've had our SIMS server on an SSD for 2 years now, and that probably gets more IO activity. No problems so far (touch wood).
Our environment is moving towards Hyper-V for all machines except the domain controllers.
Interesting discussion. I have to say I'm in the camp of not running a server without some form of RAID. I'd also be wary of the read/write limits on a file server. A couple of questions do come to mind (I could google the answers...)
1) What's the price difference between enterprise-grade SSDs and 15k SAS drives?
2) Do hybrid drives (standard drives with an SSD cache) work in RAID?
My gut instinct is that current SSDs probably can't compete with SAS in terms of price/performance?
The price difference is a *lot*. An Intel 320 600GB is about £500; an HP enterprise 600GB is about £2,000.
DFS is one of those technologies that sounds wonderful, but when I read threads where it's clogged up with a 20GB backlog it doesn't inspire confidence. Maybe those are just the unlucky ones?
SSD use in servers is not just down to the drives.
Replacing SAS with SSDs in your old shelf will work, and you'll get a bit of a performance upgrade, but it's susceptible to the SSD-killing issues mentioned above, and it won't make best use of the SSDs.
Proper SSD SAN controllers designed for RAID will both maximise the performance of the SSDs and increase the lifespan of the drives.
Hence they cost a $%^$& fortune.
Here's the thing though: a 600GB SAS2 drive is £250 (WD XE 600GB 2.5" SAS Internal Hard Drive (WD6001BKHG) - www.misco.co.uk). Put into a decent RAID array, which is going to be faster, more reliable and cheaper - the SAS or the SSD?
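Putting very rough numbers on the price/performance question (the prices are the ones quoted in this thread; the IOPS figures are my own ballpark assumptions for each class of drive):

# Crude £/GB and £/IOPS comparison. Prices are from this thread;
# the IOPS figures are rough assumptions per drive class.
drives = {
    "WD XE 600GB SAS":     {"price": 250,  "iops": 200},     # assumed for a 10k SAS disk
    "Intel 320 600GB SSD": {"price": 500,  "iops": 20000},   # assumed consumer-SSD ballpark
    "HP enterprise 600GB": {"price": 2000, "iops": 40000},   # assumed enterprise-SSD ballpark
}
for name, d in drives.items():
    print(f"{name}: £{d['price'] / 600:.2f}/GB, £{d['price'] / d['iops'] * 1000:.2f} per 1000 IOPS")

On raw capacity the SAS disk wins comfortably; on random IOPS per pound it isn't close in the other direction.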
I'm looking at SSDs now for desktops and laptops - they're part of my min. spec for new machines - but for servers I still don't think the time is right. Even at the other end of the scale, I'm sure WD VelociRaptors or SSD hybrids would be a better bet.
My experience across all the clients we run in school is that the SSDs have been far more reliable than hard disks.
Our limited experience of SSDs in servers (one heavily used web server and one SIMS server) has also been good over the last few years, both using Intel X25-M drives. The web server in particular saw about a 2000% increase in MySQL speed. Amazing, and surely the future.