tmcd35 (17th July 2014)
"Are SSDs still too expensive for mass storage?"
They are for me
I was talking to a server supplier I've purchased from previously, and they said they're now supplying hybrid storage: you have 8 drive bays in a server for storage, you fit two SSDs and fill the rest with 15k drives, and you use the SSDs as a giant cache.
That's the theory, anyway. I haven't looked into it any further, but I plan to, as I need to buy 2 servers soon and can't afford all-SSD. All-SAS storage seems a bit "rubbish" now that we have SSDs in desktops, though.
lol talk to Dell directly. I have every confidence they will pretty much destroy most comparable quotes as well as throw in a 5 year warranty.
Mmm, wonder if I mis-explained the purpose of this thread? I'm not really looking at quotes at the moment. I was hoping for some EduGeek expertise on the implementation side: Storage Spaces vs RAID, iSCSI vs SMB v3, that sort of thing.
My current thinking is to implement a JBOD with Storage Spaces. SSD looks too expensive; 900GB 15k SAS drives look inviting. The question is how I partition and advertise that space out, and how I repurpose my existing server for redundancy.
Also, what kind of CPU/RAM would people recommend for a storage server/SAN controller (Windows Server 2012 based)?
Also, if I go SSD for the OS: first, is there any real point in doing that, and second, is it really worthwhile going for two mirrored drives for the OS?
Last edited by tmcd35; 17th July 2014 at 08:10 AM.
I would suggest looking at tiered storage - you don't have to run the same disks for everything.
We run things like our RDS server and SIMS VHDs from SSD (along with mandatory profile images).
We then run the rest of our data from 10k rpm disks (both lower-demand VHDs and our normal file storage). 15k rpm disks are somewhat pointless - more spindles is better than faster-spinning disks.
Storage Spaces is basically software RAID - when you set a space up, you choose the resiliency scheme you want, such as mirrored or parity. So you effectively just set them up how you would RAID.
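To give a feel for it, creating a pool and a mirrored space is only a couple of cmdlets. Rough sketch only - the pool and disk names below are made up, and you should check the subsystem name on your box with Get-StorageSubSystem first:

```powershell
# Pool up all the eligible disks, then carve out a mirrored space.
# "VMPool" and "VMData" are example names, not anything standard.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VMPool" `
                -StorageSubSystemFriendlyName "Storage Spaces*" `
                -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "VMPool" `
                -FriendlyName "VMData" `
                -ResiliencySettingName Mirror `
                -UseMaximumSize
```

You can do the same through Server Manager's GUI if you prefer - same concepts, same choices.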
With the speed of CPUs what they are now, the CPU part isn't so important any more. I'd go for a minimum of 8GB RAM but RAM is pretty darn cheap now, so 16GB would be a good base amount to go with.
One thing to think about is the iSCSI vs SMB 3.0 storage question. iSCSI is pretty much a "standard" now, and Windows Server 2012/R2 has the capability to act as an iSCSI target built in. However, it's still more complex than simply running SMB 3.0 shares - though those, being newer, are less well known.
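For reference, standing up the built-in iSCSI target is only a handful of cmdlets. The paths, target name and initiator IQN below are just examples (and note VHDX-backed LUNs are a 2012 R2 thing; plain 2012 uses .vhd):

```powershell
# Install the target role, create a VHDX-backed LUN, and map it to an
# initiator (the Hyper-V host that will mount it).
Install-WindowsFeature FS-iSCSITarget-Server
New-IscsiServerTarget -TargetName "HyperVHosts" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv1.example.local"
New-IscsiVirtualDisk -Path "D:\iSCSI\LUN1.vhdx" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVHosts" -Path "D:\iSCSI\LUN1.vhdx"
```

SMB 3.0, by contrast, is literally just a file share - which is a big part of its appeal.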
The question of redundancy is a difficult one. If you're going down this sort of route, the only way of doing it is by using Storage Spaces and Scale-Out File Server. Basically, you have to have 2 servers with no disks in them (other than the OS), and then have dual-homed SAS storage arrays connected to both servers. There aren't many of those arrays on the market yet either. These guys seem to be the main supplier! Server 2012 R2 Storage Spaces
Some instructions for creating such a set up are available here: Deploy Clustered Storage Spaces
The way we've done it, as we couldn't justify spending even more money on hardware, was to have 1 storage server with everything on it, used as the primary. We then had a second, identical server which the file storage (i.e. home directories and shared drives) was set up to replicate to via DFSR. The VHDs are backed up nightly to that server via a normal backup solution (bearing in mind that we had to use fixed-size VHDX files rather than dynamic ones).
So it's somewhat redundant, but not perfect. In our case that was down to my not being involved in the purchase of the original server, so I had to make do with what was bought.
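For anyone wanting to copy the DFSR part, the gist in PowerShell (using 2012 R2's DFSR module - the server names and paths here are just examples) is:

```powershell
# Two-member replication group for the file shares only - not the VHDs,
# since DFSR won't cope with in-use VHD files.
New-DfsReplicationGroup -GroupName "FileStorage"
New-DfsReplicatedFolder -GroupName "FileStorage" -FolderName "HomeDirs"
Add-DfsrMember          -GroupName "FileStorage" -ComputerName "FS1","FS2"
Add-DfsrConnection      -GroupName "FileStorage" `
    -SourceComputerName "FS1" -DestinationComputerName "FS2"
Set-DfsrMembership -GroupName "FileStorage" -FolderName "HomeDirs" `
    -ComputerName "FS1" -ContentPath "D:\HomeDirs" -PrimaryMember $true
Set-DfsrMembership -GroupName "FileStorage" -FolderName "HomeDirs" `
    -ComputerName "FS2" -ContentPath "D:\HomeDirs"
```

The same can be done through the DFS Management console if you'd rather click through it.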
Thanks @localzuk, that's just the kind of discussion I'm looking for!
Tiered storage is a worry, albeit probably a needless one (as in I'm probably making it needlessly complicated). I have to start thinking about how many of each kind of disk is required, their RAID levels, total capacity, which VHDs are stored in which tier, etc. Since we're talking about 10-16 spindles regardless, and 15k drives shouldn't be that expensive on the budget, I almost favour the JBOD method. Throw the disks at the server and sort it out later...
Reading up on Storage Spaces, it appears 2012 R2 has added the ability to use SSDs as a write-back cache. Now, that is interesting. Reduce the number of SAS drives and introduce some SSD for cache. The question is how much SSD cache would be appropriate? 10% of the total array size? More? Less?
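From what I've read so far, the write-back cache is set per virtual disk when you create it, and it's meant to be small - the default is only 1GB, since it's there to absorb random write bursts rather than act as a 10%-of-array read cache (that's what tiering is for). Something like this, assuming a pool called "VMPool" - the tier and cache sizes are just examples:

```powershell
# 2012 R2: define SSD and HDD tiers in the pool, then create a tiered,
# mirrored space with a 5GB SSD write-back cache (the default is 1GB).
$ssd = New-StorageTier -StoragePoolFriendlyName "VMPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "VMPool" `
    -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMData" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 5GB
```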
I think I'm favouring Storage Spaces over traditional RAID because of the flexibility it brings when growing volume sizes. It's the Windows answer to ZFS, and that sounds really useful in a VM environment.
We currently use SMB v2 shares for everything. Moving to SMB v3 would be a simple upgrade for us. That said, I wonder if introducing iSCSI and Cluster Shared Volumes would be more beneficial for our virtual hosts long term? Better failover support?
I was thinking of moving our core file data (home drives, public, etc.) away from an SMB share on the file server and into a VHD served up by a VM. I'm now wondering if that might introduce a bit of a bottleneck. Would a couple of DFS shares between the old and new servers be a better solution for user data?
In terms of the old server - I just don't want to get rid of it. Also, our file server is the single point of failure at the moment. I think something along the lines of CSV and/or DFS is the answer to the redundancy problem.
Anyone know if VHDs can be run from within a DFS share?
VHD/VHDX files won't run from inside a DFSR group (or at least, replication won't work).
CSVs won't add any redundancy on their own, I don't think, unless you use a third-party tool (such as DataKeeper by SIOS). The design is for multiple Hyper-V nodes to connect to a single disk, as far as I'm aware. You'd still end up needing something like Scale-Out File Server if you wanted to do it natively in Windows.
With regards to serving files - I don't think the bottleneck is such an issue, so long as you have plenty of network IO available. We've got 10GbE on our storage and Hyper-V nodes, so we're miles away from bottlenecking. However, it really depends on your usage! We could currently get away with a pair of 1GbE connections per server, to be honest!
Regarding SSD cache - no idea! Never looked at this concept to be honest.
Last edited by localzuk; 17th July 2014 at 09:35 AM.
Mmmm, might forget iSCSI then and stick with SMB. I could always set up a nightly Robocopy between servers for backup - AFAIR VSS allows live VHDs to be copied this way (I'm sure I've done it). Not the instant-on solution at failover I'd like, but manually re-pointing a VM's VHD location and rebooting is quicker than rebuilding hardware and restoring from backup.
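The nightly copy itself would just be something like this (paths and server names are examples - and one caveat I need to check: Robocopy doesn't do VSS snapshots itself, so in-use VHDs may need the VMs saved/exported or a snapshot taken first to get a consistent copy):

```powershell
# Mirror the VHD store to the second server overnight; /R and /W stop it
# hanging for ages on any file that's still locked.
robocopy D:\VHDs \\FS2\VHDBackup /MIR /R:2 /W:5 /LOG:C:\Logs\vhd-backup.log
```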