Not sure if this was the right place, so feel free to move it, mods, if it isn't!
I'm currently racking my brains trying to find a solution to our storage problem across the entire server system here, so I'm looking for some guidance/help :)
Currently all the servers run their own storage locally, but they are all hitting capacity. Rather than just buying bigger drives for them, I want to set the school up with something it can grow with for the next five or so years.
I have been speaking to a supplier and so far he has recommended an HP P2000 SAS SAN with twelve 600GB SAS drives in it. Whilst this seems to cover what I am after, I have no experience of SANs in general or that bit of kit in particular.
Attached is a Word doc giving a rundown of what we have here and an idea of what we want to do (I'm probably not making much sense in this post! :D). Any ideas/advice/comments are most welcome, especially from people who have already put in something similar.
12 600GB SAS drives? If you *need* the performance, I'd be amazed.
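Worth noting that the headline 7.2TB raw in that quote shrinks a fair bit once RAID is applied. A quick Python sketch (the function name and the one-hot-spare assumption are mine, not from the quote) of the usable space you'd roughly get from twelve 600GB drives at common RAID levels:

```python
# Rough usable-capacity figures for a 12 x 600 GB array.
# Drive count and size are from the quote in this thread; the RAID
# levels shown are just common options, not what HP would ship.

def usable_gb(n_drives, drive_gb, level, hot_spares=0):
    """Approximate usable capacity in GB for a given RAID level."""
    n = n_drives - hot_spares
    if level == "raid10":
        return (n // 2) * drive_gb   # mirrored pairs: half the spindles
    if level == "raid5":
        return (n - 1) * drive_gb    # one drive's worth of parity
    if level == "raid6":
        return (n - 2) * drive_gb    # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid10", "raid5", "raid6"):
    print(level, usable_gb(12, 600, level, hot_spares=1), "GB")
```

So with a hot spare and RAID 6 you're looking at roughly 5.4TB usable, not 7.2TB - worth keeping in mind when comparing it against cheaper boxes.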
In your spec: "We are looking for a storage solution that can integrate with all of the above systems in a simple, easy-to-manage way. The system needs to have at least a basic level of fault tolerance as well as the ability to grow to a significant size through upgrades/add-ons etc."
So it depends what you mean by "basic level of fault tolerance". I'd presume RAID, redundant power supplies and dual (or more) gigabit NICs?
Look at some of the iSCSI-capable NAS-type devices out there. My preference is Thecus boxes (i4500s or the 8800 Pro) - I've had good experience with them. Others seem to like QNAP. A slightly more expensive option is the Enhance Technology UltraStor RS8 IP-4: much the same kind of thing but with four gigabit NICs. As bare boxes, the Thecus i4500s cost around £600 and the 8800s £1,500, while the RS8 will knock you back £2,400 ish. You can probably buy several of them for the price of the HP, and then replace them every year for the price of support on the HP.
How many clients is this lot serving? That way you can target the storage at what it's actually being used for, rather than the perceived server back-end use.
Thanks for the replies.
This is serving around 375 computers at the moment, although adding 30+ more this summer has been mentioned.
As for the fault tolerance, yes - basically redundancy in most things: RAID, PSUs, networking, etc. I didn't want to be too specific with the supplier, as I wanted to see what they would recommend.
I like the sound of those NAS boxes, and I agree the amount of money quoted for the HP kit has made this project a little too expensive!
We are using an 8800 Pro to serve network storage areas for around 3,000+ users, probably no more than 800 concurrently. The storage is provisioned as iSCSI to a virtualised W2008 server which manages the shares. We also use two i4500s to serve out backup datastores for the VMware hosts, but they also host some lower-grade (not so critical) live VMs. We do not see performance bottlenecks on these devices with enterprise-grade SATA drives.
I'll be adding another 8800 Pro in the summer to act as a mirror, synced two or three times a day to the primary - just for faster recovery if the worst happens (we do already back up to tape overnight, but restoring a few TB would take a day or two). We did for a while host VMs running SQL Server and Exchange on the i4500s - again with no performance hit - but for those applications we really want some live mirroring capability, which these devices don't yet offer. You're talking fairly serious money to put in something that does, unless you can do something really clever with Windows shadow volumes or carefully managed VM snapshots - something I hope to figure out eventually!
Here are some tips:
Go with cheaper 'nearline SAS' disks but a significant amount of read cache (32GB or so); this will save a lot of money while still giving you capacity and performance. You only really need high-RPM disks for intense database access.
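To put some rough numbers on the nearline-vs-15k trade-off, here's a quick Python sketch. The per-drive IOPS figures are ballpark rules of thumb, not vendor specs for any particular model - treat them as assumptions for illustration only:

```python
# Ballpark random-IOPS-per-spindle figures (rules of thumb, not
# vendor specs; the exact numbers are assumptions for illustration).
PER_DRIVE_IOPS = {"7.2k nearline SAS": 80, "10k SAS": 140, "15k SAS": 180}

def array_random_iops(n_drives, drive_class):
    """Crude aggregate estimate: spindle count x per-drive figure."""
    return n_drives * PER_DRIVE_IOPS[drive_class]

for drive_class in PER_DRIVE_IOPS:
    print(drive_class, array_random_iops(12, drive_class), "IOPS")
```

A 12-spindle nearline array comes in around half the random IOPS of the same count of 15k disks - which is exactly the gap a decent read cache papers over for anything that isn't a heavily hammered database.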
Get a unit that supports double-parity RAID (RAID 6), as there are problems running RAID 5 on large arrays. Make sure the machine has a good amount of battery-backed write cache to overcome the write penalties associated with parity-based RAID.
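The RAID 5 problem on large arrays is easy to show with a back-of-envelope calculation: during a rebuild you must read every bit on every surviving drive, and a single unrecoverable read error (URE) kills the rebuild. A Python sketch, assuming the commonly quoted 1-in-10^14-bits URE spec for nearline/desktop-class drives (an assumption, not a measured figure for any particular model):

```python
# Odds of hitting an unrecoverable read error (URE) while rebuilding
# a degraded RAID 5 set. One URE during the rebuild loses the array,
# since there is no second parity drive to recover from.

def rebuild_failure_probability(surviving_drives, drive_tb,
                                ure_rate_bits=1e14):
    """P(at least one URE while reading the whole surviving set)."""
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8
    p_all_reads_ok = (1 - 1 / ure_rate_bits) ** bits_to_read
    return 1 - p_all_reads_ok

# e.g. rebuilding a 12 x 2 TB RAID 5 after one drive failure:
print(f"{rebuild_failure_probability(11, 2.0):.0%}")
```

With big cheap drives that works out at well over a coin-flip's chance of losing the lot during a rebuild, which is exactly why RAID 6's second parity drive matters once arrays get large.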
Get a unit you can manage easily. There are even Windows-based units on the market running StarWind software, which enables some nifty features including high-availability clustering, and these are really easy to manage.
Make sure you get some advanced redundancy features: redundant power, fans and NICs. Remember that if this unit fails, everything stops working (unless you have a high-availability cluster).
Consider the SAN's virtualisation support too - you might want to do some Hyper-V clustering or VMware in the future.
<Edited by Dos_Box>