1st July 2012, 11:40 AM #1
SAN kit purchased, no turning back!
Exciting times...and thanks to all the guys here for patiently helping me out with suggestions.
Having been used to hosting Hyper-V on local storage for years, I have, after much procrastination (and frustration with no solution fitting the need), purchased an HP MSA P2000 G3 SAN (on a 50 per cent cashback, and a good deal even without that - cheers Luke@Millgate!)
I chose the 6Gb SAS version because:
1) It's a good compromise on cost against FC and 10GbE
2) It should have better performance than vanilla iSCSI
3) I don't envisage the school using more than 4 hosts, so I can have multipath direct to the SAN without mucking about with redundant 2 x FC switches, 2 x 10GbE switches etc. - a bit simpler.
4) SAS HBA cards are a hell of a lot cheaper than 10GbE NICs or FC HBA cards.
I have a DL380 G7 already running Hyper-V (dual Xeon 5620, 24GB RAM, 8 x 300GB in RAID 10) hosting the finance SQL package, WSUS and Impero, and it's hardly sweating at the moment.
So I will purchase another 2 x DL380 G7 (also dual Xeon 5620, 2 x 146GB 15k, 64GB RAM) as hosts, plus 10GbE NICs for VM-to-LAN communication.
I'll bring the existing DL380 G7 up to spec as well, then transfer the VMs to the SAN. With some RAM carried over, I should end up with 3 fairly well-specified Hyper-V hosts with 72GB RAM each. I'm also looking at using another HP server located in another block running a virtual DC, replicating all VMs with Veeam as a DR solution (not as comprehensive as a dual-SAN setup, but again cost was a factor).
Have a few questions if you guys don't mind:
I'm planning to run SIMS, Exchange, the finance SQL package (Correro), WSUS, intranet, Impero and RDS - I hope to move the more mission-critical items on the list once the MSA P2000 G3 has proved reliable, say by October. I am planning to buy 10 x 600GB 10k SAS, and maybe use the 6 x 300GB 10k SAS (which will be surplus from the existing DL380 G7) in there as well.
So with 10 x 600GB 10k SAS and 6 x 300GB 10k SAS, how would you carve up the RAID for the different access needs? Would you not bother with the existing 300GB SAS and get bigger/faster SAS drives instead? (Again, I did it for cost reasons and to reuse old components.)
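For what it's worth, here is a back-of-envelope sketch in Python of what one possible carve-up of those drives would yield. The split (RAID 10 on the 600GB drives, RAID 6 on the reused 300GB drives) is just one illustrative option, and real usable space will differ with vendor formatting overhead and spare assignment:

```python
# Rough usable-capacity check for carving up the P2000 shelves.
# Figures are illustrative only; real arrays lose some space to
# formatting overhead and any hot spares you assign.

def usable_gb(drives: int, size_gb: int, raid: str, spares: int = 0) -> int:
    """Approximate usable capacity for a single parity group / vdisk."""
    n = drives - spares
    if raid == "raid10":
        return (n // 2) * size_gb      # half the drives hold mirrors
    if raid == "raid5":
        return (n - 1) * size_gb       # one drive's worth of parity
    if raid == "raid6":
        return (n - 2) * size_gb       # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {raid}")

# One possible carve-up: a fast RAID 10 vdisk from the 600GB drives for
# SQL/Exchange, and RAID 6 from the reused 300GB drives for bulk/WSUS.
fast = usable_gb(10, 600, "raid10")   # 3000 GB
bulk = usable_gb(6, 300, "raid6")     # 1200 GB
print(fast, bulk)
```

On those numbers, the reused 300GB drives still buy a useful bulk tier for nothing, which seems to support keeping them.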
With 10GbE Ethernet for VM-to-LAN traffic, would you just bung all your VMs onto that one pipe for bandwidth, or would you use VLANs to carve up the 10GbE pipe (or again, keep it simple)?
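One rough way of framing the one-pipe question is a bandwidth budget. The per-service peak figures below are invented placeholders, not measurements, but the shape of the sum is the point:

```python
# Back-of-envelope check that one 10GbE pipe covers VM-to-LAN traffic.
# The per-service peaks are hypothetical placeholders, not measured.

PIPE_GBPS = 10.0

vm_peaks = {          # assumed steady-state peaks, in Gbit/s
    "RDS": 1.5,
    "Exchange": 0.5,
    "SIMS/SQL": 0.5,
    "WSUS": 1.0,
    "Intranet": 0.3,
}

total = sum(vm_peaks.values())
headroom = PIPE_GBPS - total
print(f"aggregate peak {total:.1f} Gbit/s, headroom {headroom:.1f} Gbit/s")
```

If your measured peaks come in anything like that, the single pipe has plenty of headroom, and VLANs become a segregation/management decision rather than a bandwidth one.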
Looking forward to setting this up ! :-)
Cheers all !
1st July 2012, 12:21 PM #2
We have a similar setup: one pool of fast SAS drives for active data in RAID 6, and a stack of SAS drives for a backup staging area and archive. The SAS interface is really good - faster by far than iSCSI unless you're doing 10Gb.
1st July 2012, 01:00 PM #3
That is reassuring - I was never sure vanilla iSCSI would have carried the school for 3-4 years if I implemented it now. 10GbE/FC was coming out at £23k-£40k for a decent, redundant setup (two switches, one SAN). The SAS solution, with a bit of poking about with quoting, is coming in under £15k, which leaves me enough to get some SSD drives into teachers' PCs as a bonus!
1st July 2012, 01:58 PM #4
Yeah, we went iSCSI at one place around 4-5 years ago, and when it came to doing the next place and SAS was available it was a no-brainer: it cut £20k off the price, added to the speed, and cut out the need for a whole bunch of extra switches. For our deployment it would have been stupid not to.
1st July 2012, 02:39 PM #5
Interesting post MrWu - I take it you and SYNACK are running pretty big establishments, with pretty hefty budgets. Makes the setup I inherited sound like peanuts: in 2007 it was a single Windows 2003 server and circa 80 clients on XP, with SIMS running in the admin office, also off an XP machine. It has now expanded to a 2008 Standard curriculum tower with a 2008 R2 box as second DC and SIMS server, a mix of Win 7 and XP clients (130 all told), about a dozen Macs and a Mac Mini Server. I suppose now I need to start exploring the ways forward for the next few years?
1st July 2012, 03:40 PM #6
The school I work in has 1100 students and 150 staff. Coming from a small corporate background, it was gigantic, lol. The network was very tired though: the servers were nearly on their last legs (one CCTV server is over 10 years old - still going, though), no proper UPS systems, cabling all over the place, an antivirus system that wasn't implemented properly, and no documentation. Six months down the line we have a Windows 2008 domain (soon to be native), Forefront 2010, APC UPSes protecting the servers (hopefully with a Borri 10kVA powering the whole server room as phase 2 in summer), UniFi wireless (school-wide in summer) and this Hyper-V project. A lot of the money came about when disasters happened after I started there (I'm a bad luck charm!) - power cuts, hardware failures, network failures - so I convinced the SMT that investing in a good network would stop the firefighting, and ICT could actually start proactively developing the school's future in ICT.
I suppose my advice is to talk to the good folks here, but focus on priorities first and take your time phasing your project. My mistake as the new boy was agreeing to the timescale for getting this done, so I've ended up with a hefty summer workload.
Get your network infrastructure right first. (Again, blinkered into replacing old servers and not auditing the cable infrastructure thoroughly, I underbudgeted and had to fight for more money to get the school recabled with OM3 fibre - a long-term investment that's worthwhile.)
Virtualisation is very popular, but it was also the thing that did my head in most - so many options and cost implications. You will get solutions thrown at you at £50k-plus that claim you could saw them in half and they'd still work, plus save loads of electricity. My argument is: does a school need millisecond failover, and what's the saving in power versus me frisking another £20k from my school when they desperately need PCs and projectors replacing? After all, teachers don't thank you for buying a £13k chassis switch.
So from my point of view, with the money I was given, how can I make maximum use of it? (In my case: good backups, a realistic SLA with quick rebuilds, and replicating to a server off site to run the important VMs if the SAN or server room burns down.)
And I have seen schools that could easily spend on a single core switch the money I set aside to replace all my edge and core. That's not to say it's wrong - those schools might have established different requirements.
No matter how small the network you inherited is, it's your baby, so be proud of it. :-)
Last edited by MrWu; 1st July 2012 at 07:41 PM.
1st July 2012, 05:41 PM #7
Our places are 500-600 students each with 200+ stations. These run stacks of software centralised: RemoteApps for the dirty SMS, plus local email, filtering, AD, AV, PaperCut, local websites etc. There is also video sharing with ClickView etc. These networks have grown to a reasonable size to remove as many limits as possible on things like storage space and speed.
While this does mean a decent amount of investment, it means we have a fast, robust network that can have new services chucked in without much hassle. It also means the kit lasts a rather long time if properly maintained, as it supports the standards of the time and has the overhead and upgrade potential to keep it going well into the future.
1st July 2012, 08:56 PM #8
Is it worth buying 10k/15k 2.5" HDDs these days, compared to the prices of SSDs?
1st July 2012, 09:19 PM #9
SSDs do everything faster, including dying (when they go, they lose everything instantly), hence the reluctance to use them as a primary storage medium for the time being. There are also still very few large RAID devices designed for SSDs, and ones that aren't will actually make them slower (than they could be - still faster than SAS, though), given the various levels of cache and command optimisation that add extra latency; so without a controller designed for SSDs you are not going to get the best performance out of them anyway. If I was using SSDs on a SAN I'd be looking at RAID 6 at least, preferably 1+5, to double the redundancy and give you a decent chance of avoiding a catastrophe.
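To make that redundancy comparison concrete, here is a small Python sketch of the worst-case number of simultaneous drive failures each layout is guaranteed to survive ("1+5" meaning mirrored RAID 5 sets, as suggested above):

```python
# Worst-case guaranteed fault tolerance per RAID layout.
# RAID 10 *can* survive several failures, but two failures landing in
# the same mirror pair kill it, so its guaranteed figure is only 1.

def failures_survived(level: str) -> int:
    """Simultaneous drive failures the array is guaranteed to survive."""
    guaranteed = {
        "raid5": 1,    # single parity
        "raid6": 2,    # dual parity
        "raid10": 1,   # worst case: both halves of one mirror pair
        "raid1+5": 3,  # even if two failures kill one RAID 5 side,
                       # the surviving side still tolerates one more
    }
    return guaranteed[level]

for level in ("raid5", "raid10", "raid6", "raid1+5"):
    print(level, failures_survived(level))
```

That extra margin is the argument for 1+5 with failure-prone media: a whole side can die without taking the data with it.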
Last edited by SYNACK; 1st July 2012 at 09:21 PM.
1st July 2012, 10:14 PM #10
Yeah, I was thinking of using SSD drives for my Remote Desktop Servers (they are pretty easy to reimage). The HP stuff is well expensive (the cheapest 100GB to fit a DL360 G7 is £400 each!)
1st July 2012, 10:40 PM #11
Buying SSDs from computer makers is some sort of joke/scam: they'll charge last year's prices for last year's models, and then add 50%.
1st July 2012, 10:51 PM #12
Generally speaking, SSD storage on entry-level and midrange arrays looks to be affordable only in the specific use case of SSDs as an option to improve on cache read/write speed. If the cost of engineering FC drives for entire disk shelves is driving a wholesale move to SAS, thanks to the reduced cost of SAS backplane connectivity and the ability to intermix SAS and SATA (as is the case with kit like the P2000), it's obvious which of SAS or SSD is the most cost-effective for entire disk shelves at the entry level for the foreseeable future. I think SSD is still niche for a few more years.
For server internal storage, cost and reliability are the big driving forces. If you want to go down the route of stuffing 2U servers full of disk rather than going for 2.5-inch drives in an external SAS or iSCSI array, then on a price-performance-capacity basis, is there anything on the SSD side that can beat a server chock full of 600GB enterprise SAS disks?
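Putting rough numbers on that question: the £400-per-100GB SSD figure comes from earlier in the thread, while the 600GB SAS price below is a guessed placeholder, so treat the ratio as illustrative only:

```python
# Crude £/GB comparison for the capacity argument above.

def pounds_per_gb(price_gbp: float, size_gb: int) -> float:
    return price_gbp / size_gb

ssd = pounds_per_gb(400, 100)   # HP 100GB SSD, price quoted up-thread
sas = pounds_per_gb(250, 600)   # hypothetical 600GB 10k SAS price

print(f"SSD: {ssd:.2f} GBP/GB, SAS: {sas:.2f} GBP/GB ({ssd / sas:.1f}x)")
```

Roughly an order of magnitude per gigabyte, which is why bulk capacity stays on spinning SAS for now.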
1st July 2012, 10:57 PM #13
We are working on something similar this summer, actually - a farm of 4 RDS servers with OCZ Vertex 4 drives in.
Will post up the results once we get everything in place.
1st July 2012, 11:01 PM #14
Cool! Are you using PCIe stuff? Shame the HPs only let you use their own-brand HDDs in the servers.
1st July 2012, 11:11 PM #15
Not this time round (we have PCI-E SSDs in all of our Hyper-V servers now, though). The biggest reason not to was that we will be using custom 2U cases, and half-height PCI-E SSD isn't cheap, so it's just normal 2.5" drives this time round.