17th February 2011, 05:11 PM #1
Server spec out for potential virtualisation project
So, I'm plotting for the future again and as part of our possible move to being an academy, we will end up taking on a bunch of server roles that the LEA currently provides, meaning our current set up isn't up to the job of it all.
Rather than faff around with adding more servers to what we have at the moment, and therefore chewing through more electricity I am thinking of putting everything onto a proper virtualised system.
For this, I am thinking of the following bits of kit:
2 x HP DL360 G7 servers with 36GB RAM, 2 x SATA hard disks (cheap and cheerful) - it has 4 1GbE connections also.
2 x HP DL370 G7 servers with 36GB RAM, 2 x 146GB 15k RPM SAS hard disks, 2 x FirePro V5800 graphics cards, and 4 1GbE connections (for terminal servers, with RemoteFX)
1 x Overland Storage S1000 with 12 x 300GB SAS drives, and with 4 x 10GbE SFP+ ports
What do people think about that sort of spec? It'd be running everything - so, file server, AD, database server, print server, various web based apps (eg Oliver), Exchange email, Moodle and whatever else I throw at it.
The 2 DL370's would be dedicated to Remote Desktop Services.
17th February 2011, 05:48 PM #2
I'm a big fan of virtualisation, but even though I have my SQL virtualised I always advise keeping SQL physical - and if you are going to virtualise SQL, don't run it in HA mode! This should be similar to my setup, although I'm having a cheap server on its own box, separate from the SAN.
What quotes did you get?
17th February 2011, 06:35 PM #3
Not got quotes yet. Just looking at spec.
17th February 2011, 06:59 PM #4
It seems a bit light on storage capacity given (afaict) that Overland unit doesn't have dedupe options.
This assumes you're slinging user data on there as well as VMs. If you're creating LUNs with different RAID setups, your usable storage gets chopped down even further.
17th February 2011, 07:30 PM #5
I'd also say that the 10GbE on the Overland is almost certainly overkill - I'm not sure you'd get past saturating a couple of GigE links in most setups. The other thing it may be worth doing is holding back for the dedicated RemoteFX add-in cards, which should provide better performance for RemoteFX than high-end graphics cards.
17th February 2011, 07:48 PM #6
The storage system can expand as and when needed. That'd give us a couple of TB space, which is way more than we use or need. (We use about 600GB at the moment in total for user data, and virtualised servers simply don't eat lots of space). Our network is simply not going to expand so fast that we can't plan and add an extra expansion unit to it.
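As a rough sanity check on that "couple of TB" figure - the thread doesn't say which RAID level, so RAID 6 across all 12 drives with no hot spare is my assumption here:

```python
# Rough usable-capacity estimate for the Overland S1000 shelf.
# Assumptions (not from the thread): RAID 6 across all 12 drives,
# no hot spare, "300GB" being a decimal (10^9 byte) marketing figure.
DRIVES = 12
DRIVE_GB = 300          # decimal gigabytes per drive
PARITY_DRIVES = 2       # RAID 6 loses two drives' worth to parity

usable_gb = (DRIVES - PARITY_DRIVES) * DRIVE_GB
usable_tib = usable_gb * 10**9 / 2**40

print(f"usable: {usable_gb} GB (~{usable_tib:.2f} TiB)")
```

So around 3TB usable before you start carving it into LUNs - comfortably above the 600GB currently in use.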
This would likely be needed by July - are those dedicated cards going to be out by then? On the 10GbE I can see your point; I wonder what the price difference is between the 1GbE units and the 10GbE ones.
17th February 2011, 08:01 PM #7
Although not with multiple hosts, each with multiple Gb cards, using the same SAN. The problem with multiple Gb cards in the SAN is that teaming isn't particularly great, nor is MPIO. I put 10Gb in our SAN this time round as I already have 4 hosts connected to it with two dedicated Gb cards each; throw in backup too and the extra bandwidth can be useful. It can get expensive depending on the 10Gb connection though, along with the not-very-bendy cables when using copper 10Gb!
18th February 2011, 01:28 PM #8
So, any other thoughts? The 2 terminal servers would be used by around 100 clients. Should I be looking at more of them, or more RAM? Or maybe splitting each in two, with two sets of hard drives, running one virtual server on each set of disks?
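For what it's worth, a back-of-envelope RAM sizing sketch for those two hosts - the per-session figure is purely an assumption and real usage depends heavily on which apps people run:

```python
# Rough RAM sizing for the two RemoteFX terminal servers.
# MB_PER_SESSION and OS_OVERHEAD_GB are assumed figures, not from
# the thread; measure a pilot group before committing.
USERS = 100
HOSTS = 2
MB_PER_SESSION = 250     # assumed average working set per RDS session
OS_OVERHEAD_GB = 2       # assumed base OS + services footprint

per_host_gb = (USERS / HOSTS) * MB_PER_SESSION / 1024 + OS_OVERHEAD_GB
print(f"~{per_host_gb:.1f} GB per host, vs the 36GB specced")
```

On those (admittedly rough) numbers, 36GB per host leaves a lot of headroom even if one host has to carry all 100 sessions during maintenance on the other.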
18th February 2011, 06:03 PM #9
I'd skip having any central storage for VM hard drive images and use your VM servers' local drives instead; then you don't have to worry about the connection between the VM server and your central storage device. You'd still need a decent-sized storage server, of course, but I'd simply build one - Antec do rack-mount cases if you require rack-mount equipment, and you should be able to fit a good number of 2 or 3TB disks in those. A decent (£1,000-ish) RAID card should give good performance without needing SAS drives.
18th February 2011, 06:47 PM #10
@dhicks: I'd disagree about the centralised storage unless you have some way of mirroring the VM storage so you have a copy on each host. That would give you higher availability, if not live migration or HA.
Personally, I would look at getting two identical storage servers and replicating the data, so that you eliminate a single point of failure. The 10GbE NICs are overkill too, because you'll never shift data off the disks in that box quickly enough - the price of a suitable switch (of which you ideally want two) is going to be similar to that of the storage, and you haven't started to factor in 10GbE NICs for your servers.
18th February 2011, 07:16 PM #11
I'd use DRBD, which mirrors block devices over the network.
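A minimal sketch of what a DRBD resource looks like, assuming 8.3-era config syntax; the hostnames, addresses and disk paths below are placeholders, not from this thread:

```
# Hypothetical /etc/drbd.d/vmstore.res (same file on both nodes)
resource vmstore {
    protocol C;                   # synchronous: a write completes only
                                  # once both nodes have it on disk
    on vmhost1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;      # local array backing the VM images
        address   192.168.1.11:7789;
        meta-disk internal;
    }
    on vmhost2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.12:7789;
        meta-disk internal;
    }
}
```

Protocol C is the synchronous mode you want for VM images, since an acknowledged write is guaranteed to be on both boxes.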
For storage of user files, I'd go for one 10*2TB-disk file server with a hardware RAID card and one 10*3TB-disk backup server running software RAID. Hardware failure (other than individual hard drives, which RAID takes care of, obviously) should be very rare, and the backup server could take over for the time it took you to get the main file server back up; it might just run a bit slower. The backup server could then give users a file share containing versioned copies of their files - yesterday's, last week's, etc. - so restoring an old version of a file is a simple copy-and-paste operation.
I would look to getting two identical storage servers and replicating the data so that you can eliminate a single point of failure.
18th February 2011, 07:53 PM #12
See, what you're suggesting flies in the face of pretty much all the advice I've ever seen about virtualisation projects, the idea being that nothing is restricted to a single device, so for HA you can switch to a different host in minutes without worrying about VMs being tied to one server or another.
The reason I specced 10GbE cards on the Overland is that a) we'd be getting 10GbE modules for our core switch anyway and b) why faff with port trunking if you can just do it in one connection?
Also, the reason I tend to go for devices designed for the task (e.g. Overland Storage or similar) is that I know the company has spent a fair amount of time making sure it does what I want, and they know the product and can support it properly, rather than relying on me learning the nuts and bolts and doing it myself.
18th February 2011, 08:14 PM #13
@localzuk: I'd agree with picking a 'proper' device. There are some extremely cheap storage solutions from Synology, Thecus and Qnap on the XenServer HCL too.
I'm still not convinced that you would saturate a trunked 1GbE setup. The other advantage of a trunk is you can survive a port or cable failure.
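A quick illustration of why that's plausible: random VM I/O is usually IOPS-bound long before it's bandwidth-bound. The per-drive IOPS, I/O size and drive-count figures here are assumptions, not measurements from this kit:

```python
# Trunk bandwidth vs what a random VM workload actually pushes.
# Assumptions: ~180 random IOPS per 15k SAS drive, 10 data drives
# after RAID overhead, 8KiB average I/O size.
LINK_MBPS = 1000                     # one GigE link, megabits/s
links = 2
link_mb_s = links * LINK_MBPS / 8    # trunk capacity in MB/s

iops = 10 * 180                      # assumed array random IOPS
random_mb_s = iops * 8 / 1024        # MB/s at 8KiB per I/O

print(f"trunk: {link_mb_s:.0f} MB/s, random workload: ~{random_mb_s:.0f} MB/s")
```

Sequential streams (backups, cloning) are a different story, but for day-to-day VM traffic a 2x1GbE trunk has plenty of headroom on those numbers.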
Following a recent storage problem, I'm putting quite a lot of thought into re-designing my storage to add more resilience. When I figure it out I'll write about it, but I'm currently in the 'my head hurts' stage.
18th February 2011, 08:28 PM #14
The VMs would exist on two servers - DRBD mirrors writes between both devices, so what's written to one disk (or RAID array, etc) gets written to the other. Network-traffic wise, you only have to send writes, reads can happen locally on each machine, and at local SATA / RAID card speeds. DRBD works very well with Xen - when used with HA monitoring tools, Xen's documentation reckons something like 0.3 seconds of downtime when switching VMs between physical machines. I think (not sure) you might need identical hardware on the two machines you are switching between to do that, and there's a minute or two (?) beforehand of set-up time while the system syncs the RAM of the VMs before it switches.
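As a rough sanity check on that set-up time, here's a back-of-envelope estimate; the link speed and efficiency figures are my assumptions, not from Xen's docs:

```python
# Rough pre-migration sync time: copying a VM's RAM to the other host.
# Assumes one dedicated 1GbE link at ~80% efficiency - illustrative
# figures only; dirty pages re-copied during sync would add to this.
VM_RAM_GB = 4
LINK_MB_S = 125 * 0.8            # 1GbE less protocol overhead, MB/s

sync_seconds = VM_RAM_GB * 1024 / LINK_MB_S
print(f"~{sync_seconds:.0f}s to copy {VM_RAM_GB}GB of RAM")
```

So "a minute or two" of sync for a modestly sized VM sounds about right on gigabit, with the VM staying live for nearly all of it.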
For some reason (probably something to do with SAN sales people...), "server virtualisation" has come to mean "bunch of processing machines with central, shared storage". However, I don't think most schools require a setup like that.
18th February 2011, 08:37 PM #15
Wow, the Qnap stuff is very cheap! QNAP TS-859U-RP+/8TB