  1. #1

    localzuk

    Server spec out for potential virtualisation project

    So, I'm plotting for the future again and as part of our possible move to being an academy, we will end up taking on a bunch of server roles that the LEA currently provides, meaning our current set up isn't up to the job of it all.

    Rather than faff around with adding more servers to what we have at the moment, and therefore chewing through more electricity, I am thinking of putting everything onto a proper virtualised system.

    For this, I am thinking of the following bits of kit:

    2 x HP DL360 G7 servers with 36GB RAM and 2 x SATA hard disks (cheap and cheerful) - each has 4 x 1GbE connections as well.
    2 x HP DL370 G7 servers with 36GB RAM, 2 x 146GB 15k RPM SAS hard disks, 2 x FirePro V5800 graphics cards, and 4 x 1GbE connections (for terminal servers, with RemoteFX)
    1 x Overland Storage S1000 with 12 x 300GB SAS drives, and with 4 x 10GbE SFP+ ports

    What do people think about that sort of spec? It'd be running everything - file server, AD, database server, print server, various web-based apps (e.g. Oliver), Exchange email, Moodle and whatever else I throw at it.

    The two DL370s would be dedicated to Remote Desktop Services.

    Thoughts?

  2. #2

    I'm big on virtualisation, and even though I have my SQL virtualised I always advise physical SQL - and if you're going to virtualise SQL, don't put it in HA mode! This should be similar to my setup, but I'm having a cheap server on its own box, separate from the SAN.

    What quotes did you get?

  3. #3

    localzuk
    Not got quotes yet. Just looking at spec.

  4. #4


    pete
    It seems a bit light on storage capacity, given that (afaict) the Overland unit doesn't have dedupe options.

    This assumes you're slinging user data on there as well as VMs. If you're creating LUNs with different RAID setups, your storage gets chopped down even further.

  5. #5

    Soulfish
    I'd also say that the 10GbE on the Overland is almost certainly overkill - I'm not sure you'd get past saturating a couple of GigE links in most setups. The other thing that may be worth doing is holding back for the dedicated RemoteFX add-in cards, which should provide better performance than high-end graphics cards for RemoteFX.

  6. #6

    localzuk
    Quote Originally Posted by pete
    It seems a bit light on storage capacity, given that (afaict) the Overland unit doesn't have dedupe options.

    This assumes you're slinging user data on there as well as VMs. If you're creating LUNs with different RAID setups, your storage gets chopped down even further.
    The storage system can expand as and when needed. That'd give us a couple of TB space, which is way more than we use or need. (We use about 600GB at the moment in total for user data, and virtualised servers simply don't eat lots of space). Our network is simply not going to expand so fast that we can't plan and add an extra expansion unit to it.
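
    (As a rough back-of-envelope, assuming typical RAID levels rather than anything actually quoted: 12 x 300GB is 3.6TB raw, which works out at roughly 3TB usable in RAID 6 or about 1.8TB in RAID 10, before formatting overhead - so "a couple of TB" is about right.)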

    Quote Originally Posted by Soulfish
    I'd also say that the 10GbE on the Overland is almost certainly overkill - I'm not sure you'd get past saturating a couple of GigE links in most setups. The other thing that may be worth doing is holding back for the dedicated RemoteFX add-in cards, which should provide better performance than high-end graphics cards for RemoteFX.
    This would likely be needed by July - are those dedicated cards going to be out by then? On the 10GbE I can see your point; I wonder what the difference in price is between the 1GbE units and the 10GbE ones.

  7. #7
    DMcCoy
    Quote Originally Posted by Soulfish
    I'd also say that the 10GbE on the Overland is almost certainly overkill - I'm not sure you'd get past saturating a couple of GigE links in most setups. The other thing that may be worth doing is holding back for the dedicated RemoteFX add-in cards, which should provide better performance than high-end graphics cards for RemoteFX.
    Although not with multiple hosts, each with multiple Gb cards, using the same SAN. The problem with multiple Gb cards in the SAN is that teaming isn't particularly great, nor is MPIO. I put 10Gb in our SAN this time round, as I already have 4 hosts connected to it with two dedicated Gb cards each; throw in backup too and the extra bandwidth can be useful. It can get expensive depending on the 10Gb connection, though - along with not very bendy cables when using copper 10Gb!

  8. #8

    localzuk
    So, any other thoughts? The 2 terminal servers would be used by around 100 clients. Should I be looking at more of them, or more RAM? Or maybe splitting each of them in two, with two sets of hard drives and a virtual server running on each set of disks?

  9. #9

    dhicks
    Quote Originally Posted by localzuk
    1 x Overland Storage S1000 with 12 x 300GB SAS drives, and with 4 x 10GbE SFP+ ports
    I'd skip having any central storage for VM hard drive images and use your VM servers' local drives instead - then you don't have to worry about connecting up the VM server and your central storage device. You'd still need a decent-sized storage server, of course, but I'd simply build one - Antec do rack-mount cases if you need rack-mount kit, and you should be able to fit a good number of 2 or 3TB disks in those. A decent (£1,000-ish) RAID card should give good performance without needing SAS drives.

  10. #10

    Ric_
    @dhicks: I'd disagree about the centralised storage unless you have some way of mirroring the VM storage so you have a copy on each. This would give you higher availability, if not live migration or HA.

    Personally, I would look to getting two identical storage servers and replicating the data so that you can eliminate a single point of failure. The 10GbE NICs are overkill too, because you'll never shift the data off the disks in that box quickly enough - the price of a suitable switch (of which you ideally want two) is going to be similar to that of the storage, and you haven't started to factor in 10GbE NICs for your servers.

  11. #11

    dhicks
    Quote Originally Posted by Ric_
    I'd disagree about the centralised storage unless you have some way of mirroring the VM storage so you have a copy on each. This would give you higher availability, if not live migration or HA.
    I'd use DRBD, which mirrors block devices over the network.
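
    For what it's worth, the DRBD side is only a few lines of config. This is just a sketch - the resource name, hostnames, backing partition and addresses below are made-up placeholders rather than anything from the spec above:

        resource vmdata {
          protocol C;                    # synchronous: a write completes on both nodes before it is acknowledged
          on vmhost1 {
            device    /dev/drbd0;        # the mirrored block device the VMs actually use
            disk      /dev/sdb1;         # local partition backing the mirror
            address   192.168.10.1:7789;
            meta-disk internal;
          }
          on vmhost2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.2:7789;
            meta-disk internal;
          }
        }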

    Quote Originally Posted by Ric_
    I would look to getting two identical storage servers and replicating the data so that you can eliminate a single point of failure.
    For storage of user files, I'd go for one file server with 10 x 2TB disks and a hardware RAID card, plus one backup server with 10 x 3TB disks running software RAID. Hardware failure (other than individual hard drives, which would be taken care of by RAID, obviously) should be very rare, and the backup server could take over for the time it took you to get the main file server back up - it might just run a bit slower. This would also let the backup server give users a file share containing versioned copies of their files - yesterday's, last week's, etc. - so restoring an old version of a file is a simple copy-and-paste operation.
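
    To give an idea of what I mean by versioned copies, here's a rough sketch of a nightly job for the backup server. It assumes rsync is available and the paths (/srv/users, /backup/snapshots) are placeholders; unchanged files get hard-linked against the previous night's copy, so every dated folder looks like a full browsable snapshot but only changed files take up new space:

        #!/usr/bin/env python
        # Nightly snapshot of the user share (paths are illustrative placeholders).
        # rsync --link-dest hard-links unchanged files against the newest existing
        # snapshot, so each dated directory appears as a complete copy of the share.
        import datetime
        import os
        import shutil
        import subprocess

        SOURCE = "/srv/users/"               # live user data
        SNAPSHOT_ROOT = "/backup/snapshots"  # dated copies end up here
        KEEP = 30                            # number of old snapshots to retain

        def take_snapshot():
            existing = sorted(d for d in os.listdir(SNAPSHOT_ROOT)
                              if os.path.isdir(os.path.join(SNAPSHOT_ROOT, d)))
            dest = os.path.join(SNAPSHOT_ROOT, datetime.date.today().isoformat())
            cmd = ["rsync", "-a", "--delete"]
            if existing:
                # Hard-link unchanged files against the most recent snapshot.
                cmd.append("--link-dest=" + os.path.join(SNAPSHOT_ROOT, existing[-1]))
            subprocess.check_call(cmd + [SOURCE, dest])
            # Prune snapshots that have fallen outside the retention window.
            for old in existing[:-KEEP]:
                shutil.rmtree(os.path.join(SNAPSHOT_ROOT, old))

        if __name__ == "__main__":
            take_snapshot()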

  12. #12

    localzuk
    See, what you're suggesting flies in the face of pretty much all the advice I've ever seen about virtualisation projects, where the idea is that nothing is tied to a single device, so for HA you can switch to a different host in minutes and not have to worry about which server a VM lives on.

    The reason I specced 10GbE cards on the Overland is that a) we'd be getting 10GbE modules for our core switch anyway and b) why faff with port trunking if you can just do it in one connection?

    Also, the reason I tend to go for devices designed for the task (e.g. Overland Storage or similar) is that I know the company has spent a fair amount of time making sure it does what I want, and they know the product and can support it properly, rather than relying on me learning the nuts and bolts and doing it myself.

  13. #13

    Ric_
    @localzuk: I'd agree with picking a 'proper' device. There are some extremely cheap storage solutions from Synology, Thecus and Qnap on the XenServer HCL too.

    I'm still not convinced that you would saturate a trunked 1GbE setup. The other advantage of a trunk is you can survive a port or cable failure.

    Following a recent storage problem, I'm putting quite a lot of thought into re-designing my storage to add more resilience. When I figure it out I'll write about it, but I'm currently at the 'my head hurts' stage.

  14. #14

    dhicks
    Quote Originally Posted by localzuk
    See, what you're suggesting flies in the face of pretty much all the advice I've ever seen about virtualisation projects, where the idea is that nothing is tied to a single device, so for HA you can switch to a different host in minutes and not have to worry about which server a VM lives on.
    The VMs would exist on two servers - DRBD mirrors writes between both devices, so what's written to one disk (or RAID array, etc.) gets written to the other. Network-traffic wise, you only have to send writes; reads happen locally on each machine, at local SATA/RAID card speeds. DRBD works very well with Xen - used with HA monitoring tools, Xen's documentation reckons on something like 0.3 seconds of downtime when switching VMs between physical machines. I think (not sure) you might need identical hardware on the two machines you're switching between to do that, and there's a minute or two (?) of set-up time beforehand while the system syncs the VMs' RAM before it switches.

    For some reason (probably something to do with SAN sales people...), "server virtualisation" has come to mean "bunch of processing machines with central, shared storage". However, I don't think most schools require a setup like that.

  15. #15

    localzuk
    Quote Originally Posted by Ric_
    @localzuk: I'd agree with picking a 'proper' device. There are some extremely cheap storage solutions from Synology, Thecus and Qnap on the XenServer HCL too.
    Wow, the Qnap stuff is very cheap! QNAP TS-859U-RP+/8TB
