  1. #1 Paid_Peanuts

    Virtualisation Kit - your recommendation.

    Hello all,
    We have a budget of 40-45k to spend on some new virtualisation kit.

    If I lay out our scenario, can people suggest the kit we should be looking at?

    Scenario:
    • 2000 users, of which 200 are staff
    • storage capacity of 4-5TB for file storage
    • storage capacity for 2000 Exchange mailboxes (30MB for students, 200MB for staff; see the rough sum below)
    • storage I/O good enough to handle Exchange for 2000 users plus the file shares
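
    As a quick sanity check on the Exchange figure, a minimal sketch of the mailbox quota arithmetic in Python (the 1800/200 split is taken from the user counts above):

    Code:
        # Rough Exchange mailbox storage implied by the quotas above.
        students, staff = 2000 - 200, 200
        total_mb = students * 30 + staff * 200  # per-mailbox quota in MB
        print(total_mb / 1024)                  # ~91.8 GB before database overhead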


    We run:
    • 3 file servers
    • 2 domain controllers
    • SMS server
    • MOM server
    • application server
    • sharepoint server
    • SQL server


    If I have missed off any other info people need, please let me know.

    I am awaiting responses from Dell and IBM as to what they recommend, and would welcome people's comments.

    Cheers

  2. #2 Ric_
    So are you asking about virtualisation solutions or storage solutions?

  3. #3
    Personally I'd get a stack of blade servers and a Fibre Channel SAN, with an FC card in each blade and an FC switch in the blade chassis. That way each blade can use the SAN for all its storage needs.

    It's your preference as to what virtualisation software you use; we currently run a mix of VMware, Microsoft Virtual Server and Hyper-V. VMware is no doubt the best, but Hyper-V is more than competent and comparatively amazing value. Obviously you'd want to cluster the blades using your virtualisation software to provide some fault tolerance.

    Matt

  4. #4 Paid_Peanuts
    I know we are going to run ESX over 3 servers and have the rough specifications of those sorted.

    It's more the storage solution I am interested in finding out about, as this shared storage is going to serve all the virtual servers running across the 3 ESX boxes.

  5. #5 tmcd35
    We like HP servers here. I'm running a dual quad-core Xeon server (8 cores) with 16GB RAM as our primary ESX box. We are in the process of purchasing a single quad-core Xeon with 8GB RAM as a secondary, load-balancing server.

    We're also looking into SANMelody to control our DAS over iSCSI. Having played with the demo, I'd suggest looking into SANMelody if you haven't already got a good SAN in place. A cheap server or converted workstation, a couple of NICs and a SCSI card are all that's needed. For the DAS connecting to the SANMelody server, go for quality SAS drives if you can afford them. We're using SATA-II here, which is doing a remarkable job but can be a tad slow under high usage.

    We support about 1800 users across around 900 desktops/laptops/thin clients. We're looking at running about 15 virtualised servers across the two boxes in the end. Currently our 8-core beast barely breaks a sweat running the 6 servers already virtualised.

  6. #6 dhicks
    Quote Originally Posted by Paid_Peanuts View Post
    We have a budget of 40-45k to spend on some new virtualisation kit.
    So do we! Oh, sorry, missed that 'k' at the end there...

    I'd go for a good number of identical rackmount (or blade) machines. Fit each one with 3 hot-swap 300GB SAS drives, a proper hardware RAID controller with on-board cache capable of RAID 5, twin quad-core processors and 8GB of RAM. Looking quickly at the Dell website, those should cost around 3000 each. Get 10 (plus a spare). That'll give you around 5TB of storage and as much processor power as you could need (you'll need a rack, air conditioning and a good network switch, of course). Put an identical virtual file server on each machine and organise a load-balancing system so file requests are distributed: you could do something real-time and complex, or simply distribute users' file areas around the file servers (sketched below). Skip having a SAN - unnecessary overhead.
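
    A minimal sketch of that simple static distribution, assuming hypothetical server names and that each user's home area lives wholly on one file server:

    Code:
        # Sketch of the static load balancing described above: map each
        # user's home area onto one of several identical virtual file
        # servers. Server names are hypothetical examples, not from the thread.
        import hashlib

        FILE_SERVERS = ["fs01", "fs02", "fs03", "fs04", "fs05"]

        def home_server(username):
            """Pick a file server for a user, stable across sessions."""
            digest = hashlib.md5(username.lower().encode()).hexdigest()
            return FILE_SERVERS[int(digest, 16) % len(FILE_SERVERS)]

        print(home_server("jsmith"))  # always maps jsmith to the same server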

    --
    David Hicks
    Last edited by dhicks; 29th February 2008 at 02:19 PM.

  7. #7 DMcCoy
    Quote Originally Posted by dhicks View Post
    Skip having a SAN - unnecessary overhead.
    Not if you want any migration, load-balancing or high-availability features.

  8. #8 Ric_
    As DMcCoy says, a SAN is essential, and ideally you will want two storage devices and two switches to provide redundancy and resilience. Remember that you will be counting on the SAN for ALL your servers: if the SAN disappears, so do ALL your servers!

  9. #9 dhicks
    Quote Originally Posted by DMcCoy View Post
    Not if you want any migration, load-balancing or high-availability features.
    I figured that with the setup above, if a hard drive goes you simply pull it out, slot another one in its place and leave the hardware RAID controller to rebuild. If a motherboard goes, you pull all the hard drives out of that machine, slot them into the spare and boot up. You migrate virtual machines around via the management console of whatever system you have, and you deal with load balancing at the VM level (distribute user file areas around your virtual file servers, or have a dedicated load-balancing VM that directs requests to the appropriate place).

    Edit: Hmm, I see where the people advocating SANs are coming from - if a machine goes bang, a spare or backup can take over with no physical access needed. However, this might be over-engineered for a school: an overnight file server failure can wait until first thing in the morning when you get in to swap hard drives around, and during the day a few minutes' downtime isn't going to matter too much (and anyway, a physical server failing so badly it stops should be a very rare occurrence). Stuff like web servers can be duplicated at the VM level and stored on separate physical servers for high availability.

    --
    David Hicks
    Last edited by dhicks; 29th February 2008 at 03:31 PM.

  10. #10 tmcd35
    Personally I'd go for a good-quality direct-attached SCSI RAID box, or two, connected to a reasonably specced server running SANMelody over iSCSI. RAID 50 with a couple of hot spares on the array. Maybe two SANMelody servers connected to the same DAS box(es) for a bit of redundancy.

    If you want to knock yourself out, blow the budget and admit to total paranoia, then replicate the above in a second physical building.

  11. #11 TronXP
    I personally think a SAN is not over-engineering for a school. If any of my servers goes down it impacts the way the school runs; staff rely on the system working. SAN and clustering give me the peace of mind that if there is a problem, users are not affected.

    It depends on your school and how highly they value uptime. Schools are pushing to use IT more and more in lessons, with some lessons built entirely around computer-based resources. As the network manager I need to ensure these lessons can be carried out.

    Personally I'm going to back IBM as well; I have been impressed with their support and hardware. So far there have been no issues bar a couple of hard drives: one was DOA and the other failed in the SAN. No problem - a new drive was supplied, the RAID took care of the lot and nobody noticed.

  12. #12 dhicks
    Quote Originally Posted by TronXP View Post
    I personally think a SAN is not over-engineering for a school. If any of my servers goes down it impacts the way the school runs; staff rely on the system working. SAN and clustering give me the peace of mind that if there is a problem, users are not affected.
    Fair enough - I did a bit more investigating into SANs over the weekend, and I can see how they would definitely be a benefit. However, I still don't think that fibre-channel, SAS-drive, multi-access, contract-supported SANs are appropriate for a school. Basically, the whole thing boils down to a cost/features trade-off, and I figure the money is better spent elsewhere.

    I came across ATA over Ethernet (AoE) as I was reading up on SAN stuff - the original poster should definitely take a look. It basically ditches the IP layer, and the associated complexity and overhead you get with iSCSI, and just uses Ethernet to transmit data. The important part is that this data is still (locally) switchable with a decent modern switch (one with backpressure and flow control, seemingly). I was pondering "hmm, why doesn't someone make an eSATA switch?", and AoE would seem to be the nearest thing (better, even, as you can aggregate 1Gbps Ethernet ports to provide whatever bandwidth you like, within limits). You get to put a system together with standard low-cost components you can buy anywhere, but still get the instant-failover facility that a SAN offers.
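
    To illustrate how little protocol is involved, here is a sketch based on my reading of the public AoE spec (illustrative only, not a working initiator): an AoE request is a raw Ethernet frame with EtherType 0x88A2 and a ten-byte header, where iSCSI would carry a whole TCP/IP stack.

    Code:
        # Sketch: build the fixed header of an AoE request as a raw Ethernet
        # frame. Field layout follows the published AoE spec as I read it
        # (version 1); this is illustrative, not a working initiator.
        import struct

        def aoe_frame(dst_mac, src_mac, major, minor, command, tag):
            eth = dst_mac + src_mac + struct.pack("!H", 0x88A2)  # AoE EtherType
            ver_flags = 1 << 4  # AoE version 1 in the high nibble, no flags set
            hdr = struct.pack("!BBHBBI", ver_flags, 0, major, minor, command, tag)
            return eth + hdr  # an ATA command payload would follow here

        # Broadcast a query-config request (command 1) to find shelf 0, slot 0
        frame = aoe_frame(b"\xff" * 6, b"\x00\x16\x17\x00\x00\x01", 0, 0, 1, 1)
        print(len(frame), "bytes of framing - no IP, no TCP")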

    It strikes me that making a SAN device now involves getting a decent-sized PC case, a motherboard with a bunch of SATA connections, a multi-port gigabit Ethernet PCI Express card and a wodge of SATA drives to use in RAID 10 or 50. That should be very doable for under 2000 - say, for a 1.5TB RAID 50 server (the arithmetic is sketched below). Five of those would meet the original poster's storage requirements nicely. Another 10,000 should easily cover a decent switch and some processors-in-boxes to be your virtual servers.
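
    For what it's worth, the capacity arithmetic behind the 1.5TB figure (a sketch; the drive size and group count are illustrative):

    Code:
        # Sketch: usable capacity of the RAID layouts mentioned above.
        # RAID 10 mirrors pairs (half the raw space); RAID 50 stripes across
        # RAID 5 groups, losing one drive's capacity per group.
        def raid10_usable(drives, size_gb):
            return (drives // 2) * size_gb

        def raid50_usable(drives, groups, size_gb):
            return (drives - groups) * size_gb

        # e.g. eight 250GB SATA drives as two RAID 5 groups striped together:
        print(raid50_usable(8, 2, 250))  # 1500 GB - the ~1.5TB box above
        print(raid10_usable(8, 250))     # 1000 GB under RAID 10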

    --
    David Hicks

  13. #13 sahmeepee
    Quote Originally Posted by dhicks View Post
    It strikes me that making a SAN device now involves getting a decent-sized PC case, a motherboard with a bunch of SATA connections, a multi-port gigabit Ethernet PCI Express card and a wodge of SATA drives to use in RAID 10 or 50. That should be very doable for under 2000 - say, for a 1.5TB RAID 50 server. Five of those would meet the original poster's storage requirements nicely.
    When I got quotes on SANs, one company did put something like that forward with a commercial SAN OS running on top of it. It was the cheapest quote of the lot, but ultimately I didn't go with it because it seemed too important a device to risk problems with. For not much more than your figure you can get a commercial iSCSI SAN with a bazillion bays and loads of redundancy features (RAID 5/6 etc., redundant fans, PSUs, NICs...), all backed up with a healthy warranty and support, as well as a dedicated OS and tons of hardware acceleration.

    Whether your environment would need FC rather than iSCSI, or SAS drives rather than SATA, is something only you would know. I would want to be very happy about the speed and reliability of a SAN before I used it to virtualise everything, though.
