29th February 2008, 01:23 PM #1
Virtualisation Kit - your recommendation.
We have a budget of £40-45k to spend on some new virtualisation kit.
If I lay out our scenario can people suggest kit we should be looking at?
- 2000 users of which 200 are staff
- storage capacity of 4-5 TB for file storage
- storage capacity for 2,000 Exchange mailboxes (30 MB per student, 200 MB per staff member)
- storage I/O good enough to handle Exchange for 2,000 users plus the file shares
- 3 file servers
- 2 domain controllers
- SMS server
- MOM server
- application server
- sharepoint server
- SQL server
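A quick back-of-envelope check of the mailbox figures above (quotas as listed; this ignores transaction logs, deleted-item retention and growth, so treat it as a lower bound):

```python
# Sanity check on the Exchange mailbox storage requirement
# (figures from the spec above: 2000 users, 200 of them staff).
STUDENTS = 2000 - 200        # 1800 student mailboxes
STAFF = 200
student_quota_mb = 30
staff_quota_mb = 200

total_mb = STUDENTS * student_quota_mb + STAFF * staff_quota_mb
print(f"Mailbox stores: {total_mb / 1024:.1f} GB")   # ~92 GB before logs and overhead
```

So the mailbox data itself is small next to the 4-5 TB of file storage; it's the I/O pattern, not the capacity, that will drive the storage choice.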
If I have missed any other info that people need, please let me know.
I am awaiting responses from Dell and IBM as to what they recommend, and would welcome people's comments.
29th February 2008, 01:45 PM #2
So are you asking about virtualisation solutions or storage solutions?
29th February 2008, 01:48 PM #3
Personally I'd get a stack of blade servers and a Fibre Channel SAN, with an FC card in each blade and an FC switch in the blade chassis. That way each blade can use the SAN for all its storage needs.
It's your preference as to what virtualisation software you use; we currently run a mix of VMware, Microsoft Virtual Server and Hyper-V. VMware is no doubt the best, but Hyper-V is more than competent and comparatively amazing value. Obviously you'd want to cluster the blades using your virtualisation software to provide some fault tolerance.
29th February 2008, 01:49 PM #4
I know we are going to run ESX over three servers and have the rough specifications of those sorted.
It's more the storage solution I am interested in finding out about, as this shared storage is going to serve all the virtual servers running across the three ESX boxes.
29th February 2008, 02:12 PM #5
We like HP servers here. I'm running a dual quad-core Xeon server (8 cores) with 16 GB RAM as our primary ESX box, and we are in the process of purchasing a single quad-core Xeon with 8 GB RAM as a load-balancing (?) secondary server.
We're also looking into SANMelody to control our DAS over iSCSI. Having played with the demo, I'd suggest looking into SANMelody if you haven't already got a good SAN in place: a cheap server or converted workstation, a couple of NICs and a SCSI card are all that's needed. For the DAS connected to the SANMelody server, go for quality SAS drives if you can afford them. We're using SATA-II here, which is doing a remarkable job but can be a tad slow under high usage.
We support about 1,800 users across around 900 desktops/laptops/thin clients, and are looking at running about 15 virtualised servers across the two boxes in the end. Currently our 8-core beast barely breaks a sweat running the six servers already virtualised.
29th February 2008, 02:14 PM #6
So do we! Oh, sorry, missed that 'k' at the end there...
Originally Posted by Paid_Peanuts
I'd go for a good number of identical rackmount (or blade) machines. Fit each one with 3 hot-swap 300 GB SAS drives, a proper hardware RAID controller with on-board cache capable of RAID 5, twin quad-core processors and 8 GB of RAM. Looking quickly at the Dell website, those should cost around £3,000 each. Get 10 (plus a spare). That'll give you around 5 TB of storage and as much processor power as you could need (you'll need a rack, air conditioning and a good network switch, of course). Put an identical virtual file server on each machine and organise a load-balancing system so file requests are distributed (you could do something real-time and complex, or simply spread users' file areas around the file servers). Skip having a SAN; it's unnecessary overhead.
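The arithmetic behind that suggestion works out roughly as follows (a sketch using the figures in the post; real formatted capacity comes in lower, which is presumably where the "around 5 TB" figure comes from):

```python
# Rough capacity and cost check for the 10-box layout suggested above.
# RAID 5 across 3 drives leaves 2 drives' worth of usable space per machine.
machines = 10
drives_per_machine = 3
drive_gb = 300
price_per_box = 3000            # GBP, rough Dell figure from the post

usable_gb = machines * (drives_per_machine - 1) * drive_gb
cost = (machines + 1) * price_per_box   # 10 boxes plus the spare
print(usable_gb, cost)                  # 6000 GB raw usable; GBP 33,000
```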
Last edited by dhicks; 29th February 2008 at 02:19 PM.
29th February 2008, 02:23 PM #7
Not if you want any migration, load-balancing or high-availability features.
Originally Posted by dhicks
29th February 2008, 02:42 PM #8
As DMcCoy says, a SAN is essential, and ideally you will want two storage devices and two switches to provide redundancy and resilience. Remember that you will be relying on the SAN for ALL your servers - if the SAN disappears, so do ALL your servers!
29th February 2008, 02:46 PM #9
I figured that with the setup above, if a hard drive goes you simply pull it out, slot another one in its place and leave the hardware RAID controller to rebuild. If a motherboard goes, you pull all the hard drives out of the machine, slot them into the spare and boot up. You migrate virtual machines around via whatever management console your system has, and you deal with load balancing at the VM level (distribute user file areas around your virtual file servers, or have a dedicated load-balancing VM that directs requests to the appropriate place).
Originally Posted by DMcCoy
Edit: Hmm, I see where the people advocating SANs are coming from - if a machine goes bang, a spare or backup can take over with no physical access needed. However, this might be over-engineered for a school: overnight file server failures can wait until first thing in the morning when you get in to swap hard drives around, and during the day a few minutes' downtime isn't going to matter too much (and anyway, a physical server failing so badly it stops should be a very rare occurrence). Stuff like web servers can be duplicated at the VM level and stored on separate physical servers for high availability.
Last edited by dhicks; 29th February 2008 at 03:31 PM.
29th February 2008, 02:53 PM #10
Personally I'd go for a good-quality direct-attached SCSI RAID box, or two, connected to a reasonably specced server running SANMelody over iSCSI - RAID 50 with a couple of hot spares on the array. Maybe two SANMelody servers connected to the same DAS box(es) for a bit of redundancy.
If you want to knock yourself out, blow the budget and admit to total paranoia, then replicate the above in a second physical building.
3rd March 2008, 12:48 PM #11
I personally think a SAN in a school is not over-engineering. If any of my servers go down it impacts the way the school runs; staff rely on the system to work. SAN and clustering give me the peace of mind that if there is a problem, users are not affected.
It depends on your school and how important they view uptime. Schools are pushing to use IT more and more in lessons, with some lessons built entirely around computer-based resources. As the network manager I need to ensure these lessons can be carried out.
Personally I am going to back IBM as well; I have been impressed with their support and hardware. So far no issues bar a couple of hard drives: one was DOA and the other failed in the SAN. No problems - a new drive was supplied, the RAID took care of the lot and nobody noticed.
3rd March 2008, 09:46 PM #12
Fair enough - I had a bit more of an investigation into SANs over the weekend, and I can see how they would definitely be a benefit. However, I still don't think that Fibre Channel, SAS-drive, multi-access, contract-supported SANs are appropriate for a school. Basically the whole thing boils down to a cost/features trade-off, and I figure the money is better spent elsewhere.
Originally Posted by TronXP
I came across ATA over Ethernet (AoE) as I was reading up on SAN stuff - the original poster should definitely take a look. It basically ditches the IP layer and the associated complexity and overhead that you get with iSCSI and just uses Ethernet to transmit data. The important part is that this data is still (locally) switchable with a decent modern switch (one with backpressure and flow control, seemingly). I was pondering "hmm, why doesn't someone make an eSATA switch?", and AoE would seem to be the nearest thing (better, even, as you can aggregate 1 Gbps Ethernet ports together to provide whatever bandwidth you like, within limits). You get to stick a system together with standard low-cost components you can buy from anywhere, but still get the instant-failover facility that a SAN offers you.
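For the curious, a minimal AoE setup on Linux looks something like this - a sketch only, assuming the free vblade target and the aoetools client utilities, with the shelf/slot numbers, interface name and device paths chosen for illustration:

```shell
# Target (the storage box): export a block device over AoE with vblade,
# as shelf 0, slot 1, on interface eth0 -- adjust all three to suit.
vbladed 0 1 eth0 /dev/sdb

# Initiator (the client): load the aoe driver and rescan the network;
# the exported device then appears as /dev/etherd/e0.1 (shelf.slot).
modprobe aoe
aoe-discover
ls /dev/etherd/
```

Note there is no authentication layer: anything on the same Ethernet segment can see the export, so AoE traffic wants its own VLAN or dedicated switch ports.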
It strikes me that making a SAN device now involves getting a decent-sized PC case, a motherboard with a bunch of SATA connections, a multi-port gigabit Ethernet PCI Express card and a wodge of SATA drives to use in RAID 10 or 50. That should be very doable for under £2,000 - say for a 1.5 TB RAID 50 server. Five of those would meet the original poster's storage requirements nicely. Another £10,000 should easily cover a decent switch and some processors-in-boxes to be your virtual servers.
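Totting that up against the original budget (a sketch; all figures are the rough estimates from this post, not quotes):

```python
# Costing the DIY storage route sketched above.
boxes = 5
capacity_tb = 1.5          # usable per box after RAID 50
cost_per_box = 2000        # GBP: case, board, NICs, SATA drives
switch_and_hosts = 10000   # GBP: decent switch plus virtualisation hosts

total_tb = boxes * capacity_tb
total_cost = boxes * cost_per_box + switch_and_hosts
print(total_tb, total_cost)   # 7.5 TB for GBP 20,000 -- well inside the 40-45k budget
```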
4th March 2008, 05:26 PM #13
When I got quotes on SANs, one company did put something like that forward, with a commercial SAN OS running on top of it. It was the cheapest quote of the lot, but ultimately I didn't go with it because it seemed too important a device to risk problems with. For not much more than your figure you can get a commercial iSCSI SAN with a bazillion bays and loads of redundancy features (RAID 5/6 etc., redundant fans, PSUs, NICs...), all backed up with a healthy warranty and support, as well as a dedicated OS and tons of hardware acceleration.
Originally Posted by dhicks
Whether your environment would need FC rather than iSCSI, or SAS drives rather than SATA, is something only you would know. I would want to be very happy about the speed and reliability of a SAN before I used it to virtualise everything, though.