We have a budget of £40-45k to spend on some new virtualisation kit.
If I lay out our scenario can people suggest kit we should be looking at?
- 2000 users of which 200 are staff
- storage capacity of 4-5 TB for file storage
- storage capacity for 2000 Exchange mailboxes (30MB students, 200MB staff) - see the quick sizing sum after this list
- good enough IO on the storage to handle Exchange for 2000 users plus the file shares
- 3 file servers
- 2 domain controllers
- SMS server
- MOM server
- application server
- SharePoint server
- SQL Server
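Worth noting how small the mailbox element is next to the file storage. A quick back-of-envelope sum (Python; figures taken from the list above, with the 1800/200 split inferred from "2000 users of which 200 are staff"):

STUDENTS, STAFF = 1800, 200            # 2000 users, of which 200 are staff
STUDENT_MB, STAFF_MB = 30, 200         # the mailbox quotas above

exchange_gb = (STUDENTS * STUDENT_MB + STAFF * STAFF_MB) / 1024
print(f"Exchange at full quota: {exchange_gb:.0f} GB")          # ~92 GB

FILE_STORE_TB = 5                      # top end of the 4-5TB file requirement
total_tb = FILE_STORE_TB + exchange_gb / 1024
print(f"Total before RAID and growth headroom: {total_tb:.2f} TB")

In other words, the mailboxes add under 100GB even at full quota; it's the file store and the IO profile, not raw capacity, that will drive the storage choice.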
If I have missed off any other info people need, please let me know.
I am awaiting responses from Dell and IBM as to what they recommend, and would welcome people's comments.
So are you asking about virtualisation solutions or storage solutions?
Personally I'd get a stack of blade servers and a Fibre Channel SAN, with an FC card in each blade and an FC switch in the blade chassis. That way each blade can use the SAN for all its storage needs.
It's your preference as to what virtualisation software you use; we currently run a mix of VMware, Microsoft Virtual Server and Hyper-V. VMware is no doubt the best, but Hyper-V is more than competent and comparatively amazing value. Obviously you'd want to cluster the blades using your virtualisation software to provide some fault tolerance.
I know we are going to run ESX over 3 servers and have the rough specifications of those sorted.
It's more the storage solution I am interested in finding out about, as this shared storage is going to serve all the virtual servers running across the three ESX boxes.
We like HP servers here. I'm running a dual quad-core Xeon server (8 cores) with 16GB RAM as our primary ESX box. We are in the process of purchasing a single quad-core Xeon with 8GB RAM as a load-balancing (?) secondary server.
We're also looking into SANMelody to control our DAS over iSCSI. Having played with the demo, I'd suggest looking into SANMelody if you haven't already got a good SAN in place. A cheapo server or converted workstation, a couple of NICs and a SCSI card are all that's needed. For the DAS connecting to the SANMelody server, go for quality SAS drives if you can afford them. We're using SATA-II here, which is doing a remarkable job but can be a tad slow under high usage.
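To put a rough number on that "tad slow" point, here's a back-of-envelope random-IO sum (Python; the per-drive IOPS figures are common rules of thumb rather than measurements, and the RAID-5 write penalty is the usual four-disk-I/Os-per-write approximation):

# Rough random-IO capacity of a DAS shelf; per-drive figures are rules of thumb.
PER_DRIVE_IOPS = {"SATA-II 7.2k": 80, "SAS 15k": 180}

def array_iops(drive_type, n_drives, read_fraction=0.7, raid5_write_penalty=4):
    # Each host write on RAID 5 costs ~4 disk I/Os (read data, read parity,
    # write data, write parity), so host-visible IOPS = raw / weighted cost.
    raw = PER_DRIVE_IOPS[drive_type] * n_drives
    return raw / (read_fraction + (1 - read_fraction) * raid5_write_penalty)

for dt in PER_DRIVE_IOPS:
    print(f"8 x {dt}: ~{array_iops(dt, 8):.0f} IOPS")

Roughly double the throughput from SAS for the same spindle count, which matches the point above that SATA-II is fine until usage gets heavy.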
We support about 1800 users across around 900 desktops/laptops/thin clients. We're looking at running about 15 virtualised servers across the two boxes in the end. Currently our 8-core beast barely breaks a sweat running the six servers already virtualised.
I'd go for a good number of identical rackmount (or blade) machines. Fit each one with 3 hot-swap 300GB SAS drives, a proper hardware RAID controller with on-board cache capable of RAID 5, twin quad-core processors and 8GB of RAM. Just looking quickly at the Dell website, those should cost around £3000 each. Get 10 (plus a spare). That'll give you around 5TB of storage and as much processor power as you could need (you'll need a rack and air conditioning, of course, and a good network switch). Put an identical virtual file server on each machine and organise a load-balancing system so file requests are distributed - you could do something real-time and complex, or simply distribute users' file areas around the file servers (see the sketch below). Skip having a SAN; it's unnecessary overhead.
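The "simply distribute users' file areas" option can be as small as a stable hash of the username. A minimal sketch in Python (the server names are hypothetical, assuming the ten-box setup above):

import hashlib

FILE_SERVERS = [f"fs{n:02d}" for n in range(1, 11)]  # the ten boxes above (names made up)

def home_server(username, servers=FILE_SERVERS):
    # md5 rather than Python's built-in hash() so the mapping is stable
    # across runs and machines
    digest = hashlib.md5(username.lower().encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(home_server("jsmith"))  # always the same box for the same user

The catch is that adding or removing a server remaps most users, so you'd either migrate home directories whenever the server count changes or look at consistent hashing.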
As DMcCoy says, a SAN is essential, and ideally you will want two storage devices and two switches to provide redundancy and resilience. Remember that you will be counting on the SAN for ALL your servers: the SAN disappears and so do ALL your servers!
Edit: Hmm, I see where the people advocating SANs are coming from: if a machine goes bang, a spare or backup can take over with no physical access needed. However, this might be over-engineered for a school. Overnight file server failures will wait until first thing in the morning, when you get in to swap hard drives around, and during the day a few minutes' downtime isn't going to matter too much (and anyway, we're talking about what should be a very rare occurrence here, a physical server failing so badly it stops). Stuff like web servers can be duplicated at the VM level and stored on separate physical servers for high availability.
Personally I'd go for a good-quality direct-attached SCSI RAID box, or two, connected to a reasonably specced server running SANMelody over iSCSI. RAID 50 with a couple of hot spares on the array. Maybe two SANMelody servers connected to the same DAS box(es) for a bit of redundancy.
If you want to knock yourself out, blow the budget and admit to total paranoia - then replicate the above in a second physical building.
I personally don't think a SAN is over-engineering for a school. If any of my servers goes down it impacts the way the school runs; staff rely on the system working. SAN and clustering give me the peace of mind that if there is a problem, users are not affected.
It depends on your school and how highly they value uptime. Schools are pushing to use IT more and more in lessons, with some lessons built entirely around computer-based resources. As the network manager, I need to ensure those lessons can be carried out.
Personally I am going to back IBM as well; I have been impressed with their support and hardware. So far no issues bar a couple of hard drives: one was DOA and the other failed in the SAN. No problem - a new drive was supplied, the RAID took care of the lot and nobody noticed.
I came across ATA over Ethernet (AoE) as I was reading up on SAN stuff - the original poster should definitely take a look. It basically ditches the IP layer, and the associated complexity and overhead you get with iSCSI, and just uses raw Ethernet frames to transmit data. The important part is that this data is still (locally) switchable with a decent modern switch (with backpressure and flow control, seemingly). I was pondering "hmm, why doesn't someone make an eSATA switch?", and AoE would seem to be the nearest thing (better, even, as you can aggregate 1Gbps Ethernet ports together to provide whatever bandwidth you like, within limits). You get to stick a system together with standard low-cost components you can buy from anywhere but still get the instant-failover facility that a SAN offers you.
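To show how bare the protocol is, here's a minimal sketch of an AoE discovery broadcast as a raw Ethernet frame (Python, Linux AF_PACKET, needs root; 0x88A2 is the registered AoE EtherType, but the header layout and command values are my reading of the AoE spec, so treat this as illustrative rather than a working initiator):

import socket, struct

ETH_P_AOE = 0x88A2                 # registered EtherType for ATA over Ethernet
IFACE = "eth0"                     # assumption: the NIC on your storage LAN

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_AOE))
s.bind((IFACE, 0))

dst = b"\xff" * 6                  # Ethernet broadcast: find every AoE target
src = s.getsockname()[4]           # this interface's MAC address
frame = dst + src + struct.pack("!H", ETH_P_AOE)

# 10-byte AoE header: ver/flags, error, major, minor, command, tag.
# ver=1 sits in the top nibble; major/minor of 0xFFFF/0xFF addresses all
# shelves/slots; command 1 is "query config" (discovery).
frame += struct.pack("!BBHBBI", 0x10, 0, 0xFFFF, 0xFF, 1, 0)

s.send(frame)                      # any AoE target on the segment replies
print(s.recv(1514)[:24].hex())     # peek at the first response's headers

No TCP state, no IP addressing - which is exactly why it isn't routable and wants its own switched segment.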
It strikes me that making a SAN device now involves getting a decent-sized PC case, a motherboard with a bunch of SATA connections, a multi-port gigabit Ethernet PCI Express card and a wodge of SATA drives to use in RAID 10 or 50. That should be very doable for under £2000 - say for a 1.5TB RAID 50 server. Five of those would meet the original poster's storage requirements nicely. Another £10,000 should easily cover a decent switch and some processors-in-boxes to be your virtual servers.
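For sizing a box like that, the usable-capacity arithmetic is simple enough to script (Python; the eight-drive, 250GB example is my assumption, picked to land on the 1.5TB figure above):

def raid10_tb(n_drives, drive_tb):
    # RAID 10 mirrors pairs, so usable space is half the raw space
    return (n_drives // 2) * drive_tb

def raid50_tb(n_drives, drive_tb, groups=2):
    # RAID 50 stripes across RAID-5 groups, losing one drive per group to parity
    per_group = n_drives // groups
    return groups * (per_group - 1) * drive_tb

print(f"RAID 10, 8 x 250GB: {raid10_tb(8, 0.25):.2f} TB")  # 1.00 TB
print(f"RAID 50, 8 x 250GB: {raid50_tb(8, 0.25):.2f} TB")  # 1.50 TB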
Whether your environment needs FC rather than iSCSI, or SAS drives rather than SATA, is something only you can know. I would want to be very confident about the speed and reliability of a SAN before I used it to virtualise everything, though.