We use XenServer here (5 hosts, 2 SANs). We haven't bothered with HA as yet, but the DCs are virtualised (95% of servers are, in fact). Only the print server (it seems to run quicker like that) and my TMG and proxy servers are physical, simply because the hosts don't have enough NIC ports to route them through our router.
One larger server - a Dell R510 (or similar from whichever supplier you prefer - VeryPC's servers look interesting, being designed to be power efficient), with as much RAM as you can reasonably afford (up to about 32GB seems reasonable on those at the moment, but shop around). Dual processors with as many cores as possible, but don't worry too much about the clock speed - you want more cores available to run virtual machines. Something like the R510 will take (I think) 8 hard drives. Get a good hardware RAID controller (decent ones start from about £300), but you can buy the hard drives from anywhere you like; Dell charge a silly amount for theirs. If you're using XenServer, it only uses a 4GB partition for the actual VM host OS, and I think those R510s have an internal USB port for a memory stick you could use for the OS, leaving all your hard drives free for VMs.
For a school, you'll probably need VMs for a DC, print server, MIS, and a general apps server. If your budget only runs to a single server, you'll also need a VM for a file server. Your VM system will probably pool the storage and let you split it up at will between VMs as you need, but on something like Xen you can assign a VM an actual block device, so you can hand a VM a whole RAID array to itself if you want, avoiding multiple layers of abstraction at the disk layer. If you can afford a bit more, have a separate, good-sized server for file shares (just a processor, RAID card and a bunch of hard drives is all you need, so you could build it yourself) and an as-big-as-you-can-afford file server for backups (that one doesn't need a RAID card; it doesn't matter if it's a bit slower). Sync files nightly from your live file server to the backup with rsync or similar, deduplicating files on the backup server so you can store several weeks' worth of changes to your assorted file shares.
If you have a separate physical location for servers, or if you need a second server to take some load, then probably a second R510 or similar, but maybe with a single processor and less RAM - just enough to run vital services if the other machine conks out and you need to get things back running. Your backup server should also be able to act as a main, if slower, file server if needed.
I started our virtualisation project in a slightly unconventional way. I could not get the money to purchase a 2 hosts + SAN system, so instead I went for 1 host + SAN with the 2nd host in the following year's development bid. I went for a SAN so that I could use vMotion in VMware to cope with hardware failures. It would be possible to do a system on the cheap by running several VMs on a single host's internal storage, but you would lose the ability to migrate them instantly on hardware failure (or for maintenance).
Once I got the 1 host + SAN up and running, I only migrated a couple of servers to it rather than everything. I did this because I did not want to end up with all my critical systems running on one physical server. It did mean that I could upgrade my ageing and slow SIMS server this year and then migrate other servers to the virtual environment once I get the 2nd host.
I did decide to get an expert in to configure VMware and the iSCSI SAN, even though it cost quite a lot. Even though I could have done it myself, I wanted to make sure the foundation of my systems was solid. The last thing I wanted with a virtual system was for the physical hardware or VMware to be flaky. I also thought that, as we are currently a little understaffed, I could not afford the time out to research and plan the setup.
That's the beauty of virtualisation: you can build it in stages.
There is absolutely no need to do everything all in one go.
So much to think about. So much in the air. So many conflicting pieces of advice.
I have just completed my virtualisation project. We run a 2-node cluster with a SAN for now, with a third machine connected to the SAN for backup. We run VMware 5 and vCenter, and I found that I could not get my LTO5 picked up in VMware, so that stays physical running Backup Exec, with the virtual agents installed on each VM. The hardware is 2 Dell T410s with 32GB RAM and 2.4GHz Xeon processors, plus an HP P2000 StorageWorks SAN with a single controller and 3TB of storage (enough for now, as we are only a middle school). On this we have everything virtualised, we are running 11 VMs, and thus far everything is running fine... I've probably put the death curse on it now!!!
We started out getting our SAN a couple of years ago to increase the storage across our user servers which were rapidly filling up, and which were also quite old and needed replacing. We knew that we would virtualise at some point too, so the SAN made sense.
We did the virtualisation last year and we have the following setup -
1 x EMC CX4 SAN
3 x SAN Hosts (32GB Ram Dual 6-Core Processors)
12 x Virtual Servers
Our FRDC and Exchange servers are still physical at the moment as they are fairly new and still under their hardware warranty. Once that runs out, they will be turned into VMs too.
Our experience has been really positive and the system is really easy to use. My big concern was the performance of the SIMS server, but it's never been an issue.
I can't speak for the other flavours of virtualisation software, but VMware has worked perfectly for us.
I've been a Hyper-V fan myself, as on a school licence, 3-node clustering and SCVMM integration is about as cheap as it gets.
Shared storage makes it easy to move VMs from host to host, be it a CSV or a mountable LUN. Individual servers with local storage for VMs are fine for a bunch of application servers, but storage that can be reassigned on the fly between hosts has clear advantages.
That being the case it doesn't have to be £30k's worth of Equallogic or Lefthand....
In our experience of cheap SANs, they have all performed well and been highly resilient. However, as volumes get larger, the challenge of backups, snapshots and recovery gets more and more complex.
This normally turns what should be a simple virtualisation project into a complex and sometimes costly exercise to meet the 24/7/365 expectations of the SMT, who, once you start evangelising about your new virtual server farm, will expect it if not demand it!
Without total duplication of hardware and storage this type of enterprise uptime target is impossible to guarantee on a tight budget, but Windows Server 8 and Hyper-V 3 look set to change this completely.
Hyper-V Replica: New VM replication tool for cost-conscious IT shops
If I were starting from scratch I would certainly recommend taking a close look at what Hyper-V 3 is shortly to bring to the virtual circus: storing and running VMs over SMB shares, live VM replication, and NFS support. On paper at least, these features level the playing field and will eliminate many of the challenges currently encountered when trying to achieve utopia on a budget of £2.50 per user...
I'm currently waiting on confirmation of a virtualisation project at our campus - but rather than buying a pile of new kit, this is a 'mid-way' project. We have a DAS array already, which I intend to utilise by replacing the controller card with a newer model to achieve RAID 6. Then I'll upgrade 3 relatively new Dell R610s (the old provider put 4GB RAM and 1 CPU in each) to dual quad-core CPUs and 32GB RAM each, stick Windows Storage Server 2008 R2 on one of the older machines to run the DAS (with its built-in deduplication), and move things onto that setup using Hyper-V.
Total cost is about £4k.
Then, in a few years time when the DAS should be replaced, it will be replaced with at least 1, probably 2 SANs.
There is no need to do everything at once - doing so is very costly.
HP ProLiant DL165 G7 Server series Small & Medium Business
We're just a small primary; I think getting a SAN would be overkill. Maybe get an extra NAS to replicate to?
Any thoughts anyone?
Eventually we are hopefully going down the RDS route so would a SAN then be necessary?
SANs are designed for large-scale storage in data centres, really...
Why even bother with centralised storage? We just use the internal disks of the server.
Virtualisation lets you take images of your servers to your heart's content, so there is no real need to worry about the downtime of a disk if you have a regular backup strategy.
EDIT: I specced up a Dell 1U server with the fastest Xeon chip and 24GB of RAM for under a grand the other day. Can't say fairer than that.