That would have been the sensible way to do it.
Unfortunately I wish the SMT had a bit more common sense. They talked to me about this massive project when I started before Christmas, where the decision had already been made to go wireless, whereas my recommendation was to look at the infrastructure, servers and electrics first. Being the new boy (or in the honeymoon period) they did find the money, but the deadline to get it all done never shifted from September 2012, so in a way I shot myself in the foot by suggesting a strategy lol!
On the good side the school has been upgraded to OM3, UPS phase 1 done, new digital displays, DPM sorting out all backups, Forefront replacing Sophos, Exchange 2010 migrated, Windows 2008 domain installed, test wireless with UniFi installed... they can't complain about lack of progress.
Summer will be insane: school-wide wireless, replacing all switches/core + VLANs, replacing fileservers, virtualising, Windows 7 roll-out, a new terminal server farm, a new email domain and a new website.
PS The hardest part I find working at a school (my background was corporate) is that ICT does not have much voting power on decisions. Still, a nice busy place to be!
Sorry, I digress from the thread, but mrbios, I would love to be where you are right now; you seem to have your setup nicely organised.
@mrbios - I think I will be taking a few steps back and slowing things down
I think I will start with the server I have and start testing/playing on that with Hyper-V and some test VMs.
With the old single-box plan you stay on the treadmill of refreshing all the hardware and going through the pain of migrating from all the old servers to new ones, and that cost soon mounts up.
With VMs, if the performance is a bit lacklustre, chuck in some more virtual RAM or an extra virtual proc. If the hypervisor is being stretched then it's time for an additional one, and split the load.
We could never do what we want with traditional servers, partly because of the power, heat and space restrictions imposed during the new build.
This is our main site server room; we have another cabling closet on this site. The other site has a 4-cab server room and 4 small cabling closets.
Some of the equipment (Dell stuff) was from our old buildings, but we have put it to good use as VMware and Xen standalone hosts using local storage. Still more work to be done, as we have only just released the legacy EMC SAN from its former duties.
I've had my servers virtualized for a few years now.
I wouldn't bother with SANs or redundancy personally. That's more use for datacentres and banks who need the 24/7, 100% uptime thing and are willing to pay through the nose for it.
In a school a few rack servers, Hyper-V and a good bit of backup software are all you need. I think it cost us less than 6k to do the lot.
If I had a local hard disk fail, I would simply restore the full Hyper-V server backup I make every 2 weeks, then the latest nightly database/file backup. Everything could be up and running in 45 minutes.
Virtualization makes disaster recovery so much easier.
And yeah, don't do it all in one go. We have really benefited from the advances in technology by doing it over a number of years. Just as an example, the original servers I bought for about 1k came with 16GB of RAM. These days I can get 32GB of RAM and an SSD to boot for that.
All of them are reliable. We did have a stick of ECC memory die in the A16E a while back, but it's an incredibly rare occurrence for a stick of ECC memory to completely fail, so I think we were just unlucky there. As for performance, they're pretty good: for a short period of time we had the entire VM storage on ONE 4-disk RAID6 array on the A16E, and the only time they visibly struggled was during backups.
What sort of VMs are you looking to improve performance on, SQL perchance? I find the SANs themselves cope with everything I throw at them; the only limiting factor is disk IOPS, which could be the issue in your case.
However, in my experience, most schools and smaller businesses don't actually need proper, millisecond-response failover. Sudden, unexpected, catastrophic hardware failure should, hopefully, be very rare, and a few minutes of downtime while you boot up VMs on another machine is generally perfectly acceptable. This gets rid of the need for real-time mirroring of virtual machine images in exchange for taking regular backup snapshots instead and having those snapshots exported to a different storage volume. This is also rather easier to set up as a proper backup solution, with previous versions of virtual machine images available in case of disaster or configuration problems.
I'd use that Dell R510 as your main VM host, hosting your Domain Controller, print server, MIS server and general applications server(s) on local storage. Give the R310 an equal amount of local storage and have it regularly take backup snapshots from the R510 (have the R310 host a second domain controller, though, rather than taking images of the first one: DCs don't restore well from image backups as they rely on time-sensitive replication data). You can host as many extra VMs as you can fit on, although you might want to bear in mind that if one machine conks out one day you'll want to be able to run a minimally functional system on the other.
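If you end up scripting that sort of snapshot rotation yourself, here's a rough sketch in Python of the retention logic (the retention numbers and the idea of dated nightly snapshots are just illustrative assumptions, not a specific product's behaviour):

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, today, daily=7, weekly=4):
    """Decide which dated VM image snapshots to retain.

    Keeps every snapshot from the last `daily` days (inclusive of
    today), plus the newest snapshot in each ISO week for the
    `weekly` weeks before that, so previous versions of the images
    stay available without the backup volume filling up.
    """
    keep = set()
    recent_cutoff = today - timedelta(days=daily)
    per_week = {}  # ISO (year, week) -> newest older snapshot that week
    for d in snapshot_dates:
        if d > today:
            continue  # ignore clock-skewed "future" snapshots
        if d >= recent_cutoff:
            keep.add(d)  # recent: keep every nightly
        else:
            wk = d.isocalendar()[:2]
            if wk not in per_week or d > per_week[wk]:
                per_week[wk] = d
    weekly_cutoff = recent_cutoff - timedelta(weeks=weekly)
    keep.update(d for d in per_week.values() if d >= weekly_cutoff)
    return sorted(keep)

# Example: 30 nightly snapshots ending 1 Jul 2012 -> 12 retained
# (every night from the last week, then one per week before that)
today = date(2012, 7, 1)
snaps = [today - timedelta(days=i) for i in range(30)]
print(len(snapshots_to_keep(snaps, today)))
```

Whatever numbers you pick, the point is the same as above: thin the old snapshots out rather than mirroring in real time, and you keep a history to roll back to after a misconfiguration, not just the latest state.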
Use one of your other servers as a dedicated file server for user files - we've found FreeNAS to work well, with a nice, GUI-based interface and support for ZFS. Bear in mind that ZFS' best feature, block-level deduplication, takes a lot of RAM - your file server might need 8GB or more of RAM and a decent processor to make best use of ZFS. Have another dedicated server, with at least 1.5 times the file server's storage size, as a backup for the file server, with regular snapshots or backups available so users can get to previous versions of files.
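That dedup RAM requirement is easy enough to ballpark. This little Python sketch uses the commonly quoted rule of thumb of roughly 320 bytes of RAM per unique block in the dedup table (a community figure, not an official ZFS number; check your actual pool with `zdb` before trusting it):

```python
def ddt_ram_estimate_gib(unique_data_tib, avg_block_kib=64,
                         bytes_per_entry=320):
    """Rough RAM needed to hold the ZFS dedup table (DDT) in memory.

    unique_data_tib : TiB of *unique* data in the pool
    avg_block_kib   : average block size; smaller blocks mean more
                      DDT entries and therefore more RAM
    bytes_per_entry : commonly quoted in-core size of one DDT entry
                      (an assumption, not a guaranteed constant)
    """
    blocks = unique_data_tib * (1024 ** 4) / (avg_block_kib * 1024)
    return blocks * bytes_per_entry / (1024 ** 3)

# 1 TiB of unique data at a 64KiB average block size -> 5.0 GiB of RAM
print(ddt_ram_estimate_gib(1))
```

At a 64KiB average block size that works out to about 5GiB of RAM per TiB of unique data, which is why a file server doing dedup wants 8GB or more as suggested above, and considerably more if the pool holds lots of small files.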
@mrbios - that's a very good question. I have a feeling there is no official support for SSDs in the Netapp chassis yet. But I'll do some research!
Oddly I find our print server doesn't really have much of an IOPS issue. Just SIMS...<spits!>