@dhicks - in terms of the SAN cost issue, that really is a no-brainer these days with iSCSI for schools. Gigabit switches, integrated gigabit iSCSI NICs on servers and the lower cost of iSCSI arrays compared to FC or multiprotocol arrays mean it's very cost effective. I'd also say the TCO is much lower than some sort of distributed system where you've got lots of bits of hardware that can balls things up.
As dhicks has said, SANs aren't just about speed - centralized storage management and the flexibility of a centralized storage solution are the main advantages, as well as fault tolerance and scalability. And I think they're very much a technology appropriate for a school environment - with Microsoft's SAN plug'n'play architecture, sourcing and setting up an iSCSI SAN has never been easier.
Also, if we're talking about virtualization, NAS is a viable alternative for storing and accessing virtual machine files. Again, it's the same centralized storage pool model, which tolerates hardware failure and eases the management burden.
Originally Posted by TheFopp
Yes. For MS Virtual Server I don't think you really have a choice - there is 1 actual file per virtual hard drive (so if you virtualise a server which had 2 physical drives then you end up with a DriveC.vhd file and a DriveD.vhd file - names of your choice, obviously!)
Virtualisation of your datacentre is probably the worst thing you can do in a school.
Once you sell the concept to the SMT they will immediately begin to assume:
It's Virtual so it must be easy to implement, takes up absolutely no room (so the datacentre can be turned into a staff meeting room), is incredibly fast as there are no moving parts, and costs "Virtually" nothing.
Of course, ongoing licensing can be paid for with the "Virtual Money" from the DfES!:rolleyes:
I'm thinking of the kind of system that would have fewer bits of hardware in it - you could have just the two servers with some wires running directly between them. No need for a SAN device, no need for dedicated switches. You just get a bunch of hard drives in a decent-sized RAID array and shove them in a case with a couple of decent processors and a motherboard.
Originally Posted by torledo
Note: the above system is slightly theoretical until I can actually get Xen and DRBD to install/compile/run/hell, anything at the same time...
What's the practical limit for RAID size? A given system is only going to be capable of so much data throughput, whether it's hauling data off hard drives and onto the network, or running VMs locally on them. What's the optimal number/performance ratio? Is, say, a 10-disk RAID 10 array going to be around the best performance you'll get? Or a 6-disk RAID 10? Or a 9-disk RAID 50 array (3 RAID 5 arrays striped)?
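For comparing those layouts, a rough back-of-envelope model helps: sequential reads scale with spindle count in all of them, RAID 10 writes only get half the spindles, and RAID 5/50 writes effectively lose the parity drives' worth of bandwidth (ignoring the random-write parity penalty, controller limits and the bus, which in practice cap things first). The per-disk figure below is an assumed number, not a benchmark:

```python
DISK_MBPS = 60  # assumed sequential MB/s for one SATA drive of this era

def raid10(disks, mbps=DISK_MBPS):
    # Reads can hit every spindle; writes go to both halves of each
    # mirror pair, so only half the spindles' bandwidth is usable.
    return {"read": disks * mbps, "write": disks // 2 * mbps}

def raid50(disks, groups, mbps=DISK_MBPS):
    # One drive's worth of parity per RAID-5 group; reads use all
    # spindles, sequential writes get only the data spindles.
    data = disks - groups
    return {"read": disks * mbps, "write": data * mbps}

for label, est in [("10-disk RAID 10", raid10(10)),
                   ("6-disk RAID 10", raid10(6)),
                   ("9-disk RAID 50 (3x3)", raid50(9, 3))]:
    print(label, est)
```

On this crude model the 10-disk RAID 10 and the 9-disk RAID 50 read at similar rates, but the RAID 50 gives more write bandwidth per disk at the cost of a much slower rebuild and the RAID-5 random-write penalty - which is why the question really comes down to workload, not just disk count.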
We started off with this set-up, and our original plan was to stick with it. We bought an HP DL380 server to run ESX and a Promise vTRAK MP310 RAID array. The array has 12x 320GB SATA-II drives: two 5-drive RAID-5 arrays striped (RAID-50) and 2x hot spares. The vTRAK is connected directly to the ESX server via 320MB/s SCSI. We'd always planned to buy a second ESX server this April, to be connected to the vTRAK's second SCSI controller.
Originally Posted by dhicks
Our main file server duly ran out of space and is still not ready to be virtualised. So as an emergency measure we put a new SCSI card in the file server and connected it to the vTRAK's second SCSI controller, giving it a LUN and solving the space issue.
Now, we've bought the second ESX server and found we have no way of connecting it to the vTRAK while our file server is using the second controller port. We've also realised that once the file server is virtualised we will have a highly specced physical server (dual 3GHz HT Xeons, 8GB RAM) and nothing to use it for.
We decided the file server will make a good backup if one of the two ESX servers goes down. We are going to put the free VMware Server on it and give it access to the vTRAK's RAID array. We are also looking for a new MIS, which may be given a VM, or may get a physical server and allocated space on the vTRAK.
So we potentially have four servers all needing access to the RAID array, and only enough physical connections for two of them. The solution?
We took a beefed-up desktop that had been used as a server until that machine was virtualised, shoved in a SCSI controller and some NICs, and plugged it into the vTRAK in place of the file server. Add a bit of software called SANMelody, and Bob just might be your uncle (he's not mine?). All servers now have access to the 2.5TB, 4-drive-redundancy RAID-50 array through the SAN.
As I said above, we learnt from our mistakes. We should have bought both ESX servers and the SAN on day one. Instead we tried to save money, used DAS instead of a SAN, bought the servers piecemeal - and we have paid the price.
Whatever you do - whether you decide a DAS RAID array is right for you or a SAN is - if you are going to virtualise you need to research, research, research - plan, plan, plan - then research and plan some more. Then buy what you need on day one - don't cut corners or put off buying anything that's really needed from the start. And after all that, expect problems and downtime from issues you never anticipated, or from things you thought you'd planned for.
I've got plenty to think about from this thread about how I approach virtualisation (If at all!!!! ;) ). Please keep any thoughts coming though as I'm not going to be attempting this until the Summer hols at the earliest!