"I'm going to play devil's advocate for a bit.."
OK El Diablo bring it on.
A. I take it, from your initial paragraph, that you don't work in a school (any more?). Your comment: "If you've got a site where the main use of PCs is Office, SIMS and browsing the internet, and they currently have very few or very old PCs, then VDI may offer a major improvement over what they've currently got."
Why not use Terminal Services? It runs all of these perfectly well at a fraction of the cost. At least give me a case where only VDI can be used.
1) "Okay, using your first example of SIMS - I think most people say SIMS has to be physical is because for a long time Capita said they wouldn't support SIMS in a virtual environment, and no one wants their huge MIS system to be unsupported. Also, SIMS is largely just an SQL server, and SQL usually has fairly high disk I/O requirements (depends on how heavily you use SIMS). Disk I/O doesn't always virtualise very well, unless you have good local storage on your VMware/Hyper-V/whatever host (not recommended) or a good network link to your SAN with fast disks (expensive)."
Not the case any more: Capita now support SIMS in a virtual environment.
Seriously? This is a school and SIMS we are talking about, not a ten-thousand-simultaneous-user transactional database. At most we have 200 simultaneous user connections; add to that remote calls for SharePoint etc. and the total is no more than 300 (and that's being optimistic). Look at the disk activity and CPU (see below) this morning during peak time: not exactly stressing the system. Note the output of the SAN (V7000, below); given that it can achieve up to 50,000 IOPS, it's not exactly having to push hard, is it? Let's base the arguments on things that actually happen in schools, not on what someone thinks might be the case.
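To put the headroom argument into numbers rather than impressions, here's a back-of-envelope sketch. The per-user IOPS figure is an illustrative assumption (not a measurement from our SAN); the 50,000 IOPS ceiling is the vendor's figure quoted above.

```python
# Back-of-envelope IOPS headroom check for a school MIS server.
# Assumption (illustrative, not measured): each simultaneous SIMS/SharePoint
# user generates on the order of 10 IOPS at peak.
SAN_MAX_IOPS = 50_000   # vendor figure for a V7000-class array
users = 300             # simultaneous connections, being optimistic
iops_per_user = 10      # assumed peak per-user load

demand = users * iops_per_user
utilisation = demand / SAN_MAX_IOPS

print(f"Estimated peak demand: {demand} IOPS")
print(f"SAN utilisation: {utilisation:.0%}")
```

Even if the per-user figure were several times too low, the array still wouldn't break a sweat.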
To that end, I would also be interested in anyone's stats from the following thread:
"Using your second example of a DC, it's more that you shouldn't have just virtual DCs, you should have at least one physical DC/DNS/DHCP server. The reason for this is simple - dependencies. Say everything goes down (long powercut or whatever) and your VMware/whatever host talks to the SAN via its hostname. You power up the VMware/whatever host, but it can't see the SAN because there's no server up doing DNS. You can't power up the virtual DC that does DNS, because to do that you need the host to see the storage. Same issue if something on the infrastructure side needs AD authentication for it to start up. You're stuck in a loop of dependencies that are very easily solved by having one physical DC. You can get around this by not having anything on your infrastructure that uses DHCP, hostnames (rather than IPs) or AD authentication, but most people just prefer to have a physical DC/DNS/DHCP server that they can power up first, and then start up their virtual hosts and storage."
Why would you make communication between the SAN and the host dependent on name resolution? That, to me, is a poor design decision; we had one or two power cuts early on and learnt this the hard way. Why is it a case for having a physical DC? With the advent of Server 2012, restoring a virtual DC is possible (no USN rollback issues). We also have the luxury of a second virtual setup in a physically separate location (another block), in which we host a third DC; it acts just like a physical one.
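The design point above - never let the storage path depend on DNS - can be sketched as a cold-start check that reaches the SAN by raw IP before any VMs are powered on. The portal addresses and port below are hypothetical examples, not our real config.

```python
# Sketch: verify storage portals are reachable by raw IP before powering
# on VMs, so the boot sequence never depends on DNS or AD being up.
# Addresses and port are hypothetical examples.
import socket

SAN_PORTALS = [("10.0.0.10", 3260), ("10.0.0.11", 3260)]  # iSCSI portals, by IP

def portal_up(host, port, timeout=3):
    """Return True if a TCP connection to the portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in SAN_PORTALS:
    state = "up" if portal_up(host, port) else "DOWN"
    print(f"{host}:{port} is {state}")
```

Because everything here is addressed by IP, the check (and the hosts behind it) will come up cleanly after a power cut even with every DC still off.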
"2) Bottlenecks - Say I run 20 physical servers and they've all got 1Gb connections. Say they all peak at around 80%, and maybe half of them peak around the same time of day. If I virtualise all 20 of those servers and put them on one physical host with a 1Gb or 2Gb link, then at some point it's very likely I'll run into network bottlenecks that are (technically) due to the virtual infrastructure. I don't know anyone running 8Gb or 10Gb links, because in reality in a school very few of us are running that kind of network throughput. I do know people with blade chassis that have very high consolidation ratios and are coming close to maxing out 4Gb links though, and have a perfectly good reason to consider 10Gb networking."
The use of "say", "maybe" and "likely" does not fill me with confidence. You give a scenario similar to what we have (on average 25 VMs per host); the difference is that I am actually basing my comments on fact rather than supposition.
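For what it's worth, if we take the quoted suppositions at face value the arithmetic is simple enough. Every figure below is one of the quoted assumptions ("say", "maybe"), not a measurement:

```python
# Putting the quoted consolidation scenario into numbers.
servers = 20
link_gbps = 1.0        # each physical server had a 1 Gb NIC
peak_util = 0.8        # quoted peak of ~80%
coincident = 0.5       # "maybe half of them peak around the same time"

peak_demand_gbps = servers * coincident * link_gbps * peak_util
host_uplink_gbps = 2.0  # consolidated host with a 2 Gb link

print(f"Coincident peak demand: {peak_demand_gbps:.1f} Gb/s")
verdict = "bottleneck" if peak_demand_gbps > host_uplink_gbps else "OK"
print(f"Host uplink: {host_uplink_gbps:.1f} Gb/s -> {verdict}")
```

Under those assumptions the 2Gb uplink would indeed be swamped; the question is whether real school workloads ever look like that, which is exactly where measured figures beat supposition.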
"VDI can be absolutely awesome if it's put in the right place."
So where is this place? Give me a use case that only VDI can fulfil.
"You keep some servers physical to either avoid loops of dependencies or single points of failure, or because you can't afford the hardware needed to give the required performance on a virtual server."
The only things you need to keep physical are backup (even some of that can be virtual) and the firewall. The rest...
"You do 10Gb networking if you need it - most of us don't. If you're consolidating a lot of physical servers down onto one server by virtualising them, then you need to provide sufficient bandwidth for all of those servers. For some people that might only be 1Gb, for some people that might be 10Gb."
10Gb? Please, spare me! Perhaps if everybody is streaming video, music and large media files ALL of the time. Show me a place that needs 10Gb networking and I'll show you a network riddled with viruses and classes watching the latest Pixar movie.