This is how ours is set up:
There are 6 NIC ports in each server plus 1 management port: 4 + 1 as standard in the server, and then an extra network card.
We will be using Hyper-V 2012, but the hosted servers will only be 2008 R2 as the LEA does not support 2012 at the moment.
48 servers, what the heck are you running there?? How did you end up with so many?
As you can see from the attached image, I use two IP ranges: 10.blah.131.? for management & vMotion, and 10.blah.16.? for standard curriculum traffic (subnets withheld for legal reasons :)). There is another VLAN (5) which uses the curriculum NICs and is for DMZ stuff; this has another IP range, 192.blah.blah.?. Any layer 2 switch can deal with this. Hope this helps.
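As a rough sketch of that separation written out as data (the masked octets are kept exactly as in the post; the labels are just illustrative, not anything from the actual config):

```python
# Illustrative summary of the network separation described above.
# Subnet placeholders are copied from the post; nothing here is a real address.
networks = {
    "management_vmotion": {"subnet": "10.blah.131.?", "nics": "management"},
    "curriculum":         {"subnet": "10.blah.16.?",  "nics": "curriculum"},
    "dmz":                {"subnet": "192.blah.blah.?", "nics": "curriculum", "vlan": 5},
}

for name, net in networks.items():
    print(name, net)
```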
As for bottlenecks, and more information on the variables that may affect the setup:
See post #42 on this thread for details of the ESX specs. All have 100GB of memory and are at about 80% usage. CPU usage monitoring (again over a protracted period of time) shows a maximum of 15% CPU usage (Exchange & SIMS).
Storage is IBM V7000 (46 x 600GB 10K SAS, 2 x 200GB SSDs) with 8Gb fibre SAN switches and HBAs. Switches are HP 3500yl aggregators, with a 5406zl at the core and 4200vl at the edge (i.e. Gb to desktop), all with a minimum of 2Gb trunked fibre between each.
2. As to the 48 servers... all sorts of stuff; read post #6 on this thread.
I have collected a vast number of VI performance metrics over the past 6 years and am happy to share specific tests and how they were collected. What I have not seen is any figures that justify (in a school environment) the use of 10Gb fibre. As to the "future proofing" over the next 4-5 years or so, believe me, schools' requirements will not change that much in the short term; in fact their local bandwidth needs will, more than likely, go down as more services potentially move off site.
No-one has mentioned 10GbE fibre. My host servers have 10GbE ports onboard, and the storage server has it as an extra card. They plug into an 8-port 10GbE module in our HP 5406zl switch, which was full. So, our purchasing decision was between buying the 1GbE versions of the servers and trunking things, or just spending the £2k on a 10GbE module for the core; we did the latter and used copper 10GbE cables (with the GBICs formed into the cable).
Worked out cheaper for us than going 1GbE.
2. OMG, you have taken separation to the next level there. I know that memory dedupe exists, but that kind of separation is very specialised. I hate to think of the queues your VMs have to wait in to get access to the network cards proper, and all the traffic on virtual DMA mapping.
Impressive storage, but 8Gb shared between that many hosts still presents a bottleneck. We also have an HP 5412, which is a nice bit of equipment, but again, every school and implementation is different.
2. 48 VMs split across 3 ESX hosts is an average of 16 VMs per host. Given there are 4 NICs trunked per host, that's theoretically 4 VMs per NIC. Now, given that there are 6 cores per CPU, i.e. 12 per host, that's 12 cores for 16 machines; let's say, worst case, 2 machines sharing one core, so the core has to cycle 2 machines for the 4 NICs. Let's also look at a worst case where 2 file servers are on the same host, using the same core and the same NIC, and are offering 100Mb files to 2 different users. They could then each only offer half of the card's 1Gb capability. Explain to me how that represents a bottleneck?
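As a back-of-envelope check of those ratios (the figures are the ones quoted in the post; the script itself is just illustrative):

```python
# Rough arithmetic for VMs per host, per NIC and per core, using the
# numbers from the post above.
vms_total      = 48    # VMs across the cluster
hosts          = 3     # ESX hosts
nics_per_host  = 4     # trunked 1GbE NICs per host
cores_per_host = 12    # 2 x 6-core CPUs per host

vms_per_host = vms_total / hosts               # 16
vms_per_nic  = vms_per_host / nics_per_host    # 4
vms_per_core = vms_per_host / cores_per_host   # ~1.3, worst case 2 sharing a core

# Worst case described above: two file servers pinned to the same 1GbE NIC,
# so each still gets roughly half the link.
link_mbps = 1000
per_server_mbps = link_mbps / 2

print(f"VMs per host: {vms_per_host:.0f}")
print(f"VMs per NIC (average): {vms_per_nic:.0f}")
print(f"VMs per core (average): {vms_per_core:.1f}")
print(f"Worst-case share of one NIC per file server: {per_server_mbps:.0f} Mb/s")
```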
3. 8Gb across 3 hosts a bottleneck??? Not sure what you're talking about here, the SAN fibre or something else? If it's the fibre, then trust me when I say it really doesn't get any better than that. Look at the screenshot below. This is the V7000 during a school day (usually no more than 3,000 IOPS) where I set 3 SAN Veeam backups going (all day) as well, just to see what effect this had on performance and the user experience. Bear in mind that the V7000 (in this setup) is capable of serving 56,000 IOPS and has 8GB of cache per controller. It had no effect: no latency in delivering files, web pages, etc.
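A quick headroom calculation from the two IOPS figures quoted above (both numbers come from the post; the utilisation figure simply follows from them):

```python
# Array headroom implied by the quoted figures.
observed_iops = 3_000    # typical school-day load from the monitoring screenshot
rated_iops    = 56_000   # what the V7000 is said to be capable of in this setup

utilisation = observed_iops / rated_iops
print(f"Array utilisation at a typical peak: {utilisation:.1%}")  # roughly 5%
```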
2. It is not about the cores; NIC traffic has to travel on the virtual bus and then go into the pool of available devices. If you have 16 VMs, or even 4, competing for one NIC, at some point your NIC queues are going to get expanded if nothing else. The CPUs are another matter. If they are Core i-level then there are several hardware DMA channels built into the CPU. If you are using older CPUs, or have a stack of guests on each host making DMA calls, then you could end up with network and other hardware traffic taking the long way round through the CPU instead of direct hardware communication. We are not scaling to two users here but dozens or hundreds per instance, as that level of separation implies.
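A minimal sketch of the queue-growth argument, assuming a simple M/M/1-style model of a shared link (a deliberate simplification, not a model of any particular hypervisor's virtual switch):

```python
# As offered load on a shared NIC approaches link capacity, the average
# queue grows sharply; at or above capacity it grows without bound.
def mean_queue_length(offered_gbps: float, link_gbps: float) -> float:
    rho = offered_gbps / link_gbps        # utilisation of the link
    if rho >= 1.0:
        return float("inf")               # queue never drains
    return rho / (1.0 - rho)              # M/M/1 mean number in system

for offered in (0.2, 0.5, 0.8, 0.95):
    print(f"{offered:.2f} Gb/s offered on a 1 Gb/s link -> "
          f"mean queue ~ {mean_queue_length(offered, 1.0):.1f}")
```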
3. You have 4Gb coming out of every host, which is 12Gb/s in total, vs 8Gb/s of storage bandwidth for everything going out to the hosts and into the VMs; if nothing else there is potential there. It also does get better: 10Gb/s iSCSI, or better still, teamed 10Gb/s. There is also the option of multiple SANs to spread out the bandwidth.
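The aggregate-bandwidth comparison being made there, written out (numbers are the ones used in the post, treating the host uplinks and the SAN link at face value):

```python
# Host-side uplink bandwidth vs the storage link, per the figures above.
hosts            = 3
host_uplink_gbps = 4    # 4 x 1GbE trunked per host
san_link_gbps    = 8    # 8Gb fibre channel

aggregate_host_gbps = hosts * host_uplink_gbps   # 12 Gb/s
print(f"Aggregate host bandwidth: {aggregate_host_gbps} Gb/s")
print(f"Storage link bandwidth:   {san_link_gbps} Gb/s")
if aggregate_host_gbps > san_link_gbps:
    print("Hosts can, in theory, offer more traffic than the storage link carries.")
```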
Now, don't get me wrong, your system probably runs great and may have addressed most or all of the contention issues, but you can't take your system and apply it to every system everywhere. You don't know what everyone else is doing or requiring. It may not even be bandwidth but real-time, ultra time-sensitive stuff. Anyhow, I am just saying that your single use case does not constitute proof that everyone else is somehow wrong or unjustified.
2. See below. I'm not sure how long you have been working in a school and with virtualisation, or what collected data you are basing your conclusions on (I'm sure your "theoretical" propositions have some basis in fact), but I would suggest your information bears little resemblance to reality.
3. You really need to think about the context in which you are offering these scenarios. Schools do not have tens of thousands of users logging in simultaneously; there are probably 300-400 maximum at any one time. You just don't need massive network or storage bandwidth to service this number of concurrent users. Again, if you have evidence to the contrary then please share it.
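For a sense of scale on that concurrency point, a hedged per-user calculation (the 400-user figure is from the post; the link speeds are just example values, not anyone's actual setup):

```python
# Per-user share of a shared link if every concurrent user pulled data at once.
concurrent_users = 400

for link_gbps in (1, 2, 8, 10):
    per_user_mbps = link_gbps * 1000 / concurrent_users
    print(f"{link_gbps} Gb/s shared by {concurrent_users} users "
          f"-> ~{per_user_mbps:.1f} Mb/s each, worst case")
```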
2. Oh, patronising... Look up the CPU architecture and the performance metrics on network cards; it is based in fact.
3. As above, I am only responding to the scaling that you put forward as an example; that kind of separation is scaled for thousands of users, and I am going on your setup. I am fully aware of the fact that most schools are not that big.
Again, I am just providing alternatives. Your view of the one true way just pushes my buttons; you do not allow for any system or scenario that is not your own.