Will need to ask my bossman about that and of course he would want to get involved!!??!!
Not BETT this year - too much to do
Last edited by ASJ; 3rd January 2011 at 10:33 PM. Reason: typos
We have just further optimised ICA/XenApp performance - if you download the latest firmware from our website you should see a noticeable increase in performance (again!)
(Sorry - this was meant to be a PM to Imiddleton as a response to the post he left above - please ignore if not him, or not an Axel ICA user!)
Last edited by Axel; 5th January 2011 at 02:16 PM.
Dave O - Just wanted to say I really appreciate you taking the time to write that original post. A lot of the things you've said reiterate what I've been trying to get across to management for a long time, so it's nice to hear it all from someone who's clearly 'been there'.
I don't 100% agree with all the VDI stuff you've said, but then I haven't deployed it myself, only played around with it. I did test one (nameless) thin client solution and was not at all impressed so I can 100% see where you're coming from. On the flip side I've tried the Sun/Oracle VDI stuff from Cutter that others have mentioned and I've been really impressed (especially when coupled with Sun SGD) so I still think it has potential as a solution.
Sounds like you've had a rough two and a half years, props for sticking with it.
Fantastic post and thread. I've got more to say on the matter but have not got time!!
Thanks for this post, very interesting, have shared with the rest of the PT techs. Thanks again
Time having passed since my original post, I will therefore update you on our present position. The truth is nothing much has changed. The Hyper-V solution is working well. The refactored ESXs (IBM x3650s) are as solid as ever, and the fibre-attached SAN spins on. The advent of Windows 2008 R2 SP1 and VMM 2008 R2 SP1 has allowed the server count (mainly terminal servers) to go up, which I'm not sure is a good thing. DPM continues to back up effectively. I actually tested the DR routine with a machine I built on the Hyper-V and it worked - there's a first. I can now officially say I'm impressed with Hyper-V. I wouldn't use it on mission-critical services but it has its place.
Licensing with EES has made things a lot simpler. It took a while to squeeze the SharePoint and Exchange Forefront licence keys out of Microsoft but we got there in the end. A year of stability has turned my attention from the back-end platform to a much more worthwhile venture into documentation and VLE stuff. If I say so myself, my ability to write IT-related policy documents, strategic plans etc. is awesome (OK, it's maybe just alright). The VLE/MLE/learning platform, whatever they call it nowadays, is coming along nicely. I get a lot of stick about our "website" from the other techies in Rotherham; all I can say is that it will be up when we're good and ready.
I should probably start the VLE stuff on a different thread but hey.
SharePoint 2010 and Exchange 2010 fully integrated, Salamander Soft (Richard Wills) for site management, class site creation, SLK customisation, calendar integration, exposing SIMS data through bespoke web parts and generally making stuff work. If you have read my other threads you will know my views on so-called experts and consultants - in the case of Richard W, he actually is one! And a nice chap too. If you get the chance, have a look at SalamanderSoft; it does what it says on the tin.
The other members of my team are retro-gamers and have organised a week-long event promoting the UK gaming industry. Speakers from the industry are coming in to give students an idea of what is needed to get involved, including coding, narrative and artwork. Looks like it will be a good week. Have a look at Games Britannia.
I'm rambling and going off topic now. Not sure I will be able to offer anything else to this topic, but I hope my experiences will help you.
On that note, I was wondering if people would be interested in me doing a presentation on this and other virtualisation topics, hosted at my school, half/full day, with a chance to have a look at all of our gubbins. Let me know what you think on this thread.
Last edited by Dave_O; 8th May 2011 at 07:14 PM.
What a refreshing and honest response from someone who made the same mistake as me. VDI is 'the way to go', so to speak. However, rushing into things and enabling a mindset that it will cater for everyone's needs is naive to say the least. Been there and done that. Trying to explain to teachers that they can no longer play DVDs very well was the first blow. Having to then justify to the geography department that they can no longer run Google Earth Pro on this new, flashy and expensive system was the next. Ultimately it created more problems than it solved, especially with regard to supporting the product once the specialists have installed it. We all know the score here: being in IT, you are expected to become a master of Citrix and VMware overnight!
I'd suggest VDI is the way forward, but careful planning and training of support staff needs to come BEFORE it's introduced to the network.
Reading back on some of the threads on VDI, an old Edwin Starr song sprang to mind. To misquote, if I may: "VDI, what is it good for? Absolutely nothing..."
OK, that's not strictly true: for staff external access, yes; in school via thin clients and old boxes... a waste of time, money and effort. Another year on and I do have to say that the thin clients have been no problem at all... when running Terminal Services. VDI is still not there by any stretch of the imagination. VMware were pretty cheesed off when I terminated my SnS just over a year ago. I did ask about new features etc. in the coming year (they could not really quote anything major) and my decision has been vindicated - what have they given us since last year? 5.1 (I'm on 5.0). Wow, massive improvement.
As for virtualisation generally, I have a couple of questions:
1) I don't understand the arguments about stuff having to be physical. Why does SIMS have to be physical, the DC physical, etc.? Is it performance? Do you really think in a school environment (we have 1450 students and 180 staff) that SIMS will be pushed to the absolute physical (excuse the pun) limit? I have sat and watched SIMS over many hours (years) - IOPS, network, processor, memory - and cannot figure out why it has to be physical. Please, someone enlighten me!
2) Why do people think virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections? What the hell for!!? Now I've got 8Gb fibre, which I have come to realise is actually stupid; I could have saved money and stayed with 4Gb. It never uses more than 1.5Gb even when running Veeam backups over fibre. If this is for networking rather than attached storage (iSCSI) then this is even more insane. As an example, I normally have 4 x 1Gb NICs teamed for host connectivity, so I disconnected 3 of these from each host and ran them with essentially just one NIC each for a week just to see what would happen, and you know what? No difference. The HP switch monitoring software never registered above 60% on any of the NIC ports at any time, even with 720 workstations accessing work at the beginning (read) and at the end (write).
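If anyone wants to repeat that kind of check on their own kit, here's a rough sketch of the idea (Python with the psutil package; the interface name, sample interval and link speed are placeholders, not my actual config) - just poll the NIC counters and keep track of the peak:
[CODE]
# Rough sketch: poll a NIC's byte counters and track the peak throughput,
# so you can see what the link actually does before paying for a faster one.
# Assumes Python 3 with the psutil package; the interface name, sample
# interval and link speed below are placeholders.
import time
import psutil

IFACE = "eth0"       # change to your NIC or team interface
INTERVAL = 5         # seconds between samples
LINK_MBPS = 1000     # nominal link speed, for the utilisation percentage

def totals():
    c = psutil.net_io_counters(pernic=True)[IFACE]
    return c.bytes_recv, c.bytes_sent

prev_rx, prev_tx = totals()
peak = 0.0
while True:
    time.sleep(INTERVAL)
    rx, tx = totals()
    mbps = ((rx - prev_rx) + (tx - prev_tx)) * 8 / (INTERVAL * 1_000_000)
    peak = max(peak, mbps)
    print(f"{mbps:7.1f} Mb/s now, peak {peak:7.1f} Mb/s "
          f"({100 * peak / LINK_MBPS:.0f}% of a {LINK_MBPS} Mb/s link)")
    prev_rx, prev_tx = rx, tx
[/CODE]
Run it over a normal teaching day and the peak figure tells you how close you actually get to the link speed, which is a lot more useful than a smoothed graph.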
Just in case I didn't make it clear at the beginning: forget VDI, it's too expensive both for the kit and the licensing, and just use terminal services; it does a good job within its limitations. The important thing is knowing what these limitations are. Deploy in areas where these limitations will never be reached and you'll have happy staff and students.
The "more is better" mentality is one that leads to schools throwing away a lot of money on unneeded equipment and configurations.
Last edited by mrbios; 3rd March 2013 at 05:13 PM.
This is a great thread. Thanks for sharing your critical review of your strategy; it has actually inspired me to step up to a challenge I can see coming at my place.
On over-speccing your links: your traffic summary is remarkably similar to mine, and I've got half the number of end-user devices, a completely different architecture, and half of those devices are coming over a SINGLE 1Gb/s link from the wireless controller. This similarity is suspicious. I wonder if the data collection timescales are smoothing out the peaks in our respective raw data sets?
You also say that reducing your bandwidth by a factor of 4 had no effect. Your only measure was traffic reported by the switches. How about client performance, log on time (under load), and UI responsiveness on the desktops?
My motivation for asking, I hope, is encapsulated in this anecdote: I once had (to my shame) all my server VMs running on one host for nearly a month. Once I realised and corrected it, I went back and looked at the data: we were trucking along at 70% CPU and RAM on the box, and there was roughly the same traffic running out of one box as we normally had on two (4x 1GbE per host for iSCSI and network). When I made an off-the-cuff remark to one of the office team, only then did they say, "well, it has seemed a lot slower over the last few weeks than it was before". The trouble was that running at half capacity provided a 'just good enough' service, and the granularity of the monitoring system did not indicate that I was pushing the capacity limits of the links and CPU (oh yes, there was only one CPU running the whole server suite!).
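To illustrate the smoothing point: the same set of samples can look perfectly comfortable as a long average and saturated at the peak. A quick sketch (Python; the sample figures are invented purely for illustration, not from either of our networks):
[CODE]
# Sketch of how averaging hides peaks: the same utilisation samples give very
# different answers as a long average versus the worst short window.
# The sample figures are invented purely for illustration.
import statistics

# Per-minute link utilisation (%) over a lesson changeover - made-up numbers.
samples = [12, 15, 14, 90, 95, 88, 20, 18, 16, 15, 14, 13]

average = statistics.mean(samples)
p95 = sorted(samples)[int(0.95 * (len(samples) - 1))]
peak = max(samples)

print(f"average {average:.0f}%, 95th percentile {p95}%, peak {peak}%")
# The average (~34%) looks fine; the link was actually near saturation
# for three of those minutes.
[/CODE]
That's why I'd want fine-grained samples and a percentile, not just long averages, before concluding a link is over-specced.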
Just to jump onto this thread late... I'm running a Terminal Services farm for part of my estate, somewhere around 80-90 machines in curriculum and another 50-ish in admin locations. It works generally well, and bar some issues with some Wyse units I would say it's OK. We have a mix of Wyse and Windows Thin PC units.
For your traditional graphics-intensive areas, e.g. ICT, Art, Design, Music - a physical machine is still the way... with over 300 machines lurking around the site.
On the other bits of the thread... I'm also a great advocate of Salamander, as well as being a SharePoint, Exchange and Lync integrated user.
Last edited by TheScarfedOne; 4th March 2013 at 10:34 PM.
2) When I say no effect, that means everything: login time, internet speed (Smoothwall and TMG back-end firewall, both virtual), file access, SharePoint and Exchange, etc.
Bear in mind the rest of the virtual infrastructure was working fine (ESXs, SAN etc.). Given that there are 3 hosts, the NIC load would have been across all 3, as different services (DC, SharePoint, Exchange, file services) would probably have been spread across them (I don't know for sure, as I let VMware deal with all that stuff, i.e. vMotion, load balancing etc.). I probably wouldn't advocate just one 1Gb NIC, but certainly no more than 2.
How come you were not alerted that one of the hosts was not functioning? As for CPU, one CPU in each host is plenty. See below for CPU usage
Last edited by Dave_O; 3rd March 2013 at 08:22 PM.
Is VDI suitable for everyone? No. VDI, if done properly, will lower your long term costs. Will it be cheaper in the short term? No, very unlikely. If you've got well-spec'd fat clients and are maxing out their CPU/RAM/GFX capabilities, then it's unlikely that thin clients will give you better performance. If you've got a site where the main use of PCs is Office, SIMS and browsing the internet, and they currently have very few or very old PCs, then VDI may offer a major improvement over what they've currently got. If you've got a new build and they need a huge number of PCs but can't afford to upgrade them every three years and need something that isn't going to get broken by users, VDI may be perfect.
2) Bottlenecks - Say I run 20 physical servers and they've all got 1Gb connections. Say they all peak at around 80%, and maybe half of them peak around the same time of day. If I virtualise all 20 of those servers and put them on one physical host with a 1Gb or 2Gb link, then at some point it's very likely I'll run into network bottlenecks that are (technically) due to the virtual infrastructure. I don't know anyone running 8Gb or 10Gb links, because in reality in a school very few of us are running that kind of network throughput. I do know people with blade chassis that have very high consolidation ratios and are coming close to maxing out 4Gb links though, and have a perfectly good reason to consider 10Gb networking.
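To put rough numbers on that example (a back-of-the-envelope sketch in Python; the figures are just the ones from the example above, not measurements from anyone's network):
[CODE]
# Back-of-the-envelope sums for the example above: 20 servers on 1Gb links,
# each peaking at ~80%, with roughly half of them peaking at the same time,
# all consolidated onto one host.
servers = 20
link_gbps = 1.0            # per-server link before virtualisation
peak_utilisation = 0.8     # each server peaks at ~80% of its link
concurrent_fraction = 0.5  # about half peak at the same time of day

aggregate_peak = servers * link_gbps * peak_utilisation * concurrent_fraction
print(f"Concurrent peak demand: ~{aggregate_peak:.0f} Gb/s")   # ~8 Gb/s

for host_link in (1, 2, 4, 10):
    verdict = "fine" if host_link >= aggregate_peak else "bottleneck"
    print(f"  {host_link:>2} Gb host uplink: {verdict}")
[/CODE]
So twenty servers that individually never trouble a 1Gb link can still add up to roughly 8Gb of concurrent demand once they share one host's uplink - which is exactly the bottleneck that's (technically) due to the virtual infrastructure.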
That's my two pence anyway.
- VDI can be absolutely awesome if it's put in the right place. If someone's got high-powered fat clients and swaps them for thin clients expecting a performance increase, they're probably going to be disappointed. If they want a new, manageable, large array of desktops and don't need video editing capabilities (or similar), then VDI may be perfect for them.
- You keep some servers physical to either avoid loops of dependencies or single points of failure, or because you can't afford the hardware needed to give the required performance on a virtual server.
- You do 10Gb networking if you need it - most of us don't. If you're consolidating a lot of physical servers down onto one server by virtualising them, then you need to provide sufficient bandwidth for all of those servers. For some people that might only be 1Gb, for some people that might be 10Gb.
Like I said, just my opinion and just playing devil's advocate.
In your environment do you have heavy multimedia file transfers, or run with roaming profiles, offline files and SearchIndexer enabled? For us these are where we see the maximum load on the network links.
You appear to have 3x 12-core 3GHz Xeons. This is roughly equivalent to 3x the grunt of my two hosts. Mine run at substantially over 50% during peak load. If I had two of those beasties of yours I would have the capacity not to need additional upgrades for another three years at least. However, by going cheaper than you I am already over capacity, and extremely limited in what I can do next without a third server.
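To put that in rough numbers (a quick Python sketch; the 55% figure is only an illustration of "substantially over 50%", not a measurement):
[CODE]
# Rough N-1 check: if one host fails, can the survivor(s) absorb its load?
# The 55% figure is only an illustration of "substantially over 50%".
hosts = 2
peak_load_per_host = 0.55                       # fraction of a host at peak

total_load = hosts * peak_load_per_host         # in "hosts' worth" of CPU
per_survivor = total_load / (hosts - 1)         # load if one host is lost

print(f"With one host down, the survivor would need {per_survivor:.0%} CPU")
# ~110% - already over capacity without a third server.
[/CODE]
Day to day it looks fine, but there's no headroom for a host failure, let alone growth.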
In other news, I've got a busy week. If I find the time I will tighten up my performance stats collectors and post an analysis to a new thread, for comparison.