Thin Client and Virtual Machines: My conclusions on VDI and other things (page 3 of 5, posts 31 to 45 of 61)
  1. #31 ASJ
    Hi Tony

    Will need to ask my bossman about that and of course he would want to get involved!!??!!

    Not BETT this year - too much to do
    Last edited by ASJ; 3rd January 2011 at 10:33 PM. Reason: typos

  2. Thanks to ASJ from: GrumbleDook (3rd January 2011)

  3. #32 Axel
    Ian,

    We have just further optimised ICA/XenApp performance - if you download the latest firmware from our website you should see a noticeable increase in performance (again!)

    Matthew

    (Sorry - this was meant to be a PM to Imiddleton in response to a post he left above - please ignore if you're not him, or not an Axel ICA user!)
    Last edited by Axel; 5th January 2011 at 02:16 PM.

  4. #33 Duke
    Dave O - Just wanted to say I really appreciate you taking the time to write that original post. A lot of the things you've said reiterate what I've been trying to get across to management for a long time, so it's nice to hear it all from someone who's clearly 'been there'.

    I don't 100% agree with all the VDI stuff you've said, but then I haven't deployed it myself, only played around with it. I did test one (nameless) thin client solution and was not at all impressed so I can 100% see where you're coming from. On the flip side I've tried the Sun/Oracle VDI stuff from Cutter that others have mentioned and I've been really impressed (especially when coupled with Sun SGD) so I still think it has potential as a solution.

    Sounds like you've had a rough two and a half years; props for sticking with it.

    Chris

  5. #34 Butuz
    Fantastic post and thread. I've got more to say on the matter but have not got time!!

    Butuz

  6. #35 johnymac

    Very interesting

    Thanks for this post, very interesting; I've shared it with the rest of the PT techs. Thanks again.

  7. #36 Dave_O
    VDI update

    Time having passed since my original post, I will update you on our present position. The truth is nothing much has changed. The Hyper-V solution is working well. The refactored ESXs (IBM x3650s) are as solid as ever, and the fibre-attached SAN spins on. The advent of Windows 2008 R2 SP1 and VMM 2008 R2 SP1 has allowed the server count (mainly terminal servers) to go up, which I'm not sure is a good thing. DPM continues to back up effectively. I actually tested the DR routine with a machine I built on the Hyper-V and it worked - there's a first. I can now officially say I'm impressed with Hyper-V. I wouldn't use it for mission-critical services, but it has its place.

    Licensing with EES has made things a lot simpler. It took a while to squeeze the SharePoint and Exchange Forefront license keys out of Microsoft, but we got there in the end. A year of stability has turned my attention from the back-end platform to a much more worthwhile venture into documentation and VLE stuff. If I say so myself, my ability to write IT-related policy documents, strategic plans etc. is awesome (OK, it's maybe just alright). The VLE/MLE/learning platform, whatever they call it nowadays, is coming along nicely. I get a lot of stick about our "website" from the other techies in Rotherham; all I can say is that it will be up when we're good and ready.

    I should probably start the VLE stuff on a different thread but hey.

    SharePoint 2010 and Exchange 2010 fully integrated, SalamanderSoft (Richard Wills) for site management, class site creation, SLK customisation, calendar integration, exposing SIMS data through bespoke web parts and generally making stuff work. If you have read my other threads you will know my views on so-called experts and consultants; in the case of Richard W, he actually is one, and a nice chap too. If you get the chance, have a look at SalamanderSoft: it does what it says on the tin.

    The other members of my team are retro-gamers and have organised a week-long event promoting the UK gaming industry. Speakers from the industry are coming in to help give students an idea of what is needed to get involved, including coding, narrative and artwork. It looks like it will be a good week. Have a look at Games Britannia.

    I'm rambling and going off topic now. I'm not sure I will be able to offer anything else on this topic, but I hope my experiences will help you.

    On that note, I was wondering if people would be interested in me doing a presentation on this and other virtualisation topics, hosted at my school, half/full day, with a chance to have a look at all of our gubbins. Let me know what you think on this thread.

    Dave O
    Last edited by Dave_O; 8th May 2011 at 07:14 PM.

  8. Thanks to Dave_O from: Embazzy (12th May 2011)

  9. #37 (new member, Leeds)
    @ Dave

    What a refreshing and honest response from someone who made the same mistake as me. VDI is "the way to go", so to speak. However, rushing into things and adopting a mindset that it will cater for everyone's needs is naive, to say the least. Been there and done that. Trying to explain to teachers that they can no longer play DVDs very well was the first blow. Having to then justify to the geography department that they can no longer run Google Earth Pro on this new flashy and expensive system was the next. Ultimately it created more problems than it solved, especially with regard to supporting the product once installed by the specialists. We all know the score here: being in IT, you are expected to become a master of Citrix and VMware overnight!

    I'd suggest VDI is the way forward, but careful planning and training of support staff need to come BEFORE it's introduced to the network.

  10. #38 Dave_O
    Reading back on some of the threads on VDI, an old Edwin Starr song sprang to mind. To misquote, if I may: "VDI, what is it good for? Absolutely nothing..."

    OK, that's not strictly true. For staff external access, yes; in school, via thin clients and old boxes, it's a waste of time, money and effort. Another year on and I do have to say that the thin clients have been no problem at all... when running Terminal Services. VDI is still not there by any stretch of the imagination. VMware were pretty cheesed off when I terminated my SnS just over a year ago. I did ask about new features etc. in the coming year (they could not really quote anything major) and my decision has been vindicated: what have they given us since last year? 5.1 (I'm on 5.0). Wow, massive improvement.

    As for virtualisation generally, I have a couple of questions.

    1) I don't understand the arguments about stuff having to be physical. Why does SIMS have to be physical, the DC physical, etc.? Is it performance? Do you really think in a school environment (we have 1450 students and 180 staff) that SIMS will be pushed to the absolute physical (excuse the pun) limit? I have sat and watched SIMS over many hours (years) - IOPS, network, processor, memory - and cannot figure out why it has to be physical. Please, someone enlighten me!

    2) Why do people think virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections. What the hell for!!? Now I've got 8Gb fibre, which I have come to realise is actually stupid; I could have saved money and stayed with 4Gb. It never uses more than 1.5Gb, even when running Veeam backups over fibre. If this is for networking rather than attached storage (iSCSI) then it is even more insane. As an example, I normally have 4 x 1Gb NICs teamed for host connectivity, so I disconnected 3 of these from each host and ran them with essentially just one NIC each for a week, just to see what would happen, and you know what? No difference. The HP switch monitoring software never registered above 60% on any of the NIC ports at any time, even with 720 workstations accessing work at the beginning (read) and at the end (write).
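
    For anyone wanting to repeat that single-NIC experiment, the arithmetic behind those switch-port percentages is worth sanity-checking by hand. Here is a minimal Python sketch, using made-up counter values and assuming 64-bit interface octet counters that don't wrap between samples:

    Code:
    def link_utilisation(octets_t0, octets_t1, interval_s, link_speed_bps):
        """Average utilisation (%) of one direction of a link between two
        counter samples taken interval_s seconds apart."""
        delta_bits = (octets_t1 - octets_t0) * 8   # octets -> bits
        return 100.0 * delta_bits / (interval_s * link_speed_bps)

    # Example: a 1 Gb/s NIC sampled 5 minutes apart (hypothetical counters)
    util = link_utilisation(octets_t0=9_120_000_000,
                            octets_t1=31_620_000_000,
                            interval_s=300,
                            link_speed_bps=1_000_000_000)
    print(f"average utilisation over 5 min: {util:.1f}%")   # ~60.0%

    Bear in mind the result is an average over the whole sampling interval, which is exactly the smoothing concern psydii raises further down the thread.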

    Just in case I didn't make it clear at the beginning: forget VDI. It's too expensive, both for the kit and the licensing; just use Terminal Services, which does a good job within its limitations. The important thing is knowing what those limitations are. Deploy in areas where the limitations will never be reached and you'll have happy staff and students.

  11. Thanks to Dave_O from: Jamo (4th March 2013)

  12. #39 mrbios
    Quote Originally Posted by Dave_O View Post
    2) Why do people think virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections. What the hell for!!? [...]
    It's nice to find someone who shares my thoughts on this. Nothing to add really, as you've pretty much covered my whole opinion; I get the feeling not enough monitoring and research goes into a lot of the implementations/recommendations people give and receive.

    The "more is better" mentality is one that leads to schools throwing a lot of money away on unneeded equipment and configurations.
    Last edited by mrbios; 3rd March 2013 at 05:13 PM.

  13. #40 psydii
    This is a great thread. Thanks for sharing your critical review of your strategy; it has actually inspired me to step up to a challenge I can see coming at my place.

    On over-speccing your links: your traffic summary is remarkably similar to mine, and I've got half the number of end-user devices, a completely different architecture, and half of those are coming over a SINGLE 1Gb/s link from the wireless controller. This similarity is suspicious. I wonder if the data collection timescales are smoothing out the peaks in our respective raw data sets?

    You also say that reducing your bandwidth by a factor of 4 had no effect. Your only measure was traffic reported by the switches. How about client performance, log on time (under load), and UI responsiveness on the desktops?

    My motivation for asking, I hope, is encapsulated in this anecdote: I once had (to my shame) all my server VMs running on one host for nearly a month. Once I realised and corrected it, I went back and looked at the data: we were trucking along at 70% CPU and RAM on the box, and there was roughly the same traffic running out of one box as we normally had on two (4x 1GbE per host for iSCSI and network). When I made an off-the-cuff remark to one of the office team, only then did they say "well, it has seemed a lot slower over the last few weeks than it was before". The trouble being that running at half capacity provided a 'just good enough' service, and the granularity of the monitoring system did not indicate that I was pushing the capacity limits of the links and CPU (oh yes, for there was only one CPU running the whole server suite!).
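
    To put some (entirely synthetic) numbers on that smoothing worry, here is a toy Python sketch comparing a 5-minute average with the 1-second peaks hidden inside it:

    Code:
    import statistics

    # One reading per second for 5 minutes on a 1 Gb/s link (values in Mbit/s).
    # Mostly quiet, with a 15-second burst near line rate. All numbers made up.
    per_second = [80] * 150 + [950] * 15 + [80] * 135

    avg_5min = statistics.mean(per_second)
    peak_1s = max(per_second)

    print(f"5-minute average: {avg_5min:.0f} Mbit/s "
          f"(~{avg_5min / 1000:.0%} of the link)")
    print(f"1-second peak:    {peak_1s} Mbit/s "
          f"(~{peak_1s / 1000:.0%} of the link)")

    A link that looks roughly 12% busy on a 5-minute graph can still be flat out for the 15 seconds that actually hurt, for example a whole class logging on at once.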

  14. #41 TheScarfedOne
    Just to jump onto this thread late... I'm running a Terminal Services farm for part of my estate, somewhere around 80-90 machines in curriculum and another 50-ish in admin locations. It works generally well, and bar some issues with some Wyse units I would say it's OK. We have a mix of Wyse and Windows Thin PC units.

    For your traditional graphics-intensive areas, e.g. ICT, Art, Design, Music, a physical machine is still the way... with over 300 machines lurking around the site.

    On the other bits of the thread... I'm also a great advocate of Salamander, as well as being a SharePoint, Exchange and Lync integrated user.
    Last edited by TheScarfedOne; 4th March 2013 at 10:34 PM.

  15. #42 Dave_O
    Quote Originally Posted by psydii View Post
    I wonder if the data collection timescales are smoothing out the peaks in our respective raw data sets?

    You also say that reducing your bandwidth by a factor of 4 had no effect. Your only measure was traffic reported by the switches. How about client performance, log on time (under load), and UI responsiveness on the desktops?
    1) No. I sat and watched it for most of one day (mid-week); there were some peaks at around 80% (which is what you would expect), but that was it.

    2) When I say no effect, that means everything: login time, internet speed (Smoothwall and TMG back-end firewall, both virtual), file access, SharePoint and Exchange, etc.

    Bear in mind the rest of the virtual infrastructure was working fine (ESXs, SAN, etc.). Given that there are 3 hosts, the NIC load would have been spread across all 3, as the different services (DC, SharePoint, Exchange, file services) would probably have been spread across them (I don't know for sure, as I let VMware deal with all that stuff, i.e. vMotion, load balancing, etc.). I probably wouldn't advocate just one 1Gb NIC, but certainly no more than 2.

    How come you were not alerted that one of the hosts was not functioning? As for CPU, one CPU in each host is plenty. See below for CPU usage.

    VCenter.jpg
    Last edited by Dave_O; 3rd March 2013 at 08:22 PM.

  16. #43 Duke
    Quote Originally Posted by Dave_O View Post
    Reading back on some of the threads on VDI, an old Edwin Starr song sprang to mind. To misquote, if I may: "VDI, what is it good for? Absolutely nothing..."

    [...]

    Just in case I didn't make it clear at the beginning: forget VDI. It's too expensive, both for the kit and the licensing; just use Terminal Services, which does a good job within its limitations. The important thing is knowing what those limitations are. Deploy in areas where the limitations will never be reached and you'll have happy staff and students.
    I'm going to play devil's advocate for a bit... I've recently started working for a company that have done a lot of VDI installs, and I can honestly say there are 1000+ seat VDI deployments that are a huge success, and that thin clients work much better for them than fat clients did. It all depends on the scenario, the users' requirements, and what their expectations are. We've got sites where staff can walk up to a machine, plug in a smartcard, be logged on in 5 seconds, unplug the card and be logged off instantly, then go home, go to a website, and log back in to their session from home exactly where they left off. We've also got sites with machines spread out over a wide campus, and the staff on-site would struggle with managing them. With VDI everything's central, and on the very rare occasion a thin client does break, it can either be fixed from the central server or the hardware swapped out in a couple of minutes.

    Is VDI suitable for everyone? No. VDI, if done properly, will lower your long term costs. Will it be cheaper in the short term? No, very unlikely. If you've got well-spec'd fat clients and are maxing out their CPU/RAM/GFX capabilities, then it's unlikely that thin clients will give you better performance. If you've got a site where the main use of PCs is Office, SIMS and browsing the internet, and they currently have very few or very old PCs, then VDI may offer a major improvement over what they've currently got. If you've got a new build and they need a huge number of PCs but can't afford to upgrade them every three years and need something that isn't going to get broken by users, VDI may be perfect.

    Quote Originally Posted by Dave_O View Post

    As for virtualisation generally I have a couple of question

    1) I don't understand the arguments about stuff having to be physical. Why does SIMS have to be physical, the DC physical, etc.? Is it performance? Do you really think in a school environment [...]

    2) Why do people think virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections. What the hell for!!? [...]
    1) Okay, using your first example of SIMS: I think the reason most people say SIMS has to be physical is that for a long time Capita said they wouldn't support SIMS in a virtual environment, and no one wants their huge MIS system to be unsupported. Also, SIMS is largely just an SQL server, and SQL usually has fairly high disk I/O requirements (depending on how heavily you use SIMS). Disk I/O doesn't always virtualise very well, unless you have good local storage on your VMware/Hyper-V/whatever host (not recommended) or a good network link to your SAN with fast disks (expensive). Using your second example of a DC, it's more that you shouldn't have just virtual DCs; you should have at least one physical DC/DNS/DHCP server. The reason for this is simple: dependencies. Say everything goes down (a long power cut or whatever) and your VMware/whatever host talks to the SAN via its hostname. You power up the VMware/whatever host, but it can't see the SAN because there's no server up doing DNS. You can't power up the virtual DC that does DNS, because to do that you need the host to see the storage. Same issue if something on the infrastructure side needs AD authentication for it to start up. You're stuck in a loop of dependencies that is very easily solved by having one physical DC. You can get around this by not having anything on your infrastructure that uses DHCP, hostnames (rather than IPs) or AD authentication, but most people just prefer to have a physical DC/DNS/DHCP server that they can power up first, and then start up their virtual hosts and storage.
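
    That cold-start loop is easier to see written down. A minimal Python sketch, using a simplified and entirely hypothetical dependency map (service -> the things it needs running first):

    Code:
    all_virtual = {
        "vm_host": ["dns"],            # the host reaches the SAN by hostname
        "dns":     ["san", "vm_host"], # but DNS runs as a VM on that host/SAN
        "san":     [],
    }

    with_physical_dc = {
        "vm_host": ["dns"],
        "dns":     [],                 # DNS/DHCP/AD on a standalone physical box
        "san":     [],
    }

    def find_cycle(deps, node, seen=()):
        """Return a dependency loop starting at node, or None if it can start."""
        if node in seen:
            return list(seen) + [node]
        for need in deps.get(node, []):
            loop = find_cycle(deps, need, seen + (node,))
            if loop:
                return loop
        return None

    print(find_cycle(all_virtual, "vm_host"))       # ['vm_host', 'dns', 'vm_host']
    print(find_cycle(with_physical_dc, "vm_host"))  # None - cold start is possible

    Breaking any one edge of the loop works; a standalone physical DC/DNS/DHCP box is just the simplest edge to break.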

    2) Bottlenecks - Say I run 20 physical servers and they've all got 1Gb connections. Say they all peak at around 80%, and maybe half of them peak around the same time of day. If I virtualise all 20 of those servers and put them on one physical host with a 1Gb or 2Gb link, then at some point it's very likely I'll run into network bottlenecks that are (technically) due to the virtual infrastructure. I don't know anyone running 8Gb or 10Gb links, because in reality in a school very few of us are running that kind of network throughput. I do know people with blade chassis that have very high consolidation ratios and are coming close to maxing out 4Gb links though, and have a perfectly good reason to consider 10Gb networking.
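
    A back-of-envelope version of that consolidation example, in Python, using the same made-up figures (20 servers, 80% peaks, half of them overlapping):

    Code:
    servers          = 20     # physical servers being consolidated
    link_gbps        = 1.0    # each currently has a 1Gb connection
    peak_utilisation = 0.80   # each peaks at around 80% of that link
    overlap          = 0.50   # roughly half peak at the same time of day

    concurrent_peak_gbps = servers * overlap * link_gbps * peak_utilisation
    print(f"overlapping peak demand: ~{concurrent_peak_gbps:.0f} Gb/s")  # ~8 Gb/s

    for uplink_gbps in (1, 2, 4, 10):
        verdict = "fine" if uplink_gbps >= concurrent_peak_gbps else "bottleneck"
        print(f"{uplink_gbps:>2}Gb host uplink: {verdict}")

    The overlap factor is the number worth actually measuring; it is what separates "a 2Gb team is plenty" from "we genuinely need 10Gb".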

    That's my two pence anyway.

    • VDI can be absolutely awesome if it's put in the right place. If someone's got high-powered fat clients and swaps them for thin clients expecting a performance increase, they're probably going to be disappointed. If they want a new, manageable, large array of desktops and don't need video editing capabilities (or similar), then VDI may be perfect for them.
    • You keep some servers physical to either avoid loops of dependencies or single points of failure, or because you can't afford the hardware needed to give the required performance on a virtual server.
    • You do 10Gb networking if you need it - most of us don't. If you're consolidating a lot of physical servers down onto one server by virtualising them, then you need to provide sufficient bandwidth for all of those servers. For some people that might only be 1Gb, for some people that might be 10Gb.


    Like I said, just my opinion and just playing devil's advocate.

  17. #44 psydii
    Quote Originally Posted by Dave_O View Post
    1) No. I sat and watched it for most of one day (mid-week); there were some peaks at around 80% (which is what you would expect), but that was it.
    Given your in-depth, self-critical analysis, I think you know this probably isn't really good enough to be sure. In the same way, I know there is no excuse for my not noticing all my VMs were running on one host. (It didn't trip any of the built-in alerts, and we didn't look at the console for a month because it was just working and we were concentrating on other matters.) Automated monitoring and reporting is king for enabling good decisions to be made, or past decisions to be effectively reviewed.

    In your environment do you have heavy multimedia file transfers, or run with roaming profiles, offline files and SearchIndexer enabled? For us those are where we see the maximum load on the network links.

    Quote Originally Posted by Dave_O View Post
    2) When I say no effect, that means everything: login time, internet speed (Smoothwall and TMG back-end firewall, both virtual), file access, SharePoint and Exchange, etc.
    Fair enough.

    You appear to have 3x 12-core 3GHz Xeons. That is roughly equivalent to 3x the grunt of my two hosts. Mine run at substantially over 50% during peak load. If I had two of those beasties of yours I would have the capacity not to need additional upgrades for another three years at least. However, by going cheaper than you I am already over capacity, and extremely limited in what I can do next without a third server.

    In other news, I've got a busy week. If I find the time I will tighten up my performance stats collectors and post an analysis to a new thread, for comparison.

  18. #45 Dave_O
    Quote Originally Posted by psydii View Post
    In the same way, I know there is no excuse for my not noticing all my VMs were running on one host. (It didn't trip any of the built-in alerts, and we didn't look at the console for a month because it was just working and we were concentrating on other matters.) Automated monitoring and reporting is king for enabling good decisions to be made, or past decisions to be effectively reviewed.
    Damn straight! No excuse. You'll be first against the wall when the revolution comes!

    Quote Originally Posted by psydii View Post
    In your environment do you have heavy multimedia file transfers, or run with roaming profiles, offline files and SearchIndexer enabled? For us those are where we see the maximum load on the network links.
    Yes to multimedia - we make heavy use of a streaming media server. Yes, we use roaming profiles (kids' profiles are fairly small as we control the desktop; staff profiles are large - they are crap at managing them). No to offline files. SearchIndexer? Don't know.


    Quote Originally Posted by psydii View Post
    In other news, I've got a busy week. If I find the time I will tighten up my performance stats collectors and post an analysis to a new thread, for comparison.
    As a starting point on performance, if you have a look at the thread below, I would be interested in your figures for SQLIO.


    SIMS SQLIO

  19. Thanks to Dave_O from: psydii (4th March 2013)
