+ Post New Thread
Page 4 of 5 FirstFirst 12345 LastLast
Results 46 to 60 of 61
Thin Client and Virtual Machines Thread, My conclusions on VDI and other things in Technical; "I'm going to play devil's advocate for a bit.." OK El Diablo bring it on. A. I take it, from ...
  1. #46

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    "I'm going to play devil's advocate for a bit.."

    OK El Diablo bring it on.

A. I take it, from your initial paragraph, that you don't work in a school (any more?). Your comment: "If you've got a site where the main use of PCs is Office, SIMS and browsing the internet, and they currently have very few or very old PCs, then VDI may offer a major improvement over what they've currently got."

Why not use Terminal Services? It runs all of these perfectly at a fraction of the cost. At least give me a case where only VDI can be used.

    1) "Okay, using your first example of SIMS - I think most people say SIMS has to be physical is because for a long time Capita said they wouldn't support SIMS in a virtual environment, and no one wants their huge MIS system to be unsupported. Also, SIMS is largely just an SQL server, and SQL usually has fairly high disk I/O requirements (depends on how heavily you use SIMS). Disk I/O doesn't always virtualise very well, unless you have good local storage on your VMware/Hyper-V/whatever host (not recommended) or a good network link to your SAN with fast disks (expensive)."

Not the case any more; Capita now support SIMS in virtual environments.

Seriously! This is a school and SIMS we are talking about, not a ten-thousand-simultaneous-user transactional database. At most we have 200 simultaneous user connections; add to that remote calls for SharePoint etc. and the total is no more than 300 (and that's being optimistic). Look at the disk activity and CPU (see below) this morning during peak time: not exactly stressing the system. Note the output of the SAN (V7000 below); given that it can achieve up to 50,000 IOPS, it's not exactly having to push hard, is it? Let's base the arguments on things that actually happen in schools, not what someone thinks might be the case.
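As a footnote, the claim above is easy to sanity-check with back-of-envelope arithmetic. The per-user IOPS figure below is a deliberate guess for illustration, not a Capita specification; the 50,000 IOPS ceiling is the V7000 figure quoted above.

```python
# Back-of-envelope check: how close does a school-sized SIMS load get
# to a mid-range SAN's IOPS ceiling? iops_per_user is an assumed,
# deliberately generous figure for a largely read-heavy MIS workload.

def sims_iops_estimate(concurrent_users, iops_per_user=2.0):
    """Crude peak-IOPS estimate for a given number of SIMS users."""
    return concurrent_users * iops_per_user

SAN_IOPS_CEILING = 50_000  # the V7000 figure quoted above

peak = sims_iops_estimate(300)  # 300 connections: the optimistic case
print(f"~{peak:.0f} IOPS, {100 * peak / SAN_IOPS_CEILING:.1f}% of the ceiling")
# → ~600 IOPS, 1.2% of the ceiling
```

Even if the per-user guess is off by a factor of ten, the load is nowhere near the SAN's ceiling, which is the point being made.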

To that end, I would also be interested in anyone's stats from the following thread:

    SIMS SQLOI

    "Using your second example of a DC, it's more that you shouldn't have just virtual DCs, you should have at least one physical DC/DNS/DHCP server. The reason for this is simple - dependencies. Say everything goes down (long powercut or whatever) and your VMware/whatever host talks to the SAN via its hostname. You power up the VMware/whatever host, but it can't see the SAN because there's no server up doing DNS. You can't power up the virtual DC that does DNS, because to do that you need the host to see the storage. Same issue if something on the infrastructure side needs AD authentication for it to start up. You're stuck in a loop of dependencies that are very easily solved by having one physical DC. You can get around this by not having anything on your infrastructure that uses DHCP, hostnames (rather than IPs) or AD authentication, but most people just prefer to have a physical DC/DNS/DHCP server that they can power up first, and then start up their virtual hosts and storage."

Why would you have communication between the SAN and host dependent on name resolution? That, to me, is a poor design decision; we had one or two power cuts early on and learnt this the hard way. Why is this a case for having a physical DC? With the advent of Server 2012, restoring a virtual DC is possible (no USN rollback issues). We have the luxury of a second virtual setup in a physically separate location (another block), in which we host a third DC that acts just like a physical one.
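To make the dependency concrete, here is a minimal sketch of the difference between addressing the SAN by hostname and keeping a static-IP fallback. The hostname and address are invented for illustration; in practice most admins simply configure the iSCSI target by IP.

```python
# Sketch of the boot-order loop being argued about: if the hypervisor
# host reaches its SAN by hostname, storage access depends on DNS
# being up first, and the DNS server may itself be a VM on that SAN.
# Addressing the SAN by IP, or keeping a documented static fallback,
# breaks the loop.
import socket

SAN_STATIC_IP = "10.0.0.50"  # documented fallback address (made up)

def san_address(hostname="san01.school.internal"):
    """Return the SAN's address, falling back to the static IP when
    name resolution is unavailable (e.g. during a cold power-up)."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return SAN_STATIC_IP
```

The sketch just makes the dependency visible: with the fallback in place, the host can find its storage before any DNS server has booted.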

    "2) Bottlenecks - Say I run 20 physical servers and they've all got 1Gb connections. Say they all peak at around 80%, and maybe half of them peak around the same time of day. If I virtualise all 20 of those servers and put them on one physical host with a 1Gb or 2Gb link, then at some point it's very likely I'll run into network bottlenecks that are (technically) due to the virtual infrastructure. I don't know anyone running 8Gb or 10Gb links, because in reality in a school very few of us are running that kind of network throughput. I do know people with blade chassis that have very high consolidation ratios and are coming close to maxing out 4Gb links though, and have a perfectly good reason to consider 10Gb networking."

The use of "say", "maybe" and "likely" does not fill me with confidence. You give a scenario similar to what we have (on average 25 VMs per host); the difference is that I am actually basing my comments on fact rather than supposition.
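For reference, the hypothetical being dismissed reduces to one line of arithmetic. All the figures are the scenario's own, not measurements.

```python
# The hypothetical as plain arithmetic: 20 servers on 1 Gb links,
# each peaking at 80% utilisation, with half of them peaking at the
# same time of day.

def coincident_peak_gbps(n_servers, link_gbps, peak_util, coincident_frac):
    """Aggregate demand if coincident_frac of the servers peak together."""
    return n_servers * coincident_frac * link_gbps * peak_util

demand = coincident_peak_gbps(20, 1.0, 0.80, 0.5)
print(f"{demand:.1f} Gb/s")  # 8.0 Gb/s: enough to swamp a 1-2 Gb uplink
```

Whether those utilisation figures ever occur in a real school is exactly the disagreement in this thread; the arithmetic only shows what would happen if they did.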


    "VDI can be absolutely awesome if it's put in the right place."

So where is this place? Give me a use case that only VDI can fulfil.

    "You keep some servers physical to either avoid loops of dependencies or single points of failure, or because you can't afford the hardware needed to give the required performance on a virtual server."

The only things you need to keep physical are backup (and even some of that can be virtual) and the firewall. The rest...

    "You do 10Gb networking if you need it - most of us don't. If you're consolidating a lot of physical servers down onto one server by virtualising them, then you need to provide sufficient bandwidth for all of those servers. For some people that might only be 1Gb, for some people that might be 10Gb."

10Gb? Please, spare me! Perhaps if everybody is streaming video, music and large media files ALL of the time. Show me a place that needs 10Gb networking and I'll show you a network riddled with viruses and classes watching the latest Pixar movie.

    SIMS.jpg
    SIMS-CPU.jpg
    V7000.jpg

  2. #47
    Duke's Avatar
    Join Date
    May 2009
    Posts
    1,017
    Thank Post
    300
    Thanked 174 Times in 160 Posts
    Rep Power
    57
    Ok, not looking to get into an argument, just offering my observations.

    Yes, up until very recently I used to work in a school. I now work for a company that does (amongst other things) big VDI deployments and we've had success with them for many years. I only say that to point out that VDI is out there and does work for people.

I'm a little confused over your argument of VDI vs Terminal Services; I see them as the same thing, just using different protocols: Wikipedia - Desktop virtualization. Are you referring to "Terminal Services = thin clients pointed towards TS/RDS servers" vs "VDI = thin clients pointed towards a dedicated VDI provider like VMware View"? If so, then we use both, depending on the customer's needs. Sometimes VDI with Terminal Servers is the best option, but I'd still call that VDI. Maybe I've misunderstood you?

Re SIMS - Yes, they do support it now, but not everyone may realise that, which is why some people might still be wary. Also, not everyone can afford a SAN like yours, but some do use SIMS really heavily. If you've got a decent SQL server already, why virtualise it onto storage hardware which may not be able to keep up if it's doing lots of other things? If you've got a good SAN or are using fast local storage and don't hammer SIMS, then I agree: virtualise it.

Re DC - Yep, in an ideal world nothing would depend on anything else and you could just power up the machines however you like. However, not everyone has an ideal set-up like that; maybe they're using hardware/software that at some level requires AD/DNS/DHCP and there's nothing they can do about it. Maybe they've still got other physical servers for applications that can't be virtualised, and they don't want to lose AD authentication if their virtual infrastructure goes down. If you can avoid that then great, but not everyone can, and so having the physical DC is useful to a lot of people.

Re Bottlenecks: It was a hypothetical scenario, thus the 'say', 'maybe' and 'likely'. No, that scenario probably won't apply to many schools, but I also don't know of any schools that have put 10Gb in either. Unis supporting tens of thousands of users probably will find a use for 10Gb.

So where is this place? Give me a use case that only VDI can fulfil.
I think I'm confused over the differentiation between pure VDI vs VDI using Terminal Servers. To me they're both VDI, and which one I'd use would depend on the scale of the customer and what OS they're running. I don't think there's any case where VDI in general is the only solution, but I do believe there are cases where it's the best (it might be for speed, manageability, data security, device security/vulnerability/damage, long-term costs, etc.).

    The only things you need physical are backup (even some of that can be virtual) and firewall. The rest...
    If you have the luxury of not needing access to AD/DNS if your virtual infrastructure is down. Not everyone has that luxury, or the ability/budget to make it so.

10Gb? Please, spare me! Perhaps if everybody is streaming video, music and large media files ALL of the time. Show me a place that needs 10Gb networking and I'll show you a network riddled with viruses and classes watching the latest Pixar movie.
    Just because you don't need 10Gb, it doesn't mean no one does. I don't know of any schools that would require 10Gb, but like I said, I imagine the bigger unis probably do.
    Last edited by Duke; 4th March 2013 at 12:42 PM.

  3. #48

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by Duke View Post

    I'm a little confused over your argument of VDI vs Terminal Services, I see them as the same thing just using different protocols: Wikipedia - Desktop virtualization. Are you referring to "Terminal Services = Thin Clients pointed towards TS/RDS servers" vs "VDI = thin clients pointed towards a dedicated VDI provider like VMware View"? If so then we use both, depending on the customer's needs. Sometimes VDI with Terminal Servers is the best option, but I'd still call that VDI. Maybe I've misunderstood you?

    Also

I think I'm confused over the differentiation between pure VDI vs VDI using Terminal Servers. To me they're both VDI, and which one I'd use would depend on the scale of the customer and what OS they're running. I don't think there's any case where VDI in general is the only solution, but I do believe there are cases where it's the best (it might be for speed, manageability, data security, device security/vulnerability/damage, long-term costs, etc.).
You've not misunderstood me, just what the difference is between VDI and Terminal Services. Using Wikipedia as a source, seriously!? The fundamental difference is the level of isolation VDI affords over Terminal Services: isolation of machine instances and of applications.

    Quote Originally Posted by Duke View Post
    Re SIMS - Yes, they do support it now, but not everyone may realise that which is why some people might still be wary. Also, not everyone can afford a SAN like yours, but do use SIMS really heavily. If your SQL server is running lots of things besides SIMS and you've got a decent SQL server already, why virtualise it on to storage hardware which may not be able to keep up if it's doing lots of other things? If you've got a good SAN or are using fast local storage and don't hammer SIMS, then I agree, virtualise it.
    Not sure what point you are making here? Are you an advocate of physical? If so give me some evidence to support your argument, as I have supplied in my post.

    Quote Originally Posted by Duke View Post
    If you have the luxury of not needing access to AD/DNS if your virtual infrastructure is down. Not everyone has that luxury, or the ability/budget to make it so.
?? This is not a luxury, it's by design. If you mean having a secondary virtual infrastructure, then yes.

    Quote Originally Posted by Duke View Post

Re DC - Yep, in an ideal world nothing would depend on anything else and you could just power up the machines however you like. However, not everyone has an ideal set-up like that; maybe they're using hardware/software that at some level requires AD/DNS/DHCP and there's nothing they can do about it. Maybe they've still got other physical servers for applications that can't be virtualised, and they don't want to lose AD authentication if their virtual infrastructure goes down. If you can avoid that then great, but not everyone can, and so having the physical DC is useful to a lot of people.

Re Bottlenecks: It was a hypothetical scenario, thus the 'say', 'maybe' and 'likely'. No, that scenario probably won't apply to many schools, but I also don't know of any schools that have put 10Gb in either. Unis supporting tens of thousands of users probably will find a use for 10Gb.

    Just because you don't need 10Gb, it doesn't mean no one does. I don't know of any schools that would require 10Gb, but like I said, I imagine the bigger unis probably do.
These are schools we are talking about, not universities. Why quote an example that is, to all intents and purposes, outside the scope of this forum?

  4. #49
    Duke's Avatar
    Join Date
    May 2009
    Posts
    1,017
    Thank Post
    300
    Thanked 174 Times in 160 Posts
    Rep Power
    57
Ok, looks like we're not going to agree on any points here, so I'll leave it. I was merely trying to make the point that although VDI clearly hasn't worked for you, and you don't need a physical SIMS server, a physical DC or 10Gb, the same isn't true for everyone.

  5. #50

    Join Date
    Mar 2013
    Posts
    3
    Thank Post
    0
    Thanked 0 Times in 0 Posts
    Rep Power
    0
    Quote Originally Posted by TheScarfedOne View Post
Just to jump onto this thread late... I'm running a terminal services farm for part of my estate, somewhere around 80-90 machines in curriculum and another 50 or so in admin locations. It works generally well, and bar some issues with some Wyse units I would say it's OK. We have a mix of Wyse and Windows Thin PC units.

For your traditional graphics-intensive areas, e.g. ICT, Art, Design and Music, a physical machine is still the way... with over 300 machines lurking around the site.

On the other bits of the thread... I'm also a great advocate of Salamander, as well as being a SharePoint, Exchange and Microsoft Lync integrated user.
    Hey ScarfedOne,
    do you have some examples?

  6. #51

    Join Date
    May 2009
    Posts
    575
    Thank Post
    0
    Thanked 89 Times in 50 Posts
    Rep Power
    30
    Out of interest, what has been the level of support that IT helpdesks have performed on VDI installations?

    Have you been expected to install, configure and support the whole lot?

    Did you get specialists in to install and configure this and then handed over the day to day support to you with them available to troubleshoot any problems?

    Or any other variation of the above?

    My reason for asking this is that whatever your opinions on VDI are, have you not noticed that you have had to increase your skill set dramatically to support a VDI installation? Has this resulted in an increased salary or were you just expected to add it to your daily duties? Was any training provided to you?

  7. #52


    Join Date
    May 2009
    Posts
    2,961
    Thank Post
    259
    Thanked 785 Times in 596 Posts
    Rep Power
    286
    Quote Originally Posted by Dave_O View Post
1) I don't understand the arguments about stuff having to be physical. Why does SIMS have to be physical, the DC physical etc.? Is it performance? Do you really think in a school environment (we have 1450 students and 180 staff) that SIMS will be pushed to the absolute physical (excuse the pun) limit? I have sat and watched SIMS over many hours (years) - IOPS, network, processor, memory - and cannot figure out why it has to be physical. Please someone enlighten me!
I suspect it's mostly about support. When they are diagnosing performance problems (caused primarily by shoddy code and database design), they don't want to be debugging a dodgy VM set-up, so a physical server with its own dedicated resources is the simplest way to decouple from that.

    2) Why do people think Virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections? What the hell for!!?
I'm putting in 10Gb connections from core to edge because we need to replace cabling and I expect the investment to run 5-10 years. I don't expect to be using the available bandwidth any time soon, but nor can I easily forecast bandwidth use over that kind of time scale.

On VDI, I'm looking at it as a means of providing remote access for students rather than having thin clients replace cheap desktop PCs. I've steered clear of it so far because the promised cost savings just don't add up, and if a consultancy can't even do basic arithmetic, what hope have they of configuring a VDI solution!

    Some great posts in this thread. Thanks to all participants.

  8. #53

    Join Date
    Sep 2008
    Posts
    102
    Thank Post
    4
    Thanked 20 Times in 13 Posts
    Rep Power
    22
    Quote Originally Posted by Dave_O View Post

1) I don't understand the arguments about stuff having to be physical. Why does SIMS have to be physical, the DC physical etc.? Is it performance? Do you really think in a school environment (we have 1450 students and 180 staff) that SIMS will be pushed to the absolute physical (excuse the pun) limit? I have sat and watched SIMS over many hours (years) - IOPS, network, processor, memory - and cannot figure out why it has to be physical. Please someone enlighten me!
There is no need (now) to have a physical anything. This is down to changes and advances in virtualisation software, OSs and applications. Remember when it was advised that ISA Server should be a physical server and that you should have a physical DC? As things change, people, or specifically IT techs, are reluctant to change because of our 'KISS' and 'if it's not broken, don't fix it' outlook on things. Hence if a physical SIMS server and DC work... why change? I am replacing our SIMS server as we speak, and the new one is going to be a physical server, but only because I want it to be, not because it has to be. Personal preference. Capita now accept that a virtual SIMS server should have no issues, but they still have documents which say, and I agree, that a poorly configured virtual infrastructure can be worse than an older physical server.

    As mentioned I think it all depends on IT tech expertise, skills, confidence and the equipment you have and how it is configured.

    Quote Originally Posted by Dave_O View Post
2) Why do people think virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections? What the hell for!!? Now I've got 8Gb fibre, which I have come to realise is actually stupid; I could have saved money and stayed with 4Gb. It never uses more than 1.5Gb, even when running Veeam backups over fibre. If this is for networking rather than attached storage (iSCSI) then this is even more insane. As an example, I normally have 4 x 1Gb NICs teamed for host connectivity, so I disconnected 3 of these from each host and ran them with essentially just one NIC each for a week just to see what would happen, and you know what? No difference. The HP switch monitoring software never registered above 60% on any of the NIC ports at any time, even with 720 workstations accessing work at the beginning (read) and at the end (write).
We have 10Gb fibre and two ESX hosts, each with 8 x 1Gb NICs. Maybe overkill, or maybe just that we were upgrading every single switch in school, installing a new managed wireless network and deploying over 1200 iPads and 100 MacBooks on top of our 800 desktops.

We don't use anywhere near the bandwidth we have available, and we stream media, have over 1200 iPads, use VoIP and everything else you can expect in a school environment. But, as always, we like to provide resilience or over-spec for future-proofing, or sometimes buy the best we can now to mitigate not getting funding in the future. I am sure we are all guilty of this. I am contemplating upgrading to a 20Gb backbone to protect against a single point of failure.

I feel it is now appropriate to say that I work in the same LA as @Dave_O and know him very well. He is not always right, and he will admit this; nor is he always as confrontational or arrogant as he may come across. I like to think of him as more of a critical friend!

    JayEmm

  9. #54

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    I am always right!
    Don't mistake confrontation for someone who questions.
    Arrogance is subjective, depending on perspective and personal understanding.

  10. #55

    Join Date
    Sep 2008
    Posts
    102
    Thank Post
    4
    Thanked 20 Times in 13 Posts
    Rep Power
    22
    Quote Originally Posted by Dave_O View Post
    Arrogance is subjective, depending on perspective and personal understanding.
    Hence why I said "comes across" lol

  11. #56


    Join Date
    Feb 2007
    Location
    51.405546, -0.510212
    Posts
    8,776
    Thank Post
    223
    Thanked 2,632 Times in 1,939 Posts
    Rep Power
    779
    This is quite an interesting article on the subject of VDI performance...

    Comparing the CPU Performance of Physical and Virtual PCs (VDI) Helge Klein

    When you move users from a physical PC to a VDI environment you may find that they are not too happy with their new machine’s performance – it happened to me. To quantify things I took a series of measurements comparing the old PCs we migrated away from with both VDI machines and the new PCs available to some.

    Closing Thoughts
    When moving power users (a.k.a. knowledge workers) from a PC to a VDI machine CPU performance is a topic that needs as much attention as IO performance. In your project, do not rely on benchmarks alone – those are always synthetic and may or may not match what you see in reality – but have real humans test the applications they use in the way they use them. I have seen “harmless” Excel sheets turn out to be massive CPU hogs running for hours or even days. Differences in performance were noticed immediately by the users.
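Klein's advice translates into a very small harness: time the same CPU-bound task on the old physical PC and inside the VDI session, then compare wall-clock results. This is only a sketch; the loop below is a crude stand-in for the "harmless" Excel sheets he mentions, and real user applications are what he actually recommends testing.

```python
# Minimal single-thread CPU comparison harness: run this unchanged on
# both the physical PC and the VDI machine, then compare the timings.
import time

def cpu_task(n=500_000):
    """Single-threaded integer work, standing in for a CPU-hungry
    spreadsheet recalculation."""
    total = 0
    for i in range(1, n):
        total += i * i % 7
    return total

def time_it(fn, repeats=3):
    """Best-of-N wall-clock time in seconds, to damp scheduler noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    print(f"cpu_task best of 3: {time_it(cpu_task):.3f}s "
          "(run on both machines and compare)")
```

Best-of-N is used rather than an average because on a shared VDI host the slowest runs mostly measure contention from other VMs, which is itself worth noting separately.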

  12. #57

    seawolf's Avatar
    Join Date
    Jan 2010
    Posts
    969
    Thank Post
    12
    Thanked 283 Times in 217 Posts
    Blog Entries
    1
    Rep Power
    175
Quote Originally Posted by Dave_O View Post

2) Why do people think virtual infrastructures are a bottleneck and try to scale them to be able to run the equivalent of NASA? I read on here that someone is suggesting using 10Gb connections? What the hell for!!? Now I've got 8Gb fibre, which I have come to realise is actually stupid; I could have saved money and stayed with 4Gb. It never uses more than 1.5Gb, even when running Veeam backups over fibre. If this is for networking rather than attached storage (iSCSI) then this is even more insane. As an example, I normally have 4 x 1Gb NICs teamed for host connectivity, so I disconnected 3 of these from each host and ran them with essentially just one NIC each for a week just to see what would happen, and you know what? No difference. The HP switch monitoring software never registered above 60% on any of the NIC ports at any time, even with 720 workstations accessing work at the beginning (read) and at the end (write).

The reason for suggesting 10GbE for virtual infrastructure primarily has to do with iSCSI and the significantly reduced latencies of 10GbE compared to Gigabit: it's a matter of a typical 2-10 microseconds of latency vs 25-150 microseconds for Gigabit (http://storage.dpie.com/downloads/so...ition.pdf). The increased throughput is secondary to the reduced latency in most (non-SSD storage) virtualised environments. This would affect your write and read latencies in particular, and overall responsiveness. Pair 10GbE with an SSD or hybrid storage solution and you will have fantastic performance. Pair SSD with Gigabit and the performance is very good, but you will not be able to get anywhere close to maximising the capabilities (or value) of your backend storage.
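A rough sketch of why latency, not throughput, is the headline number. The microsecond figures are the ones quoted above; the queue-depth-1 model is a simplification, since real initiators keep many I/Os in flight.

```python
# For a single outstanding (queue depth 1) synchronous I/O, each
# operation costs at least one network round-trip, so the per-stream
# ceiling is roughly 1 / latency.

def qd1_iops_ceiling(latency_us):
    """Max ops/sec for one synchronous outstanding I/O at the given
    round-trip latency in microseconds."""
    return 1_000_000 / latency_us

for name, lat_us in [("10GbE best", 2), ("10GbE worst", 10),
                     ("GbE best", 25), ("GbE worst", 150)]:
    print(f"{name:>11}: ~{qd1_iops_ceiling(lat_us):>9,.0f} IOPS per stream")
```

On these figures a single synchronous stream tops out around 6,700-40,000 IOPS on Gigabit but 100,000-500,000 on 10GbE, which is why fast SSD back-ends can sit starved behind a Gigabit link even when throughput graphs look idle.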

Regarding VDI, my only experience with it has been with Sun VDI and SGD. These solutions work fantastically, and were always head and shoulders above the rest, being among the most mature and best technologies (since 1999). However, Oracle have recently seen fit to kill off the Sun Ray product line, so I can't even recommend them any more. Unless someone buys the technology off Oracle and resurrects it, I probably wouldn't touch any VDI solution with confidence. Certainly not VMware's solutions; they make very good server virtualisation products, but their VDI is pretty rough...

Enjoyed your original comments and advice; much of it was spot on.

  13. #58

    seawolf's Avatar
    Join Date
    Jan 2010
    Posts
    969
    Thank Post
    12
    Thanked 283 Times in 217 Posts
    Blog Entries
    1
    Rep Power
    175
    Quote Originally Posted by Dave_O View Post
    "I'm going to play devil's advocate for a bit.."
    Why would you have communication between SAN and host dependent on name resolution? That to me is a poor design decision. We had one or two power cuts early on and learnt this the hard way. Why is this a case for having a physical DC? With the advent of Server 2012 restoring a virtual DC is possible (no AD count number issues). We have the luxury of having a second virtual setup in a physically seperate location (another block) in which we host a third DC acts just like a physical one.
I have to admit that I do agree with the logic of having a physical DNS server around. Maybe I'm just paranoid, but I have been caught with my pants down, finding out that systems that should have no problem with DNS being down actually have lots of problems with it! Let's just say that unless you test EVERYTHING whilst DNS is down, it is a risk. Small? Probably. One I'm willing to take again? Not a chance.

    Quote Originally Posted by Dave_O View Post
    "VDI can be absolutely awesome if it's put in the right place."

    So where is this place? Give me a use case that only VDI can fullfil?
    1. Defence
    2. Banks
    3. Hospitals
    4. Intel Agencies

    None of these should really be using anything OTHER than a VDI solution. In fact, that's what many of them are using. Government agencies and the military are the biggest customers Oracle has. Many of them are now spitting their dummies over Oracle killing the Sun Ray.

    Education is not on the above list because education isn't one of those use cases where VDI is the only answer. In most cases, it's not even the best answer in an Educational environment. VERY large deployments (10,000+) though - THEN it might be the only viable and cost effective solution.

  14. Thanks to seawolf from:

    Duke (15th August 2013)

  15. #59
    gshaw's Avatar
    Join Date
    Sep 2007
    Location
    Essex
    Posts
    2,650
    Thank Post
    164
    Thanked 217 Times in 200 Posts
    Rep Power
    66
VDI up until now has really been hamstrung by storage back-ends not being up to the job: hard-disk-based storage is horribly inefficient if you want to match the performance of any half-decent desktop. It seems there are some new ways of doing things coming through using flash/RAM now, and if those work as well as suggested it might finally be possible to give a user experience that's more positive than the equivalent desktop.

  16. #60
    AButters's Avatar
    Join Date
    Feb 2012
    Location
    Wales
    Posts
    465
    Thank Post
    140
    Thanked 107 Times in 82 Posts
    Rep Power
    41
    Just replaced one IT suite of thin clients with Dell Optiplex 7010 fatties.

    It will be the first of many over the next few years.


