VDI / Terminal services - Thin Client and Virtual Machines forum (page 4 of 4, posts 46 to 54)
  1. #46

    Quote Originally Posted by Dave_O:
    Funscott - don't let VDI consume too much of the SAN IOPs (hence the reason for putting in local storage in our solution). Having said that ILIO should reduce the IOPs required by a large amount. We found that shutting down all W7 VMs reduced IOPs on the SAN by 4000 (as reported by the V7000).
    Yes, good point, and another reason why VDI hasn't been overly popular amongst support staff here. We more or less had to buy in a new SAN to cope with the extra strain all those VMs put on it. Two classes were using Photoshop after the original install and we saw a decrease in overall network performance. Once VDI was throttled somewhat to ensure the stability of the rest, it became less attractive to use, and users complained about performance in heavy-hitting applications; these ran far smoother on even a three-year-old PC.

    However, another brand new and faster SAN later, it does seem to be up to a reasonable speed.

  2. #47 (funscott)
    Thought I would add a little update.

    We are running Hyper-V on Server 2012 R2 Datacenter, with a Server 2012 R2 Standard VM hosting SCVMM 2012 R2 (SQL Server 2012), all on a Dell R910. This runs XenDesktop 7.1, and I have a Windows 7 Pro 64-bit guest VM (2GB RAM, 2 vCPUs). At the moment this is hosted on our SAN and averaging 1,600 IOPS from the guest VM (the SAN isn't used for VDI, mainly storage), and the client device is a Xenith Pro 2 (Dell Wyse). Where to start...

    We are unable to test Atlantis ILIO Diskless at the moment as their 2012 version isn't available (fingers crossed, we should get a copy soon).

    However, I have had plenty of time to play about with the zero client, and have the following findings.

    Good points:

    - setup is fairly simple
    - boot time from cold is around 5 seconds; from standby, 3 seconds
    - we have also found normal domain logins (before using Citrix Profile Management) to be far faster than logging onto a production desktop (even one with an SSD)

    Bad points:

    - no Flash redirection (no zero client supports this)
    - YouTube etc. is very watchable at normal / second screen size (you wouldn't know it wasn't using a GPU to render), but full screen is unacceptable
    - Dell Wyse support: one of the main reasons I would not buy this device in bulk. They will not even tell you what the latest firmware version is without a support contract (they do give 90 days of software downloads if you register the device). I'm not the only one complaining about their after-sales.



    Waiting on delivery of a trial Axel client (will get back to you, Mat, with the results).

    We have also tested the web Receiver, which works very well; I can see it being very good for users who have their own devices (in the future).


    So where does this leave us?

    Very interested in the ILIO software; Dave will probably have more to say about this, as he's got a VMware version which he is testing and is seeing good performance gains. The main drawback for us is the lack of Flash redirection on the zero client, which works great for everything else we are looking for. So the way forward is probably to put XenDesktop on hold / use it for remote access until NVIDIA GRID vGPU becomes available (new tech, very limited devices supported at the moment), but that will be the future of VDI installs.
    Last edited by funscott; 11th February 2014 at 10:33 PM.

  3. #48 (glennda)
    Quote Originally Posted by funscott:
    Thought I would add a little update.

    We are running Hyper-V on Server 2012 R2 Datacenter, with a Server 2012 R2 Standard VM hosting SCVMM 2012 R2 (SQL Server 2012), all on a Dell R910. This runs XenDesktop 7.1, and I have a Windows 7 Pro 64-bit guest VM (2GB RAM, 2 vCPUs). At the moment this is hosted on our SAN and averaging 1,600 IOPS from the guest VM (the SAN isn't used for VDI, mainly storage), and the client device is a Xenith Pro 2 (Dell Wyse). Where to start...
    I would say that is a fair number of IOPS for that small amount of things running.

    With regards to Atlantis, there is another product called PernixData which I have heard very good things about.
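To put funscott's 1,600 IOPS figure in context, here is a quick back-of-envelope check. The per-desktop budget below is an assumed rule of thumb, not a number from this thread:

```python
# Rough sanity check: a steady-state Windows 7 desktop is commonly budgeted
# at roughly 10-30 IOPS, with boot/login storms several times higher.
# The 30 IOPS figure is an assumed high-end budget, not measured here.
observed_iops = 1600        # average reported for the single guest VM
per_desktop_budget = 30     # assumed steady-state budget per desktop (IOPS)

multiple = observed_iops / per_desktop_budget
print(f"Observed load is roughly {multiple:.0f}x a typical desktop budget")
```

On that assumption, one VM is generating the load of an entire classroom, which suggests something (indexing, antivirus scans, page-file churn) is worth profiling before scaling out.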

  4. #49 (Dave_O)
    Running 50 VMs on a Dell R720 with 256GB memory and 2 x 8-core 2GHz processors, using 6 x 146GB 15K SAS drives in RAID 10 for snapclones. ESXi 5.0, VMware View Premium; 35 Wyse V10Ls internally, with 15 staff VMs available after hours. Atlantis ILIO Diskless (in-memory). Internally the V10Ls don't support PCoIP (I tried a P25 with PCoIP and it works well), so they use RDP. Performance reported and observed is good. Staff report that external access (using the default PCoIP) is good, i.e. quick browser response, and they can play YouTube videos etc. without them being choppy. IOPS on a VM (using IOMeter) are 900; this is an average of 3 runs on different machines (run with 20 VMs in use) using the default access specification. Login times are roughly the same as a desktop, i.e. 30-45 seconds (roaming profile, no profile on the machine, measured with 20 VMs in use).

    Overall I am impressed with Atlantis ILIO and the performance gains achieved with this technology. It's early doors yet (only day 2 in anger), so I will keep you posted, but the density you can achieve on a single server (the spec above can support up to 80 VMs with little or no loss of performance) is impressive and cost-effective. It has the added advantage of being able to scale easily and has no impact on, or need for, a SAN.

    This may be a game changer for me but ask me again in a couple of weeks.
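A back-of-envelope estimate of what the six-spindle RAID 10 set in the spec above could deliver on its own helps explain why an in-memory layer like ILIO matters. The per-drive figure and workload mix are assumptions, not numbers from the post:

```python
# Estimate front-end IOPS for 6 x 15K SAS spindles in RAID 10 under a
# write-heavy, VDI-like mix. Per-drive IOPS and the 25% read fraction
# are assumed rules of thumb, not measured values from this thread.
drives = 6
iops_per_drive = 175          # assumed typical 15K SAS spindle
raid10_write_penalty = 2      # each logical write costs two physical writes
read_fraction = 0.25          # assumed VDI-like mix (writes dominate)

raw_iops = drives * iops_per_drive
frontend = raw_iops / (read_fraction
                       + (1 - read_fraction) * raid10_write_penalty)
print(f"~{frontend:.0f} front-end IOPS from the spindles alone")
```

On those assumptions the spindles alone would struggle to serve 50 desktops at even 20 IOPS each, so the 900 IOPS observed inside a VM is presumably the in-memory layer doing the heavy lifting.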

  5. Thanks to Dave_O from: richbrowncardiff (10th March 2014)

  6. #50 (Paid_Peanuts)
    Quote Originally Posted by funscott:
    So where does this leave us?

    Very interested in the ILIO software; Dave will probably have more to say about this, as he's got a VMware version which he is testing and is seeing good performance gains. The main drawback for us is the lack of Flash redirection on the zero client, which works great for everything else we are looking for. So the way forward is probably to put XenDesktop on hold / use it for remote access until NVIDIA GRID vGPU becomes available (new tech, very limited devices supported at the moment), but that will be the future of VDI installs.
    funscott - do you have any further update on your POC? What are your thoughts and feelings on the technology now you have used it for a few months? Will you be pursuing this on a broader scale?

  7. #51 (Dave_O)
    Update on progress of VDI

    OK, so we were going to buy Atlantis ILIO (Diskless) for 50 desktops, which turned out to be about £3,000. Expensive, but it seemed worth it. After much testing we found that we needed the persistent-disk version, so we asked for a re-quote, expecting it to be perhaps a bit more, something like £3,200. Nope: £7,000! So after many expletives we told Atlantis where to go. I then refocused on Nexenta. Nice technology, not quite as effective as Atlantis, but it seems to work OK. Even better, there is a community edition which is free (not the VDI version, which, let's face it, is just the storage with some bells and whistles I don't need). So I put that on and here we are. I will report back on performance after half term. I do like NexentaStor: simple and effective. Let's see what happens now.
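For what it's worth, the per-seat arithmetic behind the reaction above, assuming both quotes cover the same 50 desktops:

```python
# Per-seat software cost from the two quotes in the post.
# The 50-seat count is taken from the post; everything else is arithmetic.
seats = 50
diskless_quote_gbp = 3000      # first quote, ILIO Diskless
persistent_quote_gbp = 7000    # re-quote, persistent-disk version

per_seat_diskless = diskless_quote_gbp / seats      # 60.0 GBP per desktop
per_seat_persistent = persistent_quote_gbp / seats  # 140.0 GBP per desktop
```

More than doubling the per-seat software cost is hard to justify when the entire server-side hardware build described later in the thread came in at about £60 per VM.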

    Also looking at the Microsoft licensing of VDI. A bit like Pandora's box. More on that when Paid_Peanuts gets the final word from MS.

  8. #52 (Dave_O)
    Update II (this time it's personal) [attachments: students.png, staff.png]

    We ended up ditching NexentaStor Community as we got better performance from raw disks. The final configuration we ended up with was:

    Dell R720, 256GB memory, 2 x 2GHz 8-core processors, ESXi booting from SD cards, 12 x 146GB 15K SAS in RAID 10 (usable capacity 680GB), 4 x 300GB SSDs arranged as 2 x RAID 1 of 300GB each

    VMs are Windows 7, 2GB memory, 2 vCPUs, optimized for VDI

    All pools are thin provisioned (VMware View 5.1)

    VM pools are:

    - students, 45 VMs: gold image and replicas on one of the SSD-based drives, VMs on the SAS drives

    - staff, 25 VMs: gold image and replicas on one of the SSD-based drives, VMs on the other SSD-based drive

    With 15 students logged in, the student pool IOMeter run (512B, 25% read) gives 4,720 IOPS.
    With no staff logged in, the staff pool IOMeter run (512B, 25% read) gives 4,923 IOPS.

    With these 70 machines running on the one ESX server, it uses just over 140GB of memory (as expected) and approximately 9,000MHz of CPU at steady state, an approximate density of 4.4 VMs per core. The SAS drives are capable of delivering just over 2,000 IOPS in their present arrangement, giving a worst case of about 28 IOPS per VM, but as you can see above this is unlikely to happen in normal usage.

    From an off state, powering on all VMs as quickly as possible takes approximately 30 minutes to reach steady state; login is possible before that, but performance is slow. Normal login from steady state for a class of approximately 25 students (using thin clients to access the VMs) is 65 seconds (roaming profile, no profile on the box), compared to 45 seconds for an equivalent physical PC in the same state. How this would scale beyond 25 simultaneous logins I don't know, but as long as class logins are staggered by 2 minutes there should be no problems.
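The headline figures in that paragraph can be reproduced directly from the stated configuration:

```python
# Reproduce the capacity figures quoted in the post from its own inputs.
vms = 70
mem_per_vm_gb = 2
cores = 2 * 8                 # two 8-core processors
sas_pool_iops = 2000          # figure the post quotes for the SAS drives

memory_gb = vms * mem_per_vm_gb          # 140, matching "just over 140GB"
vms_per_core = vms / cores               # 4.375, i.e. roughly 4.4 VMs/core
worst_case_per_vm = sas_pool_iops / vms  # about 28.6 IOPS per VM worst case
```

Note the worst case divides the SAS pool across all 70 VMs, as the post does; since the 25 staff VMs actually live on SSD, the per-VM floor for the 45 student VMs on SAS would be somewhat higher.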

    There seems to be very little difference in the reported usability of either machine pool. I remain unconvinced about the value of using SSDs as opposed to RAIDed SAS drives in a school context.

    I believe this configuration will support up to 100 VMs with little in the way of performance degradation. The cost of all the server-side hardware is around £6,000, i.e. £60 per VM.
    Last edited by Dave_O; 3rd June 2014 at 11:47 AM.

  9. Thanks to Dave_O from: AButters (3rd June 2014)

  10. #53 (AButters)
    Interesting!

    Your IOMeter results are from within the VM?

  11. #54 (Dave_O)
    Your IOMeter results are from within the VM? - Yes.
