Thin Client and Virtual Machines Thread, VDI / Terminal services in Technical; Originally Posted by Dave_O
Funscott - don't let VDI consume too much of the SAN IOPs (hence the reason for ...
30th January 2014, 04:36 PM #46
Yes, good point, and another reason why VDI hasn't been overly popular amongst support staff here. We more or less had to buy in a new SAN to cope with the extra strain that all those VMs put on it. Two classes were using Photoshop after the original install and we saw a decrease in overall network performance. Once VDI was throttled somewhat to ensure the stability of everything else, its usability became less attractive and users complained about the performance of big-hitting applications, which ran far smoother on even a three-year-old PC.
Originally Posted by Dave_O
However, another brand new and faster SAN later, it does seem to be up to a reasonable speed.
11th February 2014, 10:31 PM #47
Thought I would add a little update.
Running Hyper-V on Server 2012 R2 Datacenter, with Server 2012 R2 Standard hosting SCVMM 2012 R2 (SQL 2012 R2) on a Dell R910. This is running XenDesktop 7.1, and I have a Windows 7 Pro 64-bit guest VM (2GB RAM, 2 vCPUs). At the moment this is hosted on our SAN and averaging 1,600 IOPS from the guest VM (the SAN isn't used for VDI, mainly storage), and the client device is a Xenith Pro 2 (Dell Wyse). Where to start...
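That per-guest 1,600 IOPS figure is worth dwelling on, because it is what eats a SAN once you multiply it by a class-sized pool. A rough Python sketch of the arithmetic; the ~175 IOPS-per-15K-spindle figure is an assumed planning rule of thumb, not something measured here:

```python
# Back-of-envelope SAN sizing using the figure quoted above:
# one Windows 7 guest averaging 1600 IOPS. The per-spindle number
# is an assumed rule of thumb for 15K SAS, not a measured value.

GUEST_IOPS = 1600            # average measured from the guest VM
IOPS_PER_15K_SPINDLE = 175   # assumed planning figure

def spindles_needed(vm_count: int) -> tuple[int, int]:
    """Aggregate IOPS and raw 15K spindles needed to sustain it,
    ignoring RAID write penalty and controller cache."""
    total = vm_count * GUEST_IOPS
    return total, -(-total // IOPS_PER_15K_SPINDLE)  # ceiling division

# Even a modest 10-VM pool at this rate swamps a small SAN:
print(spindles_needed(10))   # (16000, 92)
```

Real guests settle well below install-time IOPS and controller caching absorbs a lot, but it shows why the earlier posts talk about throttling VDI to protect the rest of the SAN.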
We are unable to test Atlantis ILIO Diskless at the moment as their 2012 version isn't available (fingers crossed, we should get a copy soon).
However, we have had plenty of time to play about with the zero client and have the following findings:
- Setup: fairly simple.
- Boot time: around 5 seconds from cold, 3 seconds from standby.
- We have found normal domain logins (before using Citrix Profile Management) to be far faster than logging onto a production desktop (even one with an SSD).
- No Flash redirection (no zero client supports this).
- YouTube etc. is very watchable at normal / second-screen size (you wouldn't know it wasn't using a GPU to render); full screen is unacceptable.
- Dell Wyse support: one of the main reasons I would not buy this device in bulk. They will not even tell you what the latest firmware version is without a support contract (they do give 90 days of software downloads if you register the device). I'm not the only one complaining about their after-sales service.
Waiting on delivery of a trial Axel client (will get back to you, mat, with the results).
We have also tested the web Receiver, which works very well, and I can see it being very good for users who have their own devices (in the future).
So where does this leave us?
Very interested in the ILIO software; Dave will probably have more to say about this as he's got a VMware version which he is testing and is seeing good performance gains. The main drawback for us is the lack of Flash redirection on the zero client, which otherwise works great for everything we are looking for. So the way forward is probably going to be to put XenDesktop on hold / use it for remote access until NVIDIA GRID vGPU becomes available (new tech, very limited devices supported at the moment), but it will be the future of VDI installs.
Last edited by funscott; 11th February 2014 at 10:33 PM.
12th February 2014, 07:22 AM #48
I would say that is a fair amount of IOPS for that small a number of things running.
Originally Posted by funscott
With regards to Atlantis, there is another product called PernixData which I have heard very good things about.
13th February 2014, 11:19 AM #49
Running 50 VMs on a Dell R720 (256GB memory, 2 x 8-core 2GHz processors) using 6 x 146GB 15K SAS drives in RAID 10 for snap clones. ESXi 5.0, VMware View Premium; 35 Wyse V10Ls internally, with 15 staff VMs available after hours. Atlantis ILIO Diskless (in-memory). Internally the V10Ls don't support PCoIP (I tried a P25 with PCoIP and it works well), so they use RDP. Performance reported and observed is good. Staff report that external access (using the default PCoIP) is good, i.e. quick browser response, can play YouTube videos etc. without them being choppy. IOPS on a VM (using IOMeter) are 900. This is an average of 3 runs on different machines (run with 20 VMs in use) using the default settings. Login times are roughly the same as a desktop, i.e. 30-45 seconds (roaming profile, no profile on the machine, performed with 20 VMs in use).
Overall I am impressed with Atlantis ILIO and the performance gains achieved with this technology. It's early days (only day 2 in anger) so I will keep you posted, but the density you can achieve on a single server (the spec above can support up to 80 VMs with little or no loss of performance) is impressive and cost-effective. It has the added advantage of being able to scale easily and has no impact on, or need for, a SAN.
This may be a game changer for me, but ask me again in a couple of weeks.
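The in-memory part is doing the heavy lifting in those numbers. A quick Python sanity check using only figures quoted in this post; the ~175 IOPS-per-spindle value is again an assumed rule of thumb, not a measurement:

```python
# Why the ILIO Diskless figures above cannot be coming from the
# spindles alone. VM count, per-VM IOPS and drive count are from the
# post; the per-spindle IOPS value is an assumed rule of thumb.

VMS_IN_USE = 20
IOPS_PER_VM = 900        # IOMeter average reported above
SPINDLES = 6             # 6 x 146GB 15K SAS in RAID 10
IOPS_PER_SPINDLE = 175   # assumed planning figure

aggregate = VMS_IN_USE * IOPS_PER_VM            # 18000 IOPS observed
raw_disk_ceiling = SPINDLES * IOPS_PER_SPINDLE  # ~1050 IOPS from disk

print(aggregate, raw_disk_ceiling, aggregate // raw_disk_ceiling)
# roughly 17x what the disks could serve on their own, so the IO is
# being absorbed by the in-memory layer
```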
29th May 2014, 12:47 PM #50
funscott - do you have any further update on your POC? What are your thoughts and feelings on the technology now that you have used it for a few months? Will you be pursuing this on a broader scale?
Originally Posted by funscott
29th May 2014, 01:07 PM #51
Update on progress of VDI
OK, so we were going to buy Atlantis ILIO (Diskless) for 50 desktops, which turned out to be about £3,000. Expensive, but it seemed worth it. After much testing we found that we needed the persistent-disk version, so we asked for a re-quote, expecting it to be perhaps a bit more, something like £3,200. Nope: £7,000! So after many expletives we told Atlantis where to go. I then refocused on Nexenta. Nice technology, not quite as effective as Atlantis, but it seems to work OK. What was even better is that there is a community version (not the VDI version, which, let's face it, is just the storage with some bells and whistles I don't need) and it is free. So I put that on and here we are. I will report back on performance after half term. I do like NexentaStor: simple and effective. Let's see what happens now.
Also looking at the Microsoft licensing of VDI. A bit like Pandora's box. More on that when Paid_Peanuts gets the final word from MS.
3rd June 2014, 11:43 AM #52
Update II (this time it's personal)
Ended up ditching NexentaStor Community as we got better performance from raw disks. The final configuration we ended up with was:
Dell R720, 256GB memory, 2 x 2GHz 8-core processors, ESXi booting from SD cards, 12 x 146GB 15K SAS in RAID 10 (usable capacity 680GB), 4 x 300GB SSDs arranged as 2 x RAID 1 of 300GB
VMs are Windows 7, 2GB memory, 2 vCPUs, optimized for VDI
All pools are thin provisioned (VMware View 5.1)
VM pools are:
- students: 45 VMs; gold image and replicas on one of the SSD-based drives, VMs on the SAS drives
- staff: 25 VMs; gold image and replicas on one of the SSD-based drives, VMs on the other SSD-based drive
With 15 students logged in, IOMeter in the student pool (512B, 25% read) gives 4,720 IOPS.
With no staff logged in, IOMeter in the staff pool (512B, 25% read) gives 4,923 IOPS.
With these 70 machines running on the one ESXi server, it uses just over 140GB of memory (as expected) and approximately 9,000 MHz of CPU (steady state), giving an approximate density of 4.4 VMs per core. The SAS drives are capable of delivering just over 2,000 IOPS in their present arrangement, giving a worst case of 28 IOPS per VM, but as you can see above this is unlikely to happen in normal usage. I found that from an off state, powering on all VMs as quickly as possible takes approximately 30 minutes to reach steady state; login is possible before that, but performance is slow. Normal login from steady state for a class of approximately 25 students (using thin clients to access the VMs) is 65 seconds (roaming profile, no profile on the box), compared to 45 seconds for an equivalent physical PC in the same state. How this would scale for more than 25 simultaneous logins I don't know, but as long as class logins were staggered by 2 minutes there should be no problems.
There seems to be very little difference in the reported usability of either machine pool. I remain unconvinced about the value of using SSDs as opposed to RAIDed SAS drives in a school context.
I believe this configuration will support up to 100 VMs with little in the way of performance degradation. The cost of all the server-side hardware is around £6,000, i.e. £60 per VM.
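The headline numbers in this post hang together arithmetically. A small Python check, using only figures already quoted above:

```python
# Sanity-checking the density, worst-case IOPS and cost-per-VM
# figures quoted above. All inputs are taken from the post itself.

CORES = 2 * 8            # 2 x 8-core processors
VMS_RUNNING = 70
SAS_POOL_IOPS = 2000     # "just over 2,000 IOPS" from the SAS R10 set
SERVER_COST_GBP = 6000
VMS_TARGET = 100         # claimed ceiling for this box

density = VMS_RUNNING / CORES                   # 4.375, i.e. ~4.4 VMs per core
worst_case_iops = SAS_POOL_IOPS // VMS_RUNNING  # 28 IOPS per VM
cost_per_vm = SERVER_COST_GBP / VMS_TARGET      # £60 per VM

print(density, worst_case_iops, cost_per_vm)    # 4.375 28 60.0
```

Note the worst case is pessimistic by construction: it assumes all 70 VMs hammer the SAS pool at once, whereas the gold images and replicas sit on SSD.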
Last edited by Dave_O; 3rd June 2014 at 11:47 AM.
3rd June 2014, 01:11 PM #53
Your IOmeter results are from within the VM?
3rd June 2014, 01:39 PM #54
Your IOmeter results are from within the VM? - YES