If you've got a good network (e.g. 1Gbps switched) and fast RAID storage with plenty of cache on the server(s) serving the OS files, are you likely to see a performance improvement (e.g. in boot-up times, application start-up times) over a machine with a single local SATA disk? I would have thought that if your OS files are coming over a 1Gbps network from the server's cache memory, there is a potential increase in performance over files read from a local SATA disk sustaining perhaps 70MB/s, just from the disk access speeds.
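As a rough sanity check on those figures (a minimal sketch in Python; the 70MB/s disk rate and ~10% protocol overhead are assumptions, not measurements):

```python
# Rough throughput comparison: gigabit network vs. a single SATA disk.
# All figures are illustrative assumptions, not benchmarks.

network_raw_MBps = 1_000_000_000 / 8 / 1_000_000   # 1Gbps -> 125 MB/s theoretical
network_eff_MBps = network_raw_MBps * 0.9          # assume ~10% TCP/IP overhead
local_disk_MBps = 70                               # assumed sustained disk rate

print(f"1Gbps link, raw:       {network_raw_MBps:.0f} MB/s")
print(f"1Gbps link, effective: {network_eff_MBps:.0f} MB/s")
print(f"Local SATA disk:       {local_disk_MBps} MB/s")
# ~112 vs 70 MB/s: the network only wins if the server end (cache/RAID)
# can actually feed the link that fast.
```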
In the context of a classroom full of people doing that though... 10 machines all with their own local SATA disks compared to 1 central server.
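The catch with one central server is contention: all 10 machines share its uplink, while each local disk is dedicated. A quick sketch, assuming a single 1Gbps server uplink split evenly (a simplification, since real traffic is bursty):

```python
# Per-client share of one server's 1Gbps uplink, assuming an even split.
# Worst case where every client streams at once; real traffic is bursty.

server_uplink_MBps = 112   # ~1Gbps after protocol overhead (assumed)
local_disk_MBps = 70

for clients in (1, 5, 10, 30):
    share = server_uplink_MBps / clients
    winner = "server" if share > local_disk_MBps else "local disk"
    print(f"{clients:2d} concurrent clients: {share:6.1f} MB/s each -> {winner} faster")
```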
The data you're streaming over the network also has to reside somewhere, presumably a disk, so that's another point in the chain where performance needs measuring.
From Wikipedia's Serial ATA article:
"First-generation SATA devices often operated at best a little faster than parallel ATA/133 devices. Subsequently, a 3 Gbit/s signaling rate was added to the physical layer (PHY layer), effectively doubling maximum data throughput from 150 MB/s to 300 MB/s.
For mechanical hard drives, SATA 3 Gbit/s transfer rate is expected to satisfy drive throughput requirements for some time, as the fastest mechanical drives barely saturate a SATA 1.5 Gbit/s link. A SATA data cable rated for 1.5 Gbit/s will handle current mechanical drives without any loss of sustained and burst data transfer performance. However, high-performance flash drives are approaching SATA 3 Gbit/s transfer rate."
So although the SATA 3Gbit/s interface supports 300MB/s, mechanical drives can't yet supply data fast enough to saturate even a 1.5Gbit/s (150MB/s) interface unless they are solid state, which makes the virtual network drive faster, assuming the 200MB/s network figure already has the TCP/IP overheads taken off. (Note that a single 1Gbps link tops out around 125MB/s before overheads, so 200MB/s implies bonded links or faster networking.)
Take into account the overheads of seek times etc. for random access to the local disk (remember the maximum-speed tests are most likely on sequential data), and accessing data from cache memory on the server over a 200MB/s link starts to look better by comparison.
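To put a rough number on the seek penalty, here's a crude random-access model (the ~9ms average positioning time and 64KB request size are assumptions typical of a 7200rpm desktop drive of the era):

```python
# Crude model of why random access kills mechanical-disk throughput.
# Assumed figures: ~9ms average seek + rotational latency, 64KB requests.

seek_ms = 9.0                      # average positioning time per random read
request_KB = 64                    # typical small OS-file read
transfer_MBps = 70                 # the disk's sequential rate once positioned

transfer_ms = request_KB / 1024 / transfer_MBps * 1000
iops = 1000 / (seek_ms + transfer_ms)
random_MBps = iops * request_KB / 1024

print(f"Random reads: ~{iops:.0f} IOPS -> ~{random_MBps:.1f} MB/s effective")
# Roughly 6 MB/s, not 70: sequential benchmarks hide this. Server RAM
# cache has no seek penalty, so the network link becomes the limit.
```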
However, all this might be academic, as either system is likely to be fast enough. The benefits will come from reliability, predictability and downtime. If one of your 1000 school PCs' local disks dies, it might take an hour to get it swapped out. If an update takes all 1000 PCs down, I'll let you calculate how long it would take to roll them all back. With a virtual disk you could just point the PCs at the previous, non-updated version to put them all back.
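Taking up that calculation, a sketch with invented figures (30 minutes hands-on per machine and two technicians, purely illustrative):

```python
# Illustrative rollback cost: reimaging 1000 PCs vs repointing a virtual disk.
# The per-machine time and staff count are assumptions, not measured values.

pcs = 1000
minutes_per_pc = 30        # assumed hands-on time to roll back one machine
technicians = 2

rollback_hours = pcs * minutes_per_pc / 60 / technicians
print(f"Manual rollback: ~{rollback_hours:.0f} hours (~{rollback_hours/8:.0f} working days)")
print("Virtual disk:    change one pointer, then reboot everything")
```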
I was under the impression that a combination of drives, interfaces and disk seek times meant that although you could theoretically get 1.5Gbit/s from a SATA drive when reading sequential data, the overheads of seek times etc. meant it was closer to 70MB/s by the time you factored in random access. If the 200MB (I'm not sure of the source on this, but it's a vague recollection) of OS files needed to boot the thin client are being pulled from cache memory on the server, you don't have the seek time to factor in.
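As a rough illustration of what that means for boot times (using that recollected 200MB figure and the assumed rates from the earlier sketches):

```python
# Time to pull an assumed 200MB boot payload at the rates discussed above.
# The 200MB figure is a recollection, not a verified number.

payload_MB = 200
rates_MBps = {
    "server cache over ~112MB/s link": 112,
    "local disk, sequential (70MB/s)": 70,
    "local disk, random (~6MB/s)": 6,
}
for label, rate in rates_MBps.items():
    print(f"{label}: {payload_MB / rate:5.1f} s")
```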
We can discuss the merits of thin clients until the cows come home....
Tell me where in the UK I can see all of this thin client technology in action in a 'typical' secondary school environment with 800-1000 students, run from a remote server farm more than 30km away.
I am ready to be convinced; just show me a school that has successfully implemented a solution with 400-500+ devices in concurrent use: a mix of fixed and wireless thin clients for office work and web browsing, along with multimedia-rich PCs supporting video editing, Photoshop and CAD.
I want to see & time what happens at startup, lesson start, lesson end..... I want to see real world use.... I want to see whole classes working, printing... the works.....
I also want to see what happens when a server goes down; when a WAN link fails..... when there is a power failure at the data centre....
Anyone else interested?
I'd certainly be interested. Currently we use thin client where suitable, but music, media and tech still have fat clients. I'd love to see serious workloads over a long distance in action, or on the same LAN for that matter.
Why do the servers have to be offsite, 30km away? Why not put them where they are needed and manage them remotely, rather than manage them locally and deliver the apps remotely? Common sense. I think this is mentioned earlier in this thread as well; IINM it was you that raised this last time too...
I also think if you read back through this you'll see we're not talking about 100% thin clients or 100% fat clients, or that the thin clients are all server-based as opposed to VDI, zero or any other flavour.
Using the "zero" type David mentioned earlier the OS runs locally so all your local devices will work as before, IWBs would be no issue. For all intents is a fat PC, its just the disk is remote and the files are stored on a server.
There is a school, which in my head I've pictured as being over near the east coast, that is predominantly thin client and Linux-based as well. I talked with them a couple of years ago when we were looking at a Linux-based thin client option on the network boot menu of our fat clients, so that they could still be used for word processing, web browsing etc. if the local Windows installation failed. We've also visited a secondary in Mansfield this year that uses Citrix to deliver applications to the desktop. They've had it as a managed service for a couple of years now and are very happy with the improvements in reliability and performance it's brought.
I think that a lot of the multimedia issues with "traditional" server-based thin client computing have only been resolved in the last couple of years, so there won't be many schools that have yet taken the step. And it usually is a pretty big step, as there is usually a fairly high minimum number of workstations required to make it cost-effective. When I looked we needed about 70 thin clients. Many schools replace fewer than this each year, so it's not a viable option short term unless you get an opportunity to swap it all out, like BSF...
I'm hoping to see some good demonstrations and cost comparisons at the BETT show.
C U there?
Technically, since all RBCs / LAs are great big WANs, under BSF you don't need to have all your servers in the same location.
A smart company could spend time placing servers at key schools in the WAN (even all of them) and then use some load balancing to use local servers first and then remote servers next.
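A minimal sketch of that "local first, then remote" selection (the site names, latencies and loads are invented for illustration):

```python
# Locality-aware server selection: prefer servers at the client's own site,
# fall back to remote ones ordered by estimated latency. Illustrative only;
# every server entry here is made up.

SERVERS = [
    {"name": "school-a-srv1", "site": "school-a", "latency_ms": 0.3, "load": 0.9},
    {"name": "school-a-srv2", "site": "school-a", "latency_ms": 0.3, "load": 0.4},
    {"name": "school-b-srv1", "site": "school-b", "latency_ms": 4.0, "load": 0.2},
]

def pick_server(client_site, max_load=0.8):
    # Local servers sort first, then remote ones by latency; skip overloaded hosts.
    candidates = sorted(
        (s for s in SERVERS if s["load"] < max_load),
        key=lambda s: (s["site"] != client_site, s["latency_ms"], s["load"]),
    )
    return candidates[0]["name"] if candidates else None

print(pick_server("school-a"))  # -> school-a-srv2 (local, not overloaded)
```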
But hey ... that is just talking sensible.
Our BSF proposals so far include local caching servers for data, if I'm not mistaken. So far they seem to be quite sensible technically, just a bit blinkered: you must have more PCs, new Dell ones. Rather than looking to transform IT, they appear to be looking just to renew it.
Anyone looking at deploying thin client as part of BSF would not get away with centralising it without at least some, if not the majority, of the processing servers on site.
What you could do is centralise additional capacity that can be automatically booted by any school when required. The latest XenServer can boot additional servers to provide extra processing power when needed, then powers them off again to save power when the load drops. (I think.)
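Something like the following, as a minimal sketch of demand-based power control; note that pool_api and all of its methods are hypothetical stand-ins, not XenServer's real API:

```python
# Demand-based capacity control: power standby hosts on when average load
# is high, off when low. `pool_api` and its methods are hypothetical
# placeholders, not XenServer's actual API.

HIGH_WATER = 0.80   # power a standby host on above this average load
LOW_WATER = 0.30    # power a host off below this (hysteresis gap avoids flapping)

def balance(pool_api):
    running = pool_api.running_hosts()
    standby = pool_api.standby_hosts()
    avg_load = sum(pool_api.load(h) for h in running) / len(running)

    if avg_load > HIGH_WATER and standby:
        pool_api.power_on(standby[0])          # add capacity under load
    elif avg_load < LOW_WATER and len(running) > 1:
        idlest = min(running, key=pool_api.load)
        pool_api.evacuate(idlest)              # migrate VMs off first
        pool_api.power_off(idlest)             # then save the power
```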