Thread: Wyse Streaming Manager (forum: Thin Client and Virtual Machines, Technical)
4th April 2006, 11:12 AM #1
Wyse Streaming Manager
Has anyone had a chance to play with this yet? I've read about the concept and it sounds pretty cool: being able to run an OS locally on thin clients, streamed over the network, without the traditional multimedia/performance problems of thin clients.
It does make me wonder how quick it would be, as it's loading the whole OS over the network! Hopefully it's not another RM NetLM! I could see it being pretty slow streaming apps over the network, and the cost of the 'thin' client Wyse hardware for WSM isn't far off the cost of a full PC!
4th April 2006, 11:51 AM #2
Re: Wyse Streaming Manager
The concept of semi-thin clients has been around for a while. The Wyse terminals are pretty expensive for what they do. We build EPIA-based computers for far less than the cost of the Wyse kit, with much better specs, and have Thinstation booting from the network. It boots the whole OS in a few seconds (IIRC it's about a 9 MB Linux image). Thinstation has a list of contributed packages such as Firefox and Flash. I'd like to see VLC in the list for media streaming; not sure how difficult it would be to add, but it would only add a few MB to the image.
A commercial OS does semi-thin for media streaming, at £35 per licence (I think we pay more for XP!): http://www.precedence.co.uk/products/thinit/
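The "boots in a few seconds" claim is easy to sanity-check with some back-of-envelope arithmetic. A rough sketch (illustrative figures only; it ignores TFTP/PXE protocol overhead and assumes the link is the bottleneck):

```python
# Time to pull a boot image over the LAN, assuming the link speed is
# the only bottleneck (no protocol overhead, no disk or CPU limits).

def transfer_seconds(image_mb: float, link_mbps: float) -> float:
    """Seconds to move image_mb megabytes over a link_mbps link."""
    return (image_mb * 8) / link_mbps

# A ~9 MB Thinstation image on Fast Ethernet vs Gigabit:
print(round(transfer_seconds(9, 100), 2))    # 0.72
print(round(transfer_seconds(9, 1000), 3))   # 0.072

# A full fat-client image (say 1500 MB) pulled in one go, for contrast:
print(round(transfer_seconds(1500, 1000), 1))  # 12.0
```

Sub-second transfer for a 9 MB image explains the fast boots; the rest of the boot time is kernel and service startup, not the network.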
16th October 2006, 12:28 PM #3
Re: Wyse Streaming Manager
The other thing to note about the Wyse Streaming Manager is the spec of the required terminal. I thought this software might be the answer to all my users' complaints (yeah, right!) regarding multimedia performance, but then I saw that the clients must have a 1 GHz processor.
On the plus side, the advertising blurb says that any device matching the client specification (think of those musty old PCs you threw away in the summer holidays) can be used as a terminal...
15th January 2011, 12:24 AM #4
We have just deployed 70 Wyse R900Ls on WSM; the solution is very scalable and fault tolerant. We have no performance issues whatsoever: our clients boot the OS quickly and can run CAD and multimedia as well as traditional fat clients.
We found that network latency and disk I/O are the most important performance factors. To ensure excellent client performance, all clients are connected at 1 Gbps into the same core switch (HP ProCurve 5412) as the server, which is connected at 10 Gbps.
For high throughput and low I/O latency, we store the OS images on a 256 GB PCIe SSD card.
Costs are very comparable to PCs (albeit low-end ones), and we are hoping to scale upwards of 1,200 clients over the next 2 years. Unlike some other desktop virtualisation products, shared storage is not a requirement, which helps keep CAPEX and OPEX down.
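The sizing above can be sketched as a simple bandwidth budget. A rough illustration (the figures are from this thread; real WSM traffic streams blocks on demand rather than whole images, so the worst case below is pessimistic):

```python
# Fair-share bandwidth per client if every client pulls from the
# server at once (a "boot storm"), given the server uplink speed.

def per_client_mbps(uplink_gbps: float, clients: int) -> float:
    """Each client's share of the server uplink, in Mbps."""
    return uplink_gbps * 1000 / clients

# Worst case for the deployment described: all 70 clients booting at once
# off a 10 Gbps server uplink.
print(round(per_client_mbps(10, 70), 1))    # 142.9

# At the planned 1,200 clients the shared uplink becomes the constraint.
print(round(per_client_mbps(10, 1200), 1))  # 8.3
```

At 70 clients each still gets more than a Fast Ethernet link's worth of the uplink even in a simultaneous boot, which is consistent with the fast boot times reported.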
16th January 2011, 02:53 PM #5
A couple of questions and observations:
If I'm reading this correctly (apologies if I'm not), why have you got a server with a 10 Gbps network card? I assume the HP 5412 has the appropriate module at the other end. What stats on the HP switch led you to this purchasing decision? Personally, in a school environment, I have never seen an implementation that needs this level of throughput. I also have an HP 5412 with a 24-port fibre module acting as the core of our network (800 PCs and 46 virtual servers) and have never seen utilisation go above 25% on any one port. Do you have any stats that show high NIC utilisation? Does this solution require such a high level of throughput? If so, I would personally question its scalability.
How many clients do you have at present? Exactly how long do they take to boot? Are they connected directly to the HP 5412, or are they on edge switches that connect to the HP 5412 via 1 Gbps links?
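For reference, per-port utilisation figures like the "25%" and "70%" quoted in this thread are typically derived from the switch's interface octet counters, sampled twice over an interval. A minimal sketch (the counter values are made up for illustration):

```python
# utilisation = (delta_octets * 8) / (interval_seconds * link_speed_bps)
# This is how SNMP-based monitoring tools compute the percentages shown
# on switch port graphs; the sample values below are hypothetical.

def utilisation_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, speed_bps: float) -> float:
    """Average link utilisation (%) between two counter samples."""
    bits = (octets_t1 - octets_t0) * 8
    return 100 * bits / (interval_s * speed_bps)

# e.g. a 1 Gbps port that moved 2.5 GB of traffic in a 5-minute sample:
print(round(utilisation_pct(0, 2_500_000_000, 300, 1_000_000_000), 1))  # 6.7
```

One caveat when comparing numbers across posts: a long sampling interval averages away short bursts, so a port can peak far above the figure a 5-minute average shows.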
17th January 2011, 12:11 AM #6
For clarity, the server is an ESXi host (Dell R815 with 4 x 12-core CPUs and 64 GB RAM) which will run several guest instances. We purchased 2 x 2-port 10 Gig fibre NICs, as this means less physical cabling and is actually cheaper (for us) than several 4-port copper Gig cards. The switch has the appropriate modules installed. We are an FE/HE college and tend to use a lot of CAD, multimedia and rendering, ultimately across around 2,000 clients, so we are scaling for future needs (with a 5-year server lifecycle), not just the here and now. As per my earlier post, the clients and server all connect directly into the 5412, and clients take around 20 seconds to boot from power on. We regularly see Gig ports on our network at 70% utilisation (albeit to our EMC filers, where all the CAD content etc. is stored), so our network can be pretty busy; hence the reasoning for 10 Gig from the off, so to speak. We are still at the POC stage at present, but once I have firm stats I will post them.