flyinghaggis Posted April 4, 2006 Has anyone had a chance to play with this yet? I've read about the concept and it sounds pretty cool: being able to run an OS locally on thin clients over the network, without the traditional multimedia/performance problems of thin clients. It does make me wonder how quick it would be, as it's loading the whole OS over the network! Hopefully it's not another RM NetLM! I could see it being pretty slow streaming apps over the network, and the cost of the 'thin' client Wyse hardware for WSM isn't far off the cost of a full PC!
CyberNerd Posted April 4, 2006 The concept of semi-thin clients has been around for a while. The Wyse terminals are pretty expensive for what they do. We build EPIA-based computers for far less than the cost of Wyse, with much better specs, and have Thinstation booting from the network. It boots the whole OS in a few seconds (IIRC it's about a 9 MB Linux image). Thinstation has a list of contributed packages such as Firefox and Flash. I'd like to see VLC in the list for media streaming. Not sure how difficult it would be to add, but it would only add a few MB to the image. There's also a commercial OS that does semi-thin for media streaming (at £35 per licence - I think we pay more for XP!): http://www.precedence.co.uk/products/thinit/
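For a rough sense of why an image that small boots so quickly, here's a back-of-envelope sketch (not a measurement - the 9 MB figure is as quoted above, while the link speeds and the protocol-overhead factor are my own assumptions):

```python
# Rough transfer-time estimate for a small network-boot image.
# Illustrative only: the 9 MB figure comes from the post above; the
# link speeds and the efficiency factor are assumptions.

def transfer_seconds(image_mb: float, link_mbps: float, efficiency: float = 0.6) -> float:
    """Seconds to move image_mb megabytes over a link_mbps link,
    assuming TFTP/PXE and framing overhead eat some of the line rate."""
    return (image_mb * 8) / (link_mbps * efficiency)

for speed in (100, 1000):  # Fast Ethernet edge port vs gigabit
    print(f"9 MB image over {speed} Mbps: ~{transfer_seconds(9, speed):.1f} s")
```

Even over plain Fast Ethernet the raw copy is only a second or so on those assumptions; most of the "few seconds" is the kernel and services starting, not the network.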
Ravening_Wolf Posted October 16, 2006 The other thing to note about the Wyse Streaming Manager is the spec of the required terminal. I thought that this software might be the answer to all my users' complaints (yeah right!) regarding multimedia performance, but then I saw that the clients must have a 1GHz processor. On the plus side, the advertising blurb says that any device matching the client specification (think of those musty old PCs you threw away in the summer holidays) can be used as a terminal...
joegalliano Posted January 15, 2011 We have just deployed 70 Wyse R900Ls on WSM; the solution is very scalable and fault tolerant. We have no performance issues whatsoever: our clients boot the OS quickly and they are able to run CAD and multimedia as well as traditional fat clients. We found that network latency and disk IO are the most important performance factors, so to ensure excellent client performance all clients are connected at 1 Gbps into the same core switch (HP ProCurve 5412) as the server, which is connected at 10 Gbps. For high-speed throughput and low IO latency we stored the OS images on a 256 GB PCIe SSD card. Costs are very comparable to PCs (albeit low-end ones) and we are hoping to scale to upwards of 1200 clients over the next 2 years. Unlike some other desktop virtualisation products, shared storage is not a requirement, which helps keep CAPEX and OPEX costs down.
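To put some rough numbers on the disk IO and network side of a deployment like this, here's a sketch. The 70-client count and the 10 Gbps server link are from the post above; the per-client streaming rate and the SSD read figure are assumptions I've plugged in, not measurements from this deployment.

```python
# Back-of-envelope server-side load for streaming OS images to many
# clients at once. The per-client and SSD numbers below are assumed
# values for illustration, not figures from the deployment described.

ASSUMED_PER_CLIENT_MBPS = 50        # sustained stream per busy client (assumption)
ASSUMED_SSD_READ_MBPS = 700 * 8     # ~700 MB/s sequential read from a PCIe SSD (assumption)
SERVER_LINK_MBPS = 10_000           # 10 Gbps server connection (from the post)

def server_load(clients: int) -> None:
    aggregate_mbps = clients * ASSUMED_PER_CLIENT_MBPS
    print(f"{clients} busy clients -> ~{aggregate_mbps} Mbps aggregate")
    print(f"  server link used: {aggregate_mbps / SERVER_LINK_MBPS:.0%}")
    print(f"  SSD read capacity used: {aggregate_mbps / ASSUMED_SSD_READ_MBPS:.0%}")

server_load(70)
```

On those assumed figures a single PCIe SSD and a 10 Gig link are comfortable for 70 clients; it's the scale-up towards four figures of clients where the headroom starts to matter.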
Dave_O Posted January 16, 2011 joegalliano - a couple of questions and observations:
- If I'm reading this correctly (apologies if I'm not), why have you got a server with a 10 Gbps network card? I assume the HP 5412 has got the appropriate module at the other end. What stats on the HP switch led you to this purchasing decision? Personally, in a school environment, I have never seen an implementation that needs this level of throughput. I also have an HP 5412 with a 24-port fibre module that acts as the core of our network (800 PCs and 46 virtual servers) and have never seen utilisation go above 25% on any one port.
- Do you have any stats that show high NIC utilisation? Does this solution require such a high level of throughput? If so, then I personally would question its scalability.
- How many clients do you have at present? Exactly how long do they take to boot? Are they connected directly to the HP 5412, or are they on edge switches which connect to the HP 5412 via 1 Gbps links?
joegalliano Posted January 17, 2011 Dave_O - for clarity, the server is an ESXi host (Dell R815 with 4 x 12-core CPUs and 64 GB RAM) which will have several guest instances. We purchased 2 x 2-port 10 Gig fibre NICs as this results in less physical cabling and is actually cheaper (for us) than several 4-port copper Gig cards. The switch has the appropriate modules installed. We are an FE/HE college and tend to use a lot of CAD, multimedia and rendering, ultimately by around 2000 clients, so we are scaling for future needs (with a 5-year server lifecycle), not just the here and now. As per my earlier post, the clients and server all connect directly into the 5412, and clients take around 20 seconds to boot from power on. We regularly see Gig ports on our network with utilisation at 70% (albeit to our EMC filers, where all the CAD content etc. is stored), so our network can be pretty busy, hence the reasoning for 10 Gig from the off, so to speak. We are still at POC stage at present, but once I have firm stats I will post. Joe
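As a rough illustration of the "scaling for future needs" point (purely a sketch - the per-client burst rate and the number of simultaneous boots are assumptions, not figures from this POC):

```python
# Rough check on the server uplink during a boot storm, e.g. a couple
# of rooms powering on together at the start of a lesson. The burst
# rate per client and the number booting at once are assumed values.

def uplink_utilisation(clients_booting: int, per_client_mbps: float,
                       uplink_gbps: float) -> float:
    """Fraction of the server uplink consumed while clients_booting
    terminals each stream per_client_mbps at the same moment."""
    return (clients_booting * per_client_mbps) / (uplink_gbps * 1000)

for uplink in (1, 10):
    load = uplink_utilisation(clients_booting=40, per_client_mbps=200,
                              uplink_gbps=uplink)
    print(f"{uplink} Gbps server link: {load:.0%} used during the burst")
```

On those assumed numbers a single gigabit server connection would be hopelessly oversubscribed during a boot storm, while 10 Gig still has headroom, which is roughly the argument for going 10 Gig from the start rather than retrofitting it later.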