I think it's important to retain a mix of platforms throughout the IT environment: Unix, Linux and Windows servers in the back end, Linux-based thin clients, and Windows and Mac clients.
Licensing costs are only one part of TCO. I think open source applications have come far enough to replace Windows apps in certain instances: OpenOffice.org is great, GIMP is great, and for things like web development we should be teaching young people standards rather than applications. Final Cut and Fireworks are the only media apps I consider to be true industry standard, but are they going to be taught at GCSE? Many of the Apple media suites that schools have lavished grants on tend to use the underwhelming iLife rather than the flagship Final Cut products. Then there's the issue of education-specific, years-old Windows apps.
I agree with Geoff about it being a staged rollout: start off in the back office where users won't notice, and in server-based computing where the familiar apps are still presented. Then move slowly onto the desktop, in specific areas where you can get away from providing legacy Windows apps to particular users. It's important for any Linux deployment to provide hardware compatibility and ease of deployment for admins. Red Hat Desktop has failed to catch on in business, and Novell/SUSE have made a big song and dance about their client-server products, but I think Ubuntu is the only one capable of doing what Red Hat and SUSE failed to do in penetrating the desktop.
Whether or not to roll out Linux, and how far to go with it, really brings up the issue of how we use ICT in the dreaded teaching and learning...
Last edited by torledo; 17th May 2008 at 01:18 PM.
Servers first, then desktops. But... I would suggest you try a few users on Linux desktops - the new little laptops from Asus and friends might be a good trial.
My only concern with moving to Unix-based workstations is the restrictions that can impose on you. Having spent the last 15 years as a UNIX-only person, I'm constantly amazed by the wealth of apps out there for Windows to do random little things. Particularly in the education market, you just don't get those on any platform other than Windows (and a little bit of Mac here and there). Either Terminal Server (accessed via rdesktop) or Tarantella (browser-based), sorry, Sun Secure Global Desktop (grr), would do the job well. Even then, I would guess that you would still need a few machines that are "fat". I've inherited an entirely terminal-services-based setup, and it sucks because there's no room for flexibility: the users are straitjacketed. Adding just a roomful of fat clients has made a world of difference.
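For anyone taking the rdesktop route mentioned above, here's a minimal sketch of the client-side command. The server name "ts1.school.local" and the SCHOOL domain are placeholders for illustration, not anything from this thread:

```shell
# RDP session to a Windows Terminal Server from a Linux client.
# -u user, -d domain, -g screen geometry, -a colour depth,
# -r sound:local redirects audio to the client.
rdesktop -u "$USER" -d SCHOOL \
         -g 1280x1024 -a 16 \
         -r sound:local \
         ts1.school.local
```

Wrapped in a session script on a thin client, that one command is basically the whole "boot through to a Windows password box" setup described later in the thread.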
Have you considered SunRays? If you're thinking about going all Linux, then there's really no reason not to go SunRay for most of the school, and keep a few desktop PCs around for anything that requires local hardware or dedicated resources (e.g. Sibelius, or video editing). A 90/10 split should work fairly well, depending on your needs.
Andy at the Cutter Project should be able to point you in the right direction on SunRays.
I've worked on one every day since about 2000 (and supported many customers who ran large installations of them), and I won't hear a bad word said about them. Especially when combined with smartcards (you've got to love the roaming ability: no waiting to "resume your session" or whatever). The cost of ownership is stupidly low, especially if you get the BOGOF edu deal; they come with a 5 year warranty, and they use about 4 watts each. Oh, and no moving parts, and very very very little heat. When one of Sun's US campuses switched to them, the rep from the power company rang up and asked if we'd shut the office, because the power consumption graph had fallen through the floor.
We've just finished a trial of Linux workstations, originally planned for a year but cut short to two terms. I used free distributions; using commercially supported systems might have produced different results, but since that's more expensive than using Windows...
Here's my opinion, if it's of any use:
I started with Ubuntu Feisty on fat clients and Edubuntu (Feisty) LTSP on a lab of thin clients, with a Windows terminal server for those educational apps. I migrated to Gutsy (which was a terrible, unstable and buggy release), then to openSUSE 10.3 for both fat clients and thin-client LTSP.
LTSP is superior to Windows Terminal Services - no issues with sound, and clients feel much more like fat clients.
99% of all computer use on my school network involves either an office application or a web-browser.
SSH, RSYNC, rdiff-backup etc.
There is an abundance of flash based tools for education on the web.
Kids love web-based Flash games as much as, or more than, Windows games.
Sherston are offering their titles through web delivery.
The trend is a migration from desktop software to software as a service.
Standalone clients work wonderfully. Got to love apt (or even yum at a push)
Ease of updating
No threat of viruses / malware
The Linux method of file sharing is awful. NFS is a dog that uses far too much bandwidth and isn't suitable for a large number of clients. A Windows-like local caching method is in development.
OpenLDAP / NFS central authentication needs a better one click setup - took me a week to get my head round it. (I posted how to guides on setup at ubuntuforums)
SECURE OpenLDAP (TLS) needs better documentation / easier setup. It's too much of a pain so I didn't bother doing it.
Bugs in Ubuntu Gutsy's Nautilus would cause processes to swamp the NFS server and cause it to kernel panic. (This did not happen with Feisty, btw.)
Mounting a remote home folder caused a variety of issues - most notably under buggy Gutsy, where it would stream hundreds of megabytes across the network on login, leaving the client unusable for ~5-10 minutes.
Lost access to all windows applications
All in all, I spend a lot less time on it now that the network is back running Windows clients (XP Professional + terminal server) than when it was running Linux clients. I still run Linux servers (although I have to say PaperCut is well worth the money over PyKota - which I also donated to).
I run a Smoothwall School Guardian box, a Debian intranet server and a dedicated rented Ubuntu web server, and Linux excels at these tasks. All my thin clients are HP t5725s running Debian, set up to boot through to a Windows password box for Terminal Services. I would pay more for Linux in these servers than for Windows, but I think Linux needs a bit longer to sort out some of its networking issues for full deployment.
I would use Linux for web access stations anywhere without hesitation, and the best thing you can do is trial client stations yourself - the amount you'll learn along the way is phenomenal.
The Linux method of file sharing is awful. NFS is a dog that uses far too much bandwidth and isn't suitable for a large number of clients.
Sorry, but on what are you basing that? NFS is most certainly not a dog, and most certainly works very well for a large number of clients - that's what it is designed for! Having run an NFS server that served 25,000 simultaneous clients, supported many customers with huge NFS deployments (100,000 clients), fixed some bugs in NFS and the Solaris automounter, and done that job from a thin client with my home directory and applications all living on NFS, alongside 45,000 users, I can confidently say that you've got something wrong somewhere - maybe you've got a duff NFS implementation somewhere (I mean in the distro), a badly configured server, or something odd going on with your network.
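For anyone who wants to rule out a duff server or network before blaming NFS itself, a few standard checks; the hostname "nfsserver" is a placeholder:

```shell
# Is the server actually registering its RPC services (nfsd, mountd, lockd)?
rpcinfo -p nfsserver

# Which exports is the server offering, and to whom?
showmount -e nfsserver

# Server-side op counters; large badcalls/retransmit numbers point at
# a network or configuration problem rather than the protocol.
nfsstat -s

# Run on a client: shows the mount options actually in effect per mount,
# which often differ from what you thought you asked for in fstab.
nfsstat -m
```

If those look healthy and performance is still terrible, the distro's NFS implementation or the client-side desktop software is the next suspect.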
I had used both fstab nfs mounting and the autofs method.
The home directory was mounted as well as a couple of other shares.
Throughout my trialling I was using debian based distros for the majority (debian or ubuntu).
A lot of problems with NFS were distro-related (bugs in Nautilus causing the server to get swamped, for example).
Debian appeared to perform better than ubuntu as an in place swap nfs server.
I accept that the NFS implementation on these distros could have contributed to or been the cause of my problems, and while I did try a variety of clients (including Fedora and SUSE), the server was always Debian-based (I had trouble connecting Ubuntu clients to a SUSE NFS server). I also did not try a non-Linux *nix server such as BSD (or Solaris), which may be better.
I should have said " In my opinion - NFS on the linux distros I tried is a dog"
I did a lot of tweaking in an attempt to eliminate the problems - specifically increasing the number of NFS daemons running on the server, switching the mount methods (static to autofs) and adjusting the mount options.
I also made a large number of posts across the Ubuntu forums and subscribed to some of the mailing lists. Some of the posts I read mentioned NFS being fine for small networks of up to 10 clients - I had a quick search but couldn't find them to reference, though.
Additionally, I was using NFSv3 as I couldn't for the life of me get NFSv4 to work.
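For reference, the kinds of tweaks described above look something like this on a Debian/Ubuntu server. The values are illustrative, not recommendations:

```shell
# /etc/default/nfs-kernel-server - raise the number of nfsd threads
# (the default of 8 is easily swamped by a lab full of clients):
#
#   RPCNFSDCOUNT=32
#
# Client-side /etc/fstab entry, pinning v3 and setting larger transfer
# sizes; "hard,intr" makes a dead server hang the mount recoverably:
#
#   server:/home  /home  nfs  vers=3,rsize=32768,wsize=32768,hard,intr  0  0

# Apply the new thread count:
/etc/init.d/nfs-kernel-server restart
```

None of this will fix a buggy client application hammering the share, but it is the usual first round of tuning before concluding the protocol itself is at fault.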
While I appreciate that NFS can be and is used in extremely large implementations, I stand by my view that using NFS (or anything else, for that matter) to remote-mount the home folder is not as good in theory as caching the data locally and sending it back and forth during sign-on/off - less network traffic is a better thing.
I also maintain that trialling it in this way will teach you loads, and that Linux networking does have room for improvement - although perhaps more in the "work as an appliance" or "apt-get install the-network" sense.
AFAIK Karoshi was designed and built by people like us (i.e. not professional software developers) who've done a wonderful job that deserves lots of praise, and they were in an environment (direct IP to the internet - not tied to an RBOC-dictated IP range) that worked for them.
You CAN change the IP addresses (check out my posts and Jo's replies on the Karoshi forum from about 12-18 months ago) - but it takes a little more effort. All it needs is a pre-configuration tool to set up the IP ranges you need, but since the developers don't need it, it was never written and tested (AFAIK - things might have changed since I last looked at it).
But it's a very impressive piece of work IMO.
Is the forum dead??? [EDIT] Just checked out the site - no forum! And a few other changes that now explain your post! [/EDIT]
I agree that the developers of Karoshi have done an amazing job and that it's a great piece of software that could make everyone's life so much easier, but I still think it's a bit odd that you can't use the IP ranges and hostname you want right from the word go.
Even if the developers can use their own ranges, most schools can't.
But, I decided to crack on with Karoshi. I set it up as a PDC and now get to my next problem. I can't log in to the Network Manager or Technician parts of the system.
I've tried using the username and password that I use to log onto the box itself, but I still can't get in.
The problems you describe stem from the fact that you were using NFSv3.
Really? How can you determine that?
Originally Posted by kesomir
I stand by my view that using NFS (or anything else, for that matter) to remote-mount the home folder is not as good in theory as caching the data locally and sending it back and forth during sign-on/off - less network traffic is a better thing.
Unlike Windows, there really shouldn't be that much extra data flying back and forth with a Linux (GNOME/KDE or any other window manager) login. You'll have, not necessarily in this order:
- auth data (a few bytes)
- name service lookup for the automounter (a few bytes)
- mount (a few bytes)
- source .profile
- read gconf settings (assuming gnome)
- read Desktop
Shouldn't be a lot else...
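For completeness, the automounter step above is driven by a couple of tiny maps. This is a sketch; the server name "nfsserver" and the export path are illustrative:

```shell
# /etc/auto.master - hand the /home namespace to autofs, unmounting
# idle homes after 60 seconds:
#
#   /home  /etc/auto.home  --timeout=60
#
# /etc/auto.home - a wildcard map: "*" matches the username being
# looked up, and "&" substitutes that same key into the server path,
# so /home/alice mounts nfsserver:/export/home/alice on demand:
#
#   *  -rw,hard,intr  nfsserver:/export/home/&
```

That lookup really is just a few bytes on the wire; the mount only happens when the directory is first touched.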
Seriously, there's really no reason not to use NFS - it's a much lighter weight protocol than smb/CIFS, and the whole Linux desktop login is much much much lighter weight than Windows profiles.
You also have to remember that whereas Windows will copy the user's profile to the box, then work on it locally, the Linux/UNIX way is just to read/write it on the NFS share. It really doesn't care whether the user data is local or remote, it doesn't even know about it!
If you actually look at how it's implemented (I'm talking Solaris here, 'cos that's what I know, but I can't imagine Linux does it much differently), the desktop manager and login has no idea where the "profile" data is, all it has is a bunch of file descriptors that point to vnodes. It calls read() (or write) and then the OS works out what flavour of read op it needs - vnode maps to an inode or rnode, and the vnode contains a vnode_ops pointer to a struct which maps out what actual function to call for each op, e.g. read -> nfs_read. This is how the OS works out that a file in /dev can be rm'd from the f/s (unlink -> ufs_unlink or ext3_unlink), but when you open it it opens the actual device rather than the file.
Debian appeared to perform better than ubuntu as an in place swap nfs server.
Sorry, I know I'm being an argumentative pedant (I can't help it!), but what do you mean by a swap NFS server? The way I read it is that you're using NFS to serve up swap space in some way, but I must be misunderstanding you!
I have replied to your PM. For security reasons the login for the web management is separate from the actual box.
If anyone else has any complaints about the system, could you email them to me? I would be grateful, and we will do our best to make the changes.
Most of what Simon said was correct, except that we don't have a direct connection: we have an IP range given to us by the county, like most other schools. It's just that we have a firewall that uses NAT, so in the internal green and DMZ orange zones we can have whatever IPs we like. You can build a cheap firewall using IPCop and 3 network cards, which will do the same job for your own school.
That said, we are looking into making changes so that the whole system can have its own IPs. The first step is that we are streamlining the current IPs so that you can just edit a text file and it will just work. The second step, once we have LDAP fully tested, is to allow any IPs. The reason we were restricted beforehand was for security and data reasons, so that the servers knew who to talk to etc. This would have been no problem if all schools used just one server - actually, that would make our lives easier - but because we need to be able to spread the load, and because Karoshi scales up to 8 servers, we had to have a way of dealing with this.
On another note, Dover Grammar School for Boys has been running a nearly fully Linux system for 4 years. We have 12 servers, use both SIMS and CMIS, and have 350 computers. SIMS is used for finance only, so they have laptops running XP to do that. CMIS is web-based, so for our admin staff we have 10 XP computers, but all other staff can access it from any web browser inside and outside school. Of the 350 curriculum computers, about 300 are Linux and 50 Windows, but daily we get asked if 'said teacher' can have theirs 'upgraded' to the same as 'so and so' down the corridor.
You are quite welcome to visit... as is anyone else who's interested.
How do you lock down your Windows workstations? Is there something like Group Policy that you can apply?
If you open up a Group Policy ADM file, you'll see that it refers to registry keys. These keys can be set in a login script, which can be defined in the Samba configuration.
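A sketch of how that hangs together on the Samba side; the script name, share path and .reg file here are illustrative, not from any poster's actual setup:

```shell
# smb.conf fragment: point domain logins at a script served from the
# standard netlogon share.
#
#   [global]
#      logon script = logon.bat
#
#   [netlogon]
#      path = /srv/samba/netlogon
#      read only = yes
#
# logon.bat, sitting in that share, runs on the Windows client at login
# and can import registry keys silently:
#
#   regedit /s \\server\netlogon\lockdown.reg
```

The .reg file carries the same keys you'd find referenced in the ADM file, so you get Group-Policy-style lockdown without a Windows domain controller.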
Alternatively, you'd need to wait for Samba4.
It takes the information about which group the user is in and provides them with a specific type of lockdown; for example, itadmins have administrator rights, while yr2005 will be locked down but may have different icons from yr2007. The lockdowns are mostly done by registry edits and group policies which, as stated above, are pulled down by each user at login and removed at logout. These can all be deleted or added in the group's 'kix file', which has a list of registry edits etc. in it.
PS: This also means we can use roaming profiles, but we don't recommend it.
Last edited by linuxgirlie; 21st May 2008 at 12:12 PM.