MIS Systems Thread: SIMS Server Performance Benchmarks (Technical forum)
12th November 2013, 12:29 PM #46
I swear the app and the client specs play a huge role. I frequently pull data out of SQL servers direct and never have performance issues.
The problem is between the server app call receiver and the client aka: PIBSACRAC
12th November 2013, 12:33 PM #47
The problem is the way the data is stored.
Pulling pure data from the server is pretty quick - that's what SQL does.
However, when you then try to relate that data to other data in a database which isn't fully normalised, you end up with a major slowdown, as the server has to do a great deal of processing it shouldn't have to. The same issue arises if tables aren't properly indexed.
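To make the indexing point concrete, here's a quick sketch in Python with SQLite (illustrative table and column names, not the real SIMS schema) showing how the query plan changes from a full table scan to an index seek:

```python
# Sketch: how a missing index turns a lookup into a full table scan.
# SQLite stands in for SQL Server; the schema is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pupil (person_id INTEGER, surname TEXT)")
conn.executemany("INSERT INTO pupil VALUES (?, ?)",
                 [(i, f"Surname{i}") for i in range(1000)])

# Without an index, the engine must scan every row to find a person_id.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pupil WHERE person_id = 500").fetchall()

conn.execute("CREATE INDEX idx_pupil_person ON pupil (person_id)")

# With the index it can seek straight to the matching row.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pupil WHERE person_id = 500").fetchall()

print(plan_before[0][-1])  # e.g. "SCAN pupil"
print(plan_after[0][-1])   # e.g. "SEARCH pupil USING INDEX idx_pupil_person ..."
```

The same idea applies to SQL Server: a predicate on an unindexed column forces a scan on every single call, which is where the "heck of a lot of processing" comes from.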
This is requested by the application but processed by the server. Some aspects are processed client-side (the Word/Excel-type reports), so client speed applies there.
12th November 2013, 12:41 PM #48
True, but I've borrowed or created my own multi-joined queries recently, even across multiple virtual servers from totally different services, and the performance is still pretty awesome. I know the same data in the app would take an age - and worse still if, for example, it's my photo DB and I call on the photo field even when I don't want to display it.
If we can track down the query / stored proc that's being run as part of the benchmark, I'd be interested to see it run in SQL Management Studio from the client machine; I bet it's much quicker. The client relies on a lot of RAM to do anything and then display the result.
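If someone does find the query, a rough way to time the raw round trip is sketched below. This is only illustrative: SQLite stands in for the real SQL Server database, and a dummy SELECT stands in for whatever stored proc the benchmark actually runs.

```python
# Rough timing harness for comparing raw query time against what the client
# app reports. SQLite and the schema here are stand-ins, not the real system.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pupil (person_id INTEGER PRIMARY KEY, surname TEXT)")
conn.executemany("INSERT INTO pupil VALUES (?, ?)",
                 [(i, f"Surname{i}") for i in range(5000)])

start = time.perf_counter()
rows = conn.execute(
    "SELECT * FROM pupil WHERE surname LIKE 'Surname1%'").fetchall()
elapsed = time.perf_counter() - start

print(f"{len(rows)} rows in {elapsed * 1000:.2f} ms")
```

If the raw query comes back in milliseconds but the app takes seconds to show the same data, the gap is client-side rendering and memory churn, not the server.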
I don't agree with the earlier statement about leaving SIMS open all day; I recommend closing it every hour, and more frequently if you're doing heavy editing. If you've imported photos, you've had it: it won't release the memory, or the last photo used, until you completely shut it down.
That's not to say all the blame lies client-side; I'm just saying I think it's a big factor. I know different clients around our school suffer differing performance, and that's not a server issue.
12th November 2013, 04:24 PM #49
Closing it at the end of the day is wise, closing it every hour means you'll be re-building the cache on the client - so basically you'll be undoing all the hard work Capita have done.
Originally Posted by vikpaw
The issue you've got is, like @localzuk said, that it should be fully normalised, so when you write someone's new telephone number you only do one write. From a reporting point of view, though, you want it to be dimensional, because you don't want to do loads of joins (they're performance-heavy): you want one table with all the keys in it (the fact table) and a few tables off that with the detail (the dimensions), and the keys want to be simple int keys. The problem they've got is that it's a mismatch; they've tried to please too many people.
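For what it's worth, the fact/dimension layout described above looks something like this (invented table names, SQLite as a stand-in):

```python
# Sketch of a dimensional (star) layout: one fact table holding simple int
# keys, with small dimension tables carrying the detail. Names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_pupil   (pupil_key   INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_subject (subject_key INTEGER PRIMARY KEY, subject TEXT);
CREATE TABLE fact_result (pupil_key INTEGER, subject_key INTEGER, grade TEXT);
""")
conn.execute("INSERT INTO dim_pupil VALUES (1, 'A Pupil')")
conn.execute("INSERT INTO dim_subject VALUES (1, 'Maths')")
conn.execute("INSERT INTO fact_result VALUES (1, 1, 'B')")

# A report is then one cheap join per dimension, all on int keys.
row = conn.execute("""
    SELECT p.name, s.subject, f.grade
    FROM fact_result f
    JOIN dim_pupil   p ON p.pupil_key   = f.pupil_key
    JOIN dim_subject s ON s.subject_key = f.subject_key
""").fetchone()
print(row)  # ('A Pupil', 'Maths', 'B')
```

The trade-off is exactly the mismatch mentioned: this shape is fast to report from but means multiple writes per change, while the fully normalised shape is the reverse.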
13th November 2013, 10:15 AM #50
Caching of what? The data is considered volatile and re-read with every move. You can show this with Profiler and Wireshark. For example, the pupil browser will run the sta_pix_EditStudentInformation_getstudent ...etc.... stored procedure every single time, even if you repeatedly flip-flop between two pupils. A quick experiment shows no caching is used. Wireshark shows the text (in clear text) coming from the server every time over the network. Perhaps I've misunderstood what you mean.
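For anyone curious what client-side caching would look like if it existed, here's a hypothetical sketch; fetch_student is an invented stand-in for the getstudent stored-proc round trip, not anything in the real client:

```python
# Hypothetical memoised fetch: flip-flopping between two pupils would only
# hit the server once per pupil, instead of re-running the stored proc
# every single time as Profiler shows the real client doing.
server_calls = 0

def fetch_student(person_id):
    """Stand-in for the server round trip (the getstudent stored proc)."""
    global server_calls
    server_calls += 1
    return {"person_id": person_id}

_cache = {}

def get_student_cached(person_id):
    # Only go to the server on a cache miss; serve repeats from memory.
    if person_id not in _cache:
        _cache[person_id] = fetch_student(person_id)
    return _cache[person_id]

# Flip-flop between two pupils five times...
for pid in [1, 2, 1, 2, 1]:
    get_student_cached(pid)

print(server_calls)  # 2 round trips instead of 5
```

Of course, the catch with caching volatile data is staleness: someone else's edit wouldn't show until the cache was invalidated, which may be exactly why the client re-reads every time.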
Originally Posted by matt40k
I have to admit they've normalised the data quite a bit since I last cared to look at it. In your example, updating a phone record writes to the sims_telephone table only. This is a simple table linked by person_id - well normalised in anyone's book. A while ago, if memory serves (and I might be quite wrong), there were several telephone fields within the person record, but this has changed. I dare say someone could find horrors, but I think they are improving the underlying structure.
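As a sketch of why that normalisation matters for writes (illustrative column names; the real sims_telephone schema may well differ):

```python
# One-write update under the normalised layout described above: the phone
# number lives in its own table keyed by person_id, so updating it never
# touches the person row. Column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE sims_telephone (person_id INTEGER, number TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'A Parent')")
conn.execute("INSERT INTO sims_telephone VALUES (1, '01234 000000')")

# Updating a phone number is a single write to sims_telephone.
cur = conn.execute(
    "UPDATE sims_telephone SET number = ? WHERE person_id = ?",
    ("01234 111111", 1))
print(cur.rowcount)  # 1 row written
```

Compare that with the old several-phone-fields-on-the-person-record shape, where every phone change meant rewriting the wide person row.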
Last edited by jinnantonnixx; 13th November 2013 at 10:31 AM.
13th November 2013, 12:14 PM #51
One would hope that they don't re-query the school name, for starters. Not sure I can be bothered to decompile it, even though there is a file called "Cache.dll", and expand on my pretty poor list of things that Capita cache on each login (currently just "the school name").
13th November 2013, 12:24 PM #52
I'm not convinced by the caching argument, mostly because general usage eats up enough RAM over time to obliterate any positive impact caching might be making.
In practice I've found an hourly or bi-hourly shutdown of the client helps loads, and is worth the 11-second (or longer) restart hit, because it appears to make things run faster.
I just tell users to close it down whenever they take an eye-rest break, which they should do hourly anyway. I'm sure most leave it on all day, but they know that when it starts to slow down, what they need to do is turn it off and on again! I can't be sure why this works, but it does. That, or the presence of an IT technician in the room makes a huge difference to performance for some unknown reason.
I had it on for half a day with no apparent impact, just the homepage on screen, and it had used 100 MB of RAM. After flicking through a few staff records and registers, that had tripled and wasn't returned. Maybe it cached the photos, but it was working quickly enough anyway. Getting the student list (the registration-groups query parameters box) still took 15 seconds; don't know why. Running it was quick, though. Loading it a second time has now reduced the load time slightly.
Running the benchmark addresses report is taking 15 secs still though, to pop into Word.
Perhaps someone from Capita could post their benchmarks so we can see if they're of the same order as ours; if so, then it must be running as expected.
13th November 2013, 12:50 PM #53
The thing is, it's .NET, so it does its own memory management and the code is compiled JIT (just-in-time). Unlike C programs, which load the entire program at once, .NET and Java load modules as they're needed, so the initial footprint is pretty small, but it grows: even though you've opened and closed windows, it's been loading more and more modules as it goes, and it's not likely to unload them once they're loaded. So once you've opened SIMS, gone into your register, then into System Manager, then into Reports and so on, closing each one will return a bit of memory, but not as much as closing SIMS and re-opening it for each area. I'm not saying closing SIMS for each area is a good idea - it's a bad idea. You should notice that once you've been into edit marks, it's faster to return to, because it's already loaded the .NET libraries and built the cache. Each time you close SIMS, you have to do the authentication request, pull the school diary, check your messages, load all the homepage guff and rebuild the cache, and then load whatever module you're using.
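Python's module cache behaves the same way, which makes a handy analogy for the lazy loading described above:

```python
# Analogy for lazy module loading: a module is loaded on first use, kept in
# an in-process cache (sys.modules), and is NOT unloaded just because you
# stop using it - only ending the process frees it, much like the .NET
# assemblies described in the post above.
import sys

import json                      # first use: loaded on demand
first_load = sys.modules["json"]

del json                         # "closing the window": we drop our reference
still_resident = "json" in sys.modules  # ...but the module stays loaded

import json                      # "re-opening": a cache hit, not a reload
same_object = json is first_load

print(still_resident, same_object)  # True True
```

Which is exactly why re-entering a module is quick while a full restart pays the whole load cost again.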
Closing SIMS at the end of the day makes sense, closing SIMS when you go to lunch makes sense, closing SIMS when you make a big change makes sense. Closing SIMS every hour or less is like painting your computer red because it'll go faster.
Thanks to matt40k from:
vikpaw (13th November 2013)
13th November 2013, 01:58 PM #54
I only advise it where it works, and it's usually admin staff who are doing big jobs, or spending all day entering new student applications. So it is the right thing to do.
Most staff change class rooms every 90 mins, so it's automatic.
I see what you mean. Since that post, the RAM usage has halved, but it is still 50% more than it was while idling all morning. I guess it's happy with sporadic use.
The bulk of the server-side performance improvement came with the last few updates, so they have definitely changed something, and there is probably a lot more tweaking that could be done - not of the server hardware, but of the server-side software. A lot of issues were attributed to the homepage widgets, which can still take their toll.
13th November 2013, 02:49 PM #55
I should really dig out the reply email I got when I asked what the performance impact of the new homepage widgets was, back when I tested them in beta all those years ago.
13th November 2013, 02:53 PM #56
Indeed. As @jinnantonnixx mentions, it appears they've been doing more database work lately, which would account for quite a bit. Also, .NET is becoming more mature, and performance improvements have been made by Microsoft, so it's quite possible that some of the speed increases are down to Microsoft too.
Originally Posted by vikpaw
13th November 2013, 03:13 PM #57
As vendors have started to offer hosted solutions, they are experiencing the pain of their design choices first-hand, which is forcing them to make improvements that they would otherwise let slide. Building reports seems to be slow in most MISs. A good one lets you get on with other work while it builds the report, then emails it to you when done.
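The "get on while it builds" pattern is simple enough to sketch; here a worker thread builds the report and fires a callback when done (appending to a list stands in for sending the email):

```python
# Sketch of asynchronous report building: the slow build runs on a worker
# thread so the user isn't blocked, and a completion callback (standing in
# for the "email it to you" step) fires when it finishes.
import threading
import time

def build_report(on_done):
    def work():
        time.sleep(0.1)          # stand-in for the slow report build
        on_done("report.pdf")    # stand-in for emailing the finished report
    t = threading.Thread(target=work)
    t.start()
    return t

results = []
t = build_report(results.append)
# ...the user carries on working here instead of watching a spinner...
t.join()
print(results)  # ['report.pdf']
```

In a real MIS the worker would be a server-side job queue rather than a client thread, but the user-facing effect is the same: no modal wait while the report grinds.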
Also, .NET was very hard to debug and tune for performance until quite recently (or so I infer from "Defrag Tools"), so any performance issues may just be down to the banks having got all the best developers, while those of us who think £15k is a lot of money for a 1,500-seat organisation's main LoB application get the leftovers.
That said, we have a locally hosted, web-based MIS with seven years of data in it, and it is lightning fast (though reports take a while to build).
14th November 2013, 10:50 AM #58
Actually, I meant the (negative) performance impact - but yes, along with that, they have also made improvements, as you say, to mitigate the effect.
Originally Posted by localzuk
I've always thought companies should use their own products. E.g. if you're selling a VLE, use the built-in chat room to do a sales meeting, not Skype or WebEx.
If you're selling a mailing solution, send via that, not Exchange. Obviously it doesn't work for all products, but use them where you can, because it shows you the weaknesses if they exist, and it shows off the solution to the customer, which is great.
10th January 2014, 01:03 PM #59
Much better CPU performance on Autumn 2 release.
Well done Capita.
The program is still slow to load and SOLUS is very unresponsive, but it's an improvement on the last release.
12th January 2014, 10:09 AM #60
We had a few problems recently with our power supply to the server room.
This had a couple of very unusual, or at least very unexpected, side effects.
After a power loss and some issues with the backup generator, we found a few places had no or reduced power. We had to take a lot of systems off the UPS, which would no longer accept an under-voltage supply (due to a separate issue). The result was that a lot of the systems started underperforming.
I'd never experienced issues like it, but the ping response on a lot of servers went haywire and became very unpredictable, and a lot of network traffic seemed to slow.
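For anyone wanting to put a number on "haywire", jitter (the standard deviation of round-trip times) is a quick measure. The samples below are invented for illustration; in practice you'd parse real RTTs out of ping output for the affected servers.

```python
# Quantifying unpredictable ping: jitter as the standard deviation of
# round-trip times. Sample values are invented, purely for illustration.
import statistics

healthy_rtts_ms = [0.4, 0.5, 0.4, 0.6, 0.5]
haywire_rtts_ms = [0.4, 12.0, 0.5, 48.0, 3.1]

healthy_jitter = statistics.stdev(healthy_rtts_ms)
haywire_jitter = statistics.stdev(haywire_rtts_ms)

print(f"healthy jitter: {healthy_jitter:.2f} ms")
print(f"haywire jitter: {haywire_jitter:.2f} ms")
```

A mean RTT can look fine while the jitter is awful, and it's the jitter that makes chatty client-server applications like SIMS feel sluggish.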
Specifically, some SIMS reports would just hang without even crashing out, which is when I got involved. I could run the reports with a query that I knew would return only a few results and they would be fine, so it was timing out sooner as well as responding more slowly: load times were massively sluggish, for the first open of the app as well as for individual modules.
Another thing we noticed was that on our blade chassis the VMs on a couple of the blades performed better than the others, and moving the SIMS VM to a "better" blade improved things, but not totally. We later found out that an automatic safety feature in the chassis had detected low power and throttled some of the blades. Shame it wasn't configured to tell us - and we had no control over it!
Anyhow, all is back to near normal now, though we still have a slight under-voltage and it does still have a slight effect. The supply isn't smooth here, and the UPS definitely makes a difference; we got a new one in the end. Generally, though, the items that weren't on the UPS, and the system as a whole, were heavily affected by the power.
Power quality was a big factor, obvious when you think about it, but not something we'd ever have considered had we not had a failure, and with a working UPS, we might not even have noticed for a long time.
So when doing these tests and benchmarks, it's worth thinking outside the box. I've never really experienced the slowness others talk about with SIMS, and I wouldn't have thought anything of it until it suddenly affected me. I would probably have quite happily blamed Capita for shoddy programming, which in some cases might be true, but you don't often think the network is to blame when things are working OK. I can't compete with the likes of @bossman's network and speeds, but we're pretty good now, even with a poor supply.
Hope that helps the odd person or two.
Thanks to vikpaw from:
bossman (12th January 2014)