A number of contributors have commented on us releasing several patches to improve the PI calculations and some have complained about having to install the August release. The following is background that may help to explain the situation.
A key function of the SIMS Examinations Organiser application is to provide to the SLT the earliest and most accurate possible approximation to the school’s performance by the standards specified for the annual Achievement and Attainment Tables. It is understood that an approximation is the best that can be hoped for: for example, the MIS cannot be aware of results achieved by candidates who were on the school roll for the January Census, but have left and taken their exams elsewhere. Nevertheless, many schools place a high value on the calculations that the software performs.
For a number of reasons, the process is neither as straightforward nor as robust as it could be.
1. Procedure: in previous years there has been a Statement of Intent around April, a chance for comment, and a Final Decisions document in July. Since the Final Decisions never differed much from the original Statement, it was reasonably safe to specify and code on the basis of the earlier document. This year the Statement of Intent appeared on 4th July, and included two new measures relating to Modern Foreign Languages, which had been foreshadowed in last year’s documents. No Final Decisions document has appeared at all.
DCSF and its agents have the luxury of using the three months between release of results and their own publication to iron out anomalies, so that enquiries in July regarding exactly how a particular measure will be calculated do not always produce a clear response. That said, the officials at the AAT unit have been as helpful as they reasonably can be. Unlike the School Census, the AAT process does not include MIS suppliers as part of the loop, but this does not stop our customers from assuming that we are somehow privy to earlier and more detailed information than they are themselves, which is the opposite of the truth.
2. Discount Codes: there appears to be little coordination between the various agencies concerned. QCA, we understand, are responsible for issuing them in the first place; the UABs publish them as part of the Option basedata; DCSF incorporate them in the Post-16 Learning Aims QAN data, originally as a sledgehammer to crack the Art and Design nut, but now, particularly in the context of diploma-related data, resulting in a lot of misleading, inaccurate and stop-gap data in the published tables; finally LEAP and Forvus actually process them for AAT purposes.
The 2+ A*-C in Science measure, now in its second year, was made more complex by the introduction during the year of new subject codes, with no information beyond what we had gathered in 2007 as to which combinations were permissible. To compound the issue further, the apparent absence of co-ordination or direction as to the use of these codes meant that every board other than AQA issued erroneous data, which was only corrected after we identified the errors and pointed them out. Not all corrections were in place by the end of the year, so we had to document the anomalies for users, to ensure that they had either updated their basedata to the corrected version or were able to make the necessary manual corrections in time.
3. Key Skills and Functional Skills remain a very confused area. There is no standard way of indicating that certification has been achieved. To overcome this we make a practice of issuing ‘proxy basedata’ on behalf of OCR, WJEC and Edexcel (including ALaN) to enable users to record completion in a calculable way. However, even once this is done, calculation is still problematic, in that Functional Skills counts as 0.5 of a GCSE and Key Skills as 0.75, yet the UABs tell us that the same tests are used for both.
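To make the weighting concrete, here is a minimal sketch of the GCSE-equivalence arithmetic described above. The weights (Functional Skills = 0.5 of a GCSE, Key Skills = 0.75) come from the text; the qualification-type codes and function names are purely illustrative, not anything from the Formats or the AAT specification.

```python
# Illustrative GCSE-equivalence weights; only the 0.5 and 0.75 figures
# are from the discussion above, the type codes are made up for the sketch.
GCSE_EQUIVALENCE = {
    "GCSE": 1.0,
    "KSKL": 0.75,  # Key Skills
    "FSKL": 0.5,   # Functional Skills
}

def gcse_equivalents(qual_types):
    """Sum the GCSE-equivalent weight of a candidate's certifications."""
    return sum(GCSE_EQUIVALENCE.get(q, 0.0) for q in qual_types)

# One GCSE plus one Key Skills plus one Functional Skills certification:
print(gcse_equivalents(["GCSE", "KSKL", "FSKL"]))  # 2.25
```

The oddity the paragraph points out falls straight out of the table: the same underlying test contributes 0.75 when recorded as Key Skills but only 0.5 when recorded as Functional Skills.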
4. Edexcel Digital Applications: In setting up a four-unit scheme, Edexcel created awards for candidates who gained 1, 2 or 4 units, but apparently did not foresee the possibility of candidates achieving 3. Hence the late emergence of ‘CiDA+’, which has no distinctive level code, and cannot therefore be distinguished from CiDA other than by reading the title. The opportunity to rectify this in the 2008 version of the Formats has not been taken, so we do not expect this situation to improve next year.
The problem is further compounded by the fact that schools apply for all ‘widths’ of certification, presumably so that they may be awarded the best certification achieved; but Edexcel respond by issuing them all, even if they are U. This can make discounting unreliable, even allowing for the masking of CiDA+.
5. ONAT: OCR have introduced EDI for OCR National qualifications for the first time, starting with registration in September 2007. Edexcel do the same with BTEC, but issue certification results for July. OCR however have insisted, over several months of discussions, on issuing results against the original September registration, which schools do not find intuitive. They finally announced their mechanism in the last week of term in July.
6. Hybrid qualifications: FSMQ and Functional Maths (AQA 93001P). These qualifications have two parallel personalities: as grade-bearing certifications in their own right, as FSMQ and FSKL, and mark-bearing units contributing respectively to GCE and GCSE certifications. We have argued with AQA that it is not conceptually possible without misrepresentation to combine these two into a single record, and there is no provision in the Formats to do so, either as basedata Options or as results. Two option records and two results for each are required.
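The dual-record requirement above can be sketched as follows. This is only an illustration of the principle that a hybrid qualification must be held as two separate option records with two separate results (one grade-bearing certification, one mark-bearing unit); the field names and record structure are hypothetical and do not reflect the actual Formats layouts.

```python
# Hypothetical sketch: a hybrid qualification such as AQA 93001P is recorded
# twice, once as a grade-bearing certification in its own right and once as
# a mark-bearing unit contributing to a GCE/GCSE. Field names are invented.
from dataclasses import dataclass

@dataclass
class OptionRecord:
    entry_code: str
    role: str         # "certification" or "unit"
    result_kind: str  # "grade" or "mark"

def hybrid_option_records(entry_code):
    """Return the two option records a hybrid qualification requires."""
    return [
        OptionRecord(entry_code, "certification", "grade"),
        OptionRecord(entry_code, "unit", "mark"),
    ]

for rec in hybrid_option_records("93001P"):
    print(rec)
```

Collapsing the two roles into one record, as argued in the text, would force a single result field to carry both a grade and a mark, which is why two records each are needed.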
7. We have raised with DCSF in the past the timeliness of the information needed to calculate the tables, and will be raising it with them again, as this situation is difficult for all parties.
I always knew exam results were a hotch-potch, but that's just ridiculous!