
Well, we've run the latest OMR sessions (numeracy & literacy) – there is a small problem we need to iron out if I'm going to recommend this to staff and others for a more significant pilot. The problem is one of scalability. After you upload the image (a TIFF file) via our photocopier / scanner, some of the scans need to be reviewed, and when you then open the csv file with the results, others haven't had their results included … so you need to manually add these to the csv (results) file. When you then check these records in the software (FormReturn) they are listed as not yet processed … this is an inconvenience with small numbers (the two problems occur on roughly 20% of submissions), but it simply would not work for larger cohorts.
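
A quick way to spot these gaps before doing the manual fix-up is to cross-check the exported results against the candidate list. A minimal sketch, assuming hypothetical file names (candidates.csv and results.csv) that both carry a candidate_id column:

    import csv

    # Candidate ids we expect to see in the results export
    # (file and column names are assumptions - adjust to match the real export)
    with open("candidates.csv", newline="") as f:
        expected = {row["candidate_id"] for row in csv.DictReader(f)}

    # Candidate ids that actually made it into the FormReturn results csv
    with open("results.csv", newline="") as f:
        processed = {row["candidate_id"] for row in csv.DictReader(f)}

    # Anything left over still needs to be keyed in manually
    missing = expected - processed
    print(len(missing), "sheet(s) missing from the results file:", sorted(missing))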

My plan, with one more submission left, is to tweak things and use the software more effectively to reduce the rate at which this occurs. Fingers crossed 🙂

We are on the 5th iteration in Health. Some lessons learnt to date (these are all so obvious …):

reduce the printing costs by removing the unique id for each test, so you can re-use the question booklets

get the discipline team to finalise the question booklet and write the answers before sending it to us to create the answer sheets

include a box for them to add their name, as well as their candidate number

include a clear title on the answer sheets which is generic enough not to cause questions. This is important when two papers are being tested in the same session

check the spreadsheet and investigate any blank cells in the answers – it seems some get through, and I'm not sure why this happens. The person scanning the answer sheets should take responsibility for making sure all cells are entered, unless they were genuinely left blank on the answer scripts. This lets you double check – don't assume the system will do it for you 🙂 (a rough sketch of this check follows this list)

ensure the sensitivity setting is appropriate on the answer sheet – to reduce the likelihood of scan errors

include a visual example of what we mean by "shade the whole bubble" – we still get lots with lines across them, etc.
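
On the blank-cells point above, the check can be scripted rather than eyeballed. A rough sketch, assuming the results csv has a candidate_id column and one column per question (hypothetical names q1, q2, …):

    import csv

    # Flag any row with an empty answer cell so it can be checked
    # against the paper script (column names are assumptions)
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            blanks = [col for col, val in row.items()
                      if col.startswith("q") and not (val or "").strip()]
            if blanks:
                print(row["candidate_id"], "has blank cells:", blanks)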

OK – I’m keen to start rolling out the use of OMR for objective feedback and assessment. So, given we are using the UCS Corporate Development session evaluation template, I’ve OMR’d it 🙂 This will include some free-text responses (which we’ll need to key in).

Also, given it’s a survey and we aren’t 100% sure who will attend, we’ll need to do a little csv file work.

Well, we’ve run another two sets of interview tests for candidates using the OMR (FormReturn) system. From my perspective this was much better than the first iteration … good news 🙂

I thought the answer sheets were better laid out

it included negative marking (which worked)

it included candidates entering their applicant number as opposed to writing their name. This meant I could auto-join the results with the person rather than cut and paste (see the sketch after this list)

the processes and responsibilities are much clearer (nothing to do with the software but important for scalability). The discipline team have taken responsibility for managing and printing the question booklets
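
On the auto-join point above: once the applicant number is on the answer sheet, matching results back to people is a simple lookup. A minimal sketch, assuming hypothetical files applicants.csv (applicant_number, first_name, last_name) and results.csv (applicant_number plus the scores):

    import csv

    # Lookup of applicant details keyed on applicant number
    with open("applicants.csv", newline="") as f:
        people = {row["applicant_number"]: row for row in csv.DictReader(f)}

    # Attach names to each result row - no more cut and paste
    with open("results.csv", newline="") as f, \
         open("results_with_names.csv", "w", newline="") as out:
        reader = csv.DictReader(f)
        writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["first_name", "last_name"])
        writer.writeheader()
        for row in reader:
            person = people.get(row["applicant_number"], {})
            row["first_name"] = person.get("first_name", "")
            row["last_name"] = person.get("last_name", "")
            writer.writerow(row)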

Interestingly, this time we had a significant increase in the number of answer sheets which hadn’t been processed correctly (either it couldn’t read an answer or the applicant number). So there was an increase in the time I spent working through the unprocessed sheets in the software, adding in the answers based on the paper copies. This isn’t a pain for me given the numbers (30 candidates); however, it wouldn’t scale particularly well. I assume we could reduce this time by getting candidates to use a pen to fill in the bubbles at the end of the exam, as the contrast would be better than pencil. I also think there is a setting around the sensitivity. I’ll look into this for next time.

Another need for next time is to auto-generate the results (final score) as a percentage as well as an absolute figure
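
The percentage itself is a trivial addition when the results file is built. A tiny sketch (the maximum mark is an assumption and would come from the paper itself):

    # Report the score as a percentage alongside the absolute figure
    max_marks = 40                       # assumed total marks available for the paper
    score = 31                           # candidate's absolute score from the results csv
    percentage = round(score / max_marks * 100, 1)
    print(score, "/", max_marks, "=", percentage, "%")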

The final enhancement for next time is that I’d like to send the answer sheet as a PDF to the discipline team so they can print it out. I’ll then be able to design a work flow process.

The following is a worked-up responsibilities and work flow document for deploying an MCQ exam via the OMR service

Step 1: Initial start up meeting

Owner: Discipline Team, and Elevate Team

Discuss the process, identify work packages and agree on the people involved. A timetable is agreed based on the following work flow. Agree on the results format (what they need) and who should receive the results.

Step 2: Writing the questions, mocking up the answer sheet

Owner: Discipline Team

write the question booklet (include the questions, the instructions, give the test a unique number)

mock up the answer sheet (identify the number of available answer options, the correct responses, any marks / negative marking, and the unique test number)

pass the completed question booklet and mocked up answer sheet to The Elevate Team

Note >> the discipline team is responsible for writing and printing the question booklet. The Elevate Team are given the final copy of the mocked-up answer sheet

Step 3: User Accounts

Owner: Discipline Team

based on earlier conversations in Step 1, the discipline team will need to send (as a csv file) the user accounts for the exam (first name, last name and email address) to the Elevate Team
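
For clarity, the expected layout is nothing more than three columns; something like the following (illustrative values only – the exact headings can be whatever we agree in Step 1):

    first_name,last_name,email
    Jane,Doe,jane.doe@example.com
    Sam,Smith,sam.smith@example.com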

Step 4: Writing the answer sheet, and testing the system

Owner: The Elevate Team

create and test the answer sheet

sign off … by printing off the answer sheets

Note >> the Elevate Team will need 5 working days to create and test the answer sheet

Step 5: The Exam

Owner: The Discipline Team

print all the question booklets, invigilate the exam, ensure students follow the instructions

hand the completed answer sheets to The Elevate Team (we will try to collect from the room at the end of the session)

Step 6: Process the Results

Owner: The Elevate Team

Check the submissions and scan them

process the results

create the results doc (as agreed in step 1), and upload to Google Docs to be shared with the appropriate people