Monday, July 23, 2012

CAA 2012

The week before last I attended the 15th International Computer Assisted Assessment conference in Southampton. This was the fifth time I've attended CAA, and for the second year our QTI projects were running a preconference workshop the day before. We made use of my LTIQuizzes QTI system during the preconference workshop, and were able to develop questions in Uniqurate and demonstrate them being delivered in Moodle using LTIQuizzes. As Paul has already mentioned in his blog, this exercise was quite useful for finding minor problems with the software, and LTIQuizzes will benefit from the fixes.

Sarah and Paul have both beaten me to blogging about the conference, so I'll keep this short and try to avoid repeating what they have said.

I was involved with three papers and a poster, though my co-authors did the majority of the presenting. The first of these to be presented, “Peer Assessment Assisted by Technology”, co-written with Sarah Honeychurch, Craig Brown & John Hamer, was unusual for me in that it didn't mention QTI. Sarah did the majority of the presentation with me helping out. At the end of the conference we found to our huge surprise that we'd won the Joanna Bull Prize for Best Paper!

Our poster, “Integrating Standards-Based Assessment into VLEs”, co-created with David McKain & Sue Milne, was one of just five posters there. It covered recent work on QTIWorks and my LTI connector. This picture shows Sue describing some of the finer points to Sarah...

On day two one strand was almost completely taken up by QTI-related papers from collaborators on the JISC-funded QTI-PET, QTI-DI and Uniqurate projects. The first was Graham Smith's paper describing our QTI support website, originally set up under QTI-IPS and now being maintained under QTI-PET. Paul then described the latest developments in QTI authoring with Uniqurate. Sue presented our paper "So Who Uses QTI?", which looks at QTI projects in other countries as well as commercial developments, and the final QTI paper was Dick Bacon's presentation demonstrating several useful question types that can be shared by our QTI systems.

Other interesting presentations included Melody Charman and Chris Wheadon's paper on "(Mis)adventures in E-Assessment", which covered a variety of work ranging from small pilots to major studies. One interesting idea they mentioned was using large numbers of comparisons of pairs of artefacts, such as e-portfolios, to rank them rather than marking them individually. This is based on the work of Donald Laming covered in his book 'Human Judgment: The Eye of the Beholder'. It struck me that this might allow a computer-mediated grading system for open access courses, with students doing comparisons of their peers' work.

Sally Jordan's presentation 'Short-answer e-assessment questions: five years on' looked at automatic marking of short answer questions used in the online assessment component of the OU's S104 science course. Although the initial work was done using complex commercial software, she has obtained equally good results with simpler marking algorithms, which are now available in Moodle.