In previous work we showed how student-produced entity-relationship
diagrams (ERDs) could be automatically marked with good accuracy when
compared with human markers. In this paper we report how effective the
same techniques are when applied to syntactically similar UML sequence
diagrams, and discuss some issues that arise with sequence diagrams that did not occur with ERDs.
We have found that, on a corpus of 100 student-drawn sequence diagrams,
the automatic marking technique is more reliable than human markers. In
addition, an analysis of this corpus revealed significant syntax errors in
student-drawn sequence diagrams. We used the information obtained from
the analysis to build a tool that not only detects syntax errors but also
provides feedback in diagrammatic form. The tool has been extended to
incorporate the automatic marker, providing a revision tool for learning how
to model with sequence diagrams.