Here are the summary sheets of my student evaluations, carried out by
the University's Learning
and Teaching Centre. Students typically fill them in in class,
and give scores ranging from Strongly Agree (5) to Strongly Disagree
(1) on a number of questions; they're then averaged across all
students.

There's no official interpretation of the numbers; there used to be
some guidelines ("Science classes always score worse, so don't be
disheartened by lower scores"), but they've disappeared. Still, I think
the numbers are more useful than no information. In interpreting them
for myself I consider a score of Neutral (3) as a Pass.

COMP225, 2013
evaluations.
I got a better evaluation for my feedback in this unit than for COMP348
in the same semester below, even though I gave more extensive feedback
on two assignments in that unit. In this one we
used OpenLearning as the
platform for the class, where there was a lot of interaction through
comments, responses, likes and so on, so maybe that's a kind
of feedback that's preferred to detailed assignment comments.

COMP348, 2008
evaluations.
Written comments scanned as well.
I was particularly happy with a change I made to feedback on
assignments this year, and by comparing scores across years it seems
students were happy too. It's also the first unit where a student
thought I didn't make the work hard enough -- that's never happened
before, with the usual feeling being that I err on the side of being
too challenging. It's a hard balance to strike.

COMP348, 2006
evaluations.
Only 7 people out of 76 responded to this, again because it was an
online evaluation; that's the last time I'll be using that approach.
They were a pretty happy 7, though.

COMP225, 2006
evaluations.
And only 2 out of 69 responded to this. That was because I tried out
the online evaluation set-up. My conclusion is that, despite
email and website reminders, it's not a very good way of getting
students to give feedback.