The automatic feedback system as a teaching tool provides a number of
advantages:

students can get their code checked at any time (and outside
timetabled laboratory sessions)

checks are done by machine and are thus objective and fair (i.e. exactly
the same testing strictness is applied to all students)

students can re-submit their code for testing as often as they like,
and thus iteratively improve their code. The testing system will stop
testing once it has come across a single check that has failed, so it
will only report one failure at a time.

Demonstrators and lecturers in laboratory sessions can focus on
explaining concepts and advising on style. They do not have to
assess code for correctness and thus have more time per student to
support the learning process.

However, two comments about the system are important:

the automatic testing is carried out by a machine, and as such the
feedback it provides is expressed in a particular way that we need to learn to interpret.

This page and the examples below provide training in how to
read the protocol from failed checks, and how to understand the
error that the autotesting system complains about. (This
will not explain what is wrong in your code; it will just say that
the way the code reacts to particular input values appears to be
wrong.)

We encourage students to ask demonstrators
and lecturers in the laboratory sessions and help sessions to explain the error messages you get from the feedback
system, in particular at the beginning of the course.

The messages from the autotesting system can tell you exactly what the
symptom of the problem/bug/missing feature in your code is, and you should understand and use that information to
use your time effectively when trying to debug your code. In principle, you
should try to repeat the test that the testing system carried out, and then
debug your code until that test produces the right output.

We have tried to help you by adding comments in the testing code that outline
what is being tested. These may be helpful to read as well. You may need to
scroll up a little in the reported error from the testing system to understand
which test is being carried out.

While the actual testing is carried out by a machine, the tests
have been written by a human. It is possible that one or more are
incorrect.

If you do not understand why your program fails a particular test,
then there is a chance that the test is wrong. Please get in touch
with demonstrators/lecturer to address this, and we will investigate.

We have added line numbers to the output above to make the explanation easier:

line 1 provides a summary of whether all questions 'passed' (i.e. no mistake) or 'failed' (i.e. some mistake).

lines 4 to 7 show a list of all exercises, and for each of those whether it has passed or failed. In this case, there was only one exercise (to write the function add), and thus there is only line 7, reporting that this function has failed a test.

from line 9 down, a detailed report is provided for every question
that failed a check. In this case, there is only the testing of the add function and the report for this starts in line 12.

the actual code that carries out the testing is written in Python
and shown starting in line 14. Because it is TESTing the function
ADD, the test function is called test_add.

the testing system will go through a number of assert statements: each of those checks a certain property or behaviour that the function add should fulfil (the condition given to the assert statement must evaluate to True for the check to pass).
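As a reminder of how an assert statement behaves in general, here is a minimal sketch (independent of the testing system):

```python
# an assert statement does nothing if its condition evaluates to True ...
assert 1 + 1 == 2

# ... and raises an AssertionError if the condition evaluates to False
try:
    assert 1 + 1 == 3
except AssertionError:
    print("AssertionError raised")
```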

the first such assert statement is in line 15:
assert s.add(0, 0) == 0

the file you submitted is known in the test system as s -- you can think of this as s for Submission (or s for student). The function s.add() is thus the function you have written and submitted.

line 15: the assert statement assert s.add(0, 0) == 0 will call s.add with arguments (0,0) and expects that the result of this is the same as 0 (== 0).

Our implementation of add is thus called with a=0 and b=0 and will
return 0 (because the average of 0 and 0 is zero). This 0 is
compared with the other zero (on the right hand side of the
comparison operator ==) and the check is passed.

The first check done here (i.e. checking that 0 + 0 = 0) does not find
the bug in the function (which computes (a + b) * 0.5 instead of the
required sum a + b) and is passed.
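The buggy submission discussed here might look like this (a sketch consistent with the description above):

```python
def add(a, b):
    # BUG: computes the mean of a and b instead of the sum
    return (a + b) * 0.5
```

With this definition, add(0, 0) returns 0.0 (which compares equal to 0, so the first check passes), but add(1, 2) returns 1.5 rather than the expected 3.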

the next check (line 16) will fail though:

line 16: the assert statement assert s.add(1, 2) == 3 will call s.add with arguments (1, 2) and expects that the result of this is the same as 3 (== 3).

In more detail, the assert statement

calls the function s.add(1, 2) and replaces s.add(1, 2) with the value that the function returns.

the return value of s.add(1, 2) is 1.5 (as we can see in line 17
on the left of the comparison operator ==)

the assert statement will compare this value (i.e. 1.5) with 3 (line 17) and notice that they are not the same, thus raising an AssertionError (the AssertionError is mentioned in line 21).

the submission system reports that a test has failed. In particular, we need to look for the greater-than sign (>) in the leftmost column, which points to exactly the assert statement that has failed. In this case, this is line 16.

Lines 17, 18 and 19 provide a bit more detail on intermediate steps
in evaluating the assert statement that has failed.

So what has happened? The code we submitted returns 1.5 when given a=1 and b=2 because it (wrongly) computes the mean of a and b, rather than the sum. The testing system knows that the sum of 1 and 2 should be 3, and this is exactly the test carried out in line 16.

A single failed assert statement will result in an overall fail
for that exercise. Once a test has failed, no further tests will be
carried out for that function.

If this exercise is not assessed (i.e. students can re-submit their
work), you should study the error above, try to fix the code and
re-submit.

In the case here, the question you should ask (to find out what is
wrong) is "why does my code return 1.5 whereas it should return 3 for input arguments a=1 and b=2?". You can actually copy the test to the Python prompt and run:

>>> add(1, 2) == 3

For the code with the bug shown above, this should display:

>>> add(1, 2) == 3
False

whereas once the bug is fixed, you should see:

>>> add(1, 2) == 3
True

Often, it may be more instructive not to carry out the comparison but just to display the return value, i.e.

>>> add(1, 2)

where the incorrect response would be:

>>> add(1, 2)
1.5

and the correct answer should read:

>>> add(1, 2)
3

The feedback from the testing system should help you debug your
code.
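For reference, a corrected version of the function (a minimal sketch of what the fixed submission could look like) is:

```python
def add(a, b):
    """Return the sum of a and b."""
    return a + b
```

With this definition, add(1, 2) returns 3, and the test shown above passes.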

Here, the very first assert statement (line 7) fails: it was trying to
call the function s.add() with arguments (0, 0). However, that
function object does not exist: the only function object that does
exist in s (which is the submitted file) is called
add_those_numbers.

The system thus reports (line 8) 'module' object has no attribute
'add'.

The 'module' refers to the module s (i.e. your code), and the remainder of the
message simply says: there is no add in module s.

As always, the greater than sign (>) shows the line in which an
assert statement has failed, and subsequent lines give more
information about the type of failure. In this case, the test fails
because the function that should be tested (i.e. add) appears to
be missing.

This kind of error will always be raised if a particular question has
not been attempted.
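The situation can be reproduced with a stand-in module object (the names s and add_those_numbers are taken from the example above; the use of types.ModuleType here is just for illustration):

```python
import types

# stand-in for the submitted file: it defines add_those_numbers, not add
s = types.ModuleType("s")

def add_those_numbers(a, b):
    return a + b

s.add_those_numbers = add_those_numbers

try:
    s.add(0, 0)
except AttributeError as err:
    print(err)   # module 's' has no attribute 'add'
```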

The function add needs to return the value of a + b. The types of
a and b have not been specified in the instructions. We would thus expect the function
to work at least for the Python data types int and long (both integers),
floating point numbers (float) and complex numbers.

(Dynamic typing will ensure that the return value is of the same type
as the input arguments a and b, provided a and b are of the same
type and the plus operator can deal with that type.)
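A quick sketch of this: the same definition handles all the types for which + is defined (the string example is only an illustration of how + generalises, not a requirement from the instructions):

```python
def add(a, b):
    """Return the sum of a and b."""
    return a + b

# the same function works for several types, thanks to dynamic typing
print(add(1, 2))          # int result: 3
print(add(0.5, 0.25))     # float result: 0.75
print(add(1 + 2j, 3))     # complex result: (4+2j)
print(add("foo", "bar"))  # even strings, because + concatenates them
```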

Let's assume the following mistake: the function add that we submit
prints the result of the calculation instead of returning it (printing
the result may seem like a good idea, but it is not: if the result of a
calculation is required as a string, it can be converted into a string
later):

line 5 contains the clue: the call of s.add(0, 0) evaluates to
None (the special Python object None).

The comparison of None with 0 fails.

The underlying problem is that the submitted definition of the
add function does not use the return keyword to return the
value of a + b (instead the value is printed). Any function
not using the return keyword will return the special value None.

Remark: Note that in this case (where the function prints something)
the new section Captured stdout appears in line 10: this shows
the 0 that was printed when s.add(0, 0) was called.
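A sketch of the faulty definition described here:

```python
def add(a, b):
    # BUG: prints the result instead of returning it; a function
    # without a return statement returns the special value None
    print(a + b)

result = add(0, 0)     # prints 0 (this is the "Captured stdout")
print(result is None)  # True: the comparison with 0 then fails
```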

line 10 contains a comment we have put before the test to give a hint
about the following test. It suggests that the test in the next line is
to see whether the function has a documentation string.

line 11 reads:

assert s.add.__doc__ != None

The object that is being tested here is s.add.__doc__, i.e. the
__doc__ attribute of the function add in the submitted file
s.

This is the documentation string: if it is defined, it is a
string. If it is undefined, this attribute s.add.__doc__ is the
special Python value None.

The documentation string is what is displayed when we pass the function
object add to the help function, for example:

>>> help(add)

The comparison operator (!=) checks for inequality, i.e.

s.add.__doc__ != None

will evaluate to True if s.add.__doc__ is NOT None, i.e. if
s.add.__doc__ exists (so all is
well, because then the documentation string is defined; and this is what the
test is for).

If, however, s.add.__doc__ is None (because it was not defined
as in our example here), then the whole expression s.add.__doc__
!= None evaluates to False and thus the assert statement fails.
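The difference can be seen with two small definitions (the names are made up for this sketch):

```python
def add_without_doc(a, b):
    return a + b

def add_with_doc(a, b):
    """Return the sum of a and b."""
    return a + b

print(add_without_doc.__doc__)       # None: the check would fail
print(add_with_doc.__doc__ != None)  # True: the check passes
```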

Occasionally, assert statements are preceded by Python comments
in the line above (as here, where the comment is in line 10 and the
assert statement in line 11). These comments have been placed there to
help you understand what is being tested: they are worth reading --
often there is no need to look into the detailed error message if
that line provides enough context, or at least knowing the context
will make it easier to understand the purpose of the assert statement.

The testing system will first try to import the student file under the name s, i.e. if the submitted file is demo.py, then the testing file
will want to run a command like:

import demo as s
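This step can be mimicked at the Python prompt with importlib (demo.py is a hypothetical file name, created here only for illustration):

```python
import importlib
import pathlib
import sys

# create a hypothetical submission file demo.py in the current directory
pathlib.Path("demo.py").write_text("def add(a, b):\n    return a + b\n")

sys.path.insert(0, ".")              # make sure the current directory is searched
s = importlib.import_module("demo")  # equivalent to: import demo as s
print(s.add(1, 2))                   # 3
```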

If there are syntax errors in the demo.py file, this will be impossible, and the testing cannot even be started. In this case, an email along these lines is returned:

Test failure report
====================
import s.py
-----------
When trying to import your submission, we have
encountered an error, and as a result, this submission
has failed. A likely problem is indentation.
You should find that Python reports some error
(SyntaxError most likely) when you execute your file
MYFILE *before* submission (either by pressing F5 in IDLE,
or running "python.exe MYFILE.py").

As the text says, the best way to avoid/fix this, is to make sure that the file is valid Python, i.e. executing it does not raise any error messages.
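A sketch of what happens in such a case: compiling source code that contains an indentation mistake raises a SyntaxError before any code runs, just as importing a broken file does (the source string here is a made-up example):

```python
# a function body that is not indented: invalid Python
source = "def add(a, b):\nreturn a + b\n"

try:
    compile(source, "demo.py", "exec")
except SyntaxError as err:
    # IndentationError is a subclass of SyntaxError
    print("SyntaxError:", err.msg)
```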

Finally, please do ask for help in interpreting error messages (both from the testing system and those that your code produces as you do the exercises): the error messages you receive in the feedback are (slightly enhanced) standard Python error messages. Learning how to read those will benefit you far beyond this course. This goes back to the saying that "writing code is easy -- debugging it is hard".