Using QuizTeq to Create Programming Quizzes

When teaching introductory programming, the real test of mastery is whether a student can actually program the solution to a problem. Those of us who teach multiple large sections see many students give up partway through the course because they cannot work the problems. Good programming quizzes can reveal those weaknesses early, so that help can be provided.

QuizTeq is a tool for creating richly interactive, diagnostic quizzes that can be graded automatically. In this post you will see how to use drawing, dragging, and typing to create questions tailored to introductory programming quizzes.

5 Styles of Questions for Programming Quizzes

We have identified 5 general types of questions that can be used to test students’ programming skills. These techniques, like QuizTeq’s features, are not tied to a particular language and can be applied to virtually any introductory programming course. The 5 styles are bug finding, bug correction, code insertion, code tracing, and Parsons’ problems.

QuizTeq provides tools for creating highly interactive questions that work in a programming context, rather than falling back on multiple choice or true/false. The questions are easy to create and can be graded automatically, which is essential for diagnostic quizzing.

Bug finding

These quiz questions state a programming problem, show an incorrect solution, and then ask the student to underline or otherwise highlight the error using digital ink.

QuizTeq does not understand programming. The problem shown is just text and images over which the students draw their answers. The grading process is to take the digital ink strokes and assign feedback/points to each of them.

A basic part of QuizTeq’s grading strategy is to show you many student answers at once, so that you can see the kinds of answers students are giving and grade many of them together. In the example above the students have provided three clusters of answers: the right answer, the unwanted semicolon on line 2; all of line 2, from students who do not want to commit to a specific spot; and a wrong answer on line 4. In some cases we have found as many as seven distinct clusters of wrong answers. Each cluster represents a particular misunderstanding held by some subset of students, and each is an opportunity to give specific feedback.

We start grading by creating a feedback item for the correct answer and then selecting the strokes that correspond to the correct answer as you can see above. In this way we grade many students all at once in a consistent way. We then create a new feedback item and the strokes that we have already graded disappear, as shown below.

We continue grading clusters of answers until they are all gone. When we return to the question, the grading that we did is converted to rules that are automatically applied to any remaining student answers. This allows us to grade many answers without even looking at them. QuizTeq then shows us any answers our rules did not cover, and we can grade these exceptions as before. These grading rules can be saved and reused in subsequent semesters for even more grading speedup.

Debugging is a fundamental programming task and one that students frequently struggle with. Problems like this can rapidly exercise debugging skills in the small before students get bogged down in larger programming projects.
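As a concrete sketch of this style of question (a hypothetical example, not the one in the screenshots above), the fragment below contains a single off-by-one bug that a student would be asked to underline. The function names and the bug itself are our invention for illustration.

```python
# Hypothetical bug-finding prompt: "This function should return the sum of
# the first n positive integers, but it contains a bug. Underline the
# faulty line."

def sum_first_n(n):
    total = 0
    for i in range(1, n):  # BUG: range stops at n - 1, so n is never added
        total += i
    return total

# The corrected version the grader has in mind:
def sum_first_n_fixed(n):
    total = 0
    for i in range(1, n + 1):  # includes n
        total += i
    return total
```

Since QuizTeq treats the fragment as plain text and images, any language works here; the student's ink stroke over the `range` line is what gets graded.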

Sign up for FREE to try out QuizTeq


Bug correction

Bug correction is like bug finding, except that we ask the student to fix the problem rather than just locate it. As before, we state a programming problem and provide a program fragment that solves it but contains an error. We can then provide one or more program phrases that the student drags onto the erroneous code to fix it. This exercises slightly higher skills than simple bug finding.

The image above shows a simple bug correction problem with a single draggable item (circled in blue) that the student can use to correct the problem. The image below shows the correct answer.
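In text form, a question of this kind might pair a faulty expression with a single draggable replacement phrase. The example below is hypothetical (our own function name and bug, not the screenshot's), with the draggable phrase shown as the corrected line.

```python
# Hypothetical bug-correction prompt: the comparison is wrong, and the
# student drags the phrase "x % 2 == 0" onto the faulty expression.

def is_even(x):
    return x % 2 == 1  # BUG: this tests for odd, not even

# The fragment after the draggable correction has been applied:
def is_even_corrected(x):
    return x % 2 == 0
```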

Grading Dragging

Grading dragging questions is much the same as grading ink strokes. Many draggable-object answers are shown at the same time, as in the image below. We can then select clusters of them and assign feedback, just as we did with ink.

Questions become even richer when we add multiple possible correction items from which the student can choose. The image below shows the same problem with many candidate corrections. Adding more options raises the complexity of the possible answer sets: there is still a finite number of possibilities, but it is large enough that guessing offers no hope, which forces the student to think harder about the programming problem. If we are using the quiz as a learning incentive, we might point the student to a section of the textbook so that they can work out the answer for themselves.


Code insertion

Code insertion problems let students engage directly with the programming problem without our giving up automatic grading. Automatically grading arbitrary code is, of course, an uncomputable problem. However, verifying the correctness of a short fragment is quite easy.

For this type of question we state a programming problem, show a solution, and then remove part of the code, replacing it with a type-in box. The student is asked to provide the missing code. The image below shows a code insertion problem with a student answer already filled in. The set of possible answers is quite large, but in practice students provide only a few distinct answers, each corresponding to a particular misunderstanding.
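Sketched in code, a code-insertion question might look like the following hypothetical example, where one line is removed and replaced with a type-in box. The function and the expected answer string are our own illustration.

```python
# Hypothetical code-insertion question: the accumulator update below is
# removed and replaced with a type-in box; the expected student answer
# is the single line "total += value".

def average(values):
    total = 0
    for value in values:
        total += value  # <-- removed; the student types this line in
    return total / len(values)
```

Because only this one short fragment is graded, checking the student's typed answer is tractable even though grading a whole program would not be.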

Grading Code Insertion

In the image below we see the text answer grading display. Note that each unique student answer is shown along with the number of students who provided that answer. These answers are sorted by popularity. Almost always the correct answer is the most popular.

We provide a feedback item and then select the answer string that most closely corresponds to that feedback. Our selected string then appears above a list of the other strings, sorted by their similarity to it. If we wish, we can select one of these other strings to specify how similar a student answer must be to our selected answer. This lets us easily accept extra spaces, differences in case, or minor spelling errors. As teachers, it is our choice how exact the match must be. In this particular case we do not choose the next most similar string, because it is wrong.

We continue creating feedback items until all of the strings have been removed from the bottom list. We then ask for another batch of student answers. In assembling the next batch, QuizTeq applies our rules to grade other answers without us needing to look at them. With a large section this speedup is very advantageous.


Code tracing

A very common programming quiz question is the code trace. We provide a fragment of code and ask what it will print, or what the value of a variable will be at some point in the code. The student then “plays computer” to figure out the answer. This is a good strategy for exercising a student’s knowledge of how language features work. An example of a code tracing problem is shown below.

Code tracing has its advantages, but it is a form of problem not often found in actual coding practice. Code tracing uses the text type-in form of question, graded in the same fashion as shown for code insertion problems.
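A code-tracing question might present a small loop like this hypothetical one and ask for the final value of a variable; the student plays computer, doubling x on each pass.

```python
# Hypothetical code-tracing question: "What is the value of out after
# this loop finishes?"
x = 1
out = []
for i in range(3):
    x = x * 2
    out.append(x)
# Tracing by hand: x becomes 2, then 4, then 8, so out ends as [2, 4, 8].
```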

Parsons’ problems

Parsons’ problems are a technique for testing programming on a small scale without tackling the full code-analysis problem. They were first described by Parsons and Haden in 2006. One simple form is to give all of the lines of code required to solve a problem, but in jumbled order. The student’s task is to rearrange them into the correct order. Grading is then based not on knowledge of the programming language but on the order of the items.

Parsons’ problems in the original jumbled-order form are easy to create in QuizTeq. One simply writes the desired code, makes each line draggable, and then scrambles the order. The following image shows an example of a Parsons’ problem. Such problems are easier to grade if you give the students a target showing where each item goes, as shown below.
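In miniature, the order-based grading works like this hypothetical sketch: the lines of a tiny program are presented jumbled, and credit depends only on the final ordering, not on parsing the code.

```python
# Hypothetical Parsons' problem: these three lines print the squares of
# 1 through 5 when placed in the correct order. QuizTeq would present
# them jumbled and grade the student's final ordering.

correct_order = [
    "for n in range(1, 6):",
    "    square = n * n",
    "    print(square)",
]

# One possible jumbled presentation of the same lines:
jumbled = [correct_order[2], correct_order[0], correct_order[1]]

def grade(student_order):
    # Order-based grading: full credit only when every line is in place.
    return student_order == correct_order
```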

There are actually several variants of Parsons’ problems; Denny has identified five, and there are others. All of the variants we have seen are easily created in QuizTeq using either dragging or drawing as the interactive tool. QuizTeq’s grading rules can even test for correct indentation, if that is desired.

QuizTeq can easily create a variety of questions to explore a student’s knowledge of programming. Come give it a try.