ITICSE 2011 was a fun, interesting event in Darmstadt, Germany last week. Here’s a brief report on my experience of ITICSE, completely biased and centered on the events that I attended, and only highlighting a handful of papers.

Ulrik Schroeder’s opening keynote (slides available) focused on the outreach activities from his group at Aachen University, with some high-quality evaluation. The most interesting insight for me was on their work with Lego Robotics. I raised the issue that, in our GaComputes work, we find that girls get more excited about computing and change their attitudes with other robots (like Pico Crickets or Pleo Dinosaurs) more than Lego Robotics. Ulrik agreed and said that they found the same thing. But boys still like and value being good at Lego Robotics, and that’s important for their goals. He wants to find and encourage the girls who do well at the same robots that the boys like. He wants the girls to recognize that they are good at the same CS that the boys do. It’s a different goal than ours — we’re more about changing girls’ view of CS, and they’re more about finding and encouraging girls who will succeed at the existing definition of CS.

I went to a paper session on A Tool to Support the Web Accessibility Evaluation Process for Novices. They had a checklist and rubric, including role playing (what would an older user do with this site? A sight-impaired user? Someone whose native language isn’t English?) to use in evaluating the accessibility of a website. I liked the tool and was wondering where the same model could be used elsewhere. I got to thinking during this talk: Could we do a similar tool to support the design of curriculum that encourages diversity? Could we provide checklists, rubrics, and role plays (How would a female CS student respond to this homework write-up?) to help faculty be more sensitized to diversity issues?

The coolest technology I saw was WeScheme — they’ve built a Scheme-to-JavaScript compiler into a Web page, so that students can hack Scheme live from within the browser. I was less impressed by the paper presentation. They’re using WeScheme in a middle school, where the kids are doing code reviews (“which most undergraduate programs in the US don’t do”) and presenting their work to “programmers from Facebook and Google.” Somebody asked during Q&A, “How do you know that most undergraduate programs don’t do code reviews?” They had no evidence, just an informed opinion. I’m worried about the paper as a model for outreach. Are Facebook and Google programmers willing to visit all middle schools in the US? If not, this doesn’t scale. Nonetheless, the technology is amazing, and I expect that this is the future of programming in US schools.

Probably the paper that most influenced my thinking was Orni Meerbaum-Salant’s paper on Habits of Programming in Scratch (same session). They studied a bunch of students’ work in Scratch, and identified a number of common misconceptions and errors. What was fascinating was that the bugs looked (to me) a lot like the ones that Elliot Soloway found with the Rainfall Problem, and the issues with concurrency were like the ones that Mitchel Resnick found with Multilogo and that John Pane found with HANDS. That suggests that changing the environment doesn’t change the kinds of errors students are making. And since all student programming misconceptions come from our instruction (i.e., students don’t know much about programming before we teach them programming), it means that we’ve been teaching programming in basically (from a cognitive perspective) the same way since Pascal.
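For readers who haven’t seen it, here’s a rough sketch of the Rainfall Problem in Python (Soloway’s original studies used Pascal; the function names and the particular buggy variant below are my own illustration, not code from any of the papers). The task: average the non-negative values that appear before a sentinel. The bugs Soloway documented were mostly failures to compose the pieces — forgetting to filter out negatives, averaging in the sentinel, or dividing by zero on empty input:

```python
SENTINEL = 99999

def rainfall_average(readings):
    """Average the non-negative readings that appear before the sentinel."""
    total = 0
    count = 0
    for value in readings:
        if value == SENTINEL:   # stop at the sentinel
            break
        if value >= 0:          # skip invalid (negative) readings
            total += value
            count += 1
    if count == 0:              # the guard novices routinely forget
        return 0.0
    return total / count

def rainfall_buggy(readings):
    """A typical novice version: the negative-value filter is missing."""
    total = 0
    count = 0
    for value in readings:
        if value == SENTINEL:
            break
        total += value          # bug: negatives are included in the sum
        count += 1
    return total / count        # bug: crashes if there are no readings
```

On the input `[1, 2, -5, 3, 99999]`, the correct version averages 1, 2, and 3 to get 2.0, while the buggy one returns 0.25 — the same plan-composition failure, it seems, that shows up in Scratch despite the very different environment.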

The paper reporting on a multi-year professional development effort in Alice was really interesting. They had lots of great stories and lessons learned. The most amazing story for me was the school district where, not only were the CD/DVD players disabled, but the IT staff had used glue guns to fill in the USB ports on the school computers. The IT administration wanted there to be no way for teachers to load new software onto those computers. How depressing and frustrating!

All the papers in the session on Facilitating Programming Instruction were thought provoking. Paul Denny’s project measures how much thrash and confusion students face in figuring out Java — and there’s a lot of it. Shuhaida Mohamed Shuhidan (“Dina”)’s dissertation work is yet another example of how little students understand about even programs as simple as “Hello, World!” I really liked that Matt Bower is exploring how learning a second language can influence/interfere with the first language learned, but I was disappointed that they only used self-report to measure the influence/interference. Without any kind of prompt (e.g., an example program in the first language), can you really tell what you’ve forgotten about a first language?

My keynote went well, I thought (slides available). I talked about CS for non-majors, for professionals who discover CS late in life, and for high school students and teachers. After lunch the third day, I headed off to Aachen University to visit with Ulrik’s group, so I didn’t get to see more. ITICSE was a lot of fun and gave me lots of good ideas to think about.

Thanks for the kind words about WeScheme. I’m sorry that my presentation was a low point relative to the work, but then again, that’s much better than the other way around!

Since you offer a highly impressionistic view of what I said, let me fill it out. I said that our volunteers are a combination of college students and professional programmers. The professional programmers happen to be from companies such as Google and Facebook, but are not at all limited to them (as I said in my talk). But to address your scalability concern, the general supply of students and programmers is extensive around the country.

The students who do code reviews do so before the audiences present at the end-of-term celebration. The audience includes parents, teachers, principals, etc. If it so happens that the course was taught by a team from Google, then it’ll also include that team of Google engineers (as well as others from Google). It is not the case that Google and Facebook engineers travel the country to conduct code reviews (though with WeScheme and some imagination and technology, this is achievable without any travel needed…).

The issue of code reviews was anyway a complete sideshow to the point of the presentation (which was about WeScheme). I brought it up in one sentence, as a practice that I think computing educators don’t use nearly enough. If I’d been thinking quicker on my feet, I’d have done an instant poll of the people in the room, and then at least we’d be ~50 data points better informed. I have a guess as to what the outcome of the poll would have been. I realized this a half hour later. Oh well; l’esprit de l’escalier.

As you will recall, there was ample time for further discussion (I intentionally finished my talk about 3-4 minutes early). Anyone in the room, including both the original questioner and you, was welcome to point to literature with more information about the prevalence of code reviews in computing education. I again find it telling that the assembled wisdom of computing ed in that room did not.

Every year I scan at least paper titles from many conferences, and if I’d found titles that mentioned code reviews I’d surely have read them. But I may be scanning poorly, or looking for the wrong terms (“code review” vs “code inspection” vs “codewalk”…). So, why don’t you provide some references or other data for people reading this?

The CS education community has been using code reviews as part of the efforts to incorporate studio-based learning into computer science education. I particularly recommend the work of Chris Hundhausen who has carefully been studying what factors lead to successful pedagogical code reviews, versus those that lead to negative attitudes about coding and computer science (e.g., on-line code reviews are markedly more negative and result in more negative attitudes than the face-to-face ones they’ve studied). Harriet Taylor was talking about this effort at ITICSE, and I think that she said that the program is up to 75 CS departments participating. While I don’t know of any record of how many CS departments there are in the country, I know that we’re about 27 in Georgia, so 75 is already a pretty significant start — and those are just the ones involved in the research, not those who are incorporating the ideas but aren’t part of the evaluation effort.

The studio-based learning effort in CS has been prominent in the SIGCSE community the last couple years, with whole-day workshops preceding the last couple symposia. Agreed that your point about code reviews was a sideshow to your main effort, but since code reviews are a prominent part of CS education research efforts today, your comment was jarring and suggested a disconnect between the WeScheme efforts and the good ideas in the CS Ed community. No, the words “code reviews” might not appear in the titles, but that’s because the code isn’t the focus. CS education research is a different sub-discipline of computer science than software engineering or programming languages, so it’s not surprising that the language changes a bit. The focus in the CS Ed community is on the learning, and in studio-based learning, it’s about borrowing successful learning practices from design fields like Architecture and incorporating them into CS. “Code reviews” are a great form of critical design dialog, and they’re being used for their pedagogical purpose in these efforts, but as part of studio-based learning, not as a first-class focus.

One other clarification: WeScheme isn’t used in “a” middle school, but in a series of after-school classes nationwide. The Google and Facebook engineers are among those who teach these classes for us — they are not just attending code reviews.

That doesn’t change the scalability point, but the problem is not where we expected it to be. Finding volunteer teachers is easier than finding after-school programs that can support good STEM classes. We’ve learned a lot about the infrastructure needed to teach computing education in these contexts. We think after-school is important for reaching students with no viable access to computing education, but even successful national programs lack the consistency for classes that seriously build on themselves week after week.

It’s an interesting discussion for those seeking interim routes around the school system. I’d enjoy following up on this with anyone else who plans to be at ICER.

This is a very helpful clarification — thanks. I was loosely aware of the studio-based efforts, but on (brief) inspection they seemed rather different from our narrower focus on code reviews. What I understand from your response is that they both are and are not. So now I know what to track better in future.
