Android Projects Retrospective

The Android Projects class I ran this semester has finished up, with students demoing their projects last week. It was a lot of fun because:

I find mobile stuff to be interesting

the students were excited about mobile

the students were good (and taught me a ton of stuff I didn’t know)

the course had 13 people enrolled (I seldom get to teach a class that small)

The structure of the class was simple: I handed each student a Samsung Galaxy Tab 10.1 and told them that the goal was for each of them to do something cool by the end of the semester. I didn’t do any real lectures, but rather we used the class periods for discussions, code reviews, and a short whole-class software project (a sudoku app). Over the last several years I’ve become convinced that large lecture classes are a poor way for students to learn, and teaching a not-large, not-lecture class only reinforced that impression. If we believe all the stuff about online education, it could easily be the case that big lecture courses are going to die soon — and that wouldn’t be a bad thing (though I hope I get to keep my job during the transition period).

The course projects were:

A sheet music reader, using OpenCV — for the demo it played Yankee Doodle and Jingle Bells for us

A tool for creating animated GIFs or sprites

A “where’s my car” app (for the mall or airport) using Google Maps

Finishing and polishing the sudoku app that the whole class did (we had to drop it before completion since I wanted to get going with individual projects)

An automated dice roller for Risk battles

A generic board game engine

A time card app

A change counting app, using OpenCV

A bluetooth-based joystick; the demo was using it to control Quake 2

A cross-platform infrastructure (across Android and iOS) for games written in C++

The total number of projects is less than the number of students since there were a few students who worked in pairs. I can’t easily count lines of code since there was a lot of reuse, but from browsing our SVN repo it’s clear that the students wrote a ton of code, on average.

Overall, my impression that desktops and laptops are dying was reinforced by this course. Not only can tablets do most of what most people want from a computer, but the synergy between APIs like Google Maps and the GPS, camera, and other sensors is obvious, and there are a lot of possibilities. Given sufficient compute power, our tabs and phones will end up looking at and listening to everything, all the time. OpenCV’s Android port is pretty rough and needs a lot of optimization before it will work well in real time on tabs and phones, but the potential is big.

Android has a lot of neat aspects, but it’s an awful lot of work to create a decent app and it’s depressingly hard to make an app that works across very many different devices. This is expected, I guess — portable code is never easy. The most unexpected result of the class for me was discovering how much I dislike Java — the boilerplate-to-functional-code ratio is huge. This hadn’t seemed like much of a problem when I used to write standalone Java, and I had a great time hacking up extensions for the Avrora simulator, but somehow interacting with the Android framework (which seems pretty well done) invariably resulted in a lot of painful glue code. The students didn’t seem to mind it that much. It’s possible that I’m exaggerating the problem, but I prefer to believe they’re suffering from Stockholm syndrome.
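To make the glue-code complaint concrete, here’s a sketch in plain pre-Java-8 Java. The `OnClickListener` and `Button` types below are hypothetical stand-ins, loosely modeled on Android’s `View.OnClickListener` callback pattern but simplified so the example compiles on its own; the point is the ratio of anonymous-class ceremony to actual intent.

```java
// Hypothetical stand-ins for a framework callback API, loosely modeled
// on Android's View.OnClickListener (simplified so this compiles alone).
interface OnClickListener {
    void onClick(String viewId);
}

class Button {
    private final String id;
    private OnClickListener listener;

    Button(String id) { this.id = id; }

    void setOnClickListener(OnClickListener l) { listener = l; }

    void simulateClick() {
        if (listener != null) listener.onClick(id);
    }
}

public class GlueDemo {
    // The one line of actual intent.
    static String handle(String viewId) {
        return "clicked: " + viewId;
    }

    public static void main(String[] args) {
        Button ok = new Button("ok_button");
        // ...and the six lines of glue needed to attach it.
        ok.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(String viewId) {
                System.out.println(handle(viewId));
            }
        });
        ok.simulateClick();
    }
}
```

One line of logic, six lines of wiring — and this pattern repeats for every listener, adapter, and lifecycle hook an app registers.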

Hi Lee, I used Avrora pretty heavily for several sensor network projects like “safe TinyOS.” It’s the only tool I know of that provides useful information about CPU usage and such (no fun to do this with a logic analyzer).

I’ve forgotten exactly which plugins I wrote, but one let motes run a printf-style function, and another measured code coverage. A couple of my students used it too.

Java’s boilerplate is horrendous. It’s the number-one reason we dropped it in favor of Python as our CS1 programming language.

I think there is a huge under-exploited design space for mid-level programming languages with running times on the order of Java’s but with concise code; I’m not convinced that Scala has done the trick.

Gabe, I agree that there’s a gap to be filled, probably by something more on the Scala / Ocaml side, or perhaps by a variant of a more dynamic language like Python that supports static typing at module boundaries.

The other thing that aggravates me about Java is the JVM startup time. On my SSD-equipped laptop Eclipse takes like 20 seconds to be fully up and running. There are technical solutions — for example, VMs on Android have low startup cost — but mainstream JVMs do not seem to have managed to figure this out, nor do 15 years of hardware improvements seem to have made a dent in the problem.
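As a rough way to see the fixed cost, the JDK’s standard management API reports the time elapsed since the VM launched; measured at the top of main(), that’s essentially pure VM and class-loading overhead. A minimal sketch (the class and method names are made up; `RuntimeMXBean.getUptime()` is the real JDK API):

```java
import java.lang.management.ManagementFactory;

public class StartupCost {
    // Milliseconds since the JVM process launched. Sampled at the top of
    // main(), this is (approximately) pure VM + class-loading overhead.
    static long startupOverheadMs() {
        return ManagementFactory.getRuntimeMXBean().getUptime();
    }

    public static void main(String[] args) {
        System.out.println("JVM overhead before main(): "
                + startupOverheadMs() + " ms");
    }
}
```

Running this with `java StartupCost` shows the cost you pay before a single line of your own code executes — which is why JVM languages feel so wrong for small command-line tools.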

Imagine teaching post-secondary software development in the future where you need to start the course by introducing the students to desktop machines and email 😉

The “desktop machines” item assumes developers will still need/want to use a desktop for [cross-]development rather than a phone/pad type device and assumes future generations will skip desktop machines in favour of phone/pad devices.

The “email” item assumes developers will still use email, mailing lists, and patches to perform [open-source] work over a public network and assumes the younger generation will prefer twitter over email for communication (something we’re already seeing).

You wrote:
> Overall, my impression that desktops and
> laptops are dying was reinforced by this
> course.

I’m curious, what leads you to think this (that desktops/laptops are dying)?

I see no signs at all of this. ISTM the key characteristics of these devices are:
tablet: portable, has small screen, no keyboard, very easy to use on-the-go
laptop: portable, screen size varies from small to moderately-large, has keyboard, needs a lap or table to use
desktop: not portable, screen size varies from moderately-large to huge, has keyboard, can have high-{power,performance} parts

Given how different these characteristics are, i.e., how different the “ecological niches” occupied by these different gadgets are, I don’t see any of them “dying out” any time soon.

What sorts of computers did your students in this course use to write their code & documentation?

Hi Jonathan- You’re right, PCs are not going to die. However, they are becoming power toys and work tools for technical people. The CS students in my course are hardly representative.

The niche analysis doesn’t work. You might just as easily have argued that 8-inch hard disk platters or DEC minicomputers filled niches, and therefore would always exist.

A better analysis is to look at different groups of users and figure out what kind of device meets their needs in a cost effective way. I believe phones and tablets meet the needs (email, text, games, web, etc.) of the vast majority of casual users, plus they have other advantages like mobility and not requiring sysadmin skills.

Yep, JVM startup time is a bit of a problem, especially if you just want to write a simple command-line utility. E.g., on my MacBook the JVM takes 0.25 seconds to start up before even loading the main class. That’s unacceptable for a command-line utility, especially one you might want to use in a script.

Hi Jonathan, I think the keyboard vs. not distinction is a good one, but have you seen the new ASUS Transformer Prime? I’m tempted to get one, but on the other hand it looks kind of exactly like my little MacBook Air.

Hi,
“keyboard vs. not” is definitely a good distinction *right now*, but I’m pretty sure that someone will come up with new input metaphors for touch screens so that even typical “computer nerd” stuff can be accomplished without a real (or virtual) keyboard.

Fast and intelligent symbol browsers together with a new take on editors can make it just as fast/productive to program directly on-screen as using a keyboard.

Touch screens might also be the final missing piece for true graphical languages to take over the world… 🙂

@John:
> For me (and I’m guessing for you) any
> device w/o a keyboard is basically a toy,
> but I’ve learned that I’m so far from being
> representative that extrapolating my own
> preferences is useless.
Exactly. A tablet would be useless to me. But $spouse is contemplating getting one in a few months to be able to read pdfs on the bus and when travelling…

I looked at that ASUS ad. I find it amusing to see them bragging about a “chiclet keyboard” — to me that’s a term of derision dating back to the IBM PCjr. I guess I’m showing my age and/or how finicky I am about keyboard feel… 🙂

@Anders:
I’m skeptical. I’m sitting in front of a laptop right now, and if I look at what I’ve done in the past hour (ssh through a gateway to a bunch of remote machines to check on long-running jobs there, copy data files from remote machines back to laptop, write an E-mail, work on redesigning my code for analysing those data files, edit some system configuration files for my laptop, look at web sites for meetings I might go to next summer, edit my upcoming-events file), I don’t see a keyboard-less computer being very plausible for my usage pattern.

@Jonathan:
It’s fully understandable to be skeptical! 🙂 Apple first started talking about the Newton device in the late ’80s. I owned a Palm for a few years and was even on the software team for the first mobile phone to be marketed as a ‘smartphone’ (the Ericsson R380).
None of these devices, or any other similar attempts for that matter, really changed our mindset on how to communicate with a computer.

Not until 20 years later did the convergence of ‘cheap’ multi-touch screens, cheap and power-efficient processing power, ‘standardized’ web service protocols, the invention of the lightweight application, and a few new cool ways of interacting with a screen give us an abstraction that is really useful for a lot of use cases.
I find myself using my Samsung smartphone for a lot of stuff that I didn’t even do with a computer just a few years ago.

I don’t think that the evolution will stop here, but it might just as well take another 20 years for the next break-through in how we interact with computers.

@Anders:
A key problem is, and will be, that of *writing*, i.e., somehow getting free-form natural-language text from a human brain into a computer. Maybe another 20 years will finally bring voice recognition which is better than a keyboard…

Jonathan and Anders, I more or less agree with both of you. The question of how to interact with computers is a pretty deep one. Some sort of in-brain interface is probably where it goes in the long run…