Friday, October 29, 2010

I've been studying for my upcoming PhD comprehensive exams, and my major topic is computer graphics. Even though I haven't actually had the opportunity to take a class on graphics, I'm finding I've seen a lot of the material in the book I'm reading (Fundamentals of Computer Graphics). One thing that struck me recently is the book's alternate approach to explaining perspective projection.

I learned projection based on the pinhole camera model while learning computer vision. The way to think of it is that light hits an object at point P, and some of it travels through the centre of the camera at point O. Inside the camera is an image plane (Y1), which might be, for instance, film or a digital sensor. The particular bit of light from P will hit the image plane at point Q. When light is traced from each point on the object back to the image plane, an image of the object will be formed upside down. The math behind figuring out exactly what the image will look like and where it will be involves similar triangles and the like.
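The similar-triangles math can be sketched in a few lines. Here's a toy version (my own, not from the book), placing the pinhole O at the origin and the image plane a distance f behind it, so the image forms inverted:

```python
def pinhole_project(point, f):
    """Project a 3D point P = (x, y, z) through a pinhole at the
    origin onto an image plane at z = -f (inside the camera).

    By similar triangles, the image coordinates are P's coordinates
    scaled by -f/z; the negative sign is what flips the image
    upside down."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (-f * x / z, -f * y / z)

# A point twice as far away projects half as large:
print(pinhole_project((1.0, 2.0, 4.0), f=2.0))  # (-0.5, -1.0)
print(pinhole_project((1.0, 2.0, 8.0), f=2.0))  # (-0.25, -0.5)
```

That 1/z scaling is exactly why distant objects look smaller under perspective projection.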

This same concept, known as perspective projection because the final image exhibits perspective (parallel lines that no longer look parallel), still holds in the Fundamentals of Computer Graphics explanation. In this case, though, we want to express the projection in terms of an orthographic projection, something the book had already established mathematically. An orthographic projection creates an image in which parallel lines stay parallel. Architectural and model drawings are often drawn this way.

Objects further away look smaller with perspective projection (left), but not with orthographic (right)

An orthographic projection works by drawing a straight line from P to the image plane, perpendicular to that plane. It turns out that every point along the line through O and P in the diagram above appears at the same place on the image plane under a perspective projection. So if we transform the points on that line so that the line becomes perpendicular to the image plane, an orthographic projection then gives the same result (since all points on a line perpendicular to the image plane project to the same place under an orthographic projection).

I enjoyed seeing perspective projection from this point of view; it actually helped me understand the geometry behind it all a bit more deeply. It makes you wonder what other topics we could explain in two or more simple ways, and how many students would benefit from doing so.

Customization: Increased complexity, trusting users to make the right choices (particularly with security settings), issue of it being good for users and bad for developers.

Communication / Social computing: Governing what goes up online, expectations of use and access in organizations, combining with other applications.

Context: Better help systems, have to be careful with personal information.

Convergence: Inter-application communication.

One cool thing I noticed about this list, especially near the end, is that many of these trends fit into the realm of augmented reality. Context is the obvious one, since it's central to the very nature of AR, but a lot of the other ideas are at least somewhat related too. One example given of convergence was MIT's Sixth Sense, because it provides information in a way that makes it 'always available' - AR in general can make information always available, since, at least conceptually, it doesn't require switching context between what you are doing in the real world and the information related to that task. Customization of one's own world becomes possible through AR's inclusion of virtual objects. AR applications can definitely be social in a built-in kind of way, but also in a talking-to-people-in-the-real-world kind of way. There are even more ways to fit the remaining trends into AR, but I'll leave those to the imagination of the reader.

Are there other up and coming interfaces that speak to this list of trends?

Tuesday, October 26, 2010

I submitted a paper to this year's SIGCSE (ACM's Special Interest Group on Computer Science Education) conference that didn't get in. The reviews were actually fairly positive overall; I got the impression that even though it was an experience report and not a research paper, it needed to be more like the latter (so I'll know where to improve for next time). Luckily, thanks to the power of the Internet, I can share the paper and hope that those who would find it useful will stumble upon it.

You can find the paper, called "Adding Computer Science to an Introductory Computing Class for Non-Majors," on my portfolio page about the course. My main purpose for the paper was to show that arts students are capable of learning more difficult computer science topics if they are taught in the right way, and that they actually enjoy gaining insight into how computing works. My hope is that other departments that have "using computers" courses for non-majors rather than "computing" courses will consider trying something new.

One reason I decided to attend this event is to force myself to make a research poster for my recent work on how cognitive theories help explain the value of augmented reality. (If nothing else, our lab needs more posters to hide the dirty white walls we aren't allowed to paint.) I'm pretty happy with the results of the poster, and will definitely post it on my portfolio a little later this year with more info about the research (just want to wait until the related paper has been reviewed).

The other poster and presentation are based on a presentation that the CU-WISE co-founders have given before at NCWIE and at Grace Hopper. Our poster is going to be pretty simple, with some photos as discussion points and room to pin up our promotional and outreach supplies. The talk is only ten minutes and is going to go over some of our keys to success. I'm particularly excited about these because Barb and Natalia will be coming to present! Better yet, Natalia is bringing her new baby. Yay! :D

Finally, I'm looking forward to a program that's a little different from other conferences I've attended. Being much smaller than Grace Hopper and the like, ONCWIC is able to be more intimate. We're actually going to be able to meet everyone personally if we want to (musical appetizers at 6:15!), and they're all going to be local! There's an evening social event followed by games and desserts (which counts as social to me, so it's like a double header).

Last but not least, I'm hoping to check out Fort Fright at Old Fort Henry before heading home Saturday night. This is going to be an amazing weekend.

Wednesday, October 13, 2010

I love it when something so simple is so effective. Tom Moher's 2006 paper [ACM, CiteSeer] describing his work on what he calls Embedded Phenomena was a case of "why didn't I think of that?" for me for sure. He offers an affordable way to integrate digital information into standard classroom practice, and while he doesn't use the term augmented reality, I think the systems created definitely are.

The abstract of the paper goes like this:

‘Embedded phenomena’ is a learning technology framework in which simulated scientific phenomena are mapped onto the physical space of classrooms. Students monitor and control the local state of the simulation through distributed media positioned around the room, gathering and aggregating evidence to solve problems or answer questions related to those phenomena. Embedded phenomena are persistent, running continuously over weeks and months, creating information channels that are temporally and physically interleaved with, but asynchronous with respect to, the regular flow of instruction. In this paper, we describe the motivations for the framework, describe classroom experiences with three embedded phenomena in the domains of seismology, insect ecology, and astronomy, and situate embedded phenomena within the context of human-computer interaction research in co-located group interfaces and learning technologies.

As mentioned in the abstract, the paper reports on three different projects. In each, simple tablet computers act as windows into another world. Their placement in the classroom matters. For example, the solar system project, HelioRoom, has the tablets positioned so that the centre of the classroom becomes the sun, and planets orbit around it in a proportionally correct small scale. As the planets orbit around, they appear in the tablet windows at exactly the time they would had they actually been travelling around the entire room. This makes the digital information location-dependent, and this is what makes it an instance of augmented reality.
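As a rough sketch of that location-dependent idea (my own toy version with invented parameters, not the paper's implementation): a planet gets drawn on a tablet only while its orbital angle falls inside the angular span that tablet's window covers on the classroom walls.

```python
def planet_angle(t, period):
    """Orbital angle in degrees (0-360) of a planet at time t,
    given its (scaled-down) orbital period, in the same time units."""
    return (360.0 * t / period) % 360.0

def visible_in_window(angle, window_center, window_halfwidth):
    """True if the planet's angle falls inside a tablet 'window'
    spanning window_center +/- window_halfwidth degrees around the
    room. The modular arithmetic handles wraparound at 0/360."""
    diff = (angle - window_center + 180.0) % 360.0 - 180.0
    return abs(diff) <= window_halfwidth

# Hypothetical setup: a tablet covering 10 degrees of wall centred
# at 90 degrees, and a planet with a 60-second scaled orbital period.
angle = planet_angle(t=15.0, period=60.0)   # quarter orbit = 90 degrees
print(visible_in_window(angle, window_center=90.0, window_halfwidth=5.0))  # True
```

Each tablet just needs to know where it sits in the room; the shared simulation clock does the rest, which is part of what makes the approach so affordable.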

One of the things that struck me about this use of technology in the classroom is how easily the teacher could continue working how he or she always has. I remember another educational games author pointing out that we can't bring all kinds of new and exciting technology to the classroom and expect teachers to be able to learn how to teach in a whole new way as well as learn the new technology. Instead, we need to first bring technology that supports the way the classroom already works, and in the future begin slowly transitioning to new ways of teaching. If you look at the pictures included in the paper, you'll see students working on charts, in groups, with teacher direction -- heck, you'll even see those traditional Styrofoam model planets hanging from the ceiling! Everything teachers did before they still do; they just have a new way to visualize things in a spatially and temporally aware way.

I'd really like to see more projects that use simple technology like this in education. Sure, it'll be great when we all have our own augmented reality glasses and can recreate detailed simulations right in front of our eyes, but those days are a long way away. Let's use what we have now to create engaging learning environments without having to drastically shift our way of teaching quite yet.

Thursday, October 7, 2010

The Grace Hopper Celebration of Women in Computing had two special technical tracks added to the program this year: open source and human-computer interaction. While I was definitely happy to see the open source track, it was the HCI talks that really got me excited. I'm just getting into HCI myself, choosing it as one of my topics for my PhD comprehensive exams and submitting my first CHI paper. There was so much to learn from a variety of great speakers!

Friday, October 1, 2010

When I tell someone about the Grace Hopper Celebration of Women in Computing, I start by explaining the dance parties. I tell them, “You wouldn’t think that an all-female dance would be fun… but you’d be wrong. There’s nothing like dancing with hundreds of technical women who let loose because there’s nobody around to feel stupid in front of.”