Category: Human-Computer Interaction (HCI/CHI)

A couple of weeks ago I had the good fortune of sitting in on a lecture of a scientific visualization class* at Tufts at which Ben Fry, creator of many great works of visualization (some that could even be called information art) as well as the visualization toolkit Processing, was guest speaking. The talk was great, spanning lots of work and interesting commentary.

Some notes:

Ben showed quite a bit of his previous work – some of it would be familiar to readers of his book, Visualizing Data.

Showed off some of his work that has appeared in movies, highlighting the fact that he is asked to add rows of standard grey computer buttons to his work because it doesn’t look “real” otherwise.

Talked about some of his experience teaching classes, particularly the challenges of classes with mixtures of CS students and artists. Making the CS students do more artsy projects and the artists do more interactive, technical work can be interesting. He showed off some examples of student work. (One cool student project asked a set of Nobel laureates what type of pets they had. Quite a few found time to respond, and the results are here.)

The coolest demos were of some of the work he'd done for Oblong Industries (not a lot of information online right now; here's one CNET article). They have a working Minority Report-style gesture interface that allows one to control a computer with hand movements. Paired with the right interface, this looks to make light work of navigating through vast amounts of multidimensional data. Ben showed some videos, along with a live demo (running on his MacBook Pro without the fancy hardware, it was still really cool).

* I’d asked for a class like this to be offered several times while I was still working on my degree at Tufts, but to no avail. Of course it’s offered right after I graduate!

I noticed this morning that Google Finance has a new stock screener feature that lets you choose stocks with attributes in a certain range by way of an interactive sparkline. These are miniature graphs that go inline with text. In this case the graph is a histogram that indicates how much of the stock market falls into each part of the range, giving one a quick preview of how inclusive their search parameters are.
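The preview behind such a control is just per-bin counts over the filter range. Here's a minimal sketch in Python of the binning step (the function name and binning scheme are my own, not anything Google has published):

```python
def range_histogram(values, lo, hi, bins=10):
    """Count how many values fall into each bin of [lo, hi),
    previewing how inclusive a range filter would be."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        if lo <= v < hi:          # values outside the filter are excluded
            counts[int((v - lo) / width)] += 1
    return counts
```

Rendering those counts as a tiny inline bar chart gives the sparkline-style preview.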

Tufts Health Plan (no relation to the school) made a poor design choice for guiding people through multistep processes on their website. The convention for wizards has been left-to-right just about forever: the button to continue should be on the right, and back/cancel on the left. Combine that with the same mapping for back and forward in web browsers, and I’m pretty sure the mental model most users have of this activity is left to right. So much for stimulus-response compatibility here:

I’ve been looking for a new job of late, so I’ve been looking at a lot of company websites to get a feel for the company. I know the saying is you can’t tell a book by its cover (even though a cover can catch your eye and make you buy it anyway), but can you tell much about a company by its website?

I like to think you can.

The following factors tend to weigh heavily against a company in my mind (especially if they create web applications):

Site looks ugly or broken under Firefox (I know most people still use IE, but come on. This also suggests they may be writing IE-only webapps)

Messy javascript

Poor HTML: no stylesheets, lots of inline CSS

Bad information design

I understand that a lot of these companies probably outsource their web presence, but I would think that if there were some talented designers at a company, one of them would raise some concerns about or fix the issues above, particularly poor information design.

Here’s a case study. One of my recruiters told me to take a look at Outstart which appears to be in the business of information delivery (e-learning etc) via the web. So it was especially alarming that they didn’t seem to be able to deliver information about their product line very effectively. Take a look at the screenshot below (taken from here). I’m willing to bet that a large percentage of visitors to the page try to click on the product names (in blue, bolded) next to the short descriptions before figuring out that doesn’t work and using the menu at left. Talk about misleading information scent. (Click the image for a larger version)

Other strikes here (besides the different order of the products in the page and in the menu):

Parts of the site don’t render well in Firefox (like the country drop-down box). I don’t want to work on an IE-only app again. Ever.

The URL is ugly and complex. It contains at least 100 characters, many of them hexadecimal. They break down into three coordinates on the menu to decide which page to show. But each id is 32 hex characters, which means there are 16^32 (roughly 10^38) possible menus, the same number of possible items per menu, and the same number of base menus. I guess they’re thinking about growth, or adding the entire internet to their menu structure. At over 10^115 combinations, they could have a page for every atom in the universe, with plenty to spare. Way to plan ahead for growth.

The HTML is broken. There’s a chunk of CSS before the html tag. No Doctype.

I’d expect more from a company that builds web apps to deliver e-learning, wouldn’t you?

A colleague pointed out this open source project that allows users to visualize the mouse movements of users as a heatmap: the hotter the area, the more the mouse has been used there. It’s a neat idea, a well-executed visualization, and it’s great that the code is shared, but I wonder about the utility of the resulting data.
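Under the hood, a mouse-movement heatmap is just binned 2-D counts of sampled cursor positions. A minimal sketch of the aggregation step (the names and cell size are my own; the linked project's implementation may well differ):

```python
def mouse_heatmap(samples, width, height, cell=10):
    """Bin sampled (x, y) cursor positions into a grid of counts;
    higher counts render as 'hotter' cells."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in samples:
        if 0 <= x < width and 0 <= y < height:   # ignore off-page samples
            grid[y // cell][x // cell] += 1
    return grid
```

The rendering step then maps each count to a color ramp and composites the grid over a screenshot of the page.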

Heatmaps are usually used in this context with eye-tracking data; that is, they show where users look on the page, the movement patterns between sections, and how long they spend in each. This data is useful for understanding whether the layout makes sense, and where to place things so the user will see them.

I don’t think that cursor position is a good proxy for attention while using any application on a computer. I don’t constantly mouse over things on a page, particularly when reading some amount of content. Maybe it’s the case that I do it unconsciously enough that there is some real meaning in the data, or maybe there are groups of users who do this all the time?

My colleague pointed out one good use for this: identifying elements of the page that misleadingly afford interaction. Are people clicking on stuff that isn’t clickable? Otherwise I fear people will read too much into the heatmaps, and would be better off with just a clickstream around the page. I wonder if even a clickstream provides solid enough data upon which to draw conclusions with any degree of certainty.

There’s a commercial offering of a similar capability called ClickTale. They provide video simulations of users’ mouse interactions with the page. From the limited information available, it doesn’t look like they have visualization tools, and who has time to watch all that video?

SIGGRAPH 2006 is in Boston, and they had a free public reception this afternoon (which I saw in one of the free weekly papers, but was unable to confirm through official channels). I headed over there with Marty, and fortunately the paper was right. We were able to check out the emerging technology area (their page is here and a video preview is here). There were many really neat applications, including lots of what my advisor at Tufts would call reality-based interfaces (RBI), where the user interacts with a computer application by manipulating real physical objects. There were many tabletop devices, one where multiple users could collaborate to create “music” (more like sound) by manipulating a large number of objects on a projector table: turning objects to make them louder and softer, and moving them around to change their interactions.

I think my favorite demo that I actually got to use was the Forehead Retina System, because it made me really able to sense objects through physical sensations on my forehead. The effect really has to be experienced to be believed. It worked really well for linear objects, where it was easy to feel a line moving back and forth on my forehead, but not so well for a round object, where the effect just felt mushy.

We also got to see the Art Gallery, where there were some cool works, including an exhibit where you could interact with butterflies inside a mirror.

I’m not sure why this amazed me so much this morning. I got to work, unlocked my computer, and suddenly my Gmail notifier told me I had mail; only that mail was some spam I hadn’t bothered reading on the way out the door. It didn’t just arrive, so the notifier must have detected one of the following: the screensaver turning off, the machine being unlocked, or keyboard/mouse activity, and then resumed checking.

It seems obvious, thinking about it now, that it’s pointless to tax an infrastructure by checking for updates that the user won’t even see. I guess I just never thought about it all that hard before.
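I can only guess at the implementation, but the idle-aware behavior might look something like this sketch (the callback names and idle threshold are entirely my invention):

```python
def poll_once(check_mail, seconds_idle, idle_threshold=300):
    """Run one polling cycle: skip the mail check entirely if the user
    has been idle (screensaver on, machine locked) past the threshold.
    Returns True if a check was performed, False if it was skipped."""
    if seconds_idle < idle_threshold:
        check_mail()
        return True
    return False  # skipped; checking resumes once activity is detected
```

A real notifier would run this on a timer and feed it the OS's idle counter, so an unlock or mouse movement makes the very next cycle fire, which matches what I observed.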

I know, based on what I’ve read about the effects of interruptions on productivity, that it’s counterproductive for me (or anyone) to run any sort of mail notifier; indeed, I’ve hated the way my organization has tended to use email as a lame substitute for IM. The problem is I’ve gotten into the bad habit of compulsively checking mail anyway, which puts me in the browser, and then there I am checking my feeds and Digg. Trouble. So for me, right now, it turns out to be better this way.

I think it’s interesting that calendaring tools can understand the definition of complicated event recurrence rules, and can exchange those definitions in a powerful standard format, yet the user interfaces on the tools I have used (iCal, Google Calendar) don’t actually support creating events with anything more than the simplest recurrence relationships. It goes to show that the bottleneck in many systems is still the interface between the human and the computer.

The other day I received my first $50 parking ticket of the street cleaning season. The rules on my street (even-side cleaning on the second and fourth Wednesday, odd-side cleaning on the first and third Tuesday) seem simple enough to follow, but I still think Somerville’s chief revenue stream must be parking violations.

I thought perhaps I could set the events in iCal, upload that file to Google Calendar, and get SMS reminders. It turns out one can’t specify a recurring event like “second and fourth Wednesday” in either iCal or Google Calendar; iCal’s interface allows one and only one “nth day of the month” recurrence. This made me wonder: is this stuff even possible in the iCalendar format?

So I checked the iCalendar spec in RFC2445 and sure enough, it supports powerful enough recurrence rules to handle any conceivable event schedule. Here’s an example that will handle the odd side street cleaning, April through November of every year:

RRULE:FREQ=MONTHLY;INTERVAL=1;BYDAY=1TU,3TU;BYMONTH=4,5,6,7,8,9,10,11

I edited the single first-Tuesday rule generated by iCal in a text editor to arrive at this iCal file. Imported back into iCal, it rendered the recurrences perfectly. Google Calendar also reads the file well, even saying under details “Every first and third Tuesday”.
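As a sanity check, here's what that rule should expand to, computed with Python's standard calendar module (the function name is mine; a real client would use a full RRULE engine):

```python
import calendar
from datetime import date

def odd_side_cleaning_days(year):
    """Expand FREQ=MONTHLY;BYDAY=1TU,3TU;BYMONTH=4..11 by hand:
    the first and third Tuesdays, April through November."""
    days = []
    for month in range(4, 12):  # BYMONTH=4,...,11
        tuesdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                    if d.weekday() == calendar.TUESDAY and d.month == month]
        days.extend([tuesdays[0], tuesdays[2]])  # BYDAY=1TU,3TU
    return days
```

For 2008, for instance, the first two dates generated are April 1 and April 15, which agrees with what iCal rendered from the edited file.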

It’s a shame there’s all this underlying power, yet the user interface exposes only a small sliver of it. The 80/20 rule probably dictates that an organization won’t put in the resources to develop and support a really complicated UI for creating event recurrences, but I would think some user-facing tool out there would support it. Are there any?

Are radio buttons going out of style? When I was using TurboTax recently, I saw several cases where two or more logically mutually exclusive choices were represented by checkboxes rather than radio buttons. Here’s one of them:

Although, as the expression goes, one should never attribute to malice what can be explained by plain incompetence, it does seem that the designers at Intuit must have made a considered choice in not using radio buttons anywhere in TurboTax.

Is there a reason for that? I wonder whether “today’s youth” even grow up having used a radio with buttons like that; I suppose you could get through life using an iPod and never encounter the controls of an old-fashioned radio. I think even radios themselves muddy the waters here: I recall the original radio in the 1987 Camry I used to drive had four or five station preset buttons, but you could also use them in primitive chords, pressing two at the same time to select the virtual button between them.

UI affordances have tended to mirror the physical world where possible, but perhaps on this front the world is moving faster than the interfaces.

My Applied Design of Software User Interfaces class is lots of fun, but it turns out to be a lot more work than I had anticipated (no programming or math, this can’t take much time!). I’ve spent hours learning Fireworks and Illustrator, and I’ve gotten much better at mocking up user interfaces in the process. I’ve also found that staring at the screen for hours tweaking pictures is a different kind of eye strain than programming, presumably because you have to focus on a slightly bigger picture than one line of code at a time.

We’ve had three weekly assignments to mock up user interfaces (including preliminary conceptual designs, affinity diagrams, and user profiles), given in the form of projects for our mock design companies. The deliverables must be color printed and bound, which makes one take an extra level of pride in the work; it’s the same pride that keeps one up way too late making final design fixes.

I thought I’d share one of my designs from the last homework assignment: it was to mock up a design for a handheld, touchscreen computer (tablet) that would assist users in their tour of an art museum. The screen at the right (click to enlarge) uses the device’s ability to know fairly precisely where it is to present only the art nearest the user, from which the user can select a piece for more information. There are some things I don’t love about the design now that I’ve had more time to reflect, but I still like it. I learned a lot about Adobe Illustrator in creating the icons for the nav buttons on the left (even though they still look amateurish). I also learned that Illustrator can’t do angular gradients like the one I had in mind for the radar button, so I had to resort to the GIMP for that.