Since arriving at UW almost four years ago, I’ve been involved in student politics. I’ve not talked about it here as most of it is rather prosaic and, for those not at UW, largely irrelevant. Every now and then, though, something comes up which might be of wider interest.

In early May, GPSS, the Graduate and Professional Student Senate, ran its second Science and Policy Summit, an event for academics, policy-makers, and the general public that looks at the interactions between scientific research and policy development. This year, we ran two panels, one on the impact of bioinformatics on preventive medicine, and the other on the role of science in public political discussion, focusing on the US Presidential debates. As well as the panels, we also ran a series of short talks, inspired by the TED model.

There were about 10 talks, 10 minutes each, delivered by a mix of graduate students, post-docs, and faculty, and all of very high quality. Here are a few topics to whet your interest:

Couch safety, various failed attempts at regulation, and the fate of your cat

The ups and downs of developing tourism as a means of restoring communities and ecosystems in SE Asia

A passionate argument for the necessity of cosmology research

The woeful state of healthcare in the US (not health funding, but healthcare itself), including some rather damning statistics and factoids, presented as humorously as possible

We recorded all of the talks, and they’ve now been posted to YouTube. They’re all rather interesting, and worth at least a look, even if the film quality isn’t all I’d hoped when I filmed them.

The idea is that if someone is warned of an imminent seizure in advance, say 20 minutes out, they can remove themselves from unsafe or embarrassing situations, take other precautions (lying down, perhaps), or take fast-acting drugs that might stop it from happening. This is a big deal, as it helps the 30% or so of epilepsy sufferers for whom conventional drug treatment doesn’t work. It may also allow some of those receiving drugs to come off them, reduce their dose, or shift to less effective drugs with fewer side effects. It might also help reduce the number of deaths from epilepsy-related accidents – 50,000 annually in the US, apparently.

The technology’s actually fairly simple. There are three main parts, none of which appears to be particularly magical.

an array of electrodes implanted on the surface of the brain, beneath the skull, but outside the dura mater, and thus not in contact with or penetrating the brain itself

a set of signal processing and machine learning algorithms that classify brain patterns into risk levels based on training data previously collected from that individual and from the public in general

a mobile device or phone app that warns the patient of periods of increased risk

It’s in trials at the moment in Australia, and is apparently performing well, with no known associated adverse events.
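Out of curiosity, here’s a minimal sketch of what that classification step might look like. To be clear, this is my own guess at the shape of the idea, not the actual algorithms – the energy-threshold approach and all the names here are hypothetical:

```python
# Toy sketch of per-patient risk classification: learn thresholds from
# labelled training windows of electrode signal energy, then map new
# windows to risk levels. (Invented for illustration; the real system
# presumably uses much richer features and proper machine learning.)

def window_energy(samples):
    """Mean squared amplitude of one electrode window."""
    return sum(s * s for s in samples) / len(samples)

def fit_thresholds(training_windows):
    """training_windows: list of (samples, was_preseizure) pairs.
    Returns (low, high) energy cut-offs splitting risk into three bands."""
    baseline = [window_energy(s) for s, pre in training_windows if not pre]
    preseizure = [window_energy(s) for s, pre in training_windows if pre]
    low = max(baseline)     # above every normal window: at least moderate
    high = min(preseizure)  # at the quietest pre-seizure window: high
    return low, high

def risk_level(samples, thresholds):
    """Classify one new window against the learned cut-offs."""
    low, high = thresholds
    energy = window_energy(samples)
    if energy >= high:
        return "high"
    if energy > low:
        return "moderate"
    return "low"
```

The core loop – learn patient-specific patterns from training data, then continuously map new windows onto risk levels for the warning app – is the part that matches the description above.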

Apparently, this all started out with Jaideep’s PhD research into the sensorimotor function of moths – he basically designed chips and implants small enough to put in a moth, then studied the different nerve signals associated with its wings as it flew. They also trialed the epilepsy detection technology on dogs, which also suffer from epilepsy. Unfortunately, I was unable to find copies of the cute pictures he showed in the talk.

If you’re interested in hearing more, there’s a 5 minute video article from ABC News in Australia talking about it. It’s formatted a bit weird, so you might need to download it and switch to the second audio track in VLC.

Edit: Apparently the electrodes are implanted beneath the dura mater, but outside the arachnoid mater. So, between the second and third membranes that encase the brain.

I was completely unaware of this, but apparently cases of academic misconduct, as evidenced by the retraction of papers from journals and other publication venues, have been on the rise.

According to the article, retractions from journals in the PubMed database have increased by a factor of 60 over ten years, from 3 in 2000 to 180 in 2009. That’s insane!
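For those who like their growth rates compounded, a factor of 60 over the nine years from 2000 to 2009 works out to roughly 58% more retractions each year:

```python
# Headline numbers from the article: 3 retractions in 2000, 180 in 2009.
start, end, years = 3, 180, 9
factor = end / start            # overall growth: 60x
annual = factor ** (1 / years)  # equivalent compound annual growth rate
print(factor)                   # 60.0
print(round(annual, 2))         # 1.58, i.e. ~58% per year
```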

What’s going on, then? I suspect one or more of the following:

Worsening of the academic rat-race – the ever-increasing focus on publishing metrics in academia pressures researchers to publish, ideally in high-impact journals. Some may be willing to make up data in order to do so.

The rush to compete – Given the prestige attached to publishing first and the role of this prestige in securing grant funding, researchers may be taking shortcuts, overlooking shortcomings in their study designs, or failing to spend enough time verifying their results and data.

Commercial involvement – I can’t cite numbers, but my impression is that commercial research funding has increased over the last decade or so, particularly in high-stakes fields such as pharmaceuticals. Commercial funding is associated with bias and poor research practice.

Increased detection – It seems likely that today’s increased reliance on information technologies and shared repositories of data and publications would make it easier to detect fraudulent papers. Similarly, since communication is much easier today than it was even 10 years ago, it may be easier for editors to unearth patterns of fraudulent work.

One caveat: this result derives from PubMed, which primarily includes medical and pharmaceutical research, as well as some auxiliary technology and basic science. Does this pattern of misconduct apply in other fields, or is it particular to medicine?

Improved review processes are necessary, but it’s not clear how quickly change will come. Problems with peer review have been acknowledged for more than 20 years, with a report from 1990 showing that only 8% of members of the Scientific Research Society considered it effective as is. Despite this, in most venues, peer review functions the same way it always has.

There may be some movement, however. CHI, for example, includes the alt.chi track in which research is reviewed in a public forum before selection by a jury, which seems to offer a good compromise between open and free criticism, and peer-driven moderation. There’s also a special conference coming up entitled “Evaluating Research and Peer Review – Problems and Possible Solutions” – it was the Call for Papers for this that got me writing this post.

From my perspective, an ideal research review system would at least:

Expose all research data and methodology to unlimited, non-anonymous, public scrutiny. Special rules might be employed to protect commercially sensitive material, but there needs to be a balance.

Allow meta-moderation. That is, allow the critique of critiques. To do this, reviewers need to have persistent identities, and signifiers such as the credentials and review history of each user need to be available.

Integrate review work into the research contribution of academics. As it is, peer review work is primarily voluntary, and the level of commitment of reviewers is thus presumably highly variable.
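To make the meta-moderation point concrete, here’s a hypothetical sketch of the data model I have in mind: persistent reviewer identities with a visible history, and reviews that can themselves be the target of further review. Everything here (class names, fields, methods) is invented for illustration:

```python
# Sketch of a review system supporting meta-moderation: a Review can
# target either a paper or another Review, and every review is tied to
# a persistent Reviewer identity with an accumulating history.
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    credentials: str
    review_ids: list = field(default_factory=list)  # persistent history

@dataclass
class Review:
    id: int
    reviewer: Reviewer
    target_id: int  # a paper id, or the id of another Review
    body: str

class ReviewSystem:
    def __init__(self):
        self.reviews = {}
        self._next_id = 1

    def submit(self, reviewer, target_id, body):
        review = Review(self._next_id, reviewer, target_id, body)
        self.reviews[review.id] = review
        reviewer.review_ids.append(review.id)  # history stays attached
        self._next_id += 1
        return review

    def critiques_of(self, review_id):
        """Meta-moderation: all reviews targeting the given review."""
        return [r for r in self.reviews.values() if r.target_id == review_id]
```

The point of the structure is that a reader evaluating a critique can follow the chain both ways: down into critiques of that critique, and sideways into the reviewer’s credentials and track record.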

What else should a review system incorporate? How could such a system fail? Why might it not be adopted?

Update 2012-05-09: It’s not clear whether the aforementioned study relied on the same set of journals each year, or whether they used the full PubMed database each year. It’s probable that the PubMed mix has changed over the decade; for example, the NIH’s public access policy requiring publicly funded research be placed into PubMed was trialed in 2005, and made mandatory in 2008.

I spent Saturday at the HCI for Peace workshop representing the Voices from the Rwandan Tribunal project. It was fairly informal, with only 10 participants, which made it easy for everyone to participate in the discussion. Several participants presented projects they’ve worked on, including:

Lahiru Jayatilaka, a Sri Lankan PhD student from Stanford, who presented his work on improving land mine detection systems. His system tracks the detector tip and lets the operator mark detection points, which are then displayed along with the detector’s path, making it easier to determine the shape of a detected object. In trials with the US Army, he also found that the tool significantly aided training by making it easier for trainers to see the patterns used by students. He’s looking for funding and collaborators to help him bring the tool to maturity so he can spread it to NGOs working in land mine detection and removal around the world.

Janak Bhimani, a TV director and producer pursuing a PhD at the Keio Media Design lab, who presented a documentary he produced collaboratively with a small group of online volunteers about the aftermath of the Tohoku earthquake last year in Japan, called “Lenses + Landscapes”. Based on his experience with it, he’s become interested in tools for greater online collaboration in documentary making and, in particular, in documentaries that evolve over time; what he calls the ‘growing documentary’.

John Thomas, a CHI veteran from IBM Research, who presented his work on building a library of patterns for socio-technical systems that can avoid, deescalate, or assist in the resolution of conflicts. These focused more on a personal level than a societal one, but the general ideas hold at larger scales, and furthermore, large conflicts often emerge from small disagreements. He ran through several examples; here are a couple that struck me:

Who speaks for Wolf? – Based on a Native American story, this pattern suggests that in any decision making activity where one or more stake-holders are absent, it is important to identify that fact, and determine whether someone else at the meeting is able to speak with sufficient authority and knowledge on behalf of that stake-holder. By doing this, misunderstandings and conflicts can be avoided.

The Rule of Six – Whenever one is forced to make an assumption or interpretation because of limited or biased knowledge, one should attempt to come up with at least 5 other possible explanations before accepting the first (and probably easiest) one. This is particularly true with regard to negative assumptions, and is basically a method for giving the benefit of the doubt.

Evangelos Kapros, a Greek PhD student at the University of Dublin’s Trinity College, who presented and discussed challenges in information visualization and data management with regard to understanding flows of immigration and other critical demographic processes that sometimes lead to conflict.

Also in attendance were Juan Pablo Hourcade, an Assistant Professor at the University of Iowa and organizer of the event; Lisa Nathan, an Assistant Professor at the University of British Columbia, co-PI on the Rwandan project, and a former student at UW; Daniela Busse, from Samsung Research; Daisy Yoo, a student and colleague of mine at UW also working on the Rwandan project; and Kelsey Huebner, an undergraduate assisting Juan Pablo with running the workshop. Neema Moraveji, director of the Calming Technology Lab at Stanford, was not present, but gave a short presentation on his work in ‘calming technology’ via Skype.

As well as individual project presentations, we also discussed the place of HCI in peace-making, peace-keeping, and harmony. A number of points and questions were salient:

The complexity of the term ‘peace’ is challenging, and requires much thought. We seemed to be conceptualizing peace as more than just the absence of war, but as a general promotion of peacefulness, including the avoidance of conflict, the promotion of harmony and calmness in life, and efforts to restore peace and order after events such as natural disasters.

The term peace may be over-broad to the point of being meaningless – by attempting to create a movement of HCI for Peace, are we mirroring the beauty queen who naively says she wants to bring about World Peace with her reign?

What should the research agenda of ‘HCI for Peace’ look like? Suggested approaches included creating tools like Ushahidi that aid others in peace-seeking efforts, working in the field to create new technical solutions that directly foster peace, and observing and understanding the use of technology by others in working for peace.

Who are logical ‘allies’ in this work – what other academics and disciplines should we look to for collaboration?

In the time available, it was impossible to come to any detailed consensus on these issues, and it was generally agreed that further thought and development would be necessary. Interactions magazine has offered us a spot as the cover article in an issue later this year, and we’re hoping that this will give us an opportunity to address these concerns in more depth.

All up, a fascinating and rewarding way to spend a day. Not to mention an excellent lunch and tasty pizza and conversation at the end of the day!

I’m feeling pretty jazzed at the moment about patronage as a funding model for creative endeavours.

It’s a pretty simple idea: instead of today’s dominant practice, where creative works are funded and owned by someone expecting to make money back from advertising or sales through a limited distribution channel, under patronage, creators fund their work by appealing directly to potential fans, asking them to put up funds in advance in return for various rewards and input into the work. Historically, patronage was widespread, and meant that artists, musicians, and philosophers gathered in the courts of sympathetic nobles to seek funding, lending their creativity to the glory of kings and emperors. In return, nobles gained prestige as patrons of the arts as well as substantial influence over the works created.

Today’s patronage models are a little different, in that they rely on a much broader base of patrons. Instead of seeking out extremely wealthy individuals to fund entire works, creators can appeal to a worldwide audience through the internet, collecting many small contributions directly from the people who care most about their work. This is a good thing for creators and patrons alike:

Patrons are more likely to receive satisfying entertainment, as their preferences factor directly into the creator’s decision making process.

Creators have a guaranteed audience of fans in the form of their patrons. Since creators often labor under artistic motivations, this can mean a lot – it’s easier to feel confident in one’s creations if you know that others like the general idea. In other words, it’s easier to take risks.

There is greater opportunity for ‘pure’ creative vision as middlemen who muddy the waters by pandering to advertisers and the lowest common denominator are eliminated.

Creators are encouraged to think about their works upfront, and their ideas are subject to initial scrutiny that can validate and refine them. There’s less chance of groupthink, and a more articulate design process.

Niche genres can thrive, particularly if they’re willing to start out small and run lean. Projects that appeal to far fewer fans can be funded.

The public domain is richer. Since funding is provided up front by patrons, there’s less reason for creators not to put their work in the public domain, enriching us all. In particular, it makes it easier for non-fans to try out things they wouldn’t normally buy, potentially converting them into fans.

It might be that patronage isn’t the best funding model for all creative works, but here’s a few examples where it’s been successful:

Kickstarter is a thriving internet example. Through it, I’ve contributed to open source recordings of classical music, RPG-themed short films, and comics. Their model allows creators to make proposals through video and written presentations, to offer rewards for patrons at different levels, and to fund projects only if a sufficient amount is raised. Projects range from a few hundred dollars to several million, spanning computer games to music, crafts to special events, and gadgets to fine art.
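That last point – collecting pledges only if the goal is met – is the all-or-nothing rule, and it’s simple enough to sketch. This is a toy version; the real mechanics involve payment authorizations, deadlines, and fees:

```python
def settle(goal, pledges):
    """All-or-nothing settlement. pledges: list of (patron, amount) pairs.
    Returns the amounts actually charged, per patron."""
    total = sum(amount for _, amount in pledges)
    if total >= goal:
        # Goal met: every pledge is collected.
        return {patron: amount for patron, amount in pledges}
    # Goal missed: nobody is charged and the creator gets nothing.
    return {}
```

The nice property of this rule is that patrons risk nothing on projects that can’t attract a viable audience, and creators aren’t obliged to deliver on a budget that fell short.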

Most modern orchestras run on a hybrid patronage / ticket fee model. The Seattle Symphony, for example, runs an annual budget of $24m on about half ticket sales, half patronage. Patrons get additional benefits such as social events, access to musicians, and lectures, as well as a certain level of prestige (much as noble patrons once did).

Wolfgang Baur’s Open Design project does tabletop RPG design on a patronage model, allowing patrons to participate in the design process, democratizing not only the funding, but also the creativity itself.

There are many more – these are just the few I’ve paid close attention to.

As traditional publishing industries that rely on firmly controlled distribution of hard-copy works continue to erode, it’ll be very interesting to see how patronage evolves. The fact that big box book stores are dying doesn’t mean people don’t want to read, and the collapse of newspapers has little to do with the public’s interest in the news. It’s just that the old business models are increasingly being undermined. I don’t foresee corporate creative endeavours going away, but I do expect them to become less dominant in the long term, and patronage seems a likely means of that happening.

Questions for comments:

If patronage comes to dominate creative endeavour, what negative implications might there be?

Are there any creative domains in which patronage won’t work?

Is it possible to fund really big projects (AAA game titles, movies, cathedrals) with patronage?

Interesting piece about the futurist implications of promising new technologies on the horizon becoming corporate-controlled walled gardens, much as everything is now. It’s clear that some level of profit-driven development is good, as it spurs innovation, but it’s also clear that too much of it stifles innovation instead. To me, the iPhone seems to be swinging toward the stifling end of the spectrum.

I have an iPhone, and I like it, but in some ways I regret buying it – had I known about the imminent release of Android phones back in Sept last year, I would have waited. Aside from the overly optimistic prospect of me writing apps for Android, owning the iPhone makes me feel slightly dirty, like I’ve just been sent a particularly glossy membership card to the NZ National party or some other vaguely nefarious organization. Despite their clear skill at aesthetics and design, Apple just seem sinister to me. It must be all the fanboys. Organizations that have and encourage a cult-like following always disturb me.

I say that the iPhone is not the future, but what I mean by that is that the iPhone is not representative of a future I want to see. The future is not just a retail opportunity and a finer world is not built entirely of consumer goods. I’m not keen on a future where the major technologies of environmental and social mediation are owned and controlled by corporate ideology. As AR creeps closer and closer, the question of who gets to plant a flag in the liminal space of a technologically re-mediated environment becomes a more pressing concern – with new horizons there are always new forms of colonialism.

Interesting comments and discussions. Here’s mine:

Let’s assume we’re talking about the actions that a certain group or subculture can take to adapt these future-unfriendly devices for themselves – aboniks is totally right that we can’t somehow convince the public at large that the abridgement of rights they are barely aware of in the first place is reason enough to give up their shiny toys and stop responding emotionally to well-crafted marketing. That’s just human nature, and immutable, at least for now.

Granted, the principle of openness could be crafted into a compelling message that might slowly challenge these closed cultures, but that’s an eternal vigilance problem – we’d have to have the resources to push our message on a similar scale, push it hard, and keep pushing it. If we were really capable manipulators, we could try dressing it up in religious clothes, but again, that’s not something a small group of hackers can easily do (though I’m always for starting a cult of technology).

This is all just paraphrasing of the old maxim “show, don’t tell”. Open source and future friendly systems and devices need to beat closed systems at their own game. We have to design systems, devices, whatever it is we design to be more usable, more focused, more elegant, more aesthetically pleasing, and with not necessarily more features, but better and more applicable features.

So, what can we do? Design stuff. Make stuff. Publicize everything we do. Help each other make stuff. Get past ego – it’s not about designing things to make one person or one subgroup look awesome, it’s about designing things to help us all move forward. Hack things. Publish our hacks. Design our creations to work together. Establish open de facto standards before the big corporates come in and foist closed ones upon us. Put every good idea in the commons, and make that commons so visible that patent inspectors can’t help but notice it. Encourage our children.

Some of that’s really practical, some of that’s philosophical. I think both are necessary – ideology without designs is just pretentious pap; design without ideology is all too easily co-opted by the greedy.

Edit: Seems that, two years ago, when I posted this, I left out the link to the original article. How stupid of me.

Imagine a search engine that, instead of just doing text matching, attempts to parse your statement into questions it can answer, then provides you with as many of those answers as it can. Imagine a search engine that can deal with numerical relationships and analysis. Imagine a search engine that’s tailored towards returning facts and knowledge instead of websites.

Next, imagine if you had analytical tools of this nature at your fingertips at all times, and were able to project and share them on surfaces using some form of augmented reality. Finally, imagine what this could do to intelligent argument, discussion, design, and political discourse.
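As a toy illustration of the difference between matching and answering, here’s a sketch that tries to interpret a query as a known fact or as arithmetic before giving up. The fact table and all the names are, of course, made up:

```python
# Toy "answer, don't match" dispatcher: try a factual lookup, then try
# to evaluate the query as arithmetic, and only then fall back to
# ordinary text search (represented here by returning None).
import ast
import operator

FACTS = {"boiling point of water": "100 °C at 1 atm"}  # stand-in knowledge base

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arith(node):
    """Safely evaluate a parsed arithmetic expression tree."""
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_arith(node.left), eval_arith(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not arithmetic")

def answer(query):
    q = query.strip().lower()
    if q in FACTS:                # direct factual answer
        return FACTS[q]
    try:                          # numerical relationship
        return eval_arith(ast.parse(q, mode="eval").body)
    except (ValueError, SyntaxError):
        return None               # fall back to ordinary text search
```

A real system would parse natural language into structured queries over a vast knowledge base; the point here is just the dispatch order – answer if you can, search if you must.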

Check out this rather impressive imagining of virtual world construction in a fully tangible VR / AR environment.

The interface used is quite cool and inspirational, but there’s a lot of funky interface videos out there, and the basic idea of creating worlds from within isn’t new; Snow Crash has this sort of thing, and, to some extent, it’s a logical extension and extrapolation of Wayne Piekarski’s PhD work in using AR to build 3D models on the world around us. That said, it’s a very polished imagining of this idea, and well worth the watch.

What I really liked, though, is the emotional context in which this is placed – the film’s not just a cool interface concept, but rather an example of how virtual worlds and technology might be able to provide emotional support of a sort. Effectively, the protagonist is creating worlds to embody and relive his memories. Once, our memories were limited to shared stories, then writing, then photos, then video – it seems logical that, if 3D environments and simulated experiences could be captured, then these too would be something that we collect, file away for posterity, and maybe share with our friends.

Imagine if, instead of showing wedding photos to friends who couldn’t make it, you could compellingly simulate the experience of being there.

Why do I blog this? I’ve always loved world building, and the idea of being able to easily create and experience worlds excites me. To really be compelling, though, one would need to be able to create believable simulated people and animals to populate the world; as it is, the world in this video seems somewhat lonely.

So, everyone knows what r stands for, right? What about v? Or f(x) and f’(x)? OK. How about x, y, and z?

If you’re not a math geek of some kind, you’re probably not reading anymore, but just in case you are, the point is that each of these letters has a common meaning in a lot of mathematical notation – r is a radius, v some arbitrary vector, f(x) and f’(x) some arbitrary function and its derivative, and x, y, and z coordinates in 3-space.

The problem is that a lot of the time, this isn’t true, and even when it is true, it’s hard to tell exactly _which_ probability or set of coordinates you might be talking about.

Good math books typically get this – they define their notation, and use it consistently. If p means probability in chapter 1, it probably doesn’t mean ‘an arbitrary solution to the dual problem’ in chapter 2, unless it’s been explicitly re-defined. Each symbol should correspond to one particular value or concept at any given time. This makes the text easier and faster to read, and avoids all sorts of nasty confusion.

So, why is it that people presenting mathematical results always assume that you know their notation? If they throw up a complicated expression using a bunch of different letters, why do they assume that you know that r doesn’t actually mean radius (even though it’s shown on a circular diagram), and that, today, we’re using g to refer to probability, not p (except for that slide near the end, because it’s from a different slide set).

You’d think this would only happen in badly prepared and presented seminars. Unfortunately, either you’re wrong, or I have an uncanny ability to attend only seminars that meet that criterion.

So, if you’re ever in a position to present mathematical notation to a bunch of people, please, please, do the following:

Introduce your notation. Tell the audience what each letter means as soon as you start using it.

Don’t change what x means halfway through your talk, unless you really have to. If you’re using x to just mean ‘some arbitrary value’, that’s OK, but tell people that.

Each value should refer to only one thing at a time. This is particularly problematic if you’re working through an algorithm that re-uses the same notation at every step. Is B the initial basis matrix you chose, or the basis matrix at step 3?

If you’re re-introducing some notation you briefly mentioned at the beginning, mention it again.

If your expression expresses some important relationship, verbalize it – read it out. If your expression is really large but still important for your audience to understand, not just accept, break it down and read it out. If you can’t do that, your audience won’t get it.

If you’re just showing algebraic steps, question why you included them in the first place. If you’re not expecting your audience to work through the algebra while you’re talking, leave it out.

Just because you think p always means probability, don’t assume you can get away with not defining it. If a letter has different meanings in different fields, you’re bound to confuse at least one person. Sure, they might be able to work it out from context, but they shouldn’t have to. Besides, p means the probability of what, exactly?

I could go on, but instead, I refer people to Polya’s lovely short rant on the subject in ‘How to Solve It’. There’s a free version online. It’s on page 134.

People seem to forget that the entire point of notation is the economical expression of an idea for the purpose of memory or communication. Furthermore, memory is really just a special case of communication – you’re communicating with your future self. Imagine how confused they’ll be if, in your notes, q means different things without clear distinction. Imagine how confused your audience will be, not having been you in the first place.

This all boils down to this general point about communicating – if you don’t value your idea enough to make sure your audience understands, don’t bother opening your mouth. Play Minesweeper instead.

While waiting for pizza this evening, I read an article by David Allan Grier in IEEE Computer about the ways in which technology has changed entertainment, particularly the theatre, over the last 40 years or so.

In particular, he discusses how automated lighting, sound, and so forth afford a stage manager the opportunity to calibrate the response of the audience by controlling the timing of cues much more closely, much as a live television producer does. What this has meant is that show production, in addition to being a massive organizational exercise, is now a performance unto itself.

Later, he goes on to talk about ways in which producers of other media gauge audience reaction and adapt accordingly – focus groups for TV and movies, golden ears for music, and now, with technology, learning systems based on customer profiling and crowd-sourcing, that can supplement socially driven recommendations such as friends or local record store owners – last.fm being a prominent example.

So inspired, here’s an interesting extension that occurred to me:

What if specialized AI, running locally, could be injected into traditionally mass-produced media like music, TV, or movies to act as a kind of virtual stage manager? It could observe you, the audience, a focus group of one, then tweak the timing, the content, the tone, and even the script of media to better suit your current mood, your tastes, to stimulate you in ways to which you are more sensitive, or even to better fit your available time.