HCI User Advocate

Software makers and users often have conflicting goals - with the makers winning. Yet makers all too often shoot themselves in the foot by distrusting the users - their customers. Or worse, by maltreating them. It is time to get angry about bad and malicious software design. This blog calls software designers on the carpet, giving them credit and shame where they deserve it.

April 1, 2016

Researchers announced an astonishing breakthrough by creating
Quantum User Interfaces (QUIs) which promise to deliver remarkable increases in
user capabilities. The startling idea is to replace millions of
user actions with a single action that accomplishes a full month or
year of work. A simple metaphor that conveys the power of quantum
user interfaces is the well-established global search and replace
command. Using a traditional Graphical User Interface (GUI), a large
database could be updated by manually replacing thousands of occurrences of
“coffee” with “tea”, while a single global search and replace command could make
the change far more rapidly. However, even the best modern GUIs have no way to
speed up more complex tasks such as making many versions of a document for
different audiences. With a QUI, a user could specify several kinds of drinks
and entrees simultaneously, and all versions of the document would exist at
once and be available for any purpose. With a QUI, the user can do in one step
what might take hours to otherwise accomplish. Researchers have come
forward with many powerful examples.
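The global search-and-replace metaphor is easy to make concrete. A minimal sketch in Python - the records below are invented purely for illustration:

```python
# A traditional, sequential "GUI-era" update: each record is visited
# one at a time, so the cost grows linearly with the data size.
records = [
    "coffee, small, $2.50",
    "coffee, large, $3.50",
    "decaf coffee, medium, $3.00",
]

# A single global search-and-replace command updates every occurrence
# in one pass -- the classic speedup that the QUI metaphor builds on.
updated = [r.replace("coffee", "tea") for r in records]

print(updated)
```

The point of the metaphor is the collapse of many manual edits into one command; a QUI purports to do the same for whole classes of tasks that have no such command today.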

QUIs differ from GUIs, which are based on discrete
operations. While some GUIs can be automated to speed them up, or parallelized
through collaborative computing approaches, QUIs rely on interface elements that
can be in a superposition of states. That is, some graphical elements can be
in multiple states at the same time – so, for example, a button can be both
pressed and not pressed simultaneously. This enables vast speedups by going
beyond the Model Human Processor and software architectures such as Model-View-Controller. However, QUIs do not violate our basic understanding of Human-Computer Interaction, and bedrock HCI principles such as Fitts’ law remain intact.

The power of QUIs is measured by counting the number of GUI
commands that are replaced by a single “quantum user interface click”, commonly
called QUICKS. Many of the projects yield 1,000 QUICKS, but some show the
possibility of 1,000,000 QUICKS (1 mega-QUICK). Longer-term hopes are to
push toward giga-QUICK and peta-QUICK designs.

Certain problems can be solved much more quickly with QUIs than
with traditional GUIs. Even those GUIs that use the best currently known
approaches such as Direct Manipulation, Gestalt Theory for visual design, and
Collaborative Computing approaches can be dramatically sped up. For example, Doctoral
student Ambrose Light’s work at the University of Maryland’s Human-Computer
Interaction Lab conveys the exponential increases in user abilities that
researchers expect QUIs will bring. Ambrose has documented 3 mega-QUICK
speedups in complex tasks, such as replacing the work of teams of citizen
scientists by executing a single command. By configuring the geographical
database to include both the sighting and non-sighting of every known
species at every location simultaneously, the mechanical data entry of each
observation is replaced with a simple confirmation. This command is called the
“gather” operation since it can bring millions of bird sightings into a nearly
error-free database within 1-2 hours.

Similarly, doctoral student Hadassah Agrawala at the
University of Washington has demonstrated a QUI with unheard-of power to help
job-seekers by listing themselves as both interested and not interested at
every company with open jobs listed, which is expected to dramatically
reduce national unemployment by a full percentage point. This QUI enables
job-seekers to issue a single command that returns a precisely selected set of
job descriptions from dozens of independent databases. The QUI then
filters them into a single file, organized geographically and ranked by
similarity to the user’s abilities. Hadassah reports performance in the
50-70 mega-QUICK range. She has launched a startup company, QUICKJOB, which
has already drawn $11.7M in funding and will provide a public service, for a
fee, within three months.

Even as QUIs become commercially available, other
researchers are already racing off to the even more ambitious String User
Interfaces (SUIs) that depend on the vibrational structure of queries as they spread
through exo-scale databases. SUIs would replace hierarchical file structures
with wave-like superposition files, in which even high-dimensional
intersections are resolved in less than a millisecond. By using
relativistic compensations, searches that used to take hours can be performed by
novice users in seconds. While still expected to be 5 years away from
commercialization, SUIs can combine action and reaction in a single operation.
QUICKJOB has already filed a provisional patent application showing how
employers can react by simultaneously offering and not offering jobs to all the
applicants and having the vibrational equilibrium perform the proper matching
so exactly the right applicants end up with job offers. Skeptics are not
yet convinced that this approach is guaranteed to produce the right matches, and
are currently modeling whether a combination of money saved through speedups
and legal protections through provisional employment contracts would provide
an overall benefit. Clearly the approach is promising, as there has been a
chain reaction of openly published papers (see arXiv.org) claiming ever greater
abilities for SUIs.

Worldwide attention is gathering for the July
16-18 Symposium on Quantum User Interfaces: New Technologies (SQUINT) to
be held in College Park, MD. Conference Chair Ben Neb hints at further
breakthrough announcements that have enticed participation by large numbers of
journalists, venture capitalists, and government agency funders.

October 15, 2015

SIGCHI seeks nominations for its five major annual awards. All nominations are due by November 15, 2015.

SIGCHI identifies and honors leaders and shapers of the field of human-computer interaction with the SIGCHI Awards. We recognize individuals who have contributed to the advancement of the field of human-computer interaction. There are five kinds of SIGCHI Awards that are selected by two committees based on your nominations, so please submit them! We encourage you to consult the SIGCHI Awards web page (www.sigchi.org/about/awards) for past awardees and more detailed award descriptions.

The SIGCHI Achievement Awards committee selects the following three awards. Submit nomination material to sigchi-achieve-awards@acm.org, to the attention of Steve Feiner (CHI Academy Member) who is the committee chair:

SIGCHI Lifetime Research Award: Individuals who have contributed the very best work in shaping the field, SIGCHI's highest honor for research contributions.

SIGCHI Lifetime Practice Award: Individuals who have made outstanding contributions to the practice, application, and understanding of human-computer interaction, SIGCHI's highest honor for practice contributions.

CHI Academy: Individuals who have made substantial contributions to the field of human-computer interaction. Nominations should include a summary of the person's contributions with evidence of the cumulative contribution, influence of the work on others, and development of new directions.

The SIGCHI Service Awards committee selects the following two awards. Submit nomination material to sigchi-service-awards@acm.org, to the attention of Ben Bederson (SIGCHI Adjunct Chair for Awards) who is the committee chair:

SIGCHI Social Impact Award: Individuals who through their work have made substantial contributions to pressing social needs.

SIGCHI Lifetime Service Award: Individuals who have contributed to the growth and success of SIGCHI through extended service to the community over a number of years.

Nominations should include:

a brief summary (maximum one page, preferably a PDF) of how the nominee meets the criteria for the award.

(optional) a link to the nominee's CV, if available

(optional) names and contact information of people who both endorse and are knowledgeable about the qualifications of the nominee.

January 2, 2015

I am the product of my experiences, and a significant part of my lifetime experiences are the books I have read. Strangely enough, I have kept track of every non-work (and many work-related) book that I have read since 1991. I used to write these down on paper, but a few years ago I started keeping track of them on goodreads.com. Goodreads is a fine service, and offers a nice way to see what your friends are reading.

But Goodreads does not offer any way to see an overview of the books one has read, losing the opportunity to provide insight into a person's overall reading.

So I created an interactive visualization to try to understand what the 246 books I read over the last 24 years actually were. Go try out the interactive visualization (which is not mobile friendly), and then finish reading this. The visualization is based on Keshif, a free and general tool built by Adil Yalcin, a grad student working with me. It works by showing "facets" of the dataset and supports very lightweight exploration by mousing over the facets to see how they interact with other facets. For example, in 2014, I read a lot by Hugh Howey (I loved Wool!), and those books were mostly written in 2009 or later. Also, I clearly got re-hooked on science fiction.
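At its core, this kind of faceted browsing is just counting items per facet value, then filtering and re-counting when you hover over a value. A toy sketch in Python - the dataset and field names are invented for illustration and are not Keshif's actual API:

```python
from collections import Counter

# Toy records standing in for the Goodreads export (fields invented).
books = [
    {"author": "Hugh Howey", "genre": "Science Fiction", "read": 2014},
    {"author": "Hugh Howey", "genre": "Science Fiction", "read": 2014},
    {"author": "Neal Stephenson", "genre": "Science Fiction", "read": 2013},
]

# A "facet" is just the distribution of one field across the dataset.
author_facet = Counter(b["author"] for b in books)

# Mousing over a facet value is, in effect, filtering to the matching
# subset and re-counting the other facets for it.
howey = [b for b in books if b["author"] == "Hugh Howey"]
year_facet_for_howey = Counter(b["read"] for b in howey)

print(author_facet["Hugh Howey"])    # 2
print(year_facet_for_howey[2014])    # 2
```

Keshif does this interactively across all facets at once, which is what makes the hover-to-explore interaction feel so lightweight.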

This approach lets you see things like the most commonly read authors and genres, as well as when I read books. For example, you can see that I read a lot in 1992, right after I finished my B.S., during my "gap year" when I lived in Alaska. I also started reading significantly more books 3 years ago - exactly when I bought my Kindle - which seems to be consistent with what others have found.

Anyway, take a look and see if you can learn anything else about me. Also, the code is all freely available, and shouldn't be too hard to adapt to your books if you are a bit of a web hacker.

Some technical notes:

Data comes straight from the Goodreads API, but I downloaded it to avoid cross-site permissions issues.

Goodreads does not provide book genres through their API, so I manually created those in a Google Docs spreadsheet; the visualization loads them separately and merges the two data sources.
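The merge step amounts to joining the two sources on a shared key. A minimal sketch - the field names, data, and title-based join key here are hypothetical, not the actual Goodreads API response or spreadsheet columns:

```python
# Records downloaded from the Goodreads API (fields invented for illustration).
books = [
    {"title": "Wool", "author": "Hugh Howey", "year": 2011},
    {"title": "Shift", "author": "Hugh Howey", "year": 2013},
]

# Genres manually maintained in a spreadsheet, exported as title -> genre.
genres = {
    "Wool": "Science Fiction",
    "Shift": "Science Fiction",
}

# Merge: attach the genre to each book record, defaulting to "Unknown"
# so books missing from the spreadsheet still appear in the visualization.
for book in books:
    book["genre"] = genres.get(book["title"], "Unknown")
```

Joining on titles is fragile if two books share a name; a stable book ID from the API would be a safer key if both sources carry it.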

July 25, 2014

Update 8/15/14: Google has fixed this! The list view is now properly dense with about twice as many items as the thumbnail view. Thanks Google!

I was excited to see Google post their new design for Drive. Given that this is a major new design with no doubt significant thought behind it, I was surprised - and dismayed - to see that the two primary views of documents (list view and thumbnail view) show the same number of documents. The textual list view, typically designed for efficient scanning of large numbers of items based on full title and some metadata, shows no more items than the image-based view.

New Google Drive with list view on left, and thumbnail view of same documents on right.

This may seem like an esoteric design detail, except it isn't. It goes to the heart of design decisions where there is a single design that must serve a huge community (probably hundreds of millions of users). Depending on your view, design is driven down to the least common denominator, or perhaps it is better viewed as regression to the mean.

My guess is that the whole redesign is built to better support touch screen systems, given the huge vertical space between items. This is great for all those phone and tablet users, and even the occasional Chromebook Pixel user. But what about us folks in an office with a big screen and a mouse? I know that Apple serves the casual computer-using market first, but now even Google has abandoned the regular office worker!

The reason this is an issue is that the trade-off between these two approaches - at least in my view - is primarily between the density of items and the amount of interaction. That is, one view (typically the thumbnail view) should have fewer items with a sparser layout, requiring more interaction to see more items. But the painful and expensive scrolling operation (in comparison to moving your eye) is sometimes worth it because of the additional information made available per item. The other view (typically the list view) should have more items with a denser layout so you can quickly scan through many items without having to scroll as much. Arguably, the additional owner and date information in the text view provides the relative advantage, but make no mistake - the text view is a very sparse design.

No doubt, the major vendors are wise to provide an excellent experience for mobile and touch users. But PLEASE don't abandon us desktop users. Don't make the mistake that Microsoft did with Windows 8. You cannot design a single experience for all people and all form factors. You must detect what systems people are using and provide an optimized experience for each. Or at least offer customization options.

Interface designers that build a single experience for all humans on the planet are doomed to disappoint huge segments of the market. Don't let this be you.