Category Archives: Critical perspectives

On Saturday, March 8th, Malaysia Airlines flight MH370 departed at 12:41 a.m. local time and was due to land in Beijing at 6:30 a.m. the same day. The flight was carrying 227 passengers and 12 crew members. Twenty days later, the only thing we can say precisely about this flight, as obvious as it may sound, is that the airplane went missing and its whereabouts are still unknown. But the puzzling question on everyone's mind has been left unanswered: how could an aircraft like the Boeing 777-200ER simply vanish off the face of the Earth?

The motivation behind such a disquieting question stems from the trust placed in Boeing's Triple Seven. The aircraft is built with state-of-the-art science and technology and, according to aviation specialists, is considered one of the world's safest jetliners, with a near-perfect safety record. The 777 has transponders, sensors and communication equipment that, even when not triggered manually, still send data periodically and automatically. Mohan Ranganathan, an aviation safety consultant who serves on India's Civil Aviation Safety Advisory Committee, said it was "very, very rare" for an aircraft to lose contact completely without any previous indication of problems: "The 777 is a very safe aircraft – I'm surprised" (The Guardian, 2014). The situation becomes even more intriguing in light of the fact that the last known location of the airplane was the Strait of Malacca, which, along with the route MH370 flew and was scheduled to fly, is among the busiest and most heavily radar-monitored airways in the world.

Knowing that the event was so heavily surrounded by technology adds to our frustration: how could the best technology out there have failed us? It is not surprising that people turn to technology looking for answers. Technology gives us a single cause with a single effect, and it is also predictive. Since the pieces of technology – the airplane or any debris – have not been found, no answers can be given, and because our society is so hung up on this nonexistent precise science, we pressure authorities for proper answers. The Malaysian authorities, seeking a scientific answer and trying to look progressive, released a statement declaring everyone on that flight dead, based on a complicated and confusing mathematical analysis. Unfortunately, the family members of the MH370 passengers received that cold comfort through a text message on their mobile phones.

I'm not here to discuss the possible theories explaining the plane's disappearance, or to say whether the passengers are alive or not. I'm trying to stress that our technologically deterministic hunger has led us to situations of absurdity, discomfort and frustration, just as is happening with the MH370 event. Such a mindset makes us look to a mathematical formula for the answer to a very complex social situation. The answer given by the Malaysian authorities is causing international and political tensions, since China is demanding that Malaysia hand over all relevant satellite data analysis on the missing plane. If these frictions continue, they could compromise the efforts of the international search team, since nations unhappy with the way things are being handled could leave it.

Up to this point, the MH370 case is a clear example of technological determinism, to the point of being presented in "Introduction to Social Informatics" lectures alongside WIRED Magazine statements. It is too soon to draw any hasty conclusions, but we can already notice a suspension of ethical judgment and the unintended consequences caused by "naïve science". From now on, I hope the passengers' families find better comfort, and that the authorities involved in this case are less technologically deterministic, even if society demands them to be so. As David E. Nye (2007) stated: technologies do not drive change. They are the product of cultural choices, and their use often has unintended consequences.


During the last year of my undergraduate education, I (Shad) encountered my first experience of the video-games-are(n't)-Art debate. While there was certainly a lot of passion surrounding the argument, the logic was somewhat lacking. One side seemed to center on the fact that Grand Theft Auto: Vice City (which had been released earlier that year) contained vestiges of the stylistic aesthetic of the 1980s and presented a compelling point for social engagement with a distinct cultural setting from recent history. Alternately, the opposing side argued that these elements were simply superficial, and that the game's message, at least in terms of any artistic merit, did not represent a real cultural statement to the degree required by the title of Art. However, these arguments seemed to center on the games themselves, as if Art were a property that is intrinsically part of some artifacts and intrinsically not part of others. In short, the argument had missed the social connections that surround Art evaluation: the relationship between the concepts of Art, the Art World (composed of critics and consumers of Art who ascribe a cultural as well as a monetary value to Art objects), and the objets d'art themselves. At the time, I felt as if the current courses of debate were never going to result in any kind of conception of video games as Art, and that it would be a while before the discourse would develop to a point where video games could be spoken of as Art.

Ten years have passed since that point. At CHI 2013, the opening plenary was presented by Paola Antonelli, Senior Curator and Architecture & Design Director for Research & Development at MoMA. Her presentation focused on exhibitions at MoMA that have looked at video games as Art and, more broadly, on the importance of the relationship of design to art (and vice versa). It seems that, at least in practice if not in theory, my question from a decade earlier has been partially answered. Video games are beginning to be treated as Art is treated. Design, as an applied art, could act as an indicator or close relative of Art, but not a true member of the club. If video games should be appreciated in a manner similar to Art, then video games, as designed experiences, can equally be treated as art.

Art and art theory have had a history of relevance to HCI, as is especially evident in the ACM SIGGRAPH Digital Arts Community (http://siggrapharts.ning.com/) and in example topics including (but not limited to) the convergence of the goals of Art and HCI (e.g., Sengers and Csikszentmihályi, 2003; Blythe, 2013), collaboration (e.g., Adamczyk et al., 2007; England, 2012), and creativity support (e.g., Morris et al., 2009; Kerne et al., 2013). While not a comprehensive list, from this it can be discerned that there is some sort of connection between the aims of HCI and Art, that there are challenges in connecting the two (both in terms of aims and in terms of what is considered valuable in a piece of art), and that supporting art is one possible goal for interaction design. As a possible overarching theme, there are elements of Art that are important to the practice of HCI and the creation of technology in general, but there are both practical issues, such as the economics of art and the concerns of would-be collaborators, and theoretical issues, such as the density of art theory. Supporting creativity makes a convenient bridge point because it is a concept of equal importance to art and to HCI.

Returning to the consideration of video games as art: from the perspective of technological design there is some sort of convergence between the two, and this has warranted looking at art as a means of understanding interaction design. As partial confirmation, it would seem that the Art World (of which MoMA is certainly a part) has an interest in looking at some of the results of interaction design, including video games. So, for both stakeholders in the discourse, there is a benefit to treating video games like Art. But again I return to the question of the discourse surrounding video games, which I believe leads straight to the questions of why and how to study video games.

So why should it matter that video games are beginning to be considered as art is considered? First, it means that there may be even greater cause to take video games seriously – not just Games With a Purpose or games explicitly made to embody a political statement (such as the excellent games created by Lucas Pope, http://dukope.com/), but video games in general. Previous work has already started to look at video games from an ethnographic standpoint (see Boellstorff et al., 2012, as well as the individual works of all its authors), as well as through more quantitative approaches that look at data taken from play (e.g., Yee et al., 2012). There have also been calls for a much more in-depth study of games as a source of "social rationality," taking a more critical stance toward their content (Grimes and Feenberg, 2009). As a continuation of this trend, it seems that the way games are observed – as both an aspect of social engagement and a reflection of society in general – needs to change. As more artistic elements become prevalent in a greater number of games, it will be important to understand how these elements developed in a historical sense. Even in games that do not attempt to challenge the norms and folkways of virtual worlds, as players become more aware of video games as art, their performances within those games may very well change with respect to this perception. Looking at virtual worlds as some sort of indicator of social phenomena, then, not only admits a number of different approaches but seems to demand them to varying degrees. Employing the tactics of art theory and new media along with ethnographic investigations and the analysis of data traces may very well result in new understandings not only of games, but also of society and art.

It is an exciting time for the study of games. While they now have an increasing number of different meanings to different people, the fact that they have importance is becoming more difficult to ignore. However, along with the increased potential of game studies, there is also a necessity to broaden the approaches used to study virtual worlds.


Questions about the practice of ethnographic research, both as a method and as an analytic way of knowing, have been a focus of my dissertation work. The new Ethnography and Virtual Worlds: A Handbook of Method by Boellstorff, Nardi, Pearce, and Taylor has been helpful for thinking through my own ethnographic experiences. Although the subjects of my research do not inhabit virtual worlds as defined within this handbook, the bulk of their interaction occurs through networked digital media. The handbook defines a virtual world as requiring the following traits: place, worldness, multi-user, persistence, and user embodiment (p. 7). The groups that I study construct a social world (Star and Clark) that exists offline and online across many different media platforms (for example, interaction happens in person, through text messaging, and across Twitter, YouTube, Facebook, and other online media), and as such they do not inhabit a particular virtual place. I have called this type of social engagement transmediated sociality (Terrell 2011).

Boellstorff et al. encourage ethnographers of virtual worlds to follow their informants into contexts (both online, such as blogs, message forums, and Facebook, and offline, such as meetups and conferences) that extend beyond the in-world platform around which they are centralized (for instance, Second Life or World of Warcraft). Yet the ethnography of groups that are decentralized, spread across many online/offline spaces, might differ in nuanced but meaningful ways.

Doing ethnographic research with groups that are highly transmediated has presented a number of different challenges. Participant observation, a key component of ethnographic research, can be particularly challenging in transmediated settings. In my experience, participant observation can happen in two different ways. First, one can attend, participate in, and observe events that are more formal and scheduled. In my work this means something like attending a wizard rock concert or a festival, which may be digitally mediated or in person. The second way one needs to participate is to simply hang out: to be around to interact with others, or to observe interactions and cultural production as they happen in mundane everyday interaction, without a scheduled event.

Learning, knowing, and deciding where to hang out seems to be the most difficult aspect of participant observation of transmediated groups, because one's informants could be, and indeed are, hanging out in several different spaces all at once. As researchers we must struggle to define our field site. This never seems to be a simple task; even when our field site is apparently tied to a specific space, we must make choices about whom and what we include within our study. This is true for sites both virtual and non-virtual. While I recognize the difficulty in defining one's field site, I wonder to what extent the transmediated nature of the groups that I study gives this struggle a new dimension.

In what ways is the lack of the persistent placeness needed for the construction of a virtual world a challenge to the construction of the ethnographic field site? How does one decide where to hang out when the people she is studying could be interacting in several other mediated spaces? Are the challenges faced by the ethnographer of transmediated groups different from those faced by the ethnographer of virtual worlds, where place is more strongly defined and more centrally located?

These are of course broad questions, but they are issues with which I struggle. I would love to hear your thoughts and experiences.


Big Data seems to be the buzzword of the moment and the solution to all of society's problems. We often hear of studies involving great amounts of data aggregated from Twitter, Facebook and so on. I truly believe these studies are valuable; they take snapshots of scenes, alert us to interesting moments at specific points in time, and give us an overall picture of a problem.

boyd and Crawford (2012) define big data as “a cultural, technological, and scholarly phenomenon that rests on the interplay of: (1) Technology: maximizing computation power and algorithmic accuracy to gather, analyze, link, and compare large data sets. (2) Analysis: drawing on large data sets to identify patterns in order to make economic, social, technical, and legal claims. (3) Mythology: the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy.” (p. 663)

Big Data is usually thought of as big numbers, the big N approached quantitatively. These numbers are generated from the data people produce; people who are online and constantly talking, sharing, posting, tweeting and "liking" things. But what about the people who are not doing these things frequently, or not doing them at all? If we take Big Data and extend it to those experiencing digital inequalities, we would be imposing a colonial practice in which the voices of those constantly online obscure the voices of those who are not. These voices often clash in different contexts, since they are rooted in social tensions and differences of power.
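This sampling worry can be made concrete with a toy example. The sketch below is purely illustrative: the population sizes, opinion scores, and online/offline split are invented numbers, not data. A "Big Data" sample drawn only from those who post recovers the connected group's view, not the population's:

```python
import random

random.seed(0)

# Hypothetical population: 70% are frequently online, 30% are not,
# and suppose the two groups differ on some opinion score (0-100).
population = (
    [{"online": True, "opinion": random.gauss(70, 5)} for _ in range(700)]
    + [{"online": False, "opinion": random.gauss(30, 5)} for _ in range(300)]
)

true_mean = sum(p["opinion"] for p in population) / len(population)

# A "Big Data" sample only ever sees the people who post.
online_sample = [p["opinion"] for p in population if p["online"]]
sample_mean = sum(online_sample) / len(online_sample)

print(f"population mean:  {true_mean:.1f}")    # reflects everyone
print(f"online-only mean: {sample_mean:.1f}")  # reflects only the connected
```

However large the online-only sample grows, it converges on the connected group's opinion; scale amplifies the bias rather than correcting it.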

So, how can Big Data tell us the story of the people that are on the “wrong” side of the digital divide?

Mary L. Gray (2011) makes the case that Critical Ethnography is a practice of Big Data. She invites us to think of Big Data not solely as numbers and quantitative approaches, but also as a practice able to balance the value of ethnographic significance against that of statistical significance. Big Data is usually deeply concerned with amassing as many numbers as possible in order to achieve some sort of reliability and statistical strength: the more you can get, the more reliable the information is assumed to be.

Qualitative work is often dismissed as too specific to tell us anything, but Gray argues the opposite: qualitative approaches tell us something different; they give us a different perspective on the story. Ethnographic significance should be integrated as a complement, in collaboration with statistical significance, so that we are able to get something transformatively different.

I agree with Gray; in an earlier post here on the Social Informatics Blog (Digital Divide Research: one myth, problem and challenge) I make the case that digital divide research should move on from statistical charts, censuses and Big Data, and go into the field to tell us about the context of those who are not on the internet, or not as often, due to digital inequalities.

Big Data was the reason I ended up going to the slum of Gurigica in Vitoria, Brazil. According to the census, the locals make very little use of the LAN Houses and Telecentros inside the community. But if it weren't for my ethnographic research, I would never have known that this was happening because the activities of the drug cartel didn't allow them to circulate freely on the streets. Critical Ethnography is therefore a powerful tool for approaching the issues of the Digital Divide and for contextualizing the notions that Big Data gives us.


I have always enjoyed fixing computers. This is not because of the challenges presented by the process of computer repair (although there is a certain amount of enjoyment to be found there as well), but because it is interesting to hear how people feel about their computers, both in terms of their normal functioning and their malfunctioning. There seemed to be a near-infinite number of ways that people had come up with to make sense of the functioning (or malfunctioning) of these machines. I came to think of these quirky approaches to grappling with the black box of computational devices as little rituals. Cultural anthropologist Victor Turner describes rituals as symbolic actions, grouping them alongside other forms of symbolic action such as social drama and metaphor (4). However, I did not have a concrete definition of what a technological ritual was; I just knew it when I saw it.

Fundamental to these forms is the idea that rituals are activities that occur in the material world but have some sort of importance beyond their material qualities. Metaphor has become an important aid to users in understanding the otherwise complex functioning of digital devices (e.g. 1). Digital technology also has its share of social drama: a Facebook relationship status being one way to solidify a romantic engagement between two people. Even ritual itself has been spoken of in the context of computation. One study has examined how "ritualized interactions often play a major role in the performance and experience of the art or performance work" (2), while another has looked at how ritual activities could be used to make virtual characters seem more like real characters (3). However, art performances hold a kind of lofty ambition, and a focus on giving virtual characters rituals centers on representing people to make them easier to interact with. I wonder how looking at the more everyday practices of people as they relate to technology could lead to a better understanding of both people and the technology they use. As an example of how to look at technological interactions in terms of ritual, I point to Merlin Mann's Inbox Zero.

It is common to hear people complain about having too much email. It takes a lot of time to sort through all of one's messages, it causes problems with missed communication, and it can make people feel overwhelmed with the amount of information they are receiving. As an answer to this problem, Merlin Mann describes Inbox Zero (http://inboxzero.com/), a way of handling email overload. At one level, this is a prescription of simple actions: sorting, removing and addressing the demands presented in a person's inbox. However, it is also a set of small actions that in combination hold a certain higher personal and social value. The empty inbox described by the process's name not only reduces distractions when new email comes in; it also serves as a symbol of technological well-adjustment. It is social in the sense that the person's relations to others are kept in check. The material of Inbox Zero is an empty inbox; its meaning is control of technology in a way that also incorporates interactions with other people.
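Read this way, Mann's prescription is almost algorithmic. The sketch below is my own simplification, not Mann's exact method; the message fields and triage rules are invented for illustration, though the five actions (delete, delegate, respond, do, defer) are the ones commonly associated with Inbox Zero:

```python
def triage(message):
    """Assign one Inbox Zero action to a message (simplified, invented rules)."""
    if message.get("spam") or not message.get("actionable"):
        return "delete"
    if message.get("better_handled_by"):
        return "delegate"
    if message.get("minutes_to_handle", 0) <= 2:
        return "do"              # the two-minute rule: just handle it now
    if message.get("needs_reply_only"):
        return "respond"
    return "defer"               # park it on a to-do list, out of the inbox

inbox = [
    {"subject": "50% off!!", "spam": True},
    {"subject": "server down", "actionable": True, "better_handled_by": "ops"},
    {"subject": "quick question", "actionable": True, "minutes_to_handle": 1},
    {"subject": "draft review", "actionable": True, "minutes_to_handle": 45},
]

actions = {m["subject"]: triage(m) for m in inbox}
inbox.clear()  # every message got an action, so the inbox ends empty
```

The point of the sketch is the duality described above: the code's material output is just an empty list, but the meaning of that emptiness lies entirely outside the program.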

This idea of ritual, as it pertains to technology, is still quite rough. However, as HCI has focused more on experiences and the designing thereof, the kind of duality of meaning that comes from ritual acts may prove to be a valuable way of understanding the relationships between the form and function of artifacts and the meanings that people ascribe to them. Looking at interactions as rituals may point to better understandings of digital artifacts and the people who interact with them.

Designers tend to approach ideas from a certain bias, which may require some explanation. While design is focused on the process of creating artifacts, it is rarely a straightforward endeavor. Of particular importance is the accountability that comes from creating a new artifact, the ethics of design so to speak. In the most general and common sense, the impetus is to solve a problem, and the solution is assessed on the basis of its efficacy. This can be thought of as the function of a particular design: what it does as a means of resolving a problem. The designer, in the ideal circumstance, builds that function into the artifact. In addition to this functional aspect, there is also a process of changing and reframing problems [see Nelson for more clarity on this]. This procedure carries with it yet another aspect of evaluation: the framing of the problem is judged on the basis of how well it captures some aspect previously unconsidered that, nonetheless, is integral to resolving the problem. To put this more simply, a design can fail procedurally due to improper problem framing, regardless of how well it functions, or it can fail functionally, regardless of how well the procedure of framing the problem goes. The results of either of these failings have implications for the designer. A failing of functionality indicts the designer on charges of poor craftsmanship, while a failure of procedure points to general ineptitude. The inverse is equally true: merit is given for functional and novel approaches.

While there are a number of good and bad designs in the world, this topic has been covered considerably, and so the nature of such evaluation will not be addressed here. The preceding is presented with the hope of identifying how a designer is ethically tied to the success or failure of an artifact. If this is taken as true, then what happens in the grey areas? If the two ends of the spectrum refer back to the designer, is it not reasonable that the middle ground has a similar effect? The situation above becomes socially relevant when one considers Winner's argument that artifacts can have politics [Winner]. Those politics become built into the artifact both procedurally and functionally, both with implications for the designer. In the case of Winner's examples, Moses's bridges are problematic due to their function: their function is limited by the way they were made. Alternately, the tomato harvester suffers from a procedural issue, namely that the framing of the problem showed greater concern for efficiency and cost-effectiveness than for the economic and ecological consequences of mechanization. In both cases, Winner's description seems to fit well within a model of accountability as prescribed by design. But let's suppose a situation where the decisions are not quite so clear. As an example of such a situation, consider a Pennsylvania polling station.

In a Philadelphia polling station during the 2012 election, one of the booths had a problem with candidate selection. When the space on the screen occupied by Barack Obama's name was clicked, the box for Mitt Romney would be checked. In a situation similar to Moses's bridges, it could be imagined that this machine was designed with the specific intent of favoring a particular candidate. This would be a functional aspect, in that the artifact's functioning had a specific bias. But let us suppose that the first inclination of the person who posted the video (going into "troubleshoot mode") is correct, and the problem is a malfunction rather than a deliberate decision. It seems reasonable that a touchscreen could break, particularly if used repeatedly (as would be the case at a polling station). Then the accountability would seem to fall upon the individual who chose that particular touchscreen, making it procedural: rooted in a concern for cost over functional robustness. This need not imply any political orientation with regard to Romney and Obama, but it certainly represents a political statement nonetheless. However, suppose that such was not the case. Suppose, rather, that the faulty behavior of one option's button was the result of a contextual issue. A power surge, a component broken during shipping, or any number of events that had happened to that specific machine could be at fault. In such a case, what would be the ethical standing of the designer? Would the complexities of the context have caused a newly emergent political stance without an actor behind it, or is there an implication at the level of deciding to use such a machine in the first place?
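The malfunction hypothesis is easy to model in the abstract. The sketch below is hypothetical; the button layout and the drift value are invented. It shows how a small, unintended calibration drift can make a touch well inside one candidate's button register as the other:

```python
# Hypothetical screen layout: two stacked candidate buttons as (x0, y0, x1, y1).
BUTTONS = {
    "Candidate A": (100, 100, 500, 180),
    "Candidate B": (100, 200, 500, 280),
}

def hit_test(x, y, drift=(0, 0)):
    """Return which button a touch lands in after calibration drift is applied."""
    dx, dy = drift
    x, y = x + dx, y + dy
    for name, (x0, y0, x1, y1) in BUTTONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

tap = (300, 170)                        # well inside Candidate A's box
ok = hit_test(*tap)                     # a calibrated screen registers Candidate A
broken = hit_test(*tap, drift=(0, 40))  # a 40px vertical drift registers Candidate B
```

Nothing in the hit-testing logic is biased; the political effect emerges entirely from a physical condition the designer never chose.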

If that sounds somewhat far-fetched, consider the 2010 "Flash Crash." Sommerville et al. describe how a $4.1 billion block sale that was "executed with uncommon urgency" resulted in a "complex pattern of interactions between high-frequency algorithmic trading systems… that buy and sell blocks of financial instruments on incredibly short timescales" [Sommerville]. The systems employed had functioned well together until that context arose. But when that context DID arise, roughly $800 billion disappeared [ibid]. As in the final hypothetical situation regarding the voting booth, it becomes difficult to assess the ethical position of the designer(s). Both describe systems of systems (the algorithms in the market and the technological parts of the voting machine). Both also describe situations where the final result is emergent, as opposed to deliberately created. Risatti makes a distinction between function and emergent application, which he calls use (Risatti). It would seem that these issues fall more under the latter than the former, and, by virtue of the fact that use is not constructed into the artifact in the way that function is, that the designer is somewhat free from blame. After all, designers cannot be expected to be capable of predicting the future, can they?
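The emergent quality of such failures can be illustrated with a deliberately crude toy model; the agents, stop-loss thresholds, and price-impact rule below are invented and are not a model of the actual 2010 event. Each automated trader behaves sensibly in isolation, selling when the price falls past its stop level, yet a single large initial sale can trigger a cascade that no individual trader was designed to produce:

```python
def simulate(initial_price, initial_sale, stops, impact=0.02):
    """Each sale depresses the price; any stop-loss it crosses fires another sale."""
    price = initial_price * (1 - impact * initial_sale)
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, stop in enumerate(stops):
            if i not in fired and price <= stop:
                fired.add(i)           # this trader's stop-loss triggers...
                price *= (1 - impact)  # ...and its sale pushes the price lower,
                changed = True         #    possibly crossing further stops
    return price, len(fired)

# 20 hypothetical traders with stop-losses spread between 90 and 99.5.
stops = [90 + 0.5 * i for i in range(20)]
calm, n_calm = simulate(100.0, initial_sale=0.1, stops=stops)    # small sale
crash, n_crash = simulate(100.0, initial_sale=1.0, stops=stops)  # large sale
```

With the small initial sale no stop-loss fires; with the large one, every trader's stop is eventually crossed and the price collapses. The outcome belongs to the system of interacting rules, not to any one design.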

As a somewhat unsettling conclusion to this case study, what happens when the model of accountability that is defined by function and procedure becomes less common? It is becoming more difficult to consider any one given technology in isolation. Phones sync to computers that sync to bank accounts; information is stored to a cloud where multiple people, from multiple devices, can access it. Systems of technology are moving towards systems of systems of technology. As this increases, the chances for emergence also increase. Buried in this complex scenario is a notion that is as lucid and cutting as what Winner expresses: if artifacts have politics, do systems have politics as well? It seems evident that the answer is a resounding “yes.” However, that answer only leads to a more worrisome question. If systems have politics, who is accountable for those politics?


A few months ago, in an effort to start eating better, I began using an iPhone app to count calories. For four months, I diligently entered every precisely portioned amount of food I consumed into my smartphone. I was also running a lot, and I kept track of how far I was running, for how long, at what pace, and so on. For the most part I engaged in this bookkeeping adventure alone, praising myself when I landed below my weekly calorie goal and berating myself when I didn't. I soon realized, however, that there was a whole world of people out there doing the same thing I was, and that we formed this thing called 'the quantified self movement.'

I quickly learned that self-tracking (sometimes called bio-data or personal analytics) is a growing area of interest for smartphone users, data-philes, journalists, marketers, the tech industry, the health industry, and others. There are articles on the topic circulating from The Economist, a 2012 SXSW competition using personal data generated by BodyMedia, a TED talk on the subject, websites, a Facebook page and daily Twitter conversations all about the quantified self. There is also an annual international conference dedicated to understanding and capitalizing on the quantified self, now embarking on its third year (the first two sold out).

One of the founders of the quantified self movement, Gary Wolf, suggests that bio-tracking devices and the social practices that accompany them help to change our sense of self in the world. In his TED talk, he says that these tools are mirrors that tell us about who we are, and that they should be used to help us improve ourselves. "They are tools for self-discovery, self-awareness and self-knowledge," he says. Used in this way, according to Wolf, they also let us see our "operational center, our consciousness and moral compass" more clearly.

This is true, of course, of all media. Facebook, and before it, TV, radio, magazines, theater, literature, oral histories, hieroglyphics, etc. have always shown us who we are by showing us abstracted depictions of ourselves. These media portrayed the peasants, the aristocracy, the moral citizen and the outcast. The obvious difference is that over time, mediated depictions of ourselves have become more and more individualistic and personal.

As months went on in my own self-tracking experience, I began to grow tired of the constant bookkeeping. As I entered my default breakfast into the program morning after morning on the bus ride to school, I began to realize that I was becoming somewhat obsessed with life decisions that amounted to very small amounts of food. However, I also noticed I was changing my life to maximize exercise opportunities whenever I could. As I became more and more obsessed with the numbers my iPhone app was generating every day, it seemed I was making healthier life choices. In addition, I realized that I was gaining more and more emotional satisfaction, happiness and excitement from the hobby. I started feeling like I was becoming hedonistically yet healthily addicted to consuming the numbers my life was producing.

The student of socio-technical studies inside of me couldn't get over the contradictory feelings I was having about all of this. I wanted to understand it better. After bludgeoning many of my loved ones and friends with lengthy conversations on these topics, and after thinking and reading about the role numbers play in our lives (and have played for only a relatively short part of human history; I should also mention that I'm enrolled in my first statistics course ever at the moment ☺), it occurred to me that the thrill derived from self-tracking behaviors can be traced back to the fundamental pedagogical maxim that Socrates, as Plato portrays him, took to heart: "know thyself." For Socrates, only after one knows himself can he begin trying to know "obscure" things. Furthermore, one then also has a better platform from which to understand others and human beings in general. The numbers that our bodies create, like all previous forms of media, are part of a fundamental human quest to know ourselves better.

So, if it is the case that we use these new biometric tools to extend, yet again, our quest to know ourselves, then as a society we land in one of two places: (1) after thousands of years we still do not know ourselves, but we are now closer to doing so; or (2) we may need to realize that we can never know ourselves completely through fixed abstractions like numbers (or media). Personally, I’m partial to the latter conclusion.

Drawing on media materiality scholarship, I would argue that each mediated reflection of ourselves has its own advantages and shortcomings in its ability to show us who we are. Numbers offer us a clean, neat, easily digestible packet of information about who we are. I’ve seen many self-quantifiers refer to numbers as beautiful. My blood pressure is 107/64. I consumed 1,543 net calories yesterday. I walked 2.1 miles, mowed my yard for 33 minutes and did yoga for 60 minutes. These data are precise, clean, digestible.

What numbers do not, and cannot, capture is the chaos that is an inherent part of the human condition. Humans are messy. Emotions drive us to do things we would never expect. We dance, cry, laugh, sing, kiss and fight when we least expect it. The unanticipated invitation for beers outside in the warm March sun (when the plan was to do statistics homework in the library) is memorable in a way that the bar graph on my iPhone telling me I’ve met my weekly caloric intake target four weeks in a row is not. These unknowable surprises, one might argue, are the most beautiful aspects of being human, and they are only weakly depicted when abstracted into fixed mediated form (especially numerical form).

I think numbers are helpful. However, I hope there is never a time when that unanticipated invitation for beers outside in March comes and I decide to go solely based on how those beers will impact the weekly bar graph on my iPhone.

In the New York Times today there is an article about Google X, the top-secret lab for big ideas at Google. According to the article, the future being imagined here is “a place where your refrigerator could be connected to the Internet, so it could order groceries when they ran low. Your dinner plate could post to a social network what you’re eating. Your robot could go to the office while you stay home in your pajamas. And you could, perhaps, take an elevator to outer space.”

This is indeed a compelling vision… maybe. Am I the only one who finds this future a little underwhelming, maybe even problematic and dysfunctional? For one thing, aren’t there already enough what-I-had-for-lunch tweets without plates getting in on the action? And what if the plate (because of course it has artificial intelligence) decides to chime in with some commentary: ‘pizza leftovers again?! @John’sMom, are you seeing this?’

And while staying at home in pajamas does sound pretty attractive, how does sending your robot into the office help? Does it make typing noises at your computer so people think you’re there? Does it go to meetings for you? Does it make decisions for you? What if it messes up? Could you really relax at home in your pajamas knowing that your robot might create a huge mess (bureaucratic or physical) that you will need to clean up? What if your robot knows how you really feel about your coworker and gets into a fight with your coworker’s robot? Could your robot be fired? Could your robot get you fired? Could it get promoted? Who would be held responsible for its actions: you, the robot, the robot’s designer? Would the robot have a moral compass, and if so, whose? Would everyone send their robots in for them, so the workplace would be entirely robots? Would it be all the same to them if the lights and heat were shut off to save electricity? Would there be robot unions to protest this mistreatment?

And then there’s the grocery-ordering refrigerator. This seems to be one of the most common images of a digital future of pervasive computing, no doubt inspired by a moment of watching the last few drops of milk drip onto still-dry cereal and thinking ‘man, I wish the refrigerator could have just taken care of that.’ But what kind of groceries would it order? It stands to reason that a digital refrigerator might need to deal in SKUs, which would make it easy to order more frozen pizza but much harder to order ‘the best-looking local in-season fruit’. Also, what infrastructure would this require? In addition to the refrigerator, the ordering system would need to be in place on the grocery store end, along with, perhaps, a delivery service. It’s hard to imagine smaller markets being able to invest in this, and vendors at the local farmers’ market would be out of the loop entirely. Many people would undoubtedly find this unproblematic, but it is significant that such biases could be encoded into technical systems, entrenching already-existing (unhealthy) habits even further.
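To make the point concrete, here is a toy sketch (all names, SKUs and quantities are hypothetical, not drawn from any real system) of what an SKU-driven reorder rule might look like. The bias is built right into the data model: items without a stable SKU simply cannot be reordered, and the system skips them silently.

```python
# Hypothetical sketch of an SKU-based auto-reorder rule.
# Packaged goods carry a fixed SKU and can be reordered automatically;
# loose produce ("the best-looking local in-season fruit") has no
# stable SKU, so it silently falls out of the loop.

inventory = [
    {"name": "frozen pizza",  "sku": "012345678905", "count": 1, "min": 2},
    {"name": "milk, 1 gal",   "sku": "036000291452", "count": 0, "min": 1},
    {"name": "local peaches", "sku": None,           "count": 0, "min": 1},
]

def reorder_list(items):
    """Return the SKUs to reorder; items without an SKU are skipped."""
    return [item["sku"] for item in items
            if item["sku"] is not None and item["count"] < item["min"]]

print(reorder_list(inventory))  # the peaches never get reordered
```

The peaches are out of stock, yet they never appear on the order: nothing in the rule is malicious, but the representation itself favors standardized, packaged products.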

As Langdon Winner has argued, technologies shape forms of life: technology design is ultimately about choosing ways of living, of ordering the world around us and our activities in it. While geeky technophiles tend to do a pretty good job of dreaming up some very cool and labor-saving technologies, they are less good at envisioning the forms of life that they might institute.

This is where more nuanced and critical approaches like Social Informatics might be useful. As scholars who study social dimensions of technologies we are used to teasing apart their various social, cultural, philosophical, historical, political, and ethical aspects, and looking at them critically. These aspects are just as much, if not more, important than technical feasibility, yet they are discussed far less frequently (if at all) during technology development and assessment. Maybe one of the reasons for this is that our existing critical approaches focus on technologies that already exist, not ones that have yet to be implemented.

But why should geeks working at big corporations with deep pockets be the ones who get to decide what our (digital) future should look like? What sorts of futures might Social Informatics scholars envision? And as we’re imagining futures, could we also maybe move past our own laziness to consider how we might build a future with less inequality and more justice, less stress and more health, less poverty and excess and more true wealth and happiness?

All of these may sound like unattainable goals. But imagining a future in which they are true would be a first step toward making them a reality. And I would take that over a ‘smart’ refrigerator any day.