The worst thing I read this year, and what it taught me… or Can we design sociotechnical systems that don’t suck?

Note: Shane Snow wrote a long and thoughtful email to me about this post. While we agree to disagree on some substantive issues, primarily our thoughts about the future of VR, we also found quite a bit of common ground. He noted that my essay, while mostly about the ideas, strays into the realm of ad hominem attacks, which wasn’t my intention. I’ve removed one comment which he accurately identified as unfair.

I am deeply grateful to Shane for taking the time to engage with my piece and to make changes to his original essay.

With a recommendation like that, how could I pass it up? And after reading it, I tweeted my astonishment to Susie, who told me, “I write comics, but I don’t know how to react to this in a way that’s funny.” I realized that I couldn’t offer an appropriate reaction in 140 characters either. The more I think about Snow’s essay, the more it looks like the outline for a class on the pitfalls of solving social problems with technology, a class I’m now planning on teaching this coming fall.

Using Snow’s essay as a jumping off point, I want to consider a problem that’s been on my mind a great deal since joining the MIT Media Lab five years ago: how do we help smart, well-meaning people address social problems in ways that make the world better, not worse? In other words, is it possible to get beyond both a naïve belief that the latest technology will solve social problems and a reaction that rubbishes any attempt to offer novel technical solutions as inappropriate, insensitive and misguided? Can we find a synthesis in which technologists look at their work critically and work closely with the people they’re trying to help in order to build sociotechnical systems that address hard problems?

Obviously, I think this is possible – if really, really hard – or I wouldn’t be teaching at an engineering school. But before considering how we overcome a naïve faith in technology, let’s examine Snow’s suggestion as a textbook example of a solution that’s technically sophisticated, simple to understand and dangerously wrong.

Some of these explorations are more successful than others. In Snow’s essay about prison reform, he identifies violence, and particularly prison rape, as the key problem to be solved, and offers a remedy that he believes will lead to cost savings for taxpayers as well: all prisoners should be incarcerated in solitary confinement, fed only Soylent meal replacement drink through slots in the wall, with all interpersonal interaction and rehabilitative services provided in Second Life using the Oculus Rift VR system. Snow’s system eliminates many features of prison life – “cell blocks, prison yards, prison gyms, physical interactions with other prisoners, and so on.” That’s by design, he explains. “Those are all current conventions in prisons, but history is clear: innovation happens when we rethink conventions and apply alternative learning or technology to old problems.”

An early clue that Snow’s rethinking is problematic is that his proposed solution looks a lot like “administrative segregation”, a technique used in prisons to separate prisoners who might be violent or disruptive from the general population by keeping them in solitary confinement 23 hours a day. The main problem with administrative segregation or with the SHU (the “secure housing unit” used in supermax prisons) is that inmates tend to experience serious mental health problems connected to sustained isolation. “Deprived of normal human interaction, many segregated prisoners reportedly suffer from mental health problems including anxiety, panic, insomnia, paranoia, aggression and depression,” explains social psychologist Dr. Craig Haney. Shaka Senghor, a writer and activist who was formerly incarcerated for murder, explains that many inmates in solitary confinement have underlying mental health issues, and the isolation damages even the sound of mind. Solitary confinement, he says, is “one of the most barbaric and inhumane aspects of our society.”

Snow and supporters might argue that he’s not trying to deprive prisoners of human contact, but give them a new, safer form of contact. But there’s virtually no research on the health effects of sustained exposure to head-mounted virtual reality. Would prisoners be forced to choose between simulator sickness and isolation? What are the long-term effects on vision of immersive VR displays? Will prisoners experience visual exhaustion through vergence-accommodation conflict, a yet-to-be-solved problem of eye and brain strain due to problems focusing on objects that are very nearby but appear to be distant? Furthermore, will contact with humans through virtual worlds mitigate the mental problems prisoners face in isolation or exacerbate them? How do we answer any of these questions ethically, given the restrictions we’ve put on experimenting on prisoners in the wake of Nazi abuse of concentration camp prisoners?

How does an apparently intelligent person end up suggesting a solution that might, at best, constitute unethical medical experiments on prisoners? How does a well-meaning person suggest a remedy that likely constitutes torture?

Oddly, when my Yale students workshopped solutions to the problems of US prisons, none of their proposals involved virtual reality isolation cells. In fact, most of the solutions they proposed had nothing to do with prisons themselves. Instead, their solutions focused on over-policing of black neighborhoods, America’s aggressive prosecutorial culture that encourages those arrested to plead guilty, legalization of some or all drugs, reform of sentencing guidelines for drug crimes, reforming parole and probation to reduce reincarceration for technical offenses, and building robust re-entry programs to help ex-cons find support, housing and gainful employment.

In other words, when Snow focuses on making prison safer and cheaper, he’s working on the wrong problem. Yes, prisons in the US could be safer and cheaper. But the larger problem is that the US incarcerates more people than any other nation on earth – with 5% of the world’s population, we are responsible for 25% of the world’s prisoners. Snow may see his ideas as radical and transformative, but they’re fundamentally conservative – he tinkers with the conditions of confinement without questioning whether incarceration is how our society should solve problems of crime and addiction. As a result, his solutions can only address a facet of the problem, not the deep structural issues that lead to the problem in the first place.

Many hard problems require you to step back and consider whether you’re solving the right problem. If your solution only mitigates the symptoms of a deeper problem, you may be calcifying that problem and making it harder to change. Cheaper, safer prisons make it easier to incarcerate more Americans and avoid addressing fundamental problems of addiction, joblessness, mental illness and structural racism.

Evgeny Morozov has offered a sharp and helpful critique of this mode of thinking, which he calls “solutionism”. Solutionism demands that we focus on problems that have a “nice and clean technological solution at our disposal.” In his book, “To Save Everything, Click Here”, Morozov savages ideas like Snow’s, whether they are meant as thought experiments or serious policy proposals. (Indeed, one worry I have in writing this essay is taking Snow’s ideas too seriously, as Morozov does with many of the ideas he lambastes in his book.)

The problem with the solutionist critique is that it tends to remove technological innovation from the problem-solver’s toolkit. In fact, technological development is often a key component in solving complex social and political problems, and new technologies can sometimes open up a previously intractable problem. The rise of inexpensive solar panels may be an opportunity to move nations away from a dependency on fossil fuels and begin lowering atmospheric levels of carbon dioxide, much as developments in natural gas extraction and transport technologies have lessened the use of dirtier fuels like coal.

But it’s rare that technology provides a robust solution to a social problem by itself. Successful technological approaches to solving social problems usually require changes in laws and norms, as well as market incentives to make change at scale. I installed solar panels on the roof of my house last fall. Rapid advances in panel technology made this a routine investment instead of a luxury, and the existence of competitive solar installers in our area meant that market pressures kept costs low. But the panels were ultimately affordable because federal and state legislation offered tax rebates for their purchase, and because Massachusetts state law rewards me with solar credits for each megawatt-hour I produce, which I can sell to utilities through an online marketplace, because they are legally mandated to produce a percentage of their total power output via solar generation. And while there are powerful technological, market and legal forces pushing us towards solar energy, the most powerful may be the social, normative pressure of seeing our neighbors install solar panels, leaving us feeling like we weren’t doing our part.

My Yale students who tried to use technology as their primary lever for reforming US prisons had a difficult time. One team offered the idea of an online social network that would help recently released prisoners connect with other ex-offenders to find support, advice and job opportunities in the outside world. Another looked at the success of Bard College’s remarkable program to help inmates earn BA degrees and wondered whether online learning technologies could allow similar efforts to reach thousands more prisoners. But many of the other promising ideas that arose in our workshops had a technological component – given the ubiquity of mobile phones, why can’t ex-offenders have their primary contact with their parole officers via mobile phones? Given the rise of big data techniques used for “smart policing”, can we review patterns of policing, identifying and eliminating cases where officers are overfocusing on some communities?

The temptation of technology is that it promises fast and neat solutions to social problems, but usually fails to deliver. The problem with Morozov’s critique is that technological solutions, combined with other paths to change, can sometimes turn intractable problems into solvable ones. The key is to understand technology’s role as a lever of change in conjunction with complementary levers.

Don’t assume your preferences are universal
Shane Snow introduces his essay on prison reform not with statistics about the ineffectiveness of incarceration in reducing crime, but with his fear of being sent to prison. Specifically, he fears prison rape, a serious problem which he radically overestimates: “My fear of prison also stems from the fact that some 21 percent of U.S. prison inmates get raped or coerced into giving sexual favors to terrifying dudes named Igor.” Snow is religious about footnoting his essays, but not as good at reading the sources he cites – the report he uses to justify his fear of “Igor” (parenthetical comment removed – EZ, 6/29/16) indicates that 2.91 of every 1,000 incarcerated persons experienced sexual violence, or 0.291%, not 21%. Shane has amended his post, and references another study that indicates a higher level of coerced sexual contact in prison.

Perhaps isolation for years at a time, living vicariously through a VR headset while sipping an oat flour smoothie, would be preferable to time in the prison yard, mess hall, workshop or classroom for Snow. But there’s no indication that Snow has talked to any current or ex-offenders about their time in prison, or about the ways in which encounters with other prisoners led them to faith, to mentorship or to personal transformation. The people Shane imagines are so scary, so other, that he can’t imagine interacting with them, learning from them, or anything but being violently assaulted by them. No wonder he doesn’t bother to ask which aspects of prison life are most and least livable, or which would benefit most from transformation.

Much of my work focuses on how technologies spread across national, religious and cultural borders, and how they are transformed by that spread. Cellphone networks believed that pre-paid scratch cards were an efficient way to sell phone minutes at low cost – until Ugandans started using the scratch off codes to send money via text message in a system called Sente, inventing practical mobile money in the process. Facebook believes its service is best used by real individuals using their real names, and goes to great lengths to remove accounts it believes to be fictional. But when Facebook comes to a country like Myanmar, where it is seen as a news service, not a social networking service, phone shops specializing in setting up accounts using fake names and phone numbers render Facebook’s preferences null and void.

Smart technologists and designers have learned that their preferences are seldom their users’ preferences, and companies like Intel now employ brilliant ethnographers to discover how tools are used by actual users in their homes and offices. Understanding the wants and needs of users is important when you’re designing technologies for people much like yourself, but it’s utterly critical when designing for people with different backgrounds, experiences, wants and needs. Given that Snow’s understanding of prison life seems to come solely from binge-watching Oz, it’s virtually guaranteed that his proposed solution will fail in unanticipated ways when used by real people.

Am I the right person to solve this problem?
Among the many wise things my Yale students said during our workshop, one stands out: a student wondered if he should be participating at all. “I don’t know anything about prisons, I don’t have family in prison. I don’t know if I understand these problems well enough to solve them, and I don’t know if these problems are mine to solve.”

Talking about the workshop with my friend and colleague Chelsea Barabas, she asked the wonderfully deep question, “Is it ever okay to solve another person’s problem?”

On its surface, the question looks easy to answer. We can’t ask infants to solve problems of infant mortality, and by extension, it seems unwise to let kindergarten students design educational policy or demand that the severely disabled design their own assistive technologies.

But the argument is more complicated when you consider it more closely. It’s difficult if not impossible to design a great assistive technology without working closely, iteratively and cooperatively with the person who will wear or use it. My colleague Hugh Herr designs cutting-edge prostheses for US veterans who’ve lost legs, and the centerpiece of his lab is a treadmill where amputees test his limbs, giving him and his students feedback about what works, what doesn’t and what needs to change. Without the active collaboration of the people he’s trying to help, he’s unable to make technological advances.

Disability rights activists have demanded “nothing about us without us”, a slogan that demands that policies should not be developed without the participation of those intended to benefit from those policies. Design philosophies like participatory design and codesign bring this concept to the world of technology, demanding that technologies designed for a group of people be designed and built, in part, by those people. Codesign challenges many of the assumptions of engineering, requiring people who are used to working in isolation to build broad teams and to understand that those most qualified to offer a technical solution may be least qualified to identify a need or articulate a design problem. Codesign is hard and frustrating, but it’s also one of the best ways to ensure that you’re solving the right problem, rather than imposing your preferred solution on a situation.

At the opposite pole from codesign is an approach to engineering we might understand as “Make things better by making better things”. This school of thought argues that while mobile phones were designed for rich westerners, not for users in developing nations, they’ve become one of the transformative technologies for the developing world. Frustratingly, this argument is valid, too. Many of the technologies we benefit from weren’t designed for their ultimate beneficiaries, but were simply designed well and adopted widely. Shane Snow’s proposal is built in part on this perspective – Soylent was designed for geeks who wanted to skip meals, not for prisoners in solitary confinement, but perhaps it might be preferable to Nutraloaf or other horrors of the prison kitchen.

I’m not sure how we resolve the dichotomy of “with us” versus “better things”. I’d note that every engineer I’ve ever met believes what she’s building is a better thing. As a result, strategies that depend on finding the optimal solution often rely on choice-rich markets where users can gravitate towards the best option. In other words, they don’t work very well in an environment like prison, where prisoners are unlikely to be given a choice between Snow’s isolation cells and the prison as it currently stands, and are even less likely to participate in designing a better prison.

Am I advocating codesign of prisons with the currently incarcerated? Hell yeah, I am. And with ex-offenders, corrections officers, families of prisoners as well as the experts who design these facilities today. They’re likely to do a better job than smart Yale students, or technology commentators.

The possible utility of beating a dead horse

It is unlikely that anyone is going to invite Shane Snow to redesign a major prison any time soon, so spending more than three thousand words urging you to reject his solution may be a waste of your time and mine. But the mistakes Shane makes are those that engineers make all the time when they turn their energy and creativity to solving pressing and persistent social problems. Looking closely at how Snow’s solutions fall short offers some hope for building better, fairer and saner solutions.

The challenge, unfortunately, is not in offering a critique of how solutions go wrong. Excellent versions of that critique exist, from Morozov’s war on solutionism, to Courtney Martin’s brilliant “The Reductive Seduction of Other People’s Problems”. If it’s easy to design inappropriate solutions to problems you don’t fully understand, it’s not much harder to criticize the inadequacy of those solutions.

What’s hard is synthesis – learning to use technology as part of well-designed sociotechnical solutions. These solutions sometimes require profound advances in technology. But they virtually always require people to build complex, multifunctional teams that work with and learn from the people the technology is supposed to benefit.

Three students at the MIT Media Lab taught a course last semester called “Unpacking Impact: Reflecting as We Make”. They point out that the Media Lab prides itself on teaching students how to make anything, and how to turn what you make into a business, but rarely teaches reflection about what we make and what it might mean for society as a whole. My experience with teaching this reflective process to engineers is that it’s both important and potentially paralyzing: once we understand the incompleteness of technology as a path for solving problems, and the ways technological solutions relate to social, market and legal forces, it can be hard to build anything at all.

I’m going to teach a new course this fall, tentatively titled “Technology and Social Change”. It’s going to include an examination of the four levers of social change Larry Lessig suggests in Code and which I’ve been exploring as possible paths to civic engagement. It will include deep methodological dives into codesign, and into using anthropology as a tool for understanding user needs. It will look at unintended consequences, cases where technology’s best intentions fail, and cases where careful exploration and preparation led to technosocial systems that make users and communities more powerful than they were before.

I’m “calling my shot” here for two reasons. One, by announcing it publicly, I’m less likely to back out of it, and given how hard these problems are, backing out is a real possibility. And two, if you’ve read this far in this post, you’ve likely thought about this issue and have suggestions for what we should read and what exercises we should try in the course of the class – I hope you might be kind enough to share those with me.

In the end, I’m grateful for Shane Snow’s surreal, Black Mirror vision of the future prison both because it’s a helpful jumping off point for understanding how hard it is to make change well using technology, and because the US prison system is a broken and dysfunctional system in need of change. But we need to find ways to disrupt better, to challenge knowledgeably, to bring the people we hope to benefit into the process. If you can, please help me figure out how we teach these ideas to the smart, creative people I work with who want to change the world and are afraid of breaking it in the process.

37 Responses to The worst thing I read this year, and what it taught me… or Can we design sociotechnical systems that don’t suck?

1. Am I the only person around who thought Shane Snow’s essay was obviously meant as satire?
2. The problem you highlight is not totally new. Take, for example, Indian journalist P. Sainath’s Everybody Loves a Good Drought, a 1996 book that highlights scores of tragicomic failures in India’s attempts at poverty reduction.
3. I wonder if, by jumping off from Lessig’s ‘four levers,’ you are endorsing a rather cramped view of civic involvement and social change. In my experience, we don’t get involved and change things by creating regulatory schemes and legal apparatuses — though those may be things we wind up endorsing. Rather, we start from our motivated commitment to various communities and networks. It’s our odd emotional connection to various groups and to each other that motivates our activism — not any mechanized manner in which society imposes goodness.
4. I’m working on a new book (very early stages) which touches on some of these issues too.

Ethan,
Enjoyed this post. One suggestion for your class: market it more broadly than “just” to engineers/other techie types. Therapy/rehab/counseling personnel may not create technical devices for their clients but they are often solving problems for them as the experts, when a solving problems _with_ them approach might yield better results. Would be happy to discuss this in more detail over email including how I have come at it teaching master’s-level speech-language pathology students.

In particular, the issues of how, and whether, to involve the targets of “solutions” in the process of creating those solutions. She works with policy change, which is an interesting complement to technological change because it is often thought of as the “slow” actor compared to the “fast” commercialization of technology. In both cases, however, the need to incorporate social dynamics and contexts seems important, if not essential. There may be helpful examples in her work which highlight to students that even at the small scale of municipal government policy, the diverse otherness of the subjects of “solutions” is not easily understood, and is enhanced by engagement.

It seems like you’re trying to solve a problem that involves Shane Snow himself. Did you talk to him about what he thought? (Not suggesting that’s necessary, but given that one of the key problems here is that Snow is attempting to solve a problem that involves other people without working with them, I wondered what stopped you reaching out to him or made that not spring to mind as part of your solution.) Maybe he’s not part of the problem? Or he is, but it’s a different problem from the one you’re trying to solve here?

(In fact, there are several examples of water sourcing technology that do and don’t work for the end user — the good folks at the Metropolitan Waterworks Museum http://waterworksmuseum.org/ can connect you or link to their Waterworks Wednesdays speaker series.) (Disclaimer: this museum is my former employer and I think they do good work.)

Very interesting article. But maybe you’ve been a little too harsh on Snow. Here’s my thought:

Let’s just assume for the sake of discussion that prisons are indeed experiencing rape levels in the 20% range (though apparently that’s not what the data says). That would be a vast problem on its face (whether you binge watch Oz or not).

If that were the case, then I think you could argue that society is ALREADY running an experiment (of a sort) on prisoners. It’s experimenting to see how pervasive these violations must become before change is required. That seems unethical to me. So the current state is paradoxical in this sense. Whatever we do, we seem to be running experiments on a captive set of subjects. So I am not sure that a trial of the Snow scenario can be easily dismissed on ethical grounds. What if a group of prisoners agree to try it of their own volition?

Secondly, I think that exploring other related problems (like racially biased incarceration rates, absurdly paternalistic drug laws, etc.) is laudable and welcome. But it’s not appropriate to wish away the underlying problem as a mere “symptom.” The problem of prison sex violations may deserve our attention – even if it’s not the only problem that deserves attention. Changing the subject doesn’t seem legitimate in this case.

Anyway, I think it was smart of you to invite your readers to help you “co-design” your course. It sounds fascinating. You’ve given me some important things to think about — and you are obviously serving your students well.
Best,

This is a debate that happened and was solved in the 1970s, within the Appropriate Technology movement. Marxists wrote books critiquing Alternative Technology as mild reforms, not leading to the revolution. Meanwhile, Undercurrents magazine wrote articles on practical things people could do (a forerunner of the maker movement). Engineers teamed up with sociologists to work on science and technology for the people, starting from the needs of the poor. I worked for 4 years with Kenyans on the Kenya Ceramic Jiko, a charcoal stove that reduced deforestation, stopped children getting burns from the stove, increased the disposable income of families spending 1/3rd of their income on charcoal for cooking, and started a new artisanal industry producing 250,000 stoves per year. Such socio-technical developments came from the synthesis of knowledge from local cooks and advanced engineering science, working with an association of 100 local NGOs. So working on someone else’s problem is fruitful – as long as you go there and work with them – and avoid the fragmentation of disciplines since the 1970s which prevents such interdisciplinary work, and the assumption that getting rich is the purpose of all development, rather than something that condemns you to burn in Hell.

Minor point – I was rather confused by the compound term “code sign” before I realized you meant “co-design”. Even with that knowledge, I still tended to read it as “code sign”. Please consider using the hyphenated version.

I’m sure your group will have already read The Design of Everyday Things, by Donald Norman. An excellent work of fiction asks these questions as well: The Diamond Age, by Neal Stephenson.

A closely related question to the ones you’ve posted here is: why do news broadcasts find it easier to cover bad news than good news? Good news stories tend to be light and fluffy and not very challenging to understand, while bad news stories are rich and complex and full of expert opinions. I believe that there’s a bias here having to do with class relationships: good news for employers is bad news for employees. Good news for landlords is bad news for renters. Good news for investors is bad news for the environment. Complex, intellectually challenging good news stories are never unalloyed.

So this suggestion to improve prisons already assumes that prisons are themselves solving a problem. There is plenty of evidence to suggest they’re making a problem worse. Good news for prison wardens is bad news for prisoners, and for the general public when the prisoners are released. (And the families of the prisoners, who are themselves receiving state-sanctioned punishment).

Another area where this kind of tar baby sucks people in, is questions of carbon footprint and global warming and how to move our current insatiable appetite for energy away from oil and toward renewables. It’s taken on faith that the total energy supply must inevitably grow, never shrink, and renewables must therefore be just as easy to tap as the older, dirtier sources of energy. Very few people seem willing to question this growth at any cost mantra.

To my way of thinking, most of these questions boil down eventually into some version of, “does the economy exist to serve human beings, or do human beings exist to serve the economy?”

The ManEatingRobot post is basically describing The Cubes from Almost Human/Iso-cubes from Judge Dredd like it’s a good idea. I mean sure he can say it’s a “thought experiment” but one doesn’t go that far into the thought without partially thinking it’s a good idea.

It’s gross that he thinks this in any way resembles a model of restorative justice like they have in Norway.

My suggestion is having a look at Appreciative Inquiry and incorporating that as a base framework. It is a co-creative discovery and design process that seeks out and harnesses the energy of the bits that work, not the bits that are broken. While a lot of the articles are focused around organisations, the premise is very adaptable to visioning and co-creating in other ways. I’ve used it as a base for all my work for the last 20 years in community-led development, even in interviews for feature stories when I was working as a reporter. The real crux of it is in ‘asking the right questions’, something you identified in your above article as a vital part of successful and meaningful change. Even just the articles around framing the right questions may be helpful. Great article. I really enjoyed it. Your students are lucky. I’ve attached a couple of links, but there is lots and lots of info if you google :) Warm regards, Anneleise
https://appreciativeinquiry.case.edu/
https://design.umn.edu/about/intranet/documents/AppreciativeInquiry-Asking%20Powerful%20Questions.pdf

ps if you want an example of how a community has used Appreciative Inquiry for empowered and lasting change http://www.lyttelton.net.nz, I’ve written a case study document which is on the ‘about us’ page. We produced the document because we had people all over the world who were asking how we were doing what we were doing so successfully and we were struggling to keep up with all the visitors! Lyttelton was half destroyed by an earthquake a couple of years after that case study was written and there are also some very good papers on how the existing framework of community significantly empowered and supported recovery efforts.

The main one, which is actually diminishing the importance of those academic references, is to codesign the syllabus itself. Been doing this in a few courses after reading about it through Indiana University’s Faculty Center for Excellence in Teaching (they called it the “collaborative syllabus”, but your version could be true codesign). It can work quite well and it sure would be fitting in this case. Of course, it’s “hard and frustrating”. But it’s very rewarding, in terms of the learning experience itself.

Other suggestions to engage learners in this particular course would have to do with their spheres of agency, experience, and expertise.

The latter could lead to a specialised version of “peer instruction”. Engineers are typically quite proud of what they (think they) know, so basing an activity on getting them to teach something they know well could give interesting results. A snappier version would be: “if you’re so knowledgeable about the topic, why don’t you get somebody else to grasp it?” But it can be handled with tact, and it can help them interact with people who have a different frame of mind from theirs.

The part about experience is meant to be about something they do know full well but may have difficulty bringing out, especially when it comes to being users of technology. In many podcasts and blogs, John Siracusa has become recognised as someone who will go “hypercritical” on tools he actually uses. His expertise as an engineer does come back on occasion, but there’s a striking difference with the work of “Dr. Drang”, who will meticulously analyse a product for all its flaws. Getting students to “pull a Siracusa” and rant about product design could lead to interesting exchanges, especially if facilitated properly.

The part about agency is somewhat similar, but it’s about recognising the limits to what they can do. Any course having to do with social change needs to address the fact that university students aren’t simply there “to make the world a better place”. This is where people like Hoodfar (currently the victim of Iran’s regime) can provide a lot of insight. Her research on “voluntourism” was a key part of our courses preparing Canadian volunteers going to Uganda. As both an anthropologist and a feminist, she could bring a lot to a course like this, were Iran’s prison system as conducive to thoughtful action as some US prisons are dreamt up to be.

[Edit: Gada Mahrouse is the one who taught on “voluntourism” with the Concordia Volunteers Abroad Programme, not Homa Hoodfar. Apologies. The #FreeHoma campaign has been on my mind, especially when we talk about prisons. Both Hoodfar and Mahrouse are Concordia colleagues.]

If you haven’t read “The Zimbabwe Bush Pump: Mechanics of a Fluid Technology”, well, I cannot recommend it highly enough. It has amazing insight into the relationships between technology, inventors, and communities.

You ask for suggestions of what to read and activities for students at one point in this article. I teach a class on technology and disability at Virginia Tech, and we spend a lot of time talking about how design goes wrong in the context of disability. For so many items, designers unveil them with a great flourish and lots of media hype, and then those items do not end up in wide use due to an array of factors. Or they do end up in wide use, to the detriment of disability cultures or the destruction of disabled lives. Or the very ways they are presented are ableist and reinforce tired tropes that invoke pity or heroic overcoming in the context of disability. Or it’s a great thing, but with no infrastructure to allow people to use and enjoy it.

Two topics that I think work especially well here are exoskeletons and cochlear implants. Looking at some of the really passionate writing by disabled people about exoskeleton projects, and at how many of these projects reinforce bias against the people they purport to serve, really blows a lot of students away. Bill Peace has some great articles on his blog about it. I also have a Harriet McBryde Johnson reading with this unit that I read aloud on the first day of class, which talks about resenting the well-meaning people who want to fix the main character (it’s from a novel).

The second topic that captivates students is the study of the cochlear implant and its reception (and the protest it caused). _Sound and Fury_ is an older documentary, but it still really engages people on the topic in a personal way, and, combined with more recent writing about the Deaf community and some recent media coverage of Nyle DiMarco’s stance on sign language, it could bring a lot out in discussion with students who don’t necessarily have a background in disability studies.

Anyway, best of luck. My whole reading list can be found here: techanddisability.com. I’ll be changing it up a bit in the next month or so before my next semester starts. Best wishes on your own course.