The Singularity University’s SciFiDI (“DI” stands for “Design Intelligence”) workshops use science-fiction thinking and methods to inspire innovation. They gather corporations, start-ups and development organizations, and mix them with illustrators, writers, futurists and designers.

The “Design Intelligence for the Future Home” graphic book that illustrates the workshops’ outcomes is cool, yet not really convincing. The four stories lack complexity and, at times, consistency. What’s interesting, though, is that the world in which the stories take place is almost dystopian, clearly contradicting the Singularity University’s usual positive outlook on a future where technology has solved “humanity’s most urgent, persistent challenges”.

However, workshop participants seemed to enjoy the experience and find it useful, as Alison Berman reports – and that is probably all that matters in the end. Her post also gives us a glimpse into the methodology used.

Organized by Aquitaine Europe Communication, and directed and curated by myself and Daniel Erasmus, Ci’Num was a global, multicultural, three-year foresight process (2005-2007) that aimed to shed new light on the future of our digital civilizations, taking into account geopolitical, cultural and economic differences. Our focus was:

on the specific contribution of, and challenges related to, the emergence of ubiquitous and “intimate” technologies stemming from the convergence between nanotech, biotech, IT and cognitive science;

on the social appropriation and production of technology;

and on the ways, tools and methods through which we become empowered to shape our personal and collective futures – i.e., not on figuring out the most likely futures, but on recognizing uncertainties and looking for ways to maximize choices and opportunities in any given future.

In 2016, at the request of Melbourne’s Deakin University, the Canadian writer and journalist Cory Doctorow wrote an interesting story about how the real-world development of self-driving cars could go really, really wrong.

As Doctorow himself puts it: “The story, Car Wars, takes the form of a series of vignettes that illustrate the problem with designing cars to control their drivers, interspersed with survey questions to spur discussion of the wider issues of governments and manufacturers being able to control the operation of devices we own and depend on.” (In fact, the survey questions don’t really “spur discussion”, since Deakin professor Gleb Beliakov provides his own unequivocal and somewhat laconic answer to all of them – you can, however, view the survey results here.)

In the story, the interaction between highly intelligent self-driving software, rules and exceptions forced into the cars’ systems by all kinds of authorities, and a well-planned act of behavioral hacking forces most of the city’s cars to behave like a herd of frightened buffaloes driven over the edge of a cliff. All but one cleverly (though illegally) software-hacked car, that is. But of course, if you had the right to hack your car, and everyone did it, the situation could get even worse. Or could it?