
As a professor of design and a design fiction researcher, I write academic papers and blog weekly about the future. I teach about the future of design, and I create future scenarios, sometimes with my students, that provoke us to look at what we are doing, what we are making, why we are making it, and the ramifications that inevitably follow. Primarily, I try to focus both designers and decision-makers on the steps they can take today to keep from being blindsided tomorrow. Futurists seem to be all the rage these days, telling us to prepare for the Singularity, autonomous everything, or robots taking our jobs. Recently, Jennifer Doudna, co-inventor of the gene-editing technique called CRISPR-Cas9, has been making the rounds and sounding the alarm that technology is moving so fast that we aren’t going to be able to keep a host of unforeseen (and foreseen) circumstances inside Pandora’s box. This concern should not be confined to bioengineering; it extends to virtually any field where technology is racing forward, fueled by venture capital and the desperate need to stay on top of whatever space we are playing in. There is a lot at stake. Technology has already redefined privacy, behavioral wellness, personal autonomy, healthcare, labor, and maybe even our humanness, just to name a few.

Several recent articles have highlighted the changing world of design and the pressure on designers to make user adoption more like user addiction to ensure the success of a product or app. Behavioral economics is becoming a new arena in which we use algorithms to manipulate users. Some designers pass the buck for the questionable ethics of addictive products to the clients or corporations that employ them; others feel compelled to step aside and work on less lucrative projects or apply their skills to social causes. Most really care and want to help. And designers are uniquely positioned and trained to tackle these wicked problems, if only we would collaborate with them.

Beyond the companies that may be deliberately trying to manipulate us are those that unknowingly, or at least unintentionally, transform our behaviors in ways that are potentially harmful. Traditionally, we seek to hold someone responsible when a product or service is faulty: the physician for malpractice; the designer or manufacturer when a toy causes injury, a garment falls apart, or an appliance self-destructs. But as we move toward systemic designs that are less physical and more emotional, behavioral, or biological, design faults may not be so easy to identify, and their repercussions may become noticeable only after serious issues have arisen. In fact, many of the apps and operating systems in use today launch with admitted errors and bugs. Designers rely on real-life testing to identify problems, then issue patches, revisions, and new versions.

In the realm of nanotechnology, while scientists and thought leaders have proposed guidelines and best practices, research and development teams in labs around the world race forward without regulation, creating molecule-sized structures, machines, and substances with no idea whether they are safe or what the long-term effects of exposure might be. In biotechnology, while folks like Jennifer Doudna appeal to an ethical cadre of researchers to tread carefully in the realm of genetic engineering (especially when it comes to heritable gene manipulation), those morals and ethics are not universally shared. Recent headlines attest to the fact that some scientists are bent on moving forward regardless of the implications.

Some technologies, such as our smartphones, have become equally invasive, yet we now consider them mundane. In just ten years since the introduction of the iPhone, we have transformed behaviors, upended our modes of communication, redefined privacy, distracted our attention, distorted reality, and manipulated a predicted 2.3 billion users as of 2017. [1] It is worth contemplating that this disruption comes not from a faulty product, but from one that can only be considered wildly successful.

A plethora of additional technologies is poised to redefine our worlds yet again, including artificial intelligence, ubiquitous surveillance, human augmentation, robotics; virtual, augmented, and mixed reality; and the pervasive Internet of Things. Many of these technologies make their way into our experiences through the promise of better living, medical breakthroughs, or a safer and more secure life. But too often we ignore the potential downsides, the unintended consequences, and the systemic ripple effects that these technologies spawn. Why?

In many cases, we do not want to stand in the way of progress. In others, we believe that the benefits outweigh the disadvantages. Yet this is the same thinking that has spawned some of our most complex and daunting systems, from nuclear weapons to air travel and the internal combustion engine. Each of these began with the best of intentions and, in many ways, was as successful and initially beneficial as it could be. At the same time, each advanced and proliferated far more rapidly than we were prepared to accommodate. Dirty bombs are a reality we did not expect. The alluring efficiency with which we can fly from one city to another has nevertheless spawned a gnarly network of air traffic, baggage logistics, and anti-terrorism measures that is arguably more elaborate than getting an aircraft off the ground. Traffic, freeways, infrastructure, safety, and the drain on natural resources are complexities never imagined with the revolution of personal transportation. We didn’t see the entailments of success.

This is not always true. There have often been scientists and thought leaders waving the yellow flag of caution. I have written about how, “back in 1975, scientists and researchers got together at Asilomar because they saw the handwriting on the wall. They drew up a set of resolutions to make sure that one day the promise of Bioengineering (still a glimmer in their eyes) would not get out of hand.”[2] Indeed, researchers like Jennifer Doudna continue to carry the banner. A similar conference took place earlier this year to alert us to the potential dangers of technology, and another soon after to put forth recommendations and guidelines to ensure that when machines are smarter than we are, they carry on in a beneficent role. Too often, however, it is only the scientists and visionaries who attend these conferences. [3] Noticeably absent, though not always, is corporate leadership.

Nevertheless, in this country, there remains no safeguarding regulation for nanotech, bioengineering, or AI research. It is a free-for-all, and any of these fields could bring massive disruption not only to our lifestyles but also to our culture, our behavior, and our humanness. Who is responsible?

For nearly 40 years, an environmental movement has been spreading globally. Good stewardship is a good idea. But it wasn’t until most corporations saw a way for it to make economic sense that they began to focus on it, and then to promote it as their contribution to society, their responsibility, their civic duty. As well-intentioned as some may be (and many are), many more are not paying attention to the effect of their technological achievements on our human condition.

We design most technologies with a combination of perceived user need and commercial potential. In many cases, these are coupled with more altruistic motivations such as a “do no harm” commitment to the environment and fair labor practices. As we move toward the capability to change ourselves in fundamental ways, are we also giving significant thought to the behaviors that we will engender by such innovations, or the resulting implications for society, culture, and the interconnectedness of everything?

Enter Humane Technology

Ultimately we will have to demand this level of thought, beginning with ourselves. But we should not fight this alone. Corporations concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Humane technology considers the socio-behavioral ramifications of products and services: digital dependency and addiction, job loss, genetic repercussions, and the human impact of nanotechnologies, AI, and the Internet of Things.

To whom do we turn when a 14-year-old becomes addicted to her smartphone or obsessed with her social media popularity? We could condemn the parents for lack of supervision, but many of them are equally distracted. Who is responsible when a drone is misused to vandalize property or fire a gun, or for the anticipated 1 billion drones flying around by 2030? [4] Who will answer for the repercussions of artificial intelligence that spouts hate speech? Where will the buck stop when genetic profiling becomes a requirement for getting insured or getting a job?

While the backlash against these types of unintended consequences or unforeseen circumstances is not yet widespread, and citizens have not taken to the streets in mass protest, behavioral and social changes like these may be imminent as a result of dozens of transformational technologies currently under development in labs and R&D departments across the globe. Who is looking at the unforeseen or the unintended? Who is paying attention, and who is turning a blind eye?

It was possible to have anticipated texting and driving. It is possible to anticipate a host of horrific side effects of nanotechnology on both humans and the environment. It is possible to tag the ever-present bad actor to any number of new technologies. It is possible to identify when the race to master artificial intelligence may be coming at the expense of making it safe, and to decide where to draw the line. In fact, there is a marketing opportunity here: corporate interests could take the lead and leverage their efforts to preempt adverse side effects into a distinctive advantage.

Emphasizing humane technology is an automatic benefit for an ethical company; for those more concerned with profit than ethics (just between you and me), it offers the opportunity for a better brand image and, at least, the appearance of social concern. Whatever the motivation, we are looking at a future where we are either prepared for what happens next, or we are caught napping.

This responsibility should start with anticipatory methodologies that examine the social, cultural, and behavioral ramifications, and the unintended consequences, of what we create. Designers and those trained in design research are excellent collaborators. My brand of design fiction is intended to take us into the future in an immersive and visceral way, to provoke the discussion and debate that anticipate the storm, should there be one; promising utopia is rarely the tinder to fuel a provocation. Design fiction embraces the art of critical thinking and thought problems as a means of anticipating conflict and complexity before these become problems to be solved.

Ultimately, we have to depart from the idea that technology will be the magic pill that solves the ills of humanity. Design fiction and other anticipatory methodologies can help us acknowledge our humanness and our propensity to foul things up. If we do not self-regulate, regulation will inevitably follow, probably spurred on by some unspeakable tragedy. There is an opportunity now for the corporation to step up to the future with a responsible, thoughtful compassion for our humanity.

There was a flurry of reports from dozens of news sources (including CNN) last week that an Amazon Echo (Alexa) called the police during a New Mexico incident of domestic violence. The alleged call began a SWAT standoff, and the victim’s boyfriend was eventually arrested. An interesting story, except that, after a fact-check, it could not be what happened. Several sources, including the New York Times and WIRED, debunked the story with details on how Alexa calling 911 is technologically impossible, at least for now. And although the Bernalillo County, New Mexico, Sheriff’s Department swears to it, according to WIRED,

“Someone called the police that day. It just wasn’t Alexa.”

Even Amazon agrees, via a spokesperson’s email,

“The receiving end would also need to have an Echo device or the Alexa app connected to Wi-Fi or mobile data, and they would need to have Alexa calling/messaging set up.”1

So it didn’t happen. But most agree that while it may be technologically impossible today, it probably won’t be for very long. The provocative side of the WIRED article proposed this thought:

“The Bernalillo County incident almost certainly had nothing to do with Alexa. But it presents an opportunity to think about issues and abilities that will become real sooner than you might think.”

On the upside, some see benefits in Alexa’s ability to intervene in a domestic dispute that could turn lethal, but they fear something called “false positives.” Could an offhanded comment prompt Alexa to call the police? And if it did, would you feel as though Alexa had overstepped her bounds?

Others see the potential in suicide prevention. Alexa could calm you down or make suggestions for ways to move beyond the urge to die.

But as we contemplate opening this door, we need to acknowledge that we’re letting these devices listen to us 24/7 and giving them permission to make decisions on our behalf, whether we want them to or not. The WIRED article also included a comment from Evan Selinger of RIT (whom I’ve quoted before).

“Cyberservants will exhibit mission creep over time. They’ll take on more and more functions. And they’ll habituate us to become increasingly comfortable with always-on environments listening to our intimate spaces.”

These technologies start out warm and fuzzy, but as they become part of our lives, they can change us, and not always for the good. This idea is something I contemplated a couple of years ago with my Ubiquitous Surveillance future. In that case, the invasion came not as a listening device but as a camera (already part of Amazon’s Echo Look). You can check that out and do your own provocation by visiting the link.

I’m glad that there are people like Susan Liautaud (whom I wrote about last week) and Evan Selinger thinking about the effects of technology on society, but I still fear most of us take the stance of Dan Reidenberg, who is also quoted in the WIRED piece.

“I don’t think we can avoid this. This is where it is going to go. It is really about us adapting to that,” he says.

Nonsense! That’s like getting in the car with a drunk driver and then doing your best to adapt. Nobody is putting a gun to your head to get into the car. There are decisions to be made here, and they don’t have to be made after the technology has created seemingly insurmountable problems or intrusions in our lives. The companies that make them should be having these discussions now, and we should be invited to share our opinions.

Though I tinge most of my blogs with ethical questions, the last time I brought up this topic specifically was back in 2015. I guess I am ready to give it another go. Ethics is a tough topic. Dealt with purely superficially, ethics would seem natural, like common sense, or the right thing to do. But if that’s the case, why do so many people do the wrong thing? Things get even more complicated when we move into institutionally complex issues like banking, governing, technology, genetics, health care, or national defense, just to name a few.

The last time I wrote about this, I highlighted Michael Sandel, Professor of Philosophy and Government at Harvard Law School, where he teaches a wildly popular course called “Justice.” Then, I was glad to see that the big questions were still being addressed in places like Harvard. Some of his questions then, which came from a FastCo article, were:

“Is it right to take from the rich and give to the poor? Is it right to legislate personal safety? Can torture ever be justified? Should we try to live forever? Buy our way to the head of the line? Create perfect children?”

These are undoubtedly important and prescient questions to ask, especially as we begin to confront technologies that make things formerly inconceivable or plainly impossible not only possible but likely.

So I was pleased to see, last month, an op-ed piece in WIRED by Susan Liautaud, founder of The Ethics Incubator. Susan is about as closely aligned with my tech concerns as anyone I have read. And she brings solid thinking to the issues.

“Technology is approaching the man-machine and man-animal boundaries. And with this, society may be leaping into humanity-defining innovation without the equivalent of a constitutional convention to decide who should have the authority to decide whether, when, and how these innovations are released into society. What are the ethical ramifications? What checks and balances might be important?”

Her comments are right in line with my research and co-research into Humane Technologies. Liautaud continues:

“Increasingly, the people and companies with the technological or scientific ability to create new products or innovations are de facto making policy decisions that affect human safety and society. But these decisions are often based on the creator’s intent for the product, and they don’t always take into account its potential risks and unforeseen uses. What if gene-editing is diverted for terrorist ends? What if human-pig chimeras mate? What if citizens prefer to see birds rather than flying cars when they look out a window? (Apparently, this is a real risk. Uber plans to offer flight-hailing apps by 2020.) What if Echo Look leads to mental health issues for teenagers? Who bears responsibility for the consequences?”

For me, the answer to that last question is all of us. We should not rely on business and industry to make these decisions, nor expect our government to do it. We have to become involved in these issues at the public level.

Michael Sandel believes that the public is hungry for these issues, but we tend to shy away from them. They can be confrontational and divisive, and no one wants to make waves or be politically incorrect. That’s a mistake.

So while the last thing I want is a politician or CEO making these decisions, these two constituencies could do the responsible thing and create forums for these discussions so that the public can weigh in on them. To do anything less borders on arrogance.

Ultimately we will have to demand this level of thought, beginning with ourselves. This responsibility should start with anticipatory methodologies that examine the social, cultural and behavioral ramifications, and unintended consequences of what we create.

But we should not fight this alone. Corporations and governments concerned with appearing sensitive and proactive toward the environment and social justice need to add a new pillar to their edifice as responsible global citizens: humane technology.

Back on May 19th, before I went on holiday, I promised to comment on an article that appeared that week advocating that we would be better off with artificial intelligence (AI) as President of the United States. Joshua Davis authored the piece, Hear Me Out: Let’s Elect an AI as President, for the business section of WIRED online. Let’s start out with a few quotes.

“An artificially intelligent president could be trained to maximize happiness for the most people without infringing on civil liberties.”

“Within a decade, tens of thousands of people will entrust their daily commute—and their safety—to an algorithm, and they’ll do it happily…The increase in human productivity and happiness will be enormous.”

Let’s start with the word happiness. What is that, anyway? I’ve seen it around in several discourses about the future, this idea that somehow we have to start focusing on human happiness above all things. But what makes me happy and what makes you happy may very well be different things. Then there is the frightening idea that it is the job of government to make us happy! There are a lot of folks out there who think the government should give us a guaranteed income, pay for our healthcare, and now, apparently, make us happy. If you haven’t noticed from my previous blogs, I am not a progressive. If you believe that government should undertake the happy challenge, you had better hope that its idea of happiness coincides with your own. Gerd Leonhard, a futurist whose work I respect, says that there are two types of happiness: the first is hedonic (pleasure), which tends to be temporary; the other is eudaimonic happiness, which he defines as human flourishing.1 I prefer the latter, as it is likely to be more meaningful. Meaning is rather crucial to well-being and purpose in life. I believe that we should be responsible for our own happiness. God help us if we leave it up to a machine.

This brings me to my next issue with this insane idea. Davis suggests that simply by not driving, we will see an enormous increase in human productivity and happiness. According to the website Overflow Data,

“Of the 139,786,639 working individuals in the US, 7,000,722, or about 5.01%, use public transit to get to work according to the 2013 American Communities Survey.”

Are those 7 million working individuals who don’t drive happier and more productive? The survey should have asked, but I’m betting the answer is no. Davis also assumes that everyone will be able to afford an autonomous vehicle. Maybe providing every American with an autonomous vehicle is also the job of the government.
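As an aside, the quoted figures are at least internally consistent. A quick back-of-the-envelope check, a minimal sketch in Python with the numbers taken straight from the quote above, confirms the percentage:

    # Back-of-the-envelope check of the survey figures quoted above
    # (numbers copied from the quote; nothing new is introduced here).
    workers = 139_786_639   # working individuals in the US
    transit = 7_000_722     # of whom commute by public transit

    share = transit / workers * 100
    print(f"{share:.2f}% of workers use public transit")  # -> 5.01%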

Where I agree with Davis is that we will probably abdicate our daily commute to an algorithm and do it happily. Maybe this is the most disturbing part of his argument. As I am fond of saying, we are sponges for technology, and we often adopt new technology without so much as a thought toward the broader ramifications of what it means to our humanity.

There are sober people out there advocating that we must start to abdicate our decision-making to algorithms because we have too many decisions to make. They are concerned that the current state of affairs is simply too painful for humankind. If you dig into the rationale that these experts are using, many of them are motivated by commerce. Already Google and Facebook and the algorithms of a dozen different apps are telling you what you should buy, where you should eat, who you should “friend” and, in some cases, what you should think. They give you news (real or fake), and they tell you this is what will make you happy. Is it working? Agendas are everywhere, but very few of them have you in the center.

As part of his rationale, Davis cites the proven ability of AI to beat the world’s Go champions over and over and over again, and to find melanomas better than board-certified dermatologists.

“It won’t be long before an AI is sophisticated enough to implement a core set of beliefs in ways that reflect changes in the world. In other words, the time is coming when AIs will have better judgment than most politicians.”

That seems like grounds to elect one as President, right? In fact, it is just another way for us to take our eye off the ball, to subordinate our autonomy to more powerful forces in the belief that technology will save us and make us happier.

Back to my previous point, that’s what is so frightening. It is precisely the kind of argument that people buy into. What if the new AI President decides that we will all be happier if we’re sedated, and then using executive powers makes it law? Forget checks and balances, since who else in government could win an argument against an all-knowing AI? How much power will the new AI President give to other algorithms, bots, and machines?

If we are willing to give up the process of purposeful work to make a living wage in exchange for a guaranteed income, to subordinate our decision-making to have “less to think about,” to abandon reality for a “good enough” simulation, and believe that this new AI will be free of the special interests who think they control it, then get ready for the future.

A few weeks ago, I gushed about how my students killed it at a recent guerrilla future enactment of a ubiquitous Augmented Reality (AR) future. Shortly after that, Mark Zuckerberg announced the Facebook AR platform. It uses the camera on your smartphone and, according to a recent WIRED article, transforms your smartphone into an AR engine.

Unfortunately, as we all know (and so does Zuck), the smartphone isn’t currently much of an engine. AR requires a lot of processing, and so does the AI that allows it to recognize the real world so it can layer additional information on top of it. That’s why Facebook (and others) are building their own neural network chips, so that the platform doesn’t have to run to the Cloud to access the processing required for Artificial Intelligence (AI). That will inevitably happen and will make the smartphone experience more seamless, but it’s just part of the challenge for Facebook.
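To make the cloud-versus-on-device point concrete, here is a minimal sketch. This is my own illustration, not Facebook’s architecture, and the latency numbers are assumed ballpark values, but it shows why real-time AR inference has to live on the phone: at 60 frames per second, the per-frame budget is smaller than a typical mobile network round trip.

    # Illustrative only: rough, assumed latencies (in milliseconds) for
    # processing one AR frame. Real numbers vary widely; the comparison
    # is the point, not the specific values.
    FRAME_BUDGET_60FPS = 1000 / 60          # ~16.7 ms per frame at 60 fps

    on_device_inference = 10                # assumed: local neural-net chip
    cloud_round_trip = 80                   # assumed: mobile network RTT
    cloud_inference = 5                     # assumed: fast server-side model

    local_total = on_device_inference
    cloud_total = cloud_round_trip + cloud_inference

    for name, total in [("on-device", local_total), ("cloud", cloud_total)]:
        verdict = "fits" if total <= FRAME_BUDGET_60FPS else "blows"
        print(f"{name}: {total:.0f} ms, {verdict} the "
              f"{FRAME_BUDGET_60FPS:.1f} ms frame budget")

Under these assumptions, the cloud path misses the frame budget several times over, which is the whole argument for on-device neural network silicon.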

If you add to that the idea that we become even more dependent on looking at our phones while we are walking or, worse, driving (think Pokémon GO), then this latest announcement is, at best, foreshadowing.

“‘The phone has generally sucked for AR because holding it up and looking through it is tiring, awkward, inconvenient, and socially unacceptable,’ says MacIntyre. Adding more of it doesn’t solve those issues. It exacerbates them. (The exception might be the social acceptability part; as MacIntyre notes, selfies were awkward until they weren’t.)”

That last part is an especially interesting point. I’ll have to come back to that in another post.

My students did considerable research on exactly this kind of early infancy that technologies undergo on their road to ubiquity. In another WIRED article, even Zuckerberg admitted,

“We all know where we want this to get eventually,” said Zuckerberg in his keynote. “We want glasses, or eventually contact lenses, that look and feel normal, but that let us overlay all kinds of information and digital objects on top of the real world.”

So there you have it. Glasses are the end game, but as my students agreed, contact lenses not so much. Think about it: if you didn’t have to stick a contact lens in your eyeball, you wouldn’t. And even if you solved the problem of computing inside a wafer-thin lens, along with the myriad problems of heat and in-eye time, ubiquitous contact lenses are much farther away, if they ever arrive.

Student design team from Ohio State’s Collaborative Studio.

This is why I find my students’ solution so much more elegant and a far more logical trajectory. According to Barrett,

“The optimistic timeline for that sort of tech, though, stretches out to five or 10 years. In the meantime, then, an imperfect solution takes the stage.”

My students locked it down to seven years.

Finally, Zuckerberg made this statement:

“Augmented reality is going to help us mix the digital and physical in all new ways,” said Zuckerberg at F8. “And that’s going to make our physical reality better.”

Except that Zuck’s version of better and mine or yours may not be the same. Exactly what is wrong with reality anyway?

If you want to see the full-blown presentation of what my students produced, you can view it at aughumana.net.

Note: Currently, the AugHumana experience is superior on Google Chrome. If you are a Safari or Firefox purist, you may have to wait for the page to load (up to 2 minutes). We’re working on this, so just use Chrome this time. We hope to have it fixed soon.

We often associate the term disruption with a snag in our phone, internet, or other infrastructure service, but there is a larger sense of the expression. Technological disruption refers to the phenomenon that occurs when innovation “…significantly alters the way that businesses operate. A disruptive technology may force companies to alter the way that they approach their business, risk losing market share or risk becoming irrelevant.”1

Some track the idea as far back as Karl Marx, who influenced economist Joseph Schumpeter to coin the term “creative destruction” in 1942.2 Schumpeter described it as the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” But it was Clayton M. Christensen, a Harvard Business School professor, who described its current framework: “…a disruptive technology is a new emerging technology that unexpectedly displaces an established one.”3

OK, so much for the history lesson. How does this affect us? Historical examples of technological disruption go back to the railroads and the mass-produced automobile, technologies that changed the world. Today we can point to the Internet as possibly this century’s most transformative technology to date. However, we can’t ignore the smartphone, barely ten years old, which has brought together a host of converging technologies and substantially eliminated the need for the calculator, the dictaphone, landlines, the GPS box you used to put on your dashboard, still and video cameras, and possibly your privacy. With the proliferation of apps within the smartphone platform, there are hundreds if not thousands of other “services” that now do work we had previously done by other means.

But hold on to your hat. Technological disruption is just getting started. For the next round, we will see an increasingly pervasive Internet of Things (IoT), advanced robotics, exponential growth in Artificial Intelligence (AI) and machine learning, ubiquitous Augmented Reality (AR), Virtual Reality (VR), blockchain systems, precise genetic engineering, and advanced renewable energy systems. Some of these, such as blockchain systems, will have potentially cataclysmic effects on business: widespread adoption of blockchain systems that enable digital money could eliminate the need for banks, credit card companies, and currency of all forms. How’s that for disruptive? Other innovations will simply continue to transform us and our behaviors. Over the next few weeks, I will discuss some of these potential disruptions and their unique characteristics.

Artifact from the future.

Ed. note: I define an artifact from the future as something you might bring back as evidence that you were there. A sort of proof of what it is and what is there. Think Rod Taylor and the flower from The Time Machine.

Closing excerpts from a journal found in 2101.

I was born on this day in 1990. Somehow it seems as though my 111th birthday would be more auspicious, more cherished, more celebratory. They tell me I can live forever if I want. But the question that burns into my brain is, “Why?” What is left? What is there here that I should look forward to?

When I was a boy, before the surge, I remember looking forward to going to the ocean. The trip couldn’t come soon enough, and the days seemed like years until finally we would go. We would drive in a car, my parents and I, and I would stand there with my feet in the wet sand and feel the warm water lap at my toes. Perhaps it was the waiting that made it all so meaningful. We don’t wait anymore. We don’t have to. If I want the ocean to circle around my ankles and feel my feet sink into the soft, supple sand, I have only to plug in. I can smell it, feel it, hear it, and see it. If I want, I can even dip my finger into the water and experience that unmistakably intense saltiness. When I’m ready to come back, I simply unplug. I think that it is the ocean, but I know that it is not. I don’t have to wait for it.

Already I’ve had three organ replacements, grown from my own DNA. I’ve spent thousands upon thousands of hours in the V, the virtual world we have created out of our own fantasies, dreams, and perversions. Nothing is real there, and there is no waiting. The crimes I have committed there are harmless, they tell me, even therapeutic. It keeps us docile in the real world. But I think there is damage. I know there is. It goes beyond the virtual. It wreaks havoc in my soul. People don’t believe in souls anymore. They don’t have to. If you never die, what’s the difference?

My avatar tells me that death is the final frontier, the one thing you can’t experience in the V.

Soon I will know for certain. Here is my plan: It’s difficult to gain access to the mag train tunnel, but I’ve found a way in. They say that when a mag train hits you at 700 miles an hour you vaporize. I kind of like the thought of that.

There’s really no one to say goodbye to. If anyone wishes to pursue the vapor trail to me, my memories and persona are in the vault at the IABank on Prosser Strasse. My account number is #459LK077JE28977. If anyone wants to know.

Don’t lie: wouldn’t it be fun to kick the tires?

In a previous blog, I posted a rambling essay on roboethics and the misuse of synths. However, in the world of The Lightstream Chronicles, most synths are used well within the limits of the law, and those like Marie and Toei are almost ubiquitous. As we can see on page 79, Lee Chen’s houseboy is a synth and a masseuse. People in the 22nd century look at synths the way people in the 21st century looked at cars. The wealthier you are, the more likely you are to have more than one synth, and probably ones more sophisticated in design and feature set. Those in the lower income brackets might have an older or more basic model. You might even find those who are strapped for New Asia credits doing their own synth repairs and cobbling together parts from scrap or other models.

The average future family has one synth, like Marie, usually a domestic who doubles as a nanny, cooks, cleans, and handles a variety of household chores. The price of a domestic synth varies based upon what the unit is capable of doing. Slightly analogous to the smartphone of the 21st century, the feature set of a synth can be augmented with uploaded apps, called scripps, that include features such as language proficiency or levels of expertise in, say, medicine or music. Scripps are moderately priced, but depending on the model there are limits to scripp memory. Special functions such as sex organs are another optional feature.

There is also a booming business in synth companions. At the top of the line are recent improvements on nearly human characteristics such as those present in Keiji-T. Keiji’s T-Class designation is, of course, reserved for the police force; the domestic counterpart would be an H-Class. This class of synth can also be modeled as a near-exact visual duplicate grown from its owner’s DNA.

Synths can also be built to resemble any species or even a combination of species. Nearly half of all domestic pets are synthetic. Popular cross-species varieties are Homo sapiens crossed with Canis lupus familiaris, Reptilia, Felidae, Ursidae, and Delphinidae. Many of these blends can also be done in the lab by combining human DNA, though the variations are considerably fewer.

Anyway, I’ve just returned from holiday, having been virtually free from the computer for nearly a week. I finished two books, started a third, and did a lot of mental tweaking to my story.

Without tipping my hand (too far) on the plot of my graphic novel (since it is not 100% solidified), I can say that it has always dealt with the ramifications and implications of a somewhat transhumanist future, a world where scientism rules the day. As the prologue to my screenplay states, “Scientific advances have enabled the manufacture of life-like robots. Known as synthetics, these robots are found in all walks of life and can be virtually indistinguishable from humans.” Some of my key characters fit this description, and even my humans are considerably augmented, enhanced, and amplified.

While my story includes a fair amount of mystery and action, I never intended the read to be one-dimensional. I hope to thread some thought-provoking themes and opposing ideas into the mix. This is especially relevant in light of the fact that my paper, the whole design fiction aspect of this project, is an examination of the design-culture relationship. What we design will affect our culture, and vice versa. What happens when we are able to design and create near-humans? What will we teach them? How will we use them? What capabilities should they have or not have? What will separate our future, synthetically augmented human sons and daughters from their purely synthetic counterparts? What role will ethics play in this future drama? After all, there is no science to ethics.

Meanwhile, all of these questions seem to be surfacing around me in our current cultural environment, as we see a flurry of discussion about Kurzweil’s optimistic singularity and Vernor Vinge’s less-than-optimistic predictions for that same technology gone astray. In fact, Kurzweil has even enlisted Michio Kaku, Deepak Chopra, and a host of other “thinkers” and, of course, the mandatory celebrities (no doubt for their scientific insight) for a live discussion on the topic that will be coming to a theater near you.

I guess this means my novel is timely.

I’ve also done some additional thinking on stylistic texture and setting, especially in light of the fact that recent press releases have put the locale for the upcoming screen adaptation of Akira in “New Manhattan”. Hmmm.