
“You don’t understand,” said the soldier sitting next to me, speaking into his phone. His hand was shaking. “They’re dangerous. Really dangerous. You need to find somewhere safe! Go to your mother, and call me as soon as you get there.”

He hung up, and held the phone with both hands on his lap. I could see the beads of sweat forming on his forehead.

“Is everything OK? Is there something I can help with?” I asked politely.

He shot a frightened look toward me. “Did you hear what’s happening on LinkedIn?” he asked.

“A bit,” I said. “What, exactly? What did they do now?”

“It’s not what they did, it’s what was done to them,” he muttered, and buried his head in his hands. “Didn’t you hear that LinkedIn was hacked? One hundred and seventeen million encrypted user passwords are now being sold to anyone who can pay all of two thousand dollars for them, and I’ve heard that hackers who’ve analyzed those encrypted passwords have managed to crack ninety percent of them. That means that over one hundred million user accounts are now compromised. What’s more, I’ve just returned from Afghanistan. Do you know what that means?”

“No,” I said. “What?”

“I fought the Taliban there, and now they know who I am,” he muttered. “I always wore a nametag on my uniform, and any Afghan wanting to take revenge on me could have already found my password. He’ll know where I live, based on the personal details in my account. They’ll know who my wife is. They’ll know how to get to our house!”

“Oh,” I said. “This is the world without privacy that we’re all afraid of. But it’s OK. They won’t find your wife.”

He looked up with a miserable glance. “Why not?”

“Because LinkedIn was already hacked once, four years ago, in 2012,” I explained. “They just didn’t understand how serious the problem was back then. They thought that only six and a half million passwords were stolen. Now it turns out that for all that time, Russian hackers had all of those passwords, and while they may well have used them – or sold them to the Chinese, to ISIS, or to other centers of power – you can still set your mind at ease, provided that you changed your password.”

“Actually, I did,” he said. “In 2013, I think.”

“So you see? Everything’s OK,” I reassured him. “Or, in more exact terms, sufficiently OK, since this whole episode should teach us all an important lesson. Real privacy doesn’t exist anymore. One of the more secure companies in the world was hacked, and the breach wasn’t exposed for four years. Now, think about it, and tell me yourself – what are the chances that the world’s major databases haven’t already been hacked by the intelligence services of countries like Russia, China, or even the United States, working under the radar?”

He thought for a moment. “None?” he suggested.

“That’s what I think, too,” I said. “Hey, Snowden managed to steal enormous amounts of information from the National Security Agency of the United States, and no one was even aware that the information had disappeared until he let the cat out of the bag himself. He was just one more citizen concerned about what this agency was doing. What are the chances that the Chinese haven’t managed to bribe other people at the agency to send them the information? Or that the United States hasn’t planted its own agents in Russian or Chinese institutions, or anywhere else in the world? Chances are that all of this information about us – not just passwords, but identifying particulars, residential addresses, and so on – is already in the hands of large governments around the world. And yes, ISIS may also have gotten its hands on it, though that’s a bit less likely, since they aren’t as technologically advanced. But one day, a Russian or Chinese Snowden will funnel all of this information to Wikileaks, and we’ll all know about everyone else.”

“But only within the period that information was gathered in,” he said.

“Right,” I answered. “That’s why I’m claiming that we’ve all lost our historical privacy. In other words, even if one day we enact new legislation to protect private information, a large portion of it will already be circulating around the world – valid for the period in which it was gathered. It’s nearly certain that by now, various intelligence services can piece together impressive profiles of much of the world’s population, though only from the information gathered during that time. So even if ISIS managed to get its hands on those passwords, and even if they managed to hack your profile between 2012 and 2013 and extract data about you without your knowledge, the big question is whether you were even married at the time.”

“Yup,” he said. “But I was married to my ex-wife, in a house I used to live in. Does this mean that ISIS could get to her?”

“If all of these assumptions are true, then yes,” I said. “Maybe you should call her and warn her?”

He hesitated for a moment, and shrugged.

“It’s OK,” he said. “She’ll manage.”

This article was originally written by me in Hebrew, and translated and published at vpnMentor.

History is a story that will never be told fully. So much of the information – almost all of it – is lost to the past: gone, or never recorded in the first place. We can barely make sense of the present, in which information about events and the people behind them keeps being released every day. What chance do we have, then, of fully deciphering the complex stories underlying history – the betrayals, the upheavals, the personal stories of the individuals who shaped events?

The answer has to be that we have no way of reaching any certainty about the stories we tell ourselves about our past.

But we do make some efforts.

Medical doctors and historians are trying to make sense of biographies and ancient skeletons in order to retro-diagnose ancient kings and queens. Occasionally they identify diseases and disorders that were unknown and misunderstood at the time those individuals actually lived. Mummies of ancient pharaohs are X-rayed, and we suddenly have a better understanding of a story that unfolded more than three thousand years ago, and realize that the pharaoh Ramesses II suffered from a degenerative spinal condition.

Similarly, geneticists and microbiologists use DNA evidence to settle mysteries and bring conclusive endings to some historical stories. DNA evidence from bones has allowed us to put to rest the rumors, for example, that two of Czar Nicholas II’s children survived the family’s execution in Russia in 1918.

The Russian czar Nicholas II with his family. DNA evidence now shows conclusively that Anastasia, the youngest daughter, did not survive the mass execution of the family in 1918. Source: Wikipedia

The above examples have something in common: they all require hard work by human experts. The experts need to pore over ancient histories, analyze the data and the evidence, and at the same time have good understanding of the science and medicine of the present.

What happens, though, when we let a computer perform similar analyses in an automatic fashion? How many stories about the past could we resolve then?

We are rapidly making progress towards such achievements. Recently, three authors from Waseda University in Japan published a new paper showing that they can use a computer to colorize old black-and-white photos. They rely on convolutional neural networks, which are loosely modeled on certain structures of the biological brain. Convolutional neural networks have a strong capacity for learning, and can thus be trained to perform certain cognitive tasks – like adding color to old photos. While computerized colorization has been developed and used before, the authors’ methodology seems to achieve better results than its predecessors, with 92.6 percent of the colored images looking natural to users.
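The paper’s actual model is far larger, but the basic mechanic of a convolutional network can be sketched in a few lines of plain NumPy: small learned filters slide over the image, and the final layer emits two color channels (the a/b channels of the Lab color space) for every pixel of the grayscale input. Everything below – the sizes, the random weights – is illustrative, and not the authors’ architecture.

```python
import numpy as np

def conv2d(x, kernels):
    """Naive 'same' 2-D convolution. x: (H, W, C_in), kernels: (3, 3, C_in, C_out)."""
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))  # zero-pad so the output stays H x W
    out = np.zeros((H, W, kernels.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]
            # Contract the 3x3 patch against every filter at once.
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
gray = rng.random((32, 32, 1))                # stand-in for the L (lightness) channel
w1 = 0.1 * rng.standard_normal((3, 3, 1, 8))  # first layer: 8 learned filters
w2 = 0.1 * rng.standard_normal((3, 3, 8, 2))  # last layer: 2 outputs, the a/b channels

hidden = np.maximum(conv2d(gray, w1), 0.0)    # ReLU activation
ab = np.tanh(conv2d(hidden, w2))              # predicted color channels in [-1, 1]
```

In the real system the weights are learned from a huge photo collection, which is where the “insight” comes from; here they are random, so the predicted colors are noise – but the data flow from grayscale input to color output is the same.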

This is essentially an expert system, an AI engine operating in a way loosely similar to that of the human brain. It studies millions of pictures, and then applies its insights to new pictures. Moreover, the system can now go autonomously over every picture ever taken, and add a new layer of information to it.

There are limits to the method, of course. Even the best AI engine can miss its mark in cases where the existing information is not sufficient to produce a reliable insight. In one of the paper’s examples, the AI colored a tent orange rather than blue, since it had no way of knowing the tent’s original color.

As I previously discussed in the Failures of Foresight series of posts on this blog, the Failure of Segregation makes it difficult for us to forecast the future, because we try to look at each trend and each piece of evidence on its own. Let’s try to work past that failure. Consider what happens when an AI coloring system is combined with an AI system that recognizes items like tents, associates them with certain brands, and can even analyze how many tents of each color that brand sold in any given year – or at least what the most popular tent color was at the time.

When you combine all of those AI engines together, you get a machine that can tell you a highly nuanced story about the past. Much of it is guesswork, obviously, but those are quite educated guesses.

The Artificial Exploration of the Past

In the near future, we’ll use many different kinds of AI expert systems to explore the stories of the past. Some artificial historians will discover cycles in history – princes assassinating their kingly fathers, for example – that recur with higher probability, and will analyze ancient stories accordingly. Other artificial historians will compare genealogies, while yet others will analyze ancient scriptures and identify different patterns of writing. In fact, such an algorithm has already been applied to the Bible, revealing that the Torah was written by several different authors, and distinguishing between them.
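The actual Torah study used far richer linguistic features, but the core idea behind computational stylometry can be sketched simply: different authors differ measurably in how often they use common “function words”, so text segments with similar frequency profiles are candidates for shared authorship. The word list and the sample segments below are my own invented illustration, not the study’s data.

```python
from collections import Counter

# A tiny list of English function words; real stylometry uses hundreds of
# features (function words, synonym choices, spelling variants, and so on).
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "he", "was", "it", "a"]

def style_vector(text):
    """Relative frequency of each function word - a crude stylistic fingerprint."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Segments with similar fingerprints are candidates for a shared author.
seg_a = "and the king said to the people that the land was good"
seg_b = "and the queen said to the elders that the well was deep"
seg_c = "rain keeps falling over quiet rooftops tonight"

same_style = cosine_similarity(style_vector(seg_a), style_vector(seg_b))
diff_style = cosine_similarity(style_vector(seg_a), style_vector(seg_c))
```

Clustering thousands of such fingerprints, rather than comparing two at a time, is what lets an algorithm carve a composite text into its probable authorial layers.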

The artificial exploration of the past is going to add many fascinating details to stories which we’ve long thought were settled and concluded. But it also raises an important question: when our children and children’s children look back at our present and try to derive meaning from it – what will they find out? How complete will their stories of their past and our present be?

I suspect those stories – the actual knowledge and understanding of the connections between events – will be even more complete than what we, who dwell in the present, know.

Past-Future

In the not-so-distant future, machines will be used to analyze all of the world’s data from the early 21st century. This is a massive amount of data: 2.5 quintillion bytes are created daily – enough to fill roughly a hundred million single-layer Blu-ray discs. It is astounding to realize that 90 percent of the world’s data today has been created just in the last two years. Human researchers would not be able to make much sense of it, but advanced AI algorithms – a super-intelligence, in some ways – could actually have the tools to crosslink many different pieces of information together to obtain the story of the present: to find out what movies families watched on a specific day, in which hotel the President of the United States stayed during a recent visit to France and what snacks he ordered from room service, and countless other minutiae.
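The “crosslinking” itself is less mysterious than it sounds: at bottom it is record linkage, joining entries from unrelated databases on shared keys such as a person, a place, and a date. The records below are entirely fictitious, purely to show the mechanism.

```python
from datetime import date, timedelta

# Entirely fictitious records from three unrelated data sources.
hotel_stays = [
    {"person": "president_x", "date": "2016-05-12", "hotel": "hotel_y"},
]
room_service = [
    {"hotel": "hotel_y", "date": "2016-05-12", "order": "oysters"},
]
schedules = [
    {"person": "president_x", "date": "2016-05-13", "event": "summit speech"},
]

def day_after(iso_day):
    return (date.fromisoformat(iso_day) + timedelta(days=1)).isoformat()

def crosslink(stays, orders, events):
    """Attach same-hotel/same-date orders to each stay, plus whatever the
    guest did the following day - a narrative no single source contains."""
    stories = []
    for stay in stays:
        story = dict(stay)
        story["orders"] = [o["order"] for o in orders
                           if o["hotel"] == stay["hotel"] and o["date"] == stay["date"]]
        story["next_day_events"] = [e["event"] for e in events
                                    if e["person"] == stay["person"]
                                    and e["date"] == day_after(stay["date"])]
        stories.append(story)
    return stories

stories = crosslink(hotel_stays, room_service, schedules)
```

Scale this trivial join up to quintillions of records and fuzzy keys, and you get the kind of synthesized “story of the present” described above.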

Are those details useless? They may seem so to our limited human comprehension, but they will form the basis for AI engines to better understand the past and produce better stories of it. When the people of the future try to understand how World War 3 broke out, their AI historians may actually conclude that it all began with a presidential case of indigestion at a certain French hotel, which annoyed the American president so much that it prevented him from making the most rational choices over the next couple of days. A hypothetical scenario, obviously.

Futuronymity – Maintaining Our Privacy from the Future

We are gaining improved tools for exploring the past, and for deriving insights and new knowledge even where information is missing. These tools will be improved further in the future, and will be used to analyze our current times – the early 21st century – as well.

What does it mean for you and me?

Most importantly, we should realize that almost every action you take in the virtual world will be scrutinized by your children’s children, probably after your death. Your actions in the virtual world are recorded all the time, and if the documentation survives into the future, then the next generations are going to know all about your browsing habits in the middle of the night. Yes, even though you turned incognito mode on.

This means we need a new concept of privacy: futuronymity (from Future and Anonymity), which will obscure our lives from the eyes of future generations. Politicians have always been concerned about this kind of privacy, since they know their critical decisions will be pored over and analyzed by historians. In the future, common people will find themselves under similar scrutiny by their progeny. If our current hobby is going to psychologists to understand just how our parents ruined us, then our grandchildren’s hobby will be going to the computer to find out the same.

Do we even have the right to futuronymity? Should we hide from next generations the truth about how their future was formed, and who was responsible?

That question is no longer in the hands of individuals. In the past, private individuals could simply have incinerated their hard drives along with all the information on them. Today, most of the information is in the hands of corporations and governments. If we want them to dispose of it – if we want any say in which parts they’ll preserve and which will be deleted – we should speak up now.

Pepper is one of the most sophisticated household robots in existence today. Its body shape is reminiscent of a prepubescent child, standing only 120 centimeters tall, with a tablet on its chest. It constantly analyzes its owner’s emotions from their speech, facial expressions and gestures, and responds accordingly. It also learns – for example, by analyzing which modes of behavior make its owner feel better. It can even use its hands to hug people.

No wonder that when the first 1,000 Pepper units were offered for sale in Japan for $1,600, they were all sold in one minute. Pepper is now the most famous household robot in the world.

Pepper is probably also the only robot you’re not allowed to have sex with.

According to the contract, written in Japanese legalese and translated into English, users are not allowed to perform –

“(4) Acts for the purpose of sexual or indecent behavior, or for the purpose of associating with unacquainted persons of the opposite sex.”

What does this development mean? Here is the summary, in just three short points.

First Point: Is Pepper Being Used for Surveillance?

First, one has to wonder just how SoftBank, the robot’s distributor in Japan, is going to keep tabs on whether the robot has been used sexually or not. Since Pepper’s price includes a $200 monthly “data and insurance fee”, it’s a safe bet that every Pepper unit is transmitting some of its data back to SoftBank’s servers. That’s not necessarily a bad thing: as I’ve written in Four Robot Myths it’s Time We Let Go of, robots can no longer be seen as individual units. Instead, they are a form of hive brain, relying on each other’s experiences and insights to guide their behavior. In order to do that, they must be connected to the cloud.

This is obviously a form of surveillance. Pepper is sophisticated enough to analyze its owner’s emotions and responses, and can thus deliver a plethora of information to SoftBank, advertisers and even government authorities. Owners could probably activate a privacy mode (if there isn’t one now, it will almost certainly be added in the near future by popular demand), but the rest of the time their behavior will be under close scrutiny. Not necessarily because SoftBank is actually interested in what you’re doing in your home, but simply because it wants to improve the robots.

And, well, also because it may not want you to have sex with them.

This is where things get bizarre. It is almost certainly the case that if SoftBank wished to, it could set up a sex alarm that triggers autonomously whenever Pepper is repeatedly exposed to sexual acts. There doesn’t even have to be a human in the loop – just train the AI engine behind Pepper on a large enough number of porn and erotic movies, and pretty soon the robot will be able to tell by itself just what the owner is dangling in front of its cameras.

The rest of the tale is obvious: the robot will complain to SoftBank via the cloud, but will do so without sharing any pictures or videos it has taken. In other words, it won’t share raw information, only its insights and understanding of what’s been going on in that house. SoftBank might issue a soft warning to the owners, asking them to act more coyly around Pepper. If such chastity alerts keep coming, though, SoftBank might have to retrieve Pepper from that house. And almost certainly, it will not allow other Pepper units to learn from one that has been exposed to sexual acts.

And here’s the rub: if SoftBank wants to keep on developing its robots, they must learn from each other, and thus they must be connected to the cloud. But as long as SoftBank doesn’t want them to learn how to engage in sexual acts, it will have to set some kind of a filter – meaning that the robots will have to learn to recognize sexual acts, and refuse to talk about them with other robots. And silence, in the case of an always-operational robot, is as good as any testimony.

So yes, SoftBank will know when you’re having sex with Pepper.
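SoftBank has published nothing about how Pepper’s telemetry actually works, so treat the following as a purely hypothetical sketch of the “share insights, not data” scheme described above: raw footage stays in the house, only abstract event labels go to the cloud, and labels on a do-not-learn list are silently dropped. All names here are invented.

```python
# Hypothetical: each (label, raw_frame) pair comes from the robot's local
# classifier; the raw frames never leave the house.
DO_NOT_SHARE = {"sexual_act"}

def upload_insights(observations):
    """Return only the event labels fit for the cloud. Filtered labels are
    silently dropped - and that silence is itself a detectable signal."""
    return [label for label, _frame in observations if label not in DO_NOT_SHARE]

day_log = [
    ("hug", b"<frame bytes>"),
    ("sexual_act", b"<frame bytes>"),
    ("conversation", b"<frame bytes>"),
]
uploaded = upload_insights(day_log)
```

The cloud receives only the hug and the conversation – and can also notice that this unit reports fewer events than its uptime would predict, which is exactly the “silence is as good as any testimony” point above.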

I’ve written extensively in the past about how the meaning of private property is changing as everything becomes connected to the cloud. Tesla sells you a car, but still controls some parts of it. Google sells you devices for controlling your smart house – which it can (and does) shut down from a distance. And yes, SoftBank sells you a robot which becomes your private property – as long as you don’t do anything with it that SoftBank doesn’t want you to.

And that was only the first point.

Second Point: Is Sex the Answer, or the Question?

There’s been some public outrage recently about sex with robots, including an actual campaign against using robots as sex objects. I sent the leaders of the campaign, Kathleen Richardson and Erik Billing, several questions to understand the nature of their issues with the robots. They have not answered my questions, but according to their campaign website it seems that they equate ‘robot prostitution’ with human prostitution.

“But robots don’t feel anything,” you might say. “They don’t have feelings, or dignity of their own. Do they?”

Let’s set things straight: sexual abuse is among the most horrible things any human can do to another. The abuser causes both temporary and permanent injury to the victim’s body and mind. That’s why we call it abuse. But if there are no laws to protect a robot’s body, and no mind to speak of, why should we care whether someone uses a robot in a sexual way?

Richardson and Billing basically claim that it doesn’t matter whether the robots are actually experiencing the joys of coitus or suffering the ignominy of prostitution. The mere fact that people will use robots in the shape of children or women for sexual release will serve to perpetuate our current societal model, in which women and children are sexually abused.

Let’s approach the issue from another point of view, though. Could sex with robots actually prevent some cases of sexual abuse?

Assuming that robots can provide a high-quality sexual experience to human beings, it seems reasonable that some pent-up sexual tensions could be relieved using sex robots. There are arguments that porn might actually deter sexual violence, and while the debate on that point is nowhere near conclusion, it’s interesting to ask: if robots can actually relieve human sexual tensions, and thus deter sexual violence against other human beings – should we allow that to happen, even though it objectifies robots, and by association, women and children as well?

I would wait for more data to come in on this subject before actually advocating sex with robots, but in the meantime we should probably refrain from passing judgment on people who have sex with them. Who knows? It might actually serve a useful purpose even in the near future. Which brings me to the third point –

A week ago I covered in this blog the possibility of using aerial drones for terrorist attacks. The following post dealt with the Failure of Myth and covered Causal Layered Analysis (CLA) – a futures studies methodology meant to counter that failure and allow us to consider alternative futures radically different from the ones we tend to reach intuitively.

In this blog post I’ll combine insights from both recent posts together, and suggest ways to deal with the terrorism threat posed by aerial drones, in four different layers suggested by CLA: the Litany, the Systemic view, the Worldview, and the Myth layer.

To understand why we have to use such a wide-angle lens for the issue, I would compare the proliferation of aerial drones to another period in history: the transition between the Bronze Age and the Iron Age.

From Bronze to Iron

Sometime around 1300 BC, iron smelting was discovered by our ancient forefathers, presumably in the Anatolia region. The discovery rapidly diffused to many other regions and civilizations, and changed the world forever.

If you ask people why iron weapons are better than bronze ones, they’re likely to answer that iron is simply stronger, lighter and more durable than bronze. The truth, however, is that iron weapons are not much more effective than bronze ones. The real importance of iron smelting, according to “A Short History of War” by Richard A. Gabriel and Karen S. Metz, is this:

“Iron’s importance rested in the fact that unlike bronze, which required the use of relatively rare tin to manufacture, iron was commonly and widely available almost everywhere… No longer was it only the major powers that could afford enough weapons to equip a large military force. Now almost any state could do it. The result was a dramatic increase in the frequency of war.”

It is easy to imagine political and national leaders using only the first and second layers of CLA – the Litany and the Systemic view – during the transition from the Bronze Age to the Iron Age. “We should bring these new iron weapons to all our soldiers,” they probably told themselves, “and equip the soldiers with stronger shields that can deflect iron weapons.” Even as they enacted these changes in their armies, the worldview itself shifted, and warfare was vastly transformed by the sheer number of civilians who could suddenly wield an iron weapon. Generals who thought that preparing for the change merely meant equipping their soldiers with iron weapons found themselves on the battlefield facing armies much larger than their own, thanks to new conscription models their opponents had developed.

Such changes in warfare and in the existing worldview could have been realized in advance by utilizing the third and fourth layers of CLA – the Worldview and the Myth.

Aerial drones are similar to Iron Age weapons in that they are proliferating rapidly. They can be built or purchased at ridiculously low prices, by practically everyone. In the past, only the largest and most technologically sophisticated governments could afford to employ aerial drones. Nowadays, every child has one. In other words, the world itself is upending everything we thought we knew about the possession and use of unmanned aerial vehicles. Such a dramatic change – one our descendants may yet call the Aerial Age when they look back in history – forces us to rethink everything we knew about the world. We must, in short, analyze the issue with a wide-angle view, with an emphasis on the third and fourth layers of CLA.

How, then, do we deal with the threat aerial drones pose to national security?

First Layer: the Litany

The intuitive way to deal with the threat posed by aerial drones is simply to reinforce the measures we’ve had in place before. Under the thinking constraints of the first layer, we should basically strive to strengthen police forces and provide larger budgets for anti-terrorist operations. In short, we should do just as we did in the past, but more and better.

It’s easy to see why public systems love the Litany layer: these measures build reputations and generate a general feeling that “we’re doing something about the problem”. What’s more, they justify extra budget (to be obtained from Congress) and make the organization larger along the way. What’s not to like?

Second Layer: the Systemic View

Under the systemic view we consider the police forces and the tools they have for dealing with the new problem. It immediately becomes obvious that such tools are sorely lacking, so we need to improve the system and support the development of new techniques and methodologies to deal with the new threat. We might support the development of anti-drone weapons, for example, or open an entirely new police department dedicated to dealing with drones. Police officers will be trained to deal with aerial drones, so that nothing is left to chance. The judicial and regulatory systems lend themselves to the struggle at this layer by issuing highly regulated licenses for operating aerial drones.

An anti-drone gun. Originally from BattelleInnovations and downloaded from TechTimes

Again, we could stop the discussion here and still have a highly popular set of solutions. As we delve deeper into the Worldview layer, however, the opposition starts building up.

Third Layer: the Worldview

When we consider the situation at the worldview layer, we see that the proliferation of aerial drones is simply a by-product of several technological trends: the miniaturization of electronics, artificial intelligence sophisticated enough (at least by the standards of 20-30 years ago) to control the rotor blades, and even personalized manufacturing with 3D printers, so that anyone can build his or her own drone in the garage. All of the above lead to the Aerial Age – in which individuals can explore the sky as they like.

Exploration of the sky is now in the hands of individuals. Image originally from DailyMail India.

Looking at the world from this point of view, we immediately see that the vast expected proliferation of aerial drones over the coming decade will force us to reconsider our previous worldviews. Should we really focus on local or systemic solutions, rather than preparing ourselves for this new Aerial Age?

We can look even further than that, of course. In a very real way, aerial drones are but a symptom of a more general change in the world. The Aerial Age is but one aspect of the Age of Freedom, or the Age of the Individual. Consider that the power of design and manufacturing is being taken from nations and granted to individuals via 3D printers, powerful personal computers, and the internet. As a result of these inventions and others, individuals today hold power that once belonged only to the greatest nations on Earth. The established worldview, in which nations are the sole holders of power, is changing.

When one looks at the issue this way, it is clear that such a dramatic change can only be countered or mitigated by dramatic measures. Nations that want to retain their power and prevent terrorist attacks will be forced to break rules that were written long ago, back in the Age of Nations. It is entirely possible that governments and rulers will have to sacrifice their citizens’ privacy and turn to monitoring them constantly, much as the NSA did – and is still doing to some degree. When an individual dissident has the potential to harm thousands and even millions (via synthetic biology, for example), nations can ill afford to take chances.

What are the myths that such endeavors will disrupt, and what new myths will they be built upon?

Fourth Layer: the Myth

I’ve already identified a few myths that will be disrupted by the new worldview. First and foremost, we will let go of the idea that only a select few can explore the sky. The new myth is that of Shared Sky.

The second myth to be disrupted is that nations hold all the technological power, while terrorists and dissidents are reduced to using crude bombs at best, or pitchforks at worst. This myth is no longer true, and it will be replaced by a myth of Proliferation of Technology.

The third myth to be dismissed is that governments can protect their citizens efficiently with the tools they have in the present. When we have such widespread threats in the Age of Freedom, governments will experience a crisis in governance – unless they turn to monitoring their citizens so closely that any pretense of privacy is lost. And so, it is entirely possible that in many countries we will see the emergence of a new myth: Safety in Exchange for Privacy.

Conclusion

In this post I’ve analyzed the issue of aerial drones being used for terrorist attacks by utilizing the Causal Layered Analysis methodology. Looking at the results, it’s easy to see why many decision makers are reluctant to solve problems at the third and fourth layers – Worldview and Myth. The solutions found in the lower layers – the Litany and the Systemic view – are so much easier to understand and to explain to the public. Regardless, if you want to actually understand the possibilities the future holds on any subject, you must look beyond the first two layers in the long term, and focus instead on the big picture.