The Chaos Computer Club (CCC) is Europe’s largest association of hackers. For more than thirty years it has provided information about technical and societal issues such as surveillance, privacy, freedom of information, hacktivism, data security and many other topics around technology and hacking. Members of the CCC are often consulted by German politicians on digital topics.

Every year between Christmas and New Year, they gather at a massive conference to discuss these issues and spend a few days celebrating in a hacker utopia of their own making.

This year, activists from the CCC seem fed up with politicians, who often ignore the advice of white-hat hackers and would rather be corporate sellouts than care about internet security, data privacy and computer education.

Here is an (incomplete) summary of some of the most interesting talks from the congress:

The TAN-procedure used by many mobile banking apps is not secure

Although two-factor authentication is implemented in online banking via the TAN procedure, this method has been eroded over time. It is now common to have the online banking app and the corresponding TAN app on the same smartphone. Even worse, TAN requests are increasingly implemented in the online banking app itself. Security in such apps is provided by something called “app hardening”, which is often implemented by third parties.

This poses severe security risks: the talk shows how such a system can be hacked by compromising the user’s smartphone (e.g. when the user downloads a compromised app).

Solution:

App hardening should be an additional safety feature; it cannot be a replacement for true two-factor authentication.

Banks should start taking this seriously. Right now they claim there are no known instances of hacked accounts and therefore see no need to build a different system.

The infrastructure for electric-vehicle charging stations is a security nightmare

Bad news if you own an electric car.

Fundamental security principles are not implemented in many charging stations:

They use the “Mifare” keycard system, which has been known to have security flaws for over a decade. Card numbers can simply be copied and used to create a clone of the card. “It’s like paying with a photocopy of my bank card at the supermarket and the cashier accepting it,” says security researcher Matthias Dalheimer.

The communication between the charging station and the billing back end is also poorly secured. The card number is transmitted to the provider without any encryption at all. This way, card numbers can be stolen and used to create fake cards.

The charging stations themselves have USB ports that can be used to feed code into the system. This way, card numbers of other users can be harvested.
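The core problem can be sketched with a hypothetical billing check (the card numbers, names and logic here are illustrative assumptions, not the actual protocol): when a static, unencrypted card number is the only credential, a copied number is indistinguishable from the real card.

```python
# Hypothetical sketch of the flawed scheme described above: the plaintext
# card number is the only thing checked, so any copy of it authenticates.
VALID_CARDS = {"04A2B1C9"}  # stand-in for the billing back end's card list

def authorize_charging(card_number: str) -> bool:
    """Accept a charging session if the card number is known -- no
    challenge-response, no signature, nothing tied to the physical card."""
    return card_number in VALID_CARDS

sniffed = "04A2B1C9"  # read off the unencrypted link, or copied from the card
print(authorize_charging(sniffed))  # True: the clone bills the real owner
```

A challenge-response scheme, in which the card proves possession of a secret key without ever transmitting it, would defeat this kind of copying and replay.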

What does that mean?

Hackers can charge their car while having other users pay for it.

Hackers can manipulate charging stations to pay nothing at all.

Owners of charging stations can manipulate the system to bill you for more than you actually charged.

People responsible for the system ignore the matter and don’t want to fix the security issues.

Software for counting and analyzing votes was used in the 2017 German federal election despite several open security issues

In many parts of Germany, a software package called “PC-Wahl” is used to count votes.

The CCC identified massive security flaws in the software and its servers. The encryption used was flawed, electronic signatures were lacking, and the webspace used for updates wasn’t properly secured.

The CCC even created an open-source package addressing some of the security issues, but it was not used by the developing company.

Even though we hear that Twitter is full of social bots manipulating the political discourse, the reality is that social bots are really hard to identify systematically

Data journalist Michael Kreil performed a Twitter network analysis to answer the question: “How can we reliably identify social bots?”

A common method is to define a cutoff like 30, 50 or 100 daily tweets, and then simply state the rule: “Everyone who tweets more than that is a bot.”

Researchers from the University of Oxford proposed that a social bot can be identified by its posting more than 50 times with a certain political hashtag during an event.
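A cutoff heuristic like the ones above takes only a few lines to implement, which also hints at why it fails (the threshold and account data are made up for illustration):

```python
def naive_bot_rule(daily_tweets: int, threshold: int = 50) -> bool:
    """Naive cutoff heuristic: flag any account tweeting more than
    `threshold` times per day as a bot. Rules this simple inevitably
    sweep up highly active humans as well."""
    return daily_tweets > threshold

# Hypothetical accounts with their daily tweet counts.
accounts = {"news_junkie": 120, "casual_user": 4, "actual_bot": 800}
flagged = [name for name, rate in accounts.items() if naive_bot_rule(rate)]
print(flagged)  # → ['news_junkie', 'actual_bot']
```

The very active human is flagged right alongside the bot, which is exactly the weakness the talk demonstrates.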

These rules were investigated and shown to be ineffective at identifying bots.

A reliable method for automatically identifying bots is currently non-existent.

The impact of social bots, however, is probably overstated. Kreil argues that what we call “fake news” is nothing more than memes that are shared excessively and have a huge reach. But they are probably not a threat to the political discourse.

China is about to implement a mandatory social credit system in 2020 that aims to keep Chinese citizens obedient

Katinka Kühnreich told us about China’s newest attempt to create obedient citizens: A gamified online social credit system (SCS).

Social rating systems already exist in China, but in 2020 it becomes mandatory for every Chinese citizen to use one. Right now, many different governmental SCSs exist in different regions. Additionally, eight companies are allowed to run private SCSs (e.g. Alibaba, Tencent).

Here is how it works:

The system uses big data and machine learning to create a score for each citizen reflecting how “good” he or she is.

Alibaba’s Social Credit System, for example, takes various online and offline data as input:

activity from your social media profile

data from payments and products bought

data from authorities such as courts and the debtors’ registry

data from its dating app Baihe

and more…

So if you post an independent news article about the stock market collapse, your score goes down. If you share an article from the official state news outlet about how well the economy is doing, your score goes up.

Your score has real-world consequences: higher scores make it easier to get the paperwork needed for traveling or to get a loan. Penalties for a low score are being discussed by Chinese authorities, like lowering your internet speed or restricting the jobs you are able to hold.

Not only does your own behavior affect your score; so do the scores of friends in your social media network. Every score is public and can be seen by other citizens. The system tells you if a friend with a low score is dragging your own score down.

We don’t know how the mandatory system will work, as the Chinese government gives no information to the public. But it is expected that every useful data input will probably be used.

Why is China doing this?

By creating a system that “rewards” positive (i.e. government-friendly) behavior, deviant citizens end up isolated by their own friends, without the state having to use force and oppression, which risks sparking revolts.

China is a “transformation society” and has massive social problems.

Social control has a long history in China.

Since China became a digitized society, it is only natural that control became digitized as well.

Can’t you just avoid the system by using fake names or other services?

You cannot use most services, or even post online comments, without real-name registration.

Digital payment methods are overtaking cash in China.

It’s almost impossible to evade the system. Starting in 2020, if you are a Chinese citizen, you have to play this game that defines your life (housing, job, school) and the lives of everyone you know.

Internet-of-Things devices are a security nightmare

There were around 8.4 billion Internet of Things (IoT) devices at the end of 2017; unfortunately, many of them have security flaws. Barbara Wimmer gave examples of how the internet of things went wrong:

There are many instances of botnets of hacked IoT devices sending spam mails or performing DDoS attacks. For example, a university was attacked by a botnet consisting of its own vending machines, smart light bulbs and 5,000 other IoT devices.

A smart webcam that a woman had installed to keep an eye on her dog one day started following her instead, and a voice said “hola señorita” over the integrated microphone.

Smart toys with cameras and microphones often have unsecured Bluetooth connections, which means that anyone with a smartphone who is close enough can connect to the toy, listen to the child and even speak to them through the toy.

The smart doll “Cayla” was even banned in Germany: it is classified as a “prohibited broadcasting station”, and parents who do not destroy it can be fined.

Even sex toys get hacked: a vibrator-controlling app recorded sounds made during sex and stored them on people’s phones without their knowledge. More security research on sex toys can be found at “The Internet of Dongs”.

Digital assistants like Google Home and Amazon Echo collect a lot of data about you, which is sold in various ways to monetize that information. What happens to the data is completely opaque to the user. Additionally, both devices have already been hacked and have not proven secure.

As more devices come online the problem will only get worse.

For now, it is best to stay away from IoT devices altogether, unless you have thoroughly checked the security features of the device you want to buy. Otherwise you have to assume that it is not safe.

Solutions:

Currently, manufacturers do not have to provide essential information about the security of their devices, such as how long a device will receive security updates. Making this information mandatory would be a huge step forward.

A security star rating system (similar to energy labeling) for IoT devices would be beneficial for customers to quickly identify secure products.

Vendors should be forced to close security holes instead of ignoring them.

Vendors should at least provide an email address where we can easily report security flaws.

Electronic devices should always have a mandatory offline mode.

Something equivalent to the airbag and seat belt for the digital age would help less tech-savvy users.

Product liability and clear update policies are needed.

In short: We need more regulation.

The General Data Protection Regulation, coming into force in May 2018, will already be very helpful.

Xiaomi’s vacuum cleaning robot can be used to spy on your home

The vacuum cleaning robot “Mi Robot Vacuum” from Xiaomi uses a camera and a laser distance sensor to create a detailed map of your home, which is saved on Xiaomi’s servers. All robots ship with the same initial password, “Rockrobo”, which hackers can exploit to use the robot to spy on your apartment.

Jean Rintoul presented a vision of a world where medical imaging is cheap and easily accessible. This would enable preventive scans, as opposed to scans only when something has already gone wrong.

Electrical Impedance Tomography is a cheap new technique that sounds promising.

Advantages: Cheap and good time resolution

Disadvantages: Low spatial resolution

The ‘Open Electrical Impedance Tomography Project’ is an open-source project that aims to push this technology further, enabling better spatial resolution and making it available to the developing world, which has no access to expensive imaging methods like MRI or CT scans.

The blockchain is a revolutionary technology, but it has problems to solve before it becomes really useful

Zooko, the founder of Zcash, talked about cryptocurrencies.

Bitcoin is basically a world ledger.

Ethereum is basically a world computer.

Both have scaling limitations:

Bitcoin can only do around 3 transactions per second

Ethereum can only run a limited number of programs per second
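A quick back-of-the-envelope calculation shows what roughly 3 transactions per second means in practice:

```python
tx_per_second = 3                  # approximate Bitcoin throughput cited in the talk
seconds_per_day = 24 * 60 * 60
daily_capacity = tx_per_second * seconds_per_day
print(daily_capacity)              # 259200 transactions per day, worldwide
```

Roughly a quarter-million transactions per day for the entire world is nowhere near enough for everyday retail payments.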

Lightning Networks are an attempt to overcome Bitcoin’s scaling limitations. Ethereum attempts to solve its scaling issues through something called ‘sharding’, which splits the network so that programs can be processed in parallel.

The main current uses of cryptocurrencies:

Initial coin offerings (ICOs): a way for startups to raise funds by creating a coin on a blockchain and giving it away to investors. Around $5 bn in 2017 (some real innovators, many scammers).

Retail (e.g. buying games on Steam): about $1 bn/year. This is now dead: due to scaling issues, transaction fees of $10–40 make Bitcoin impractical for retail.

CryptoKitties: ca. $20 M so far. This is a game where you send each other digital kittens. Every kitten is unique, and you can breed two kittens to create a new unique one. All kittens, and their family trees, are guaranteed unique by the world computer Ethereum. It’s ridiculous to the point that the founder of Ethereum, Vitalik Buterin, threatened to leave Ethereum if people won’t use it for useful applications.

“Dark markets” (drugs): around $100 M/year

Current problems that need to be solved:

scale,

safety (total amount of coins stolen so far: $10 billion in Bitcoin, $1 billion in Ether)

few applications (there will probably be more games in the near future)

Other interesting points of the talk:

Ethereum has a lot of coders trying things out on the network. This creates a network effect, which is a very strong predictor of the success of a software platform or service.

Ambitious future tech is sprouting from the blockchain, like prediction markets or replacing Uber with a blockchain service.

The top three countries in crypto trading volume are Japan, the US and South Korea, in that order.

In the US, nobody knows which agency has the authority to regulate cryptocurrencies, and there are dozens that are potentially responsible.

A questionable database used by banks blacklists innocent people, denying them financial services

Faced with new responsibilities to prevent terrorism and money laundering, banks have built a huge surveillance infrastructure sweeping up millions of innocent people. An accidental leak granted a rare opportunity for journalists to examine a database used to make decisions affecting people and organizations all over the world.

The ‘World-Check’ database is the gold standard for banks to check whether someone is trustworthy enough to pay back a loan, or is suspected of money laundering. The content of the list is secret.

Questionable sources like ‘Breitbart’ and ‘Stormfront’ are used in the analysis that puts a person in the database, which can lead to that person being denied credit or having their bank account closed.

As a result, innocent actors have suffered: a mosque had its bank account shut without explanation, activists were blacklisted for a peaceful protest, and ordinary citizens had their political activities secretly catalogued.

Reuters, which provides the database, says responsibility for critical decisions (like closing a bank account) lies with the banks, while the banks generally just trust the database without further research and therefore put the responsibility on Reuters.

We Live in the Age of Burnout. Here is What Helps.

Published Mon, 24 Apr 2017

Society has made a profound transformation in the 21st century. Along with globalization and the digital revolution, work has changed too. In the past, we had a society that ran on discipline. People in factories were openly exploited and kept down with oppressive techniques. Now people have a lot of individual freedom. That seems good, as we can use this freedom for self-fulfillment and for working towards the life that we want for ourselves. Yet the reality looks pretty bleak: over 60% of people in Germany feel stressed, and the burnout rate has increased tenfold over the last ten years (from 4.6 to 55.6 sick days per 1,000 workers due to burnout).

Why are people so stressed, despite all the freedom and possibilities we have compared to past generations?

The ugly truth we have to realize is that great individual freedom leads to self-exploitation and depression. With a lot of freedom, we feel that we are responsible for our own fate. At first sight that is good, because we have the freedom to live our lives to the highest fulfillment. On the downside, the ideal self that we imagine is in most cases an unobtainable goal, which puts a lot of pressure on us and often leads to self-exploitation.

Today, we feel like we have all the opportunities, so we alone are responsible for our success – or our failure. We feel great pressure to perform, which leads to perfectionism. This perfectionism isn’t confined to the realm of work but seeps into all aspects of our lives. We want to be healthy, fit and well-performing, to have a good career and a fulfilled private life. The modern consumer-capitalist society is based on an endless perfectionism paradigm which, on the surface, tries to make us better and happier but, on a deeper level, is very dangerous for the human mind. The apparent freedom we have today leads to compulsion. We only have the illusion of freedom; in reality, we are slaves running on a self-optimization treadmill.

At work, this means that not only our labor is exploited. Today it is expected that the whole human being is invested in the job. Your emotions are a resource that you should use to optimize communication at work. Managers use ‘emotional management methods’ to motivate and ‘inspire’ others to do their jobs more effectively. Empathy is degraded to a tool that serves to maximize efficiency.

The digital transformation has increased information overload. Our ways of communicating have become much faster and more diverse. Email, smartphones and social media have been adopted by billions of people in the blink of a decade. Employees have had to adapt to the new demand to be ‘always on’ and reachable at all times.

We have to work on all levels to counteract the processes that lead to self-exploitation, in order to use our freedom to become healthy and happy – and not end up as self-optimization junkies.

So, what can we do?

Individual Measures against Burnout

Individual measures against burnout are mainly based on common sense. Useful self-management includes defining clear career and private goals and sorting out how to achieve them without overloading yourself. Getting rid of perfectionism is especially important, as is knowing your own strengths and weaknesses.

The phenomenon of burnout and work stress has been discovered as a new, promising market. By now there are many apps designed to help you take care of yourself and build up resilience against stress at work.

You could argue that this is just another way the system tries to exploit the common man, who should buy products to self-optimize. On the other hand, many apps are designed to specifically fight self-exploitation and teach you to fight perfectionism.

Consumer trends show a shift from ‘optimizing fitness & diet’ towards a more holistic ‘balancing of the mind-body connection’. In public perception, too, there seems to be a shift from seeing ‘stress as a status symbol’ towards seeing ‘stress as toxic’. This is a good trend.

However, studies have shown that individual-focused measures against burnout have little sustained positive effect if they are not combined with measures at the workplace.

Organizational Methods against Burnout

Organizational methods have only been implemented systematically for a couple of years, which is why long-term studies assessing the effectiveness of such measures are quite rare. Here is what we know so far.

One approach called ‘High-Quality Leader-Member Exchange’ has proven especially successful. Here, mentors are introduced who are the employees’ superiors but have frequent exchanges with them, to surface the perspectives and functions of the various jobs in the company. This intensive mentoring resulted in workers who were able to develop their competences and get comprehensive feedback. This way, they were able to resolve role conflicts and effectively prevent burnout.

The authors highlight as especially effective a method that focuses on communication. Over the course of four months, a team compiled various suggestions for changes within the company. The biggest problems this “task force” detected were high psychological demands on workers, little leeway in decision making, little social support and little acknowledgement of effort. By implementing measures aimed at solving exactly these problems, a reduction in burnout was still visible three years after the method was introduced.

Another domain that is becoming more and more important is media competency: it is not only important to know how to use a smartphone, but also when it makes sense to turn it off. Studies have shown that after a 3-minute interruption, workers need around 20 minutes to refocus completely on a task. Multitasking is, from a neurological perspective, a misconception. Users who are trained to operate a lot of digital media at once are better at absorbing large amounts of information, but worse at separating important from unimportant information.

Here is a summary of the best organizational measures against burnout:

Trust your employees and don’t control them too tightly.

Provide many options for employees to develop and sharpen their skills.

Define clear goals for each job so that successful performance can be measured.

Teach media competency

Ensure intensive communication between workers and management, to effectively surface what resources employees need to do their jobs well.

Can we trust our corporations to help us or do we have to change the system first in which they operate?

Societal Measures against Burnout

Since too much freedom seems to get us into this mess, how do we fight that? Do we have to get rid of freedom? Or do we have to redefine it?

Karl Marx defined freedom as a ‘successful relationship with each other’. He was well aware that individual freedom is a deception perpetrated by capital. Capital uses its freedom to proliferate, while the individuals in a society have nothing left but the freedom to compete with each other. In a neoliberal society, freedom is perverted into an organ of capital, which uses it to reproduce itself.

This is the great seductive power of the neoliberal ideology: it makes us exploit ourselves. The apparent freedom we have makes it impossible to protest against this ideology, as we are perpetrator and victim at the same time. Most people aren’t even aware of their oppression.

A friendly power is stronger than an oppressive one. Instead of protesting against an outside enemy and criticizing the society we live in, we tend to be self-critical. Violence is turned towards the self, which can go as far as suicide. Within this reality, in which the oppressors are invisible, a revolution becomes nearly impossible. There is no clear enemy to fight in the neoliberal system. The force that keeps the system in place is individual freedom, which is unassailable because it is seen as an absolute good. The bad outcomes result only from the way our human mind works.

We have to acknowledge that a system that gives us a lot of individual freedom, but at the same time pushes us to use this freedom to exploit ourselves, is not our friend.

The Quick and Dirty Guide of How the Brain Works

Published Sun, 12 Feb 2017

“One of the difficulties in understanding the brain is that it is like nothing so much as a lump of porridge.” – Richard L. Gregory

The brain is annoyingly hard to understand. Maybe this guide helps a little.

If we look at the brain, we see that it’s a network of calculators that connect together. Those calculators are brain cells called neurons. Each of those cells takes an input, does some calculation, and gives an output.
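The “calculator” picture can be made concrete with a toy model (the weights and threshold are arbitrary illustrative values, vastly simpler than a real neuron):

```python
def neuron(inputs, weights, threshold=1.0):
    """Toy neuron: weight each input, sum the results, and 'fire'
    (output 1) only if the weighted sum crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([0.5, 0.9], [1.0, 1.0]))  # 0.5 + 0.9 = 1.4 >= 1.0, so it fires: 1
print(neuron([0.2, 0.3], [1.0, 1.0]))  # 0.5 < 1.0, so it stays silent: 0
```

Real neurons are far richer than this threshold unit, but the input-compute-output shape is the same.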

Each calculator consists of millions of different molecular machines that work in harmony, to ensure that each neuron is able to do its calculation function.

How do these machines know what to do?

They don’t! The machines work simply by following the laws of physics and stochastic probability. For example, if such a molecular robot is positively charged, it wants to move to an area that is negatively charged. Or if some area is too crowded, it wants to go where it’s less crowded.

Some of the best visualizations of the inner workings of a cell are created by XVIVO Scientific Animation, which has produced several educational videos for Harvard University.

The calculation function of a neuron is not fixed. Neurons are able to adapt and change their calculation functions based on past inputs. They do this by changing their internal structure, or the structure of their connections to other neurons, based on the patterns of how they get activated – much like a tree that grows in the way the wind blows.

The rule of how those connections change is:

“What fires together wires together”

It means that, if two neurons activate at roughly the same time, their connection gets stronger. If they no longer fire synchronously, their connection gets weaker. These changes in the connections between neurons are the physical basis of our memories.
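This rule, known as Hebbian learning, can be sketched in a few lines (the learning rate and decay are illustrative values, not biological constants):

```python
# Minimal sketch of Hebb's rule: "what fires together wires together".
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Strengthen the connection when pre- and postsynaptic neurons
    are active together; let the connection slowly decay otherwise."""
    return w + lr * pre * post - decay * w

w = 0.5
for _ in range(10):                      # the two neurons keep firing together...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w > 0.5)                           # True: the connection has strengthened

for _ in range(10):                      # ...then the postsynaptic neuron falls silent
    w = hebbian_update(w, pre=1.0, post=0.0)
# with post = 0 the Hebbian term vanishes, so the weight only decays
```

Real synaptic plasticity is far more intricate, but this captures the direction of the rule: correlated activity strengthens a connection, and unused connections fade.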

Neurons never sit still. They are constantly stretching their dendrites, as if searching for other neuron buddies to hold hands with.

Billions of neurons connect to form feedback loops. Various feedback loops in your brain regulate things like your body temperature, heart rate and breathing patterns. Basic functions like these are regulated by deep structures of the brain, like the brain stem, which connects the spinal cord to the rest of your brain. There are also feedback loops doing more complex work that involves cognition, like when you are driving a car.

On top of the brain stem sits the limbic system – often referred to as the reptilian brain. Its primary function is regulating your emotions and telling you which behaviors to do more or less of, via the feedback signals “pleasure” and “pain”.

Above the limbic system we have the neocortex, which generalizes over all the situations we experience and detects patterns. Basically, our brain is a multi-layered pattern recognition machine. That machine enables us to detect patterns within patterns within patterns, and so on.

In other words: The brain is clustering information into higher and higher levels of abstraction. On the highest level our brain creates a mental simulation of the world (including ourselves).

Neurons in the cortex are organized in small circuits called cortical columns, made of around 100–400 neurons each. There are around 20–100 million columns in the brain, which act as single units that talk to their neighboring columns to form larger computational units. Once such a unit reaches a certain size, it is called a brain area.

Brain areas talk with each other and form processing streams. A common metaphor is that the brain works like an orchestra, where each brain area is an instrument contributing to the music of the mind.

In reality, only a few brain areas seem to be highly specialized. The brain is, at its heart, an interconnected organ, and it is very difficult to define clear-cut purposes for individual regions. It’s as if each player in the orchestra kept switching instruments, sometimes playing the flute, the trombone and the piano all at once. It’s kind of a mess.

There are a few small nuclei that make up your arousal system. You could say they produce the weather that is going on in your brain. They can flood your system with dopamine, serotonin, acetylcholine and other neurotransmitters to make you calm, focused, stressed, motivated, sleepy or social. You can take drugs to affect the weather in your brain quite drastically.

Your mind is spread across two hemispheres. There is a persistent myth that the right brain is the creative one and the left brain the logical, analytical one. This is bullshit. The real difference is that the right hemisphere has a wider focus and thinks more globally, while the left hemisphere thinks more narrowly and is concerned with details. But both hemispheres are involved in creativity, and both are involved in reason. They just use different kinds of attention.

The hemispheres are connected by the corpus callosum – a communication highway between the left and right brain that ensures that both hemispheres make up a coherent self.

Cutting your corpus callosum would essentially create two human beings in one body. Each hemisphere would develop its own personality – which can go as far as a Christian hemisphere and an atheist one, as the Indian neuroscientist Ramachandran observed in a patient.

The strange thing is that you wouldn’t even realize that you are two persons. One reason is that one of them – the right brain – cannot speak, because spoken language is produced in the left brain.

Another reason is that the hemispheres wouldn’t be entirely disconnected. They still merge at another point – the brain stem. But it’s a longer route, with far fewer axons, which makes the integration less coherent than with the corpus callosum intact.

That’s it. The thing that you think is you is a squishy lump of cells: billions of calculation machines that take inputs from external reality (or from each other) to create a simulation of the world, including yourself.

There are a thousand ways things can go wrong in there, and I think everyone comes, at some point, to the conclusion that every human has developed and curated his or her own personal insanity.

Although it seems we already know a lot about how the brain works, there are even more open questions that we cannot yet answer, like:

How do single neurons compute?

How do circuits of neurons compute?

Why do we sleep and dream?

What is the neural basis of subjective experience, cognition and attention?

It’s one of the most fascinating fields that you can study these days.

The State of Artificial Intelligence

Published Thu, 29 Sep 2016

When I started my Bachelor’s studies in cognitive science, older students recommended that we young freshmen read a book. This book was described as “the bible of cognitive science”, and the promise was that, by reading the gospel, we would be inside the club and true followers of the cult. It was “Gödel, Escher, Bach”, an 800-page tome about mathematics, art, computer science, biology, Zen Buddhism and much more, written by Douglas Hofstadter.

Naturally I bought the book and devoured page after page and finally understood – very little. The book explores how self-referential formal rules might allow systems to acquire a high-level state like “meaning”, despite being made of “meaningless” elements. In order to make his case, Hofstadter jumps between the details of knowledge representation theory and philosophical discussions of the notion of “meaning” itself. Heavy stuff.

At its heart the book explores the question Alan Turing posed in 1950:

“Can machines think?”

The answer Hofstadter gives in his book is “Probably. If you can formalize the right model.” His intuition is that something he calls “strange loops” might be crucial for consciousness to emerge from a system. Strange loops create self-referential systems that, by moving only upwards or only downwards through a hierarchy, find themselves back where they started. Strange loops remain one of the most interesting attempts to model consciousness. However, no functional AI based on that model has succeeded so far.

M. C. Escher’s pictures, like this one, in which a pair of hands draw each other, are visual examples of strange loops. (M. C. Escher, Drawing Hands, 1948)

But Hofstadter wasn’t the first who tried to answer Turing’s question.

The Birth of Artificial Intelligence

The brood chamber out of which artificial intelligence crawled into existence was the Dartmouth conference, held in the summer of 1956, where a dozen brilliant researchers met at Dartmouth College in Hanover, New Hampshire. These were the crème de la crème of computer science: John McCarthy, Herbert Simon, Claude E. Shannon and Marvin Minsky, among others. Their goal was to formally describe the process of learning and every other feature of intelligence, in order to create a machine that could simulate “thinking”. At this conference the term artificial intelligence was coined, and the event went down in history as the birth of AI.

Though the work was originally planned to be finished by the end of that summer, not one of the Dartmouth group’s goals has been fully reached to this day. We still lack the knowledge of how the brain creates those peculiar cognitive states of the mind that we experience every day.

However, they had some minor successes that were pretty revolutionary for their time. One attempt was to formulate reasoning as a simple brute-force search algorithm. Given a task like winning a game or solving a puzzle, the program searches all possible choices it can make, like moving through a maze and backtracking if it reaches a dead end. Such a “General Problem Solver” was able to solve highly formalized problems with a small number of possible choices, like the Towers of Hanoi. But it failed at any real-world problem, because there the number of choices is far too high, which leads to a combinatorial explosion. Because time complexity grows exponentially with problem size, a lot of computing power is needed to go through all possible choices in a reasonable time. That kind of computing power wasn’t around back then.
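That blind-search idea is easy to demonstrate. Here is a minimal Python sketch (my own illustration, not the original General Problem Solver code) that solves the Towers of Hanoi by exhaustively exploring the state space, backtracking via a set of already-seen states:

```python
from collections import deque

def moves(state):
    """Yield every legal successor state: move a top disk onto an
    empty peg or onto a larger disk."""
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst != src and (not state[dst] or state[dst][-1] > disk):
                pegs = [list(p) for p in state]
                pegs[src].pop()
                pegs[dst].append(disk)
                yield tuple(tuple(p) for p in pegs)

def solve(n):
    """Blind breadth-first search: enumerate states until the goal
    appears -- no insight, just exhaustive enumeration."""
    start = (tuple(range(n, 0, -1)), (), ())
    goal = ((), (), tuple(range(n, 0, -1)))
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth  # number of moves in the shortest solution
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))

print(solve(3))  # -> 7, the optimal move count (2^n - 1)
```

Three disks means only a few dozen reachable states. A real-world problem like a full chess game has astronomically more, which is exactly the combinatorial explosion that stopped the early programs.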

Yet their initial successes led to a small hype and a phase in which AI was heavily funded by the Advanced Research Projects Agency, which later became what is known today as DARPA. AI was blossoming.

But it didn’t last long. In 1973 James Lighthill – a British mathematician – published a scathing critique of the lack of progress in AI research and its failure to produce any real-world applications. This led to political pressure from Congress and a stop in funding for undirected AI research from the U.S. and British governments. The halt in progress that followed, lasting into the 80s, is called the “AI Winter”. During this time people were generally disillusioned with artificial intelligence, and the claims of AI researchers were heavily attacked by philosophers like Dreyfus and Searle, who argued that the processes inside the machines can never be described as “thinking”. The hype was over.

The Rise of Expert Systems

Around 1980, an expert system called XCON was created at Carnegie Mellon University for the Digital Equipment Corporation. Expert systems operate within a small domain of specific knowledge instead of trying to simulate “general intelligence”. Their simple design made it relatively easy for such programs to be built and then modified once they were in place. XCON was an enormous success, saving the company 40 million dollars per year by 1986. Other corporations from all over the world were impressed and started to develop and deploy expert systems en masse, creating a new hype. By 1985 over a billion dollars was being spent on expert-system AIs. By the end of the eighties it seemed as if people didn’t care anymore about machines achieving the metaphysical goal of “thinking”; they were just happy to use automatic computer systems that actually got some work done.

During this time AI researchers also began to use sophisticated mathematical tools. There was a realization that many AI problems had already been solved by mathematicians, economists and other researchers. AI became a more rigorous scientific discipline and made rapid progress by implementing sophisticated algorithms like Bayesian networks, hidden Markov models, neural networks and evolutionary algorithms.

Also, thanks to Moore’s law, processing power became much cheaper and much more powerful. New algorithms could be implemented, enabling applications for machines that had not been possible before.

Fast-forward to May 11th, 1997. IBM’s supercomputer Deep Blue beats the world champion Garry Kasparov in a game of chess, broadcast live over the internet to 74 million viewers. This was the “moon landing event” of artificial intelligence. That day, AI arrived in the collective consciousness as a force that will shape our world. Ironically, Deep Blue wasn’t even an AI in a technical sense, as IBM points out in a statement:

“Deep Blue, as it stands today, is not a ‘learning system.’ It is therefore not capable of utilizing artificial intelligence to either learn from its opponent or ‘think’ about the current position of the chessboard. … Any changes in the way Deep Blue plays chess must be performed by the members of the development team between games. Garry Kasparov can alter the way he plays at any time before, during, and/or after each game.”

In 2011, in a media spectacle similar to Deep Blue’s, another AI was pushed into the arena against human opponents. This time the machine was called Watson and the game was Jeopardy!. Watson won the match against the two human champions by a wide margin, demonstrating a revolutionary capacity to understand natural language.

Since then, IBM has poured a lot of money into improving Watson even further, making it one of the most advanced AIs around, with an impressive range of applications in its quiver.

What Can AI Do Today?

Some applications, like stock market trading, search engines and classifying DNA sequences, have relied on machine learning algorithms for decades. But AI is spreading into more fields, becoming more ubiquitous each year. Here are just a few highlights of what artificial intelligence can do:

A car that can drive itself is a car that can deliver itself to you. It can refuel or recharge itself without you having to worry about it. It can also store (or what we used to call “park”) itself. Self-driving vehicles can make transportation enormously energy-efficient, since, instead of using a bulky all-purpose car, most trips can be accomplished by a very small electric on-demand vehicle. Autonomous driving will revolutionize our whole concept of mobility.

But the terror attack in Nice on July 14, 2016 has reminded us, in a terrible way, that cars are also deadly weapons that we put our bodies into. In certain situations, an artificial intelligence that controls such a weapon has to make decisions about life and death. Can something like a “moral code” be programmed?

Researchers at Duke University are trying to create a moral machine by letting an artificial intelligence observe real humans making ethical decisions and learning to identify general patterns in those choices. Another approach, followed by researchers from Northwestern University, is to use a model based on the structure-mapping theory of analogy and similarity by the influential psychologist Dedre Gentner. No matter in what form morality ends up being represented in self-driving cars, their sheer ability to be better drivers than humans will result in significantly fewer deaths than we have today.

Google’s DeepMind demonstrated another milestone when its software taught itself to play classic Atari video games. The big breakthrough here is that the algorithm was let loose on the games without any prior knowledge of how to play them. The AI learned to play by itself, through pure trial and error, using scored points as the reward for single actions.

This means that Google is pretty advanced when it comes to machines that can learn pretty much anything, as long as the problem space is relatively confined, as in games. The deep Q-network is, in fact, a simple first version of a general-purpose agent that is able to continually learn without human intervention.
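The trial-and-error rule behind the deep Q-network is classic Q-learning; DeepMind’s contribution was pairing it with a deep neural network reading raw screen pixels. The toy sketch below (a made-up five-cell “corridor game” of my own, not the Atari setup) shows the bare learning rule:

```python
import random

random.seed(0)  # reproducible toy run

# A made-up "game": a corridor of 5 cells. The agent starts in cell 0
# and scores a point only upon reaching cell 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # the agent's learned value table

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: usually exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next action.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# After training, the greedy policy heads straight for the goal.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

The agent is never told the rules; it discovers that “right” pays off purely from the reward signal, which is the essence of the approach described above.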

Marketers are also interested in facial recognition software – for example, to track customers who pay in cash, in order to throw personalized ads at them in the future. Until now, cash purchases have been impossible to track. But this is about to change once marketers start to employ facial recognition, using the cameras in a brand’s stores, to monitor the products shoppers physically carry out. This will effectively overcome the cash-payment barrier.

Similar systems that are able to automatically process videos are also in development.

In 2016, an AI pilot called ALPHA repeatedly beat a retired U.S. Air Force colonel in simulated dogfights. The Colonel remarked: “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

On July 28, 2015, an open letter was announced at the opening of the IJCAI conference urging governments to ban autonomous weapons. To date, the letter has been signed by over 20,000 people, including Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky and many more. The main fear behind the proposed ban is that networks of autonomous weapons could accidentally ignite a war that quickly spirals out of control. However, the open letter has had little impact on high-tech militaries like those of the US, China, Israel, South Korea, Russia and the United Kingdom, which are all developing fully autonomous weapon systems.

AI is even starting to do science. In 2015, researchers set an AI loose on a biological puzzle: how the gene network of the flatworm controls its famous ability to regenerate. The AI used an evolutionary approach, developing simulation after simulation until it came up with a gene network model that matched the experimental data perfectly. Creating a scientific model is one of the most creative things a human can do, and this particular problem was too hard for humans, who had tried to develop such a model for over a century. The computer solved it in just three days (although the programmers had to work several years to describe the scientific experiments humans had carried out in a mathematical language the computer could understand).

The “Data Science Machine”, developed by an MIT startup, can already run on any raw data set and create predictive models within a couple of hours. This is where AIs outstrip humans, who have to work a long time to develop even a single model of high complexity.

“Humans can typically create one or two good models a week; machine learning can create thousands of models a week.” (Thomas H. Davenport, analytics thought leader)

Developing models is one of the most exciting applications for AI in fields where predicting the future is crucial, like finance or meteorology. The dark side of this kind of AI is that the US military uses such models to predict who might be a terrorist, based on metadata like phone calls and geo-data. When a threshold is reached and the algorithm predicts that someone is a terrorist, a drone is ordered to hammer its justice down from heaven and kill the person and everybody nearby. Without a trial. Without any actual evidence. Just based on an artificial intelligence that created a model of a guy being a terrorist.

Microsoft’s chatbot “Tay” was built to represent a female teenager and was able to learn from its conversations on social media. The people of the internet, of course, exploited this fact and fed the bot all kinds of racist, misogynistic and hateful content. Only a few hours after its birth, Tay was tweeting attacks on feminists and gems like “I just hate everybody” and “Hitler was right. I hate the Jews”. Microsoft pulled the plug.

Critics see in Tay an example of the limitations of artificial intelligence and declare the chatbot to be nothing more than a parrot that repeats whatever is presented to it, no matter how stupid. What these critics are missing is that this is exactly how humans learn, too. Imitation is the most crucial learning tool, and only very rarely is our behavior based on deep analysis or innovation. You only have to take a quick look at human Twitter to realize that – most of the time – people are parrots too.

What Can AI Not Do?

Robots have been trusted with specific tasks in the controlled environment of factories for decades, and there they are very efficient. To test how robots perform at tasks that cannot be formalized in a straightforward way, DARPA designed the 2015 Robotics Challenge.

The tasks of the challenge were based on emergency-response scenarios like “open a door and enter a building” or “locate and close a valve near a leaking pipe”. DRC-HUBO, a bipedal humanoid, won the Challenge Finals and can therefore be considered the most advanced humanoid robot to date. Yet the whole event made clear how hard it is for a robot to navigate an unstructured environment.

We probably still have a long time until we have to welcome our new robot overlords, as the compilations of robots failing the challenges vividly show.

The Frequency Bias

A common shortcoming of AIs is that they are not good at dealing with outliers. Take the recommendation algorithm of Netflix for example. The more data I feed into it, the more recommendations it gives me based on my past decisions. But taste in movies is nothing that can be easily formalized.

I generally like classic action movies such as The Matrix, Die Hard, Lethal Weapon and Mad Max, yet one of my favorite movies is Jean-Pierre Jeunet’s wonderful film Amelie. The more movies I watch that fit my “main taste”, the more likely the recommender AI is to miss unexpected options that I would also like. This flaw – called frequency bias – is common in modern machine learning algorithms. Humans are not free from it either, but we are much better at dealing with it, using our intuition to separate important outliers from unimportant ones.
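A caricature of why this happens: if a recommender simply scores candidates by how often their genre appears in my history, the outlier can never win. (A deliberately minimal sketch with made-up titles and genres, not Netflix’s actual algorithm.)

```python
from collections import Counter

# My (made-up) watch history and two candidate recommendations.
history = ["action", "action", "action", "action", "quirky-comedy"]
candidates = {"John Wick": "action", "Delicatessen": "quirky-comedy"}

# Score each candidate by how often its genre appears in the history.
genre_counts = Counter(history)
ranked = sorted(candidates, key=lambda title: genre_counts[candidates[title]],
                reverse=True)
print(ranked)  # ['John Wick', 'Delicatessen'] -- the Amelie-like outlier always loses
```

Real recommenders are far more sophisticated, but any model that leans on frequency counts inherits some version of this bias.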

The Frame Problem

Natural language software is the next frontier in AI. All the big IT companies are trying to create a reliable personal agent that is able to have a genuinely intelligent conversation with a person. So far Google, Microsoft, IBM, Apple and Amazon are neck and neck. But despite recent progress, the task of understanding and producing language reveals a crucial limit of modern AI. At the heart of the difficulty lies what is known as “The Frame Problem”.

We humans have an intuitive understanding of what is relevant during each moment of our lives. We don’t have to think about what’s relevant; we just know. Grasping what is relevant and ignoring what is not, in real time, has proven incredibly hard for machines.

It is such a difficult problem because the environment around us is constantly changing. What’s relevant now can be irrelevant just three seconds later, and things that are irrelevant can quickly become relevant.

Even building a machine that possesses a comprehensive database from which it can create a detailed model of the world is not enough. It would also need to know which facts are relevant in each particular context. Without being able to tell what is relevant, stupid decisions are bound to be made. This is why Google Translate sometimes produces nonsensical translations from German to English.

The underlying AI does not have a dynamic perspective on language. It doesn’t know which particular frame to use and therefore produces sentences that clearly make no sense.

The skill of “knowing what is relevant” is at the core of any intelligent behavior. Engineers have worked on this problem for a long time, but suitable solutions for current machine learning algorithms have yet to be developed. Current data-driven models fail to capture the human “magic” of recognizing relevancy. Which means that, unfortunately, personal agents as depicted in the movie “Her” will probably remain science fiction for a long time to come.

Maybe, in order to create a machine that can solve the frame problem, we shouldn’t just trust in Moore’s Law. Maybe we have to go back to Douglas Hofstadter’s „Gödel, Escher, Bach“ and think about what a formal model of consciousness might look like, instead of hoping that an AI will just understand what’s relevant in any situation, given enough processing power.

Still, modern AIs are pretty advanced. We learned that machine learning algorithms are used these days to develop scientific models (see above). Maybe in some lab, in some part of this world, a machine is currently searching for a model that will lead to its own evolution. What a strange loop that would be…

Did Physics Prove that we Don’t have Free Will? (June 11, 2016)

Why are some people even questioning that we have free will? Isn’t it, as the great French philosopher René Descartes argued, simply self-evident that our will is free? Isn’t it absurd to believe that we do not make real decisions, given how we struggle to make them every day? Clearly, you are able to freely choose to eat an apple or a banana, and no one in the world could predict with certainty which way your decision will go.

But a decision may only seem unpredictable to us because human brains are so incredibly complicated. If an apple falls from a tree, we can calculate its movement, as the laws of physics that apply to the fruit are relatively simple. A brain, by contrast, consists of around 90 billion neurons that are interconnected in complex ways. Additionally, each neuron is a complicated biological machine in itself. Because of this tremendous complexity, we cannot calculate the behavior that results from the brain’s computations.

The French mathematician Pierre-Simon Laplace introduced the idea of an entity called Laplace’s Demon. This hypothetical, omniscient being knows everything that can be known about the universe, including every detail of the inner workings of my physical brain. Some argue that, because it possesses knowledge of all neurological processes, this demon would be able to predict my behavior with absolute certainty. This worldview – that the future of the universe is fully determined by its past, through the laws of cause and effect – was held by many famous thinkers, such as Bertrand Russell, Voltaire, Charles Darwin and Albert Einstein.

Laplace’s demon; not always using his powers for good.

The argument basically goes like this:

Our consciousness is fully determined by the physical processes of our brains.

All physical processes can be calculated and predicted, given all the necessary information.

That means consciousness can be calculated, and thus, it cannot be free.

There are more than enough philosophical discussions of this problem, and most leave you even more confused than you were before. So let’s not do that. Instead, let’s have a look at whether our advanced understanding of physics can help us answer the question: “Do we have free will?”

Physics and Free Will

Our bodies exist in space and in time. We do not know with absolute certainty whether our consciousness exists in the realm of space. But we know that consciousness exists in the domain of time. Our mind travels through time just like our physical body does. Therefore the mind has to adhere to the rules of time, according to physics.

Until the early 20th century, time was believed to progress at a fixed rate throughout the universe, no matter where you are. Then Einstein came along and realized: time is not a label of the whole universe. Time is experienced locally.

Imagine two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. Einstein showed that those two events may occur non-simultaneously in the reference frame of another inertial observer. This is called the ‘relativity of simultaneity’.
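In the language of special relativity, this follows directly from the Lorentz transformation for intervals between two events:

```latex
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

If two events are simultaneous for one observer (Δt = 0) but separated in space (Δx ≠ 0), an observer moving at velocity v relative to the first measures Δt′ = −γvΔx/c² ≠ 0: for them, the events are not simultaneous.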

The figure above is an oversimplification, since, to get the full picture, we have to take motion into account. A good explanation can be found in Brian Greene’s show “The Fabric of the Cosmos”, in the episode about time.

Einstein’s theory shows that the past is not gone; it is just gone for us. From some parts of the universe, our past can still be observed. Likewise, our future has, in a sense, already happened somewhere in the universe; it just needs some time to reach our position for us to experience it. Past, present and future are all equally real; they all exist. They exist in something we call space-time, which includes all reality – all of space at all times. Future, past and now.

Well, if this is true, it seems like it’s “Game Over” for free will. If the future already exists, I cannot decide differently. And if there are no alternative possibilities for me to decide upon, I do not have free will, right?

The argument is sound, but Einstein’s relativity does not describe all of reality. The micro-nature of the cosmos is expressed by something even weirder: quantum physics.

The non-existence of free will is only true if our 4-Dimensional reality, that we call space-time, is static and not able to change. Yet there is a mysterious phenomenon called “quantum entanglement” that suggests that reality is not accurately represented by a static 4D block.

Quantum entanglement refers to the phenomenon of particles affecting each other without any time passing, regardless of distance. This is not just a “freak property” that appears for some particles in a lab; it seems to be an intrinsic property of the universe. When several particles are entangled, they form a composite system in which they cannot be described independently of each other. Measuring one particle immediately affects all the other entangled particles. This is known as ‘nonlocality’. The crucial part here is that they affect each other instantaneously, which means that some information must either be traveling faster than light or taking a shortcut through a dimension outside of space-time.

Recently, scientists at the National Institute of Standards and Technology (NIST) have proven beyond reasonable doubt that ‘nonlocality’ is actually real.1

If some information can travel faster than light, it opens the possibility for the future to affect the past. A future that is able to affect the past (at least on the quantum level) does not paint a picture of space-time that is static. It depicts a space-time that is able to change, in some way, constantly.

But how can space-time change? Traditionally things change over time, which is not possible in this case, as time (together with space) is the very thing that changes. So what is that thing outside space time? Another dimension? We don’t know for sure. All we know is that there must be something more to reality than the 4-dimensional space-time continuum.

If we combine the insights from relativity and quantum dynamics, we might conclude that past, present and future are already written; but the ink is not yet dry.

The Swiss physicists Nicolas Gisin and Antoine Suarez think that nonlocality could explain free will, because “something is coming from outside space and time”. Since we know almost nothing about this mysterious thing, we cannot make any statements about whether determinism applies to it or not. It might be that free will, if it exists, originates from this mysterious realm without being determined by anything else. It seems that free will, much like god or other spiritual claims, is a matter of belief.

Do I personally believe that we have free will?

If I perform a deep introspection on the inner workings of my mind, I have to agree with Bertrand Russell who said: “I cannot find any specific occurrence in my mind that I could call ‘will’.” A free will, to me, sounds like ‘something’ (a decision, or at least some part of a decision) is created out of ‘nothing’. This, to me, sounds impossible due to the fundamental property of the universe that energy cannot be created out of nothing, which is true even in the quantum world.

So let’s assume we don’t have free will. Does this mean we have to descend into an existential crisis? That our lives have no meaning and we might as well just shoot ourselves?

Not necessarily. Even though I don’t believe in free will, I treat it just like I treat the fact that the universe has a diameter of at least 93 billion light years. If I were consciously aware of this fact all the time, it would make me feel incredibly tiny and insignificant; I couldn’t handle the tasks of my everyday life. So I ponder the universe a bit, and then I come back to my human life and focus on the tasks and relationships that have meaning for me, leaving the universe at the doorstep.