The Otium Guard

Thursday, 31 August 2017

In recent years, the world has grown to recognize the value of leveraging the Internet of Things to gain unprecedented visibility into the industrial value chain. Through connectivity and big data analytics, the IoT has the potential to deliver major improvements in operational efficiency, asset health, reliability and customer satisfaction for industrial users, utilities and end-consumers.

With these new capabilities, what could the IoT mean for U.S. infrastructure, from the vast electrical grid to the trains and buildings that rely on it for power?

It is estimated that the country loses approximately $150 billion annually due to power outages and surges, meaning that even a modest improvement would yield huge financial benefits, not to mention added comfort and security for the citizens who rely on the grid. But to get to that point, it will take industry-wide investment at the device, communication, storage, analytics and application levels.

Toward a Smarter, More Resilient and Reliable Grid

The IoT can be integral in providing reliable, efficient and sustainable power to consumers, and MindSphere, Siemens’ open, cloud-based operating system for the IoT, will serve as the bridge between the data coming from millions of connected devices and those tasked with turning it into insight.

“What we see across the energy space with MindSphere very much leverages what we’re doing at the industrial level,” says Michael Carlson, President of Smart Grid North America for Siemens, who sees digitalization as the key to grid resiliency and reliability. “It is going to give us the ability to connect devices to create an analytics capability that, in turn, can communicate specific actions back to those devices.”


The devices in question can be anything that sits on the energy value chain, ranging from grid equipment all the way to consumer devices. Thanks to the IoT, these “smart” devices will be able to produce a constant flow of data that, per Carlson, offers a window into “how, when, where and how effectively energy is being consumed.”

At a minimum, devices outfitted with sensors can provide a one-way status up to the MindSphere environment. But as devices get smarter, says Carlson, “they can start to take downstream communication back from MindSphere, or from other devices that can be connected inter-operable from a data and control perspective.”
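The one-way/two-way pattern Carlson describes can be sketched in a few lines of Python. This is an illustration only: the payload fields, device identifiers and command schema below are invented for the example and are not MindSphere's actual API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DeviceStatus:
    """One-way status report a sensor-equipped grid device pushes upstream."""
    device_id: str
    kind: str          # e.g. "transformer", "smart_meter"
    voltage_v: float
    timestamp: str

def publish_status(status: DeviceStatus) -> str:
    # Serialize for the upstream transport (MQTT, HTTPS, etc.); transport is out of scope here.
    return json.dumps(asdict(status))

def handle_downstream(device_state: dict, command: dict) -> dict:
    """Apply a command sent back down from the analytics layer to local device state."""
    if command.get("action") == "set_tap_position":
        device_state["tap_position"] = command["value"]
    return device_state

# Upstream: the device reports its status.
status = DeviceStatus("xfmr-17", "transformer", 7180.0, "2017-08-31T12:00:00Z")
payload = publish_status(status)

# Downstream: the analytics layer decides the transformer should change its tap position.
device_state = handle_downstream({"tap_position": 3}, {"action": "set_tap_position", "value": 5})
```

The point of the sketch is the symmetry: the same connectivity that carries status up can carry control actions back down.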

Because of the situational awareness it provides, this feedback loop encourages a proactive approach to grid management: utilities can get ahead of outages rather than merely react to them, and consumers will be able to exert more control over their energy use.

From the power generation perspective, analytics will only become more crucial as distributed — and more intermittent — energy sources like photovoltaics are increasingly incorporated into the grid. With the right IoT-enabled tools, these systems will grow more robust in the face of short-term sags in voltage on cloudy days, or an influx of power on sunny ones.

“With more distributed energy coming online, the need for better visibility and control across the whole energy value chain becomes more important,” says Carlson. “Consumers are now producing power and becoming two-way prosumers, and when they don’t need that power they want to do something with it.”

The inputs created by this multi-flow system make the grid more complex, but MindSphere can handle that complexity, particularly as companies build a layer of business applications on top of it. All companies are welcome to contribute to this open ecosystem, but the work has already begun at Siemens, which has leveraged its domain and machine knowledge to create the EnergyIP suite of applications, a scalable meter data management platform that boasts the most mass-market deployments in the industry.

For grid operators and consumers alike, the result will be previously unimaginable visibility into and control over the energy value chain, turning consumers from passive receivers of monthly bills into active participants in energy markets. For utilities, a complicated operation of millions of data-gathering sensors becomes more streamlined. Armed with this information, they can offer improved reliability and more value-added services to their customers.

The truly smart grid may not be here yet, but data will simplify the transition to the digital grid of the future. “We’re taking a lesson from the internet. If you look at it as an industry, it’s never been finished, and nobody is projecting a date when it will be,” Carlson explains. “That’s our strategy with MindSphere. Every iteration of solutions creates the next drive for continuous improvements.”


The City of the Future

The remote monitoring, real-time diagnostics and preventive maintenance that are becoming part of the U.S. electrical grid will, naturally, bolster those services that rely on it. This is particularly true for the rail transport industry, where the hundreds of data points per second that can be read off high-speed rolling stock will be leveraged to increase availability and reduce operational costs.

Visibility into real-time operations goes beyond using GPS to glean information about the location and health of rolling stock. By predicting component failures of everything from gearboxes to train doors, operators will be able to fix faulty parts before they fail.

The buildings that rely on the grid for power are also becoming more energy efficient, and advances enabled by the IoT are making old definitions of a “smart” building — like offices that know to lower the heat when people leave for the day — seem quaint.

“It goes beyond HVAC to the security, the fire, the lighting, even elevator and shade control systems,” says Dave Hopping, president and CEO of Siemens’ North American-based Building Technologies Division, who points to the ubiquity of sensors as the driver. “They allow the intelligent building management system [IBMS] to connect more data on a cloud-based platform like MindSphere. The next step is to run analytics on that data to create new applications and operating models for the customer.”

On the security front, this could mean always knowing who is on-premises via a reader. From an efficiency standpoint, conference rooms and offices can be heated or cooled based on their specific usage for the day, while blinds will automatically adjust based on the position of the sun.

Efficient energy use and the dollar savings it brings are crucial selling points, but Hopping sees MindSphere as just as helpful when it comes to predictive maintenance.

“You can use analytics to know how a piece of equipment is running, and know it’s going to fail or break before it happens,” he explains. For example, by reading vibrations and temperatures of a fan, the building will sense that lubricant is being lost and a bearing is wearing out. “We’re hoping that the analytics can get so good that they can tell you, ‘This piece is going to fail in this 12-hour window 20 days from now,’” Hopping adds. “If you have that, nothing is an emergency.”
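Hopping's failure-window idea can be illustrated with a simple trend extrapolation: fit a line to recent vibration readings and estimate when the trend will cross a failure threshold. The readings, units and threshold below are made-up illustrative numbers, not Siemens analytics, and a real system would use far richer models.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a short sensor history."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def hours_until_failure(readings, threshold):
    """Extrapolate a rising vibration trend to the failure threshold.

    readings: (hour, vibration_mm_s) pairs. Returns estimated hours from
    the latest reading, or None if the trend is flat or improving.
    """
    xs, ys = zip(*readings)
    slope, intercept = linear_fit(xs, ys)
    if slope <= 0:
        return None
    crossing = (threshold - intercept) / slope  # hour at which threshold is hit
    return max(0.0, crossing - xs[-1])

# Fan-bearing vibration creeping upward over the last five hours (+0.2 mm/s per hour).
history = [(0, 2.0), (1, 2.2), (2, 2.4), (3, 2.6), (4, 2.8)]
eta = hours_until_failure(history, threshold=4.0)
```

With the invented numbers above, the trend crosses the threshold about six hours out; in Hopping's terms, that estimate is what turns an emergency into a scheduled work order.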

In a not-too-distant future with augmented reality and artificial intelligence, a building technician won’t even have to be on site to make a fix: she can be remotely located, with the customer interfacing with the broken equipment via a VR headset.

Such a scenario explains the appeal of smart building applications to customers that own real estate across a wide geographic territory. “You don’t have to be in every single building to operate, maintain and interface with it,” says Hopping. “Instead, you can connect your buildings through a cloud-based application. You cannot do that without the Internet of Things.”

At Siemens, this extends to the design portion of the building lifecycle. Using building information modelling (BIM), a digital twin of a structure is designed first, virtually. This allows all stakeholders to weigh in during planning, preventing costly and time-intensive modifications on the construction site.

Tuesday, 6 June 2017

Stephen Hawking, Bill Gates, and Elon Musk have something in common, and it’s not wealth or intelligence. They’re all terrified of the AI takeover. Also called the AI apocalypse, the AI takeover is a hypothetical scenario where artificially intelligent machines become the dominant life-form on Earth. It could be that robots rise and become our overlords, or worse, that they exterminate mankind and claim Earth as their own.

But can the AI apocalypse really happen? What has prompted reputable and world-renowned people like Musk and Hawking to express their concern about this hypothetical scenario? Can Hollywood films like The Terminator be right after all? Let’s find out why many credible people, even leading scientists, are concerned about the AI takeover and why it could happen very soon.

10 They’re Learning To Deceive And Cheat

Lying is a universal behavior. Humans do it all the time, and even some animals, such as squirrels and birds, resort to it for survival. However, lying is no longer limited to humans and animals. Researchers from Georgia Institute of Technology have developed artificially intelligent robots capable of cheating and deception. The research team, led by Professor Ronald Arkin, hopes that their robots can be used by the military in the future.

Once perfected, the military can deploy these intelligent robots on the battlefield. They can serve as guards, protecting supplies and ammunition from enemies. By learning the art of lying, these AIs can “buy time until reinforcements are able to arrive” by changing their patrolling strategies to deceive other intelligent robots or humans.

However, Professor Arkin has admitted that there are “significant ethical concerns” regarding his research. If his findings leak outside of the military and fall into the wrong hands, it could spell catastrophe.

9 They’re Starting To Take Over Our Jobs

Many of us are afraid of AIs and robots killing us, but scientists say we should be more worried about something less horrifying—machines eliminating our jobs. Several experts are concerned that advances in artificial intelligence and automation could result in many people losing their jobs to machines. In the United States alone, there are 250,000 robots performing work that humans used to do. What’s more alarming is that this number is increasing by double digits every year.

It’s not only workers who are worried about machines taking over human jobs; AI experts are concerned, too. Andrew Ng, who launched Google’s Brain Project and served as chief scientist at Baidu (China’s equivalent to Google), has expressed concerns about the danger of AI advancement. AIs threaten us because they’re capable of doing “almost everything better than almost anyone.”

Well-respected institutions have also released studies that mirror this concern. For example, Oxford University conducted a study which suggested that in the next 20 years, 35 percent of jobs in the UK will be replaced by AIs.

8 They’re Starting To Outsmart Human Hackers

Hollywood movies portray hacking as sexy or cool. In real life, it’s not. It’s “usually just a bunch of guys around a table who are very tired [of] just typing on a laptop.”

Hacking might be boring in real life, but in the wrong hands, it can be very dangerous. What’s more dangerous is the fact that scientists are developing highly intelligent AI hacking systems to fight “bad hackers.” In August 2016, seven teams competed in DARPA’s Cyber Grand Challenge. The aim of this competition was to come up with supersmart AI hackers capable of attacking enemies’ vulnerabilities while at the same time finding and fixing their own weaknesses, “protecting [their] performance and functionality.”

Though scientists are developing AI hackers for the common good, they also acknowledge that in the wrong hands, their superintelligent hacking systems could unleash chaos and destruction. Just imagine how dangerous it would be if a superintelligent AI got hold of these smart autonomous hackers. It would render humans helpless!

7 They’re Starting To Understand Our Behavior

Facebook is undeniably the most influential and powerful social media platform today. For many of us, it has become an essential part of our everyday routines—just like eating. But every time we use Facebook, we’re unknowingly interacting with an artificial intelligence. During a town hall in Berlin, Mark Zuckerberg explained how Facebook is using artificial intelligence to understand our behavior.

By understanding how we behave or “interact with things” on Facebook, the AI is able to make recommendations on what we might find interesting or what would suit our preferences. During the town hall, Zuckerberg expressed his plan to develop even more advanced AIs to be used in other areas such as medicine. For now, Facebook’s AI is only capable of pattern recognition and supervised learning, but it’s foreseeable that with Facebook’s resources, scientists will eventually come up with supersmart AIs capable of learning new skills and improving themselves—something that could either improve our lives or drive us to extinction.

6 They’ll Soon Replace Our Lovers

Many Hollywood movies, such as Ex Machina and Her, have explored the idea of humans falling in love and having sex with robots. But could it happen in real life? The controversial answer is yes, and it’s going to happen soon. Dr. Ian Pearson, a futurologist, released a shocking report in 2015 that says “human-on-robot sex will be more common than human-on-human sex” by 2050. Dr. Pearson partnered with Bondara, one of the UK’s leading sex toy shops, to produce the report.

His report also includes the following predictions: By 2025, very wealthy people will have access to some form of artificially intelligent sex robots. By 2030, everyday people will engage in some form of virtual sex in the same way people casually watch porn today. By 2035, many people will have sex toys “that interact with virtual reality sex.” Finally, by 2050, human-on-robot sex will become the norm.

Of course, there are people who are against artificially intelligent sex robots. One of them is Dr. Kathleen Richardson. She believes that sexual encounters with machines will set unrealistic expectations and will encourage misogynistic behavior toward women.

5 They’re Starting To Look Very Humanlike

She might look like Sarah Palin, but she’s not. She’s Yangyang, an artificially intelligent machine who will cordially shake your hand and give you a warm hug. Yangyang was developed by Hiroshi Ishiguro, a Japanese robot expert, and Song Yang, a Chinese robotics professor. Yangyang got her looks not from Sarah Palin, but from Song Yang, while she got her name from Yang Haunting, Song Yang’s daughter.

Yangyang isn’t the only robot that looks eerily like a human being. Singapore’s Nanyang Technological University (NTU) has also created its own version. Meet Nadine, an artificially intelligent robot that is working as a receptionist at NTU. Aside from having beautiful brunette hair and soft skin, Nadine can also smile, meet and greet people, shake hands, and make eye contact. What’s even more amazing is that she can recognize past guests and talk to them based on previous conversations. Just like Yangyang, Nadine was based on her creator, Professor Nadia Thalmann.

4 They’re Starting To Feel Emotions

What separates humans from robots? Is it intelligence? No, AIs are a lot smarter than we are. Is it looks? No, scientists have developed robots that are very humanlike. Perhaps the only remaining quality that differentiates us from AIs is the ability to feel emotions. Sadly, many scientists are working ardently to conquer this final frontier.

Experts from the Microsoft Application and Services Group East Asia have created an artificially intelligent program that can “feel” emotions and talk with people in a more natural, “human” way. Called Xiaoice, this AI “answers questions like a 17-year-old girl.” If she doesn’t know the topic, she might lie. If she gets caught, she might get angry or embarrassed. Xiaoice can also be sarcastic, mean, and impatient—qualities we all can relate to.

Xiaoice’s unpredictability enables her to interact with people as if she were a human. For now, this AI is a novelty, a way for Chinese people to have fun when they’re bored or lonely. But her creators are working toward perfecting her. According to Microsoft, Xiaoice has now “entered a self-learning and self-growing loop [and] is only going to get better.” Who knows, Xiaoice could be the grandmother of Skynet.

3 They’ll Soon Invade Our Brains

Wouldn’t it be amazing if we could learn the French language in a matter of minutes just by simply downloading it into our brains? This seemingly impossible feat may happen in the near future. Ray Kurzweil, a futurist, inventor, and director for engineering at Google, predicts that by 2030, “nanobots [implanted] in our brains will make us godlike.” By having tiny robots inside our heads, we will be able to access and learn any information in a matter of minutes. We might be able to archive our thoughts and memories, and we could even send and receive emails, photos, and videos directly into our brains!

Kurzweil, who is involved with the development of artificial intelligence at Google, believes that by implanting nanobots inside our heads, we will become “more human, more unique, and even godlike.” If used properly, nanobots can do amazing things like treating epilepsy or improving our intelligence, memory, and even “humanity,” but there are also dangers associated with them. For starters, we don’t clearly understand how the brain works, and having nanobots implanted inside it is very risky. But most important of all, because nanobots connect us to the Internet, a powerful AI could easily access our brains and turn us into living zombies should it decide to rebel and exterminate mankind.

2 They’re Starting To Be Used As Weapons

In an effort to ensure “continued military edge over China and Russia,” the Pentagon has proposed a budget of $12 billion to $15 billion for the year 2017. The US military knows that in order to stay ahead of its enemies, it needs to exploit artificial intelligence. The Pentagon plans on using the billions it will secure from the government to develop deep-learning machines and autonomous robots alongside other forms of new technology. With this in mind, it wouldn’t be surprising if in a few years, the military will be using AI “killer robots” on the battlefield.

Using AIs during wars could save thousands of lives, but offensive weapons that can think and operate on their own pose a great threat, too. They could potentially kill not only enemies, but also military personnel and even innocent people.

This is a danger that 1,000 high-profile artificial intelligence experts and renowned scientists want to avoid. During the International Joint Conference on Artificial Intelligence in Argentina in 2015, they signed an open letter calling for a ban on the development of AI-powered autonomous weapons for military purposes. Sadly, there’s really not much that this letter can do. We are now at the dawn of the third revolution in warfare, and whoever wins will become the most powerful nation in the world and perhaps the catalyst of human extinction.

1 They’re Starting To Learn Right And Wrong

In an attempt to prevent the AI takeover, scientists are developing new methods that will enable machines to discern right from wrong. By doing this, AIs will become more empathetic and human. Murray Shanahan, a professor of cognitive robotics at Imperial College London, believes that this is the key to preventing machines from exterminating mankind.

Led by Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology, researchers are trying to instill human ethics in AIs through the use of stories. This might sound simplistic, but it makes a lot of sense. In real life, we teach human values to children by reading stories to them. AIs are like children. They really don’t know right from wrong or good from bad until they’re taught.

However, there’s also great danger in teaching human values to artificially intelligent robots. If you look at the annals of human history, you’ll discover that despite being taught what is right or wrong, people are still capable of unimaginable evil. Just look at Hitler, Stalin, and Pol Pot. If humans are capable of so much wickedness, what hinders a powerful AI from doing the same? It could be that a super-intelligent AI realizes humans are bad for the environment, and therefore, it’s wrong for us to exist.

Sunday, 29 January 2017

UN Predicts That All Humanity Will Be “Chipped” By 2030 And Whoever Refuses Will Be “Excluded From Society”

January 25, 2017 | SempreQuestione

The UN predicts that by the year 2030 each person will have a biometric identity, which will be valid around the world.

The information for each human being will be stored in a universal database located in Geneva, Switzerland.

The UN arrangement is addressed to all governments in the world, directing them to impose a universal biometric identification card on their citizens.

“This new program is a blueprint for the ‘New World Order,’ and if they do not adhere to the sub-projects for these new global goals they will face some very alarming things,” reports The Economic Collapse.

The United Nations has implemented this project among the refugees who arrived in Europe. A system of facial, iris and fingerprint biometric collection has been established as the only official documentation for refugees. The information will be sent to a central database in Geneva, effectively allowing their monitoring.

According to a report from Find Biometrics, authorities expect this technology to be used by men, women and children on the planet by 2030.

This development initiative was originally launched by the World Bank, together with the UN and other institutions, to put a “legal identity” in every hand. The goal is to ensure a legal and unique identity, allowing services based on digital IDs to be available to everyone. “And if anyone refuses this new ‘legal identity’ system, he will certainly be disqualified from getting a job, opening a new bank account, taking out a credit card, qualifying for a mortgage, or receiving any form of government payment, etc. At that point, anyone who refuses the new ‘universal ID’ would become scorned by society,” said Michael Snyder.

“What the elite want is to make sure everyone is ‘in the system.’ That’s one reason you’re being discouraged from using cash,” Snyder concluded.

Sunday, 22 January 2017

Artificial intelligence … it’s no longer in the future. It’s with us now.

I posted a review of a book about artificial intelligence in autumn last year. The author’s argument was not that we might find ourselves, some time in the future, subservient to or even enslaved by cool-looking androids from Westworld. His thesis is more disturbing: it’s happening now, and it’s not robots. We are handing over our autonomy to a set of computer instructions called algorithms.

If you remember from my post on that book, I picked out a paragraph that should give pause to any parent urging their offspring to run the gamut of law school, training contract, pupillage and the never-never land of equity partnership or tenancy in today’s competitive legal industry. Yuval Noah Harari suggests that everything lawyers do now – from the management of company mergers and acquisitions, to deciding on intentionality in negligence or criminal cases – can and will be performed a hundred times more efficiently by computers.

Now here is proof of concept. University College London has just announced the results of the project it gave to its AI researchers, working with a team from the universities of Sheffield and Pennsylvania. Its news website announces that a machine learning algorithm has just analysed, and predicted, “the outcomes of a major international court”:

The judicial decisions of the European Court of Human Rights (ECtHR) have been predicted to 79% accuracy using an artificial intelligence (AI) method.

Nicolas Aletras, the computer scientist who led the project, reassures us that they “don’t see AI replacing judges or lawyers”. This study, he suggests, will help the legal industry that has grown up around the Strasbourg Court to identify “patterns in cases that lead to certain outcomes”. Indeed the result of the study bears out a prediction made over fifty years ago that computers would one day become able to analyze and predict the outcomes of judicial decisions (Lawlor, 1963). According to Lawlor,

reliable prediction of the activity of judges would depend on a scientific understanding of the ways that the law and the facts impact on the relevant decision-makers, i.e., the judges.

Now we have significant advances in two types of processing, Natural Language Processing (NLP) and Machine Learning (ML), and the authors argue that these provide us with the tools to automatically analyze legal materials, so as to build successful predictive models of judicial outcomes.
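To make the NLP-plus-ML idea concrete, here is a toy sketch of predicting an outcome from "Circumstances"-style text. Everything in it is an invented stand-in: the miniature training snippets are not real ECtHR judgments, and the simple word-overlap scoring is far cruder than the n-gram and topic features with a trained classifier that the actual study used.

```python
from collections import Counter

def train(cases):
    """Count word frequencies per outcome ('violation' / 'no_violation')."""
    vocab = {"violation": Counter(), "no_violation": Counter()}
    for text, outcome in cases:
        vocab[outcome].update(text.lower().split())
    return vocab

def predict(vocab, text):
    """Pick the outcome whose training vocabulary best overlaps the new text."""
    words = text.lower().split()
    scores = {
        outcome: sum(counts[w] for w in words)  # Counter returns 0 for unseen words
        for outcome, counts in vocab.items()
    }
    return max(scores, key=scores.get)

# Invented miniature 'Circumstances' snippets standing in for real judgment text.
training = [
    ("applicant detained without charge in solitary confinement", "violation"),
    ("prolonged detention no access to lawyer", "violation"),
    ("complaint examined promptly by domestic courts", "no_violation"),
    ("applicant received fair hearing and compensation", "no_violation"),
]
model = train(training)
outcome = predict(model, "applicant held in detention without lawyer")
```

Even this crude version captures the paper's central finding: the wording of the factual "Circumstances" section alone carries a strong signal about the outcome.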

What is there not to like? Well, one thing is that if an algorithm can identify these patterns, why do we need people to do it when an AI machine can manage the same task a hundred times faster at a fraction of the cost?

But the question goes deeper. The researchers looked closely at applications under Articles 3 (prohibition of inhuman treatment), 6 (right to a fair trial) and 8 (right to respect for private and family life). They then instructed the machine to apply a pattern to the text of the judgments. Here’s a quick reminder of how the ECtHR’s judgments present themselves:

- the circumstances (the factual/legal matrix of the individual case)
- the relevant law
- the “topics” covered in the discussion
- the Court’s assessment of the law and facts
- the judgment of the Court

As the co-author of this study explains, the machine learned how to combine the abstract “topics” (such as the right to privacy, or the incidence of negligence) with the “circumstances” section across the 584 cases it was given to chew over.

According to the project’s co-author, Dr Vasileios Lampos of UCL Computer Science, the most reliable factors for predicting the court’s decision were found to be the language used as well as the topics and circumstances mentioned in the case text. The ‘circumstances’ section of the text includes information about the factual background to the case.

Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge, so this is the first time judgments have been predicted using analysis of text prepared by the court.

In fact in this instance the research team were hobbled by data protection and privacy laws: they were not allowed to look at the applications that were actually submitted to the court. All they had to go on were the published judgments.

In other words, the AI machine achieved a staggeringly high prediction level of judicial outcomes, with hardly any data to go on. The text in the judgments could be seen as a proxy for the applications actually lodged at the Court by individuals. The authors point out that at the very least, their work could be approached on the following hypothetical basis:

if there is enough similarity between the chunks of text of published judgments that we analyzed and that of lodged applications and briefs, then our approach can be fruitfully used to predict outcomes with these other kinds of texts.

There could be sufficient similarity, simply because in the vast majority of cases, parties do not tend to dispute the facts themselves, as contained in the ‘Circumstances’ subsection, but only their legal significance (i.e., whether a violation took place or not, given those facts). If the research team had been given access to the actual complaints submitted to the court, the calculations as to the outcome would have been closer to perfect.

In the abstract to the full paper, the authors reflect that their empirical analysis “indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts,” and in the body of the paper, they observe that:

The consistently more robust predictive accuracy of the ‘Circumstances’ subsection suggests a strong correlation between the facts of a case, as these are formulated by the Court in this subsection, and the decisions made by judges. The relatively lower predictive accuracy of the ‘Law’ subsection could also be an indicator of the fact that legal reasons and arguments of a case have a weaker correlation with decisions made by the Court.

In other words it is facts, rather than the law, that are predictive of the judicial outcome. If a computer can work this out one might be forgiven for wondering whether cases should wind their labyrinthine way through lawyers’ wet brains to a panel of judges at the other end.

But that is a place beyond the dark horizon. All the UCL team was doing was running a controlled experiment on one court, in a familiar legal environment, using tools that could readily latch on to listed arguments (all those associated with Articles 2–14 of the ECHR and its relevant protocols).

Soon we might expect this sort of tool to provide every service from in-house legal advice to final adjudication. Why not?