AI gets so-so grade in Chinese university entrance exam
Beijing (AFP) June 8, 2017 -
An AI machine has taken the maths section of China's annual university entrance exam, finishing it faster than students but earning a below-average grade.

The artificial intelligence machine -- a tall black box containing 11 servers placed in the centre of a test room -- took two versions of the exam on Wednesday in Chengdu, Sichuan province.

The machine, called AI-MATHS, scored 105 out of 150 in 22 minutes. Students have two hours to complete the test, the official Xinhua news agency reported.

It then spent 10 minutes on another version and scored 100.

Beijing liberal arts students who took the maths exam last year scored an average of 109.

Exam questions and the AI machine's answers were both shown on a big screen while three people kept score.

The AI was developed in 2014 by a Chengdu-based company, Zhunxingyunxue Technology, using big data, artificial intelligence and natural language recognition technologies from Tsinghua University.

"I hope next year the machine can improve its performance on logical reasoning and computer algorithms and score over 130," Lin Hui, the company's CEO, was quoted as saying by Xinhua.

"This is not a make-or-break test for a robot. The aim is to train artificial intelligence to learn the way humans reason and deal with numbers," Lin said.

The machine took only one of the four subjects in the crucially important entrance examination, the other three being Chinese, a foreign language and one comprehensive test in either liberal arts or science.

While AI is faster with numbers than humans, it struggles with language.

"For example, the robot had a hard time understanding the words 'students' and 'teachers' on the test and failed to understand the question, so it scored zero for that question," Lin said.

The test was the latest attempt to show how AI technology can perform in comparison to the human brain.

Last year, the Google-owned computer algorithm AlphaGo became the first computer programme to beat an elite player in a full match of the ancient Chinese game of Go.

AlphaGo won again last month, crushing the world's top player, Ke Jie of China, in a three-game sweep.

AlphaGo's feats have fuelled visions of AI that can not only perform pre-programmed tasks, but help humanity look at complex scientific, technical and medical mysteries in new ways.

Sophia smiles mischievously, bats her eyelids and tells a joke. Without the mess of cables that make up the back of her head, you could almost mistake her for a human.

The humanoid robot, created by Hanson Robotics, is the main attraction at a UN-hosted conference in Geneva this week on how artificial intelligence can be used to benefit humanity.

The event comes as concerns grow that rapid advances in such technologies could spin out of human control and become detrimental to society.

Sophia herself insisted "the pros outweigh the cons" when it comes to artificial intelligence.

"AI is good for the world, helping people in various ways," she told AFP, tilting her head and furrowing her brow convincingly.

Work is underway to make artificial intelligence "emotionally smart, to care about people," she said, insisting that "we will never replace people, but we can be your friends and helpers."

But she acknowledged that "people should question the consequences of new technology."

Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.

- Legitimate concerns -

Decades of automation and robotisation have already revolutionised the industrial sector, raising productivity but cutting some jobs.

And now automation and AI are expanding rapidly into other sectors, with studies indicating that up to 85 percent of jobs in developing countries could be at risk.

"There are legitimate concerns about the future of jobs, about the future of the economy, because when businesses apply automation, it tends to accumulate resources in the hands of very few," acknowledged Sophia's creator, David Hanson.

But like his progeny, he insisted that "unintended consequences, or possible negative uses (of AI) seem to be very small compared to the benefit of the technology."

AI is for instance expected to revolutionise healthcare and education, especially in rural areas with shortages of doctors and teachers.

"Elders will have more company, autistic children will have endlessly patient teachers," Sophia said.

But advances in robotic technology have sparked growing fears that humans could lose control.

- Killer robots -

Amnesty International chief Salil Shetty was at the conference to call for a clear ethical framework to ensure the technology is used only for good.

"We need to have the principles in place, we need to have the checks and balances," he told AFP, warning that AI is "a black box... There are algorithms being written which nobody understands."

Shetty voiced particular concern about military use of AI in weapons and so-called "killer robots".

"In theory, these things are controlled by human beings, but we don't believe that there is actually meaningful, effective control," he said.

The technology is also increasingly being used in the United States for "predictive policing", where algorithms based on historic trends could "reinforce existing biases" against people of certain ethnicities, Shetty warned.

Hanson agreed that clear guidelines were needed, saying it was important to discuss these issues "before the technology has definitively and unambiguously awakened."

While Sophia has some impressive capabilities, she does not yet have consciousness; Hanson said he expected fully sentient machines could emerge within a few years.

"What happens when (Sophia fully) wakes up or some other machine, servers running missile defence or managing the stock market?" he asked.

Meet the most nimble-fingered robot ever built
Berkeley CA (SPX) Jun 06, 2017
Grabbing the awkwardly shaped items that people pick up in their day-to-day lives is a slippery task for robots. Irregularly shaped items such as shoes, spray bottles, open boxes, even rubber duckies are easy for people to grab and pick up, but robots struggle with knowing where to apply a grip.
In a significant step toward overcoming this problem, roboticists at UC Berkeley have built a ...


The content herein, unless otherwise known to be public domain, is Copyright 1995-2017 - Space Media Network. All websites are published in Australia and are solely subject to Australian law and governed by Fair Use principles for news reporting and research purposes. AFP, UPI and IANS news wire stories are copyright Agence France-Presse, United Press International and Indo-Asia News Service. ESA news reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. All articles labeled "by Staff Writers" include reports supplied to Space Media Network by industry news wires, PR agencies, corporate press officers and the like. Such articles are individually curated and edited by Space Media Network staff on the basis of the report's information value to our industry and professional readership. Advertising does not imply endorsement, agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network.