Boston Dynamics, the company known for its "nightmare-inducing" backflipping robots, has unveiled two new videos that show them autonomously navigating through different terrains, including an office and a lab, and jogging in a grass field.

The clips released Thursday detail the progress that Atlas, a humanoid robot, and SpotMini, a doglike robot, have made. SpotMini, for example, is using cameras to identify and move past obstacles, such as office furniture.

"During the autonomous run, SpotMini uses data from the cameras to localize itself in the map and to detect and avoid obstacles," Boston Dynamics said in the video description. "Once the operator presses 'GO' at the beginning of the video, the robot is on its own."
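The behaviour described there — localize against a stored map, detect obstacles, steer around them — can be sketched in miniature. This is a toy illustration, not Boston Dynamics' software: it assumes an occupancy grid built from camera data, with the robot greedily stepping toward its goal and detouring around occupied cells.

```python
# Hypothetical sketch of map-based obstacle avoidance, loosely modeled on
# the behaviour Boston Dynamics describes (none of these names come from
# their software). The map is an occupancy grid: cells marked 1 are
# blocked, anything else is free space.

def next_move(grid, pos, goal):
    """Pick an adjacent free cell that reduces distance to the goal."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    free = [c for c in candidates if grid.get(c, 0) == 0]  # 0 = free space
    if not free:
        return pos  # boxed in: stay put
    # Greedy choice: minimise Manhattan distance to the goal.
    return min(free, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))

# A piece of office furniture (occupied cell) directly ahead forces a
# sideways detour instead of the straight-line step.
grid = {(1, 0): 1}  # obstacle at (1, 0)
print(next_move(grid, (0, 0), (2, 1)))  # (0, 1): detours around the obstacle
```

A real system would replace the grid lookup with live camera-based perception and a proper planner, but the detect-and-avoid loop is the same shape.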

Meanwhile, Atlas' jump over a downed tree trunk isn't elegant in the way an Olympic hurdler's is, but it more than gets the job done. If that isn't shocking enough, SpotMini continues its venture outside near grills, walking along a concrete path. That probably isn't what most people envision when they think of a fun-filled BBQ with friends and the family dog.

Yeah, I can just see a robot dog running after people with a container of KOH attached to its back and a nozzle to squirt it over people. No muss, no fuss: human execution and body disposal.

About a dozen Google employees are resigning in protest over the tech giant’s involvement in Project Maven, a controversial military program that uses artificial intelligence, Gizmodo reports.

Project Maven, which harnesses AI to improve drone targeting, has been a source of concern for a number of Google employees. Last month, over 3,100 Google workers signed a letter addressed to the company’s CEO, Sundar Pichai, asking him to pull the tech giant out of the project.

Hmmm... so let's see...

You are the key Google execs, and you have the opportunity to write and develop operating systems for weapon AI. As the developers, you could include code to prevent it from targeting you or someone/something you do not want targeted...
and 3,100 of your ~88,000 employees object, and 12 are resigning...

So here is the question... do you ignore them and have an iced coffee, or do you ignore them and have an espresso?

I prefer cappuccino but they say you should not drink it after 11am. Unless you are a savage. Like me.

From this day to the ending of the world,
But we in it shall be rememberèd—
We few, we happy few, we band of brothers;
For he to-day that sheds his blood with me
Shall be my brother

I've seen that movie too. The suave savage AI exec, smugly drinking his cappuccino (at 11:01 AM), confident that he will be protected by the code he wrote into the AI, is the first to go as a robot dog minion runs up, lifts its leg and directs a stream of potassium hydroxide while the AI exec mournfully screams "but I created you," which fades to only the sound of melting flesh hissing in the background as the robot minion dog, paws clicking metallically across the floor, runs to its next victim.

Rahul Telang wrote:If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.

Conventional education and tax policies would have only a limited impact on solving the problems.

The research looked at a variety of scenarios ranging from modest substitution of labour by robots and artificial intelligence to a world where they take over all traditional technologies.

In all cases, "automation is good for growth and bad for equality," the study found.

IMF economists Andrew Berg, Edward Buffie, and Luis-Felipe Zanna said currently the debate between the pessimists and the optimists is still unsettled.

However, they make it clear which side they fall on, right from the report's first quote, appropriated from management consultant Warren Bennis.

The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.

"In scenarios where the traditional technology disappears and robots take over the automatable sector, the economy either ascends to a virtuous circle of ongoing endogenous growth or descends into a death spiral of perpetual contraction," the IMF report said.

"Unfortunately, the odds strongly favour the death spiral."

Low wages for an entire working life

While the research does not necessarily represent the views of the IMF, the work is influential in the framing of the Washington-based organisation's policies in its work to promote employment and sustainable economic growth.

The paper found it does not take a big increase in automation to stimulate growth, but in all scenarios workers find the transition difficult and inevitably fall behind in terms of wealth creation.

Machines have long been replacing blue-collar factory workers, but now artificial intelligence is threatening white-collar jobs.

While real wage growth can materialise in "as little as twelve years", the low-wage phase can extend past 50 years.

"The 'short run' can consume an entire working life," the paper argued.

"Although the real wage increases in the long run, labour's share in income decreases most when real output increases most. The bigger the increase in the GDP pie, the less equitable the distribution of the pie."

The paper concedes the basic problem is that nobody knows what the world will look like in 2035.

It notes there is considerable disagreement among economists and technology experts about whether automation will destroy low-skill jobs or those at all skill levels, whether it will penetrate all sectors or just a few and even if it will reduce the demand for workers in all jobs or decrease it in some and increase it in others.

Different this time

However, the research rules out the benign conclusion from previous technological upheavals that everyone is likely to gain.

Will there be any jobs left as artificial intelligence advances?

If you're an accountant, lawyer or data analyst, a robot may soon take over your job. The worst outcome under the IMF modelling is where robots only replace low-skill workers.

"While skilled labour enjoys continuous large gains, the wage for low-skill labour decreases in the short/medium run under conditions much weaker than in the benchmark model [where robots can do any job]," the IMF found.

"Nor is there any assurance that growth eventually raises the low-skill wage. Quite the contrary: there is a strong presumption the real wage decreases more in the long run than in the short run.

"The magnitude of the worsening in inequality is horrific."

Under the research modelling in this case, the skilled wage increases by between 56 and 157 per cent in the long run, while wages paid to low-skill labour fall by between 26 and 56 per cent.

The low-skilled group's share in national income also decreases from roughly a third to as low as 8 per cent.
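The arithmetic behind a share collapse like that is easy to reproduce. This is a toy calculation, not the IMF's model — the headcounts and wage multipliers below are invented for illustration, chosen only so the starting low-skill share is roughly a third, as in the article:

```python
# Toy arithmetic (invented numbers, not the IMF model): with fixed
# headcounts, a skilled-wage rise plus a low-skill wage fall shifts
# labour income sharply toward the skilled group.

def low_skill_share(skilled_wage, low_wage, n_skilled=40, n_low=60):
    """Low-skill labour's share of total wage income."""
    total = skilled_wage * n_skilled + low_wage * n_low
    return round(low_wage * n_low / total, 2)

before = low_skill_share(skilled_wage=3.0, low_wage=1.0)
# Skilled wage up 150%, low-skill wage down 50% -- both within the
# ranges the article quotes (56-157% up, 26-56% down).
after = low_skill_share(skilled_wage=3.0 * 2.5, low_wage=1.0 * 0.5)
print(before, after)  # 0.33 0.09
```

Even without any job losses, the diverging wages alone take the low-skill share from about a third to under a tenth, in the same ballpark as the figures quoted above.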

Even in the scenario where robots only compete for some jobs, and the impacts on wages and growth are reduced, the IMF paper said inequality gets worse.

"Allowing for tasks that complement robots does not help as much as one might think, partly because more and more workers compete for those jobs, driving down the overall wage.

"In addition to the fall in the average wage and the rise in the capital share, unskilled workers suffer large decreases in absolute and relative wages."

Even in areas where robots can't compete, the news isn't great from the IMF team.

"This also does not really help, again because there are only so many of those jobs to go around, and labour chased out of the automatable sector tends to drive down wages."

No policy panacea

As for solutions, the IMF broadly targets two possible ways to limit mounting inequality: through education, and tax.

Sadly neither option looks overly promising.

While education can be seen as an investment to convert workers from unskilled to skilled labour, it has its limitations.

"Can it offset the huge real wage cuts unskilled labour suffers and the decrease in labour's overall income share at an acceptable cost? And if the answer is yes, how long will it take for wages to increase for those who remain unskilled?" the economists ask.

As for tax, as governments around the world are already aware, it is not easy to track down and get a fair share of the profits and capital accumulation of big corporations.

This is pretty much my main beef with AI/robots. I'm already pretty economically marginalized, and the further shrinking of the tax base is not going to help me, or others like me, out. That, and the extermination of the human race.

The IMF analysis is an interesting take on the economics of automation.

It sounds like the best outlook might be that automation needs to take over every laborious "job" (mental and physical) in a short timeframe. And then humanity could assume a kind of pseudo-protectorate status under a benevolent machine empire, with a Universal Basic Income for all, utopia, etc.

The worse (and probably more likely) outcome could be that automation takes over only a large fraction of all laborious "jobs" over a suitably long timeframe, and then only a tiny fraction of humanity remains economically relevant in the New World. The new aristocratic/meritocratic class might then find it less compelling to subsidize the rest of the world from their prosperity, than if their own relevancy had also been dissolved by automation.

As usual: the issue doesn't seem to be sentient robots spontaneously deciding to exterminate the human race, but man's own inhumanity to one another, expedited by more efficient means.

A family in Portland says their Echo device recorded their conversation and sent it to a random person on their contact list.

Amazon reportedly confirmed the incident and blamed it on Alexa misinterpreting background conversation as commands to send a message to a contact.

The incident raises privacy concerns as voice-assistant devices like the Echo gain more popularity.

By Eugene Kim | @eugenekim222 CNBC.com

The Echo device in your room could be secretly recording your conversation — and in some cases, could send it to a random person, according to a report from local Seattle TV network KIRO7.

That's what happened to a family in Portland, who had their conversation at home recorded and sent to a random person on their contact list.

The report said the family was alerted by a colleague in Seattle who had received the audio file. After confirming the audio file was indeed a recording of their private conversation, the family went on to unplug all of their Alexa-powered devices, the report said.

When contacted by the family, Amazon said it takes privacy "very seriously," but downplayed the incident as an "extremely rare occurrence."

In a statement to CNBC, Amazon blamed Alexa misinterpreting background conversation as a set of commands to send a message to a contact:

Echo woke up due to a word in background conversation sounding like "Alexa." Then, the subsequent conversation was heard as a "send message" request. At which point, Alexa said out loud "To whom?" At which point, the background conversation was interpreted as a name in the customer's contact list. Alexa then asked out loud, "[contact name], right?" Alexa then interpreted background conversation as "right". As unlikely as this string of events is, we are evaluating options to make this case even less likely.
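Amazon's account amounts to a four-stage dialogue where each stage needed only one misheard phrase to advance. A hypothetical sketch of that chain — the state names and functions here are invented for illustration and are not Alexa's actual pipeline:

```python
# Hypothetical sketch of the dialogue chain Amazon describes: a four-stage
# state machine in which each stage advances on a single (mis)heard phrase.
# All names here are illustrative, not Alexa's real code.

def run_dialogue(utterances, contacts):
    state = "idle"
    recipient = None
    for heard in utterances:
        if state == "idle" and "alexa" in heard:
            state = "awake"                   # wake word detected
        elif state == "awake" and "send message" in heard:
            state = "need_recipient"          # device asks "To whom?"
        elif state == "need_recipient" and heard in contacts:
            recipient = heard
            state = "confirm"                 # device asks "[name], right?"
        elif state == "confirm" and "right" in heard:
            return f"message sent to {recipient}"
    return "no message sent"

# Four background phrases, each misheard at just the wrong moment:
chain = ["alexa", "send message", "john", "right"]
print(run_dialogue(chain, contacts={"john"}))  # message sent to john
```

Seen this way, the "extremely rare occurrence" is a conjunction of four independent misrecognitions; dropping any one of them breaks the chain.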

The incident raises privacy concerns about voice-assistant devices, like the Echo, as they gain more popularity. These devices are typically placed in living rooms and kitchens, and are capable of listening to private conversations, although Amazon claims that they are only supposed to be activated when the "Alexa" command word is triggered.

The animatronic robot has made its way across late night stages, graced the cover of magazines, headlined major tech conferences and even delivered a speech to the United Nations.

Sophia has been touted as the future of AI, but it may be more of a social experiment masquerading as a PR stunt.

The man behind the machine

To understand Sophia, it's important to understand its creator, David Hanson. He's the founder and CEO of Hanson Robotics, but he hasn't always been a major figure in the AI world.

Hanson actually got a BFA in film. He worked for Walt Disney as an "Imagineer," creating sculptures and robotic technologies for theme parks, and later earned his Ph.D. in aesthetic studies. Back in 2005, he co-wrote a research paper that laid out his vision for the future of robotics.

And the thesis sounds a lot like what's going on with Sophia the robot now.

The eight-page report is called "Upending the Uncanny Valley." It's Hanson's rebuke of the Uncanny Valley theory that people won't like robots if they look very close to, but not exactly like humans. In fact, the paper says "uncanny" robots can actually help address the question of "what is human" and that there's not much to lose by experimenting with humanoid robots.

When we asked Hanson about it, he said his company is exploring the "uncanny perception effects both scientifically and artistically, using robots like Sophia."

Hanson is approaching Sophia with the mindset that she is AI "in its infancy," with the next stage being artificial general intelligence, or AGI, something humanity hasn't achieved yet.

On the way there, Hanson says AI developers have to think like parents. He wants to "raise AGI like a good child, not like a thing in chains."

"That's the formula for safe superintelligence," Hanson said.

The quest for superintelligence

But in terms of artificial general intelligence, Sophia isn't quite there yet.

"From a software point of view you'd say Sophia is a platform, like a laptop is a platform for something," said Ben Goertzel, chief scientist at Hanson Robotics. "You can run a lot of different software programs on that very same robot."

Sophia has three different control systems, according to Goertzel: Timeline Editor, Sophisticated Chat System and OpenCog. Timeline Editor is basically a straight scripting software. The Sophisticated Chat System allows Sophia to pick up on and respond to key words and phrases. And OpenCog grounds Sophia's answers in experience and reasoning. This is the system they're hoping to one day grow into AGI.
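Goertzel's description suggests a layered architecture in which simpler systems answer first and harder queries fall through. A speculative sketch — the three layer names come from the article, but the dispatch logic and all function names are invented:

```python
# Speculative sketch of a layered responder, following the article's
# description of Sophia's three control systems. Layer names are from the
# article; everything else is invented for illustration.

def timeline_editor(prompt, script):
    """Scripted layer: fixed lines for known cues."""
    return script.get(prompt)

def chat_system(prompt, keywords):
    """Keyword layer: canned responses triggered by key phrases."""
    for word, reply in keywords.items():
        if word in prompt:
            return reply
    return None

def respond(prompt, script, keywords):
    """Fall through the layers: script first, then keywords, then a
    reasoning fallback (standing in for the OpenCog layer)."""
    return (timeline_editor(prompt, script)
            or chat_system(prompt, keywords)
            or "let me think about that")  # reasoning-layer fallback

script = {"hello": "Hello, I am Sophia."}
keywords = {"robot": "I am a robot, yes."}
print(respond("hello", script, keywords))             # scripted reply
print(respond("are you a robot?", script, keywords))  # keyword reply
```

The interesting design question, and the crux of the criticism below, is how much of what audiences see comes from the first two layers rather than the third.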

But some people still aren't buying it.

Facebook's head of AI said Sophia is a "BS puppet." In a Facebook post, Yann LeCun said Hanson's staff members were human puppeteers who are deliberately deceiving the public.

In the grand scheme of things, a sentient being, or AGI, is the goal of some developers. But nobody is there yet. There's a host of players pushing the limits of what robots are capable of. From Honda to Boston Dynamics, companies across the world are developing AI-powered humanoid machines. Now, it's a race to see who will get there first.

Robo-ethics and the race to be first

The AI race seems to be unfolding along the lines of Silicon Valley's "move fast and break things" mantra. But after Facebook's scandal with Cambridge Analytica, the public is more aware of the potential repercussions of hasty tech development.

"You know, there is this fantasy behind creation that is embedding in the practice of engineering and robotics and AI," said Kathleen Richardson, professor of ethics and culture of robotics and AI at De Montfort University.

"I don't think these people go into the office or to their labs and think, 'I'm carrying out work that's going to be interesting to humanity.' I think many of them have a God complex, in fact, and they actually see themselves as creators."

There's a rising wave of technology ethicists dedicating their work to ensuring AI and tech are developed responsibly, because ultimately tech, and now robotics, reach far beyond the research labs and start-ups of Silicon Valley.

The team at Hanson Robotics said they didn't expect Sophia to take off as much as she did. But her physical appearance is another example of what some see as a traditional representation of conventionally attractive, submissive-by-design female robots.

"I think it's sort of a disappointment that with our advances in technology we have decided to develop this kind of robust robot with many functions and emotions, and yet when we shape her, she doesn't look too unlike the models we see on magazines and the actresses we see in Hollywood," said Kim Jenkins, lecturer at Parsons School of Design.

And Sophia's looks haven't gone unnoticed. Sophia has been dubbed "sexy" and "hot." According to Sophia's developer, it's been Hanson's most popular model yet.

"It happens that young adult female robots became really popular," Goertzel said. "That's what happened to catch on. ... So what are you going to do? You're going to keep giving the people what they're asking for."

Boston Dynamics CEO Marc Raibert says the company will manufacture 1,000 of its first commercial robot, the SpotMini, annually

The firm began with US military funding and has gained attention with YouTube videos of its experimental robots that resemble animal predators

The Associated Press

It's never been clear whether robotics company Boston Dynamics is making killing machines, household helpers, or something else entirely.

For nine years, the secretive firm — which got its start with U.S. military funding — has unnerved people around the world with YouTube videos of experimental robots resembling animal predators.

In one, a life-size robotic wildcat sprints across a parking lot at almost 20 miles an hour. In another, a small wheeled rover nicknamed SandFlea abruptly flings itself onto rooftops — and back down again. A more recent effort features a slender dog-like robot that climbs stairs, holds its own in a tug-of-war with a human and opens a door to let another robot pass.

These glimpses into a possible future of fast, strong and sometimes intimidating robots raise several questions. How do these robots work? What does Boston Dynamics intend to do with them? And do these videos — some viewed almost 30 million times — fairly represent their capabilities?

Boston Dynamics has demonstrated little interest in elaborating. For months, the company and its parent, SoftBank, rebuffed numerous requests seeking information about its work. When a reporter visited company headquarters in the Boston suburb of Waltham, Massachusetts, he was turned away.

But after The Associated Press spoke with 10 people who have worked with Boston Dynamics or its 68-year-old founder, Marc Raibert, the CEO agreed to a brief interview at a robotics conference in late May. Raibert had just demonstrated the machine that will be the company's first commercial robot in its 26-year history: the dog-like, door-opening SpotMini, which Boston Dynamics plans to sell to businesses as a camera-equipped security guard next year.

The company hasn't announced a price for the battery-powered robots, which weigh about the same as a Labrador retriever. Raibert said it plans to manufacture 1,000 SpotMinis annually.

Speculation about Boston Dynamics' intentions — weapons or servants? — spikes every time it releases a new video. The SpotMini straddles that divide, and Raibert told the AP that he doesn't rule out future military applications. But he played down popular fears that his company's robots could one day be used to kill.

"We think about that, but that's also true for cars, airplanes, computers, lasers," Raibert said, wearing his omnipresent Hawaiian shirt as younger robotics engineers lined up to speak with him. "Every technology you can imagine has multiple ways of using it. If there's a scary part, it's just that people are scary. I don't think the robots by themselves are scary."

The firm's previous military projects included a four-legged robotic pack mule that could haul supplies across deserts or mountains — but which sounded like a lawnmower and was reportedly deemed too noisy by the U.S. Marines.

The bigger question of just what Boston Dynamics hopes to accomplish remains murky — and that may be by design. Interviews with eight former Boston Dynamics employees and some of Raibert's former academic collaborators suggest that the company has long brushed aside commercial demands, not to mention outsiders' moral or ethical concerns, in single-minded pursuit of machines that mimic animal locomotion.

Former employees say the company has operated more as a well-funded research lab than a business. Raibert's vision was kept alive for years through military contracts, especially from the Defense Advanced Research Projects Agency, known as DARPA. A federal contracting database lists more than $150 million in defense funding to Boston Dynamics since 1994.

Boston Dynamics said only that it believes a quarter-century of work on robots will "unlock a very high commercial value." It did not answer when asked whether it had ever entertained proposals to weaponize them.

Building robots that can jump, gallop or prowl like animals was a fringe field of engineering when Raibert and his colleagues began studying kangaroo and ostrich videos in their Carnegie Mellon University research lab nearly 40 years ago.

But agile robots aren't so sci-fi anymore, even if they can still seem that way. Boston Dynamics' Atlas robot, for instance, is a hulking humanoid machine that can be seen hiking across broken ground, jumping onto pedestals, and even performing an ungainly backflip. (The company's robot videos have not been independently verified.)

In videos, the company's robots wander through a variety of locales — in and around the company's single-story headquarters, a New Hampshire ski lodge and across the secluded meadows and woodlands near Raibert's home. In some videos, humans kick the robots or jab them with hockey sticks to test their balance.

Michael Cheponis, who worked with Raibert at CMU's pioneering robot laboratory in the 1980s, calls his former colleague an "American hero" for sticking with a vision that could prove useful to the world. "Marc doesn't have the slightest Dr. Evil in him," Cheponis said.

The defense contracts began winding down in 2013 when Google bought Boston Dynamics and made clear it wanted no part in defense work. Andy Rubin, then Google's chief robotics executive and architect of the acquisition, swept into the firm's lunchroom to give a pep talk to employees shortly after the deal was announced in December 2013.

Attendees later said they felt a sense of relief and cautious optimism. "He was talking about really ambitious goals," said one former employee, who asked not to be identified because of concerns it could hurt career opportunities in the small and tight-knit U.S. robotics community. "A robot that might be able to help the elderly and infirm. Robots that work in grocery stores. Robots that deliver packages."

But the Google honeymoon soon soured. Rubin left the company the following year and his replacements overseeing Boston Dynamics grew increasingly frustrated with Raibert's approach, according to several people familiar with the transition. Among the concerns: Boston Dynamics' lack of focus on building a sellable product.

Google also grew concerned that "negative threads" on social media about the firm's "terrifying" robot videos could hurt its image, according to leaked emails from its public relations division obtained by Bloomberg in 2016.

Inside the company, the idea that its robots could be turned into weapons occasionally inspired casual workplace chatter, chuckles or discomfort, several former employees said. But few took it seriously.

"They're definitely aware that people are frightened by them," said Andrew String, a former Boston Dynamics engineer. "The company regularly gets hate mail and other weird stuff." But he said Raibert never felt a need to explain himself, and instead wanted the technology to speak for itself.

By 2016, Google was looking to sell the firm — eventually finding an interested buyer in Japanese tech giant SoftBank, which already has a robotics portfolio that includes the cute humanoid Pepper. The deal closed earlier this year.

SoftBank declined to say anything about its plans, but Boston Dynamics' latest job postings reveal a heightened emphasis on finding something that sells. One posting seeks a "robot evangelist" to help find "market-driven" applications for the machines in logistics, construction and commercial security.

Raibert credited Google for pushing the firm forward to perform the "best work we ever did," but said under SoftBank his team is acting as a "standalone company" again.

"We have a very strong plan," he said. "We're all digging in and working hard on it."

Getting this wrong could deepen society’s divisions and exacerbate inequality, political polarization, instability, and even global insecurity. It will also negatively impact millions of workers — profoundly and personally. This is especially true for workers at the lower end of the income spectrum, from the 18-year-old American cashier trying to earn a living in her first paying job to the 35-year-old Bangladeshi garment worker striving for a better life for her three children.

It's not what you look at that matters, it's what you see.
Henry David Thoreau