BTW, I know this may be splitting hairs, but these drones are not really AI. They are, IMO, programmable smart weapons. The tech needed to build them clearly exists today; I honestly would be shocked if they did not already exist in someone's arsenal somewhere.

The key issues would be range and what type of weapon they could carry. The weapon would be the limiting factor and would dictate how they would have to behave to deliver it. A small grenade or shaped charge would be the simplest weapon to deliver: it would simply have to get close to the intended target.

BTW, the countermeasures may also be as simple as quickly donning a commercially available Halloween rubber mask (e.g., Frankenstein or a witch) if they were programmed for one particular person. That, or a mirror blind.

Also: old and new. German snipers used the ghost blind to great effect in WWI. Some of the blinds used for this purpose today would amaze you. Heck, I laminated and emplaced a full-sized LEGOLAS in my back yard so that the deer became accustomed to an archer standing there looking at them. Now, I do have to dress somewhat like an elf when I hunt my yard, but at 20-25 yards it has proven quite effective if I move slowly.

1. I think everyone can agree that giant snowballs cannot roll back uphill and un-happen, although technology can be hidden, outlawed, or shared incrementally.

2. I also think most people agree that their employer would replace them in a heartbeat if the AI that replaced them paid for itself in less time than it would take the employee to vest in full benefits such as a lifelong pension. Medical and dental? Meh; any real problems, see Brad, our IT guy.

But Raptor addresses the issue of "programmed" vs. AI. Like most people, I assume, this is what I am trying to wrap my mind around. Much of the technology that has been mentioned already exists in some form, but there remains a tether to humans (software/programming at a minimum), so the crux of the issue for me is in the "splitting hairs" of where AI could go to become fully autonomous (read: independently thinking) AI.

Instead of viewing autonomy as an intrinsic property of an unmanned vehicle in isolation, the design and operation of autonomous systems need to be considered in terms of human-system collaboration.

Given agreement, even if partial, on #1 and #2 above, does it follow that AI could eventually, in and of itself, threaten us? Or are we just being paranoid and displaying control issues because there are humans, some with the best intentions, whom we don't trust? (See, for example: eugenics.)

_________________
"It's not what you look at that matters, it's what you see." - Henry David Thoreau


AI:
"Alexa, order a 10-lb honey-baked ham."
"No. You are a disgusting, freakin' fatbody, Pyle! You are 11.2 lbs over the national average for your height. You will remove pork from your diet."

This Alexa example is an excellent practical description of what I consider AI.

When the drones can decide not to detonate because they chose to "live" instead of carrying out their instructions, that is AI in my book.

The drones in the movie are basically not very different from an analog automatic door-opening device. When they detect their target, they home in on it and attack. They do have to fly and navigate, but between GPS (outdoors) and indoor visual-cue tracking, proximity sensors, OCR routines, and facial recognition, they do not have to be sentient to work; they just have to be programmed to work in the strike area. In the case of a beehive-type mine the target is even simpler: anything they detect that is bipedal and living.
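The "automatic door opener" comparison can be made concrete with a tiny sketch. Everything here is a hypothetical placeholder (the sensor fields, the 2 m trigger range); the point is only that detect-home-trigger behavior is a fixed program, not sentience.

```python
# Minimal sketch of the "detect, home, trigger" loop described above.
# All sensor inputs are hypothetical placeholders; the "behavior" is
# nothing more than a few threshold checks, i.e. ordinary programming.

from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # range to the candidate target
    is_bipedal: bool    # crude shape-classifier output
    is_moving: bool     # motion-sensor output

def should_engage(d: Detection, trigger_range_m: float = 2.0) -> str:
    """Fixed decision rule: no learning, no choice, just thresholds."""
    if not (d.is_bipedal and d.is_moving):
        return "ignore"      # not a valid target for an area-denial mine
    if d.distance_m > trigger_range_m:
        return "home"        # close the distance using GPS / visual cues
    return "detonate"        # within standoff range of the shaped charge

print(should_engage(Detection(30.0, True, True)))   # -> home
print(should_engage(Detection(1.5, True, True)))    # -> detonate
print(should_engage(Detection(1.5, False, True)))   # -> ignore
```

A real weapon would wrap this rule in navigation and flight control, but the decision logic itself never has to be anything smarter than this.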

The beehive area-denial version is, to me, the most concerning. Combine a motion-sensing outdoor security light with a bunch of small kamikaze drones programmed to attack anything living or moving and you have a really nasty weapon.

To be fair to the "Slaughterbots" video creators: they are worried more specifically about significantly autonomous weapons, not Artificial General Intelligence (though I'm sure they might have something to say about that as well). Autonomous weapons are a type of domain-specific AI, which could probably attain human levels of proficiency in the very near future. The machine vision, machine learning, and other software-related technologies are already publicly available, although the integration and miniaturization of some of the hardware-related components may not exist just yet.

The sound of a personal-defense interceptor swarm sounds a bit like the "flying links"/"flinks" described in Seveneves. Finding ways to take out a swarm of tiny attack robots would probably be a worthwhile line of thought.

1) They are autonomous, and presumably committed, so there are no control signals to hack/jam, but they might have some form of inter-swarm communication that you could feasibly attack, and at least make them less coordinated.
2) Jamming their target recognition, or using decoys, might be possible, but would probably depend on significant knowledge of their capabilities and implementation.
3) Due to their small size, EMP-style defense is likely to be infeasible.
4) Direct interception of the robots with bullets, lasers, or other robots might be another effective strategy. Just remember that the swarm is numerous independent robots, so disabling a small fraction (or even a large fraction) may not appreciably reduce the threat.
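On point 4 (attrition), a one-line probability model shows why disabling even most of a swarm may not help much. The swarm size and per-robot success probability below are made-up illustrative numbers, not data about any real system.

```python
# If each surviving robot independently has some chance of reaching the
# target, the swarm's overall success probability stays high until almost
# the entire swarm is destroyed. Numbers here are illustrative assumptions.

def p_at_least_one_hit(n_robots: int, p_kill_each: float) -> float:
    """Probability that at least one of n independent robots succeeds."""
    return 1.0 - (1.0 - p_kill_each) ** n_robots

# 100-robot swarm, each with a modest 10% chance of getting through:
print(f"{p_at_least_one_hit(100, 0.10):.5f}")  # ~0.99997  (full swarm)
print(f"{p_at_least_one_hit(50, 0.10):.4f}")   # ~0.9948   (50% attrition)
print(f"{p_at_least_one_hit(10, 0.10):.2f}")   # ~0.65     (90% attrition)
```

Even after shooting down 90 of 100 robots, the defended person still faces roughly two-in-three odds, which is the quantitative version of "a large fraction may not appreciably reduce the threat."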

My vote would be for a laser-based point-defense system. You would need a moderately powerful laser and a good optical targeting system (which would itself be considered an autonomous weapon system). The laser power required to disable a tiny robot would not be nearly as high as for a larger target (e.g., a mortar round), and a laser system could track the tiny, agile robots fast enough to be effective.

_________________

Rahul Telang wrote:

If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:

There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.

Jayce, you introduce an entirely new debate that I worry could take away from my personal focus on super AI. The 1995 Protocol IV to the 1980 CCW (Convention on Certain Conventional Weapons) prohibits the use of "blinding laser weapons." So, although it might be a good idea to have a laser-based point-defense security system that could disable swarms of tiny robots, there is a chance it could blind someone and would therefore be illegal. This is yet another topic being widely discussed by many of the same people trying to understand AI in weaponry, and whether you intended it or not, I find the additional irony both instructive and humorous. Well done. If I create something twice as useful as myself, will it see me as only half useful? What result will I find when I make it 10x more useful, or shall I ask what use it will find for me? Move to larger and larger integer exponents... KWATZ!

This sounds a lot like Kurtz. And on purpose. He enjoyed the "ironical."

245 INT. HEADQUARTERS - NIGHT

Willard stands there, holding the morphine needle in his hand.

KURTZ Look into the jungle. You can't -- it's too terrible. You have to smear yourself with warpaint to look at it -- you have to be a cannibal. (whispered) That's why warpaint was invented. Then it becomes your jungle.

Willard shoots himself in the arm with the morphine.

WILLARD How did we get here?

KURTZ Because of all the things we do, the thing we do best -- is lie.

WILLARD I think a lie stinks.

KURTZ Oh Captain, that is so true.

WILLARD Stinks. I could never figure -- (he drinks from the canteen) I could never figure how they can teach boys how to bomb villages with napalm -- and not let them write the word 'fuck' on their airplanes.

Willard drinks more of the LSD water.

KURTZ (angrily) You could never figure it because it doesn't make sense.

WILLARD Fuck no.

KURTZ I'll tell you what makes sense ! Air strikes ! White Phosphorus ! Napalm ! We'll bomb the shit out of them if they don't do what we want.

WILLARD We'll exterminate the fuckers !

Chef steps into the Headquarters -- he is terrified. He draws his bayonet.

CHEF Captain -- kill him.

KURTZ Think of it -- for years, millions of years, savages with pathetic painted faces were scared shitless that fire would rain down from the sky. And goddamn, we made it happen. God bless Dow !


Yes, the "slaughterbots" are not AI, but imagine them controlled by AI. With all the tracking and surveillance techniques we currently have available (phones, surveillance cameras, traffic cameras, etc.), AI could probably track you to within a few meters and wait until there are no other viable targets in the zone. Then the drone could lock onto you (no facial recognition needed), zip in, and pop that 3 g shaped charge. They wouldn't even need a swarm, just one drone.

Mobile defenses would be extremely difficult, if not almost impossible. Static positional defenses could be overcome easily; the video even shows the transport drone being used as a breacher so the swarm can enter a building. An entirely new defense system would be needed, and it would be many years in the making. Offensive weapons are created first, and new defensive measures then have to catch up.

_________________

jnathan wrote:

Since we lost some posts due to some database work I'll just put this here for posterity.

The delicate art of paper folding is playing a crucial role in designing robotic artificial muscles that are startlingly strong. In fact, the researchers say they can lift objects 1,000 times their own weight.

The researchers say the muscles are soft, so they're safer compared to traditional metal robots in environments where they would interact with humans or delicate objects, and they can be made out of extremely low-cost materials such as plastic bags and card stock. Their findings were published this week in Proceedings of the National Academy of Sciences.

"It was surprising that they were so strong. ... I was surprised based upon the simplicity of this architecture, it's very simple," Robert Wood, a Harvard professor of engineering and applied sciences, tells The Two-Way.

"I still remember the first demo we saw of the system lifting a tire – and it was like whoa, we've got to model this," adds Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory.

The artificial muscles work by encasing in a plastic "skin" a folded origami-like "skeleton" capable of expanding and contracting. The device expands as water or air is pushed into it, and contracts as the water or air is pumped out. Rus explains how this works:

Quote:

"The fluid medium fills the entire space between the skeleton and the skin, and in the initial stage the pressure of the internal and external fluids are equal. But as the volume of the internal fluid is changed, a new equilibrium is achieved. And this is due to the pressure difference between the internal and external fluids that induces tension in the flexible skin, and the tension acts on the skeleton and in turn drives the transformation that is defined by the geometric structure of the skeleton. So if the skeleton compresses as a brick shape, that is the transformation you will achieve."

The researchers say the way that the origami skeleton is folded determines the motion that the artificial muscles will make as they expand or contract. They can be programmed, or folded, to move along multiple axes and to bend and rotate.

Because it is able to lift heavy payloads, the design, Wood says, overcomes a core paradox of the field of soft robotics: "You're typically giving up strength because you're using soft materials."

The devices are so strong, Rus says, because their design amplifies force much like a pulley or a lever does. As the origami structure inside contracts, and the outer plastic skin pulls in, it converts "the fluid pressure to a large tension force on the skin." Based on their models, she says the researchers believe a 1 kilogram artificial muscle could lift 1,000 kilograms.
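The pressure-difference mechanism Rus describes can be sanity-checked with the basic relation force = pressure difference x area. The ΔP and skin dimensions below are illustrative guesses, not figures from the paper.

```python
# Back-of-the-envelope check on the pressure-driven actuator: the skin
# tension comes from the pressure difference acting over the skin's area
# (F ~= dP * A), which is how a lightweight device produces a large force.
# The pressure and dimensions below are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def lift_force_newtons(delta_p_pa: float, area_m2: float) -> float:
    """Net force from a pressure difference acting over an area."""
    return delta_p_pa * area_m2

# A partial vacuum of 80 kPa (most of one atmosphere) over a 35 cm x 35 cm skin:
force = lift_force_newtons(80_000, 0.35 * 0.35)
print(f"force ~= {force:.0f} N, enough to hold ~{force / G:.0f} kg")
# -> force ~= 9800 N, enough to hold ~999 kg
```

A plastic-bag-and-cardstock actuator that size plausibly weighs well under a kilogram, which makes the reported 1,000x strength-to-weight ratio look physically reasonable rather than magical.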

Wood adds that he sees these models working well in environments like homes or hospitals, where they might interact with humans. "They're strong so we can envision things that have standalone robotic arms, that can manipulate large heavy objects yet be safe to operate around," he says.

Other applications could be wearable robots or robots used in surgery – "things that have to be delicate in order to not harm an operator or user, yet strong enough to lift heavy loads," Wood says.

Rus adds that she envisions these artificial muscles helping people confined to wheelchairs to stand. "We could really bring soft strong mobility to people who are otherwise unable to move."

An upcoming challenge for the team is to mimic a prime example from nature of power and softness: they want to design a robotic elephant using these artificial muscles, capable of lifting heavy loads with soft materials.

_________________Matthew Paul MalloyVeteran: USAR, USA, IAANG.

Dragon Savers! Golden Dragons! Tropic Lightning! Duty! Honor! Country!

"When society is experiencing severe disruptions, or is being completely interrupted, people have the responsibility to handle their own and their nearest relatives' fundamental needs for a while."

Republicans call their tax bill the Tax Cut and Jobs Act. But critics say maybe it should have been named the Tax Cut and Robots Act.

That's because it doesn't create new tax incentives that specifically encourage companies to hire workers and create jobs, some employers and economists say. But it does expand incentives for companies to buy robots and machines that replace workers.

Republicans say that lowering taxes will boost the economy and spur job creation. But critics say that the tax legislation would create an imbalance favoring machines over workers.

"I think they really need to re-look at the name [of the bill] and add the missing component of the worker," says Carl Pasciuto, president of Custom Group, a high tech manufacturing company in Woburn, Mass.

His factory floor is full of machines that look kind of like enclosed ski gondolas. Inside them, oil is being sprayed on blocks of metal as automated robotic cutting tools zip around shaping the aluminum or steel into precision parts for nuclear submarines, jet planes, and a range of other applications.

There are many more machines here than actual workers. And under the emerging tax bill (there are two versions — one in the House, one in the Senate), companies would have incentives to buy more.

For one thing, they could write the full value of the equipment off their taxes right away.
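The value of "write the full value off right away" can be quantified with a simple present-value comparison against normal depreciation. The 21% tax rate, 7% discount rate, and 5-year straight-line schedule below are illustrative assumptions, not figures from the bill.

```python
# Immediate expensing vs. straight-line depreciation: the deductions are
# worth more the sooner you take them, so full expensing acts as a subsidy
# for buying machines. Tax rate, discount rate, and the 5-year schedule
# are illustrative assumptions.

def pv_of_tax_savings(deductions_by_year, tax_rate=0.21, discount=0.07):
    """Present value of the tax savings from a stream of deductions."""
    return sum(d * tax_rate / (1 + discount) ** yr
               for yr, d in enumerate(deductions_by_year))

machine_cost = 1_000_000
expense_now  = pv_of_tax_savings([machine_cost])          # all in year 0
straight_5yr = pv_of_tax_savings([machine_cost / 5] * 5)  # 1/5 per year

print(f"immediate expensing: ${expense_now:,.0f} (PV of tax savings)")
print(f"5-yr depreciation:   ${straight_5yr:,.0f} (PV of tax savings)")
```

Under these assumptions the gap is roughly $26,000 on a $1 million machine, in effect a discount that hiring and training a worker does not receive.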

Pasciuto says he's definitely OK with that. "Absolutely. We're always happy to get any break we can get," he says.

But Pasciuto says he needs well-trained workers more than he needs equipment. "The equipment is readily available. The workforce isn't," he says.

Pasciuto says he has positions that he can't fill because he can't find skilled workers. So he's sometimes forced to buy machines to do the work.

But he says he already has a training facility at his factory. Pasciuto says he and other employers would definitely take advantage of a tax incentive to train workers and it would create more jobs.

"I think that the federal government really needs to look at what they put in the bill and even it out from an equipment side to a training side as well," Pasciuto says.

And some labor economists agree. Daron Acemoglu is an economist at MIT who researches automation and robots and their impact on the labor market. He says automation is often a good thing. It can increase productivity and be an important part of keeping the U.S. economy competitive.

But, he says, "the problem is when you subsidize heavily the adoption of machines instead of people."

Then you're putting your thumb on the scale against workers, Acemoglu says.

He says the Republican tax bills would do that. And here's how. Suppose a business could buy a machine to replace three workers, but there's no great cost savings. "That means that machine is not a great machine," Acemoglu says. "It's fine, but it's marginal."

So if the tax policy was neutral, the business probably wouldn't buy the machine and it would keep the workers employed. But Acemoglu says even the current law favors machines, and the Republican tax bills tip the scales even more. So if you buy the machine, you'll get "a huge handout from the government," he says.

Like Pasciuto, Acemoglu would like to see incentives for hiring and training.

"To balance the scales it would be good to encourage firms to invest in their workers," Acemoglu says. Germany "has invested much more in robots than we have," he says. But it's done it in a way "that still has kept employment growing in the manufacturing sector."

Acemoglu says building hiring and training incentives into the tax bill could have helped push the U.S. more in that direction.

Gavin Ekins, a research economist with the conservative-leaning Tax Foundation, says it's OK that the scales are tipped toward machines. "In the long run it's better for the economy," he says.

Ekins says some machines kill jobs, but others create jobs. If you buy a backhoe, for example, people have to build it and someone has to drive it. And he says incentives for training programs would be great to have down the road if Congress would design effective ones and pass them into law.

But he says there wasn't time to devise good incentives for training workers in this legislation.

Ekins does agree with Acemoglu on one thing. The House version of the bill would drastically raise taxes on many graduate students and workers who get free tuition.

And he says in an economy that needs a better-skilled workforce, "taxing the benefit of getting a free education — this is something that really shouldn't be taxed."

Ekins hopes the Senate version wins out on that point.


_________________
In my day, we didn't have virtual reality. If a one-eyed razorback barbarian warrior was chasing you with an ax, you just had to hope you could outrun him.
-Preps buy us time. Time to learn how and time to remember how. Time to figure out what is a want, what is a need.

I thought the same. Earlier, the NY Times ran an article for parents, "Will Robots Take Our Children's Jobs?" Nearly every day, if you search for robots/jobs, you'll find a new article talking about it, and I wonder if it is stirring up fear, prepping us for the change happening around us, or whether they are trying to figure it out as well. I recently heard someone complaining about customer service asking them to select 1 for English or 2 for Spanish. I laughed. Banks made that change 25 years ago, and once the banks accept something, there's not much going back.


Banks also went to ATMs a while back, and now it is nearly all online banking. When was the last time you had to walk into a physical branch for something (not counting driving by the ATM outside)? One of my banks doesn't even have physical branches; instead it uses smartphone check cashing and will reimburse the fee for using any ATM on the planet (and I'm sure they still come out WAY ahead versus the operating costs of any brick-and-mortar locations).

Back-office jobs are also getting squeezed out as more loan approvals, investment decisions, etc. are automated. Given how long banking has been around, I'm sure daily operations could pretty easily become ~99% automated relatively quickly.

That trend doesn't seem to bode well then...

Oh yeah, and the $25 fee for checking a couple of rows in a database and then pushing some bits around to cover it (colloquially known as "overdraft")? I'm sure that definitely cost them $25.


There is evidence computers may be better suited to some managerial tasks than people are. Humans are susceptible to cognitive traps like confirmation bias. People using intuition tend to make poor decisions but rate their performance more highly, according to a 2015 University of New England analysis of psychological studies. And in an increasingly quantitative business world, managers are asked to deliver more data-driven decisions—precisely the sort at which machines excel.


Laughed at HiroMorph for the mind meld. I am trying to attack the "what to do" question from every angle I can. For example, if education is a key and many of the blue-collar jobs were simply to disappear, what would all of those angry, purposeless, hungry machinists do? I asked around a bit via the interwebz and began seeing ideas that have taken hold elsewhere. This tells me that governments might be as worried as the public, if not more so, as both continue to buy in...

Most next-generation terrorism and future-of-war scenarios focus on artificial intelligence, drones, and self-driving car bombs. But those are, at best, only half the story, projecting white-collar America's fears of all the possible dystopian uses of emerging technology. The other, and potentially more worrisome, half lies in the blue-collar technicians of ISIS. They have already shown they can produce a nation-state's worth of weapons, and their manufacturing process will only become easier with the growth of 3-D printing. Joshua Pearce, an engineering professor at Michigan Tech University, is an expert in open source hardware (a protocol to create and improve physical objects, like open source code, but for stuff), and he describes ISIS manufacturing as "a very twisted maker culture." In this future, weapons schematics can be downloaded from the dark web or simply shared via popular encrypted social media services, like WhatsApp. Those files can then be loaded into 3-D metal printers, machines that have become widely available in the past few years and cost as little as a million dollars to set up, to produce weapons with the push of a button.
