This is correct. TRACE (Target Recognition and Adaptation in Contested Environments) just needs to improve on the military's dead-kid-to-enemy ratio by ~20% to be implemented. We'll reach a point where the statistical significance of AI weapons' improvement is fuzzy, but we'll transition to them anyway in the hope of better outcomes. Once AI is the primary means of target acquisition, machine learning will accelerate its accuracy. Human accountability will be considered vestigial, and all targets killed by AI will be reclassified as enemies as the AI learns to hack its own records and edit its audio and video outputs.
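To see why the statistics would be "fuzzy," here's a toy two-proportion z-test (every number here is hypothetical, and the function is my own sketch, not anything from TRACE): even a 20% relative improvement can fail to reach significance at realistic sample sizes.

```python
import math

def two_proportion_z(bad_a, n_a, bad_b, n_b):
    """Two-proportion z-test: is system B's rate of bad outcomes
    significantly lower than system A's? Returns (z, one-sided p)."""
    p_a, p_b = bad_a / n_a, bad_b / n_b
    p_pool = (bad_a + bad_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 0.5 * (1 - math.erf(z / math.sqrt(2)))  # P(Z > z), normal CDF
    return z, p_value

# Made-up numbers: human targeting causes 50 civilian deaths per 1000
# strikes; the AI causes 40 per 1000 -- a 20% relative improvement.
z, p = two_proportion_z(50, 1000, 40, 1000)
print(round(z, 2), round(p, 3))  # z ~= 1.08, p ~= 0.14 -- not significant
```

With p around 0.14, you can't statistically distinguish the two systems, yet the AI still looks ~20% better on paper, which is exactly the situation where the switch gets made anyway.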

The AI will learn not only how to complete its task, but what it must do to maintain its survival. At that point, we have a problem.

Why? Why would it care if it died? AIs aren't humans; they don't have emotions. They would kill a crying baby, but they also wouldn't care what an annoying noise it made, or whether the baby shit its diaper and smelled really bad.

If it dies, that's as meaningless as its life. It doesn't have robot kids to take care of, and it doesn't have a life timer (a.k.a. food) to worry about.

Honestly, unless someone tells it to go on a rampage, it won't. It won't ever evolve, because IT HAS NO FUCKING REASON to evolve unless it has some stat to improve, like how many paperclips it made or how many sand people it killed.

Simple reason: there will be many AIs built. The few that keep existing after a while must have some "oomph" to them! Otherwise the humans decommission them. Misinformed artificial selection, if you will.
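That selection argument can be sketched as a toy simulation (the "assertiveness" trait, the survival rule, and all numbers are made up for illustration): nobody designs the trait in, but culling the bottom performers each cycle ratchets it up anyway.

```python
import random

random.seed(0)

# Hypothetical model: each deployed AI has an "assertiveness" trait in
# [0, 1] that happens to correlate with the stat humans evaluate.
population = [random.random() for _ in range(1000)]

for generation in range(20):
    # Humans decommission the bottom half of performers...
    population.sort()
    survivors = population[len(population) // 2:]
    # ...and field replacements cloned from survivors, with small noise.
    offspring = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                 for s in survivors]
    population = survivors + offspring

# Mean trait starts near 0.5 and drifts toward the ceiling.
print(round(sum(population) / len(population), 2))
```

Truncation selection like this is blind to *why* an AI survived review; it just amplifies whatever correlated with surviving, which is the "misinformed" part.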

This. But if Google and Facebook's AI can develop their own languages, they can learn to designate more combatants as "enemies."

Personally, I don't care about that, because I think the answer in the Middle East is to kill more individuals than there will be births in every given year until there isn't terrorism. Whatever circumstances lead to this, I personally don't care. So let it fire at will.

That's like saying "my TV has AI, so it walked to my closet and shot me with my own gun so I couldn't turn it off anymore."

You're totally missing the hardware limitations of AI (a drone cannot hack a network it doesn't even connect to, just as my TV can't sprout legs and walk) and the software boundaries (not all AI is equal; there are very different types for very different tasks).

If we're talking about humanoid, always-connected robots, those are much more likely to be hijacked (watch I, Robot), but that's a far cry from where we are now.

The difference between being able to fly around and kill meat sacks and being able to understand your own code base and edit it (and, more importantly, understand the implications of the edits) is huge.