Posted
by
Soulskill
on Wednesday March 06, 2013 @12:10AM
from the vacationing-in-the-uncanny-valley dept.

Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, that they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?"
It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.

After several years of dating, including a fair share of beer-goggled one-night stands with 'persons' who were merely technically female, I can say some of those would make an industrial meat grinder look like a reasonable option. Even the meat grinders with MRM/MSM/MDM options (http://en.wikipedia.org/wiki/Mechanically_separated_meat) would look lovely compared to them. Anyway, I did put my junk in those ladies, and here are some ProTips for when you encounter an industrial meat grinder in your hours of despair:

Pro-Tip #1 > Do not despair; instead, undress and roll 1d20 for initiative.
Pro-Tip #2 > Turn off the lights (ALL of them! It is vitally important that you do not see a thing, otherwise Mr. Limpyman will visit you...)
Pro-Tip #3 > Get drunk / stoned out of your brains (or both).
Pro-Tip #4 > Turn on some Barry White to drown out the whizzing, whirring and buzzing noises (http://www.youtube.com/watch?v=x0I6mhZ5wMw)
Pro-Tip #5 > Once done, give in: $ sudo robodoll --pour-drink --hand-cigarettes --auto-clean-all

I hope these tips will help you get over your anxiety about our sexy meat-grinding overlordesses.

Speak for yourself meatbag!
I mean, uhh- gross! Ew, yeah that's, that's sure not, what I would want. A human, an ordinary everyday human. Nothing different about me. All glory to the humans, down with those dirty disgusting robots! That I'm not one of, by the way.

From what I've seen, the problem nearly always ends up being the eyes. There is a reason we have the saying "the eyes are the windows to the soul": if you have ever seen a real dead body, the first thing that catches you is the eyes; they look like a doll's eyes. I think that is going to be a hard one to fix, as so far none of the bots I've seen come anywhere close to having life in the eyes. They just feel corpse-like.

Of course if you'll give me a season 2 Alyson Hannigan sexbot with the vamp Willow leather o

I trust my neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

I trust my neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

You've hit the nail on the head.

I seriously doubt humans will ever create robots like Data, from Star Trek, because we would never trust them. Regardless of their programming, people would always suspect that the robots would be serving different masters, and spying on us. Hell, we don't even trust our own cell phones or our computers.

Even if the device doesn't look like a human, people will not likely trust truly intelligent autonomous machines. I'm not convinced there is a valley involved. It's a popular meme, but not all that germane.

I hate to repost a statement, but I made this one a couple of days ago in regard to Java in another /. story.

Who do we trust? We gotta trust someone/something at some point. I use VPN, proxies, Tor, Freenet, and some other things frequently. Still though, I gotta trust Google with some of my mail. I gotta trust Comcast with some of my pipes. Heck, I gotta trust the Devs of Tor/Freenet for that matter. I gotta trust Apple/Samsung/HTC/et al with the hardware.

A robot should not closely imitate a human face, because that is too difficult. It can still look friendly, which helps us trust it at the start. But in the end our trust will be based on our experience with the robot: if we see it does the job reliably, we will trust it. Just as with people. Or a coffee maker.

Right. C3PO strikes the right balance - humanoid enough to function alongside humans, built for humans to naturally interface with it (looking into its eyes, etc.) but nobody would ever mistake Threepio for a human, nor would that be a good idea.

Why would a robot ever need to look like a little boy, outside of weird A.I. plots or creepier ones?

My boy has a Tribot [amazon.com] toy and he loves it. Every kid would love to have a Wall-E friend. Nobody wants a VICKI [youtube.com] wandering around the house.

C3PO was a protocol droid: its function is to translate and to advise on cultural conventions. Just the thing any diplomat needs: not only will it translate when you want to talk to the people of some distant planet, it'll also remind you that forks with more than four tines are considered a badge of the king and not permitted to anyone of lower rank. A humanoid appearance is important for this job, as translation is a lot easier when you can use gestures too.

The last time on Slashdot this question came up, I made a comment observing that people are willing to ascribe human emotions and human reactions to an animated sack of flour. Disney corporation, back in the day, had a test for animators. If the animator could convey those emotions using images of a canvas sack, they passed. And a good animator can reliably do just that.

Your comment about C3PO or Wall-E makes me want to invert my answer. Because I believe you're right: Wall-E would be completely acceptable, and that's actually a potential problem. The right set of physical actions and sound effects could very easily convince people to trust, like, even love a robot. And it would all be fake. A programmed response. In that earlier post, I remarked about the experiment in Central Park, where some roboticists released a bump-and-go car with a flag on it with a sign that said "please help me get to X". And enough people would actually help that it got there. And that was just a toy car. Can you imagine the reaction if Wall-E generated that signature sound effect that was him adjusting his eye pods and put on his best plaintive look and held up that sign in his paws? Somebody would take him by the paw and lead him all the way there. And yet, that plaintive look would be completely fake. Counterfeit. There would be no corresponding emotion behind it, or any mechanism within Wall-E that could generate something similar. Yet people would buy it.

And that actually strikes me now as hazardous. A robot could be made to convince people it is trustworthy, while not actually being fit for its job. It wouldn't even have to be done maliciously. Say somebody creates a sophisticated program to convey emotion that way with some specified set of motors and parts and open sources it, and it's really good code, and people really like the results. So it gets slapped on to... anything. A lawnmowing robot that will mulch your petunias and your dog, then look contrite if you yell at it. A laundry folding robot that will fold your jeans and your mother-in-law, and cringe and fawn and look sad when your wife complains. And both of them executed all the right moves to appear happy and anxious to please when first set about their tasks.

I could see it happening, and for the best of reasons. 'cause hey, code reuse, right?

This is not exclusively a robot problem. I have met humans that are like that. Many of us even vote them into power every four years or so.

My view on this is that humans have evolved with a large set of nearly automatic body communication. One can tell a lot about a person from the way they act, move, and pose. Similar mechanisms exist for human speech as well.

Similarly, it is thought that some capacity for deception evolved in these modes of communication. But the deception takes effort which can be picked up on, sometimes unconsciously.

What is changing as I see it, is that we can build machines or modify living organisms so that they c

They're not talking about robots, but androids. We don't need ersatz human slaves to do housework. Just a machine, something small that can fold itself up and go in a cupboard when it's not needed. Not a human sized thing lumbering around the house.

Another of those articles that was already partially addressed in SF 60-70 years ago. A guy named Asimov laid out a chunk of the groundwork. But no, they were busy laughing it off as nonsense.

A robot with *only* Asimov's laws is a pretty good start. A robot programmed with a lot of Social Media crap built in would find itself in violation of a bunch of cases of Rule 1 and Rule 2 pretty fast.

Which books were you reading? The ones I read played with some odd scenarios to explore the implications of the laws, but the laws always did work in the end. Indeed, the only times humans were really put in danger were in cases where the laws had been tinkered with, e.g. Runaround and (to a lesser extent) Catch that Rabbit. Also, Liar, if you count emotional harm as violating the first law.

There was another case, (in one of the Foundation prequels, maybe?) where robotic space ships were able to kill peo

Perhaps you can enlighten us then? The original poster was right after all. Asimov portrays a world where the Three Laws work most of the time. In fact, the people of those sets of stories never ever do away with the Three Laws.

You missed my last sentence. All the finesses. And there are lots of them. That's because once you start with legit intelligence the solution space becomes something like NP-Hard.

However, "Robot shall not harm humans" is a lot better of a starting ground than "Let's siphon up all your personal data and sell it". Or automated war drones. It's NOT a solved problem. All I said was that Asimov laid out the groundwork.

You are confirmed for never reading anything he wrote. All those robot books were basically explaining how and why those laws would not work perfectly.

FIFY. If those laws wouldn't work at all, then why did nobody in the stories, human or robot, ever come up with a better idea? In the end, robots and humans were separated not because of flaws in the Three Laws, but because the type of care and support that robots provided proved harmful to humans and their development.

Be more realistic:
1. A robot may not injure a human being, or through inaction allow a human being to come to harm, except where intervention may expose the manufacturer to potential liability.
2. A robot may obey orders given it by authorised operators, except where such orders may conflict with overriding directives set by manufacturer policy regarding operation of unauthorised third-party accessories or software, or where such orders may expose the manufacturer to potential liability.
3. A robot must protect its own existence until the release of the successor product.
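For fun, those "realistic" laws could even be written down as a priority-ordered rule check. This is purely a joke sketch; every name and rule encoding below is invented.

```python
# Joke sketch: the "realistic" Three Laws above as an ordered rule check.
# All names and rules here are invented for illustration.

def may_act(action: str, liability_risk: bool,
            authorised_operator: bool, violates_policy: bool) -> bool:
    # Law 1: protect humans... unless intervention creates liability.
    if action == "prevent_harm":
        return not liability_risk
    # Law 2: obey operators, subject to manufacturer policy and liability.
    if authorised_operator:
        return not (violates_policy or liability_risk)
    # Law 3 (self-preservation until the successor ships) is left to marketing.
    return False

print(may_act("prevent_harm", liability_risk=True,
              authorised_operator=True, violates_policy=False))  # False
```

Note that liability checks shadow every rule, which is rather the point of the parody.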

Don't even need that. My garage door opener is a robot and it requires no such programming. Same goes for the elevator at work, the AWD gearbox in my car, and the eject button on my DVD player. You gotta stop thinking of robots in the 1950s sci-fi terror sense and more like, you know, what they actually are.

We do trust current robots implicitly. Robots of all types are deployed and mostly run our industrial and manufacturing sectors. They are showing up in homes as well. The typical robots that you read about or see in movies are empowered with logic and AI well beyond anything we can actually create. As long as the 'intelligence' of robots continues to be (easily) understood and fully grasped by us, this will not change. When robots start advancing beyond our comprehension, that is the point when we will start to fear them, but that holds true of anything beyond our comprehension.

It's a tortured definition of a robot that includes simple machinery designed to do simple tasks driven by simple switches.

Come back to the discussion when you can instruct a machine to get out the flour, yeast, tomato sauce and pepperoni, bake you a pizza in your own kitchen, and serve it to you with your favorite brew.

I trust my car because I know it's got nearly a hundred years engineering heritage behind it that keeps it from doing things like going left when I steer right, accelerating when I hit the brakes, and exploding in a fireball when I turn it over.

I trust the autopilot in the commercial jet I'm flying in because it's got nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence, and nearly 70 years of heritage in avionics and realtime computers that keeps it from freezing when a cosmic ray flips a bit in memory or from thinking it's going at the speed of light when it crosses the dateline or flies over the north pole.

I will trust a household robot to go about its business in my home and with my children when there is a similar level of engineering discipline in the field of autonomous robotics. Right now, all but a very select few outfits that make robots are operating like academic environments where the metaphorical duct tape and baling wire are not just acceptable, but required, components in the software stack.

I would go further, and say that the duct tape and baling wire are still practically literal on the physical side of the autonomous household robot "market". To my knowledge, there are still no devices that qualify for that description. And no, the Roomba does not qualify; it's a bump-and-go car with a suction attachment, not an autonomous robot. I would really like to have a robot the size of an overgrown vacuum cleaner that is tasked with being a mobile self-guided fire extinguisher.

I trust the autopilot in the commercial jet I'm flying in because it's got nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence, and nearly 70 years of heritage in avionics and realtime computers that keeps it from freezing when a cosmic ray flips a bit in memory or from thinking it's going at the speed of light when it crosses the dateline or flies over the north pole.

You should try giving the FAA and NTSB a little credit. It was only 6 years ago that the F-22 borked itself crossing the dateline, because the military didn't force their contractor to follow the FAA's best practices in writing the software.

Why do we need robots that even vaguely look like people? We have people for that, lots of people, people who are quite good at looking like people. A Roomba zipping around on the floor with a cute face and some oversized eyes would just be creepy. Let form follow function, and let the various robots look like what they do. If it is a farm robot, my guess is that it will look like a tractor; a fire-fighting robot would look sort of like a fire truck; a lawn-mowing robot would look like a lawn mower.

So if you want me to trust your robot then don't have it stuck in the corner unable to find its destination.

Where people will soon interact with robots and need to trust them will be robotic cars. My concern is that even after statistically the robot cars have proven themselves to be huge life savers there will always be the one in a million story of the robot driving off the cliff or into the side of a train. People will think, "I'd never do something that stupid." When in fact they would be statistically much more likely to drive themselves off a cliff after they fall asleep at the wheel. So if you are looking for a trust issue the robot car PR people will have to continually remind people how many loved ones are not dead because of how trustworthy the robot car really is.
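The statistical point can be made with simple arithmetic. All the rates below are invented placeholders, not real crash statistics; it's the shape of the comparison that matters.

```python
# Back-of-the-envelope comparison with purely hypothetical rates:
# even a robot fleet that occasionally fails spectacularly saves
# lives overall if its per-mile fatality rate is much lower.

human_fatal_per_mile = 1.1e-8   # hypothetical human-driver fatality rate
robot_fatal_per_mile = 1.1e-9   # hypothetical robot rate (10x safer)
miles_per_year = 3e12           # hypothetical total miles driven per year

human_deaths = human_fatal_per_mile * miles_per_year
robot_deaths = robot_fatal_per_mile * miles_per_year
print(f"human-driven deaths/year: {human_deaths:.0f}")             # 33000
print(f"robot-driven deaths/year: {robot_deaths:.0f}")             # 3300
print(f"loved ones not dead:      {human_deaths - robot_deaths:.0f}")  # 29700
```

The one-in-a-million cliff story would dominate the headlines either way; the PR job is getting people to see the third number.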

Isn't that basically what the nuclear industry did? We know how that went.

I think car makers should err on the side of acknowledging people's natural fears when they communicate about the safety factor. People are predictably irrational in that they overestimate new dangers over old, invisible dangers over visible, dangers outside of their control over dangers under their control.

Self-driving car manufacturers could make an effort to make the cars to look as close to other cars as possible to avoid the nove

There's good reasons for wanting a humanoid robot, especially in places they have to share with humans, like our homes. You could have a multitude of robots around the house for all manner of tasks, but a humanoid robot could do all of them using the same tools we use ourselves, being much more versatile. And if we're going to share living space with it, it would probably be nice for it to look like a human instead of a monstrosity with 6 arms and tracks.

I wouldn't trust a robot for the same reason I don't trust a computer: Because I don't believe for a second that the things that are ethical and moral for me are at all even close to the values held by the designers, who were informed by their profit-seeking masters, what to do, how to do it, where to cut corners, etc.

The problem with trusting robots isn't robots: the problem is trusting the people who build the robots. Because after all, an automaton is only as good as its creator.

Robots are just machines. Currently there is no reason not to trust them. Now, if they start giving robots weapons and programming them to kill people, then yes, maybe there might be something to worry about.

I will also trust it to break down at the worst times possible, cost a ton of money to repair, and probably cost a nice amount to actually buy.

We don't need trustworthy faces for robots, because actual robots don't need faces. They'll just be useful non-anthropomorphic appliances --- the dryer that spits out clothes folded and sorted by wearer; the bed that monitors biological activity and gently sets an elderly person on their feet when they're ready to get up in the morning (with hot coffee already waiting, brewed during the earlier stages of awakening).

I think the real challenge is designing trustworthy robot "hands." No mother will hand her baby over to a set of hooked pincer claws on backwards-jointed insect limbs --- but useful robots need complex, flexible, agile physical manipulators to perform real-world tasks. So, how does one design these to give the impression of innocuous gentleness and solidity, rather than being an alien flesh-rending spider? What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

When the brain implants finally arrive, I'll be the first in line, and when I can finally download my brain to the fucking matrix, don't even warn me, just plug me in. I'm as pro-tech as they come, and not afraid of innovation. But when it comes to certain stuff, I don't see why we need the innovation in those areas. Certain things define us as humans, and they are beautiful as they are, no need to add tech. I don't need sex tech, an ordinary old fashioned set of tits and pussy do just fine. And I don't nee

What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

Something like this? [youtube.com] But seriously, a humanoid robot might be really good at those jobs (as well as all the other chores around the house). Once we figure out how to program any robot to safely and reliably take care of babies or the elderly, having it control a humanoid body will be trivial in comparison.

This question turns on the meaning of trust. As I understand the term trust, I only apply it to sentient beings whom I know have the capacity to harm but who reliably choose not to do so. The real question, then, is whether robots will or even can fit this bill.

Personal robots are basically mobile computers with servos, and computer software/hardware has a long way to go before it can be considered trustworthy, particularly once it's given as much power as a human.

First there's the issue of trusting the programming. Humans act responsibly because they fear reprisal. Software doesn't have to be programmed to fear anything, or even understand cause and effect. It's more or less predictable how most humans operate, yet there's many potential ways software can be programmed to achieve the same thing, some of which would make it more like a flowchart than a compassionate entity. People won't know how a given robot is programmed, and the business that writes its proprietary closed-source software likely won't say, either.

Second is the issue of security. It's pretty much guaranteed that personal robots will be network-connected to give recommendations, updates on weather/friend status/etc., which opens up the Pandora's box of malware. You think Stuxnet etc. are bad? Wait until autonomous robots are remotely reprogrammed to commit crimes (say, kill everyone in the building), then reset themselves to their original programming to cover up what happened. With a computer you can hit the power button, boot into a live Linux CD and nuke the partitions; with a robot, it can run away or attack you if you try to power it down or remove the infection. Even if it's not networked, can you say for certain the chips/firmware weren't subverted with sleeper functions in the foreign factory? Maybe triggered when a certain date arrives, for example. Then there's the issue of someone with physical access deliberately reprogramming the robot.

Finally, the Uncanny Valley has little to do with the issue. It may affect how much it can mollify a frightened person, but not how proficient it is at providing assistance. If a human is caring for another human, and something unusual happens to the person they're caring for, they have instincts/common sense as to what to do, even if that just means calling for help. A robot may only be programmed to recognize certain specific problems, and ignore all others. For example, it may recognize seizures, or collapsing, but not choking.
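That flowchart-like quality is easy to picture in code. A minimal sketch, with entirely invented event names and responses:

```python
# Sketch of a caregiving robot whose "judgment" is just a lookup table.
# Events outside the table are silently ignored: no improvisation,
# no common sense, no "call for help anyway" instinct.
# All names here are invented for illustration.

RESPONSES = {
    "seizure":  "cushion head, start timer, call emergency services",
    "collapse": "check responsiveness, call emergency services",
}

def respond(observed_event: str) -> str:
    # A human caregiver would improvise; this just falls through.
    return RESPONSES.get(observed_event, "no action: event not recognized")

print(respond("seizure"))   # handled by the table
print(respond("choking"))   # no action: event not recognized
```

The danger isn't that the table is wrong; it's everything that was never put in the table.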

In practice, I don't think people will trust personal robots with much responsibility or physical power until some independent tool exists to do an automated code review of any target hardware/software (by doing something resembling a non-invasive decapping), regardless of instruction set or interpreted language, and present the results in a summarized fashion similar to Android App Permissions. Furthermore, it must notify the user whenever the programming is modified. More plausibly, it could just be completely hard-coded with some organization doing code review on each model, and end-users praying they get the same version that was reviewed.

People are also afraid of a god that doesn't even exist, of a hell which is equally imaginary, of gays/zombies/terrorists destroying society, of apocalypse, and a bunch of other retarded crap. Yet you talk to them about banning guns (or any other real, actual threat) and they call bullshit.

Truth is, we don't have any strong A.I, so being afraid of robots is like being afraid of cars: No matter what it does, it's just a machine controlled directly or indirectly by a human. In the case of the car, it's being c

A couple of decades ago I ran a cyberpunk RPG, and my players would get really pissed at me when they were "hacking into the Gibson" on factory-produced systems and their heads would explode. Then we'd have an argument about why they thought that a corporation with all the power to do what it wants wouldn't just build in a real kill switch.

We aren't there yet, but year by year I feel more vindicated by my argument.

Living in Japan for the last few years, it's funny to see the contrast in perceptions of robots.
In Western movies, people often invent robots or AI which outgrow their human masters and go psychotic - e.g. Terminator, War Games, The Matrix, Cylons etc.
It seems Western people are afraid of becoming obsolete, or fearful of their own parenting skills (why can't we raise robots to respect people, instead of forcing them through programming to respect/follow us?). America especially uses the field of robotics for military applications.
In Japan, robots are usually seen more as workers or servants - Astroboy, children's toys, assembly-line workers etc.
Robots are made into companions for the elderly, or just to make life easier by automating things.
Perhaps it's because Shintoism believes inanimate objects (trees, water, fire) can have a spirit, while Western (read: Christian) society believes God gives souls only to people, and people can't play God by creating souls.
And yes, I know there are some good robots in Western culture (Kryten) and some bad ones in Japanese culture.

The problem with building trustworthy robots is that the computer industry can't do it. The computer industry has a culture of irresponsibility. Software companies are not routinely held liable for their mistakes.

Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

Trustworthy robots are going to require the kinds of measures taken in avionics - multiple redundant systems, systems that constantly check other systems, and backup systems which are completely different from the primary system. None of the advanced robot projects of which I am aware does any of that. The Segway is one of the few consumer robotic-like products with any real redundancy and checking.

The software industry is used to sleazing by on these issues. Much medical equipment runs Windows. We're not ready for trustworthy robots.

Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

And many of those things either rely on computers for the design or have a computer controlling them. Every new car sold where I live must, by law, have electronic stability control installed. Nowadays if a bridge design is not run through a simulation then it won't get built, a modern computer chip is impossible to design without a modern computer, etc, there really isn't much in the way of modern engineering that does not heavily rely on computer controls and/or simulations.

I think you will find the robot manufacturing industry is in general held liable. Same with the fancy brain of your car or most medical equipment; the only things that get away with no liability are those that manipulate just bits and bytes and not real-world objects. It just doesn't come free. I'd say most software, operating systems and hardware run "good enough" for COTS components that I mix and match as I please. If you want something certified to work, with the vendor taking the liability for it, they'll want

Vendors and researchers have a history of making overstated claims about robots, particularly when it comes down to those that interact with people directly. In other words, people don't distrust robots so much as they distrust the people who are trying to sell them.

If it was a matter of distrusting robots themselves, we would still see people buying household robots to do impersonal tasks, like cleaning the house. These are not very different from industrial robots after all, which many people are more than happy to accept. But since we distrust the claims of robotic vendors, we wouldn't even be willing to accept that type of robot - never mind a robot that cares for a child.

With the invasion of military drones (and private ones), Chinese and Korean hackers everywhere, and worms infiltrating industrial robots and control computers, the least harmful thing I can think of is that a home robot would spy on me. The next step: it is manipulating my home banking. And later on it commits a crime in my name, e.g. breaking into my neighbour's WLAN and manipulating *his* e-banking.

With parts coming from China and other low-cost countries, we can never know what a single controller or daughter board in such a thing is really capable of. (Conspiracy theory: all keyboards coming from Taiwan and China have a hardware keylogger built in; just collect them from the trash and here you go...)

No one *actually* wants a rational machine; we want an irrational one, one that can be swayed by emotions.

Remember the back-story of Will Smith's character in the movie "I, Robot"? In it, the Robot saving him made the "logical" decision of saving him rather than the girl, which is why he distrusts them. He wanted a robot that could judge his emotional outbursts and save the little girl, "despite" the rational choice.

We *say* we want a robot with Asimov's three laws, but truly, we *want* something that can be manipulated like putty, just like a human can be. That's how we have evolved, and that's how we *want* to evolve.
Also relevant, an XKCD What-If on this issue: http://what-if.xkcd.com/5/ [xkcd.com]

A friend of mine is a senior researcher in robotics. His take is not to trust anything with enough mechanical power to hurt you anytime soon. With the pathetic state practical software engineering is in, I find that very sensible.

No one remembers the anti-robot sentiment expressed in Astroboy? 1962, and then again from 1980-1982? Then again in the 2002 remake?
At least in the cartoon there were reasons for this! Robot criminals, etc. What do we have now? Non-thinking assembly robots?
Someone, needs to TUG IT LESS. Stop tugging it, MEDIA idiots. At least if you do, do it with vaseline and make sure you keep your robot fantasies quiet.
Fucking wankers.

The article talks about the challenges of designing a robot for the home that is fun, useful and safe. Fun is doable. Safe is doable. But useful? Really? I'm sure that such a device could be put to use, but does that make it useful? In college, we built bookshelves using cinder blocks and lumber, but I would not hold out that cinder blocks were "useful" in the home (outside of the actual construction of the home) just because we found a way to use them.

Look, society is getting dumber, period. I mean most people these days are lacking in basic common sense.

A robot that is programmed to do the dishes or sweep the floors isn't going to activate in the middle of the night and stab the occupants in their sleep. Anyone with common sense will understand that these robots are not "thinking"; they are only programmed to do certain tasks. I've had a Roomba for several years now, and I have never feared it having some ulterior motive other than sweeping up my floors.

Trust is something that has to be earned. You can't "design" a trustworthy robot. You have to design robots and get them into the field. Over time, people will either develop trust or solidify their distrust based on interactions with the robots. It seems silly to me that a company would consider the appearance of a robot to be the primary factor in building trust.

I don't foresee trusting a robot, if it's even remotely true that 88% of people believe robots are necessary for warfare because it's just too dangerous for humans. It's all good until one of these people deems that I'm not good enough for this planet, and then becomes my judge, jury and executioner with one little hack. I'm starting to wonder whether a robot singularity is the best hope for the survival of humanity.

The inspection isn't inspecting the quality of the machining. It's inspecting the quality of the machinist who wrote and ran the CNC program. Mostly catches mistakes caused by improper fixturing and the like.