Dauntless wrote:While AI is a very serious problem, I do like the more hopeful slant offered by RFC's work on the Bolos, or even the slightly different but maybe more profound Mutineers' Moon books.

Realistic? Perhaps not, but there are so many "AI is evil" stories out there, and the more positive view seems hard to find.

The evil stories hinge on the realities of GIGO - Garbage In, Garbage Out. Which is the impetus and logic behind this post. A successful AI simply must be tempered with some formal understanding of morals, scruples and values. That is why your own parents are charged with raising you - lest later you be charged. Or worse.
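To make the GIGO point concrete, here's a toy sketch of my own (nothing from the books, and the "ally"/"hostile" labels are purely made up): a trivial "learner" trained on flipped labels faithfully reproduces the garbage it was fed.

```python
# Toy illustration of GIGO: a naive classifier trained on corrupted
# labels learns the corruption, not the truth.
from collections import Counter

def train(examples):
    """Learn the majority label seen for each feature value."""
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, []).append(label)
    return {f: Counter(ls).most_common(1)[0][0] for f, ls in by_feature.items()}

clean   = [("ally", "safe")] * 9 + [("hostile", "threat")] * 9
garbage = [("ally", "threat")] * 9 + [("hostile", "safe")] * 9  # labels flipped

good_model = train(clean)
bad_model  = train(garbage)
print(good_model["ally"])   # safe
print(bad_model["ally"])    # threat - garbage in, garbage out
```

The learner is doing exactly what it was built to do; the output is only as trustworthy as the data (or upbringing) it was given.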

Unethical AI = Evil

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble.—cthia's father. Incident in ?Axiom of Common Sense


We should be clear: AI does not mean some artificial brain. There are AI systems now. They handle specific problems, and because of really good programming that lets them "learn," they can solve many problems far faster than humans.

That does not mean we have sentient artificial brains making key decisions. Heinlein's The Moon Is a Harsh Mistress had a sentient computer, and other sci-fi books have them, but it is up to the author how far he wants to go.

Clearly, a great many ship systems run on AI. The TAC system really has to; there are too many fast decisions. A human makes the key decisions and then sets off a cascade of smaller ones through already-programmed systems.

RFC clearly intended NOT to have a sentient AI in his books. We see humans making decisions and doing things that could be handled by machines. So relax and enjoy it.
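That "human makes the key decision, automation runs the cascade" pattern can be sketched in a few lines. This is purely my own illustration; the order names and sub-steps are invented, not anything canonical from the books.

```python
# Hypothetical sketch: one human-issued order fans out into a cascade
# of pre-programmed sub-decisions, with no machine initiative involved.

PLAYBOOK = {
    "engage": ["designate_targets", "assign_missiles", "set_ecm", "open_fire"],
    "evade":  ["rotate_wedge", "deploy_decoys", "random_walk"],
}

def execute(order):
    """The human issues `order`; automation runs the scripted cascade."""
    steps = PLAYBOOK.get(order)
    if steps is None:
        raise ValueError(f"unknown order: {order}")
    return [f"auto:{step}" for step in steps]

print(execute("engage"))
```

The machine never chooses *whether* to engage; it only expands a decision a human already made, which is exactly the non-sentient arrangement described above.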


ldwechsler wrote:We should be clear. AI does not mean some artificial brain. [...] RFC clearly intended NOT to have a sentient AI in his books. We see humans making decisions and doing things that could be handled by machines. So relax and enjoy it.

I am relaxed. And I do enjoy it. For the sake of completeness, I simply endeavor to point out the difference. RFC's Honorverse includes weak AI, not strong AI - and there is a significant difference. True AI denotes "intelligence"; intelligence denotes sentience. My posts are for the segment of the class who are not aware of the difference and might appreciate the tidbit.

RFC made a comment on the subject himself, with his "Frankenstein Complex" remark, which seems to align with my own thoughts.

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble.—cthia's father. Incident in ?Axiom of Common Sense

Lord Skimper wrote:It is funny reading this. I thought for sure that the dates would have started with a 19, but it seems people really don't know what technology exists now.

As for why robotic AI tech is not in the books: RFC's choice, simple as that.

Most people don't know a tenth of what technology exists right now. Consider some of the latest technology DARPA is bringing out publicly: the VAPR program, for instance, takes solid-state systems (computer circuit boards, mostly) and turns them into clouds of vapor, like magic.

AI exists right now. AI for most real systems is no longer based on expert systems; that was 25 years ago. AI now is neural-network based. Drones at DARPA are undergoing swarm technology and utilization, plus automated flight tech that takes over when drones lose their control signal.
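The smallest possible illustration of "neural" learning is a single artificial neuron (a perceptron) learning a logic function from examples. This is a textbook toy, not any real DARPA system, written in plain Python with no libraries.

```python
# A single perceptron learning the OR function: weighted sum, threshold,
# and the classic error-driven weight update.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b    += lr * err

print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 1, 1, 1]
```

Modern neural networks are this same idea scaled up to millions of neurons and trained with calculus-based updates rather than this simple rule, but the "learn weights from examples" core is identical.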

Meanwhile, at work, controllers are continuously fighting with an AI so badly programmed that it could be replaced with a six-year-old child and there'd be a significant improvement.

Mind you, this is a company that tried to save money by transferring equipment off old vehicles onto new ones. The only problem is, my base has exactly five compatible pieces of the newer equipment. We also have five older pieces of equipment that do the same job but are incompatible with the new fittings, and are more of a danger to physical health even when used correctly.

The main problem with robots is control. What if someone else takes over control? You spend a lot of money and work to build robots, and your enemies take control of them. They save the money, they don't have to transport the robots to you because the robots are already there, and worst of all, if you count on robots you lose your firepower to fight the robots.

I suppose the challenge is what you want your robots or AIs to do. RFC doesn't want an AI like Dahak in the Honorverse. He also doesn't want a Terminator, and while there are a lot of very smart and versatile systems, they need control and direction. The tactical systems look for things they are designed for (and things that don't fit what is "normal") and notify an operator.
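That "flag what doesn't fit normal and notify an operator" behavior is classic anomaly detection. Here's a minimal sketch of my own (the sensor numbers are invented), using a simple standard-deviation check against a learned baseline.

```python
# Minimal anomaly detection: learn "normal" from a baseline, then flag
# any reading too many standard deviations away for operator review.
import statistics

def flag_anomalies(baseline, readings, threshold=3.0):
    """Return readings more than `threshold` std-devs from the baseline mean."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [r for r in readings if abs(r - mean) / sd > threshold]

baseline = [100, 101, 99, 100, 102, 98, 100, 101]   # normal sensor noise
readings = [100, 99, 140, 101]                      # 140 doesn't fit "normal"
print(flag_anomalies(baseline, readings))           # [140]
```

The system makes no judgment about *what* the anomaly is; it only raises a flag, leaving the actual decision to a human - exactly the control-and-direction arrangement described above.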

All sorts of computing power is thrown at chess programs, so that now a computer can beat Grandmaster-level chess players - though not all the time.

It's crunching data and processing calculations at vast speeds. On the other hand, human minds can do all sorts of things that baffle people, particularly people who don't have the experience or just lack "something" (experience, training, some little twist in the DNA) that lets others do it.

I drive my wife crazy, particularly when driving, by seeing animals or birds, and often identifying them from a fleeting glance or from catching the edge of a shape or a bit of motion. She has never been able to do that, and it really annoys her, particularly since I am supposed to be looking at the road and driving. It's not limited to wildlife; I catch all sorts of things that may or may not actually cross our path. Sometimes it is very interesting but odd stuff, just not where I am going to be driving. But I see the deer (or whatever) before it starts to cross the road, or just before it starts to move in that direction.

I grew up hunting, particularly wingshooting, so the ability is probably at least enhanced by experience and, I suppose, great attention - if on automatic - to things related to what I enjoy: picking out a bit of shape that becomes an outline. A flicker becomes a target acquisition/ID and the decision chain of "what the hell is that, where is it going, and what to do or not do." My father had this. My older daughter, who has never hunted, has it too, to the semi-bemusement of my wife, who says "must be genetic," because the younger one doesn't have this particular set of abilities even though she has gone hunting with me (she has others). It's probably a combination of several things: that genetic piece about target/danger/ID, plus a bunch of experience and training, so that I don't have to think in order to notice something. The thinking (rather than the reaction) ends up being more "confirm yes/no" unless things are moving too fast.

I really, really don't want to hit a deer (or a flying turkey, but that is another story) with the car.

Truly thinking machines bring to mind all sorts of bad examples from sci-fi horror stories, particularly the AIs who are going to do something "for your own good" - just like a whole bunch of politicians.

Brigade XO wrote:[...] Truly thinking machines bring to mind all sorts of bad examples from sci-fi horror stories, particularly the AIs who are going to do something "for your own good" - just like a whole bunch of politicians.

Often, it's not excess artificial intelligence that is the problem but a lack of natural intelligence.

It is clear that a level of artificial intelligence is used, but it is essentially invisible. The missiles use AI to steer themselves. Humans tend to set up pre-planned responses, but things happen too fast in battle. No one can control thousands of missiles at once, and if the tac officer in charge of the task force or group has to work with other humans, that slows things down even more.
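The "missiles steer themselves" idea can be sketched with the simplest possible guidance loop. This is my own toy pure-pursuit example, not anything from the books: each seeker closes on its own target with no central controller in the loop.

```python
# Toy self-steering seeker: each tick, point straight at the target and
# move; no operator involvement after launch.

def pursue(missile, target, speed=5.0, steps=100):
    """Move `missile` toward `target` each tick; return ticks to intercept."""
    mx, my = missile
    tx, ty = target
    for tick in range(1, steps + 1):
        dx, dy = tx - mx, ty - my
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:               # close enough to hit this tick
            return tick
        mx += speed * dx / dist         # steer directly at the target
        my += speed * dy / dist
    return None                          # ran out of flight time

print(pursue((0.0, 0.0), (30.0, 40.0)))  # distance 50 at speed 5 -> 10 ticks
```

A human picks the target and fires; after that, thousands of these loops can run in parallel, one per missile, which is why no tac officer could (or needs to) fly them individually.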

Just remember that RFC created the universe; we are following his commands.