Author
Topic: A real environment (Read 5111 times)

Just thinking: is there such a thing as a programming environment suited specifically for modeling the physical environment? Most current programming environments are suited to analyzing their own internal little world of 0s and 1s.

I believe it would be incredibly useful to create an environment which is based on input from the external world and its processing. Such a framework would readily support all the higher math functions: calculus, trig, analytic geometry, etc.

And the variables etc. in the system would essentially establish the artificial organism's physical shape, properties, and abilities.

The rest would be a system for analyzing the raw data from the sensors.

Data would be taken in and stored in a special input class which would support the storing and compression of large pieces of data, as well as relevance filters.
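As a rough illustration of such an input class, here is a minimal sketch in C. The buffer size, the relevance rule (keep a frame only if it differs enough from the last stored one), and all names are my own assumptions, not anything specified above:

```c
#include <stdlib.h>
#include <string.h>

#define FRAME_SIZE 64
#define MAX_FRAMES 16

/* Hypothetical "input class": a buffer of raw sensor frames plus a
 * relevance filter deciding which frames are worth keeping. */
typedef struct {
    unsigned char frames[MAX_FRAMES][FRAME_SIZE];
    int count;      /* frames currently stored  */
    int threshold;  /* relevance cutoff (0-255) */
} InputBuffer;

/* A frame is "relevant" if it differs enough from the last stored one. */
static int is_relevant(const InputBuffer *b, const unsigned char *frame) {
    if (b->count == 0) return 1;
    long diff = 0;
    const unsigned char *last = b->frames[b->count - 1];
    for (int i = 0; i < FRAME_SIZE; i++)
        diff += abs((int)frame[i] - (int)last[i]);
    return diff / FRAME_SIZE > b->threshold;
}

/* Store the frame only if it passes the relevance filter.
 * Returns 1 if stored, 0 if filtered out or the buffer is full. */
int input_store(InputBuffer *b, const unsigned char *frame) {
    if (b->count >= MAX_FRAMES || !is_relevant(b, frame)) return 0;
    memcpy(b->frames[b->count++], frame, FRAME_SIZE);
    return 1;
}
```

Compression is omitted here; on a real MCU a simple delta or run-length encoding of the stored frames would be the natural next step.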

This came as I thought today about how intelligence works. An intelligence is the product of its environment, and thus I have reasoned that reasoning can only truly come from observation and contemplation of the environment.

For instance, a developing kitten waits in a room and stores the data in it. I have realized that the direction and interest of the cat is toward dynamic outliers in the environment. The cat quickly loses interest in the humans and the moving plants, as they "always do that"; it instead approaches the strange thing and tactically encounters it, puts it in its mouth to see if it is a power source. It establishes the relevance of objects.

Shouldn't this be how an AI is brought up?

Here's the program setup. The machine takes in input from all its sensors and stacks all of it in arrays forming a general picture of that second. For example, one second would be a sequence of different pixel values (each on a grid) and a clustering of pieces of sound and their direction on the grid as well, along with sonic input, all in one matrix.

Then each pixel would have data in it for the pixel values (IR, visible, etc.), e.g. 225:23, 234:23, 999, 234:2342, 221:23, 999, and this grid of data would have other things such as sound and its direction attached. I thought of uniquely placing the sound on the image as well, as part of the environment.
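The "one matrix per second, with sound placed on the grid" idea could look roughly like this in C. The grid size, channel set, and field names are illustrative guesses, not a spec:

```c
#define GRID 60

/* Every grid cell carries its pixel channels (IR, visible) plus any
 * sound energy localized to that cell's direction. */
typedef struct {
    unsigned char ir;        /* infrared intensity, 0-255        */
    unsigned char visible;   /* visible-light intensity, 0-255   */
    unsigned char sound;     /* sound energy mapped to this cell */
} Cell;

/* One "general picture of that second". */
typedef struct {
    Cell cell[GRID][GRID];
    unsigned sec;            /* which second this snapshot covers */
} Snapshot;

/* Attach a sound reading to the cell in the direction it came from. */
void place_sound(Snapshot *s, int row, int col, unsigned char level) {
    if (row >= 0 && row < GRID && col >= 0 && col < GRID)
        s->cell[row][col].sound = level;
}
```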

Then the 60-by-60 (or whatever) grid of data would be stored, data pixel by data pixel, in 120 artificial neurons.

The next cycle, the old data will be stored in a new set of neurons, which will connect with the others based on their relative sameness, and disconnect from others that are different.

Eventually this will be the subconscious of the machine. Or something like that.

I intend on creating a machine using a "new" language and this general philosophy. The machine would begin as a drone equipped with a CMUcam. In a random section of a normal living room would be a RED buoy with a charger: a positive.

Hypothetically, the machine would be startled by people walking and the light sources, creeping around slowly (infants don't dart around like nuts) and establishing what goes on in the room. Eventually the database being built will establish most things in the room as irrelevant (I'll get more into the system for this later) and it would eventually see the red buoy as the only interesting thing. Thus it would contact it and see it gets energy. The robot by instinct would correlate RED with energy, touching everything red expecting food. Eventually, however, it will figure out the shape of the buoy and know to ignore red-shirted humans.

Learning through experimentation is the only way to learn something new (without being told). Unless the robot already has perfect knowledge of the world, which in reality (outside of labs) it will never have . . .

AI is obviously an unsolved problem, but there are tons of past literature on it that could be useful for you.

My approach to AI is different from the mainstream - I'm convinced I can do it with only a few microcontrollers... I'm currently working on my theories with my newest robot. If any of them work out, I'll write a tutorial or three.

I feel the reason we haven't solved AI is not the hardware, but the software. We've got freakin' teraflop supercomputers but haven't even figured out insect intelligence yet.

1. Observe the current scene.
2. Record observations.
3. Compare past data to current: is anything different or unique here?
4. Is there anything going on in the scene that is unique, anything different?
5. Map the locations of outliers and approach them. *
6. Observe the outlier for a response.
7. No response? Look for more new data.

* If something previously irrelevant in the situation, such as a wall, causes "disorder" (i.e. vibrations in the body, or prevents movement), that is to be mapped. Such things are to be avoided, and only observed if they prove to be bothersome after avoiding them by movement.

In an environment, something isn't going to approach or stand next to a wall, because a wall is boring. The wall always does the same thing; this is how obstacle avoidance works.
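The core of steps 1-7 above (compare past data to current, find the biggest change) can be sketched as a tiny function. The scene size and threshold are invented for illustration:

```c
#include <stdlib.h>

#define SCENE 16

/* Toy outlier finder: compare the current scene against the previous
 * one and return the index of the biggest change (the "outlier"), or
 * -1 if nothing moved enough to be interesting. */
int find_outlier(const unsigned char prev[SCENE],
                 const unsigned char cur[SCENE], int threshold) {
    int best = -1, best_diff = threshold;   /* must exceed threshold */
    for (int i = 0; i < SCENE; i++) {
        int d = abs((int)cur[i] - (int)prev[i]);
        if (d > best_diff) { best_diff = d; best = i; }
    }
    return best;   /* step 5: this is the location to approach */
}
```

A boring wall produces a diff of zero every cycle, so it never wins; that is the "the wall always does the same thing" rule in code.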

Humanistic:

The human framework is the same to begin with, except the device is inhibited to observe only certain pre-built-in things.

Starting out, the human child does the above, but less aggressively. It does not actively seek outliers after observing a scene; it looks for a match to its first data input, which would be its parent. All data is centered on and connected to the parent and things of and around it.

The human child is directed by trainers, parents, etc. to relevant outliers and observational subjects, such as words and letters and toys and tools. These are recorded in memory as bad or good, based on inputs resulting from close proximity to them. The bad and good are still very animal: anything which dissociates its matter is bad. Tickling and such are mapped as good, as the response from the mother is positive. Generally, the feelings it first felt after birth are good, and the tones and data it hears are mapped as good and desirable input. Instead of running from or avoiding the bad, it emits the base cry response, until it knows enough to be told not to and understands how and why not to.

2. After the parent exposes it to more and more human things with positives, it more and more connects those human things with positives. The human quickly, by base logic, establishes the relationship between tool and human. 2a. Positives are provided by having interest in the things first provided by humans to observe.

3a. Eventually the device learns to learn, accepts interrupts, and is guided to subjects to observe by other humans, as this provides positives.

3b. At first in learning, the human is taught by the other physically manipulating or mentally tricking the device into doing the task it wants it to.

3c. The human is interested in the positives, the praise.

4. Later the human is able to mimic other humans, as it is shown how to mimic by the same physical movement above. The device associates the movement and the human as a "provider" of data. The task set and motions are now relevant.

5. It is soon told to do things on its "own" and is left to do tasks by itself, or it will receive a negative. The device begins doing unnatural tasks.

6. After this is accomplished, it is shown how to teach other devices. It passes on knowledge and data to the next. Age = 7-10.

7. The device is finished with this and is now taught to observe non-human things for input, while abiding by human standards of observation. Because mimicking is now connected with observation in the device, it begins using its learning methods to mimic actions in nature, events, and other devices, and in technology, which is another result of this.

8. Creativity is born in the device, although it is limited by the human conformities and learning methods first instated.

It's just an idea I had on humanistic intelligence versus animal intelligence... which hits another key point: are we confusing the two?

Another interesting find in my research today was a study suggesting the human mind is nothing but a constant loop of paths. Your intelligence is a single signal sent from neuron to neuron. Should that small "spark" stop and restart in the wrong location, supposedly the mind is "reset" back to an infantile state.

For example, if your robot sees a predator, there are many solutions:
- close camera eyes
- turn head away
- run away (which direction?)

For the robot to choose the correct response (run away from the predator) requires predictive abilities, as well as understanding that a predator is a bad thing, and why. No idea how to even start on this...

Start basic - allow the bot to have control of its output devices, such as servos, motors, LEDs, speakers, etc. Then, when its batteries run low, it must stimulate its outputs of its own accord, i.e. light an LED, make a noise, etc. Once I decide it has lit the right LED / made the right noise, I will recharge it, so it will link that output to being charged (this is rather than pre-programming a power-level meter). This will incorporate some kind of reward/punishment sequence, i.e. being recharged will be a reward. I may need to pre-program some kind of system for happiness, etc., which could be a single variable holding a value between 1 and 100, where the robot must try to achieve 100, i.e. happy.
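That scheme could be sketched like this: a happiness value clamped to 1-100, where a reward strengthens the link between the last output tried and "being charged". Everything here (names, weights, the number of outputs) is invented for illustration:

```c
#define N_OUTPUTS 4   /* e.g. LED, beeper, motor A, motor B */

typedef struct {
    int happiness;           /* 1..100, robot tries to reach 100  */
    int link[N_OUTPUTS];     /* strength of output->charge link   */
} Mind;

/* Called when the human decides the right output was stimulated
 * and recharges the robot. */
void reward(Mind *m, int output_tried) {
    m->link[output_tried] += 10;          /* reinforce that output */
    m->happiness += 20;
    if (m->happiness > 100) m->happiness = 100;
}

/* When hungry, emit the output with the strongest learned link. */
int best_output(const Mind *m) {
    int best = 0;
    for (int i = 1; i < N_OUTPUTS; i++)
        if (m->link[i] > m->link[best]) best = i;
    return best;
}
```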

I thought about this because the first thing that babies learn, both human and animal, is that when they cry, they get fed. This will probably be my best starting point in understanding actual AI properly, and it should give me some feedback for creating an effective linkable artificial neuron (which I plan to make as an object with properties and methods, like writing .class files in Java).

I am also looking at creating the subconscious area, which is the part that will control balance, speed, sensors, etc., to which the frontal intelligence just needs to send basic commands.

Any ideas on how to start off with the artificial neuron as an object, but using C, would be helpful; otherwise, I could see if there is a Java-to-C cross compiler?
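One common pattern for object-like code in plain C (no cross compiler needed) is a struct for the fields plus functions that take a pointer to it as the "this" argument. A minimal sketch; the fields and the activation rule are illustrative, not a finished neuron design:

```c
#include <stdlib.h>

typedef struct Neuron Neuron;

/* The "properties". */
struct Neuron {
    double weight;
    double bias;
    Neuron *next;        /* simple linkable chain */
};

/* The "constructor". */
Neuron *neuron_new(double weight, double bias) {
    Neuron *n = malloc(sizeof *n);
    n->weight = weight;
    n->bias = bias;
    n->next = NULL;
    return n;
}

/* A "method": fire the neuron on an input. */
double neuron_fire(const Neuron *n, double input) {
    double v = n->weight * input + n->bias;
    return v > 0.0 ? v : 0.0;        /* simple threshold activation */
}

/* Another "method": link two neurons. */
void neuron_link(Neuron *from, Neuron *to) { from->next = to; }
```

This is essentially what a Java-to-C translation would produce anyway: the object becomes a struct, and each method becomes a function whose first parameter is the struct pointer.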


I attempted emotions (an important aspect for any functional AI) on a robot a few years ago and ran into this problem:

How many steps need to be repeated (and remembered) for a robot to achieve something? This is a problem because only if the final step is completed will any emotional reward be given... I was never able to implement learning because of this problem.

Other stuff I achieved: your idea of using numbers to represent emotion levels works. The hard part is comparing fear vs. hunger vs. happiness.

Use a timer and the amount of interesting sensory input to have happened lately to add a boredom emotion. Or add boredom if some task has been repeated too often lately. If boredom gets really high, have your robot swap to an exploration mode, or a "do something else" mode. This is great if your robot gets stuck in a corner.

A battery sensor is great for an "I'm hungry, let's find food" mode. The hungrier it is, the more likely it will go find a recharge station.

If your sensors can detect bad stuff (your cat, a table edge, someone's foot crashing down, etc.), add in fear for your robot to avoid stuff. Fear can be given a really large modifier to overwhelm the other emotions (useful during emergencies!).

All this emotion stuff needs to be managed by an executive. Sometimes emotions conflict (as with humans), causing indecisiveness. Voting usually solves this problem, but it can be an issue if emotions change mid-task. Balancing emotions can be tricky and can require tweaking.

As you can see, each emotion requires a very different subroutine. There is your modularity!
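The modularity described above (one subroutine per emotion, an executive to arbitrate, fear with a large modifier) could be sketched like this. All thresholds and weights here are made up for illustration:

```c
typedef struct {
    int battery;        /* 0-100                               */
    int danger;         /* 0-100 from sensors                  */
    int idle_seconds;   /* time since anything interesting     */
} Senses;

/* Each emotion is its own subroutine returning an urgency score. */
static int hunger(const Senses *s)  { return 100 - s->battery; }
static int boredom(const Senses *s) { return s->idle_seconds > 30 ? 50 : 0; }
static int fear(const Senses *s)    { return s->danger * 3; }  /* big modifier */

/* Executive: picks the winning emotion.
 * Returns 0 = explore, 1 = seek charger, 2 = flee. */
int executive(const Senses *s) {
    int h = hunger(s), b = boredom(s), f = fear(s);
    if (f >= h && f >= b && f > 0) return 2;   /* fear wins emergencies */
    if (h >= b) return 1;
    return 0;
}
```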

This does help, certainly; however, my main goal would be to produce a system where the robot can decide for itself if something should be feared or not, etc.

I have a plan in mind for the playing-off of emotions against each other, and also for characteristics such as when to get bored, etc. This comes from applying the situations to a table (2D array). I have used this system when programming computer applications and games before; I got the basic idea from a book about 11 years ago, but it can be expanded with more columns/rows, and I have also used it for split columns. The original table from the book is below and describes how a computer might decide when to wear a coat outdoors, using a scale of hot/cold in the columns and rainy/dry in the rows.

             Cold   Average   Hot
Heavy Rain    Y       Y        Y
Light Rain    Y       Y        Y
Dry           Y       N        N

The letter Y represents "wear a coat"; the letter N represents "do not wear a coat".

As you can see, a decision can be made in this form, and it can be used to weigh up situations such as the level of danger in entering a room which may contain a cat, against current requirements (like how great the need is to get to a power point).

The above table is good at summing up this type of logic because, as you can see, there aren't equal opportunities to wear the coat; there is a heavy weighting on the side of wearing the coat.

I hope that you can see what I mean by using the table example (which I believe was a demonstration of basic fuzzy logic).

Also, because the table would be stored as a 2D array, each entry would be defined by a number, just like the emotions.
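The coat table above, stored as a 2D array the way the post describes, might look like this (enum names are my own choice):

```c
enum { COLD, AVERAGE, HOT };              /* columns */
enum { HEAVY_RAIN, LIGHT_RAIN, DRY };     /* rows    */

/* 1 = wear a coat, 0 = do not. */
static const int wear_coat[3][3] = {
    /*              COLD  AVERAGE  HOT */
    /* HEAVY_RAIN */ { 1,    1,     1 },
    /* LIGHT_RAIN */ { 1,    1,     1 },
    /* DRY        */ { 1,    0,     0 },
};

/* The whole decision is a single table lookup. */
int decide_coat(int rain, int temp) { return wear_coat[rain][temp]; }
```

Swapping the 0/1 entries for graded values (say 0-100) is what would turn this crisp lookup into the fuzzy version mentioned above.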

This does help, certainly; however, my main goal would be to produce a system where the robot can decide for itself if something should be feared or not, etc.

Yeah, I understood what you meant... but emotions need to be hardcoded...

For example, your taste buds allow sweet food to make you happy, and bitter food to make you sad. That's hard-coded. The learning process is for you to visually identify which foods are sweet and bitter, without eating them first... Is it the color of the food? The shape of the food? Where the food comes from? That green stuff on the food? The texture of the food? The age of the food? The temperature? The food preparation method?

It's this cause-and-effect relationship problem that's really hard to solve without the robot having a very high-level understanding of the world around it...

If a robot finishes 99% of a complicated task, it gets no reward...

And yeah, I agree, fuzzy logic is required for this system, at least for the emotions...

This is what keeps putting me off actually making the system. I start off with some code written on a PC, expecting to port it to an MCU later, and then realize that I am hard-coding things that should be learned, and then realize that some things actually need hard-coding or obviously nothing happens at all. There is a spiral involved in this. I can never decide what to hard-code and what not to. I have just started thinking, however, that maybe another approach is needed, keeping in mind that it will be robot intelligence rather than trying to replicate a biological brain, and that maybe a robot can seek reward and happiness-type motives in some other way - laterally (since they lack the senses we have).

You have the above system. With the system, the machine acquires this "data" of relevancy versus irrelevancy. The machine stores data from the environment, coming from all its sensors, in neat matrices in its "RAM".

Sound, touch (a film covering its body, perhaps), vision: all sensors are reduced right out of the sensor into "pixels".

Each pixel is an N value, say 1-255, holding the sum of the data from that sensor over the sampling duration.

Those pixels are stored in the matrices. After one is filled with data, it goes on to the next one to store them.

Let's call a filled matrix of data sums a "neuron".

It stores the data in a new neuron, and the two neurons automatically compare their data for sameness and connect on the sameness.
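The "connect on sameness" rule could be sketched as below: two filled neurons compare their data sums cell by cell, and the connection strength is the fraction of cells that roughly match. The matrix size and the tolerance are my own inventions:

```c
#define CELLS 9

/* A filled matrix of data sums: the post's "neuron". */
typedef struct {
    int sum[CELLS];
} MNeuron;

/* Compare two neurons for sameness.
 * Returns connection strength 0-100 (percent of matching cells). */
int connect_strength(const MNeuron *a, const MNeuron *b, int tolerance) {
    int same = 0;
    for (int i = 0; i < CELLS; i++) {
        int d = a->sum[i] - b->sum[i];
        if (d < 0) d = -d;
        if (d <= tolerance) same++;   /* close enough counts as "same" */
    }
    return same * 100 / CELLS;
}
```

Disconnecting from neurons that are different then just means dropping links whose strength falls below some cutoff.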

OK, the default Will of the machine is to seek outliers in the environment. The machine in the first stages of its life is really messed up, crawling after nothing, but eventually, hypothetically, moving towards things that are different in the environment. What it moves towards is whatever in the matrix of data sums is an image construction:

2234   2342   23423
23423  2342   23424
99999  23223  23424  2342

^ Here it will move to the right until it is centered, then it will move forward, towards, say, a red charging station it doesn't know is a charging station.

Let's say by some chance it runs into it and gets a charge.

OK, there's a default positive it knows about: it must have energy.

So now it references all data having to do with that point as "good", and those parts of the neurons will be connected and referred to the pleasure center for hunger. Now when it is hungry, what does it do? Move to red. If there is no red, it looks for outliers and attempts to match those outliers or events to what leads to red: what it has to do or see to get energy.

See, now it will begin to "learn" reference points in the environment, etc., to get to the charger, as well as human voices, etc.

I apologize for my terrible communication skills, but I really think I have something here.

And yes, emotions and needs must be hardcoded; that's the hard part. But I believe this system of mine is a good framework once we figure the "defaults" out, such as moving, locomotion, and the innate reflexes necessary to do something like the above.

OK, so say we set a high input range for touch as a "negative". The Pain default matrix will release a response into the neurons which are receiving data when the event occurs, and will connect with those data points.

Next time it "sees" something relatively close to what hurt it, it will, by fuzzy logic, depending on how hurt it was the first time, "shy" away from the possibility of the occurrence (this shying is going to have to be a creative instinct default). If it avoids it, no problem; it will keep moving away, or just divert to the right, or whatever works.

But if it's hit again, then we are going to increase the "threat" of the occurrence and have more relevant data describing the threat, as now we have more connected neurons and a better idea of what is harming us. Again, harm is going to have to be a creative preset.
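The escalating-threat rule could be sketched roughly like this: each harmful encounter grows the threat score, and the fuzzy "shy away" response scales with it. All the numbers are illustrative:

```c
/* Record of how threatening a learned pattern has become. */
typedef struct {
    int threat;   /* grows with each harmful encounter */
} ThreatRecord;

/* Called when the pattern was followed by pain again. */
void record_harm(ThreatRecord *r, int pain_level) {
    r->threat += pain_level;          /* more pain, bigger jump */
}

/* Fuzzy avoidance: how hard to steer away, 0-100. */
int avoidance(const ThreatRecord *r) {
    return r->threat > 100 ? 100 : r->threat;
}
```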

The other part I was thinking about was to hard-code some subconscious levels that control motion: i.e. the conscious MCU stimulates a pin on the subconscious MCU and the robot moves forward; some of the robot's sensors (such as a camera, infrared scanner, etc.) will detect changes, and the robot should then link stimulating that pin with the sensors updating, thereby gaining an intrinsic knowledge of its own movement capabilities.

Quote

Posted by: Sergius

And yes, emotions and needs must be hardcoded; that's the hard part. But I believe this system of mine

Hard-coding emotions isn't that hard; it's deciding on what emotions, and also on what is considered a good or bad emotion from the robot's point of view. Most of these are the same for animals and people: food, reproduction, rest, pain, loud noises, soothing music... but what would a robot need in terms of emotions?

OK, let's see. Here are the basic emotions, the defaults the robot starts with.

Pain/"adrenal" - a set of things that shouldn't happen, and that, if they do, divert all the machine's attention. This would be things such as a foot kicking the machine. The pressure sensors (I intend on lacing the Lexan of the robot with piezo crystal strands) will get a really high reading, and all data at that event will be burned into the pain center. Next time something remotely close happens, it goes to the flight emotion.

Flight - the response to extreme pain and dangerous, non-understood environmental errors. The robot will be designed to, with its vision or sound recognition, notice the direction of a threat and move away with speed and fervor relative to the perceived threat.

Hunger - anything providing energy is a positive.

Curiosity* (key for our robot; we want it to act as a child) - when sustained, the robot will look and move to anomalies in the environment, based on the amount of entropy in the outlier of the scene. Visual outliers take precedence over audio outliers.

And back to what I've written above about infantile intelligence: I believe the beginning robot should not be very independent in asserting its will. A human should give it its first charge and begin directing the thing around to new areas. A reward system could be made which goes to another important center:

Pleasure - very difficult to emulate, but certain sensory inputs need to be strived for, such as a pat on the back. This one really eludes me... any ideas?

I'm thinking of using a simple system for initial reward/punishment: basically just using two pushbuttons - one for reward, i.e. a pat on the back, and one for punishment, i.e. a smack on the hand. Bumper switches can also be used for bad stimulus, as can a device that monitors the actuators to determine when they are close to a stall level, or when they are being overpowered; this could act similarly to a pain mechanism (your own muscles hurt when they are overpowered by something heavy), and hopefully the robot could be made intelligent enough to try to preserve its own actuators. It's the positive that I have fewer ideas for.

But if it's hit again, then we are going to increase the "threat" of the occurrence and have more relevant data describing the threat, as now we have more connected neurons and a better idea of what is harming us. Again, harm is going to have to be a creative preset.

The problem with this is that if your robot is hurt, it's too late; it's been damaged already (perhaps even the sensors designed to detect the damage). Learning from this could perhaps prevent more damage, but it might well be too late... So your robot needs to be able to learn something without it even happening... For example, you don't need to jump off a cliff to know that it will hurt...

Perhaps your robot needs an internal physics modeling engine that tries stuff out in its little robot head before doing stuff in reality (this has been tried before on biped robots in Japan).

The learning part would be to (A) remember the calculated result and (B) know when to do a simulation run.
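A toy version of both (A) and (B): a tiny physics check (will a forward step leave the robot past a known drop?) whose result is cached, so the simulation only runs once per position. The cache scheme and the physics rule are both invented for illustration:

```c
#define POSITIONS 10
#define CLIFF_AT 7

/* (A) remember the calculated result: one cache slot per position.
 * 0 = unknown, 1 = predicted safe, 2 = predicted unsafe. */
static int cache[POSITIONS];

/* Ask: is stepping forward from this position safe? */
int simulate_step(int pos) {
    if (cache[pos] == 0)   /* (B) only simulate when the answer is unknown */
        cache[pos] = (pos + 1 >= CLIFF_AT) ? 2 : 1;   /* "run" the physics */
    return cache[pos] == 1;   /* 1 if the step is predicted safe */
}
```

The robot never has to actually fall off the cliff: the prediction is made in its "little robot head", and once made it is remembered.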

Quote

but what would a robot need in terms of emotions?

Anything that will help the robot learn to get what it needs (spare parts, a power supply, a human to save it when it's stuck, victims, etc.).

Quote

Hunger - anything providing energy is a positive.

An addition to this... anything that uses energy is negative; the more energy required, the worse it is... Perhaps a laziness emotion is needed to make the system find the most energy-efficient way of doing things?
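That laziness idea reduces to scoring each candidate action by benefit minus energy cost, so the cheapest way of getting the same thing wins. A minimal sketch with invented values:

```c
/* A candidate action with its expected payoff and energy price. */
typedef struct {
    int benefit;       /* how much the action helps    */
    int energy_cost;   /* how much energy it burns     */
} Action;

/* "Laziness": pick the action with the best benefit-minus-cost score. */
int pick_action(const Action *acts, int n) {
    int best = 0, best_score = acts[0].benefit - acts[0].energy_cost;
    for (int i = 1; i < n; i++) {
        int score = acts[i].benefit - acts[i].energy_cost;
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}
```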

Quote

I'm thinking of using a simple system for initial reward/punishment: basically just using two pushbuttons - one for reward, i.e. a pat on the back, and one for punishment, i.e. a smack on the hand. Bumper switches can also be used for bad stimulus, as can a device that monitors the actuators to determine when they are close to a stall level, or when they are being overpowered; this could act similarly to a pain mechanism.

I've seen this done before, and it works. The only big complaint is that it requires a very good "teacher" who is willing to spend large amounts of time with a system that cannot learn independently...