Posted by kdawson on Monday June 23, 2008 @10:03PM
from the no-hacking-furby-does-not-count dept.

Xeth writes "I'm a consultant with DARPA, and I'm working on an initiative to push the boundaries of neuromorphic computing (i.e. artificial intelligence). The project is designed to advance ideas on all fronts, including measuring and understanding biological brains, creating AI systems, and investigating the fundamental nature of intelligence. I'm conducting a wide search of these fields, but I wanted to know if anyone in this community knows of neat projects along those lines that I might otherwise overlook. Maybe you're working on a project like that and want to talk it up? No promises (seriously), but interesting work will be brought to the attention of the project manager I'm working with. If you want to start up a dialog, send me an email, and we'll see where it goes. I'll also be reading the comments for the story."

I always loved AI, but I was often told that there is no research on it that is truly theoretical; that it's more like a collection of applied domains, like neural-network learning or computer vision.

So, what is the most theoretical aspect of AI research that you know of? Or, put otherwise, is there a branch of AI research where you prove theorems rather than writing code?

I know it's slightly off topic, but people working on that kind of thing are probably wondering if they should mention it here (wondering if it interests DARPA or not).

A lot of the older AI research is pure theory, but in the last 20 years or so it has been driven by the realization that we don't really have the tools to meet some of the early expectations of the field. If you are interested in the theoretical foundations of AI, though, you might want to look into compression, data representation, and computability, as well as general information theory. Claude Shannon's work would be a good place to start, and is cited frequently enough to give you a guided tour through AI.
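To make that concrete: Shannon's central quantity, entropy, is practically a one-liner. A minimal sketch (standard formula, toy data of my own) of the number of bits per symbol a message's empirical distribution requires:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Entropy in bits per symbol of a message's empirical distribution."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-symbol message needs 2 bits/symbol; a constant one needs 0.
print(shannon_entropy("abcd"))   # 2.0
print(shannon_entropy("aaaa"))   # 0.0
```

This is the floor for lossless compression, which is why compression and AI keep turning up in the same theoretical conversations.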

This is an interesting post. As far as I knew, things like logic are in some ways part of AI. One of my PhD supervisors is a monster in logic, and he has published in the AI Journal [Elsev.] (one of the most prestigious journals for AI).

I think a lot of research on agents is related to logic, including but not limited to CTL, ATL, and first-order logic, combined with game theory.

What I would suggest is looking at academia, mostly in Europe (UK, Netherlands [they are good at theoretical research], Germany). If yo

Possibly: what exactly the research on self-awareness/sentient learning systems comprises for you guys. The reason I bring this up is that I left AI as a main research field because even the most flexible research goals set by academia today are just tiny steps of applied advances in statistical inference and mathematical techniques in general. Not that these things are not useful (machine learning is quite amazing, actually), but the initial goals have little to do with all this. We have surpassed brains in many ways, but the massively parallel brain does not even "learn", let alone think (which is what we want), in the manner on which any AI system today is based. And yes, that includes adaptive ones that change their own set of heuristics.

I spent a lot of my time thinking about neuroscience and reading psychology, and while I slowly moved towards rationalizing certain things, the main obstacles to what I needed to know were deeply biological. How exactly does the mind "light up" certain areas of memory while recalling things (sounds, sights, etc.) stored nearby? How "randomized" is this? And how can it be represented mathematically (on a von Neumann architecture)? Is there ANYTHING we do that is not entirely memory-based (even arithmetic seems to stem from adaptive memory)? Why do we need to sleep, and what part of the whole "algorithm", if there is one, is dependent on this sleeping business? What exactly does it change in our reasoning?

If we knew precisely some good answers, rather than the guesswork in literally all major textbooks, we could begin to model something useful and perhaps introduce much more powerful intelligence with all we have developed in NNs, probabilistic systems, etc. I think once sentience is conquered, human beings are going to be immediately made inferior. It's just this abstraction business that is so damn complicated.

Though I suspect that any assurances from me will mean little and less (you seem to have a well-defined opinion about what DARPA does and why), I think that the ideas I'm pursuing here are sufficiently general that it would be foolish to shy away from them on the grounds that they might be used for some military application. You could say the very same about any advanced computing device.

Why is it that the first application I can think of for such a project developed by DARPA is to use it against the citizens?

Like it or lump it, you are in this boat with everyone else. If AI is solved, it will be used for good and evil. If your country does not use it for evil (extremely doubtful), somebody else's country will. Better yours than theirs. What I mean is that true AI will be an extremely powerful thing; if any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass. I don't think you'd like that very much.

Having said that (and to get back on topic), I have been working on a general AI project called Animal [rebelscience.org] for some time. Animal is biologically inspired. It attempts to use a multi-layer spiking neural network to learn how to play chess from scratch using sensors, effectors and a motivation mechanism based on reward and punishment. It is based on the premise that intelligence is essentially a temporal signal-processing phenomenon [rebelscience.org]. I just need some funding. The caveat is that my ideas are out there big time and there is a bunch of people in cyberspace who think I am a kook. LOL. But hey, send me some money anyway. You never know. :-D
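For anyone wondering what a spiking neuron even is: the standard textbook model is the leaky integrate-and-fire neuron. A generic sketch of my own (not Animal's actual code): the membrane potential leaks each step, integrates input, and emits a spike when it crosses a threshold.

```python
def lif_run(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Leaky integrate-and-fire neuron: returns a 0/1 spike train
    for a sequence of input currents."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)            # fire a spike
            v = reset                   # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input accumulates and fires periodically.
print(lif_run([0.4] * 8))  # [0, 0, 1, 0, 0, 1, 0, 0]
```

The timing of spikes, not just their count, is what carries information in such models, which is exactly the "temporal signal-processing" premise.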

Lions kill cubs sired by other lions. Penguins and monkeys steal babies from other parents when their babies die prematurely. Ants wage war. Almost all female mantises engage in cannibalism. We have no monopoly on evil. If you believe in a struggle between good and evil, humans are in a unique position to clean things up; we understand we're evil and can admit it.

"If any country other than yours gets an early monopoly on AI, you can bet they are going to use it to kick your country's ass." I see this line of thinking as very dangerous. Many European countries, including the one I live in, are certainly able to obtain nuclear weapons, but simply refuse to do so. So, what if "bad guys" (say, terrorists) set off a nuclear bomb here? First, society is reasonably certain our military is not *humiliating* civilians in other countries by randomly killing (especially children) or

Well, good luck. But I can see why they think you're a kook. This AI stuff is a bit too close to "free energy" and "cold fusion" for my liking; decades of research and no progress to speak of, wild promises of what *might* happen if all the problems were magically solved, and an army of AI believers who won't let go of the dream. That's not to say you can't do it, just that lots of smart people have already tried and failed, which is generally a bad sign.

Where did you get the idea that humans will veto unjust orders? You might want to read up about the Milgram experiment, or maybe just consider how the Holocaust happened.

I assure you, the Nazis didn't manage to put together an army of thoroughly evil people -- the vast majority of the Nazis were perfectly ordinary human beings receiving evil orders. We like to think we're different, but that's an incredibly dangerous opinion. It's much better to accept the fact that we are human, and that humans are over

Bollocks. Very few games have AI that's even approximately interesting. The most advanced stuff that's commonly used is stuff like algorithms for navigating around a map and obstacle avoidance that were basically mastered by the robotics community in the late 80s and early 90s.

Show me a game that does something truly novel in terms of AI, and I'll be impressed. I don't see any, though.

AIs, for instance, cannot figure out how a map works in a first-person shooter entirely on their own. Bots in older games (or on maps without waypoints) will often walk into a wall, stop, get their bearings, and then move in another direction. I loved watching Foxbots in TFC on a CTF map just standing around, walking in circles around the flag.

In Half Life 2, the enemy AI runs on paths. There are multiple plotted paths, and it follows them and
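For reference, the search-based alternative to canned paths is ancient and well understood; a minimal A* sketch on a toy grid (my own example, not any game's code) finds a route around obstacles with a Manhattan-distance heuristic:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; 1 = wall, 0 = open. Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new = cost[cur] + 1
                if (nr, nc) not in cost or new < cost[(nr, nc)]:
                    cost[(nr, nc)] = new
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(frontier, (new + h, (nr, nc)))
                    came_from[(nr, nc)] = cur

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall
```

This is essentially what waypoint graphs precompute; bots without it (or without waypoints) are the ones walking into walls.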

It's mainly a teaching + learning system for a system with input and output. I don't see anything built with it answering any rational questions or coming up with new ideas anytime soon, but if you do AI and don't know about them, you better catch up.

I think the Deep Belief Networks of Hinton et al. are way ahead of Numenta, in that they are real science with measurable results that have been reproduced by multiple implementations. The 2006 paper that started it all and Hinton's presentation on Google Video:

TRBMs have been used in DBNs; look at Learning Multilevel Distributed Representations for High-Dimensional Sequences [utoronto.ca], Ilya Sutskever and Geoffrey Hinton, AISTATS 2007. But yeah, if you're looking for a job, Numenta is a good place to look. Of course, once you join the company you'll think Numenta's technology is the "one true path" and not even bother looking at the rest of the field ;)
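For the curious: the building block of a DBN is the restricted Boltzmann machine, trained layer by layer with contrastive divergence. A bare-bones CD-1 sketch (my own toy code, biases omitted; not the paper's implementation):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    return 1 if random.random() < p else 0

def cd1_step(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update on a tiny RBM.
    W[i][j] connects visible unit i to hidden unit j (biases omitted)."""
    nv, nh = len(W), len(W[0])
    # Positive phase: hidden probabilities and samples given the data v0.
    ph0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    h0 = [sample(p) for p in ph0]
    # Negative phase: reconstruct visibles, then recompute hiddens.
    pv1 = [sigmoid(sum(h0[j] * W[i][j] for j in range(nh))) for i in range(nv)]
    v1 = [sample(p) for p in pv1]
    ph1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # Weight update: <v0 h0> - <v1 h1>.
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return v1

random.seed(0)
W = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(4)]
for _ in range(200):
    cd1_step(W, [1, 0, 1, 0])   # learn to reconstruct one pattern
```

A DBN then stacks these: the hidden activations of one trained RBM become the visible data for the next layer up.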

I'm a consultant from Slashdot. For a low fee, I can point you towards research materials to save you the time and effort of doing it yourself, and if you elect to pay for the premium service, I also guarantee that all provided materials are not fake. Send me an email and we'll see where it goes :)

It would be great to hear of any interesting original research. It seems to me that most of the news in this space is more about applications of already well-known ideas rather than new, well-publicized developments.

The 'Semantic Web' companies that are springing up all over like Twine, AdaptiveBlue, etc. are the best examples. They seem to be using some basic NLP, classifiers and statistical models to provide various services on the web. This may not be cutting edge artificial intelligence research but,

You've got to quit trying to advance on separate fronts. People have been exploring and reinventing the same old niches for sixty years. Little has changed except for the availability of powerful hardware with which to realize these disconnected bits and pieces. What is needed is a way to bring the many different segments of the AI and robotic communities together, because the solution is not to find the "winning approach", but to realize the value of the various perspectives and combine efforts. This is not a new idea, it is an old one which apparently just doesn't fit into the established research environments. Go to the library and read some old books on AI if you really want an appreciation of how pathetic the progress of ideas (not hardware) has been. To whet your appetite try some of Marvin Minsky's old papers - http://web.media.mit.edu/~minsky [mit.edu] He recognized this situation nearly 40 years ago.

Compliment of the day to you and your entire
family how are you today? Hope all is well with you I hope this email meets you in a perfect condition. I am using this opportunity to thank you inform you that I have come upon a large repository of AI source code left to me by my brother, Prince Abdullah of Nigeria.

It is my desire to transfer this source of of my home country to a place where it will be safe, and I wish your association in this business matter. I've been recommended to you by Mr. Smith of New York. I would like to transfer the source to your FTP server as an escrow service. In recompense, I will offer you 10% of the code, which is LoC 150,000,000.

To complete this transaction which will be beneficial to both of us, please contact my secretary with the following information:

In the moment, I am very busy here in Paraguay because of the investment projects which myself and my new partner are having at hand IN PARAGUAY.Finally, remember that I have forwarded instruction to my SECRETARY MR.Brwon Adebayor, his E-mail, (brwonadebayor@yahoo.com) to assist you on your behalf to send the source code to you as soon as you contact him.

Please I will like you to accept this grant offer with good faith as this is from the bottom of my heart. You should contact my secretary for the claim of you'r 10% which i willingly offer to you immediately you receive this mail, Presently I am in Paraguay.

pls make sure that you inform me as soon as you collect the bank draft so that we can share the joy together. Thanks and God bless you and your family.

For many decades, there has been a push to have an AI that acts just like a human. In other words, it makes rash decisions, based on bad anecdotes and stereotypes, full of mistakes, and then tries to rationalize that everything was planned with intelligence.

AI should understand the failings of human intelligence and fix them. For example, I have the sad job of normalizing health data. Every day, I dread coming into work and going through another million or so prescriptions. Doctors and nurses seem to continually find new ways to screw up what should be a very simple job: What is the name of the medication? What is the dosage? How often should it be taken? When should the prescription start? When should it end? How many refills/extensions on the prescription are allowed before a new prescription must be written? Instead of something reasonable like: "Coreg 20mg. Every evening. 2008-06-10 to 2008-07-10. 5 Refills." -- I get: "Correk 20qd. 10/6/08x5." It seems to me that some form of AI could learn how stupid humans are and easily make sense of the garbage. Of course, there's no reason the AI couldn't replace the doctor and write the prescriptions itself in a very nice normalized form.
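The easy 90% of that normalization is mechanical. A toy sketch (the pattern, field names, and frequency table are my own invention; the genuinely hard part, spell-correcting "Correk" to "Coreg", is exactly where learning would have to come in):

```python
import re

# A toy normalizer for shorthand like "Correk 20qd. 10/6/08x5":
# drug name, dose in mg, Latin frequency code, start date (d/m/yy), refills.
FREQ = {"qd": "every day", "bid": "twice a day", "tid": "three times a day"}
PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+)(?P<freq>qd|bid|tid)\.\s*"
    r"(?P<d>\d{1,2})/(?P<m>\d{1,2})/(?P<y>\d{2})x(?P<refills>\d+)")

def normalize(rx):
    m = PATTERN.match(rx)
    if not m:
        return None  # this is where the million daily surprises land
    return {
        "drug": m.group("drug").capitalize(),
        "dose_mg": int(m.group("dose")),
        "frequency": FREQ[m.group("freq")],
        "start": "20%02d-%02d-%02d" % (int(m.group("y")),
                                       int(m.group("m")), int(m.group("d"))),
        "refills": int(m.group("refills")),
    }

print(normalize("Correk 20qd. 10/6/08x5"))
```

Any rule set like this collapses on the long tail of new abbreviations, which is why a system that learns the garbage patterns would beat hand-written regexes.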

I thought the idea of having an AI work like a human (strong AI, anyway) was not so it can have all the flaws we have, but so we can have a conscious machine. We are the conscious beings that we know best, so conscious and human-like become largely the same thing. That being said, standardizing human behavior is possible. The easiest way is to set up a standard form for prescriptions with nice fields for name, type, dosage, etc., and then adding a nice 'other' field just in case. You know, just li

There's an indefinable line out there where AI and human intelligence will meet, in much the same way that alien life and life as we know it will also meet, but will they cross over?

Sure, it'll be cool describing your symptoms to a robot Doc one day when you get a sore throat, and having it drum up some perfectly scripted scrip that actually fixes your throat, but what happens when the robot Doc gets a sore throat?

And it will.

I know we're only talking about the kind of AI that can answer phone calls

A system that performs set tasks isn't necessarily intelligent nor does it necessarily require intelligence. Or to put it another way, just because humans perform some tasks doesn't mean that task requires intelligence to perform.

Intelligence is defined by the likes of free thought, the ability to learn from mistakes, come up with ideas, adapt fluidly to changing situations and so on. If it's not capable of making mistakes it's not capable of learning how to deal with the unpredictable.

My AI page which has several links that go deeper to older write ups is at www.fossai.com [fossai.com]

Basically I say that the better the computer vision you make, the better the software you can write for advanced bots, leading up to AI. I see AI as being something we'll naturally get to even if no one makes an effort toward it: our 3D cards are getting better, video games are making better 3D worlds, memory is getting bigger, and computer speeds are getting faster. Even if you couldn't hold an AI in a current computer's memory, you have wireless internet that links up with a supercomputer to make thin-client bots. So there really isn't anything in current technology that is holding us back except computer vision.

Now I am not so good in the computer vision field, but as I see it (excuse the pun), there are two ways to do vision.

1) Exact matching. You model an object in 3D via CAD, Pixar-style, or using Video Trace [acvt.com.au]. First you database all the objects that your AI will see in its environment, then you make a program that identifies objects it "sees" with computer cameras and laser range-finding devices. So then the AI can reconstruct its environment in its head. Then the AI can perceive doing actions on the objects.

I'm currently not in the loop here. I can't talk to anyone at Video Trace because I'm just a person, and they don't want to let me in on their software. So I can't database my desk. So I can't make the program that would identify things.

2) Even better than exact matching is similar matching. No two people look alike besides twins, so you can't really just database in a person and say that is a human. And as humans go, there are different categories such as male and female, and some are androgynous so we can't tell their sex. Similar matching has a lot of potential in its ability to detect things like trees and rocks. Similar matching is good at an environment that is tougher to put into exact matching situations. So just from this information alone, I wouldn't start on similar matching unless you had exact matching working in a closed environment. I'm not saying that some smart individual couldn't come up with similar matching before exact matching. I'm just saying that for myself, I'd start with exact matching, and then extend it with similar matching. There are a lot of clues you can pick up on if you know exact locations of things.

And then once you have single-location vision working, you can add multi-point vision. Multi-point vision would mean that if you had more robotic eyes on a scene, you'd gain more detail about it. You could even get as advanced as conflict resolution, when one robotic eye thinks it sees something but another thinks it is something different. The easiest way to think of a good application for this would be if you had a robotic car driving behind a normal semi truck and another robotic car in front of the semi. The robotic car in the back can't see past the semi to guess when traffic conditions will make the semi slow down, but the car in front of the truck can see well, so they can signal information to each other that would let the car behind the semi truck follow closer. If you get enough eyes out there, you could really start to put together a big virtual map of the world to track people.
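For what it's worth, the simplest 2D form of "exact matching" is template matching by sum of squared differences: slide the known object over the image and take the offset where the difference is smallest. A toy sketch of my own (nothing to do with Video Trace):

```python
def ssd_match(image, template):
    """Slide a template over a grayscale image (lists of lists) and return
    the (row, col) offset with the smallest sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(ssd_match(image, template))  # (1, 1)
```

This breaks as soon as the object is rotated, scaled, or lit differently, which is exactly why "similar matching" (feature- and statistics-based recognition) is the harder and more valuable problem.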

I wouldn't say AI that learns like humans is desirable. After all, you'd have to code in trust algorithms to know who to listen to. I'd say AI that downloads its knowledge from a reliable source is the way to go. It is easy to see: sit in class for years until you learn a skill, or download it all at once like Neo in the training chair.

Anyway, you can do a lot with robots that have good computer vision. The thing that has to be done next is natural language understanding. So far we've discussed the AI viewing a snapshot of a scene and being able to identify the objects. Next you'll have to introduce verbs and moving.

In one of his classes, we tried (in a very rudimentary way) to give computers a "3D imagination space" by extracting spatial information from natural language and displaying it in a virtual reality environment. (We could visualize sentences such as "the chair is behind the table".) There was also much discussion of visual/spatial metaphors that humans use to understand abstract sen

Better visual recognition and understanding of natural language would certainly aid us in producing better systems but I'm not convinced they'll allow us to simply create intelligent robots.

The search for an intelligent machine requires more than this; it's not just about sensing your environment, of which vision is just one facet (e.g. blind people are still intelligent).

I've had a quick read of your website and, whilst interesting, I'm not sure that it's entirely correct; I think it overs

I recently threw together a prototype for my company using OpenCV. That OpenCV exists for this sort of thing is a godsend. One of our interns recently completed a UI research project that also relied on OpenCV.

But one of the problems I had while doing it was that whenever I searched for more documentation about the algorithms I was trying to write, all I could find were either papers describing how some researcher's system was better than mine, or some magic MATLAB code that worked on a small set of test images. There were no solid implementations written in C for any of these systems.

I would love to dick around for weeks implementing all these research papers and then evaluating their results and real-world performance, but I don't think my boss or my company's shareholders would enjoy that. As at every company, resources are limited for something that isn't making money.

With that said, the best way to further AI research, particularly in the highly marketable fields of machine learning and computer vision (but probably others as well), is to add implementations of cutting edge research to existing BSD-licensed libraries like OpenCV for companies to evaluate. If products that use that research become profitable, private companies are likely to throw a lot more money and researchers at the problem, all competing to one-up the other.

If you think I'm being unrealistic, you should check out the realtime face detection that recent Canon cameras use for autofocus. Once upon a time, object recognition was considered a cutting-edge AI problem.
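The trick that made that face detection realtime (the Viola-Jones detector) is the integral image, which turns the sum of any rectangular region into four table lookups. A minimal sketch of my own:

```python
def integral_image(img):
    """ii[r][c] = sum of img over the rectangle [0..r-1, 0..c-1],
    padded with a zero row and column so the recurrence is uniform."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum over any rectangle in constant time using the integral image."""
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

Viola-Jones evaluates thousands of rectangular "Haar-like" features this way per frame, which is why a 2008-era camera can afford to run it live.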

The Matrix Logic series of books by August Stern should give you some ideas. Maybe DARPA has the resources to test if isospin of oxygen is really the basis of intelligence, as Stern considers plausible, due to the vector basis of "logicspace." Look for that missing particle predicted by logic groups while you're at it. I don't know why those books aren't cited more, or why symbolic logic is still taught as it always has been, when matrix logic makes things so much clearer and more consistent. The vector approach to logic can also replace standard programming structures in everyday code. Instead of if-then or case structures, querying a truth table or testing for equivalence term by term--the usual practice in conventional logic, too--a matrix multiplication can calculate the answer directly, if the terms are properly conceptualized. The books are easy to read, too, very clear and straightforward. Everybody oughta check em out.
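To illustrate the general flavor of that claim (this is my own toy encoding, not Stern's actual formalism): truth values become basis vectors and connectives become matrices, so evaluating a formula is literally matrix multiplication rather than a truth-table lookup.

```python
# Truth values as vectors: FALSE = (1, 0), TRUE = (0, 1).
F, T = (1, 0), (0, 1)

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

# NOT is the matrix that swaps the two basis vectors.
NOT = ((0, 1),
       (1, 0))

# Binary connectives act on the tensor product v (x) w, a length-4 vector.
def tensor(v, w):
    return tuple(a * b for a in v for b in w)

# AND maps only TRUE(x)TRUE to TRUE; everything else lands on FALSE.
AND = ((1, 1, 1, 0),   # FALSE row
       (0, 0, 0, 1))   # TRUE row

def apply2(M, v, w):
    x = tensor(v, w)
    return tuple(sum(M[i][j] * x[j] for j in range(4)) for i in range(2))

print(matvec(NOT, T))      # (1, 0), i.e. FALSE
print(apply2(AND, T, T))   # (0, 1), i.e. TRUE
print(apply2(AND, T, F))   # (1, 0), i.e. FALSE
```

Whether this is clearer than an `if` statement is debatable, but it does show how connectives compose algebraically: NOT-AND is just the matrix product of NOT and AND.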

CENNS stands for Core Engine Neural Network System, and started as a research consolidation project under DARPA's Intelligent Systems and Software program in 1995. It was a joint effort with the RAND institute to leverage all A.I. research in the past 50 years under a single initiative.

Project SUR paved the way for systems HARPY and HEARSAY-I, then abandoned until 1984, under the Strategic Computing Program. HEARSAY-II introduced the concept of a comm

I doubt he'll comment. But I will: it sounds like bullshit to me. Unless you can propose how exactly somebody might interface a neural network to a knowledge-based system. That's substantially more advanced than any ANN system I've encountered so far, and I've looked at some fairly esoteric ANN designs.

I have the perfect project: a smart knife. Think about it: knives are deadly, deadly weapons. People get stabbed every day. Even innocent people stab themselves while trying to prepare the simplest of dishes. The solution is simple: build a knife that knows its target, with an active memory metal that blunts itself to the sharpness of a baseball bat if it's positioned at anything other than its target. Furthermore, it will dynamically alter its blade to ensure the optimal cut of the material, taking into consideration the grain, moisture, temperature, and density of the object. It also has ZigBee wireless mesh networking built in to communicate with other intelligent kitchen objects. The cutting board will communicate with the knife to let it know how close it is to the board. It will speak with the oven to let it know the specific moisture and condition of the meat, to allow the oven to set the temperature and time of cooking to an optimal level. It will also probe for bacterial, viral, or prion content, communicating with any compatible devices to warn the user of the danger.

I am actually working on a neural processor. It is primarily a platform for developing neural applications, as opposed to an application itself; similar to how a database provides middleware functionality. It is temporarily coined Neurox.

Neurox is subdivided into two parts:
Firstly, a database where neurons have position and are allowed to move or create new connections (plasticity) in a more permanent manner; this can be a slower process. And secondly, a processing node, or cluster of nodes, where a slice o

Peter Turney (whose programs have achieved human level performance on the SAT verbal analogy test) and I have been discussing an experimental test of Ockham's Razor in AI. This is a question that is both fundamentally important and experimentally tractable.

I recommend you read our discussion [wordpress.com] of an experiment to test Ockham's Razor (and related theories such as MDL, algorithmic probability...).

Cyc Corp (which is already working for the NSA) has the most advanced AI system I am aware of. I am not sure Cyc is an improvement over Eurisko, its predecessor, but, well, it helped its creator raise a few dozen million dollars.

Also, dear DARPA official, don't you think that an AI researcher could have ethical reservations about working with the US Army? I'm not trying to troll here; this story is already tagged 'skynet'. Don't you think that many AI researchers are very worried about the mix of milit

I'm working on an augmented visual display system that uses a network of firing signals to forge 'paths' in an ever-evolving AI processor for visual recognition. My goal is to completely replace my PC mouse so I can dominate my foe in Warcraft III. Stay away from the USEAST servers or prepare to be dominated.

I have had an interest in AI over the years and have found Gerald Edelman's books particularly insightful.

See: _Neural Darwinism_ (ISBN 0-19-286089-5) and _Bright Air, Brilliant Fire: On the Matter of the Mind_ (ISBN 0-465-00764-3).

The ideas in these books might be outdated by now, but I doubt it. I think the works of Norbert Wiener are still relevant.

I particularly liked the NEAT project, however crude it may be. I like the concept of changing neural topology via genetic evolution, and think this is consistent with what Edelman tells us really happens in biology.

See: http://www.cs.ucf.edu/~kstanley/neat.html
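The core NEAT move is structural: an "add node" mutation splits an existing connection gene in two, growing the topology instead of just reweighting it. A stripped-down sketch of that one operation (my own toy genome encoding, not the real NEAT implementation):

```python
import random

def add_node_mutation(genome, next_node, rng):
    """NEAT-style structural mutation: pick an enabled connection, disable it,
    and splice a new node in its place with two fresh connections.
    Each gene is (src, dst, weight, enabled)."""
    enabled = [i for i, g in enumerate(genome) if g[3]]
    i = rng.choice(enabled)
    src, dst, w, _ = genome[i]
    genome[i] = (src, dst, w, False)            # disable the old connection
    genome.append((src, next_node, 1.0, True))  # in-link gets weight 1.0
    genome.append((next_node, dst, w, True))    # out-link keeps old weight
    return next_node + 1

rng = random.Random(42)
genome = [(0, 1, 0.5, True)]   # a single input->output connection
next_node = 2
next_node = add_node_mutation(genome, next_node, rng)
print(genome)
```

The weight choices (1.0 in, old weight out) are NEAT's trick for making the mutation initially behavior-preserving, so evolution can explore the new structure without an immediate fitness penalty; real NEAT also tags each gene with an innovation number for crossover, which this sketch omits.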

My other suggestion is to define the many different scopes of the AI. For some, it seems the bar has been placed at natural language processing and full-on human cognition. Without the frame of reference and body of experience of a human though, this seems to be an unrealistic goal. I just don't think we can "program" a computer to do it. To pull it off, this would seem to require duplicating the nervous system of a human to enough of a degree that the AI can experience sensory input compatible with our shared human experience. Think about how many years it takes for a human to reach the level of intelligence we are seeking in AI. I don't think there are any overnight solutions here. We need to teach it like a baby, child, adolescent, and adult. While we may be able to speed train an AI, it may be that there is something to the lack of interesting input that enables us to reflect and refine our mental models of the world. The AI must also continue to interact with the human world in order to stay current.

But AI doesn't have to match a human. There are much simpler organisms we can model as a start that may pay off in other ways. Nature seems to excel at reusing novel patterns and we should exploit that code/model library. The AI produced from this research may not be able to hold a conversation, but it can probably keep an autonomous robot alive and on its mission, whatever that may be. And I think it's a better foundation for the eventual human equivalent and beyond.

An AI system must at its heart understand the two hemispheres of the human brain and how they process information differently. Though, for example, both hemispheres receive inputs from both eyes, how they process information is radically different. The right brain is looking first at the outline of an object. Then, as that outline has been sketched out, it feeds that information up the column and more specificity is gained. The left hemisphere--being used to process information in a linear sequential manner--looks at individual items inside the image and tries to name them. These two separate processes are then passing information constantly across the corpus callosum and that is how we get our consciousness. An AI system must do this cross pollination.
I have been working on various aspects of this idea for years in the Godwhale Project [google.com].
The first stop on anyone's journey to write this code is none other than Dr. Roger Sperry [Nobel Prize, 1981].

Maybe begin with a bit of background on the complexity of shotgunning the task: some Hofstadter, maybe some Dennett, maybe something like John Pollock's "How to Build a Person: A Prolegomenon".

Then define, in the sense of a formal systems analysis, the #1 task DARPA would have an AI system perform in 5-10 years and then specialize and concentrate and specialize and concentrate some more in resea

As if I didn't see that coming? I think my UID says I've been here awhile.

It's not that I'm asking Slashdot to do my work for me; I've already got some very strong leads to work on. However, Slashdot occasionally surprises me with people that are thoughtful and working in interesting fields, so I figured I'd give it a shot. Most of the changes in my life have come from sudden and unexpected directions; I wanted to see what serendipity might bring me that deliberation would not.

One of my theories is that brains predict possible futures (by modelling reality in parallel), and consciousness is what happens when a brain recursively tries to simulate and predict itself. There are already plenty of nonhuman intelligences around (see your local pet store), and how we handle them is not that great.

I personally am not sure if creation of AI will be a big benefit to humans in the long term. Perhaps augmentation of humans or animals would be more useful.

Shit, forgot to log in before posting. There. Oh, and for all of you young folk that think your ID is low, no, it isn't. Mine isn't even low. I forgot the login credentials for my first acct (in the 100s). Damn.

Don't take this the wrong way, but I think you're drawing conclusions based on some serious misunderstandings, a large leap of faith, and an unfamiliarity with the fields in question.

As far as the requirement for "free will" in computer systems, you've put the cart before the horse and assumed that free will must exist for a system to simulate the mind, without ever proving that the mind is anything other than a deterministic system of unbelievable complexity. To presume that it is nondeterministic because you cannot adequately predict its behavior is pretty obviously bad logic.

The human brain does not take advantage of any known large-scale quantum effects, and, so far as we know, does not exploit any of them to produce random behavior. Once again, the inability to demonstrate a pattern is not evidence that a pattern does not exist.

Asynchronous computing does not produce or take advantage of quantum uncertainty. The levels of quantum uncertainty involved are swallowed by the impact of the deterministic systems they are filtered through, and drowned out by the impact of chaotic but deterministic variations in process scheduling, resource locking, and timing conflicts. The same goes for parallel computing for the same reasons- network latency is a chaotic, not random, phenomenon.

In terms of the use of quantum uncertainty for intelligent systems, there is no doubt that quantum computing holds tremendous promise, but also that its applications are hugely misunderstood. It is not a cure-all for general computing problems, and it particularly does not solve the problem of being insufficiently able to describe your problem.

Bottom line is that chaos != randomness, and unpredicted != unpredictable. What you've got is good philosophy, but does not accurately depict the state of AI or what we know about the systems you are describing.

Chaos == deterministic but non-predictable. I claim that determinism inherently rules out free will. I assume the human mind has free will, and that free will is a prerequisite for true intelligence (as opposed to merely simulating intelligence). That assumption is a leap, and I fully understand and accept that. But if you accept that assumption, a chaotic system cannot have true intelligence.
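The "deterministic but non-predictable" point is easy to demonstrate concretely: the logistic map at r=4 is a one-line deterministic rule, yet two orbits started within any measurement error diverge until they are effectively unrelated (my own toy demo):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a classic chaotic system."""
    return r * x * (1 - x)

def trajectory(x0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

ta = trajectory(0.2)
tb = trajectory(0.2000001)   # a 1e-7 perturbation of the starting point

# Deterministic: the same seed always reproduces the exact same orbit...
assert trajectory(0.2) == ta
# ...yet practically unpredictable: the tiny seed difference is amplified
# roughly exponentially until the two orbits are macroscopically different.
print(max(abs(x - y) for x, y in zip(ta, tb)))
```

So unpredictability on its own tells you nothing about determinism, which is the crux of the disagreement above.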

Asynchronous computing, using current models, does not produce or take advantage of quantum uncertainty. But I posit that they