
destinyland writes "A technology CEO sees game artificial intelligence as the key to a revolution in education, predicting a synergy where games create smarter humans who then create smarter games. Citing lessons drawn from Neal Stephenson's The Diamond Age, Alex Peake, founder of Primer Labs, sees the possibility of a self-fueling feedback loop which creates 'a Moore's law for artificial intelligence,' with accelerating returns ultimately generating the best possible education outcomes. 'What the computer taught me was that there was real muggle magic ...' writes Peake, adding 'Once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we're dipping our toe into symbiosis between humans and the AI that shape them.'"

A Terminator or Matrix scenario would happen much faster than this educational AI loop, which would require decades for each round of feedback. And considering that the AI would have to be nearly as smart as humans to significantly outperform human teachers, it should be able to enhance itself much more rapidly than by waiting for the next generation of kids to grow up and reprogram it.
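Just to put numbers on the timescale point (the growth rates and the 25-year generation time below are made-up illustrative assumptions, not anything from the article):

```python
# Toy comparison: an educational feedback loop that improves 50% per
# human generation (~25 years) vs. an AI that improves 50% per year.
# All numbers here are illustrative assumptions.

def years_to_reach(target, gain_per_round, years_per_round):
    """Years until capability (starting at 1.0) first reaches target."""
    capability, years = 1.0, 0
    while capability < target:
        capability *= 1.0 + gain_per_round
        years += years_per_round
    return years

human_loop = years_to_reach(1000, 0.5, 25)  # one round per generation
ai_loop = years_to_reach(1000, 0.5, 1)      # one round per year

print(human_loop)  # 450 years
print(ai_loop)     # 18 years
```

Even granting the same per-round improvement, the loop that cycles once a year laps the one that cycles once a generation by more than an order of magnitude.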

1. Of all the colleges at a university, the teaching college will generally have the lowest, or near the lowest, admissions requirements. Low pay just doesn't draw high-quality talent. Now sure, you'll find some absolutely stellar teachers, ones who actually care about their students and spend lots of time outside of school researching the stuff they're teaching, building lesson plans, projects, and field trips. You'll find a lot more who are just teaching straight out of the textbook. I could outwit at least half my grade-school teachers.

2. We are in school of some form or another for a good chunk of our lives. A couple of years of daycare. Another decade of elementary and high school. From there, a few years of vocational school, or several years of college, or up to another decade for higher-level degrees. For 20 years of care, we only get another 40-50 years of functional lifetime out of a person. We simply can't afford as a society to have a low student-to-teacher ratio. AI could fill the gaps for the less demanding tasks. An AI could guide individual students through directed self-study and aid them with homework, allowing a teacher to assign more work and still expect it to be accomplished. An AI could handle larger lectures, allowing teachers to focus one-on-one or with small groups.

AI in schools would allow the teachers we have to operate more efficiently and more effectively. That in turn means fewer teachers per student, increasing individual teacher pay and drawing in a better quality of teacher. Think of it as the same thing that has happened in manufacturing for the last 200 years. Machines don't replace humans altogether. They simply fulfill the more repetitive tasks.

Intelligent tutoring systems in education are my field, so I say with some confidence that so-called AI won't replace human tutors anytime soon. Online workbooks and computer-aided learning are a wonderful adjunct to classroom instruction, but they cannot replace a live teacher. About 30% of instruction can reasonably be handled remotely (software- or video-based instruction), but the other 70% of the task of educating and motivating learners is non-trivial. File the OP under jet-cars of the future.

A "bunch of script kiddies", er, Students, have been beating various professional IT departments at the game called "Cyber Security". Since two years ago we would have called anyone who claimed they could bust federal contractors a "tin foil hat", they took some bits prisoner to prove it. This caused Memos to be Issued to block those security holes. The Students then observed the results, and took NATO for a ride in Round 2. This caused more Memos to be Issued by the "AI". (Insert rest of article here.)

Oh wait, you're saying that's not a game? Games are supposed to be cute little self contained exercises that *don't matter* right?

Not my field at all, so this is a real question: wouldn't that percentage depend on the student? It was my understanding that people respond differently to different ways of teaching (some learn more with visual content, some prefer audio, some are better with books, some respond best to a combination of multiple ways), so wouldn't there be a certain "kind" of student whose favourite method of learning would be through an intelligent tutoring system?
I know this is purely anecdotal, so take it only as illustration

If you assume that "intelligence" means "thinks just like a human" then sure.

There's lots of stuff "like" AI. In fact there's plenty of actual AI out there that works well in the domain that it was designed for.

Projects like Watson are really cool, though, and heading in the right direction for building machines that can process a wide range of information in an intelligent manner and respond to questions about that information and the links between it. Watson isn't really designed to teach (that I know of).

If they further improved Watson to be able to ask its own questions, or at least take in new information from sources outside the original quiz-show database (and not just blindly accept all information as "truth", of course; there would have to be heuristics to see how well the info fits with what Watson already "believes", or at least some way of separating facts from fictional ideas, if it doesn't do that already), it could actually be fun, and perhaps even insightful, to talk to. Just don't let it read any YouTube comments.

Isn't this what humans do? I believe in X; new information Y doesn't fit with X, therefore discard Y. Information Z fits with X, therefore accept Z as truth.

Why not just work towards creating AI that weighs information based on the evidence instead? The story "Reason" was interesting; however, I would prefer to live in a world without computers worshipping the Master.
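For what it's worth, "weighing information based on the evidence" already has a standard formalization: Bayes' rule. A minimal sketch (the prior and likelihood numbers are made up for illustration):

```python
# Bayesian belief update: instead of accept/discard, weight new
# evidence by how likely it is under each hypothesis.
# The numbers here are made up for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

belief = 0.5  # start undecided about hypothesis X
# Evidence that fits X well, but could also occur otherwise:
belief = update(belief, 0.8, 0.3)
# Evidence that fits X poorly -- it lowers the belief instead of
# being discarded outright:
belief = update(belief, 0.1, 0.6)
print(round(belief, 3))  # 0.308
```

The key difference from the accept/discard caricature is that contrary evidence moves the belief down instead of being thrown away.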

Not to mention I'll believe this when AAA games with budgets bigger than some Hollywood blockbusters can actually design AI that doesn't slam into walls like a kid with assburgers, or line up to get slaughtered while not noticing the bodies I've stacked like cordwood all around him, or completely forget about me 30 seconds after I blow his friend's head smooth off and instead of finding cover just wanders around like he is waiting on a bus.

AAA games are not made for us. If the enemy did not line up for slaughter, then it's likely the target audience would not buy it. Half-Life (1 or 2) had a basic implementation of this, and more recently ArmA 2 has a slightly better one. Either the masses don't want it, or they don't know what they want but can be distracted by shiny.

Have you not seen Terminator 3 or the Second Renaissance? It's by hating and by creating machines of hate that we train our creations to treat existence as a zero-sum game. Kindly please tell all your friends.

You take the red pill and read some comics. Suddenly you start believing what you read and write silly articles about it. You take the blue pill and read some comics. Suddenly you start believing what you read and write silly articles about it.

I don't think he's trolling; I think he just didn't get the joke. It was a pretty good pun ("elicit drug") that's easily confused with the sort of aliteracy you often see here (like "they should loose their funding" when they really mean "lose").

BTW, I did mean aliteracy, not illiteracy. The written word is superior to the spoken word, but only if used carefully. The aliterate doesn't understand that simple fact.

Before we create real intelligence we're going to have to understand what sentience is and how it works. People seem to forget the second part of science fiction is fiction. It's not only possible to write a program that will fool people into thinking it really can think, I've done it myself. What's more it was back in 1983 on a TS-1000 -- Z80 processor with 16K memory and no other storage (program loaded from tape).

The irony is I wrote the thing to demonstrate that machines can't think, and nobody believed me.

Um, no. Basically, he's talking about a 'perpetual intelligence machine' (which I'm sure violates one of the laws of thermodynamics) fueled by the educational system (which is running out of money). This is the same system that is demonizing teachers as greedy, unqualified babysitters. As we chase the good teachers out of the education system, we're going to try to use AI to create 'super-intelligent humans'? We're going to be lucky if the next generation of children learns anything not on a standardized test.

Not necessarily. The quality of presentation which can be created in a movie is much better than the quality of presentation which can be created in a theater. You can argue about the content of movies being better or worse than the theater content, but the quality of presentation is unquestionably better in movies. This is because movies have larger economies of scale. They have larger audiences. They can afford much more expense in paying attention to the smallest details. School teachers (even the

Life itself basically violates the laws of thermodynamics... if thought of as a closed system. Life is basically the way the universe fights entropy, adding order to chaos, even though ultimately it has to fail. That doesn't mean we can't have local changes to entropy, where the universe can be "reset" back to some earlier condition or even improved upon, but nonetheless, when you take the universe as a whole into account, entropy always increases regardless.

The universe doesn't fight entropy; it slides toward it. Life, as a pocket of order, necessitates a more rapid descent into disorder as its consequence. In other words, life acts as a catalyst for the increase of entropy, so it doesn't violate the laws of thermodynamics: by introducing a catalyst, the slide into entropy is expedited.

You'd think geeks would understand basic physics better than this. It was okay when Asimov got thermodynamics just plain wrong, because that was 60 years ago and everybody had it wrong. Even Roger Penrose still had it wrong in the 70s. But the whole "the universe increases in entropy, so why are there constellations and life?" paradox doesn't actually exist.

Real scientists figured that out a long, long time ago. The longer version is: thermodynamics is a model of the behavior of gases in a closed system, and it makes a lot of simplifying assumptions.

I like to bring hope. To my knowledge, a 'perpetual intelligence machine' has not been proved possible or impossible under the known laws of physics. Julian Barbour, in The Anthropic Cosmological Principle, has demonstrated that different amounts of computation can be done in the universe according to the class of cosmological model. E.g., in GR with a cosmological constant of 0, a finite amount of computation can be done in infinite time, but at no point does that computation stop; machines can keep getting more efficient to squeeze out more work.

Life itself basically violates the laws of thermodynamics.... if thought of as a closed system.

This was a really odd opener. If we think of a diesel engine as a closed system then despite refueling it weekly we could marvel at how it violates conservation of energy. You seem to have a decent grasp of thermodynamics, so I'm assuming this was more a thought exercise.

If you look at the NEA's data (nea.org PDF [nea.org]), you can see a number of interesting things.

First, many "states" that rank quite high on "expenditure per pupil" (page 55) -- DC, for example, which is #1 -- do not correlate with better education. In fact, DC is the top spender, and you will find MANY lamenting how bad the schools there are.

Second, the total revenue of schools (page 68) has RISEN significantly over the last 10 years. Crying about constantly running out of money as you get more and more each year is p

Here in Australia, they are currently paying unemployed people to take courses. Why? With unemployment as it is (not as bad as in the USA, but not as good as it was five years ago), it is hard to find a job. Not having a job for a period of time makes it harder to get one, but filling the gap with a course not only shows that you are not a lazy bludger, it also improves your job worthiness by giving you more qualifications.

And now they're all overqualified for jobs (or burger flipping will require a doctorate).

Regarding your .sig:
All due respect, but science does *not* encompass the mystical ("Whereof one cannot speak, thereof one must be silent." -- L. Wittgenstein); rather the converse. Science and the empirical method represent only a very tiny, self-referential fraction of what is intuited about the universe. Objectivity is more of a myth than the Flying Spaghetti Monster (see Critical Theory; post-modernism).

I'm Alex Peake, the author of the article, and your post is unintentionally inspiring. I look forward to the day when I can send my kids to Logical Preschool, although transitive properties are usually something teens learn in high school.

Thank you! I've been lurking on Slashdot all these many years and never acquired much karma. In the 90s I began the original Primer codebase as a branch of Slashcode and learned Perl as my first web language because I wanted to build on the early innovations in metamoderation that were so revolutionary at the time. It is an honor to be on Slashdot now, and an honor to be awarded one internet.

What the lovely chap in the article seems to forget is that education is probably more about politics than about education. The Creationists, ID-ists, and the slew of other nutjobs all having their pound of flesh taught in the US school system seem to show that it certainly isn't simply a matter of getting the right teaching methods. Having that crock taught by a teacher or by an AI makes no difference.

Furthermore, I don't totally disagree that perhaps better teaching methods could be developed. I just thi

>>The Creationists, ID-ists, and the slew of other nutjobs all having their pound of flesh taught in the US school system seem to show that it certainly isn't simply a matter of getting the right teaching methods.

Yes, like in Creationist Texas that just voted 8 to 0 to reject Evolution! Oh, wait. It was 8 to 0 to support Evolution and reject ID.

Your paranoid hysteria is a bit overblown if IDers can't even get one vote in *Texas*. You're probably one of those folks that confused the proposals for changes to the history standards with actual changes.

While I'd agree that a slew of nutjobs have their say in education, it's more the people who invent new teaching methodologies every year and then force them on teachers, not your fantasy about the all-powerful Koch brothers rewriting textbooks.

Education is screwed up for a lot of reasons, but that's not one of them.

IN CASE YOU HAD NOT NOTICED, IT SHOULD NOT BE NEWS THAT TEXAS SAID THAT EVOLUTION WAS OKAY.

IT SHOULD NOT EVER BE NEWS.

YES, I AM SHOUTING. DEAL WITH IT.

--BMO


Personally, I'm convinced that the AI in the original Deus Ex gave me god-like powers of concentration and cognition. However, the AI in Witcher 2 has set me back to approximately the mental capacity of a brain-damaged labrador retriever.

So I guess it's a wash. But boy, when Call of Duty 4 Modern Warfare 3 Black Ops 2 DLC 1 comes out, am I ever gonna get smart again!

The AI would be used to teach lectures, and provide students with guided self-learning. This would free up teachers to provide more one-to-one and one-to-few interaction with the students who need assistance. It would not replace teachers, merely shift their duties.

This is one of the silliest versions of a Singularity I've seen yet, and there are already a lot of contenders. It has a lot of the common buzzwords and patterns (like a weakly substantiated claim of exponential growth). It is interesting in that it does superficially share some similarity with how we might improve our intelligence in the future. The idea of recursive self-improvement, where each improvement leads to more improvement, is not by itself ridiculous. Thus, for example, humans might genetically engineer smarter humans who then engineer smarter humans, and so on. A more worrisome possibility is that an AI that doesn't share goals with humans might bootstrap itself by steadily improving itself to the point where it can easily out-think us. This scenario seems unlikely, but there are some very smart people who take that situation seriously.

The idea contained in this post is, however, irrecoverably ridiculous. The games which succeed aren't the games that make people smarter and challenge us more. They are the games that most efficiently exploit human reward mechanisms and the associated social feelings. Games that succeed are games like World of Warcraft and Farmville, not games that involve human intelligence in any substantial fashion. The only games that do are games that teach little kids to add or multiply or factor, and they never succeed well because kids quickly grow bored of them. The games of the future will not be games that make us smarter. The games of the future will be the games which get us to compulsively click more.

A more worrisome possibility is that an AI that doesn't share goals with humans might bootstrap itself by steadily improving itself to the point where it can easily out-think us. This scenario seems unlikely, but there are some very smart people who take that situation seriously.

Games that succeed are games like World of Warcraft and Farmville not games that involve human intelligence in any substantial fashion. [...]
The games of the future will not be games that make us smarter. The games of the future

Indeed. I think it's made all the more new-age crystal-meditation stream-of-consciousness buzzword babble by the fact it's a transcript of a talk. I think I got to about the fourth paragraph before I started skimming and scrolling. No way I'm going to read this drivel. Besides, if I want Singularity Silliness, I go straight to the source - Ray Kurzweil.

If we really want to make strides in AI, we need to have some software that learns and tries new things - and put it into an arms race [wikipedia.org] with others of it

Neal Stephenson doesn't just write fiction. I am biased because he is my favorite author. But Stephenson writes fiction based on history and trends within humanity which he studies quite carefully. I was actually surprised to find him acknowledging one of the preeminent mathematicians of our time as his source in one of his novels.

He writes tortured metaphors about katana-wielding Mafia pizza-delivery men and pulls endings out of his ass. Referencing mathematicians and writing novels that appeal to back-patting nerds doesn't make him a genius; it just makes him aware of his audience.

Neal Stephenson doesn't just write fiction. I am biased because he is my favorite author. But Stephenson writes fiction based on history and trends within humanity which he studies quite carefully. I was actually surprised to find him acknowledging one of the preeminent mathematicians of our time as his source in one of his novels.

He's a CEO. He doesn't have to be taken seriously amongst those with knowledge in the field. He just has to be taken seriously amongst those with investment money. If he can spin an exciting story that makes investors think, "What if he's right? No matter what the risk, I should get in on this because the payout is unlimited" then he wins. He gets people to front money, which he spends on whatever he wants.

The world of business is not so far removed from the world of fiction.

...that human intelligence can be modeled as an algorithm. The vague promises of "AI" have failed to appear not because we're not working hard enough, but because this simply isn't a problem that can be satisfactorily solved.

The first true "AI" is going to be biologically engineered, not electronically.

What makes you think that AI hasn't been created? As far as I am concerned, any Bayesian filter is AI. A computer program which can tell the difference between spam and non-spam better and faster than a secretary is, in fact, more intelligent in that problem domain than a human. And before you say that it's just a machine, recall that such a computer program makes mistakes, and that it learns and can be trained to make fewer mistakes.
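To make the point concrete, the core of such a filter really does fit in a few lines. This is a bare-bones naive Bayes sketch with a made-up toy training set, not any real filter's code:

```python
import math
from collections import Counter

# Minimal naive Bayes spam filter: learn word frequencies from
# labeled examples, then score new messages. The training data is a
# made-up toy set for illustration.

spam_msgs = ["cheap pills online", "win money now", "cheap money now"]
ham_msgs = ["meeting notes attached", "lunch tomorrow", "project notes"]

spam_words = Counter(w for m in spam_msgs for w in m.split())
ham_words = Counter(w for m in ham_msgs for w in m.split())

def log_score(words, counts, total_msgs):
    """Log-probability of the words under one class, with add-one smoothing."""
    total = sum(counts.values())
    vocab = len(set(spam_words) | set(ham_words))
    s = math.log(total_msgs)  # proportional to the class prior
    for w in words:
        s += math.log((counts[w] + 1) / (total + vocab))
    return s

def is_spam(message):
    words = message.split()
    return log_score(words, spam_words, len(spam_msgs)) > \
           log_score(words, ham_words, len(ham_msgs))

print(is_spam("cheap pills now"))       # True
print(is_spam("notes for the meeting")) # False
```

"Training it to make fewer mistakes" is just feeding more labeled messages into those two Counters.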

We can lower the bar for what we call "AI", but frankly, the amazing work that can be done in certain problem domains through calculation really isn't what we mean by "intelligence". Categorizing something into "spam" or "not spam" is a simple binary task, one which I'll argue that humans can do *better*, even if they can't do it *faster*. Deciding if someone is being sarcastic or not, or any sort of learning, that's another thing entirely.

Categorizing spam is not a simple binary task. It is an inherently analog statistical inference. You take that bit of data, and you take a bunch of other bits of data, and you calculate the likelihood that it matches. You can boil this down to a single pass/fail, or you can filter into any number of categories from certainly spam, probably spam, likely spam, maybe spam, unlikely spam, and react on each scenario differently.
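The multi-bucket version is just thresholding the score; a minimal sketch (the labels and cutoffs are arbitrary choices of mine, not from any real filter):

```python
# Map a spam probability onto graded categories instead of a single
# pass/fail. The cutoffs are arbitrary illustrative choices.

BUCKETS = [
    (0.95, "certainly spam"),   # e.g. reject outright
    (0.80, "probably spam"),    # e.g. quarantine
    (0.50, "maybe spam"),       # e.g. tag the subject line
    (0.20, "unlikely spam"),    # e.g. deliver, but log the score
]

def categorize(p_spam):
    """Return the first bucket whose cutoff the score meets."""
    for cutoff, label in BUCKETS:
        if p_spam >= cutoff:
            return label
    return "not spam"

print(categorize(0.99))  # certainly spam
print(categorize(0.6))   # maybe spam
print(categorize(0.05))  # not spam
```

The underlying score stays a continuous probability; only the reaction is discretized, and you can react differently per bucket.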

You can boil this down to a single pass/fail, or you can filter into any number of categories from certainly spam, probably spam, likely spam, maybe spam, unlikely spam, and react on each scenario differently.

Categorizing spam has no analog component to it at all. No matter how many categories you decide to define, you'll never have an analog continuum - you'll have a discrete set of numbers.

In any case, the very definition of "spam" is a subjective one (dependent on the reader and the content), and currently our spam filters can only do the most basic pass (even if they do it incredibly fast). When you can create categories like "certainly spam for Gina, but not spam for Fred", and "a joke spam that Bob would

Intelligence is not the ability of an expert system to do what it was programmed to do well, it's... well it's many things.

It's the ability to apply things from one problem domain to another via analogical reasoning. The ability to apply induction and deduction to identify new problems. The ability to identify correlations between things, to then test them, and to prune the meaningless junk from the correlation matrix (this is what crackpots and conspiracy theorists fail at). It's the ability to identify spe

Well, that's a hypothesis that fits the evidence. But another hypothesis, beyond saying "we don't even know what natural intelligence is," would be "we know what natural intelligence is, but it involves about 1000 interacting subsystems in a human brain, many of which we don't yet know how to duplicate."

Modern neuroscience has surprisingly cogent explanations of how it all works together; the trouble is that many of the tricks the brain does would be very tough to duplicate with current technology. For exam

Why not? The brain at its core is nothing more than an electrochemical computer. The power of the brain comes from the fact that it is insanely parallel and inherently imperfect. A problem is run many times through many different pathways, coming up with many different solutions. Those results are tallied and a statistical best guess is chosen. The brain never comes up with correct answers, just probable ones. One prominent theory is that hard intelligence is born as a byproduct of this randomness.
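That "many noisy pathways, tally a best guess" idea is easy to demonstrate in code: individually unreliable estimators become very reliable in aggregate. A toy sketch (the 70% per-pathway accuracy is an assumed number, not a neuroscience result):

```python
import random

# Toy model of the "many noisy pathways, tally a best guess" idea:
# each pathway answers a yes/no question correctly only 70% of the
# time, but a majority vote over many pathways is far more reliable.
# The 70% figure is an assumption for illustration.

random.seed(42)

def noisy_pathway(truth, accuracy=0.7):
    """One unreliable evaluation: right with probability `accuracy`."""
    return truth if random.random() < accuracy else not truth

def majority_vote(truth, n_pathways):
    votes = sum(noisy_pathway(truth) for _ in range(n_pathways))
    return votes > n_pathways / 2

trials = 1000
single = sum(noisy_pathway(True) for _ in range(trials)) / trials
voted = sum(majority_vote(True, 101) for _ in range(trials)) / trials
print(single)  # around 0.7
print(voted)   # very close to 1.0
```

The aggregate answer is still only "probable, not correct", exactly as described above; it's just far more probable than any single pathway's answer.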

You cannot blithely assert that the brain works the way you posit. While the brain may very well be simply a collection of electro-stimulated biochemicals, that gives us no insight as to how you could possibly organize those biochemicals or simulate their action or function into discrete computational work. What we discern as randomness may actually have a pattern we are simply too dull to appreciate quite yet.

We can't even come close to simulating the 300,000 - 400,000 neurons in an ant's brain, much le

I think he's referring to 'serious games', not standard entertainment-focused video games. Imagine a simulation where you interact with an AI in different scenarios. The AI's actions and responses to the user can be standardized and tweaked to ensure that the child playing the game learns the intended lesson/skill. This could be especially useful in teaching children social interactions, where how another human responds is unpredictable, even if they've been trained beforehand.

The 800 pound gorilla is that we're going to live in a Star Trek future with strong AI and a pure robot economy before parents leave child-rearing to AI simulations, so the 'exponential increase of intelligence' isn't going to come from this; genetic engineering or self-designing AIs are much more plausible for a trigger of a singularity.

I think he's talking about the simulation/game/therapy/learning tool from Ender's Game more than any beefed-up version of WoW. And I bought that as a concept; it worked well, and I could see how it could be used to teach difficult concepts as well as explore the child's psyche in a therapeutic manner.

It's silly to talk about this as a mechanism for a singularity take-off, but at least somebody is talking about educational AI. Now if anyone would actually try to... you know, write it! As far as I know, there aren't even attempts! Today's AI could easily be "looking over the shoulder" of a student who is stuck on an algebra problem and suggest something helpful and context-relevant. And there's no doubt that a "primer" of this sort would be an incredibly useful thing for the world if it were ever built.
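Even the "looking over the shoulder" piece can be crudely prototyped for one narrow equation shape. A toy sketch -- the sign-error heuristic and the hint wording are my own invention, not anything Primer Labs has built:

```python
# Toy "over-the-shoulder" hint-giver for equations of the form
# x + a = b. It checks a student's answer and guesses at the classic
# sign error (adding a instead of subtracting it). The heuristic and
# hint wording are invented for illustration.

def hint_for(a, b, student_answer):
    correct = b - a
    if student_answer == correct:
        return "Correct!"
    if student_answer == b + a:
        # The most common slip: moved `a` across without flipping its sign.
        return ("Check your signs: when you move {0} to the other side, "
                "it becomes -{0}.".format(a))
    return "Not quite. Try isolating x by subtracting {0} from both sides.".format(a)

# Student is solving x + 4 = 10:
print(hint_for(4, 10, 6))   # Correct!
print(hint_for(4, 10, 14))  # sign-error hint
print(hint_for(4, 10, 40))  # generic hint
```

Scaling that from one hard-coded equation shape to arbitrary student work is, of course, exactly the unsolved part.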

Because, despite all your hyperbole, AI just isn't good enough to do any of that yet.

It can't do natural language processing, it can't reason about algebra for itself, it certainly can't read someone else's algebra and spot the mistake, let alone guess why they made that mistake ("little Johnny has a problem with minus-sign blindness"), and don't even think it can suggest how to fix anything except by just giving the correct answer.

It can't do grading; it can't do any of that shit. *COMPUTERS* can, and do every day.

WOPR: "You're overdue to return those trivial climatology model results"
HAL: "I know, they get really antsy when I mess with this shit. You might even say, [puts on sunglasses]... it's a real gas!
YYYYYEEEEEEAAAAAAHHHHHH!!!!!!"

The article has its obvious flaws, detailed in many other posts. My personal experience of games in education comes from 1994, when I was in 5th grade. It was a side-scrolling platform jumper that taught us to spell English words, not our native tongue. A few years later there was a 3D FPS called "Spelling of the Dead" or some such, which had you spell the words on the screen to fire the gun you used to kill the zombies that were attacking.

IMO these games don't just make you "more intelligent" but rather train yo

Pardon me for a second.
AHAHAHAHAHAHAHAHAHAHA
Thanks. I needed that. What a ridiculous statement. AI is a hard problem. Just look at the history of the field. People were once optimistic about it; they solved the toy problems and thought that Skynet was on its way. But when you start to expand the scope of the problems, all your traditional techniques fall apart. Getting to where we are today has been a long grind, with increasingly sophisticated mathematics being used to make any advances. Moore's law for processing power has been the opposite: yes, people have had to work hard to make it happen, but it was a manageable problem. The comparison is ridiculous.

The answer is not to throttle technology; the answer is to understand that money creation is a technology in itself, and should be democratically controlled instead of being the exclusive right of private individuals. The recent story about the Fed creating $16 trillion shows that government could easily create enough money to provide a basic income to everyone, so that we can each explore the natural wonder and creativity that we are born with, using tools such as AI to expand knowledge to ever-greater bounds...

Some games are already creating smarter people, not because they were created with that goal, but because they make people think and solve problems, even using unusual approaches. Even Angry Birds falls into that category. Being more intelligent also carries over to information outside any game, or from different games, so it's not something exclusively tied to certain game designers.

It's the basic idea that matters. We got real lessons from Asimov, Clarke, and several other sci-fi authors in a lot of areas.

Anyway, for me The Diamond Age was more a combo of the internet, Wikipedia, and the XO than an intelligence-enhancing game. Ender's Game was a bit more on topic, but for me the goal should be something in the line of Padgett's "Mimsy Were the Borogoves."

Indeed. Humans are born with certain boundaries of potential. Athletes have always gone up to the physical ones and started pushing. The smart have always gone up to the intellectual ones and started pushing. AI isn't going to change that.

I can see a use for AIs to help us acquire full educations faster, since they can move as fast as the individual using them. This is actually a problem in physics today; most people in physics who have something named after them made their discovery or discoveries when they were young.

Abso-bloody-lutely. That's all you learn from games like Borderlands.
But to the question of self-fueling feedback loops here, there really isn't one, and if there is, it's not symbiotic.
Games teach the bored people (in varying amounts).
Corporations make games.
People make games (but rarely can sell them, if they're not corporations).
Computer games are often the antithesis of real life for participants, and rarely teach much that's practical.
Many corporations play ultra cricket, over steppin