Web 3.0

Saturday, September 19, 2009

How To Save The Human Race

I am a child of the "nuclear age," and I have lived my whole life with the ongoing fear that missiles from Russia would leave me only 8 minutes to enjoy what remained of it. Fortunately that has not happened so far. However, despite the brief respite we got from the Cold War after the break-up of the Soviet Union, in which we almost cheerfully got to focus our terror on lesser catastrophes such as biological weapons and other non-nuclear apocalyptic futures, the threat of nuclear war never really left the horizon (or at least my mental landscape). Now, to compound matters, we have the new and very real threat of a global catastrophe driven by a variety of environmental and climate-related forces: global warming, overpopulation, honeybee and bumblebee colony collapse, overfishing, and schools of jellyfish taking over the world (sadly, that last one is only half a joke).

It seems to me that the only way out is smarter humans, and fast. However, we may be at an event horizon where solving the fast-approaching problems that could easily wipe us out, or at least set us back thousands of years, requires a level of intelligence that no single human has. It's a rough, non-scientific assertion to say that the human mind has only so much processing power, but I feel it's a valid one. Sure, there are drastic differences between the IQ levels of members of our society, but if you adopt the viewpoint of a computer scientist for a moment, it stands to reason that the number of neurons and interconnections in a single human brain must impose some upper limit on its pattern recognition and manipulation capabilities. For example, take the old saw that people can hold about 5 to 7 disparate concepts or symbols in their head before they begin to have trouble maintaining that set of internal objects. It is equally possible that there is an upper limit to the multi-dimensional patterns a single mind can hold, especially when it comes to the brain power required to internally animate the connections between the objects that comprise those patterns; an activity vitally important to mirroring a complex process so you can analyze, predict, and manipulate it.

Therefore, the only way out of our oncoming, massively complex problems will be to create smarter people. However, evolution is too slow, and the future of artificial intelligence is quite murky in my opinion because, as a famous man once said, "if the mind was simple enough for us to understand it then we would be so simple we couldn't". This raises the question: what is a knowable path to smarter humans based on existing technologies, even if those technologies are in their infancy? One major fallacy many people carry is that cyborgs are centuries away because we would first have to understand the workings of the human mind in order to build the hardware necessary to connect to the human brain. I never believed that, and now researchers are discovering that neurons don't need to be told how to solve a problem; as I'm sure you know, they do that all by themselves. All they need is a set of signals and a feedback loop, and they will do the rest. This means it is very possible that some neuroscientist asking the right questions will create a viable link to the primary visual cortex within the next ten years. We already have monkeys operating robots by thought, albeit performing very simple operations. With advanced MRI imaging and a neural decoder box full of high speed chips running evolutionary algorithms, it is feasible that the human mind could "teach" the neural decoder how to interface with its primary visual cortex in an important and powerful way.
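
To make the evolutionary algorithm part of that idea concrete, here is a toy sketch in Python. Everything in it is hypothetical: the "feedback score" stands in for whatever reward signal the brain and the decoder box would exchange, and the decoder itself is reduced to a simple vector of weights.

```python
import random

def feedback_score(weights, target):
    """Stand-in for the brain's feedback signal: how well the
    decoder's current weights match some desired response
    (0 is perfect, more negative is worse)."""
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve_decoder(target, generations=200, pop_size=20, sigma=0.1, seed=42):
    """A minimal (1 + lambda) evolutionary loop: mutate the best
    decoder so far, keep any child that scores better on the
    feedback signal, repeat."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in target]
    best_score = feedback_score(best, target)
    for _ in range(generations):
        for _ in range(pop_size):
            child = [w + rng.gauss(0, sigma) for w in best]
            score = feedback_score(child, target)
            if score > best_score:
                best, best_score = child, score
    return best, best_score

if __name__ == "__main__":
    weights, score = evolve_decoder([0.3, -0.7, 0.5])
    print(weights, score)
```

The point is not the specifics but the shape of the loop: mutate, measure feedback, keep what works. That loop needs no theory of how the brain encodes anything, which is exactly why the "we must understand the mind first" objection fails.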

Once this is done we can begin to tackle the larger problem of creating the first human cluster mind, in which people could collectively imagine different parts of a much more complex problem than any of them could handle alone. Each individual would be much smarter too. Silicon-based memory could augment a user's primary visual cortex by maintaining objects or symbols beyond the usual set of 5 to 7, allowing them to solve problems of a complexity far beyond their unassisted mental capacity. What is the IQ of 5 people linked together in real time, each maintaining and dreaming a piece of a much larger pattern, sharing data directly without the encumbrance of translating images into words, with full, direct, non-verbal access to the vast databases of the world? What is the IQ of 10 such people, or of 100? The Zen parable that ends with "Am I now a man who dreamed of being a butterfly, or a butterfly now dreaming of being a man?" would take on concrete form as people woke from a powerful dream state, where they could see, solve, and understand incredibly complex patterns, only to feel that understanding slip away when they unplugged until the next time they reconnected. Would they ask, after unplugging for the day, "Am I now a human who once dreamed of being a god, or a god now dreaming that I am a human?" Note that I am not using "god" in a religious sense, or to mean what some people would call The Almighty, but in the sense of someone who would appear to us to have superhuman intelligence.

Yes, there are massive logistical and medical problems to solve, everything from preventing runaway feedback loops to designing the overall coordinator module needed to manage the attention sub-systems of each participant, but I bet you can see the path, compared to other solutions which are still mostly the stuff of sci-fi writers.

Is this the right path? Is it a good path? I don't know. But compared to the alternative, which is watching the world descend into chaos, despair, and destruction, I'll take it if I have to. Besides, the cell phone revolution has shown us that if there's one thing all living beings love to do, it's to connect.


Sunday, March 02, 2008

In a thoughtful piece on the upcoming personal robotics revolution, author Clem Chambers hypothesizes in a new Forbes article that the brake on the revolution is the current inability of robot tinkerers to easily get at the brains of their robots. I enjoyed the article and agree with this sentiment, but I want to point out another pair of conditions that, once satisfied, will be the starting gun for the revolution.

What's Holding Us Back? - PC Control And A Common Base Of Standards

Clem indicates that the trigger for the revolution would be open access to the brains of the robots. That is indeed one solid path. But there is another, equally if not more potent path: the stock integration of the transmitter/receiver hardware necessary to communicate with a personal computer. Combine that with a standardized protocol for robot-to-PC communication and you can offload many interesting processing tasks to the owner's personal computer; tasks such as advanced object recognition through CPU-hungry vision, sound, and other sensor data analysis, or tasks like robot character personality evolution and adaptation. This is the direction companies like Hanson Robotics are taking with their upcoming character robot Zeno. Virtually every home that has a personal robot has a personal computer connected to the Internet, and it is much easier to upgrade your personal computer than for the robot manufacturer to reconfigure and upgrade the robot's hardware. The problem now is that the command transmission method for almost all consumer robots is the vastly cheaper infrared medium instead of a more expensive WiFi connection. This makes it difficult to have the robot roam free in the home while communicating with the home computer, since infrared is a "line-of-sight" technology. Two currently unreleased robots that will have WiFi bridges are the WowWee Rovio robot and the Meccano Spykee robot and its siblings.

Once WiFi is standard, protocols like TCP and UDP, and even higher-level web standards, can be utilized. However, a common standard for most of the core command and data transfer operations, required to control the robot remotely and receive sensor data from it, is still necessary for things to truly take off. Think of the effect Microsoft's competition-crushing operating system Windows had. On one hand, many of my favorite software companies were eliminated; on the other, it forced standards on all developers that rapidly advanced the state of application development for personal computers.
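
To illustrate what such a common standard might look like at the lowest level, here is a minimal sketch of a hypothetical robot-to-PC wire format in Python. The opcodes and frame layout are my own invention, not any real or proposed standard; the point is that a fixed, documented frame is what would let any PC program drive any compliant robot over WiFi.

```python
import struct

# Hypothetical opcodes; a real industry standard would define these.
CMD_MOVE, CMD_SPEAK, CMD_SENSOR_REQ = 0x01, 0x02, 0x03

# Frame layout: 1-byte opcode, 2-byte payload length (network byte
# order), then the payload bytes themselves.
HEADER = struct.Struct("!BH")

def encode_frame(opcode, payload=b""):
    """Pack a command into a wire frame the robot can parse."""
    return HEADER.pack(opcode, len(payload)) + payload

def decode_frame(frame):
    """Unpack a wire frame back into (opcode, payload)."""
    opcode, length = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return opcode, payload

if __name__ == "__main__":
    frame = encode_frame(CMD_MOVE, b"forward:50")
    print(decode_frame(frame))
```

Such frames could travel over TCP for reliable commands or UDP for streaming sensor data; once the frame format is agreed on, the transport is an implementation detail.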

With true WiFi and a solid base of application development protocols to work with, both home users, with the aid of sophisticated software interfaces, and developers alike will be able to create cool applications that currently have to be kludged together. I say this from experience, because two of my most popular consumer robot "wonder kludges" involve a software program I wrote called Robodance, which runs on Windows PCs. Robodance allows WowWee and Tomy/Takara (i-SOBOT) robot owners to control their robots via voice commands, including the ability to control them remotely via Skype's video call service. They can also build scripts with a drag-and-drop editor and share them with others. Finally, they can control their robots with the Nintendo Wii Remote control. In fact, i-SOBOT owners can use arm gestures to control their robots, courtesy of the WiiMote's motion sensors. I've included some reference links below so you can see demonstrations of the technologies involved. The point is that software running on a PC can greatly expand the capability of a consumer robot without the need to modify the robot itself.

But until WiFi is stock on personal robots, along with a stock video camera and microphone, and a standard exists for PC-to-robot communications, programs like my Robodance will be hard to find and difficult to create. Once those two industry conditions are satisfied, Robodance will become part of a wave of PC-based robot control programs. Remember, with WiFi all the home user needs to complete the bridge is an everyday wireless access point on their home WiFi network. At that point the PC-to-robot link will be invisible to them. The robot will simply appear smarter and more capable, and *most* importantly, it will continue to evolve in features and intelligence over time without any intervention required by the home user. This will be facilitated by frequent software downloads from companies offering "robot intelligence maintenance" subscriptions (think of the way you get browser, operating system, and other updates transparently in the background on your personal computer). There will also be deluxe web servers that combine learning across users' robots in real time and deliver advanced intelligence improvements by communicating directly with your home computer; no downloads required! Once we reach this stage, it will be breathtaking to watch the personal robotics industry take off in an explosive manner.


Saturday, February 16, 2008

Here's a post I made in reply to a Slashdot story about Ray Kurzweil's prediction that we will have human level A.I. by 2029:

Whenever I see stories like this, and the usual negative rebuttals that follow, I wonder if I am the only person who read Asimov, Clarke, Crichton, Roddenberry, Heinlein, and many others. I am starting to believe the dismissals come because we feel we have "dealt" with the bogeyman of "truly aware" A.I. now that it has been confronted handily by Hollywood via The Terminator and its ilk. In the same way that it was almost comforting to embrace the dark specter of biological terrorism as a pleasant relief from the more real and closer danger of nuclear destruction, focusing on the dawn of A.I. is a relief from the true technological tsunami heading our way.

In the midst of all this talk of pure A.I. is the real, steady progress being made in hooking mammalian brains to computers. So far it is confined to the safe yet icky domain of direct control over robots and other advanced prosthetics, but it is the door to a scenario bigger and more powerful than the "birth of A.I.", to reference The Matrix. What people fail to understand is that we will make huge progress in this area, much faster than in purely silicon A.I. Why? Because, unlike with pure A.I., we don't have to understand how the mind works to reap powerful benefits from hybrid A.I. Neurons by their very nature analyze and adapt to patterns and signals; they just need to be connected and protected.

The most disruptive, mind-numbing change heading our way will come when human brains can connect with each other over a digital conduit like the Internet. What happens when I can expand my consciousness to maintain far more than the average capacity of 4 to 7 active symbols in my mind by harnessing the brain capacity of others on a shared peer-to-peer neuronal network? What powerful meta-consciousness will form when your mind can directly alter a visualization held in real time by another; group dreaming, as it were? Or by 10 minds, or a thousand? When we unplug, if we ever do, will we feel as if we woke from a greater, more powerful, and more majestic dream that evaporates as soon as we disconnect, because our minds, by themselves and in comparison, are too tiny to hold the more complex patterns a mind cloud can handle? Perhaps feeling like a butterfly who was dreaming he was a man, now awake and relegated back to simple thoughts of procreation and feeding, to paraphrase Zen?

In closing, what problems, now intractable to any single human due to their complexity and scope, will fall astonishingly quickly to the power of a million minds focused like a laser on their solution? Please don't take the laser analogy lightly. Right now all of us (as any computer programmer knows all too well) are recomputing and re-solving billions of thought problems that are complete duplicates of one another. What happens when all that duplication is virtually eliminated and our minds in unison each take one small slice of a much larger problem and tear it to pieces? Heaven or hell, you decide; but it's coming a lot sooner than any of us think.


Friday, November 09, 2007

In a recent compelling and controversial web article, Lance Ulanoff of PC Magazine tackles the attitudes of the American consumer toward complex robotics. In the article, where he also surveys the current basket of American consumer robots and others worldwide, he goes into detail about the potential problems with our attitudes toward advanced robots: both our unrealistic science fiction "fantasy expectations" of them and our negative reactions to robots that begin to look and act too much like us. I sent him an e-mail explaining my thoughts and opinions on this important subject, and what I feel are some future repercussions of the upcoming mixture of human and robot psychologies. He suggested I publish that response. You will find it below.

Note: anyone who thinks I am promoting the future described below should know that I have definite fears we will plunge into such a future without the deep, careful planning it requires. And before you conclude that I'm personally inclined toward the predicted scenarios, perhaps you should think deeply about the fact that I don't even have a cell phone or a pager, and that statistically you do. Here is my response.

You have touched on several points that involve the future, and in my opinion there is no future ahead of us that does not get weird very fast. I'm using the word weird in a neutral sense; it could be wonderful or terrible, and that's up to us.

I feel the reason humanoid robots creep us out is our natural species competitiveness and defensiveness. The minute a robot can take our job, outfight us in a humanoid manner (think boxing, not guns, which we already accept are superior in combat), or make us obsolete intellectually, we feel threatened.

If androids and robots wipe out humanity, it will be because we start to choose them over other humans for love and companionship, not because of any Terminator scenario. We humans love a nice safe universe that tailors itself to our most hidden desires; hence the explosion in video games that offer virtual worlds. Now add to that phenomenon one of the more disturbing conclusions from a study that focused on the effect excessive viewing of sexual materials had on men. The study claimed that men are less likely, not more, to want real sex after viewing such materials, due to the perfection of the models' bodies compared to the real love partners of the men who participated in the study. How can a real woman compete with such beauty when it is often altered by surgery and enhanced by expert photo retouching? Can you imagine the effect on society of a physically perfect, obedient, tireless android love partner that modeled its "personality" to your every secret desire? Sex would move out of the bedroom and into the laboratory of the newly formed Department Of Preserving Humanity (note: there is no such department now). There would even be databases of downloadable techniques to upgrade your silicon love partner. How can a mere human compete?

Despite the fact that progress toward this future is being made all around us, very few people find a cyborg future palatable. I hate to use the word cyborg because it's laden with Hollywood inflections and a host of other irrelevancies. Our attitude of defensive competitiveness currently forces robots to conform to a harmless shape and substance (except for military robots, but remember we are talking about consumer robots). That attitude disappears if we begin to identify with the android. People are still too complacent with the idea that we are "years away" from merging with our machine counterparts, because they mistakenly believe that we have to understand much more about how the brain works before we can interface with it. They hold the false idea that we must duplicate, or at least completely model, brain function before we can work with it. Nothing could be further from the truth. Take any real wetware story available on the Internet, where real scientists attempt to merge neurons with silicon, and the awesome neuroplasticity of the human brain becomes self-evident. Neurons by their very nature process signals and identify patterns, and they don't care where the signals come from.

The only potential obstacles to the man-machine merger are political and social ones. But who is going to tell a poor quadriplegic that they can't link up with the Home Assistant 9000 for a chance at a better life? What politician wants that on their soundbite resume? The social obstacles are even more nefarious. There are people right now giving growth hormone to kids who don't need it. The kids aren't dwarfs; the parents just want them to be taller so they can have an edge in life. I'm appalled by this and would never do it to my children, if I had any. However, the impulse to have your child win at any cost is real and has far-reaching consequences when it comes to technology. Will little Johnny be sentenced to a life of menial jobs and huge social disadvantages if he can't link his brain up to the Universal Hypernet?

In this future there is no dividing line between us and the machine, so the current fears fade away. I pray that we are smart enough to anticipate and plan for the new and very real dangers that are innate to such a future. Is this the future we want? I don't know. But I know it's coming. Why? Because of a very disturbing yet ordinary event that happened to me some time ago.

While I was waiting for my order at a nearby sandwich shop, the girl making my sandwich was on her cell phone. She had been on it when I came in and was still on it when I went out. During that time another woman came into the store talking on her cell phone. She was asking the person on the line what they wanted on their sandwich and then, after receiving the answer and relaying it to the sandwich girl, started talking about a gamut of other typical things instead of hanging up. She never hung up. She exited the store still on her phone. The realization I had was astonishing: two people in the same room had conducted an entire business transaction connected via cell phones to phantom voices far away, never really connecting to anyone else in the physical world around them, while a ghost on the periphery of their perception watched them in shock.


Monday, December 11, 2006

Web 3.0 - The Bridge To The Singularity

Introduction

Web 3.0 is the advent of a brave new paradigm in which we will interact and solve problems together through a network of A.I. assistants. This is Part 3 of “The Web 3.0 Manifesto”, in which I describe how we will all become integral cognitive nodes of the Internet, cooperating in real time to provide key knowledge, logic, and pattern recognition capabilities for each other in breathtaking new ways that were not possible before Web 3.0. I will use a story to introduce some of the core technologies and protocols that will comprise Web 3.0. In later articles I will discuss some of the artificial intelligence technologies and methods that will facilitate this new generation of the Web.

Preface To The Story

In the future, user and programmer alike will be integral participants of what is commonly referred to as the Cognitive Abstraction Layer or CAL. The CAL is a mixture of machine and human intelligence, supported and woven together by new Web 3.0 protocols and technologies, and designed specifically to facilitate problem solving between people acting anonymously in a dynamic cooperative manner. This new paradigm for anonymous cooperative problem solving will set the stage for greatly advancing the current state of our combined technological prowess, raising it to the level needed to begin work on the A.I. supercomputers that will mark the beginning of the Technological Singularity.

Note: by the time Web 3.0 appears we will use many different new interfaces to communicate with computers. However to keep things simple in this story, Jonathan talks to his computer and his computer talks back. Naturally, his computer’s name is HAL.

Story: A Day In The Life Of Jonathan Byte

[Scene opens: Jonathan Byte, an elite Web 3.0 programmer, is in his room, ready to begin his workday.]

HAL: Hello Jonathan.
JON: Hello Hal, work configuration please.

[The computer wipes away the 3D World Of Warcraft 23 virtual playing field that fills the room in mid-air, rendered with the latest 3D projection technology, and replaces it with the Dynamic Marketplace interface to the Cognitive Abstraction Layer. Jon spreads out several floating 2D screens into different areas of the room to get a better view of today’s offerings. He does this with hand gestures, using the Nintendo Wii Reality Bridge, an interface much like the one seen in the science fiction movie Minority Report.]

HAL: You’re in luck. There are a large number of diagnostic requests in the marketplace today. Would you prefer locked or open bid offers?
JON: I’m feeling adventurous today. Show me open bid offers first, but only those that have a 90% probability of attracting no more than 5 competing programmers.

[In an open bid, any programmer can compete in real time for the job in a winner-take-all fashion. The first programmer that solves the problem or finishes the design task takes the specified payment with the others getting nothing. A locked bid offer is a request for programming help where the submitter commits to pay the subscribing programmer a specified fee, as long as the work is completed in the allotted time. Locked bids are lower risk and therefore pay a lot less.

Open bid jobs are what keep Web 3.0 commerce moving at maximum speed and make many programmers rich in a short time. By examining the history of each programmer before allowing them to compete, the CAL ensures that the programmers competing for an open bid have disparate problem-solving approaches to the specified task request. This guarantees the healthy competition between ideas necessary to find an optimal solution, while avoiding wasteful effort that is truly redundant and varies only in a trivial manner. With a little help from HAL, the jobs Jonathan usually prefers are brought to his immediate attention.]

JON: Nice one Hal! I see you’ve found some diagnostic job requests that require a level 10 visual pattern recognition expert like myself, but don’t require extensive domain specific knowledge. Yesterday I had a task that required me to learn about intercity air traffic coordination before starting the job. I lost nearly a half hour of billable time reading about that stuff!
HAL: I’m always happy to help Jon.
JON: Pull up the job labeled “ADX-Omega 3” for me please, and quickly since it’s an open bid.
HAL: There will be a 5-minute delay before the Hypergeometric Data Visualizer (HDV) is available and prepped.
JON: Screw that. Jack up my HDV account to premium level 5 to get the time down to 30 seconds. I’ll pay the additional cost.
HAL: Based on previous experience you have a 75% chance of winning this bid. Do you want to confirm the HDV premium usage fee?
JON: Confirmed, now move it, the clock is ticking!

[As part of Web 3.0, one of the most important underlying services is the Hypergeometric Data Visualizer, or HDV. This technology is a breakthrough in program flow control and data traffic analysis. The HDV global servers can break down the N-dimensional flow patterns of any software program, even if the application spans a large number of different Web 2.0 style servers and user computer systems, and then quickly convert the application into a series of 3-dimensional moving displays digestible by the human eye and mind. Payment for access to the servers is negotiated in real time via the Dynamic Marketplace interface.]

HAL: The HDV is ready; do you want the usual colored wire frame display with filtering of non-relevant web elements and automatic motion freezing during important data traffic collisions?
JON: Yes Hal, thank you.

[Jon has chosen to do a diagnose and debug job today instead of a design task. A new set of screens spring into mid-air around Jon. It’s a wire frame representation of the program the user has assembled during complementary design sessions with their A.I. computer assistant. It resembles a giant city rendered in wire frame view like that in a graphic design package, except this city is alive with color-coded motion at a speed suited perfectly for Jon’s brain and visual pattern recognition capabilities. As per Jon’s personal taste, faster moving data traffic is being displayed as blue packets and slower ones are shaded red with a gradient of colors in between. These colored blobs of light whiz briskly along the wire frame city’s connecting pipes. With the help of their A.I. assistant the user has done a decent job of molding the application they desire, but there’s a problem. Under certain critical conditions the application fails. This is where Jon comes in.]

JON: Interesting. Hal, do you see the congestion in the blue area at Zeta (4, 7, 92) where the data packets are coming in from the yellow pipes?
HAL: Yes I do.
JON: Simulate a traffic load that is double the amount currently coming in from the yellow pipes in that sector.

[Immediately the activity from the yellow pipes speeds up and the packets change from a soft green to a troubling bright blue. The receiving orange spherical nodes, which represent servers processing the data, are pulsating in an unhealthy manner indicating that they can’t keep up with the traffic. What Jon doesn’t know, and doesn’t need to know, is that the yellow pipes are live sales data coming from cash register terminals in Spain that feed servers in the user’s Italian headquarters. It is now obvious that a surge of traffic coming from those feeds, due to periodic major sales promotions, is more than the company’s servers can handle. Jon will never know this, but the user is an Italian manufacturer with retail affiliates in Spain.]

JON: That’s it! When there’s a peak surge of traffic from those feeds the application’s receiving servers overload. Quick, recommend a doubling of capacity for those servers with a general suggestion to the user to stagger the data coming in from the yellow pipes in that sector.
HAL: Done! I’ve locked the solution and the user’s A.I. assistant has confirmed the solution as viable. You won the open bid and you just made $3000 in 15 minutes. I believe that’s a new record for you Jon.
JON: We make a good team Hal.
HAL: Thank you Jon.

[End Of Scene]

Notes On The Story

Many simplifications were made in telling this story. The data could have been shown as a topological map, a 3D audio soundscape augmented by touch (haptic) peripherals, or in any number of other ways. Rather than data traffic packets, it could have been CPU threads, parallel execution paths, database replication patterns, or anything else. In addition, the real-life data traffic patterns perceived by Jonathan would be far more complex than the simple e-commerce scenario given.

From The Abstract To The Concrete And Back Again

An important linguistic task performed by Web 3.0 A.I. assistants is the conversion of abstracted concepts back into the domain-specific language appropriate for a particular user. Jonathan’s recommendation regarding “the data coming in from the yellow pipes” would be translated into the concepts and ideas that pertain to the user’s actual application. For example, his suggestion to “stagger the data” might result in the user’s A.I. assistant suggesting that the Italian company schedule upcoming sales events with enough time between them so as not to overload the point of sale equipment. Semantic Web software agents would handle this translation task, but they would not be the ultra-powerful A.I. systems we will someday have when the Singularity arrives; Web 3.0 will make sure we don’t have to wait that long. These agents would be assisted in real time by human helpers shunted in to handle the difficult parts of the translation, with the salient parts of each translation catalogued for reuse in future machine-to-human conversations.
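
As a toy illustration of that translation step, assuming the user's A.I. assistant maintains a per-user glossary (the glossary entries below are invented for this example), the simplest possible version is a substitution pass; real Semantic Web agents would of course do far more:

```python
# Hypothetical per-user glossary mapping Jon's abstract terms to the
# Italian manufacturer's actual domain concepts. Longer phrases come
# first so they are replaced before any sub-phrase they contain.
GLOSSARY = {
    "stagger the data coming in from the yellow pipes":
        "space out the sales promotions that feed the point-of-sale terminals in Spain",
    "receiving servers":
        "order-processing servers at headquarters in Italy",
}

def to_domain_language(recommendation, glossary):
    """Rewrite an abstract recommendation in the user's own domain terms."""
    for abstract, concrete in glossary.items():
        recommendation = recommendation.replace(abstract, concrete)
    return recommendation

if __name__ == "__main__":
    print(to_domain_language(
        "Double the capacity of the receiving servers and "
        "stagger the data coming in from the yellow pipes.", GLOSSARY))
```

The glossary itself is the interesting artifact: it is exactly the kind of catalogued translation knowledge the paragraph above says would be saved for reuse in future machine-to-human conversations.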

The Virtualization Of Human Knowledge In Real Time

The important point is this: once we “virtualize” the representation of web program execution and data traffic patterns, and then shunt in human intelligence at the very moment it is required to do the complex pattern recognition work computers currently can’t do, we will dramatically increase our ability to help each other solve problems and reduce the unnecessary duplication of effort that plagues our pre-Web 3.0 world. When the approach pioneered by Amazon with its Mechanical Turk service is enhanced by A.I. and built into the very fabric of Web 3.0, it will advance our collective problem solving capability to a vastly more powerful level, allowing us to truly leverage the skills and talents of every individual and distribute those skills at light speed on a global scale.

Global Leveraging Of Human Talent

Right now there are web designers who are forced to program and programmers who are forced to do user interface work; work that neither party wants to do and that the other could handle better. It's not hard to extend that premise to millions of different jobs across every industry group there is. In addition, all of us are bogged down by the scheduling, management, and coordination of work between people, much of which is unnecessary duplication of effort and requires slow, painful interfacing during cumbersome meetings. Web 3.0 will eliminate much of that. We'll still have meetings, but they will be much more fun and creative. We're people; we will always like to meet.

One of the huge benefits of the virtualization of software development and the modularization of human intelligence will be the automated cataloging and reuse of solutions. Since all parts of the development process will be codified, the Cognitive Abstraction Layer can record the vital statistics of each problem solved and keep track of which people and skill sets were involved, what recommendations were made, and which solutions best suited the problem. Therefore, when a similar problem is encountered again, those solutions can be offered to other users and tried out both in simulation and in real time. In addition, job notifications will be automatically routed, via A.I. assistants, to the people with the most relevant skill sets.
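The cataloging and routing just described can be sketched as a simple data structure. This is a hypothetical illustration: the record fields, the tag-overlap ranking, and the routing rule are invented assumptions, not a specification of the Cognitive Abstraction Layer.

```python
catalog = []  # each entry: problem tags, skills involved, and the solution

def record_solution(tags, skills, solution):
    """Record the vital statistics of a solved problem for later reuse."""
    catalog.append({"tags": set(tags), "skills": set(skills),
                    "solution": solution})

def suggest_solutions(problem_tags):
    """Return past solutions ranked by tag overlap with the new problem."""
    scored = [(len(e["tags"] & set(problem_tags)), e) for e in catalog]
    return [e["solution"]
            for score, e in sorted(scored, key=lambda s: -s[0]) if score > 0]

def route_job(problem_tags, people):
    """Notify the person whose skill set overlaps the problem the most."""
    return max(people, key=lambda p: len(set(p["skills"]) & set(problem_tags)))
```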

Closing Words

The new distributed problem-solving paradigm described in this article will be a quantum leap in reducing the duplicate effort currently occurring in our civilization, further accelerating the already exponential rate of technological progress. As I type this document, my word processor is making advanced grammar and structure recommendations in real time. In the near future a physics teacher, a poet, or a legion of other people with different skill sets, most of whom I will never meet, will help me create my next document. My A.I. assistant will present their knowledge to me at the exact moment it is needed and in the correct context.

I leave it to the reader to see how these core Web 3.0 advancements can be transposed and applied to many other fields well outside software development. In addition, a quick search on the Web for new computer interface methods will turn up exciting developments, such as brainwave scanners, to spark your imagination. In later articles in this series I will look at how artificial intelligence will evolve to enable these advancements and provide the supporting and binding elements of a brave new Web. I do not know how long it will take us to get from Web 3.0 to the first A.I. supercomputer that will mark the start of the Singularity. That is a question for someone with a much bigger brain than mine, someone like Ray Kurzweil or David Gelernter. I suggest watching the debate on machine consciousness they recently had to see these two great minds at work.


Friday, November 24, 2006

The Web 3.0 Manifesto - The Knowledge Doubling Curve

[This is part I in a multi-part series titled "The Web 3.0 Manifesto"]

PREFACE: I use the term Human Computing Layer or HCL in this article. What is the HCL? Us. It’s the oldest computing system on the planet and has been here since we began as a species. Read my previous Web 3.0 article that discusses Amazon’s Mechanical Turk service to understand how the HCL is becoming a functional and integral part of the Web and for some ideas on how the integration of the HCL and the Web will take form.

The Knowledge Doubling Curve

To begin this multi-part series on Web 3.0, I want to talk about the phenomenon driving the breathtaking increase in the rate of technological innovation. Several decades ago I read about a study in which researchers measured the time it took for the amount of knowledge in the world to double. They called it “The Knowledge Doubling Curve”. (Note: If anyone knows where I can find this article I would really like to know.) They came up with their own measuring system that, if I remember correctly, consisted of counting the total number of items in print at periodic intervals in recent history. They then graphed that figure over time to see how long each doubling took, and finally projected the graph into the future. By looking at the graph it became possible to see the rate at which knowledge was doubling over time. In the article they showed a graph like the one below:

I don’t remember the exact dates or the exact quantities for the number of items in print at each time event, so I can’t label the X and Y axes of the graph properly. Roughly, the bend in the graph corresponds to a decade somewhere near or after the 1960s. However, despite the lack of exact figures, the radical conclusion represented by this graph still holds:

Knowledge initially grew at a linear rate but is now growing at an exponential rate, which means the time it takes for knowledge to double keeps shrinking.

As you can see, the graph rises ever more steeply; the rate of growth keeps climbing toward infinity without ever reaching it. This curve of course closely tracks the rate of technological progress in modern civilization. (Note: There are some who would say that the Singularity occurs somewhere near the top of the graph. For a fascinating look at how the rate of technological innovation tends to increase exponentially, read any of the latest books or lectures by renowned futurist Ray Kurzweil.)
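The curve's key property, the shrinking doubling time, can be illustrated numerically. The sketch below uses an invented hyper-exponential growth model (the exponent itself grows with time), not the original study's data; it simply shows each successive doubling taking fewer years than the one before.

```python
def knowledge(t):
    """Toy hyper-exponential model: the exponent itself grows with time."""
    return 2 ** (0.0007 * t * t)

def doubling_times(values, years):
    """Return how many years each successive doubling took."""
    times, start = [], years[0]
    target = values[0] * 2
    for v, y in zip(values, years):
        if v >= target:
            times.append(y - start)  # years this doubling took
            start, target = y, v * 2
    return times

years = list(range(0, 101))
print(doubling_times([knowledge(t) for t in years], years))
# each doubling takes fewer years than the one before it
```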

The technological progress made in the last hundred or so years far outstrips all the progress made since modern man first appeared on this planet. It took centuries for the wheel, the spear, the bow and arrow, paper, and the like to become commonplace. But since Samuel Morse sent the first telegram in 1844, we’ve seen the light bulb, cars, the telephone, the atomic bomb, jets, radio, television, space travel, computers, nanotechnology, and the Internet. The speed of invention and innovation has increased so astonishingly that the list I just gave you is woefully incomplete.

What changed in the last 150 years or so? It is certainly not us. Being an American, I have read the writings of the forefathers of our country, who gave us one of the most eloquent and powerful documents in history, The Declaration of Independence. They were brilliant men who would easily tower, intellectually, head and shoulders above most of our contemporary politicians. Therefore the change is not related to an evolution of our DNA or a giant jump in the intelligence of mankind.

The Knowledge Duplication Curve

To understand the answer, let me share an insight I had about The Knowledge Doubling Curve. Look at the curve I plotted in the figure below:

As you can see, the curve is a mirror image of The Knowledge Doubling Curve flipped vertically. This curve is not based on any study; it is an intuitive explanation of the mechanics behind The Knowledge Doubling Curve. It shows that as technology advanced, the amount of time humanity wasted creating duplicate solutions at first declined linearly, and that after the bend, the rate of reduction became exponential.

When a caveman solved the problem of painting on a cave wall, his ability to transfer that solution to others was limited to a small geographical area around him. Paper was a giant leap forward because solutions could now be written down, copied, and spread to others far and wide. The printing press accelerated that spread by making the copying of written works, and the solutions they contained, much faster. But in the last 150 years, the speed of distribution and replication of solutions has reached a breakneck pace never seen before in the history of civilization.

Even in my short lifetime, I have gone from having to go to the local library to find a solution, or having to find and contact an expert via the telephone, to being able to download instantaneously an entire software package that is a complete solution to a problem or need I have. For example, if you want to have your own discussion forum all you have to do is download and take a few minutes to install a free forum software package like phpBB.

Connectivity

Now, what is driving the rapid decrease in the Knowledge Duplication Curve? Connectivity. The more connected we are, the less time we waste duplicating solutions. Let me highlight a few items from the list of recent technological advancements above: the modern telegraph, the telephone, radio, television, and now the Internet. Each was a quantum leap in connectivity, leading to a corresponding quantum decrease in the duplication of effort. I included radio and television because links do not have to be two-way; any efficient broadcast technology, even a one-way one, increases our connectivity.

Web Versioning Defined

Here is the definition for what constitutes a new version of the Web:

Any technological change that is a quantum leap in our ability to rapidly share solutions over the Web by providing modular reusable building blocks of functionality constitutes a version change.

Web 1.0 - Connected computers together using a set of standardized protocols (TCP/IP) co-invented by Internet pioneers Vint Cerf and Bob Kahn.

Web 2.0 - Marked by the appearance of Web Services which are modular solutions to complex problems, made available over the Web to external developers via an application program interface (API).

Web 3.0 - The marriage of artificial intelligence and The Human Computing Layer (HCL) and their subsequent integration into the Web, making powerful pattern recognition solving capabilities widely available to web surfers and developers alike.

Web 1.0 allowed us to share files, data, and software over the Internet.

Web 2.0 allowed us to share modular programming solutions to common problems, available via web API calls. This allowed, and still allows, outside developers to build software applications on top of these services without having to download or integrate foreign code libraries into their own software, greatly increasing the ease and pace of creating new software applications.
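As a concrete illustration of that Web 2.0 pattern, the sketch below composes a REST-style call and consumes a structured response instead of integrating a code library. The endpoint, operation name, and response fields are all hypothetical.

```python
import json
from urllib.parse import urlencode

def build_request(base_url, operation, params):
    """Compose a REST-style call to a remote web service."""
    return f"{base_url}/{operation}?{urlencode(params)}"

# The solution lives behind a URL, not inside our code base.
url = build_request("https://api.example.com/v1", "geocode",
                    {"address": "1600 Amphitheatre Pkwy"})

# The service answers with structured data, not code to integrate.
response_body = '{"lat": 37.422, "lng": -122.084}'
coords = json.loads(response_body)
print(coords["lat"])  # prints: 37.422
```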

Web 3.0 will allow us to share an entirely new class of solutions over the web, used both by developers and directly by users (web surfers) to build larger, more complex applications. Most importantly, these shareable solutions, with the help of artificial intelligence and the integration of The Human Computing Layer, will allow us to cooperatively solve a class of problems normally reserved for specialized applications in the areas of complex pattern recognition and high-level semantic analysis.

Conclusion

I will close this article with a word of hope and anticipation for the future. If you take a high energy beam of ordinary light and shine it at a thick piece of steel you get a nice reflection. When you take that same light and align the photons so they move together in lock step, they form a laser beam and you can burn a hole through that same steel. I leave you with this exciting question. What happens when we, the most powerful computing beings on the planet working together in superhuman harmony, turn our combined attention to the monumental problems that, to date, have evaded solution?

Coming soon…

In future articles in the Web 3.0 Manifesto series I will discuss further the shape and substance of Web 3.0, especially in regards to how artificial intelligence and The Human Computing Layer will cooperate and integrate with the Web. Thank you for reading this far and sharing some of your time with me.

For more thoughtful commentary on Web 2.0, Web 3.0 and the Semantic Web I strongly recommend reading Dion Hinchcliffe's recent blog post "Going Beyond User Generated Software: Web 2.0 and the Pragmatic Semantic Web". Pay special attention to his comments regarding "recombinant, self-assembling software that exploits collective intelligence". He does point out that the companies he mentions involved in this line of research are using good old Web 2.0 techniques, but I feel that this field of research will play a big part in shaping Web 3.0.


Saturday, November 18, 2006

Web 3.0. The recently coined term that has many in the blogosphere screaming "Stop the keyword hype!" and others waxing hopeful that the next wave of Internet progress is finally starting to percolate. Before the seed has even sprouted roots, already bloggers are asking "but will it make any money?". Donna Bogatin asks this very question in her blog post "Will Web 3.0 Be In The Green?".

It would be easy to write this question off as one that is far too soon to ask. But it appears that this question, and a host of others, will stick with us for decades to come, thanks to the irrational exuberance that poisoned the dot-com bubble. Hopefully, once the Web 3.0 bubble truly gets under way, and there definitely will be one, bloggers like her will remain the sober watchdogs who were missing from the tulipmania of the dot-com era. I fear that many of those now linkbaiting their blogs with early cries of foul against Web 3.0 will rapidly change course once the rampant euphoria begins to flourish. They will do so to sustain their linkbaiting, and because the euphoria that will accompany Web 3.0 will make the current Web 2.0 mania seem harmless by comparison.

Why do I make such troubling assertions now, especially when you consider that I am one of those looking with great hope to Web 3.0? Although many are looking at Web 3.0 as the next extension of the social networking technologies pioneered in Web 2.0, and others are dismissing it as a marketing ploy to rekindle interest in the Semantic Web, I feel there is a stronger theme that will drive Web 3.0 to explosive levels.

First I need to make a crucial point about what I feel will be the primary driver of Web 3.0. I will return to my cautions on the upcoming hyper-euphoria that will accompany Web 3.0 in a few paragraphs. Please read on.

I feel that Web 3.0 will be characterized and fueled by the successful marriage of artificial intelligence and the web. Artificial Intelligence? Isn't that the kool-aid that the Semantic Web community is drinking? Yes and no. The technologies considered pivotal in the Semantic Web are indeed considered by many to have their underpinnings in artificial intelligence. But, most of the Semantic Web projects I've seen are focused squarely on the creation of, and communication between, intelligent agents that do the natural language and topical matching work in a transparent manner, behind the scenes, without requiring human intervention.

This approach may eventually be viable but I feel that it misses a key ingredient of Web 3.0 that will finally bring artificial intelligence to the forefront. Currently the vast majority of artificial intelligence is embedded in various niche areas of commerce such as credit card fraud detection, or the speech recognition application that converts your voice to text as you dictate a document, etc. The reason for this of course is that we are still decades away from computers that will have the incredible and flexible pattern recognition capabilities of the human brain.

The reason Web 3.0 will lift artificial intelligence into the limelight is that it will fill in the technological gaps currently hampering the key uses of artificial intelligence. It will do so by shunting the parts of a problem that require a human being out to actual human beings, with the help of the web. And it will do so in a manner that is transparent, massively parallel, and distributed.

Amazon has taken a unique and innovative step into this area with their Mechanical Turk web service. Yes, I know this is the second time I’ve written glowingly about Amazon in regards to Web 3.0, but as a web service junkie you have to love what they are doing. The Turk service allows developers to shunt the parts of their applications that require human intervention out to a distributed pool of paid workers, in a manner that mimics a standard web service call. This creates a standardized platform for utilizing human pattern recognition capacity in a modular manner. Google is another company experimenting with something similar in their Google Image Labeler game. From the game page:
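The pattern the Turk service embodies, a unit of human work wrapped to look like an ordinary service call, can be sketched as follows. To be clear, this is not Amazon's actual Mechanical Turk API; every name and field here is invented for illustration.

```python
import queue

pending = queue.Queue()  # tasks waiting for a human worker

def submit_human_task(question, reward_cents):
    """Post a unit of human work as if it were a web service request."""
    task = {"question": question, "reward": reward_cents, "answer": None}
    pending.put(task)
    return task

def worker_loop(answer_fn):
    """A paid worker pulls pending tasks and supplies the human judgment."""
    while not pending.empty():
        task = pending.get()
        task["answer"] = answer_fn(task["question"])

t = submit_human_task("Does this photo contain a storefront?", 3)
worker_loop(lambda q: "yes")  # a human's judgment stands in here
print(t["answer"])  # prints: yes
```

The caller never sees the human: it submits a task and later reads back an answer, exactly as it would with any asynchronous web service.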

"You'll be randomly paired with a partner who's online and using the feature. Over a 90-second period, you and your partner will be shown the same set of images and asked to provide as many labels as possible to describe each image you see."

The players have fun and Google gets thousands of images tagged with relevant text labels.

Now let's take this bold new technology and extrapolate further. What if Second Life created games in which players solved complex problems for fun, when those problems were actually key commerce problems that needed solving?

For example, imagine a game where players compete to clothe a runway model that will be judged in a contest by other players. This game could very well be a job requisition submitted by a major fashion company that wants advanced market research on which clothes buyers will prefer. The virtual clothes in the game could be detailed in-game 3D objects that are exact duplicates of the fashion company's artwork for their clothing. The difference between this and someone simply holding a contest is how it is structured: all of the set-up, problem specification, and solution propagation would be part of a standardized Web 3.0 service call instead of the ad hoc hand-crafting of a live virtual contest event.

This could be taken to an even more abstract level, where the problem has no direct mapping to a real-life business event like the fashion designer example, but instead requires a more subtle decision that needs human judgment. For example, a player in a game is presented with two different kinds of sounds coming from different directions and told to follow the sound that feels most pleasing in order to find the treasure. This could actually be a sub-job submitted by an automotive company that is trying out different interior designs for a car. As each interior design is acoustically modeled, an MP3 file is generated using various environmental test sounds, which is then fed into the game. The player is having fun chasing ambient sounds in search of treasure, but is actually telling the car manufacturer which interior acoustic space is more pleasing. Since there could be thousands of players, the manufacturer can have thousands of sound files analyzed in parallel, an immense time savings. In the end, the players have fun, the game company earns extra revenue for this service, and the automotive company saves money by avoiding designs that people won't like because they sound bad.

It's not hard to see that once this kind of service becomes popular, other additions to the typical service call would include the number of redundant tests to make for each case, plug-ins for getting textual input or votes from the task assignee (the player in the game examples), etc.
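One of those additions, redundant assignments combined with a vote, can be sketched as a simple majority rule over the players' judgments. The function below is purely illustrative.

```python
from collections import Counter

def majority_answer(votes):
    """Collapse redundant human judgments into one result plus a
    confidence score (the winning answer's share of the votes)."""
    counts = Counter(votes)
    answer, n = counts.most_common(1)[0]
    return answer, n / len(votes)

# Three players judged the same acoustic test case:
print(majority_answer(["interior A", "interior A", "interior B"]))
# majority is "interior A" with 2/3 confidence
```

Running each test case past several people this way filters out the occasional careless or mischievous answer before the result is handed back to the requesting company.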

I will conclude this article with a warning about the coming hyper-euphoria. I saw first hand how people lost their fiscal sanity during the first wave of artificial intelligence hype a few decades ago. Can you imagine how easily investors will be hypnotized by the spell of new technology offerings with Star Trek-sounding buzzwords, offerings that will make some of the insane claims of the average dot-com prospectus seem tame by comparison? The raw fear of being left behind by technologies and services so futuristic that images of flying cars abound in people's heads will make wallets gush cash again and retirement plans evaporate.

This will only happen if we haven't learned our lesson from previous manias. In closing I have this to say to the doubters and the pundits out there currently warming up to covering Web 3.0, whether for or against. Stay sharp and focused. We'll need you.

