Posted by Soulskill on Tuesday August 12, 2014 @08:10PM
from the siri-why-does-my-cat-throw-up-so-much? dept.

paysonwelch sends this report from Wired on the next generation of consumer AI:
Google Now has a huge knowledge graph—you can ask questions like "Where was Abraham Lincoln born?" And it can name the city. You can also say, "What is the population?" of a city and it’ll bring up a chart and answer. But you cannot say, "What is the population of the city where Abraham Lincoln was born?" The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do. Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like "Give me a flight to Dallas with a seat that Shaq could fit in." Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom.

Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do. Viv breaks through those constraints by generating its own code on the fly, no programmers required.

This is so misleading. No program can do anything outside what it is explicitly programmed to do. Viv generates code only because it has been explicitly programmed to do so, and it can only do so in the ways its own code explicitly lays out.
Sure, the code may operate one abstraction layer higher, but the constraints these programs can't break through are the same.
No one knows how to program general intelligence.
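The "abstraction layer higher" point can be made concrete. A minimal Python sketch of a program that generates and runs its own code, yet remains strictly bounded by what its author allowed (the generator is a made-up example, not how Viv works):

```python
# A program that "writes its own code" is still bounded by the generator
# its programmers wrote. This generator can only ever emit a binary
# arithmetic expression -- nothing outside that explicit template.
def generate(op, a, b):
    src = f"lambda: {a} {op} {b}"
    return eval(src)  # build and return the generated program

prog = generate("+", 2, 3)
print(prog())  # 5
```

The generated code is new at runtime, but its entire space of possible behaviors was fixed in advance by the template string.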

In digital, everything is either a "0" (zero) or a "1" (one), which means everything is either true or false

Take 32 of those bits and put them together, and now you've got a floating-point value that can represent "true" as 1.0, "false" as 0.0, and upwards of a billion shades of "maybe" in between those two extremes.

If that's not analog-y enough for you, make it 64 bits and now you can have trillions of shades. And if that's still not enough, add more bits until you've got the resolution you're looking for.

I don't see any significant distinction between analog and digital, since digital logic asymptotically approaches analog as you add bits, and with today's memory sizes there are plenty of bits to go around.
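The shade-counting above can be checked directly. A short Python sketch counting the representable 32-bit float values in [0.0, 1.0), using the fact that consecutive non-negative IEEE-754 floats have consecutive bit patterns:

```python
import struct

def f32_bits(x):
    # Reinterpret a float as its IEEE-754 single-precision bit pattern.
    return struct.unpack("<I", struct.pack("<f", float(x)))[0]

# For non-negative IEEE-754 floats, the bit patterns are monotonically
# ordered, so subtracting gives the count of representable values
# between the two endpoints.
shades = f32_bits(1.0) - f32_bits(0.0)
print(shades)  # 1065353216 distinct float32 values in [0.0, 1.0)
```

So a single 32-bit float already gives over a billion "shades" between false and true; 64 bits pushes that past the quintillions.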

Our meatbrain can cope with a lot of stuff that the digital computer can't, precisely because our brain makes its decisions based on imprecise feedback

Or perhaps because it's running a radically different kind of algorithm that no human has ever understood or implemented on a digital computer.