AI and the Singularity.

I know it's a needle-in-a-haystack issue, but some radiation should be pretty easy to see. I am not saying we'd detect every civilization out there, but if civs with our level of tech are common, shouldn't we have noticed at least one of them by now?

The problem with EM signals is that they weaken as they disperse...and they are eventually indistinguishable from background noise. Barring very directed signaling and detection of same, it's extremely unlikely we would notice any EM signaling not intentionally designed to be detected at great distance. Even if there were a million such civilizations, assuming some relatively even distribution of same, you're not just looking for a needle in a haystack, you're looking for a needle in all the haystacks, all the beaches, all the deserts, and all the oceans.

And then there's the likelihood that more advanced civilizations don't leak EM radiation the way we do. It's very wasteful. We've only been "detectable" in this way for just under a century, and it's not likely that we'll continue to be as detectable for more than one more. The very high-powered broadcast transmitters used in the early days of radio have already given way to the more targeted lower-power transmitters of more advanced and regulated radio and TV, and they're likely to go away entirely in the coming century as more people get all their services through a combination of hardline broadband (no extraterrestrially-detectable EM leak) and mobile satellite (very little EM leak) signals.

It's likely that there's a window of detectability for advanced civilizations that only lasts a few centuries at most, before they've stopped wastefully broadcasting EM signals in favor of targeting them with very little waste escaping. So, we could be surrounded by a galaxy full of advanced civilizations, and we'd never know unless some of them decide to intentionally and continually broadcast their presence, or unless we coincidentally target our equipment to just the right point in space at just the right time to coincidentally overhear a newly-developing technological civilization in its first 200 years or so. The latter is overwhelmingly unlikely to happen, and the former is dubious at best--why would a civilization more technologically advanced than ours decide to spend resources continuously broadcasting its existence to the rest of the galaxy? In order to be willing to do that, it would have to believe that letting its existence be known to outsiders is desirable for some reason, and further believe that all outsiders will be either non-predatory or too inferior in technology to harm it. Those are unlikely hurdles.

So, we're simply left with no reason to believe that advanced civilizations will be detectable at all. There could be thousands of them in our galaxy and there's no likelihood we'd ever know, unless we coincidentally trip over an errant signal.

I believe communication with other civilisations over 20-, 50-, or 100-year lag periods can only really be beneficial, or at the very least interesting. If you can't actually get there, which seems most likely given the laws of the universe, then the idea of some sort of interstellar war seems unlikely.

I suppose they could send some sort of multi-thousand-year weapons probe or something, but that seems unlikely.

I guess I'm saying talking is fine; the distances involved mitigate any risk.

If they could travel here or around the milky-way in any meaningful timeframe then I guess we would know about them already.

and we'd never know unless some of them decide to intentionally and continually broadcast their presence, or unless we coincidentally target our equipment to just the right point in space at just the right time to coincidentally overhear a newly-developing technological civilization in its first 200 years or so.

If you read sites like SETI@home, they admit that this is what they overwhelmingly are actually looking for. An intentional signal.

The bandwidth of unintentional signals is simply too high. We might be able to detect things like certain kinds of beacons or military radars of just the right sort, but we are very unlikely to detect the ET equivalent of I Love Lucy reruns. That peters out rather quickly, even with Arecibo-sized antennas to hear them.

It's surprisingly hard to find out this information, but let me give you an analog I do know about that gives a flavor of how real this problem is.

Amateur radio operators are allowed to use 1,500 watts (maximum) for their stations. Somebody discovered that if you send out a signal, at that wattage, on some very narrowly focused arrays of antennas, then you can:

1. Bounce a low bandwidth signal off of the moon (think 19th century telegraphy bandwidth) and

2. Expect to barely be able to understand it.

And, in fact, many amateurs have performed this difficult feat.

Nobody talks about bouncing signals off of Mars and hearing those, however. It's absurd. Mars, even at closest approach, is just too far away.

Of course, the moon or Mars are terrible radio reflectors, so the analogy isn't exact. But, it is surely easier to detect signals bouncing off of our imperfect moon, at a mere 250,000 miles away, than it is to detect signals from even Proxima Centauri, which is so far away, we have to measure it with the speed of light to get a distance that our poor minds can tolerate.

And remember, for a one-way path the dissipation is based on the square of the distance (for a bounce, which dissipates on both legs, it's the fourth power), so that non-existent moon that is twice as far away as our actual moon would require amateurs to be given dispensation to use 6,000 watts just to cover the one-way loss, and more like 24,000 to make their bounce scheme work.
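To make the scaling concrete, here's a quick Python sketch of the inverse-square (and, for a bounce, inverse-fourth-power) bookkeeping, using the thread's round figures rather than a real EME link budget:

```python
# Inverse-square bookkeeping for the moon-bounce example, using the
# thread's round figures (not a real EME link budget). One-way loss
# grows with the square of distance; a bounce loses on both legs, so
# it grows with the fourth power.

def one_way_power(base_w, base_dist, new_dist):
    """Transmit power needed to match the same one-way received level."""
    return base_w * (new_dist / base_dist) ** 2

def bounce_power(base_w, base_dist, new_dist):
    """Transmit power needed to match the same round-trip (radar-style) level."""
    return base_w * (new_dist / base_dist) ** 4

# 1,500 W reaches a moon at ~250,000 miles; for a moon twice as far:
print(one_way_power(1500, 250_000, 500_000))   # 6000.0 W
print(bounce_power(1500, 250_000, 500_000))    # 24000.0 W
```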

The only way to compensate for the distance (and this is a matter for Shannon's laws) is to make the signal very narrowband. Indeed, I would expect ET to have two beacons -- one, only a Hz or two wide, to give the vital message "We are here, we are here, we are here, we are here" plus just barely enough instructions to tell where to listen for a higher-bandwidth signal "nearby" that would otherwise be hard to detect, because even at 19th-century telegraphy equivalents (which is about the best one could expect), it is going to be far harder to detect successfully.
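The Shannon-Hartley theorem makes that narrowband trade-off explicit. A quick Python sketch, with made-up received-power and noise figures (purely illustrative, not a real SETI link budget):

```python
import math

def capacity_bps(bandwidth_hz, signal_w, noise_density_w_per_hz):
    """Shannon-Hartley: C = B * log2(1 + S / (N0 * B))."""
    return bandwidth_hz * math.log2(1 + signal_w / (noise_density_w_per_hz * bandwidth_hz))

# Fixed received power and noise density (hypothetical values): narrowing
# the band concentrates the signal against less noise, so detectability
# (SNR) soars even as the ceiling on data rate falls -- the "We are here"
# beacon trade-off.
signal = 1e-21   # received watts (hypothetical)
n0 = 1e-23       # noise watts per Hz (hypothetical)
for bw in (1e6, 1e3, 1.0):
    snr = signal / (n0 * bw)
    print(f"{bw:>9.0f} Hz band: SNR {snr:g}, capacity {capacity_bps(bw, signal, n0):.2f} bit/s")
```

At a megahertz of bandwidth the signal sits at an SNR of 0.0001 and is invisible; squeezed into 1 Hz, the same power stands 100 times above the noise, at the cost of a few bits per second.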

You would want, I suspect, a signal that a part-time radio astronomy 'scope can hear -- something that would motivate us, or someone like us, to build a much bigger receiver than we otherwise would.

Or, you might even settle for just the "we are here" bit, which by itself would be revolutionary enough.

(Wiki says that Proxima Centauri is 270,000 times the distance of the earth to the sun. 93 million times 270,000 is a formidable number of miles).
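For the record, that multiplication works out like this (Python, using the thread's round figures):

```python
AU_MILES = 93_000_000     # thread's round Earth-sun distance
LY_MILES = 5.879e12       # miles per light-year

proxima_au = 270_000      # the Wiki figure quoted above
proxima_miles = proxima_au * AU_MILES
print(f"{proxima_miles:.3e} miles")                    # ~2.511e+13
print(f"{proxima_miles / LY_MILES:.2f} light-years")   # ~4.27, close to the usual ~4.25 ly
```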

If they could travel here or around the milky-way in any meaningful timeframe then I guess we would know about them already.

Why? This is one of the crux questions. Sure, if you postulate that an advanced civilization invented the ability to teleport about the galaxy with no effort or time lag, then you might have more of a basis for asking why they haven't popped in yet, for a brief survey at least. If your theoretical FTL capability required considerable effort and expense, though, then a civilization could exist for many millennia and never get around to dropping way the hell out here in our stellar-sparse arm.

Bouncing around at warp 9 violates all of the physics we know. Aliens that can survive for millions of years travelling at conventional speeds don't violate anything. Assuming they aren't highly communicative with other ships while in transit, there is no reason to think we would detect them. Maybe they hibernate. Maybe they live in slow motion like Treebeard and it takes a whole day just for them to say hello, and a millennium long lull in the conversation isn't unusual. Low power mode for interstellar transit, maybe.

There are a lot of plausible post-singularity scenarios a civilization might evolve towards. If it were easy to predict what happened after you develop run-away self improving technology and intelligence, we wouldn't call it a singularity. We can guess that they won't break physics, but that's about it.

It seems to me like a better question would be, is there a relationship between the intelligence of a system and the complexity of the systems it can design? If the complexity limit of a system's comprehension must always be lower than the complexity of that system, deliberately improving on your own design becomes very difficult. Any computer science PhDs around?

Build a Nicoll-Dyson laser, find a nearby planet and light them up. At least that way, our communications could expect to be seen, even if the inhabitants weren't particularly looking for us.

Even coherent light disperses at distance (actually, I think the proper term is 'diverges', in that it becomes larger, spatially speaking, over distance. It also disperses across frequencies, I think. IANAP). The amount of energy you need for this kind of signaling is big. Like, Huge. Mindbogglingly large. Look up and watch for a pulsar, and imagine how much energy is being dissipated per pulse for it to get to you.

But, as I recall, there is even measurable dispersion when we bounce lasers off of the moon (there's that special mirror gadget that Apollo something-or-other left behind).

A little dispersion is a good thing. Imagine trying to "light up" an alien world not co-planar with us in such a way that it reaches them (as it must) during their night time.

Now imagine instead a laser that disperses, even a little, so that we just have to get it to the star system or, perhaps more favorably, if we can, half the planet's diameter. That would increase the power, of course, but would also increase the amount of time they could "see" our signal, which is another problem. You'd have to count on a reasonably long time interval to (as was said) light up the place so they could distinguish our low Hz signal from the noise. Everybody is, after all, in motion.
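The divergence can be sketched with the diffraction limit (spot diameter ≈ 2.44 λd/D for an ideal circular aperture). Rough Python, with assumed values for the wavelength and Proxima's distance:

```python
def aperture_for_spot(wavelength_m, distance_m, spot_diameter_m):
    """Diffraction limit: spot diameter ~ 2.44 * lambda * d / D, solved for aperture D."""
    return 2.44 * wavelength_m * distance_m / spot_diameter_m

LY = 9.461e15            # metres per light-year
d = 4.24 * LY            # Proxima Centauri (assumed target)
lam = 500e-9             # green light (assumed)

# Aperture that spreads the beam over an Earth-diameter spot at Proxima:
print(aperture_for_spot(lam, d, 1.27e7), "m")    # a few kilometres
# Aperture whose spot covers a full AU there -- telescope-sized:
print(aperture_for_spot(lam, d, 1.5e11), "m")    # well under a metre
```

A sub-metre emitter already "disperses a little" enough to blanket a whole orbit's worth of space at that range, which is the favorable case described above.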

I'd imagine that a far better way to signal would be to deploy very large, electrically activated panel that can alternate between opaque and transparent across a decent chunk of spectrum. Basically, your local star is the emitter, and the signaling is accomplished very much like an old navy signal lamp.

You know what kind of a solar sail that would make?

A very good one, I'd imagine. Station keeping would obviously be problematic.

Without weird physics, post-singularity intelligence is at most a square or cube of our intelligence. With weird physics, you don't know they don't have infinite computing power in finite spacetime, making a Dyson Sphere as silly as a mountain of mammoth skulls so tall as to reach the moon.

Square or cube of our intelligence is a lot, but it's by no means "to us as we are to roundworm" because many problems scale very sub-linearly. In exponential problems, for instance, cubing only triples the capacity.

For (some) singularitarians, though, I.J. Good's "intelligence explosion" circa 1965 is the gospel, and 1971's work by Leonid Levin and (independently) Stephen Cook on computational complexity theory never existed. And the AI can self-improve into extreme godhood by merely improving its software. This all would have been harmless fun, except some of those singularitarians seem convinced that this near-god AI is probably going to kill us all, and write things like "I suppose the difference is whether you're doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we're talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.", which is creepy and not harmless fun at all.

(The concern with brain simulators is nuts; so it's successful, then what, one more human.)

I'd imagine that a far better way to signal would be to deploy very large, electrically activated panel that can alternate between opaque and transparent across a decent chunk of spectrum. Basically, your local star is the emitter, and the signaling is accomplished very much like an old navy signal lamp.

You know what kind of a solar sail that would make?

A very good one, I'd imagine. Station keeping would obviously be problematic.

There's a pretty simple calculation of the thickness of a solar sail that would hover above the Sun if stopped in orbit. Hint: the answer is "pretty damn thin". Even then you'd need a lot of material to make such a sail, though.
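That calculation is worth sketching, since both sunlight pressure and gravity fall off as 1/r², so the answer doesn't depend on distance. A Python version, assuming a perfectly absorbing sail and a mylar-ish density:

```python
import math

# Radiation pressure and solar gravity both fall off as 1/r^2, so the
# areal density at which they balance is independent of distance:
# sigma = L / (4*pi*G*M*c) for a perfectly absorbing sail (a perfect
# reflector could carry roughly twice this).
L_SUN = 3.828e26     # W
G     = 6.674e-11    # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
C     = 2.998e8      # m/s

sigma = L_SUN / (4 * math.pi * G * M_SUN * C)    # kg/m^2
rho_mylar = 1400                                 # kg/m^3, roughly

print(f"critical areal density: {sigma * 1000:.2f} g/m^2")              # ~0.77
print(f"thickness at mylar density: {sigma / rho_mylar * 1e6:.2f} um")  # ~0.55
```

About half a micron of plastic film: "pretty damn thin" indeed.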

Build a Nicoll-Dyson laser, find a nearby planet and light them up. At least that way, our communications could expect to be seen, even if the inhabitants weren't particularly looking for us.

Even coherent light disperses at distance (actually, I think the proper term is 'diverges', in that it becomes larger, spatially speaking, over distance. It also disperses across frequencies, I think. IANAP). The amount of energy you need for this kind of signaling is big. Like, Huge. Mindbogglingly large. Look up and watch for a pulsar, and imagine how much energy is being dissipated per pulse for it to get to you.

The Nicoll-Dyson concept has the laser being powered by a Dyson swarm and using a decent fraction of the power output of the parent star together with an effective aperture of many millions of km. With that sort of capability, you could illuminate (or vaporise) planets out to some pretty large distances.

Obviously it's far in advance of anything we could build now or any time soon but I don't think it relies on any magical technology to work so at least in principle it should be possible. Plus, a Dyson swarm would be useful for far more than fooling around with laser super weapons.
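A rough sense of scale, using the diffraction-limit formula with an assumed ten-million-km phased aperture (Python; all figures illustrative):

```python
def spot_diameter(wavelength_m, aperture_m, distance_m):
    """Diffraction-limited spot diameter: ~ 2.44 * lambda * d / D."""
    return 2.44 * wavelength_m * distance_m / aperture_m

LY = 9.461e15       # metres per light-year
D = 1e10            # assumed phased aperture: ten million km
lam = 500e-9        # assumed optical wavelength

print(spot_diameter(lam, D, 100 * LY), "m at 100 ly")        # ~115 m
print(spot_diameter(lam, D, 10_000 * LY), "m at 10,000 ly")  # ~11.5 km
```

With an emitter that size, essentially the whole beam lands on a patch smaller than a city block even across the galaxy, which is why the concept works for both signaling and vaporising.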

Build a Nicoll-Dyson laser, find a nearby planet and light them up. At least that way, our communications could expect to be seen, even if the inhabitants weren't particularly looking for us.

Even coherent light disperses at distance (actually, I think the proper term is 'diverges', in that it becomes larger, spatially speaking, over distance. It also disperses across frequencies, I think. IANAP). The amount of energy you need for this kind of signaling is big. Like, Huge. Mindbogglingly large. Look up and watch for a pulsar, and imagine how much energy is being dissipated per pulse for it to get to you.

The Nicoll-Dyson concept has the laser being powered by a Dyson swarm and using a decent fraction of the power output of the parent star together with an effective aperture of many millions of km. With that sort of capability, you could illuminate (or vaporise) planets out to some pretty large distances.

Obviously it's far in advance of anything we could build now or any time soon but I don't think it relies on any magical technology to work so at least in principle it should be possible. Plus, a Dyson swarm would be useful for far more than fooling around with laser super weapons.

When it's a Dyson swarm, you are visible because you got a frigging Dyson swarm around the star....

edit: also, swarms have speed of light lag between components, so they are not very good for computing.

When it's a Dyson swarm, you are visible because you got a frigging Dyson swarm around the star....

The laser should be visible at far greater distances than the swarm.

Quote:

edit: also, swarms have speed of light lag between components, so they are not very good for computing.

I don't think anyone would want to run the entire swarm as a single computer. You would have a large number of individual machines which may or may not be networked, possibly on an ad hoc basis to add computing power as needed.

Who knows, it might work to turn an entire swarm into a single AI but I would imagine it would end up being incredibly alien to us.

When it's a Dyson swarm, you are visible because you got a frigging Dyson swarm around the star....

The laser should be visible at far greater distances than the swarm.

Distance along their beam, though. If the laser is not pointed at us, it's as good as if it weren't there. Unless they scatter the laser light to get an advantage from the laser light being narrowband.

Quote:

Quote:

edit: also, swarms have speed of light lag between components, so they are not very good for computing.

I don't think anyone would want to run the entire swarm as a single computer. You would have a large number of individual machines which may or may not be networked, possibly on an ad hoc basis to add computing power as needed.

Who knows, it might work to turn an entire swarm into a single AI but I would imagine it would end up being incredibly alien to us.

Either way the entities would want to talk to each other with minimum lag, especially assuming they are social enough to want to talk to us.

When it's a Dyson swarm, you are visible because you got a frigging Dyson swarm around the star....

The laser should be visible at far greater distances than the swarm.

Quote:

edit: also, swarms have speed of light lag between components, so they are not very good for computing.

I don't think anyone would want to run the entire swarm as a single computer. You would have a large number of individual machines which may or may not be networked, possibly on an ad hoc basis to add computing power as needed.

Who knows, it might work to turn an entire swarm into a single AI but I would imagine it would end up being incredibly alien to us.

It'd be awfully funny if we detected one of these and found that it's just some other civilization's distributed.net cracking contest.

Build a Nicoll-Dyson laser, find a nearby planet and light them up. At least that way, our communications could expect to be seen, even if the inhabitants weren't particularly looking for us.

Even coherent light disperses at distance (actually, I think the proper term is 'diverges', in that it becomes larger, spatially speaking, over distance.

spread in space (that is, transverse to the propagation direction) == diverge/diffract
spread in time (longitudinal) == disperse

So a broadband laser in space rapidly diverges due to diffraction, but disperses only very slowly (due to the expansion of the universe and such).

Quote:

The laser should be visible at far greater distances than the swarm.

Thing about a (single mode) laser is just that it's a beam with a very narrow range of output angles. As you make your beam bigger, you are narrowing down the output angles it contains. Essentially, you are making it more and more directional. But compared to the sun, you have to be really directional to outshine it. The sun puts out ~10^26 watts. Your laser may be trillions of times more directional. But unless you're putting out 10^(26-12) = 10^14 watts, the sun is a hell of a lot brighter when viewed from another star.

If you want to communicate between stars, you need to also exploit wavelength by having a very narrowband beam. Then you can outshine the sun at some specific wavelength, since the sun is distributing those 10^26 watts across ~10^15 Hz. But this will be very hard to signal to someone who doesn't know which wavelength to look for.
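Those two comparisons can be put side by side in a few lines of Python, using the thread's round figures plus an assumed one-megawatt laser confined to a ~1 Hz line:

```python
SUN_POWER = 1e26     # W, thread's round figure
SUN_BAND  = 1e15     # Hz over which the sun's output is spread
GAIN      = 1e12     # assumed directionality advantage of the laser

# Broadband: power needed to outshine the sun in total flux along the beam
print(SUN_POWER / GAIN)               # 1e+14 W, i.e. 10^(26-12)

# Narrowband: a modest laser squeezed into a ~1 Hz line
laser_power  = 1e6                    # W (hypothetical)
sun_per_hz   = SUN_POWER / SUN_BAND   # ~1e11 W/Hz
laser_per_hz = laser_power * GAIN     # ~1e18 effective W in its 1 Hz line
print(laser_per_hz / sun_per_hz)      # ~1e7: outshines the star at that wavelength
```

Broadband, you'd need a hundred terawatts just to break even; narrowband, a megawatt beats the sun by ten million at its one wavelength -- provided the receiver knows where in the spectrum to look.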

Here's a fun thought: if and when NLP ever gets to the point that it can simply steamroll over the technical problems inherent in passing Turing's test, one would expect companies to simply buy licenses when licenses are cheaper than paying for employees. To me, the singularity means perpetual and ubiquitous unemployment for skilled labor and the social tumult that would almost surely follow from it.

That's just one specific quantum algorithm. I don't think you can generalize it beyond searching unsorted lists on quantum computers.

Regarding the original point, it's not really clear to me what "cube of our intelligence" even means, nor how you even define "intelligence" for purposes of comparison. Very few NP problems are typically solved exactly by humans, and the problems you would want to solve with intelligent machines would probably also not be NP.

SO1OS wrote:

Here's a fun thought: if and when NLP ever gets to the point that it can simply steamroll over the technical problems inherent in passing Turing's test, one would expect companies to simply buy licenses when licenses are cheaper than paying for employees.

Indeed, the obvious application of strong AI would be programming computers, a task which machines would likely be unimaginably better at than humans.

re: cubing, I meant cubing operations done. Some tasks are inherently exponential with no shortcuts - for example, weather forecasting requires exponentially growing knowledge, space, and operations in the prediction length, for predictions of comparable exactness.
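A toy illustration of that exponential cost: in a chaotic system, a tiny error in the initial state grows roughly exponentially, so each added unit of forecast horizon demands a constant factor more initial precision. The logistic map at r=4 is the standard minimal example (Python sketch, emphatically not a weather model):

```python
# A chaotic system amplifies a tiny initial error roughly exponentially,
# so each extra step of forecast horizon costs a constant factor more
# initial precision -- exponentially more overall.

def logistic_divergence(x0, eps, steps):
    """Track the gap between two trajectories started eps apart under x -> 4x(1-x)."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = 4 * a * (1 - a), 4 * b * (1 - b)
        gaps.append(abs(a - b))
    return gaps

errs = logistic_divergence(0.2, 1e-12, 60)
print(errs[0])        # still tiny after one step
print(max(errs))      # saturated at order 1: the "forecast" is useless
```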

re: above, the resident singularitarian spoke lol.

No, actually, automated programming is so far what AI fails worst at. And when we do eventually have what we might describe as a programmer "AI", it'd likely be a glorified compiler that performs symbolic manipulation of some kind and a sufficiently effective search for solutions, finally taking some burden off the programmer. There are other kinds of AI, such as neural networks, which won't magically perform better at programming just because they're living inside a computer. It's some sort of ethnic thinking bias - the computer-people should be intrinsically better at talking to computers.

re: cubing, I meant cubing operations done. Some tasks are inherently exponential with no shortcuts - for example, weather forecasting requires exponentially growing knowledge, space, and operations in the prediction length, for predictions of comparable exactness.

Sure, but:

1) I don't think you'd want to use an AI to forecast weather. That's a hard parallel problem that involves solving a lot of equations over an enormous input data set. That seems much better suited for GPU-like hardware consisting of an enormous number of distributed FPUs coupled to local memory and a high-bandwidth interconnect. While it's possible (although probably unlikely) that the same machine could run both, the former would seem to have no real application to the latter. It's like saying that a GPU cannot run Javascript efficiently, so why make GPUs? A true observation, but one that misses the point.

2) More generally, while exponentially scaling (or at least non-polynomially scaling) problems are always going to be hard to impossible to solve exactly for nontrivial inputs, they're usually not the most interesting problems commercially or even scientifically. A lot of software engineering comes down to solving problems that are hard not because they scale poorly, but because they are complex for trivial inputs. This is why we still care about building faster processors and clusters even though they will make virtually no difference in the evaluation of exponential problems.

From my point of view, an AI is essentially a very specialized type of processing hardware, something like a GPU or a quantum computer that can solve a subset of problems unimaginably faster than a general-purpose processor and a much larger set of problems no better or even a lot worse than a general-purpose processor.

Dmytry wrote:

No, actually, automated programming is so far what AI fails worst at.

We have working AI? I think not ...

Dmytry wrote:

And when we do eventually have what we might describe as a programmer "AI", it'd likely be a glorified compiler that performs symbolic manipulation of some kind and a sufficiently effective search for solutions, finally taking some burden off the programmer.

The purpose of a programmer is to use human intelligence and the ability to understand context that comes with it to translate an incomplete human-language description of a problem into a rigorous mathematical description, usually involving a programming language based on a context-free grammar. Once this context is used to create a complete description of the problem, computers generally take over and actually implement the solution (e.g. a compiler runs and generates a binary).

As an assembly programmer, I can assure you that computers are unimaginably better at this already than the fastest humans. gcc, for instance, can produce machine code of nearly equivalent quality to mine (and probably better quality than mine if I restrict myself to directly assembling C code without applying my a priori knowledge of the problem) at a rate that is a good 4-5 orders of magnitude faster than I can. It will also do so with an error rate that is almost infinitely lower than mine. It can do this because programming a computer ultimately comes down to evaluating an enormous number of boolean logic operations, a task that a digital computer can do trillions of times faster than a human.

The fact is that all modern languages, and modern programming techniques as well, are designed to work around the extreme difficulty humans have grasping complex logic expressed in the form of a grammar. Almost every rule we teach new programmers is designed to make it easier for them, not for the compiler. Catering to the compiler makes little sense given how good compilers are at the problem.

In spite of these enormous intrinsic advantages, as you point out, modern computers cannot solve much of anything without a programmer. From my point of view, this is because a machine, however smart or fast it is, cannot understand the context of the problem and must have it spelled out mathematically, using a grammar, by something that does. Additionally, a machine may not be able to develop a solution independently of the programmer, but that is a secondary problem. A strong AI, however, would not have this limitation. It would by definition be able to understand the context of a problem and generate a grammar directly that could be fed into a compiler. Furthermore, it would be able to do so at an incredibly fast rate because it would not have the extreme difficulty humans have evaluating formal logic.

Dmytry wrote:

It's some sort of ethnic thinking bias - the computer-people should be intrinsically better at talking to computers.

I'm curious why you think this? A machine can reduce a boolean function of a large number of inputs that would take a human hours in a few microseconds. It seems inconceivable to me that one could come up with an AI that wouldn't be better at this problem than a programmer just because programmers are so bad at it. Indeed, one need only look at pathological programming languages like Malbolge, which are designed to be impossible for anything but a machine to program by virtue of their complexity, to see how disadvantaged programmers really are.

That's just one specific quantum algorithm. I don't think you can generalize it beyond searching unsorted lists on quantum computers.

One algorithm that has a lot of applications. Search is obviously very general; a quantum mechanical search would effect a quadratic speed up for everything that can be translated into a search function - which is quite a lot!

That's just one specific quantum algorithm. I don't think you can generalize it beyond searching unsorted lists on quantum computers.

One algorithm that has a lot of applications. Search is obviously very general; a quantum mechanical search would effect a quadratic speed up for everything that can be translated into a search function - which is quite a lot!

that's probably the least relevant of the algorithms - you need big quantum storage that doesn't decohere, which probably ain't going to happen. Also, unsorted-search complexity is rather irrelevant if alongside memory you have processing. It's only a big deal for current systems where you have CPU here, memory there... for more homogeneous systems, search time grows as n^(1/3) because that's the speed-of-light lag.
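For a sense of the numbers involved, here's a back-of-envelope comparison (Python) of the classical and Grover query counts for unstructured search, alongside the n^(1/3) light-lag scaling mentioned above:

```python
import math

# Query counts for unstructured search over N items: a classical scan
# needs ~N/2 on average; Grover's algorithm needs ~(pi/4)*sqrt(N). The
# caveat above: in a physically laid-out 3D memory, merely reaching a
# random cell costs ~N^(1/3) in light-lag, so processing-in-memory
# erodes much of the practical gap.
def classical_queries(n): return n / 2
def grover_queries(n):    return (math.pi / 4) * math.sqrt(n)
def light_lag(n):         return n ** (1 / 3)

n = 1e18
print(classical_queries(n))   # 5e+17 queries
print(grover_queries(n))      # ~7.85e+08 queries
print(light_lag(n))           # ~1e+06 cell-widths of light travel
```

The quadratic speedup is real, but a homogeneous memory-with-processing already gets to cube-root scaling without any quantum hardware.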

re: cubing, I meant cubing operations done. Some tasks are inherently exponential with no shortcuts - for example, weather forecasting requires exponentially growing knowledge, space, and operations in the prediction length, for predictions of comparable exactness.

Sure, but:

1) I don't think you'd want to use an AI to forecast weather. That's a hard parallel problem that involves solving a lot of equations over an enormous input data set. That seems much better suited for GPU-like hardware consisting of an enormous number of distributed FPUs coupled to local memory and a high-bandwidth interconnect. While it's possible (although probably unlikely) that the same machine could run both, the former would seem to have no real application to the latter. It's like saying that a GPU cannot run Javascript efficiently, so why make GPUs? A true observation, but one that misses the point.

The point is that some "Jupiter Brain" or a Dyson sphere would not be to mankind as mankind is to a roundworm when it comes to these sorts of tasks, even in the absolute best case where, rather than being intelligent, all resources are thrown at the task.

There are other examples, such as maybe trying to recover data from a frozen brain where structures got shredded by ice. A lot of people have been projecting their ideas of god onto future AI, so you constantly run into people who just assume future superintelligence can do anything.

Quote:

2) More generally, while exponentially scaling (or at least non-polynomially scaling) problems are always going to be hard to impossible to solve exactly for nontrivial inputs, they're usually not the most interesting problems commercially or even scientifically. A lot of software engineering comes down to solving problems that are hard not because they scale poorly, but because they are complex for trivial inputs. This is why we still care about building faster processors and clusters even though they will make virtually no difference in the evaluation of exponential problems.

A lot of clusters are used for simulation.

Quote:

From my point of view, an AI is essentially a very specialized type of processing hardware, something like a GPU or a quantum computer that can solve a subset of problems unimaginably faster than a general-purpose processor and a much larger set of problems no better or even a lot worse than a general-purpose processor.

Dmytry wrote:

No, actually, automated programming is so far what AI fails worst at.

We have working AI? I think not ...

We do, it's just quite stupid. We can make software that identifies cat videos, with poor accuracy. Or self-driving cars. We are somewhere at the level of a retarded cockroach.

Quote:

Dmytry wrote:

And when we do eventually have what we might describe as a programmer "AI", it'd likely be a glorified compiler that performs symbolic manipulation of some kind and a sufficiently effective search for solutions, finally taking some burden off the programmer.

The purpose of a programmer is to use human intelligence and the ability to understand context that comes with it to translate an incomplete human-language description of a problem into a rigorous mathematical description, usually involving a programming language based on a context-free grammar. Once this context is used to create a complete description of the problem, computers generally take over and actually implement the solution (e.g. a compiler runs and generates a binary).

As an assembly programmer, I can assure you that computers are unimaginably better at this already than the fastest humans. gcc, for instance, can produce machine code of nearly equivalent quality to mine (and probably better quality than mine if I restrict myself to directly assembling C code without applying my a priori knowledge of the problem) at a rate that is a good 4-5 orders of magnitude faster than I can. It will also do so with an error rate that is almost infinitely lower than mine. It can do this because programming a computer ultimately comes down to evaluating an enormous number of boolean logic operations, a task that a digital computer can do trillions of times faster than a human.

The fact is that all modern languages, and modern programming techniques as well, are designed to work around the extreme difficulty humans have grasping complex logic expressed in the form of a grammar. Almost every rule we teach new programmers is designed to make things easier for them, not for the compiler. Optimizing for the compiler makes little sense, given how good compilers already are at the problem.

In spite of these enormous intrinsic advantages, as you point out, modern computers cannot solve much of anything without a programmer. From my point of view, this is because a machine, however smart or fast it is, cannot understand the context of the problem and must have it spelled out mathematically, using a grammar, by something that does. Additionally, a machine may not be able to develop a solution independently of the programmer, but that is a secondary problem. A strong AI, however, would not have this limitation. It would, by definition, be able to understand the context of a problem and generate a grammar directly that could be fed into a compiler. Furthermore, it would be able to do so at an incredibly fast rate, because it would not have the extreme difficulty humans have evaluating formal logic.

Actually not just the human context.

When we want a program that does something, we want any program out of an enormous space of programs that do what we want. As it is, the programmer has to specify one specific program rather than any program that does what we want, which straightforwardly takes more information to pin down than was originally necessary. The next step in software should be a compiler that is able to take in an ambiguous specification and choose one of the possible answers. That would decrease the amount of information that has to be produced by the programmer. The last time this happened in any significant way was when we got from assembly to Fortran.
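
To make the "any program out of an enormous space" idea concrete, here is a toy sketch of enumerative program synthesis in Python: the "ambiguous specification" is just a handful of input/output examples, and the synthesizer searches a small expression grammar for any program consistent with them. (The grammar and names are invented purely for illustration.)

```python
import itertools

# Grammar: expressions built from x, small constants, and binary operators.
LEAVES = ["x", "1", "2", "3"]
OPS = ["+", "-", "*"]

def expressions(depth):
    """Enumerate expression strings up to the given nesting depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from expressions(depth - 1)
    for op in OPS:
        for left, right in itertools.product(expressions(depth - 1), repeat=2):
            yield f"({left} {op} {right})"

def synthesize(examples, max_depth=2):
    """Return the first enumerated expression consistent with all examples."""
    for expr in expressions(max_depth):
        if all(eval(expr, {"x": x}) == y for x, y in examples):
            return expr
    return None

# "Ambiguous specification": input/output pairs for f(x) = 2x + 1.
spec = [(0, 1), (1, 3), (5, 11)]
print(synthesize(spec))
```

Many distinct expressions satisfy the spec; the synthesizer happily returns whichever it finds first, which is exactly the "choose one of the possible answers" behavior described above. Real synthesis tools prune this search far more cleverly, but the brute-force version shows the shape of the problem.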

Quote:

Dmytry wrote:

It's some sort of ethnic thinking bias - the computer-people should be intrinsically better at talking to computers.

I'm curious why you think this. A machine can reduce a boolean function of a large number of inputs, which would take a human hours, in a few microseconds. It seems inconceivable to me that one could come up with an AI that wouldn't be better at this problem than a programmer, just because programmers are so bad at it. Indeed, one need only look at pathological programming languages like Malbolge, which are designed to be impossible for anything but a machine to program by virtue of their complexity, to see how disadvantaged programmers really are.
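
The point about boolean reduction is easy to demonstrate: a machine can exhaustively check every input combination of a boolean function, something no human would attempt by hand. A small sketch (the formulas here are invented for illustration) verifying that a "reduced" form is equivalent to the original by brute-force truth-table comparison:

```python
from itertools import product

def f(a, b, c, d):
    # Original formula: three AND terms joined by OR.
    return (a and b) or (a and c) or (a and d)

def g(a, b, c, d):
    # The "reduced" form, factoring out a.
    return a and (b or c or d)

def equivalent(f, g, n_inputs):
    # Exhaustively compare outputs over all 2^n input combinations.
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

print(equivalent(f, g, 4))  # prints True: all 16 cases agree
```

For four inputs this is 16 cases; for forty inputs it's a trillion, still seconds of machine time but far beyond any human working symbolically.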

Consider a neural-network-based AI, the kind based on human data. Why would it have a major intrinsic advantage? Granted, it won't have to type by moving hands, and it won't have to see by having eyes, but those aren't really a major impediment for humans either. And any augmentations available to that AI would be available to humans in some form as well.

The problem with this discussion is that everyone defines AI in their own way. E.g. there's the broad view in which a self-driving car is an AI, but a stupid one. A human-like brain simulator may or may not be considered an AI because it isn't really designed, etc.

That's just one specific quantum algorithm. I don't think you can generalize it beyond searching unsorted lists on quantum computers.

One algorithm that has a lot of applications. Search is obviously very general; a quantum-mechanical search would effect a quadratic speedup for everything that can be translated into a search function - which is quite a lot!
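
For concreteness, the quadratic speedup can be seen in a purely classical simulation of Grover's amplitude bookkeeping (a toy sketch in plain Python; no quantum hardware or library is assumed):

```python
import math

# Simulate Grover's search over N = 16 items by tracking amplitudes directly.
N = 16
marked = 11                     # index of the item we're searching for
amps = [1 / math.sqrt(N)] * N   # start in the uniform superposition

# Grover needs only ~(pi/4) * sqrt(N) iterations, vs ~N/2 classical probes.
iterations = round(math.pi / 4 * math.sqrt(N))  # 3 iterations for N = 16
for _ in range(iterations):
    amps[marked] = -amps[marked]          # oracle: flip the marked amplitude
    mean = sum(amps) / N                  # diffusion: inversion about the mean
    amps = [2 * mean - a for a in amps]

print(sum(a * a for a in amps))   # total probability stays ~1.0
print(amps[marked] ** 2)          # marked-item probability, ~0.96 after 3 steps
```

After just three iterations the marked item is found with probability above 95%, whereas a classical scan of 16 unsorted items needs 8 probes on average; the gap widens as sqrt(N) versus N.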

That's probably the least relevant of the algorithms.

The least relevant, and one of the most highly useful? Do you care to back up this assertion, or am I to take your word that you know better than the thousands who have cited Grover's search algorithm?

...or am I to take your word that you know better than the thousands who have cited Grover's search algorithm?

You only need to take my word that I know better than you do; the thousands aren't actually in disagreement with me. Grover's search is in any case more relevant to speeding up algorithms that e.g. search for function roots than to speeding up actual data search. Current quantum computers are just a few qubits large, and that's because it is hard to maintain coherence over bigger systems. Get big enough and you decohere via gravity. It is interesting, but not such a big deal practically, especially for database search, where in practice, for big databases, there is always computing power next to the data itself, and query time grows as N^(1/3) because that's the speed-of-light lag (you send the search query to all your storage servers; the server that finds data matching the query responds; you get an answer in N^(1/3) time plus the server's constant delay). Quantum data storage has a kind of intrinsic computing capacity; it's almost cheating to claim a speedup there.
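
The N^(1/3) figure follows from geometry: storage packed into 3D space holds roughly r^3 items within radius r, so the light-speed round trip to reach any of N items grows as N^(1/3). A quick numerical comparison (constant factors ignored; this only illustrates the scaling argument above):

```python
# Compare the scaling of a light-lag-limited parallel lookup (N^(1/3))
# against Grover's iteration count (sqrt(N)). For large N, the classical
# parallel architecture scales *better*, which is the point being made.
for n in [10**6, 10**12, 10**18]:
    classical = n ** (1 / 3)  # items within radius r ~ r^3 => time ~ N^(1/3)
    grover = n ** 0.5         # Grover needs ~sqrt(N) oracle calls
    print(f"N = {n:.0e}:  N^(1/3) = {classical:.1e}   sqrt(N) = {grover:.1e}")
```

At N = 10^18 the gap is a factor of a million in favor of the parallel classical lookup, before even considering decoherence.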

That said, even if there is a quantum-computing quadratic speedup on top of everything else, the point still holds that there are plenty of tasks at which some immense super-duper post-singularity Dyson sphere can be expected to outdo us only by a small integer factor in problem size (unless there's new physics, in which case all bets are off). I find that quite interesting.

And where's the exact claim in question? It really is very silly. The issue isn't so much with search itself, even, as with actual storage - the decoherence times are way too short, and a search in a database that expires in microseconds (and cannot actually be refreshed a la DRAM, other than out of classical storage) wouldn't be very useful. The algorithm is very relevant to searching for solutions to equations, though; that I agree with. Edit: here, http://adsabs.harvard.edu/abs/1995Sci...270..255D and it still holds very much true. There's a breakthrough of boosting the time to a whopping 500 microseconds http://www.sciencedaily.com/releases/20 ... 142123.htm but as you raise the number of qubits, AFAIK the decoherence time decreases. (Needless to say, if you have to copy your database from a classical system into a quantum system to perform the search, it's back to O(N).)

I'm not sure how much algorithmic complexity is actually relevant to AI anyway. It's not like I perform unsorted database lookups; it's not like I execute detailed computational models of atmospheric fluid dynamics. Similarly, I wouldn't expect AIs to be doing that either, except in a perhaps slightly more efficient way than me - that is, by delegating to some computation device.

Database lookups and computational models are brute-force solutions to problems; intelligence is at least partially about doing better than brute force. If we develop a general-purpose, self-improving AI, the raw computational power available to it is not going to be as important as how efficiently it can use it.
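
"Doing better than brute force" is easy to illustrate with a classic toy example: the naive Fibonacci recursion below repeats work exponentially, while a one-line cache makes the same hardware exponentially more effective. The raw speed of the machine matters far less than how the computation is organized.

```python
from functools import lru_cache

calls = 0

def fib_naive(n):
    """Brute force: recomputes the same subproblems over and over."""
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_smart(n):
    """Same recursion, but each subproblem is computed exactly once."""
    return n if n < 2 else fib_smart(n - 1) + fib_smart(n - 2)

fib_naive(25)
print(calls)          # ~240,000 recursive calls for n = 25
print(fib_smart(25))  # prints 75025, using only 26 cached subproblems
```

The cached version is not "smarter hardware"; it is the same hardware used well, which is the distinction being drawn above.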

It's something of a favourite with Dmytry. It's been explained to him multiple times (by me, among others) that attempting to shoehorn computational metaphors onto topics like cognition and intelligence is neither commonplace among those who know the area nor effective in understanding it, but computation is apparently something he knows about, so he keeps trying to drive the discussion in that direction to appear knowledgeable.

That's just one specific quantum algorithm. I don't think you can generalize it beyond searching unsorted lists on quantum computers.

One algorithm that has a lot of applications. Search is obviously very general; a quantum-mechanical search would effect a quadratic speedup for everything that can be translated into a search function - which is quite a lot!

That's probably the least relevant of the algorithms.

The least relevant, and one of the most highly useful? Do you care to back up this assertion, or am I to take your word that you know better than the thousands who have cited Grover's search algorithm?

I read your post above as implying that unsorted database searches constrain the performance of advanced computers for some fundamental reason. This seems like nonsense to me.

More generally, search is not a hard problem, given that it is both parallel and, given indexing, can be performed very efficiently on conventional computers. While quantum algorithms are interesting, they're probably not particularly useful unless implementing quantum computers ends up being trivially easy. It'll likely end up being cheaper/easier to just use conventional computers for these problems.
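
The indexing point in a nutshell: pay an O(N) cost once to build a hash index, and every subsequent lookup is roughly constant time instead of a full scan. A toy sketch (the records are invented for illustration):

```python
# 3,000 toy records: (name, age) pairs with repeats.
records = [("alice", 31), ("bob", 25), ("carol", 47)] * 1000

def scan(name):
    """Brute-force O(N) search: touches every record per query."""
    return [age for n, age in records if n == name]

# One-time index build; each later lookup is ~O(1) on average.
index = {}
for n, age in records:
    index.setdefault(n, []).append(age)

print(scan("bob") == index["bob"])  # prints True: same answers, very different costs
```

This is why conventional databases rarely do unsorted search in practice, and why a quadratic quantum speedup over the unindexed case is less compelling than it first sounds.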

Re: cubing, I meant cubing the number of operations done. Some tasks are inherently exponential with no shortcuts - for example, weather forecasting requires exponentially more knowledge, memory, and operations as the prediction length grows, for predictions of comparable exactness.
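
The exponential-cost claim comes from sensitivity to initial conditions. A standard chaos toy (the logistic map - not an actual weather model, just an illustration of the mechanism) shows two trajectories starting 10^-12 apart diverging exponentially, so each extra unit of forecast length demands exponentially better initial data:

```python
# Logistic map at r = 4 (fully chaotic): errors roughly double each step.
r = 4.0
x, y = 0.4, 0.4 + 1e-12  # two "measurements" differing by one part in 10^12
for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.2e}")
```

Within about 40 steps the trajectories are macroscopically different; to predict twice as far ahead you need roughly the square of the initial precision, which is the "exponential in prediction length" cost described above.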

Sure, but:

1) I don't think you'd want to use an AI to forecast weather. That's a hard parallel problem that involves solving a lot of equations over an enormous input data set. It seems much better suited for GPU-like hardware consisting of an enormous number of distributed FPUs coupled to local memory and a high-bandwidth interconnect. While it's possible (although probably unlikely) that the same machine could run both, the former would seem to have no real application to the latter. It's like saying that a GPU cannot run JavaScript efficiently, so why make GPUs? A true observation, but one that misses the point.

The point is that some "Jupiter brain" or Dyson sphere would not be to mankind as mankind is to a roundworm when it comes to these sorts of tasks, even in the absolute best case where, rather than being intelligent, all resources are thrown at the task.

I'm not talking about some stupid sci-fi nonsense. If you had a practical AI device, you would not want to solve these problems with it, so who cares? It's literally not an interesting question outside of bad fiction.

Dmytry wrote:

redleader wrote:

2) More generally, while exponentially scaling (or at least non-polynomially scaling) problems are always going to be hard to impossible to solve exactly for nontrivial inputs, they're usually not the most interesting problems commercially or even scientifically. A lot of software engineering comes down to solving problems that are hard not because they scale poorly, but because they are complex even for trivial inputs. This is why we still care about building faster processors and clusters, even though they will make virtually no difference in the evaluation of exponential problems.

A lot of clusters are used for simulation.

Which indicates that arguments about complexity theory have little relevance to code that people actually want to run.

Dmytry wrote:

We do, it's just quite stupid. We can make software that identifies cat videos with poor accuracy, or self-driving cars. We are somewhere at the level of a particularly dim cockroach.

I don't consider any of this AI.

Cockroaches are just biological microcontrollers.

Dmytry wrote:

redleader wrote:

The purpose of a programmer is to use human intelligence, and the ability to understand context that comes with it, to translate an incomplete human-language description of a problem into a rigorous mathematical description, usually involving a programming language based on a context-free grammar. Once this context is used to create a complete description of the problem, computers generally take over and actually implement the solution (e.g. a compiler runs and generates a binary).

As an assembly programmer, I can assure you that computers are already unimaginably better at this than the fastest humans. gcc, for instance, can produce machine code of nearly equivalent quality to mine (and probably better quality than mine, if I restrict myself to directly translating C code without applying my a priori knowledge of the problem) at a rate that is a good 4-5 orders of magnitude faster than I can. It will also do so with an error rate that is almost infinitely lower than mine. It can do this because programming a computer ultimately comes down to evaluating an enormous number of boolean logic operations, a task that a digital computer can do trillions of times faster than a human.

The fact is that all modern languages, and modern programming techniques as well, are designed to work around the extreme difficulty humans have grasping complex logic expressed in the form of a grammar. Almost every rule we teach new programmers is designed to make things easier for them, not for the compiler. Optimizing for the compiler makes little sense, given how good compilers already are at the problem.

In spite of these enormous intrinsic advantages, as you point out, modern computers cannot solve much of anything without a programmer. From my point of view, this is because a machine, however smart or fast it is, cannot understand the context of the problem and must have it spelled out mathematically, using a grammar, by something that does. Additionally, a machine may not be able to develop a solution independently of the programmer, but that is a secondary problem. A strong AI, however, would not have this limitation. It would, by definition, be able to understand the context of a problem and generate a grammar directly that could be fed into a compiler. Furthermore, it would be able to do so at an incredibly fast rate, because it would not have the extreme difficulty humans have evaluating formal logic.

Actually not just the human context.

When we want a program that does something, we want any program out of an enormous space of programs that do what we want. As it is, the programmer has to specify one specific program rather than any program that does what we want, which straightforwardly takes more information to pin down than was originally necessary. The next step in software should be a compiler that is able to take in an ambiguous specification and choose one of the possible answers. That would decrease the amount of information that has to be produced by the programmer. The last time this happened in any significant way was when we got from assembly to Fortran.

You can never take an ambiguous specification and produce a working program, though, aside from guessing, which fails for any nontrivial task. Not even a human can do that. What a human programmer does is provide enough context that a specification that seems ambiguous at first is reduced to something definite. That's the difference between a programmer and a compiler. A compiler takes an exact specification written in a grammar, while a programmer takes a specification encoded jointly between some grammar and their knowledge of the problem context.

Dmytry wrote:

redleader wrote:

I'm curious why you think this. A machine can reduce a boolean function of a large number of inputs, which would take a human hours, in a few microseconds. It seems inconceivable to me that one could come up with an AI that wouldn't be better at this problem than a programmer, just because programmers are so bad at it. Indeed, one need only look at pathological programming languages like Malbolge, which are designed to be impossible for anything but a machine to program by virtue of their complexity, to see how disadvantaged programmers really are.

Consider a neural network based AI, the kind based on human data.

Can I consider one that has a conventional digital microprocessor available with nanosecond latency, one that can evaluate 1000-input boolean functions in microseconds? I think such a device would be able to very quickly implement program flow control.

Dmytry wrote:

The problem with this discussion is that everyone defines AI in their own way. E.g. there's the broad view in which a self-driving car is an AI, but a stupid one. A human-like brain simulator may or may not be considered an AI because it isn't really designed, etc.

I don't know what any real AI would look like, but I don't generally assume that we'll build one by simulating human brains, given how hard that is.

I'm not sure how much algorithmic complexity is actually relevant to AI anyway. It's not like I perform unsorted database lookups; it's not like I execute detailed computational models of atmospheric fluid dynamics. Similarly, I wouldn't expect AIs to be doing that either, except in a perhaps slightly more efficient way than me - that is, by delegating to some computation device.

Yes, this is my take as well. Asking how well an AI can simulate the weather or run inefficiently programmed SQL databases is like asking how well a GPU can run JavaScript: silly.