So, it looks like the Mac has crested

Sony is already selling screens with more than 2,000 pixels per inch. They are just small, and you view them through a magnifying lens: check out the viewfinder on a NEX-6 or NEX-7, or the Sony Alpha 65 or 77.

To reach the resolution limit of the eye in those screens, Sony would need around 7,000 pixels per inch (a 1/2-inch screen at roughly 6 MP).
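
For what it's worth, a quick back-of-the-envelope check of that figure (a rough sketch; the 4:3 aspect ratio and the exact 6 MP resolution are my assumptions, not Sony's specs):

```python
import math

# Assumed: a 6 MP, 4:3 panel viewed on a 1/2-inch-diagonal screen.
width_px, height_px = 2828, 2121   # ~6.0 megapixels at 4:3
diagonal_in = 0.5

diagonal_px = math.hypot(width_px, height_px)   # ~3535 px across the diagonal
ppi = diagonal_px / diagonal_in
print(f"{ppi:.0f} pixels per inch")             # ~7070 ppi, i.e. "around 7000"
```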

Your 2,000 ppi screens are just hand-waving because EH2 and Zero haven't thought of that. Because they haven't thought of it, the average consumer can't possibly find it useful, therefore you might as well be talking about unicorns and leprechauns. And trolls, don't forget the trolls. They are real. I've seen them.

I've engaged, and you have ignored. I asked you specifically how real-time, photo-realistic 3D rendering would help your average consumer, and how they would use it. You ignored it. You have games... and...?

Why does there need to be an 'and' here? Games are a $20B industry. They're a fairly significant part of what consumers do with computing devices. They've been the primary driver of the already massive increases in GPU performance we've seen to date. Why would this stop now?

Because those games are not played by the majority of casual consumers. Farmville doesn't need that, nor would it use it. Same with Angry Birds.

The legions of people who do little more than casual surfing/shopping, Facebook, email, etc. don't game. Plus, a HUGE HUGE HUGE portion of that $20B gaming industry is centered around gaming consoles.

Why does this matter? We don't have hardware that can do this in laptops or consoles, mainly because they use the exact same design. The first console/GPU that ships with photo-realistic, real-time 3D is going to make shitloads of money, no matter what the platform.

Because those games are not played by the majority of casual consumers. Farmville doesn't need that, nor would it use it. Same with Angry Birds.

So what? Games have driven GPU performance to grow by three or four orders of magnitude over the last 15 years. Why would this stop now?

Echohead2 wrote:

The legions of people who do little more than casual surfing/shopping, Facebook, email, etc. don't game. Plus, a HUGE HUGE HUGE portion of that $20B gaming industry is centered around gaming consoles.

Which work on magic, rather than GPU technology?

Echohead2 wrote:

Here with current tech.

Yeah, in about the same way Internet video streaming was "here with current tech" in 1997.

This is hilarious. If computers can't presently do something, I can't use it as an example of a processor-intensive use case because we don't know how to do it yet. If computers can presently do something, I can't use it as an example of a processor-intensive use case because it's "here with current tech" (no matter how limited the current implementations are).

Echohead2 wrote:

Using your laptop? No, they would use built-in systems. Which means this is irrelevant to the topic.

This discussion is about general uses for increased local processing power. I have mentioned several times that these aren't all going to be showing up in traditional PC form factors. Stuff like augmented reality is more naturally at home in various sorts of mobile devices, for instance. I explicitly noted that e.g. Kinect showed up as a console accessory.

Echohead2 wrote:

Yeah, and people thought we would all be using voice recognition software by now. Turns out that even when it is very very very accurate, people still don't use it much. It is a fringe case and yes, it works, and works pretty darn well. Turns out that people actually don't want to talk to their computers. Plus, this is done with current tech (actually, it is done with decade-old tech).

I don't think we can say that people don't want speech recognition tech or that this problem is solved by current tech. If you ever watch a human dictating to another human, you'll see that current speech recognition tech doesn't come anywhere close to that. You have to speak punctuation, which is intrusive, and you have to be completely literal with every word, i.e. if you use a contraction, but you're writing a formal document that shouldn't have contractions, the computer isn't smart enough to fix that. When giving dictation to a person you can also switch at will between dictating text to be written down (mostly) literally and speaking high-level instructions like "get rid of the paragraph about the widget production numbers" or "add that data here" (with the person being able to infer which data). Current speech recognition software can't do this sort of thing.

Basically, current speech recognition tech subjectively feels sort of like "typing out loud". Giving dictation to a human feels much more natural, and more like a collaborative process. When computers can do something more like the latter, I suspect speech-based text input (and speech-based interaction generally) will be far more popular.

Sony is already selling screens with more than 2,000 pixels per inch. They are just small, and you view them through a magnifying lens: check out the viewfinder on a NEX-6 or NEX-7, or the Sony Alpha 65 or 77.

To reach the resolution limit of the eye in those screens, Sony would need around 7,000 pixels per inch (a 1/2-inch screen at roughly 6 MP).

Your 2,000 ppi screens are just hand-waving because EH2 and Zero haven't thought of that. Because they haven't thought of it, the average consumer can't possibly find it useful, therefore you might as well be talking about unicorns and leprechauns. And trolls, don't forget the trolls. They are real. I've seen them.

You do realize that the viewfinder on a camera is something many consumers use, right? This is also an application that may not be obvious, but makes perfect sense when explained. Here we have exactly the type of thing that you guys are calling nonsense. Sorry your imagination cannot expand beyond TVs.

Well, this is certainly more interesting than the actual thread topic, but it sure seems to me rather foolish to argue that general-purpose computing is "good enough" and that further technology and UI advances won't reshape (and reshape, and reshape) the field in the years and decades to come. I mean, just look back at the past couple of decades. There's no guarantee things will progress at the same blistering pace, but that's just a question of timing. Dramatic advances - and change - are inevitable.

At this point, I think it pretty much is inevitable, actually. So much of our computing world, while amazing (and mind-blowing compared to the state of computing, say, thirty years ago), is in many ways rather crude. We've barely scratched the surface, and advances in processors, graphics, battery life, storage, AI, display tech, miniaturization, wearable computing, machine/body interfaces, etc. will unquestionably play a major part in shaping the future - as will paradigm-shifting ideas and UI advances. You can't really pull the two apart; each drives and feeds the other (with commerce and competition greasing the wheels as well).

"Where is my 10 GHz PC"? is just an example of the exhaustion of a very narrow progress in a very specific area.

Well, this is certainly more interesting than the actual thread topic, but it sure seems to me rather foolish to argue that general-purpose computing is "good enough" and that further technology and UI advances won't reshape (and reshape, and reshape) the field in the years and decades to come. I mean, just look back at the past couple of decades. There's no guarantee things will progress at the same blistering pace, but that's just a question of timing. Dramatic advances - and change - are inevitable.

ZZ has been consistently conflating Moore's Law with clock speed, despite the fact that it's about transistor density.

It's true that for a long time smaller transistors have allowed for higher clock speeds, and that this effect isn't as strong as it once was. However, it hasn't completely exhausted itself, and smaller transistors do still, obviously, get you more transistors per unit of area and, in general, more transistors for the same price. Given that many interesting problems are parallelizable (despite ZZ's years of obfuscation on this point), this alone will ensure progress for years to come. And while opinion hasn't coalesced around any one particular path to follow once our conventional silicon IC tech reaches its limits, it's not like we don't have any ideas.

And while opinion hasn't coalesced around any one particular path to follow once our conventional silicon IC tech reaches its limits, it's not like we don't have any ideas.

As the CNET article discusses, there is a coming wall, but there are also over a dozen new technologies and approaches being studied. Only one needs to succeed, though it's more likely we'll end up with several.

"Where is my 10 GHz PC?" is just an example of the exhaustion of a very narrow kind of progress in a very specific area.

But a very essential one.

The end of clock-speed doubling has profound effects. If you've read the thread, you'll understand this.

Our ability, as humans and programmers, to make effective use of further Moore doublings is, in fact, in substantial doubt.

The ability to improve response times (as opposed to throughput) is in substantial doubt. The degree of progress available is profoundly reduced no matter how optimistically you try to spin it. Doubling every two years is amazing. No more.

That's not some "narrow" problem; it means that we now have what we didn't have before 2005: a class of applications that can make (some) use of the doubling and a class of applications for which the doubling doesn't significantly help, and maybe doesn't help at all. So we now have winners and losers, which matters. We do not see, and I do not expect to see, useful 32-core consumer CPUs because of this. Four cores waiting for the next gesture is no faster than one. Killer apps have been in short supply for some time, probably because what would have been killer in 1990, say, fits comfortably in today's GHz CPUs and nobody has to buy anything.

Moreover, what does it mean when consumers eagerly go backwards in technology?

Right now, in mobile, they are eagerly buying machines with slower processors and less capacious SSDs compared to the hard drives available.

If progress were so inevitably onward and upward, why is that even happening, and happening in such a way that some people think PCs are going to be displaced entirely (they certainly will be to some degree)? Why are the slow and the tiny replacing the fast and the capacious at all?

The only sensible answer is that the absolute performance of GHz processors is so great that it is ample for many (if not most) consumer tasks in absolute terms; in fact, for many of those tasks even the equivalent of a 90 MHz Pentium is.

On the hard drive side something related is happening: most of us are not filling up those multi-TB drives. Only those with high-end video requirements can do so. The "onward and upward" crowd is reduced to claiming that all consumers will develop an appetite for high-end video. But they haven't today.

This is all new; we were, in fact, CPU-bound for a significant fraction of history, even for consumer tasks. We forget, now, how some web pages (back in the day) were complex enough that it wasn't just network delay slowing us down. That's seldom if ever true today. We once relied on compression for hard drives, but that is now decades past.

But the critics pretend that either none of this is happening or that it doesn't matter. Conversely, they pretend that if we point out these failings we're somehow against all progress, which isn't true at all.

Exponential growth means that many more problems fit under the umbrella after each doubling. The population of the US does not double every two years; your list of friends does not double every year; a lot of problems simply don't grow at that rate. Eventually, more and more of them fit within the available horsepower, even given our ability to squander it for marginal gains of various sorts.

We thus have a problem in technology here and there (ten more years of Moore, if we get it, is going to give us 1 atom per transistor; it is most unlikely that we will get past that; even Moore himself doubts it). We also have a case where the ordinary capacity of human beings (how many pixels we can usefully deliver on a display, how much of even video resolution) is becoming a factor, as is some amount of consumer fatigue (video of any sort, never mind high-end, is not universal, nor is local storage of it). These things weren't an issue before. Before, capacity was constrained enough that you could presume that "something would turn up" because we were "mega" and not "giga". At "giga" it's a lot less certain, and this argument has actually revealed as much: the examples offered are few and specialized, and nobody has really refuted the whole "slower mobile is OK" point, which contradicts the "onward and upward" thrust.

So the logic is: even though the semiconductor corporations are spending hundreds of billions on research and development, it's all for naught, because you can't think of a use for all that speed and bandwidth. That is just a lack of imagination on your part.

So the logic is: even though the semiconductor corporations are spending hundreds of billions on research and development, it's all for naught, because you can't think of a use for all that speed and bandwidth. That is just a lack of imagination on your part.

And on yours.

Progress simply isn't always linear. Things change. The engineers are offering what they can and hoping that programmers can use it.

But go ahead: point out to me all those successful 256-core server offerings, won't you, and how they have taken over the server market. Point out to me how people aren't virtualizing things because they don't have any other way of using up the available horsepower.

The facts are out there; you just want to believe in endless progress in a single direction.

The truth is, if 256-CPU (never mind 256-core) machines were really "all that", we'd see a lot more of them than we do.

What we see instead in the server market (a market that, as I tirelessly point out, has more money and more motivation to go in any useful direction than the consumer market does) is deployments of servers that consume fractions of CPUs, not multiples.

If things were really going your way, the server market would be telling us. What it's telling us instead is that it's going another way, overwhelmingly. Instead of 256-core monsters, it's doing things like virtualized systems and blades.

And, the problem sets there are both larger and more diverse.

But, go ahead, believe what you want. Ignore what's actually been happening and happening for years. I can't stop you.

We understand your point. You are essentially repeating variations of the same thing over and over again while consistently misrepresenting the opposite side. It should also be noted that your argument only makes sense if you define what kind of progress you're talking about so narrowly that it becomes absurd.

But go ahead: point out to me all those successful 256-core server offerings, won't you, and how they have taken over the server market. Point out to me how people aren't virtualizing things because they don't have any other way of using up the available horsepower.

The facts are out there; you just want to believe in endless progress in a single direction.

The truth is, if 256-CPU (never mind 256-core) machines were really "all that", we'd see a lot more of them than we do.

Oh for fuck's sake. You're so clueless it's pathetic.

Sales of "big iron" have increased 1000x over the last ten years, matching a similar drop in price due to the move from Solaris/IRIX/AIX/HP-UX to Linux. Applications that couldn't justify the ROI when these systems cost 100x as much are now buying them, as prices have dropped from $10M to $100K, not to mention the huge performance gains.

Systems of the scale you are talking about are so commonplace now they don't get any press.

There are clear signs in all three that consumers are either unwilling or unable, on top of the technical issues, to consume much more than they already do.

I'm not "misrepresenting" anyone, except those who keep insisting that these three things remain at the core of all future progress. I have nowhere claimed that no progress is possible (though I've been accused of it), and I am trying to stick to these topics (though displays topping out is creeping in as another factor, as it is somewhat related).

Those who disagree with this keep bringing up a handful of cases (like 3D video) that are either unsolved, or of unclear appeal to consumers, or both.

A lot of my critics seem unwilling to accept the idea that reality ever says "no." Moore's Law had an incredible run. The end of it is in sight, and there are even plenty of signs that we are not able to consume all of what is in sight. This is new, and my critics are trying to pretend that it isn't.

Moreover, nobody has refuted (because they cannot) that clock doubling is over. However, they want to wave away the implications of that, which are highly significant for many problem types. We're so used to a rising tide lifting all boats that it is going to take a while, apparently, before we notice that it just isn't happening anymore.

There are clear signs in all three that consumers are either unwilling or unable, on top of the technical issues, to consume much more than they already do.

I think you're misled by the trade-offs consumers are willing to make in favor of *much* smaller, more mobile devices with better battery life that are quicker in *other* ways (more responsive, faster to wake from sleep). It's an exchange, not a lack of interest. Speed and capacity are still extremely desirable, and as more media goes digital, and the amount of digital stuff people have grows (and grows and grows), they will need more of both.

A lot of my critics seem unwilling to accept the idea that reality ever says "no." Moore's Law had an incredible run. The end of it is in sight, and there are even plenty of signs that we are not able to consume all of what is in sight. This is new, and my critics are trying to pretend that it isn't.

It's utopian to think Moore's Law will continue at its current pace forever. But it's even dumber to think human beings will be satisfied and satiated with the current state of technology, UI, and feature sets. On all sides - creator, consumer, business - they will not. Guaranteed.

Also, the overwhelming amount of information, data, and media to "consume" requires *further* advances to help sort and manage all that stuff, not fewer.

Our ability, as humans and programmers, to make effective use of further Moore doublings is, in fact, in substantial doubt.

As I've pointed out several times, it would appear that while, in the set of all algorithms, a large number do not parallelize well, in the set of algorithms that are a) interesting to us and b) not already fast enough, many do parallelize well.

ZeroZanzibar wrote:

The ability to improve response times (as opposed to throughput) is in substantial doubt.

There is no general rule that says you can't improve response time with more parallelism, at least not unless you're trying to push it below the length of a single clock cycle. It depends entirely on the algorithm you're talking about.

ZeroZanzibar wrote:

We do not see, and I do not expect to see, useful 32-core consumer CPUs because of this. Four cores waiting for the next gesture is no faster than one.

Is this a claim that gesture recognition is not parallelizable? I have not personally implemented a gesture recognition algorithm, admittedly, but machine vision problems appear to lend themselves to parallel implementation.
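
For illustration, here is a minimal sketch of the kind of data parallelism most vision pipelines allow (Python; the frame data and the tile_brightness helper are invented stand-ins for real feature extraction): a frame is cut into independent tiles, so adding cores can cut the wall-clock time per frame, not just the throughput.

```python
from concurrent.futures import ProcessPoolExecutor

def tile_brightness(tile):
    """Toy per-tile work: mean pixel value (stand-in for real feature extraction)."""
    pixels = [p for row in tile for p in row]
    return sum(pixels) / len(pixels)

def split_rows(frame, n_tiles):
    """Split a frame (a list of pixel rows) into horizontal bands."""
    step = max(1, len(frame) // n_tiles)
    return [frame[i:i + step] for i in range(0, len(frame), step)]

if __name__ == "__main__":
    # Synthetic 480x640 grayscale frame.
    frame = [[(x + y) % 256 for x in range(640)] for y in range(480)]
    tiles = split_rows(frame, n_tiles=8)
    # Tiles are independent, so the work spreads across cores and the
    # per-frame latency shrinks as cores are added (up to the tile count).
    with ProcessPoolExecutor() as pool:
        features = list(pool.map(tile_brightness, tiles))
    print(features)
```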

ZeroZanzibar wrote:

Moreover, what does it mean when consumers eagerly go backwards in technology?

Not what you want it to mean. As noted previously, implying that this means people aren't interested in more computational performance is like implying that anyone who trades off gas mileage for other features — which is technically anyone who buys any car except the car with the absolute highest MPG rating on the market — is not interested in any future mileage improvements.

Trading off one thing for another today doesn't mean users don't want more of both things if it becomes possible to do that tomorrow.

ZeroZanzibar wrote:

Exponential growth means that many more problems fit under the umbrella after each doubling. The population of the US does not double every two years; your list of friends does not double every year; a lot of problems simply don't grow at that rate. Eventually, more and more of them fit within the available horsepower, even given our ability to squander it for marginal gains of various sorts.

We still have some very important problems that aren't accommodated by current hardware. Although you and EH2 might feign ignorance about their uses, various AI problems appear to be foundational. That is, technologies that solve them will be like, say, the GUI — widely applicable across a huge number of use cases; something you can usefully integrate into practically any program. Machine vision, for instance, isn't something that you bundle up into a "Machine Vision 1.0" app, which you have to hope is the next killer app. It's something you integrate into video and photo management and editing apps, camera firmware, map creation tools, 3D modeling tools, augmented reality apps, gesture recognition systems, facial expression recognition systems, surveillance systems, robotics, note-taking apps, systems designed to help the disabled... the list is nearly endless. Some of this is already happening, e.g. Evernote tries to recognize text in images you include in notes, and iPhoto tries to recognize faces.

We thus have a problem in technology here and there (ten more years of Moore, if we get it, is going to give us 1 atom per transistor; it is most unlikely that we will get past that; even Moore himself doubts it).

Individual atoms exhibit more complex behavior than individual transistors, which seems to imply it's possible to get more than a single transistor's worth of computation out of one. Even if it's not, there are ways of making progress other than miniaturization. For instance, reducing fabrication costs might allow us to practically build much larger processors (current processors are still tiny compared with, say, the brain), and clockless designs could help control power consumption in such devices.

ZeroZanzibar wrote:

We also have a case where the ordinary capacity of human beings (how many pixels we can usefully deliver on a display, how much of even video resolution) is becoming a factor, as is some amount of consumer fatigue (video of any sort, never mind high-end, is not universal, nor is local storage of it).

With respect to the display issue, I can see use cases for, say, a 60" retina-quality touch screen. If we assume that means 220 ppi (like the 15" MacBook Pro), that would be something like 73 megapixels.
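
The arithmetic behind that figure, as a quick sketch (the 16:9 aspect ratio is my assumption; 220 ppi is the 15" MacBook Pro density mentioned above):

```python
import math

diagonal_in = 60
aspect_w, aspect_h = 16, 9           # assumed aspect ratio
ppi = 220                            # "Retina" density of the 15" MacBook Pro

diag_units = math.hypot(aspect_w, aspect_h)
width_in = diagonal_in * aspect_w / diag_units
height_in = diagonal_in * aspect_h / diag_units

megapixels = (width_in * ppi) * (height_in * ppi) / 1e6
print(f"{megapixels:.0f} MP")        # ~74 MP, the same ballpark as the ~73 MP above
```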

ZeroZanzibar wrote:

Moreover, nobody has refuted (because they cannot) that clock doubling is over. However, they want to wave away the implications of that, which are highly significant for many problem types. We're so used to a rising tide lifting all boats that it is going to take a while, apparently, before we notice that it just isn't happening anymore.

This is simply not as unprecedented as you're making it out to be. As I have noted previously, we've had problems of this general type for a very long time.

CPUs over the last 20 years have increased in performance much faster than storage devices, for instance. A CPU 1000x as fast with a storage device only 20x as fast bars some applications that would be possible with both a CPU 1000x as fast and storage 1000x as fast. Yet we've managed to write useful code that can take advantage of systems with CPUs 1000x as fast but storage only 20x as fast.
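
One concrete way code has adapted to that imbalance, sketched below (purely illustrative; the file name and cache size are made up): spend cheap CPU cycles on compression and in-memory caching so the slow storage device is touched less often.

```python
import zlib
from functools import lru_cache

def write_compressed(path, data: bytes):
    """Trade CPU (compression) for fewer bytes hitting the slow storage device."""
    with open(path, "wb") as f:
        f.write(zlib.compress(data, level=6))

@lru_cache(maxsize=256)
def read_compressed(path) -> bytes:
    """Cache decompressed reads in RAM so repeated access skips storage entirely."""
    with open(path, "rb") as f:
        return zlib.decompress(f.read())

if __name__ == "__main__":
    payload = b"mostly redundant log text " * 10_000
    write_compressed("example.bin", payload)        # hypothetical file name
    assert read_compressed("example.bin") == payload
    read_compressed("example.bin")                  # second call is served from the cache
```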

GPU performance has also been increasing faster than CPU performance for quite some time, so there as well we have an example where the market successfully adapts to a case where the performance of one class of tasks is increasing faster than the performance of other classes.

The hell it's not. Performance is not a side effect, it is fundamental to everything, sooner or later.

If computers could only clock at 1 kHz, almost all of this discussion would be moot.

Moreover, you're late to the show. This argument has been around for a while and performance has been explicitly argued more than once.

Specifically, there's been a long series of arguments that multi-core will save us. It won't.

The loss of clock-speed doubling is a bone in the throat of this discussion, because it is no longer possible to argue that we will have anything like uniform progress across the board. We used to have that. We don't anymore.

These technologies are not and never have been disconnected from each other.

We used to double the clock and double the disk size regularly. So there was a balance between increased capacity and the raw, wall-clock time it took to process the data. We don't have that now, not without effort, and not always even then. Sooner or later, that will blow back and affect aggregate demand for disk, because now only some problems can use it and others effectively can't.

Being late to the conversation doesn't prevent me from, you know, reading. Which you might consider doing.

Clock speed does not correlate with increased performance; otherwise the P4 would have been the ne plus ultra of desktop computing. How'd that work out?

ZZ wrote:

The loss of clock-speed doubling is a bone in the throat of this discussion, because it is no longer possible to argue that we will have anything like uniform progress across the board. We used to have that. We don't anymore.

We never have. Clock speed is not a constant. You can't compare clock speeds between different architectures and expect them to correlate with "performance."

You do realize that the viewfinder on a camera is something many consumers use, right? This is also an application that may not be obvious, but makes perfect sense when explained. Here we have exactly the type of thing that you guys are calling nonsense. Sorry your imagination cannot expand beyond TVs.

Well, this is certainly more interesting than the actual thread topic, but it sure seems to me rather foolish to argue that general-purpose computing is "good enough" and that further technology and UI advances won't reshape (and reshape, and reshape) the field in the years and decades to come. I mean, just look back at the past couple of decades. There's no guarantee things will progress at the same blistering pace, but that's just a question of timing. Dramatic advances - and change - are inevitable.

Because for many people it has gotten good enough. Heck, the whole transition to smartphones and tablets (i.e., "post-PC") is specifically because of this. So many people use such a tiny fraction of their computer's capability that new advances don't do anything for them.

Of course things will advance, but that doesn't mean people (consumers) will actually use them. The advances are moving to mobile and such because the potential gains on the desktop are diminishing. So you shift focus to mobile, etc.

Well, this is certainly more interesting than the actual thread topic, but it sure seems to me rather foolish to argue that general-purpose computing is "good enough" and that further technology and UI advances won't reshape (and reshape, and reshape) the field in the years and decades to come. I mean, just look back at the past couple of decades. There's no guarantee things will progress at the same blistering pace, but that's just a question of timing. Dramatic advances - and change - are inevitable.

And look at that graph: notice how it was great and all, and then the transistor count growth starts coming from multiple cores. Thing is, multiple cores have diminishing returns. Going from 1 to 2 is a big advantage. 2 to 4: better, but less so than 1 to 2. 4 to 8: better for specialized workloads. 8 to 16: don't make me laugh (talking about consumers).

This is exactly the point that ZZ is talking about. Improvements are now going to multiple cores. A consumer can definitely see benefits from dual-core and even quad-core. But 8, 16, 32 cores? I mean, by 2020 it would be something silly like a 64-core CPU. Those extra cores just won't help consumers worth a damn. Yeah, yeah, ZnU is going to come in and say that some magical parallel app will come along to eat those cycles.
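
The intuition behind those diminishing returns is roughly Amdahl's law. A minimal sketch (the 30% serial fraction is an arbitrary assumption, not a measurement of any real consumer workload):

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

# Assume 30% of a typical consumer workload is inherently serial.
for cores in (1, 2, 4, 8, 16, 32, 64):
    print(cores, round(amdahl_speedup(0.30, cores), 2))
# 1 -> 1.0, 2 -> 1.54, 4 -> 2.11, 8 -> 2.58, 16 -> 2.91, 32 -> 3.11, 64 -> 3.22
```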

So the logic is: even though the semiconductor corporations are spending hundreds of billions on research and development, it's all for naught, because you can't think of a use for all that speed and bandwidth. That is just a lack of imagination on your part.

First, consumers and office drones aren't the only buyers. Second, a lot of those hundreds of billions in R&D is being spent on stuff that isn't really aimed at consumer desktops/laptops. Third, people will still buy new computers, just with longer gaps between purchases (which we are already seeing: it used to be that people bought computers every 2-3 years; now it is easily 4 for tons of people, official policy at many corporations and governments, and that is moving to 5 as we speak).

Maybe you are; pretty much everyone else is talking at the consumer level. For instance, I can barely get people in this thread to notice what is going on in the server world and the implications for consumer computing. Since they don't like the answers, they pretend it doesn't count somehow, even though it forms a very nice little canary in the coal mine for the ability of consumers to endlessly consume tomorrow's cycles.