Maybe you are; pretty much everyone else is talking at the consumer level. For instance, I can barely get people in this thread to notice what is going on in the server world and the implications for consumer computing. Since they don't like the answers, they pretend it doesn't count somehow, even though it forms a very nice little canary in the coal mine for the ability of consumers to endlessly consume tomorrow's cycles.

Home computers != "consumer level". Or are you saying that consoles are not relevant to this discussion, like EH2?

Why are you still advancing this argument when you have offered no response to my point that display resolution isn't comparable to generalizable computational resources?

Because I have, repeatedly, and you just ignore it. Just because you say you have a point doesn't mean you actually do.

You have offered no argument. Your responses have consisted of things along the lines of "Just because you say these things are different doesn't mean they are", with zero acknowledgment of my argument for why they're different.

ZeroZanzibar wrote:

Specifically, there's been a long series of arguments that multi-core will save us. It won't.

The clock speed doubling loss is a bone in the throat of the discussion, because it is no longer possible to argue that we will have anything like uniform progress in any direction. We used to have that. We don't anymore.

You keep repeating the same arguments without engaging with counterpoints. You haven't responded to the point that there are important algorithms that do parallelize well — including, apparently, some of the most interesting and potentially valuable algorithms currently being researched. You haven't responded to the point that "uniform progress in any direction" has never been an accurate description of how the industry has advanced.

ZeroZanzibar wrote:

Maybe you are; pretty much everyone else is talking at the consumer level.

There are two related discussions here. One is centered around your argument that we're about to encounter insurmountable technical barriers that will prevent us from continuing to make computing devices faster/cheaper/smaller. This discussion has little to do with the consumer market in particular. The other is EH2's incredulity about the idea that regular people could ever want to do anything with computing devices that current hardware isn't already capable of. That's more focused on consumer use cases. These arguments regularly bleed into each other, adding to the confusion. This has never been an argument exclusively about desktop and laptop form factors (as EH2 has tried to imply a couple of times now). If you go back in this thread, and to the previous thread on this subject, you'll explicitly see me discussing all of this in the context of cloud infrastructure, game consoles, robotics, mobile devices, etc.

It really doesn't even make sense to discuss the larger technological issues only within the context of a single market, because all of these markets are based on the same technology. Similar process technology (give or take a year here or there), similar programming models, sometimes even the exact same hardware and software (i.e. supercomputers using high-end desktop CPUs and GPUs, iOS and OS X sharing lots of code).

Or are you saying that consoles are not relevant to this discussion, like EH2?

I'll let EH speak for himself. But, for the most part, we have not been talking consoles. And, in the end, they don't move the discussion much. They are on the same curve as everyone else, except they do it more in step functions because consoles aren't replaced every year. They may be a little more aggressive, here and there, with their processors, and they have a strong incentive to push the envelope a bit more than a typical consumer product. And that's if the logic of the game permits it.

But, I really don't see us talking about 64 core game consoles now or any time soon. At the end of the day, they have the same software and hardware problems as everyone else in the PC sphere; perhaps slightly nicer, on average, in terms of parallelism, but that's all. Even if their applications are or can be made a bit "nicer" as far as sharing goes, the sheer hardware problems with sharing catch up to them only a little later than they do at the consumer level. Sometimes. So, the clock doubling issues still will bedevil them.

I did a little informal look, for instance, a couple pages back. If we go by those links, they didn't even have much ability to make memory versus CPU/GPU performance trade-offs. The minimum and maximum memories were between 2 and 3 GB when more was available. And I tried to google up the games that those doing the analysis claimed were among the more challenging in terms of resource consumption.

But, I really don't see us talking about 64 core game consoles now or any time soon.

You know, unless you count GPUs, which these days are useful for things beyond graphics. Fundamentally there's no hard and fast line between GPUs and CPUs, and things like Intel's Many Integrated Core Architecture — which in its next incarnation may put more than 50 simplified x86 cores on one chip — sort of screw with the conventional boundaries.

Incidentally, I ran across this article on Slashdot just now, and in light of this thread the following line literally made me laugh out loud:

Quote:

Performance is awful — but the algorithm is apparently very parallelizable, so this is unlikely to be an insurmountable issue.

Another interesting algorithm that doesn't achieve acceptable performance on current hardware, but parallelizes easily? Well, what do you know.

I'd guess this is effectively another machine vision problem. It's a neat example of what I was saying previously about how in addition to the use cases we can guess at here, these sorts of broadly applicable techniques will likely find many uses in specialized areas that people like us having a general high-level discussion would probably never think of. (Although now that I think about it I do recall running across the notion that text compression and AI are equivalent problems — the same logic would seem to apply to images.)

you'll explicitly see me discussing all of this in the context of cloud infrastructure, game consoles, robotics, mobile devices, etc.

Of course you want to try and change what I have said into something it isn't. BTW...have you read how many people are suspecting that this upcoming round of home consoles might actually be the last generation? Nexbox and PS4 might be the last. Why would that be? HOW could that be? And here is the thing...even if they aren't...people are starting to talk about it. IOW, the end game is in sight for those people. The discussion now is much different than it was in the '90s.

BTW...have you read how many people are suspecting that this upcoming round of home consoles might actually be the last generation? Nexbox and PS4 might be the last. Why would that be?

It's a business model problem. Are you trying to say that this has something to do with consoles not needing CPU/GPU power? If so, bad argument; plenty of uses have already come up in the thread. But if you need another, how about this: modelling and refining the models of objects in a 3D package is hard. Game makers do this even though "scanned" 3D models are easier to make, because scanned models are noisy and cost too many resources to use directly. If we had such a wealth of GPU power, using scanned models wouldn't be a problem at all.

Anyway, as for consoles, the cost is too high a barrier to entry. Nobody would commit the resources to take on the current players without a sure foothold, like Steam has with its PC gaming community. And the current players need to change their model to keep up with the market, like Nintendo's Wii U effort.

Of course you want to try and change what I have said into something it isn't.

Here I am in April 2011 discussing cloud infrastructure in our earlier discussion of the stasis that's allegedly about to strike computing. Here I am earlier in this very thread:

ZnU wrote:

The traditional PC paradigm itself is starting to mature. This means new consumer use cases are likely to show up elsewhere. And they seem to be. Ever used a Kinect? Games designed around it sometimes use clever tricks to hide just how much lag it currently has, but you can see it navigating menus. It sure would be nice to get rid of that lag. Or take something like augmented reality. A mature version of this tech probably requires sophisticated machine vision algorithms (running on the local device; you probably can't do this in the cloud due to latency and bandwidth issues). That's going to require phones and tablets with a lot more CPU grunt.

Your claim that this argument has exclusively been about traditional laptop and desktop form factors is nonsense. But I am curious... are you making this argument because you now in fact recognize that I'm correct about there being consumer use cases for additional computing power beyond those form factors?

Echohead2 wrote:

BTW...have you read how many people are suspecting that this upcoming round of home consoles might actually be the last generation? Nexbox and PS4 might be the last. Why would that be? HOW could that be?

If this does turn out to be true, it certainly won't be because nobody will ever want better graphics than a PS4 or Xbox 720 can deliver, or because it will never be possible to build more powerful gaming hardware, or whatever it is you're trying to imply.

It will be because as general purpose computing devices diversify into more form factors and become more ubiquitous, there's less need for specialty devices — especially since the console market is traditionally set up to sell the same hardware for years, while more general purpose devices like the iPad get revved on an annual basis, sometimes even more often. In other words, if the traditional console model dies out, it will largely be a consequence of the sort of advances you don't believe we'll see, not a consequence of a lack of such advances.

On Thursday, Seattle Mayor Mike McGinn announced the city reached an agreement with Gigabit Squared and the University of Washington to bring 1 Gbps connections, taking advantage of the city’s own underused fiber. Seattle abandoned its plan for a municipal network last summer. A connected city wireless network, which would obviously be slower, is also in the works.

“The plan will begin with a demonstration fiber project in twelve Seattle neighborhoods and includes wireless methods to deploy services more quickly to other areas,” the city wrote in an online statement.

Maybe EH2 can write them a letter telling them they're wasting their money because nobody has any use for that kind of bandwidth.

It's a business model problem. Are you trying to say that this has something to do with consoles not needing CPU/GPU power? If so, bad argument; plenty of uses have already come up in the thread.

Individual cases are not going to carry the argument here.

Half of the problem is that people still haven't engaged with one of the serious issues here:

When we had clock doubling, every single application profited, and by and large with little or no effort.

In the new world, without clock doubling, some apps will profit and others will not profit.

You can sit there all day and dig up favorable cases, and it isn't dispositive. All that means is that the future will favor problems that happen to be easier to parallelize.

The problem is and has been for decades that these kinds of programs are randomly distributed around the problem space.

So, we are going to have a world of winners and losers and there will be a lot of the latter. This is new and it will matter.

The only good news is that absolute performance is now so good, a lot of apps that won't profit won't need to. That's a break we need not have caught, but it further isolates us into the whole "winners and losers" aspect of this.

Formerly, a killer app could be anything. Now, if we are to have one at all (and we haven't lately), it will have to come from an ever-narrower class of application type. Because, every time you double the core count, you reduce the number of applications that profit from it. There's a lot of apps that can profit from two cores versus one. There are many fewer that profit from sixteen.
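To put rough numbers on that, here's a quick Amdahl's-law sketch (the parallel fractions below are purely illustrative assumptions, not measurements of any real application):

Code:

# Amdahl's law: overall speedup on n cores when a fraction p of the work
# parallelizes perfectly and the remaining (1 - p) stays serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assumed, purely illustrative parallel fractions.
for p in (0.50, 0.90, 0.99):
    for n in (2, 4, 16, 64):
        print(f"p={p:.2f}  cores={n:2d}  speedup={speedup(p, n):5.2f}x")

A 50%-parallel app never gets much past 2x no matter how many cores you throw at it, while a 99%-parallel app keeps scaling. That's the winners-and-losers split in one formula.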

But, citing the winners (of which there will be many examples) doesn't affect the argument. It doesn't restore the status quo ante or anything approaching it.

Clock doubling is predicted to become more difficult in 5 to 10 years, but that is far from certain. A bit early to declare the end of progress.

We haven't had clock doubling since 2005. Where have you been?

Get real -- the reasons we haven't had clock doubling have to do with things like not being able to dissipate the heat, current leakage, and so on.

This isn't some temporary little thing and then progress will resume.

A belief in endless progress is one thing -- this is bordering on denialism.

We should be at maybe 12 GHz right now, if what you say is true, but we aren't. Clock speed isn't even progressing at a slower rate. If anything, we're clocking less now. Lots of 1 GHz CPUs out there today when the max was and probably still could be around 3.
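Just to put numbers on it, a quick back-of-envelope sketch (the ~3 GHz 2005 baseline and the doubling periods are assumptions for illustration only):

Code:

# If clock speeds had kept doubling after 2005, starting from roughly 3 GHz:
base_ghz, base_year, year = 3.0, 2005, 2012
for doubling_period in (2, 3):  # assumed doubling periods, in years
    ghz = base_ghz * 2 ** ((year - base_year) / doubling_period)
    print(f"doubling every {doubling_period} years -> ~{ghz:.0f} GHz by {year}")
# Either assumption lands far above the ~3-4 GHz parts actually shipping.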

Most CPUs even come with a slow-down mode these days, where they go into various sorts of sleep modes and underclocking modes. Unlike when clock doubling was really working and we simply clocked them flat out all the time.

Most CPUs even come with a slow-down mode these days, where they go into various sorts of sleep modes and underclocking modes. Unlike when clock doubling was really working and we simply clocked them flat out all the time.

100% wrong. Slowdown mode exists because that CPU -- massively more performant than a 2005 P4 despite a similar clock -- is essentially sitting there twiddling its thumbs while you do important things on Facebook. Why would you burn power when you don't have to, after all?
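If you want to watch it happen, here's a quick sketch (this assumes a Linux machine exposing the cpufreq sysfs interface; paths will differ elsewhere):

Code:

# Peek at dynamic frequency scaling via Linux's cpufreq sysfs interface.
# Values are reported in kHz.
def read_khz(path):
    with open(path) as f:
        return int(f.read().strip())

base = "/sys/devices/system/cpu/cpu0/cpufreq/"
cur = read_khz(base + "scaling_cur_freq")
top = read_khz(base + "cpuinfo_max_freq")
print(f"current: {cur / 1e6:.2f} GHz, max: {top / 1e6:.2f} GHz")
# At idle the current clock sits well below max; under load it ramps back up.

Run it on an idle box and then again under load, and the point makes itself.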

100% wrong. Slowdown mode exists because that CPU -- massively more performant than a 2005 P4 despite a similar clock -- is essentially sitting there twiddling its thumbs while you do important things on Facebook. Why would you burn power when you don't have to, after all?

You're making my argument for me.

Slow-down modes make no sense unless there's a lot of time to kill. They take work and effort to implement. It takes circuits. Why are they doing it? Because we are living without maximum horsepower and we don't want the heat problems that we didn't used to have. In the past, it wasn't worth the bother, so we didn't do it.

The fact is, there was a day when running Facebook took all the power we had available. Those days are long gone.

As for "massive performance increases" have you bothered to check about throughput versus response time? Most benchmarks are throughput oriented.

"Massive" has to be doubling every two years. We aren't getting that, even at the most optimistic.

You're again countering the same way everyone else does. Because all progress hasn't stopped, you're saying I'm wrong. But, my argument isn't that simple. It says that some apps will do well, others won't and that will matter. It says progress is no longer universal. You haven't argued against that at all. You simply say: see, this factoid over here is still OK, so everything is OK. But, everything is not OK.

Of course you want to try and change what I have said into something it isn't.

Here I am in April 2011 discussing cloud infrastructure in our earlier discussion of the stasis that's allegedly about to strike computing. Here I am earlier in this very thread:

Right...so you admit that even in 2011 you were changing my arguments into something they weren't. I agree, you have consistently done this.

Quote:

Your claim that this argument has exclusively been about traditional laptop and desktop form factors is nonsense. But I am curious... are you making this argument because you now in fact recognize that I'm correct about there being consumer use cases for additional computing power beyond those form factors?

I love how you do this. You try and change my argument and/or bring up straw men and then use that to try and suggest that you are right.

The discussion has always been about consumers.

Quote:

If this does turn out to be true, it certainly won't be because nobody will ever want better graphics than a PS4 or Xbox 720 can deliver, or because it will never be possible to build more powerful gaming hardware, or whatever it is you're trying to imply.

So if they can build it and people want it, why wouldn't it happen? Something being possible and something being commercially viable are two different things.

On Thursday, Seattle Mayor Mike McGinn announced the city reached an agreement with Gigabit Squared and the University of Washington to bring 1 Gbps connections, taking advantage of the city’s own underused fiber. Seattle abandoned its plan for a municipal network last summer. A connected city wireless network, which would obviously be slower, is also in the works.

“The plan will begin with a demonstration fiber project in twelve Seattle neighborhoods and includes wireless methods to deploy services more quickly to other areas,” the city wrote in an online statement.

Maybe EH2 can write them a letter telling them they're wasting their money because nobody has any use for that kind of bandwidth.

ahh...trust ZnU to totally miss the point for multiple years and still try and prove wrong an argument that was never made.

Has it really? I mean, compare 2012 to 2005 and then compare 2005 to 1998. If you really feel like there are equivalent increases in performance, then I just don't know what to say (for consumers--just in case someone forgot).

Because we are living without maximum horsepower and we don't want the heat problems that we didn't used to have. In the past, it wasn't worth the bother, so we didn't do it.

It could be because running full bore means you can't put it in a laptop, and that's where all the growth in PCs is right now? Why would you develop specialized laptop and desktop parts when you can just use the mobile parts on the desktop?

Also, where exactly is the plateau you're referring to? It's difficult to follow your arguments when they muddle around so. CPU performance? Software performance? Some combination of the two?

Quote:

The fact is, there was a day when running Facebook took all the power we had available. Those days are long gone.

What day was that? Other than Flash I can't think of a single web tech that strained systems. Why would anyone develop a consumer service that had a high performance requirement?

Quote:

As for "massive performance increases" have you bothered to check about throughput versus response time? Most benchmarks are throughput oriented.

Your point here?

Quote:

"Massive" has to be doubling every two years. We aren't getting that, even at the most optimistic.

LOL. Thank you for defining my terms for me.

Quote:

But, my argument isn't that simple. It says that some apps will do well, others won't and that will matter. It says progress is no longer universal.

This has always been the case. You're making distinctions of the present that you're not making of the past. Your conclusions are flawed because you aren't comparing like to like.

Has it really? I mean, compare 2012 to 2005 and then compare 2005 to 1998. If you really feel like there are equivalent increases in performance, then I just don't know what to say (for consumers--just in case someone forgot).

Heck, I'm still using a 2006 2.16GHz C2D iMac at home because it's still capable. I would be incorrect to say however that a quad core Core i7 at 2.9GHz would not be faster, because it is.

It is in fact dramatically so, but if you're using software that is waiting on the user (browsing the web, Facebook, etc) you won't see the difference.

It's only when you're doing video encodes or games that you see the difference. Ripping a DVD takes me several hours; on a modern iMac it would take less than an hour, essentially the time it takes to stream the content off the disc.

It's only when you're doing video encodes or games that you see the difference. Ripping a DVD takes me several hours; on a modern iMac it would take less than an hour, essentially the time it takes to stream the content off the disc.

Which takes no issue whatever with what I am arguing about.

In the clock speed doubling days, everything everywhere improved. All apps got the benefit and we got both response time and throughput benefits, too.

Today, only selected things improve and mostly by not as much, either.

Spouting that video decoding improved or some sort of other specialty item improved does not disagree with the argument.

The argument never said "no more progress." It said it would be selective. Giving examples of selective improvement contradicts nothing.

What hasn't been shown, and indeed cannot be shown, is that all applications are still achieving compound performance growth rates that amount to doubling every two years, in both response time and throughput, too.

Those days are gone and all the specific counter-examples aren't a contradiction; they are an illustration of the partial improvement the argument always presumed.

It doesn't refute that many applications are seeing little to no improvement, especially to response time. Those rates exist, but they are substantially lower and it's no use pretending that this has no implications.

Has it really? I mean, compare 2012 to 2005 and then compare 2005 to 1998. If you really feel like there are equivalent increases in performance, then I just don't know what to say (for consumers--just in case someone forgot).

And yes, a Pentium D would be considered a steaming pile of poo today. It would get maybe a 2 or so on the Win 8 Windows Experience rating. It would barely meet the minimum recommended specs for Chess Titans on Win 7.

It doesn't refute that many applications are seeing little to no improvement, especially to response time. Those rates exist, but they are substantially lower and it's no use pretending that this has no implications.

Has it really? I mean, compare 2012 to 2005 and then compare 2005 to 1998. If you really feel like there are equivalent increases in performance, then I just don't know what to say (for consumers--just in case someone forgot).

And yes, a Pentium D would be considered a steaming pile of poo today. It would get maybe a 2 or so on the Win 8 Windows Experience rating. It would barely meet the minimum recommended specs for Chess Titans on Win 7.

...but since clock doubling has stopped, so have performance increases! we don't have i7s at 30ghz, so they're the same as pentium Ds! /caricature

Seriously, I thought the clock speed == performance bullshit stopped with Intel making 1.5 GHz Core procs that beat the living crap out of 3 GHz Pentiums :l

Has it really? I mean, compare 2012 to 2005 and then compare 2005 to 1998. If you really feel like there are equivalent increases in performance, then I just don't know what to say (for consumers--just in case someone forgot).

And yes, a Pentium D would be considered a steaming pile of poo today. It would get maybe a 2 or so on the Win 8 Windows Experience rating. It would barely meet the minimum recommended specs for Chess Titans on Win 7.

Has it really? I mean, compare 2012 to 2005 and then compare 2005 to 1998. If you really feel like there are equivalent increases in performance, then I just don't know what to say (for consumers--just in case someone forgot).

I'm sorry if I feel that there is a much larger improvement from a P2-333 to a P-D-3GHz than from a P-D-3GHz to an i7-3GHz.

I don't think many people would have said a PII-333 was anything other than a complete steaming pile in 2005. However, a P-D-3GHz would not be considered a steaming pile today.

Heh. I'm stuck using a P-D (I think about 3 GHz) at work and...it's a complete steaming pile. The damned thing grinds to a standstill whenever I try to open a fairly big PDF, and often crashes when I try to open a big PowerPoint. My demands aren't great--writing letters and briefs, creating fairly basic PowerPoints, and doing legal research online. Even so, I'd desperately love a more modern computer, but my firm has always cheaped out on computer hardware.

Heck, I'm still using a 2006 2.16GHz C2D iMac at home because it's still capable. I would be incorrect to say however that a quad core Core i7 at 2.9GHz would not be faster, because it is.

It is in fact dramatically so, but if you're using software that is waiting on the user (browsing the web, Facebook, etc) you won't see the difference.

It's only when you're doing video encodes or games that you see the difference. Ripping a DVD takes me several hours; on a modern iMac it would take less than an hour, essentially the time it takes to stream the content off the disc.

Or the fact that I cannot play Civ 5 on my system!

But you explain my point. First, you are STILL using a 6-year-old computer and just said it was still capable. You would find VERY VERY few people in 2005 who had a 1999 computer that they described as "still capable".

Furthermore--you are EXACTLY right--you won't notice it, as browsing the web, Facebook, etc. is waiting on the user, not the CPU. And that happens a LOT with consumers. The differences are not noticeable much of the time.

And yes, a Pentium D would be considered a steaming pile of poo today. It would get maybe a 2 or so on the Win 8 Windows Experience rating. It would barely meet the minimum recommended specs for Chess Titans on Win 7.

And I don't think you get how little consumers tax their systems.

It would be considered a steaming pile of poo to YOU--but not to vast swaths of consumers. We have at least 20% of our computers at work from the 2006-2007 time frame, and they are just now getting swapped out (actually, they aren't really getting swapped out, they are just getting downgraded to single/dual-purpose units and will be dandy there).

Heh. I'm stuck using a P-D (I think about 3 GHz) at work and...it's a complete steaming pile. The damned thing grinds to a standstill whenever I try to open a fairly big PDF, and often crashes when I try to open a big PowerPoint. My demands aren't great--writing letters and briefs, creating fairly basic PowerPoints, and doing legal research online. Even so, I'd desperately love a more modern computer, but my firm has always cheaped out on computer hardware.

Probably could use a reformat/reinstall, plus some RAM. I have a Pentium--non-D--on a GC-MS; the Pentium D died and was replaced with a regular Pentium (I think with HT).

So if we take these CPU World benchmarks as gospel, that would mean that Moore's law has been broken for the past 14 years and no one ever noticed.

Now compare that to what I tested, where the numbers show Moore's law has not been broken.

I'm going to be that guy and point out that Moore's law has nothing to do with either clock frequencies or computational performance, so it's quite possible for Moore's law to hold true without computers getting any faster at all.

We can fit far more transistors on a die, but improvements in single-threaded performance haven't exactly set the world on fire recently. Of course, if your applications are multi-threaded then those extra cores being crammed onto a chip will help massively, but otherwise, speed increases are dismal. Haswell is going to offer something like a 10% improvement over Ivy Bridge, which is hardly impressive.
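To make the distinction concrete, here's a rough sketch (purely illustrative assumptions: transistor budgets doubling every two years per Moore's law, versus roughly 10% single-threaded gains per yearly generation):

Code:

# Illustrative only: Moore's-law transistor growth vs. an assumed ~10%
# single-threaded improvement per yearly CPU generation.
years = 8
transistor_growth = 2 ** (years / 2)   # ~16x more transistors on the die
single_thread_gain = 1.10 ** years     # ~2.1x single-threaded performance
print(f"after {years} years: ~{transistor_growth:.0f}x transistors, "
      f"~{single_thread_gain:.1f}x single-thread speed")
# The extra transistors mostly become more cores and cache, which only
# parallel-friendly workloads can turn into proportional speedups.

Moore's law can keep humming along while single-threaded performance crawls; the two simply aren't the same thing.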