I didn't even buy the wireless router I'm using; it came free with the Internet hookup. As for the speed, my laptop came with 802.11n, like every other laptop sold in the last two years.

I love how you keep using YOURSELF as an example of average consumers.

I'm not saying I'm average, but the service I bought is readily available where I live. As more and more people use streaming services like Netflix the popularity of such offerings will go up.

I know lots of people that download all the TV series/movies they watch. Streaming puts higher demands on bandwidth than downloading, particularly if you want to do other things at the same time without interference.

The availability of higher bandwidth will, over time, lead to that bandwidth being utilized by more and more people. I can get 100 Mbit for $10 more per month; I didn't bother because I'm not even sure how that would benefit me right now, but it was tempting to bump up to 250 Mbit as it's not that much more expensive and bound to be awesome. Maybe if I had a house full of media-consuming teenagers.

Not everyone is going to utilize more than 10, or 20 Mbit internet connections, but you're only justifying your own circumstances if you think people can't, or that only outliers will.

Of course outliers exist. I clearly said so in the past, and they actually help drive it for everyone else.

And here is the thing--even you couldn't justify to yourself getting double the bandwidth for only $10 more per month (what is that--25% extra?). On top of that, you admit you are nothing like the normal consumer, with way higher demands and expectations. So if you couldn't justify it, how would normal consumers?

Difference is, I may not know what to do with that much bandwidth today, but I expect it will all get utilized in the future.

As for the US being a unique snowflake, bull fucking shit, the only thing unique is that you allow companies to profiteer without actually providing service. The US spends huge amounts of taxpayer money to improve Internet speeds via grants and tax breaks, and all you have to show for it is companies with higher profit margins and shit broadband speeds. You have private companies challenging the right of a community to set up its own network if even a single person in that district could sign up for a commercial service. They repeatedly game the system to not provide services and to artificially restrict where, when, and how they're used.

I can get better service in the middle of nowhere in Canada than many people in large metropolitan areas in the US.

For $120/month I can sign up for 250 Mbit in several Canadian cities; I can get 10-20 Mbit well into farm country, and 2-3+ Mbit virtually everywhere. (Churchill is one of the only exceptions, but then there aren't any roads to Churchill for over half the year either.)

Seems like you just agreed with me that business/politics is a hurdle in the US that other places don't seem to have. You described some of the outcomes of that. Maybe that will change, but I am pessimistic about that situation changing much.

The rest of the world isn't going to sit on their collective asses to wait for the US to smarten up. The US might be the largest producer of content people want, but we're more than happy to consume it without regard to your bandwidth limitations.

You mean like the ones we have on our mobile phones? There's only so much usable spectrum. If you think that's going to increase regularly, I suggest more reading on the subject, including the ability to get more towers (without towers, further spectrum reuse isn't an option, if it's even possible). The truth is, the "mobile" bandwidth future is visible now -- WiFi hotspots. I've seen articles in the WSJ with some regularity on this (and, I think, the Times too).

Radio physics is simply a real barrier here.

Meanwhile, consumers still strongly prefer the cheapest bandwidth available. So this is a "push" discussion and not a "pull" discussion. At one time, it was a "pull" discussion where consumers were driving higher speeds. It just isn't today; it's something the industry "pushes", and mobile actually confirms it further, because the growth is in the place where effective hard caps exist.

The popularity of mobile internet access is impressive when you think about how expensive it is per megabit and how spotty the performance is on even the best carriers.

That Verizon bill is the thing I hate most every month. I want nothing more than to figure out a way to give up the mobile addiction.

Also, as for bandwidth in Canada: ViaSat and its distributors make lots and lots and lots of money from Canadians who have no choice but to use satellite internet services. Services that are vastly improved over 10 years ago, but still not ideal.

I actually think the sweet spot for now and the foreseeable future on bandwidth is around 15-20 Mbps.

And anyone who is getting acceptable performance out of their WiFi consistently is lucky. I live in a single-family wood-construction house and I can't get good performance from my N300 router to my N150 PCIe card from 20' away. If I owned this house, the walls would be down and Cat6 would be going in.

Everything above a 10 Mbps home network is, for the average user, pretty much unnoticeable*.

His fundamental mistake is failing to understand the sort of push-pull relationship that exists between technological capability and use cases that exploit it, and the effects that process produces over significant time periods.

Connections over 10 Mbps aren't massively useful to consumers not because there is nothing of use to consumers that could be done with much faster connections, but merely because the products and services that would significantly benefit from such connections aren't, for the most part, on the market yet. Why aren't they on the market yet? Because there aren't enough users with those faster connections to make them viable.

What EH2 is doing is like arguing, in 1993, that CPUs 100x as fast would be useless to consumers because none of 1993's mainstream consumer software required anything like that. Of course it didn't. You don't develop products that require computing resources your customers don't have. Yet here we are 20 years later, in a world where even our implementations of 'trivial' tasks that existed in 1993 (like word processing) could never hope to run on 1993's hardware, to say nothing of all the new consumer use cases that have been invented since (e.g. I'm pretty sure a 1993 PC couldn't actually decode an MP3 in real time, never mind an H.264 video stream).

Echohead2 wrote:

You know, those consumers who are on a 100 Mbps network inside their house at best, and a whole fuckload are slower than that with wireless?

It's not that uncommon for decent consumer wireless gear to have real-world speeds over 100 Mbps these days. And the assertion that most consumers are on 100 Mbps (wired) ethernet "at best" is a little odd. Gigabit ethernet has been standard on pretty much everything except bottom-of-the-barrel PCs for some years now, and per-port switch costs (with consumer-class switches) are in the range of $3 to $5. I would expect that many consumers who bother with wired networking do in fact have gigabit. Most consumers likely don't bother with wired networking, however.

I normally wouldn't bother nitpicking about this, but it's another example of how your expectations for computing seem to be stuck in 2006 or so.

What EH2 is doing is like arguing, in 1993, that CPUs 100x as fast would be useless to consumers because none of 1993's mainstream consumer software required anything like that. Of course it didn't. You don't develop products that require computing resources your customers don't have. Yet here we are 20 years later, in a world where even our implementations of 'trivial' tasks that existed in 1993 (like word processing) could never hope to run on 1993's hardware, to say nothing of all the new consumer use cases that have been invented since (e.g. I'm pretty sure a 1993 PC couldn't actually decode an MP3 in real time, never mind an H.264 video stream).

It might be like that--except neither I, nor anyone else, was making those statements in 1993. Why? Because there were clear and obvious needs and use cases that could readily be pointed to that would need more CPU, more RAM, more video, more bandwidth, more HDD space. EASILY. People were worried about it, focused on it, etc. That simply isn't the case today. If you asked me what I would do with a 10x more powerful CPU in 1993 I could have given you a laundry list of things. Today... not so much. Same for RAM, HDD space, and bandwidth. If you offered me today a computer that was a dual Xeon 6-core (12 cores total) with 64GB of RAM, 20TB of HDD space, dual kick-ass video cards, and 1000 Mbps internet for free, I am not sure I would bother (assuming I couldn't sell the hardware). It wouldn't really get me anything over what I have now. Oh sure, it would be way faster, way more storage, etc. etc.

This is the part you are totally messed up on. The whole POINT is that it is a recent change. 10 years ago I could have done the same thing, a nice list of things. Now...meh, a couple of maybes, or "nice to haves" or something. Nothing "yep, this is a need" type thing.

Quote:

It's not that uncommon for decent consumer wireless gear to have real-world speeds over 100 Mbps these days.

And go and find them at your non-techie friends' houses. Good luck with that.

Quote:

And the assertion that most consumers are on 100 Mbps (wired) ethernet "at best" is a little odd.

It isn't odd--it is reality.

Quote:

Gigabit ethernet has been standard on pretty much everything except bottom-of-the-barrel PCs for some years now, and per-port switch costs (with consumer-class switches) are in the range of $3 to $5. I would expect that many consumers who bother with wired networking do in fact have gigabit. Most consumers likely don't bother with wired networking, however.

Which shows how little you know about regular people (other than that most don't have wired). But I can assure you that most who are wired are on 10/100 and not 1000. Their computer might have it, but their switch or router probably doesn't. Why? Because when they go to the store and see a $40 router with 100 Mbps ports while the 1000 Mbps one is $60... guess which they get? Or the router they get from their ISP is likely 100 Mbps--because the company will save that money.

Quote:

I normally wouldn't bother nitpicking about this, but it's another example of how your expectations for computing seem to be stuck in 2006 or so.

Which is exactly where consumers are! You can't seem to look outside of your own situation and look at regular joes and see how they do things. It is silly to think they are anywhere near as advanced as you imagine. It is absurd. And why would they have gigabit in their house? What does it get them? What are they possibly moving around their house that would benefit from it? Hell, most consumers are not even using their internal network--the sole purpose is to pipe stuff to the internet and back, and maybe printing.

I can think of a few uses, smart homes and digital TV being the two main ones. You're also discounting that most new builds have the network already in place, especially in high-rise buildings in cities. So the consumer doesn't need to do anything technical on their part.

Great--so new builds. That doesn't change the older stuff. Consumers are notoriously dumb about this stuff. The whole "blinking 12:00" syndrome. And to think that those same people who couldn't (or wouldn't) set their VCR clocks are pushing gigabit in their house and rocking a quad-core is just laughably absurd.

It might be like that--except neither I, nor anyone else, was making those statements in 1993. Why? Because there were clear and obvious needs and use cases that could readily be pointed to that would need more CPU, more RAM, more video, more bandwidth, more HDD space. EASILY.

YouTube, to pick one example, wasn't especially obvious in 1993. "Digital video" in general was perhaps obvious in 1993. You have been offered several examples of similarly broad categories of software that will drive future demand, but you have rejected them all, either saying they don't count because they're merely speculative, or saying they don't count because they're "already here" (something that was technically true of "digital video" in 1993 as well).

You have also ignored extensive argument to the effect that you don't even need new use cases to see significant increases in computational requirements over long periods of time — if you look at things like word processing or web browsing, even though these existed a couple of decades ago, modern software to perform these tasks would not remotely run on mid-range '90s hardware.

Echohead2 wrote:

If you offered me today a computer that was a dual Xeon 6-core (12 cores total) with 64GB of RAM, 20TB of HDD space, dual kick-ass video cards, and 1000 Mbps internet for free, I am not sure I would bother (assuming I couldn't sell the hardware). It wouldn't really get me anything over what I have now.

Today's mainstream software is built for today's mainstream systems, not today's high-end systems. This simply does not demonstrate that it's not possible to build software that exploits additional computing resources in ways that benefit normal users. Nor is it new. A major point of the post you are replying to was that in every era the mainstream software was designed for the mainstream hardware.

Again, what you're doing is a bit like looking at a mid-90s SGI workstation (when that was current hardware) and saying "This would be useless to me as a consumer. There's no consumer software that uses hardware-accelerated 3D graphics." That was true. Until hardware-accelerated 3D graphics got cheap, and showed up on lots of consumer systems, and turned video games into a bigger industry than Hollywood movies. It didn't stop there either — those graphics processing resources are now regularly used to improve the speed and quality of routine UI interaction, and seem likely to be pressed into service for other processing tasks, like image and speech recognition.

I've got 80/20 Mbps broadband here at home and whilst I always have a thirst for more bandwidth, my girlfriend wasn't bothered. However, going from a 12 Mbps download to 40 then 80 has proven quite an eye opener for her. The simple fact that downloads are ridiculously quick is something you don't appreciate until you have them. On my end, I can buy a game on Steam that's a 5GB download and be playing it 10 minutes later. There's never any worry about swamping the connection because short of running a torrent at well above the default settings, it's not going to happen to any noticeable extent.
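A rough back-of-the-envelope check of that Steam example (assuming the link actually sustains the full 80 Mbps and ignoring protocol overhead, so real downloads take a bit longer):

Code:

# Hypothetical numbers matching the example above.
game_size_gb = 5          # download size in gigabytes
link_mbps = 80            # sustained download speed in megabits per second

size_megabits = game_size_gb * 8 * 1000    # 5 GB is roughly 40,000 megabits
seconds = size_megabits / link_mbps        # about 500 seconds
print(f"~{seconds / 60:.1f} minutes")      # ~8.3 minutes, consistent with "playing it 10 minutes later"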

Quite frankly, this line of discussion is hilarious considering how many people in the UK whine about broadband speeds.

You're also discounting that most new builds have the network already in place, especially in high-rise buildings in cities.

It's reasonable to discount it, because (IME at least) it isn't really happening.

Until recently I lived in a newly constructed apartment in Westchester, and all it had was the CATV feed, with no internal CAT-anything whatever. We actually had trouble getting the wireless signal throughout the apartment.

My current home in the Phoenix area was built in the mid 2000s and it, too, has no CAT cable in it at all; just cable TV cable. Moreover, the wiring is such that the internet part of the signal goes to one chosen location.

Like most consumers, we rely on wireless for our home network. My main PC has direct connect and my wife's laptop does, when she remembers to hook it up.

My phone and our tablets are all WiFi. That's the norm and will be for the foreseeable future.

People are into mobility (as you may have noticed in other contexts). They seem quite willing to give up gobs of bandwidth to the actual device to get even a little mobility.

In both places, I did a decent amount of "shopping" before I settled on a place and I don't recall hard line coming up as a conversation point. Nobody really cares; it isn't a selling point, that's for sure.

If you asked me what I would do with a 10x more powerful CPU in 1993 I could have given you a laundry list of things.

Please provide this list from circa 1993; I think it would help your argument.

Seriously?

Better bandwidth for everything. Surfing, photos, videos were horrible (and not just surfing--FTP, etc.--and viewing them was almost as bad, oh, and the storage!), not to mention they were low res. Just running the OS. Running a decently full spreadsheet would slow the computer down dramatically. HDD space was a constant fight: backups to various media, uninstalling unused software to reclaim storage, never doing full installs because they took too much space.

It might be like that--except neither I, nor anyone else, was making those statements in 1993. Why? Because there were clear and obvious needs and use cases that could readily be pointed to that would need more CPU, more RAM, more video, more bandwidth, more HDD space. EASILY.

YouTube, to pick one example, wasn't especially obvious in 1993. "Digital video" in general was perhaps obvious in 1993. You have been offered several examples of similarly broad categories of software that will drive future demand, but you have rejected them all, either saying they don't count because they're merely speculative, or saying they don't count because they're "already here" (something that was technically true of "digital video" in 1993 as well).

Yes digital video was obvious. You have offered NOTHING close to "digital video" as a driver of performance needs. Digital video was obvious and how it would happen was obvious and the end goal was obvious. You have listed HUGELY speculative things like "AI"... that isn't nearly as obvious, and neither is its end goal.

Quote:

modern software to perform these tasks would not remotely run on mid-range '90s hardware.

Talk about a non sequitur! Who said anything about running stuff from today on mid-range '90s hardware? NO ONE. This is some bullshit you JUST made up.

Quote:

Today's mainstream software is built for today's mainstream systems, not today's high-end systems

And that same software will run acceptably on 5-6 year old mid-range hardware!

Quote:

Again, what you're doing is a bit like looking at a mid-90s SGI workstation (when that was current hardware) and saying "This would be useless to me as a consumer. There's no consumer software that uses hardware-accelerated 3D graphics."

Nope, it is nothing like that. I used a mid-90s SGI workstation and would have loved to have its horsepower at my house. I didn't have it. And again--it is nothing like that, because at the time I would have liked to have it. That is PRECISELY the difference--now, I have no need for a more powerful computer. Remember, I just said you could offer me a 12-core machine (two six-core chips) with 64GB RAM, 20TB of HDDs, twin super GPUs, and 1000 Mbps internet all for free and I probably wouldn't bother. It would get me nothing over what I have now and mean I have to set it up and move things over, and that would be work with no reward.

Quote:

and seem likely to be pressed into service for other processing tasks, like image and speech recognition.

You keep trying to bring up speech recognition while ignoring that it has been readily available for a decade and in no way stresses even 5 year old CPUs, much less one from today.

I've got 80/20 Mbps broadband here at home and whilst I always have a thirst for more bandwidth, my girlfriend wasn't bothered. However, going from a 12 Mbps download to 40 then 80 has proven quite an eye opener for her. The simple fact that downloads are ridiculously quick is something you don't appreciate until you have them. On my end, I can buy a game on Steam that's a 5GB download and be playing it 10 minutes later. There's never any worry about swamping the connection because short of running a torrent at well above the default settings, it's not going to happen to any noticeable extent.

Quite frankly, this line of discussion is hilarious considering how many people in the UK whine about broadband speeds.

Feel free to discuss how often Joe Sixpack downloads anything that size.

Yes digital video was obvious. You have offered NOTHING close to "digital video" as a driver of performance needs. Digital video was obvious and how it would happen was obvious and the end goal was obvious.

The supposed obviousness of the specifics is pure hindsight bias on your part.

Echohead2 wrote:

You have listed HUGELY speculative things like "AI"... that isn't nearly as obvious, and neither is its end goal.

You have been offered a number of use cases much more specific than "AI".

Echohead2 wrote:

Talk about a non sequitur! Who said anything about running stuff from today on mid-range '90s hardware? NO ONE. This is some bullshit you JUST made up.

Your argument relies on a particular view of the evolution of consumer computing capabilities that is entirely undermined by looking at long-term trends, so I understand why you prefer not to do that.

Echohead2 wrote:

And that same software will run acceptably on 5-6 year old mid-range hardware!

Some of it. Because not every user upgrades every year, and some users buy low-end systems, "mainstream systems" isn't synonymous with "this year's mid-range hardware".

Echohead2 wrote:

Nope, it is nothing like that. I used a mid-90s SGI workstation and would have loved to have its horsepower at my house.

It's impossible to take statements like this from you at face value.

Echohead2 wrote:

You keep trying to bring up speech recognition while ignoring that it has been readily available for a decade and in no way stresses even 5 year old CPUs, much less one from today.

Today's speech recognition is still extremely primitive compared with the obvious target — taking dictation as well as a competent human.

Your answer gives away the game here. You're simply looking at the existing capabilities of an existing product and saying "Well, we've got enough CPU power for that, so why would we need more?" You can't seem to imagine that we might ever figure out how to implement new capabilities that require more CPU power, despite the fact that we've been doing that for decades.

Yes digital video was obvious. You have offered NOTHING close to "digital video" as a driver of performance needs. Digital video was obvious and how it would happen was obvious and the end goal was obvious.

The supposed obviousness of the specifics is pure hindsight bias on your part.

No it isn't! I distinctly remember sitting in the lab with my fellow students lamenting the issues and how great it would be when we had better photos, video, music, HDDs, RAM, CPUs, etc. All of that was a constant. Just because you couldn't see the obvious future at the time is irrelevant. Computer magazines, and popular mags, were also talking about such things. And you seem to completely deny the whole HDD shuffle of worrying about full hard drives. And system RAM issues. And all the other things.

Quote:

You have been offered a number of use cases much more specific than "AI".

The only ones worth mentioning are improved video like 4K or 8K. The other ones of note were doable with today's hardware.

Quote:

Your argument relies on a particular view of the evolution of consumer computing capabilities that is entirely undermined by looking at long-term trends, so I understand why you prefer not to do that.

Actually it is EXACTLY the long-term trends that I am talking about, and how they have CHANGED. The long-term trends are no longer happening! That is the entire point: those trends are changing or have already changed.

Quote:

It's impossible to take statements like this from you at face value.

Take it or don't. We had one in the lab and it was great. I would have loved to have one at my house. It was much better than mine and I could use that power.

Quote:

Today's speech recognition is still extremely primitive compared with the obvious target — taking dictation as well as a competent human.

Which isn't a lack of computer performance, but rather programming. There is NOTHING to suggest that such a thing can't be done with modernish CPUs, just that the algorithms aren't there. You seem to think that the only thing holding it back is CPU cycles. That is NOT the case.

Quote:

Your answer gives away the game here. You're simply looking at the existing capabilities of an existing product and saying "Well, we've got enough CPU power for that, so why would we need more?" You can't seem to imagine that we might ever figure out how to implement new capabilities that require more CPU power, despite the fact that we've been doing that for decades.

Wrong. And yes, it was true for decades and all of a sudden it has stalled. BIG TIME. That is the change. You want to Pollyanna it away with "it was growing so it will continue" without actually looking around and seeing the differences.

No it isn't! I distinctly remember sitting in the lab with my fellow students lamenting the issues and how great it would be when we had better photos, video, music, HDDs, RAM, CPUs, etc. All of that was a constant. Just because you couldn't see the obvious future at the time is irrelevant. Computer magazines, and popular mags, were also talking about such things. And you seem to completely deny the whole HDD shuffle of worrying about full hard drives. And system RAM issues. And all the other things.

You and your "fellow students" did not sit around and think up YouTube, Netflix, Facebook, camera phones, etc. at any level of conceptual detail. You certainly didn't get into any significant technical detail. At best you saw broad categories of technology like "digital video", and perhaps wildly speculated (almost certainly incorrectly) how such technologies would manifest in commercial products. We can do that now with technologies like machine vision, augmented reality, and natural language processing, and it's no less compelling than it was 20 years ago.

Echohead2 wrote:

The only ones worth mentioning are improved video like 4K or 8K. The other ones of note were doable with today's hardware.

Yeah, again, like "digital video" was doable with 1993's hardware. QuickTime shipped in 1991, after all.

Echohead2 wrote:

Which isn't a lack of computer performance, but rather programming. There is NOTHING to suggest that such a thing can't be done with modernish CPUs, just that the algorithms aren't there.

While this is not literally impossible, it's ludicrously unlikely. Recall the link I provided several times earlier in the thread to Google's image recognition research, which uses a 16,000 node cluster that was trained how to recognize images of cats. The article notes that the human visual cortex has a million times the number of neurons and synapses as this cluster. Now, maybe the brain is woefully inefficient at this task, such that a very clever algorithm could perform similarly well on your six year old mid-range PC. But I doubt it.
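A rough sense of the scale gap being described (the ~1 billion connection count is an assumed, commonly reported figure for that Google network; the million-times multiplier is the article's; both are order-of-magnitude numbers only):

Code:

# Order-of-magnitude comparison only; the 1e9 connection count is an assumption
# about the size of Google's network, and the 1e6 multiplier comes from the article.
cluster_connections = 1e9
cortex_multiplier = 1e6

visual_cortex_scale = cluster_connections * cortex_multiplier
print(f"cluster: {cluster_connections:.0e} connections, "
      f"visual cortex: ~{visual_cortex_scale:.0e} synapses")
# Six orders of magnitude between a 16,000-node cluster and one human visual cortex.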

Echohead2 wrote:

You seem to think that the only thing holding it back is CPU cycles.

I have never said that, although I do believe development of the algorithms is being held back by current hardware limitations. It's hard to develop and test algorithms that ideally want to be run on much faster hardware. There's also no commercial incentive to come up with consumer use cases for algorithms that don't run on hardware consumers can reasonably own or access.

Take something like H.264, the key enabling technology behind web video. The first version of the H.264 standard was completed in 2003. Why not 1980? Hell, why not 1850? Compression algorithms are just math, right? Developing H.264 didn't require any specific level of computational performance to be widely available. I guess it's just coincidence that H.264 showed up just about as computers fast enough to make its use practical started to ship? Hmm, or maybe there is some relationship between what hardware is available and what algorithms get developed.

Echohead2 wrote:

You want to Pollyanna it away with "it was growing so it will continue" without actually looking around and seeing the differences.

Why should demand for CPU power happen to stall out at the same time as demand for RAM, demand for fixed storage, and demand for bandwidth? Why, in particular, should demand for these things happen to stall out at about the same time despite the fact that they have been advancing at unequal rates? There does not appear to be any single external factor that would account for this. On the other hand, skepticism on your part with respect to continued technological advancement provides a very parsimonious explanation for why you believe demand for all of these things is stalling at the same time.

I used a mid-90s SGI workstation and would have loved to have its horsepower at my house. I didn't have it. And again--it is nothing like that, because at the time I would have liked to have it. That is PRECISELY the difference--now, I have no need for a more powerful computer. Remember, I just said you could offer me a 12-core machine (two six-core chips) with 64GB RAM, 20TB of HDDs, twin super GPUs, and 1000 Mbps internet all for free and I probably wouldn't bother. It would get me nothing over what I have now and mean I have to set it up and move things over, and that would be work with no reward.

I think it has become even more obvious that you are using your 90's self as a model of the 90's average consumer. Problem is you were a geek back then, just like the rest of us. Normal people didn't know or care what SGI workstations were, they didn't know or care about hard drives, they didn't know or care about modem speeds nor spreadsheet performance. Your geek past-self really cared about the hardware, but now you don't. You are projecting your own motivations onto the market, when the market motivations have never actually changed. Just like the kid who loves cars in high school, grows up and says, "nobody cares about hotrods anymore." The average consumer never cared then and doesn't care now. He just buys a tool for a job.

No it isn't! I distinctly remember sitting in the lab with my fellow students lamenting the issues and how great it would be when we had better photos, video, music, HDDs, RAM, CPUs, etc. All of that was a constant. Just because you couldn't see the obvious future at the time is irrelevant. Computer magazines, and popular mags, were also talking about such things. And you seem to completely deny the whole HDD shuffle of worrying about full hard drives. And system RAM issues. And all the other things.

You and your "fellow students" did not sit around and think up YouTube, Netflix, Facebook, camera phones, etc. at any level of conceptual detail.

Of course not. However we did think about photos and video and transmitting them and viewing them on computers. You do know that it was happening even back then, right? And that the resolutions sucked and the time sucked and finding the storage for it sucked. Right? Or are you so out of it that you didn't know it was happening back then? A lot of it was via FTP.

Quote:

At best you saw broad categories of technology like "digital video", and perhaps wildly speculated (almost certainly incorrectly) how such technologies would manifest in commercial products.

Yes, digital video--you know, like the digital video we were watching at horrible resolutions? That we wanted higher? And even talked about the issues involved (bandwidth, file size--which we overestimated based on the compression algorithms of the time), etc.

Quote:

Yeah, again, like "digital video" was doable with 1993's hardware. QuickTime shipped in 1991, after all.

Of course it was. Why would you even suggest that it wasn't? Maybe you weren't into computers then and didn't know anything about it. But it absolutely was there.

Quote:

While this is not literally impossible, it's ludicrously unlikely. Recall the link I provided several times earlier in the thread to Google's image recognition research, which uses a 16,000 node cluster that was trained how to recognize images of cats.

And do you not remember that it didn't take that level of CPU power for a single user's photos? For fuck's sake, would you please READ your own fucking links:

Quote:

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

Feel free to tell me how many digital images people have on their personal computers. Is it 10 million? No? Shocking! And that is just the initial work. Guess what--they will take the knowledge gained and put it into a simple program that you can download and run on any computer to find cats in your images. Hell, Picasa and others already have facial recognition. Another has image recognition of locations (I remember reading about it). Again, this isn't the science fiction you make it out to be.

Quote:

I have never said that, although I do believe development of the algorithms is being held back by current hardware limitations.

And you just explained that you have no idea about algorithms.

Quote:

It's hard to develop and test algorithms that ideally want to be run on much faster hardware. There's also no commercial incentive to come up with consumer use cases for algorithms that don't run on hardware consumers can reasonably own or access.

Your lack of knowledge on this subject is SHOCKING.

Quote:

Take something like H.264, the key enabling technology behind web video. The first version of the H.264 standard was completed in 2003. Why not 1980? Hell, why not 1850? Compression algorithms are just math, right? Developing H.264 didn't require any specific level of computational performance to be widely available. I guess it's just coincidence that H.264 showed up just about as computers fast enough to make its use practical started to ship? Hmm, or maybe there is some relationship between what hardware is available and what algorithms get developed.

wow.

Quote:

Why should demand for CPU power happen to stall out at the same time as demand for RAM, demand for fixed storage, and demand for bandwidth?

They didn't. Each was a bottleneck and each ceased being a bottleneck at different times. And demand for all of them isn't done for everyone; each person reached their level of need at a different time. And I have said that I don't think 10 Mbps is the end for consumers. As some have mentioned, streaming multiple HD video streams would necessitate about 30 Mbps. Of course, that assumes the ISP lets people use that many gigabytes of data per month. More and more are capping such things.
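For what it's worth, here is roughly where a figure like 30 Mbps comes from; the per-stream bitrate and the number of simultaneous streams are assumptions (typical-ish for compressed 1080p), not anyone's published numbers:

Code:

# Assumed numbers: ~8 Mbps for one compressed 1080p stream, three screens
# going at once, plus some slack for browsing and protocol overhead.
hd_stream_mbps = 8
concurrent_streams = 3
headroom = 1.25

needed_mbps = hd_stream_mbps * concurrent_streams * headroom
print(f"~{needed_mbps:.0f} Mbps")   # ~30 Mbps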

I used a mid-90s SGI workstation and would have loved to have its horsepower at my house. I didn't have it. And again--it is nothing like that, because at the time I would have liked to have it. That is PRECISELY the difference--now, I have no need for a more powerful computer. Remember, I just said you could offer me a 12-core machine (two six-core chips) with 64GB RAM, 20TB of HDDs, twin super GPUs, and 1000 Mbps internet all for free and I probably wouldn't bother. It would get me nothing over what I have now and mean I have to set it up and move things over, and that would be work with no reward.

I think it has become even more obvious that you are using your 90's self as a model of the 90's average consumer. Problem is you were a geek back then, just like the rest of us. Normal people didn't know or care what SGI workstations were, they didn't know or care about hard drives, they didn't know or care about modem speeds nor spreadsheet performance. Your geek past-self really cared about the hardware, but now you don't. You are projecting your own motivations onto the market, when the market motivations have never actually changed. Just like the kid who loves cars in high school, grows up and says, "nobody cares about hotrods anymore." The average consumer never cared then and doesn't care now. He just buys a tool for a job.

Absolutely not. I have no delusions I was a "normal consumer" then or now. I did know average consumers back then. And they always had the same kinds of problems for me to fix for them. Their HDD was full. Their computer was slow (needed more RAM--they didn't know why, just that it was). They wanted faster internet. They wanted it to be faster in general. They wanted better video.

I have asked multiple times and no one wants to be truthful about it. When was the last time you had a conversation with your normal computer-using friends where they ran out of HDD space, needed more RAM, etc.? I mentioned how it was a big deal when MS came out with disk compression built into the OS. Remember that? Do you use it now? There was a call for people to look at their own hard drives. ZnU declined to participate because his answer would condemn him.

Of course not. However we did think about photos and video and transmitting them and viewing them on computers. You do know that it was happening even back then, right?

Wait, so why did it create demand for more bandwidth and computational resources? I don't understand. It was possible with current hardware. Isn't that the justification you've repeatedly used to dismiss use cases I've offered up for more powerful hardware?

Echohead2 wrote:

And do you not remember that it didn't take that level of CPU power for a single user's photos? For fuck's sake, would you please READ your own fucking links:

Quote:

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

You've misunderstood the research. You seem to be imagining that they had some algorithm that could perform at the same quality level, with a smaller dataset, on a smaller system (say a single six year old mid-range desktop), and they simply needed all those processors because they wanted to work with a large data set. But that's not it at all. That would make little sense as an AI research project anyway.

The cluster formed a single large neural network, which had to examine millions of images to learn how to recognize images. The whole thing only works at scale. And even then it didn't deliver anything remotely like human competency.

In any event, this whole line of argument on your part is a red herring unless you are actually arguing that current consumer computing hardware will be able to perform human-level image recognition once we hit on the right algorithm. Is that really your position?

Echohead2 wrote:

Hell, Picasa and others already have facial recognition. Another has image recognition of locations (I remember reading about it). Again, this isn't the science fiction you make it out to be.

It's extremely primitive — it's almost precisely analogous to digital video in the early '90s. As far as I know, everything available in commercial photo editing software is using the same simple, hand-coded technique — use simple analysis of contrast to locate major facial features, and then measure the distance between them. That works quite well for dead-on, well-lit photos — one could imagine it even outperforming humans in some instances. But the approach doesn't scale. Feed it a photo upside down, and it's helpless. Give it a photo with poor lighting, or taken at too much of an angle, and it hasn't got a chance. And these simple algorithms are also known for producing false positives, including hilarious ones like animals or inanimate objects.
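To make the "simple, hand-coded technique" concrete, here is a minimal sketch of that style of approach using OpenCV's stock Haar cascades (which key on local contrast patterns). The input filename and the normalized eye-distance "signature" are illustrative assumptions, not how any particular photo app actually does it:

Code:

import cv2

# Load OpenCV's bundled Haar cascades for faces and eyes (contrast-pattern based).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("party_photo.jpg")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face)
    if len(eyes) >= 2:
        # Crude geometric "signature": distance between the first two detected
        # eye centres, normalised by face width.
        (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = eyes[0], eyes[1]
        dx = (ex1 + ew1 / 2) - (ex2 + ew2 / 2)
        dy = (ey1 + eh1 / 2) - (ey2 + eh2 / 2)
        eye_distance = (dx * dx + dy * dy) ** 0.5 / w
        print(f"face at ({x},{y}), normalised eye distance {eye_distance:.2f}")
# Rotate the photo 180 degrees or shoot it at a steep angle and the cascades
# find nothing -- which is exactly the fragility described above.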

To move past this, you probably need to switch to far more computationally intensive machine learning techniques. And if you really want full human-level performance, you need to add something like general intelligence to the system as well. A human can look at a bunch of photos of a party and say "I can't see this guy's face in this photo, but I know it has to be Josh, because I know from that other photo taken five minutes before that that's what Josh was wearing". Ideally you want your photo editing software to do that — and that's how something like photo organization, which you're claiming is a solved problem, suddenly turns into something that probably requires computers several orders of magnitude faster than those currently available to consumers.

Echohead2 wrote:

Your lack of knowledge on this subject is SHOCKING.

Your lack of a substantive reply is SHOCKING. Actually, it's pretty much what I've come to expect. You're clearly attempting to pretend I'm unaware of the fact that it's theoretically possible to develop algorithms for hardware that doesn't exist, but I've acknowledged this repeatedly, and even gave specific examples of it weeks ago.

That point simply can't do the lifting you would need it to do in this argument. While algorithms can be developed for hardware that doesn't exist, it's harder, and there's vastly less incentive to do it. It's also very difficult to design things that will be useful in some future context that you can't fully predict. Again, imagine you're sitting there in 1993, attempting to design a video compression algorithm for the hardware available a decade hence. How practical is it to run large amounts of video through it to check how it performs in real-world conditions? How do you know what tradeoffs you should make between bandwidth and computational intensity, when you're not quite sure exactly what CPU and Internet performance will be like in a decade? You're probably not even sure what the relative cost of various operations will be on the CPUs you're eventually designing for.

Or take something like neural networks. Neural networks actually were invented decades before the hardware necessary to get them to perform non-trivial tasks became available. But as a consequence of that lack of hardware availability, we've never really understood exactly what their capabilities and limitations are, or how to best employ them to solve real-world problems. In fact, a lot of people became very skeptical of the whole approach — until the last few years, when it has become possible to build much larger networks, and suddenly useful things seem to be happening.

So yes, inventing algorithms you don't have the hardware to do anything useful with is hypothetically possible. Understanding future real-world use cases based on those algorithms with any level of specificity? Not so much. More generally, there's a reason why software developers use computers, and work by actually writing code, running it, and seeing what happens, rather than scribbling things on blackboards. Real world software development, of the kind that produces useful, compelling software, is to a large extent an exploratory process. There's a lot of "Let's try it this way, and see if we get useful results", especially on the cutting edge. You can't do that with code your hardware can't run.

Echohead2 wrote:

They didn't. Each was a bottleneck and each ceased being a bottleneck at different times.

Uh, isn't there a really good reason why Siri has to communicate with Apple's servers in the cloud and is not able to run locally on any system? The primitive AI needs access to tremendous resources to be 'smart'.

I think with the mass adoption of e-wallets, self-driving cars, telemedicine (e-medicine and medical robots), Google Glass, the rumored iWatch, and other wearable computing, etc., all of it is adding a little bit more to the computing required of networks, both in speed and raw power. The amount of information being created every day, every year, keeps doubling.

Actually, the biggest reason for Siri to be in the cloud is that Apple can update Siri however and whenever they see fit, and Siri is updated for everyone. They got movie and location services working for Canada with no updates necessary for the OS itself.

EH2's protests aside, it's almost trivial to come up with hilariously demanding use cases. Let's take augmented reality, for instance. Ideally you want your augmented reality system to build a 3D model of the space around you in real time, use visual data to pinpoint your exact location based on comparisons with online references, render photorealistic objects at retina-like resolutions (except on a display that fills your whole field of vision), possibly using data downloaded on the fly, and map them into the scene you're looking at. To provide interactivity, you also probably want the system to recognize your gestures and have a sophisticated general-purpose natural language interface.

And you want the visual part of this to occur with no noticeable latency (probably in under 10 or 20 ms), because it would drive you nuts if virtual objects failed to track instantly as you moved your head or respond instantly as you used gestures to manipulate them. That means the computing resources probably have to be local. Plus, you want to be able to fit hardware with these capabilities — which are beyond those of current high-end desktops — into the AR glasses themselves, or at the very least into a wearable device not much larger than a smartphone.
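To put the latency point in rough numbers (the display refresh rate and all per-stage timings below are illustrative assumptions, not measurements of any real system):

Code:

# Assumed 90 Hz display; every stage of the pipeline has to fit inside one frame.
DISPLAY_HZ = 90
frame_budget_ms = 1000 / DISPLAY_HZ      # ~11.1 ms per frame

stage_ms = {
    "camera capture + readout": 3.0,     # all stage timings are made-up placeholders
    "head/pose tracking": 2.0,
    "scene reconstruction update": 3.0,
    "render + composite to display": 2.5,
}

total_ms = sum(stage_ms.values())
print(f"budget {frame_budget_ms:.1f} ms, pipeline {total_ms:.1f} ms, "
      f"slack {frame_budget_ms - total_ms:.1f} ms")
# A single round trip to a remote server (typically tens of milliseconds) would
# blow the whole budget on its own, which is why the computation has to be local.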

And a capability like this wouldn't be some niche thing. If you had this technology, and it was reasonably affordable, it would be like the GUI — applicable to a huge range of use cases, for a huge number of users. Like the GUI, it would change the entire way people interacted with computing technology. Unlike the GUI, it could also significantly change the way people interacted with the physical world. Imagine how something like this could change social interaction (people's virtual social presence could be presented during face to face interaction), performing physical tasks (picture AR walkthroughs for assembling your Ikea furniture — or making sure you don't miss a step when inspecting a jet engine), visualizing data (you could project it out into 3D space and move around or through it, as if you had a science fiction style holographic projector), gaming (you could seamlessly mix real-world physical activity with interaction with virtual environments, thus uniting jocks and nerds), artistic and personal expression (you could virtually dress physical spaces — or yourself — to look completely different to anyone using AR glasses), etc.

But hey, current hardware is fast enough to run Picasa and play YouTube videos, so clearly consumers have all the computing power they'll ever need.

ZnU, fantastic misrepresentation of the position EH2 (and, with somewhat more reservation, I) hold, namely that there is a lack of:

Current use cases for the average consumer. Your (and others'!) examples really don't fit that criterion.

We can go back and forth about what the future will offer, but currently demand for higher bandwidth is obviously low... unless we are talking mobile. The consumer chose mobility over speed (as they did with CPUs). One of those who decries our position then complains that the market is developing much slower than he thought.

Which shows exactly that average bandwidth demand isn't growing extremely fast. Streaming services will create demand, but as it stands now those are still lacking in Europe (even though Northwestern Europe actually has superior infrastructure to the USA) and there is significant resistance to pay-per-view. In the USA big content itself is blocking and withholding, so even there, with the higher acceptance of pay-per-view, there isn't one platform with so much leverage that it will revolutionise the market (perhaps Netflix????).

So sure, I can think of hilariously demanding use cases. But I sure can't see those in the market right now, at least not for the average user who wants to surf porn and watch his favourite ball game.

ZnU, fantastic misrepresentation of the position EH2 (and, with somewhat more reservation, I) hold, namely that there is a lack of:

Current use cases for the average consumer. Your (and others'!) examples really don't fit that criterion.

It's kind of silly to argue over the positions of third parties, but EH2's argument goes far beyond what you're implying here. He claims that in the past it was easy to see consumer use cases that required lots of additional computing power, but today it's not. He has rejected examples of such use cases that have been offered up as being too vague. So I detailed one.

What I've detailed is the full, eventual implementation, as an illustration of just how far we'd have to go to meet the computational demands of the mature form of this use case. That doesn't mean there won't be useful lesser implementations in the interim. Indeed, such things are just now slated to start coming to market, with Project Glass, Oculus Rift, and others. I keep using this comparison, because it's so apt — augmented reality now is very much like digital video in the early '90s. We can see that this is a technology with huge potential, but with current hardware it's difficult to do more than "toy" implementations.

DC wrote:

So sure, I can think of hilariously demanding use cases. But I sure can't see those in the market right now, at least not for the average user who wants to surf porn and watch his favourite ball game.

Again, though, there's nothing new here. Mainstream consumers have pretty much always purchased products and services toward the middle of the market. At any given time, buying high-end hardware (or Internet service, for that matter) provides consumers with marginal benefit at substantially increased cost, because consumer software is designed to run acceptably with the resources available to the average consumer. It simply isn't the case that consumers buying mid-range rather than high-end hardware or services demonstrates a lack of long-term demand for more capacity.

ZnU, fantastic misrepresentation of the position EH2 (and, with somewhat more reservation, I) hold, namely that there is a lack of:

Current use cases for the average consumer. Your (and others'!) examples really don't fit that criterion.

It's kind of silly to argue over the positions of third parties, but EH2's argument goes far beyond what you're implying here. He claims that in the past it was easy to see consumer use cases that required lots of additional computing power, but today it's not. He has rejected examples of such use cases that have been offered up as being too vague. So I detailed one.

The criticism of your use case stands. What EH2 described were actual use cases; even I remember people complaining about those. What you are describing is an imagined use case, one that is rare even in university circles.

You can see the change even in adverts. In the '90s the PC specs were spelled out exactly, in big font: what processor, memory size and type, hard disk, display adapter, OS, etc., even power supply size. These days, ads for the general consumer? Often just the general processor family, memory size, OS. Anything else is in tiny print at the bottom of the page.

In a year or two, probably even memory size will be dropped, and all specs but processor family will be at the bottom of the ad in tiny print.

ZnU, if the need for computer power is so obvious, why the shift to mobile? Why is ARM so much more popular than the much more powerful i5 tablets? Are you now, with a straight face, trying to fit your use case with current market directions?

Nobody here will agree with you on this one. Computing power takes a back seat to mobility.

The criticism of your use case stands. What EH2 described were actual use cases; even I remember people complaining about those. What you are describing is an imagined use case, one that is rare even in university circles.

Consumer digital video wasn't much of an actual use case in the early '90s, and all you have to do is step back a couple more years for it to be every bit as speculative as augmented reality is now.

But it's trivial to provide examples of technologies presently at other stages of development/adoption. For instance, machine vision has very early, primitive commercial implementations, in the form of things like the After Effects rotobrush, the facial recognition features in photo editing software, etc. Speech recognition is a little further along, not nearly as good as we'd want it to be, but a credible solution to some problems. 3D is still further along, again not yet in its fully mature form (entirely convincing photorealistic real-time rendering), but nonetheless in a fairly sophisticated state, and the basis for a pretty large industry.

There are many technologies which could benefit from additional computational resources, in various stages of development and adoption. Just like it's always been.

Redo from start wrote:

You can see the change even in adverts. In the '90s, PC specs were spelled out exactly, in big font: processor, memory size and type, hard disk, display adapter, OS, etc., even power supply size. These days, ads for the general consumer? Often just the general processor family, memory size, and OS. Anything else is in tiny print at the bottom of the page.

In the early '90s, PC users were often tech enthusiasts. Now personal computers are entirely mainstream. You're simply seeing marketing respond to that shift.

DC wrote:

ZnU, if the need for computing power is so obvious, why the shift to mobile? Why is ARM so much more popular than the much more powerful i5 tablets? Are you now, with a straight face, trying to fit your use case into current market directions?

This simply doesn't have the implications you and EH2 want to claim it does. As I have noted at least a dozen times during this discussion, trading off one thing (such as performance) for another thing (such as mobility) within a given generation of devices simply does not indicate a permanent lack of demand for more of the thing that has been traded off. Again, this is like saying that if you buy an SUV rather than a compact car, thus trading off gas mileage for cargo space, this demonstrates you will be entirely indifferent to the future development of SUVs with better gas mileage.

In the long run, the diversification of computing devices into additional form factors will only open the door for more use cases, and, by extension, more computationally intensive use cases. Augmented reality wouldn't be of much interest without mobile computing; speech recognition has gained wider adoption because it's useful as an input method on mobile devices without real keyboards; gesture recognition has found its first major use case on a game console (with Kinect) rather than a traditional PC; etc.

Of course not. However, we did think about photos and video, and about transmitting them and viewing them on computers. You do know that this was happening even back then, right?

Wait, so why did it create demand for more bandwidth and computational resources? I don't understand. It was already possible with the hardware of the day. Isn't that the justification you've repeatedly used to dismiss the use cases I've offered up for more powerful hardware?

Because the resolution was low, and bandwidth was too slow even then for low-res images. We all wanted higher-res images and video. That required more storage, more CPU power, and higher bandwidth.
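To make that scaling concrete, here is a rough back-of-the-envelope sketch. The resolutions, bit depth, and link speeds below are illustrative assumptions, not figures anyone in this thread cited.

```python
# Back-of-the-envelope: how image resolution drives storage and transfer time.
# All numbers are illustrative assumptions: uncompressed 24-bit frames,
# a 56 kbit/s modem versus a 10 Mbit/s line.

def frame_bytes(width, height, bits_per_pixel=24):
    """Size of one uncompressed frame, in bytes."""
    return width * height * bits_per_pixel // 8

def transfer_seconds(num_bytes, link_bits_per_second):
    """Time to move num_bytes over a link of the given speed."""
    return num_bytes * 8 / link_bits_per_second

examples = {"early-'90s 320x240": (320, 240), "1080p 1920x1080": (1920, 1080)}
for name, (w, h) in examples.items():
    size = frame_bytes(w, h)
    print(f"{name}: {size / 1e6:.2f} MB uncompressed, "
          f"{transfer_seconds(size, 56_000):.0f} s over 56 kbit/s, "
          f"{transfer_seconds(size, 10_000_000):.1f} s over 10 Mbit/s")
```

Even before compression, one high-resolution frame carries roughly 27 times the data of an early-'90s frame, which is the pattern described above: each jump in resolution drags storage, CPU, and bandwidth along with it.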

Quote:

You've misunderstood the research. You seem to be imagining that they had some algorithm that could perform at the same quality level, with a smaller data set, on a smaller system (say, a single six-year-old mid-range desktop), and they simply needed all those processors because they wanted to work with a large data set. But that's not it at all. That would make little sense as an AI research project anyway.

I understood it perfectly. You are the one who failed to understand it. Do you think each person would have to have that much power to get the results? Or did they run it that way to DEVELOP the algorithm and then simply code it for CPUs?

Quote:

The cluster formed a single large neural network, which had to examine millions of images to learn how to recognize images. The whole thing only works at scale. And even then it didn't deliver anything remotely like human competency.
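To put rough numbers on "only works at scale", here is a small illustrative sketch. The parameter count, image count, and per-weight cost are assumptions chosen purely for illustration, not the actual figures from the research being argued about.

```python
# Illustrative sketch of why training a large neural network is a different
# scale of problem from running the finished model once. All numbers are
# assumptions for illustration, not figures from the project under discussion.

BYTES_PER_WEIGHT = 4  # assume 32-bit floats

def weight_memory_gb(num_parameters):
    """Memory just to hold the weights (ignores activations and gradients)."""
    return num_parameters * BYTES_PER_WEIGHT / 1e9

def training_flops(num_parameters, num_images, passes=1):
    """Very rough training cost: a multiply-add per weight per image,
    times ~3 to cover the forward and backward passes."""
    return 3 * 2 * num_parameters * num_images * passes

params = 1_000_000_000   # assume a billion-parameter network
images = 10_000_000      # assume ten million training images

print(f"weights alone: ~{weight_memory_gb(params):.0f} GB")
print(f"one training pass over the data: ~{training_flops(params, images):.1e} FLOPs")
print(f"one inference on a single image: ~{2 * params:.1e} FLOPs")
```

The sketch only illustrates the asymmetry the two posts above are arguing about: developing the model means touching every weight for every training image, while classifying a single new input is many orders of magnitude cheaper, though it still requires holding the full set of weights somewhere.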

Quote:

EH2's protests aside, it's almost trivial to come up with hilariously demanding use cases. Let's take augmented reality, for instance. Ideally you want your augmented reality system to build a 3D model of the space around you in real time, use visual data to pinpoint your exact location based on comparisons with online references, and render photorealistic objects at retina-like resolutions (except on a display that fills your whole field of vision), possibly using data downloaded on the fly, and map them into the scene you're looking at. To provide interactivity, you also probably want the system to recognize your gestures and have a sophisticated general-purpose natural language interface.

And you want the visual part of this to occur with no noticeable latency (probably in under 10 or 20 ms), because it would drive you nuts if virtual objects failed to track instantly as you moved your head or respond instantly as you used gestures to manipulate them. That means the computing resources probably have to be local. Plus, you want to be able to fit hardware with these capabilities — which are beyond those of current high-end desktops — into the AR glasses themselves, or at the very least into a wearable device not much larger than a smartphone.

And a capability like this wouldn't be some niche thing. If you had this technology, and it was reasonably affordable, it would be like the GUI — applicable to a huge range of use cases, for a huge number of users. Like the GUI, it would change the entire way people interacted with computing technology. Unlike the GUI, it could also significantly change the way people interacted with the physical world. Imagine how something like this could change social interaction (people's virtual social presence could be presented during face to face interaction), performing physical tasks (picture AR walkthroughs for assembling your Ikea furniture — or making sure you don't miss a step when inspecting a jet engine), visualizing data (you could project it out into 3D space and move around or through it, as if you had a science fiction style holographic projector), gaming (you could seamlessly mix real-world physical activity with interaction with virtual environments, thus uniting jocks and nerds), artistic and personal expression (you could virtually dress physical spaces — or yourself — to look completely different to anyone using AR glasses), etc.

But hey, current hardware is fast enough to run Picasa and play YouTube videos, so clearly consumers have all the computing power they'll ever need.

This is a nice diatribe that ignores the hardware in question, pretends that augmented reality is a must-have killer feature, and acts as if it will happen anytime in the near future, just as he describes.
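Rhetoric on both sides aside, the rendering load the quoted passage implies can at least be bounded with arithmetic. The display resolution, refresh rate, and per-pixel shading cost below are illustrative assumptions, not specs from any real AR device.

```python
# Rough lower bound on the rendering load in the quoted AR scenario.
# Resolution, refresh rate, and per-pixel shading cost are assumptions.

def pixels_per_second(width, height, hz):
    return width * height * hz

def shading_flops(width, height, hz, flops_per_pixel):
    """Shading only; ignores tracking, scene reconstruction, and recognition."""
    return pixels_per_second(width, height, hz) * flops_per_pixel

# Assume a roughly 4K per-eye display, refreshed fast enough to keep
# motion-to-photon latency in the 10-20 ms range the post mentions.
w, h, hz = 3840, 2160, 90
eyes = 2
flops_per_pixel = 1_000  # assumed cost of near-photorealistic shading

total = eyes * shading_flops(w, h, hz, flops_per_pixel)
print(f"pixels per second: {eyes * pixels_per_second(w, h, hz):.2e}")
print(f"shading alone: ~{total / 1e12:.1f} TFLOPS sustained")
```

Even with these deliberately modest assumptions, shading alone lands in the teraflop range before any of the tracking, reconstruction, or recognition work the passage describes; the sketch says nothing, of course, about when such hardware will fit into a pair of glasses.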