No one should ever say 'this party's dead' or 'played a dead bat' because it transgresses Funk's boundary of acceptable idiom?

... Of course I don't object to these phrases, because they actually fit the concept.

Neither is premature. And that's the issue. Prematurity.

Looking forward to you cleverly explaining why 'the PC market is dead' can't refer to (as Ed.M notes) a lack of vitality. Rather than everyone in the industry being wiped out by ebola, or some such.

And, why it's premature to observe this lack of vitality now, after four quarters of negative growth (excluding tablets and smartphones of course) and some spectacular hiccups amongst once-solid players.

Ed M. (not Ed.M) notes no such thing.

You are free to twist things to your self-serving purposes all you want, carrot. Have at it.

It's not dead, it's just maturing. PCs are transitioning to be more durable goods, is all. The rate of change for change's sake is slowing.

Instead of 50 different interfaces, we have USB, WiFi, and Bluetooth. USB 2.0 is still good enough for most peripherals, even where it's not optimal. PCs can take more RAM than ever before and RAM is cheaper than ever before, but software isn't growing its RAM needs, since more software is moving to the cloud and Windows is more RAM-efficient than previous versions.

CPU needs (or the lack thereof) have been mentioned over and over and over.

Edit for content: A year or so ago, I took a two week road trip, and I didn't take a laptop with me, just my smartphone and my Nook Color with CyanogenMod. I was able to do just about everything that I wanted to do with these two devices. The Nexus 7 that's since replaced the Nook Color is even more capable. Tablets and phones are slowly becoming the go-to devices, so people who don't need MOAR POWAH are able to get by with these devices.

PC sales have slowed because there's no reason to upgrade. The nearly two-year-old 12GB i7-2600 system that I paid less than $600 for kicks just as much ass today.

I have had a long discussion about this with some posters who want to daydream away that this isn't the case. The slowdown can be largely if not completely attributed to computers lasting longer between replacement.

I have had a long discussion about this with some posters who want to daydream away that this isn't the case.

You're clearly referring to me here, and depending on how you're interpreting wseaton's post, that's some pretty serious misrepresentation of my position. But before I go any further with this, you're going to need to be more explicit about how you think wseaton's comments conflict with my position; I'm not falling into that trap where you say something vaguely argumentative and then claim every response is attacking a straw man.

Echohead2 wrote:

LOL at splitting them out.

If you consider tablets to be another PC form factor, why wouldn't you split out all three form factors instead of combining two of them just because they've been around longer?

If you consider the tablet just another PC form factor, then how could PCs be dead or threatened?

Pretty easily, given that the term 'PC' means about six different things depending on context

But it's not actually my position that 'PCs are dead'.

Echohead2 wrote:

he said:

Quote:

PC sales have slowed because there's no reason to upgrade.

That is the conflict.

My position, again, is that we have not yet reached the limits of eventual consumer demand for additional computational resources. This is in no way incompatible with the notion that over some specific time period, newer products belonging to some narrow product category might not be very much more compelling than products in that category dating from a few years earlier.

My position, again, is that we have not yet reached the limits of eventual consumer demand for additional computational resources. This is in no way incompatible with the notion that over some specific time period, newer products belonging to some narrow product category might not be very much more compelling than products in that category dating from a few years earlier.

A position no one disagrees with.

Though, you could look at it and say maybe there isn't... meaning that when cellphones and tablets get to, say, the performance of a current i5, maybe demand will die off. Remember, people aren't demanding more computational power than is found in desktops--just that their weaker devices catch up to that.

My position, again, is that we have not yet reached the limits of eventual consumer demand for additional computational resources. This is in no way incompatible with the notion that over some specific time period, newer products belonging to some narrow product category might not be very much more compelling than products in that category dating from a few years earlier.

A position no one disagrees with.

Though, you could look at it and say maybe there isn't... meaning that when cellphones and tablets get to, say, the performance of a current i5, maybe demand will die off. Remember, people aren't demanding more computational power than is found in desktops--just that their weaker devices catch up to that.

How could we know? After all, Google Glass might implement some ridiculously complex VR HUD system using speech recognition and NLP, which going all at once 24/7 would be pretty freaking demanding for both the CPU and GPU, not to mention RAM.

After all, Google Glass might implement some ridiculously complex VR HUD system using speech recognition and NLP, which going all at once 24/7 would be pretty freaking demanding for both the CPU and GPU, not to mention RAM.

It's Google. The big computing loads will be server-side.

Now, Microsoft's inevitable 5-years-late-and-declared-dead-before-launch version may try to cram an i7 and 16 gigs of ram into the headset, Surface Pro style...

After all, Google Glass might implement some ridiculously complex VR HUD system using speech recognition and NLP, which going all at once 24/7 would be pretty freaking demanding for both the CPU and GPU, not to mention RAM.

It's Google. The big computing loads will be server-side.

Now, Microsoft's inevitable 5-years-late-and-declared-dead-before-launch version may try to cram an i7 and 16 gigs of ram into the headset, Surface Pro style...

Mature augmented reality is probably going to require quite a lot of client-side computation. Bandwidth and latency would be real killers here for a cloud-based approach. Latency especially. If you want virtual objects to appear to track the relative positions of real objects in your field of vision, you probably need latency under 10 or 20 ms.
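
To put a rough number on that latency budget, here's a back-of-the-envelope sketch in TypeScript; the head-rotation speed and viewing distance are assumptions picked purely for illustration, not measurements:

Code:

// Rough estimate of how far a virtual object appears to lag behind a real
// one during a head turn, as a function of end-to-end latency.
const headRotationDegPerSec = 100; // a brisk but ordinary head turn (assumed)
const viewingDistanceM = 0.6;      // roughly arm's length (assumed)

function apparentLagCm(latencyMs: number): number {
  const lagDeg = headRotationDegPerSec * (latencyMs / 1000); // angular error
  const lagM = viewingDistanceM * Math.tan((lagDeg * Math.PI) / 180);
  return lagM * 100;
}

console.log(apparentLagCm(20));  // ~2.1 cm of visible drift at 20 ms
console.log(apparentLagCm(100)); // ~10.6 cm at a typical cloud round trip

Even being generous, a network round trip plus server-side processing plus rendering blows well past a 10-20 ms budget, which is why the heavy lifting probably has to stay on the device.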

The PC will continue on as long as it's the most efficient way to enter data. And it still holds a several orders of magnitude advantage over tablets and phones in that area.

There are nuances that flavor this statement. That advantage assumes you don't have a keyboard for your tablet or phone; plug in a keyboard and that assumption, and advantage, disappears.

1) Taking photos used to be performed by cameras and then uploaded, managed, edited, and distributed by PCs. Now it is common for photos to be taken, uploaded, managed, and edited directly on a smartphone. This is even true of video now.

2) Social networking used to be a text-heavy task (mailing lists, IRC, chat programs) designed around keyboard and mouse. It is now common for smartphones and tablets, with suitably slimmed-down chunks of time/data entry, to perform the same types of things (MMS/SMS/iMessage/Facebook/Twitter).

3) Consuming data, be it video, music, news, or games, is well supported on mobile platforms.

PCs still have an advantage if you need extra processing power, extra storage, and extra screen size, but even that gets whittled away with every passing year (AirPlay gives my iPad a 32" 1080p screen, for example, and every generation the iPad doubles in processing power).

Without a mouse/trackpad, however, the ergonomics of just slapping a tablet in a keyboard dock do not work for any extended period of data input. Reason being, you eventually have to manipulate that data, and constantly stretching out your arm to use the extremely low-resolution input method that is your fingers is cumbersome, to put it mildly.

So then the interface has to have at least some accommodation for that input method. And voila, you're back to having a PC, just one that doesn't come with a keyboard and trackpad permanently attached. The degree to which it adjusts to the input methods will likely determine how successful it will be in replacing actual PCs. The processing power is the least of the hurdles IMO.

My position, again, is that we have not yet reached the limits of eventual consumer demand for additional computational resources. This is in no way incompatible with the notion that over some specific time period, newer products belonging to some narrow product category might not be very much more compelling than products in that category dating from a few years earlier.

A position no one disagrees with.

Though, you could look at it and say maybe there isn't... meaning that when cellphones and tablets get to, say, the performance of a current i5, maybe demand will die off. Remember, people aren't demanding more computational power than is found in desktops--just that their weaker devices catch up to that.

How could we know? After all, Google Glass might implement some ridiculously complex VR HUD system using speech recognition and NLP, which going all at once 24/7 would be pretty freaking demanding for both the CPU and GPU, not to mention RAM.

hmm...so from tablets and smartphones you jump to a totally different tech.

Furthermore, it isn't certain that it would require much above a desktop from today.

I'm well aware that they do. The question is how the OS and the resulting applications take advantage of the input method to the point where you're not losing significant productivity over a traditional PC interface.

"Supporting" a mouse and keyboard means very little if that's all it does - when you click a mouse button or hit a key on the keyboard, hey - something happens! It's a matter of optimization, and right now judging by at least RT apps, they are far from optimized for M&K - hence the existence of the desktop.

I simply don't know how you deal with the problem of information density when you move from a very small screen to a larger screen several feet in front of you without changing the interface significantly. If you keep the same sparse level as most tablet apps, you're going to be doing a heck of a lot of context switching and mouse movement. Controlling some of the Metro apps with a mouse and keyboard can be an exercise in frustration due to this (such as needing to move your mouse pointer to the context bar at the bottom of the screen when right-clicking instead of just pointing to the small context menu that would appear in a desktop app).

Quote:

So for some large fraction of the tablet/smartphone base, a tablet+keyboard+mouse = PC in functionality.

"Some large fraction"...means what, exactly? You could say that now for tablets without a keyboard and mouse, there are likely millions, if not tens of millions using them now as a PC replacement for 95% of the time.

My wife uses her iPad to edit Word docs sans keyboard. I can't tell you how common it is for people to edit Word docs, edit docs in general, or use a keyboard in general, other than to say that if the need exists, the capability can meet it.

One idea I sometimes see brought up in discussions like this is that - whether for an operating system in general or for more advanced apps like CAD or whatever - there should be a separate “mouse mode” with more precise/denser controls etc. I don’t think this is necessary or a good idea, and here's why.

An OS or app that supports multiple input modalities should be designed as a multimodal interface, not a modal interface (i.e. there should not be a separate Touch Mode and Mouse Mode, just as Windows desktop apps don’t have separate Mouse and Keyboard Modes; contrary to what you sometimes see written, in Windows 8 there is also not a “desktop mode” for the whole system, but rather the desktop is presented like another app).

Especially as the number of input modalities keeps increasing (yesterday mouse and keyboard; today touch, mouse, keyboard and to some extent pen; tomorrow who knows? speech, pen, gestures, eye tracking etc etc are waiting in the wings) it’s important to allow the user to fluidly switch, interleave or combine multiple modalities arbitrarily as fits tasks, context, posture, hardware, personal preference or aptitude etc. That means being careful to optimize the interface for each modality but without creating entirely separate interfaces or modes (which would make it harder / less fluid to switch/interleave/combine).

The model for this is how keyboard and mouse already work in desktop apps like Excel today: there’s not a separate UI or separate modes for keyboard and mouse, but at the same time it avoids “lowest common denominator”; it’s optimized for both keyboard and mouse, allowing you to use the strengths of each one (i.e. you can type formulas and use keyboard shortcuts, not just arrow/tab through the mouse UI; conversely, the mouse UI isn’t some kind of virtual keyboard). And then there are special affordances for using mouse and keyboard interleaved (e.g., reference a cell in a formula by clicking it, then continue editing the formula with the keyboard) or combined (e.g., Shift-/Ctrl-click).

So for better mouse usage in productivity / advanced / etc. scenarios, I think it would be better to think in terms of “mouse shortcuts” rather than a “mouse mode”. By analogy with how Metro IE (and OneNote) preserve many of the keyboard shortcuts from the desktop version, things like the Office floating toolbar or even Outlook’s flag/delete buttons are mouse shortcuts. Even in the desktop UI, these things are already “on-demand”, much like scrollbars in the new Windows UI. At least in principle they (or something similar) could be ported, purely for mouse, without compromising the touch UI at all or requiring a big modal switch (i.e. rather than having to switch the whole app into “mouse mode” and perhaps back out, those UI controls would simply only appear in response to a mouse action).

Not saying those particular ideas are necessarily good ones, just that you can think about this in a more flexible and nuanced way than people generally seem to be thinking …

I simply don't know how you deal with the problem of information density when you move from a very small screen to a larger screen several feet in front of you without changing the interface significantly. If you keep the same sparse level as most tablet apps, you're going to be doing a heck of a lot of context switching and mouse movement. Controlling some of the Metro apps with a mouse and keyboard can be an exercise in frustration due to this (such as needing to move your mouse pointer to the context bar at the bottom of the screen when right-clicking instead of just pointing to the small context menu that would appear in a desktop app).

See, this is a case in point. You can argue the advantages and disadvantages of using the app bar vs. a separate context popup for mouse*, but it doesn't even matter because you could EASILY use a context popup for mouse without changing the interface at all. Just have it appear when you right-click, and use traditional single-/double-/Ctrl-click for selection, multiselection and activation. (or - what I suspect would work better - try to create a new context menu along the lines of the floatie/minibar in Office.) Selection and commanding with touch would continue to work the same way so there would be no change.

*The advantage is that it makes it easier/simpler to multiselect by right-clicking (which would be too annoying if a context menu popped up every time you did it), instead of having to Ctrl-Click. It also simplifies learning the UI in that you have one single consistent concept/mechanism – “right click for app commands” – which takes the place of

1. menu bars
2. context menus
3. single selection
4. multi selection
5. having show/reveal UI modes/commands, which many desktop apps have in different ways that you have to figure out for each app (double-click a ribbon tab? hit Alt? hit F11 for fullscreen? find a button somewhere? who knows?)

This is in keeping with the Win7/8 design principle of “reduce concepts to increase confidence” which tries to simplify the UI by combining multiple mechanisms/concepts into one as much as possible (another example is how the Start screen, or the Win7 taskbar is both a launcher and a switcher, and a place for notifications).

Note, I overall agree that the app bar feels like a step backwards for mouse; I’m just pointing out that there are some advantages too.
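
To make the "just have it appear when you right-click" idea above concrete, here is a minimal TypeScript/DOM sketch of the pattern (showContextPopup, showAppBar and toggleSelection are hypothetical placeholders for whatever the app already has; this is one possible wiring, not how any shipping framework actually does it):

Code:

// Hypothetical helpers standing in for the app's existing UI pieces.
declare function showContextPopup(x: number, y: number, item: Element): void;
declare function showAppBar(item: Element): void;
declare function toggleSelection(item: Element): void;

const list = document.querySelector<HTMLElement>(".item-list")!;

// Mouse: right-click opens a context popup next to the pointer.
// Touch never takes this path, so the touch UI is left completely alone.
list.addEventListener("contextmenu", (e) => {
  const item = (e.target as Element).closest(".item");
  if (!item) return;
  e.preventDefault(); // suppress the default browser/system menu
  showContextPopup(e.clientX, e.clientY, item);
});

// Touch: keep the existing behaviour (select the item and reveal the app bar).
list.addEventListener("pointerdown", (e) => {
  if (e.pointerType !== "touch") return; // mouse is handled above
  const item = (e.target as Element).closest(".item");
  if (!item) return;
  toggleSelection(item);
  showAppBar(item);
});

The point is that the mouse path is purely additive: selection and commanding with touch keep working exactly as they already do, and there's no app-wide "mouse mode" to switch in and out of.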

My position, again, is that we have not yet reached the limits of eventual consumer demand for additional computational resources. This is in no way incompatible with the notion that over some specific time period, newer products belonging to some narrow product category might not be very much more compelling than products in that category dating from a few years earlier.

A position no one disagrees with.

Though, you could look at it and say maybe there isn't... meaning that when cellphones and tablets get to, say, the performance of a current i5, maybe demand will die off. Remember, people aren't demanding more computational power than is found in desktops--just that their weaker devices catch up to that.

How could we know? After all, Google Glass might implement some ridiculously complex VR HUD system using speech recognition and NLP, which going all at once 24/7 would be pretty freaking demanding for both the CPU and GPU, not to mention RAM.

hmm...so from tablets and smartphones you jump to a totally different tech.

Furthermore, it isn't certain that it would require much above a desktop from today.

It isn't a totally different tech, just a different UI. If it's more convenient to put the power on the phone, rather than on the glasses, then it becomes about phone processors again.

Edit for content: A year or so ago, I took a two week road trip, and I didn't take a laptop with me, just my smartphone and my Nook Color with CyanogenMod. I was able to do just about everything that I wanted to do with these two devices. The Nexus 7 that's since replaced the Nook Color is even more capable. Tablets and phones are slowly becoming the go-to devices, so people who don't need MOAR POWAH are able to get by with these devices.

So not everything, and maybe people don't want to just "get by".

I was able to do about 95% of what I use my computers for with my tablet. The only things I couldn't do were play "serious" games (i.e., more than apps you can find on a tablet) and visit some websites that weren't mobile friendly. For a lot of people, that's more than enough to get by. Not everyone wants to put up with those limitations, but the kinds of people that bought eMachines computers at WalMart would probably be well served by a tablet.

The simple fact is that a lot of people don't really need the power of a desktop system, and even Pentium 4 era systems are capable of doing what they need. It's been known for a while that there hasn't been a killer app for a long time that drives PC sales. Outside of workstations, hobbyists and gamers, people aren't buying new PCs unless they need to replace an older one.

Another way to look at it is that the PC bubble has burst or is about to burst. In the beginning, PCs were something that you had to put together yourself, so only nerds had them. Then, they became commodities, but they were still fairly expensive, so the penetration in the general population was still low. Eventually, the Internet convinced people that had had no interest in a personal computer to buy one. For about a decade, a PC was the only way to get on the Internet. It's only been the last few years that browsing on a tablet has been doable, and that experience has been improving every generation. When you add this trend to the downturn in the economy, it shouldn't be surprising that people aren't spending their money on new computers.

The simple fact is that a lot of people don't really need the power of a desktop system, and even Pentium 4 era systems are capable of doing what they need. It's been known for a while that there hasn't been a killer app for a long time that drives PC sales. Outside of workstations, hobbyists and gamers, people aren't buying new PCs unless they need to replace an older one.

The saving grace of PCs won't be CPU power, in my opinion. It'll be screen size. Until someone puts out a decent flexible display, it's simply physically impossible for a mobile device to increase its screen size without impacting mobility.

That'll eventually converge on TVs, but that requires higher than 1080p.

Theoretically, you can drive a big monitor from a tablet. But then you expose the relative weakness of the tablet GPUs.

The saving grace of PCs won't be CPU power, in my opinion. It'll be screen size. Until someone puts out a decent flexible display, it's simply physically impossible for a mobile device to increase its screen size without impacting mobility.

That'll eventually converge on TVs, but that requires higher than 1080p.

Theoretically, you can drive a big monitor from a tablet. But then you expose the relative weakness of the tablet GPUs.

What are you talking about? iPads already natively drive an internal 2048x1536 display. Tablet GPUs aren't in fact the weakest link in the ability to drive a 1080p monitor.
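
For what it's worth, the pixel arithmetic backs that up:

Code:

// Pixel counts: the iPad's internal panel vs. an external 1080p monitor.
const ipadRetina = 2048 * 1536;   // ≈ 3.15 million pixels
const monitor1080p = 1920 * 1080; // ≈ 2.07 million pixels

console.log(ipadRetina / monitor1080p); // ≈ 1.52

The internal Retina panel has roughly 50% more pixels than a 1080p display, so driving an external 1080p monitor is, pixel for pixel, less work than driving the tablet's own screen.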

The saving grace of PCs won't be CPU power, in my opinion. It'll be screen size.

I agree. Heck. Imagine if the new Haswell chips had 10x the performance and the same price. Would it drive tons of people to go and get them because of the huge leap in CPU performance? I doubt it. I know I wouldn't. It would probably be 40x the performance of my current computer (or more), and I wouldn't bother because it wouldn't get me anything.

But your expressed behaviour and preferences here put you at the Luddite end of the nerd curve, so your case isn't so pertinent. Convince people that consumers won't want the cool new functions and devices that powerful miniaturisation will deliver, and you'll have a stronger case.

So why not say the saving grace of PCs won't be CPU power, but GPU power?

Because it's not that simple. The use case of a big screen and tablet are different. If all else was equal, plugging or docking a tablet into a large display would get you all the advantages of a large display. But then

a) you're paying all the penalties of extreme miniaturization for a use case that doesn't benefit from it:
. . i) cost
. . ii) lack of maintainability
. . iii) lack of reliability (any one component breaks, the entire device may be junk)
b) you're going to have to have some sort of hybrid OS, and there's no reason to believe that a hybrid desktop/tablet OS that evolves from Android will be better than one that evolves from Windows
c) people are willing to put up with a lot less actual effective performance on a tablet compared to a sit-down desktop/laptop, as long as the perceived performance is good. Put a tablet up on the big screen and all the dirty little tricks they use to hide their lack of actual grunt are much more noticeable.

Mind you, as optical drives become less and less necessary and SSDs come down in price, I think the Mac Mini form factor will be much more common than the traditional desktop form factor.

Long story short, people still use the fuck out of PCs and they will continue to do so even as everyone gets starry-eyed over whatever the latest headlines say.

They aren't dead, and no manner of abuse and fudging of the English language will change that.

The article is as flawed as the attack, and I quote:

Quote:

PC replacement cycles have slowed

The fact is your 4-6 year old PC (or Mac) hardware is good enough. That’s not Windows 8’s fault nor is it your phone’s fault. The PC form factor matured over the past decade with the power, speed and performance good enough for mainstream users.

No, this is in fact Windows 8's fault; if Microsoft cannot compel upgrades by increasing the need for a PC then the side effect is a longer replacement cycle. It is not only Microsoft's fault, but they are in fact at fault.

I just this past weekend placed an order for a 13" MacBook Pro with Retina; for whatever reason the PC world is slow on the uptake, but super high resolution laptops could/should have been normal by 2012, and are not, being beaten to the punch by both iPad and MacBook Pro. Even super high DPI screens are insufficient, but they are at least one thing to drive sales.

The article further expounds:

Quote:

Lots of time is being spent on that “dead” PC form factor. More than 70 percent of all hours spent on computing devices (PCs, smartphones, and tablets) are on a PC.

This is the problem, not the refutation of the problem. That number should be 95%, not 70%. Microsoft should have been pushing Windows into tablets instead of letting Apple and Android shove into the space. They should have displaced Symbian with Windows Phone, instead of allowing Apple and Android into that space.

Then at least when 30% of computing is performed on a non-PC, they can still claim 92% is performed on a Windows OS.

Quote:

Lots of money is being spent on that “dead” PC form factor. Only 1.5 percent of advertising spend comes from mobile while 20.9 percent comes from online PC users.

Again, this is misleading. From the source you find that PC time is 173 minutes per day with non-PC time being 82 minutes a day. The issue is, again, that Microsoft should be accounting for 92% of that time and not only 67% of online time. Yes, PCs currently account for the higher ad spend, but that means that advertisers are spending less on mobile platforms despite mobile platforms accounting for 33% of online time. They are behind and, as per the source article:

Quote:

As outlined in eMarketer’s mobile advertising spending forecast from September, both advertisers and publishers are still working to develop the infrastructure necessary to support larger mobile ad buys.

PCs get the larger ad spend because the infrastructure to target end users exists there, not because the PC is the preferred platform (and, measured by time spent, the PC's share is shrinking, which makes that spend disproportionate!)
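
Working the source's numbers through (simple arithmetic; the only assumption is that the 20.9% and 1.5% figures are both shares of total ad spend, as the quote reads):

Code:

// Daily time spent, per the cited eMarketer figures.
const pcMinutes = 173;
const mobileMinutes = 82;
const totalMinutes = pcMinutes + mobileMinutes; // 255

console.log((pcMinutes / totalMinutes) * 100);     // ≈ 67.8% of online time on PCs
console.log((mobileMinutes / totalMinutes) * 100); // ≈ 32.2% on mobile

// Ad spend: 20.9% of all advertising goes to online PC users, 1.5% to mobile.
console.log((20.9 / (20.9 + 1.5)) * 100); // ≈ 93% of online ad dollars go to PCs

So PCs get roughly 68% of the time but about 93% of the online ad dollars, which is exactly the mismatch the eMarketer quote attributes to immature mobile ad infrastructure rather than to any preference for the PC.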

No, the PC isn't dead, at least not yet, but neither is it going to rule the roost in the near future at this rate.

Long story short, people still use the fuck out of PCs and they will continue to do so even as everyone gets starry-eyed over whatever the latest headlines say.

They aren't dead, and no manner of abuse and fudging of the English language will change that.

Of course they aren't dead. Sales are still healthy and the installed base is still expanding. People are confusing quarterly sales with installed base. As PCs last longer, quarterly sales will decline.

and that is exactly the point of the link you provided:

Quote:

PC replacement cycles have slowed.

God I love it when people say the same as me:

Quote:

The fact is your 4-6 year old PC (or Mac) hardware is good enough. That’s not Windows 8’s fault nor is it your phone’s fault. The PC form factor matured over the past decade with the power, speed and performance good enough for mainstream users.

Quote:

existing PCs are actually alive and living longer than ever before.

and here is the simple math:

Assume the market is fully saturated (otherwise the math gets to be a PITA).

If the turnover rate is 3 years and the installed base is 1.2B... then you need 100 million sales per quarter.

However, if the turnover rate changes to 4 years, quarterly sales would drop to 75 million. The installed base didn't change.
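
Or, as a trivial function (same assumption of a fully saturated market where every sale is a replacement):

Code:

// Quarterly sales needed just to sustain a saturated installed base.
function quarterlySales(installedBase: number, turnoverYears: number): number {
  return installedBase / (turnoverYears * 4); // 4 quarters per year
}

console.log(quarterlySales(1.2e9, 3)); // 100,000,000 per quarter
console.log(quarterlySales(1.2e9, 4)); //  75,000,000 per quarter

A 25% drop in quarterly sales with zero change in the installed base.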