Gabe Newell said that Apple may be the main problem with getting the Steam Box into living rooms.

Newell, Valve's co-founder, told a class at the University of Texas' LBJ School of Public Affairs that Apple could be a threat to his company's upcoming Steam Box if it gets to the living room first.

Steam Box is Valve-developed hardware that aims to broaden the reach of Steam, which is Valve's digital distribution and multiplayer/communications platform. Right now, Steam delivers a variety of games to a user's desktop computer, but Steam Box will bring these games to the living room -- such as on a TV with Big Picture mode.

Not much else has been disclosed about Steam Box, other than the fact that it's Linux-based and will be an open system (Newell even said that it'd be possible to install Windows onto the Steam Box). While no release date is in sight, Newell worries that Apple may launch a similar platform for the living room -- thus beating Valve to the punch.

"The threat right now is that Apple has gained a huge amount of market share, and has a relatively obvious pathway towards entering the living room with their platform," Newell said. "I think that there's a scenario where we see sort of a dumbed down living room platform emerging — I think Apple rolls the console guys really easily. The question is can we make enough progress in the PC space to establish ourselves there, and also figure out better ways of addressing mobile before Apple takes over the living room?"

While Valve is looking to offer the best hardware for the best possible price point, Newell worries that Apple may make a move first and offer a closed platform that will lack the user-generated content that Valve (through Steam Box) would offer.

"The biggest challenge, I don't think is from the consoles," Newell said. "I think the biggest challenge is that Apple moves on the living room before the PC industry sort of gets its act together."

quote: Trivial really considering the next PS or XBOX won't have all that great of hardware.

It may have lower theoretical performance than current gen PC hardware, but being able to optimize for a single platform can improve performance far more than you'd think. I would not be surprised to see the hardware perform as well as high end PC hardware (GTX 680, HD 7970).

quote: It may have lower theoretical performance than current gen PC hardware, but being able to optimize for a single platform can improve performance far more than you'd think. I would not be surprised to see the hardware perform as well as high end PC hardware (GTX 680, HD 7970).

That would be impossible given the significant gulf in brute processing power between a console GPU and a modern GTX or AMD GPU. Optimization doesn't work that way; computation speed isn't theoretical, it's readily advertised.

Console developers will instead figure out how far they can push texture quality, then polygon count, then post-processing, and fit it all within the limited confines of a known GPU as smoothly as possible while maintaining 30 FPS.

PC gamers will retain 60 FPS performance with tessellation, HBAO, and other improvements, since they have the superior throughput and processing power.

The only concession you get is porting inefficiencies. Console ports have a terrible reputation on PC because they are poorly optimized for the better PC hardware. Those inefficiencies tend to make console games run terribly on a PC, which can give the appearance of near-equivalent performance. Mass Effect 1, for example, looked really bad on PC because, that early on, Bioware had to cut out a lot of graphical goodies to make it perform well on console (and the PC version inherited those decisions, and worse).

This is not about pushing polygons and improving texture quality; it's about optimizing the rendering API to remove inefficiencies caused by needing to support generic hardware, and optimizing drivers and the like. This is low-level, under-the-hood stuff, not "game developer X pushed really hard to optimize their engine."

Not as much. There's optimized, and then there's optimized. "Optimization" refers to the process of taking advantage of hardware specifics, but the Steam Box has fewer specifics since the hardware (and probably most of the OS) is off-the-shelf. There isn't as much to optimize for.

That's not the only problem with the Steam Box, either. It's also lacking in convergence, unless Big Picture gets some serious upgrades at some point. The price point is absurd. The Steam Box is, effectively, nothing more than a PC OEM. A stupid one.

Optimizing for a single platform only goes so far. EA, for example, optimizes all of its games for nVidia hardware using CUDA extensions and hardware PhysX. It generally yields about a 15% improvement in performance over ATI's "equivalently powered" competition.

As far as CPU optimizations are concerned, what does one x86 CPU have over another? AMD was missing SSE4 for a while, but those instructions yield negligible performance improvements.

If everything is x86, the whole idea of optimization is gone. Current x86 titles are already optimized for x86 (unless they're ported from a console), yet new games are constantly pushing the limits of bleeding-edge hardware at 1080p. There wasn't a single GPU at the launch of Crysis or Battlefield 3 that could run those games at maximum settings at 1080p.

Crysis alone virtually pushed SLI as 'practical', although a year later all the next-gen GPUs ran it well. Consoles don't have this luxury: they're not modular and cannot be upgraded. The only good things that will come out of x86 consoles are easy porting (assuming they are Windows/DirectX-based, which the Sony one won't be; it will likely run a Linux kernel) and backward compatibility, if the succeeding consoles are also x86 (which they won't be, because x86 is dying, and 10 years from now we will hopefully all be running RISC-based CPUs).

Intel has known x86 needed replacement for two decades. IA64 was their answer, but it proved completely impractical in implementation. If AMD hadn't added 64-bit memory-addressing extensions to x86, Intel would have pushed IA64 mainstream and spent the billions in their war chest to do it.

lol, did you post all of that just to vent your x86 hate? What limits a game is the graphics card, and those sure aren't x86. As resolution increases, the CPU becomes even less relevant to the overall processing load. I'm playing BF3 on an i7 and the CPU barely hits 30% usage; my graphics card, however, is like a tornado, so I'd guess it's at 100%.

I don't see why being x86 would make a game easily portable. A game made for Windows (x86) that uses DirectX sure won't be ported that easily to Linux (x86)... hell, what if I told you Apple has been using x86 CPUs for a long time now?

Intel proved most of your points wrong when they released their latest x86 chip for smartphones.

People have been proclaiming the death of x86 for over 20 years, and guess what: if I were you, I wouldn't bet that 10 years from now most PCs will be using a RISC CPU.

BTW, why would anyone wish for the death of x86 if it's only to replace it with a proprietary thing like the ARM ISA? Maybe people like to shoot themselves in the foot nowadays.

Wow, working backwards, I'll start with how clueless you are about the x86 ISA.

It is SO hard to license that currently only two manufacturers print wafers built on it. VIA/Cyrix/Transmeta/etc. have all given up their licenses because it was expensive and more complicated than buying licenses for Microsoft software (which is, and always has been, a disaster, working through vendors that have no idea what is what).

BOTH ISAs are PROPRIETARY. ARM Holdings is substantially more lax about instruction modifications and additions, as long as they remain compatible with the target compiler, i.e., ARMv7.

Everything else you said shows that you need to research and understand instruction sets better, and why x86 is NOT desirable for 99% of what people interact with on a device. CISC computing is completely inefficient for almost everything; it cannot be made power efficient. Yes, the Intel Atom-based smartphone is pretty fast, but the battery life is lower and it still lags, clock-for-clock, even under optimized conditions (such as SunSpider).

If you play BF3, you might know other people who have tried BF3 on dual-core CPUs. BF3, like most new games, is optimized for quad-core, which is why your (and my) i7s run it so well. I fear these game consoles, which as I said aren't modular or upgradable, will fall far behind the power of our i7s very quickly, which will again hinder advanced development.

If they're going to go x86 in a console, the only real advantage is transparency (upgradability), which the manufacturers won't take advantage of.

There is little to no argument you can make to me or any other engineer that CISC architecture is superior to RISC architecture. Why do you think your video card is so good at what it does, after all? It sure as hell isn't x86 :)

No one has the RISC vs CISC argument anymore. That ship has sailed. ARM CPUs implement CISC features and philosophies now to help overcome some of their legacy constraints, and white papers on the next generation show them doing it even more so. The x86 family has been implementing RISC features and philosophies for years.

Medfield has average battery performance. That's pretty good considering your and others' accusations that it's an insatiable battery monster. It also performs well. This is all on the first try. Chances are, Merrifield is going to embarrass you in this argument. Intel has shown time and again how competent they are in the CPU space. You generally don't want to bump heads with a group that spends as much as they do on R&D.

GPUs actually aren't all that great at what they do. They achieve their goals through brute force throughput and parallelism, mainly because that area is so advanced that computer science makes slow strides in developing efficiencies for it. Most recent advancements in consumer-level graphics technology have been small implementations of CISC philosophies.

Ultimately, your argument is invalid. Both philosophies have their place, which is why this argument died years ago and the two started intermingling. HSA tech will accelerate that blending, and most parts in the near future will be able to draw on whichever features are appropriate.

The kind of optimization that happens on consoles is very different from what happens on PC. A console is a static platform with much lower overhead and lower-level access to the hardware. Something like God of War 2 on the PS2 in 2007 would not be possible on a 1999 PC, you know?

"How great" hardware is depends 100% on what it will be used for, and based on that, I'm guessing "next-gen" consoles will be similar to what you could do with a good midrange gaming PC (i.e. the ballpark of a Core i3/5/A8 + 7850/660 + 4GB of RAM).

Part of my reason for believing that is that they'll very likely be optimized for 1080p with the ability to upscale, meaning anything significantly more powerful than the above GPUs would be unnecessary. Also, the above config should be doable in under 100 watts, today.

Anyway, we'll probably find out if I'm right in a few weeks (mysterious Sony announcement at the end of February).