Below you can see a screen grab from PConline which purports to show the specifications of the GTX 680. While the specs are well within reason, without any way to verify this leak or to translate the Chinese characters, it is hard to have these specs confirmed or denied as they stand. Whether you should take the below with a good dose of NaCl is as yet unknown, but for now we can enjoy the speculation until NVIDIA finally releases the cards for review.

Please feel free to add any speculations, doubts or other leaks in the comments below ... or even a decent translation would be great! You can catch the Google Translation here, if you wish to torture your brain with exclusive exposure.

Introduction, GT 640M Basics

About two months ago I wrote a less than enthusiastic editorial about ultrabooks that pointed out several weaknesses in the format. One particular weakness in all of the products we've seen to date is graphics performance. Ultrabooks so far have lacked the headroom for a discrete graphics component and have instead been saddled with a low-performance version of the already so-so Intel HD 3000 IGP.

This is a problem. Ultrabooks are expensive, yet so far they are less capable of displaying rich 3D graphics than your typical smartphone or tablet. Casual gamers will notice this and take their gaming time and dollars in that direction. Early leaked information about Ivy Bridge indicates that there has been a substantial increase in graphics capability, but the information available so far is centered on the desktop. The version that will be found in ultrabooks is unlikely to be as quick.

Today we're looking at a potential solution - the Acer Aspire Timeline Ultra M3 equipped with NVIDIA's new GT 640M GPU. This is the first laptop to launch with a Kepler based GPU. It is also an ultrabook, albeit one with a 15.6" display. Otherwise, it isn't much different from other products on the market, as you can see below.

This is likely to be the only Kepler based laptop on the market for a month or two. The reason for this is Ivy Bridge - most of the manufacturers are waiting for Intel’s processor update before they go to the trouble of designing new products.

NVIDIA has been having a rough life lately, with problems besetting them on all sides. Their IGP business has been disembowelled by AMD's Llano, and even Intel is now offering usable graphics with the HD 3000 on higher end Sandy Bridge chips. The console makers seem to have decided on AMD as the provider of choice for the next generation of products, which locks NVIDIA out of that market for years to come, as console generations tend to last significantly longer than PC components. The delays at TSMC have enabled AMD to launch three families of next generation GPUs without NVIDIA being able to respond, which not only hurts NVIDIA's bottom line but lets AMD set their own pricing until NVIDIA can finally release Kepler, at a price that will not be wholly of NVIDIA's choosing.

Now, according to SemiAccurate, they are losing a goodly portion of Apple's MacBook business as well. The supply issues resulting from the fabrication problems were likely a big factor in Apple's decision to trim back GPU orders, but there is also the fact that the low to mid range GPU could well be going extinct. With the power of the forthcoming Intel HD 4000 and AMD's Trinity line of APUs, it will become hard for laptop and system makers to justify putting in a discrete GPU, since they will have to choose relatively expensive parts for the discrete GPU to actually contribute to performance. That leaves NVIDIA only providing GPUs for high end MacBooks, a much less lucrative market than the mid range. And that is without even mentioning the previous issue of overheating GPUs.

"That is exactly what SemiAccurate moles are telling us is going on. Nvidia can’t supply, so Apple threw them out on their proverbial magical experience. This doesn’t mean that Nvidia is completely out at Apple, the Intel GPUs are too awful to satisfy the higher end laptops, so there will need to be something in those. What that something is, we don’t definitively know yet, but the possibilities are vanishingly small."

Whether you are an NVIDIA fan or not, you are likely anxiously awaiting the release of NVIDIA's new GPU. Since AMD is currently holding the lead in graphics cards, they have no competition to lower the price of their high end cards, which can beat anything NVIDIA currently has on the market. Depending on the price and performance of the new Kepler chip, its release should have an effect on the pricing of at least one line of AMD card, be it Pitcairn or Cape Verde. This is why SemiAccurate's pegging of the expected release dates, both paper and physical, is worth taking a look at, especially if your GPU recently died and you are looking at getting a new one. We should have a good idea of the price, performance and availability by the end of March.

"Today, March 8, is the day where the press that Nvidia flew in get their ‘tech day’, basically the deep dive on Kepler. Then, like we said a few days ago, Nvidia will paper launch the cards next Monday, March 12. From that point, things get a little hazy because people are arguing over two different days. Some are saying Friday March 23, others Monday March 26, with a few more saying 23 than 26. It could be none of the above though, but if you bet on two weeks out, you won’t be far off."

NVIDIA's Tegra 3 mobile processor may not have much market share (yet), but it sure is powerful! In a recent blog post, the company reiterated just how much it is accelerating applications on mobile platforms, including photo editing, remote desktop clients, and even mobile video editing. They further got comments from several of the app developers stating that the Tegra 3 was the piece that made their applications run so smoothly. Granted, some of this is marketing and promotion; however, that fact doesn't make their mobile quad core (4+1 power saving core) SoC any less impressive hardware-wise.

Some of the applications NVIDIA heralded include Snapseed, PowerDirector, Splashtop, and Photaf. Snapseed and PowerDirector are photo editing and video editing applications respectively, while Photaf is a photo stitching app that allows you to combine multiple shots into panoramas.

Finally, Splashtop is an application that takes advantage of the Tegra 3 to bring a remote desktop client to Android that is proclaimed to be fast enough to run games as well as handle more traditional desktop access. Both Splashtop and Photaf are available now on Google Play for you to test out, while PowerDirector and Snapseed are coming to Android later this year.

Below is the video of the Android application developers talking about the benefits of the quad core Tegra 3 processor.

What do you guys think of NVIDIA's "4+1" core processor? Will it stand up to the new Apple A5X chip? According to NVIDIA it won't, while Apple (obviously) says the opposite, but we'll have to wait until we have an iPad in house to see whose chip is actually faster.

In a recent press release, the Linux Foundation added four new members, one of which is a big deal in the graphics card industry. Joining alongside Fluendo, Lineo Solutions, and Mocana is the green GPU powerhouse NVIDIA. According to Maximum PC, there is talk around the web of the company moving to open source graphics drivers; however, NVIDIA has not released anything to officially confirm or deny it.

The Linux Foundation's Logo

Such a move would be rather extreme and unlikely, but it would certainly be one that is welcomed by the Linux community. Officially, the Vice President of Linux Platform Software, Scott Pritchett, stated the company is "strongly committed" to delivering quality software/hardware experiences and they hope their membership in the Linux Foundation will "accelerate our collaboration with the organizations and individuals instrumental in shaping the future of Linux." Further, they hope to be able to add to and enhance the user and development experience of the open source operating system.

The three other members to join the Linux Foundation specialize in multimedia software (Fluendo), embedded system development (Lineo Solutions), and device-agnostic security (Mocana), but the green giant that is NVIDIA has certainly stolen the show and is the big announcement for the Foundation (not that the other additions are unwelcome; it is simply a big deal to have NVIDIA on board). Amanda McPherson, VP of Marketing and Developer Services for the Linux Foundation, wrapped up the press release by saying that all of the new members "represent important areas of the Linux ecosystem and their contributions will immediately help advance the operating system."

NVIDIA has generally enjoyed good support on the major Linux distributions, but now that they are a member, here's hoping they can further improve their Linux graphics card drivers. What is your take on the Linux Foundation's new members? Will they make a difference?

Apparently TSMC stopped the entire line about three weeks ago and has not restarted it. This type of thing does not happen very often, and when it does, things are really out of whack. Going back, we have heard mixed reviews of TSMC's 28 nm process. NVIDIA was quoted as saying that yields still were not very good, but at least were better than what they experienced with their first 40 nm part (the GTX 400 series). Now, part of NVIDIA's problem was that the design was as much of an issue as the 40 nm process was. AMD at the time was churning out HD 5000 series parts at a pretty good rate, and they said their yields were within expectations.

AMD so far is one of the first customers out of the gate with a large volume of 28 nm parts. The HD 7900 series has been out since the second week of January, the HD 7700 series since mid-February, and the recently announced HD 7800 series will reach market in about two weeks. Charlie has done some more digging and has found out that AMD has enough product in terms of finished boards and packaged chips that they will be able to handle the shutdown from TSMC. Things will get tight at the end, but apparently the wafers in the middle of being processed have not been thrown out or destroyed. So once production starts again, AMD and the other customers will not have to wait 16 to 20 weeks before getting finished product.

NVIDIA will likely not fare nearly as well. The bulk of the stoppage occurred during the real “meat and potatoes” manufacturing cycle for the company. NVIDIA expects to launch the first round of Kepler based products this month, but if production has been stopped for the past three weeks then we can bet that there are a lot of NVIDIA wafers just sitting in the middle of production. Charlie also claims that the NVIDIA launch will not be a hard one, and NVIDIA expects retail products to be available several weeks after the introduction.

The potential reasons for this could be legion. Was there some kind of toxic spill that resulted in a massive cleanup that required the entire line to be shut down? Was there some kind of contamination that was present while installing the line, but was not discovered until well after production started? Or was something glossed over during installation that ballooned into a bigger problem that just needed to be rectified (a stitch in time saves nine)?

It seems that there have been a few leaks on NVIDIA's first Kepler based product. Techpowerup and Extreme Tech are both reporting on leaks that apparently came from Cebit and some of NVIDIA's partners. We now have a much better idea of what the GTX 680 is all about.

Epic's Mark Rein is showing off his own GTX 680 which successfully ran their Samaritan Demo. It is wrapped for his protection. (Image courtesy of Extreme Tech)

The chip that powers the GTX 680 is the GK104, and it is, oddly enough, the more "midrange/enthusiast" offering. It has a total of 1536 CUDA cores, runs at 703 MHz core and 1406 MHz hot clock, has a 256-bit memory bus pumping out 196 GB/sec, and has a new and interesting feature that is quite a bit like the Turbo Core functionality we see from both AMD and Intel in their CPUs. Apparently when a scene gets very complex, the chip is able to overclock itself up to 900 MHz core/1800 MHz hot clock. It will stay there either for as long as the scene needs it or until the chip approaches its upper TDP limit.
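To put the leaked memory figure in perspective, here is a quick back-of-the-envelope check (my own arithmetic, not part of the leak) showing the effective GDDR5 data rate a 256-bit bus would need in order to deliver the quoted 196 GB/sec:

```python
# Back-of-the-envelope check on the leaked bandwidth figure.
# The 196 GB/sec number comes from the leak; everything derived below is
# simple arithmetic, not an official specification.
bus_width_bits = 256
quoted_bandwidth_gbs = 196                     # GB/sec, as quoted in the leak

bytes_per_transfer = bus_width_bits // 8       # 32 bytes moved per transfer
effective_rate_gtps = quoted_bandwidth_gbs / bytes_per_transfer
memory_clock_ghz = effective_rate_gtps / 4     # GDDR5 transfers four times per clock

print(f"Implied effective data rate: {effective_rate_gtps:.3f} GT/s")  # ~6.125 GT/s
print(f"Implied memory clock:        {memory_clock_ghz:.2f} GHz")      # ~1.53 GHz
```

In other words, if the leak is accurate, it implies GDDR5 running at roughly 6.1 Gbps effective, which is aggressive but not unreasonable for a new 28 nm part.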

These reports paint the GTX 680 as being about 10% faster than the HD 7970 in certain applications, but in others it is slower. I figure that when reviews are finally released the two cards will have traded blows with each other over which is the fastest graphics card. Let's call it a draw.

The GTX 680 should be unveiled in the next week or so, but initial reviews will not surface until later in the month. Retail availability will have to wait until then as well, and with the issues that TSMC has had with their 28 nm process (it has been stopped since the middle of February) we have no idea how much product NVIDIA and its partners have. Things could be scarce for some time after the introduction.

GDC 2012 is upon us, and in addition to the Samaritan demo and some gaming goodness, we spotted a leaked image over at Legit Reviews that is allegedly a photo of a production NVIDIA GTX 670 Ti graphics card.

The cooler looks to cover the whole PCB and be of the blower design, funneling hot air out the front of the card and out of the case. The connectors include two DVI, one HDMI, and one DisplayPort. Rumors suggest that the latest NVIDIA cards will be capable of multi-display (>2) output from a single card, much like AMD cards have been doing for some time.

Not a whole lot is known with a good deal of certainty about the upcoming GK104 "Kepler" GPUs, but we have reported on a few leaks, including that the cards will have 2 GB of GDDR5 RAM on a 256-bit memory bus, and that the cards may just be coming out in May. With Epic already using a working Kepler GPU in their Samaritan demo, that launch date does not sound too far-fetched either. On the performance front, there are conflicting rumors; some state that the cards will blow AMD out of the water while other people swear the cards will not be as powerful as the rumors suggest. I suppose we'll find out soon though!

Are you still waiting for NVIDIA's Kepler GPUs or have you jumped on the latest Radeon series?

Last year we saw Epic unveil their Samaritan demo, which showed off next generation gaming graphics using three NVIDIA GTX 580 graphics cards in SLI. Epic Games showed off realistic hair and cloth physics along with improved lighting, shadows, anti-aliasing, and more bokeh effects than gamers could shake a controller at, and I have to say it was pretty impressive stuff a year ago, and it still is today. What makes this round special is that hardware has advanced such that Samaritan-level graphics can be achieved in real time with a single graphics card, a big leap from last year's requirement of three SLI'd NVIDIA GTX 580s!

The Samaritan demo was shown at this year's GDC 2012 (Game Developers Conference) running on a single NVIDIA "Kepler" graphics card in real time, which is pretty exciting. Epic did not state any further details on the upcoming NVIDIA graphics card; however, the knowledge that the single GPU was able to pull off what it took three Fermi cards to do certainly holds promise.

According to GeForce, however, it was not merely the NVIDIA Kepler GPU that made the Samaritan demo on a single GPU possible. The article states that it was the inclusion of NVIDIA's method for anti-aliasing known as FXAA, or Fast Approximate Anti-Aliasing, that enabled it. Unlike the popular MSAA option employed by (many of) today's games, FXAA uses much less memory, enabling single graphics cards to avoid being bogged down by memory thrashing. They further state that the reason MSAA is not ideal for the Samaritan demo is that the demo uses deferred shading to provide the "complex, realistic lighting effects that would be otherwise impossible using forward rendering," a method employed by many game engines. The downside to the arguably better lighting in the Samaritan demo is that, with MSAA, it requires four times as much memory. This is because the GPU RAM needs to hold four samples per pixel, and the workload is magnified four times in areas of the game where there are multiple intersecting pieces of geometry.

FXAA vs MSAA

They go on to state that without AA turned on, the lighting in the Samaritan demo uses approximately 120 MB of GPU RAM, and with 4x MSAA turned on it uses about 500 MB. That's 500 MB of memory dedicated just to lighting, memory that could instead be used to hold more of the level and physics data, for example, and it would require the GPU to swap more data than it should have to compared with using FXAA. They state that FXAA, on the other hand, is a shader based AA method that does not require additional memory, making it "much more performance friendly for deferred renderers such as Samaritan."
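To see why 4x MSAA balloons memory usage like that in a deferred renderer, here is a rough sketch; the resolution, render target count, and per-pixel sizes below are my own illustrative assumptions rather than Epic's published figures, so the roughly four-fold scaling is the only point being made:

```python
# Rough illustration of how a deferred G-buffer scales with MSAA sample count.
# All of the figures below are assumptions chosen for the example.
WIDTH, HEIGHT = 1920, 1080
GBUFFER_TARGETS = 4              # assumed: e.g. albedo, normals, depth, material
BYTES_PER_PIXEL_PER_TARGET = 8   # assumed: 16-bit float RGBA per render target

def gbuffer_megabytes(samples_per_pixel: int) -> float:
    """G-buffer footprint in MB for a given number of MSAA samples per pixel."""
    total_bytes = (WIDTH * HEIGHT * samples_per_pixel
                   * GBUFFER_TARGETS * BYTES_PER_PIXEL_PER_TARGET)
    return total_bytes / (1024 ** 2)

print(f"No AA / FXAA (1 sample per pixel): {gbuffer_megabytes(1):.0f} MB")  # ~63 MB
print(f"4x MSAA (4 samples per pixel):     {gbuffer_megabytes(4):.0f} MB")  # ~253 MB
```

FXAA stays at one sample per pixel because it runs as a post-process shader on the already resolved image, which is why the 120 MB versus 500 MB gap quoted above works out to roughly the same four-fold jump.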

Without anti-aliasing, the game world would look much more jagged and less realistic. AA seeks to smooth out those jagged edges, and FXAA is what enabled Epic to run their Samaritan demo on a single next generation NVIDIA graphics card. Pretty impressive if you ask me, and I'm excited to see game developers roll some of the Samaritan graphical effects into their games. Knowing that Epic Games' engine can be run on a single graphics card implies that this future is that much closer. More information is available here, and if you have not already seen it, the Samaritan demo is shown in the video below.