
MojoKid writes: AMD just unveiled new details about its upcoming Carrizo APU architecture. The company claims the processor, which is still built on GlobalFoundries' 28nm 28SHP node like its predecessor, will nonetheless deliver big advances in both performance and efficiency. When it was first announced, AMD detailed support for next-generation Radeon graphics (DX12, Mantle, and Dual Graphics support), H.265 decoding, full HSA 1.0 support, and ARM TrustZone compatibility. But perhaps one of the biggest advantages of Carrizo is that the APU and Southbridge are now incorporated into the same die, not just two separate dies built into an MCM package.

This not only improves performance, but also allows the Southbridge to take advantage of the 28SHP process rather than older, more power-hungry 45nm or 65nm nodes. In addition, the Excavator cores used in Carrizo have switched from a High Performance Library (HPL) to a High Density Library (HDL) design, which shrinks the die area taken up by the processing cores by 23 percent, according to AMD. That savings lets Carrizo pack in 29 percent more transistors (3.1 billion versus 2.3 billion in Kaveri) in a die that is only marginally larger (250mm² for Carrizo versus 245mm² for Kaveri). When all is said and done, AMD is claiming a 5 percent IPC boost for Carrizo and a 40 percent overall reduction in power usage.

New submitter Dharkfiber sends an article about the Hardened Anti-Reverse Engineering System (HARES), which is an encryption tool for software that doesn't allow the code to be decrypted until the last possible moment before it's executed. The purpose is to make applications as opaque as possible to malicious hackers trying to find vulnerabilities to exploit. It's likely to find work as an anti-piracy tool as well.
To keep reverse engineering tools in the dark, HARES uses a hardware trick that’s possible with Intel and AMD chips called a Translation Lookaside Buffer (or TLB) Split. That TLB Split segregates the portion of a computer’s memory where a program stores its data from the portion where it stores its own code’s instructions. HARES keeps everything in that “instructions” portion of memory encrypted such that it can only be decrypted with a key that resides in the computer’s processor. (That means even sophisticated tricks like a “cold boot attack,” which literally freezes the data in a computer’s RAM, can’t pull the key out of memory.) When a common reverse engineering tool like IDA Pro reads the computer’s memory to find the program’s instructions, that TLB split redirects the reverse engineering tool to the section of memory that’s filled with encrypted, unreadable commands.
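The split described above can be modeled conceptually: the same virtual address resolves to different backing pages depending on whether the access is an instruction fetch (via the iTLB) or a data read (via the dTLB). This is purely an illustrative sketch — the real split happens in hardware and kernel page-table handling, and the class and method names below are invented:

```python
# Toy model of a TLB split: instruction fetches and data reads of the
# same virtual address resolve to different physical pages.
class SplitTLB:
    def __init__(self):
        # iTLB view: decrypted instructions, visible only to the CPU's fetch unit.
        self.itlb_pages = {0x1000: b"\x48\x31\xc0\xc3"}   # e.g. xor rax,rax; ret
        # dTLB view: the same page as seen by data reads -- still ciphertext.
        self.dtlb_pages = {0x1000: b"\x9f\x02\x77\x51"}   # opaque encrypted bytes

    def fetch(self, vaddr):
        """What the CPU actually executes (decrypted)."""
        return self.itlb_pages[vaddr]

    def read(self, vaddr):
        """What a debugger or disassembler reading memory sees (encrypted)."""
        return self.dtlb_pages[vaddr]

tlb = SplitTLB()
# A tool like IDA Pro issues data reads, so it only ever sees ciphertext:
assert tlb.fetch(0x1000) != tlb.read(0x1000)
```

The key point the model captures is that nothing a memory-reading tool can do through the data path recovers the plaintext instructions; only the fetch path sees them.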

MojoKid writes: In all of its iterations, NVIDIA's Maxwell architecture has proven to be a strong-performing, power-efficient GPU. The most interesting products, however, reside at the high end of the product stack. When NVIDIA launches a new high-end GPU, cards based on the company's reference design trickle out first, and board partners follow up with custom solutions packing unique cooling hardware, higher clocks, and sometimes additional features. With the GeForce GTX 970 and GTX 980, NVIDIA's board partners were ready with custom solutions very quickly. Three such cards, from enthusiast favorites EVGA, MSI, and Zotac, represent optimization at the high end of Maxwell. Two are GTX 980s, the MSI GTX 980 Gaming 4G and the Zotac GeForce GTX 980 AMP! Omega; the third is a GTX 970 from EVGA, the GeForce GTX 970 FTW with ACX 2.0. Crazy long names aside, all of these cards are custom solutions that ship overclocked from the factory. In testing, NVIDIA's GeForce GTX 980 was the fastest single GPU available, and the factory-overclocked MSI and Zotac cards cemented that fact. Overall, thanks to a higher default GPU clock, the MSI GTX 980 Gaming 4G was the best-performing card. EVGA's GeForce GTX 970 FTW was also relatively strong despite its alleged memory bug; as expected, it couldn't quite catch the higher-end GeForce GTX 980s, but it occasionally outpaced AMD's top-end Radeon R9 290X.

Bram Stolk writes: So, I am running GNU/Linux on a modern Haswell CPU, with an old Radeon HD 5xxx from 2009. I'm pretty happy with the open source Gallium driver for 3D acceleration. But now I want to do some GPGPU development using OpenCL on this box, and the old GPU will no longer cut it. What do my fellow technophiles from Slashdot recommend as a replacement GPU? Go NVIDIA, go AMD, or just use the integrated Intel GPU instead? Bonus points for open source solutions. Performance isn't really important, but OpenCL driver maturity is.

GhostX9 writes: SLR Lounge just posted a first look at the Samsung NX1 28.1 MP interchangeable lens camera. They compare it to Canon and Sony full-frame sensors. Spoiler: the Samsung sensor seems to beat the Sony A7R sensor up to ISO 3200. They attribute this to Samsung's chip foundry: while Sony is using a 180nm process (Intel Pentium III era) and Canon is still using a 500nm process (AMD DX4 era), Samsung has gone with 65nm and copper interconnects (Intel Core 2 Duo "Conroe" era). Furthermore, Samsung's premium lenses appear to be as sharp as or sharper than Canon's L line and Sony's Zeiss line in the center, although the Canon 24-70/2.8L II is sharper at the edge of the frame.

An anonymous reader writes: Tests of the AMD Catalyst driver with the latest AAA Linux games and engines have shown what poor shape the proprietary Radeon driver is currently in for Linux gamers. Phoronix, which traditionally benchmarks with open-source OpenGL games and other long-standing tests, has recently taken special interest in adapting newer Steam-based titles for automated benchmarking. With last month's Linux releases of Metro Last Light Redux and Metro 2033 Redux, NVIDIA's driver did great while AMD Catalyst was miserable: Catalyst 14.12 delivered extremely low performance and major bottlenecks, with the Radeon R9 290 and other GPUs running slower than NVIDIA's midrange hardware. In Unreal Engine 4 Linux tests, the NVIDIA driver was again flawless, but the same couldn't be said for AMD: Catalyst 14.12 wouldn't even run the Unreal Engine 4 demos on Linux with AMD's latest-generation hardware, only with the HD 6000 series. Tests last month also showed AMD's performance to be crippled in NVIDIA vs. AMD Civilization: Beyond Earth Linux benchmarks with the newest drivers.

DeviceGuru writes: CompuLab has unveiled a tiny "Fitlet" mini-PC that runs Linux or Windows on a dual- or quad-core 64-bit AMD x86 SoC (with integrated Radeon R3 or R2 GPU) clocked at up to 1.6GHz, offering extensive I/O along with modular internal expansion options. The rugged, reconfigurable 4.25 x 3.25 x 0.95 in. system will also form the basis of a pre-configured "MintBox Mini" model, available in Q2 in partnership with the Linux Mint project. To put things in perspective, CompuLab says the Fitlet is one-third the size of the Celeron-based Intel NUC.

itwbennett writes: In the fierce battle between CPU and GPU vendors, it's not just about speeds and feeds but also about process shrinks. Both Nvidia and AMD have had their moves to 16nm and 20nm designs, respectively, hampered by the limited capacity of both nodes at manufacturer TSMC, according to the enthusiast site WCCFTech.com. While AMD's CPUs are produced by GlobalFoundries, its GPUs are made at TSMC, as are Nvidia's chips. The problem is that TSMC has only so much capacity, and Apple and Samsung have sucked it all up. The only other manufacturer with 14nm capacity is Intel, and there's no way Intel will sell its rivals capacity.

Phoronix has taken an in-depth look at progress on AMD's open source Radeon driver, and declares 2014 to have been a good year. There are several pages with detailed benchmarks, but the upshot is overwhelmingly positive:
Across the board there are huge performance improvements to be found in the open-source AMD Linux graphics driver when comparing its state at the end of 2013 to the current code at the end of this year. The performance improvements and new features presented (among them OpenMAX / AMD video encode, UVD for older AMD GPUs, various new OpenGL extensions, continued work on OpenCL, power management improvements, and the start of open-source HSA) have been nothing short of incredible. Most of the new work benefits the Radeon HD 7000 series and newer (GCN) GPUs the most, but these tests showed the Radeon HD 6000 series still improving too. ... Coming up before the end of the year will be a fresh comparison of these open-source Radeon driver results against the newest proprietary AMD Catalyst Linux graphics driver.

itwbennett writes: According to a report in the Korea IT Times, Samsung Electronics has begun production of the A9 processor, the next-generation ARM-based CPU for the iPhone and iPad. The Korea IT Times says Samsung has production lines capable of FinFET production (a cutting-edge semiconductor design that many other manufacturers, including AMD, IBM and TSMC, are adopting) in Austin, Texas and Giheung, Korea, but production is only taking place in Austin. Samsung invested $3.9 billion in that plant specifically to make chips for Apple. So now Apple can say its CPU is "Made in America."

MojoKid writes: AMD just dropped its new Catalyst Omega driver package, the culmination of six months of development work. AMD Catalyst Omega reportedly brings over 20 new features and a wealth of bug fixes to the table, along with performance increases on both AMD Radeon GPUs and integrated AMD APUs. One of the new features is Virtual Super Resolution, or VSR. VSR is "game- and engine-agnostic" and renders content at up to 4K resolution, then displays it at a resolution that your monitor actually supports. AMD says VSR allows for increased image quality, similar in concept to Super Sampling Anti-Aliasing (SSAA); another perk is the ability to see more content on the screen at once. To take advantage of VSR, you'll need a Radeon R9 295X2, R9 290X, R9 290, or R9 285 discrete graphics card; both single- and multi-GPU configurations are currently supported. VSR is essentially AMD's answer to NVIDIA's DSR, or Dynamic Super Resolution. In addition, AMD is claiming performance enhancements in a number of top titles with these new drivers: reportedly anywhere from a 6 percent improvement in FIFA Online to a 29 percent increase in Batman: Arkham Origins when using an AMD 7000-series APU, for example. On discrete GPUs, an AMD Radeon R9 290X's gains ranged from 8 percent in GRID 2 to roughly 16 percent in BioShock Infinite.
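Conceptually, VSR (like SSAA and NVIDIA's DSR) renders the scene at a higher internal resolution, then filters it down to the display's native resolution, so each output pixel averages several rendered samples. A rough sketch of that downscaling step using a simple box filter (the actual filter AMD uses is not specified in the article):

```python
import numpy as np

def box_downscale(frame, factor):
    """Average factor x factor blocks of pixels -- the simplest way to
    resolve a supersampled frame down to native resolution."""
    h, w, c = frame.shape
    assert h % factor == 0 and w % factor == 0
    return frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Internal render target at 2x2 the native resolution
# (e.g. 3840x2160 downscaled to 1920x1080).
hi_res = np.random.rand(8, 8, 3)    # stand-in for a supersampled RGB frame
native = box_downscale(hi_res, 2)   # each output pixel averages 4 samples
print(native.shape)                 # (4, 4, 3)
```

Averaging multiple samples per output pixel is what suppresses aliasing on edges, which is why VSR's quality is described as similar in concept to SSAA.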

MojoKid writes: To say that BioWare has something to prove with Dragon Age: Inquisition is an understatement. The original Dragon Age: Origins was a colossal, sprawling, unabashed throwback to classic RPGs. Conversely, Dragon Age: Inquisition doesn't just tell an epic story; it evolves in a way that leaves you, as the Inquisitor, leading an army. Creating that sense of scope required a fundamentally different approach to gameplay. Neither Dragon Age: Origins nor Dragon Age 2 had a true "open" world in the sense that Skyrim is an open world. Instead, players clicked on a location and auto-traveled across the map from Point A to Point B. Thus, a village might be contained within a single map, while a major city might have 10-12 different locations to explore. Inquisition keeps the concept of maps as opposed to a completely open world, but it blows those maps up to gargantuan sizes. Instead of simply consisting of a single town or a bit of wilderness, the new maps in Dragon Age: Inquisition are chock-full of areas to explore, side quests, crafting materials to gather, and caves, dungeons, mountain peaks, flowing rivers, and roving bands of monsters. And Inquisition doesn't forget the small stuff — the companion quests, the fleshed-out NPCs, or the rich storytelling — it just seeks to put those events in a much larger context across a broad geographical area. Dragon Age: Inquisition is one of the best RPGs to come along in a long time; rarely has a game straddled both the large-scale, 10,000-foot master plan and the small-scale, intimate adventure and hit both so well. In terms of graphics performance, you might be surprised to learn that a Radeon R9 290X has better frame delivery than a GeForce GTX 980, despite the similarity in overall frame rate: the worst frame time for a Radeon R9 290X is just 38.5ms (26 FPS), while the GeForce GTX 980's is 46.7ms (21 FPS).
AMD takes home an overall win in Dragon Age: Inquisition currently, though Mantle support isn't really ready for prime time. In related news, hypnosec sends word that Chinese hackers claim to have cracked Denuvo DRM — the latest anti-piracy measure meant to protect PC games, and the one used by Dragon Age: Inquisition. First deployed in FIFA 15 for PC, Denuvo kept FIFA 15 uncracked for two months and Dragon Age: Inquisition for one month; the hackers, however, claim they broke the DRM after fifteen days of work, and have uploaded a video as proof. A couple of things need to be pointed out here. First, the team has merely cracked the DRM, which doesn't necessarily mean working cracks are in the wild. Also, the crack only works on Windows 7 64-bit systems, not Windows 8 or Windows 7 32-bit, for now. The team is currently working to collect hardware data on processor identification codes.
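The frame-time figures quoted above convert to an instantaneous frame rate as 1000 / frame_time_ms; a quick check of the article's numbers:

```python
def worst_case_fps(frame_time_ms):
    """Instantaneous FPS implied by a single frame's render time."""
    return 1000.0 / frame_time_ms

# Worst frame times cited in the article:
print(round(worst_case_fps(38.5)))  # Radeon R9 290X -> 26 FPS
print(round(worst_case_fps(46.7)))  # GeForce GTX 980 -> 21 FPS
```

This is why "better frame delivery" can matter more than average FPS: the metric captures the worst stutter a player actually perceives, not the average over a benchmark run.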

jones_supa writes: We are all aware of the various chirping and whining sounds electronics can produce, and modern graphics cards often suffer from these problems in the form of coil whine. But how widespread is it really? Hardware Canucks put 50 new graphics cards side by side to compare them solely from the perspective of subjective acoustic disturbance. NVIDIA's reference platforms tended to be quite well behaved, as did their board partners' custom designs. The same can't be said of AMD: the reference R9 290X and R9 290 should be avoided if you're at all concerned about squealing or any other odd noise a GPU can make, though custom Radeon-branded SKUs should usually be a safe choice. While the amount and intensity of coil whine largely boils down to luck of the draw, at least most board partners have quite friendly return policies concerning it.

MojoKid (1002251) writes: Life is hard when you're a AAA publisher. Last month, Ubisoft blamed weak console hardware for the trouble it had bringing Assassin's Creed Unity up to speed, claiming that it could've hit 100 FPS but for weak console CPUs. Now, in the wake of the game's disastrous launch, the company has changed tactics — suddenly, all of this is AMD's fault. An official company forum post currently reads: "We are aware that the graphics performance of Assassin's Creed Unity on PC may be adversely affected by certain AMD CPU and GPU configurations. This should not affect the vast majority of PC players, but rest assured that AMD and Ubisoft are continuing to work together closely to resolve the issue, and will provide more information as soon as it is available." There are multiple problems with this assessment. First, there's no equivalent Nvidia-centric post on the main forum, and no mention of the fact that if you own an Nvidia card of any vintage other than a GTX 970 or 980, you're going to see less-than-ideal performance. According to sources, the problem with Assassin's Creed Unity is that the game is issuing tens of thousands of draw calls — up to 50,000 and beyond in some cases. This is precisely the kind of workload that Mantle and DirectX 12 are designed to handle, but DirectX 11, even 11.2, isn't capable of efficiently processing that many calls at once. It's a fundamental limit of the API, and it kicks in harshly in ways that adding more CPU cores simply can't help with.
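The draw-call bottleneck can be illustrated with a toy cost model: if every API submission costs a fixed slice of CPU time, per-frame CPU cost scales linearly with the number of draws, so a lower-overhead API (or batching/instancing) raises the ceiling. The per-call overhead numbers below are invented purely for illustration:

```python
def cpu_ms_per_frame(draw_calls, overhead_us_per_call):
    """Toy model: CPU submission cost = draw calls x fixed per-call overhead."""
    return draw_calls * overhead_us_per_call / 1000.0

DRAWS = 50_000  # figure cited for Assassin's Creed Unity
# Hypothetical per-call overheads (illustrative only, not measured values):
dx11_ms = cpu_ms_per_frame(DRAWS, 2.0)   # high-overhead driver path
thin_ms = cpu_ms_per_frame(DRAWS, 0.25)  # thin API (Mantle/DX12-style)

print(f"High-overhead API: {dx11_ms:.0f} ms/frame")  # 100 ms -> ~10 FPS, CPU-bound
print(f"Thin API:          {thin_ms:.1f} ms/frame")  # 12.5 ms -> fits a 60 FPS budget
```

Because submission in DirectX 11 is largely serialized in the driver, this cost also doesn't parallelize well, which is why adding CPU cores doesn't rescue a draw-call-bound title.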

MojoKid writes: It has been over six years since Intel first unveiled its Atom CPUs and detailed its plans for new, ultra-mobile devices. The company's efforts to break into smartphone and tablet sales while turning a profit have largely come to naught. Nonetheless, company CEO Brian Krzanich remains optimistic. Speaking to reporters recently, Krzanich opined that the company's new manufacturing partners, Rockchip and Spreadtrum, would convert entirely to Intel architectures within the next few years. Krzanich argues that with Qualcomm and MediaTek dominating the market, it's going to be tougher and tougher for smaller players like Rockchip and Spreadtrum to compete in the same spaces. There's truth to that argument, to be sure, but Intel's ability to offer a competitive alternative is unproven. According to a report from JP Morgan, Intel's cost per wafer is currently estimated to be equivalent to TSMC's average selling price per wafer — meaning TSMC is profitably selling wafers at what would be well below Intel's break-even. Today, Intel is unquestionably capable of building tablet processors that offer a good overall experience, but what defines a "good" experience is measured by its similarity to ARM. It's hard to imagine that Intel wants to build market share as an invisible partner, but in order to fundamentally change the way people think about Intel hardware in tablets and smartphones, it needs to go beyond simply being "as good" and break into territory that leaves people asking: "Is the ARM core just as good as the Intel chip?"

MojoKid writes: Dell's Alienware division recently released a radical redesign of its Area-51 gaming desktop. Its 45-degree angled front and rear faceplates direct the controls and I/O up toward the user, draw cool air in, and exhaust warm air up and away from the rear of the chassis; the triangular machine grabs your attention right away. In testing and benchmarks, the Area-51's new design enables top-end performance with thermal and acoustic profiles that are fairly impressive versus most high-end gaming PCs. The chassis design is also clean, modular, and easily serviceable. Base system pricing isn't too bad, starting at $1,699, with the ability to dial things way up to an 8-core Haswell-E chip and triple-GPU graphics from NVIDIA or AMD. The test system reviewed at HotHardware was powered by a six-core Core i7-5930K and three GeForce GTX 980 cards in SLI. As expected, it ripped through the benchmarks, though the price as configured and tested is significantly higher.

An anonymous reader writes: AMD recently presented plans to unify its open-source and Catalyst Linux drivers at the open source XDC2014 conference in France. NVIDIA's rebuttal presentation focused on supporting Mir and Wayland on Linux; these next-generation display stacks are competing to succeed the X.Org Server. NVIDIA is partially refactoring its Linux graphics driver to support EGL outside of X11, proposing new EGL extensions for better driver interoperability with Wayland/Mir, and adding support for the KMS APIs in its driver. NVIDIA's binary driver will support the KMS APIs/ioctls but will use its own implementation of kernel mode-setting. The EGL improvements are said to land in the closed-source driver this autumn, while the other changes probably won't be seen until next year.

MojoKid (1002251) writes: A new interview with Assassin's Creed Unity senior producer Vincent Pontbriand has some gamers seeing red and others crying "told you so," after the developer revealed that the game's 900p resolution and 30 fps target on consoles is a result of weak CPU performance rather than GPU compute. "Technically we're CPU-bound," Pontbriand said. "The GPUs are really powerful, obviously the graphics look pretty good, but it's the CPU that has to process the AI, the number of NPCs we have on screen, all these systems running in parallel. We were quickly bottlenecked by that and it was a bit frustrating, because we thought that this was going to be a tenfold improvement over everything AI-wise..." This has been read by many as a rather damning referendum on the capabilities of the AMD APU under the hood of Sony's and Microsoft's new consoles. To some extent, that's justified: the Jaguar CPU inside both the Sony PS4 and Xbox One is a modest chip with a relatively low clock speed. Both consoles may offer eight CPU threads on paper, but games can't access all that headroom. One thread is reserved for the OS, and a few more cores are used for processing the 3D pipeline. Between the two, Ubisoft may have had only 4-5 cores for AI and other calculations — scarcely more than last gen, and the Xbox 360 and PS3 CPUs were clocked much faster than the 1.6 / 1.73GHz frequencies of their replacements.

An anonymous reader writes: AMD is moving forward with its plans to develop a new open-source Linux driver model for its Radeon and FirePro graphics processors. The unified Linux driver model is proceeding, albeit slightly differently from what was planned earlier this year. AMD is now developing a new "AMDGPU" kernel driver to power both the open- and closed-source graphics components, and the new driver model will apply only to future generations of AMD GPUs. Catalyst is not being open-sourced, but will become a self-contained user-space blob, while the DRM/libdrm/DDX components will be open-source and shared. This model is more open-source friendly, places greater emphasis on the mainline kernel driver, and should help Catalyst support Mir and Wayland.
