<p>PC Perspective</p>
<h4><a href="https://www.pcper.com/news/General-Tech/%C2%A0AMD-FireRender-Technology-Now-ProRender-Part-GPUOpen">AMD FireRender Technology Now ProRender, Part of GPUOpen</a></h4>
<p>At their Capsaicin SIGGRAPH event tonight, AMD announced that the rendering engine previously known as FireRender is officially launching as AMD&nbsp;Radeon ProRender, and that it is becoming open source as part of AMD&#39;s&nbsp;<a href="http://gpuopen.com/">GPUOpen initiative</a>.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/%C2%A0AMD-FireRender-Technology-Now-ProRender-Part-GPUOpen" class="inline-image-link" title="View: capsaicin.PNG"><img src="/files/imagecache/article_max_width/news/2016-07-25/capsaicin.PNG" alt="capsaicin.PNG" title="capsaicin.PNG" class="pcper-inline" width="602" height="270" /></a></div></p>
<p>From AMD&#39;s press release:</p>
<blockquote><p><em>AMD today announced its powerful physically-based rendering engine is becoming open source, giving developers access to the source code.</em></p>
<p><em>As part of GPUOpen, Radeon ProRender (formerly previewed as AMD FireRender) enables creators to bring ideas to life through high-performance applications and workflows enhanced by photorealistic rendering.</em></p>
<p><em>GPUOpen is an AMD initiative designed to assist developers in creating ground-breaking games, professional graphics applications and GPU computing applications with much greater performance and lifelike experiences, at no cost and using open development tools and software.</em></p>
<p><em>Unlike other renderers, Radeon ProRender can simultaneously use and balance the compute capabilities of multiple GPUs and CPUs &ndash; on the same system, at the same time &ndash; and deliver state-of-the-art GPU acceleration to produce rapid, accurate results.</em></p>
<p><em>Radeon ProRender plugins are available today for many popular 3D content creation applications, including Autodesk&reg; 3ds Max&reg;, SOLIDWORKS by Dassault Syst&egrave;mes and Rhino&reg;, with Autodesk&reg; Maya&reg; coming soon. Radeon ProRender works across Windows&reg;, OS X and Linux&reg;, and supports AMD GPUs, CPUs and APUs as well as those of other vendors.</em></p>
</blockquote>
<p><a href="https://www.pcper.com/news/General-Tech/%C2%A0AMD-FireRender-Technology-Now-ProRender-Part-GPUOpen" target="_blank">read more</a></p>
<p><em>Sebastian Peak &ndash; Tue, 26 Jul 2016 01:48:27 +0000</em></p>
<h4><a href="https://www.pcper.com/news/Graphics-Cards/AMD-Announces-Radeon-Pro-WX-Series-Graphics-Cards">AMD Announces Radeon Pro WX Series Graphics Cards</a></h4>
<p>AMD has announced new Polaris-based professional graphics cards at Siggraph 2016 this evening, with the&nbsp;Radeon Pro WX 4100, WX 5100, and WX 7100 GPUs.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/AMD-Announces-Radeon-Pro-WX-Series-Graphics-Cards" class="inline-image-link" title="View: Radeon Pro WX 7100.jpg"><img src="/files/imagecache/article_max_width/news/2016-07-25/Radeon%20Pro%20WX%207100.jpg" alt="Radeon Pro WX 7100.jpg" title="Radeon Pro WX 7100.jpg" class="pcper-inline" width="602" height="401" /></a></div></p>
<p class="rtecenter"><em>The AMD&nbsp;Radeon Pro WX 7100 GPU (Image credit: AMD)</em></p>
<p>From AMD&#39;s official press release:</p>
<blockquote><p><em>AMD today unveils powerful new solutions to address modern content creation and engineering: the new Radeon Pro WX Series of professional graphics cards, which harness the award-winning Polaris architecture and are designed to deliver exceptional capabilities for the immersive computing era.</em></p>
<p><em>Radeon Pro solutions and the new Radeon Pro WX Series of professional graphics cards represent a fundamentally different approach for professionals rooted in a commitment to open, non-proprietary software and performant, feature-rich hardware that empowers people to create the &ldquo;art of the impossible&rdquo;.</em></p>
<p><em>The new Radeon Pro WX series graphics cards deliver on the promise of this new era of creation, are optimized for open source software, and are designed for creative professionals and those pushing the boundaries of science, technology and engineering.</em></p>
</blockquote>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/AMD-Announces-Radeon-Pro-WX-Series-Graphics-Cards" class="inline-image-link" title="View: Radeon Pro WX 5100.jpg"><img src="/files/imagecache/article_max_width/news/2016-07-25/Radeon%20Pro%20WX%205100.jpg" alt="Radeon Pro WX 5100.jpg" title="Radeon Pro WX 5100.jpg" class="pcper-inline" width="602" height="430" /></a></div></p>
<p class="rtecenter"><em>The AMD&nbsp;Radeon Pro WX 5100 GPU (Image credit: AMD)</em></p>
<blockquote><p><em>Radeon Pro WX Series professional graphics cards are designed to address specific demands of the modern content creation era:</em></p>
<ul>
<li><em>Radeon Pro WX 7100 GPU is capable of handling demanding design engineering and media and entertainment workflows and is AMD&rsquo;s most affordable workstation solution for professional VR content creation.</em></li>
<li><em>Radeon Pro WX 5100 GPU is the ideal solution for product development, powered by the impending game-engine revolution in design visualization.</em></li>
<li><em>Radeon Pro WX 4100 GPU provides great performance in a half-height design, finally bringing mid-range application performance demanded by CAD professionals to small form factor (SFF) workstations.</em></li>
</ul>
</blockquote>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/AMD-Announces-Radeon-Pro-WX-Series-Graphics-Cards" class="inline-image-link" title="View: Radeon Pro WX 4100.jpg"><img src="/files/imagecache/article_max_width/news/2016-07-25/Radeon%20Pro%20WX%204100.jpg" alt="Radeon Pro WX 4100.jpg" title="Radeon Pro WX 4100.jpg" class="pcper-inline" width="602" height="376" /></a></div></p>
<p class="rtecenter"><em>The AMD&nbsp;Radeon Pro WX 4100 GPU (Image credit: AMD)</em></p>
<p>A breakdown of the known specifications for these new GPUs <a href="http://www.anandtech.com/show/10521/amd-announces-radeon-pro-wx-series-wx-4100-wx-5100-wx-7100-bring-polaris-to-pros">was provided by AnandTech</a> in their report on the WX Series:</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/AMD-Announces-Radeon-Pro-WX-Series-Graphics-Cards" class="inline-image-link" title="View: WX_Series_Comparison.PNG"><img src="/files/imagecache/article_max_width/news/2016-07-25/WX_Series_Comparison.PNG" alt="WX_Series_Comparison.PNG" title="WX_Series_Comparison.PNG" class="pcper-inline" width="598" height="510" /></a></div></p>
<p class="rtecenter"><a href="http://www.anandtech.com/show/10521/amd-announces-radeon-pro-wx-series-wx-4100-wx-5100-wx-7100-bring-polaris-to-pros">Chart credit: AnandTech</a></p>
<p><a href="https://www.pcper.com/news/Graphics-Cards/AMD-Announces-Radeon-Pro-WX-Series-Graphics-Cards" target="_blank">read more</a></p>
<p><em>Sebastian Peak &ndash; Tue, 26 Jul 2016 01:30:32 +0000</em></p>
<h4><a href="https://www.pcper.com/news/Graphics-Cards/SIGGRAPH-2016-NVIDIA-Announces-Pascal-Quadro-GPUs-Quadro-P5000-and-Quadro-P6000">SIGGRAPH 2016 &ndash; NVIDIA Announces Pascal Quadro GPUs: Quadro P5000 and Quadro P6000</a></h4>
<p>SIGGRAPH is the big, professional graphics event of the year, bringing together tens of thousands of attendees. They include engineers from Adobe, AMD, Blender, Disney (including ILM, Pixar, etc.), NVIDIA, The Khronos Group, and many, many others. Not only are new products announced, but many technologies are explained in detail, down to the specific algorithms that are used, so colleagues can advance their own research and share in kind.</p>
<p>But new products will indeed be announced.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/SIGGRAPH-2016-NVIDIA-Announces-Pascal-Quadro-GPUs-Quadro-P5000-and-Quadro-P6000" class="inline-image-link" title="View: nvidia-2016-Quadro_P6000_7440.jpg"><img src="/files/imagecache/article_max_width/news/2016-07-25/nvidia-2016-Quadro_P6000_7440.jpg" alt="nvidia-2016-Quadro_P6000_7440.jpg" title="nvidia-2016-Quadro_P6000_7440.jpg" class="pcper-inline" width="602" height="401" /></a></div></p>
<p class="rtecenter"><em>The NVIDIA Quadro P6000</em></p>
<p>NVIDIA, having just launched a few Pascal GPUs to other markets, decided to announce updates to their Quadro line at the event. Two cards have been added, the Quadro P5000 and the Quadro P6000, both at the top end of the product stack. Interestingly, both use GDDR5X memory, meaning that neither will be based on the GP100 design, which is built around HBM2 memory.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/SIGGRAPH-2016-NVIDIA-Announces-Pascal-Quadro-GPUs-Quadro-P5000-and-Quadro-P6000" class="inline-image-link" title="View: nvidia-2016-Quadro_P5000_7460.jpg"><img src="/files/imagecache/article_max_width/news/2016-07-25/nvidia-2016-Quadro_P5000_7460.jpg" alt="nvidia-2016-Quadro_P5000_7460.jpg" title="nvidia-2016-Quadro_P5000_7460.jpg" class="pcper-inline" width="602" height="401" /></a></div></p>
<p class="rtecenter"><em>The NVIDIA Quadro P5000</em></p>
<p>The lower-end of the two, the Quadro P5000, should look somewhat familiar to our readers. Exact clocks are not specified, but the chip has 2560 CUDA cores. This is identical to the GTX 1080, but with twice the memory: 16GB of GDDR5X.</p>
<p>Above it sits the Quadro P6000. This chip has <i>3840</i> CUDA cores, paired with 24GB of GDDR5X. We have not seen a GPU with exactly these specifications before. It has the same number of FP32 shaders as a fully unlocked GP100 die, but it doesn&#39;t have HBM2 memory. On the other hand, the new Titan X uses GP102, combining 3584 CUDA cores with GDDR5X memory, although only 12GB of it. This means that the Quadro P6000 has 256 more (single-precision) shader units than the Titan X, but otherwise very similar specifications.</p>
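<p>Since NVIDIA has not published clock speeds for either card, peak throughput can only be sketched. The usual back-of-envelope formula is cores &times; 2 (one fused multiply-add counts as two operations) &times; clock; the 1.4 GHz figure below is purely a placeholder, not an announced specification:</p>

```python
# FP32 throughput sketch from the shader counts quoted above.
# The clock value is a placeholder: NVIDIA had not published Quadro
# P5000/P6000 clocks at announcement time.

def fp32_tflops(cuda_cores, boost_clock_ghz):
    """Peak FP32 TFLOPS, counting a fused multiply-add as two operations."""
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

P6000_CORES = 3840   # same FP32 count as a fully unlocked GP100
TITAN_X_CORES = 3584  # GP102-based Titan X

# The P6000's edge over the Titan X is purely the extra shader units:
extra_cores = P6000_CORES - TITAN_X_CORES

example = fp32_tflops(P6000_CORES, 1.4)  # hypothetical 1.4 GHz boost
```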
<p>Both graphics cards have four DisplayPort 1.4 connectors, as well as a single DVI output. These five connectors can be used to drive up to four 4K 120Hz monitors, or four 5K 60Hz ones. It would be nice if all five connections could be used at once, but what can you do.</p>
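<p>Those display limits line up with DisplayPort 1.4 link capacity. As a rough sanity check (the function names here are mine, blanking overhead is ignored, and 8-bit RGB is assumed), HBR3 signalling runs at 32.4 Gbit/s raw, and 8b/10b encoding leaves 25.92 Gbit/s for pixel data per link:</p>

```python
# Approximate check of per-display pixel data rate against a single
# DisplayPort 1.4 HBR3 link. Blanking intervals are ignored, so treat
# the numbers as ballpark figures only.

HBR3_PAYLOAD_GBPS = 32.4 * 8 / 10  # 8b/10b encoding -> 25.92 Gbit/s usable

def pixel_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Uncompressed pixel data rate for one display, in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

def fits_on_one_link(width, height, refresh_hz):
    return pixel_rate_gbps(width, height, refresh_hz) <= HBR3_PAYLOAD_GBPS
```

<p>4K at 120Hz comes to roughly 23.9 Gbit/s and 5K at 60Hz to about 21.2 Gbit/s, both of which squeeze under the link budget, while 5K at 120Hz would not; this matches the four-display configurations NVIDIA quotes.</p>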
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/SIGGRAPH-2016-NVIDIA-Announces-Pascal-Quadro-GPUs-Quadro-P5000-and-Quadro-P6000" class="inline-image-link" title="View: nvidia-2016-irayvr.png"><img src="/files/imagecache/article_max_width/news/2016-07-25/nvidia-2016-irayvr.png" alt="nvidia-2016-irayvr.png" title="nvidia-2016-irayvr.png" class="pcper-inline" width="602" height="350" /></a></div></p>
<p>Pascal has other benefits for professional users, too. For instance, Simultaneous Multi-Projection (SMP) is used in VR applications to essentially double the GPU&#39;s geometry processing ability. NVIDIA will be pushing professional VR at SIGGRAPH this year, also launching Iray VR. This uses light fields, rendered on devices like the DGX-1, with its eight GP100 chips connected by NVLink, to provide accurately lit environments. This is particularly useful for architectural visualization.</p>
<p>No price has been given for either of these cards, but they will launch in October of this year.</p>
<p><a href="https://www.pcper.com/news/Graphics-Cards/SIGGRAPH-2016-NVIDIA-Announces-Pascal-Quadro-GPUs-Quadro-P5000-and-Quadro-P6000" target="_blank">read more</a></p>
<p><em>Scott Michaud &ndash; Mon, 25 Jul 2016 20:48:04 +0000</em></p>
<h4><a href="https://www.pcper.com/news/Graphics-Cards/Qualcomm-Introduces-Adreno-5xx-Architecture-Snapdragon-820">Qualcomm Introduces Adreno 5xx Architecture for Snapdragon 820</a></h4>
<p>Despite the success of the Snapdragon 805 and even the 808, <a href="http://www.pcper.com/reviews/Processors/Qualcomm-Snapdragon-810-Performance-Preview">Qualcomm&rsquo;s flagship Snapdragon 810</a> SoC had a tumultuous lifespan.&nbsp; Rumors and stories about the chip and an inability to run in phone form factors <a href="http://arstechnica.com/gadgets/2015/04/in-depth-with-the-snapdragon-810s-heat-problems/">without overheating</a> and/or draining battery life were rampant, despite the company&rsquo;s insistence that the problem was fixed with a very quick second revision of the part. Very few devices used the 810; instead, we saw more flagship smartphones use the slightly cut-back SD 808 or the SD 805.</p>
<p>Today at SIGGRAPH, Qualcomm begins the reveal of its new flagship SoC, Snapdragon 820. As the event coinciding with the launch is a graphics-specific show, Qualcomm is focusing on a high-level overview of the graphics portion of the Snapdragon 820: the updated Adreno 5xx architecture and associated designs, plus a new camera image signal processor (ISP) aiming to improve the quality of photos and recording on our mobile devices.</p>
<p><div class = "center-article-image"><a href="/news/Graphics-Cards/Qualcomm-Introduces-Adreno-5xx-Architecture-Snapdragon-820" class="inline-image-link" title="View: sd820-1.jpg"><img src="/files/imagecache/article_max_width/news/2015-08-12/sd820-1.jpg" alt="sd820-1.jpg" title="sd820-1.jpg" class="pcper-inline" width="602" height="338" /></a></div></p>
<p>A modern SoC from Qualcomm features many different processors working in tandem to impact the user experience on the device. While the only details we are getting today focus around the Adreno 530 GPU and Spectra ISP, other segments like connectivity (wireless), video processing and digital signal processing (DSP) are important parts of the computing story. And we are well aware that Qualcomm is readying its own 64-bit processor architecture for the Kryo CPU rather than implementing the off-the-shelf cores from ARM used in the 810.</p>
<p>We also know that Qualcomm is targeting a &ldquo;leading edge&rdquo; FinFET process technology for SD 820 and, though we haven&rsquo;t been able to confirm anything, it <a href="http://www.pcper.com/news/General-Tech/Samsung-may-be-fabbing-Snapdragon-820">looks very likely that this chip will be built on the Samsung 14nm line</a> that also built the Exynos 7420.</p>
<p>But over half of the processing on the upcoming Snapdragon 820 will focus on visual processing; from graphics to gaming to UI animations to image capture and video output, this chip&rsquo;s die will be dominated by high-performance visuals.</p>
<p>Qualcomm&rsquo;s list of target goals for SD 820 visuals reads as you would expect: wanting perfection in every area. Wouldn&rsquo;t we all love a phone or tablet that takes perfect photos each time, always focusing on the right things (or everything) with exceptional low-light performance? Though a lesser-known problem for consumers, having accurate color reproduction from capture, through processing, and to the display would be a big advantage. And of course, we all want graphics performance that impresses and a user interface that is smooth and reliable while enabling new experiences that we haven&rsquo;t even thought of in the mobile form factor. Qualcomm thinks that Snapdragon 820 will be able to deliver on all of that.</p>
<p><a href="http://www.pcper.com/news/Graphics-Cards/Qualcomm-Introduces-Adreno-5xx-Architecture-Snapdragon-820">Continue reading about the new Adreno 5xx architecture!!</a></p>
<p><a href="https://www.pcper.com/news/Graphics-Cards/Qualcomm-Introduces-Adreno-5xx-Architecture-Snapdragon-820" target="_blank">read more</a></p>
<p><em>Ryan Shrout &ndash; Wed, 12 Aug 2015 11:30:00 +0000</em></p>
<h4><a href="https://www.pcper.com/news/Graphics-Cards/Khronos-Group-SIGGRAPH-2015">Khronos Group at SIGGRAPH 2015</a></h4>
<p>When the Khronos Group <a href="http://www.pcper.com/reviews/General-Tech/GDC-15-What-Vulkan-glNext-SPIR-V-and-OpenCL-21">announced Vulkan at GDC</a>, they mentioned that the API is coming this year, and that this date is intended to under-promise and over-deliver. Recently, <a href="https://www.reddit.com/r/vulkan/comments/3fqzev/whats_the_status_of_vulkan/">fans were hoping</a> that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan has not been released. It does hold a significant chunk of the news, however. Also, it&#39;s not like DirectX 12 is holding a commanding lead at the moment. The headers were public only for a few months, and the code samples are less than two weeks old.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/Khronos-Group-SIGGRAPH-2015" class="inline-image-link" title="View: khronos-2015-siggraph-sixapis.png"><img src="/files/imagecache/article_max_width/news/2015-08-10/khronos-2015-siggraph-sixapis.png" alt="khronos-2015-siggraph-sixapis.png" title="khronos-2015-siggraph-sixapis.png" class="pcper-inline" width="602" height="325" /></a></div></p>
<p>The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make their commitment to all of their standards clear. Vulkan is urgent, but some developers will still want the framework of OpenGL: bind what you need to the context, then issue a draw and, if you do it wrong, the driver will often clean up the mess for you anyway. The briefing was structured to make it evident that OpenGL is still on their minds, which is likely why they made sure three OpenGL logos greeted me in their slide deck as early as possible. They are also taking and closely examining feedback about who wants to use Vulkan or OpenGL, and why.</p>
<p>As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, and 10, Linux (including Steam OS), and Tizen (OS X and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal, as Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn&#39;t stopping third parties from rolling it into their devices, like Samsung and NVIDIA. Direct support of Vulkan should help cross-platform development and, more importantly, better suit the multi-core processors of those devices, whose individual threads are relatively slow. This could even be of significant use for web browsers, especially in sites with a lot of simple 2D effects. Google is also contributing support from their drawElements Quality Program (dEQP), which is a conformance test suite that they bought back in 2014. They are going to expand it to Vulkan, so that developers will have more consistency between devices -- a big win for Android.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/Khronos-Group-SIGGRAPH-2015" class="inline-image-link" title="View: google-android-opengl-es-extensions.jpg"><img src="/files/imagecache/article_max_width/news/2015-08-10/google-android-opengl-es-extensions.jpg" alt="google-android-opengl-es-extensions.jpg" title="google-android-opengl-es-extensions.jpg" class="pcper-inline" width="602" height="340" /></a></div></p>
<p>While we&#39;re not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2 and it fits here nicely. At around the time that OpenGL ES 3.1 brought Compute Shaders to the embedded platform, Google launched the <a href="http://www.pcper.com/news/General-Tech/Google-IO-2014-Android-Extension-Pack-Announced">Android Extension Pack (AEP)</a>. This absorbed OpenGL ES 3.1 and added Tessellation, Geometry Shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt like Google was trying to pull its developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced and includes each of the AEP features, plus a few more (like &ldquo;enhanced&rdquo; blending). Better yet, Google will support it directly.</p>
<p>Next up are the desktop standards, before we finish with a resurrected embedded standard.</p>
<p>OpenGL has a few new extensions added. One interesting addition is the ability to assign locations to multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc. Apparently this extension allows developers to choose the layout, as certain algorithms work better or worse for certain geometries and structures. There were probably vendor-specific extensions for a while, but now it&#39;s a ratified one. Another extension allows &ldquo;streamlined sparse textures&rdquo;, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.</p>
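<p>To make the Poisson disc layout mentioned above concrete, here is a sketch of the classic &ldquo;dart throwing&rdquo; way to generate such a pattern (this illustrates the sample distribution itself, not the GL extension&#39;s API; the function name is mine):</p>

```python
import random

def poisson_disc_samples(n, min_dist, attempts=10000, seed=1):
    """Dart throwing: accept a random candidate in the unit square only if
    it lies at least min_dist from every previously accepted point."""
    rng = random.Random(seed)
    pts = []
    for _ in range(attempts):
        if len(pts) == n:
            break
        x, y = rng.random(), rng.random()
        # Compare squared distances to avoid a sqrt per candidate.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py in pts):
            pts.append((x, y))
    return pts

samples = poisson_disc_samples(8, 0.15)
```

<p>The result is a set of points that are random yet never clumped, which is why the layout tends to hide aliasing artifacts better than a regular grid.</p>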
<p>OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that will help it be adopted. C++ headers were also released, although I cannot comment much on it. I do not know the state that OpenCL 2.0 was in before now.</p>
<p>And this is when we make our way back to Vulkan.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/Khronos-Group-SIGGRAPH-2015" class="inline-image-link" title="View: khronos-2015-siggraph-spirv.png"><img src="/files/imagecache/article_max_width/news/2015-08-10/khronos-2015-siggraph-spirv.png" alt="khronos-2015-siggraph-spirv.png" title="khronos-2015-siggraph-spirv.png" class="pcper-inline" width="602" height="343" /></a></div></p>
<p>SPIR-V, the code that runs on the GPU (or other offloading device, including the other cores of a CPU) in OpenCL and Vulkan is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that Khronos Group knows about, but those four are pretty interesting. Again, this is saying that you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the President of Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V means that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won&#39;t take long for a project to start, and might already be happening somewhere outside his Google abilities. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python to be a bit more their speed, and they seem to be happening through SPIR-V.</p>
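<p>Part of what makes those cross-compilers practical is that SPIR-V is a simple binary format: per the SPIR-V specification, every module starts with a five-word header whose first word is the magic number 0x07230203, which also reveals the byte order. A minimal sketch of reading that header (the helper name is mine):</p>

```python
import struct

SPIRV_MAGIC = 0x07230203  # first word of every SPIR-V module

def spirv_info(blob):
    """Parse the 5-word SPIR-V header, detecting endianness via the magic."""
    if len(blob) < 20:
        return None
    for fmt, endian in (("<I", "little"), (">I", "big")):
        if struct.unpack_from(fmt, blob, 0)[0] == SPIRV_MAGIC:
            version = struct.unpack_from(fmt, blob, 4)[0]
            # Version word packs major/minor as 0x00MMmm00.
            return {"endian": endian,
                    "major": (version >> 16) & 0xFF,
                    "minor": (version >> 8) & 0xFF}
    return None

# A minimal little-endian header (magic, version 1.0, generator, bound,
# schema), built purely for illustration:
header = struct.pack("<5I", SPIRV_MAGIC, 0x00010000, 0, 1, 0)
```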
<p>As mentioned, we&#39;ll end on something completely different.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/Graphics-Cards/Khronos-Group-SIGGRAPH-2015" class="inline-image-link" title="View: khronos-2015-siggraph-sc.png"><img src="/files/imagecache/article_max_width/news/2015-08-10/khronos-2015-siggraph-sc.png" alt="khronos-2015-siggraph-sc.png" title="khronos-2015-siggraph-sc.png" class="pcper-inline" width="602" height="342" /></a></div></p>
<p>For several years, the OpenGL SC has been on hiatus. This group defines standards for graphics (and soon GPU compute) in &ldquo;safety critical&rdquo; applications. For the longest time, this meant aircraft. The dozens of planes (which I assume meant dozens of models of planes) that adopted this technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned to launch in 2016, which will be based on the much more modern OpenGL ES 2 and ES 3 APIs that allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.</p>
<p>The devices that this platform intends to target are: aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya, because crashes could be much more literal.</p>
<p><a href="https://www.pcper.com/news/Graphics-Cards/Khronos-Group-SIGGRAPH-2015" target="_blank">read more</a></p>
<p><em>Scott Michaud &ndash; Mon, 10 Aug 2015 13:01:00 +0000</em></p>
<h4><a href="https://www.pcper.com/news/General-Tech/Khronos-Announces-Next-OpenGL-Releases-OpenGL-45">Khronos Announces &quot;Next&quot; OpenGL &amp; Releases OpenGL 4.5</a></h4>
<p>Let&#39;s be clear: there are <i><b>two</b></i> stories here. The first is the release of OpenGL 4.5 and the second is the announcement of the &quot;Next Generation OpenGL Initiative&quot;. <a href="https://www.khronos.org/news/press/khronos-group-announces-key-advances-in-opengl-ecosystem">They both occur in the same press release</a>, but they are two different statements.</p>
<h4>OpenGL 4.5 Released</h4>
<p>OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware, with OpenGL 4.5 drivers, will be guaranteed to support these. This includes features like direct_state_access, which allows accessing objects in a context without binding to them, and support of OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which allows easier porting of OpenGL ES 3.1 applications to OpenGL.</p>
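<p>The difference between the two models is easiest to see with a toy mock of a context (the class and method names here are mine, loosely echoing GL naming, not real entry points): in the classic model you mutate whatever object happens to be bound, while direct state access names the object explicitly and leaves binding points alone.</p>

```python
# Toy illustration of bind-to-edit vs. direct state access.

class MockContext:
    def __init__(self):
        self.buffers = {}   # object id -> contents
        self.bound = None   # the context's single buffer binding point
        self._next = 1

    def gen_buffer(self):
        bid, self._next = self._next, self._next + 1
        self.buffers[bid] = b""
        return bid

    # Classic model: bind first, then mutate whatever is bound.
    def bind_buffer(self, bid):
        self.bound = bid

    def buffer_data(self, data):
        self.buffers[self.bound] = data

    # DSA-style: name the object directly; the binding is untouched.
    def named_buffer_data(self, bid, data):
        self.buffers[bid] = data
```

<p>The practical win is that a DSA-style call cannot accidentally clobber state another piece of code left bound, which is a common source of bugs in large OpenGL codebases.</p>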
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Khronos-Announces-Next-OpenGL-Releases-OpenGL-45" class="inline-image-link" title="View: opengl_logo.jpg"><img src="/files/imagecache/article_max_width/news/2014-08-15/opengl_logo.jpg" alt="opengl_logo.jpg" title="opengl_logo.jpg" class="pcper-inline" width="220" height="97" /></a></div></p>
<p>It also adds a few new extensions as options:</p>
<p><b>ARB_pipeline_statistics_query</b> lets a developer ask the GPU what it has been doing. This could be useful for &quot;profiling&quot; an application (list completed work to identify optimization points).</p>
<p><b>ARB_sparse_buffer</b> allows developers to perform calculations on pieces of generic buffers without loading them all into memory. This is similar to <b>ARB_sparse_texture</b>... except that those are for textures. Buffers are useful for things like vertex data (and so forth).</p>
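<p>The underlying idea is page-granular commitment: storage is backed only where data is actually written. As a conceptual sketch of that behavior (this is an analogy in plain Python, not the GL API, and the page size is a hypothetical value):</p>

```python
# Conceptual sketch of sparse storage: commit backing pages only for the
# regions actually written, so a mostly-empty buffer stays cheap.

PAGE = 65536  # hypothetical commitment granularity, in bytes

class SparseBuffer:
    def __init__(self, logical_size):
        self.logical_size = logical_size
        self.pages = {}  # page index -> bytearray, committed on first write

    def write(self, offset, data):
        for i, byte in enumerate(data):
            page, off = divmod(offset + i, PAGE)
            self.pages.setdefault(page, bytearray(PAGE))[off] = byte

    def committed_bytes(self):
        """Physical memory actually backing the buffer."""
        return len(self.pages) * PAGE
```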
<p><b>ARB_transform_feedback_overflow_query</b> is apparently designed to let developers choose whether or not to draw objects based on whether the buffer has overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.</p>
<p><b>KHR_blend_equation_advanced</b> allows new blending equations between objects. If you use Photoshop, these would be &quot;multiply&quot;, &quot;screen&quot;, &quot;darken&quot;, &quot;lighten&quot;, &quot;difference&quot;, and so forth. On NVIDIA&#39;s side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.</p>
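<p>For readers who haven&#39;t run into these modes outside of Photoshop, the separable ones are just small per-channel equations on normalized [0, 1] values; the Photoshop-style definitions are sketched below (function names are mine):</p>

```python
# Photoshop-style separable blend modes, per color channel in [0, 1].

def multiply(src, dst):
    return src * dst                 # darkens: white is the identity

def screen(src, dst):
    return src + dst - src * dst     # lightens: black is the identity

def darken(src, dst):
    return min(src, dst)

def lighten(src, dst):
    return max(src, dst)

def difference(src, dst):
    return abs(src - dst)
```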
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Khronos-Announces-Next-OpenGL-Releases-OpenGL-45" class="inline-image-link" title="View: nvidia-opengl-debugger.jpg"><img src="/files/imagecache/article_max_width/news/2014-08-15/nvidia-opengl-debugger.jpg" alt="nvidia-opengl-debugger.jpg" title="nvidia-opengl-debugger.jpg" class="pcper-inline" width="602" height="376" /></a></div></p>
<p class="rtecenter"><em>Image from <a href="https://developer.nvidia.com/nsight-visual-studio-edition-videos">NVIDIA GTC Presentation</a></em></p>
<p>For developers, <a href="https://developer.nvidia.com/opengl-driver">NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers</a>. If you are not looking to create OpenGL 4.5 applications, do not get this driver. You <i>really</i> should not have any use for it, at all.</p>
<h4>Next Generation OpenGL Initiative Announced</h4>
<p>The Khronos Group has also announced &quot;a call for participation&quot; to outline a new specification for graphics and compute. They want it to allow developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and undergo &quot;rigorous conformance testing&quot;. This sounds <i>a lot</i> like the design goals of Mantle (and what we know of DirectX 12).</p>
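<p>To make those design goals concrete, here is a toy sketch, in Python and against an entirely made-up API, of the submission model being described: each thread records its own command list without any driver locking, and the application explicitly submits the finished lists to a single queue:</p>

```python
# Toy illustration (not a real graphics API) of explicit, multithreaded
# command submission: threads record command lists independently, then
# the application decides ordering at submit time.
import threading

class CommandList:
    def __init__(self):
        self.commands = []

    def draw(self, mesh):
        self.commands.append(("draw", mesh))

def record(meshes, out, index):
    cl = CommandList()
    for m in meshes:
        cl.draw(m)      # recording touches only thread-local state
    out[index] = cl

meshes = [f"mesh{i}" for i in range(8)]
chunks = [meshes[:4], meshes[4:]]
lists = [None] * len(chunks)
threads = [threading.Thread(target=record, args=(c, lists, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Explicit submission: the app, not the driver, decides the final order.
queue = [cmd for cl in lists for cmd in cl.commands]
print(len(queue))   # 8
```

<p>The point is that recording only touches thread-local state, so it scales with CPU cores; ordering is resolved once, by the application, at submission.</p>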
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Khronos-Announces-Next-OpenGL-Releases-OpenGL-45" class="inline-image-link" title="View: amd-mantle-queues.jpg"><img src="/files/imagecache/article_max_width/news/2014-08-15/amd-mantle-queues.jpg" alt="amd-mantle-queues.jpg" title="amd-mantle-queues.jpg" class="pcper-inline" width="602" height="298" /></a></div></p>
<p>And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn&#39;t matter, as long as they continue to run.</p>
<p>Add to that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel&#39;s DirectX 12 story, it is still unclear whether it will lead to enough of a performance increase to make most optimizations, such as those which increase workload or developer effort in exchange for queuing fewer GPU commands, unnecessary. We will need to wait for game developers to use it for a bit before we know.</p>
<p><a href="https://www.pcper.com/news/General-Tech/Khronos-Announces-Next-OpenGL-Releases-OpenGL-45" target="_blank">read more</a></p>
<p class="rtecenter"><em>Scott Michaud &ndash; Sat, 16 Aug 2014</em></p>
<h4><a href="https://www.pcper.com/news/General-Tech/Richard-Huddy-Discusses-FreeSync-Availability-Timeframes">Richard Huddy Discusses FreeSync Availability Timeframes</a></h4>
<p>At SIGGRAPH, Richard Huddy of AMD announced the release windows of FreeSync, their adaptive refresh rate technology, <a href="http://techreport.com/news/26919/freesync-monitors-will-sample-next-month-start-selling-next-year">to The Tech Report</a>. Compatible monitors will begin sampling &quot;as early as&quot; September. Actual products are expected to ship to consumers in early 2015. Apparently, more than one display vendor is working on support, although names and vendor-specific release windows are unannounced.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Richard-Huddy-Discusses-FreeSync-Availability-Timeframes" class="inline-image-link" title="View: amd-freesync1.jpg"><img src="/files/imagecache/article_max_width/news/2014-08-14/amd-freesync1.jpg" alt="amd-freesync1.jpg" title="amd-freesync1.jpg" class="pcper-inline" width="602" height="452" /></a></div></p>
<p>As for cost of implementation, Richard Huddy believes that the added cost should be no more than $10-20 USD (to the manufacturer). Of course, the final price to end-users cannot be derived from this &ndash; that depends on how quickly the display vendor expects to sell product, profit margins, their willingness to push new technology, competition, and so forth.</p>
<p>If you want to take full advantage of FreeSync, you will need a compatible GPU (look for &quot;gaming&quot; support in AMD&#39;s official FreeSync compatibility list). All future AMD GPUs are expected to support the technology.</p>
<p><a href="https://www.pcper.com/news/General-Tech/Richard-Huddy-Discusses-FreeSync-Availability-Timeframes" target="_blank">read more</a></p>
<p class="rtecenter"><em>Scott Michaud &ndash; Thu, 14 Aug 2014</em></p>
<h4><a href="https://www.pcper.com/news/General-Tech/Intel-and-Microsoft-Show-DirectX-12-Demo-and-Benchmark">Intel and Microsoft Show DirectX 12 Demo and Benchmark</a></h4>
<p>Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. At it, Intel had <a href="http://blogs.msdn.com/b/directx/archive/2014/08/13/directx-12-high-performance-and-high-power-savings.aspx">a DirectX 12 demo at their booth</a>. This scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths and could apparently be switched while the demo is running. Intel claims to have measured both power and frame rate.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Intel-and-Microsoft-Show-DirectX-12-Demo-and-Benchmark" class="inline-image-link" title="View: intel-dx12-LockedFPS.png"><img src="/files/imagecache/article_max_width/news/2014-08-13/intel-dx12-LockedFPS.png" alt="intel-dx12-LockedFPS.png" title="intel-dx12-LockedFPS.png" class="pcper-inline" width="550" height="343" /></a></div></p>
<p class="rtecenter"><em>Variable power to hit a desired frame rate, DX11 and DX12.</em></p>
<p>The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, this would be the Intel <a href="http://ark.intel.com/products/76308/Intel-Core-i5-4300U-Processor-3M-Cache-up-to-2_90-GHz">Core i5-4300U</a> with two cores, four threads, 1.9 GHz base clock, up to 2.9 GHz turbo clock, 3MB of cache, and (of course) based on the Haswell architecture.</p>
<p>While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.</p>
<p>Intel&#39;s demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel&#39;s demo goes from 19 FPS all the way up to a playable 33 FPS.</p>
<p>Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Intel-and-Microsoft-Show-DirectX-12-Demo-and-Benchmark" class="inline-image-link" title="View: intel-dx12-unlockedFPS-1.jpg"><img src="/files/imagecache/article_max_width/news/2014-08-13/intel-dx12-unlockedFPS-1.jpg" alt="intel-dx12-unlockedFPS-1.jpg" title="intel-dx12-unlockedFPS-1.jpg" class="pcper-inline" width="550" height="366" /></a></div></p>
<p class="rtecenter"><em>Maximum power in DirectX 11 mode.</em></p>
<p>For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an &quot;ideal scenario&quot; of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer <i>could</i> optimize the scene to (maybe) instance objects together, and so forth, but that is <i>unnecessary work</i>. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?</p>
<p>That, of course, depends on <i>how much</i> performance improvement we will see from DirectX 12, compared to theoretical max efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then developers can, quite literally, implement whatever solution is most direct.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Intel-and-Microsoft-Show-DirectX-12-Demo-and-Benchmark" class="inline-image-link" title="View: intel-dx12-unlockedFPS-2.jpg"><img src="/files/imagecache/article_max_width/news/2014-08-13/intel-dx12-unlockedFPS-2.jpg" alt="intel-dx12-unlockedFPS-2.jpg" title="intel-dx12-unlockedFPS-2.jpg" class="pcper-inline" width="550" height="366" /></a></div></p>
<p class="rtecenter"><em>Maximum power when switching to DirectX 12 mode.</em></p>
<p>If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it could be less relevant, because developers will still need to do their tricks in those situations. The closer it gets, the fewer the occasions on which strict optimization is necessary.</p>
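<p>To put rough numbers on that reasoning (all figures hypothetical, taken from the ratios above rather than from any measurement), a quick back-of-the-envelope calculation:</p>

```python
# Hypothetical scenario from the paragraph above: each draw call carries
# a fixed submission overhead, and batching everything into one big
# instanced call pays that overhead only once. Units are arbitrary.

def frame_overhead(calls, cost_per_call):
    return calls * cost_per_call

OLD_API_COST = 10000   # relative per-call cost, "DirectX 11 was 10,000x slower"
NEW_API_COST = 1000    # 10x cheaper per call, but still 1000x a batched call
BATCHED_COST = 1       # one big instanced call

naive_old = frame_overhead(50000, OLD_API_COST)   # 500,000,000
naive_new = frame_overhead(50000, NEW_API_COST)   # 50,000,000
batched   = frame_overhead(1, BATCHED_COST)       # 1

# The new API is 10x better, but the naive path still costs 50,000,000x
# the batched one -- in this scenario, developers would keep instancing.
print(naive_old // naive_new)   # 10
print(naive_new // batched)     # 50000000
```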
<p>If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let&#39;s say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems &quot;the naive way&quot; now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?</p>
<p><a href="https://www.pcper.com/news/General-Tech/Intel-and-Microsoft-Show-DirectX-12-Demo-and-Benchmark" target="_blank">read more</a></p>
<p class="rtecenter"><em>Scott Michaud &ndash; Thu, 14 Aug 2014</em></p>
<h4><a href="https://www.pcper.com/news/General-Tech/Unreal-Engine-4-Mobile-Kepler-SIGGRAPH">Unreal Engine 4 on Mobile Kepler at SIGGRAPH</a></h4>
<p>SIGGRAPH 2013 is wrapping up in the next couple of days but, now that NVIDIA removed the veil surrounding Mobile Kepler, people are chatting about what is to follow Tegra 4. Tim Sweeney, founder of Epic Games, <a href="http://blogs.nvidia.com/blog/2013/07/24/sweeney/">contributed a post to NVIDIA Blogs about the number of ways</a> that certain attendees can experience Unreal Engine 4 at the show. As it turns out, NVIDIA engineers have displayed the engine both on Mobile Kepler as well as behind closed doors on desktop PCs.</p>
<p class="rtecenter"><iframe allowfullscreen="" frameborder="0" height="315" src="//www.youtube.com/embed/kp4UvCMfZ0I" width="560"></iframe></p>
<p class="rtecenter"><em>Not from SIGGRAPH, this is a leak from, I believe, GTC late last March.</em></p>
<p class="rtecenter"><em>Also, this is Battlefield 3, not Unreal Engine 4.</em></p>
<p>Tim, obviously taking the developer standpoint, is very excited about OpenGL 4.3 support within the mobile GPU. In all, he did not say too much of note. They are targeting Unreal Engine 4 at a broad range of platforms: mobile, desktop, console, and, while absent from this editorial, web standards. Each of these platforms is settling on the same set of features, albeit with huge gaps in performance, allowing developers to focus on a scale of performance instead of a flowchart of capabilities.</p>
<p>Unfortunately for us, there have yet to be leaks from the trade show. We will keep you up-to-date if we find any, however.</p>
<p><a href="https://www.pcper.com/news/General-Tech/Unreal-Engine-4-Mobile-Kepler-SIGGRAPH" target="_blank">read more</a></p>
<p class="rtecenter"><em>Scott Michaud &ndash; Wed, 24 Jul 2013</em></p>
<h4><a href="https://www.pcper.com/news/General-Tech/Gs-jolly-good-L-ohhh-which-20-years-cant-deny">For G&#39;s a jolly good L ohhh... which 20 years can&#39;t deny.</a></h4>
<p><em>OpenGL turned 20 as of the start of this year. Two new versions of the API have just been released during SIGGRAPH: OpenGL 4.3 and OpenGL ES 3.0. <a href="http://arstechnica.com/information-technology/2012/08/opengl-celebrates-its-20th-birthday-with-two-new-versions/">Ars Technica put together</a> a piece to outline the changes in these versions &ndash; most importantly: feature parity between Direct3D 11 and OpenGL 4.3.</em></p>
<p>As much attention as Direct3D gets from PC gamers, you cannot ignore OpenGL.</p>
<p>Reining in graphics hardware is a real challenge. We want to make use of all the computational performance of our devices, but also make them easy to develop for, writing code as few times as possible. Regardless of what mobile, desktop, or other device you own &ndash; if it contains a GPU, it almost certainly supports either OpenGL or OpenGL ES.</p>
<p>Even <a href="http://lights.elliegoulding.com/">certain up-and-coming websites</a> utilize the GPU to break new ground.</p>
<p class="rtecenter"><div class = "center-article-image"><a href="/news/General-Tech/Gs-jolly-good-L-ohhh-which-20-years-cant-deny" class="inline-image-link" title="View: opengl_logo.jpg"><img src="/files/imagecache/article_max_width/news/2012-08-07/opengl_logo.jpg" alt="opengl_logo.jpg" title="opengl_logo.jpg" class="pcper-inline" width="220" height="97" /></a></div></p>
<p class="rtecenter"><em>The Khronosgraph says 20 years.</em></p>
<p>Two new versions of OpenGL were recently published: OpenGL 4.3 as well as OpenGL ES 3.0. For the first time, OpenGL gives programmers access to compute shaders, which make it easier to accelerate computations that do not operate on pixels, vertices, or geometry without bringing in OpenCL or some other API. Unfortunately, this feature does not appear to carry over to OpenGL ES 3.0.</p>
<p>OpenGL ES is also important, not just for native mobile development as it is intended, but also because it is considered the basis of WebGL. It is likely that a future WebGL revision will contain the OpenGL ES 3.0 enhancements such as multiple render targets, more complex shaders, and so forth.</p>
<p>But it seems like the major reason why these two revisions were released together &ndash; apart from their timing aligning with the SIGGRAPH trade show &ndash; is because OpenGL and OpenGL ES have been somewhat merged. OpenGL ES 3.0 is now a subset of OpenGL 4.3 rather than some heavily overlapping Venn diagram. Porting from one specification to the other should be substantially easier.</p>
<p>So happy birthday, OpenGL &ndash; just don&rsquo;t go down the toilet on your 21st.</p>
<p><a href="https://www.pcper.com/news/General-Tech/Gs-jolly-good-L-ohhh-which-20-years-cant-deny" target="_blank">read more</a></p>
<p class="rtecenter"><em>Scott Michaud &ndash; Tue, 07 Aug 2012</em></p>