Intel Developer Zone Articles
https://software.intel.com/en-us/articles/20800
Article Feed (en)

<h1>Dynamically Linked IMSL* Fortran Numerical Library with Intel® Parallel Studio XE 2018</h1>
https://software.intel.com/en-us/articles/dynamically-linked-imsl-fortran-numerical-library-with-intel-parallel-studio-xe-2018
<p><strong>Version:</strong> Intel® Parallel Studio XE 2018 Update 2 for Windows, Intel® Math Kernel Library (Intel® MKL) 2018 Update 2</p>
<p><strong>Operating System:</strong> Windows*</p>
<p><strong>Architecture:</strong> Intel 64 only</p>
<p><strong>Problem Description:</strong></p>
<p>An application built with Intel Parallel Studio XE 2018 Update 2 for Windows and <a href="https://software.intel.com/en-us/articles/installing-and-using-the-imsl-libraries/">dynamically linked with the IMSL* Fortran Numerical Library</a> will fail to start with an error message like:</p>
<p><em>"The procedure entry point mkl_lapack_ao_zgeqrf could not be located in the dynamic link library C:\Program Files (x86)\VNI\imsl\fnl701\Intel64\lib\imslmkl_dll.dll."</em></p>
<p>The cause of the error is that symbol removal in <a href="https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-2018-release-notes">Intel MKL 2018 Update 2</a> breaks backward compatibility with binaries dynamically linked against an older version of Intel MKL, such as the IMSL* Fortran Numerical Library.</p>
<p><strong>Resolution Status:</strong></p>
<p>It will be fixed in a future product update. When the fix is available, this article will be updated with the information.</p>
<p>There are three <span style="color:rgb(83, 86, 90); float:none">workarounds available to resolve the error:</span></p>
<ol><li>Link the IMSL Fortran Numerical Library statically</li>
<li>Link the IMSL Fortran Numerical Library without making use of Intel® MKL, which may have some performance impact</li>
<li>Use an older version of the Intel MKL DLLs, such as Intel MKL 2018 Update 1, by placing their directory on the PATH at run time</li>
</ol>
<p class="footnote">Mon, 09 Apr 2018 07:24:59 -0700 | Duan, Xiaoping (Intel)</p>

<h1>Accelerating Media, Video &amp; Computer Vision Processing: Which Tool Do I Use?</h1>
https://software.intel.com/en-us/articles/accelerating-media-processing-which-tool-do-i-use-when
<p>Intel has many software development tools to optimize media and video solutions and applications, from the edge to the cloud. Common usages that developers need include:</p>
<ul><li>Accelerating video transcoding performance and taking full advantage of Intel hardware capabilities and accelerators</li>
<li>Transitioning to higher frame rates and more efficient formats like HEVC</li>
<li>Integrating intelligent vision and deep learning</li>
<li>Innovating immersive experiences like 360 video, real-time cloud gaming, and AR/VR</li>
</ul><p>But sometimes it's hard to figure out just which development tool or combination of tools is best for your particular needs and usages. Below is a list of Intel® Software Development Tools that developers can use to build their media and video applications. </p>
<hr /><h4>For Fast Video or Image Processing</h4>
<ul><li><strong>Use <a href="/en-us/media-sdk" rel="nofollow">Intel® Media SDK</a> for Embedded Linux* for Client, Mobile, Embedded Applications &amp; Devices</strong></li>
<li><strong>Use </strong><a href="/en-us/media-sdk" rel="nofollow"><strong>Intel® Media SDK</strong></a><strong> for Windows* for Server, Desktop, Client, Mobile, Embedded Applications &amp; Devices</strong></li>
</ul><p><strong>Developing for:</strong></p>
<ul><li>Intel® Core™ or Intel® Core™ M processors </li>
<li>Select SKUs of Intel® Celeron™, Intel® Pentium® and Intel Atom® processors with Intel® HD Graphics supporting Intel® Quick Sync Video</li>
<li>Client, mobile and embedded devices - desktop or mobile media applications</li>
<li>OS - Windows* and Embedded Linux*</li>
<li>An open source version is also available on <a href="https://github.com/Intel-Media-SDK/MediaSDK" rel="nofollow">GitHub</a> under the MIT license</li>
</ul><p><strong>Uses and Needs</strong></p>
<ul><li>Fast video playback, encode, processing, media formats conversion or video conferencing</li>
<li>Accelerated processing of RAW video or images</li>
<li>Screen capture</li>
<li>Used with smart cameras across drones, phones, editors/players, network video recorders, and connected cars</li>
<li>Supports HEVC, AVC, MPEG-2 and audio codecs</li>
<li>Depending on the solution and usage, this software tool is commonly also used with Intel® Computer Vision SDK, Intel® SDK for OpenCL™ Applications, Intel® System Studio.</li>
</ul><p><a class="button-highlight" href="https://software.intel.com/en-us/media-sdk/choose-download">Free Download</a></p>
<hr /><h4><img title="intel media server studio" class="one-quarter-float-right" alt="intel media server studio" src="https://software.intel.com/sites/default/files/managed/ca/f1/Intel_DPD_NEWBoxShots_MSS.PNG" />For Enterprise, Data Center/Visual Cloud, Broadcasting &amp; Embedded Video Transcoding - Use <a href="/en-us/intel-media-server-studio" rel="nofollow">Intel® Media Server Studio</a> for Linux*</h4>
<p>Three editions are available:</p>
<ul><li>FREE Community</li>
<li>Essentials</li>
<li>Professional</li>
</ul>
<p><strong>Developing for:</strong></p>
<ul><li><a href="/en-us/intel-media-server-studio/details#technical" rel="nofollow">Select Intel® Xeon® or Intel® Core™ processor-based platforms</a></li>
<li>Servers/Desktop, Visual Cloud / Data Center / Embedded</li>
<li>Applications for media, communications infrastructure (video processing, streaming and conferencing; digital surveillance), video cloud &amp; data center</li>
<li><a href="/en-us/intel-media-server-studio/details#technical" rel="nofollow">OS - Linux* </a></li>
</ul><p><strong>Format Support</strong> <strong>- </strong>HEVC, AVC, MPEG-2 and more</p>
<p><strong>Uses and Needs</strong></p>
<ul><li>Used in enterprise, data center, and cloud-based media solutions. Common usages include broadcast, over-the-top (OTT), video-on-demand (VOD), video streaming, video conferencing, visual cloud, cloud gaming, and more.</li>
<li>High-density and fast video decode, encode, transcode</li>
<li>Optimize performance of Media/GPU pipeline </li>
<li>Enhanced graphics programmability or visual analytics (for use with OpenCL™ applications)</li>
<li>Low-level control over encode quality</li>
<li>Debug, analysis and performance/quality optimization tools</li>
<li>Speed transition to real-time 4K HEVC</li>
<li>Need to measure visual quality (Video Quality Caliper)</li>
<li>Screen capture</li>
</ul><p class="text-intel-clear"> <a class="button-highlight" href="https://software.intel.com/en-us/intel-media-server-studio">Free Download &amp; Paid Edition Options</a> </p>
<hr /><h4>For Real-time Communications via Web Applications - Use <a href="https://software.intel.com/en-us/webrtc-sdk">Intel® Collaboration Suite for WebRTC</a></h4>
<p>This Client SDK builds on top of the W3C standard WebRTC APIs to accelerate development of real-time communications (RTC), including broadcast, peer-to-peer, conference mode communications, and online gaming/VR streaming. </p>
<p>Use with Android*, web (JavaScript* built), iOS*, and Windows* applications. </p>
<p><a class="button-highlight" href="https://registrationcenter.intel.com/en/forms/?productid=2607">Free Download</a></p>
<hr /><h4>To Customize Solutions, Balance CPU/GPU Workloads and Utilize Intel<sup>®</sup> Processor Graphics - Use <a href="/intel-opencl" rel="nofollow">Intel® SDK for OpenCL™ Applications</a></h4>
<p><strong>Developing for:</strong></p>
<p>General purpose GPU acceleration on select Intel® processors (see <a href="/en-us/intel-opencl" rel="nofollow">technical specifications</a>). OpenCL primarily targets execution units. An increasing number of extensions are being added to Intel processors to make the benefits of Intel’s fixed function hardware blocks accessible to OpenCL applications.</p>
<ul><li>Provides the ability to customize heterogeneous compute applications and accelerate performance</li>
<li>Balance media workloads between CPU and GPU - uniquely offload compute to Intel® Graphics Technology tailored to your application needs</li>
</ul><p><a class="button-highlight" href="https://software.intel.com/en-us/intel-opencl">Free Download</a></p>
<hr /><h4><strong>For Smart Video, Computer Vision &amp; Deep Learning Inference - Use the<a href="https://software.intel.com/en-us/computer-vision-sdk"> Intel® Computer Vision SDK</a></strong></h4>
<p>To accelerate computer vision solutions and integrate deep learning inference, use the Intel Computer Vision SDK.</p>
<ul><li>Easily harness the performance of computer vision accelerators from Intel</li>
<li>Add your own custom kernels into your workload pipeline</li>
<li>Quickly deploy computer vision algorithms with deep-learning support using the included Deep Learning Deployment Toolkit from Intel</li>
<li>Create OpenVX* workload graphs with the intuitive and easy-to-use Vision Algorithm Designer</li>
<li>Commonly used with Intel® Media SDK and Intel® Media Server Studio.</li>
<li>Usages include smart video, object recognition, computer vision, deep learning inference across vertical solutions like digital surveillance, retail, manufacturing, smart home/cities, healthcare, autonomous driving, and more.</li>
</ul><p><a class="button-highlight" href="https://software.intel.com/en-us/computer-vision-sdk">Free Download</a></p>
<hr /><h4>To Analyze Video Performance &amp; Quality, use Intel® VTune™ Amplifier</h4>
<p>Intel® VTune™ Amplifier is a performance profiler that gathers CPU, GPU, threading, OpenCL, and bandwidth metrics to find media-processing bottlenecks. Quickly analyze data: filter results on the timeline, in the source code, and on a GPU architecture diagram showing VDBox, VEBox, EU utilization, and bus bandwidth.</p>
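<p>As a quick illustration, a basic collection can also be launched from the command line (a sketch; the application name is a placeholder, and the available collection types vary by product version):</p>
<pre class="brush:bash;"> &gt; amplxe-cl -collect hotspots -- my_media_app.exe</pre>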
<div><a class="button-highlight" href="https://software.intel.com/en-us/intel-vtune-amplifier-xe">Choose &amp; Download</a></div>
<hr /><h4><img title="Intel(r) System Studio" class="one-quarter-float-right" alt="Intel System Studio" src="https://software.intel.com/sites/default/files/managed/44/d0/Int_DPD_SysStudio.jpg" />To Improve System Bring-up, Boost Performance, Power Usage, Reliability for System and IoT Device Applications - use Intel® System Studio</h4>
<p>This cross-platform tool suite includes optimizing compilers, highly tuned libraries, analyzers, debug tools, and access to cloud connectors and 400+ sensors.</p>
<ul><li>Helps system and IoT developers improve system bring-up, boost performance, power usage and reliability of system &amp; IoT device applications.</li>
<li>Includes Intel® VTune™ Amplifier and debugger tools</li>
</ul><h5><a class="button-highlight" href="https://software.intel.com/en-us/system-studio/choose-download/purchase-options">Free Download</a></h5>
<hr /><h4>For FPGA - Use <a href="http://altera.com/vip" target="_blank" rel="nofollow">Video &amp; Image Processing Suite MegaCore Functions</a> </h4>
<p>(part of the Intel® Quartus® Prime Software Suite IP Catalog)</p>
<p><strong>Developing for:</strong></p>
<ul><li>All Altera FPGA families</li>
<li>Video and image processing applications, such as video surveillance, broadcast, video conferencing, medical and military imaging and automotive displays</li>
</ul><p><strong>Uses and Needs</strong></p>
<ul><li>For design, simulation, verification of hardware bit streams for FPGA devices</li>
<li>Optimized building blocks for deinterlacing, color space conversion, alpha blending, scaling, and more</li>
</ul><hr /><h4><span style="color:rgb(0, 113, 197)">Intel® C for Media and Related Tools</span></h4>
<p>We recently open-sourced the C for Media (CM) runtime together with the Intel media driver and the CM compiler. The CM package is also listed on <a href="https://01.org/c-for-media-development-package" rel="nofollow">01.org</a>.</p>
<p>The source code can be accessed at these locations:</p>
<ul><li>Intel Media Driver for VAAPI and Intel® C for Media Runtime: on <a href="https://github.com/intel/media-driver/" rel="nofollow">GitHub</a> </li>
<li>Intel C for Media Compiler and examples: on <a href="https://github.com/intel/cm-compiler/" rel="nofollow">GitHub</a> </li>
<li>Intel Graphics Compiler: on <a href="https://github.com/intel/intel-graphics-compiler/" rel="nofollow">GitHub</a> </li>
</ul><hr />
<p class="footnote">OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.</p>
<p class="footnote">Thu, 29 Mar 2018 16:30:28 -0700 | Brenda C. (Intel)</p>

<h1>Troubleshooting Visual Studio Command Prompt Integration Issue</h1>
https://software.intel.com/en-us/articles/troubleshooting-visual-studio-command-prompt-integration-issue
<p><strong>Issue Description</strong></p>
<p>Nmake and ifort are not recognized in a command window, although Intel Fortran works perfectly under Visual Studio.</p>
<p><strong>Troubleshooting</strong></p>
<p>Follow the checklist below to troubleshoot Visual Studio command-environment issues:</p>
<p><strong>1. </strong>Verify whether ifort and nmake are installed correctly:</p>
<p> For Visual Studio 2017, nmake is installed at:</p>
<p> C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.10.25017\bin\HostX64\x64\nmake.exe</p>
<p> Find the location by running the commands below on a system where ifort and nmake are set up correctly:</p>
<pre class="brush:bash;"> &gt; where nmake
&gt; where ifort</pre>
<p> Also check whether the location is included in the PATH environment variable:</p>
<pre class="brush:bash;"> &gt; echo %PATH%</pre>
<p><strong>2. </strong>If nmake can be found, verify that the Visual Studio setup script runs properly.<br />
Start a cmd window and run the Visual Studio setup script manually:</p>
<pre class="brush:bash;"> &gt; "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Auxiliary\Build\vcvars64.bat"</pre>
<p> The expected output looks like the following:</p>
<p> <span><img alt="vscmd_setup.png" title="vscmd_setup.png" height="113" width="623" src="https://software.intel.com/sites/default/files/managed/0c/27/vscmd_setup.png" /></span></p>
<div><strong>3.</strong><span> </span>If nmake cannot be found, your Visual Studio installation is incomplete. Please reinstall Visual Studio, following the instructions in the articles below:</div>
<div> - <a href="https://software.intel.com/en-us/articles/installing-microsoft-visual-studio-2017-for-use-with-intel-compilers">https://software.intel.com/en-us/articles/installing-microsoft-visual-studio-2017-for-use-with-intel-compilers</a></div>
<div> - <a href="https://software.intel.com/en-us/articles/installing-visual-studio-2015-for-use-with-intel-compilers">https://software.intel.com/en-us/articles/installing-visual-studio-2015-for-use-with-intel-compilers</a></div>
<div> </div>
<div><strong>4.</strong><span> </span>Got an error in step 2?</div>
<div>
<pre class="brush:bash;"> &gt; "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Auxiliary\Build\vcvars64.bat"
\Common was unexpected at this time.</pre>
</div>
<div>If so, debug the setup script by setting the VSCMD_DEBUG environment variable:</div>
<div>
<pre class="brush:bash;"> &gt; set VSCMD_DEBUG=3</pre>
</div>
<div>Run the setup script again and redirect the output to a log file:</div>
<div>
<pre class="brush:bash;"> &gt; "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Auxiliary\Build\vcvars64.bat" &gt; setup.log 2&gt;&amp;1</pre>
<p><strong>5.</strong><span> </span>If you get the same error as above, there are some references from the Visual Studio community:</p>
</div>
<div> <a href="https://social.msdn.microsoft.com/Forums/vstudio/en-US/56bdd445-adc6-46ab-a383-6714bdb2030d/visual-studio-2010-command-prompt-not-working?forum=vssetup" rel="nofollow">https://social.msdn.microsoft.com/Forums/vstudio/en-US/56bdd445-adc6-46ab-a383-6714bdb2030d/visual-studio-2010-command-prompt-not-working?forum=vssetup</a></div>
<div> The suggested solution is to remove all quotation marks from the PATH environment variable value.</div>
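<div> For example (a sketch; the directory names below are placeholders for your actual PATH entries), inspect the current value and reset it without quotes for the current session:</div>
<div>
<pre class="brush:bash;"> &gt; echo %PATH%
 "C:\some dir";C:\other\dir
 &gt; set PATH=C:\some dir;C:\other\dir</pre>
</div>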
<div> </div>
<div><strong>6.</strong><span> </span>If you get a different error, capture the expected output from any system that runs the script correctly and compare it with yours. This will help you locate which command in the setup script triggers the error. </div>
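<div> For example, assuming you saved the two outputs as setup_good.log and setup_bad.log (hypothetical file names), the built-in fc utility shows where they diverge:</div>
<div>
<pre class="brush:bash;"> &gt; fc setup_good.log setup_bad.log</pre>
</div>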
<div> You may also report such issues to the Visual Studio community directly at</div>
<div> <a href="https://developercommunity.visualstudio.com/spaces/8/index.html" rel="nofollow">https://developercommunity.visualstudio.com/spaces/8/index.html</a></div>
<p> </p>
<p class="footnote">Thu, 22 Mar 2018 01:27:43 -0700 | Chen, Yuan (Intel)</p>

<h1>Intel® Media SDK &amp; Intel® Media Server Studio Historical Release Notes</h1>
https://software.intel.com/en-us/articles/intel-media-sdk-release-notes
<p><strong>Release notes for the Intel<sup>®</sup> Media SDK</strong> include important information such as system requirements, what's new, a feature table, and known issues since the previous release. Below is a list of release notes from previous releases of different products, to track new features and supported system requirements.</p>
<h2>Intel Media SDK</h2>
<table align="left"><thead><tr><th>Version/What's New</th>
<th>Release Date</th>
<th>Release Notes</th>
<th>Platform Support</th>
</tr></thead><tbody><tr><td><a href="https://software.intel.com/en-us/blogs/2017/06/09/whats-new-in-intel-media-sdk-2017-for-windows">2017 R1</a></td>
<td>Jun. 9, 2017</td>
<td><a href="https://software.intel.com/sites/default/files/managed/ba/5b/mediasdk_release_notes_2017.pdf">Windows</a></td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 6</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 7</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Core™ processors (codename Skylake &amp; KabyLake)</span></td>
</tr></tbody></table><h2><span style="color:rgb(83, 86, 90)">Intel® Media Server Studio</span></h2>
<table align="left"><thead><tr><th>Version/What's New</th>
<th>Release Date</th>
<th>Release Notes</th>
<th>Platform Support</th>
</tr></thead><tbody><tr><td><a href="https://software.intel.com/en-us/blogs/2016/07/27/2017-release-whats-new-in-intel-media-server-studio">Professional 2017</a></td>
<td>Sept. 2017</td>
<td><a href="https://software.intel.com/sites/default/files/managed/d2/9e/media_server_studio_professional_release_notes-windows-2017.pdf">Windows</a>*|<a href="https://software.intel.com/sites/default/files/managed/b9/4f/media_server_studio_professional_release_notes-linux-2017.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 6</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Broadwell &amp; Skylake)</span></td>
</tr><tr><td><a href="https://software.intel.com/en-us/blogs/2017/07/11/whats-new-in-intel-media-server-studio-2017-r3">2017 R3</a></td>
<td>Aug. 1, 2017</td>
<td><a href="https://software.intel.com/sites/default/files/managed/01/a9/media_server_studio_essentials_release_notes_2017_R3_windows.pdf">Windows</a>*|<a href="https://software.intel.com/sites/default/files/managed/df/15/media_server_studio_essentials_release_notes_2017_R3_linux.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 6</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Broadwell &amp; Skylake)</span></td>
</tr><tr><td><a href="https://software.intel.com/en-us/blogs/2017/01/04/2017-r2-release-whats-new-in-intel-media-server-studio">2017 R2</a></td>
<td>Jan. 4, 2017</td>
<td><a href="https://software.intel.com/sites/default/files/managed/74/cf/media_server_studio_essentials_release_notes_2017_R2_windows.pdf">Windows</a>*|<a href="https://software.intel.com/sites/default/files/managed/5a/24/media_server_studio_essentials_release_notes_2017_R2_linux.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 6</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Broadwell &amp; Skylake)</span></td>
</tr><tr><td><a href="https://software.intel.com/en-us/blogs/2016/07/27/2017-release-whats-new-in-intel-media-server-studio">2017 R1</a></td>
<td>Sept. 1, 2016</td>
<td><a href="https://software.intel.com/sites/default/files/managed/1b/88/media_server_studio_sdk_release_notes_windows_2017.pdf">Windows</a>*|<a href="https://software.intel.com/sites/default/files/managed/cf/4d/media_server_studio_sdk_release_notes_linux_2017.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 6</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Broadwell &amp; Skylake)</span></td>
</tr><tr><td><a href="https://software.intel.com/en-us/articles/2016-release-whats-new-in-intel-media-server-studio">Professional 2016</a></td>
<td>Feb. 18, 2016</td>
<td><a href="https://software.intel.com/sites/default/files/managed/af/5d/media_server_studio_professional_release_notes-windows-2016.pdf">Windows</a>*|<a href="http://software.intel.com/sites/default/files/managed/27/0b/media_server_studio_professional_release_notes-linux-2016.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 4</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Haswell &amp; Broadwell)</span></td>
</tr><tr><td><a href="https://software.intel.com/en-us/articles/2016-release-whats-new-in-intel-media-server-studio">2016</a></td>
<td>Feb. 18, 2016</td>
<td><a href="https://software.intel.com/sites/default/files/managed/1f/3b/media_server_studio_sdk_release_notes-windows-2016.pdf">Windows</a>*|<a href="https://software.intel.com/sites/default/files/managed/ed/dc/media_server_studio_sdk_release_notes-linux-2016.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 4</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Haswell &amp; Broadwell)</span></td>
</tr><tr><td><a href="https://software.intel.com/en-us/articles/r6-release-whats-new-in-intel-media-server-studio">2015 R6</a></td>
<td>July 2, 2015</td>
<td><a href="https://software.intel.com/sites/default/files/managed/8a/1b/media_server_studio_sdk_release_notes-windows-2015R6.pdf">Windows</a>*/<a href="https://software.intel.com/sites/default/files/managed/ee/4a/media_server_studio_sdk_release_notes_2015R6.pdf">Linux</a>*</td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of 4</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> &amp; 5</span><span style="color:rgb(83, 86, 90); vertical-align:baseline">th</span><span style="color:rgb(83, 86, 90)"> generation</span><br /><span style="color:rgb(83, 86, 90)">Intel® Xeon® &amp; Core™ processors (codename Haswell &amp; Broadwell)</span></td>
</tr></tbody></table><h2><span style="color:rgb(83, 86, 90)">Intel Media SDK for Embedded Linux*</span></h2>
<table align="left"><thead><tr><th>Version/What's New</th>
<th>Release Date</th>
<th>Release Notes</th>
<th>Platform Support</th>
</tr></thead><tbody><tr><td><a href="https://software.intel.com/en-us/blogs/2017/08/17/whats-new-in-intel-media-sdk-2017-r21-for-embedded-linux">2017 R1</a></td>
<td>Aug. 25, 2017</td>
<td><a href="https://software.intel.com/sites/default/files/managed/7a/8b/mediasdkembedded_release_notes_2017.pdf">Linux</a></td>
<td><span style="color:rgb(83, 86, 90)">Supports select SKUs of </span><span style="color:rgb(83, 86, 90)">Intel® Atom™ processors (codename ApolloLake)</span></td>
</tr></tbody></table>
<p>For the latest documents, getting started guide, and release notes, check the Intel Media SDK <a href="https://software.intel.com/en-us/media-sdk/documentation/get-started">getting started webpage</a>. If you have any issues, connect with us at the <a href="https://software.intel.com/en-us/forums/intel-media-sdk">Intel Media SDK forum</a>.</p>
<p class="footnote">Fri, 16 Mar 2018 10:53:35 -0700 | Liu, Mark (Intel)</p>

<h1>One Door VR: The First Proof of Concept on Untethered VR Using MSI* Backpack PC</h1>
https://software.intel.com/en-us/articles/one-door-vr-the-first-proof-of-concept-on-un-tethered-vr-using-msi-backpack-pc
<p class="intro-paragraph">Corey Warning and Will Lewis are the cofounders of Rose City Games*, an independent game studio in Portland, Oregon.</p>
<p>Rose City Games was recently awarded a development stipend and equipment budget to create a VR Backpack Early Innovation Project. The challenge was to come up with something that could only be possible with an untethered VR setup. In this article, you’ll find documentation about concepting the project, what we learned, and where we hope to take it in the future. Below is the introductory video for the project.</p>
<iframe width="480" height="270" src="https://www.youtube.com/embed/LslmCDKL_bk?feature=oembed" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<h2>Inspirations Behind Project: One Door</h2>
<p>Earlier this year, our team attended the <a href="https://www.theverge.com/2017/2/25/14724636/resident-evil-escape-room-horror-experience" target="_blank" rel="nofollow">Resident Evil Escape Room</a> in Portland, Oregon. Being huge fans of that franchise, experiencing that world in a totally new medium was really exciting, and it got us thinking about what other experiences could cross over in a similar fashion.</p>
<p>At the time, we were also trying out as many <a href="https://www.youtube.com/watch?v=PRrcJHcrlYM" target="_blank" rel="nofollow">VR experiences</a> as we could get our hands on. When we heard about the opportunity to work on an untethered VR experience, we knew there had to be something interesting we could bring to the table.</p>
<p>We’re currently operating out of a coworking space with some friends working on a variety of VR projects. The <a href="https://www.builtbywild.com" target="_blank" rel="nofollow">WILD</a> crew had some experience in merging real space and VR, so I asked <a href="https://twitter.com/gzeap?lang=en" target="_blank" rel="nofollow">Gabe Paez</a> if he remembered any specific challenges he encountered during that project. “Doors” was his response, and I decided to chase after creating a “VR Escape Room” experience, with the idea of moving through doors as the core concept!</p>
<h2>Overview</h2>
<p>The scope of this project is to create a proof-of-concept VR application using the <a href="http://vr.msi.com/Backpacks/vrone" target="_blank" rel="nofollow">MSI* VR One backpack</a>. We’re attempting to create a unique experience that’s only possible using this hardware, specifically, an untethered setup.</p>
<p>Right away, we knew this project would require an installation, and because of this, we’re not considering this product for mass market. This will likely be interesting content for exhibitions such as <a href="http://www.gdconf.com/events/altctrlgdc.html" target="_blank" rel="nofollow">GDC Alt.Ctrl</a>, <a href="https://unite.unity.com" target="_blank" rel="nofollow">Unite*</a>, <a href="http://www.virtualrealityla.com" target="_blank" rel="nofollow">VR LA</a>, etc.</p>
<h2>One Door Game Concept</h2>
<p>Players will be in a completely virtual space, interacting with a physical door installation. They will be wearing the MSI VR One* backpack, with a single HTC VIVE* controller and a VIVE headset. Each level will contain a simple puzzle or action the player must complete. Once completed, the player will be able to open the door and physically step through to the next level. At that point, they will be presented with a new puzzle or action, and the game will progress in this fashion.</p>
<p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-fig-02.png" /><br />
Figure 1. The proof of concept setup for One Door</p>
<p>The player can open the door at any time. However, if a puzzle or action is incomplete, they will see the same level/door on the other side of the installation. We’re considering using a VIVE Tracker for the actual door handle, so that we can easily track and calibrate where the player needs to grab.</p>
<p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-fig-03.png" /><br />
Figure 2. One Door front view</p>
<p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-fig-04.png" /><br />
Figure 3. One Door top view</p>
<h2>Installation Specifics</h2>
<ul><li>The door will need to be very lightweight.</li>
<li>We’ll need support beams to make sure the wall doesn’t tip over.
<ul><li>Sandbags on the bases will be important.</li>
</ul></li>
<li>We should use brackets or something similar that allows assembling and disassembling the installation quickly, without sacrificing the integrity each time.</li>
<li>The VIVE lighthouses will need to be set higher than the wall in order to capture the entire play area.
<ul><li>We’ll need quality stands, and likely more sandbags.</li>
</ul></li>
<li>We may need something like bean bag chairs to place around the support beams/lighthouses to ensure people don’t trip into anything.
<ul><li>Another consideration is having someone attending to the installation at all times.</li>
</ul></li>
</ul><p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-fig-05.png" /><br />
Figure 4. One Door field setup inside the lighthouses</p>
<h2>Our Build-Out</h2>
<ul><li>Mobile, free-standing door with handle and base</li>
<li>MSI VR One backpack and off-site computer for development
<ul><li>Additional DisplayPort-to-HDMI cable required</li>
<li>Mouse/keyboard/monitor</li>
<li>OBS to capture video</li>
</ul></li>
<li>2 lighthouses
<ul><li>Stands</li>
<li>Adjustable grips to point lighthouses at an angle</li>
<li>Placed diagonally on each side of the door</li>
</ul></li>
<li>1 VIVE Tracker
<ul><li>Gaffer tape to attach it to the door, ripped at the charging port</li>
<li>Extension cables and charging cables run to the tracker for charging downtime</li>
</ul></li>
<li>2 VIVE controllers
<ul><li>We didn’t need them, but showing hand positioning was helpful for recording video</li>
</ul></li>
<li>iPhone* to capture real-world video</li>
</ul><p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-fig-06.png" /><br />
Figure 5. Door with VIVE*</p>
<p>This project was very much VR development training for us in many ways. This was our first time working with a VIVE, and implementing the additional physical build out for a new interactive experience created a bit of a learning curve. I feel like a majority of our hang-ups were typical of any VR developer, but of course we created some unique challenges for ourselves that we're happy to have experience with now. I would definitely recommend that VR developers thoughtfully explore the topics below and learn from our assumptions and processes before kicking off a project of their own.</p>
<h2>Our First Time with HTC VIVE*</h2>
<p>We've played with the VIVE a ton, but this was our first time developing for it. Setting up the general developer environment and Unity* plugins didn't take much time, but we had to think very strategically about how to develop and test more seamlessly past that point. Very commonly, it saved us an immense amount of time to have two people on site at a time: One person tending to Unity, while the other moved controllers and trackers, readjusted lighthouses, adjusted room scale, and acted as a second pair of eyes.</p>
<p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-fig-07.png" /><br />
Figure 6. One Door VR development and testing</p>
<p>With regard to hardware specifically as well as our project needing to use a physical prop, we went back and forth on many choreographies for how lighthouses were able to track devices, and even had quite a bit of trouble with hooking up a monitor. Since the MSI VR One backpack has one HDMI output and one DisplayPort input, we had to borrow (and later buy) a DisplayPort-to-HDMI converter to both develop the application and use the VIVE headset simultaneously. Luckily, this didn't delay development for too long, and was a better solution than our initial workaround — attaching the HDMI output to an HDMI switcher that we already had, and flipping between our monitor/dev environment and the headset. Continuing with this process for the duration of the project would have been very unrealistic and a huge waste of time.</p>
<p>We were introduced to more new experiences during this project, like being able to remotely work from home and use Unity's Collaborate feature, exploring how awesome it was to experience VR without being tethered, and becoming very familiar with how quickly we’re able to kick off a VR project.</p>
<h2>Budget</h2>
<p>Almost directly paired with testing new equipment and working with a physical build-out, our budget was another challenge we had to overcome. The recommended list of equipment provided by Intel was not totally covered by the allotted funding, so we had to pick and choose a bare minimum for what we might be able to use in our project, then consider how leftovers could satisfy the hours put in by an experienced developer. Luckily, because of our connections in the local game developer community, we were able to work with one of our friends who's been interested in experimenting on a project like this for some time. Still, if we were to do this project from scratch, we would very likely scope it with a higher budget in mind, as at least two more trackers, converter cables, adjustable joints for the tops of lighthouse stands, and a few other small items would have been considered in our minimum requirements to complete this project on a tighter timeline with a more polished product in mind.</p>
<h2>Location and Space</h2>
<p>From a consumer standpoint, we know that room-scale VR is unrealistic for many, and we still ran into a few issues as we planned for and worked on this project. One of my biggest recommendations to other developers working in room-scale VR would be to buy a tape measure early and make sure you have space solely dedicated to your project for the entirety of its development. We share a coworking space with about 20 other local VR developers, artists, game makers, and web designers, so needing to push our build-out to the side of the room at the end of every dev session added to our overall setup time. It did give us a lot of practice with setup and familiarity with devices, but another interesting revelation was that we never would have been able to do this from any of our homes!</p>
<h2>Unique Build-Out</h2>
<p>Since our project involved a prop (a full-sized, free-standing door), we had to make obvious considerations around moving it, storing it, and occlusion for the lighthouses. When we think about taking our project beyond a prototype, there are so many more issues that become apparent. Thinking about how this project would likely continue in the future as a tech demo, festival/museum installation, or resume piece, we also had to consider that we would need to show it to more people than ourselves and direct supporters. With this comes an additional consideration: safety. We definitely cut corners to very quickly build a functional prototype, but thinking around polish and transportation readiness, we would definitely recommend spending more time and resources towards creating a safer experience catered to those unfamiliar with VR.</p>
<p>As we prototyped, we were able to remember to pick our feet up in order to not trip, slowly move forward to avoid bashing into an outcropping in the door, and find the door handle without any problem. What we've made serves as an excellent tech demo, but we would definitely take another pass at the door prop before considering it any sort of consumable, public product, or experience. To make transportation easier, we would also build the door differently so that we could disassemble it on the fly.</p>
<h2>Moving Forward</h2>
<p>We're confident in what we have as a technical demo for how easy, interesting, and liberating it can be to use the MSI VR One backpack, and we're also very proud of and excited by what we were able to learn and accomplish. So much so that we'd like to continue implementing simple puzzles, art, voiceover, and accessibility features to make it more presentable. After some additional testing and polish, we'd like to shop the prototype around, searching for a sponsor related to content and IP, VR tech, interactive installations, or trade shows so that we can share the project with a wider audience! Intel is a prime candidate for this collaboration, and we'd love to follow up after giving another round on the demo.</p>
<p>Thanks for letting us be a part of this!</p>
<h2>Code Sample (Unity)</h2>
<p style="text-align:center"><img src="/sites/default/files/managed/13/56/one-door-vr-img-01.png" /></p>
<p>When using a peripheral as large as a door, the room choreography needs to be spot-on with regard to your lighthouse and tracker setup — particularly the tracker, which we affixed to our door to gauge its orientation at any given time (this mainly allowed us to tell whether the door was closed or open). We made a simple setup script to position the door, door frame, and door stand/stabilizers properly.</p>
<p>The Setup Helper is a simple tool that provides a solution for the position and rotation of the door and door frame relative to the VIVE Tracker position. Setup Helper runs in Editor mode, allowing it to be updated without having to be in Play mode, but should be disabled after running the application to allow the door to swing independent of the frame in game. Multiple Setup Helpers can be created to position any other geometry that needs to be spaced relative to the door, like room walls, floors, room decor, etc. in order to avoid potential visual/collision-oriented gaps or clipping.</p>
<p>The Setup Helper hierarchy is shown above. The following applies to the areas highlighted in blue, including the tracker (attached to the door) and doorway.</p>
<pre class="brush:cpp;">using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
public class SetupHelper : MonoBehaviour {
    public bool setDoorFrameToTracker = false;
    public GameObject doorFrameGo;
    public Transform trackerTransform;
    public bool trackRotation = false;
    public Vector3 doorframeShift; // used to set the difference in placement to make it fit perfectly on the tracker position

    // Use this for initialization
    void Start () {
    }

#if UNITY_EDITOR
    // Update is called once per frame
    void Update () {
        if (setDoorFrameToTracker)
            SetDoorFrameToTracker();
    }

    void SetDoorFrameToTracker()
    {
        doorFrameGo.transform.position = trackerTransform.position + doorframeShift;
        if (trackRotation)
            doorFrameGo.transform.rotation = trackerTransform.parent.rotation;
    }
#endif
}
</pre>
<h2>About the Authors</h2>
<p>Corey Warning and Will Lewis are the cofounders of Rose City Games, an independent game studio in Portland, Oregon.</p>
<p class="footnote">Mon, 12 Feb 2018 04:49:38 -0800 | admin</p>

<h1>Cannot Find &quot;stdint.h&quot; after Upgrade to Visual Studio 2017</h1>
https://software.intel.com/en-us/articles/cannot-open-include-file-vcincludestdinth-when-upgrade-to-visual-studio-2017
<p><strong>Problem Description</strong></p>
<p>When the Visual Studio 2017 C++ compiler is run under the Intel(R) C++ Compiler environment, or when a Visual Studio 2017 solution contains mixed projects using both the Intel compiler and the Visual Studio C++ compiler, you may encounter:</p>
<pre>fatal error C1083: Cannot open include file: '../../vc/include/stdint.h': No such file or directory</pre>
<p><strong>Root Cause</strong></p>
<p>Some Intel C++ compiler header files need to include particular Microsoft VC++ header files by path. With Microsoft Visual Studio 2015 and older, a relative path such as “../vc” works. Starting with Microsoft Visual Studio 2017, the include directory path contains the full VC Tools version number, so a fixed relative path no longer resolves.</p>
<p>For example, the Visual Studio 2017 stdint.h is here:</p>
<pre>c:/Program files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/<strong>14.10.24930</strong>/include/stdint.h</pre>
<p>For Visual Studio 2015, it is here:</p>
<pre>c:/Program files (x86)/Microsoft Visual Studio 14.0/VC/INCLUDE/stdint.h</pre>
<p><strong>Solution</strong></p>
<p>The workaround is to define the <strong>__MS_VC_INSTALL_PATH</strong> macro on the command line (-D option), e.g.:</p>
<p>-D__MS_VC_INSTALL_PATH="c:/Program files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/14.10.24930" </p>
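<p>For instance, a full compile command might look like the sketch below (the Professional edition and the 14.10.24930 tools version are examples; substitute the paths from your own installation):</p>
<pre class="brush:bash;"> &gt; icl /c -D__MS_VC_INSTALL_PATH="c:/Program files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/14.10.24930" main.cpp</pre>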
<p>A complete resolution still relies on Microsoft's support. Please see the issue we registered on Microsoft's forum:</p>
<p><span style="color:#1f497d"><a href="https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/30930367-add-a-built-in-precompiled-macro-to-vc-that-poin" rel="nofollow"><span>https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/30930367-add-a-built-in-precompiled-macro-to-vc-that-poin</span></a></span></p>
<p>If you have encountered this issue, you are encouraged to vote for this idea at the link above.</p>
<p class="footnote">Fri, 29 Dec 2017 02:50:50 -0800 | Chen, Yuan (Intel)</p>

<h1>How to use the Intel® Advisor Collection Control APIs</h1>
https://software.intel.com/en-us/articles/intel-advisor-collection-control-apis
<p><strong>Overview </strong></p>
<p>Intel® Advisor collection can be sped up, and the size of collected samples reduced, by using the Instrumentation and Tracing Technology (ITT) APIs. These ITT APIs have been supported by the Intel® Advisor Survey collection since the product's release, but starting with Intel® Advisor 2018 you can also use them with the Trip Counts and FLOP collection. This can make Roofline analysis an option for larger and longer-running applications.</p>
<p>In this article, we show how to use the collection control APIs to tell Intel® Advisor when to start and stop collecting performance data during the execution of your target application.</p>
<p><strong>Background </strong></p>
<p>Intel® Advisor normally starts collecting data from the moment the analysis is started, so it may collect data for sections of a large codebase that you are not interested in. With the collection control ITT APIs, you choose the sections of your source code for which Intel® Advisor should monitor and record performance data.</p>
<p><strong>Usage example:</strong> Focus on a specific code section</p>
<p>The first step is to wrap the source code of interest between the <em>resume</em> and <em>pause</em> API calls and then start Intel Advisor in <strong>paused mode</strong>. When Intel Advisor reaches the <em>resume</em> call, it starts collecting performance data; it stops when it reaches the <em>pause</em> call.</p>
<p>Below are a series of detailed steps with a small code-snippet to get you started:</p>
<ol><li>First, make the ITT APIs available to your C/C++ application. In your source code, include the <code>"ittnotify.h"</code> header file, located in the include directory where Intel Advisor is installed. By default, the installation path on Windows is:
<pre class="code-simple">C:\Program Files (x86)\IntelSWTools\Advisor 2018\include</pre>
On Linux, the default path will be:
<pre class="code-simple">/opt/intel/advisor_2018/include</pre>
<p class="note">Note: The <code> "ittnotify.h" </code>header file contains all the ITT APIs templates that you can use for instrumentation.</p>
<p>Add the path above to your include directories so that your compiler can find the header file. In Microsoft Visual Studio, for example, navigate to <code>Property Pages&gt;C/C++&gt;General&gt;Additional Include Directories</code></p>
<p><span><img alt="intel-advisor-visual-studio-itt-notify" title="Property page of Visual Studio for including the ITT library" src="https://software.intel.com/sites/default/files/managed/30/02/advisor-vs-prop-pages-1.png" /></span></p>
</li>
<li>Then, link to the ITT library (<code>libittnotify.lib</code>) and recompile your application. In Visual Studio, navigate to the linker settings (<code>Property Pages&gt;Linker&gt;Input&gt;Additional Dependencies</code>) and add the path to the library. By default, on Windows, the path will be:
<pre class="code-simple">C:\Program Files (x86)\IntelSWTools\Advisor 2018\lib64\libittnotify.lib</pre>
On Linux, the default installation path is <code>/opt/intel/advisor_2018/</code>. Then configure your build scripts to include the path to the library and link to the <code>libittnotify.a</code> library by passing <code>-littnotify</code> to your compiler.</li>
<li>Finally, start Intel Advisor in <b>paused mode</b>. Look for the Play-with-a-Pause symbol icon like the one below:
<p><span><img alt="Intel-advisor-start-paused-button" title="Intel Advisor Start-Paused button" src="https://software.intel.com/sites/default/files/managed/1b/18/start-stop.png" /></span></p>
<p class="note">In Intel Advisor, the <strong>Survey Analysis</strong> and the <strong>Trip Counts and FLOP Analysis</strong> support the collection control APIs.</p>
</li>
</ol><h4><span style="color:rgb(83, 86, 90)">Example:</span></h4>
<pre class="brush:cpp;">#include "ittnotify.h"

void do_some_math();        // placeholder for the code section you want Advisor to profile
void do_some_other_math();  // placeholder for a code section Advisor should skip

int main(int argc, char* argv[])
{
    const int size = 100000;

    // Do initialization work here
    __itt_resume(); // Intel Advisor starts recording performance data
    for (int i = 0; i &lt; size; i++)
    {
        do_some_math();
    }
    __itt_pause(); // Intel Advisor stops recording performance data
    for (int i = 0; i &lt; size; i++)
    {
        do_some_other_math();
    }
    return 0;
}</pre>
<p>In the scenario above, Intel Advisor reports performance data for the loop containing the <code>do_some_math()</code> call and not for the one containing <code>do_some_other_math()</code>. If you draw the Roofline model for that analysis, you will see one dot on the graph, as opposed to the two you would see if you ran Intel Advisor without the collection control APIs.</p>
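<p>The same paused-start workflow is available from the command line. Below is a minimal sketch (the application name is a placeholder; confirm the exact option names with <code>advixe-cl -help collect</code> for your version):</p>
<pre class="brush:bash;"> &gt; advixe-cl -collect survey -start-paused -- ./my_application</pre>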
<p class="footnote">Fri, 08 Dec 2017 04:01:29 -0800 | Shailen Sobhee (Intel)</p>

<h1>Explore Unity Technologies ML-Agents* Exclusively on Intel® Architecture</h1>
https://software.intel.com/en-us/articles/explore-unity-technologies-ml-agents-exclusively-on-intel-architecture
<h2>Abstract</h2>
<p>This article describes how to install and run Unity Technologies ML-Agents* in CPU-only environments. It demonstrates how to:</p>
<ul><li>Train and run the ML-Agents <em>Balance Balls</em> example on Windows* <em>without</em> CUDA* and cuDNN*.</li>
<li>Perform a TensorFlow* CMake build on Windows optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2).</li>
<li>Create a simple Amazon Web Services* (AWS) Ubuntu* Amazon Machine Image* environment from scratch <em>without</em> CUDA and cuDNN, build a “headless” version of <em>Balance Balls</em> for Linux*, and train it on AWS.</li>
</ul><h2>Introduction</h2>
<p><a href="https://unity3d.com/" target="_blank" rel="nofollow">Unity Technologies</a> released their beta version of <a href="https://github.com/Unity-Technologies/ml-agents" target="_blank" rel="nofollow">Machine Learning Agents* (ML-Agents*)</a> in September 2017, offering an exciting introduction to <a href="https://en.wikipedia.org/wiki/Reinforcement_learning" target="_blank" rel="nofollow">reinforcement learning</a> using their 3D game engine. According to Unity’s introductory <a href="https://blogs.unity3d.com/2017/09/19/introducing-unity-machine-learning-agents/" target="_blank" rel="nofollow">blog</a>, this open SDK will potentially benefit academic researchers, industry researchers interested in “training regimes for robotics, autonomous vehicle, and other industrial applications,” and game developers.</p>
<p>Unity’s ML-Agents SDK leverages <a href="https://www.tensorflow.org/" target="_blank" rel="nofollow">TensorFlow*</a> as the machine learning framework for training agents using a <a href="https://blog.openai.com/openai-baselines-ppo/" target="_blank" rel="nofollow">Proximal Policy Optimization (PPO)</a> algorithm. There are several example projects included in the GitHub* download, as well as a <a href="https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md#getting-started-with-the-balance-ball-example" target="_blank" rel="nofollow">Getting Started</a> example and documentation on how to install and use the SDK.</p>
<p>One downside of the SDK for some developers is the implied dependency on CUDA* and cuDNN* to get the ML-Agents environment up and running. As it turns out, it is possible not only to explore ML-Agents exclusively on a CPU, but also to perform a custom build of TensorFlow on a Windows® 10 computer that includes optimizations for Intel® architecture.</p>
<p>In this article we show you how to:</p>
<ul><li>Train and run the ML-Agents <em>Balance Balls</em> (see Figure 1) example on Windows <em>without</em> CUDA and cuDNN.</li>
<li>Perform a TensorFlow CMake build on Windows* optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2).</li>
<li>Create a simple Amazon Web Services* (AWS) Ubuntu* Amazon Machine Image* (AMI) environment from scratch <em>without</em> CUDA and cuDNN, build a “headless” version of <em>Balance Balls</em> for Linux*, and train it on AWS.</li>
</ul><p style="text-align:center"><img alt="" src="/sites/default/files/managed/b4/b1/unity-environment.png" /><br /><strong>Figure 1.</strong><em> Trained Balance Balls model running in Unity* software.</em></p>
<p> </p>
<h2>Target Audience</h2>
<p>This article is intended for developers who have had some exposure to TensorFlow, Unity software, Python*, AWS, and machine learning concepts.</p>
<h2>System Configurations</h2>
<p>The following system configurations were used in the preparation of this article:</p>
<p>Windows Workstation</p>
<ul><li>Intel® Xeon® processor E3-1240 v5</li>
<li>Microsoft Windows 10, version 1709</li>
</ul><p>Linux Server (Training)</p>
<ul><li>Intel® Xeon® Platinum 8180 processor @ 2.50 GHz</li>
<li>Ubuntu Server 16.04 LTS</li>
</ul><p>AWS Cloud (Training)</p>
<ul><li>Intel® Xeon® processor</li>
<li>Ubuntu Server 16.04 LTS AMI</li>
</ul><p>In the section on training ML-Agents in the cloud we use a free-tier Ubuntu Server 16.04 AMI.</p>
<h2>Install Common Windows Components</h2>
<p>This section describes the installation of common software components required to get the ML-Agents environment up and running. The Unity ML-Agents documentation contains an <a href="https://github.com/Unity-Technologies/ml-agents/blob/master/docs/installation.md#installation--set-up" target="_blank" rel="nofollow">Installation and Setup</a> procedure that links to a webpage instructing the user to install CUDA and cuDNN. Although this is fine if your system already has a graphics processing unit (GPU) card that is <a href="https://developer.nvidia.com/cuda-gpus" target="_blank" rel="nofollow">compatible with CUDA</a> and you don’t mind the extra effort, it is not a requirement. Either way, we encourage you to review the Unity ML-Agents documentation before proceeding. </p>
<p>There are essentially three steps required to install the common software components:</p>
<ol><li>Download and install Unity 2017.1 or later from the package located <a href="https://store.unity.com/" target="_blank" rel="nofollow">here</a>.</li>
<li>Download the ML-Agents SDK from <a href="https://github.com/Unity-Technologies/ml-agents" target="_blank" rel="nofollow">GitHub</a>. Extract the files and move them to a project folder of your choice (for example, <em>C:\ml-agents</em>).</li>
<li>Download and install the Anaconda* distribution for Python 3.6 version for Windows, located <a href="https://www.anaconda.com/download/" target="_blank" rel="nofollow">here</a>.</li>
</ol><h2>Install Prebuilt TensorFlow*</h2>
<p>This section follows the guidelines for installing TensorFlow on Windows with CPU support only. According to the <a href="https://www.tensorflow.org/install/install_windows" target="_blank" rel="nofollow">TensorFlow website</a>, “this version of TensorFlow is typically much easier to install (typically, in 5 or 10 minutes), so even if you have an NVIDIA* GPU, we recommend installing this version first.” Follow these steps to install prebuilt TensorFlow on your Windows 10 system:</p>
<ol><li>In the Start menu, click the <strong>Anaconda Prompt</strong> icon (see Figure 2) to open a new terminal.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/13/9b/windows-start-menu.png" /><br /><strong>Figure 2.</strong><em> Windows* Start menu.</em></p>
</li>
<li>Type the following commands at the prompt:
<p><code>&gt; conda create -n tensorflow-cpu python=3.5<br />
&gt; activate tensorflow-cpu<br />
&gt; pip install --ignore-installed --upgrade tensorflow</code></p>
</li>
<li>As specified in the TensorFlow documentation, ensure the installation worked correctly by starting Python and typing the following commands:
<p><code>&gt; python<br />
&gt;&gt;&gt; import tensorflow as tf<br />
&gt;&gt;&gt; hello = tf.constant('Hello')<br />
&gt;&gt;&gt; sess = tf.Session()<br />
&gt;&gt;&gt; print (sess.run(hello))</code></p>
</li>
<li>If everything worked correctly, 'Hello' should print to the terminal as shown in Figure 3.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/fe/d7/python-test-output.png" /><br /><strong>Figure 3. </strong><em>Python* test output.</em></p>
<p>You may also notice a message like the one shown in Figure 3, stating “Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2.” This message may vary depending on the Intel® processor in your system; it indicates TensorFlow could run faster on your computer if you build it from sources, which we will do in the next section.</p>
</li>
<li>To close Python, at the prompt, press CTRL+Z.<br />
</li>
<li>Navigate to the python subdirectory of the ML-Agents repository you downloaded earlier, and then run the following command to install the other required dependencies:
<p><code>&gt; pip install .</code></p>
</li>
<li>Refer to the <a href="https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md#building-unity-environment" target="_blank" rel="nofollow">Building Unity Environment</a> section of the “Getting Started with Balance Ball Example” tutorial to complete the ML-Agents tutorial.</li>
</ol><h2>Install TensorFlow from Sources</h2>
<p>This section describes how to build an optimized version of TensorFlow on your Windows 10 system.</p>
<p>The <a href="https://www.tensorflow.org/install/install_sources" target="_blank" rel="nofollow">TensorFlow website</a> states, “We don't officially support building TensorFlow on Windows; however, you may try to build TensorFlow on Windows if you don't mind using the highly experimental Bazel on Windows or TensorFlow CMake build.” However, don’t let this discourage you. In this section we provide instructions on how to perform a CMake build on your Windows system.</p>
<p>The following TensorFlow build guidelines complement the <a href="https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/contrib/cmake#step-by-step-windows-build" target="_blank" rel="nofollow">Step-by-step Windows build</a> instructions shown on <a href="https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/contrib/cmake" target="_blank" rel="nofollow">GitHub</a>. To get a more complete understanding of the build process, we encourage you to review the GitHub documentation before continuing. </p>
<ol><li>Install <a href="https://www.visualstudio.com/vs/older-downloads/" target="_blank" rel="nofollow">Microsoft Visual Studio* 2015</a>. Be sure to check the programming options as shown in Figure 4.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/e9/bc/visual-studio-programming-options.png" /><br /><strong>Figure 4.</strong><em> Visual Studio* programming options.</em></p>
</li>
<li>Download and install Git from <a href="https://git-scm.com/download/win" target="_blank" rel="nofollow">here</a>. Accept all default settings for the installation.<br />
</li>
<li>Download swigwin from <a href="http://www.swig.org/download.html" target="_blank" rel="nofollow">here</a> and extract it to C:\swigwin-3.0.12 (note that the version number may be different on your system); this path is passed to CMake later.<br />
</li>
<li>Download and install CMake version 3.6 from <a href="https://cmake.org/files/v3.6/" target="_blank" rel="nofollow">here</a>. During the installation, be sure to check the option <strong>Add CMake to the system path for all users</strong>.<br />
</li>
<li>In the Start menu, click the <strong>Anaconda Prompt</strong> icon (see Figure 2) to open a new terminal. Type the following commands at the prompt:
<p><code>&gt; conda create -n tensorflow-custom36 python=3.6<br />
&gt; activate tensorflow-custom36</code></p>
</li>
<li>Run the following command to set up the environment:
<p><code>&gt; "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"</code></p>
(Note: If vcvarsall.bat is not found, try following the instructions provided <a href="https://social.msdn.microsoft.com/Forums/en-US/1071be0e-2a46-4c30-9546-ea9d7c4755fa/where-is-vcvarsallbat-file?forum=visualstudiogeneral" target="_blank" rel="nofollow">here</a>.)<br />
</li>
<li>Clone the TensorFlow repository and create a working directory for your build:
<p><code>&gt; cd \<br />
&gt; git clone https://github.com/tensorflow/tensorflow.git<br />
&gt; cd tensorflow\tensorflow\contrib\cmake<br />
&gt; mkdir build<br />
&gt; cd build</code></p>
</li>
<li>Type the following commands (Note: Be sure to check the paths and library version shown below against your own system, as they may be different):
<p><code>&gt; cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^<br />
-DSWIG_EXECUTABLE=C:/swigwin-3.0.12/swig.exe ^<br />
-DPYTHON_EXECUTABLE=C:/Users/%USERNAME%/Anaconda3/python.exe ^<br />
-DPYTHON_LIBRARIES=C:/Users/%USERNAME%/Anaconda3/libs/python36.lib ^<br />
-Dtensorflow_WIN_CPU_SIMD_OPTIONS=/arch:AVX2</code></p>
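<p>If you are unsure whether your processor actually supports AVX2 before choosing the SIMD option, one quick way to check (assuming the third-party py-cpuinfo package, installable with pip install py-cpuinfo) is a short script like this sketch:</p>
<pre class="brush:python;">import cpuinfo  # third-party package: pip install py-cpuinfo

# List the SIMD feature flags reported by the CPU and test for AVX2.
flags = cpuinfo.get_cpu_info()['flags']
print('AVX2 supported:', 'avx2' in flags)
</pre>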
</li>
<li>Build the pip package, which will be created as a .whl file in the directory .\tf_python\dist (for example, C:\tensorflow\tensorflow\contrib\cmake\build\tf_python\dist\tensorflow-1.4.0-cp36-cp36m-win_amd64.whl).
<p><code>&gt; C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild /p:Configuration=Release tf_python_build_pip_package.vcxproj</code></p>
(Note: Be sure to check the path to MSBuild on your own system as it may be different.)<br />
</li>
<li>Install the newly created TensorFlow build by typing the following command:
<p><code>&gt; pip install C:\tensorflow\tensorflow\contrib\cmake\build\tf_python\dist\tensorflow-1.4.0-cp36-cp36m-win_amd64.whl</code></p>
</li>
<li>As specified in the TensorFlow documentation, ensure the installation worked correctly by starting Python and typing the following commands:
<p><code>&gt; python<br />
&gt;&gt;&gt; import tensorflow as tf<br />
&gt;&gt;&gt; hello = tf.constant('Hello')<br />
&gt;&gt;&gt; sess = tf.Session()<br />
&gt;&gt;&gt; print (sess.run(hello))</code></p>
</li>
<li>If everything worked correctly, 'Hello' should print to the terminal. Also, we should not see any build optimization warnings like we saw in the previous section (see Figure 5).
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/22/bd/python-test-output-2.png" /><br /><strong>Figure 5.</strong><em> Python* test output.</em></p>
</li>
<li>To close Python, press CTRL+Z at the prompt, and then press Enter.<br />
</li>
<li>Navigate to the python subdirectory of the ML-Agents repository you downloaded earlier, and then run the following command to install the other required dependencies:
<p><code>&gt; pip install .</code></p>
</li>
<li>Refer to the <a href="https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md#building-unity-environment" target="_blank" rel="nofollow">Building Unity Environment</a> section of the “Getting Started with Balance Ball Example” tutorial to complete the ML-Agents tutorial.</li>
</ol><h2>Train ML-Agents in the Cloud</h2>
<p>The ML-Agents documentation provides a guide titled “Training on Amazon Web Service” that contains instructions for setting up an EC2 instance on AWS for training ML-Agents. Although this guide states, “you will need an EC2 instance which contains the latest Nvidia* drivers, CUDA8, and cuDNN,” there is a simpler way to do cloud-based training without the GPU overhead.</p>
<p>In this section we perform the following steps:</p>
<ul><li>Create an Ubuntu Server 16.04 AMI (free tier).</li>
<li>Install prerequisite applications on Windows for interacting with the cloud server.</li>
<li>Install Python and TensorFlow on the AMI.</li>
<li>Build a headless Linux version of the <em>Balance Balls</em> application on Windows.</li>
<li>Export the Python code in the <em>PPO.ipynb</em> Jupyter Notebook* to run as a stand-alone script in the Linux environment.</li>
<li>Copy the <em>python</em> directory from Windows to the Linux AMI.</li>
<li>Run a training session on AWS for the ML-Agents <em>Balance Balls</em> application.</li>
</ul><ol><li>Create an account on AWS if you don’t already have one. You can follow the steps shown in this section with an <a href="https://aws.amazon.com/free/?sc_channel=PS&amp;sc_campaign=acquisition_US&amp;sc_publisher=google&amp;sc_medium=cloud_computing_b&amp;sc_content=aws_url_e_control_q32016&amp;sc_detail=amazon.%20web%20services&amp;sc_category=cloud_computing&amp;sc_segment=188908164670&amp;sc_matchtype=e&amp;sc_country=US&amp;s_kwcid=AL!4422!3!188908164670!e!!g!!amazon.%20web%20services&amp;ef_id=WYDd8wAAALGX80Xs:20171122210318:s" target="_blank" rel="nofollow">AWS Free Tier</a> account; however, we do not cover every detail of creating an account and configuring an AMI, because the website contains detailed information on how to do this.</li>
<li>Create an Ubuntu Server 16.04 AMI. Figure 6 shows the machine instance we used for preparing this article.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/c2/59/linux-server-amazon-machine.png" /><br /><strong>Figure 6.</strong><em> Linux* Server 16.04 LTS Amazon Machine Image*.</em></p>
</li>
<li>Install PuTTY* and WinSCP* on your Windows workstation. Detailed instructions and links for installing these components, connecting to your Linux instance from Windows using PuTTY, and transferring files to your Linux instance using WinSCP are provided on the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html" target="_blank" rel="nofollow">AWS website</a>.<br />
</li>
<li>Log in to the Linux Server AMI using PuTTY, and then type the following commands to install Python and TensorFlow:
<p><code>&gt; sudo apt-get update<br />
&gt; sudo apt-get install python3-pip python3-dev<br />
&gt; pip3 install tensorflow<br />
&gt; pip3 install image </code></p>
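<p>As a quick sanity check that TensorFlow imports correctly on the AMI, you can repeat the same one-line test used earlier on Windows:</p>
<p><code>&gt; python3 -c "import tensorflow as tf; print(tf.Session().run(tf.constant('Hello')))"</code></p>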
Note: The next steps assume you have already completed the ML-Agents <a href="https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md#getting-started-with-the-balance-ball-example" target="_blank" rel="nofollow">Getting Started with Balance Ball Example</a> tutorial. If not, be sure to complete these instructions and verify you can successfully train and run a model on your local Windows workstation before proceeding.<br />
</li>
<li>Ensure your Unity software installation includes Linux Build Support. You need to explicitly specify this option during installation, or you can add it to an existing installation by running the Unity Download Assistant as shown in Figure 7.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/00/e7/unity-software-linux-build.png" /><br /><strong>Figure 7. </strong><em>Unity* software Linux* build support.</em></p>
</li>
<li>In Unity software, open <em>File – Build Settings</em> and make the following selections:
<ul><li>Target Platform: Linux</li>
<li>Architecture: x86_64</li>
<li>Headless Mode: Checked</li>
</ul>These settings are shown in Figure 8.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/e3/de/unity-software-build-settings.png" /><br /><strong>Figure 8.</strong> <em>Unity* software build settings for headless Linux* operation.</em></p>
</li>
<li>After clicking <strong>Build</strong>, create a unique name for the application and save it in the repository’s <em>python</em> folder (see Figure 9). In our example we named it Ball3DHeadless.x86_64 and will refer to it as such for the remainder of this article.
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/c7/44/build-linux.png" /><br /><strong>Figure 9. </strong><em>Build Linux* application.</em></p>
</li>
<li>In order to run through a complete training session on the Linux AMI we will export the Python code in the <em>PPO.ipynb</em> Jupyter Notebook* so it can run as a stand-alone script in the Linux environment. To do this, follow these steps:
<p>- In the Start menu, to open a new terminal, click the <strong>Anaconda Prompt</strong> icon (Figure 2).<br />
- Navigate to the <em>python</em> folder, and then type <code>jupyter notebook</code> on the command line.<br />
- Open the <em>PPO.ipynb</em> notebook, and then click <strong>File – Download As – Python (.py)</strong>. This will save a new file named “ppo.py” in the Downloads folder of your Windows computer.<br />
- Change the filename to “ppo-test.py” and then copy it to the <em>python</em> folder in your ML-Agents repository.<br />
- Open ppo-test.py in a text editor, and then change the <em>env_name</em> variable to “Ball3DHeadless”:<br />
- <code>env_name = "Ball3DHeadless" # Name of the training environment file.</code><br />
- Save ppo-test.py, and then continue to the next step.</p>
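<p>Alternatively, the same export can be done from the command line with <code>jupyter nbconvert --to script PPO.ipynb</code>, which writes PPO.py next to the notebook; rename the result to ppo-test.py as above.</p>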
</li>
<li>Once the application has been built for the Linux environment and the test script has been generated, use WinSCP to copy the <em>python</em> folder from your ML-Agents repository to the Ubuntu AMI. (Details on transferring files to your Linux instance using WinSCP are provided on the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html" target="_blank" rel="nofollow">AWS website</a>.)</li>
<li>In the PuTTY console, navigate to the <em>python</em> folder and run the following commands (chmod +x is needed because the execute bit is not preserved when the file is copied over from Windows):
<p><code>&gt; cd python<br />
&gt; chmod +x Ball3DHeadless.x86_64<br />
&gt; python3 ppo-test.py</code></p>
<p>If everything went well, you should see the training session start up as shown in Figure 10.</p>
<p style="text-align:center"><img alt="" src="/sites/default/files/managed/47/28/training-session-running-on-aws.png" /><br /><strong>Figure 10.</strong><em> Training session running on an Amazon Web Services* Linux* instance.</em></p>
</li>
</ol><h2>Summary</h2>
<p>In the output shown in Figure 10, notice that the time (in seconds) is printed to the console after every model save. Code was added to the <em>ppo-test.py</em> script for this article in order to get a rough measure of the training time between model saves.</p>
<p>To instrument the code we made the following modifications to the Python script:</p>
<pre class="brush:python;">import numpy as np
import os
import tensorflow as tf
import time # New Code
.
.
.
trainer = Trainer(ppo_model, sess, info, is_continuous, use_observations, use_states)
timer_start = time.clock() # New Code
.
.
.
Save_model(sess, model_path=model_path, steps=steps, saver=saver)
print(“ %s seconds “ % (time.clock() – timer_start)) # New Code
timer_start = time.clock() # New Code
.
.
.
</pre>
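<p>Note that <code>time.clock()</code> was deprecated in Python 3.3 and removed in Python 3.8; on newer interpreters, <code>time.perf_counter()</code> is the appropriate replacement for this kind of elapsed-time measurement.</p>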
<p>Using this informal performance metric, we found that the average difference in training time between a prebuilt TensorFlow GPU binary and prebuilt CPU-only binary on the Windows workstation was negligible. The training time for the custom CPU-only TensorFlow build was roughly 19 percent faster than the prebuilt CPU-only binary on the Windows workstation. When training was performed in the cloud, the AWS Ubuntu Server AMI performed roughly 29 percent faster than the custom TensorFlow build on Windows.</p>
Tue, 05 Dec 2017 11:14:11 -0800Bryan B. (Intel)752303A script for performance test with MSDK sampleshttps://software.intel.com/en-us/articles/a-script-for-performance-test-with-msdk-samples
<h2>1. Introduction to the script</h2>
<p>When the Intel® Media SDK (MSDK) runs on different platforms, a performance test is usually needed as part of the evaluation. The MSDK samples are good tools for this: they cover the classic media pipeline (decode, VPP, and encode) and report useful information for calculating performance, such as running time and frame count. During a performance test we also need to track resource usage, including CPU, memory, and GPU utilization, so an automated script is the best way to handle all of this. The following sections introduce such a script.</p>
<h2>2. Download the script from GitHub</h2>
<pre><code>$ git clone https://github.com/zchrzhou/mdk-perf-script.git
</code></pre>
<p>We highly recommend reading: <a href="https://github.com/zchrzhou/mdk-perf-script/blob/master/readme.txt" rel="nofollow">https://github.com/zchrzhou/mdk-perf-script/blob/master/readme.txt</a></p>
<h2>3. Features of MSDK performance script</h2>
<ul><li>Perl script, easy to extend; automatically collects performance data (FPS/GPU/CPU/MEM usage).</li>
<li>Batch-runs many test cases in order.</li>
<li>Loops test cases for stress testing.</li>
<li>Supports the MSDK sample_decode, sample_encode, sample_vpp, and sample_multi_transcode samples.</li>
<li>Multi-OS: supports both Windows and Linux.</li>
</ul><h2>4. How to use this tool</h2>
<pre><code>$ ./run.sh or ./main.pl
Welcome to Intel MediaSDK sample multiable process test tool.
Enjoy and good luck.
Performance test with Intel MSDK sample
Use example:
main.pl [--test &lt;item-1&gt; --test &lt;item-n&gt;] [--all] [--loop n] [--start n1 --end n2]
main.pl [--test A01 --test B1 --test C1] [--loop 2]
main.pl [--start 1 --end 10] [--loop 2]
main.pl [--test A01] [--loop -1]
--loop: loop run (-1 will run forever)
--start|--end: run with the range from --start to --end
refer to lib/config.pl -&gt; %conf{"range_template"}
--all: for all test items in lib/config.pl -&gt; %test_map
--test: test item, refter to lib/config.pl -&gt; %test_map
--input-dir: set input file folder
--output-dir: set output file folder
--sample-dir: set sample binary folder
--with-output: save output of transcode
--with-par: only for transcode test
--with-gpu: only for linux
--with-cpu-mem only for linux
</code></pre>
<h2>5. Configure your test cases</h2>
<p>Modify lib/config.pm to customize your test cases.</p>
<pre><code>$ vim lib/config.pm
### Test Map
##Transcode: ITEM =&gt; [channel-num, test-type, input-codec, input-file, output-codex, output-ext, head-args, tail-args]
##Decode: ITEM =&gt; [channel-num, test-type, input-param, input-file, output-param, output-ext, head-args, tail-args]
##Encode: ITEM =&gt; [channel-num, test-type, input-param, input-file, output-param, output-ext, head-args, tail-args]
##VPP: ITEM =&gt; [channel-num, test-type, input-param, input-file, output-param, output-ext, head-args, tail-args]
our %test_map = (
"A01" =&gt; [1, "transcode", "-i::h264", "1080p.h264", "-o::h264", "h264", "", "-hw -w 1920 -h 1080 -u 7 -b 6000"],
"A02" =&gt; [2, "transcode", "-i::h264", "1080p.h264", "-o::h264", "h264", "", "-hw -w 1920 -h 1080 -u 7 -b 6000"],
"B1" =&gt; [5, "transcode", "-i::h264", "1080p.h264", "-o::h264", "h264", "", "-hw -w 1280 -h 720 -u 7 -b 2048"],
"C5" =&gt; [4, "transcode", "-i::mpeg2", "1080p.m2t", "-o::h264", "h264", "", "-hw -w 176 -h 144 -u 7 -b 80" ],
"D2" =&gt; [6, "decode", "-i", "JOY_1080.h264", "-o", "yuv", "h264", "-hw" ],
"D3" =&gt; [2, "decode", "-i", "JOY_1080.h264", "-r", "", "h264", "-hw" ],
"E2" =&gt; [6, "encode", "-i", "JOY_1080.yuv", "-o", "h264", "h264", "-hw -w 1920 -h 1080" ],
"F1" =&gt; [2, "vpp", "-i", "JOY_1080.yuv", "-o", "yuv", "-lib hw", "-scc nv12 -dcc nv12 -sw 1920 -sh 1080 -dw 1280 -dh 720 -n 5" ],
"V0" =&gt; [16, "transcode", "-i::h264", "1080p.h264", "-o::h264", "h264", "", "-hw -w 1920 -h 1080 -u 7 -b 15000"],
"V1" =&gt; [16, "transcode", "-i::h264", "1080p.h264", "-o::h264", "h264", "", "-hw -w 1920 -h 1080 -u 7 -b 15000"],
"V2" =&gt; [16, "transcode", "-i::h264", "1080p.h264", "-o::h264", "h264", "", "-hw -w 1920 -h 1080 -u 7 -b 15000"],
);
</code></pre>
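<p>Reading one entry as an example: "B1" launches five concurrent transcode sessions, each decoding 1080p.h264 and re-encoding it to H.264 at 1280x720 with hardware acceleration (-hw), target usage 7, and a 2048 kbps bitrate.</p>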
<h2>6. A demo run of the script</h2>
<pre><code>$ ./run.sh --test A02 --with-fps --with-cpu-mem --with-par --with-gpu
Welcome to Intel MediaSDK sample multiable process test tool.
Enjoy and good luck.
mkdir -p input
mkdir -p output/A02
rm -rf output/A02/*
rm -f input/1080p.h264
cp -f /home/zhoujd/perf-script/stream/1080p.h264 input/1080p.h264
Start top by child process
Test --with-par is used
Start top by child process
/home/zhoujd/perf-script/binary/sample_multi_transcode -par output/A02/multi-channel.par &gt; output/A02/A02-with-par.log
libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
[sudo] password for zhoujd:
top process num: zhoujd 18207 18200 0 22:20 pts/5 00:00:00 /bin/bash /home/zhoujd/perf-script/tools/cpu_mem/top.sh 1 sample_m
gpu process num: zhoujd 18208 18200 0 22:20 pts/5 00:00:00 /bin/bash /home/zhoujd/perf-script/tools/gpu/metrics_monitor.sh
0-time: 4.21 sec
0-frames: 500
0-fps: 118.69
1-time: 4.21 sec
1-frames: 500
1-fps: 118.75
AVG FPS: 118.72
Num Stream: 2
CPU: 13.1 %
MEM: 46.86 MB
GPU: 58.00% 95.75% 0.00%
mv /home/zhoujd/perf-script/tools/cpu_mem/cpu_mem.txt output/A02
mv /home/zhoujd/perf-script/tools/gpu/gpu.log output/A02
Wait 2s ...
run finished
</code></pre>
Sun, 05 Nov 2017 22:30:24 -0800Jiandong Z. (Intel)748069How to create video wall with MSDK sample_multi_transcodehttps://software.intel.com/en-us/articles/how-to-create-video-wall-with-msdk-sample-multi-transcode
<h2>1. Download and install MSDK samples</h2>
<p>Download and install the MSDK samples from <a href="https://software.intel.com/en-us/intel-media-server-studio/code-samples">https://software.intel.com/en-us/intel-media-server-studio/code-samples</a> or <a href="https://software.intel.com/en-us/media-sdk/documentation/code-samples">https://software.intel.com/en-us/media-sdk/documentation/code-samples</a></p>
<h2>2. Create a par file for sample_multi_transcode with 4 sources</h2>
<pre><code>-i::h264 crowd_run_1080p.264 -vpp_comp_dst_x 0 -vpp_comp_dst_y 0 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink
-i::h264 crowd_run_1080p.264 -vpp_comp_dst_x 960 -vpp_comp_dst_y 0 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink
-i::h264 crowd_run_1080p.264 -vpp_comp_dst_x 0 -vpp_comp_dst_y 540 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink
-i::h264 crowd_run_1080p.264 -vpp_comp_dst_x 960 -vpp_comp_dst_y 540 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink
-vpp_comp_only 4 -join -i::source
</code></pre>
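<p>The four -vpp_comp_dst_* rectangles tile a 1920x1080 composition surface into a 2x2 grid of 960x540 quadrants. As a convenience, a few lines of Python can generate such a par file for an arbitrary grid; this is only a sketch (the 1920x1080 surface size and the wall.par file name are assumptions, not MSDK requirements):</p>
<pre class="brush:python;"># Sketch: generate an N x M video-wall par file for sample_multi_transcode.
# Assumptions: a 1920x1080 composition surface, the same H.264 input for
# every tile, and the output file name wall.par.
cols, rows, W, H = 2, 2, 1920, 1080
w, h = W // cols, H // rows

lines = []
for r in range(rows):
    for c in range(cols):
        # One decode session per tile, composited at its grid position.
        lines.append("-i::h264 crowd_run_1080p.264 "
                     f"-vpp_comp_dst_x {c * w} -vpp_comp_dst_y {r * h} "
                     f"-vpp_comp_dst_w {w} -vpp_comp_dst_h {h} -join -o::sink")
# Final session composes all tiles.
lines.append(f"-vpp_comp_only {rows * cols} -join -i::source")

with open("wall.par", "w") as f:
    f.write("\n".join(lines) + "\n")
</pre>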
<h2>3. About the parameters used in sample_multi_transcode</h2>
<pre><code>-o::sink   | The output of this session serves as input for all sessions using -i::source.
-i::source | The session receives as input the output of the session using the -o::sink option.
-join      | Join the session to another session.
</code></pre>
<p>For more information about these parameters, refer to readme-multi-transcode.pdf in the source code folder.</p>
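<p>To start the video wall, pass the par file to the sample in the same way as any other multi-session run, for example <code>sample_multi_transcode -par wall.par</code> (the par file name is just an example). With -vpp_comp_only, the four decoded streams are composed onto a single surface rather than written to separate output files.</p>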
Fri, 03 Nov 2017 01:45:51 -0700Jiandong Z. (Intel)748022