Many platforms offer access to dedicated hardware to perform a range of video-related tasks. Using such hardware allows some operations like decoding, encoding or filtering to be completed faster or using less of other resources (particularly CPU), but may give different or inferior results, or impose additional restrictions which are not present when using software only. On PC-like platforms, video hardware is typically integrated into a GPU (from AMD, Intel or NVIDIA), while on mobile SoC-type platforms it is generally an independent IP core (many different vendors).

Internal hwaccel decoders are enabled via the `-hwaccel` option. The software decoder starts normally, but if it detects a stream which is decodable in hardware then it will attempt to delegate all significant processing to that hardware. If the stream is not decodable in hardware (for example, it is an unsupported codec or profile) then it will still be decoded in software automatically. If the hardware requires a particular device to function (or needs to distinguish between multiple devices, say if several graphics cards are available) then one can be selected using `-hwaccel_device`.
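
As a sketch of the behaviour described above (file names are placeholders), a hardware-assisted decode with automatic software fallback looks like:

```shell
# Try to decode input.mp4 with the VDPAU hwaccel; if the stream is not
# decodable in hardware, ffmpeg falls back to software decoding.
# "-f null -" discards the decoded output, which is handy for testing.
ffmpeg -hwaccel vdpau -i input.mp4 -f null -
```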

== VDPAU ==

[https://http.download.nvidia.com/XFree86/vdpau/doxygen/html/index.html Video Decode and Presentation API for Unix]. Developed by NVIDIA for !Unix/Linux systems. To enable this you typically need the `libvdpau` development package in your distribution, and a compatible graphics card.

Note that VDPAU cannot be used to decode frames in memory: the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and the decoded image can then be accessed using the VDPAU API. This is not done automatically by FFmpeg, but must be done at the application level (see for example the {{{ffmpeg_vdpau.c}}} file used by {{{ffmpeg.c}}}). Also note that with this API it is not possible to move the decoded frame back to RAM, for example in case you need to re-encode the decoded frame (e.g. when doing transcoding on a server).

Several decoders are currently supported through VDPAU in libavcodec, in particular H.264, MPEG-1/2/4, and VC-1.

== VAAPI ==

Video Acceleration API (VAAPI) is a non-proprietary, royalty-free open-source software library ("libva") and API specification, initially developed by Intel but usable with devices from other vendors as well.

Several decoders are currently supported, in particular H.264, MPEG-2, VC-1 and WMV3.
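
Assuming a typical Linux setup (the DRM render node path below is an example and may differ on your machine), VAAPI decoding can be exercised with:

```shell
# Decode through VAAPI on the given DRM render node, discarding the output.
# /dev/dri/renderD128 is usually the first render node; adjust as needed.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -f null -
```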

== DXVA2 ==

DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header.

For MinGW this can be done by [https://download.videolan.org/pub/contrib/dxva2api.h downloading the header maintained by VLC] and installing it in the include path (for example in {{{/usr/include/}}}).

For MinGW-w64, `dxva2api.h` is provided by default. One way to install mingw-w64 is through a {{{pacman}}} repository; it can be installed using one of the two following commands, depending on the architecture:
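
The commands themselves are not preserved in this excerpt; assuming an MSYS2-style {{{pacman}}} repository, they are likely along these lines (the exact package names are an assumption):

```shell
# 32-bit toolchain (assumed package name)
pacman -S mingw-w64-i686-gcc
# 64-bit toolchain (assumed package name)
pacman -S mingw-w64-x86_64-gcc
```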

== VideoToolbox ==

[https://developer.apple.com/documentation/videotoolbox VideoToolbox] is supported only on macOS. H.264 decoding is available in FFmpeg/libavcodec.

== NVENC ==

NVENC is an API developed by NVIDIA which enables the use of NVIDIA GPU cards to perform H.264 and HEVC encoding. FFmpeg supports NVENC through the {{{h264_nvenc}}} and {{{hevc_nvenc}}} encoders. In order to enable it in FFmpeg you need:
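
For illustration (file names and bitrate are placeholders), encoding with the NVENC encoders looks like:

```shell
# H.264 encode on the GPU via NVENC
ffmpeg -i input.mp4 -c:v h264_nvenc -b:v 5M output.mp4
# HEVC encode via NVENC
ffmpeg -i input.mp4 -c:v hevc_nvenc -b:v 5M output_hevc.mp4
```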

The {{{-hwaccel_device}}} option can be used to specify the GPU to be used by the cuvid hwaccel in ffmpeg.
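
For example, to decode on the second GPU (index 1) with the cuvid decoder (a sketch; the input name is a placeholder):

```shell
# Select GPU 1 for the cuvid hwaccel and decode with the h264_cuvid decoder,
# discarding the output.
ffmpeg -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -i input.mp4 -f null -
```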

== libmfx ==

libmfx is a proprietary library from Intel for using Quick Sync hardware on both Linux and Windows. On Windows it is the primary way to use the more advanced functions beyond those accessible via DXVA2/D3D11VA, particularly encoding. On Linux it has a very restricted feature set and is hard to use, but it may be helpful for some use cases that need maximum throughput.
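
A minimal Quick Sync encode, assuming a working libmfx setup (file names and bitrate are placeholders):

```shell
# H.264 encode through Quick Sync via the h264_qsv encoder.
ffmpeg -i input.mp4 -c:v h264_qsv -b:v 5M output.mp4
```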

== OpenCL ==

[https://www.khronos.org/opencl/ OpenCL] can be used for a number of filters. To build, OpenCL 1.2 or later headers are required, along with an ICD or ICD loader to link to. It is recommended (but not required) to link with the ICD loader, so that the implementation can be chosen at run time rather than at build time. At run time, an OpenCL 1.2 driver is required; most GPU manufacturers provide one as part of their standard drivers. CPU implementations are also usable, but may be slower than using native filters in ffmpeg directly.
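
As a sketch (device and file names are placeholders), an OpenCL filter can be used by creating an OpenCL device and moving frames to and from it:

```shell
# Create an OpenCL device named "ocl", upload frames to it, run an
# OpenCL filter (here avgblur_opencl), then download the result.
ffmpeg -init_hw_device opencl=ocl -filter_hw_device ocl -i input.mp4 \
       -vf "format=yuv420p,hwupload,avgblur_opencl=10,hwdownload,format=yuv420p" \
       output.mp4
```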