Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.

Windows does not support standard formats like MPEG very well, unless you
install some additional codecs.

The following list of video codecs should work on most Windows systems:

msmpeg4v2   .avi/.asf
msmpeg4     .asf only
wmv1        .asf only
wmv2        .asf only
mpeg4       only if you have some MPEG-4 codec like ffdshow or Xvid installed
mpeg1video  .mpg only

Note, ASF files often have .wmv or .wma extensions in Windows. It should also
be mentioned that Microsoft claims a patent on the ASF format, and may sue
or threaten users who create ASF files with non-Microsoft software. It is
strongly advised to avoid ASF where possible.
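For example, the following sketch should produce a file that most Windows systems can play without extra codecs (file names are placeholders, and adpcm_ms is assumed here as a natively supported Microsoft audio codec):

ffmpeg -i input.mov -c:v msmpeg4v2 -c:a adpcm_ms output.avi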

The following list of audio codecs should work on most Windows systems:

This is a bug in gcc. Do not report it to us. Instead, please report it to
the gcc developers. Note that we will not add workarounds for gcc bugs.

Also note that (some of) the gcc developers believe this is not a bug or
not a bug they should fix:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203.
Then again, some of them do not know the difference between an undecidable
problem and an NP-hard problem...

Distributions usually split libraries into several packages. The main package
contains the files necessary to run programs using the library. The
development package contains the files necessary to build programs using the
library. Sometimes, docs and/or data are in a separate package too.

To build FFmpeg, you need to install the development package. It is usually
called libfoo-dev or libfoo-devel. You can remove it after the
build is finished, but be sure to keep the main package.
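For example, on Debian-style and Red Hat-style systems respectively (libvorbis is used purely as an illustration):

apt-get install libvorbis-dev
dnf install libvorbis-devel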

The best way is to install pkg-config in your cross-compilation
environment. It will automatically use the cross-compilation libraries.

You can also use pkg-config from the host environment by
explicitly passing --pkg-config=pkg-config to configure.
In that case, you must point pkg-config to the correct directories
using the PKG_CONFIG_LIBDIR environment variable, as explained in the previous entry.
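For instance, a hypothetical aarch64 cross-build could point configure at the target's pkg-config directory like this (the sysroot path and toolchain prefix are assumptions to adapt to your setup):

PKG_CONFIG_LIBDIR=/usr/aarch64-linux-gnu/lib/pkgconfig \
./configure --enable-cross-compile --arch=aarch64 --target-os=linux \
  --cross-prefix=aarch64-linux-gnu- --pkg-config=pkg-config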

As an intermediate solution, you can place in your cross-compilation
environment a script that calls the host pkg-config with
PKG_CONFIG_LIBDIR set. That script can look like this:
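(The sysroot path below is a placeholder; adjust it to your cross-compilation environment.)

#!/bin/sh
PKG_CONFIG_LIBDIR=/path/to/sysroot/usr/lib/pkgconfig
export PKG_CONFIG_LIBDIR
exec /usr/bin/pkg-config "$@"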

First, rename your pictures to follow a numerical sequence.
For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg

Notice that ‘%d’ is replaced by the image number.

img%03d.jpg means the sequence img001.jpg, img002.jpg, etc.

Use the -start_number option to declare a starting number for
the sequence. This is useful if your sequence does not start with
img001.jpg but is still in a numerical order. The following
example will start with img100.jpg:

ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg

If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using Bourne
shell syntax, symbolically links all files in the current directory
that match *jpg to the /tmp directory in the sequence of
img001.jpg, img002.jpg and so on.
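(A sketch; the *jpg pattern and /tmp target follow the description above, and absolute paths are used so the links remain valid.)

x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$PWD/$i" /tmp/img"$counter".jpg; x=$(($x+1)); done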

For multithreaded MPEG* encoding, the encoded slices must be independent,
otherwise thread n would practically have to wait for n-1 to finish, so it’s
quite logical that there is a small reduction of quality. This is not a bug.

Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use ‘-c:v mpeg4’ to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be ‘FMP4’. If you want
a different fourcc, use the ‘-vtag’ option. E.g., ‘-vtag xvid’ will
force the fourcc ‘xvid’ to be stored as the video fourcc rather than the
default.
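For example (file names are placeholders):

ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi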

To "join" video files is quite ambiguous. The following list explains the
different kinds of "joining" and points out how those are addressed in
FFmpeg. To join video files may mean:

To put them one after the other: this is called to concatenate them
(in short: concat) and is addressed
in this very faq.

To put them together in the same file, to let the user choose between the
different versions (example: different audio languages): this is called to
multiplex them together (in short: mux), and is done by simply
invoking ffmpeg with several -i options.

For audio, to put all channels together in a single stream (example: two
mono streams into one stereo stream): this is sometimes called to
merge them, and can be done using the
amerge filter (a sketch covering this and the mixing case below follows this list).

For audio, to play one on top of the other: this is called to mix
them, and can be done by first merging them into a single stream and then
using the pan filter to mix
the channels at will.

For video, to display both together, side by side or one on top of a part of
the other; it can be done using the
overlay video filter.
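As a sketch of the two audio cases above (file names are placeholders; the pan coefficients are arbitrary), merging two mono files into a stereo stream, then mixing them down to mono, could look like:

ffmpeg -i left.wav -i right.wav -filter_complex "amerge=inputs=2" stereo.wav
ffmpeg -i left.wav -i right.wav -filter_complex "amerge=inputs=2,pan=mono|c0=0.5*c0+0.5*c1" mixed.wav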

FFmpeg has a concat protocol designed specifically for concatenation, with
examples in the documentation.
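For example, assuming two MPEG-PS inputs (file names are placeholders):

ffmpeg -i "concat:input1.mpg|input2.mpg" -c copy output.mpg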

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
video by merely concatenating the files containing them.

Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble cat command (or the
equally humble copy under Windows), and finally transcoding back to your
format of choice.
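A sketch of that round trip, with placeholder file names and an arbitrary quantizer setting:

ffmpeg -i input1.avi -qscale:v 2 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 2 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > all.mpg
ffmpeg -i all.mpg -qscale:v 2 output.avi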

Similarly, the yuv4mpegpipe format, and the raw video, raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
When using multiple yuv4mpegpipe(s), the first line needs to be discarded
from all but the first stream. This can be accomplished by piping through
tail as seen below. Note that when piping through tail you
must use command grouping, { ;}, to background properly.

For example, let’s say we want to concatenate two FLV files into an
output.flv file:
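A simplified, video-only sketch (file names are placeholders; a complete solution would also route the audio through raw pipes in the same way):

mkfifo temp1.v temp2.v all.v
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; } &
cat temp1.v temp2.v > all.v &
ffmpeg -f yuv4mpegpipe -i all.v -an output.flv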

VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles appear only later in the file,
they will not be detected during that initial scan.

Some applications, including the ffmpeg command-line tool, can only
work with streams that were detected during the initial scan; streams that
are detected later are ignored.

The size of the initial scan is controlled by two options: probesize
(default ~5 MB) and analyzeduration (default 5,000,000 µs = 5 s). For
the subtitle stream to be detected, both values must be large enough.
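For example, to raise both limits enough to catch a late-starting subtitle stream (the values, file names, and output container are illustrative; note these are input options and must come before -i):

ffmpeg -analyzeduration 100000000 -probesize 100000000 -i input.vob -map 0 -c copy output.mkv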

The -sameq option meant "same quantizer", and made sense only in a
very limited set of cases. Unfortunately, a lot of people mistook it for
"same quality" and used it in places where it did not make sense: it had
roughly the expected visible effect, but achieved it in a very inefficient
way.

Each encoder has its own set of options to set the quality-vs-size balance.
Use the options for the encoder you are using to set the quality level to a
point acceptable for your tastes. The most common options to do that are
-qscale and -qmax, but you should peruse the documentation
of the encoder you chose.
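For example, with the native MPEG-4 encoder (file names and the quantizer value are placeholders):

ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 output.avi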

A lot of video codecs and formats can store the aspect ratio of the
video: this is the ratio between the width and the height of either the full
image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
SAR.

Most still image processing works with square pixels, i.e. 1:1 SAR, but a lot
of video standards, especially those from the analog-to-digital transition era,
use non-square pixels.

Most processing filters in FFmpeg handle the aspect ratio to avoid
stretching the image: cropping adjusts the DAR to keep the SAR constant,
scaling adjusts the SAR to keep the DAR constant.

If you want to stretch, or “unstretch”, the image, you need to override the
information with the
setdar or setsar filters.

Do not forget to examine carefully the original video to check whether the
stretching comes from the image or from the aspect ratio information.

For example, to fix a badly encoded EGA capture, use the following commands,
either the first one to upscale to square pixels or the second one to set
the correct aspect ratio or the third one to avoid transcoding (may not work
depending on the format / codec / player / phase of the moon):
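A sketch of the three approaches, assuming a 640x350 EGA capture and placeholder file names:

ffmpeg -i ega_capture.nut -vf scale=640:480,setsar=1 ega_square.nut
ffmpeg -i ega_capture.nut -vf setdar=4/3 ega_anamorphic.nut
ffmpeg -i ega_capture.nut -aspect 4/3 -c copy ega_overridden.nut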

ffmpeg normally checks the console input, for entries like "q" to stop
and "?" to give help, while performing operations. ffmpeg does not have a way of
detecting when it is running as a background task.
When it checks the console input, that can cause the process running ffmpeg
in the background to suspend.

To prevent those input checks, allowing ffmpeg to run as a background task,
use the -nostdin option
in the ffmpeg invocation. This is effective whether you run ffmpeg in a shell
or invoke ffmpeg in its own process via an operating system API.

As an alternative, when you are running ffmpeg in a shell, you can redirect
standard input to /dev/null (on Linux and macOS)
or NUL (on Windows). You can do this redirect either
on the ffmpeg invocation, or from a shell script which calls ffmpeg.
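For example (file names are placeholders):

ffmpeg -nostdin -i input.mp4 output.mkv &
ffmpeg -i input.mp4 output.mkv < /dev/null &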

The message "tty output" notwithstanding, the problem here is that
ffmpeg normally checks the console input when it runs. The operating system
detects this, and suspends the process until you can bring it to the
foreground and attend to it.

FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read "Programming Religion".

The build process creates ffmpeg_g, ffplay_g, etc. which
contain full debug information. Those binaries are stripped to create
ffmpeg, ffplay, etc. If you need the debug information, use
the *_g versions.

Yes, as long as the code is optional and can easily and cleanly be placed
under #if CONFIG_GPL without breaking anything. So, for example, a new codec
or filter would be OK under GPL while a bug fix to LGPL code would not.

FFmpeg builds static libraries by default. In static libraries, dependencies
are not handled. That has two consequences. First, you must specify the
libraries in dependency order: -lavdevice must come before
-lavformat, -lavutil must come after everything else, etc.
Second, external libraries that are used in FFmpeg have to be specified too.

An easy way to get the full list of required libraries in dependency order
is to use pkg-config.
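For example, a hypothetical program could be linked like this (add --static to the pkg-config call for a fully static FFmpeg build):

gcc -o player player.c $(pkg-config --cflags --libs libavformat libavcodec libavutil)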

FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes using extern "C".
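For example, a C++ source file could wrap the includes like this:

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}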

Even though it is peculiar in being network oriented, RTP is a container like any
other. You have to demux RTP before feeding the payload to libavcodec.
In this specific case please look at RFC 4629 to see how it should be done.

r_frame_rate is NOT the average frame rate; it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then r_frame_rate
will be 150 (it is the least common multiple).
If you are looking for the average frame rate, see AVStream.avg_frame_rate.

Do you happen to have a ~ character in the samples path to indicate a
home directory? The value is used in ways where the shell cannot expand it,
causing FATE not to find the files. Just replace ~ with the full path.