You can specify the codec type (mjpeg), the frame size (1280x720), and the frame rate (15 fps) that you want the device to give you. Note that, in this instance, the camera can deliver a higher total frame rate/size if you request mjpeg than if you request raw frames:
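For example, a sketch of such a capture command; the device name "Integrated Camera" is a placeholder, so substitute the name your system reports (e.g. via -list_devices true):

```shell
# Request mjpeg at 1280x720, 15 fps from the capture device
# ("Integrated Camera" is a placeholder device name):
ffmpeg -f dshow -vcodec mjpeg -video_size 1280x720 -framerate 15 \
       -i video="Integrated Camera" out.mkv
```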

You can specify "-vcodec copy" to stream copy the video instead of re-encoding it, provided the device can give you the data in a pre-encoded format, like mjpeg in this instance.
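A minimal sketch of stream copying, assuming the device can deliver mjpeg (the device name is again a placeholder):

```shell
# Request mjpeg from the device, then copy the stream as-is
# (the output -vcodec copy avoids re-encoding):
ffmpeg -f dshow -vcodec mjpeg -i video="Integrated Camera" -vcodec copy out.avi
```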

Also note that the input string is in the format video=<video device name>:audio=<audio device name>. It is possible to use two separate inputs (like -f dshow -i audio=foo -f dshow -i video=bar), but limited tests have shown better synchronization when both are combined in the same input.
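A sketch of capturing both devices through a single dshow input (both device names are placeholders):

```shell
# Audio and video in one dshow input (better A/V sync in limited tests):
ffmpeg -f dshow -i video="Integrated Camera":audio="Microphone (Realtek)" out.mkv
```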

Also note that you can have at most two streams at once (one audio and one video). Ask if you want this improved.

See the FFmpeg dshow input device documentation for more dshow options you can specify. For instance, you can decrease latency on audio devices, or select a video device by "index" if two devices share the same name, etc.
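A sketch combining two of those options (device names are placeholders; the dshow options audio_buffer_size and video_device_number are documented in the dshow input device documentation):

```shell
# Lower audio latency with a smaller audio buffer (in milliseconds), and
# pick the second device among those sharing a name via video_device_number:
ffmpeg -f dshow -audio_buffer_size 50 -video_device_number 1 \
       -i video="Camera":audio="Microphone" out.mkv
```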

Buffering

By default FFmpeg captures frames from the input and then does whatever you told it to do, for instance re-encoding them and saving them to an output file. By default, if it receives a frame "too early" (while the previous frame isn't finished yet), it will discard that frame so that it can keep up with the real-time input. You can adjust this by setting the -rtbufsize parameter, though note that if your encoding process can't keep up, eventually you'll still start losing frames just the same (and using a buffer at all can introduce a bit of latency). It may still be helpful to specify some buffer, however, since otherwise frames may be needlessly dropped.
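A sketch of setting the buffer (the size and device name are placeholders; pick a size appropriate to your memory and frame rate):

```shell
# Allow up to ~100 MB of buffered frames before FFmpeg starts dropping them:
ffmpeg -f dshow -rtbufsize 100M -i video="Integrated Camera" out.mp4
```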

See StreamingGuide for some tips on tweaking encoding (the latency and CPU usage sections). For instance, you could save the capture with a very fast codec, then re-encode it later.
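A sketch of that two-step approach, assuming an x264 build (device name and filenames are placeholders):

```shell
# Step 1: capture with a very fast, low-CPU encode...
ffmpeg -f dshow -i video="Integrated Camera" \
       -vcodec libx264 -preset ultrafast capture.mkv

# Step 2: ...then re-encode offline later, trading time for compression:
ffmpeg -i capture.mkv -vcodec libx264 -preset slow -crf 22 final.mkv
```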

Troubleshooting

If you have a video capture card (ex: AverMedia, and possibly some BlackMagic cards, though those may have a separate, unrelated problem; also some BlackMagic cards don't have the right inputs set up, so ask on the zeranoe forum), it may not work (yet) out of the box with FFmpeg, as FFmpeg presently lacks crossbar support. The current workaround is to install the AmarecTV software, which presents the capture card as DirectShow devices, and then use those AmarecTV DirectShow devices as input to FFmpeg.
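A sketch of the last step, assuming the AmarecTV virtual device registers under a name like "AmaRec Video Capture" (this name is an assumption; check the actual name with -list_devices true):

```shell
# List the DirectShow devices AmarecTV exposes, then capture from one:
ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -i video="AmaRec Video Capture" out.mkv
```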

Related

AviSynth Input

FFmpeg can also take DirectShow input by creating an AviSynth file (.avs file) that itself gets its input from a GraphEdit file, where the GraphEdit file exposes a pin of your capture source (or of any filter, really), e.g. a file yo.avs with this content:
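The original snippet does not appear here; a minimal sketch of such a script, assuming your capture graph was saved from GraphEdit as yo.grf and that AviSynth's DirectShowSource plugin is available:

```
# yo.avs -- hypothetical example; assumes the graph file is named yo.grf.
# DirectShowSource loads the graph and exposes its output pin(s) to AviSynth:
DirectShowSource("yo.grf", fps=25, audio=true)
```

You would then feed this script to FFmpeg as a normal input, e.g. ffmpeg -i yo.avs out.mkv.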

Running ffmpeg.exe without opening a console window

If you want to run your ffmpeg "from a gui" without having it pop up a console window that spits out all of ffmpeg's console output, a few things may help:

If you can start your program with a windowless launcher like rubyw.exe or javaw.exe, then all command line output (including that of child processes) is basically not attached to a console.

If your program has an option to run a child program "hidden" or the like, that might work. Redirecting stderr and stdout to something you read might also work (though it can be tricky, since you may need to read from both pipes in different threads, etc.).
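A minimal sketch of the redirection approach (ffmpeg writes its diagnostics to stderr; the file names are placeholders):

```shell
# Send ffmpeg's console chatter to a log file instead of the console:
ffmpeg -i input.avi output.mp4 2> ffmpeg_log.txt
```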

ffdshow tryouts

ffdshow tryouts is a separate project that basically wraps FFmpeg's core libraries (libavcodec, etc.) and presents them as DirectShow filter wrappers that your normal Windows applications can use for decoding video, etc. It is not directly related to FFmpeg at all.