Before you start a Flash video Web project, you must balance a variety of factors to ensure that you start with the highest-quality, smallest video files possible. Join James Gonzalez for a review of the best practices for achieving the best possible video image quality and viewing experience.

When you encode on-demand Flash video, you must balance a variety of factors
to achieve the best possible image quality and viewing experience. These factors
include the amount of motion in the subject, file size, target bandwidth, frame
rate, keyframe interval, and pixel dimensions of the video.

You can specify many of these factors either before or during encoding, so in
this article I provide the best practices for capturing your footage and
compressing the video afterward.

Two factors play a significant role in the encoding process: source quality
and frame motion. Starting with high-quality source clips is critical to the
whole process of delivering web-based video. Let me address source quality and
frame motion issues first and then move on to some specific encoding tips.

How Video Compression Works

To understand why it's so important to start with good-quality video and to
avoid unnecessary motion between and within frames, you need a basic
understanding of the way video is compressed.

Video is basically a three-dimensional array of
color pixels.
Two dimensions serve as spatial (horizontal and vertical) directions of the
moving pictures, and one dimension represents the
time domain
(temporal). A
frame
is a set of all pixels that correspond to a single point in time. Basically, a
frame is the same as a still picture, and video is the quick display of one
frame after the next.
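The idea above can be sketched in a few lines of Python. This is purely illustrative (the dimensions and NumPy representation are my own, not from the article): a grayscale "video" is just an array with one temporal axis and two spatial axes, and a frame is a single slice along the time axis.

```python
import numpy as np

# A toy grayscale "video": 30 frames, each 120 x 160 pixels.
# Real color video adds a channel dimension, but the structure is the
# same: one temporal axis plus two spatial (vertical, horizontal) axes.
frames, height, width = 30, 120, 160
video = np.zeros((frames, height, width), dtype=np.uint8)

# A frame is simply the set of all pixels at one point in time --
# a single slice along the time axis, i.e. a still picture.
frame_0 = video[0]
print(frame_0.shape)  # (120, 160)
```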

Video data often contains information that is repeated from frame to frame.
This phenomenon is referred to as spatial and temporal
redundancy.
These similarities can be encoded by merely registering differences within a
frame (spatial) and/or between frames (temporal). Compression works better if
most pixels stay the same for a number of frames. Compression also takes
advantage of the fact that the human eye cannot distinguish small differences
in color as easily as it can distinguish changes in brightness. Because of this
inability to distinguish small color differences, similar areas of color can be
"averaged out," so only the changes from one frame to the next are
encoded.
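A tiny sketch makes the temporal case concrete. The frames and values below are invented for illustration: instead of storing a whole new frame, an encoder can register only the pixels that differ from the previous frame.

```python
import numpy as np

# Two consecutive frames of a toy 4x4 grayscale video.
prev = np.full((4, 4), 10, dtype=np.int16)
curr = prev.copy()
curr[1, 2] = 50  # only one pixel changes between the two frames

# Temporal redundancy: rather than storing all 16 pixels of the new
# frame, store only the differences from the previous frame.
delta = curr - prev
changed = np.count_nonzero(delta)
print(changed)  # only 1 of 16 pixels needs to be encoded
```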

One of the most powerful techniques for compressing video is called
interframe compression, which works by comparing each frame in the
video with the previous one. If the frame contains areas in which nothing has
moved, no new data needs to be captured, and the system simply issues a command
that copies that part of the previous frame into the next one. You can see why
this type of situation can create great compression results!
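As a rough sketch of this idea (the block size, token scheme, and `encode_frame` helper are hypothetical simplifications, not how any real codec is implemented), each frame can be split into blocks, and any block identical to the previous frame is replaced by a cheap "copy" command:

```python
import numpy as np

BLOCK = 2  # toy block size; real codecs use larger units such as 16x16

def encode_frame(prev, curr, block=BLOCK):
    """Hypothetical interframe encoder: for each block, emit either a
    cheap 'copy' token (nothing moved there) or the raw block data."""
    tokens = []
    h, w = curr.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            cur_blk = curr[y:y + block, x:x + block]
            if np.array_equal(cur_blk, prev[y:y + block, x:x + block]):
                tokens.append("copy")          # reuse the previous frame
            else:
                tokens.append(cur_blk.copy())  # send the new pixels
    return tokens

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[0, 0] = 255  # motion in the top-left block only

tokens = encode_frame(prev, curr)
copies = sum(isinstance(t, str) for t in tokens)
print(copies)  # 3 of the 4 blocks are plain copy commands
```

In a frame where little moves, nearly every block becomes a copy command, which is why static scenes compress so much better than busy ones.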

Today’s video compressors also encode only certain frames in full. These
complete frames, called keyframes, serve as reference points from which the
in-between (difference) frames are calculated and "rebuilt" during
playback, thus also drastically reducing the final size of the
video.

With all this manipulation of the original video signal, you can imagine how
quickly and easily a video image can be degraded. This is why it is important,
when applying any kind of compression, to always start with the highest
quality video possible.