
Abstract:

A video decoder includes an entropy decoding device that includes a first
processor that generates entropy decoded (EDC) data from an encoded video
signal that includes a plurality of video layers. A general video
decoding device includes a second processor that generates a decoded
video signal from the EDC data, wherein the general video decoding device
includes a neighbor management module, a decode motion compensation
module, an inverse intra-prediction module, an inverse
transform/quantization module, a deblocking filter module, and a
resampling module.

Claims:

1. A video decoder comprising: an entropy decoding device that includes a
first processor that generates entropy decoded (EDC) data from an encoded
video signal that includes slice header data, run length data, motion
vector differential data, and macroblock header data, wherein the encoded
video signal includes a plurality of video layers; a general video
decoding device, coupled to the entropy decoding device, that includes a
second processor that generates a decoded video signal from the EDC data,
wherein the general video decoding device includes: a neighbor management
module that generates motion vector data, macroblock mode data and
deblock strength data, based on the motion vector differential data and
the macroblock header data; a decode motion compensation module, coupled
to the neighbor management module, that generates inter-prediction data
based on the motion vector data when the macroblock mode data indicates
an inter-prediction mode; an inverse intra-prediction module, coupled to
the neighbor management module, that generates intra-prediction data when
the macroblock mode data indicates an intra-prediction mode; an inverse
transform/quantization module, coupled to the neighbor management module,
the decode motion compensation module and the inverse intra-prediction
module, that generates residual data based on the run length data and
that generates reconstructed picture data based on the residual data and
on the inter-prediction data when the macroblock mode data indicates the
inter-prediction mode and based on the residual data and on the
intra-prediction data when the macroblock mode data indicates the
intra-prediction mode; a deblocking filter module, coupled to the inverse
transform/quantization module and the neighbor management module, that
generates filtered picture data from the reconstructed picture data,
based on the deblock strength data; and a resampling module, coupled to
the deblocking filter module and the inverse transform/quantization
module, that generates resampled residual data based on the residual
data, and that generates the decoded video signal based on the filtered
picture data and on the resampled residual data.

2. The video decoder of claim 1 wherein the resampling module generates
the resampled residual data based on a difference in resolution between a
current layer and a target layer of the plurality of video layers.

3. The video decoder of claim 2 wherein the resampling module upscales
the residual data to generate the resampled residual data at a resolution
of the target layer.

4. The video decoder of claim 1 wherein the resampling module generates
resampled filtered picture data from the filtered picture data, based on
a difference in resolution between a current layer and a target layer
of the plurality of video layers.

5. The video decoder of claim 4 wherein the resampling module upscales
the filtered picture data to generate the resampled filtered picture data
at a resolution of the target layer.

6. The video decoder of claim 5 wherein the resampling module generates a
picture of the decoded video signal by combining the resampled filtered
picture data of at least one layer of the plurality of video layers with
the filtered picture data of the target layer.

7. The video decoder of claim 1 wherein the encoded video signal is
encoded in accordance with at least one of: an H.264 encoding standard
and a video coding 1 (VC-1) encoding standard.

8. A method comprising: generating entropy decoded (EDC) data from an
encoded video signal via a first processor, wherein the EDC data includes
slice header data, run length data, motion vector differential data, and
macroblock header data, wherein the encoded video signal includes a
plurality of video layers; generating a decoded video signal from the EDC
data via a second processor by: generating motion vector data, macroblock
mode data and deblock strength data, based on the motion vector
differential data and the macroblock header data; generating
inter-prediction data based on the motion vector data when the macroblock
mode data indicates an inter-prediction mode; generating intra-prediction
data when the macroblock mode data indicates an intra-prediction mode;
generating residual data based on the run length data; generating
resampled residual data based on the residual data; generating
reconstructed picture data based on the residual data and on the
inter-prediction data when the macroblock mode data indicates the
inter-prediction mode; generating the reconstructed picture data based on
the residual data and on the intra-prediction data when the macroblock
mode data indicates the intra-prediction mode; generating filtered
picture data from the reconstructed picture data, based on the deblock
strength data; and generating the decoded video signal based on the
filtered picture data and the resampled residual data.

9. The method of claim 8 wherein generating the resampled residual data
includes: generating the resampled residual data based on a difference in
resolution between a current layer and a target layer of the plurality of
video layers.

10. The method of claim 9 wherein generating the resampled residual data
further includes upscaling the residual data to generate the resampled
residual data at a resolution of the target layer.

11. The method of claim 8 wherein generating the decoded video signal
includes: generating resampled filtered picture data from the filtered
picture data, based on a difference in resolution between a current
layer and a target layer of the plurality of video layers.

12. The method of claim 11 wherein generating the decoded video signal
further includes: upscaling the filtered picture data to generate the
resampled filtered picture data at a resolution of the target layer.

13. The method of claim 12 wherein generating the decoded video signal
further includes: generating a picture of the decoded video signal by
combining the resampled filtered picture data of at least one layer of
the plurality of video layers with the filtered picture data of the
target layer.

14. The method of claim 8 wherein the encoded video signal is encoded in
accordance with at least one of: an H.264 encoding standard and a video
coding 1 (VC-1) encoding standard.

Description:

CROSS REFERENCE TO RELATED PATENTS

[0001] The present U.S. Utility Patent Application claims priority
pursuant to 35 U.S.C. §119(e) to the provisionally filed application
entitled, "VIDEO DECODER WITH GENERAL VIDEO DECODING DEVICE AND METHODS
FOR USE THEREWITH," (Attorney Docket No. VIXS183), having U.S.
Provisional Patent Application Ser. No. 61/449,461, filed on Mar. 4,
2011, pending, which is hereby incorporated herein by reference in its
entirety and made part of the present U.S. Utility Patent Application for
all purposes.

TECHNICAL FIELD OF THE INVENTION

[0002] The present invention relates to coding used in devices such as
video decoders for video signals.

DESCRIPTION OF RELATED ART

[0003] Video encoding has become an important issue for modern video
processing devices. Robust encoding algorithms allow video signals to be
transmitted with reduced bandwidth and stored in less memory. However,
the accuracy of these encoding methods faces the scrutiny of users that
are becoming accustomed to greater resolution and higher picture quality.
Standards have been promulgated for many encoding methods, including the
H.264 standard, also referred to as MPEG-4 Part 10 or Advanced Video
Coding (AVC). While this standard sets forth many powerful
techniques, further improvements are possible to improve the performance
and speed of implementation of such methods. The video signal encoded by
these encoding methods must be similarly decoded for playback on most
video display devices.

[0004] The Moving Picture Experts Group (MPEG) has presented a Scalable
Video Coding (SVC) Annex G extension to H.264/MPEG-4 AVC for
standardization. SVC provides for encoding of video bitstreams that
include subset bitstreams that can represent lower spatial resolution,
lower temporal resolution or otherwise lower quality video. A subset
bitstream can be derived by dropping packets from the total bitstream.
SVC streams allow end devices to flexibly scale the temporal resolution,
spatial resolution or video fidelity, for example, to match the
capabilities of a particular device.

[0005] Efficient and fast encoding and decoding of video signals is
important to the implementation of many video devices, particularly video
devices that are destined for home use. Further limitations and
disadvantages of conventional and traditional approaches will become
apparent to one of ordinary skill in the art through comparison of such
systems with the present invention.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0006] FIGS. 1-3 present pictorial diagram representations of various
video devices in accordance with embodiments of the present invention.

[0007] FIG. 4 presents a block diagram representation of a video system in
accordance with an embodiment of the present invention.

[0008] FIG. 5 presents a block diagram representation of a video decoder
102 in accordance with an embodiment of the present invention.

[0009] FIG. 6 presents a block diagram representation of a pipeline
processing of video signals in accordance with an embodiment of the
present invention.

[0010] FIG. 7 presents a block diagram representation of an entropy
decoding device 140 in accordance with an embodiment of the present
invention.

[0011] FIG. 8 presents a block diagram representation of a plurality of
video layers in accordance with an embodiment of the present invention.

[0012] FIG. 9 presents a block diagram representation of a general video
decoder 150 in accordance with an embodiment of the present invention.

[0013] FIG. 10 presents a block diagram representation of a decoding
process in accordance with an embodiment of the present invention.

[0014] FIG. 11 presents a block diagram representation of a decoding
process in accordance with another embodiment of the present invention.

[0015] FIG. 12 presents a block diagram representation of a video
distribution system 375 in accordance with an embodiment of the present
invention.

[0016] FIG. 13 presents a block diagram representation of a video storage
system 179 in accordance with an embodiment of the present invention.

[0017] FIG. 14 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention.

[0018] FIG. 15 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention.

[0019] FIG. 16 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention.

[0020] FIG. 17 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention.

[0021] FIG. 18 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION INCLUDING THE PRESENTLY PREFERRED
EMBODIMENTS

[0022] FIGS. 1-3 present pictorial diagram representations of various
video devices in accordance with embodiments of the present invention. In
particular, set top box 10 with built-in digital video recorder
functionality or a stand alone digital video recorder, television or
monitor 20 and portable computer 30 illustrate electronic devices that
incorporate a video decoder in accordance with one or more features or
functions of the present invention. While these particular devices are
illustrated, the present invention can be implemented in any device that
is capable of decoding and/or transcoding video content in accordance
with the methods and systems described in conjunction with FIGS. 4-18 and
the appended claims.

[0023] FIG. 4 presents a block diagram representation of a video system
in accordance with an embodiment of the present invention. In
particular, this video device includes a receiving module 100, such as a
server, cable head end, television receiver, cable television receiver,
satellite broadcast receiver, broadband modem, 3G transceiver or other
information receiver or transceiver that is capable of receiving a
received signal 98 and generating a video signal 110 that has been
encoded via a video encoding format. Video processing device 125 includes
video decoder 102 and is coupled to the receiving module 100 to decode or
transcode the video signal for storage, editing, and/or playback in a
format corresponding to video display device 104. Video processing device
125 can include set top box 10 with built-in digital video recorder
functionality or a stand alone digital video recorder. While shown as
separate from video display device 104, video processing device 125,
including video decoder 102, can be incorporated in television or monitor
20, portable computer 30, or another device that includes a video
decoder, such as video decoder 102.

[0024] In an embodiment of the present invention, the received signal 98
is a broadcast video signal, such as a television signal, high definition
television signal, enhanced definition television signal or other
broadcast video signal that has been transmitted over a wireless medium,
either directly or through one or more satellites or other relay stations
or through a cable network, optical network or other transmission
network. In addition, received signal 98 can be generated from a stored
video file, played back from a recording medium such as a magnetic tape,
magnetic disk or optical disk, and can include a streaming video signal
that is transmitted over a public or private network such as a local area
network, wide area network, metropolitan area network or the Internet.

[0025] Video signal 110 can include a digital video signal complying with
a digital video codec standard such as H.264, MPEG-4 Part 10 Advanced
Video Coding (AVC) including a SVC signal, an encoded stereoscopic video
signal having a base layer that includes a 2D compatible base layer and
an enhancement layer generated by processing in accordance with an MVC
extension of MPEG-4 AVC, or another digital format such as a Motion
Picture Experts Group (MPEG) format (such as MPEG1, MPEG2 or MPEG4),
Quicktime format, Real Media format, Windows Media Video (WMV) or Audio
Video Interleave (AVI), video coding one (VC-1), etc.

[0026] Video display devices 104 can include a television, monitor,
computer, handheld device or other video display device that creates an
optical image stream either directly or indirectly, such as by
projection, based on the processed video signal 112 either as a streaming
video signal or by playback of a stored digital video file.

[0027] FIG. 5 presents a block diagram representation of a video decoder
102 in accordance with an embodiment of the present invention. Video
decoder 102 includes an entropy decoding device 140 having a processing
module 142 that generates entropy decoded (EDC) data 146 from an encoded
video signal such as video signal 110. General video decoding device 150
includes a processing module 152 that generates a decoded video signal,
such as processed video signal 112, from the EDC data 146. The EDC data
146 can include run length data, motion vector differential data, and
macroblock header data and/or other data that results from the entropy
decoding of an encoded video signal. In particular, the encoded video
signal can include a plurality of video layers, such as an MVC
stereoscopic signal, an SVC signal or other multi-layer video signal and
the EDC data 146 can include slice header data corresponding to at least
one of the plurality of video layers.

[0028] In an embodiment of the present invention, the entropy decoding
device 140 and the general video decoding device 150 operate
contemporaneously in a pipelined process where the general video decoding
device 150 generates a first portion of the decoded video signal during
at least a portion of time that the entropy decoding device 140 generates
EDC data 146 from a second portion of the encoded video signal.
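
A minimal sketch of this pipelined operation, assuming a queue joining two worker threads that stand in for the entropy decoding device and the general video decoding device; the entropy_decode and general_decode callables are placeholders, not the actual devices.

```python
import threading
import queue

def run_pipeline(encoded_portions, entropy_decode, general_decode):
    """Stage 1 (entropy decoding) produces EDC data for the next portion
    while stage 2 (general video decoding) is still consuming the
    previous one, so the two stages run contemporaneously."""
    edc_queue = queue.Queue(maxsize=2)  # small buffer between the stages
    decoded = []

    def stage1():
        for portion in encoded_portions:
            edc_queue.put(entropy_decode(portion))  # e.g. one picture at a time
        edc_queue.put(None)                         # end-of-stream marker

    def stage2():
        while (edc := edc_queue.get()) is not None:
            decoded.append(general_decode(edc))

    t1 = threading.Thread(target=stage1)
    t2 = threading.Thread(target=stage2)
    t1.start(); t2.start(); t1.join(); t2.join()
    return decoded

# Usage with trivial stand-in stages:
# run_pipeline(["pic0", "pic1"], lambda p: p + ":edc", lambda e: e + ":decoded")
```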

[0029] The processing modules 142 and 152 can each be implemented using a
single processing device or a plurality of processing devices. Such a
processing device may be a microprocessor, co-processor,
micro-controller, digital signal processor, microcomputer, central
processing unit, field programmable gate array, programmable logic
device, state machine, logic circuitry, analog circuitry, digital
circuitry, and/or any device that manipulates signals (analog and/or
digital) based on operational instructions that are stored in a memory,
such as memory modules 144 and 154. These memories may each be a single
memory device or a plurality of memory devices. Such a memory device can
include a hard disk drive or other disk drive, read-only memory, random
access memory, volatile memory, non-volatile memory, static memory,
dynamic memory, flash memory, cache memory, and/or any device that stores
digital information. Note that when the processing modules 142 and 152
implement one or more of their functions via a state machine, analog
circuitry, digital circuitry, and/or logic circuitry, the memory storing
the corresponding operational instructions may be embedded within, or
external to, the circuitry comprising the state machine, analog
circuitry, digital circuitry, and/or logic circuitry.

[0032] As shown, the EDC processing (syntax decoding) and GVD processing
(non-syntax related coding) are performed contemporaneously, in parallel,
and in a pipelined fashion. In particular, the Nth portion of the decoded
video signal is processed from the Nth EDC data contemporaneously by the
GVD device 150 during at least a portion of the time that the EDC device
140 generates the (N+1)th EDC data from the (N+1)th portion of the
encoded video signal.

[0033] In an embodiment of the present invention, the portions of video
signal 110 and processed video signal 112 are pictures (frames/fields) of
the video signals; however, larger portions, such as a group of pictures,
or smaller portions, such as macroblocks or groups of macroblocks, could
likewise be employed.

[0034] FIG. 7 presents a block diagram representation of an entropy
decoding device 140 in accordance with an embodiment of the present
invention. In particular, entropy decoding device 140 includes a
processing module 142 and a memory module 144 as described in conjunction
with FIG. 5. In addition, the entropy decoding device 140 further
includes a bus 121, a signal interface 148, entropy decoding module 186,
reordering module 188 and optional slice dependency module 190. In
operation, the signal interface 148 receives video signal 110 and
optionally buffers and preprocesses the video signal for processing by
the other modules of entropy decoding device 140. Similarly, the EDC data
generated via processing by the other modules of entropy decoding device
140 is optionally buffered, such as via a ring buffer or other buffer
structure implemented in conjunction with memory locations of memory
module 144 and formatted for output as EDC data 146 to interface with
general video decoder 150.

[0036] In an embodiment of the present invention, the entropy decoding
module 186, reordering module 188 and slice dependency module 190 are
implemented using software stored in memory module 144 and executed via
processing module 142. In alternative embodiments the entropy decoding
module 186, reordering module 188 and slice dependency module 190 are
optionally implemented via other hardware, software or firmware. Thus,
while a particular bus architecture is shown that represents the
functionality of communication between the various modules of entropy
decoding device 140, other architectures can be implemented in accordance
with the broad scope of the present invention.

[0037] As discussed in conjunction with FIG. 5, the encoded video signal
can include a plurality of video layers, such as an MVC stereoscopic
signal, an SVC signal or other multi-layer video signal and the EDC data
146 can include slice header data corresponding to at least one of the
plurality of video layers.

[0038] FIG. 8 presents a block diagram representation of a plurality of M
video layers of an encoded video signal, such as video signal 110, in
accordance with an embodiment of the present invention.

[0039] Optional slice dependency module 190 operates on these video layers
to generate slice dependency data. This slice dependency data is used by
the processing module 142 to control the entropy decoding of a selected
subset of the plurality of video layers, based on the slice dependency
data. In an embodiment of the present invention, the slice dependency
module 190 operates to decode the slice headers of each of the video
layers before the slice data is entropy decoded. The slice dependency
module 190 extracts dependency data from a slice header for each of the
plurality of video layers that indicates the dependency of each layer.
This dependency data includes, for example, an indication of the video
layers on which each video layer is directly dependent, as well as the
video layers on which each layer is indirectly dependent.

[0050] When the decoder 102 is decoding a target layer, the slice
dependency data can be used to generate a selected subset of the video
layers required to decode the target layer. Following the example above,
if the target layer is layer 4, only a subset of the layers that includes
layers 4, 3 and 1 needs to be EDC and GVD decoded. Because layer 4 is not
dependent on layer 2, either directly or indirectly, this layer can be
excluded from the selected subset of layers and need not be EDC or GVD
decoded. In another example, where layer 2 is the target layer, only a
subset of the layers that includes layers 2 and 1 needs to be EDC and GVD
decoded. Layers 4 and 3 can be excluded from the selected subset of
layers and need not be EDC or GVD decoded.

[0051] It should also be noted that the slice dependency data generated by
slice dependency module 190 indicates an ordering of the layer decoding.
In particular, the layers are decoded in reverse order of their
dependency. In the example above, where the target layer 4 is selected,
the layers are EDC and GVD decoded in the order 1-3-4. Similarly, where
the target layer 2 is selected, the layers are EDC and GVD decoded in the
order 1-2. This saves the memory space and decoding time associated with
the layers that are not necessary to the final decoded video signal.
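
Under the assumption that the slice dependency data can be reduced to a mapping from each layer to the layers it directly depends on (matching the example above, where layer 4 depends on layer 3, and layers 3 and 2 each depend on layer 1), the following sketch derives both the selected subset and the reverse-dependency decode ordering.

```python
def select_decode_order(direct_deps, target_layer):
    """direct_deps maps each layer to the layers it directly depends on,
    e.g. {4: [3], 3: [1], 2: [1], 1: []}.  Returns the layers needed to
    decode target_layer, ordered so that dependencies are decoded first
    (i.e. in reverse order of dependency)."""
    needed, order = set(), []

    def visit(layer):
        if layer in needed:
            return
        needed.add(layer)
        for dep in direct_deps.get(layer, []):
            visit(dep)              # recurse into indirect dependencies
        order.append(layer)         # post-order: dependencies come first

    visit(target_layer)
    return order

# Matches the examples in the text: target layer 4 yields the order 1-3-4,
# excluding layer 2; target layer 2 yields the order 1-2, excluding 3 and 4.
assert select_decode_order({4: [3], 3: [1], 2: [1], 1: []}, 4) == [1, 3, 4]
assert select_decode_order({4: [3], 3: [1], 2: [1], 1: []}, 2) == [1, 2]
```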

[0052] FIG. 9 presents a block diagram representation of a general video
decoder 150 in accordance with an embodiment of the present invention. In
particular, general video decoding device 150 includes a processing
module 152 and a memory module 154 as described in conjunction with FIG.
5. In addition, the general video decoding device 150 further includes a
bus 221, a signal interface 158, decode motion compensation module 204,
neighbor management module 218, deblocking filter module 222, inverse
transform and quantization module 220, inverse intra prediction module
211 and optional resampling module 224. In operation, the signal
interface 158 receives EDC data 146 and optionally buffers and
preprocesses the EDC data 146 for processing by the other modules of
general video decoding device 150. Similarly, the decoded video signal
generated via processing by the other modules of general video decoding
device 150 is optionally buffered, such as via a ring buffer or other
buffer structure implemented in conjunction with memory locations of
memory module 154 and formatted for output as processed video signal 112.

[0054] In operation, neighbor management module 218 generates motion
vector data, macroblock mode data and deblock strength data, based on the
motion vector differential data and the macroblock header data. In an
embodiment of the present invention, a data structure, such as a linked
list, array or one or more registers, is used to associate and store
neighbor data for each macroblock of a processed picture. In particular,
the neighbor management module 218 stores the motion vector data for a
group of macroblocks that neighbor a current macroblock and generates the
motion vector data for the current macroblock based on both the
macroblock mode data and the motion vector data for the group of
macroblocks that neighbor the current macroblock. In addition, the
neighbor management module calculates a motion vector magnitude and
adjusts the deblock strength data based on the motion vector magnitude.
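
A simplified sketch of the two neighbor management operations just described: predicting a macroblock's motion vector from stored neighbor data plus the decoded differential, and adjusting deblock strength by motion vector magnitude. The component-wise median predictor and the numeric threshold are illustrative assumptions, not values specified by the text.

```python
import math
from statistics import median

def predict_motion_vector(neighbor_mvs, mv_differential):
    """Predict the current macroblock's MV from neighboring MVs (here a
    component-wise median, in the spirit of H.264 MV prediction) plus the
    decoded motion vector differential."""
    pred_x = median(mv[0] for mv in neighbor_mvs)
    pred_y = median(mv[1] for mv in neighbor_mvs)
    return (pred_x + mv_differential[0], pred_y + mv_differential[1])

def adjust_deblock_strength(base_strength, mv, magnitude_threshold=8.0):
    """Raise the deblock strength when the MV magnitude is large, as the
    text describes; the threshold value is an arbitrary placeholder."""
    magnitude = math.hypot(mv[0], mv[1])
    return base_strength + 1 if magnitude > magnitude_threshold else base_strength

mv = predict_motion_vector([(2, 0), (4, 2), (6, 2)], mv_differential=(1, -1))
print(mv, adjust_deblock_strength(2, mv))   # (5, 1) 2
```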

[0055] The decode motion compensation module generates inter-prediction
data based on the motion vector data when the macroblock mode data
indicates an inter-prediction mode. The inverse intra-prediction module
211, generates intra-prediction data when the macroblock mode data
indicates an intra-prediction mode. The inverse transform/quantization
module 220 generates reconstructed picture data based on the run length
data and on the inter-prediction data when the macroblock mode data
indicates an inter-prediction mode and based on the run length data and
on the intra-prediction data when the macroblock mode data indicates an
intra-prediction mode.

[0056] The deblocking filter module 222 generates the decoded video signal
from the reconstructed picture data, based on the deblock strength data.
In operation, the deblocking filter 222 operates to smooth horizontal and
vertical edges of a block that may correspond to exterior boundaries of a
macroblock of a frame or field of video signal 110 or edges that occur in
the interior of a macroblock. A boundary strength, determined based on
quantization parameters, adjacent macroblock type, et cetera, can vary
the amount of filtering to be performed. In addition, the H.264
standard defines two parameters, α and β, that are used to
determine the strength of filtering on a particular edge. The parameter
α is a boundary edge parameter applied to data that includes
macroblock boundaries. The parameter β is an interior edge parameter
applied to data within a macroblock interior.

[0057] According to the H.264 standard, α and β are selected as
integers within the range [-6, 6] based on the average of the
quantization parameters, QP, of the two blocks adjacent to the edge. In
particular, α and β are increased for large values of QP and
decreased for smaller values of QP. In accordance with the present
invention, however, non-quantization coding parameters such as motion
vector magnitude are used by neighbor management module 218 to generate
deblock strength data that adjusts the values for α and β for
deblocking filter module 222. For instance, when the motion vector
magnitude indicates large motion vectors, e.g. magnitudes above a first
magnitude threshold, a larger value of α can be selected. Further, when
the motion vector magnitude indicates small motion vectors, e.g.
magnitudes below the same or another threshold, a smaller value of α
can be selected.

[0061] FIG. 11 presents a block diagram representation of a decoding
process in accordance with another embodiment of the present invention.
In this embodiment, however, the encoded video signal includes a
plurality of video layers and the EDC data 146 further includes slice
header data 270 corresponding to the plurality of video layers being
processed. As discussed in conjunction with FIGS. 7 and 8, the processing
of a target layer can include processing of layer data for the target
layer and dependent layers, but can exclude processing of other layers
that the target layer does not depend on, either directly or indirectly.
Optional resampling module 224 is included to receive the residual data
278 via buffer 292 from inverse transform and quantization module 220,
and to generate resampled residual data, based on the residual data, that
is passed back to inverse transform and quantization module 220 for use
in generating the current reconstructed frames/fields 264. The resampling
module 224 further generates the decoded video signal, as a combined
picture 228, based on the filtered picture data 226 from deblocking
filter module 222 via buffer 290. Buffers 290 and 292 can be implemented
via a frame buffer or other buffer.

[0062] In operation, the resampling module can upscale buffered filtered
pictures 226 and residual data 278 for dependent layers for combination
with higher layers such as the target layer. In an embodiment of the
present invention, the resampling module 224 generates the resampled
residual data based on a difference in resolution between the current
layer and a target layer of the plurality of layers of the encoded video
signal. In particular, the resampling module 224 upscales the residual
data 278 to generate the resampled residual data at a resolution of the
target layer. In addition, the resampling module 224 generates resampled
filtered picture data from the filtered picture data 226 by upscaling the
filtered picture data from the resolution of the current layer to the
resolution of the target layer. Further, the resampling module 224
generates a combined picture 228 of the decoded video signal by combining
filtered picture data 226 of the target layer with resampled filtered
picture data of each of the dependent layers of the encoded video signal.
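
A sketch of these resampling operations using nearest-neighbor upscaling on plain 2D sample planes; a real decoder would apply the normative SVC upsampling filters, so both helpers below are illustrative simplifications.

```python
def upscale_nearest(plane, target_h, target_w):
    """Upscale a 2D sample plane (list of rows) to the target layer's
    resolution with nearest-neighbor sampling -- a stand-in for the
    standard's upsampling filters."""
    src_h, src_w = len(plane), len(plane[0])
    return [[plane[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)]
            for y in range(target_h)]

def combine_planes(target_plane, resampled_plane):
    """Combine the target layer's filtered picture with a resampled
    lower-layer picture (here a simple sample-wise average)."""
    return [[(a + b) // 2 for a, b in zip(row_t, row_r)]
            for row_t, row_r in zip(target_plane, resampled_plane)]

base = [[10, 20], [30, 40]]              # 2x2 base-layer residual or picture
print(upscale_nearest(base, 4, 4))       # 4x4 resampled data
```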

[0063] In an example of operation, the encoded video signal includes two
layers, a base layer and an enhancement layer. In this example, the video
decoder 102 selects the target layer as the enhancement layer for higher
resolution output. When processing the base layer of a picture, residual
data 278 for the base layer is buffered in buffer 292. The reconstructed
picture for the base layer is generated by inverse transform and
quantization module 220 based on the base layer residual data. This
reconstructed base layer picture is filtered via deblocking filter 222 to
produce a filtered base layer picture that is buffered via buffer 290.

[0064] When the enhancement layer is processed, the resampling module 224
retrieves the base layer residual data from the buffer 292 and generates
upscaled residual data for the base layer that is passed to the combining
module 284. The reconstructed picture for the enhancement layer is
generated by inverse transform and quantization module 220 based on the
upscaled base layer residual data and the enhancement layer residual
data. The reconstructed enhancement layer picture is filtered via
deblocking filter 222 to produce a filtered enhancement layer picture 226
that is buffered via buffer 290. The resampling module 224 upscales the
filtered base layer picture and combines it with the filtered enhancement
layer picture to generate the combined picture 228.
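
The two-layer example can be condensed into the following sketch, in which every callable stands in for a module described above (residual decoding, reconstruction by the inverse transform and quantization module, deblocking, resampling, combining); the toy stand-ins at the bottom exist only so the flow can be executed end to end.

```python
def decode_two_layer_picture(base_edc, enh_edc, residual, reconstruct,
                             deblock, upscale, combine):
    """Follow the base-then-enhancement flow of the example: buffer the
    base-layer residual and filtered picture, upscale both when the
    enhancement layer is processed, and combine for the output picture."""
    base_residual = residual(base_edc)                  # buffered (buffer 292)
    base_picture = deblock(reconstruct(base_residual))  # buffered (buffer 290)

    enh_residual = residual(enh_edc)
    enh_picture = deblock(reconstruct(enh_residual, upscale(base_residual)))

    return combine(enh_picture, upscale(base_picture))  # combined picture 228

# Toy stand-ins so the flow runs end to end (not real decoder operations):
residual    = lambda edc: edc["residual"]
reconstruct = lambda res, up=0: res + up
deblock     = lambda pic: pic            # filtering omitted in the toy
upscale     = lambda data: data * 2      # stands in for resolution upscaling
combine     = lambda enh, base: enh + base

print(decode_two_layer_picture({"residual": 3}, {"residual": 5},
                               residual, reconstruct, deblock, upscale,
                               combine))  # 17
```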

[0065] FIG. 12 presents a block diagram representation of a video
distribution system 375 in accordance with an embodiment of the present
invention. In particular, video signal 110 is transmitted from a video
encoder via a transmission path 122 to a video decoder 102. The video
decoder 102 operates to decode the video signal 110 for display on
display device 12 or 14 or another display device. In an embodiment of the
present invention, video decoder 102 can be implemented in a set-top box,
digital video recorder, router or home gateway. In the alternative,
decoder 102 can optionally be incorporated directly in the display device
12 or 14.

[0066] The transmission path 122 can include a wireless path that operates
in accordance with a wireless local area network protocol such as an
802.11 protocol, a WiMAX protocol, a Bluetooth protocol, etc. Further,
the transmission path can include a wired path that operates in
accordance with a wired protocol such as a Universal Serial Bus protocol,
an Ethernet protocol or other high speed protocol.

[0067] FIG. 13 presents a block diagram representation of a video storage
system 179 in accordance with an embodiment of the present invention. In
particular, device 11 is a set top box with built-in digital video
recorder functionality, a stand alone digital video recorder, a DVD
recorder/player or other device that stores the video signal 110. In this
configuration, device 11 can include video decoder 102 that operates to
decode the video signal 110 when retrieved from storage to generate a
processed video signal 112 in a format that is suitable for display by
video display device 12 or 14. While these particular devices are
illustrated, video storage system 179 can include a hard drive, flash
memory device, computer, DVD burner, or any other device that is capable
of generating, storing, decoding, transcoding and/or displaying the video
content of video signal 110 in accordance with the methods and systems
described in conjunction with the features and functions of the present
invention as described herein.

[0068] FIG. 14 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention. In particular, a
method is presented for use in conjunction with one or more functions and
features described in conjunction with FIGS. 1-9. In step 400, first
entropy decoded (EDC) data is generated from a first portion of an
encoded video signal via a first processor. In step 402, second EDC data
is generated from a second portion of the encoded video signal via the
first processor. In step 404, a first portion of a decoded video signal
is generated from the first EDC data via a second processor
contemporaneously during at least a portion of time that the first
processor generates the second EDC data from the second portion of the
encoded video signal.

[0069] In an embodiment of the present invention, the first portion of the
encoded video signal corresponds to a first picture and the second
portion of the encoded video signal corresponds to a second picture. The
second picture can be subsequent in time to the first picture in the
encoded video signal. The first EDC data can include first run length
data, first motion vector differential data, and first macroblock header
data. The encoded video signal can include a plurality of video layers,
and the first EDC data can include slice header data corresponding to at
least one of the plurality of video layers. The
encoded video signal can be encoded in accordance with at least one of:
an H.264 encoding standard, and a video coding 1 (VC-1) encoding
standard.

[0070] FIG. 15 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention. In particular, a
method is presented for use in conjunction with one or more functions and
features described in conjunction with FIGS. 1-10. In step 410, third EDC
data is generated from a third portion of the encoded video signal via
the first processor. In step 412, a second portion of a decoded video
signal is generated from the second EDC data via the second processor
contemporaneously during at least a portion of time that the first
processor generates the third EDC data from the third portion of the
encoded video signal.

[0071] FIG. 16 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention. In particular, a
method is presented for use in conjunction with one or more functions and
features described in conjunction with FIGS. 1-9. In step 420, entropy
decoded (EDC) data is generated from an encoded video signal via a first
processor, wherein the EDC data includes run length data, motion vector
differential data, and macroblock header data. In step 422, a decoded
video signal is generated from the EDC data via a second processor by:
generating motion vector data, macroblock mode data and deblock strength
data, based on the motion vector differential data and the macroblock
header data; generating inter-prediction data based on the motion vector
data when the macroblock mode data indicates an inter-prediction mode;
generating intra-prediction data when the macroblock mode data indicates
an intra-prediction mode; generating reconstructed picture data based on
the run length data and on the inter-prediction data when the macroblock
mode data indicates an inter-prediction mode; generating reconstructed
picture data based on the run length data and on the intra-prediction
data when the macroblock mode data indicates an intra-prediction mode;
and generating the decoded video signal from the reconstructed picture
data, based on the deblock strength data.
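
A condensed sketch of the per-macroblock dispatch inside step 422: the prediction path is selected from the macroblock mode data, then combined with the residual decoded from the run length data. All function names and the macroblock dictionary format are placeholders assumed for illustration.

```python
def reconstruct_macroblock(mb, residual_from_run_lengths,
                           inter_predict, intra_predict):
    """Select the prediction path from the macroblock mode data, then add
    the residual decoded from the run length data."""
    residual = residual_from_run_lengths(mb["run_lengths"])
    if mb["mode"] == "inter":
        prediction = inter_predict(mb["motion_vector"])  # decode motion compensation
    else:
        prediction = intra_predict(mb)                   # inverse intra-prediction
    return prediction + residual

print(reconstruct_macroblock(
    {"mode": "inter", "run_lengths": [3], "motion_vector": (1, 0)},
    residual_from_run_lengths=lambda r: sum(r),
    inter_predict=lambda mv: 10,
    intra_predict=lambda mb: 0))   # 13
```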

[0072] In an embodiment of the present invention, step 422 includes
generating the motion vector data for a group of macroblocks that
neighbor a current macroblock, and generating the motion vector data for
the current macroblock, based on both the macroblock mode data, and the
motion vector data for the group of macroblocks that neighbor the current
macroblock. Step 422 can also include calculating a motion vector
magnitude, and adjusting the deblock strength data based on the motion
vector magnitude. Step 422 can also include adjusting at least one
deblock filter parameter based on the deblock strength data, and deblock
filtering the reconstructed picture data based on at least one deblock
filter parameter.

[0073] The encoded video signal can include a plurality of video layers,
and the EDC data can include slice header data corresponding to at
least one of the plurality of video layers. The encoded video signal can
be encoded in accordance with at least one of an H.264 encoding standard
and a video coding 1 (VC-1) encoding standard.

[0074] FIG. 17 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention. In particular, a
method is presented for use in conjunction with one or more functions and
features described in conjunction with FIGS. 1-9. In step 430, entropy
decoded (EDC) data is generated from an encoded video signal, wherein the
encoded video signal includes a plurality of video layers, and wherein
the EDC data is generated by: generating slice dependency data; and
entropy decoding a selected subset of the plurality of video layers,
based on the slice dependency data. In step 432, a decoded video signal
is generated from the EDC data.

[0075] In an embodiment of the present invention, the slice dependency
data is generated by extracting dependency data from a slice header for
each of the plurality of video layers. The decoded video signal can also
be generated in accordance with a target layer of the plurality of video
layers that is included in the selected subset of the plurality of video
layers, and the slice dependency data can be generated by identifying
dependent layers of the plurality of video layers that are dependent from
the target layer.

[0076] The dependent layers can include each of the plurality of video
layers directly dependent from the target layer, and further, each of the
plurality of video layers indirectly dependent from the target layer. The
selected subset of the plurality of video layers excludes each of the
plurality of video layers that is not directly dependent from the target
layer or indirectly dependent from the target layer.

[0077] Step 430 can include selecting an ordering of the selected subset
of the plurality of video layers, wherein the selected subset of the
plurality of video layers are entropy decoded in accordance with the
selected ordering. The encoded video signal can be encoded in accordance
with at least one of: an H.264 encoding standard and a video coding 1
(VC-1) encoding standard.

[0078] FIG. 18 presents a flow diagram representation of a method in
accordance with an embodiment of the present invention. In particular, a
method is presented for use in conjunction with one or more functions and
features described in conjunction with FIGS. 1-9. In step 440, entropy
decoded (EDC) data is generated from an encoded video signal via a first
processor, wherein the EDC data includes slice header data, run length
data, motion vector differential data, and macroblock header data, wherein the
encoded video signal includes a plurality of video layers. In step 442, a
decoded video signal is generated from the EDC data via a second
processor by: generating motion vector data, macroblock mode data and
deblock strength data, based on the motion vector differential data and
the macroblock header data; generating inter-prediction data based on the
motion vector data when the macroblock mode data indicates an
inter-prediction mode; generating intra-prediction data when the
macroblock mode data indicates an intra-prediction mode; generating
residual data based on the run length data; generating resampled residual
data based on the residual data and the slice header data; generating
reconstructed picture data based on the resampled residual data and on
the inter-prediction data when the macroblock mode data indicates an
inter-prediction mode; generating reconstructed picture data based on the
resampled residual data and on the intra-prediction data when the
macroblock mode data indicates an intra-prediction mode; generating
filtered picture data from the reconstructed picture data, based on the
deblock strength data; and generating the decoded video signal based on
the filtered picture data and the slice header data.

[0079] Step 442 can include analyzing the slice header data to determine a
current layer of the plurality of layers; and generating the resampled
residual data based on a difference in resolution between the current
layer and a target layer of the plurality of layers. Step 442 can also
include upscaling the residual data to generate the resampled residual
data at a resolution of the target layer. Step 442 can include analyzing
the slice header data to determine a current layer of the plurality of
layers; and generating resampled filtered picture data from the filtered
picture data, based on a difference in resolution between the current
layer and a target layer of the plurality of layers.

[0080] Step 442 can include upscaling the filtered picture data to
generate the resampled filtered picture data at a resolution of the
target layer and generating a picture of the decoded video signal by
combining resampled filtered picture data of at least one layer of the
plurality of layers with filtered picture data of the target layer.

[0081] The encoded video signal can be encoded in accordance with at
least one of: an H.264 encoding standard and a video coding 1 (VC-1)
encoding standard.

[0082] While particular combinations of various functions and features of
the present invention have been expressly described herein, other
combinations of these features and functions are possible that are not
limited by the particular examples disclosed herein and are expressly
incorporated within the scope of the present invention.

[0083] As one of ordinary skill in the art will appreciate, the term
"substantially" or "approximately", as may be used herein, provides an
industry-accepted tolerance to its corresponding term and/or relativity
between items. Such an industry-accepted tolerance ranges from less than
one percent to twenty percent and corresponds to, but is not limited to,
component values, integrated circuit process variations, temperature
variations, rise and fall times, and/or thermal noise. Such relativity
between items ranges from a difference of a few percent to magnitude
differences. As one of ordinary skill in the art will further appreciate,
the term "coupled", as may be used herein, includes direct coupling and
indirect coupling via another component, element, circuit, or module
where, for indirect coupling, the intervening component, element,
circuit, or module does not modify the information of a signal but may
adjust its current level, voltage level, and/or power level. As one of
ordinary skill in the art will also appreciate, inferred coupling (i.e.,
where one element is coupled to another element by inference) includes
direct and indirect coupling between two elements in the same manner as
"coupled". As one of ordinary skill in the art will further appreciate,
the term "compares favorably", as may be used herein, indicates that a
comparison between two or more elements, items, signals, etc., provides a
desired relationship. For example, when the desired relationship is that
signal 1 has a greater magnitude than signal 2, a favorable comparison
may be achieved when the magnitude of signal 1 is greater than that of
signal 2 or when the magnitude of signal 2 is less than that of signal 1.

[0084] As the term module is used in the description of the various
embodiments of the present invention, a module includes a functional
block that is implemented in hardware, software, and/or firmware that
performs one or more functions such as the processing of an input
signal to produce an output signal. As used herein, a module may contain
submodules that themselves are modules.

[0085] Thus, there has been described herein an apparatus and method, as
well as several embodiments including a preferred embodiment, for
implementing a video decoder. Various embodiments of the present
invention herein-described have features that distinguish the present
invention from the prior art.

[0086] It will be apparent to those skilled in the art that the disclosed
invention may be modified in numerous ways and may assume many
embodiments other than the preferred forms specifically set out and
described above. Accordingly, it is intended by the appended claims to
cover all modifications of the invention which fall within the true
spirit and scope of the invention.