Classified adaptive error recovery method and apparatus

Information

  • Patent Grant
  • Patent Number
    6,351,494
  • Date Filed
    Friday, September 24, 1999
  • Date Issued
    Tuesday, February 26, 2002
Abstract
A method, apparatus, and article of manufacture for restoring a deteriorated signal to an undeteriorated signal. A deteriorated signal consists of a plurality of deteriorated and undeteriorated data points. For each deteriorated data point, a plurality of class types including a motion vector class is created based upon characteristics of the area containing the deteriorated data point. The data point is classified with respect to one of the plurality of class types and assigned a corresponding input signal class. The undeteriorated signal is generated by adaptively filtering the deteriorated input signal in accordance with the input signal classification result. More than one classification method is used to create the plurality of class types. Created classes may include a motion class, an error class, a spatial class, a spatial activity class, or a motion vector class.
Description




FIELD OF THE INVENTION




This invention relates generally to the processing of image, sound or other correlated signals, and more particularly, to a method, apparatus, and article of manufacture for restoring a deteriorated signal to an undeteriorated signal.




BACKGROUND OF THE INVENTION




Conventionally, to restore an image that is deteriorated in image quality it is necessary to analyze the cause of the deterioration, determine a deterioration model function, and apply its inverse function to the deteriorated image. Various causes of deteriorations are possible, such as a uniform movement of a camera (imaging device such as a video camera) and blurring caused by the optical system of a camera. Therefore, in restoring an image, different model functions may be used for respective causes of deteriorations. Unless the cause of deterioration is found, it is difficult to restore a deteriorated image because a model function cannot be determined.




In addition, it is frequently the case that even if a model function of a deterioration is established, there is no inverse function for restoration that corresponds to the model function. In such a case, it is difficult to perform evaluation for determining the optimum model.




Conventionally, error recovery has been achieved by correlation evaluation. For example, some recovery choices have been implemented using a conventional error pixel recovery method. FIG. 1A shows a conventional error recovery block diagram. Using neighboring data, which are shown in FIG. 1B, spatial inclinations of the target data are detected. In this example, the inclinations regarding four directions are evaluated according to the formulae which are shown in FIG. 1C. An interpolation filter is chosen where the inclination value, E_i, is the smallest among the four values. In addition to the spatial inclination, a motion factor is also evaluated for error recovery. In the case of a motion area, a selected spatial filter is used for error recovery; in the case of a stationary area, on the other hand, the previous frame data at the same location as the target data are used for error recovery. This evaluation is performed in the evaluation block of FIG. 1A.
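As an illustration, the conventional inclination-based recovery can be sketched as follows. The exact formulae of FIG. 1C are not reproduced in the text, so the four directional inclinations below are illustrative assumptions (absolute differences across the horizontal, vertical, and two diagonal directions); a minimal sketch for an interior pixel only.

```python
import numpy as np

def inclination_interpolate(img, r, c):
    """Recover pixel (r, c) by interpolating along the direction of
    smallest spatial inclination, in the spirit of FIGS. 1A-1C.
    The inclination formulae here are illustrative assumptions; the
    patent's exact formulae appear only in FIG. 1C. Assumes (r, c)
    is an interior pixel of img."""
    # Inclination E_i (absolute difference) across each of four directions.
    e = {
        "horizontal": abs(float(img[r, c - 1]) - float(img[r, c + 1])),
        "vertical":   abs(float(img[r - 1, c]) - float(img[r + 1, c])),
        "diag_back":  abs(float(img[r - 1, c - 1]) - float(img[r + 1, c + 1])),
        "diag_fwd":   abs(float(img[r - 1, c + 1]) - float(img[r + 1, c - 1])),
    }
    # Choose the direction whose inclination value E_i is smallest ...
    best = min(e, key=e.get)
    # ... and average the two neighbors along that direction.
    pairs = {
        "horizontal": (img[r, c - 1], img[r, c + 1]),
        "vertical":   (img[r - 1, c], img[r + 1, c]),
        "diag_back":  (img[r - 1, c - 1], img[r + 1, c + 1]),
        "diag_fwd":   (img[r - 1, c + 1], img[r + 1, c - 1]),
    }
    a, b = pairs[best]
    return (float(a) + float(b)) / 2.0
```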




The conventional error recovery process shown in FIGS. 1A-1C may cause many serious degradations on changing data, especially on object edges. Actual signal distribution typically varies widely, so these problems are likely to occur. Therefore, there is a need for a way to restore a deteriorated signal to an undeteriorated signal which minimizes degradations on changing data.




SUMMARY OF THE INVENTION




The present invention provides a method, apparatus, and article of manufacture for restoring a deteriorated signal to an undeteriorated signal. A deteriorated signal consists of a plurality of deteriorated and undeteriorated data points. For each deteriorated data point, a plurality of class types is created based upon characteristics of the area containing the deteriorated data point. The data point is classified with respect to one of the plurality of class types and assigned a corresponding input signal class. The undeteriorated signal is generated by adaptive filtering of the input signal in accordance with the input signal classification results. More than one classification method may optionally be used to create the plurality of class types. Created classes may include a motion vector class, a motion class, an error class, a spatial class or a spatial activity class. An adaptive class tap structure may optionally be used to create the plurality of class types. An adaptive filter tap structure may optionally be used based on the corresponding plurality of class types. Filter tap expansion may optionally be used to reduce the number of filter coefficients. A spatial class may optionally be modified according to spatial symmetry.











BRIEF DESCRIPTION OF THE DRAWINGS




The present invention is illustrated by way of example and may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like references indicate similar elements and in which:





FIGS. 1A-1C show a conventional error recovery method, filter tap, and correspondence between inclination value and interpolation filter;

FIGS. 2A-2E show a classified adaptive error recovery method and class compatible with an embodiment of the present invention;

FIGS. 2F-2I show motion compensation preprocessing compatible with an embodiment of the present invention;

FIG. 2J shows a motion vector field compatible with an embodiment of the present invention;

FIG. 3 shows a motion class tap compatible with an embodiment of the present invention;

FIG. 4 shows an error class tap compatible with an embodiment of the present invention;

FIG. 5 shows an adaptive spatial class tap compatible with an embodiment of the present invention;

FIG. 6 shows an adaptive spatial class tap (error class 0) compatible with an embodiment of the present invention;

FIG. 7 shows an adaptive spatial class tap (error class 1) compatible with an embodiment of the present invention;

FIG. 8 shows an adaptive spatial class tap (error class 2) compatible with an embodiment of the present invention;

FIG. 9 shows an adaptive spatial class tap (error class 3) compatible with an embodiment of the present invention;

FIG. 10 shows an adaptive filter tap compatible with an embodiment of the present invention;

FIG. 11 shows a motion class adaptive filter tap compatible with an embodiment of the present invention;

FIG. 12 shows a motion class adaptive filter tap (error class 0) compatible with an embodiment of the present invention;

FIG. 13 shows a motion class adaptive filter tap (error class 1) compatible with an embodiment of the present invention;

FIG. 14 shows a motion class adaptive filter tap (error class 2) compatible with an embodiment of the present invention;

FIG. 15 shows a motion class adaptive filter tap (error class 3) compatible with an embodiment of the present invention;

FIG. 16 shows a preprocessing algorithm compatible with an embodiment of the present invention;

FIG. 17 shows a motion tap and stationary tap preprocessing algorithm compatible with an embodiment of the present invention;

FIGS. 18A and 18B show system block diagrams compatible with an embodiment of the present invention;

FIG. 19 shows coefficient memory contents compatible with an embodiment of the present invention;

FIG. 20 shows an ADRC class reduction based on a 4-tap 1-bit ADRC compatible with an embodiment of the present invention; and

FIG. 21 shows an example of audio signal adaptive classification compatible with an embodiment of the present invention.











DETAILED DESCRIPTION




In the following description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration a specific embodiment in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.




The present invention provides a method, apparatus, and article of manufacture for restoring a deteriorated signal to an undeteriorated signal using classified adaptive error recovery. Target data is the particular data of the deteriorated signal whose value is to be determined or estimated.




Classified adaptive error recovery is the technology which utilizes classified adaptive filter processing. A proper classification with respect to the deteriorated input signal is performed according to the input signal characteristics. An adaptive filter is prepared for each class prior to error recovery processing.




More than one classification method may optionally be used to generate the plurality of classes. Generated classes may include a motion class, an error class, a spatial activity class or a spatial class. An adaptive class tap structure may optionally be used to generate the plurality of classes. An adaptive filter tap structure may optionally be used according to the class which is detected in each deteriorated input signal. The adaptive filter tap structure may optionally be expanded based upon multiple taps. The number of filter coefficients that must be stored can be reduced by allocating the same coefficient to multiple taps. This process is referred to as filter tap expansion. The deteriorated input signal may optionally be modified by preprocessing peripheral erroneous data. A spatial class may optionally be eliminated according to a spatial class elimination formula.




The present invention can be applied to any form of correlated data, including without limitation photographs or other two-dimensional static images, holograms, or other three-dimensional static images, video or other two-dimensional moving images, three-dimensional moving images, a monaural sound stream, or sound separated into a number of spatially related streams, such as stereo. In the description, the term value, in one embodiment, may refer to a component within a set of received or generated data. Furthermore, a data point is a position, place, instance, location or range within data.




For the sake of clarity, some of the description herein focuses on video data comprising a pixel stream. However, it will be recognized that the present invention may be used with types of data other than video data, and that the terms and phrases used herein to describe the present invention cover a broad range of applications and data types. For example, an adaptive class tap structure is an adaptive structure for class tap definition used in multiple classification. A spatial class, a motion class and an error class may be used to define the structure. An adaptive filter tap structure is an adaptive structure for filter tap definition based upon a corresponding class.




A class may be defined based on one or more characteristics of the target data. For example, a class may also be defined based on one or more characteristics of the group containing the target data. A class ID is a specific value within the class that is used to describe and differentiate the target data from other data with respect to a particular characteristic. A class ID may be represented by a number, a symbol, or a code within a defined range. A parameter may be used as a predetermined or variable quantity that is used in evaluating, estimating, or classifying the data. For example, the particular motion class ID of a target data can be determined by comparing the level of motion quantity in the block containing the target data against a parameter which can be a pre-determined threshold.




The original data may be estimated with neighboring data using the classified adaptive error recovery method, which classifies and filters the data. Generally speaking, data that is spatially close to error data may have a greater contribution to estimating the correct value because of the high spatial correlation. If temporal tap data is introduced for classification and filtering in motion areas, the estimation performance may be decreased because of lower spatial correlation caused by the motion.




In one embodiment, a preprocessing step introduces motion compensation before the classified adaptive error recovery is performed. Thus, to improve the error recovery performance of the classified adaptive error recovery method, motion compensation preprocessing of erroneous data may be performed before the classified adaptive error recovery method is applied.




In one embodiment, the motion compensation preprocessing has two components. One is motion vector detection. The other is shifting the motion affected data (hereinafter referred to as the “memorized data”) according to the motion vector. The fundamental structure for performing the preprocessing method is shown in FIG. 2A. The motion compensation structure comprises two processing elements, a motion vector detector 280 and a memory 290. The motion vector is detected by the motion vector detector 280 by examining the temporal correlation. A number of conventional methods may be used. According to this detected motion vector, the neighboring image data, e.g., an image of a prior field or frame stored in memory 290, is shifted. The neighboring data may be data that is temporally prior or subsequent, as well as data that is spatially adjacent or non-adjacent, e.g., frames of data. In one embodiment, the shift operation is accomplished by shifting the memory addresses accessed to form the motion compensated data. The motion compensated data is provided for the following classified adaptive error recovery.




The motion compensation preprocessing can greatly improve the estimation performance of the classified adaptive error recovery process, because the motion area data can provide a higher correlation. As a result, improved error recovered images can be achieved by this method.




In an alternative embodiment, as shown in FIG. 2B, a motion vector class type is created. Input data and corresponding error flags are input to the system. A motion vector is detected from the input data at 280. Motion vector classification is performed at 205. Filter tap data are chosen at 213 based on the motion vector class. Error recovery filtering is performed at 209 with tap data and filter coefficients selected from the coefficient memory 207. Error recovered data and error free input data are selected at 211 according to the error flag, which produces the output of the system.




The motion vector detector 280 shown in FIG. 2A determines a motion vector in an image. Examples of motion vectors are shown in FIGS. 2F and 2G. In FIG. 2F, the point (i, j) at time t=0 is shown in a first position (i_0, j_0). At a second time, t=a, the point is in a second position (i_a, j_a). The motion vector in FIG. 2F is the line segment that begins at point (i_0, j_0) and ends at point (i_a, j_a). Thus, if there is motion in an image, the moving part of the image is duplicated in its new location. In one embodiment, the memorized data stored in memory 290 is data for the previous time, t=0. In an alternative embodiment, the memorized data stored in memory 290 is data for a future time. The motion vector is used by shifting logic to shift the position of the memorized image data to the second position, for the current time t=a.




Referring to FIG. 2G, the image data 250 for previous time t=T_{-1} is in a first position. In the current frame, image data 250 for the current time t=T_0 has moved to a second position. The memory stores the prior frame data with image 250 in the first position. After the motion vector MV is detected, the position of image 250 within memory is shifted to the second position.




The motion vector may be detected by a variety of techniques. Three examples of methods for detecting a motion vector include phase correlation, gradient descent, and block matching.




An example of the phase correlation technique is as follows. F(ω_1, ω_2) is defined as the Fourier transform of an image point f(x_1, x_2). Applying the shifting property of Fourier transforms, the Fourier transform of f(x_1 − α_1, x_2 − α_2) can be written as e^{j2π(α_1ω_1 + α_2ω_2)} · F(ω_1, ω_2). Thus, the motion vector quantity represented by α_1 and α_2 can be estimated by calculating the Fourier transforms of both f(x_1, x_2) and f(x_1 − α_1, x_2 − α_2), and then estimating the multiplicative factor that relates the two.
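As an illustration, a minimal phase correlation sketch using the fast Fourier transform: the normalized cross-power spectrum isolates the phase factor above, and its inverse transform peaks at the displacement. Windowing and sub-pixel refinement, which practical detectors add, are omitted here.

```python
import numpy as np

def phase_correlation_shift(f0, f1):
    """Estimate the (alpha1, alpha2) displacement between two equally
    sized image blocks f0 and f1 via the Fourier shift property.
    A minimal sketch; not the patent's specific implementation."""
    F0 = np.fft.fft2(f0)
    F1 = np.fft.fft2(f1)
    # The normalized cross-power spectrum keeps only the phase factor
    # relating the two transforms.
    cross = F0 * np.conj(F1)
    cross /= (np.abs(cross) + 1e-12)
    # The inverse transform of a pure phase ramp peaks at the shift.
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```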




An example of the gradient descent technique is as follows. The error function for an image may be computed as e(α) = f(x + v − α) − f(x), where v represents the real motion vector and α represents an estimate of v. The gradient descent technique is used to minimize e(α); the value of α that minimizes e(α) is representative of the motion vector.




One embodiment of the block matching technique detects a motion vector using pattern matching. At each search point (e.g., point of the image), the temporal correlation is measured. For example, a correlation value corresponding to the summed absolute temporal differences between the current block data and the corresponding past block data at each point is generated. After generating the values at all or some points, the search point with the smallest value is chosen. This is the most highly correlated point, and it therefore represents the motion vector.




The block matching process is illustrated in FIG. 2H. An original block area is defined as shown in FIG. 2H. The block data is defined as f(i_0 + X, j_0 + Y). A searching point f(i, j) is detected. The equation

$$E(i, j) = \sum_{X} \sum_{Y} \left| f(i_0 + X,\ j_0 + Y) - f(i + X,\ j + Y) \right|$$

is representative of the summed temporal differences. The motion vector MV may then be determined as min {E(i, j)}, where −I ≤ i ≤ I and −J ≤ j ≤ J.
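As an illustration, a minimal full-search block matching sketch implementing E(i, j); the block size and search range are illustrative assumptions, and practical detectors restrict or prune the search.

```python
import numpy as np

def block_match(curr, prev, i0, j0, bsize=8, search=4):
    """Full-search block matching: minimize the summed absolute temporal
    difference E(i, j) between the current block at (i0, j0) and candidate
    blocks of the previous frame inside a +/-search window."""
    block = curr[i0:i0 + bsize, j0:j0 + bsize].astype(np.int64)
    best_err, best_mv = None, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            i, j = i0 + di, j0 + dj
            if i < 0 or j < 0:
                continue  # candidate would fall outside the frame
            cand = prev[i:i + bsize, j:j + bsize].astype(np.int64)
            if cand.shape != block.shape:
                continue
            err = int(np.abs(block - cand).sum())  # E(i, j)
            if best_err is None or err < best_err:
                best_err, best_mv = err, (di, dj)
    return best_mv  # displacement with the minimum E(i, j)
```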




By utilizing motion compensation preprocessing to perform motion compensation between frames, intra-frame data generation can be used for classified adaptive error recovery processing. This is generally illustrated by FIG. 2I. After introducing motion compensation, spatio-temporal data can be treated as spatially stationary data. FIG. 2I shows two fields in a frame during multiple time periods. Field 260 during time period T_3 contains error data. This error data is highly correlated with pixel 262 in field 264 from time period T_2. Thus, by using motion compensation, the non-stationary data is highly correlated in the spatial and temporal domain.




The filtering performance is improved by using prior field data with the motion compensation preprocessing. As discussed herein, classified adaptive error recovery classifies the error distribution. However, errors may be very hard to correct when the data is non-stationary. The ability to correct errors is greatly improved by using motion compensation preprocessing. The advantages of performing motion compensation preprocessing with the classified adaptive error recovery process include improved classification quality, because of the high correlation of the temporal data. The spatial (pattern) classification and the spatial activity classification are also improved. Adaptive filtering is also improved, due to the high correlation of temporal data achieved by motion compensation preprocessing.




Multiple Classification




In one embodiment, a multiple class may be used as a collection of specific values or sets of values used to describe at least two different characteristics of the target data. For example, a multiple class may be defined to be a combination of at least two different classes. For example, a multiple class may be defined to be a combination of an error class, a motion class, and a spatial class such as an ADRC class.




In one embodiment, the multiple class ID can be used as the memory address to locate the proper filter coefficients and other information that are used to determine or estimate the value of the target data. In one embodiment, a simple concatenation of different class IDs in the multiple class ID is used as the memory address.




Therefore, a multiple classification scheme is a way of classifying the target data with respect to more than one characteristic of the target data in order to more accurately determine or estimate the value of the target data.
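As an illustration, a minimal sketch of the "simple concatenation" addressing: the class IDs are packed into contiguous bit fields to form a single multiple class ID that can serve directly as a memory address. The field widths are assumptions taken from the 4×4×8 layout described later with FIG. 19 (four error classes, four motion classes, eight reduced ADRC classes).

```python
def multiple_class_id(error_class: int, motion_class: int, adrc_class: int) -> int:
    """Concatenate class IDs into one multiple class ID / memory address.

    Assumed field widths (2 + 2 + 3 bits) follow the 4 x 4 x 8 example
    of FIG. 19; the result addresses a 128-entry coefficient table."""
    assert 0 <= error_class < 4 and 0 <= motion_class < 4 and 0 <= adrc_class < 8
    return (error_class << 5) | (motion_class << 3) | adrc_class  # 0..127
```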




An error class is a collection of specific values used to describe the various distribution patterns of erroneous data in the neighborhood of the target data. In one embodiment, an error class is defined to indicate which data adjacent to the target data is erroneous. An error class ID is a specific value within the error class used to describe a particular distribution pattern of erroneous data in the neighborhood of the target data. For example, an error class ID of “0” may be defined to indicate that there is no erroneous data to the left and to the right of the target data; an error class ID of “1” may be defined to indicate that the data to the left of the target data is erroneous, etc. A filter is a mathematical process, function or mask for selecting a group of data.




A motion class is a collection of specific values used to describe the motion characteristic of the target data. In one embodiment, the motion class is defined based on the different levels of motion of the block containing the target data, for example, no motion in the block, little motion in the block, or large motion in the block. A motion class ID is a specific value within the motion class used to indicate a particular level of motion quantity of the target data. For example, a motion class ID of “0” may be defined to indicate no motion, and a motion class ID of “3” may be defined to indicate large motion.




A motion vector class is a collection of specific values used to describe the directional motion characteristic of the target data. In one embodiment, the motion vector class is defined based on the different directions of motion of the block containing the target data, for example, vertical, horizontal, or diagonal. A motion vector class ID is a specific value within the motion vector class used to indicate a particular direction of motion of the target data. FIG. 2J shows an embodiment of motion vector class IDs. For example, a motion vector class ID of “1” may be defined to indicate small horizontal motion, and a motion vector class ID of “4” may be defined to indicate large vertical motion. Motion vector class “0” may define very little or no motion. Motion vector class “7” may define very fast motion without specifying a particular direction. The degradation for very fast motion cannot be detected; therefore, a simple motion class such as “7” may be used.





FIG. 2J is only one example of possible motion vector classifications. The classifications may vary by the size and shape of the area corresponding to a particular value and/or the number of possible classifications.
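As an illustration, the following sketch quantizes a detected motion vector into a 3-bit motion vector class ID. The cell layout of FIG. 2J is not reproduced in the text, so the thresholds and the direction/magnitude split below are illustrative assumptions, chosen only to match the examples given (class 0 for little or no motion, class 1 for small horizontal motion, class 4 for large vertical motion, class 7 for very fast motion in any direction).

```python
import math

def motion_vector_class(vx, vy, small=2.0, fast=16.0):
    """Quantize a motion vector into a 3-bit class ID in the spirit of
    FIG. 2J. Boundaries are hypothetical: 0 = little/no motion,
    7 = very fast motion (direction ignored), and classes 1-6 split
    direction (horizontal/vertical/diagonal) and magnitude (small/large)."""
    mag = math.hypot(vx, vy)
    if mag < 0.5:
        return 0                       # very little or no motion
    if mag >= fast:
        return 7                       # very fast motion, any direction
    # Direction: 0 = horizontal, 1 = vertical, 2 = diagonal.
    angle = abs(math.degrees(math.atan2(vy, vx))) % 180.0
    angle = min(angle, 180.0 - angle)  # fold to 0..90 degrees (symmetry)
    if angle < 22.5:
        direction = 0
    elif angle > 67.5:
        direction = 1
    else:
        direction = 2
    size = 0 if mag < small else 1     # small vs. large motion
    return 1 + direction * 2 + size    # classes 1..6
```

Folding the angle into one half-plane mirrors the symmetrical structure discussed next, which keeps the class count, and hence the number of filters, small.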




In this embodiment, a symmetrical structure is used to reduce the number of motion vector classifications, which reduces the number of filters. In alternative embodiments, non-symmetrical motion vector classifications may also be used. The vertical direction of the motion may correspond to a tilting object or a tilting picture. This kind of picture may be separated from the horizontal motion of the picture, especially for interlaced images. Thus, motion vector classification provides several advantages, including identification of the motion direction of an image, separation of tilting images from panning images, and separation of horizontally moving objects from vertically moving objects. Consider a video camera looking at relatively stationary objects: tilting the camera up or down, or panning it left or right, is analogous to a stationary camera with the scene moving down or up (reversed), or right or left. The motion vector classification is also utilized in an adaptive filter tap structure based on the motion vector, and therefore may improve the estimation accuracy over motion classification alone.




A spatial class is a collection of specific values used to describe the spatial characteristic of the target data. For example, spatial classification of the data may be determined using Adaptive Dynamic Range Coding (ADRC), Differential Pulse Code Modulation (DPCM), Vector Quantization (VQ), Discrete Cosine Transform (DCT), etc. A spatial class ID is a specific value within the spatial class used to describe the spatial pattern of the target data in the group or block containing the target data.




For example, an ADRC class is a spatial class defined by the Adaptive Dynamic Range Coding method. An ADRC class ID is a specific value within the ADRC class used to describe the spatial pattern of the data distribution in the group or block containing the target data. A class is a collection of specific values used to describe certain characteristics of the target data. A variety of different types of classes exist, for example, a motion class, a spatial class, an error class, a spatial activity class, etc.




The present invention provides a method and apparatus for adaptive processing that generates data corresponding to a set of one or more data classes. This process is known as “classification”. Classification can be achieved by various attributes of signal distribution. For example, Adaptive Dynamic Range Coding (ADRC) may be used for generation of each class as a spatial class, but it will be recognized by one of ordinary skill in the art that other classes, including a motion class, an error class, and a spatial activity class may be used with the present invention without loss of generality. A spatial activity class is a collection of specific values used to describe the spatial activity characteristic of the target data. For example, spatial activity classification of the data may be determined using the dynamic range, the standard deviation, the Laplacian value or the spatial gradient value. Some classification methods provide advantages which are desirable before restoration of a deteriorated signal takes place. For example, ADRC can achieve classification by normalizing each signal waveform automatically.




For each class, a suitable filter for signal restoration is prepared for the adaptive processing. In one embodiment, each filter is represented by a matrix of filter coefficients which are applied to the data. The filter coefficients can be generated by a training process, an example of which is described subsequently, that occurs as a preparation process prior to filtering. In one embodiment of the present invention, the filter coefficients can be stored in a random access memory (RAM), shown in FIG. 2A at 207.




A typical signal processing flow of the present invention is shown in FIG. 2A. Target input data 201 can be accompanied with error flag data 203. Error flag data can indicate locations within the data that contain erroneous pixels. In one embodiment of the present invention, an ADRC class is generated for each input target data in classification block 205, filter coefficients corresponding to each class ID are output from the coefficient memory block 207, and filtering is executed with input data 201 and the filter coefficients in the filter block 209. The filtered data may correspond to an error recovered result. In the selector block 211, switching between error recovered data and error free data occurs according to the error flag data 203.




In FIG. 2C, an example is shown where the number of class taps is four. In the case of 1-bit ADRC, 16 class IDs are available as given by [formula 3], shown below. ADRC is realized by [formula 2], shown below. Detecting a local dynamic range (DR) is given by [formula 1], shown below:

$$DR = MAX - MIN + 1 \qquad \text{[formula 1]}$$

$$q_i = \left\lfloor \frac{(x_i - MIN + 0.5) \cdot 2^Q}{DR} \right\rfloor \qquad \text{[formula 2]}$$

$$c = \sum_{i=1}^{4} 2^{i-1} \cdot q_i \qquad \text{[formula 3]}$$

where c corresponds to an ADRC class ID, DR represents the dynamic range of the four-data area, MAX represents the maximum level of the four data, MIN represents the minimum level of the four data, q_i is the ADRC encoded data, also referred to as a Q code, and Q is the number of quantization bits. The └·┘ operator represents a truncation operation.




In 1-bit ADRC, c may have a value from 0 to 15 with Q=1. This process is one type of spatial classification, but it will be recognized by one of ordinary skill in the art that other examples of spatial classification, including Differential PCM, Vector Quantization and Discrete Cosine Transform may be used with the present invention without loss of generality. Any method may be used if it can classify a target data distribution.
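As an illustration, a minimal sketch of this 1-bit ADRC spatial classification for a four-tap group, following [formula 1] through [formula 3]:

```python
import numpy as np

def adrc_class_id(taps, Q=1):
    """1-bit ADRC spatial classification of a 4-tap group, following
    [formula 1]-[formula 3]. With Q = 1 and four taps, c is 0..15."""
    taps = np.asarray(taps, dtype=np.int64)
    MIN, MAX = taps.min(), taps.max()
    DR = MAX - MIN + 1                                          # [formula 1]
    q = ((taps - MIN + 0.5) * (2 ** Q) / DR).astype(np.int64)   # [formula 2]
    # [formula 3]: weight each Q code q_i by 2^(i-1) and sum.
    return int(sum(int(qi) << i for i, qi in enumerate(q)))

print(adrc_class_id([30, 16, 25, 24]))  # -> 13 (Q codes 1, 0, 1, 1)
```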




In the example shown in FIG. 2D, each adaptive filter has 12 taps. Output data is generated according to the linear combination operation given by [formula 4], shown below:

$$y = \sum_{i=1}^{12} w_i \cdot x_i \qquad \text{[formula 4]}$$

where x_i is input data, w_i corresponds to each filter coefficient, and y is the output data after error recovery. Filter coefficients can be generated for each class ID by a training process that occurs prior to the error recovery process.




For example, training may be achieved according to the following criterion.










$$\min_{W} \left\| X \cdot W - Y \right\|^2 \qquad \text{[formula 5]}$$

where X, W, and Y are, for example, the following matrices: X is the input data matrix defined by [formula 6], W is the coefficient matrix defined by [formula 7], and Y corresponds to the target data matrix defined by [formula 8].

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix} \qquad \text{[formula 6]}$$

$$W = \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} \qquad \text{[formula 7]}$$

$$Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} \qquad \text{[formula 8]}$$

The coefficients w_i can be obtained according to [formula 5], so that estimation errors against the target data are minimized.
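As an illustration, the least-squares training of [formula 5] and the filtering of [formula 4] can be sketched as follows; in practice one coefficient set W would be trained per class ID from many observation/target pairs, and the synthetic data below is only for demonstration.

```python
import numpy as np

def train_filter(X, Y):
    """Solve min_W ||X W - Y||^2 ([formula 5]) by least squares.
    X is (m, n) tap data, Y is (m,) target data, W is (n,) coefficients."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def apply_filter(taps, W):
    """Error recovery filtering, y = sum_i w_i * x_i ([formula 4])."""
    return float(np.dot(taps, W))

# Minimal illustration with synthetic training pairs (m observations,
# n = 12 taps as in FIG. 2D).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))
true_W = rng.standard_normal(12)
Y = X @ true_W
W = train_filter(X, Y)
print(np.allclose(W, true_W))  # True: coefficients recovered
```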




In the example shown in FIG. 2D, 12 coefficients regarding each ADRC class ID are determined by the training method described above.




A flow diagram of an embodiment of the present invention is shown in FIG. 2E. The flow chart of FIG. 2E shows the basic processing stream for generating an undeteriorated signal from the deteriorated input signal. At step 215, the preprocessing for a peripheral erroneous pixel is performed. At step 217, each classification regarding the deteriorated input signal is executed to generate a class ID. Some class taps are selected adaptively according to another class ID. Multiple classification may be executed, such as motion classification, error classification, spatial activity classification and spatial classification.




The classification scheme can be defined during system design, where the classification scheme, the number of classes, and other specifications are decided for the target data. The design stage may include, among others, considerations of system performance and hardware complexity.




At step 219, multiple classification generates a multiple class ID from a plurality of class IDs which are generated by the various classifications of step 217. At step 221, filter taps are adaptively selected according to the multiple class ID which is generated at step 219. At step 223, the filter tap structure is adaptively expanded according to the multiple class ID which is generated at step 219. The number of filter coefficients that must be stored can be reduced by allocating the same coefficient to multiple taps. This process is referred to as filter tap expansion. At step 224, filter coefficients are selected according to the multiple class ID which is generated at step 219. At step 225, filtering with respect to the deteriorated input signal is executed to generate an undeteriorated signal. Filter coefficients are selected adaptively according to the multiple class ID which is generated at step 219.




In one embodiment of the present invention, a three-dimensional ADRC process may be used to realize spatio-temporal classification, because simple waveform classifications such as a two-dimensional ADRC process typically cannot structurally achieve separation for general motion pictures in the class of FIG. 2C. If both stationary and motion areas are processed in the same class ID, error recovery quality is degraded because of differences in the characteristics of the two areas.




In another embodiment of the present invention, motion classification or motion vector classification, in addition to spatial classification, may also be used to provide compact definition of temporal characteristics. Further, multiple classification may be added to the classified adaptive error recovery method. For example, there are various types of classes, such as a motion class, a motion vector class, an error class, a spatial activity class and a spatial class explained above. The combination of one or more of these different classification methods can also improve classification quality.





FIG. 3 shows an example of motion class tap structures. The example shows eight taps in the neighborhood of the target error data. In this example, the eight-tap accumulated temporal difference can be evaluated according to [formula 9], shown below, and is classified into four kinds of motion classes by thresholding based on [formula 10], shown below. In one embodiment of the present invention, th0 is equal to 3, th1 is equal to 8, and th2 is equal to 24.









$$fd = \sum_{i=1}^{8} \left| x_i - x'_i \right| \qquad \text{[formula 9]}$$

$$mc = \begin{cases} 0 & (0 \le fd < th0) \\ 1 & (th0 \le fd < th1) \\ 2 & (th1 \le fd < th2) \\ 3 & (th2 \le fd) \end{cases} \qquad \text{[formula 10]}$$

In the above formulas, fd represents an accumulated temporal difference, x_i represents motion class tap data of the current frame, x'_i represents the previous frame tap data corresponding to the current frame, and mc represents a motion class ID. Three thresholds, th0, th1 and th2, can be used for this motion classification.
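As an illustration, a minimal sketch of this motion classification, with the default thresholds taken from the embodiment above:

```python
import numpy as np

def motion_class(curr_taps, prev_taps, th0=3, th1=8, th2=24):
    """Motion classification per [formula 9] and [formula 10]: accumulate
    the absolute temporal difference over the eight class taps, then
    threshold it into motion class IDs 0-3."""
    fd = int(np.abs(np.asarray(curr_taps, dtype=np.int64)
                    - np.asarray(prev_taps, dtype=np.int64)).sum())  # [formula 9]
    if fd < th0:
        return 0   # stationary
    if fd < th1:
        return 1   # slow motion
    if fd < th2:
        return 2   # moderate motion
    return 3       # large motion
```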




In one embodiment of the present invention, an error class can be used in conjunction with the classified adaptive error recovery method. This classification is achieved according to the erroneous data distribution pattern in the neighborhood of the target data, examples of which are shown in FIG. 4. This example has four error classes: an independent error case, a left error case, a right error case, and a three consecutive error case.




Generally speaking, the filter coefficients of pixels adjacent to the target data have larger weights for error recovery. The data adjacent to the error data has a significant impact on the result of error recovery. Error classes can reduce this influence by separating areas with different characteristics into other classes according to the adjacent erroneous data distribution. For the example shown in FIG. 2C, ADRC classification generates 16 kinds of ADRC class IDs, while motion and error classification each generate four kinds of class IDs. Thus, the number of class IDs equals 16×4×4, or 256. Classification may be realized by representing each signal characteristic. Multiple classification can define a suitable class, the class ID, regarding the erroneous target data by combining different classification characteristics.




Adaptive Class Tap Structure




In one embodiment of the present invention, an adaptive class tap structure can be used in conjunction with the classified adaptive error recovery method. FIG. 5 shows one example of motion class adaptive spatial class tap structures. Intra-frame taps can be chosen in a stationary or a slow motion area. Intra-field taps are typically used for larger motion areas. Suitable spatial classification is achieved by this adaptive processing.




For example, if intra-frame taps are used for large motion area classification, then the generated class distribution may vary widely because of low correlation, and therefore it will be difficult to represent the target data characteristics properly. An adaptive class tap structure, such as that shown in FIG. 5, is therefore effective.




Additional examples are shown in FIGS. 6, 7, 8 and 9. Spatial class taps are typically selected according to a motion class and an error class. In addition to the motion factor, the erroneous data distribution is taken into account for the spatial class tap definition. Neighboring erroneous data is typically not introduced into the spatial classification. By this definition, only valid data is used and the classification accuracy is improved.




Adaptive Filter Tap Structure




In one embodiment of the present invention, an adaptive filter tap structure based on a corresponding class can be used in conjunction with the classified adaptive error recovery method. FIG. 10 shows one example of an adaptive filter tap structure based on an error class. The filter tap structure regarding the target data is typically defined adaptively, preferably avoiding damaged data in the neighborhood. Damaged data is not chosen for filtering.




An adaptive filter tap structure can also be defined according to motion class, an example of which is shown in FIG. 11. In the motion class example shown in FIG. 11, motion class 0 corresponds to stationary areas, while motion class 3 corresponds to large motion areas. Motion classes 1 and 2 correspond to intermediate motion areas.




For stationary or quasi-stationary class areas, intra-frame taps are used as shown in FIG. 11. At the same time, previous frame data at the target data location may be used for error recovery filtering. These areas correspond to motion classes 0 and 1. For fast motion or moderate motion areas, each filter typically has an intra-field tap structure, which is also shown in FIG. 11. As shown by the example in FIG. 11, previous frame data is not introduced, and thus weakly correlated data is ignored. Filtering quality is typically improved by intra-field taps in such cases.





FIG. 12 shows an example of motion and error class adaptive filter tap structures. FIGS. 10 and 11 represent error and motion class adaptive filter taps, respectively. The example shown in FIG. 12 illustrates both adaptive structures with error class 0, which is the independent error case. Upper adaptive characteristics are also shown in this example. In a manner similar to that of FIG. 12, FIG. 13 corresponds to error class 1, FIG. 14 corresponds to error class 2, and FIG. 15 corresponds to error class 3.




Filter Tap Expansion




In one embodiment of the present invention, filter tap expansion by allocating the same coefficient to plural taps can be used in conjunction with the classified adaptive error recovery method. Filter tap expansion is also shown by the structures in FIGS. 12-15. For example, in FIG. 12 the filter tap structure has four taps sharing the same coefficient for motion class 3. According to the evaluation results, some tap coefficients can be replaced with the same coefficient. The example shown in FIG. 12 has four W_3 coefficients that are allocated at horizontally and vertically symmetric locations. By this expansion, 14 coefficients can cover 18 tap areas. This reduction method can typically reduce the need for coefficient memory and filtering hardware such as adders and multipliers. In one embodiment of the present invention, the expansion tap definition may be achieved by evaluation of coefficient distribution and visual results.
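As an illustration, one way to realize filter tap expansion is an index map from tap positions to stored coefficient slots. The particular map below is hypothetical, standing in for the symmetric layout of FIG. 12: coefficient W_3 is reused at four symmetric tap positions (and one further coefficient at two), so 14 stored coefficients serve 18 taps.

```python
import numpy as np

def expanded_filter(tap_data, coeffs, share_map):
    """Filter tap expansion: several taps reuse one stored coefficient.
    share_map[k] gives the coefficient index for tap k, so e.g. 18 taps
    can be served by 14 stored coefficients. The map is illustrative;
    the patent derives it from coefficient symmetry evaluation."""
    w = np.asarray(coeffs)[np.asarray(share_map)]  # expand 14 -> 18 weights
    return float(np.dot(np.asarray(tap_data, dtype=float), w))

# Hypothetical map: coefficient 3 (W_3) shared by four taps, coefficient 13
# shared by two; 14 distinct coefficient indices cover 18 tap positions.
share_map = [0, 1, 2, 3, 4, 5, 6, 3, 7, 8, 9, 3, 10, 11, 12, 3, 13, 13]
```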




Preprocessing For Peripheral Erroneous Data




In one embodiment of the present invention, preprocessing for peripheral erroneous data can be used in conjunction with the classified adaptive error recovery method. To achieve error recovery filtering, suitable data is necessary at peripheral error locations of filter taps.




One example of this preprocessing is shown by the flow diagram of FIG. 16. If at steps 1601, 1605, or 1609 there is erroneous data at a peripheral location of the target data, then at steps 1603, 1607, or 1611 the erroneous data is replaced with horizontally processed data in the case of no horizontal errors. If at steps 1613, 1617, or 1621 there are three consecutive horizontal errors, then at steps 1615, 1619, or 1623 vertical processing is applied to generate preprocessed data. In all erroneous cases around the intra-frame data of this example, previous frame data is introduced for error processing at step 1625.
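As an illustration, the prioritized fallback of FIG. 16 can be sketched for a single peripheral sample. The branch ordering follows the flow described above, but the helper below is a simplified assumption (one pixel at a time, with neighbor validity precomputed), not the patent's full flow.

```python
def preprocess_peripheral(left, right, up, down, prev):
    """Replace one peripheral erroneous sample, in the spirit of FIG. 16:
    horizontal processing when horizontal neighbors are valid, vertical
    processing on three consecutive horizontal errors, previous frame
    data otherwise. Each neighbor argument is a (value, is_error) pair."""
    (lv, le), (rv, re) = left, right
    if not le and not re:
        return (lv + rv) / 2.0        # horizontal processing
    if not le:
        return float(lv)
    if not re:
        return float(rv)
    # Three consecutive horizontal errors: fall back to vertical taps.
    (uv, ue), (dv, de) = up, down
    if not ue and not de:
        return (uv + dv) / 2.0        # vertical processing
    return float(prev)                # previous frame data (step 1625)
```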





FIG. 17 shows another preprocessing example that uses a motion adaptive process for preprocessing. Using error free data, motion quantity is detected at the motion detection step 1701. Generally speaking, an averaged motion quantity is calculated at the next step by dividing the summed motion quantity by the number of error free data. Motion or stationary taps are chosen at step 1703 according to a threshold value applied to the averaged motion quantity. After these steps, processing steps 1705 through 1729 are performed in a manner similar to steps 1601 through 1625 of FIG. 16. The preprocessed data is generated according to these prioritized processes, and is introduced for error recovery filtering.




Spatial Class Reduction




In one embodiment of the present invention, spatial class reduction can be used in conjunction with the classified adaptive error recovery. As explained above, an ADRC class can be used for the spatial classification, given by [formula 3]. This has 16 kinds of class IDs in the definition of a 4-tap ADRC. These 16 class IDs can be reduced to eight kinds of class IDs according to [formula 11], shown below,









$$c = \begin{cases} \displaystyle\sum_{i=1}^{4} 2^{i-1} \cdot q_i & (c < 2^3) \\[1ex] 2^4 - 1 - \displaystyle\sum_{i=1}^{4} 2^{i-1} \cdot q_i & (c \ge 2^3) \end{cases} \qquad \text{[formula 11]}$$

where c corresponds to the ADRC class ID, q_i is the quantized data, and Q is the number of quantization bits based on [formula 1] and [formula 2].




In one embodiment of the present invention, [formula 11] corresponds to a 1's complement operation on the binary data of the ADRC code. This is related to the symmetric characteristics of each signal waveform. Because ADRC classification is a normalization of the target signal waveform, two waveforms which have the relation of 1's complement in their ADRC codes can be classified in the same class ID. ADRC class IDs can typically be halved by this reduction process. An ADRC class reduction based on a 4-tap 1-bit ADRC is shown in FIG. 20. In this example, applying [formula 11] gives eight ADRC class pairs. Each pair contains spatially symmetric patterns, and therefore the number of ADRC class IDs can be reduced by half by taking advantage of these symmetric patterns. The spatial class reduction technique can also be applied to other spatial classification techniques, including but not limited to DPCM and Block Truncation Coding (BTC).
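As an illustration, a minimal sketch of the reduction of [formula 11], which folds each 4-tap 1-bit ADRC code onto its 1's complement partner:

```python
def reduced_adrc_class(c, taps=4):
    """Spatial class reduction per [formula 11]: a 1-bit ADRC code and
    its 1's complement describe spatially symmetric waveforms, so they
    share a class. For a 4-tap code, IDs 8-15 fold onto 7-0."""
    limit = 1 << (taps - 1)            # 2^3 = 8 for four taps
    full = (1 << taps) - 1             # 2^4 - 1 = 15
    return c if c < limit else full - c

print([reduced_adrc_class(c) for c in range(16)])
# [0, 1, 2, 3, 4, 5, 6, 7, 7, 6, 5, 4, 3, 2, 1, 0]: the eight
# symmetric class pairs of FIG. 20
```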




System Structure




An overall system structure for one embodiment of the present invention, including all the processes described above, is shown in FIG. 18A. Input data 1801 and corresponding error flags 1803 are input to the system. Examining the error flags 1803, the input data 1801 is preprocessed at 1805. ADRC classification is performed at 1807, motion vector classification is performed at 1809, and error classification is performed at 1811.

In this example, ADRC class taps are chosen adaptively according to the error and motion vector classes. Filter tap data are chosen at 1813 based on the error and motion vector class. Error recovery filtering is performed at 1817 with tap data and filter coefficients selected from the coefficient memory 1815 corresponding to the ADRC class ID of 1807, the motion vector class ID of 1809 and the error class ID of 1811. Error recovered data and error free input data 1817 are selected at 1821 according to the error flag 1803, which produces the output data 1823 of this system.





FIG. 18B shows an embodiment of a motion vector class generator 1809. Motion vector detector 1839 detects the motion vector using conventional methods, such as the block matching method described above. The motion vector is detected for the target data. Using the motion vector generated, the motion class generator 1841 may identify the vector as a 3-bit quantized characteristic for the motion vector, as shown in FIG. 2J, for example. The quantized data therefore may represent the motion vector class data that is used to access the coefficient memory 1815. The estimation performance in the error recovery processing can be greatly improved by using the motion vector classification.





FIG. 19 shows an example of coefficient memory contents. It has 4×4×8, or 128, class IDs according to the multiple classification scheme. Four categories are used for the error class, four categories for the motion class, and eight categories for the ADRC class, which is typically halved according to [formula 11]. Each class corresponds to a memory address in FIG. 19. In this example, 14 coefficients are stored at each class ID address according to the filter definition, as in FIGS. 12, 13, 14 and 15.




The present invention may be used with any form of correlated data, including without limitation photographs or other two-dimensional static images, holograms, or other three-dimensional static images, video or other two-dimensional moving images, three-dimensional moving images, a monaural sound stream, or sound separated into a number of spatially related streams, such as stereo. FIG. 21 shows an example of audio signal adaptive classification compatible with the present invention. An example audio signal 2101 is monitored at one or more time points t_0 through t_8. The level of the audio signal 2101 at time points t_0 through t_8 is given by tap points X_0 through X_8. The dynamic range of the audio signal 2101 is given as the difference between the lowest level tap point X_0 and the highest level tap point X_4. In the case of error recovery for erroneous data at t_4, multiple classification can be applied with spatial classification such as ADRC classification and spatial activity classification such as dynamic range classification. Dynamic range classification is performed by thresholding the dynamic range in a manner similar to the motion classification processing of [formula 10]. As described above, motion classification, error classification and spatial classification are referred to in multiple classification. Spatial activity classification can also be introduced into multiple classification for general applications such as video data. In addition to the dynamic range, the standard deviation, the Laplacian value or the spatial gradient value can be introduced for spatial activity classification.
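As an illustration, dynamic range classification can be sketched by thresholding DR in the manner of [formula 10]; the threshold values below are illustrative assumptions, not values given in the text.

```python
import numpy as np

def dynamic_range_class(taps, thresholds=(8, 32, 96)):
    """Spatial activity classification by dynamic range, thresholded in
    the same manner as the motion classification of [formula 10]."""
    taps = np.asarray(taps)
    dr = int(taps.max() - taps.min())  # dynamic range of the tap group
    cls = 0
    for th in thresholds:
        if dr >= th:
            cls += 1
    return cls  # 0 (flat signal) .. 3 (high spatial activity)
```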




With the present invention, the quality of data recovered from errors is improved by introducing the disclosed technologies into the classified adaptive error recovery method. The present invention provides a way to restore a deteriorated signal to an undeteriorated signal which minimizes degradations on changing data.




While the invention is described in terms of embodiments in a specific system environment, those of ordinary skill in the art will recognize that the invention can be practiced, with modification, in other and different hardware and software environments within the spirit and scope of the appended claims.



Claims
  • 1. A method for restoring a deteriorated input signal comprising: selecting a data point of the deteriorated input signal; classifying the data point with respect to a plurality of class types wherein one of the plurality of class types is a motion vector class; creating a multiple classification result according to the plurality of class types, wherein one of the plurality of class types is a motion vector class; selecting at least one filter coefficient according to the multiple classification result; creating an undeteriorated data by filtering the data with the at least one filter coefficient selected.
  • 2. The method of claim 1 wherein the motion vector classification result is created by detecting a motion vector of the deteriorated input signal and generating a quantized characteristic for the motion vector.
  • 3. The method of claim 2 wherein detecting comprises comparing target data of the deteriorated input signal to temporally neighboring data to determine an amount of change between data indicative of a motion vector.
  • 4. The method of claim 2 wherein detecting comprises execution of a process selected from the group comprising block matching, phase correlation and gradient descent.
  • 5. The method of claim 1 wherein the plurality of class types is selected from the group comprising a spatial class, an Adaptive Dynamic Range Coding (ADRC) class, a Differential Pulse Code Modulation (DPCM) class, a Vector Quantization (VQ) class, a Discrete Cosine Transform (DCT) class, a motion class, a spatial activity class, an error class, a dynamic range class, a Laplacian class, a standard deviation class and a spatial gradient class.
  • 6. A method of recovering deteriorated data points comprising: receiving a stream of data points having spatial-temporal correlation; selecting a deteriorated target data point from the stream; classifying the target data point based upon multiple class types to create a multiple classification result, said multiple class types including a motion vector class; and estimating a recovered data point for the target data based on the multiple classification result.
  • 7. The method of claim 6 wherein the motion vector classification result is created by detecting a motion vector of the deteriorated input signal and generating a quantized characteristic for the motion vector.
  • 8. The method of claim 7 wherein the step of detecting comprises comparing target data of the deteriorated input signal to temporally neighboring data to determine an amount of change between data indicative of a motion vector.
  • 9. The method of claim 7 wherein the step of detecting comprises execution of a process selected from the group comprising block matching, phase correlation and gradient descent.
  • 10. The method of claim 6, wherein the estimating the recovered target data point comprises: selecting, based on the multiple classification result, at least one filter coefficient; selecting at least one tap data point having a spatial-temporal relationship to the target data point; and filtering the at least one tap data point according to the at least one filter coefficient.
  • 11. An apparatus for restoring a deteriorated input signal comprising: a selector configured to select a data point of the deteriorated input signal; a classifier configured to classify the data point with respect to a plurality of class types wherein one of the plurality of class types is a motion vector class, to create a multiple classification result according to the plurality of class types, wherein one of the plurality of class types is a motion vector class, and to select at least one filter coefficient according to the multiple classification result; a filter configured to create an undeteriorated data by filtering the data with the at least one filter coefficient selected.
  • 12. The apparatus of claim 11 wherein the classifier is further configured to create the motion vector classification result by detecting a motion vector of the deteriorated input signal and generating a quantized characteristic for the motion vector.
  • 13. The apparatus of claim 12 wherein the classifier is configured to detect by comparing target data of the deteriorated input signal to temporally neighboring data to determine an amount of change between data indicative of a motion vector.
  • 14. The apparatus of claim 12 wherein the classifier is configured to detect by executing a process selected from the group comprising block matching, phase correlation and gradient descent.
  • 15. The apparatus of claim 11 wherein the plurality of class types is selected from the group comprising a spatial class, an Adaptive Dynamic Range Coding (ADRC) class, a Differential Pulse Code Modulation (DPCM) class, a Vector Quantization (VQ) class, a Discrete Cosine Transform (DCT) class, a motion class, a spatial activity class, an error class, a dynamic range class, a Laplacian class, a standard deviation class and a spatial gradient class.
  • 16. An apparatus for recovering deteriorated data points comprising: a receiver configured to receive a stream of data points having spatial-temporal correlation; a selector configured to select a deteriorated target data point from the stream; a classifier configured to classify the target data point based upon multiple class types to create a multiple classification result, said multiple class types including a motion vector class; and an estimator configured to estimate a recovered data point for the target data based on the multiple classification result.
  • 17. The apparatus of claim 16 wherein the motion vector classification result is created by the classifier by detecting a motion vector of the deteriorated input signal and generating a quantized characteristic for the motion vector.
  • 18. The apparatus of claim 17 wherein the classifier detects by comparing target data of the deteriorated input signal to temporally neighboring data to determine an amount of change between data indicative of a motion vector.
  • 19. The method of claim 17 wherein the step of detecting comprises execution of a process selected from the group comprising block matching, phase correlation and gradient descent.
  • 20. The apparatus of claim 16, wherein the estimator is configured to estimate the recovered target data point byselecting, based on the multiple classification result, at least one filter coefficient; selecting at least one tap data point having a spatial-temporal relationship to the target data point; and filtering the at least one tap data point according to the at least one filter coefficient.
  • 21. A computer readable medium containing executable instructions, which, when executed in a processing system, causes the system to perform the steps of restoring a deteriorated input signal, comprising the steps of:selecting a data point of the deteriorated input signal; classifying the data point with respect to a plurality of class types wherein one of the plurality of class types is a motion vector class; creating a multiple classification result according to the plurality of class types, wherein one of the plurality of class types is a motion vector class; selecting at least one filter coefficient according to the multiple classification result; creating an undeteriorated data by filtering the data with the at least one filter coefficient selected.
  • 22. A computer readable medium containing executable instructions, which, when executed in a processing system, causes the system to perform the steps of recovering deteriorated data points, comprising the steps of:receiving a stream of data points having spatial-temporal correlation; selecting a deteriorated target data point from the stream; classifying the target data point based upon multiple class types to create a multiple classification result, said multiple class types including a motion vector class; and estimating a recovered data point for the target data based on the multiple classification result.
  • 23. A system for restoring a deteriorated input signal, comprising: means for selecting a data point of the deteriorated input signal; means for classifying the data point with respect to a plurality of class types, wherein one of the plurality of class types is a motion vector class; means for creating a multiple classification result according to the plurality of class types; means for selecting at least one filter coefficient according to the multiple classification result; and means for creating undeteriorated data by filtering the data point with the at least one selected filter coefficient.
  • 24. A system for recovering deteriorated data points, comprising: means for receiving a stream of data points having spatial-temporal correlation; means for selecting a deteriorated target data point from the stream; means for classifying the target data point based upon multiple class types to create a multiple classification result, said multiple class types including a motion vector class; and means for estimating a recovered data point for the target data based on the multiple classification result.
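The motion vector classification recited in claims 14 and 17-19 can be illustrated concretely. The following is a minimal sketch, not the patented implementation: it assumes exhaustive block matching over a small search window as the detection process (one of the three processes named in claims 14 and 19) and a simple magnitude quantizer as the "quantized characteristic" of claim 17. The function names, block size, search range, and thresholds are illustrative assumptions.

    import numpy as np

    def block_match(cur, prev, y, x, block=8, search=4):
        # Exhaustive block matching: return the (dy, dx) displacement,
        # within a +/- `search` pixel window, that minimizes the sum of
        # absolute differences (SAD) against the previous frame.
        target = cur[y:y + block, x:x + block].astype(np.int32)
        best_sad, best_vec = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                    continue
                cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                sad = int(np.abs(target - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_vec = sad, (dy, dx)
        return best_vec

    def motion_vector_class(vec, thresholds=(0.5, 2.0, 4.0)):
        # Quantize the vector magnitude into class IDs 0..3
        # (still, slow, medium, fast); the thresholds are assumptions.
        mag = float(np.hypot(vec[0], vec[1]))
        return sum(mag > t for t in thresholds)

    # Example: a frame shifted by (1, 2) yields vec = (1, 2), class 2.
    rng = np.random.default_rng(1)
    cur = rng.integers(0, 256, size=(32, 32))
    prev = np.roll(cur, shift=(1, 2), axis=(0, 1))
    vec = block_match(cur, prev, y=8, x=8)
    print(vec, motion_vector_class(vec))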
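Claim 15 lists an Adaptive Dynamic Range Coding (ADRC) class among the available spatial class types. As a hedged illustration, the sketch below implements the common 1-bit variant of ADRC classification: each tap is thresholded at the midpoint of the local dynamic range and the resulting bits are packed into an integer class ID. The tap count and bit order are assumptions, not details taken from the claims.

    import numpy as np

    def adrc_class(taps):
        # 1-bit ADRC: requantize each tap against the local dynamic
        # range and pack the bits into an integer class ID.
        taps = np.asarray(taps, dtype=np.int32)
        lo, hi = int(taps.min()), int(taps.max())
        if hi == lo:                          # flat area: single class
            return 0
        bits = (taps >= (lo + hi + 1) // 2).astype(np.int64)
        return int(bits.dot(1 << np.arange(bits.size)))

    print(adrc_class([12, 200, 190, 15]))     # bits (0,1,1,0) -> class 6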
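Claims 20, 21, and 23 recite selecting at least one filter coefficient according to the multiple classification result and filtering spatial-temporal tap data with it. The sketch below shows one plausible realization, assuming the per-type class IDs are folded into a single index into a pre-trained coefficient table; the table shape, tap count, and helper names are illustrative, and in practice the coefficients would come from a prior training process rather than the random values used here.

    import numpy as np

    NUM_MOTION_CLASSES = 4
    NUM_SPATIAL_CLASSES = 16

    def combined_class(motion_cls, spatial_cls):
        # Fold the per-type class IDs into one multiple-classification
        # index, used as a single lookup key for the coefficient table.
        return motion_cls * NUM_SPATIAL_CLASSES + spatial_cls

    def recover_point(taps, coeff_table, motion_cls, spatial_cls):
        # Estimate the recovered data point as the inner product of the
        # selected coefficient set with the tap data.
        coeffs = coeff_table[combined_class(motion_cls, spatial_cls)]
        return float(np.dot(coeffs, taps))

    # Usage with an untrained (random) table of 9-tap filter coefficients.
    rng = np.random.default_rng(0)
    coeff_table = rng.normal(size=(NUM_MOTION_CLASSES * NUM_SPATIAL_CLASSES, 9))
    taps = rng.integers(0, 256, size=9)   # nine spatial-temporal neighbors
    print(recover_point(taps, coeff_table, motion_cls=1, spatial_cls=5))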
US Referenced Citations (117)
Number Name Date Kind
3311879 Daher Mar 1967 A
3805232 Allen Apr 1974 A
4361853 Remy et al. Nov 1982 A
4381519 Wilkinson et al. Apr 1983 A
4419693 Wilkinson et al. Dec 1983 A
4532628 Matthews Jul 1985 A
4574393 Blackwell et al. Mar 1986 A
4586082 Wilkinson Apr 1986 A
4656514 Wilkinson et al. Apr 1987 A
4675735 Wilkinson et al. Jun 1987 A
4703351 Kondo Oct 1987 A
4703352 Kondo Oct 1987 A
4710811 Kondo Dec 1987 A
4722003 Kondo Jan 1988 A
4729021 Kondo Mar 1988 A
4772947 Kondo Sep 1988 A
4788589 Kondo Nov 1988 A
4807033 Keesen et al. Feb 1989 A
4815078 Shimura Mar 1989 A
4845560 Kondo et al. Jul 1989 A
4890161 Kondo Dec 1989 A
4924310 Von Brandt May 1990 A
4953023 Kondo Aug 1990 A
4975915 Sako et al. Dec 1990 A
4979040 Masumoto Dec 1990 A
5023710 Kondo et al. Jun 1991 A
5043810 Vreeswijk et al. Aug 1991 A
5086489 Shimura Feb 1992 A
5093872 Tutt Mar 1992 A
5101446 Resnikoff et al. Mar 1992 A
5122873 Golin Jun 1992 A
5134479 Ohishi Jul 1992 A
5142537 Kutner et al. Aug 1992 A
5150210 Hoshi et al. Sep 1992 A
5159452 Kinoshita Oct 1992 A
5166987 Kageyama Nov 1992 A
5177797 Takenaka et al. Jan 1993 A
5185746 Tanaka et al. Feb 1993 A
5196931 Kondo Mar 1993 A
5208816 Seshadri et al. May 1993 A
5237424 Nishino et al. Aug 1993 A
5241381 Kondo Aug 1993 A
5243428 Challapali et al. Sep 1993 A
5247363 Sun et al. Sep 1993 A
5258835 Kato Nov 1993 A
5307175 Seachman Apr 1994 A
5327502 Katata et al. Jul 1994 A
5337087 Mishima Aug 1994 A
5359694 Concordel Oct 1994 A
5379072 Kondo Jan 1995 A
5398078 Masuda et al. Mar 1995 A
5400076 Iwamura Mar 1995 A
5406334 Kondo et al. Apr 1995 A
5416651 Uetake et al. May 1995 A
5416847 Boze May 1995 A
5428403 Andrew et al. Jun 1995 A
5434716 Sugiyama et al. Jul 1995 A
5438369 Citta et al. Aug 1995 A
5446456 Seo Aug 1995 A
5455629 Sun et al. Oct 1995 A
5469216 Takahashi et al. Nov 1995 A
5469474 Kitabatake Nov 1995 A
5471501 Parr et al. Nov 1995 A
5473479 Takahura Dec 1995 A
5481554 Kondo Jan 1996 A
5481627 Kim Jan 1996 A
5495298 Uchida et al. Feb 1996 A
5499057 Kondo et al. Mar 1996 A
5528608 Shimizume Jun 1996 A
5546130 Hackett et al. Aug 1996 A
5557420 Yanagihara et al. Sep 1996 A
5557479 Yanagihara Sep 1996 A
5568196 Hamada et al. Oct 1996 A
5577053 Dent Nov 1996 A
5579051 Murakami et al. Nov 1996 A
5594807 Liu Jan 1997 A
5598214 Kondo et al. Jan 1997 A
5617135 Noda et al. Apr 1997 A
5617333 Oyamada et al. Apr 1997 A
5625715 Trew et al. Apr 1997 A
5636316 Oku et al. Jun 1997 A
5649053 Kim Jul 1997 A
5663764 Kondo et al. Sep 1997 A
5673357 Shima Sep 1997 A
5677734 Oikawa et al. Oct 1997 A
5689302 Jones Nov 1997 A
5699475 Oguro et al. Dec 1997 A
5703889 Shimoda et al. Dec 1997 A
5724099 Hamdi et al. Mar 1998 A
5724369 Brailean et al. Mar 1998 A
5737022 Yamaguchi et al. Apr 1998 A
5751361 Kim May 1998 A
5751743 Takazawa May 1998 A
5751862 Williams et al. May 1998 A
5786857 Yamaguchi Jul 1998 A
5790195 Ohsawa Aug 1998 A
5796786 Lee Aug 1998 A
5805762 Boyce et al. Sep 1998 A
5809041 Shikakura et al. Sep 1998 A
5809231 Yokoyama et al. Sep 1998 A
5852470 Kondo et al. Dec 1998 A
5861922 Murashita et al. Jan 1999 A
5878183 Sugiyama et al. Mar 1999 A
5883983 Lee et al. Mar 1999 A
5903481 Kondo et al. May 1999 A
5936674 Kim Aug 1999 A
5938318 Araki Jul 1999 A
5940539 Kondo et al. Aug 1999 A
5946044 Kondo et al. Aug 1999 A
6018317 Dogan et al. Jan 2000 A
6067636 Yao et al. May 2000 A
6104434 Nakagawa et al. Aug 2000 A
6137915 Chai Oct 2000 A
6151416 Kondo et al. Nov 2000 A
6164540 Bridgelall et al. Dec 2000 A
6192079 Sharma et al. Feb 2001 B1
6192161 Kondo et al. Feb 2001 B1
Foreign Referenced Citations (23)
Number Date Country
0 398 741 Nov 1990 EP
0 527 611 Aug 1992 EP
0 558 016 Feb 1993 EP
0 566 412 Apr 1993 EP
0 571 180 May 1993 EP
0 592 196 Oct 1993 EP
0 596 826 Nov 1993 EP
0 605 209 Dec 1993 EP
0 610 587 Dec 1993 EP
0 592 196 Apr 1994 EP
0 597 576 May 1994 EP
0 651 584 Oct 1994 EP
0 680 209 Apr 1995 EP
0 746 157 May 1996 EP
0 833 517 Apr 1998 EP
2 280 812 Feb 1995 GB
2 320 836 Nov 1997 GB
7-67028 Mar 1995 JP
WO 9607987 Sep 1995 WO
WO 9746019 Dec 1997 WO
WO 9921285 Oct 1998 WO
WO 9921090 Apr 1999 WO
WO 0048126 Aug 2000 WO
Non-Patent Literature Citations (78)
Entry
Ozkan, M.K., et al., “Adaptive Motion-Compensated Filtering of Noisy Image Sequences”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, Issue 4, pp. 277-290, Aug. 1993.
Sezan, et al., “Temporally Adaptive Filtering of Noisy Image Sequences Using a Robust Motion Estimation Algorithm”, 1991 International Conference on Acoustics, Speech, and Signal Processing, vol. 4, pp. 2429-2432, Apr. 14-17, 1991.
Crinon, R.J., et al., “Adaptive Model-Based Motion Estimation”, IEEE Transactions on Image Processing, vol. 3, Issue 5, pp. 469-481, Sep. 1994.
Wollborn, M., “Prototype Prediction for Colour Update in Object-Based Analysis-Synthesis Coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 4, No. 3, pp. 236-245, Jun. 1994.
Patti, A.J., et al., “Robust Methods for High-Quality Stills from Interlaced Video in the Presence of Dominant Motion”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 2, pp. 328-342, Apr. 1997.
Meguro, et al., “Adaptive Order Statistics Filter Based On Fuzzy Rules For Image Processing”, pp. 70-80, ©1997 Scripta Technica, Inc.
PCT Written Opinion PCT/US00/03738, Jan. 26, 2001, 7 pgs.
International Search Report PCT/US00/23035, Jan. 22, 2001, 5 pgs.
Jeng, et al., “Concealment Of Bit Error And Cell Loss In Inter-Frame Coded Video Transmission”, 1991 IEEE, 17.4.1-17.4.5.
Monet, et al., “Block Adaptive Quantization Of Images”, IEEE 1993, pp. 303-306.
International Search Report PCT/US00/03738, Feb. 11, 2000, 9 pgs.
Stammnitz, et al., “Digital HDTV Experimental System”, pp. 535-542.
International Search Report PCT/US00/03508, Feb. 9, 2000, 8 pgs.
Chu, et al., “Detection and Concealment of Transmission Errors in H.261 Images”, XP-000737027, pp. 74-84, IEEE Transactions on Circuits and Systems for Video Technology, Feb. 1998.
Zhu, et al., “Coding and Cell-Loss Recovery in DCT-Based Packet Video”, IEEE Transactions on Circuits and Systems for Video Technology, Jun. 3, 1993, No. 3, NY.
International Search Report PCT/US98/22347, Mar. 16, 1999, 2 pgs.
International Search Report PCT/US95/22531, Apr. 1, 1999, 1 pg.
International Search Report PCT/US98/22411, Feb. 25, 1999, 1 pg.
International Search Report PCT/US98/22412, Oct. 5, 1999, 5 pgs.
International Search Report PCT/US00/03439, Feb. 9, 2000, 8 pgs.
International Search Report PCT/US00/03595, Feb. 10, 2000, 6 pgs.
International Search Report PCT/US00/03611, Feb. 10, 2000, 8 pgs.
International Search Report PCT/US00/03599, Feb. 10, 2000, 4 pgs.
International Search Report PCT/US00/03742, Feb. 11, 2000, 5 pgs.
International Search Report PCT/US00/03654, Feb. 10, 2000, 4 pgs.
International Search Report PCT/US00/03299, Feb. 9, 2000, 5 pgs.
Park, et al., “A Simple Concealment For ATM Bursty Cell Loss”, IEEE Transactions on Consumer Electronics, No. 3, Aug. 1993, pp. 704-709.
Tom, et al., “Packet Video for Cell Loss Protection Using Deinterleaving and Scrambling”, ICASSP 91: 1991 International Conference on Acoustics, Speech and Signal Processing, vol. 4, pp. 257-260, Apr. 1991.
NHK Laboratories Note, “Error Correction, Concealment and Shuffling”, No. 424, Mar. 1994, pp. 29-44.
Kondo, et al., “Adaptive Dynamic Range Coding Scheme For Future HDTV Digital VTR”, Fourth International Workshop on HDTV and Beyond, Sep. 4-6, Turin, Italy.
Kondo, et al., “A New Concealment Method For Digital VCR's”, IEEE Visual Signal Processing and Communication, pp. 20-22, Sep. 1993, Melbourne, Australia.
Kim, et al., “Bit Rate Reduction Algorithm For A Digital VCR”, IEEE Transactions on Consumer Electronics, vol. 37, No. 3, Aug. 1, 1992, pp. 267-274.
R.C. Gonzalez, et al., “Digital Image Processing”, Addison Wesley Publishing Company, Inc., 1992, pp. 67-88.
R. Aravind, et al., “Image and Video Coding Standards”, AT&T Technical Journal, Jan./Feb. 1993, pp. 67-88.
Translation of Abstract of Japanese Patent No. 61147690.
Translation of Abstract of Japanese Patent No. 63256080.
Translation of Abstract of Japanese Patent No. 63257390.
Translation of Abstract of Japanese Patent No. 02194785.
Translation of Abstract of Japanese Patent No. 03024885.
Translation of Abstract of Japanese Patent No. 04037293.
Translation of Abstract of Japanese Patent No. 04316293.
Translation of Abstract of Japanese Patent No. 04329088.
Translation of Abstract of Japanese Patent No. 05047116.
Translation of Abstract of Japanese Patent No. 05244579.
Translation of Abstract of Japanese Patent No. 05244580.
Translation of Abstract of Japanese Patent No. 05244559.
Translation of Abstract of Japanese Patent No. 05304659.
Translation of Abstract of Japanese Patent No. 06086259.
Translation of Abstract of Japanese Patent No. 06113258.
Translation of Abstract of Japanese Patent No. 06125534.
Translation of Abstract of Japanese Patent No. 06162693.
Translation of Abstract of Japanese Patent No. 06253287.
Translation of Abstract of Japanese Patent No. 06253280.
Translation of Abstract of Japanese Patent No. 06253284.
Translation of Abstract of Japanese Patent No. 07046604.
Translation of Abstract of Japanese Patent No. 07085611.
Translation of Abstract of Japanese Patent No. 07095581.
Translation of Abstract of Japanese Patent No. 07177505.
Translation of Abstract of Japanese Patent No. 07177506.
Translation of Abstract of Japanese Patent No. 07240903.
Japanese Patent No. 05304659 and translation of Abstract.
Japanese Patent No. 05244578 and translation of Abstract.
Japanese Patent No. 05300485 and translation of Abstract.
Japanese Patent No. 06070298 and translation of Abstract.
Japanese Patent No. 06006778 and translation of Abstract.
Japanese Patent No. 06113256 and translation of Abstract.
Japanese Patent No. 06113275 and translation of Abstract.
Japanese Patent No. 06253287 and translation of Abstract.
Japanese Patent No. 06253280 and translation of Abstract.
Japanese Patent No. 06253284 and translation of Abstract.
Japanese Patent No. 06350981 and translation of Abstract.
Japanese Patent No. 06350982 and translation of Abstract.
Japanese Patent No. 08317394 and translation of Abstract.
Japanese Patent No. 07023388 and translation of Abstract.
Japanese Patent No. 04245881 and translation of Abstract.
Japanese Patent No. 04115628 and translation of Abstract.
Japanese Patent No. 04115686 and translation of Abstract.