Advances in extending the AAM techniques from grayscale to color images

Information

  • Patent Grant
  • Patent Number
    7,965,875
  • Date Filed
    Tuesday, June 12, 2007
  • Date Issued
    Tuesday, June 21, 2011
Abstract
A face detection and/or tracking method includes acquiring a digital color image. An active appearance model (AAM) is applied including an interchannel-decorrelated color space. One or more parameters of the model are matched to the image. Face detection results based on the matching and/or different results incorporating the face detection result are communicated.
Description
BACKGROUND

The active appearance model (AAM) techniques were first described by Edwards et al. [1]. They have been extensively used in applications such as face tracking and analysis and interpretation of medical images.


Different derivations of the standard AAM techniques have been proposed for grayscale images in order to improve the convergence accuracy or speed. Cootes et al. proposed in [2] a weighted edge representation of the image structure, claiming a more reliable and accurate fitting than using the standard representation based on normalized intensities. Other derivations include the direct appearance models (DAMs) [3], or the Shape AAMs [4], where the convergence speed is increased by reducing the number of parameters that need to be optimized. In the DAM approach, it is shown that predicting shape directly from texture can be possible when the two are sufficiently correlated. The Shape AAMs use the image residuals for driving the pose and shape parameters only, while the texture parameters are directly estimated by fitting to the current texture.


In [5], a method which uses canonical correlation analysis (CCAAAM) for reducing the dimensionality of the original data instead of the common principal components analysis (PCA) is introduced. This method is claimed to be faster than the standard approach while recording almost equal final accuracy.


An inverse compositional approach is proposed in [6], where the texture warp is composed of incremental warps, instead of using the additive update of the parameters. This method considers shape and texture separately and is shown to increase the AAM fitting efficiency.


Originally designed for grayscale images, AAMs have been later extended to color images. Edwards et al. [7] first proposed a color AAM based on the RGB color space. This approach involves constructing a color texture vector by merging concatenated values of each color channel. However, their results did not indicate that benefits in accuracy could be achieved from the additional chromaticity data which were made available. Furthermore, the extra computation required to process these data suggested that color-based AAMs could not provide useful improvements over conventional grayscale AAMs.


Stegmann et al. [8] proposed a value, hue, edge map (VHE) representation of image structure. They used a transformation to HSV (hue, saturation, and value) color space, from which they retained only the hue and value (intensity) components. They added to these an edge map component, obtained using numeric differential operators. A color texture vector was created as in [7], using the V, H, and E components instead of the R, G, and B components. In their experiments they compared the convergence accuracy of the VHE model with the grayscale and RGB implementations. Here they obtained unexpected results indicating that the RGB model (as proposed in [7]) was slightly less accurate than the grayscale model. The VHE model outperformed both the grayscale and RGB models, but only by a modest amount; yet some applicability to the case of directional lighting changes was shown.


SUMMARY OF THE INVENTION

A method of detecting and/or tracking faces in a digital image is provided. The method includes acquiring a digital color image. An active appearance model (AAM) is applied including an interchannel-decorrelated color space. One or more parameters of the model are matched to the image. A face detection result based on the matching and/or a different processing result incorporating the face detection result is communicated.


The method may include converting RGB data to I1I2I3 color space. The converting may include linear conversion. Texture may be represented with the I1I2I3 color space. The texture may be aligned on separate channels. Operations may be performed on the texture data on each channel separately. The interchannel-decorrelated color space may include at least three channels including a luminance channel and two chromatic channels.


The AAM may include an application of principal components analysis (PCA) which may include eigen-analysis of dispersions of shape, texture and appearance. The AAM may further include an application of generalized procrustes analysis (GPA) including aligning shapes, a model of shape variability including an application of PCA on a set of shape vectors, a normalization of objects within the image with respect to shape and/or generation of a texture model including sampling intensity information from each shape-free image to form a set of texture vectors. The generation of the texture model may include normalization of the set of texture vectors and application of PCA on the normalized texture vectors. The applying may include retaining only the first one or two of the aligned texture vectors. The AAM may also include generation of a combined appearance model including a combined vector from weighted shape parameters concatenated to texture parameters, and application of PCA to the combined vector.


The matching may include a regression approach and/or finding model parameters and/or pose parameters which may include translation, scale and/or rotation.


The interchannel-decorrelated color space may include an orthogonal color space. Effects of global lighting and chrominance variations may be reduced with the AAM. One or more detected faces may be tracked through a series of two or more images.


An apparatus for detecting faces in a digital image is also provided including a processor and one or more processor-readable media for programming the processor to control the apparatus to perform any of the methods described herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A-1B include examples of annotated images from PIE database and IMM database, respectively.



FIG. 2 includes histogram plots of point-to-curve boundary errors after applying the (PIE) models on PIE subset 1 (seen images).



FIG. 3 includes cumulative histogram plots of point-to-curve boundary errors after applying the (PIE) models on PIE subset 1 (seen images).



FIG. 4 includes histogram plots of point-to-curve boundary errors after applying the (PIE) models on PIE subset 2 (unseen images).



FIG. 5 includes cumulative histogram plots of point-to-curve boundary errors after applying the (PIE) models on PIE subset 2 (unseen images).



FIG. 6 includes a plot of actual dx versus predicted dx displacements for (PIE) RGB GN model applied on PIE subset 2.



FIG. 7 includes a plot of actual dx versus predicted dx displacements for (PIE) CIELAB GN model applied on PIE subset 2.



FIG. 8 includes a plot of actual dx versus predicted dx displacements for (PIE) I1I2I3 SChN model applied on PIE subset 2.



FIG. 9 illustrates comparative average PtPt errors for PIE models applied on three different sets of images.



FIG. 10 illustrates comparative average PtPt errors for IMM models applied on three different sets of images.





DETAILED DESCRIPTION OF THE EMBODIMENTS

A more appropriate extension of active appearance modeling (AAM) techniques to color images is provided. Accordingly, the embodiments are drawn to color spaces other than RGB because intensity and chromaticity information are strongly mixed in each of the R, G and B color channels. By employing color spaces where there is a stronger separation of the chromaticity and intensity information, we have been able to distinguish between intensity-dependent and chromaticity-dependent aspects of a color AAM. This has enabled the development of a new approach for normalizing color texture vectors, performing a set of independent normalizations on the texture subvectors corresponding to each color channel. This approach has enabled the creation of more accurate AAM color models than the conventional grayscale model. An investigation across a number of color spaces indicates that the best performing models are those created in a color space where the three color channels are optimally decorrelated. A performance comparison across the studied color spaces supports these conclusions.


The basic AAM algorithm for grayscale images is briefly described below. Then, extension of this model to RGB color images is analyzed, and a CIELAB-based model is proposed. CIELAB is a perceptually uniform color space that is widely used for advanced image processing applications. Extending the AAMs by applying the texture normalization separately to each component of the color space is also analyzed. The I1I2I3 color space, which exhibits substantially optimal decorrelation between the color channels, is shown to be suited to this purpose. The proposed color AAM extension, which realizes a more appropriate texture normalization for color images is also described. Experimental results are shown, and a detailed set of comparisons between the standard grayscale model, the common RGB extension, and our proposed models are provided. Finally, conclusions are presented.


In what follows we frequently use the term texture. In the context of this work, texture is defined as the set of pixel intensities across an object, typically after a suitable normalization.


Overview of the Basic (Grayscale) AAM

The image properties modeled by AAMs are shape and texture. The parameters of the model are estimated from an initial scene and subsequently used for synthesizing a parametric object image. In order to build a statistical model of the appearance of an object a training dataset is used to create (i) a shape model, (ii) a texture model and then (iii) a combined model of appearance by means of PCA, that is an eigenanalysis of the distributions of shape, texture and appearance. The training dataset contains object images, or image examples, annotated with a fixed set of landmark points. These are the training examples. The sets of 2D coordinates of the landmark points define the shapes inside the image frame. These shapes are aligned using the generalized Procrustes analysis (GPA) [9], a technique for removing the differences in translation, rotation and scale between the training set of shapes. This technique defines the shapes in the normalized frame. These aligned shapes are also called the shape examples.


Let N be the number of training examples. Each shape example is represented as a vector s of concatenated coordinates of its points (x1, x2, . . . , xL, y1, y2, . . . , yL)T, where L is the number of landmark points. PCA is then applied to the set of aligned shape vectors reducing the initial dimensionality of the data. Shape variability is then linearly modeled as a base (mean) shape plus a linear combination of shape eigenvectors.

s_m = \bar{s} + \Phi_s b_s,   (1)


where sm represents a modeled shape, s̄ the mean of the aligned shapes, Φs=(φs1|φs2| . . . |φsp) is a matrix having p shape eigenvectors as its columns (p<N), and finally, bs defines the set of parameters of the shape model. p is chosen so that a certain percentage of the total variance of the data is retained. The corresponding texture model is next constructed. For that, a reference shape is needed in order to acquire a set of so-called texture examples. The reference shape is usually chosen as the point-wise mean of the shape examples. The texture examples are defined in the normalized frame of the reference shape. Each image example is then warped (distorted) such that the points that define its attached shape (used as control points) match the reference shape; this is usually realized by means of a fast triangulation algorithm. Thus, the texture across each image object is mapped into its shape-normalized representation. All shape differences between the image examples are now removed. The resulting images are also called the image examples in the normalized frame. For each of these images, the corresponding pixel values across their common shape are scanned to form the texture vectors tim=(tim1, tim2, . . . , timP)T, where P is the number of texture samples.
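For illustration, a minimal Python sketch of the shape model of (1) is given below, assuming the shapes have already been aligned with GPA; the array layout, function names, and the 95% variance threshold are illustrative choices rather than details taken from the embodiments.

```python
import numpy as np

def build_shape_model(aligned_shapes, retained_variance=0.95):
    """PCA shape model: s_m = s_bar + Phi_s @ b_s.

    aligned_shapes: N x 2L array of GPA-aligned shapes, each row being
    (x1..xL, y1..yL).  Names and threshold are illustrative assumptions.
    """
    s_bar = aligned_shapes.mean(axis=0)
    X = aligned_shapes - s_bar                       # center the data
    # Eigen-analysis of the shape dispersion (PCA via SVD)
    _, sing_vals, Vt = np.linalg.svd(X, full_matrices=False)
    var = sing_vals ** 2
    cum = np.cumsum(var) / var.sum()
    p = int(np.searchsorted(cum, retained_variance)) + 1
    Phi_s = Vt[:p].T                                 # 2L x p eigenvector matrix
    return s_bar, Phi_s

def shape_params(s, s_bar, Phi_s):
    """Project a shape onto the model: b_s = Phi_s^T (s - s_bar)."""
    return Phi_s.T @ (s - s_bar)

def synthesize_shape(s_bar, Phi_s, b_s):
    """Reconstruct a modeled shape: s_m = s_bar + Phi_s b_s."""
    return s_bar + Phi_s @ b_s
```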


Each texture vector tim is further aligned with respect to intensity values, as detailed below, in order to minimize the global lighting variations. This global texture normalization is designed so that each normalized texture vector is aligned as closely as possible to the mean of the normalized texture vectors.


PCA is next applied to the set of normalized vectors, reducing thus the dimensionality of the texture data. The texture model is also a linear model, a texture instance being obtained from a base (mean) texture plus a linear combination of texture eigenvectors. Thus,

t_m = \bar{t} + \Phi_t b_t.   (2)


Similar to the shape model, tm represents a synthesized (modeled) texture in the normalized texture frame, t̄ is the mean normalized texture, Φt=(φt1|φt2| . . . |φtq) is a matrix having q texture eigenvectors as its columns, with q<N chosen so that a certain percentage of the total variance of the texture data is retained, and bt defines the set of parameters of the texture model.


A vector c is further formed by concatenating the shape and texture parameters which optimally describe each of the training examples,

c = \begin{pmatrix} W_s b_s \\ b_t \end{pmatrix};





Ws is a diagonal matrix of (normally equal) weights, applied in order to correct the differences in units between the shape and texture parameters.


A model for which the concatenated shape and texture parameters c are used to describe the appearance variability is called an independent model of appearance. A more compact model may be obtained by considering that some correlation exists between shape and texture. Thus, a third PCA is applied on the set of vectors c, resulting in a combined model of appearance

c_m = \Phi_c b_c,   (3)


where Φc is the matrix of retained eigenvectors and bc represents the set of parameters that provide combined control of shape and texture variations. This reduces the total number of parameters of the appearance model.
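A similar sketch for the combined appearance model of (3) follows; taking W_s as a scalar multiple of the identity, derived from the ratio of texture to shape variance, is a common heuristic and only one possible choice for the weighting.

```python
import numpy as np

def build_appearance_model(Bs, Bt, retained_variance=0.95):
    """Combined appearance PCA on c = (W_s b_s ; b_t) over all training examples.

    Bs: N x p matrix of shape parameters, Bt: N x q matrix of texture parameters.
    W_s is taken here as r * I, with r correcting the unit difference between
    shape and texture parameters (an illustrative heuristic choice).
    """
    r = np.sqrt(Bt.var(axis=0).sum() / Bs.var(axis=0).sum())
    C = np.hstack([r * Bs, Bt])                     # combined vectors c
    c_bar = C.mean(axis=0)
    _, sing_vals, Vt = np.linalg.svd(C - c_bar, full_matrices=False)
    var = sing_vals ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), retained_variance)) + 1
    Phi_c = Vt[:k].T                                # retained eigenvectors
    return c_bar, Phi_c, r
```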


During the optimization stage of an AAM (fitting the model to a query image), the parameters to be found are

p = \begin{pmatrix} g_s \\ b_c \end{pmatrix},





where gs are the shape 2D position, 2D rotation and scale parameters inside the image frame, and bc are the combined model parameters.


The optimization of the parameters p is realized by minimizing the reconstruction error between the query image and the modeled image. The error is evaluated in the coordinate frame of the model, i.e., in the normalized texture reference frame, rather than in the coordinate frame of the image. This choice enables a fast approximation of a gradient descent optimization algorithm, described below. The difference between the query image and the modeled image is thus given by the difference between the normalized image texture and the normalized synthesized texture,

r(p)=t−tm,   (4)


and ∥r(p)∥² is the reconstruction error, with ∥·∥ denoting the Euclidean norm.


A first order Taylor extension of r(p) is given by

r(p + \delta p) \approx r(p) + \frac{\partial r}{\partial p} \delta p.   (5)









δp should be chosen so as to minimize ∥r(p + δp)∥². It follows that
















\frac{\partial r}{\partial p} \delta p = -r(p).   (6)







Normally, the gradient matrix ∂r/∂p should be recomputed at each iteration. Yet, as the error is estimated in a normalized texture frame, this gradient matrix may be considered as fixed. This enables it to be pre-computed from the training dataset. Given a training image, each parameter in p is systematically displaced from its known optimal value and the resulting normalized texture differences are recorded. The resulting matrices are then averaged over several displacement amounts and over several training images.
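A rough sketch of this pre-computation step is given below; residual_fn is a hypothetical helper that warps the image for a given parameter vector and returns the normalized texture residual r(p), and the finite-difference scheme shown is only one way to average the displaced residuals.

```python
import numpy as np

def estimate_gradient_matrix(residual_fn, training_images, optimal_params,
                             deltas=(1.0, -1.0, 3.0, -3.0)):
    """Estimate dr/dp by systematically displacing each parameter.

    residual_fn(image, p) -> residual vector r(p)   (hypothetical interface)
    optimal_params[i]     -> known optimal parameter vector for image i
    deltas                -> displacement amounts to average over (illustrative)
    """
    n_params = len(optimal_params[0])
    n_res = residual_fn(training_images[0], optimal_params[0]).shape[0]
    J = np.zeros((n_res, n_params))
    n_images = 0
    for image, p_opt in zip(training_images, optimal_params):
        r0 = residual_fn(image, p_opt)
        for j in range(n_params):
            for d in deltas:
                p = np.array(p_opt, dtype=float)
                p[j] += d
                # finite-difference estimate of column j of dr/dp
                J[:, j] += (residual_fn(image, p) - r0) / d
        n_images += 1
    return J / (n_images * len(deltas))
```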


The update direction of the model parameters p is then given by

δp=−Rr(p),   (7)

where






R = \left( \frac{\partial r^T}{\partial p} \frac{\partial r}{\partial p} \right)^{-1} \frac{\partial r^T}{\partial p}








is the pseudo-inverse of the determined gradient matrix, which can be pre-computed as part of the training stage. The parameters p continue to be updated iteratively until the error can no longer be reduced and convergence is declared.
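Putting (4)-(7) together, the iterative search can be sketched as follows, assuming R has been pre-computed as above and reusing the hypothetical residual_fn helper; the damped step scales are a common practical refinement, not a requirement of the method.

```python
import numpy as np

def fit_aam(residual_fn, image, p0, R, max_iter=30, step_scales=(1.0, 0.5, 0.25)):
    """Iterative AAM search using delta_p = -R r(p), as in eq. (7).

    residual_fn(image, p) -> normalized texture residual r(p)  (hypothetical)
    R                     -> pre-computed pseudo-inverse of the gradient matrix
    """
    p = np.asarray(p0, dtype=float).copy()
    best_err = float(np.sum(residual_fn(image, p) ** 2))
    for _ in range(max_iter):
        delta_p = -R @ residual_fn(image, p)
        improved = False
        for k in step_scales:                      # damped update attempts
            candidate = p + k * delta_p
            err = float(np.sum(residual_fn(image, candidate) ** 2))
            if err < best_err:
                p, best_err, improved = candidate, err, True
                break
        if not improved:                           # error can no longer be reduced
            break
    return p, best_err
```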


The Texture Normalization Stage

As noted also by Batur et al. [10], and confirmed by our experiments, this stage is preferred during the optimization process, providing enhanced chances for predicting a correct update direction of the parameter vector (δp).


Texture normalization is realized by applying to the texture vector tim a scaling α, and an offset β, being thus a linear normalization,










t = \frac{t_{im} - \beta \, 1}{\alpha},   (8)







where 1 denotes a vector of ones of length P.


The values for α and β are chosen to best match the current vector to the mean vector of the normalized data. In practice, the mean normalized texture vector is offset and scaled to have zero-mean and unit-variance. If







\bar{t} = \frac{1}{N} \sum_{i=1}^{N} t_i







is the mean vector of the normalized texture data, let t̄zm,uv be its zero-mean and unit-variance correspondent. Then, the values for α and β required to normalize a texture vector tim, according to (8), are given by










\alpha = t_{im}^T \, \bar{t}_{zm,uv},   (9)

\beta = \frac{t_{im}^T \, 1}{P}.   (10)







Obtaining the mean of the normalized data is thus a recursive process. A stable solution can be found by using one texture vector as the first estimate of the mean. Each texture vector is then aligned to the zero-mean, unit-variance mean vector as described in (8)-(10), the mean is re-estimated, and these steps are repeated iteratively until convergence is achieved.
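A compact sketch of this recursive normalization of (8)-(10) is shown below for a matrix of texture vectors; array names and the convergence tolerance are illustrative.

```python
import numpy as np

def normalize_textures(T, n_iter=10, tol=1e-6):
    """Global texture normalization for an N x P matrix of texture vectors."""
    N, P = T.shape
    mean = T[0].copy()                      # first vector as initial mean estimate
    for _ in range(n_iter):
        # zero-mean, unit-variance version of the current mean estimate
        t_zm_uv = (mean - mean.mean()) / mean.std()
        alpha = T @ t_zm_uv                 # eq. (9), one value per texture vector
        beta = T.sum(axis=1) / P            # eq. (10)
        T_norm = (T - beta[:, None]) / alpha[:, None]   # eq. (8)
        new_mean = T_norm.mean(axis=0)
        if np.linalg.norm(new_mean - mean) < tol:       # mean has stabilized
            mean = new_mean
            break
        mean = new_mean
    return T_norm, mean
```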


Color AAM Extensions Based on Global Color Texture Normalization

It is to be expected that using the complete color information will lead to an increased accuracy of AAM fitting. Yet, previous extensions of AAMs to color images showed only modest improvements, if any, in the convergence accuracy of the model. Before investigating this further, we first present the common AAM extension method to color images. We also propose a variant of this method based on a CIELAB color space representation instead of the initial RGB representation.


RGB is by far the most widely used color space in digital images [11]. The extension proposed by Edwards et al. [7] is realized by using an extended texture vector given by











t_{im}^{RGB} = \left( t_{im1}^{R}, t_{im2}^{R}, \ldots, t_{imP_c}^{R}, \; t_{im1}^{G}, t_{im2}^{G}, \ldots, t_{imP_c}^{G}, \; t_{im1}^{B}, t_{im2}^{B}, \ldots, t_{imP_c}^{B} \right)^{T},   (11)







where P_c is the number of texture samples corresponding to one channel. Let P = 3P_c now denote the number of elements of the full color texture vector.


In order to reduce the effects of global lighting variations, the same normalization method as for the grayscale model, described above, is applied on the full color texture vectors,

t_{im}^{RGB} \rightarrow t^{RGB}.   (12)


The remaining steps of the basic grayscale algorithm remain unchanged.
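For clarity, building the color texture vector of (11) amounts to stacking the per-channel samples of the shape-normalized image into a single vector, roughly as follows (the Pc x 3 sample layout assumed here is an illustrative choice):

```python
import numpy as np

def rgb_texture_vector(shape_free_samples):
    """Build t_im^RGB by concatenating the R, G and B samples of a
    shape-normalized (warped) image, given as a Pc x 3 array."""
    samples = np.asarray(shape_free_samples)     # columns assumed to be R, G, B
    return np.concatenate([samples[:, 0], samples[:, 1], samples[:, 2]])
```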


CIELAB Extension

CIELAB is a device-independent, perceptually linear color space which realizes a separation of color information into an intensity, or luminance component (L) and two chromaticity components (a, b). CIELAB was designed to mimic the human perception of the differences between colors. It is defined in terms of a transformation from CIE XYZ, which is a device-independent color space describing the average human observer. CIE XYZ is thus an intermediate space in the RGB to CIELAB conversion (RGB→XYZ→CIELAB).
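Since the RGB→XYZ→CIELAB chain is standard, a practical implementation would normally rely on an existing library rather than re-derive the transform; the sketch below uses scikit-image's rgb2lab (assuming an sRGB patch with values in [0, 1]) and concatenates the L, a, and b planes channel by channel, mirroring the RGB layout above.

```python
import numpy as np
from skimage.color import rgb2lab

def cielab_texture_vector(shape_free_rgb):
    """Convert a warped RGB patch (H x W x 3, floats in [0, 1]) to CIELAB and
    concatenate the L, a and b planes channel by channel."""
    lab = rgb2lab(shape_free_rgb)
    return np.concatenate([lab[..., 0].ravel(),
                           lab[..., 1].ravel(),
                           lab[..., 2].ravel()])
```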


The distance between two colors in the CIELAB color space is given by the Euclidean distance,

\Delta E = \sqrt{(\Delta L)^2 + (\Delta a)^2 + (\Delta b)^2}.   (13)


CIELAB thus uses the same metric as RGB, and a CIELAB model implementation can be designed simply by substituting in (11) the values corresponding to the R, G, and B components with the values corresponding to the L, a, and b components, respectively. The color texture vector is thus built as










t_{im}^{CIELAB} = \left( t_{im1}^{L}, t_{im2}^{L}, \ldots, t_{imP_c}^{L}, \; t_{im1}^{a}, t_{im2}^{a}, \ldots, t_{imP_c}^{a}, \; t_{im1}^{b}, t_{im2}^{b}, \ldots, t_{imP_c}^{b} \right)^{T}.   (14)







Again, the same normalization technique can be applied on the resulting color vectors,

t_{im}^{CIELAB} \rightarrow t^{CIELAB}.   (15)


The CIELAB AAM implementation is interesting as it offers the possibility for a more accurate image reconstruction, aimed towards a human observer. The benefits of this can clearly be noticed when the model is built using a specific image database and tested on another database with different image acquisition attributes (e.g. different illumination conditions). Considering that the image is typically represented in the more common RGB color space, the application of the CIELAB model may be realized at the expense of the added computational cost introduced by the conversion to CIELAB representation.


Texture Normalization on Separate Channel Subvectors

When a typical multi-channel image is represented in a conventional color space such as RGB, there are correlations between its channels. Channel decorrelation refers to the reduction of the cross correlation between the components of a color image in a certain color space representation. In particular, the RGB color space presents very high inter-channel correlations. For natural images, the cross-correlation coefficient between B and R channels is ≈0.78, between R and G channels is ≈0.98, and between G and B channels is ≈0.94 [12]. This implies that, in order to process the appearance of a set of pixels in a consistent way, one must process the color channels as a whole and it is not possible to independently analyze or normalize them.


This observation suggests an explanation as to why previous authors [7] obtained poor results: they were compelled to treat the RGB components as a single entity. Indeed, if one attempts to normalize individual image channels within a highly correlated color space such as RGB, the performance of the resulting model does not improve when compared with a global normalization applied across all image channels. In a preferred embodiment, however, each image channel is individually normalized when it is substantially decorrelated from the other image channels, and thus an improved color AAM is realized.


There are several color spaces which were specifically designed to separate color information into intensity and chromaticity components. However such a separation still does not necessarily guarantee that the image components are strongly decorrelated. There is though a particular color space which is desirable for substantially optimal image channel decorrelation.


A Decorrelated Color Space

An interesting color space is I1I2I3, proposed by Ohta et al. [13], which realizes a statistical minimization of the interchannel correlations (decorrelation of the RGB components) for natural images. The conversion from RGB to I1I2I3 is given by the linear transformation in (16).











I_1 = \frac{R + G + B}{3},   (16a)

I_2 = \frac{R - B}{2},   (16b)

I_3 = \frac{2G - R - B}{4}.   (16c)







Similar to the CIELAB color space, I1 stands as the achromatic (intensity) component, while I2 and I3 are the chromatic components. The numeric transformation from RGB to I1I2I3 enables efficient transformation of datasets between these two color spaces.
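Because (16) is a linear transform, the conversion can be vectorized with a single matrix product; a minimal sketch is shown below (the inverse is included since the matrix is invertible and useful for rendering synthesized textures back in RGB).

```python
import numpy as np

# Rows implement (16a)-(16c): I1, I2, I3 as linear combinations of R, G, B.
RGB_TO_I1I2I3 = np.array([[1/3,  1/3,  1/3],
                          [1/2,  0.0, -1/2],
                          [-1/4, 1/2, -1/4]])

def rgb_to_i1i2i3(rgb_image):
    """Convert an H x W x 3 RGB array to the I1I2I3 color space."""
    return rgb_image @ RGB_TO_I1I2I3.T

def i1i2i3_to_rgb(i_image):
    """Inverse conversion, e.g. for rendering synthesized textures."""
    return i_image @ np.linalg.inv(RGB_TO_I1I2I3).T
```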


I1I2I3 was designed as an approximation of the Karhunen-Loève transform (KLT) of the RGB data, to be used for region segmentation on color images. The KLT is optimal in terms of energy compaction and mean squared error minimization for a truncated representation. Note that the KLT is very similar to PCA. In a geometric interpretation, the KLT can be viewed as a rotation of the coordinate system, while for PCA the rotation of the coordinate system is preceded by a shift of the origin to the mean point [14]. Applying the KLT to a color image creates image basis vectors which are orthogonal, and thus achieves complete decorrelation of the image channels. As the transformation to I1I2I3 represents a good approximation of the KLT for a large set of natural images, the resulting color channels are almost completely decorrelated. The I1I2I3 color space is thus useful for applying color image processing operations independently to each image channel.


In the previous work of Ohta et al., the discriminating power of 109 linear combinations of R, G, and B was tested on eight different color scenes. The selected linear combinations were those that could successfully be used for segmenting important (large area) regions of an image, based on a histogram threshold. It was found that 82 of the linear combinations had all positive weights, corresponding mainly to an intensity component which is best approximated by I1. Another 22 showed opposite signs for the weights of R and B, representing the difference between the R and B components, which is best approximated by I2. Finally, the remaining 4 linear combinations could be approximated by I3. Thus, it was shown that the I1, I2, and I3 components in (16) are effective for discriminating between different regions and that they are significant in this order [13]. Based on the above figures, the percentage of color features which are well discriminated on the first, second, and third channel is around 76.15%, 20.18%, and 3.67%, respectively.


I1I2I3-Based Color AAM

An advantage of this representation is that the texture alignment method used for grayscale models can now be applied independently to each channel. By considering the band subvectors individually, the alignment method described above can be independently applied to each of them as











\left( t_{im1}^{I_1}, t_{im2}^{I_1}, \ldots, t_{imP_c}^{I_1} \right) \rightarrow \left( t_1^{I_1}, t_2^{I_1}, \ldots, t_{P_c}^{I_1} \right),   (17a)

\left( t_{im1}^{I_2}, t_{im2}^{I_2}, \ldots, t_{imP_c}^{I_2} \right) \rightarrow \left( t_1^{I_2}, t_2^{I_2}, \ldots, t_{P_c}^{I_2} \right),   (17b)

\left( t_{im1}^{I_3}, t_{im2}^{I_3}, \ldots, t_{imP_c}^{I_3} \right) \rightarrow \left( t_1^{I_3}, t_2^{I_3}, \ldots, t_{P_c}^{I_3} \right).   (17c)







The color texture vector is then rebuilt using the separately normalized components into the full normalized texture vector,

t^{I_1 I_2 I_3} = \left( t_1^{I_1}, t_2^{I_1}, \ldots, t_{P_c}^{I_1}, \; t_1^{I_2}, t_2^{I_2}, \ldots, t_{P_c}^{I_2}, \; t_1^{I_3}, t_2^{I_3}, \ldots, t_{P_c}^{I_3} \right)^{T}.   (18)


In this way, the effect of global lighting variation is reduced due to the normalization on the first channel which corresponds to an intensity component. Furthermore, the effect of some global chromaticity variation is reduced due to the normalization operations applied on the other two channels which correspond to the chromatic components. Thus, the AAM search algorithm becomes more robust to variations in lighting levels and color distributions.
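The separate-channel normalization can be sketched as follows; zero_mean_unit_var is a simplified stand-in for the recursive normalization of (8)-(10) (the earlier normalization sketch could be passed in through the normalize argument instead), and the channel-by-channel column layout of (18) is assumed.

```python
import numpy as np

def zero_mean_unit_var(T):
    """Simple stand-in for the recursive normalization of (8)-(10):
    align each texture vector to zero mean and unit variance."""
    T = np.asarray(T, dtype=float)
    mu = T.mean(axis=1, keepdims=True)
    sd = T.std(axis=1, keepdims=True)
    return (T - mu) / sd

def normalize_per_channel(T_color, n_channels=3, normalize=zero_mean_unit_var):
    """Separate-channel texture normalization for an N x (n_channels * Pc)
    matrix whose columns are laid out channel by channel, as in (18)."""
    N, P = T_color.shape
    Pc = P // n_channels
    parts = []
    for c in range(n_channels):
        sub = T_color[:, c * Pc:(c + 1) * Pc]   # subvectors of one channel
        parts.append(normalize(sub))            # normalize each channel alone
    return np.hstack(parts)                     # rebuild full texture vectors
```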


This also addresses a further issue with AAMs which is their dependency on the initial training set of images. For example, if an annotated training set is prepared using a digital camera with a color gamut with extra emphasis on “redness” (some manufacturers do customize their cameras according to market requirements), then the RGB-based AAM will perform poorly on images captured with a camera which has a normal color balance. A model, built using multi-channel normalization, is noticeably more tolerant to such variations in image color balance.


During the optimization process, the overall error function ∥r(p)∥² is replaced by the weighted error function

\sum_{i=1}^{3} w_i \left\| r_i(p) \right\|^2,

where r_i(p) denotes the texture residual corresponding to the i-th color channel.







The set of weights that correspond to each color channel should be chosen so as to best describe the amount of information contained in that particular image channel. Evidently this is dependent on the current color space representation. For the I1I2I3 color space, the percentages of color features found to be well discriminated for each channel were given above. Note that these percentages can also serve as estimates of the amount of information contained in each channel. Thus, they can provide a good choice for weighting the overall error function. The relative weighting of the error function may be used for texture normalization on separate channel sub-vectors.
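Treating the ≈76/20/4% channel-significance figures directly as the weights w_i, which is one plausible choice rather than a prescribed one, the weighted error could be evaluated as:

```python
import numpy as np

# Illustrative per-channel weights based on the I1, I2, I3 significance figures.
CHANNEL_WEIGHTS = np.array([0.76, 0.20, 0.04])

def weighted_error(residual, Pc, weights=CHANNEL_WEIGHTS):
    """Weighted squared error over the three channel subvectors of r(p)."""
    r = np.asarray(residual).reshape(3, Pc)        # one row per color channel
    return float(np.sum(weights * np.sum(r * r, axis=1)))
```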


As remarked also in [8], the common linear normalization applied on concatenated RGB bands as realized in the RGB implementation is less than optimal. An I1I2I3 based model in accordance with certain embodiments herein uses a separate texture normalization method which is, as described below, a more suitable approach for color images.


Moreover, by employing the I1I2I3 color space, a more efficient compaction of the color texture data is achieved. As the texture subvectors corresponding to the I1, I2, and I3 channels are significant in the order of ≈76%, ≈20%, and ≈4%, about 96% of the useful fitting information can be retained from the first two texture sub-vectors alone. Thus, a reduced I1I2 model can be designed with performance comparable to a full I1I2I3 model in terms of final convergence accuracy. Combined with the normalization method of separate texture subvectors in accordance with certain embodiments, a reduced I1I2 model is still more accurate than the original RGB model while the computational requirements are reduced by approximately one third.


A detailed discussion of results, summarized in Tables I to VI, now follows.


EXPERIMENTS

The performance of several models was analyzed in the color spaces discussed above. Both texture normalization techniques described were tested for face structure modeling. The appearance modeling environment FAME [15] was used in these tests, modified and extended to accommodate the techniques described herein.


The convergence rates of AAMs are not specifically addressed herein. However, this work is envisioned to move towards real-time embodiments in embedded imaging applications.


The performance of the models is presented in terms of their final convergence accuracy. Several measures are used to describe the convergence accuracy of the models and their ability to synthesize the face. These are the point-to-point (PtPt) and point-to-curve (PtCrv) boundary errors, and the texture error. The boundary errors are measured between the exact shape in the image frame (obtained from the ground truth annotations) and the optimized model shape in the image frame. The point-to-point error is given by the Euclidean distance between the two shape vectors of concatenated x and y coordinates of the landmark points. The point-to-curve error is calculated as the Euclidean norm of the vector of distances from each landmark point of the exact shape to the closest point on the associated border of the optimized model shape in the image frame. The mean and standard deviation of PtPt and PtCrv are used to evaluate the boundary errors over a whole set of images. The texture error is computed as the Euclidean distance between the texture vector corresponding to the original image and the synthesized texture vector after texture de-normalization. This error is evaluated inside the CIELAB color space in order to have a qualitative differentiation between the synthesized images which is in accordance with human perception. This is called the perceptual color texture error (PTE).
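For concreteness, the two boundary errors can be computed roughly as sketched below; treating the fitted shape border as a single closed polygon is a simplification of the measure described above.

```python
import numpy as np

def pt_pt_error(shape_true, shape_fit):
    """Point-to-point error: Euclidean distance between the two shape
    vectors of concatenated landmark coordinates (L x 2 arrays)."""
    return float(np.linalg.norm((shape_true - shape_fit).ravel()))

def _point_to_segment(p, a, b):
    """Distance from point p to the segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def pt_crv_error(shape_true, shape_fit):
    """Point-to-curve error: Euclidean norm of the distances from each true
    landmark to the closest point on the fitted border (treated here as one
    closed polygon, a simplification)."""
    L = len(shape_fit)
    dists = [min(_point_to_segment(p, shape_fit[i], shape_fit[(i + 1) % L])
                 for i in range(L))
             for p in shape_true]
    return float(np.linalg.norm(dists))
```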


Two standard face image databases were used, namely the CMU PIE database [16] and the IMM database [17]. Color images of individuals with full frontal pose, neutral expression, no glasses, and diffuse light were used in these tests. Thus, a set of 66 images (640×486 pixels) was taken from the entire PIE database and a second set of 37 images (640×480 pixels) from the IMM database. These reduced sets of images are referred to below when mentioning the PIE and IMM databases. The images were manually annotated using 65 landmark points as shown in FIG. 1. Although the images in the IMM database were available with an attached set of annotations, it was decided to build an annotation set for reasons of consistency between the two image test sets.


For the PIE database, the first 40 images were used in building the models. The convergence accuracy was tested on the same set of 40 images, called PIE Subset 1 or seen data, and separately on the remaining 26 images, called PIE Subset 2 or unseen data. The IMM database was similarly split into IMM Subset 1, containing the first 20 images (seen data), and IMM Subset 2 with the remaining 17 images (unseen data). By doing this, it was analyzed how well the models are able to memorize a set of examples, as well as their capability to generalize to new examples. All models were built so as to retain 95% of the variation both for shape and texture, and again 95% of the combined (appearance) variation. For cross-validation, the PIE models were applied on the full IMM database, as well as the IMM models on the full PIE database.


The following AAM implementations were analyzed:

    • standard grayscale model (Grayscale);
    • RGB model with global color texture normalization (RGB GN);
    • and the proposed models,
    • CIELAB model with global texture normalization (CIELAB GN);
    • I1I2I3 model with texture normalization on separate channel sub-vectors (I1I2I3 SChN);
    • I1I2 model with texture normalization on separate channel sub-vectors (I1I2 SChN);
    • and also the remaining (color space)/(normalization method) possibilities were added to provide a complete comparative analysis,
    • RGB model with texture normalization on separate channel sub-vectors (RGB SChN);
    • CIELAB model with texture normalization on separate channel sub-vectors (CIELAB SChN);
    • I1I2I3 model with global texture normalization (I1I2I3 GN);
    • I1I2 model with global texture normalization (I1I2 GN).


The grayscale images were obtained from the RGB images by applying the following standard mix of RGB channels,

Grayscale=0.30R+0.59G+0.11B.   (19)


The testing procedure for each model is as follows: each model is initialized using an offset for the centre of gravity of the shape of 20 pixels on the x coordinate and 10 pixels on the y coordinate from the optimal position in the query image. The optimization algorithm (see above) is applied, and the convergence accuracy is measured. Convergence is declared successful if the point-to-point boundary error is less than 10 pixels.



FIG. 2 and FIG. 4 present a histogram of PtCrv errors for landmark points on PIE database for the seen and unseen subsets, respectively. It can be observed that these errors are concentrated within lower values for the proposed models, showing improved convergence accuracy. As expected, better accuracy is obtained for the seen subset. FIG. 3 and FIG. 5 present the dependency of the declared convergence rate on the imposed threshold on PIE database for the seen and unseen data, respectively. This shows again the superiority of the proposed implementations.


In order to provide an indication of the relevance of the chosen (−20,−10) pixels initial displacement, as well as to have an indication of the convergence range differences between the models, convergence accuracy was studied for a wider range of initial displacements on the x coordinate (dx), keeping the −10 pixels displacement on the y coordinate fixed. The tests were performed on PIE Subset 2 (unseen data) and are presented in FIG. 6-FIG. 8 for the three main model implementations. The figures show diagrams of actual vs. predicted displacements on a range of −60 to 60 pixels from the optimum position. The predicted displacements are averaged with respect to all images in the analyzed dataset. The vertical segments represent one unit of standard deviation of each predicted displacement for the analyzed dataset of images. The convergence range, given by the linear part of the diagram, is rather similar for the three model implementations. The RGB GN model seems to be able to converge for some larger displacements as well, yet the standard deviation of the predicted displacements rapidly increases with distance, which shows that the convergence accuracy is lost. On the other hand, although the CIELAB GN and I1I2I3 SChN models have a more abrupt delimitation of their convergence range, they present a small and constant standard deviation inside their linear range, which shows a more consistent and accurate convergence. Also, the (20,10) pixels initial displacement, applied for all the other tests, is well inside the normal convergence range for any of the three models, which validates the choice made.


In FIG. 9 and FIG. 10, comparative diagrams of average PtPt errors on three different image datasets are presented for the PIE models and the IMM models, respectively. Note that these errors are consistently low (across all datasets) for the I1I2I3 and the reduced I1I2 models with texture normalization on separate channel subvectors.


From Table I to Table VI, the successful convergence rate for the three proposed models is consistently the best in comparison to all other model implementations, being usually much higher than for the grayscale model. An inconclusive result was obtained for the IMM database (Table III and Table IV), where most of the studied models converged successfully on all images. Interestingly, it can be noticed that the RGB GN model does not outperform the grayscale model, the successful convergence rate being actually lower for some of the studied cases. In particular, for the cross-validation tests, when applying the PIE models on the IMM database (Table V), the RGB GN model has a very poor rate, being actually outperformed by all other model implementations. For the same situation, all three proposed models have very high convergence rates, particularly the I1I2I3 SChN model which registered a rate of 100%. Notable results were also obtained for the case of applying the IMM models on the PIE database (Table VI).


In terms of convergence accuracy (PtPt, PtCrv) and perceptual texture error, it can be seen that the CIELAB implementation is still dependent to some extent on the image acquisition conditions. This is caused by the limitation of the CIELAB implementation which cannot be efficiently used with texture normalization on separate channel sub-vectors. Some redundancy of RGB coordinates is removed by separating intensity and chromaticity data, yet the components are still coupled during texture normalization. Thus, although the results are improved over the RGB implementation for many of the tested image datasets, especially for the cross-validation tests (Table V and Table VI), they seem to lack consistency (see Table III and Table IV).


Much more consistent results were obtained for I1I2I3 SChN and I1I2 SChN models, where the convergence accuracy is significantly improved over the RGB GN implementation for all studied datasets. For I1I2I3 SChN model the perceptual texture error is also notably reduced for all datasets.









TABLE I
CONVERGENCE RESULTS ON (PIE) SUBSET 1 (Seen)

Model           Success [%]   Pt-Crv (Mean/Std)   Pt-Pt (Mean/Std)   PTE (Mean/Std)
Grayscale       87.50         2.98/2.17           5.05/5.63          -
RGB GN          85.00         3.33/2.01           5.68/5.70          5.73/2.15
CIELAB GN       97.50         2.38/1.47           3.48/2.13          4.85/1.19
I1I2I3 SChN     100           1.54/0.88           2.34/1.15          4.26/0.89
I1I2 SChN       97.50         1.63/1.30           2.68/2.79          5.96/1.51
RGB SChN        90.00         2.54/2.54           4.78/6.89          5.20/2.47
CIELAB SChN     97.50         1.71/1.56           3.03/3.62          4.59/1.72
I1I2I3 GN       87.50         3.08/1.80           4.97/4.47          5.50/1.94
I1I2 GN         92.50         2.52/1.66           4.15/4.41          6.62/1.88








TABLE II
CONVERGENCE RESULTS ON (PIE) SUBSET 2 (Unseen)

Model           Success [%]   Pt-Crv (Mean/Std)   Pt-Pt (Mean/Std)   PTE (Mean/Std)
Grayscale       88.46         3.93/2.00           6.91/5.45          -
RGB GN          80.77         3.75/1.77           7.09/4.99          7.20/2.25
CIELAB GN       100           2.70/0.93           4.36/1.63          5.91/1.19
I1I2I3 SChN     100           2.60/0.93           4.20/1.45          5.87/1.20
I1I2 SChN       96.15         2.76/1.11           4.70/2.31          6.95/1.37
RGB SChN        73.08         4.50/2.77           8.73/7.20          7.25/2.67
CIELAB SChN     88.46         3.51/2.91           6.70/8.29          6.28/2.09
I1I2I3 GN       92.31         3.23/1.21           5.55/2.72          6.58/1.62
I1I2 GN         88.46         3.30/1.37           5.84/3.55          7.49/1.70









TABLE III
CONVERGENCE RESULTS ON (IMM) SUBSET 1 (Seen)

Model           Success [%]   Pt-Crv (Mean/Std)   Pt-Pt (Mean/Std)   PTE (Mean/Std)
Grayscale       100           1.19/0.37           1.70/0.38          -
RGB GN          100           0.87/0.19           1.30/0.29          2.22/0.51
CIELAB GN       100           1.36/0.72           1.99/1.09          2.63/1.02
I1I2I3 SChN     100           0.78/0.20           1.21/0.31          2.06/0.44
I1I2 SChN       100           0.77/0.19           1.21/0.29          11.88/2.31
RGB SChN        100           0.88/0.36           1.31/0.42          2.02/0.44
CIELAB SChN     95.00         1.49/2.03           3.30/7.68          2.99/2.28
I1I2I3 GN       100           1.19/0.57           1.71/0.80          2.49/0.87
I1I2 GN         100           1.09/0.44           1.61/0.67          12.00/2.27










TABLE IV
CONVERGENCE RESULTS ON (IMM) SUBSET 2 (Unseen)

Model           Success [%]   Pt-Crv (Mean/Std)   Pt-Pt (Mean/Std)   PTE (Mean/Std)
Grayscale       100           3.03/1.38           4.27/1.54          -
RGB GN          100           2.97/1.24           4.25/1.38          4.96/1.10
CIELAB GN       100           3.05/1.12           4.21/1.12          4.47/0.77
I1I2I3 SChN     100           2.82/1.40           4.12/1.34          4.43/0.80
I1I2 SChN       100           2.86/1.54           4.21/1.54          12.14/2.67
RGB SChN        100           2.88/1.17           4.20/1.38          4.28/0.74
CIELAB SChN     94.12         3.37/2.17           5.39/4.72          4.93/1.75
I1I2I3 GN       100           3.06/1.04           4.31/1.15          4.91/1.13
I1I2 GN         100           2.96/1.09           4.20/1.22          12.26/2.64










TABLE V
CONVERGENCE RESULTS FOR PIE MODELS ON IMM DB

Model           Success [%]   Pt-Crv (Mean/Std)   Pt-Pt (Mean/Std)   PTE (Mean/Std)
Grayscale       21.62         9.13/3.76           24.26/14.36        -
RGB GN          5.41          9.27/1.77           19.99/4.86         11.68/1.57
CIELAB GN       94.59         4.00/1.02           6.69/1.85          9.92/0.94
I1I2I3 SChN     100           3.73/0.94           5.55/1.22          6.07/1.14
I1I2 SChN       94.59         4.69/1.40           7.10/2.08          12.89/2.29
RGB SChN        10.81         10.07/4.28          22.41/14.64        10.05/1.53
CIELAB SChN     48.65         8.78/4.72           20.37/18.11        8.94/3.04
I1I2I3 GN       59.46         5.17/1.56           10.84/5.07         10.24/1.31
I1I2 GN         51.35         5.35/1.65           11.96/5.24         15.11/2.20










TABLE VI
CONVERGENCE RESULTS FOR IMM MODELS ON PIE DB

Model           Success [%]   Pt-Crv (Mean/Std)   Pt-Pt (Mean/Std)   PTE (Mean/Std)
Grayscale       36.36         6.90/3.33           16.07/10.70        -
RGB GN          36.36         7.18/2.82           15.73/7.83         17.06/3.15
CIELAB GN       72.73         5.83/2.31           10.84/7.85         10.35/2.61
I1I2I3 SChN     65.15         5.52/3.24           12.11/9.84         9.05/2.83
I1I2 SChN       56.06         6.07/3.47           13.87/11.42        9.98/2.73
RGB SChN        36.36         7.06/3.20           16.43/9.77         8.64/2.32
CIELAB SChN     13.64         8.62/2.49           21.16/7.98         9.62/2.22
I1I2I3 GN       34.85         7.65/3.05           18.02/12.14        12.84/3.09
I1I2 GN         25.76         8.83/4.74           26.35/31.15        11.65/3.39





DISCUSSION AND CONCLUSIONS

The embodiments described above have been analyzed with respect to how changes in color space representation of an image influence the convergence accuracy of AAMs. In particular, AAMs have been compared that have been built using RGB, CIELAB and I1I2I3 color spaces. Both of the latter color spaces provide a more natural separation of intensity and chromaticity information than RGB. The I1I2I3 color space also enables the application of more suitable color texture normalization and as a consequence model convergence is significantly improved.


From described experiments, it was deduced that it would make sense to normalize each color channel independently, rather than applying a global normalization across all three channels.


Thus, a more natural color texture normalization technique is proposed in certain embodiments, where each texture subvector corresponding to an individual color channel is normalized independently of the other channels. Although this approach cannot be successfully used with the common RGB representation, it was determined that some significant results can be achieved in color spaces where intensity and chromaticity information are better separated. In particular, it was found that the I1I2I3 color space, which was specifically designed to minimize cross-correlation between the color channels, is an advantageously practical choice for this purpose.


Also, applying the same normalization as for grayscale images on an RGB color texture vector can occasionally lead to decreased convergence accuracy, as suggested in earlier research [8]. Thus, there is little rationale to use an RGB based model as the additional color data does not reliably improve model convergence and it will take three times as long to perform matching operations. For these reasons, the common RGB extension of the basic AAM is only interesting for the purpose of rendering the full color information.


Yet, by employing the I1I2I3 color space coupled with texture normalization on separate channel subvectors, significant improvement in convergence accuracy is achieved as well as an accurate reconstruction of the current color image. The reconstruction accuracy, determined by analyzing the mean texture error, is also improved when compared with models based on other color spaces. By using the proposed I1I2I3 model with texture normalization on separate channel subvectors, the optimization algorithm, which is typically based on a gradient descent approximation, is less susceptible to errors caused by local error function minima. Thus, the algorithm performance is also noticeably more robust.


More than 96% of relevant data is encapsulated in the I1 and I2 components of the I1I2I3 color space. The difference between using an AAM derived from a full I1I2I3 color space representation and one which is built by retaining only the first two channels is not very significant. Where the speed of convergence is most important, the reduced I1I2 model might be favored to a full I1I2I3 model due to the lower dimensionality of the overall texture vector and the reduced computational requirements of this two-channel model.


The present invention is not limited to the embodiments described above herein, which may be amended or modified without departing from the scope of the present invention as set forth in the appended claims, and structural and functional equivalents thereof.


In methods that may be performed according to preferred embodiments herein and that may have been described above and/or claimed below, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations.


In addition, all references cited above herein, in addition to the background and summary of the invention sections, as well as US published patent applications nos. 2006/0204110, 2006/0204110, 2006/0098890, 2005/0068446, 2006/0039690, and 2006/0285754, and U.S. Pat. Nos. 7,315,631, 7,844,076, and U.S. patent applications Nos. 60/804,546, 60/829,127, 60/773,714, 60/803,980, 60/821,956, and 60/821,165, which are to be or are assigned to the same assignee, are all hereby incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments and components.


In addition, the following references are hereby incorporated by reference for all purposes, including into the detailed description as disclosing alternative embodiments:


[1] G. J. Edwards, C. J. Taylor, and T. F. Cootes, “Interpreting face images using active appearance models,” in Proc. 3rd IEEE International Conference on Face & Gesture Recognition (FG '98), 1998, pp. 300-305.


[2] T. F. Cootes and C. J. Taylor, “On representing edge structure for model matching,” in Proc. IEEE Computer Vision and Pattern Recognition (CVPR'01), 2001, pp. 1114-1119.


[3] X. Hou, S. Z. Li, H. Zhang, and Q. Cheng, “Direct appearance models.” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2001, pp. 828-833.


[4] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "A comparative evaluation of active appearance model algorithms," in Proc. 9th British Machine Vision Conference. British Machine Vision Association, 1998, pp. 680-689.


[5] R. Donner, M. Reiter, G. Langs, P. Peloschek, and H. Bischof, “Fast active appearance model search using canonical correlation analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 10, pp. 1690-1694, 2006.


[6] I. Matthews and S. Baker, “Active appearance models revisited,” International Journal of Computer Vision, vol. 60, no. 2, pp. 135-164, November 2004, in Press.


[7] G. J. Edwards, T. F. Cootes, and C. J. Taylor, “Advances in active appearance models,” in International Conference on Computer Vision (ICCV'99), 1999, pp. 137-142.


[8] M. B. Stegmann and R. Larsen, "Multiband modelling of appearance," Image and Vision Computing, vol. 21, no. 1, pp. 61-67, January 2003. [Online]. Available: http://www2.imm.dtu.dk/pubdb/p.php?1421


[9] C. Goodall, “Procrustes methods in the statistical analysis of shape,” Journal of the Royal Statistical Society B, vol. 53, no. 2, pp. 285-339, 1991.


[10] A. U. Batur and M. H. Hayes, “Adaptive active appearance models.” IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707-1721, 2005.


[11] G. Sharma and H. J. Trussell, “Digital color imaging,” IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 901-932, 1997. [Online]. Available: citeseer.ist.psu.edu/sharma97digital.html


[12] M. Tkalčič and J. F. Tasič, "Colour spaces: perceptual, historical and applicational background," in IEEE EUROCON, 2003.


[13] Y. Ohta, T. Kanade, and T. Sakai, “Color information for region segmentation,” in Computer Graphics and Image Processing, no. 13, 1980, pp. 222-241.


[14] J. J. Gerbrands, “On the relationships between SVD, KLT and PCA.” Pattern Recognition, vol. 14, no. 16, pp. 375-381, 1981.


[15] M. B. Stegmann, B. K. Ersbøll, and R. Larsen, "FAME - a flexible appearance modelling environment," IEEE Transactions on Medical Imaging, vol. 22, no. 10, pp. 1319-1331, 2003. [Online]. Available: http://www2.imm.dtu.dk/pubdb/p.php?1918


[16] T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression (PIE) database of human faces," Robotics Institute, Carnegie Mellon University, Pittsburgh, Pa., Tech. Rep. CMU-RI-TR-01-02, January 2001.


[17] M. M. Nordstrøm, M. Larsen, J. Sierakowski, and M. B. Stegmann, "The IMM face database - an annotated dataset of 240 face images," Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby, Tech. Rep., May 2004. [Online]. Available: http://www2.imm.dtu.dk/pubdb/p.php?3160

Claims
  • 1. A method of detecting faces in a digital image, comprising: (a) acquiring a digital color image;(b) applying an active appearance model (AAM) including an interchannel-decorrelated color space;(c) matching one or more parameters of the model to the image; and(d) communicating a face detection result based on the matching or a different processing result incorporating said face detection result, or both.
  • 2. The method of claim 1, further comprising converting RGB data to I1I2I3 color space.
  • 3. The method of claim 2, wherein the converting comprises linear conversion.
  • 4. The method of claim 2, further comprising representing texture with the I1I2I3 color space.
  • 5. The method of claim 4, further comprising aligning the texture on separate channels.
  • 6. The method of claim 4, further comprising performing operations on the texture data on each channel separately.
  • 7. The method of claim 1, wherein said interchannel-decorrelated color space comprises at least three channels including a luminance channel and two chromatic channels.
  • 8. The method of claim 1, wherein the AAM comprises an application of principal components analysis (PCA).
  • 9. The method of claim 8, wherein said PCA comprises eigen-analysis of dispersions of shape, texture and appearance.
  • 10. The method of claim 8, wherein the AAM further comprises an application of generalized procrustes analysis (GPA) including aligning shapes.
  • 11. The method of claim 10, wherein the AAM further comprises a model of shape variability including an application of PCA on a set of shape vectors.
  • 12. The method of claim 11, wherein the AAM further comprises a normalization of objects within the image with respect to shape.
  • 13. The method of claim 12, wherein the AAM further comprises generation of a texture model including sampling intensity information from each shape-free image to form a set of texture vectors.
  • 14. The method of claim 13, wherein the generation of the texture model comprises normalization of the set of texture vectors and application of PCA on the normalized texture vectors.
  • 15. The method of claim 14, wherein the applying comprises retaining only the first one or two of the aligned texture vectors.
  • 16. The method of claim 14, wherein the AAM further comprises generation of a combined appearance model including a combined vector from weighted shape parameters concatenated to texture parameters, and application of PCA to the combined vector.
  • 17. The method of claim 1, wherein the matching comprises a regression approach.
  • 18. The method of claim 1, wherein the matching comprises finding model parameters or pose parameters or both.
  • 19. The method of claim 18, wherein the pose parameters comprise translation, scale or rotation, or combinations thereof.
  • 20. The method of claim 1, wherein said interchannel-decorrelated color space comprises an orthogonal color space.
  • 21. The method of claim 1, wherein effects of global lighting and chrominance variations are reduced with said AAM.
  • 22. The method of claim 1, further comprising tracking one or more detected faces through a series of two or more images.
  • 23. An apparatus for detecting faces in a digital image, comprising a processor and one or more processor-readable media programming the processor to control the apparatus to perform a method comprising: (a) acquiring a digital color image;(b) applying an active appearance model (AAM) including an interchannel-decorrelated color space;(c) matching one or more parameters of the model to the image; and(d) communicating a face detection result based on the matching or a different result incorporating said face detection result, or both.
  • 24. The apparatus of claim 23, wherein the method further comprises converting RGB data to I1I2I3 color space.
  • 25. The apparatus of claim 24, wherein the converting comprises linear conversion.
  • 26. The apparatus of claim 24, wherein the method further comprises representing texture with the I1I2I3 color space.
  • 27. The apparatus of claim 26, wherein the method further comprises aligning the texture on separate channels.
  • 28. The apparatus of claim 26, wherein the method further comprises performing operations on the texture data on each channel separately.
  • 29. The apparatus of claim 23, wherein said interchannel-decorrelated color space comprises at least three channels including a luminance channel and two chromatic channels.
  • 30. The apparatus of claim 23, wherein the AAM comprises an application of principal components analysis (PCA).
  • 31. The apparatus of claim 30, wherein said PCA comprises eigen-analysis of dispersions of shape, texture and appearance.
  • 32. The apparatus of claim 30, wherein the AAM further comprises an application of generalized procrustes analysis (GPA) including aligning shapes.
  • 33. The apparatus of claim 32, wherein the AAM further comprises a model of shape variability including an application of PCA on a set of shape vectors.
  • 34. The apparatus of claim 33, wherein the AAM further comprises a normalization of objects within the image with respect to shape.
  • 35. The apparatus of claim 34, wherein the AAM further comprises generation of a texture model including sampling intensity information from each shape-free image to form a set of texture vectors.
  • 36. The apparatus of claim 35, wherein the generation of the texture model comprises normalization of the set of texture vectors and application of PCA on the normalized texture vectors.
  • 37. The apparatus of claim 36, wherein the applying comprises retaining only the first one or two of the aligned texture vectors.
  • 38. The apparatus of claim 36, wherein the AAM further comprises generation of a combined appearance model including a combined vector from weighted shape parameters concatenated to texture parameters, and application of PCA to the combined vector.
  • 39. The apparatus of claim 23, wherein the matching comprises a regression approach.
  • 40. The apparatus of claim 23, wherein the matching comprises finding model parameters or pose parameters or both.
  • 41. The apparatus of claim 40, wherein the pose parameters comprise translation, scale or rotation, or combinations thereof.
  • 42. The apparatus of claim 23, wherein said interchannel-decorrelated color space comprises an orthogonal color space.
  • 43. The apparatus of claim 23, wherein effects of global lighting and chrominance variations are reduced with said AAM.
  • 44. The apparatus of claim 23, wherein the method further comprises tracking one or more detected faces through a series of two or more images.
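Claims 24-28 recite a linear conversion of RGB data to the I1I2I3 color space and per-channel handling of texture. As an illustration only, the sketch below assumes the I1I2I3 transform commonly attributed to Ohta et al. (I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4); the function names, the use of NumPy, and the per-channel normalization shown are illustrative assumptions, not the claimed method itself.

    import numpy as np

    # Illustrative only: the I1I2I3 (Ohta) transform as a 3x3 matrix.
    # I1 approximates luminance; I2 and I3 carry largely decorrelated chrominance.
    RGB_TO_I1I2I3 = np.array([
        [ 1.0 / 3.0, 1.0 / 3.0,  1.0 / 3.0],   # I1 = (R + G + B) / 3
        [ 1.0 / 2.0, 0.0,       -1.0 / 2.0],   # I2 = (R - B) / 2
        [-1.0 / 4.0, 1.0 / 2.0, -1.0 / 4.0],   # I3 = (2G - R - B) / 4
    ])

    def rgb_to_i1i2i3(image_rgb):
        """Convert an H x W x 3 RGB image to I1I2I3 (float64 output)."""
        rgb = np.asarray(image_rgb, dtype=np.float64)
        return rgb @ RGB_TO_I1I2I3.T  # apply the 3x3 matrix to every pixel

    def normalize_per_channel(texture):
        """Zero-mean, unit-variance normalization of an (n_pixels x 3) texture,
        applied to each channel separately, echoing the per-channel texture
        handling recited in the claims."""
        t = np.asarray(texture, dtype=np.float64)
        return (t - t.mean(axis=0)) / (t.std(axis=0) + 1e-12)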
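Claims 14, 16, 36 and 38 recite normalizing texture vectors, applying PCA, and forming a combined appearance model by concatenating weighted shape parameters with texture parameters before a further PCA. The following is a minimal NumPy sketch under those assumptions; the SVD-based PCA helper, the 98% variance cut-off, and the variance-ratio weighting are hypothetical choices made for illustration rather than details taken from the patent.

    import numpy as np

    def pca(data, var_kept=0.98):
        """Plain PCA via SVD on an (n_samples x n_dims) matrix.
        Returns the mean, the retained component rows, and their eigenvalues."""
        mean = data.mean(axis=0)
        centered = data - mean
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        eigvals = (s ** 2) / max(len(data) - 1, 1)
        keep = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
        return mean, vt[:keep], eigvals[:keep]

    def build_combined_model(shapes, textures):
        """Hypothetical combined appearance model: PCA on aligned shapes and on
        normalized textures, then PCA on the weighted, concatenated parameters."""
        s_mean, s_basis, s_eig = pca(shapes)      # shapes:   n x n_shape_coords
        t_mean, t_basis, t_eig = pca(textures)    # textures: n x n_texture_samples

        # Project each training example onto its shape / texture basis.
        b_s = (shapes - s_mean) @ s_basis.T
        b_t = (textures - t_mean) @ t_basis.T

        # Weight shape parameters so shape and texture contribute commensurate
        # variance, then concatenate and apply a third PCA.
        w = np.sqrt(t_eig.sum() / s_eig.sum())
        combined = np.hstack([w * b_s, b_t])
        c_mean, c_basis, _ = pca(combined)
        return c_mean, c_basis, w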
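Claims 17-19 and 39-41 recite matching via a regression approach that finds model parameters and pose parameters (translation, scale, rotation). The loop below is a hedged sketch of one common regression-driven AAM search, in which a pre-learned regression matrix maps texture residuals to additive updates of the concatenated parameter vector. The image_sampler callable, the regression_matrix argument, and the stopping rule are assumptions introduced for illustration, not the patented procedure.

    import numpy as np

    def aam_search(image_sampler, regression_matrix, params0, max_iters=30, tol=1e-4):
        """Iteratively refine appearance + pose parameters from texture residuals.

        image_sampler(params) is assumed to return the normalized texture sampled
        from the image under the current parameters minus the model reconstruction
        (the residual vector); regression_matrix maps that residual to an additive
        update of the parameter vector (model parameters plus translation, scale
        and rotation)."""
        params = params0.copy()
        best_err = np.inf
        for _ in range(max_iters):
            residual = image_sampler(params)
            err = float(residual @ residual)
            if err > best_err - tol:
                break                      # no further improvement
            best_err = err
            params = params - regression_matrix @ residual
        return params, best_err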
PRIORITY

This application claims priority to United States provisional patent application no. 60/804,546, filed Jun. 12, 2006, entitled "Improved Colour Model for Face Detection and Tracking," which is hereby incorporated by reference.

US Referenced Citations (724)
Number Name Date Kind
4047187 Mashimo et al. Sep 1977 A
4285588 Mir Aug 1981 A
4317991 Stauffer Mar 1982 A
4367027 Stauffer Jan 1983 A
RE31370 Mashimo et al. Sep 1983 E
4448510 Murakoshi May 1984 A
4456354 Mizokami Jun 1984 A
4577219 Klie et al. Mar 1986 A
4638364 Hiramatsu Jan 1987 A
4646134 Komatsu et al. Feb 1987 A
4690536 Nakai et al. Sep 1987 A
4777620 Shimoni et al. Oct 1988 A
4796043 Izumi et al. Jan 1989 A
4881067 Watanabe et al. Nov 1989 A
4970663 Bedell et al. Nov 1990 A
4970683 Harshaw et al. Nov 1990 A
4975969 Tal Dec 1990 A
4978989 Nakano et al. Dec 1990 A
5008946 Ando Apr 1991 A
5016107 Sasson et al. May 1991 A
5018017 Sasaki et al. May 1991 A
RE33682 Hiramatsu Sep 1991 E
5051770 Cornuejols Sep 1991 A
5063603 Burt Nov 1991 A
5070355 Inoue et al. Dec 1991 A
5111231 Tokunaga May 1992 A
5130789 Dobbs et al. Jul 1992 A
5130935 Takiguchi Jul 1992 A
5150432 Ueno et al. Sep 1992 A
5161204 Hutcheson et al. Nov 1992 A
5164831 Kuchta et al. Nov 1992 A
5164833 Aoki Nov 1992 A
5164992 Turk et al. Nov 1992 A
5202720 Fujino et al. Apr 1993 A
5227837 Terashita Jul 1993 A
5231674 Cleveland et al. Jul 1993 A
5249053 Jain Sep 1993 A
5274457 Kobayashi et al. Dec 1993 A
5278923 Nazarathy et al. Jan 1994 A
5280530 Trew et al. Jan 1994 A
5291234 Shindo et al. Mar 1994 A
5301026 Lee Apr 1994 A
5303049 Ejima et al. Apr 1994 A
5305048 Suzuki et al. Apr 1994 A
5311240 Wheeler May 1994 A
5331544 Lu et al. Jul 1994 A
5335072 Tanaka et al. Aug 1994 A
5353058 Takei Oct 1994 A
5384601 Yamashita et al. Jan 1995 A
5384615 Hsieh et al. Jan 1995 A
5384912 Ogrinc et al. Jan 1995 A
5400113 Sosa et al. Mar 1995 A
5424794 McKay Jun 1995 A
5430809 Tomitaka Jul 1995 A
5432863 Benati et al. Jul 1995 A
5432866 Sakamoto Jul 1995 A
5438367 Yamamoto et al. Aug 1995 A
5450504 Calia Sep 1995 A
5452048 Edgar Sep 1995 A
5455606 Keeling et al. Oct 1995 A
5465308 Hutcheson et al. Nov 1995 A
5488429 Kojima et al. Jan 1996 A
5493409 Maeda et al. Feb 1996 A
5496106 Anderson Mar 1996 A
5537516 Sherman et al. Jul 1996 A
5543952 Yonenaga et al. Aug 1996 A
5568187 Okino Oct 1996 A
5568194 Abe Oct 1996 A
5576759 Kawamura et al. Nov 1996 A
5629752 Kinjo May 1997 A
5633678 Parulski et al. May 1997 A
5638136 Kojima et al. Jun 1997 A
5638139 Clatanoff et al. Jun 1997 A
5649238 Wakabayashi et al. Jul 1997 A
5652669 Liedenbaum Jul 1997 A
5671013 Nakao Sep 1997 A
5678073 Stephenson, III et al. Oct 1997 A
5680481 Prasad et al. Oct 1997 A
5684509 Hatanaka et al. Nov 1997 A
5694926 DeVries et al. Dec 1997 A
5706362 Yabe Jan 1998 A
5708866 Leonard Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5715325 Bang et al. Feb 1998 A
5719639 Imamura Feb 1998 A
5719951 Shackleton et al. Feb 1998 A
5721983 Furutsu Feb 1998 A
5724456 Boyack et al. Mar 1998 A
5734425 Takizawa et al. Mar 1998 A
5745668 Poggio et al. Apr 1998 A
5748764 Benati et al. May 1998 A
5748784 Sugiyama May 1998 A
5751836 Wildes et al. May 1998 A
5761550 Kancigor Jun 1998 A
5764790 Brunelli et al. Jun 1998 A
5764803 Jacquin et al. Jun 1998 A
5771307 Lu et al. Jun 1998 A
5774129 Poggio et al. Jun 1998 A
5774591 Black et al. Jun 1998 A
5774747 Ishihara et al. Jun 1998 A
5774754 Ootsuka Jun 1998 A
5781650 Lobo et al. Jul 1998 A
5802208 Podilchuk et al. Sep 1998 A
5802220 Black et al. Sep 1998 A
5805720 Suenaga et al. Sep 1998 A
5805727 Nakano Sep 1998 A
5805745 Graf Sep 1998 A
5812193 Tomitaka et al. Sep 1998 A
5815749 Tsukahara et al. Sep 1998 A
5818975 Goodwin et al. Oct 1998 A
5835616 Lobo et al. Nov 1998 A
5842194 Arbuckle Nov 1998 A
5844573 Poggio et al. Dec 1998 A
5847714 Naqvi et al. Dec 1998 A
5850470 Kung et al. Dec 1998 A
5852669 Eleftheriadis et al. Dec 1998 A
5852823 De Bonet Dec 1998 A
RE36041 Turk et al. Jan 1999 E
5862217 Steinberg et al. Jan 1999 A
5862218 Steinberg Jan 1999 A
5870138 Smith et al. Feb 1999 A
5892837 Luo et al. Apr 1999 A
5905807 Kado et al. May 1999 A
5911139 Jain et al. Jun 1999 A
5912980 Hunke Jun 1999 A
5949904 Delp Sep 1999 A
5966549 Hara et al. Oct 1999 A
5974189 Nicponski Oct 1999 A
5978519 Bollman et al. Nov 1999 A
5990973 Sakamoto Nov 1999 A
5991456 Rahman et al. Nov 1999 A
5991549 Tsuchida Nov 1999 A
5991594 Froeber et al. Nov 1999 A
5999160 Kitamura et al. Dec 1999 A
6006039 Steinberg et al. Dec 1999 A
6009209 Acker et al. Dec 1999 A
6011547 Shiota et al. Jan 2000 A
6016354 Lin et al. Jan 2000 A
6028611 Anderson et al. Feb 2000 A
6028960 Graf et al. Feb 2000 A
6035072 Read Mar 2000 A
6035074 Fujimoto et al. Mar 2000 A
6036072 Lee Mar 2000 A
6053268 Yamada Apr 2000 A
6061055 Marks May 2000 A
6072094 Karady et al. Jun 2000 A
6097470 Buhr et al. Aug 2000 A
6101271 Yamashita et al. Aug 2000 A
6104839 Cok et al. Aug 2000 A
6108437 Lin Aug 2000 A
6115052 Freeman et al. Sep 2000 A
6118485 Hinoue et al. Sep 2000 A
6128397 Baluja et al. Oct 2000 A
6128398 Kuperstein et al. Oct 2000 A
6134339 Luo Oct 2000 A
6148092 Qian Nov 2000 A
6151073 Steinberg et al. Nov 2000 A
6151403 Luo Nov 2000 A
6172706 Tatsumi Jan 2001 B1
6173068 Prokoski Jan 2001 B1
6181805 Koike et al. Jan 2001 B1
6188777 Darrell et al. Feb 2001 B1
6192149 Eschbach et al. Feb 2001 B1
6195127 Sugimoto Feb 2001 B1
6201571 Ota Mar 2001 B1
6204858 Gupta Mar 2001 B1
6204868 Yamauchi et al. Mar 2001 B1
6233364 Krainiouk et al. May 2001 B1
6240198 Rehg et al. May 2001 B1
6246779 Fukui et al. Jun 2001 B1
6246790 Huang et al. Jun 2001 B1
6249315 Holm Jun 2001 B1
6252976 Schildkraut et al. Jun 2001 B1
6263113 Abdel-Mottaleb et al. Jul 2001 B1
6266054 Lawton et al. Jul 2001 B1
6268939 Klassen et al. Jul 2001 B1
6275614 Krishnamurthy et al. Aug 2001 B1
6278491 Wang et al. Aug 2001 B1
6282317 Luo et al. Aug 2001 B1
6285410 Marni Sep 2001 B1
6292574 Schildkraut et al. Sep 2001 B1
6292575 Bortolussi et al. Sep 2001 B1
6295378 Kitakado et al. Sep 2001 B1
6298166 Ratnakar et al. Oct 2001 B1
6300935 Sobel et al. Oct 2001 B1
6301370 Steffens et al. Oct 2001 B1
6301440 Bolle et al. Oct 2001 B1
6332033 Qian Dec 2001 B1
6334008 Nakabayashi Dec 2001 B2
6349373 Sitka et al. Feb 2002 B2
6351556 Loui et al. Feb 2002 B1
6381345 Swain Apr 2002 B1
6393136 Amir et al. May 2002 B1
6393148 Bhaskar May 2002 B1
6396963 Shaffer et al. May 2002 B2
6400830 Christian et al. Jun 2002 B1
6404900 Qian et al. Jun 2002 B1
6407777 DeLuca Jun 2002 B1
6421468 Ratnakar et al. Jul 2002 B1
6426775 Kurokawa Jul 2002 B1
6426779 Noguchi et al. Jul 2002 B1
6429924 Milch Aug 2002 B1
6433818 Steinberg et al. Aug 2002 B1
6438234 Gisin et al. Aug 2002 B1
6438264 Gallagher et al. Aug 2002 B1
6441854 Fellegara et al. Aug 2002 B2
6445810 Darrell et al. Sep 2002 B2
6456732 Kimbell et al. Sep 2002 B1
6459436 Kumada et al. Oct 2002 B1
6463163 Kresch Oct 2002 B1
6473199 Gilman et al. Oct 2002 B1
6496655 Malloy Desormeaux Dec 2002 B1
6501857 Gotsman et al. Dec 2002 B1
6501911 Malloy Desormeaux Dec 2002 B1
6502107 Nishida Dec 2002 B1
6504546 Cosatto et al. Jan 2003 B1
6504942 Hong et al. Jan 2003 B1
6504951 Luo et al. Jan 2003 B1
6505003 Malloy Desormeaux Jan 2003 B1
6510520 Steinberg Jan 2003 B1
6516154 Parulski et al. Feb 2003 B1
6526156 Black et al. Feb 2003 B1
6526161 Yan Feb 2003 B1
6529630 Kinjo Mar 2003 B1
6549641 Ishikawa et al. Apr 2003 B2
6556708 Christian et al. Apr 2003 B1
6564225 Brogliatti et al. May 2003 B1
6567983 Shiimori May 2003 B1
6587119 Anderson et al. Jul 2003 B1
6606398 Cooper Aug 2003 B2
6614471 Ott Sep 2003 B1
6614995 Tseng Sep 2003 B2
6621867 Sazzad et al. Sep 2003 B1
6628833 Horie Sep 2003 B1
6633655 Hong et al. Oct 2003 B1
6661907 Ho et al. Dec 2003 B2
6678407 Tajima Jan 2004 B1
6697503 Matsuo et al. Feb 2004 B2
6697504 Tsai Feb 2004 B2
6700614 Hata Mar 2004 B1
6700999 Yang Mar 2004 B1
6707950 Burns et al. Mar 2004 B1
6714665 Hanna et al. Mar 2004 B1
6718051 Eschbach Apr 2004 B1
6724941 Aoyama Apr 2004 B1
6728401 Hardeberg Apr 2004 B1
6747690 Molgaard Jun 2004 B2
6754368 Cohen Jun 2004 B1
6754389 Dimitrova et al. Jun 2004 B1
6760465 McVeigh et al. Jul 2004 B2
6760485 Gilman et al. Jul 2004 B1
6765612 Anderson et al. Jul 2004 B1
6765686 Maruoka Jul 2004 B2
6778216 Lin Aug 2004 B1
6786655 Cook et al. Sep 2004 B2
6792135 Toyama Sep 2004 B1
6792161 Imaizumi et al. Sep 2004 B1
6798834 Murakami et al. Sep 2004 B1
6798913 Toriyama Sep 2004 B2
6801250 Miyashita Oct 2004 B1
6801642 Gorday et al. Oct 2004 B2
6816156 Sukeno et al. Nov 2004 B2
6816611 Hagiwara et al. Nov 2004 B1
6829009 Sugimoto Dec 2004 B2
6850274 Silverbrook et al. Feb 2005 B1
6859565 Baron Feb 2005 B2
6873743 Steinberg Mar 2005 B2
6876755 Taylor et al. Apr 2005 B1
6879705 Tao et al. Apr 2005 B1
6885760 Yamada et al. Apr 2005 B2
6885766 Held et al. Apr 2005 B2
6895112 Chen et al. May 2005 B2
6900840 Schinner et al. May 2005 B1
6900882 Iida May 2005 B2
6912298 Wilensky Jun 2005 B1
6934406 Nakano Aug 2005 B1
6937773 Nozawa et al. Aug 2005 B1
6937997 Parulski Aug 2005 B1
6940545 Ray et al. Sep 2005 B1
6947601 Aoki et al. Sep 2005 B2
6959109 Moustafa Oct 2005 B2
6965684 Chen et al. Nov 2005 B2
6967680 Kagle et al. Nov 2005 B1
6977687 Suh Dec 2005 B1
6980691 Nesterov et al. Dec 2005 B2
6984039 Agostinelli Jan 2006 B2
6993157 Oue et al. Jan 2006 B1
7003135 Hsieh et al. Feb 2006 B2
7020337 Viola et al. Mar 2006 B2
7024051 Miller et al. Apr 2006 B2
7024053 Enomoto Apr 2006 B2
7027619 Pavlidis et al. Apr 2006 B2
7027621 Prokoski Apr 2006 B1
7027662 Baron Apr 2006 B2
7030927 Sasaki Apr 2006 B2
7034848 Sobol Apr 2006 B2
7035456 Lestideau Apr 2006 B2
7035461 Luo et al. Apr 2006 B2
7035462 White et al. Apr 2006 B2
7035467 Nicponski Apr 2006 B2
7038709 Verghese May 2006 B1
7038715 Flinchbaugh May 2006 B1
7039222 Simon et al. May 2006 B2
7042501 Matama May 2006 B1
7042505 DeLuca May 2006 B1
7042511 Lin May 2006 B2
7043056 Edwards et al. May 2006 B2
7043465 Pirim May 2006 B2
7050607 Li et al. May 2006 B2
7057653 Kubo Jun 2006 B1
7061648 Nakajima et al. Jun 2006 B2
7062086 Chen et al. Jun 2006 B2
7064776 Sumi et al. Jun 2006 B2
7082212 Liu et al. Jul 2006 B2
7088386 Goto Aug 2006 B2
7099510 Jones et al. Aug 2006 B2
7106374 Bandera et al. Sep 2006 B1
7106887 Kinjo Sep 2006 B2
7110569 Brodsky et al. Sep 2006 B2
7110575 Chen et al. Sep 2006 B2
7113641 Eckes et al. Sep 2006 B1
7116820 Luo et al. Oct 2006 B2
7119838 Zanzucchi et al. Oct 2006 B2
7120279 Chen et al. Oct 2006 B2
7133070 Wheeler et al. Nov 2006 B2
7146026 Russon et al. Dec 2006 B2
7151843 Rui et al. Dec 2006 B2
7155058 Gaubatz et al. Dec 2006 B2
7158680 Pace Jan 2007 B2
7162076 Liu Jan 2007 B2
7162101 Itokawa et al. Jan 2007 B2
7171023 Kim et al. Jan 2007 B2
7171025 Rui et al. Jan 2007 B2
7171044 Chen et al. Jan 2007 B2
7190829 Zhang et al. Mar 2007 B2
7194114 Schneiderman Mar 2007 B2
7200249 Okubo et al. Apr 2007 B2
7216289 Kagle et al. May 2007 B2
7218759 Ho et al. May 2007 B1
7224850 Zhang et al. May 2007 B2
7227976 Jung et al. Jun 2007 B1
7254257 Kim et al. Aug 2007 B2
7269292 Steinberg Sep 2007 B2
7274822 Zhang et al. Sep 2007 B2
7274832 Nicponski Sep 2007 B2
7289664 Enomoto Oct 2007 B2
7295233 Steinberg et al. Nov 2007 B2
7306337 Ji et al. Dec 2007 B2
7310443 Kris et al. Dec 2007 B1
7315630 Steinberg et al. Jan 2008 B2
7315631 Corcoran et al. Jan 2008 B1
7317815 Steinberg et al. Jan 2008 B2
7321391 Ishige Jan 2008 B2
7321670 Yoon et al. Jan 2008 B2
7324670 Kozakaya et al. Jan 2008 B2
7324671 Li et al. Jan 2008 B2
7336821 Ciuc et al. Feb 2008 B2
7336830 Porter et al. Feb 2008 B2
7352393 Sakamoto Apr 2008 B2
7352394 DeLuca et al. Apr 2008 B1
7362210 Bazakos et al. Apr 2008 B2
7362368 Steinberg et al. Apr 2008 B2
7369712 Steinberg et al. May 2008 B2
7403643 Ianculescu et al. Jul 2008 B2
7436998 Steinberg et al. Oct 2008 B2
7437998 Burger et al. Oct 2008 B2
7440593 Steinberg et al. Oct 2008 B1
7454040 Luo et al. Nov 2008 B2
7460694 Corcoran et al. Dec 2008 B2
7460695 Steinberg et al. Dec 2008 B2
7466866 Steinberg Dec 2008 B2
7469055 Corcoran et al. Dec 2008 B2
7471846 Steinberg et al. Dec 2008 B2
7502494 Tafuku et al. Mar 2009 B2
7536036 Steinberg et al. May 2009 B2
7551211 Taguchi et al. Jun 2009 B2
7565030 Steinberg et al. Jul 2009 B2
7574016 Steinberg et al. Aug 2009 B2
7612794 He et al. Nov 2009 B2
7616233 Steinberg et al. Nov 2009 B2
7620214 Chen et al. Nov 2009 B2
7623733 Hirosawa Nov 2009 B2
7630527 Steinberg et al. Dec 2009 B2
7634109 Steinberg et al. Dec 2009 B2
7636485 Simon et al. Dec 2009 B2
7652693 Miyashita et al. Jan 2010 B2
7684630 Steinberg Mar 2010 B2
7693311 Steinberg et al. Apr 2010 B2
7702136 Steinberg et al. Apr 2010 B2
7733388 Asaeda Jun 2010 B2
7809162 Steinberg et al. Oct 2010 B2
20010005222 Yamaguchi Jun 2001 A1
20010015760 Fellegara et al. Aug 2001 A1
20010028731 Covell et al. Oct 2001 A1
20010031142 Whiteside Oct 2001 A1
20010038712 Loce et al. Nov 2001 A1
20010038714 Masumoto et al. Nov 2001 A1
20010052937 Suzuki Dec 2001 A1
20020019859 Watanabe Feb 2002 A1
20020041329 Steinberg Apr 2002 A1
20020051571 Jackway et al. May 2002 A1
20020054224 Wasula et al. May 2002 A1
20020081003 Sobol Jun 2002 A1
20020085088 Eubanks Jul 2002 A1
20020090133 Kim et al. Jul 2002 A1
20020093577 Kitawaki et al. Jul 2002 A1
20020093633 Milch Jul 2002 A1
20020102024 Jones et al. Aug 2002 A1
20020105662 Patton et al. Aug 2002 A1
20020106114 Yan et al. Aug 2002 A1
20020114513 Hirao Aug 2002 A1
20020114535 Luo Aug 2002 A1
20020118287 Grosvenor et al. Aug 2002 A1
20020126893 Held et al. Sep 2002 A1
20020131770 Meier et al. Sep 2002 A1
20020136433 Lin Sep 2002 A1
20020136450 Chen et al. Sep 2002 A1
20020141640 Kraft Oct 2002 A1
20020141661 Steinberg Oct 2002 A1
20020150291 Naf et al. Oct 2002 A1
20020150292 O'callaghan Oct 2002 A1
20020150306 Baron Oct 2002 A1
20020150662 Dewis et al. Oct 2002 A1
20020159630 Buzuloiu et al. Oct 2002 A1
20020168108 Loui et al. Nov 2002 A1
20020172419 Lin et al. Nov 2002 A1
20020176609 Hsieh et al. Nov 2002 A1
20020176623 Steinberg Nov 2002 A1
20020181801 Needham et al. Dec 2002 A1
20020191861 Cheatle Dec 2002 A1
20030007687 Nesterov et al. Jan 2003 A1
20030021478 Yoshida Jan 2003 A1
20030023974 Dagtas et al. Jan 2003 A1
20030025808 Parulski et al. Feb 2003 A1
20030025811 Keelan et al. Feb 2003 A1
20030025812 Slatter Feb 2003 A1
20030035573 Duta et al. Feb 2003 A1
20030044063 Meckes et al. Mar 2003 A1
20030044070 Fuersich et al. Mar 2003 A1
20030044176 Saitoh Mar 2003 A1
20030044177 Oberhardt et al. Mar 2003 A1
20030044178 Oberhardt et al. Mar 2003 A1
20030048950 Savakis et al. Mar 2003 A1
20030052991 Stavely et al. Mar 2003 A1
20030058343 Katayama Mar 2003 A1
20030058349 Takemoto Mar 2003 A1
20030059107 Sun et al. Mar 2003 A1
20030059121 Savakis et al. Mar 2003 A1
20030068083 Lee Apr 2003 A1
20030071908 Sannoh et al. Apr 2003 A1
20030084065 Lin et al. May 2003 A1
20030086134 Enomoto May 2003 A1
20030095197 Wheeler et al. May 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030113035 Cahill et al. Jun 2003 A1
20030117501 Shirakawa Jun 2003 A1
20030118216 Goldberg Jun 2003 A1
20030123713 Geng Jul 2003 A1
20030123751 Krishnamurthy et al. Jul 2003 A1
20030137597 Sakamoto et al. Jul 2003 A1
20030142209 Yamazaki et al. Jul 2003 A1
20030142285 Enomoto Jul 2003 A1
20030151674 Lin Aug 2003 A1
20030161506 Velazquez et al. Aug 2003 A1
20030169907 Edwards et al. Sep 2003 A1
20030174773 Comaniciu et al. Sep 2003 A1
20030190072 Adkins et al. Oct 2003 A1
20030194143 Iida Oct 2003 A1
20030202715 Kinjo Oct 2003 A1
20030223622 Simon et al. Dec 2003 A1
20040001616 Gutta et al. Jan 2004 A1
20040017481 Takasumi et al. Jan 2004 A1
20040022435 Ishida Feb 2004 A1
20040027593 Wilkins Feb 2004 A1
20040032512 Silverbrook Feb 2004 A1
20040032526 Silverbrook Feb 2004 A1
20040033071 Kubo Feb 2004 A1
20040037460 Luo et al. Feb 2004 A1
20040041121 Yoshida et al. Mar 2004 A1
20040041924 White et al. Mar 2004 A1
20040046878 Jarman Mar 2004 A1
20040047491 Rydbeck Mar 2004 A1
20040056975 Hata Mar 2004 A1
20040057623 Schuhrke et al. Mar 2004 A1
20040057705 Kohno Mar 2004 A1
20040057715 Tsuchida et al. Mar 2004 A1
20040093432 Luo et al. May 2004 A1
20040095359 Simon et al. May 2004 A1
20040114796 Kaku Jun 2004 A1
20040114797 Meckes Jun 2004 A1
20040114829 LeFeuvre et al. Jun 2004 A1
20040114904 Sun et al. Jun 2004 A1
20040119851 Kaku Jun 2004 A1
20040120391 Lin et al. Jun 2004 A1
20040120399 Kato Jun 2004 A1
20040120598 Feng Jun 2004 A1
20040125387 Nagao et al. Jul 2004 A1
20040126086 Nakamura et al. Jul 2004 A1
20040141657 Jarman Jul 2004 A1
20040150743 Schinner Aug 2004 A1
20040160517 Iida Aug 2004 A1
20040165215 Raguet et al. Aug 2004 A1
20040170397 Ono Sep 2004 A1
20040175021 Porter et al. Sep 2004 A1
20040179719 Chen et al. Sep 2004 A1
20040184044 Kolb et al. Sep 2004 A1
20040184670 Jarman et al. Sep 2004 A1
20040196292 Okamura Oct 2004 A1
20040196503 Kurtenbach et al. Oct 2004 A1
20040213476 Luo et al. Oct 2004 A1
20040218832 Luo et al. Nov 2004 A1
20040223063 DeLuca et al. Nov 2004 A1
20040223649 Zacks et al. Nov 2004 A1
20040227978 Enomoto Nov 2004 A1
20040228505 Sugimoto Nov 2004 A1
20040228542 Zhang et al. Nov 2004 A1
20040233299 Ioffe et al. Nov 2004 A1
20040233301 Nakata et al. Nov 2004 A1
20040234156 Watanabe et al. Nov 2004 A1
20040239779 Washisu Dec 2004 A1
20040240747 Jarman et al. Dec 2004 A1
20040258308 Sadovsky et al. Dec 2004 A1
20040264744 Zhang et al. Dec 2004 A1
20050001024 Kusaka et al. Jan 2005 A1
20050013479 Xiao et al. Jan 2005 A1
20050013602 Ogawa Jan 2005 A1
20050013603 Ichimasa Jan 2005 A1
20050018923 Messina et al. Jan 2005 A1
20050024498 Iida et al. Feb 2005 A1
20050031224 Prilutsky et al. Feb 2005 A1
20050036044 Funakura Feb 2005 A1
20050041121 Steinberg et al. Feb 2005 A1
20050046730 Li Mar 2005 A1
20050047655 Luo et al. Mar 2005 A1
20050047656 Luo et al. Mar 2005 A1
20050053279 Chen et al. Mar 2005 A1
20050058340 Chen et al. Mar 2005 A1
20050058342 Chen et al. Mar 2005 A1
20050062856 Matsushita Mar 2005 A1
20050063083 Dart et al. Mar 2005 A1
20050068446 Steinberg et al. Mar 2005 A1
20050068452 Steinberg et al. Mar 2005 A1
20050069208 Morisada Mar 2005 A1
20050074164 Yonaha Apr 2005 A1
20050074179 Wilensky Apr 2005 A1
20050078191 Battles Apr 2005 A1
20050089218 Chiba Apr 2005 A1
20050104848 Yamaguchi et al. May 2005 A1
20050105780 Ioffe May 2005 A1
20050117132 Agostinelli Jun 2005 A1
20050128518 Tsue et al. Jun 2005 A1
20050129278 Rui et al. Jun 2005 A1
20050129331 Kakiuchi et al. Jun 2005 A1
20050134719 Beck Jun 2005 A1
20050140801 Prilutsky et al. Jun 2005 A1
20050147278 Rui et al. Jul 2005 A1
20050151943 Iida Jul 2005 A1
20050163498 Battles et al. Jul 2005 A1
20050168965 Yoshida Aug 2005 A1
20050185054 Edwards et al. Aug 2005 A1
20050196067 Gallagher et al. Sep 2005 A1
20050200736 Ito Sep 2005 A1
20050207649 Enomoto et al. Sep 2005 A1
20050212955 Craig et al. Sep 2005 A1
20050219385 Terakawa Oct 2005 A1
20050219608 Wada Oct 2005 A1
20050220346 Akahori Oct 2005 A1
20050220347 Enomoto et al. Oct 2005 A1
20050226499 Terakawa Oct 2005 A1
20050232490 Itagaki et al. Oct 2005 A1
20050238230 Yoshida Oct 2005 A1
20050243348 Yonaha Nov 2005 A1
20050275721 Ishii Dec 2005 A1
20050275734 Ikeda Dec 2005 A1
20050276481 Enomoto Dec 2005 A1
20050280717 Sugimoto Dec 2005 A1
20050286766 Ferman Dec 2005 A1
20060006077 Mosher et al. Jan 2006 A1
20060008152 Kumar et al. Jan 2006 A1
20060008171 Petschnigg et al. Jan 2006 A1
20060008173 Matsugu et al. Jan 2006 A1
20060017825 Thakur Jan 2006 A1
20060018517 Chen et al. Jan 2006 A1
20060029265 Kim et al. Feb 2006 A1
20060038916 Knoedgen et al. Feb 2006 A1
20060039690 Steinberg et al. Feb 2006 A1
20060045352 Gallagher Mar 2006 A1
20060050300 Mitani et al. Mar 2006 A1
20060050933 Adam et al. Mar 2006 A1
20060056655 Wen et al. Mar 2006 A1
20060066628 Brodie et al. Mar 2006 A1
20060082847 Sugimoto Apr 2006 A1
20060093212 Steinberg et al. May 2006 A1
20060093213 Steinberg et al. May 2006 A1
20060093238 Steinberg et al. May 2006 A1
20060098867 Gallagher May 2006 A1
20060098875 Sugimoto May 2006 A1
20060098890 Steinberg et al. May 2006 A1
20060119832 Iida Jun 2006 A1
20060120599 Steinberg et al. Jun 2006 A1
20060133699 Widrow et al. Jun 2006 A1
20060140455 Costache et al. Jun 2006 A1
20060147192 Zhang et al. Jul 2006 A1
20060150089 Jensen et al. Jul 2006 A1
20060153472 Sakata et al. Jul 2006 A1
20060177100 Zhu et al. Aug 2006 A1
20060177131 Porikli Aug 2006 A1
20060187305 Trivedi et al. Aug 2006 A1
20060203106 Lawrence et al. Sep 2006 A1
20060203107 Steinberg et al. Sep 2006 A1
20060203108 Steinberg et al. Sep 2006 A1
20060204034 Steinberg et al. Sep 2006 A1
20060204052 Yokouchi Sep 2006 A1
20060204054 Steinberg et al. Sep 2006 A1
20060204055 Steinberg et al. Sep 2006 A1
20060204056 Steinberg et al. Sep 2006 A1
20060204057 Steinberg Sep 2006 A1
20060204058 Kim et al. Sep 2006 A1
20060204110 Steinberg et al. Sep 2006 A1
20060210264 Saga Sep 2006 A1
20060215924 Steinberg et al. Sep 2006 A1
20060221408 Fukuda Oct 2006 A1
20060227997 Au et al. Oct 2006 A1
20060228037 Simon et al. Oct 2006 A1
20060245624 Gallagher et al. Nov 2006 A1
20060257047 Kameyama et al. Nov 2006 A1
20060268150 Kameyama et al. Nov 2006 A1
20060269270 Yoda et al. Nov 2006 A1
20060280380 Li Dec 2006 A1
20060285754 Steinberg et al. Dec 2006 A1
20060291739 Li et al. Dec 2006 A1
20070047768 Gordon et al. Mar 2007 A1
20070053614 Mori et al. Mar 2007 A1
20070070440 Li et al. Mar 2007 A1
20070071347 Li et al. Mar 2007 A1
20070091203 Peker et al. Apr 2007 A1
20070098303 Gallagher et al. May 2007 A1
20070110305 Corcoran et al. May 2007 A1
20070110417 Itokawa May 2007 A1
20070116379 Corcoran et al. May 2007 A1
20070116380 Ciuc et al. May 2007 A1
20070122056 Steinberg et al. May 2007 A1
20070133863 Sakai et al. Jun 2007 A1
20070133901 Aiso Jun 2007 A1
20070154095 Cao et al. Jul 2007 A1
20070154096 Cao et al. Jul 2007 A1
20070154189 Harradine et al. Jul 2007 A1
20070160307 Steinberg et al. Jul 2007 A1
20070172126 Kitamura Jul 2007 A1
20070189606 Ciuc et al. Aug 2007 A1
20070189748 Drimbarean et al. Aug 2007 A1
20070189757 Steinberg et al. Aug 2007 A1
20070201724 Steinberg et al. Aug 2007 A1
20070201725 Steinberg et al. Aug 2007 A1
20070201726 Steinberg et al. Aug 2007 A1
20070263104 DeLuca et al. Nov 2007 A1
20070263928 Akahori Nov 2007 A1
20070273504 Tran Nov 2007 A1
20070296833 Corcoran et al. Dec 2007 A1
20080002060 DeLuca et al. Jan 2008 A1
20080013799 Steinberg et al. Jan 2008 A1
20080013800 Steinberg et al. Jan 2008 A1
20080019565 Steinberg Jan 2008 A1
20080031498 Corcoran et al. Feb 2008 A1
20080037827 Corcoran et al. Feb 2008 A1
20080037838 Ianculescu et al. Feb 2008 A1
20080037839 Corcoran et al. Feb 2008 A1
20080037840 Steinberg et al. Feb 2008 A1
20080043121 Prilutsky et al. Feb 2008 A1
20080043122 Steinberg et al. Feb 2008 A1
20080049970 Ciuc et al. Feb 2008 A1
20080055433 Steinberg et al. Mar 2008 A1
20080075385 David et al. Mar 2008 A1
20080112599 Nanu et al. May 2008 A1
20080143854 Steinberg et al. Jun 2008 A1
20080144965 Steinberg et al. Jun 2008 A1
20080144966 Steinberg et al. Jun 2008 A1
20080175481 Petrescu et al. Jul 2008 A1
20080186389 DeLuca et al. Aug 2008 A1
20080205712 Ionita et al. Aug 2008 A1
20080211937 Steinberg et al. Sep 2008 A1
20080219517 Blonk et al. Sep 2008 A1
20080232711 Prilutsky et al. Sep 2008 A1
20080240555 Nanu et al. Oct 2008 A1
20080266419 Drimbarean et al. Oct 2008 A1
20080267461 Ianculescu et al. Oct 2008 A1
20080292193 Bigioi et al. Nov 2008 A1
20080316327 Steinberg et al. Dec 2008 A1
20080316328 Steinberg et al. Dec 2008 A1
20080317339 Steinberg et al. Dec 2008 A1
20080317357 Steinberg et al. Dec 2008 A1
20080317378 Steinberg et al. Dec 2008 A1
20080317379 Steinberg et al. Dec 2008 A1
20090002514 Steinberg et al. Jan 2009 A1
20090003652 Steinberg et al. Jan 2009 A1
20090003661 Ionita et al. Jan 2009 A1
20090003708 Steinberg et al. Jan 2009 A1
20090052749 Steinberg et al. Feb 2009 A1
20090052750 Steinberg et al. Feb 2009 A1
20090080713 Bigioi et al. Mar 2009 A1
20090087030 Steinberg et al. Apr 2009 A1
20090141144 Steinberg Jun 2009 A1
20090175609 Tan Jul 2009 A1
20090179998 Steinberg et al. Jul 2009 A1
20090196466 Capata et al. Aug 2009 A1
20090208056 Corcoran et al. Aug 2009 A1
20090244296 Petrescu et al. Oct 2009 A1
20090245693 Steinberg et al. Oct 2009 A1
20100026831 Ciuc et al. Feb 2010 A1
20100026832 Ciuc et al. Feb 2010 A1
20100026833 Ciuc et al. Feb 2010 A1
20100039525 Steinberg et al. Feb 2010 A1
20100053368 Nanu et al. Mar 2010 A1
20100054533 Steinberg et al. Mar 2010 A1
20100054549 Steinberg et al. Mar 2010 A1
20100092039 Steinberg et al. Apr 2010 A1
20100165140 Steinberg Jul 2010 A1
20100165150 Steinberg et al. Jul 2010 A1
20100188525 Steinberg et al. Jul 2010 A1
20100188530 Steinberg et al. Jul 2010 A1
20100220899 Steinberg et al. Sep 2010 A1
20100271499 Steinberg et al. Oct 2010 A1
20100272363 Steinberg et al. Oct 2010 A1
20110002545 Steinberg et al. Jan 2011 A1
Foreign Referenced Citations (72)
Number Date Country
578508 Jan 1994 EP
1128316 Aug 2001 EP
1199672 Apr 2002 EP
1229486 Aug 2002 EP
1288858 Mar 2003 EP
1288859 Mar 2003 EP
1288860 Mar 2003 EP
1293933 Mar 2003 EP
1296510 Mar 2003 EP
1398733 Mar 2004 EP
1441497 Jul 2004 EP
1453002 Sep 2004 EP
1478169 Nov 2004 EP
1528509 May 2005 EP
1626569 Feb 2006 EP
1785914 May 2007 EP
1887511 Feb 2008 EP
1429290 Jul 2008 EP
841609 Jul 1960 GB
2370438 Jun 2002 GB
2379819 Mar 2003 GB
3205989 Sep 1991 JP
4192681 Jul 1992 JP
5260360 Oct 1993 JP
9214839 Aug 1997 JP
2000-134486 May 2000 JP
2002-247596 Aug 2002 JP
2002-271808 Sep 2002 JP
25164475 Jun 2005 JP
26005662 Jan 2006 JP
2006072770 Mar 2006 JP
26254358 Sep 2006 JP
WO9802844 Jan 1998 WO
WO0076398 Dec 2000 WO
WO0133497 May 2001 WO
WO0171421 Sep 2001 WO
WO0192614 Dec 2001 WO
WO0245003 Jun 2002 WO
WO-02052835 Jul 2002 WO
WO03026278 Mar 2003 WO
WO03028377 Apr 2003 WO
WO03071484 Aug 2003 WO
WO2004034696 Apr 2004 WO
WO2005015896 Feb 2005 WO
WO2005041558 May 2005 WO
WO2005076217 Aug 2005 WO
WO2005087994 Sep 2005 WO
WO2005109853 Nov 2005 WO
WO2006011635 Feb 2006 WO
WO2006018056 Feb 2006 WO
WO2006045441 May 2006 WO
2007095477 Aug 2007 WO
2007095483 Aug 2007 WO
2007095553 Aug 2007 WO
WO2007128117 Nov 2007 WO
WO-2007142621 Dec 2007 WO
2008023280 Feb 2008 WO
WO-2008015586 Feb 2008 WO
WO2008017343 Feb 2008 WO
WO-2008018887 Feb 2008 WO
2007095477 Jul 2008 WO
2007095553 Aug 2008 WO
WO-2008104549 Sep 2008 WO
WO2008107002 Sep 2008 WO
WO2008023280 Nov 2008 WO
WO2009039876 Apr 2009 WO
WO2010012448 Feb 2010 WO
WO2010017953 Feb 2010 WO
WO2010025908 Mar 2010 WO
WO2011000841 Jan 2011 WO
Related Publications (1)
Number Date Country
20080013798 A1 Jan 2008 US
Provisional Applications (1)
Number Date Country
60804546 Jun 2006 US