Real time video processing for changing proportions of an object in the video

Information

  • Patent Grant
  • Patent Number
    11,651,797
  • Date Filed
    Monday, August 15, 2022
  • Date Issued
    Tuesday, May 16, 2023
Abstract
Method involving: providing an object in the video that at least partially and at least occasionally is presented in frames of a video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a first set of node points on the created mesh based on a request for changing proportions; generating a second set of node points based on the first set of node points; and transforming the frames of the video in such way that the object's proportions are transformed in accordance with the second set of the node points using the mesh.
Description
BACKGROUND OF THE INVENTION
Technical Field

The disclosed embodiments relate generally to the field of real time video processing, in particular, to a system and method of real time video processing for changing proportions of an object in the video.


Description of the Related Art

Nowadays a variety of devices and programs can provide processing of still images (for example, effects such as face thinning and makeup) and processing of real time video using filters (for example, webcam video). Some face tracking algorithms and implementations for video streams or video data are also known.


In particular, some programs can change an object in a video stream, for example, change a person's face by changing the proportions of a whole frame or by overlaying extra objects on a person's face. However, there are no programs that can implement changes to an object in a video stream that look natural and cannot be recognized with the naked eye. Further, such programs cannot be implemented in real time on mobile devices, since they are resource-intensive and such devices cannot handle the operations required for changing an object in real time.


U.S. Patent Application Publication No. US2007268312, incorporated herein by reference, discloses a method of replacing face elements with components created by users, as applied to real time video. This method involves changing an object in a video stream by overlaying it with new predetermined images. However, it does not make it possible to process real time video such that an object shown in the video can be modified in real time naturally with certain effects. In the case of a human face, such effects can include making the face fatter or thinner, as well as other distortions.


Thus, new and improved systems and methods are needed that would enable real time video processing for changing proportions of an object in the video.


SUMMARY OF THE INVENTION

The embodiments described herein are directed to systems and methods that substantially obviate one or more of the above and other problems associated with the conventional technology for real time video processing.


In accordance with one aspect of the embodiments described herein, there is provided a computer-implemented method for real time video processing for changing proportions of an object in the video, the method involving: providing an object in the video that at least partially and at least occasionally is presented in frames of a video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a first set of node points on the created mesh based on a request for changing proportions; generating a second set of node points based on the first set of node points; and transforming the frames of the video in such way that the object's proportions are transformed in accordance with the second set of the node points using the mesh.


In one or more embodiments, the computer-implemented method further comprises creating a square grid associated with a background of the object in the video; and transforming the background of the object using the square grid to avoid the background distortion.


In one or more embodiments, the object in the video to be detected is a human face.


In one or more embodiments, the object's feature reference points are at least one of the points indicating eyebrows vertical position, eyes vertical position, eyes width, eyes height, eye separation distance, nose vertical position, nose pointing up, mouth vertical position, mouth width, chin width, upper lip raiser, jaw drop, lip stretcher, left brow lowerer, right brow lowerer, lip corner depressor, and outer brow raiser.


In one or more embodiments, the method further comprises: indicating a presence of an object from a list of objects in frames of the video, wherein the list further comprises rules for changing proportions of each object from the list; and generating a request for changing proportions of the object whose presence in frames of the video is indicated.


In one or more embodiments, the method further comprises: defining an object to be changed in frames of the video and rules for changing proportions of the object by a user; and generating a request for changing proportions of the object defined by the user.


In one or more embodiments, the method further comprises: defining by a user a frame area of the video to be processed, wherein the frame area to be processed sets a frame area of the video such that only proportions of those objects or their parts which are positioned in the frame area to be processed are changed.


In one or more embodiments, the method further comprises: randomly selecting at least one object to be changed in frames of the video out of the objects in frames of the video and randomly selecting at least one rule for changing proportions of the selected object out of a list of rules; and generating a request for changing proportions of the randomly selected object based on the randomly selected rules.


In one or more embodiments, the detecting of the object in the video is implemented with the use of the Viola-Jones method.


In one or more embodiments, the detecting of the object's feature points is implemented with the use of an Active Shape Model (ASM).


In one or more embodiments, the processed video comprises a video stream.


In accordance with another aspect of the embodiments described herein, there is provided a mobile computerized system comprising a central processing unit and a memory, the memory storing instructions for: providing an object in the video that at least partially and at least occasionally is presented in frames of a video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a first set of node points on the created mesh based on a request for changing proportions; generating a second set of node points based on the first set of node points; and transforming the frames of the video in such way that the object's proportions are transformed in accordance with the second set of the node points using the mesh.


In one or more embodiments, the memory further stores instructions for creating a square grid associated with a background of the object in the video; and transforming the background of the object using the square grid to avoid the background distortion.


In one or more embodiments, the object in the video to be detected is a human face.


In one or more embodiments, the object's feature reference points are at least one of the points indicating eyebrows vertical position, eyes vertical position, eyes width, eyes height, eye separation distance, nose vertical position, nose pointing up, mouth vertical position, mouth width, chin width, upper lip raiser, jaw drop, lip stretcher, left brow lowerer, right brow lowerer, lip corner depressor, and outer brow raiser.


In one or more embodiments, the memory further stores instructions for: indicating a presence of an object from a list of objects in frames of the video, wherein the list further comprises rules for changing proportions of each object from the list; and generating a request for changing proportions of the object whose presence in frames of the video is indicated.


In one or more embodiments, the memory further stores instructions for: defining an object to be changed in frames of the video and rules for changing proportions of the object by a user; and generating a request for changing proportions of the object defined by the user.


In one or more embodiments, the memory further stores instructions for: defining by a user a frame area of the video to be processed, wherein the frame area to be processed sets a frame area of the video such that only proportions of those objects or their parts which are positioned in the frame area to be processed are changed.


In one or more embodiments, the memory further stores instructions for: randomly selecting at least one object to be changed in frames of the video out of the objects in frames of the video and randomly selecting at least one rule for changing proportions of the selected object out of a list of rules; and generating a request for changing proportions of the randomly selected object based on the randomly selected rules.


In one or more embodiments, the detecting of the object in the video is implemented with the use of the Viola-Jones method.


In one or more embodiments, the detecting of the object's feature points is implemented with the use of an Active Shape Model (ASM).


Additional aspects related to the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. Aspects of the invention may be realized and attained by means of the elements and combinations of various elements and aspects particularly pointed out in the following detailed description and the appended claims.


It is to be understood that both the foregoing and the following descriptions are exemplary and explanatory only and are not intended to limit the claimed invention or application thereof in any manner whatsoever.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, exemplify the embodiments of the present invention and, together with the description, serve to explain and illustrate principles of the inventive technique. Specifically:



FIG. 1 illustrates facial feature points detected by an ASM algorithm used in the method according to one embodiment of the present invention.



FIG. 2 illustrates Candide-3 model used in the method according to one embodiment of the present invention.



FIG. 3(a)-3(b) show an example of a mean face (a) and an example of a current observation (b).



FIG. 4 illustrates Candide at a frame used in the method according to one embodiment of the present invention.



FIG. 5 shows an example of the square grid used in the method according to one embodiment of the present invention.



FIG. 6 illustrates a set of control points p.



FIG. 7 illustrates the difference between the positions of the points of p and q.



FIG. 8(a)-8(c) show an example of a normal face (a), a thin face with a thin nose provided by the method according to the present invention (b) and a fat face with a fat nose provided by the method according to the present invention (c).



FIG. 9 illustrates an exemplary embodiment of a computer platform based on which the techniques described herein may be implemented.





DETAILED DESCRIPTION

In the following detailed description, reference will be made to the accompanying drawing(s), in which identical functional elements are designated with like numerals. The aforementioned accompanying drawings show by way of illustration, and not by way of limitation, specific embodiments and implementations consistent with principles of the present invention. These implementations are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other implementations may be utilized and that structural changes and/or substitutions of various elements may be made without departing from the scope and spirit of the present invention. The following detailed description is, therefore, not to be construed in a limited sense. Additionally, the various embodiments of the invention as described may be implemented in the form of software running on a general purpose computer, in the form of specialized hardware, or a combination of software and hardware.


It will be appreciated that the method for real time video processing can be performed with any kind of video data, e.g. video streams, video files saved in a memory of a computerized system of any kind (such as mobile computer devices, desktop computer devices and others), and all other possible types of video data understandable to those skilled in the art. Any kind of video data can be processed, and the embodiments disclosed herein are not intended to limit the scope of the present invention by indicating a certain type of video data.


According to one aspect, the automatic real time video processing of the present invention is aimed at detecting a person's face in the video and changing its proportions. However, it is obvious to one skilled in the art that proportions of other objects in video can be changed using the present method.


One embodiment described herein provides an automatic detection of a face in real time video and changing its proportions in said video to make the face thinner or thicker to the selected grade.


In one or more embodiments, the method of real time video processing for changing proportions of an object in the video involves face detection and a 6D head position estimation, in which yaw, pitch, roll, x, y and size are estimated. As human faces and heads may have different properties, such as eye distance, head height, etc., these are estimated from the first frame and do not change during video processing. Positions of the eyebrows, lips and jaw are also estimated at each frame, as they can move independently because of facial gestures.


In one or more embodiments, the method uses the tracked information to achieve the changing of proportions. A video can be processed frame by frame with no dependence between consecutive frames, or information about some previous frames can be used.


In addition, computation on the GPU is used to increase performance.


The embodiments disclosed below are aimed at processing of video streams; however, all other types of video data, including video files saved in a memory of a computerized system, can be processed by the methods of the present invention. For example, a user can load video files and save them in a memory of his computerized system, and such video files can also be processed by the methods of the present invention. According to one of the preferred embodiments, the method of real time video stream processing for changing proportions of an object in the video stream comprises: providing an object in the video stream that at least partially and at least occasionally is presented in frames of a video stream; detecting the object in the video stream, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video stream, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a first set of node points on the created mesh based on a request for changing proportions; generating a second set of node points based on the first set of node points; and transforming the frames of the video stream in such a way that the object's proportions are transformed in accordance with the second set of the node points using the mesh.


According to one of the embodiments, the computer implemented method of claim 1 further includes creating a square grid associated with a background of the object in the video stream; and transforming the background of the object using the square grid to avoid background distortion.


One of the objects to be processed is a human face. In this case object's feature reference points for a human's face are at least one of the points indicating eyebrows vertical position, eyes vertical position, eyes width, eyes height, eye separation distance, nose vertical position, nose pointing up, mouth vertical position, mouth width, chin width, upper lip raiser, jaw drop, lip stretcher, left brow lowerer, right brow lowerer, lip corner depressor, and outer brow raiser.


According to one of the embodiments, the method further comprises indicating a presence of an object from a list of objects in frames of the video stream, wherein the list further comprises rules for changing proportions of each object from the list; and generating a request for changing proportions of the object whose presence in frames of the video stream is indicated.


According to another embodiment, the method further includes defining an object to be changed in frames of the video stream and rules for changing proportions of the object by a user; and generating a request for changing proportions of the object defined by the user. In this case the method can further include defining by a user a frame area of the video stream to be processed, wherein the frame area to be processed sets a frame area of the video stream such that only proportions of those objects or their parts which are positioned in the frame area to be processed are changed.


According to yet another embodiment the method further includes randomly selecting at least one object to be changed in frames of the video stream out of the objects in frames of the video stream and randomly selecting at least one rule for changing proportions of the selected object out of a list of rules; and generating a request for changing proportions of the randomly selected object based on the randomly selected rules.


Face Detection and Initialization


In one or more embodiments, first, in the algorithm for changing proportions, a user sends a request for changing proportions of an object in a video stream. The next step in the algorithm involves detecting the object in the video stream.


In one or more embodiments, the face is detected in an image with the use of the Viola-Jones method, which is a fast and quite accurate method for detecting the face region. Then, an Active Shape Model (ASM) algorithm is applied to the face region of the image to detect facial feature points. However, it should be appreciated that other methods and algorithms suitable for face detection can be used.
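As an illustrative sketch of this detection step, the face region can be located with OpenCV's Haar cascade implementation of the Viola-Jones detector; the cascade file name and the detectMultiScale parameters below are illustrative assumptions rather than values taken from the described embodiment.

```python
# A minimal sketch of the face-detection step, assuming OpenCV's Haar cascade
# (Viola-Jones) detector; parameters are illustrative, not from the patent.
import cv2

def detect_face_region(frame_bgr):
    """Return the first detected face rectangle (x, y, w, h) or None."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return faces[0]  # an ASM would then be fitted inside this region
```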


In one or more embodiments, locating facial features relies on locating landmarks. A landmark represents a distinguishable point present in most of the images under consideration, for example, the location of the left eye pupil (FIG. 1).


In one or more embodiments, a set of landmarks forms a shape. Shapes are represented as vectors: all the x- followed by all the y-coordinates of the points in the shape. One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. The mean shape is the mean of the aligned training shapes (which in the present disclosure are manually landmarked faces).


Subsequently, in accordance with the ASM algorithm, the search for landmarks is started from the mean shape aligned to the position and size of the face determined by a global face detector. The algorithm then repeats the following two steps until convergence: (i) suggest a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point; (ii) conform the tentative shape to a global shape model. The individual template matches are unreliable, and the shape model pools the results of the weak template matchers to form a stronger overall classifier. The entire search is repeated at each level in an image pyramid, from coarse to fine resolution. It follows that two types of submodel make up the ASM: the profile model and the shape model.


In one or more embodiments, the profile models (one for each landmark at each pyramid level) are used to locate the approximate position of each landmark by template matching. Any template matcher can be used, but the classical ASM forms a fixed-length normalized gradient vector (called the profile) by sampling the image along a line (called the whisker) orthogonal to the shape boundary at the landmark. During training on manually landmarked faces, the mean profile vector $\bar{g}$ and the profile covariance matrix $S_g$ are calculated at each landmark. During searching, the landmark is displaced along the whisker to the pixel whose profile $g$ has the lowest Mahalanobis distance from the mean profile $\bar{g}$, where the

$\text{MahalanobisDistance} = (g - \bar{g})^{T} S_g^{-1} (g - \bar{g}).$  (1)
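A minimal sketch of this profile-matching step follows, assuming the candidate profiles have already been sampled along the whisker; the function and parameter names are illustrative.

```python
import numpy as np

def best_profile_offset(profiles, mean_profile, inv_cov):
    """Pick the whisker offset whose sampled profile g minimizes
    (g - g_mean)^T S_g^{-1} (g - g_mean), as in Equation (1).

    profiles : array of shape (num_offsets, profile_len), one sampled
               profile per candidate position along the whisker.
    """
    diffs = profiles - mean_profile                        # (num_offsets, L)
    d2 = np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs)   # squared Mahalanobis distance per offset
    return int(np.argmin(d2))
```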


In one or more embodiments, the shape model specifies allowable constellations of landmarks. It generates a shape $\hat{x}$ with

$\hat{x} = \bar{x} + \Phi b$  (2)


where $\bar{x}$ is the mean shape, $b$ is a parameter vector, and $\Phi$ is a matrix of selected eigenvectors of the covariance matrix $S_s$ of the points of the aligned training shapes. Using a standard principal components approach, the model has as much variation in the training set as is desired by ordering the eigenvalues $\lambda_i$ of $S_s$ and keeping an appropriate number of the corresponding eigenvectors in $\Phi$. In the method a single shape model is used for the entire ASM, but it is scaled for each pyramid level.


Subsequently, Equation 2 is used to generate various shapes by varying the vector parameter $b$. By keeping the elements of $b$ within limits (determined during model building) it is possible to ensure that the generated face shapes are lifelike.
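A minimal sketch of Equation 2 with clamped parameters is shown below; the ±3√λ limits are a common ASM convention assumed here, since the text only states that the limits are determined during model building.

```python
import numpy as np

def generate_shape(mean_shape, eigvecs, eigvals, b, n_sigma=3.0):
    """Equation (2): x_hat = x_mean + Phi * b, with b clamped so that the
    generated shape stays lifelike. The +/- n_sigma * sqrt(lambda_i) limits
    are an assumed convention, not stated explicitly in the text."""
    limits = n_sigma * np.sqrt(eigvals)
    b_clamped = np.clip(b, -limits, limits)
    return mean_shape + eigvecs @ b_clamped
```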


Conversely, given a suggested shape $x$, it is possible to calculate the parameter $b$ that allows Equation 2 to best approximate $x$ with a model shape $\hat{x}$. An iterative algorithm, described by Cootes and Taylor, is used that gives the $b$ and $T$ that minimize

$\operatorname{distance}(x, T(\bar{x} + \Phi b))$  (3)

where $T$ is a similarity transform that maps the model space into the image space.


In one or more embodiments, a mapping can be built from the facial feature points detected by ASM to Candide-3 points, which gives the x and y coordinates of the Candide-3 points. Candide is a parameterised face mask specifically developed for model-based coding of human faces. Its low number of polygons (approximately 100) allows fast reconstruction with moderate computing power. Candide is controlled by global and local Action Units (AUs). The global ones correspond to rotations around three axes. The local Action Units control the mimics of the face so that different expressions can be obtained.


Knowing the x and y coordinates of the Candide-3 points, the following equation system can be written:

$$\sum_{j=1}^{m} X_{ij} B_j = x_i, \qquad (4)$$
$$\sum_{j=1}^{m} Y_{ij} B_j = y_i, \qquad (5)$$

where $B_j$ is the $j$-th shape unit, $x_i, y_i$ are the $i$-th point coordinates, and $X_{ij}, Y_{ij}$ are coefficients which denote how the $i$-th point coordinates are changed by the $j$-th shape unit. This system is over-determined, so it cannot be solved precisely. Thus, the following minimization is made:

$$\left( \sum_{j=1}^{m} X_{ij} B_j - x_i \right)^2 + \left( \sum_{j=1}^{m} Y_{ij} B_j - y_i \right)^2 \to \min. \qquad (6)$$

Let us denote $X = ((X_{ij})^T, (Y_{ij})^T)^T$, $x = ((x_i)^T, (y_i)^T)^T$, $B = (B_j)^T$.  (7)

This equation system is linear, so its solution is

$$B = (X^T X)^{-1} X^T x. \qquad (8)$$
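A minimal sketch of this least-squares solve, using a numerically stable solver in place of the explicit $(X^T X)^{-1} X^T$ product, could look as follows; the array names are illustrative.

```python
import numpy as np

def solve_shape_units(X_coeffs, Y_coeffs, x_pts, y_pts):
    """Solve the over-determined system of Equations (4)-(5) in the
    least-squares sense of Equation (6), i.e. B = (X^T X)^{-1} X^T x
    (Equation (8)). The stacking follows Equation (7)."""
    X = np.vstack([X_coeffs, Y_coeffs])        # (2N, m) stacked coefficients
    x = np.concatenate([x_pts, y_pts])         # (2N,) stacked coordinates
    B, *_ = np.linalg.lstsq(X, x, rcond=None)  # numerically stable least squares
    return B
```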


In one or more embodiments, it is also possible to use the Viola-Jones method and ASM to improve tracking quality. Face tracking methods usually accumulate error over time, so they can lose the face position after several hundred frames. In order to prevent this, in the present invention the ASM algorithm is run from time to time to re-initialize the tracking algorithm.


Face Tracking


In one or more embodiments, the next step comprises tracking the detected object in the video stream. In the present invention the abovementioned Candide-3 model is used for tracking the face in a video stream (see Ahlberg, J.: Candide-3, an updated parameterized face. Technical report, Linkoping University, Sweden (2001)). The mesh or mask corresponding to the Candide-3 model is shown in FIG. 2.


In one or more embodiments, a state of the model can be described by a shape units intensity vector, an action units intensity vector and a position vector. Shape units are the main parameters of a head and a face; in the present invention the following 10 units are used:

    • Eyebrows vertical position
    • Eyes vertical position
    • Eyes width
    • Eyes height
    • Eye separation distance
    • Nose vertical position
    • Nose pointing up
    • Mouth vertical position
    • Mouth width
    • Chin width


In one or more embodiments, action units are face parameters that correspond to face movements; in the present invention the following 7 units are used:

    • Upper lip raiser
    • Jaw drop
    • Lip stretcher
    • Left brow lowerer
    • Right brow lowerer
    • Lip corner depressor
    • Outer brow raiser


In one or more embodiments, the mask position in a picture can be described using 6 coordinates: yaw, pitch, roll, x, y, scale. The main idea of the algorithm proposed by Dornaika et al. (Dornaika, F., Davoine, F.: On appearance based face and facial action tracking. IEEE Trans. Circuits Syst. Video Technol. 16(9):1107-1124 (2006)) is to find the mask position that observes the region most likely to be a face. For each position it is possible to calculate the observation error, a value which indicates the difference between the image under the current mask position and the mean face. An example of the mean face and of the observation under the current position is illustrated in FIGS. 3(a)-3(b). FIG. 3(b) corresponds to the observation under the mask shown in FIG. 4.


In one or more embodiments, the face is modeled as a picture with a fixed size (width=40px, height=46px) called a mean face. The Gaussian distribution proposed in the original algorithm has shown worse results compared with a static image. So the difference between the current observation and the mean face is calculated in the following way:

$e(b) = \sum \left( \log(1 + I_m) - \log(1 + I_i) \right)^2$  (9)

The logarithm function makes tracking more stable.
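A minimal sketch of the observation error of Equation (9), assuming the warped observation and the mean face are given as grayscale arrays of the same size:

```python
import numpy as np

def observation_error(warped_patch, mean_face):
    """Equation (9): sum of squared differences between the log-transformed
    observation and the log-transformed mean face (e.g. 40x46 grayscale)."""
    diff = (np.log1p(warped_patch.astype(np.float64))
            - np.log1p(mean_face.astype(np.float64)))
    return float(np.sum(diff ** 2))
```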


In one or more embodiments, to minimize the error a Taylor series is used, as proposed by Dornaika et al. (see F. Dornaika, F. Davoine, On appearance based face and facial action tracking, in IEEE Transactions on Circuits and Systems for Video Technology, 16(9), September, 2006, p. 1107-1124). It was found that it is not necessary to sum up a number of finite differences when calculating an approximation to the first derivative. The derivative is calculated in the following way:










$$g_{ij} = \frac{W(y_t, b_t + \delta b_t)_{ij} - W(y_t, b_t - \delta b_t)_{ij}}{\delta_j} \qquad (10)$$







Here $g_{ij}$ is an element of the matrix $G$. This matrix has size $m \times n$, where $m$ is large (about 1600) and $n$ is small (about 14). A straightforward calculation requires $n \cdot m$ division operations. To reduce the number of divisions, this matrix can be rewritten as a product of two matrices:

G=A*B

where the matrix $A$ has the same size as $G$ and its elements are:

$a_{ij} = W(y_t, b_t + \delta b_t)_{ij} - W(y_t, b_t - \delta b_t)_{ij}$  (11)


and the matrix $B$ is a diagonal matrix of size $n \times n$, with

$b_{ii} = \delta_i^{-1}$


Now the matrix $G_t^{+}$ has to be obtained, and this is where the number of divisions can be reduced.

$$G_t^{+} = (G^T G)^{-1} G^T = (B^T A^T A B)^{-1} B^T A^T = B^{-1}(A^T A)^{-1} B^{-1} B A^T = B^{-1}(A^T A)^{-1} A^T \qquad (12)$$


After that transformation this can be done with n*n divisions instead of m*n.


One more optimization is used here. If the matrix $G_t^{+}$ is created and then multiplied by $\Delta b_t$, it leads to $n^2 m$ operations, but if $A^T$ and $\Delta b_t$ are multiplied first and then $B^{-1}(A^T A)^{-1}$ is multiplied with the result, there will be only $n \cdot m + n^3$ operations, which is much better because $n \ll m$.
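The following sketch illustrates the multiplication order described above, under the assumption that $B = \operatorname{diag}(1/\delta_j)$ as in Equations (10)-(12); the variable names are illustrative.

```python
import numpy as np

def update_step(A, deltas, residual):
    """Apply G+ = B^{-1} (A^T A)^{-1} A^T (Equation (12)) to a residual vector,
    where B = diag(1/delta_j) so B^{-1} = diag(delta_j).

    A        : (m, n) matrix of forward/backward warp differences, Eq. (11)
    deltas   : (n,) finite-difference steps delta_j
    residual : (m,) vector to be mapped into parameter space
    """
    # Multiply A^T by the vector first (n*m ops), then apply the small
    # n x n factors; this is the n*m + n^3 ordering described in the text.
    t = A.T @ residual                   # (n,)
    t = np.linalg.solve(A.T @ A, t)      # (A^T A)^{-1} * t
    return deltas * t                    # B^{-1} * t, elementwise since B is diagonal
```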


Thus, the step of tracking the detected object in the video stream in the present embodiment comprises creating a mesh that is based on the detected feature points of the object and aligning the mesh to the object on each frame.


It should also be noted that to increase tracking speed in the present invention the multiplication of matrices is performed in such a way that it can be boosted using ARM advanced SIMD extensions (also known as NEON). Also, the GPU is used instead of the CPU whenever possible. To get high performance from the GPU, operations in the present invention are grouped in a special way.


Thus, tracking according to the present invention has the following advantageous features:


1. Before tracking, a logarithm is applied to the grayscale value of each pixel that is tracked. This transformation has a great impact on tracking performance.


2. In the procedure of gradient matrix creation, the step of each parameter depends on the scale of the mask.


Changing of Proportions


In this disclosure, changing of proportions will be described in terms of making the face thinner/thicker. However, it will be appreciated by one skilled in the art that other proportions of the object, for example a human face, can be changed using the method of the present invention.


In the present embodiment of the method, the face tracking results and the rigid moving least squares (MLS) deformation method are used for deforming some face details.


In one or more embodiments, image deformations are built based on collections of points with which the user controls the deformation. A set of control points is referred to as $p$ and the deformed positions of the control points $p$ are referred to as $q$. A deformation function $f$ satisfying the properties of the Moving Least Squares approach is constructed. Given a point $v$ in the image, the best affine transformation $l_v(x)$ is sought that minimizes

$\sum_i w_i \left| l_v(p_i) - q_i \right|^2$  (13)


where pi and qi are row vectors and the weights wi have the form










$$w_i = \frac{1}{\left| p_i - v \right|^{2\alpha}} \qquad (14)$$







In one or more embodiments, $\alpha = 0.9$ is chosen for the method, and the Rigid Deformations variant of MLS is chosen. However, it is clear to one skilled in the art that other values and methods can be chosen in other embodiments of the present invention. By this method, each point $v$ in the image transforms to the point $f_r(v)$:











$$f_r(v) = \left| v - p_* \right| \, \frac{\sum_i (q_i - q_*) A_i}{\left| \sum_i (q_i - q_*) A_i \right|} + q_* \qquad (15)$$

where

$$A_i = w_i \begin{pmatrix} p_i - p_* \\ -(p_i - p_*)^{\perp} \end{pmatrix} \begin{pmatrix} v - p_* \\ -(v - p_*)^{\perp} \end{pmatrix}^{T} \qquad (16)$$

$$(x; y)^{\perp} = (-y; x) \qquad (17)$$

$$p_* = \frac{\sum_i w_i p_i}{\sum_i w_i} \qquad (18)$$

$$q_* = \frac{\sum_i w_i q_i}{\sum_i w_i} \qquad (19)$$

$$\left| (x; y) \right| = \sqrt{x^2 + y^2} \qquad (20)$$
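A minimal sketch of the rigid MLS deformation of Equations (13)-(20), evaluated at a single point with α = 0.9 as in the described embodiment; the handling of a point coinciding with a control point is an added assumption.

```python
import numpy as np

def perp(v):
    """Equation (17): (x, y) -> (-y, x)."""
    return np.array([-v[1], v[0]])

def rigid_mls_point(v, p, q, alpha=0.9):
    """Evaluate the rigid MLS deformation f_r(v) of Equations (15)-(20)
    for one point v, given control points p and their targets q
    (both arrays of shape (k, 2))."""
    d2 = np.sum((p - v) ** 2, axis=1)
    if np.any(d2 == 0):                  # v coincides with a control point (assumed behavior)
        return q[np.argmin(d2)].astype(float)
    w = 1.0 / d2 ** alpha                # Equation (14): 1 / |p_i - v|^(2*alpha)
    p_star = (w[:, None] * p).sum(0) / w.sum()   # Equation (18)
    q_star = (w[:, None] * q).sum(0) / w.sum()   # Equation (19)
    # Accumulate sum_i (q_i - q_star) A_i with A_i from Equation (16).
    fr_vec = np.zeros(2)
    for wi, pi, qi in zip(w, p, q):
        ph = pi - p_star
        vh = v - p_star
        Ai = wi * np.array([ph, -perp(ph)]) @ np.array([vh, -perp(vh)]).T
        fr_vec += (qi - q_star) @ Ai
    # Equation (15): keep only the rotation direction, rescale to |v - p_star|.
    return np.linalg.norm(v - p_star) * fr_vec / np.linalg.norm(fr_vec) + q_star
```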







In one or more embodiments, to make calculations faster, a square grid is placed over the picture and the function's values are calculated at its vertices only. Values at all other pixels are calculated approximately using bilinear interpolation. This square grid is also associated with the background of the object in the video stream and is used to transform the background of the object to avoid background distortion.


In mathematics, bilinear interpolation is an extension of linear interpolation for interpolating functions of two variables (e.g., x and y) on a regular 2D grid.


In one or more embodiments, linear interpolation is performed first in one direction, and then again in the other direction. Although each step is linear in the sampled values and in the position, the interpolation as a whole is not linear but rather quadratic in the sample location (details below).


In one or more embodiments, it is further supposed that the value of the unknown function f at the point P=(x,y) is to be found. It is assumed that the value of f at the four points Q11=(x1,y1), Q12=(x1,y2), Q21=(x2,y1), and Q22=(x2,y2) is known.


First, linear interpolation in the x-direction is performed. This yields










$$f(R_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}) \qquad (21)$$







where $R_1 = (x, y_1)$.










$$f(R_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}) \qquad (22)$$







where $R_2 = (x, y_2)$.


Then interpolation in the y-direction is performed:










$$f(P) \approx \frac{y_2 - y}{y_2 - y_1} f(R_1) + \frac{y - y_1}{y_2 - y_1} f(R_2) \qquad (23)$$







This gives the desired estimate of f(x,y).










$$f(x, y) \approx \frac{(x_2 - x)(y_2 - y)}{(x_2 - x_1)(y_2 - y_1)} f(x_1, y_1) + \frac{(x - x_1)(y_2 - y)}{(x_2 - x_1)(y_2 - y_1)} f(x_2, y_1) + \frac{(x_2 - x)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)} f(x_1, y_2) + \frac{(x - x_1)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)} f(x_2, y_2) \qquad (24)$$
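A minimal sketch of Equations (21)-(24); it accepts scalars or arrays and assumes $x_1 \neq x_2$ and $y_1 \neq y_2$.

```python
def bilinear_interpolate(x, y, x1, x2, y1, y2, f11, f21, f12, f22):
    """Equations (21)-(24). f11 = f(Q11) = f(x1, y1), f21 = f(Q21) = f(x2, y1),
    f12 = f(Q12) = f(x1, y2), f22 = f(Q22) = f(x2, y2)."""
    # Equation (21): interpolate along x at y = y1.
    f_r1 = (x2 - x) / (x2 - x1) * f11 + (x - x1) / (x2 - x1) * f21
    # Equation (22): interpolate along x at y = y2.
    f_r2 = (x2 - x) / (x2 - x1) * f12 + (x - x1) / (x2 - x1) * f22
    # Equation (23): interpolate the two results along y.
    return (y2 - y) / (y2 - y1) * f_r1 + (y - y1) / (y2 - y1) * f_r2
```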







The vertices of the grid are shown as red pixels in FIG. 5.


In one or more embodiments, to make calculations faster, the values of $w_i$ are pre-calculated for all integer vectors $p_i - v$ at the beginning of the program, and exact values are not calculated while the algorithm runs; they are taken by the nearest neighbor method.


In one or more embodiments, for each pixel of the resulting image, its value is calculated using the following formula:










$$c_u = \frac{\displaystyle\sum_{\substack{|u.x - f_r(v).x| < 1 \\ |u.y - f_r(v).y| < 1}} c_v \,\bigl(1 - |f_r(v).x - u.x|\bigr)\bigl(1 - |f_r(v).y - u.y|\bigr)}{\displaystyle\sum_{\substack{|u.x - f_r(v).x| < 1 \\ |u.y - f_r(v).y| < 1}} \bigl(1 - |f_r(v).x - u.x|\bigr)\bigl(1 - |f_r(v).y - u.y|\bigr)} \qquad (25)$$







where $u$ is a point on the resulting image, $v$ is a point on the initial image, $c_u$ is the color of pixel $u$, and $c_v$ is the color of pixel $v$. To find all the pixels on the initial image which satisfy the condition

$|u.x - f_r(v).x| < 1 \;\&\; |u.y - f_r(v).y| < 1$  (26)


it is not necessary to look through all the pixels. Instead, the transformation $f_r$ is built and for each point $f_r(v)$ the nearest pixels are found:

$([f_r(v).x], [f_r(v).y])$  (27)
$([f_r(v).x] + 1, [f_r(v).y])$  (28)
$([f_r(v).x], [f_r(v).y] + 1)$  (29)
$([f_r(v).x] + 1, [f_r(v).y] + 1)$  (30)


and save two corresponding sums for them:

$\text{bufferSums}[u] \mathrel{+}= c_v \bigl(1 - |f_r(v).x - u.x|\bigr)\bigl(1 - |f_r(v).y - u.y|\bigr)$  (31)
$\text{bufferWeight}[u] \mathrel{+}= \bigl(1 - |f_r(v).x - u.x|\bigr)\bigl(1 - |f_r(v).y - u.y|\bigr)$  (32)


Then the color value of each pixel can be calculated as follows:










$$c_u = \frac{\text{bufferSums}[u]}{\text{bufferWeight}[u]} \qquad (33)$$







If some resulting pixels do not have a prototype, their values are calculated using bilinear interpolation of their neighbors.
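A minimal sketch of the accumulation scheme of Equations (25)-(33), assuming the warped coordinates $f_r(v)$ have already been computed for every source pixel; the bounds checking and the function name are added assumptions.

```python
import numpy as np

def forward_warp(src, fr_x, fr_y):
    """Splat each source pixel v into the four nearest destination pixels of
    f_r(v), as in Equations (27)-(32), then normalize by the accumulated
    weights as in Equation (33). src is a float grayscale image; fr_x and
    fr_y hold the warped coordinates of every source pixel (same shape)."""
    h, w = src.shape
    sums = np.zeros((h, w))
    weights = np.zeros((h, w))
    x0 = np.floor(fr_x).astype(int)
    y0 = np.floor(fr_y).astype(int)
    for dx in (0, 1):
        for dy in (0, 1):
            ux, uy = x0 + dx, y0 + dy
            wgt = (1.0 - np.abs(fr_x - ux)) * (1.0 - np.abs(fr_y - uy))
            valid = (ux >= 0) & (ux < w) & (uy >= 0) & (uy < h) & (wgt > 0)
            np.add.at(sums, (uy[valid], ux[valid]), src[valid] * wgt[valid])
            np.add.at(weights, (uy[valid], ux[valid]), wgt[valid])
    out = np.zeros_like(sums)
    covered = weights > 0
    out[covered] = sums[covered] / weights[covered]
    # Pixels with no prototype would then be filled by bilinear interpolation
    # of their neighbors, as described in the text.
    return out
```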


In one or more embodiments, the face tracking results are used to choose the sets of control points $p$ and $q$. Some vertices of Candide are projected to the plane and 8 points are added: 4 corner points and the 4 midpoints of the borders. This set of points is taken as $p$. FIG. 6 shows the choice of control points (marked green).


In one or more embodiments, to obtain the set $q$, Deformation units were introduced into Candide. They are parameters that correspond to the desired deformations. In this embodiment 3 deformation units are added:

    • Fatness
    • Nose width
    • Eye width


However, in other embodiments other deformation units can be chosen to implement the desired face deformation.


In one or more embodiments, each Deformation unit influences the positions of some Candide points, and it has a current value at each moment of time: the bigger the value, the bigger the influence. For example, to make a person fatter, the Fatness value should be increased, and to make him thinner it should be decreased.


Thus, at each moment of time two Candide models with equal values of Shape and Action units are present, but with different values of Deformation units. The first Candide corresponds to the real face form and the second one corresponds to the desired form. By projecting the second Candide's points to the plane, the set $q$ is obtained. FIG. 7 shows the difference between the sets $p$ (green points) and $q$ (corresponding blue points). Then MLS is used to get the transformation of $p$ into $q$.
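A minimal sketch of how the Deformation units could be applied to obtain the second Candide, assuming each unit is stored as rows of (vertex index, x, y, z) coefficients as in the tables below and that displacements scale linearly with the unit's current value; the data layout and function name are illustrative.

```python
import numpy as np

def apply_deformation_units(candide_vertices, deformation_units, values):
    """Produce the 'second Candide' by offsetting vertices with the active
    Deformation units.

    candide_vertices  : (N, 3) array of the tracked (first) Candide vertices
    deformation_units : dict name -> list of (vertex_index, dx, dy, dz) rows
    values            : dict name -> current value (e.g. {"Fatness": 1.5})
    """
    deformed = np.array(candide_vertices, dtype=float)
    for name, rows in deformation_units.items():
        value = values.get(name, 0.0)
        for idx, dx, dy, dz in rows:
            deformed[idx] += value * np.array([dx, dy, dz])
    return deformed

# Control points q are then obtained by projecting `deformed` to the image
# plane, while p comes from projecting the unmodified tracked Candide.
```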


Here are the values of the Deformation units' influence on the chosen points in the described embodiment:


Fatness (8)

    • 62 0.050000 0.000000 0.000000
    • 61 0.100000 0.000000 0.000000
    • 63 0.110000 0.000000 0.000000
    • 29 -0.050000 0.000000 0.000000
    • 28 -0.100000 0.000000 0.000000
    • 30 -0.110000 0.000000 0.000000
    • 65 0.000000 0.100000 0.000000
    • 32 0.000000 0.100000 0.000000


Nose width (4)

    • 76 0.050000 0.000000 0.000000
    • 75 -0.050000 0.000000 0.000000
    • 78 0.030000 0.000000 0.000000
    • 77 -0.030000 0.000000 0.000000


Eye width (10)

    • 52 0.000000 0.030000 0.000000
    • 53 -0.020000 0.000000 0.000000
    • 56 0.020000 0.000000 0.000000
    • 57 0.000000 -0.030000 0.000000
    • 73 0.000000 0.025000 0.000000
    • 19 0.000000 0.030000 0.000000
    • 20 0.020000 0.000000 0.000000
    • 23 -0.020000 0.000000 0.000000
    • 24 0.000000 -0.030000 0.000000
    • 0.000000 0.025000 0.000000


Examples of applying the Fatness and Nose width deformations are shown in FIGS. 8(a)-8(c). To make the fat deformation more natural, the mouth is not stretched when making people fatter, but the mouth is compressed when making people thinner.


Thus, the algorithm has to:

    • 1. find the Candide position (Shape and Action units)
    • 2. apply Deformation units to the second Candide
    • 3. project both Candides to obtain sets p and q
    • 4. build the deformation using MLS in grid vertices
    • 5. calculate deformation in all pixels using bilinear interpolation
    • 6. build resulting picture


In one or more embodiments, to achieve this effect in real time, the GPU is used with some optimizations. The image is split with a regular grid and the transformation is calculated only at its nodes. Then linear interpolation is used to get the transformation at each pixel. With increasing grid size, fps (frames per second) increases but quality becomes worse.


Thus, changing the object's proportions in real time in a video stream according to the present invention has the following distinguishing features. In the original algorithm the transformation has to be computed for each pixel, but on a device this runs slowly. To increase speed, the inventors divide the image plane with a regular grid and compute the transformation at grid nodes only. The transformation at other pixels is interpolated.


Further advantages of the described embodiments are given by the fact that the method of real time video stream processing for changing proportions of an object in the video stream can be implemented on mobile devices, for example mobile phones, smartphones, tablet computers, etc., since the method is not resource-intensive.


Exemplary Computer Platform


FIG. 9 is a block diagram that illustrates an embodiment of a computer system 500 upon which various embodiments of the inventive concepts described herein may be implemented. The system 500 includes a computer platform 501, peripheral devices 502 and network resources 503.


The computer platform 501 may include a data bus 504 or other communication mechanism for communicating information across and among various parts of the computer platform 501, and a processor 505 coupled with bus 504 for processing information and performing other computational and control tasks. Computer platform 501 also includes a volatile storage 506, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 504 for storing various information as well as instructions to be executed by processor 505, including the software application for implementing the real time video processing techniques described above. The volatile storage 506 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 505. Computer platform 501 may further include a read only memory (ROM or EPROM) 507 or other static storage device coupled to bus 504 for storing static information and instructions for processor 505, such as basic input-output system (BIOS), as well as various system configuration parameters. A persistent storage device 508, such as a magnetic disk, optical disk, or solid-state flash memory device is provided and coupled to bus 504 for storing information and instructions.


Computer platform 501 may be coupled via bus 504 to a touch-sensitive display 509, such as a cathode ray tube (CRT), plasma display, or a liquid crystal display (LCD), for displaying information to a system administrator or user of the computer platform 501. An input device 510, including alphanumeric and other keys, is coupled to bus 504 for communicating information and command selections to processor 505. Another type of user input device is cursor control device 511, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 505 and for controlling cursor movement on touch-sensitive display 509. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. To detect user's gestures, the display 509 may incorporate a touchscreen interface configured to detect user's tactile events and send information on the detected events to the processor 505 via the bus 504.


An external storage device 512 may be coupled to the computer platform 501 via bus 504 to provide an extra or removable storage capacity for the computer platform 501. In an embodiment of the computer system 500, the external removable storage device 512 may be used to facilitate exchange of data with other computer systems.


The invention is related to the use of computer system 500 for implementing the techniques described herein. In an embodiment, the inventive system may reside on a machine such as computer platform 501. According to one embodiment of the invention, the techniques described herein are performed by computer system 500 in response to processor 505 executing one or more sequences of one or more instructions contained in the volatile memory 506. Such instructions may be read into volatile memory 506 from another computer-readable medium, such as persistent storage device 508. Execution of the sequences of instructions contained in the volatile memory 506 causes processor 505 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.


The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor 505 for execution. The computer-readable medium is just one example of a machine-readable medium, which may carry instructions for implementing any of the methods and/or techniques described herein. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as the persistent storage device 508. Volatile media includes dynamic memory, such as volatile storage 506.


Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a flash drive, a memory card, any other memory chip or cartridge, or any other medium from which a computer can read.


Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 505 for execution. For example, the instructions may initially be carried on a magnetic disk from a remote computer. Alternatively, a remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on the data bus 504. The bus 504 carries the data to the volatile storage 506, from which processor 505 retrieves and executes the instructions. The instructions received by the volatile memory 506 may optionally be stored on persistent storage device 508 either before or after execution by processor 505. The instructions may also be downloaded into the computer platform 501 via Internet using a variety of network data communication protocols well known in the art.


The computer platform 501 also includes a communication interface, such as network interface card 513 coupled to the data bus 504. Communication interface 513 provides a two-way data communication coupling to a network link 514 that is coupled to a local network 515. For example, communication interface 513 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 513 may be a local area network interface card (LAN NIC) to provide a data communication connection to a compatible LAN. Wireless links, such as the well-known 802.11a, 802.11b, 802.11g and Bluetooth, may also be used for network implementation. In any such implementation, communication interface 513 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.


Network link 514 typically provides data communication through one or more networks to other network resources. For example, network link 514 may provide a connection through local network 515 to a host computer 516, or a network storage/server 522. Additionally or alternatively, the network link 514 may connect through gateway/firewall 517 to the wide-area or global network 518, such as an Internet. Thus, the computer platform 501 can access network resources located anywhere on the Internet 518, such as a remote network storage/server 519. On the other hand, the computer platform 501 may also be accessed by clients located anywhere on the local area network 515 and/or the Internet 518. The network clients 520 and 521 may themselves be implemented based on the computer platform similar to the platform 501.


Local network 515 and the Internet 518 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 514 and through communication interface 513, which carry the digital data to and from computer platform 501, are exemplary forms of carrier waves transporting the information.


Computer platform 501 can send messages and receive data, including program code, through the variety of network(s), including Internet 518 and LAN 515, network link 514 and communication interface 513. In the Internet example, when the system 501 acts as a network server, it might transmit a requested code or data for an application program running on client(s) 520 and/or 521 through the Internet 518, gateway/firewall 517, local area network 515 and communication interface 513. Similarly, it may receive code from other network resources.


The received code may be executed by processor 505 as it is received, and/or stored in persistent or volatile storage devices 508 and 506, respectively, or other non-volatile storage for later execution.


Finally, it should be understood that processes and techniques described herein are not inherently related to any particular apparatus and may be implemented by any suitable combination of components. Further, various types of general purpose devices may be used in accordance with the teachings described herein. It may also prove advantageous to construct specialized apparatus to perform the method steps described herein. The present invention has been described in relation to particular examples, which are intended in all respects to be illustrative rather than restrictive. Those skilled in the art will appreciate that many different combinations of hardware, software, and firmware will be suitable for practicing the present invention. For example, the described software may be implemented in a wide variety of programming or scripting languages, such as Assembler, C/C++, Objective-C, perl, shell, PHP, Java, as well as any now known or later developed programming or scripting language.


Moreover, other implementations of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. Various aspects and/or components of the described embodiments may be used singly or in any combination in the systems and methods for real time video stream processing. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims
  • 1. A computer implemented method comprising: receiving a video depicting a face; receiving a request for changing a fatness of the face; after receiving the request for changing the fatness of the face: initializing a tracking process to detect the face in the video; and deforming a first portion of the detected face depicted in the video by a first deformation amount, in accordance with the request, while deforming a second portion of the face depicted in the video by a second deformation amount, the deforming comprising: causing a first portion of the face to be deformed without stretching a mouth portion of the face in response to determining that the request corresponds to increasing the fatness of the face; and causing the first portion of the face to be deformed together with compressing the mouth portion of the face in response to determining that the request corresponds to decreasing the fatness of the face; and providing the video comprising the deformed first portion of the face.
  • 2. The computer implemented method of claim 1, further comprising: detecting feature reference points of the face; tracking the face in the video, wherein the tracking comprises creating a first mesh based on the detected feature reference points of the face and aligning the first mesh to the face in each frame; while tracking the face with the first mesh, transforming a set of pixels within the frames of the video representing a portion of the feature reference points to generate transformed frames of the video; and maintaining the first mesh while the face is present in the frames of the video.
  • 3. The computer implemented method of claim 2, wherein the feature reference points are at least one of points indicating eyebrows vertical position, eyes vertical position, eyes width, eyes height, eye separation distance, nose vertical position, nose pointing up, mouth vertical position, mouth width, chin width, upper lip raiser, jaw drop, lip stretcher, left brow lowerer, right brow lowerer, lip corner depressor, or outer brow raiser.
  • 4. The computer implemented method of claim 1, further comprising: associating a square grid with a background of the face in the video; and transforming the background of the face using the square grid to avoid background distortion.
  • 5. The computer implemented method of claim 1, further comprising: indicating a presence of the face from a list of objects in frames of the video, wherein the list further comprises rules for changing proportions of each object from the list; and generating a request for changing proportions of the face in which presence in frames of the video is indicated.
  • 6. The computer implemented method of claim 1, further comprising: from time to time, re-initializing the tracking process to detect the face in the video to continue deforming the first portion and a second portion of the face.
  • 7. The computer implemented method of claim 1, further comprising: defining a frame area of the video to be processed, wherein the frame area to be processed sets a frame area of the video where only proportions of those objects or their parts which are positioned in the frame area to be processed are changed.
  • 8. The computer implemented method of claim 1, further comprising: randomly selecting the face to be changed in frames of the video and randomly selecting at least one rule for changing proportions of the face out of a list of rules; and generating the request for changing proportions of the randomly selected face based on the randomly selected rules.
  • 9. The computer implemented method of claim 1, further comprising: applying a transformation function only in vertices of a square grid associated with frames of the video; and after the transformation function is applied only in the vertices, computing values in a collection of pixels using linear interpolation in a first direction based on values of the vertices; and after computing the values in the collection of pixels in the first direction, computing values of the collection of pixels using linear interpolation in a second direction based on the values of the vertices.
  • 10. The computer implemented method of claim 1, wherein the tracking process for tracking the face comprises an Active Shape Model (ASM).
  • 11. The computer implemented method of claim 1, further comprising: generating a transformation between a pixel of a frame of the video and a corresponding modified pixel in a second frame of the video, the modified pixel being generated based on the request for changing the fatness of the face; identifying a set of nearby pixels to the pixel of the frame based on the transformation; computing two sums for the set of nearby pixels, a first sum of the two sums being computed by scaling values of the set of nearby pixels by the pixel of the frame, a second sum of the two sums being computed based on the values of the set of nearby pixels; and generating the corresponding modified pixel based on a ratio of the first sum and the second sum.
  • 12. The computer implemented method of claim 1, further comprising: obtaining grayscale values of each pixel of the face in the video; applying a logarithm to the grayscale values of each pixel of the face in the video; and tracking the face in the video based on the logarithm of the grayscale values.
  • 13. A system comprising: a processor and a memory, the memory storing instructions executed by the processor for performing operations comprising: receiving a video depicting a face; receiving a request for changing a fatness of the face; after receiving the request for changing the fatness of the face: initializing a tracking process to detect the face in the video; and deforming a first portion of the detected face depicted in the video by a first deformation amount, in accordance with the request, while deforming a second portion of the face depicted in the video by a second deformation amount, the deforming comprising: causing a first portion of the face to be deformed without stretching a mouth portion of the face in response to determining that the request corresponds to increasing the fatness of the face; and causing the first portion of the face to be deformed together with compressing the mouth portion of the face in response to determining that the request corresponds to decreasing the fatness of the face; and providing the video comprising the deformed first portion of the face.
  • 14. The system of claim 13, the operations further comprise: detecting feature reference points of the face; tracking the detected face in the video, wherein the tracking comprises creating a first mesh based on the detected feature reference points of the face and aligning the first mesh to the face in each frame; while tracking the detected face with the first mesh, transforming a set of pixels within the frames of the video representing a portion of the feature reference points to generate transformed frames of the video; and maintaining the first mesh while the face is present in the frames of the video.
  • 15. The system of claim 14, wherein the feature reference points are at least one of points indicating eyebrows vertical position, eyes vertical position, eyes width, eyes height, eye separation distance, nose vertical position, nose pointing up, mouth vertical position, mouth width, chin width, upper lip raiser, jaw drop, lip stretcher, left brow lowerer, right brow lowerer, lip corner depressor, or outer brow raiser.
  • 16. The system of claim 13, further comprising operations for: associating a square grid with a background of the face in the video; and transforming the background of the face using the square grid to avoid background distortion.
  • 17. The system of claim 13, the operations further comprising: generating a transformation between a pixel of a frame of the video and a corresponding modified pixel in a second frame of the video, the modified pixel being generated based on the request for changing the fatness of the face; identifying a set of nearby pixels to the pixel of the frame based on the transformation; computing two sums for the set of nearby pixels, a first sum of the two sums being computed by scaling values of the set of nearby pixels, a second sum of the two sums being computed based on the values of the set of nearby pixels; and generating the corresponding modified pixel based on a ratio of the first sum and the second sum.
  • 18. A non-transitory computer readable medium comprising non-transitory computer readable instructions that, when executed by one or more processors, configure the one or more processors to perform operations comprising: receiving a video depicting a face; receiving a request for changing a fatness of the face; after receiving the request for changing the fatness of the face: initializing a tracking process to detect the face in the video; and deforming a first portion of the detected face depicted in the video by a first deformation amount, in accordance with the request, while deforming a second portion of the face depicted in the video by a second deformation amount, the deforming comprising: causing a first portion of the face to be deformed without stretching a mouth portion of the face in response to determining that the request corresponds to increasing the fatness of the face; and causing the first portion of the face to be deformed together with compressing the mouth portion of the face in response to determining that the request corresponds to decreasing the fatness of the face; and providing the video comprising the deformed first portion of the face.
  • 19. The non-transitory computer readable medium of claim 18, wherein the operations further comprise: generating a transformation between a pixel of a frame of the video and a corresponding modified pixel in a second frame of the video, the modified pixel being generated based on the request for changing the fatness of the face; identifying a set of nearby pixels to the pixel of the frame based on the transformation; computing two sums for the set of nearby pixels, a first sum of the two sums being computed by scaling values of the set of nearby pixels, a second sum of the two sums being computed based on the values of the set of nearby pixels; and generating the corresponding modified pixel based on a ratio of the first sum and the second sum.
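Claim 9 describes evaluating a transformation only at the vertices of a square grid and then filling the remaining pixels with two passes of linear interpolation, one per direction. The following Python/NumPy sketch illustrates that idea under stated assumptions; it is not the patented implementation, and the grid step, the toy transformation, and the function name warp_on_grid are illustrative choices only.

```python
import numpy as np

def warp_on_grid(frame: np.ndarray, transform, step: int = 16) -> np.ndarray:
    """Evaluate `transform` only at square-grid vertices, then fill the rest
    by linear interpolation in two directions (cf. claim 9)."""
    h, w = frame.shape[:2]
    ys = np.arange(0, h, step)          # vertex rows
    xs = np.arange(0, w, step)          # vertex columns

    # 1. Apply the transformation function only at the grid vertices.
    vertex_vals = np.array(
        [[transform(frame, int(y), int(x)) for x in xs] for y in ys],
        dtype=np.float32,
    )

    # 2. Linear interpolation in the first (horizontal) direction.
    full_x = np.arange(w)
    rows = np.stack([np.interp(full_x, xs, vertex_vals[i]) for i in range(len(ys))])

    # 3. Linear interpolation in the second (vertical) direction.
    full_y = np.arange(h)
    out = np.stack([np.interp(full_y, ys, rows[:, j]) for j in range(w)], axis=1)
    return out.astype(np.float32)

if __name__ == "__main__":
    frame = np.random.rand(120, 160).astype(np.float32)
    # Toy per-pixel transformation standing in for the real warp.
    toy = lambda img, y, x: 0.5 * float(img[y, x]) + 0.1 * (x / img.shape[1])
    print(warp_on_grid(frame, toy, step=16).shape)   # (120, 160)
```

Confining the costly transformation to a sparse set of vertices keeps the per-frame cost low, and the two interpolation passes then recover a dense result, which is the usual motivation for this kind of grid evaluation in real-time processing.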
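Claims 11, 17, and 19 generate each modified pixel as a ratio of two sums computed over a set of nearby source pixels, which amounts to a normalized weighted resampling. The sketch below is a hedged illustration only: the Gaussian weights, the square neighborhood, and the inverse_map callable are assumptions, not details taken from the patent.

```python
import numpy as np

def resample_pixel(src: np.ndarray, y: float, x: float,
                   radius: int = 1, sigma: float = 0.8) -> float:
    """Ratio of two sums over the source pixels near (y, x): a weighted sum of
    pixel values divided by the sum of the weights (cf. claims 11, 17, 19)."""
    h, w = src.shape
    y0, x0 = int(round(y)), int(round(x))
    first_sum = 0.0    # values of the nearby pixels, scaled by their weights
    second_sum = 0.0   # the weights themselves
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                weight = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))
                first_sum += weight * src[yy, xx]
                second_sum += weight
    return first_sum / second_sum if second_sum > 0 else 0.0

def warp_frame(src: np.ndarray, inverse_map) -> np.ndarray:
    """For every pixel of the modified frame, map back into the source frame
    (the transformation between a pixel and its corresponding modified pixel)
    and take the ratio of the two sums over the nearby source pixels."""
    h, w = src.shape
    dst = np.zeros((h, w), dtype=np.float32)
    for oy in range(h):
        for ox in range(w):
            sy, sx = inverse_map(oy, ox)
            dst[oy, ox] = resample_pixel(src, sy, sx)
    return dst

if __name__ == "__main__":
    src = np.random.rand(64, 64).astype(np.float32)
    shift = lambda oy, ox: (oy + 0.5, ox - 0.25)   # toy inverse transformation
    print(warp_frame(src, shift).shape)            # (64, 64)
```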
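Claim 12 tracks the face on the logarithm of the grayscale values, which damps multiplicative illumination changes. A minimal sketch under assumed choices follows; the luma weights and the tracker.update interface are illustrative, not the patent's implementation.

```python
import numpy as np

def log_grayscale(frame_bgr: np.ndarray) -> np.ndarray:
    """Convert a BGR frame to grayscale and apply a logarithm, damping
    multiplicative illumination changes before tracking (cf. claim 12)."""
    b, g, r = frame_bgr[..., 0], frame_bgr[..., 1], frame_bgr[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r          # standard luma weights
    return np.log1p(gray.astype(np.float32))          # log(1 + I) avoids log(0)

def track_on_log_frames(frames, tracker):
    """Feed log-grayscale frames to an arbitrary tracker object
    (e.g. an ASM-based tracker, cf. claim 10); `tracker.update` is assumed."""
    for frame in frames:
        yield tracker.update(log_grayscale(frame))

if __name__ == "__main__":
    frame = (np.random.rand(48, 64, 3) * 255).astype(np.uint8)
    print(log_grayscale(frame).shape)   # (48, 64)
```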
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of and claims the benefit of priority of U.S. patent application Ser. No. 16/749,708, filed on Jan. 22, 2020, which is a continuation of and claims the benefit of priority of U.S. patent application Ser. No. 14/314,312, filed on Jun. 25, 2014, which claims the benefit of U.S. Provisional Application No. 61/936,016, filed on Feb. 5, 2014, which are hereby incorporated by reference herein in their entirety.

US Referenced Citations (431)
Number Name Date Kind
4573070 Cooper Feb 1986 A
4888713 Falk Dec 1989 A
5227863 Bilbrey et al. Jul 1993 A
5359706 Sterling Oct 1994 A
5479603 Stone et al. Dec 1995 A
5715382 Herregods et al. Feb 1998 A
5726671 Ansley Mar 1998 A
5880731 Liles et al. Mar 1999 A
5990973 Sakamoto Nov 1999 A
6016150 Lengyel et al. Jan 2000 A
6023270 Brush, II et al. Feb 2000 A
6038295 Mattes Mar 2000 A
6223165 Lauffer Apr 2001 B1
6252576 Nottingham Jun 2001 B1
6278491 Wang et al. Aug 2001 B1
H2003 Minner Nov 2001 H
6492986 Metaxas et al. Dec 2002 B1
6621939 Negishi et al. Sep 2003 B1
6664956 Erdem Dec 2003 B1
6768486 Szabo et al. Jul 2004 B1
6771303 Zhang et al. Aug 2004 B2
6772195 Hatlelid et al. Aug 2004 B1
6806898 Toyama et al. Oct 2004 B1
6807290 Liu et al. Oct 2004 B2
6829391 Comaniciu et al. Dec 2004 B2
6842779 Nishizawa Jan 2005 B1
6891549 Gold May 2005 B2
6897977 Bright May 2005 B1
6980909 Root et al. Dec 2005 B2
7034820 Urisaka et al. Apr 2006 B2
7035456 Lestideau Apr 2006 B2
7039222 Simon et al. May 2006 B2
7050078 Dempski May 2006 B2
7119817 Kawakami Oct 2006 B1
7167519 Comaniciu et al. Jan 2007 B2
7173651 Knowles Feb 2007 B1
7212656 Liu et al. May 2007 B2
7227567 Beck et al. Jun 2007 B1
7239312 Urisaka et al. Jul 2007 B2
7256827 Sato Aug 2007 B1
7289124 Breton Oct 2007 B2
7342587 Danzig et al. Mar 2008 B2
7411493 Smith Aug 2008 B2
7415140 Nagahashi et al. Aug 2008 B2
7468729 Levinson Dec 2008 B1
7535890 Rojas May 2009 B2
7538764 Salomie May 2009 B2
7564476 Coughlan et al. Jul 2009 B1
7612794 He et al. Nov 2009 B2
7636755 Blattner et al. Dec 2009 B2
7639251 Gu et al. Dec 2009 B2
7671318 Tener et al. Mar 2010 B1
7697787 Illsley Apr 2010 B2
7710608 Takahashi May 2010 B2
7720283 Sun et al. May 2010 B2
7775885 Van Luchene et al. Aug 2010 B2
7782506 Suzuki et al. Aug 2010 B2
7801328 Au Sep 2010 B2
7812993 Bright Oct 2010 B2
7830384 Edwards et al. Nov 2010 B1
7859551 Bulman et al. Dec 2010 B2
7885931 Seo et al. Feb 2011 B2
7925703 Dinan et al. Apr 2011 B2
7945653 Zuckerberg et al. May 2011 B2
7971156 Albertson et al. Jun 2011 B2
8026931 Sun et al. Sep 2011 B2
8086060 Gilra et al. Dec 2011 B1
8088044 Tchao et al. Jan 2012 B2
8090160 Kakadiaris et al. Jan 2012 B2
8095878 Bates et al. Jan 2012 B2
8108774 Finn et al. Jan 2012 B2
8117281 Robinson et al. Feb 2012 B2
8130219 Fleury et al. Mar 2012 B2
8131597 Hudetz Mar 2012 B2
8146005 Jones et al. Mar 2012 B2
8151191 Nicol Apr 2012 B2
8199747 Rojas et al. Jun 2012 B2
8230355 Bauermeister et al. Jul 2012 B1
8233789 Brunner Jul 2012 B2
8253789 Aizaki et al. Aug 2012 B2
8294823 Ciudad et al. Oct 2012 B2
8295557 Wang et al. Oct 2012 B2
8296456 Klappert Oct 2012 B2
8314842 Kudo Nov 2012 B2
8332475 Rosen et al. Dec 2012 B2
8335399 Gyotoku Dec 2012 B2
8384719 Reville et al. Feb 2013 B2
8385684 Sandrew et al. Feb 2013 B2
RE44054 Kim Mar 2013 E
8396708 Park et al. Mar 2013 B2
8421873 Majewicz et al. Apr 2013 B2
8425322 Gillo et al. Apr 2013 B2
8458601 Castelli et al. Jun 2013 B2
8462198 Lin et al. Jun 2013 B2
8484158 Deluca et al. Jul 2013 B2
8495503 Brown et al. Jul 2013 B2
8495505 Smith et al. Jul 2013 B2
8504926 Wolf Aug 2013 B2
8520093 Nanu et al. Aug 2013 B2
8559980 Pujol Oct 2013 B2
8564621 Branson et al. Oct 2013 B2
8564710 Nonaka et al. Oct 2013 B2
8581911 Becker et al. Nov 2013 B2
8597121 del Valle Dec 2013 B2
8601051 Wang Dec 2013 B2
8601379 Marks et al. Dec 2013 B2
8632408 Gillo et al. Jan 2014 B2
8638993 Lee et al. Jan 2014 B2
8648865 Dawson et al. Feb 2014 B2
8659548 Hildreth Feb 2014 B2
8675972 Lefevre et al. Mar 2014 B2
8683354 Khandelwal et al. Mar 2014 B2
8687039 Degrazia et al. Apr 2014 B2
8692830 Nelson et al. Apr 2014 B2
8717465 Ning May 2014 B2
8718333 Wolf et al. May 2014 B2
8724622 Rojas May 2014 B2
8743210 Lin Jun 2014 B2
8761497 Berkovich et al. Jun 2014 B2
8766983 Marks et al. Jul 2014 B2
8810513 Ptucha et al. Aug 2014 B2
8810696 Ning Aug 2014 B2
8812171 Filev et al. Aug 2014 B2
8823769 Sekine Sep 2014 B2
8824782 Ichihashi et al. Sep 2014 B2
8832201 Wall Sep 2014 B2
8832552 Arrasvuori et al. Sep 2014 B2
8839327 Amento et al. Sep 2014 B2
8874677 Rosen et al. Oct 2014 B2
8890926 Tandon et al. Nov 2014 B2
8892999 Nims et al. Nov 2014 B2
8897596 Passmore et al. Nov 2014 B1
8909679 Root et al. Dec 2014 B2
8924250 Bates et al. Dec 2014 B2
8929614 Oicherman et al. Jan 2015 B2
8934665 Kim et al. Jan 2015 B2
8958613 Kondo et al. Feb 2015 B2
8963926 Brown et al. Feb 2015 B2
8976862 Kim et al. Mar 2015 B2
8988490 Fujii Mar 2015 B2
8989786 Feghali Mar 2015 B2
8995433 Rojas Mar 2015 B2
9032314 Mital et al. May 2015 B2
9040574 Wang et al. May 2015 B2
9055416 Rosen et al. Jun 2015 B2
9086776 Ye et al. Jul 2015 B2
9100806 Rosen et al. Aug 2015 B2
9100807 Rosen et al. Aug 2015 B2
9105014 Collet et al. Aug 2015 B2
9191776 Root et al. Nov 2015 B2
9204252 Root Dec 2015 B2
9225897 Sehn Dec 2015 B1
9232189 Shaburov et al. Jan 2016 B2
9241184 Weerasinghe Jan 2016 B2
9256860 Herger et al. Feb 2016 B2
9276886 Samaranayake Mar 2016 B1
9298257 Hwang et al. Mar 2016 B2
9311534 Liang Apr 2016 B2
9314692 Konoplev et al. Apr 2016 B2
9330483 Du et al. May 2016 B2
9357174 Li et al. May 2016 B2
9361510 Yao et al. Jun 2016 B2
9364147 Wakizaka et al. Jun 2016 B2
9378576 Bouaziz et al. Jun 2016 B2
9396525 Shaburova et al. Jul 2016 B2
9402057 Kaytaz et al. Jul 2016 B2
9412007 Nanu et al. Aug 2016 B2
9412192 Mandel et al. Aug 2016 B2
9443227 Evans et al. Sep 2016 B2
9460541 Li et al. Oct 2016 B2
9489661 Evans et al. Nov 2016 B2
9489760 Li et al. Nov 2016 B2
9491134 Rosen et al. Nov 2016 B2
9503845 Vincent Nov 2016 B2
9508197 Quinn et al. Nov 2016 B2
9544257 Ogundokun et al. Jan 2017 B2
9565362 Kudo Feb 2017 B2
9576400 Van Os et al. Feb 2017 B2
9589357 Li et al. Mar 2017 B2
9592449 Barbalet et al. Mar 2017 B2
9648376 Chang et al. May 2017 B2
9697635 Quinn et al. Jul 2017 B2
9705831 Spiegel Jul 2017 B2
9706040 Kadirvel et al. Jul 2017 B2
9742713 Spiegel et al. Aug 2017 B2
9744466 Fujioka Aug 2017 B2
9746990 Anderson et al. Aug 2017 B2
9749270 Collet et al. Aug 2017 B2
9792714 Li et al. Oct 2017 B2
9839844 Dunstan et al. Dec 2017 B2
9848293 Murray et al. Dec 2017 B2
9883838 Kaleal, III et al. Feb 2018 B2
9898849 Du et al. Feb 2018 B2
9911073 Spiegel et al. Mar 2018 B1
9928874 Shaburova Mar 2018 B2
9936165 Li et al. Apr 2018 B2
9959037 Chaudhri et al. May 2018 B2
9980100 Charlton et al. May 2018 B1
9990373 Fortkort Jun 2018 B2
10039988 Lobb et al. Aug 2018 B2
10097492 Tsuda et al. Oct 2018 B2
10102423 Shaburov et al. Oct 2018 B2
10116598 Tucker et al. Oct 2018 B2
10116901 Shaburov et al. Oct 2018 B2
10155168 Blackstock et al. Dec 2018 B2
10242477 Charlton et al. Mar 2019 B1
10242503 McPhee et al. Mar 2019 B2
10255948 Shaburova et al. Apr 2019 B2
10262250 Spiegel et al. Apr 2019 B1
10271010 Gottlieb Apr 2019 B2
10283162 Shaburova et al. May 2019 B2
10284508 Allen et al. May 2019 B1
10362219 Wilson et al. Jul 2019 B2
10438631 Shaburova et al. Oct 2019 B2
10439972 Spiegel et al. Oct 2019 B1
10475225 Park et al. Nov 2019 B2
10504266 Blattner et al. Dec 2019 B2
10509466 Miller et al. Dec 2019 B1
10514876 Sehn Dec 2019 B2
10566026 Shaburova Feb 2020 B1
10573048 Ni et al. Feb 2020 B2
10586570 Shaburova et al. Mar 2020 B2
10614855 Huang Apr 2020 B2
10657701 Osman et al. May 2020 B2
10674133 Oh Jun 2020 B2
10748347 Li et al. Aug 2020 B1
10950271 Shaburova et al. Mar 2021 B1
10958608 Allen et al. Mar 2021 B1
10962809 Castañeda Mar 2021 B1
10991395 Shaburova et al. Apr 2021 B1
10996846 Robertson et al. May 2021 B2
10997787 Ge et al. May 2021 B2
11012390 Al Majid et al. May 2021 B1
11030454 Xiong et al. Jun 2021 B1
11036368 Al Majid et al. Jun 2021 B1
11062498 Voss et al. Jul 2021 B1
11087728 Canberk et al. Aug 2021 B1
11092998 Castañeda et al. Aug 2021 B1
11106342 Al Majid et al. Aug 2021 B1
11126206 Meisenholder et al. Sep 2021 B2
11143867 Rodriguez, II Oct 2021 B2
11169600 Canberk et al. Nov 2021 B1
11227626 Krishnan Gorumkonda et al. Jan 2022 B1
11443772 Shaburova et al. Sep 2022 B2
11450349 Shaburova Sep 2022 B2
11468913 Shaburova et al. Oct 2022 B1
11514947 Shaburova Nov 2022 B1
20010004417 Narutoshi et al. Jun 2001 A1
20020006431 Tramontana Jan 2002 A1
20020012454 Liu et al. Jan 2002 A1
20020064314 Comaniciu et al. May 2002 A1
20020067362 Agostino Nocera et al. Jun 2002 A1
20020163516 Hubbell Nov 2002 A1
20020169644 Greene Nov 2002 A1
20030107568 Urisaka et al. Jun 2003 A1
20030132946 Gold Jul 2003 A1
20030160791 Breton Aug 2003 A1
20030228135 Illsley Dec 2003 A1
20040037475 Avinash et al. Feb 2004 A1
20040076337 Nishida Apr 2004 A1
20040119662 Dempski Jun 2004 A1
20040130631 Suh Jul 2004 A1
20040233223 Schkolne et al. Nov 2004 A1
20050046905 Aizaki et al. Mar 2005 A1
20050073585 Ettinger et al. Apr 2005 A1
20050117798 Patton et al. Jun 2005 A1
20050128211 Berger et al. Jun 2005 A1
20050131744 Brown et al. Jun 2005 A1
20050162419 Kim et al. Jul 2005 A1
20050180612 Nagahashi et al. Aug 2005 A1
20050190980 Bright Sep 2005 A1
20050202440 Fletterick et al. Sep 2005 A1
20050206610 Cordelli Sep 2005 A1
20050220346 Akahori Oct 2005 A1
20050238217 Enomoto et al. Oct 2005 A1
20060098248 Suzuki et al. May 2006 A1
20060110004 Wu et al. May 2006 A1
20060170937 Takahashi Aug 2006 A1
20060227997 Au et al. Oct 2006 A1
20060242183 Niyogi et al. Oct 2006 A1
20060269128 Vladislav Nov 2006 A1
20060290695 Salomie Dec 2006 A1
20060294465 Ronen et al. Dec 2006 A1
20070013709 Charles et al. Jan 2007 A1
20070087352 Fletterick et al. Apr 2007 A9
20070113181 Blattner et al. May 2007 A1
20070140556 Willamowski et al. Jun 2007 A1
20070159551 Kotani Jul 2007 A1
20070168863 Blattner et al. Jul 2007 A1
20070176921 Iwasaki et al. Aug 2007 A1
20070216675 Sun et al. Sep 2007 A1
20070223830 Ono Sep 2007 A1
20070258656 Aarabi Nov 2007 A1
20070268312 Marks et al. Nov 2007 A1
20080063285 Porikli et al. Mar 2008 A1
20080077953 Fernandez et al. Mar 2008 A1
20080158222 Li et al. Jul 2008 A1
20080184153 Matsumura et al. Jul 2008 A1
20080187175 Kim et al. Aug 2008 A1
20080204992 Swenson et al. Aug 2008 A1
20080212894 Demirli et al. Sep 2008 A1
20090016617 Bregman-amitai et al. Jan 2009 A1
20090027732 Imai Jan 2009 A1
20090055484 Vuong et al. Feb 2009 A1
20090070688 Gyorfi et al. Mar 2009 A1
20090099925 Mehta et al. Apr 2009 A1
20090106672 Burstrom Apr 2009 A1
20090158170 Narayanan et al. Jun 2009 A1
20090177976 Bokor et al. Jul 2009 A1
20090202114 Morin et al. Aug 2009 A1
20090265604 Howard et al. Oct 2009 A1
20090290791 Holub et al. Nov 2009 A1
20090300525 Jolliff et al. Dec 2009 A1
20090303984 Clark et al. Dec 2009 A1
20090309878 Otani et al. Dec 2009 A1
20090310828 Kakadiaris et al. Dec 2009 A1
20100011422 Mason et al. Jan 2010 A1
20100023885 Reville et al. Jan 2010 A1
20100074475 Chouno Mar 2010 A1
20100115426 Liu et al. May 2010 A1
20100162149 Sheleheda et al. Jun 2010 A1
20100177981 Wang et al. Jul 2010 A1
20100185963 Slik et al. Jul 2010 A1
20100188497 Aizaki et al. Jul 2010 A1
20100202697 Matsuzaka et al. Aug 2010 A1
20100203968 Gill et al. Aug 2010 A1
20100227682 Reville et al. Sep 2010 A1
20100231590 Erceis et al. Sep 2010 A1
20100316281 Lefevre Dec 2010 A1
20110018875 Arahari et al. Jan 2011 A1
20110038536 Gong Feb 2011 A1
20110093780 Dunn Apr 2011 A1
20110115798 Nayar et al. May 2011 A1
20110148864 Lee et al. Jun 2011 A1
20110182357 Kim et al. Jul 2011 A1
20110202598 Evans et al. Aug 2011 A1
20110239136 Goldman et al. Sep 2011 A1
20110261050 Smolic et al. Oct 2011 A1
20110273620 Berkovich et al. Nov 2011 A1
20110299776 Lee et al. Dec 2011 A1
20120050323 Baron, Jr. et al. Mar 2012 A1
20120106806 Folta et al. May 2012 A1
20120113106 Choi et al. May 2012 A1
20120124458 Cruzada May 2012 A1
20120130717 Xu et al. May 2012 A1
20120136668 Kuroda May 2012 A1
20120144325 Mital et al. Jun 2012 A1
20120167146 Incorvia Jun 2012 A1
20120209924 Evans et al. Aug 2012 A1
20120242874 Noudo Sep 2012 A1
20120288187 Ichihashi et al. Nov 2012 A1
20120306853 Wright et al. Dec 2012 A1
20120327172 El-Saban et al. Dec 2012 A1
20130004096 Goh et al. Jan 2013 A1
20130103760 Golding et al. Apr 2013 A1
20130114867 Kondo et al. May 2013 A1
20130155169 Hoover et al. Jun 2013 A1
20130190577 Brunner et al. Jul 2013 A1
20130201105 Ptucha et al. Aug 2013 A1
20130201187 Tong et al. Aug 2013 A1
20130201328 Chung Aug 2013 A1
20130208129 Stenman Aug 2013 A1
20130216094 Delean Aug 2013 A1
20130222432 Arrasvuori Aug 2013 A1
20130229409 Song et al. Sep 2013 A1
20130235086 Otake Sep 2013 A1
20130249948 Reitan Sep 2013 A1
20130257877 Davis Oct 2013 A1
20130278600 Christensen et al. Oct 2013 A1
20130287291 Cho Oct 2013 A1
20130342629 North et al. Dec 2013 A1
20140043329 Wang et al. Feb 2014 A1
20140055554 Du et al. Feb 2014 A1
20140125678 Wang et al. May 2014 A1
20140129343 Finster et al. May 2014 A1
20140179347 Murray et al. Jun 2014 A1
20140198177 Castellani et al. Jul 2014 A1
20140228668 Wakizaka et al. Aug 2014 A1
20150002517 Lee Jan 2015 A1
20150055829 Liang Feb 2015 A1
20150097834 Ma et al. Apr 2015 A1
20150116350 Lin et al. Apr 2015 A1
20150116448 Gottlieb Apr 2015 A1
20150131924 He et al. May 2015 A1
20150145992 Traff May 2015 A1
20150163416 Nevatie Jun 2015 A1
20150195491 Shaburov et al. Jul 2015 A1
20150206349 Rosenthal et al. Jul 2015 A1
20150213604 Li et al. Jul 2015 A1
20150220252 Mital et al. Aug 2015 A1
20150221069 Shaburova et al. Aug 2015 A1
20150221118 Shaburova Aug 2015 A1
20150221136 Shaburova et al. Aug 2015 A1
20150221338 Shaburova et al. Aug 2015 A1
20150222821 Shaburova Aug 2015 A1
20160012627 Kishikawa et al. Jan 2016 A1
20160134840 Mcculloch May 2016 A1
20160234149 Tsuda et al. Aug 2016 A1
20160253550 Zhang et al. Sep 2016 A1
20160322079 Shaburova et al. Nov 2016 A1
20170019633 Shaburov et al. Jan 2017 A1
20170080346 Abbas Mar 2017 A1
20170087473 Siegel et al. Mar 2017 A1
20170094246 Oh Mar 2017 A1
20170098122 El Kaliouby et al. Apr 2017 A1
20170113140 Blackstock et al. Apr 2017 A1
20170118145 Aittoniemi et al. Apr 2017 A1
20170199855 Fishbeck Jul 2017 A1
20170235848 Van Dusen et al. Aug 2017 A1
20170310934 Du et al. Oct 2017 A1
20170312634 Ledoux et al. Nov 2017 A1
20180047200 O'hara et al. Feb 2018 A1
20180113587 Allen et al. Apr 2018 A1
20180115503 Baldwin et al. Apr 2018 A1
20180315076 Andreou Nov 2018 A1
20180315133 Brody et al. Nov 2018 A1
20180315134 Amitay et al. Nov 2018 A1
20180364810 Parshionikar Dec 2018 A1
20190001223 Blackstock et al. Jan 2019 A1
20190057616 Cohen et al. Feb 2019 A1
20190188920 Mcphee et al. Jun 2019 A1
20200160886 Shaburova May 2020 A1
20210011612 Dancie et al. Jan 2021 A1
20210074016 Li et al. Mar 2021 A1
20210166732 Shaburova et al. Jun 2021 A1
20210241529 Cowburn et al. Aug 2021 A1
20210303075 Cowburn et al. Sep 2021 A1
20210303077 Anvaripour et al. Sep 2021 A1
20210303140 Mourkogiannis Sep 2021 A1
20210382564 Blachly et al. Dec 2021 A1
20210397000 Rodriguez, II Dec 2021 A1
Foreign Referenced Citations (39)
Number Date Country
2887596 Jul 2015 CA
1411277 Apr 2003 CN
1811793 Aug 2006 CN
101167087 Apr 2008 CN
101499128 Aug 2009 CN
101753851 Jun 2010 CN
102665062 Sep 2012 CN
103620646 Mar 2014 CN
103650002 Mar 2014 CN
103999096 Aug 2014 CN
104378553 Feb 2015 CN
107637072 Jan 2018 CN
109863532 Jun 2019 CN
110168478 Aug 2019 CN
2184092 May 2010 EP
2001230801 Aug 2001 JP
5497931 Mar 2014 JP
20040058671 Jul 2004 KR
100853122 Aug 2008 KR
20080096252 Oct 2008 KR
101445263 Sep 2014 KR
102031135 Oct 2019 KR
102173786 Oct 2020 KR
WO-2003094072 Nov 2003 WO
WO-2004095308 Nov 2004 WO
WO-2006107182 Oct 2006 WO
WO-2007134402 Nov 2007 WO
WO-2012139276 Oct 2012 WO
WO-2013027893 Feb 2013 WO
WO-2013152454 Oct 2013 WO
WO-2013166588 Nov 2013 WO
WO-2014031899 Feb 2014 WO
WO-2014194439 Dec 2014 WO
WO-2016090605 Jun 2016 WO
WO-2016149576 Sep 2016 WO
WO-2018081013 May 2018 WO
WO-2018102562 Jun 2018 WO
WO-2018129531 Jul 2018 WO
WO-2019089613 May 2019 WO
Non-Patent Literature Citations (207)
Entry
Cao et al., “Facewarehouse: A 3d facial expression database for visual computing.” IEEE Transactions on Visualization and Computer Graphics 20, No. 3 (2013): 413-425. (Year: 2013).
Canton, “How to make someone look fatter/thinner in photoshop” (video), https://www.youtube.com/watch?v=I34y4qzgIUE, Mar. 2, 2012 . . . In particular, the video shows how to deform the face to make it fatter without stretching a mouth portion of the face. (Year: 2012).
Mansur, “Reshape facial structure with Photoshop liquify” (https://photoshop-tutorials.wonderhowto.com/how-to/reshape-facial-structure-with-photoshop-liquify-214750/), Jul. 20, 2008. “This Photoshop tutorial shows you how you can actually reshape or define someone's facial structure.” (Year: 2008).
Pinshy, “Facial Structure Reshaping in Photoshop (FaceLift)” (video), https://www.youtube.com/watch?v=w8-9xi7WkHY, 2008. The tutorial video referred in the above cited reference shows how to deform the face to make the face thinner with compressing the mouth portion of the face. (Year: 2008).
Fu et al., “Age Synthesis and Estimation via Faces: A Survey,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 11, pp. 1955-1976, Nov. 2010 (Year: 2010).
Cosatto et al., “Photo-realistic talking-heads from image samples,” in IEEE Transactions on Multimedia, vol. 2, No. 3, pp. 152-163, Sep. 2000 (Year: 2000).
“U.S. Appl. No. 14/661,367, Non Final Office Action dated May 5, 2015”, 30 pgs.
“U.S. Appl. No. 14/661,367, Notice of Allowance dated Aug. 31, 2015”, 5 pgs.
“U.S. Appl. No. 14/661,367. Response filed Aug. 5, 2015 to Non Final Office Action dated May 5, 2015”, 17 pgs.
“U.S. Appl. No. 14/987,514, Final Office Action dated Sep. 26, 2017”, 25 pgs.
“U.S. Appl. No. 14/987,514, Non Final Office Action dated Jan. 18, 2017”, 35 pgs.
“U.S. Appl. No. 14/987,514, Notice of Allowance dated Jun. 29, 2018”, 9 pgs.
“U.S. Appl. No. 14/987,514, Response filed Feb. 26, 2018 to Final Office Action dated Sep. 26, 2017”, 15 pgs.
“U.S. Appl. No. 14/987,514, Response filed Jul. 18, 2017 to Non Final Office Action dated Jan. 18, 2017”, 15 pgs.
“U.S. Appl. No. 16/141,588, Advisory Action dated Jan. 27, 2021”, 3 pgs.
“U.S. Appl. No. 16/141,588, Advisory Action dated Jul. 20, 2020”, 3 pgs.
“U.S. Appl. No. 16/141,588, Corrected Notice of Allowability dated Oct. 26, 2021”, 2 pgs.
“U.S. Appl. No. 16/141,588, Corrected Notice of Allowability dated Dec. 1, 2021”, 2 pgs.
“U.S. Appl. No. 16/141,588, Ex Parte Quayle Action dated Jun. 25, 2021”, 4 pgs.
“U.S. Appl. No. 16/141,588, Examiner Interview Summary dated Apr. 22, 2021”, 2 pgs.
“U.S. Appl. No. 16/141,588, Final Office Action dated Apr. 7, 2020”, 34 pgs.
“U.S. Appl. No. 16/141,588, Final Office Action dated Nov. 16, 2020”, 35 pgs.
“U.S. Appl. No. 16/141,588, Non Final Office Action dated Mar. 10, 2021”, 37 pgs.
“U.S. Appl. No. 16/141,588, Non Final Office Action dated Aug. 27, 2020”, 34 pgs.
“U.S. Appl. No. 16/141,588, Non Final Office Action dated Dec. 9, 2019”, 25 pgs.
“U.S. Appl. No. 16/141,588, Notice of Allowance dated Oct. 20, 2021”, 5 pgs.
“U.S. Appl. No. 16/141,588, Notice of Allowance dated Nov. 16, 2021”, 5 pgs.
“U.S. Appl. No. 16/141,588, Response filed Jan. 18, 2021 to Final Office Action dated Nov. 16, 2020”, 10 pgs.
“U.S. Appl. No. 16/141,588, Response filed Mar. 6, 2020 to Non Final Office Action dated Dec. 9, 2019”, 11 pgs.
“U.S. Appl. No. 16/141,588, Response filed Jun. 9, 2021 to Non Final Office Action dated Mar. 10, 2021”, 10 pgs.
“U.S. Appl. No. 16/141,588, Response filed Jul. 7, 2020 to Final Office Action dated Apr. 7, 2020”, 12 pgs.
“U.S. Appl. No. 16/141,588, Response filed Sep. 27, 2021 to Ex Parte Quayle Action dated Jun. 25, 2021”, 8 pgs.
“U.S. Appl. No. 16/141,588, Response filed Oct. 13, 2020 to Non Final Office Action dated Aug. 27, 2020”, 12 pgs.
“U.S. Appl. No. 16/548,279, Notice of Allowance dated Jun. 3, 2022”, 31 pgs.
“U.S. Appl. No. 16/548,279, Response filed May 16, 2022 to Non Final Office Action dated Feb. 17, 2022”, 15 pgs.
“U.S. Appl. No. 16/548,279, Supplemental Notice of Allowability dated Jun. 15, 2022”, 2 pgs.
“U.S. Appl. No. 16/732,858, Notice of Allowance dated May 31, 2022”, 12 pgs.
“U.S. Appl. No. 16/732,858, Notice of Allowance dated Sep. 19, 2022”, 5 pgs.
“U.S. Appl. No. 16/732,858, Response filed May 9, 2022 to Non Final Office Action dated Feb. 9, 2022”, 11 pgs.
“U.S. Appl. No. 16/732,858, Supplemental Notice of Allowability dated Jun. 17, 2022”, 2 pgs.
“U.S. Appl. No. 17/248,812, Notice of Allowance dated Jul. 29, 2022”, 5 pgs.
“U.S. Appl. No. 14/987,514, Preliminary Amendment filed Jan. 4, 2016”, 3 pgs.
“Chinese Application Serial No. 201680028853.3, Notice of Reexamination dated Nov. 25, 2021”, w/ English translation, 36 pgs.
“Chinese Application Serial No. 201680028853.3, Office Action dated Apr. 2, 2021”, w/ English translation, 10 pgs.
“Chinese Application Serial No. 201680028853.3, Office Action dated May 6, 2020”, w/ English Translation, 22 pgs.
“Chinese Application Serial No. 201680028853.3, Office Action dated Aug. 19, 2019”, w/ English Translation, 20 pgs.
“Chinese Application Serial No. 201680028853.3, Office Action dated Dec. 1, 2020”, w/ English Translation, 20 pgs.
“Chinese Application Serial No. 201680028853.3, Response filed Jun. 23, 2020 to Office Action dated May 6, 2020”, w/ English Claims, 17 pgs.
“Chinese Application Serial No. 201680028853.3, Response filed Dec. 6, 2019 to Office Action dated Aug. 19, 2019”, w/ English Claims, 16 pgs.
“Chinese Application Serial No. 201680028853.3,Response filed Feb. 4, 2021 to Office Action dated Dec. 1, 2020”, w/ English Claims, 17 pgs.
“European Application Serial No. 16716975.4, Communication Pursuant to Article 94(3) EPC dated Mar. 31, 2020”, 8 pgs.
“European Application Serial No. 16716975.4, Response filed May 4, 2018 to Communication pursuant to Rules 161(1) and 162 EPC dated Oct. 25, 2017”, w/ English Claims, 116 pgs.
“European Application Serial No. 16716975.4, Response Filed Jul. 31, 2020 to Communication Pursuant to Article 94(3) EPC dated Mar. 31, 2020”, 64 pgs.
“European Application Serial No. 16716975.4, Summons to Attend Oral Proceedings dated Apr. 16, 2021”, 11 pgs.
“European Application Serial No. 16716975.4, Summons to Attend Oral Proceedings dated Sep. 15, 2021”, 4 pgs.
“European Application Serial No. 16716975.4, Written Submissions filed Aug. 10, 2021 to Summons to Attend Oral Proceedings dated Apr. 16, 2021”, 62 pgs.
“International Application Serial No. PCT/US2016/023046, International Preliminary Report on Patentability dated Sep. 28, 2017”, 8 pgs.
“International Application Serial No. PCT/US2016/023046, International Search Report dated Jun. 29, 2016”, 4 pgs.
“International Application Serial No. PCT/US2016/023046, Written Opinion dated Jun. 29, 2016”, 6 pgs.
“Korean Application Serial No. 10-2017-7029496, Notice of Preliminary Rejection dated Jan. 29, 2019”, w/English Translation, 11 pgs.
“Korean Application Serial No. 10-2017-7029496, Response filed Mar. 28, 2019 to Notice of Preliminary Rejection dated Jan. 29, 2019”, w/ English Claims, 28 pgs.
“Korean Application Serial No. 10-2019-7029221, Notice of Preliminary Rejection dated Jan. 6, 2020”, w/ English Translation, 13 pgs.
“Korean Application Serial No. 10-2019-7029221, Response filed Mar. 6, 2020 to Notice of Preliminary Rejection dated Jan. 6, 2020”, w/ English Claims, 19 pgs.
“Korean Application Serial No. 10-2020-7031217, Notice of Preliminary Rejection dated Jan. 21, 2021”, w/ English Translation, 9 pgs.
“Korean Application Serial No. 10-2020-7031217, Response filed May 6, 2021 to Notice of Preliminary Rejection dated Jan. 21, 2021”, w/ English Claims, 20 pgs.
Chen, Jingying, et al., “Robust Facial Feature Tracking Under Various Illuminations”, 2006 International Conference on Image Processing, doi: 10.1109/ICIP.2006.312997., (2006), 2829-2832.
Kuhl, Annika, et al., “Automatic Fitting of a Deformable Face Mask Using a Single Image”, Computer Vision/Computer Graphics Collaboration Techniques, Springer, Berlin, (May 4, 2009), 69-81.
Pham, Hai, et al., “Hybrid On-line 3D Face and Facial Actions Tracking in RGBD Video Sequences”, International Conference On Pattern Recognition, IEEE Computer Society, US, (Aug. 24, 2014), 4194-4199.
Viola, Paul, et al., “Rapid Object Detection using a Boosted Cascade of Simple Features”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2001), 511-518.
“U.S. Appl. No. 14/114,124, Response filed Oct. 5, 2016 to Final Office Action dated May 5, 2016”, 14 pgs.
“U.S. Appl. No. 14/314,312, Advisory Action dated May 10, 2019”, 3 pgs.
“U.S. Appl. No. 14/314,312, Appeal Brief filed Oct. 3, 2019”, 14 pgs.
“U.S. Appl. No. 14/314,312, Final Office Action dated Mar. 22, 2019”, 28 pgs.
“U.S. Appl. No. 14/314,312, Final Office Action dated Apr. 12, 2017”, 34 pgs.
“U.S. Appl. No. 14/314,312, Final Office Action dated May 5, 2016”, 28 pgs.
“U.S. Appl. No. 14/314,312, Final Office Action dated May 10, 2018”, 32 pgs.
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Jul. 5, 2019”, 25 pgs.
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Aug. 30, 2017”, 32 pgs.
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Oct. 17, 2016”, 33 pgs.
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Nov. 5, 2015”, 26 pgs.
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Nov. 27, 2018”, 29 pgs.
“U.S. Appl. No. 14/314,312, Notice of Allowability dated Jan. 7, 2020”, 3 pgs.
“U.S. Appl. No. 14/314,312, Notice of Allowance dated Oct. 25, 2019”, 9 pgs.
“U.S. Appl. No. 14/314,312, Response filed Jan. 28, 2019 to Non Final Office Action dated Nov. 27, 2018”, 10 pgs.
“U.S. Appl. No. 14/314,312, Response filed Feb. 28, 2018 to Non Final Office Action dated Aug. 30, 2017”, 13 pgs.
“U.S. Appl. No. 14/314,312, Response filed Mar. 17, 2017 to Non Final Office Action dated Oct. 17, 2016”, 12 pgs.
“U.S. Appl. No. 14/314,312, Response filed Apr. 5, 2016 to Non Final Office Action dated Nov. 5, 2015”, 13 pgs.
“U.S. Appl. No. 14/314,312, Response filed Aug. 14, 2017 to Final Office Action dated Apr. 12, 2017”, 16 pgs.
“U.S. Appl. No. 14/314,312, Response filed Sep. 6, 2018 to Final Office Action dated May 10, 2018”, 12 pgs.
“U.S. Appl. No. 14/314,312, Response filed Oct. 5, 2016 to Final Office Action dated May 5, 2016”, 12 pgs.
“U.S. Appl. No. 14/314,312, Response filed May 3, 2019 to Final Office Action dated Mar. 22, 2019”, 11 pgs.
“U.S. Appl. No. 14/314,324, Advisory Action dated Sep. 21, 2017”, 4 pgs.
“U.S. Appl. No. 14/314,324, Final Office Action dated May 3, 2017”, 33 pgs.
“U.S. Appl. No. 14/314,324, Final Office Action dated May 5, 2016”, 24 pgs.
“U.S. Appl. No. 14/314,324, Non Final Office Action dated Oct. 14, 2016”, 26 pgs.
“U.S. Appl. No. 14/314,324, Non Final Office Action dated Nov. 5, 2015”, 23 pgs.
“U.S. Appl. No. 14/314,324, Notice of Allowance dated Nov. 8, 2017”, 7 pgs.
“U.S. Appl. No. 14/314,324, Response filed Feb. 14, 2017 to Non Final Office Action dated Oct. 14, 2016”, 19 pgs.
“U.S. Appl. No. 14/314,324, Response filed Apr. 5, 2016 to Non Final Office Action dated Nov. 5, 2015”, 15 pgs.
“U.S. Appl. No. 14/314,324, Response filed Sep. 1, 2017 to Final Office Action dated May 3, 2017”, 10 pgs.
“U.S. Appl. No. 14/314,324, Response Filed Oct. 5, 2016 to Final Office Action dated May 5, 2016”, 14 pgs.
“U.S. Appl. No. 14/314,324, Response filed Nov. 3, 2017 to Advisory Action dated Sep. 21, 2017”, 11 pgs.
“U.S. Appl. No. 14/314,334, Appeal Brief filed Apr. 15, 2019”, 19 pgs.
“U.S. Appl. No. 14/314,334, Examiner Interview Summary dated Apr. 28, 2017”, 3 pgs.
“U.S. Appl. No. 14/314,334, Examiner Interview Summary dated Nov. 26, 2018”, 3 pgs.
“U.S. Appl. No. 14/314,334, Final Office Action dated Feb. 15, 2019”, 40 pgs.
“U.S. Appl. No. 14/314,334, Final Office Action dated May 16, 2016”, 43 pgs.
“U.S. Appl. No. 14/314,334, Final Office Action dated May 31, 2018”, 38 pgs.
“U.S. Appl. No. 14/314,334, Final Office Action dated Jul. 12, 2017”, 40 pgs.
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Jan. 22, 2018”, 35 pgs.
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Oct. 26, 2018”, 39 pgs.
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Nov. 13, 2015”, 39 pgs.
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Dec. 1, 2016”, 45 pgs.
“U.S. Appl. No. 14/314,334, Notice of Allowance dated Jul. 1, 2019”, 9 pgs.
“U.S. Appl. No. 14/314,334, Notice of Allowance dated Sep. 19, 2017”, 5 pgs.
“U.S. Appl. No. 14/314,334, Response filed Apr. 13, 2016 to Non Final Office Action dated Nov. 13, 2015”, 20 pgs.
“U.S. Appl. No. 14/314,334, Response Filed Apr. 23, 2018 to Non Final Office Action dated Jan. 22, 2018”, 14 pgs.
“U.S. Appl. No. 14/314,334, Response filed May 20, 2017 to Non Final Office Action dated Dec. 1, 2016”, 16 pgs.
“U.S. Appl. No. 14/314,334, Response filed Aug. 30, 2018 to Final Office Action dated May 31, 2018”, 13 pgs.
“U.S. Appl. No. 14/314,334, Response filed Sep. 1, 2017 to Final Office Action dated Jul. 12, 2017”, 12 pgs.
“U.S. Appl. No. 14/314,334, Response filed Oct. 17, 2016 to Final Office Action dated May 16, 2016”, 16 pgs.
“U.S. Appl. No. 14/314,343, Final Office Action dated May 6, 2016”, 19 pgs.
“U.S. Appl. No. 14/314,343, Final Office Action dated Aug. 15, 2017”, 38 pgs.
“U.S. Appl. No. 14/314,343, Final Office Action dated Sep. 6, 2018”, 43 pgs.
“U.S. Appl. No. 14/314,343, Non Final Office Action dated Apr. 19, 2018”, 40 pgs.
“U.S. Appl. No. 14/314,343, Non Final Office Action dated Nov. 4, 2015”, 14 pgs.
“U.S. Appl. No. 14/314,343, Non Final Office Action dated Nov. 17, 2016”, 31 pgs.
“U.S. Appl. No. 14/314,343, Notice of Allowance dated Dec. 17, 2018”, 5 pgs.
“U.S. Appl. No. 14/314,343, Response filed Feb. 15, 2018 to Final Office Action dated Aug. 15, 2017”, 11 pgs.
“U.S. Appl. No. 14/314,343, Response filed Apr. 4, 2016 to Non Final Office Action dated Nov. 4, 2015”, 10 pgs.
“U.S. Appl. No. 14/314,343, Response filed May 11, 2017 to Non Final Office Action dated Nov. 17, 2016”, 13 pgs.
“U.S. Appl. No. 14/314,343, Response filed Jul. 19, 2018 to Non Final Office Action dated Apr. 19, 2018”, 15 pgs.
“U.S. Appl. No. 14/314,343, Response filed Oct. 6, 2016 to Final Office Action dated May 6, 2016”, 13 pgs.
“U.S. Appl. No. 14/314,343, Response Filed Oct. 11, 2018 to Final Office Action dated Sep. 6, 2018”, 11 pgs.
“U.S. Appl. No. 14/325,477, Non Final Office Action dated Oct. 9, 2015”, 17 pgs.
“U.S. Appl. No. 14/325,477, Notice of Allowance dated Mar. 17, 2016”, 5 pgs.
“U.S. Appl. No. 14/325,477, Response filed Feb. 9, 2016 to Non Final Office Action dated Oct. 9, 2015”, 13 pgs.
“U.S. Appl. No. 15/208,973, Final Office Action dated May 10, 2018”, 13 pgs.
“U.S. Appl. No. 15/208,973, Non Final Office Action dated Sep. 19, 2017”, 17 pgs.
“U.S. Appl. No. 15/208,973, Notice of Allowability dated Feb. 21, 2019”, 3 pgs.
“U.S. Appl. No. 15/208,973, Notice of Allowance dated Nov. 20, 2018”, 14 pgs.
“U.S. Appl. No. 15/208,973, Preliminary Amendment filed Jan. 17, 2017”, 9 pgs.
“U.S. Appl. No. 15/208,973, Response filed Sep. 5, 2018 to Final Office Action dated May 10, 2018”, 10 pgs.
“U.S. Appl. No. 15/921,282, Notice of Allowance dated Oct. 2, 2019”, 9 pgs.
“U.S. Appl. No. 16/277,750, Non Final Office Action dated Aug. 5, 2020”, 8 pgs.
“U.S. Appl. No. 16/277,750, Notice of Allowance dated Nov. 30, 2020”, 5 pgs.
“U.S. Appl. No. 16/277,750, PTO Response to Rule 312 Communication dated Mar. 30, 2021”, 2 pgs.
“U.S. Appl. No. 16/277,750, Response filed Nov. 5, 2020 to Non Final Office Action dated Aug. 5, 2020”, 27 pgs.
“U.S. Appl. No. 16/277,750, Supplemental Notice of Allowability dated Dec. 28, 2020”, 2 pgs.
“U.S. Appl. No. 16/298,721, Advisory Action dated May 12, 2020”, 3 pgs.
“U.S. Appl. No. 16/298,721, Examiner Interview Summary dated Oct. 20, 2020”, 3 pgs.
“U.S. Appl. No. 16/298,721, Final Office Action dated Mar. 6, 2020”, 54 pgs.
“U.S. Appl. No. 16/298,721, Non Final Office Action dated Jul. 24, 2020”, 80 pgs.
“U.S. Appl. No. 16/298,721, Non Final Office Action dated Oct. 3, 2019”, 40 pgs.
“U.S. Appl. No. 16/298,721, Notice of Allowance dated Nov. 10, 2020”, 5 pgs.
“U.S. Appl. No. 16/298,721, PTO Response to Rule 312 Communication dated Feb. 4, 2021”, 2 pgs.
“U.S. Appl. No. 16/298,721, Response filed Jan. 3, 2020 to Non Final Office Action dated Oct. 3, 2019”, 10 pgs.
“U.S. Appl. No. 16/298,721, Response filed Apr. 23, 2020 to Final Office Action dated Mar. 6, 2020”, 11 pgs.
“U.S. Appl. No. 16/298,721, Response filed Oct. 22, 2020 to Non Final Office Action dated Jul. 24, 2020”, 13 pgs.
“U.S. Appl. No. 16/548,279, Advisory Action dated Jan. 13, 2022”, 4 pgs.
“U.S. Appl. No. 16/548,279, Advisory Action dated Jul. 23, 2021”, 3 pgs.
“U.S. Appl. No. 16/548,279, Final Office Action dated May 21, 2021”, 24 pgs.
“U.S. Appl. No. 16/548,279, Final Office Action dated Nov. 12, 2021”, 31 pgs.
“U.S. Appl. No. 16/548,279, Non Final Office Action dated Feb. 17, 2022”, 37 pgs.
“U.S. Appl. No. 16/548,279, Non Final Office Action dated Mar. 1, 2021”, 26 pgs.
“U.S. Appl. No. 16/548,279, Non Final Office Action dated Aug. 4, 2021”, 23 pgs.
“U.S. Appl. No. 16/548,279, Response filed Jan. 5, 2022 to Final Office Action dated Nov. 12, 2021”, 12 pgs.
“U.S. Appl. No. 16/548,279, Response filed May 5, 2021 to Non Final Office Action dated Mar. 1, 2021”, 11 pgs.
“U.S. Appl. No. 16/548,279, Response filed Jul. 16, 2021 to Final Office Action dated May 21, 2021”, 10 pgs.
“U.S. Appl. No. 16/548,279, Response filed Nov. 1, 2021 to Non Final Office Action dated Aug. 4, 2021”, 11 pgs.
“U.S. Appl. No. 16/732,858, Advisory Action dated Jan. 12, 2022”, 3 pgs.
“U.S. Appl. No. 16/732,858, Final Office Action dated Nov. 4, 2021”, 19 pgs.
“U.S. Appl. No. 16/732,858, Non Final Office Action dated Feb. 9, 2022”, 26 pgs.
“U.S. Appl. No. 16/732,858, Non Final Office Action dated Jul. 19, 2021”, 29 pgs.
“U.S. Appl. No. 16/732,858, Response filed Jan. 3, 2022 to Final Office Action dated Nov. 4, 2021”, 10 pgs.
“U.S. Appl. No. 16/732,858, Response filed Oct. 19, 2021 to Non Final Office Action dated Jul. 19, 2021”, 12 pgs.
“U.S. Appl. No. 16/749,708, Final Office Action dated Nov. 15, 2021”, 35 pgs.
“U.S. Appl. No. 16/749,708, Non Final Office Action dated Jul. 30, 2021”, 29 pgs.
“U.S. Appl. No. 16/749,708, Notice of Allowance dated Jan. 21, 2022”, 13 pgs.
“U.S. Appl. No. 16/749,708, Notice of Allowance dated May 13, 2022”, 5 pgs.
“U.S. Appl. No. 16/749,708, Response filed Jan. 7, 2022 to Final Office Action dated Nov. 15, 2021”, 11 pgs.
“U.S. Appl. No. 16/749,708, Response filed Oct. 28, 2021 to Non Final Office Action dated Jul. 30, 2021”, 12 pgs.
“U.S. Appl. No. 17/248,812, Non Final Office Action dated Nov. 22, 2021”, 39 pgs.
“U.S. Appl. No. 17/248,812, Notice of Allowance dated Mar. 23, 2022”, 5 pgs.
“U.S. Appl. No. 17/248,812, Response filed Feb. 18, 2022 to Non Final Office Action dated Nov. 22, 2021”, 12 pgs.
“Bilinear interpolation”, Wikipedia, [Online] Retrieved from the Internet: <URL: https://web.archive.org/web/20110921104425/http://en.wikipedia.org/wiki/Bilinear_interpolation>, (Jan. 8, 2014), 3 pgs.
“Facial Action Coding System”, Wikipedia, [Online]. Retrieved from the Internet: <URL: https://en.wikipedia.org/w/index.php?title=Facial_Action_Coding_System&oldid=591978414>, (Jan. 23, 2014), 6 pgs.
“Imatest”, [Online] Retrieved from the Internet on Jul. 10, 2015: <URL: https://web.archive.org/web/20150710000557/http://www.imatest.com/>, 3 pgs.
“KR 10-0853122 B1 machine translation”, IP.com, (2008), 29 pgs.
Ahlberg, Jorgen, “Candide-3: An Updated Parameterised Face”, Image Coding Group, Dept. of Electrical Engineering, Linkoping University, SE, (Jan. 2001), 16 pgs.
Baldwin, Bernard, et al., “Resolution-Appropriate Shape Representation”, Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), (1998), 460-465.
Baxes, Gregory A., et al., “Digital Image Processing: Principles and Applications, Chapter 4”, New York: Wiley, (1994), 88-91.
Chen, et al., “Manipulating, Deforming and Animating Sampled Object Representations”, Computer Graphics Forum vol. 26, (2007), 824-852 pgs.
Dornaika, F, et al., “On Appearance Based Face and Facial Action Tracking”, IEEE Trans. Circuits Syst. Video Technol. 16(9), (Sep. 2006), 1107-1124.
Forlenza, Lidia, et al., “Real Time Corner Detection for Miniaturized Electro-Optical Sensors Onboard Small Unmanned Aerial Systems”, Sensors, 12(1), (2012), 863-877.
Kaufmann, Peter, et al., “Finite Element Image Warping”, Computer Graphics Forum, vol. 32, No. 2-1, Oxford, UK: Blackwell Publishing Ltd., (2013), 31-39.
Lefevre, Stephanie, et al. “Structure and Appearance Features for Robust 3D Facial Actions Tracking”, 2009 IEEE International Conference on Multimedia and Expo, (2009), 298-301.
Leyden, John, “This SMS will self-destruct in 40 seconds”, [Online] Retrieved from the Internet: <URL: http://www.theregister.co.uk/2005/12/12/stealthtext/>, (Dec. 12, 2005), 1 pg.
Li, Yongqiang, et al., “Simultaneous Facial Feature Tracking and Facial Expression Recognition”, IEEE Transactions on Image Processing, 22(7), (Jul. 2013), 2559-2573.
Milborrow, S, et al., “Locating facial features with an extended active shape model”, European Conference on Computer Vision, Springer, Berlin, Heidelberg, [Online] Retrieved from the Internet: <URL: http://www.milbo.org/stasm-files/locating-facial-features-with-an-extended-asm.pdf>, (2008), 11 pgs.
Neoh, Hong Shan, et al., “Adaptive Edge Detection for Real-Time Video Processing using FPGAs”, Global Signal Processing, vol. 7. No. 3, (2004), 7 pgs.
Ohya, Jun, et al., “Virtual Metamorphosis”, IEEE MultiMedia, 6(2), (1999), 29-39.
Phadke, Gargi, et al., “Illumination Invariant Mean-Shift Tracking”, 2013 IEEE Workshop on Applications of Computer Vision (WACV), doi: 10.1109/WACV.2013.6475047, (2013), 407-412.
Salmi, Jussi, et al., “Hierarchical grid transformation for image warping in the analysis of two-dimensional electrophoresis gels”, Proteomics, 2(11), (2002), 1504-1515.
Su, Zihua, “Statistical Shape Modelling: Automatic Shape Model Building”, Submitted to the University College London for the Degree of Doctor of Philosophy, (2011), 238 pgs.
Tchoulack, Stephane, et al., “A Video Stream Processor for Real-time Detection and Correction of Specular Reflections in Endoscopic Images”, 2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference, (2008), 49-52.
Related Publications (1)
Number Date Country
20220392491 A1 Dec 2022 US
Provisional Applications (1)
Number Date Country
61936016 Feb 2014 US
Continuations (2)
Number Date Country
Parent 16749708 Jan 2020 US
Child 17819870 US
Parent 14314312 Jun 2014 US
Child 16749708 US