The disclosed embodiments relate generally to the field of real-time video processing. In particular, the disclosed embodiments relate to a computerized system and a method for real-time video processing that involves changing features of an object in a video.
At present, some programs can process still images. For example, U.S. Patent Application Publication No. US2007268312, incorporated herein by reference, discloses a method of replacing face elements with user-selected components in real-time video. However, it has not been possible to process real-time video in such a way that an object shown in the video is modified in real time in a natural-looking manner with certain effects. In the case of a human face, such effects can include making the face look younger or older, among others.
Thus, new and improved systems and methods are needed that would enable real time video stream processing that involves changing features of an object in the video stream.
The embodiments described herein are directed to systems and methods that substantially obviate one or more of the above and other problems associated with the conventional technology for real time video stream processing. In accordance with one aspect of the embodiments described herein, there is provided a method for real-time video processing for changing features of an object in a video, the method comprises: providing an object in the video, the object being at least partially and at least occasionally presented in frames of the video; detecting the object in the video; generating a list of at least one element of the object, the list being based on the object's features to be changed according to a request for modification; detecting the at least one element of the object in the video; tracking the at least one element of the object in the video; and transforming the frames of the video such that the at least one element of the object is modified according to the request for modification.
In one or more embodiments, transforming the frames of the video comprises: calculating characteristic points for each of the at least one element of the object; generating a mesh based on the calculated characteristic points for each of the at least one element of the object; tracking the at least one element of the object in the video, wherein tracking comprises aligning the mesh for each of the at least one element with a position of the corresponding each of the at least one element from frame to frame; generating a set of first points on the mesh for each of the at least one element of the object based on the request for modification; generating a set of second points on the mesh based on the set of first points and the request for modification; and transforming the frames of the video such that the at least one element of the object is modified, wherein, for each of the at least one element of the object, the set of first points comes into the set of second points using the mesh.
In one or more embodiments, the computer-implemented method further comprises: generating a square grid associated with the background of the object in the video; and transforming the background of the object using the square grid in accordance with the modification of the at least one element of the object.
In one or more embodiments, the computer-implemented method further comprises: generating at least one square grid associated with regions of the object that are adjacent to the modified at least one element of the object; and modifying the regions of the object that are adjacent to the modified at least one element of the object in accordance with the modification of the at least one element of the object using the at least one square grid.
In one or more embodiments, the detecting of the object in the video is implemented with the use of the Viola-Jones method.
In one or more embodiments, the calculating of the object's characteristic points is implemented with the use of an Active Shape Model (ASM).
In one or more embodiments, transforming the frames of the video comprises: calculating characteristic points for each of the at least one element of the object; generating a mesh based on the calculated characteristic points for each of the at least one element of the object; generating a set of first points on the mesh for each of the at least one element of the object based on the request for modification; generating at least one area based on the set of first points for each of the at least one element of the object; tracking the at least one element of the object in the video, wherein the tracking comprises aligning the at least one area of each of the at least one element with a position of the corresponding each of the at least one element from frame to frame; transforming the frames of the video such that the properties of the at least one area are modified based on the request for modification.
In one or more embodiments, modification of the properties of the at least one area includes changing color of the at least one area.
In one or more embodiments, modification of the properties of the at least one area includes removing at least part of the at least one area from the frames of the video.
In one or more embodiments, modification of the properties of the at least one area includes adding at least one new object to the at least one area, wherein the at least one new object is based on the request for modification.
In one or more embodiments, the objects to be modified include a human face.
In one or more embodiments, the processed video comprises a video stream.
In accordance with another aspect of the embodiments described herein, there is provided a mobile computerized system comprising a central processing unit and a memory, the memory storing instructions for: providing an object in the video, the object being at least partially and at least occasionally presented in frames of the video; detecting the object in the video; generating a list of at least one element of the object, the list being based on the object's features to be changed according to a request for modification; detecting the at least one element of the object in the video; tracking the at least one element of the object in the video; and transforming the frames of the video such that the at least one element of the object is modified according to the request for modification.
In one or more embodiments, transforming the frames of the video comprises: calculating characteristic points for each of the at least one element of the object; generating a mesh based on the calculated characteristic points for each of the at least one element of the object; tracking the at least one element of the object in the video, wherein tracking comprises aligning the mesh for each of the at least one element with a position of the corresponding each of the at least one element from frame to frame; generating a set of first points on the mesh for each of the at least one element of the object based on the request for modification;
generating a set of second points on the mesh based on the set of first points and the request for modification; and transforming the frames of the video such that the at least one element of the object is modified, wherein, for each of the at least one element of the object, the set of first points comes into the set of second points using the mesh.
In one or more embodiments, the memory further stores instructions for: generating a square grid associated with the background of the object in the video; and transforming the background of the object using the square grid in accordance with the modification of the at least one element of the object.
In one or more embodiments, the memory further stores instructions for: generating at least one square grid associated with regions of the object that are adjacent to the modified at least one element of the object; and modifying the regions of the object that are adjacent to the modified at least one element of the object in accordance with the modification of the at least one element of the object using the at least one square grid.
In one or more embodiments, the detecting of the object in the video is implemented with the use of the Viola-Jones method.
In one or more embodiments, the calculating of the object's characteristic points is implemented with the use of an Active Shape Model (ASM).
In one or more embodiments, transforming the frames of the video comprises: calculating characteristic points for each of the at least one element of the object; generating a mesh based on the calculated characteristic points for each of the at least one element of the object; generating a set of first points on the mesh for each of the at least one element of the object based on the request for modification; generating at least one area based on the set of first points for each of the at least one element of the object; tracking the at least one element of the object in the video, wherein the tracking comprises aligning the at least one area of each of the at least one element with a position of the corresponding each of the at least one element from frame to frame; transforming the frames of the video such that the properties of the at least one area are modified based on the request for modification.
In one or more embodiments, modification of the properties of the at least one area includes changing color of the at least one area.
In one or more embodiments, modification of the properties of the at least one area includes removing at least part of the at least one area from the frames of the video.
In one or more embodiments, modification of the properties of the at least one area includes adding at least one new object to the at least one area, wherein the at least one new object is based on the request for modification.
In one or more embodiments, the objects to be modified include a human face.
The accompanying drawings, which are incorporated in and constitute a part of this specification, exemplify the embodiments of the present invention and, together with the description, serve to explain and illustrate principles of the inventive technique. Specifically:
In the following detailed description, reference will be made to the accompanying drawing(s), in which identical functional elements are designated with like numerals. The aforementioned accompanying drawings show, by way of illustration and not by way of limitation, specific embodiments and implementations consistent with principles of the present invention. These implementations are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other implementations may be utilized and that structural changes and/or substitutions of various elements may be made without departing from the scope and spirit of the present invention. The following detailed description is, therefore, not to be construed in a limited sense. Additionally, the various embodiments of the invention as described may be implemented in the form of software running on a general-purpose computer, in the form of specialized hardware, or as a combination of software and hardware.
It will be appreciated that the method for real-time video processing can be performed with any kind of video data, e.g., video streams, video files saved in a memory of a computerized system of any kind (such as mobile computer devices, desktop computer devices and others), and all other possible types of video data understandable to those skilled in the art. Any kind of video data can be processed, and the embodiments disclosed herein are not intended to limit the scope of the present invention to a certain type of video data.
The embodiments disclosed below are directed to the processing of video streams; however, all other types of video data, including video files saved in a memory of a computerized system, can also be processed by the methods of the present invention. For example, a user can load video files and save them in a memory of his or her computerized system, and such video files can also be processed by the methods of the present invention. In accordance with one aspect of the embodiments described herein, there is provided a computerized system and a computer-implemented method for processing a real-time video stream that involves changing features of an object in the video stream. The described method may be implemented using any kind of computing device, including desktops, laptops, tablet computers, mobile phones, music players, multimedia players, etc., having any kind of generally used operating system, such as Windows®, iOS®, Android® and others. All disclosed embodiments and examples are non-limiting to the invention and are disclosed for illustrative purposes only.
It is important to note that any objects can be processed by the embodiments of the described method, including, without limitation, such objects as a human's face and parts of a human body, animals, and other living creatures or non-living things whose images can be conveyed in a real-time video stream.
The method 100 according to the first embodiment of the invention is illustrated in
Next, the object specified in the request for modification is detected in the video stream (stage 120), for example with the use of the conventional Viola-Jones method, and the request for modification is analyzed by generating a list of one or more elements of the object (stage 130), such that the mentioned list is based on the object's features that must be changed according to the request.
Further, in one or more embodiments, the elements of the object are detected (stage 140) and tracked (stage 150) in the video stream.
Finally, in one or more embodiments, the elements of the object are modified according to the request for modification, thus transforming the frames of the video stream (stage 160).
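For illustration only, stages 120-160 can be organized as a simple per-frame loop. The sketch below is a hypothetical skeleton, not the disclosed implementation: the stub functions merely pass data through so that the loop runs end to end, and a real system would replace them with the detection, tracking and transformation techniques described in the following sections.

```python
"""Hypothetical skeleton of stages 120-160; the stubs are placeholders only."""
import numpy as np

def detect_object(frame, request):            # stage 120, e.g. via Viola-Jones
    h, w = frame.shape[:2]
    return {"bbox": (0, 0, w, h)}

def list_elements(obj, request):              # stage 130: elements to be changed
    return list(request.get("elements", []))

def detect_elements(frame, obj, elements):    # stage 140
    return {name: np.empty((0, 2)) for name in elements}

def track_elements(frame, located, state):    # stage 150: update tracking state
    return located

def transform_frame(frame, state, request):   # stage 160: apply the modification
    return frame

def process_stream(frames, request):
    state = None
    for frame in frames:
        obj = detect_object(frame, request)
        elements = list_elements(obj, request)
        located = detect_elements(frame, obj, elements)
        state = track_elements(frame, located, state)
        yield transform_frame(frame, state, request)

# Example: run the (no-op) pipeline over three synthetic frames.
frames = (np.zeros((240, 320, 3), np.uint8) for _ in range(3))
out = list(process_stream(frames, {"elements": ["mouth"]}))
```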
It shall be noted that the transformation of frames of a video stream can be performed by different methods for different kinds of transformation. For example, for transformations of frames that mostly change the shapes of an object's elements, the second embodiment of the invention can be used. According to the second embodiment, characteristic points are first calculated for each element of an object. Hereinafter, characteristic points refer to points of an object which relate to its elements used in changing features of this object. It is possible to calculate characteristic points with the use of an Active Shape Model (ASM) or other known methods. Then, a mesh based on the characteristic points is generated for each of the at least one element of the object. This mesh is used in the following stage of tracking the elements of the object in the video stream: in particular, in the process of tracking, the mentioned mesh for each element is aligned with the position of that element. Further, two sets of additional points are generated on the mesh, namely a set of first points and a set of second points. The set of first points is generated for each element based on a request for modification, and the set of second points is generated for each element based on the set of first points and the request for modification. Then, the frames of the video stream can be transformed by modifying the elements of the object on the basis of the sets of first and second points and the mesh.
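A minimal sketch of this first-points-to-second-points transformation is given below, assuming the points of an element are already available (e.g., from an ASM fit) and using scikit-image's piecewise-affine transform as a stand-in for the mesh-based warp; the point coordinates are made-up placeholders, not disclosed values.

```python
"""Illustrative sketch only: warp a frame so content at 'first points' appears
at 'second points', via a piecewise-affine mesh over the control points."""
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_element(frame, first_pts, second_pts):
    h, w = frame.shape[:2]
    # Fixed anchors on the frame border keep the rest of the image in place.
    anchors = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1],
                        [w // 2, 0], [w // 2, h - 1], [0, h // 2], [w - 1, h // 2]],
                       dtype=float)
    src = np.vstack([second_pts, anchors])   # output-space control points
    dst = np.vstack([first_pts, anchors])    # where each one samples from in the input
    tform = PiecewiseAffineTransform()
    # warp() uses the transform as an inverse map (output -> input coordinates),
    # so estimating second -> first moves content from first to second points.
    tform.estimate(src, dst)
    return warp(frame, tform, preserve_range=True).astype(frame.dtype)

# Hypothetical usage: push three mouth points outward to enlarge the mouth.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
first = np.array([[140, 150], [180, 150], [160, 170]], dtype=float)   # (x, y)
second = first + np.array([[-8, 0], [8, 0], [0, 8]], dtype=float)     # modified positions
out = warp_element(frame, first, second)
```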
With such a method, the background of the modified object can become changed or distorted. To prevent this effect, it is possible to generate a square grid associated with the background of the object and to transform the background of the object, based on the modifications of the elements of the object, using the square grid.
As can be understood by those skilled in the art, not only the background of the object but also some of its regions adjacent to the modified elements can become changed or distorted. In this case, one or several square grids associated with the mentioned regions of the object can be generated, and the regions can be modified in accordance with the modification of the elements of the object by using the generated square grid or grids.
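One possible way to realize such grid-based compensation is sketched below, as an assumption rather than the disclosed implementation: displacements are specified at the nodes of a coarse square grid (zero at the border nodes so that distant background stays fixed), densified to a per-pixel field by bilinear interpolation, and applied to the frame.

```python
"""Illustrative square-grid compensation of background/adjacent regions."""
import numpy as np
import cv2

def warp_with_square_grid(frame, node_dx, node_dy):
    """node_dx, node_dy: x/y displacements defined on a coarse square grid of nodes."""
    h, w = frame.shape[:2]
    # Densify node displacements into per-pixel fields (bilinear interpolation).
    dx = cv2.resize(node_dx.astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(node_dy.astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    xx, yy = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    # Inverse mapping: each output pixel samples the input at (x - dx, y - dy).
    return cv2.remap(frame, xx - dx, yy - dy, interpolation=cv2.INTER_LINEAR)

# Hypothetical usage: a 5x5 grid whose centre node is displaced; border nodes are
# pinned to zero so the background far from the modified element is untouched.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
dx = np.zeros((5, 5), np.float32); dy = np.zeros((5, 5), np.float32)
dx[2, 2], dy[2, 2] = 6.0, -4.0
compensated = warp_with_square_grid(frame, dx, dy)
```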
In one or more embodiments, transformations of frames that change certain areas of an object using its elements can be performed by the third embodiment of the invention, which is similar to the second embodiment. More specifically, transformation of frames according to the third embodiment begins with calculating characteristic points for each element of an object and generating a mesh based on the calculated characteristic points. After that, a set of first points is generated on the mesh for each element of the object on the basis of a request for modification. Then, one or more areas based on the set of first points are generated for each element. Finally, the elements of the object are tracked by aligning the area for each element with a position of that element, and the properties of the areas can be modified based on the request for modification, thus transforming the frames of the video stream.
Depending on the nature of the request for modification, the properties of the mentioned areas can be transformed in different ways:
It should be noted that different areas, or different parts of such areas, can be modified in different ways as mentioned above, and the properties of the mentioned areas can also be modified in a manner other than the specific exemplary modifications described above, as will be apparent to those skilled in the art.
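As a concrete illustration of this area-based variant (with hypothetical point coordinates and colour shift, not disclosed values), the sketch below derives an area from an element's first points as a filled polygon mask and shifts its colour; removing part of the area or blending in a new object would follow the same masking pattern.

```python
"""Illustrative sketch: change the colour of an area derived from first points."""
import numpy as np
import cv2

def recolor_area(frame, first_pts, color_shift=(0, 30, 0)):
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [first_pts.astype(np.int32)], 255)   # area from the first points
    out = frame.astype(np.int16)
    out[mask > 0] += np.array(color_shift, dtype=np.int16)  # shift colour inside the area
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: tint a lip-shaped polygon in a synthetic frame.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
lip_area = np.array([[140, 150], [180, 150], [175, 165], [145, 165]])
tinted = recolor_area(frame, lip_area)
```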
Face detection and face tracking are discussed below in greater detail.
Face Detection and Initialization
In one or more embodiments, in the algorithm for changing proportions, a user first sends a request for changing the proportions of an object in a video stream. The next step in the algorithm involves detecting the object in the video stream.
In one or more embodiments, the face is detected in an image with the use of the Viola-Jones method. The Viola-Jones method is a fast and fairly accurate method for detecting the face region. Then, an Active Shape Model (ASM) algorithm is applied to the face region of the image to detect facial feature reference points. However, it should be appreciated that other methods and algorithms suitable for face detection can be used.
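For reference, the Viola-Jones detection step can be reproduced with OpenCV's Haar cascade classifier, a standard implementation of the method; the sketch below only illustrates the detection stage, uses the frontal-face model bundled with OpenCV, and its tuning parameters are illustrative, not values taken from this disclosure.

```python
"""Illustrative Viola-Jones face detection via OpenCV's bundled Haar cascade."""
import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # mild normalization helps the cascade
    # Each detection is an (x, y, w, h) rectangle around a face region.
    return face_cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))

# Example: a blank stand-in frame (a real video frame would go here).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(detect_faces(frame))  # empty on a blank frame
```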
In one or more embodiments, facial features are located by locating landmarks. A landmark represents a distinguishable point present in most of the images under consideration, for example, the location of the left eye pupil (
In one or more embodiments, a set of landmarks forms a shape. Shapes are represented as vectors: all the x- followed by all the y-coordinates of the points in the shape. One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. The mean shape is the mean of the aligned training shapes (which in the present disclosure are manually landmarked faces).
In one or more embodiments, subsequently, in accordance with the ASM algorithm, the search for landmarks starts from the mean shape aligned to the position and size of the face determined by a global face detector. The algorithm then repeats the following two steps until convergence: (i) suggest a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point; (ii) conform the tentative shape to a global shape model. The individual template matches are unreliable, and the shape model pools the results of the weak template matchers to form a stronger overall classifier. The entire search is repeated at each level of an image pyramid, from coarse to fine resolution. It follows that two types of sub-model make up the ASM: the profile model and the shape model.
In one or more embodiments, the profile models (one for each landmark at each pyramid level) are used to locate the approximate position of each landmark by template matching. Any template matcher can be used, but the classical ASM forms a fixed-length normalized gradient vector (called the profile) by sampling the image along a line (called the whisker) orthogonal to the shape boundary at the landmark. During training on manually landmarked faces, the mean profile vector ḡ and the profile covariance matrix S_g are calculated at each landmark. During searching, the landmark is displaced along the whisker to the pixel whose profile g has the lowest Mahalanobis distance from the mean profile ḡ, where
MahalanobisDistance = (g − ḡ)^T S_g^{−1} (g − ḡ)   (1)
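To make Equation (1) concrete, the sketch below picks the best candidate position along a whisker by its Mahalanobis distance to the mean profile; the profiles are assumed to have been sampled already, and the values used here are synthetic placeholders.

```python
"""Illustrative profile matching along a whisker using Equation (1)."""
import numpy as np

def best_profile_offset(profiles, mean_profile, cov):
    """profiles: (k, d) candidate profiles sampled along the whisker."""
    inv_cov = np.linalg.inv(cov)
    diffs = profiles - mean_profile                            # g - g_bar, per candidate
    dists = np.einsum('kd,de,ke->k', diffs, inv_cov, diffs)    # (g - g_bar)^T S^-1 (g - g_bar)
    return int(np.argmin(dists))                               # index of the best candidate

# Synthetic example: 7-sample profiles, 11 candidate offsets along the whisker.
rng = np.random.default_rng(0)
mean_profile = rng.normal(size=7)
cov = np.eye(7) * 0.1
candidates = mean_profile + rng.normal(scale=0.3, size=(11, 7))
print(best_profile_offset(candidates, mean_profile, cov))
```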
The shape model specifies allowable constellations of landmarks. It generates a shape x̂ with
x̂ = x̄ + Φb   (2)
where x̄ is the mean shape, b is a parameter vector, and Φ is a matrix of selected eigenvectors of the covariance matrix S_s of the points of the aligned training shapes. Using a standard principal components approach, the model has as much variation of the training set as desired, by ordering the eigenvalues λ_i of S_s and keeping an appropriate number of the corresponding eigenvectors in Φ. In this method, a single shape model for the entire ASM is used, but it is scaled for each pyramid level.
Equation 2 is then used to generate various shapes by varying the vector parameter b. By keeping the elements of b within limits (determined during model building), it is possible to ensure that the generated face shapes are lifelike.
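A minimal sketch of Equation (2) with such limits is shown below; the shape model matrices are random stand-ins for a trained ASM (not disclosed data), and each element of b is clipped to three standard deviations of its mode, a common choice for keeping shapes plausible.

```python
"""Illustrative shape generation x_hat = x_bar + Phi b with clipped parameters."""
import numpy as np

def generate_shape(mean_shape, phi, eigenvalues, b):
    limit = 3.0 * np.sqrt(eigenvalues)
    b = np.clip(b, -limit, limit)          # keep the generated face shape lifelike
    return mean_shape + phi @ b            # Equation (2)

# Synthetic stand-in model: 68 landmarks (136 stacked coordinates), 14 modes.
rng = np.random.default_rng(1)
mean_shape = rng.normal(size=136)
phi = np.linalg.qr(rng.normal(size=(136, 14)))[0]   # orthonormal mode vectors
eigenvalues = np.linspace(5.0, 0.1, 14)
shape = generate_shape(mean_shape, phi, eigenvalues, rng.normal(size=14))
```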
Conversely, given a suggested shape x, it is possible to calculate the parameter b that allows Equation 2 to best approximate x with a model shape x̂. An iterative algorithm, described by Cootes and Taylor, gives the b and T that minimize
distance(x, T(x̄ + Φb))   (3)
where T is a similarity transform that maps the model space into the image space.
In one or more embodiments, a mapping can be built from the facial feature reference points detected by ASM to Candide-3 points, which gives the x and y coordinates of the Candide-3 points. Candide is a parameterized face mask specifically developed for model-based coding of human faces. Its low number of polygons (approximately 100) allows fast reconstruction with moderate computing power. Candide is controlled by global and local Action Units (AUs). The global ones correspond to rotations around three axes. The local Action Units control the mimics of the face so that different expressions can be obtained.
Knowing the x and y coordinates of the Candide-3 points, the following equation system can be made:
Σ_j X_ij · B_j = x_i   (4)
Σ_j Y_ij · B_j = y_i   (5)
where B_j is the intensity of the j-th shape unit, x_i and y_i are the i-th point coordinates, and X_ij, Y_ij are coefficients which denote how the i-th point coordinates are changed by the j-th shape unit. This system is overdetermined, so it cannot be solved exactly. Thus, the following minimization is made:
Σ_i [ (Σ_j X_ij · B_j − x_i)^2 + (Σ_j Y_ij · B_j − y_i)^2 ] → min   (6)
Let's denote
X = ((X_ij)^T, (Y_ij)^T)^T,   x = ((x_i)^T, (y_i)^T)^T,   B = (B_j)^T.   (7)
This equation system is linear, so its solution is
B = (X^T X)^{−1} X^T x   (8)
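Equation (8) is the ordinary least-squares solution of the stacked system X·B = x. The sketch below solves the same problem with a numerically stable routine, using a random stand-in for the Candide-3 coefficient matrix (the sizes are illustrative, not disclosed values).

```python
"""Illustrative least-squares fit of shape-unit intensities, per Equations (7)-(8)."""
import numpy as np

def fit_shape_units(X, x):
    # Equivalent to B = (X^T X)^-1 X^T x, but more stable numerically.
    B, *_ = np.linalg.lstsq(X, x, rcond=None)
    return B

# Synthetic stand-in: 57 points (114 stacked coordinates) and 10 shape units.
rng = np.random.default_rng(2)
X = rng.normal(size=(2 * 57, 10))       # coefficients X_ij, Y_ij stacked as in (7)
true_B = rng.normal(size=10)
x = X @ true_B + rng.normal(scale=0.01, size=2 * 57)
print(np.round(fit_shape_units(X, x), 3))
```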
In one or more embodiments, it is also possible to use the Viola-Jones method and ASM to improve tracking quality. Face tracking methods usually accumulate error over time, so they can lose the face position after several hundred frames. In order to prevent this, in the present invention the ASM algorithm is run from time to time to re-initialize the tracking algorithm.
Face Tracking
In one or more embodiments, the next step comprises tracking the detected object in the video stream. In the present invention, the abovementioned Candide-3 model (see Ahlberg, J.: Candide-3, an updated parameterized face. Technical report, Linköping University, Sweden (2001), incorporated herein by reference) is used for tracking the face in a video stream. The mesh or mask corresponding to the Candide-3 model is shown in
In one or more embodiments, a state of the model can be described by a shape units intensity vector, an action units intensity vector and a position vector. Shape units are the main parameters of a head and a face. In the present invention, the following 10 units are used:
In one or more embodiments, action units are face parameters that correspond to certain face movements. In the present invention, the following 7 units are used:
In one or more embodiments, the mask position in a picture can be described using 6 coordinates: yaw, pitch, roll, x, y, and scale. The main idea of the algorithm proposed by Dornaika et al. (Dornaika, F., Davoine, F.: On appearance based face and facial action tracking. IEEE Trans. Circuits Syst. Video Technol. 16(9):1107-1124 (2006), incorporated herein by reference) is to find the mask position that observes the region most likely to be a face. For each position it is possible to calculate the observation error, i.e., the value that indicates the difference between the image under the current mask position and the mean face. An example of the mean face and of the observation under the current position is illustrated in
In one or more embodiments, the human face is modeled as a picture with a fixed size (width=40 px, height=46 px), called the mean face. The Gaussian distribution proposed in the original algorithm has shown worse results in comparison with the static image. So, the difference between the current observation and the mean face is calculated in the following way:
e(b) = Σ (log(1 + I_m) − log(1 + I_i))^2   (9)
The logarithm function makes tracking more stable.
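A direct transcription of Equation (9) is shown below on synthetic images: the 40×46 size follows the mean-face dimensions given above, log1p is used for the log(1 + I) terms, and the pixel values are random placeholders rather than a trained mean face.

```python
"""Illustrative observation error of Equation (9) on synthetic images."""
import numpy as np

def observation_error(observation, mean_face):
    # e(b) = sum over pixels of (log(1 + I_mean) - log(1 + I_observed))^2
    diff = np.log1p(mean_face.astype(np.float64)) - np.log1p(observation.astype(np.float64))
    return float(np.sum(diff ** 2))

mean_face = np.random.randint(0, 255, (46, 40)).astype(np.uint8)      # height 46, width 40
observation = np.clip(mean_face.astype(int)
                      + np.random.randint(-10, 10, (46, 40)), 0, 255)
print(observation_error(observation, mean_face))
```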
In one or more embodiments, to minimize the error, a Taylor series is used, as proposed by Dornaika et al. (see Dornaika, F., Davoine, F.: On appearance based face and facial action tracking. IEEE Trans. Circuits Syst. Video Technol. 16(9):1107-1124 (2006)). It was found that it is not necessary to sum up a number of finite differences when calculating an approximation to the first derivative. The derivative is calculated in the following way:
g_ij = (W(y_t, b_t + δb_t)_ij − W(y_t, b_t − δb_t)_ij) / δ_j   (10)
Here g_ij is an element of the matrix G. This matrix has size m×n, where m is large enough (about 1600) and n is small (about 14). In the case of straightforward calculation, n·m division operations have to be performed. To reduce the number of divisions, this matrix can be rewritten as a product of two matrices:
G = A · B
where matrix A has the same size as G and its elements are
a_ij = W(y_t, b_t + δb_t)_ij − W(y_t, b_t − δb_t)_ij   (11)
and matrix B is a diagonal matrix of size n×n with b_ii = δ_i^{−1}.
Now the matrix G_t^+ has to be obtained, and this is where the number of divisions can be reduced:
G_t^+ = (G^T G)^{−1} G^T = (B^T A^T A B)^{−1} B^T A^T = B^{−1} (A^T A)^{−1} B^{−1} B A^T = B^{−1} (A^T A)^{−1} A^T   (12)
After this transformation, the computation can be done with n·n divisions instead of m·n.
One more optimization was used here. If the matrix G_t^+ is formed explicitly before being applied to a vector, on the order of n²·m operations are required; but if A^T is first multiplied by that vector and B^{−1}(A^T A)^{−1} is then applied to the result, only about n·m + n³ operations are needed, which is much better because n << m.
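The sketch below checks this factorization numerically on random stand-in matrices with the sizes mentioned above (m ≈ 1600, n ≈ 14): applying B^{−1}(A^T A)^{−1}A^T to a vector in the cheap order gives the same result as building G and its pseudo-inverse explicitly.

```python
"""Numerical check of Equations (11)-(12): G = A B with diagonal B, and the
pseudo-inverse applied in the cheap order (A^T times the vector first)."""
import numpy as np

m, n = 1600, 14
rng = np.random.default_rng(3)
A = rng.normal(size=(m, n))              # finite-difference numerators a_ij, Eq. (11)
delta = rng.uniform(0.5, 2.0, size=n)    # perturbation steps delta_j
e = rng.normal(size=m)                   # vector the pseudo-inverse is applied to

# Cheap order: A^T e first (n*m work), then the small n x n solve, then diag(delta).
update_fast = delta * np.linalg.solve(A.T @ A, A.T @ e)   # B^-1 = diag(delta)

# Reference: build G = A B explicitly and use its pseudo-inverse (much more work).
G = A @ np.diag(1.0 / delta)             # b_jj = 1 / delta_j
update_ref = np.linalg.pinv(G) @ e
print(np.allclose(update_fast, update_ref))               # True
```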
Thus, the step of tracking the detected object in the video stream in the present embodiment comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object on each frame.
It should also be noted that, to increase tracking speed, in the present invention multiplication of matrices is performed in such a way that it can be boosted using ARM Advanced SIMD extensions (also known as NEON). Also, the GPU is used instead of the CPU whenever possible. To get high performance from the GPU, operations in the present invention are grouped in a special way.
Thus, tracking according to the exemplary embodiment of the invention has the following advantageous features:
1. Before tracking, the logarithm is applied to the grayscale value of each pixel to be tracked. This transformation has a great impact on tracking performance.
2. In the procedure of gradient matrix creation, the step of each parameter depends on the scale of the mask.
Exemplary Computer Platform
The computer platform 501 may include a data bus 504 or other communication mechanism for communicating information across and among various parts of the computer platform 501, and a processor 505 coupled with bus 504 for processing information and performing other computational and control tasks. Computer platform 501 also includes a volatile storage 506, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 504 for storing various information as well as instructions to be executed by processor 505, including the software application implementing the real-time video processing described above. The volatile storage 506 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 505. Computer platform 501 may further include a read only memory (ROM or EPROM) 507 or other static storage device coupled to bus 504 for storing static information and instructions for processor 505, such as a basic input-output system (BIOS), as well as various system configuration parameters. A persistent storage device 508, such as a magnetic disk, optical disk, or solid-state flash memory device, is provided and coupled to bus 504 for storing information and instructions.
Computer platform 501 may be coupled via bus 504 to a touch-sensitive display 509, such as a cathode ray tube (CRT), plasma display, or a liquid crystal display (LCD), for displaying information to a system administrator or user of the computer platform 501. An input device 510, including alphanumeric and other keys, is coupled to bus 504 for communicating information and command selections to processor 505. Another type of user input device is cursor control device 511, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 505 and for controlling cursor movement on touch-sensitive display 509. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. To detect user's gestures, the display 509 may incorporate a touchscreen interface configured to detect user's tactile events and send information on the detected events to the processor 505 via the bus 504.
An external storage device 512 may be coupled to the computer platform 501 via bus 504 to provide an extra or removable storage capacity for the computer platform 501. In an embodiment of the computer system 500, the external removable storage device 512 may be used to facilitate exchange of data with other computer systems.
The invention is related to the use of computer system 500 for implementing the techniques described herein. In an embodiment, the inventive system may reside on a machine such as computer platform 501. According to one embodiment of the invention, the techniques described herein are performed by computer system 500 in response to processor 505 executing one or more sequences of one or more instructions contained in the volatile memory 506. Such instructions may be read into volatile memory 506 from another computer-readable medium, such as persistent storage device 508. Execution of the sequences of instructions contained in the volatile memory 506 causes processor 505 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor 505 for execution. The computer-readable medium is just one example of a machine-readable medium, which may carry instructions for implementing any of the methods and/or techniques described herein. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as the persistent storage device 508. Volatile media includes dynamic memory, such as volatile storage 506.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a flash drive, a memory card, any other memory chip or cartridge, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 505 for execution. For example, the instructions may initially be carried on a magnetic disk from a remote computer. Alternatively, a remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on the data bus 504. The bus 504 carries the data to the volatile storage 506, from which processor 505 retrieves and executes the instructions. The instructions received by the volatile memory 506 may optionally be stored on persistent storage device 508 either before or after execution by processor 505. The instructions may also be downloaded into the computer platform 501 via Internet using a variety of network data communication protocols well known in the art.
The computer platform 501 also includes a communication interface, such as a network interface card 513 coupled to the data bus 504. Communication interface 513 provides a two-way data communication coupling to a network link 514 that is coupled to a local network 515. For example, communication interface 513 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 513 may be a local area network interface card (LAN NIC) to provide a data communication connection to a compatible LAN. Wireless links, such as the well-known 802.11a, 802.11b, 802.11g and Bluetooth, may also be used for network implementation. In any such implementation, communication interface 513 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 514 typically provides data communication through one or more networks to other network resources. For example, network link 514 may provide a connection through local network 515 to a host computer 516, or a network storage/server 522. Additionally or alternatively, the network link 514 may connect through gateway/firewall 517 to the wide-area or global network 518, such as an Internet. Thus, the computer platform 501 can access network resources located anywhere on the Internet 518, such as a remote network storage/server 519. On the other hand, the computer platform 501 may also be accessed by clients located anywhere on the local area network 515 and/or the Internet 518. The network clients 520 and 521 may themselves be implemented based on the computer platform similar to the platform 501.
Local network 515 and the Internet 518 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 514 and through communication interface 513, which carry the digital data to and from computer platform 501, are exemplary forms of carrier waves transporting the information.
Computer platform 501 can send messages and receive data, including program code, through a variety of network(s) including Internet 518 and LAN 515, network link 514 and communication interface 513. In the Internet example, when the system 501 acts as a network server, it might transmit a requested code or data for an application program running on client(s) 520 and/or 521 through the Internet 518, gateway/firewall 517, local area network 515 and communication interface 513. Similarly, it may receive code from other network resources.
The received code may be executed by processor 505 as it is received, and/or stored in persistent or volatile storage devices 508 and 506, respectively, or other non-volatile storage for later execution.
Finally, it should be understood that processes and techniques described herein are not inherently related to any particular apparatus and may be implemented by any suitable combination of components. Further, various types of general purpose devices may be used in accordance with the teachings described herein. It may also prove advantageous to construct specialized apparatus to perform the method steps described herein. The present invention has been described in relation to particular examples, which are intended in all respects to be illustrative rather than restrictive. Those skilled in the art will appreciate that many different combinations of hardware, software, and firmware will be suitable for practicing the present invention. For example, the described software may be implemented in a wide variety of programming or scripting languages, such as Assembler, C/C++, Objective-C, Perl, shell, PHP, Java, as well as any now known or later developed programming or scripting language.
Moreover, other implementations of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. Various aspects and/or components of the described embodiments may be used singly or in any combination in the systems and methods for real time video stream processing. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
This application is a continuation of, and claims the benefit of U.S. patent application Ser. No. 15/921,282, filed on Mar. 14, 2018, which is a continuation of, and claims the benefit of U.S. patent application Ser. No. 14/314,324, filed on Jun. 25, 2014, which claims the benefit of U.S. Provisional Application Ser. No. 61/936,016, filed on Feb. 5, 2014.
Number | Name | Date | Kind |
---|---|---|---|
4888713 | Falk | Dec 1989 | A |
5227863 | Bilbrey et al. | Jul 1993 | A |
5359706 | Sterling | Oct 1994 | A |
5479603 | Stone et al. | Dec 1995 | A |
5715382 | Herregods et al. | Feb 1998 | A |
5990973 | Sakamoto | Nov 1999 | A |
6016150 | Lengyel et al. | Jan 2000 | A |
6038295 | Mattes | Mar 2000 | A |
6252576 | Nottingham | Jun 2001 | B1 |
6278491 | Wang et al. | Aug 2001 | B1 |
H2003 | Minner | Nov 2001 | H |
6492986 | Metaxas | Dec 2002 | B1 |
6621939 | Negishi et al. | Sep 2003 | B1 |
6664956 | Erdem | Dec 2003 | B1 |
6768486 | Szabo et al. | Jul 2004 | B1 |
6771303 | Zhang et al. | Aug 2004 | B2 |
6806898 | Toyama et al. | Oct 2004 | B1 |
6807290 | Liu et al. | Oct 2004 | B2 |
6829391 | Comaniciu et al. | Dec 2004 | B2 |
6891549 | Gold | May 2005 | B2 |
6897977 | Bright | May 2005 | B1 |
6980909 | Root et al. | Dec 2005 | B2 |
7034820 | Urisaka et al. | Apr 2006 | B2 |
7035456 | Lestideau | Apr 2006 | B2 |
7039222 | Simon et al. | May 2006 | B2 |
7050078 | Dempski | May 2006 | B2 |
7119817 | Kawakami | Oct 2006 | B1 |
7167519 | Comaniciu et al. | Jan 2007 | B2 |
7173651 | Knowles | Feb 2007 | B1 |
7212656 | Liu et al. | May 2007 | B2 |
7227567 | Beck et al. | Jun 2007 | B1 |
7239312 | Urisaka et al. | Jul 2007 | B2 |
7411493 | Smith | Aug 2008 | B2 |
7415140 | Nagahashi et al. | Aug 2008 | B2 |
7535890 | Rojas | May 2009 | B2 |
7538764 | Salomie | May 2009 | B2 |
7564476 | Coughlan et al. | Jul 2009 | B1 |
7612794 | He et al. | Nov 2009 | B2 |
7671318 | Tener et al. | Mar 2010 | B1 |
7697787 | Illsley | Apr 2010 | B2 |
7710608 | Takahashi | May 2010 | B2 |
7720283 | Sun et al. | May 2010 | B2 |
7782506 | Suzuki et al. | Aug 2010 | B2 |
7812993 | Bright | Oct 2010 | B2 |
7830384 | Edwards et al. | Nov 2010 | B1 |
7945653 | Zuckerberg et al. | May 2011 | B2 |
7971156 | Albertson et al. | Jun 2011 | B2 |
7996793 | Latta et al. | Aug 2011 | B2 |
8026931 | Sun | Sep 2011 | B2 |
8086060 | Gilra et al. | Dec 2011 | B1 |
8090160 | Kakadiaris et al. | Jan 2012 | B2 |
8131597 | Hudetz | Mar 2012 | B2 |
8199747 | Rojas et al. | Jun 2012 | B2 |
8230355 | Bauermeister et al. | Jul 2012 | B1 |
8233789 | Brunner | Jul 2012 | B2 |
8253789 | Aizaki et al. | Aug 2012 | B2 |
8294823 | Ciudad et al. | Oct 2012 | B2 |
8295557 | Wang et al. | Oct 2012 | B2 |
8296456 | Klappert | Oct 2012 | B2 |
8314842 | Kudo | Nov 2012 | B2 |
8332475 | Rosen et al. | Dec 2012 | B2 |
8335399 | Gyotoku | Dec 2012 | B2 |
8385684 | Sandrew et al. | Feb 2013 | B2 |
8421873 | Majewicz et al. | Apr 2013 | B2 |
8462198 | Lin et al. | Jun 2013 | B2 |
8487938 | Latta et al. | Jul 2013 | B2 |
8520093 | Nanu et al. | Aug 2013 | B2 |
8638993 | Lee et al. | Jan 2014 | B2 |
8675972 | Lefevre et al. | Mar 2014 | B2 |
8687039 | Degrazia et al. | Apr 2014 | B2 |
8692830 | Nelson et al. | Apr 2014 | B2 |
8717465 | Ning | May 2014 | B2 |
8718333 | Wolf et al. | May 2014 | B2 |
8724622 | Rojas | May 2014 | B2 |
8743210 | Lin | Jun 2014 | B2 |
8761497 | Berkovich et al. | Jun 2014 | B2 |
8766983 | Marks et al. | Jul 2014 | B2 |
8810696 | Ning | Aug 2014 | B2 |
8823769 | Sekine | Sep 2014 | B2 |
8824782 | Ichihashi et al. | Sep 2014 | B2 |
8856691 | Geisner et al. | Oct 2014 | B2 |
8874677 | Rosen et al. | Oct 2014 | B2 |
8897596 | Passmore et al. | Nov 2014 | B1 |
8909679 | Root et al. | Dec 2014 | B2 |
8929614 | Oicherman et al. | Jan 2015 | B2 |
8934665 | Kim et al. | Jan 2015 | B2 |
8958613 | Kondo et al. | Feb 2015 | B2 |
8976862 | Kim et al. | Mar 2015 | B2 |
8988490 | Fujii | Mar 2015 | B2 |
8995433 | Rojas | Mar 2015 | B2 |
9032314 | Mital et al. | May 2015 | B2 |
9040574 | Wang et al. | May 2015 | B2 |
9055416 | Rosen et al. | Jun 2015 | B2 |
9100806 | Rosen et al. | Aug 2015 | B2 |
9100807 | Rosen et al. | Aug 2015 | B2 |
9191776 | Root et al. | Nov 2015 | B2 |
9204252 | Root | Dec 2015 | B2 |
9225897 | Sehn et al. | Dec 2015 | B1 |
9230160 | Kanter | Jan 2016 | B1 |
9232189 | Shaburov et al. | Jan 2016 | B2 |
9276886 | Samaranayake | Mar 2016 | B1 |
9311534 | Liang | Apr 2016 | B2 |
9364147 | Wakizaka et al. | Jun 2016 | B2 |
9396525 | Shaburova et al. | Jul 2016 | B2 |
9412007 | Nanu et al. | Aug 2016 | B2 |
9443227 | Evans et al. | Sep 2016 | B2 |
9489661 | Evans et al. | Nov 2016 | B2 |
9491134 | Rosen et al. | Nov 2016 | B2 |
9565362 | Kudo | Feb 2017 | B2 |
9705831 | Spiegel | Jul 2017 | B2 |
9742713 | Spiegel et al. | Aug 2017 | B2 |
9848293 | Murray et al. | Dec 2017 | B2 |
9928874 | Shaburova | Mar 2018 | B2 |
10102423 | Shaburov et al. | Oct 2018 | B2 |
10116901 | Shaburov et al. | Oct 2018 | B2 |
10255948 | Shaburova et al. | Apr 2019 | B2 |
10271010 | Gottlieb | Apr 2019 | B2 |
10283162 | Shaburova et al. | May 2019 | B2 |
10284508 | Allen et al. | May 2019 | B1 |
10438631 | Shaburova et al. | Oct 2019 | B2 |
10439972 | Spiegel et al. | Oct 2019 | B1 |
10509466 | Miller et al. | Dec 2019 | B1 |
10514876 | Sehn | Dec 2019 | B2 |
10566026 | Shaburova | Feb 2020 | B1 |
10586570 | Shaburova et al. | Mar 2020 | B2 |
10614855 | Huang | Apr 2020 | B2 |
10748347 | Li et al. | Aug 2020 | B1 |
10950271 | Shaburova et al. | Mar 2021 | B1 |
10958608 | Allen et al. | Mar 2021 | B1 |
10962809 | Castañeda | Mar 2021 | B1 |
10991395 | Shaburova et al. | Apr 2021 | B1 |
10996846 | Robertson et al. | May 2021 | B2 |
10997787 | Ge et al. | May 2021 | B2 |
11012390 | Al Majid et al. | May 2021 | B1 |
11030454 | Xiong et al. | Jun 2021 | B1 |
11036368 | Al Majid et al. | Jun 2021 | B1 |
11062498 | Voss et al. | Jul 2021 | B1 |
11087728 | Canberk et al. | Aug 2021 | B1 |
11092998 | Castañeda et al. | Aug 2021 | B1 |
11106342 | Al Majid et al. | Aug 2021 | B1 |
11126206 | Meisenholder et al. | Sep 2021 | B2 |
11143867 | Rodriguez, II | Oct 2021 | B2 |
11169600 | Canberk et al. | Nov 2021 | B1 |
11227626 | Krishnan Gorumkonda et al. | Jan 2022 | B1 |
11290682 | Shaburov et al. | Mar 2022 | B1 |
20010004417 | Narutoshi et al. | Jun 2001 | A1 |
20020006431 | Tramontana | Jan 2002 | A1 |
20020012454 | Liu et al. | Jan 2002 | A1 |
20020064314 | Comaniciu et al. | May 2002 | A1 |
20020163516 | Hubbell | Nov 2002 | A1 |
20030107568 | Urisaka et al. | Jun 2003 | A1 |
20030132946 | Gold | Jul 2003 | A1 |
20030228135 | Illsley | Dec 2003 | A1 |
20040037475 | Avinash et al. | Feb 2004 | A1 |
20040076337 | Nishida | Apr 2004 | A1 |
20040119662 | Dempski | Jun 2004 | A1 |
20040130631 | Suh | Jul 2004 | A1 |
20040233223 | Schkolne et al. | Nov 2004 | A1 |
20050046905 | Aizaki et al. | Mar 2005 | A1 |
20050073585 | Ettinger et al. | Apr 2005 | A1 |
20050117798 | Patton et al. | Jun 2005 | A1 |
20050128211 | Berger et al. | Jun 2005 | A1 |
20050131744 | Brown et al. | Jun 2005 | A1 |
20050180612 | Nagahashi et al. | Aug 2005 | A1 |
20050190980 | Bright | Sep 2005 | A1 |
20050202440 | Fletterick et al. | Sep 2005 | A1 |
20050220346 | Akahori | Oct 2005 | A1 |
20050238217 | Enomoto et al. | Oct 2005 | A1 |
20060098248 | Suzuki et al. | May 2006 | A1 |
20060110004 | Wu et al. | May 2006 | A1 |
20060170937 | Takahashi | Aug 2006 | A1 |
20060227997 | Au et al. | Oct 2006 | A1 |
20060242183 | Niyogi et al. | Oct 2006 | A1 |
20060269128 | Vladislav | Nov 2006 | A1 |
20060290695 | Salomie | Dec 2006 | A1 |
20070013709 | Charles et al. | Jan 2007 | A1 |
20070087352 | Fletterick et al. | Apr 2007 | A9 |
20070140556 | Willamowski et al. | Jun 2007 | A1 |
20070159551 | Kotani | Jul 2007 | A1 |
20070216675 | Sun | Sep 2007 | A1 |
20070223830 | Ono | Sep 2007 | A1 |
20070258656 | Aarabi et al. | Nov 2007 | A1 |
20070268312 | Marks et al. | Nov 2007 | A1 |
20080063285 | Porikli et al. | Mar 2008 | A1 |
20080077953 | Fernandez et al. | Mar 2008 | A1 |
20080184153 | Matsumura et al. | Jul 2008 | A1 |
20080187175 | Kim et al. | Aug 2008 | A1 |
20080204992 | Swenson et al. | Aug 2008 | A1 |
20080212894 | Demirli et al. | Sep 2008 | A1 |
20090012788 | Gilbert | Jan 2009 | A1 |
20090027732 | Imai | Jan 2009 | A1 |
20090158170 | Narayanan et al. | Jun 2009 | A1 |
20090290791 | Holub et al. | Nov 2009 | A1 |
20090309878 | Otani et al. | Dec 2009 | A1 |
20090310828 | Kakadiaris et al. | Dec 2009 | A1 |
20100074475 | Chouno | Mar 2010 | A1 |
20100177981 | Wang et al. | Jul 2010 | A1 |
20100185963 | Slik et al. | Jul 2010 | A1 |
20100188497 | Aizaki et al. | Jul 2010 | A1 |
20100202697 | Matsuzaka et al. | Aug 2010 | A1 |
20100203968 | Gill et al. | Aug 2010 | A1 |
20100231590 | Erceis et al. | Sep 2010 | A1 |
20100316281 | Lefevre | Dec 2010 | A1 |
20110018875 | Arahari et al. | Jan 2011 | A1 |
20110038536 | Gong | Feb 2011 | A1 |
20110182357 | Kim et al. | Jul 2011 | A1 |
20110202598 | Evans et al. | Aug 2011 | A1 |
20110261050 | Smolic et al. | Oct 2011 | A1 |
20110273620 | Berkovich et al. | Nov 2011 | A1 |
20110299776 | Lee et al. | Dec 2011 | A1 |
20110301934 | Tardif | Dec 2011 | A1 |
20120050323 | Baron, Jr. et al. | Mar 2012 | A1 |
20120106806 | Foita et al. | May 2012 | A1 |
20120136668 | Kuroda | May 2012 | A1 |
20120144325 | Mital et al. | Jun 2012 | A1 |
20120167146 | Incorvia | Jun 2012 | A1 |
20120209924 | Evans et al. | Aug 2012 | A1 |
20120288187 | Ichihashi et al. | Nov 2012 | A1 |
20120306853 | Wright et al. | Dec 2012 | A1 |
20120327172 | El-Saban et al. | Dec 2012 | A1 |
20130004096 | Goh et al. | Jan 2013 | A1 |
20130114867 | Kondo et al. | May 2013 | A1 |
20130155169 | Hoover et al. | Jun 2013 | A1 |
20130190577 | Brunner et al. | Jul 2013 | A1 |
20130201105 | Ptucha et al. | Aug 2013 | A1 |
20130201187 | Tong et al. | Aug 2013 | A1 |
20130201328 | Chung | Aug 2013 | A1 |
20130208129 | Stenman | Aug 2013 | A1 |
20130216094 | Delean | Aug 2013 | A1 |
20130229409 | Song et al. | Sep 2013 | A1 |
20130235086 | Otake | Sep 2013 | A1 |
20130278600 | Christensen | Oct 2013 | A1 |
20130287291 | Cho | Oct 2013 | A1 |
20130342629 | North et al. | Dec 2013 | A1 |
20140043329 | Wang et al. | Feb 2014 | A1 |
20140171036 | Simmons | Jun 2014 | A1 |
20140179347 | Murray et al. | Jun 2014 | A1 |
20140198177 | Castellani | Jul 2014 | A1 |
20140228668 | Wakizaka et al. | Aug 2014 | A1 |
20150055829 | Liang | Feb 2015 | A1 |
20150097834 | Ma et al. | Apr 2015 | A1 |
20150116350 | Lin et al. | Apr 2015 | A1 |
20150116448 | Gottlieb | Apr 2015 | A1 |
20150120293 | Wohlert et al. | Apr 2015 | A1 |
20150131924 | He et al. | May 2015 | A1 |
20150145992 | Traff | May 2015 | A1 |
20150163416 | Nevatie | Jun 2015 | A1 |
20150195491 | Shaburov et al. | Jul 2015 | A1 |
20150213604 | Li et al. | Jul 2015 | A1 |
20150220252 | Mital et al. | Aug 2015 | A1 |
20150221069 | Shaburova et al. | Aug 2015 | A1 |
20150221118 | Shaburova | Aug 2015 | A1 |
20150221136 | Shaburova et al. | Aug 2015 | A1 |
20150221338 | Shaburova et al. | Aug 2015 | A1 |
20150222821 | Shaburova | Aug 2015 | A1 |
20150370320 | Connor | Dec 2015 | A1 |
20160012627 | Kishikawa et al. | Jan 2016 | A1 |
20160253550 | Zhang et al. | Sep 2016 | A1 |
20160322079 | Shaburova et al. | Nov 2016 | A1 |
20170019633 | Shaburov et al. | Jan 2017 | A1 |
20170098122 | el Kaliouby | Apr 2017 | A1 |
20170123487 | Hazra et al. | May 2017 | A1 |
20170277684 | Dharmarajan | Sep 2017 | A1 |
20170277685 | Takumi | Sep 2017 | A1 |
20170351910 | Elwazer et al. | Dec 2017 | A1 |
20180158370 | Pryor | Jun 2018 | A1 |
20180036481 | Parshionikar | Dec 2018 | A1 |
20200160886 | Shaburova | May 2020 | A1 |
20210011612 | Dancie et al. | Jan 2021 | A1 |
20210074016 | Li et al. | Mar 2021 | A1 |
20210166732 | Shaburova et al. | Jun 2021 | A1 |
20210174034 | Retek et al. | Jun 2021 | A1 |
20210241529 | Cowburn et al. | Aug 2021 | A1 |
20210303075 | Cowburn et al. | Sep 2021 | A1 |
20210303077 | Anvaripour et al. | Sep 2021 | A1 |
20210303140 | Mourkogiannis | Sep 2021 | A1 |
20210382564 | Blachly et al. | Dec 2021 | A1 |
20210397000 | Rodriguez, II | Dec 2021 | A1 |
Number | Date | Country |
---|---|---|
2887596 | Jul 2015 | CA |
1411277 | Apr 2003 | CN |
1811793 | Aug 2006 | CN |
101167087 | Apr 2008 | CN |
101499128 | Aug 2009 | CN |
101753851 | Jun 2010 | CN |
102665062 | Sep 2012 | CN |
103620646 | Mar 2014 | CN |
103650002 | Mar 2014 | CN |
103999096 | Aug 2014 | CN |
104378553 | Feb 2015 | CN |
103049761 | Aug 2016 | CN |
107637072 | Jan 2018 | CN |
3707693 | Sep 2020 | EP |
20040058671 | Jul 2004 | KR |
100853122 | Aug 2008 | KR |
20080096252 | Oct 2008 | KR |
102031135 | Oct 2019 | KR |
102173786 | Oct 2020 | KR |
102346691 | Jan 2022 | KR |
102417043 | Jul 2022 | KR |
WO-2016149576 | Sep 2016 | WO |
WO-2016168591 | Oct 2016 | WO |
WO-2019094618 | May 2019 | WO |
Entry |
---|
Tchoulack et al., “A video stream processor for real-time detection and correction of specular reflections in endoscopic images.” In 2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference, pp. 49-52. IEEE, 2008. (Year: 2008). |
Neoh et al., “Adaptive edge detection for real-time video processing using FPGAs.” Global Signal Processing 7, No. 3 (2004): 2-3. (Year: 2004). |
Salmi et al., “Hierarchical grid transformation for image warping in the analysis of two-dimensional electrophoresis gels.” Proteomics 2, No. 11 (2002): 1504-1515. (Year: 2002). |
Kaufmann et al., “Finite element image warping.” In Computer Graphics Forum, vol. 32, No. 2pt1, pp. 31-39. Oxford, UK: Blackwell Publishing Ltd, 2013. (Year: 2013). |
Forlenza et al., “Real time corner detection for miniaturized electro-optical sensors onboard small unmanned aerial systems.” Sensors 12, No. 1 (2012): 863-877. (Year: 2012). |
Phadke et al., “Illumination invariant Mean-shift tracking,” 2013 IEEE Workshop on Applications of Computer Vision (WACV), 2013, pp. 407-412, doi: 10.1109/WACV.2013.6475047. (Year: 2013). |
Baldwin et al., “Resolution-appropriate shape representation.” In Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pp. 460-465. IEEE, 1998. (Year: 1998). |
Wikipedia, “Facial Action Coding System”, published on Jan. 23, 2014 (Year: 2014). |
Lefevre et al., “Structure and appearance features for robust 3d facial actions tracking.” In 2009 IEEE International Conference on Multimedia and Expo, pp. 298-301. IEEE, 2009. (Year: 2009). |
Su, Zihua. “Statistical shape modelling: automatic shape model building.” PhD diss., UCL (University College London), 2011. (Year: 2011). |
Chen et al., “Robust Facial Feature Tracking Under Various Illuminations,” 2006 International Conference on Image Processing, 2006, pp. 2829-2832, doi: 10.1109/ICIP.2006.312997. (Year: 2006). |
“U.S. Appl. No. 14/114,124, Response filed Oct. 5, 2016 to Final Office Action dated May 5, 2016”, 14 pgs. |
“U.S. Appl. No. 14/314,312, Advisory Action dated May 10, 2019”, 3 pgs. |
“U.S. Appl. No. 14/314,312, Final Office Action dated Mar. 22, 2019”, 28 pgs. |
“U.S. Appl. No. 14/314,312, Final Office Action dated Apr. 12, 2017”, 34 pgs. |
“U.S. Appl. No. 14/314,312, Final Office Action dated May 5, 2016”, 28 pgs. |
“U.S. Appl. No. 14/314,312, Final Office Action dated May 10, 2018”, 32 pgs. |
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Jul. 5, 2019”, 25 pgs. |
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Aug. 30, 2017”, 32 pgs. |
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Oct. 17, 2016”, 33 pgs. |
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Nov. 5, 2015”, 26 pgs. |
“U.S. Appl. No. 14/314,312, Non Final Office Action dated Nov. 27, 2018”, 29 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Mar. 17, 2017 to Non Final Office Action dated Oct. 17, 2016”, 12 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Jan. 28, 2019 to Non Final Office Action dated Nov. 27, 2018”, 10 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Feb. 28, 2018 to Non Final Office Action dated Aug. 30, 2017”, 13 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Apr. 5, 2016 to Non Final Office Action dated Nov. 5, 2015”, 13 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Aug. 14, 2017 to Final Office Action dated Apr. 12, 2017”, 16 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Sep. 6, 2018 to Final Office Action dated May 10, 2018”, 12 pgs. |
“U.S. Appl. No. 14/314,312, Response filed Oct. 5, 2016 to Final Office Action dated May 5, 2016”, 12 pgs. |
“U.S. Appl. No. 14/314,312, Response filed May 3, 2019 to Final Office Action dated Mar. 22, 2019”, 11 pgs. |
“U.S. Appl. No. 14/314,324, Advisory Action dated Sep. 21, 2017”, 4 pgs. |
“U.S. Appl. No. 14/314,324, Final Office Action dated May 3, 2017”, 33 pgs. |
“U.S. Appl. No. 14/314,324, Final Office Action dated May 5, 2016”, 24 pgs. |
“U.S. Appl. No. 14/314,324, Non Final Office Action dated Oct. 14, 2016”, 26 pgs. |
“U.S. Appl. No. 14/314,324, Non Final Office Action dated Nov. 5, 2015”, 23 pgs. |
“U.S. Appl. No. 14/314,324, Notice of Allowance dated Nov. 8, 2017”, 7 pgs. |
“U.S. Appl. No. 14/314,324, Response filed Feb. 14, 2017 to Non Final Office Action dated Oct. 14, 2016”, 19 pgs. |
“U.S. Appl. No. 14/314,324, Response filed Apr. 5, 2016 to Non Final Office Action dated Nov. 5, 2015”, 15 pgs. |
“U.S. Appl. No. 14/314,324, Response filed Sep. 1, 2017 to Final Office Action dated May 3, 2017”, 10 pgs. |
“U.S. Appl. No. 14/314,324, Response Filed Oct. 5, 2016 to Final Office Action dated May 5, 2016”, 14 pgs. |
“U.S. Appl. No. 14/314,324, Response filed Nov. 3, 2017 to Advisory Action dated Sep. 21, 2017”, 11 pgs. |
“U.S. Appl. No. 14/314,334, Appeal Brief filed Apr. 15, 2019”, 19 pgs. |
“U.S. Appl. No. 14/314,334, Examiner Interview Summary dated Apr. 28, 2017”, 3 pgs. |
“U.S. Appl. No. 14/314,334, Examiner Interview Summary dated Nov. 26, 2018”, 3 pgs. |
“U.S. Appl. No. 14/314,334, Final Office Action dated Feb. 15, 2019”, 40 pgs. |
“U.S. Appl. No. 14/314,334, Final Office Action dated May 16, 2016”, 43 pgs. |
“U.S. Appl. No. 14/314,334, Final Office Action dated May 31, 2018”, 38 pgs. |
“U.S. Appl. No. 14/314,334, Final Office Action dated Jul. 12, 2017”, 40 pgs. |
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Jan. 22, 2018”, 35 pgs. |
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Oct. 26, 2018”, 39 pgs. |
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Nov. 13, 2015”, 39 pgs. |
“U.S. Appl. No. 14/314,334, Non Final Office Action dated Dec. 1, 2016”, 45 pgs. |
“U.S. Appl. No. 14/314,334, Notice of Allowance dated Jul. 1, 2019”, 9 pgs. |
“U.S. Appl. No. 14/314,334, Notice of Allowance dated Sep. 19, 2017”, 5 pgs. |
“U.S. Appl. No. 14/314,334, Response filed Apr. 13, 2016 to Non Final Office Action dated Nov. 13, 2015”, 20 pgs. |
“U.S. Appl. No. 14/314,334, Response Filed Apr. 23, 2018 to Non Final Office Action dated Jan. 22, 2018”, 14 pgs. |
“U.S. Appl. No. 14/314,334, Response filed May 20, 2017 to Non Final Office Action dated Dec. 1, 2016”, 16 pgs. |
“U.S. Appl. No. 14/314,334, Response filed Aug. 30, 2018 to Final Office Action dated May 31, 2018”, 13 pgs. |
“U.S. Appl. No. 14/314,334, Response filed Sep. 1, 2017 to Final Office Action dated Jul. 12, 2017”, 12 pgs. |
“U.S. Appl. No. 14/314,334, Response filed Oct. 17, 2016 to Final Office Action dated May 16, 2016”, 16 pgs. |
“U.S. Appl. No. 14/314,343, Final Office Action dated May 6, 2016”, 19 pgs. |
“U.S. Appl. No. 14/314,343, Final Office Action dated Aug. 15, 2017”, 38 pgs. |
“U.S. Appl. No. 14/314,343, Final Office Action dated Sep. 6, 2018”, 43 pgs. |
“U.S. Appl. No. 14/314,343, Non Final Office Action dated Apr. 19, 2018”, 40 pgs. |
“U.S. Appl. No. 14/314,343, Non Final Office Action dated Nov. 4, 2015”, 14 pgs. |
“U.S. Appl. No. 14/314,343, Non Final Office Action dated Nov. 17, 2016”, 31 pgs. |
“U.S. Appl. No. 14/314,343, Notice of Allowance dated Dec. 17, 2018”, 5 pgs. |
“U.S. Appl. No. 14/314,343, Response filed Feb. 15, 2018 to Final Office Action dated Aug. 15, 2017”, 11 pgs. |
“U.S. Appl. No. 14/314,343, Response filed Apr. 4, 2016 to Non Final Office Action dated Nov. 4, 2015”, 10 pgs. |
“U.S. Appl. No. 14/314,343, Response filed May 11, 2017 to Non Final Office Action dated Nov. 17, 2016”, 13 pgs. |
“U.S. Appl. No. 14/314,343, Response filed Jul. 19, 2018 to Non Final Office Action dated Apr. 19, 2018”, 15 pgs. |
“U.S. Appl. No. 14/314,343, Response filed Oct. 6, 2016 to Final Office Action dated May 6, 2016”, 13 pgs. |
“U.S. Appl. No. 14/314,343, Response filed Oct. 11, 2018 to Final Office Action dated Sep. 6, 2018”, 11 pgs. |
“U.S. Appl. No. 14/325,477, Non Final Office Action dated Oct. 9, 2015”, 17 pgs. |
“U.S. Appl. No. 14/325,477, Notice of Allowance dated Mar. 17, 2016”, 5 pgs. |
“U.S. Appl. No. 14/325,477, Response filed Feb. 9, 2016 to Non Final Office Action dated Oct. 9, 2015”, 13 pgs. |
“U.S. Appl. No. 15/208,973, Final Office Action dated May 10, 2018”, 13 pgs. |
“U.S. Appl. No. 15/208,973, Non Final Office Action dated Sep. 19, 2017”, 17 pgs. |
“U.S. Appl. No. 15/208,973, Notice of Allowability dated Feb. 21, 2019”, 3 pgs. |
“U.S. Appl. No. 15/208,973, Notice of Allowance dated Nov. 20, 2018”, 14 pgs. |
“U.S. Appl. No. 15/208,973, Preliminary Amendment, filed Jan. 17, 2017”, 9 pgs. |
“U.S. Appl. No. 15/208,973, Response filed Sep. 5, 2018 to Final Office Action dated May 10, 2018”, 10 pgs. |
“U.S. Appl. No. 15/921,282, Notice of Allowance dated Oct. 2, 2019”, 9 pgs. |
“Bilinear interpolation”, Wikipedia, [Online] Retrieved from the Internet: <URL: https://web.archive.org/web/20110921104425/http://en.wikipedia.org/wiki/Bilinear_interpolation>, (Jan. 8, 2014), 3 pgs. |
“Imatest”, [Online] Retrieved from the Internet on Jul. 10, 2015: <URL: https://web.archive.org/web/20150710000557/http://www.imatest.com/>, 3 pgs. |
“KR 10-0853122 B1 machine translation”, IP.com, (2008), 29 pgs. |
Ahlberg, Jorgen, “Candide-3: An Updated Parameterised Face”, Image Coding Group, Dept. of Electrical Engineering, Linkoping University, SE, (Jan. 2001), 16 pgs. |
Baxes, Gregory A., et al., “Digital Image Processing: Principles and Applications, Chapter 4”, New York: Wiley, (1994), 88-91. |
Chen, et al., “Manipulating, Deforming and Animating Sampled Object Representations”, Computer Graphics Forum, vol. 26, (2007), 824-852. |
Dornaika, F., et al., “On Appearance Based Face and Facial Action Tracking”, IEEE Trans. Circuits Syst. Video Technol. 16(9), (Sep. 2006), 1107-1124. |
Leyden, John, “This SMS will self-destruct in 40 seconds”, [Online] Retrieved from the Internet: <URL: http://www.theregister.co.uk/2005/12/12/stealthtext/>, (Dec. 12, 2005), 1 pg. |
Milborrow, S., et al., “Locating facial features with an extended active shape model”, European Conference on Computer Vision, Springer, Berlin, Heidelberg, [Online] Retrieved from the Internet: <URL: http://www.milbo.org/stasm-files/locating-facial-features-with-an-extended-asm.pdf>, (2008), 11 pgs. |
Ohya, Jun, et al., “Virtual Metamorphosis”, IEEE MultiMedia, 6(2), (1999), 29-39. |
“U.S. Appl. No. 14/314,312, Appeal Brief filed Oct. 3, 2019”, 14 pgs. |
“U.S. Appl. No. 14/314,312, Notice of Allowability dated Jan. 7, 2020”, 3 pgs. |
“U.S. Appl. No. 14/314,312, Notice of Allowance dated Oct. 25, 2019”, 9 pgs. |
“U.S. Appl. No. 16/298,721, Final Office Action dated Mar. 6, 2020”, 54 pgs. |
“U.S. Appl. No. 16/298,721, Non Final Office Action dated Oct. 3, 2019”, 40 pgs. |
“U.S. Appl. No. 16/298,721, Response filed Jan. 3, 2020 to Non Final Office Action dated Oct. 3, 2019”, 10 pgs. |
U.S. Appl. No. 14/314,312, U.S. Pat. No. 10,586,570, filed Jun. 25, 2014, Method for Real Time Video Processing for Changing Proportions of an Object in the Video.
U.S. Appl. No. 16/749,708, filed Jan. 22, 2020, Real Time Video Processing for Changing Proportions of an Object in the Video. |
U.S. Appl. No. 14/325,477, U.S. Pat. No. 9,396,525, filed Jul. 8, 2014, Method for Real Time Video Processing Involving Changing a Color of an Object on a Human Face in a Video.
U.S. Appl. No. 15/208,973, U.S. Pat. No. 10,255,948, filed Jul. 13, 2016, Method for Real Time Video Processing Involving Changing a Color of an Object on a Human Face in a Video.
U.S. Appl. No. 16/277,750, filed Feb. 15, 2019, Method for Real Time Video Processing Involving Changing a Color of an Object on a Human Face in a Video. |
U.S. Appl. No. 15/921,282, U.S. Pat. No. 10,566,026, filed Mar. 14, 2018, Method for Real-Time Video Processing Involving Changing Features of an Object in the Video.
U.S. Appl. No. 14/314,324, U.S. Pat. No. 9,928,874, filed Jun. 25, 2014, Method for Real-Time Video Processing Involving Changing Features of an Object in the Video.
U.S. Appl. No. 14/314,334, U.S. Pat. No. 10,438,631, filed Jun. 25, 2014, Method for Real-Time Video Processing Involving Retouching of an Object in the Video.
U.S. Appl. No. 16/548,279, filed Aug. 22, 2019, Method for Real-Time Video Processing Involving Retouching of an Object in the Video. |
U.S. Appl. No. 14/314,343, U.S. Pat. No. 10,283,162, filed Jun. 25, 2014, Method for Triggering Events in a Video.
U.S. Appl. No. 16/298,721, filed Mar. 11, 2019, Method for Triggering Events in a Video. |
“U.S. Appl. No. 16/298,721, Response filed Apr. 23, 2020 to Final Office Action dated Mar. 6, 2020”, 11 pgs. |
“U.S. Appl. No. 16/298,721, Advisory Action dated May 12, 2020”, 3 pgs. |
“U.S. Appl. No. 16/298,721, Non Final Office Action dated Jul. 24, 2020”, 80 pgs. |
“U.S. Appl. No. 16/277,750, Non Final Office Action dated Aug. 5, 2020”, 8 pgs. |
“U.S. Appl. No. 16/298,721, Examiner Interview Summary dated Oct. 20, 2020”, 3 pgs. |
“U.S. Appl. No. 16/298,721, Response filed Oct. 22, 2020 to Non Final Office Action dated Jul. 24, 2020”, 13 pgs. |
“U.S. Appl. No. 16/277,750, Response filed Nov. 5, 2020 to Non Final Office Action dated Aug. 5, 2020”, 27 pgs. |
“U.S. Appl. No. 16/298,721, Notice of Allowance dated Nov. 10, 2020”, 5 pgs. |
“U.S. Appl. No. 14/661,367, Non Final Office Action dated May 5, 2015”, 30 pgs. |
“U.S. Appl. No. 14/661,367, Notice of Allowance dated Aug. 31, 2015”, 5 pgs. |
“U.S. Appl. No. 14/661,367, Response filed Aug. 5, 2015 to Non Final Office Action dated May 5, 2015”, 17 pgs. |
“U.S. Appl. No. 14/987,514, Final Office Action dated Sep. 26, 2017”, 25 pgs. |
“U.S. Appl. No. 14/987,514, Non Final Office Action dated Jan. 18, 2017”, 35 pgs. |
“U.S. Appl. No. 14/987,514, Notice of Allowance dated Jun. 29, 2018”, 9 pgs. |
“U.S. Appl. No. 14/987,514, Response filed Feb. 26, 2018 to Final Office Action dated Sep. 26, 2017”, 15 pgs. |
“U.S. Appl. No. 14/987,514, Response filed Jul. 18, 2017 to Non Final Office Action dated Jan. 18, 2017”, 15 pgs. |
“U.S. Appl. No. 16/141,588, Advisory Action dated Jan. 27, 2021”, 3 pgs. |
“U.S. Appl. No. 16/141,588, Advisory Action dated Jul. 20, 2020”, 3 pgs. |
“U.S. Appl. No. 16/141,588, Ex Parte Quayle Action mailed Jun. 25, 2021”, 4 pgs. |
“U.S. Appl. No. 16/141,588, Examiner Interview Summary dated Apr. 22, 2021”, 2 pgs. |
“U.S. Appl. No. 16/141,588, Final Office Action dated Apr. 7, 2020”, 34 pgs. |
“U.S. Appl. No. 16/141,588, Final Office Action dated Nov. 16, 2020”, 35 pgs. |
“U.S. Appl. No. 16/141,588, Non Final Office Action dated Mar. 10, 2021”, 37 pgs. |
“U.S. Appl. No. 16/141,588, Non Final Office Action dated Aug. 27, 2020”, 34 pgs. |
“U.S. Appl. No. 16/141,588, Non Final Office Action dated Dec. 9, 2019”, 25 pgs. |
“U.S. Appl. No. 16/141,588, Notice of Allowance dated Oct. 20, 2021”, 5 pgs. |
“U.S. Appl. No. 16/141,588, Response filed Jan. 18, 2021 to Final Office Action dated Nov. 16, 2020”, 10 pgs. |
“U.S. Appl. No. 16/141,588, Response filed Mar. 6, 2020 to Non Final Office Action dated Dec. 9, 2019”, 11 pgs. |
“U.S. Appl. No. 16/141,588, Response filed Jun. 9, 2021 to Non Final Office Action dated Mar. 10, 2021”, 10 pgs. |
“U.S. Appl. No. 16/141,588, Response filed Jul. 7, 2020 to Final Office Action dated Apr. 7, 2020”, 12 pgs. |
“U.S. Appl. No. 16/141,588, Response filed Sep. 27, 2021 to Ex Parte Quayle Action mailed Jun. 25, 2021”, 8 pgs. |
“U.S. Appl. No. 16/141,588, Response filed Oct. 13, 2020 to Non Final Office Action dated Aug. 27, 2020”, 12 pgs. |
“U.S. Appl. No. 16/277,750, Notice of Allowance dated Nov. 30, 2020”, 5 pgs. |
“U.S. Appl. No. 16/277,750, PTO Response to Rule 312 Communication dated Mar. 30, 2021”, 2 pgs. |
“U.S. Appl. No. 16/277,750, Supplemental Notice of Allowability dated Dec. 28, 2020”, 2 pgs. |
“U.S. Appl. No. 16/298,721, PTO Response to Rule 312 Communication dated Feb. 4, 2021”, 2 pgs. |
“U.S. Appl. No. 16/548,279, Advisory Action dated Jul. 23, 2021”, 3 pgs. |
“U.S. Appl. No. 16/548,279, Final Office Action dated May 21, 2021”, 24 pgs. |
“U.S. Appl. No. 16/548,279, Non Final Office Action dated Mar. 1, 2021”, 26 pgs. |
“U.S. Appl. No. 16/548,279, Non Final Office Action dated Aug. 4, 2021”, 23 pgs. |
“U.S. Appl. No. 16/548,279, Response filed May 5, 2021 to Non Final Office Action dated Mar. 1, 2021”, 11 pgs. |
“U.S. Appl. No. 16/548,279, Response filed Jul. 16, 2021 to Final Office Action dated May 21, 2021”, 10 pgs. |
“U.S. Appl. No. 16/749,708, Non Final Office Action dated Jul. 30, 2021”, 29 pgs. |
“U.S. Appl. No. 14/987,514, Preliminary Amendment filed Jan. 4, 2016”, 3 pgs. |
“Chinese Application Serial No. 201680028853.3, Office Action dated Apr. 2, 2021”, w/English translation, 10 pgs. |
“Chinese Application Serial No. 201680028853.3, Office Action dated May 6, 2020”, w/English Translation, 22 pgs. |
“Chinese Application Serial No. 201680028853.3, Office Action dated Aug. 19, 2019”, w/English Translation, 20 pgs. |
“Chinese Application Serial No. 201680028853.3, Office Action dated Dec. 1, 2020”, w/English Translation, 20 pgs. |
“Chinese Application Serial No. 201680028853.3, Response filed Jun. 23, 2020 to Office Action dated May 6, 2020”, w/ English Claims, 17 pgs. |
“Chinese Application Serial No. 201680028853.3, Response filed Dec. 6, 2019 to Office Action dated Aug. 19, 2019”, w/ English Claims, 16 pgs. |
“Chinese Application Serial No. 201680028853.3, Response filed Feb. 4, 2021 to Office Action dated Dec. 1, 2020”, w/ English Claims, 17 pgs. |
“European Application Serial No. 16716975.4, Communication Pursuant to Article 94(3) EPC dated Mar. 31, 2020”, 8 pgs. |
“European Application Serial No. 16716975.4, Response filed May 4, 2018 to Communication pursuant to Rules 161(1) and 162 EPC dated Oct. 25, 2017”, w/ English Claims, 116 pgs. |
“European Application Serial No. 16716975.4, Response Filed Jul. 31, 2020 to Communication Pursuant to Article 94(3) EPC dated Mar. 31, 2020”, 64 pgs. |
“European Application Serial No. 16716975.4, Summons to Attend Oral Proceedings mailed Apr. 16, 2021”, 11 pgs. |
“European Application Serial No. 16716975.4, Summons to Attend Oral Proceedings mailed Sep. 15, 2021”, 4 pgs. |
“European Application Serial No. 16716975.4, Written Submissions filed Aug. 10, 2021 to Summons to Attend Oral Proceedings mailed Apr. 16, 2021”, 62 pgs. |
“International Application Serial No. PCT/US2016/023046, International Preliminary Report on Patentability dated Sep. 28, 2017”, 8 pgs. |
“International Application Serial No. PCT/US2016/023046, International Search Report dated Jun. 29, 2016”, 4 pgs. |
“International Application Serial No. PCT/US2016/023046, Written Opinion dated Jun. 29, 2016”, 6 pgs. |
“Korean Application Serial No. 10-2017-7029496, Notice of Preliminary Rejection dated Jan. 29, 2019”, w/English Translation, 11 pgs. |
“Korean Application Serial No. 10-2017-7029496, Response filed Mar. 28, 2019 to Notice of Preliminary Rejection dated Jan. 29, 2019”, w/ English Claims, 28 pgs. |
“Korean Application Serial No. 10-2019-7029221, Notice of Preliminary Rejection dated Jan. 6, 2020”, w/ English Translation, 13 pgs. |
“Korean Application Serial No. 10-2019-7029221, Response filed Mar. 6, 2020 to Notice of Preliminary Rejection dated Jan. 6, 2020”, w/ English Claims, 19 pgs. |
“Korean Application Serial No. 10-2020-7031217, Notice of Preliminary Rejection dated Jan. 21, 2021”, w/ English Translation, 9 pgs. |
“Korean Application Serial No. 10-2020-7031217, Response filed May 6, 2021 to Notice of Preliminary Rejection dated Jan. 21, 2021”, w/ English Claims, 20 pgs. |
Kuhl, Annika, et al., “Automatic Fitting of a Deformable Face Mask Using a Single Image”, Computer Vision/Computer Graphics Collaboration Techniques, Springer, Berlin, (May 4, 2009), 69-81. |
Pham, Hai, et al., “Hybrid On-line 3D Face and Facial Actions Tracking in RGBD Video Sequences”, International Conference on Pattern Recognition, IEEE Computer Society, US, (Aug. 24, 2014), 4194-4199. |
Viola, Paul, et al., “Rapid Object Detection using a Boosted Cascade of Simple Features”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2001), 511-518. |
“U.S. Appl. No. 16/141,588, Corrected Notice of Allowability dated Oct. 26, 2021”, 2 pgs. |
“U.S. Appl. No. 16/141,588, Corrected Notice of Allowability dated Dec. 1, 2021”, 2 pgs. |
“U.S. Appl. No. 16/141,588, Notice of Allowance dated Nov. 16, 2021”, 5 pgs. |
“U.S. Appl. No. 16/548,279, Advisory Action dated Jan. 13, 2022”, 4 pgs. |
“U.S. Appl. No. 16/548,279, Final Office Action dated Nov. 12, 2021”, 31 pgs. |
“U.S. Appl. No. 16/548,279, Response filed Jan. 5, 2022 to Final Office Action dated Nov. 12, 2021”, 12 pgs. |
“U.S. Appl. No. 16/548,279, Response filed Nov. 1, 2021 to Non Final Office Action dated Aug. 4, 2021”, 11 pgs. |
“U.S. Appl. No. 16/749,708, Final Office Action dated Nov. 15, 2021”, 35 pgs. |
“U.S. Appl. No. 16/749,708, Notice of Allowance dated Jan. 21, 2022”, 13 pgs. |
“U.S. Appl. No. 16/749,708, Response filed Jan. 7, 2022 to Final Office Action dated Nov. 15, 2021”, 11 pgs. |
“U.S. Appl. No. 16/749,708, Response filed Oct. 28, 2021 to Non Final Office Action dated Jul. 30, 2021”, 12 pgs. |
“U.S. Appl. No. 17/248,812, Non Final Office Action dated Nov. 22, 2021”, 39 pgs. |
“Chinese Application Serial No. 201680028853.3, Notice of Reexamination dated Nov. 25, 2021”, w/ English translation, 36 pgs. |
Forlenza, Lidia, et al., “Real Time Corner Detection for Miniaturized Electro-Optical Sensors Onboard Small Unmanned Aerial Systems”, Sensors, 12(1), (2012), 863-877. |
“U.S. Appl. No. 16/548,279, Non Final Office Action dated Feb. 17, 2022”, 37 pgs. |
“U.S. Appl. No. 17/248,812, Notice of Allowance dated Mar. 23, 2022”, 5 pgs. |
“U.S. Appl. No. 17/248,812, Response filed Feb. 18, 2022 to Non Final Office Action dated Nov. 22, 2021”, 12 pgs. |
Li, Yongqiang, et al., “Simultaneous Facial Feature Tracking and Facial Expression Recognition”, IEEE Transactions on Image Processing, 22(7), (Jul. 2013), 2559-2573. |
“U.S. Appl. No. 16/749,708, Notice of Allowance dated May 13, 2022”, 5 pgs. |
“U.S. Appl. No. 16/548,279, Response filed May 16, 2022 to Non Final Office Action dated Feb. 17, 2022”, 15 pgs. |
“U.S. Appl. No. 16/548,279, Notice of Allowance dated Jun. 3, 2022”, 31 pgs. |
“U.S. Appl. No. 16/548,279, Supplemental Notice of Allowability dated Jun. 15, 2022”, 2 pgs. |
“U.S. Appl. No. 17/248,812, Notice of Allowance dated Jul. 29, 2022”, 5 pgs. |
Provisional Application Data

Number | Date | Country
---|---|---
61/936,016 | Feb. 2014 | US
Parent Case Data

Relation | Number | Date | Country
---|---|---|---
Parent | 15/921,282 | Mar. 2018 | US
Child | 16/732,858 | | US
Parent | 14/314,324 | Jun. 2014 | US
Child | 15/921,282 | | US