System and method for generating a synthetic video stream

Abstract
A system and method for receiving an ordered set of images and analyzing the images to determine at least one position in space and at least one motion vector in space and time for at least one object represented in the images is disclosed. Using these vectors, a four dimensional model of at least a portion of the information represented in the images is formulated. This model generally obeys the laws of physics, though aberrations may be imposed. The model is then exercised with an input parameter, which, for example, may represent a different perspective than the original set of images. The four dimensional model is then used to produce a modified set of ordered images in dependence on the input parameter and optionally the set of images, e.g., if only a portion of the data represented in the images is modeled. The set of images may then be rendered on a display device.
Description
FIELD OF THE INVENTION

This invention is directed at a system and method for processing and displaying image information.


BACKGROUND OF THE INVENTION

Modern televisions (and the source material they present) typically display two-dimensional (2D) images. The images are presented as a series of frames from a single perspective. Recently, however, several advanced televisions have been developed. For example, three-dimensional (3D) televisions enhance the viewing experience when used with three-dimensional video sources. However, relatively few movies are encoded in three-dimensional format. Also, currently available cable and telephone company-based broadcast services do not provide three-dimensional content (except anaglyph-encoded content), thereby reducing the value of three-dimensional televisions to the user.


A technology has evolved in computer video graphics involving so-called graphics processor units (GPUs), which typically employ single instruction multiple data (SIMD) technology. Using a SIMD processor, a powerful simplified processor architecture exploiting parallel processing is provided, in which multiple processing elements perform the same operation (“instruction”) on multiple data simultaneously. Many video cards use SIMD because similar transformations might need to occur to multiple pixels simultaneously. See Young, U.S. Pat. No. 6,429,903, incorporated herein by reference.


Hoffberg et al., U.S. Pat. No. 6,400,996, expressly incorporated herein by reference, teaches comparing two-dimensional video images to three dimensional models.


Methods of converting two dimensional images and sets of images to three dimensions are known in the art. For example, Cipolla discusses an algorithm that can be used to generate a 3D model from two or more uncalibrated images. Cipolla's algorithm comprises the following four steps:


“1. We first determine a set of primitives—segments and cuboids—for which parallelism and orthogonality constraints are derived. These primitives are precisely localized in the image using the image gradient information.


“2. The next step concerns the camera calibration: the intrinsic parameters of the camera are determined for each image. This is done by determining the vanishing points associated with parallel lines in the world. Three mutually orthogonal directions are exploited.


“3. The motion (a rotation and a translation) between viewpoints are computed. The motion combined with the knowledge of the camera parameters allows the recovery of the perspective projection matrices for each viewpoint.


“4. The last step consists in using these projection matrices to find more correspondences between the images and then compute 3D textured triangles that represent a model of the scene.”


Cipolla, “3D Model Acquisition from Uncalibrated Images,” IAPR Workshop on Machine Vision Applications, Makuhari, Japan, 1998.


See also: Fehn, “An Evolutionary and Optimized Approach on 3D-TV,” Proc. of IBC, 2002.


SUMMARY OF THE INVENTION

The present system and method permits users to have more interaction with their televisions and with broadcast programs. This is accomplished by coupling a graphics card or video card with a standard television connected to a cable or broadcast network. A television is provided that receives a video signal, and comprises a video graphic processor. The video graphic processor allows for the processing of incoming graphics from the video program. In a preferred embodiment, a SIMD processor which forms part of the video graphic processor is used to perform mathematical transforms on the video data to derive a moving 3D model (i.e., a 4D model) and permit modification of, and display of, a rendered version of the model. In one embodiment, the last several minutes (or hours) of a broadcast program are stored, and the user is permitted to perform trick play functions, such as rewind, and make modifications to the events in the program and to play out “what if” scenarios. That is, a model is formed of a set of objects in the video, and the model is modified by the user, and the result is a synthesis of the actual video program with the computationally derived alteration.


Many uses of this invention will be apparent to skilled persons in the art. For example, while watching sports games, fans often hypothesize that the game might have turned out differently if a player had made a slightly different move. In one embodiment, the instant invention will allow the fans to test such hypotheses.


In one embodiment, the television allows for the viewing of broadcast shows provided via cable, via a fiber optic network, over the Internet, or through a similar technology. In addition, the television can be connected either over a wire or wirelessly with a non-transitory computer memory storing videos, such as a DVD, CD, hard disk, or USB flash memory. The television comprises a video graphic processor which is preferably on a separate card having a PCIe-x16 bus, connected to a main processor on a motherboard. The television stores recently-played scenes and allows the viewer to rewind into the recently-played scenes and modify some of the movements taking place therein. Optionally, the television is a three-dimensional television.


One embodiment provides a method implemented on a computing machine or processor. The processor receives as input information representing a first ordered set of images. In one embodiment, these are two-dimensional images. In another embodiment, these are three-dimensional images. In one embodiment, these could be a set of sequential image files in JPG or GIF89a format. Alternatively, these could be single or multiple ordered video files in MP4 (MPEG-4 video file) or MOV (Apple QuickTime movie) format. Any set of ordered image files, or single or multiple video files, will typically suffice. Since one embodiment formulates a 4D model from the ordered set of images, a 3D image feed is preferred, but not required. The ordered set of images is analyzed to determine at least one position in more than two dimensions of space for an object represented in the images and at least one motion vector in two dimensions of space and one dimension of time. In the next step, a four-dimensional representation of the information, based on the at least one position and the at least one motion vector, is generated. The processor receives as input a parameter, which may be a modification of at least one spatial dimension at a first time. In response, the processor modifies the four-dimensional representation at the first time. The processor also modifies the four-dimensional representation at a second time, different from the first time, in response to the received parameter or modification and in accordance with at least one law of physics, thereby rendering information representing a second ordered set of images.
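By way of illustration only, the following sketch (in Python with NumPy; the object track, coordinates, and frame rate are assumed values, not data from any actual embodiment) shows how a motion vector for a single tracked object might be estimated from its positions in successive frames by finite differences. Segmenting and tracking the object itself is not shown.

    import numpy as np

    # Hypothetical tracked centroids of one object (e.g., a ball) in three
    # consecutive frames, in world coordinates (x, y, z) in meters.
    frame_times = np.array([0.00, 0.04, 0.08])            # 25 fps timestamps
    positions = np.array([[1.00, 1.50, 4.0],
                          [1.08, 1.55, 4.0],
                          [1.16, 1.58, 4.0]])

    # First-order estimate of the motion vector: displacement per unit time
    # between successive observations, averaged over the available frames.
    velocities = np.diff(positions, axis=0) / np.diff(frame_times)[:, None]
    motion_vector = velocities.mean(axis=0)               # (vx, vy, vz) in m/s
    print("estimated motion vector (m/s):", motion_vector)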


For example, if at time T=1, a modification was made causing a child to drop a ball, then at time T=2, the ball might be changed from remaining in the child's hands to being on the ground. This change would be in response to the modification and in accordance with a law of physics, the law of gravity. Finally, the processor would provide, as output, the information representing the second ordered set of images derived by exercising the 4D model and rendering it from a perspective. Preferably, this information would be in a video format, such as MP4 (MPEG-4 video file) or MOV (Apple QuickTime movie) format. Alternatively, a set of image files, such as JPG or GIF files, could be used. The second ordered set of images could be 2D or 3D.
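As a worked example of the physics step, the following sketch computes the ball's position one second after release under gravity alone; the release height and time interval are assumed purely for illustration.

    # Free fall under gravity, ignoring air resistance (a deliberate simplification).
    g = 9.81                 # gravitational acceleration, m/s^2
    release_height = 1.2     # assumed height of the child's hand, m
    dt = 1.0                 # assumed interval between time T=1 and T=2, s

    drop = 0.5 * g * dt ** 2                      # distance fallen in dt seconds (~4.9 m)
    new_height = max(release_height - drop, 0.0)
    print(new_height)                             # 0.0: the ball has reached the ground by T=2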


Persons skilled in the art will recognize many situations in which this could be useful. For example, the first ordered set of images could represent a part of a sports game, such as a key college basketball play. The model would then be useful to determine what would have happened, for example, had one of the players passed the ball instead of attempting to shoot it into the basket. In another embodiment, the invention could be used to model natural disasters, such as earthquakes or tornadoes and the efficiency of safety and protection mechanisms against such disasters. In yet another embodiment, the model could be used to represent car accidents and to ascertain whether an accident could have been avoided had one of the drivers had a better reaction time or chosen a different response to avoid the imminent collision.


The processor used to implement the invention can be any processor having a memory, an input, and an output. One embodiment of this invention can be implemented on a substantially arbitrary laptop or desktop computer running Microsoft Windows Vista, Microsoft Windows 7, Apple Mac OS X, or Linux. Another embodiment runs on a digital video recorder (DVR) that is connected to a television. The DVR stores the last several minutes or hours of the program currently being viewed and allows the user to make changes to the events that took place and to run experimental scenarios through the interface of the remote control and the television screen.


In one more embodiment, the invention comprises machine instructions for the method described above stored on a non-transitory computer readable medium. This non-transitory computer-readable medium could be any device on which computer instructions or data are stored, for example, a USB flash memory drive, a CD, a DVD, or a RAM.


It is therefore an object to provide a method and system for implementation, comprising receiving information representing a first ordered set of images; analyzing the information, for example with an automated processor, to determine at least one position in more than two dimensions of space and at least one motion vector in at least two dimensions of space and one dimension of time of an object represented in the first ordered set of images; automatically developing a four dimensional model comprising greater than two spatial dimensions and one temporal dimension, obeying at least one predetermined physical law, of at least a portion of the information based on the at least one position and the at least one motion vector; receiving at least one parameter representing at least one spatial dimension for at least one instant in time; analyzing, using an automated processor, the four dimensional model in response to the received at least one parameter; and rendering at least information represented in the four dimensional model dependent on said analyzing.


It is also an object to provide a system and method for use, comprising a first input configured to receive information representing a first ordered set of images; at least one processor, configured to: analyze the information to determine at least one position in more than two dimensions of space and at least one motion vector in at least two dimensions of space and one dimension of time of an object represented in the first ordered set of images; automatically develop a four dimensional model comprising greater than two spatial dimensions and one temporal dimension, obeying at least one predetermined physical law, of at least a portion of the information based on the at least one position and the at least one motion vector; and analyze the four dimensional model in response to at least one parameter representing at least one spatial dimension for at least one instant in time; and an output configured to present a rendering of the four dimensional model dependent on said analyzing.


The first ordered set of images may comprise a set of two-dimensional images (e.g., a video or audio-visual presentation), or a set of three-dimensional images. They may represent a part of a sports game, a natural disaster, or an accident (e.g., an automobile accident). The first ordered set of images may comprise forensic data, in which case the received parameter may comprise a modified viewer perspective. The system and method may further comprise automatically generating the parameter based on a viewer perspective with respect to a display device.


The rendering may comprise rendering at least information represented in the four-dimensional model in conjunction with a portion of the information representing the first ordered set of images.


The system may automatically modify, for the purpose of rendering the second set of images, the at least one physical law in response to detecting an inconsistency in a motion of objects in the first set of images. The at least one physical law comprises, for example, one of gravity, conservation of momentum, conservation of energy, and conservation of mass; in realistic modeling, each of these laws may be respected.


The four-dimensional model may comprise a volume and mass model corresponding to each of a plurality of physical objects represented in the first ordered set of images, a texture mapping model configured to impose appropriate surface textures derived and extrapolated from the first ordered set of images on each volume and mass model, and an interaction model representing interactions between the volume and mass models.
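For illustration, one possible (non-limiting) organization of these three components in code is sketched below; the class and field names are hypothetical and do not correspond to any particular library.

    from dataclasses import dataclass, field

    @dataclass
    class VolumeMassModel:
        """Approximate geometry and mass of one tracked physical object."""
        object_id: str
        mass_kg: float
        bounding_box: tuple        # (width, height, depth) in meters
        position: tuple            # (x, y, z) in meters
        velocity: tuple            # (vx, vy, vz) in m/s

    @dataclass
    class TextureModel:
        """Surface textures derived (and extrapolated) from the source images."""
        object_id: str
        texture_atlas: bytes                  # packed image data for visible surfaces
        fill_rule: str = "extrapolate"        # how never-seen surfaces are synthesized

    @dataclass
    class InteractionModel:
        """Pairwise contact behavior between volume-and-mass models."""
        restitution: dict = field(default_factory=dict)   # (id_a, id_b) -> bounciness
        friction: dict = field(default_factory=dict)      # (id_a, id_b) -> coefficient

    @dataclass
    class FourDimensionalModel:
        objects: dict              # time step -> {object_id: VolumeMassModel}
        textures: dict             # object_id -> TextureModel
        interactions: InteractionModel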


The at least one parameter may comprise a modification of at least one spatial or temporal relationship represented in the four-dimensional model.


A human user-machine interface may be provided, configured to provide interactive feedback to the human user with respect to changes in the four-dimensional model, based on changes in the at least one parameter.


The four-dimensional model may be developed based on both past and future images, or past images only, of the first ordered set of images with respect to the at least one instant of time.


It is another object to provide a system and method, comprising storing in a memory a computer model of at least one object in a physical scene, developed based on the physical scene, the computer model comprising at least three physical dimensions and a time dimension, and supporting at least one of gravity, conservation of momentum, conservation of energy, and conservation of mass; extracting a set of motion parameters for the at least one object in the computer model from a series of images; at least one of modifying and extrapolating the parameters for the at least one object; and rendering the at least one object with an automated processor, using the at least one of modified and extrapolated parameters, as a modified or extrapolated scene.


A further object provides a system and method, comprising generating a computer model representing four dimensions in accordance with the laws of physics, of at least one object in a physical scene represented in a series of images; defining a set of motion parameters for the at least one object in the computer model independent of the series of images; and rendering the at least one object with an automated processor, in accordance with the defined set of motion parameters.


The physical scene may be represented as an ordered set of two or three-dimensional images.


The physical scene may be derived from a sports game, and the at least one object comprises a human player. The at least one object may also comprise a projectile object.


The at least one of modified and extrapolated parameters may result in a rendering of the at least one object from a different perspective than that from which the model is derived.


The computer model may be generated based on at least one of an audio, acoustic and video representation of the scene.


The at least one of modified and extrapolated parameters may comprise altering a parameter associated with at least one of gravity, momentum, energy, and mass.


The physical scene may be analyzed to determine at least one inconsistency with a law of physics.


The computer model may comprise a four-dimensional model that represents a volume and a mass corresponding to each of a plurality of physical objects represented in an ordered set of images.


The computer model may comprise a texture mapping model configured to impose appropriate surface textures derived and extrapolated from an ordered set of images.


The computer model may comprise an interaction model representing physical contact interactions between at least two different objects.


A human user-machine interface may be provided, configured to provide interactive feedback to the human user with respect to an effect of the modifying and extrapolating of the motion parameters for the at least one object on the rendering of the at least one object.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a system according to one embodiment of the invention, where the DVR and video card are external to the television.



FIG. 2 illustrates a set of television screen transforms, in accordance with at least one law of physics, according to one embodiment of the invention.



FIG. 3 illustrates a system according to one embodiment of the invention, where the DVR and video card are internal to the television.



FIG. 4 illustrates a computer system that could be used to implement the invention.



FIG. 5 is a flowchart of an implementation of one embodiment of the invention that could be used to model a sports game.



FIGS. 6A and 6B illustrate the operation of a touch screen machine, in accordance with one embodiment of the invention.



FIG. 7 is a flow chart of an implementation of one embodiment of the invention that could be used to model a car accident.



FIG. 8 is an illustration of 3D display technology according to one embodiment of the invention.



FIG. 9 is an implementation of a 3D television broadcast system according to one embodiment of the invention.



FIG. 10 is a block diagram of a digital video recorder (DVR)-integrated display device having a picture-in-picture (PIP) function according to an exemplary embodiment of the invention.



FIG. 11 is a block diagram of an image multiplexing unit according to an exemplary embodiment of the invention.



FIG. 12 is a block diagram of a recording/reproduction control unit according to an exemplary embodiment of the invention.



FIG. 13 is a block diagram of a picture frame output unit according to an exemplary embodiment of the invention.



FIG. 14 is a block diagram of an output image generating unit according to an exemplary embodiment of the invention.



FIG. 15 illustrates a mechanism by which a 3D touch screen device ascertains the position of a user, in accordance with one embodiment of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Example 1

Sports Game Modeling Embedded in a Video Game System


One embodiment of this invention involves a substantially arbitrary television connected to a video card located inside a video game system, such as a personal computer running Microsoft Windows 7, Linux, or Unix, an Apple Macintosh computer running OS X, a Sony PlayStation, a Microsoft Xbox 360, or a Nintendo Wii. The connection between the television and the video game system could be wired. Alternatively, a wireless connection could be used.



FIG. 1 illustrates a system embodying this invention. A television 110 is connected by wire to a video game system 120, comprising a video graphic processor device.


Video graphic processors are manufactured by Nvidia Corporation, ATI, and Intel Corporation, among others, and are preinstalled in many laptop computers, desktop computers, and video game systems. A video card or graphics card is an expansion card whose function is to generate and output images to a display. Many video cards offer added functions, such as accelerated rendering of 3D scenes and 2D graphics, video capture, TV-tuner adapter, MPEG-2/MPEG-4 decoding, light pen, TV output, or the ability to connect multiple monitors. Other modern high-performance video cards are used for more graphically demanding purposes, such as PC games.


As an alternative to having a video card, video hardware can be integrated on the motherboard or even with the central processing unit as an integral part or separate core.


The system with the video card could be connected to the monitor through several known standards. For example, a video graphics array (VGA) analog-based standard could be used. Alternatively, a digital visual interface (DVI) could be provided. In one embodiment, a high-definition multimedia interface (HDMI) is used. Persons skilled in the art will undoubtedly recognize other methods of forming this connection. For example, a wireless connection over a Bluetooth, WiFi, infrared or microwave network or other wired or wireless connections might be used.


The video game system is connected by wire or wireless communication to a handheld user control panel 130. The control panel could include a keyboard, mouse, game controller, joystick, or other buttons such that the user 140 can interact with the television to make changes to the spatial dimensions of the items shown on the screen. The video game system also comprises a digital video recorder (DVR) 150, which continually stores, in a circular buffer or other buffer type, the program being viewed, e.g., for up to 4 hours. DVRs are manufactured by TiVo, Inc. in Alviso, Calif., as well as by several other businesses, and are described in detail in Barton et al., U.S. Pat. No. 6,233,389, incorporated herein by reference. In another embodiment, the DVR 150 could be external to the video game system 120. However, the DVR 150 is preferably communicatively coupled to the video game system 120, such that the video game system can obtain information about what was recently played on the television. DVRs that are integrated with LCD screens are also known in the art. See, e.g., Park, US App. 2009/0244404, expressly incorporated herein by reference. An embodiment could be implemented by coupling Park's DVR/LCD screen combination with an external video card, for example in a video game playing system. More generally, a video input is received and a sequence of frames is stored for joint analysis. The user is provided with two interfaces: a first interface configured to view the video program (usually with a synchronized audio program as well) and a second interface to manipulate the system, e.g., to input parameters regarding an intended modification of the objects represented in the video stream. According to another embodiment, the input parameter is automatically derived from a position or perspective of the viewer with respect to the display (e.g., the first user interface), and therefore the second interface is not an interactive feedback visual interface, such as a graphic user interface (GUI).
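The circular buffer behavior described above can be sketched as follows; the frame rate and the four-hour capacity are taken from the example above, and the function names are illustrative only.

    from collections import deque

    FPS = 30
    BUFFER_SECONDS = 4 * 60 * 60                  # keep at most the last 4 hours
    frame_buffer = deque(maxlen=FPS * BUFFER_SECONDS)

    def on_new_frame(frame):
        """Append the newest decoded frame; once the buffer is full, the oldest
        frame is discarded automatically."""
        frame_buffer.append(frame)

    def frame_at(seconds_back):
        """Return the frame from `seconds_back` seconds ago, if still buffered."""
        index = len(frame_buffer) - 1 - int(seconds_back * FPS)
        return frame_buffer[index] if index >= 0 else None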


In other embodiments of the invention (not illustrated) any of the wired connections depicted in FIG. 1 may be replaced by wireless connections using Bluetooth, infrared, microwave, WiFi (IEEE-802.11 a/b/g/n etc.) or similar technology. In yet another embodiment, the television 110, video game system 120, and DVR 150 are all one, single unitary system. In yet another embodiment, the control 130 might be replaced either by buttons on the television or video game system or by a touch screen. In yet another embodiment, the video game system may be replaced with a computer running software designed to implement an embodiment of the invention. The computer could be running Microsoft Windows 7, Apple OS X, Linux, or most other currently available operating systems.


The embodiment of a television with a touch screen is discussed in detail below. Basic touch screen systems with two-dimensional screens are known in the art. For example, ELO TouchSystems provides capacitive, resistive, and acoustic devices. The Apple iPhone has a touch screen user interface, as do various other smartphones and consumer electronic appliances. A two-dimensional television can implement a similar touch screen device.


However, the touch screen presents a problem for three-dimensional televisions. In three-dimensional screens, unlike two-dimensional screens, each point on the screen does not map to a single image. Rather, the image is based on three factors: (1) the location of the viewer's left eye, (2) the location of the viewer's right eye, and (3) the position on the screen. Thus, the three-dimensional television must locate the left eye and right eye of the viewer in order to ascertain which object the viewer is attempting to manipulate with the touch screen. The left and right eyes of the viewer can be located using a camera communicatively connected to the screen. The distance to the eyes can then be calculated using the focal length, thereby yielding a representation of the position of the person's eyes. An analysis may also be performed to determine the gaze focus of the viewer. If, for example, the user has amblyopia, then the axis of a single eye may be employed to determine the user's object of attention.
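One simple way to estimate the viewer's distance from a single camera, sketched below, applies the pinhole-camera relation to the spacing of the detected eyes; the focal length and the nominal interpupillary distance are assumed values for illustration.

    # Pinhole-camera distance estimate from the detected eye positions.
    FOCAL_LENGTH_PX = 1000.0      # assumed camera focal length, in pixels
    IPD_M = 0.063                 # nominal adult interpupillary distance, ~63 mm

    def viewer_distance(left_eye_px, right_eye_px):
        """Estimate the viewer's distance (in meters) from the camera."""
        dx = right_eye_px[0] - left_eye_px[0]
        dy = right_eye_px[1] - left_eye_px[1]
        pixel_separation = (dx * dx + dy * dy) ** 0.5
        return FOCAL_LENGTH_PX * IPD_M / pixel_separation

    print(viewer_distance((600, 400), (663, 402)))   # roughly 1.0 m for a 63 px separation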


To implement the present invention, the DVR 150 would store the last several minutes of the program being watched. If the user 140 views a scene that he would like to remodel, he presses an appropriate button or enters the appropriate command in control 130. The user is then able to rewind or fast forward through scenes stored on the DVR until he finds a scene that he wishes to modify. When he finds such a scene, the user may pause the program and, using the control 130, change at least one of the dimensions in the scene. For example, the user may create or modify a wind, or he could cause a person in the scene to drop an object instead of continuing to hold it.



FIG. 15 illustrates the three-dimensional touch screen idea in detail. A camera 1520 is placed on top of a three-dimensional screen 1510. The camera 1520 can ascertain the position and gaze of the eyes 1532 and 1534 of the user 1530. The screen knows where the user touches it through conventional touch screen technology. See, e.g., Heidal, U.S. Pat. No. 5,342,047, expressly incorporated herein by reference, describing a touch screen machine.


In another embodiment, if the three-dimensional television requires the viewer to wear special glasses, the glasses could have a position sensor (relative to the television) attached to them.



FIG. 6, copied from FIG. 10 in Ningrat, US App. 2010/0066701, expressly incorporated herein by reference, is a flowchart illustrating methods of implementing an exemplary process for identifying multiple touches in a multi array capacitance-based touch screen. In the following description, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine such that the instructions that execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed in the computer or on the other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.


Accordingly, blocks of the flowchart illustrations support combinations of elements or means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.


A method for determining valid or true touch point locations begins 1005 with monitoring and acquiring capacitance data from each active array included in the system. Using the collected data, a combined capacitance is calculated 1015 and compared to a predefined threshold. When the total exceeds 1020 the predefined threshold, a touch is declared to have occurred. If the total fails to exceed the threshold, the system continues to monitor total capacitance until circumstances are such that a touch occurs.


With the declaration of a touch occurring, capacitance data from each active array within the system is gained 1030 for further analysis. Within each array, active lines whose capacitance level exceeds a predefined threshold and whose capacitance level is greater than that of both adjacent lines are identified 1035 as local peaks. Once these local peaks are identified, the capacitance value of each adjacent line is compared to the peak value to determine whether the capacitance of the adjacent line is within a predefined range of the local peak. When an adjacent value is within the range, that adjacent line is considered part of the peak channel.


Having identified the peak channel 1035, a touch grouping is formed 1040 by combining lines adjacent to each peak channel. Capacitance values associated with each line of the touch grouping are used to determine 1045 estimated touch points using, for example, a weighted average statistical analysis. Based on the number of estimated touch points, a determination 1050 is made whether a single or multiple touch point condition exists. When a single touch occurs, each array will have only one estimated touch location. However, when any of the active line arrays produce more than one estimated touch point, the system assumes a dual or multiple touch has occurred. Thus, in a dual touch situation the above described process will produce for each active line array at least two estimated touch positions.
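A simplified sketch of the peak detection and weighted-average estimate for a single active-line array follows; the capacitance values, threshold, and grouping rule are assumptions for illustration and are not taken from the referenced application.

    def estimate_touch_points(capacitance, peak_threshold=50, adjacent_ratio=0.5):
        """Find local peaks in one line array and estimate a touch coordinate for
        each peak as the capacitance-weighted average of the peak line and any
        adjacent lines grouped with it."""
        points = []
        for i in range(1, len(capacitance) - 1):
            c = capacitance[i]
            if c > peak_threshold and c > capacitance[i - 1] and c > capacitance[i + 1]:
                group = [i]                                   # the peak channel
                if capacitance[i - 1] >= adjacent_ratio * c:  # group nearby adjacent lines
                    group.append(i - 1)
                if capacitance[i + 1] >= adjacent_ratio * c:
                    group.append(i + 1)
                total = sum(capacitance[j] for j in group)
                points.append(sum(j * capacitance[j] for j in group) / total)
        return points

    print(estimate_touch_points([5, 8, 60, 90, 55, 7, 6, 70, 100, 40, 5]))  # two touches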


From the estimated touch points of each array, possible touch point coordinate combinations are created 1055. As previously described, some of these combinations are ghost or false locations. In a dual touch, four possible coordinate combinations exist, of which two are false and two are correct. Using a coordinate relationship between the three active arrays, valid touch point coordinates are determined 1060. The valid touch point coordinates are compared to a previously stored set of valid touch point coordinates, and when the new touch points occur within a specific period of time and within a specific window of interest with respect to the previous touch points, touch point tracking can be invoked 1065.
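The elimination of ghost locations can be illustrated with a simplified consistency check. The sketch assumes a hypothetical third array oriented along a 45-degree diagonal, so that each true touch at (x, y) should also produce a response near (x + y)/sqrt(2) on that array; the geometry and tolerance are assumptions for illustration, not the specific arrangement of the referenced device.

    import itertools, math

    def resolve_dual_touch(x_peaks, y_peaks, diag_peaks, tol=0.03):
        """From two X estimates and two Y estimates (four candidate combinations),
        keep the pair of points whose 45-degree projections best match the peaks
        reported by the third (diagonal) array."""
        candidates = list(itertools.product(x_peaks, y_peaks))
        best, best_err = None, float("inf")
        for pair in itertools.combinations(candidates, 2):   # test each possible pairing
            projections = sorted((x + y) / math.sqrt(2) for x, y in pair)
            err = sum(abs(p - d) for p, d in zip(projections, sorted(diag_peaks)))
            if err < best_err:
                best, best_err = pair, err
        return best if best_err < tol * len(diag_peaks) else None

    # True touches at (0.2, 0.7) and (0.6, 0.4); the ghosts are (0.2, 0.4) and (0.6, 0.7).
    diag = [(0.2 + 0.7) / math.sqrt(2), (0.6 + 0.4) / math.sqrt(2)]
    print(resolve_dual_touch([0.2, 0.6], [0.7, 0.4], diag))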


Thereafter, the location of the valid touch points is output to a processor, screen, or other input/output terminal. As one skilled in the art will recognize, the touch screen of the present invention can be coupled with a variety of display and input mechanisms as well as computing systems. Using an underlying screen correlated with the touch screen, data inputs can be gained and results displayed. Finally, the newly defined touch point locations are stored in memory 1080 for touch point tracking purposes, ending the process.


As one skilled in the relevant art will recognize, the data produced by this method can be used by any type of microprocessor or similar computing system. For example, one or more portions of the present invention can be implemented in software. Software programming code which embodies the present invention is typically accessed by the microprocessor from long-term storage media of some type, such as a CD-ROM drive, hard drive, or flash memory. The software programming code may be embodied on any of a variety of known media for use with a data processing system, such as a flash memory stick, diskette, hard drive, CD-ROM, or the like. The code may be distributed on such media, or may be distributed from the memory or storage of one computer system over a network of some type to other computer systems for use by such other systems. Alternatively, the programming code may be embodied in the memory and accessed by the microprocessor. The techniques and methods for embodying software programming code in memory, on physical media, and/or distributing software code via networks are well known.



FIG. 2 illustrates four screen shots. Screen shots 210 and 220 are from an original video of a boy and a girl playing with a ball. In Screen shot 210, which takes place at a first time, the boy tosses the ball to the girl. He directs the ball to the right and in a slightly upward direction, such that the force of gravity would cause the ball to be at approximately the height of the girl's arms when the ball reaches her. In screen shot 220, which takes place at a second time, which is later than the first time, the girl catches the ball, in response to the boy's toss.


In screen shot 230, the user decided to modify the scene at time 1 to cause the ball to be dropped instead of tossed by the boy. To do this, the user caused the boy to merely release the ball from his hand instead of tossing it. To do this, the user would have used the control 130 to select the boy's fingers and cause them to separate from the ball and to select the boy's hand and prevent it from making a toss.


As a result, the forces acting on the ball at time 1 in screen shot 230 are now different. In screen shot 210, there was an upward and rightward push by the boy as well as a downward force of gravity. Now there is only a force of gravity (and air resistance) acting on the ball. This causes a change in the motion vector of the ball at time 1. Instead of moving up and to the right as in screen shot 210, the ball will now fall straight down in screen shot 230. Also note that the girl will no longer be able to catch the ball. The video game system 120, knowing that the girl is acting to catch the ball, will no longer cause the girl to stretch out her arms in anticipation of the ball, as in screen shot 220. Instead, she will merely stand in her place. In one embodiment (not illustrated), the girl might run toward the boy in an attempt to catch the ball before it hits the ground. However, the girl might not do this if it is in violation of the rules of the game or if the processor implementing the invention decides that such running would be futile.


Screen shot 240 shows what the scene looks like at time 2. Notice that, instead of being tossed to and caught by the girl, as in screen shot 220, the ball has hit the ground and come to rest thereon. Therefore, the user's modification caused a change in the appearance of the scene in at least two times—time 1 and time 2. Note that a law of physics, here the law of gravity, was taken into account, to ascertain the ball's position at time 2. Gravity teaches that objects that are not attached to something fall to the ground. Here, the ball was not attached after the boy released it, so it fell to the ground.


A potential implementation difficulty here is that the scenes in the example are pictured in two dimensions on a two-dimensional television. Fehn teaches a method of displaying real images, which are viewed in three dimensions, on a three-dimensional TV. See C. Fehn, "3D TV Broadcasting," in O. Schreer, P. Kauff, and T. Sikora (Editors): 3D Videocommunication, John Wiley and Sons, Chichester, UK, 2005 (Fehn (2005)).


Under Fehn's stereoscopic view synthesis, depth-image-based rendering (DIBR) is defined as the process of synthesizing “virtual” views of a real-world scene from still or moving images and associated per-pixel depth information. Conceptually, this novel view generation can be understood as the following two-step process: At first, the original image points are reprojected into the three-dimensional world, utilizing the respective depth values. Thereafter, these intermediate space points are projected into the image plane of a “virtual” camera, which is located at the required viewing position. The concatenation of reprojection (2D-to-3D) and subsequent projection (3D-to-2D) is usually called 3D image warping.


Consider a system of two pin-hole cameras and an arbitrary 3D point M = (x, y, z, 1)^T with its projections m_l = (u_l, v_l, 1)^T and m_r = (u_r, v_r, 1)^T in the first and, respectively, the second view. Under the assumption that the world coordinate system equals the camera coordinate system of the first camera, the two perspective projection equations result in:

z_l m_l = K_l [I | 0] M
z_r m_r = K_r [R | t] M


where the 3×3 matrix R and the 3×1 vector t define the rotation and translation that transform the space point from the world coordinate system into the camera coordinate system of the second view, the two upper triangular 3×3 matrices K_l and K_r specify the intrinsic parameters of the first and, respectively, the second camera, and z_l and z_r describe the scene depth in each respective camera coordinate system.


Rearranging the above equation gives an affine representation of the 3D point M that is linearly dependent on its depth value z_l:

M = z_l K_l^-1 m_l


Substituting the above equations then leads to the classical affine disparity equation, which defines the depth-dependent relation between corresponding points in two images of the same scene:

z_r m_r = z_l K_r R K_l^-1 m_l + K_r t


This disparity equation can also be considered as a 3D image warping formalism, which can be used to create an arbitrary novel view from a known reference image. This requires nothing but the definition of the position t and the orientation R of a "virtual" camera relative to the reference camera, as well as the declaration of this synthetic camera's intrinsic parameters K. Then, if the depth values z of the corresponding 3D points are known for every pixel of the original image, the "virtual" view can, in principle, be synthesized by applying these equations to all original image points.
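The disparity equation above translates almost directly into code. The sketch below (Python with NumPy) warps a single pixel from the reference view into the "virtual" view; the intrinsic matrix, baseline, and depth value are assumed for the example.

    import numpy as np

    def warp_pixel(m_l, z_l, K_l, K_r, R, t):
        """Apply z_r * m_r = z_l * K_r R K_l^-1 m_l + K_r t and dehomogenize."""
        m_l = np.array([m_l[0], m_l[1], 1.0])                 # homogeneous pixel coordinates
        rhs = z_l * (K_r @ R @ np.linalg.inv(K_l) @ m_l) + K_r @ t
        return rhs[:2] / rhs[2], rhs[2]                       # (u_r, v_r) and depth z_r

    # Example: identical intrinsics, no rotation, and a purely horizontal baseline.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])
    R = np.eye(3)
    t = np.array([-0.05, 0.0, 0.0])       # virtual camera 5 cm to the right of the reference

    (u_r, v_r), z_r = warp_pixel((700.0, 400.0), z_l=2.0, K_l=K, K_r=K, R=R, t=t)
    print(u_r, v_r, z_r)                  # the pixel shifts left by f*|tx|/z = 25 pixels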


The simple 3D image warping concept described in the last section can of course also be used to create pairs of "virtual" views that together comprise a stereoscopic image. For that purpose, two "virtual" cameras are specified, which are separated by the interaxial distance t_c. To establish the so-called zero-parallax setting (ZPS), i.e., to choose the convergence distance z_c in the 3D scene, one of two different approaches, which are both in regular use in today's modern stereo cameras, can be chosen. With the "toed-in" method, the ZPS is adjusted by a joint inward rotation of the left- and right-eye cameras. In the so-called "shift-sensor" approach, a plane of convergence is established by a small shift h of the parallel positioned cameras' CCD sensors (Woods et al. 1993). Fehn (2005).



FIG. 8, copied from Fehn (2005), illustrates a “shift sensor” stereo camera and the respective three-dimensional reproduction on a stereoscopic or autostereoscopic 3D TV display.


In other embodiments, more complex scenes could be modeled and more complex changes could be made. For example, a full professional or college football game could be modeled. As a result, people would be able to test hypotheses, such as, "Michigan would have won the game had Jacobson not dropped the ball." In a preferred embodiment, the processor would have, in its memory, a model including a representation of the heights, weights, and key features of all of the players on the teams. Also, the dimensions of each stadium where games are played would be available either locally or over the Internet. The processor would also know all of the rules of the game and the object of the game. Thus, it would be able, using the model, to synthesize action, even if that action is not present in a source set of images. For example, using live game action up to a point in time as a predicate, a user may alter the game by moving an object, and the processor could thereafter direct players to act accordingly. In a preferred embodiment, some room for error would be allowed. For example, if a certain basketball player makes 70% of the baskets when he tosses the ball standing 10 yards away, the model would use a random variable to ascertain that the tosses are successful in 70% of the cases. Thus, one aspect of the embodiment allows a merger between broadcast entertainment, such as sports, and user-controlled game play.
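The 70% success figure from the example above can be modeled with a single draw from a uniform random variable, as in the following sketch.

    import random

    def shot_is_successful(success_rate=0.70):
        """Return True in roughly `success_rate` of calls, modeling a player who
        makes 70% of shots from a given position."""
        return random.random() < success_rate

    made = sum(shot_is_successful() for _ in range(10_000))
    print(made / 10_000)                  # close to 0.70 over many simulated attempts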


Preferably, the system accesses the Internet, and the data about the sports players is stored in an online database in a format such as XML or another machine-readable format. However, persons skilled in the art will recognize other means of storing data, for example, Microsoft Excel format, comma-delimited text files, or Oracle databases. In another embodiment, the database could be stored local to the processor.


It should also be noted that, in one embodiment, this processor can be used to simulate a sports game from the beginning. As the heights, weights, and key features of all of the players, as well as the data regarding the sizes of all of the stadiums, are accessible to the system, the simulated game would be a mere extension of the method where the starting position is the typical starting position for the game and the first motion vector is the first movement of the game. For example, the system and method could be used to recreate a match between the 2010 football teams of the University of Michigan and Ohio State University in the University of Michigan's “Big House” stadium. Because the weather conditions on the day of the game are known, those can also be recreated. Alternatively, the user could select to change the weather conditions.



FIG. 9, copied from Fehn (2005) (reference numbers added by applicant), illustrates the data transmission chain in a 3D television broadcast system that takes legacy 2D televisions into account. The system consists of five stages: 3D content creation 910, 3D video coding 920, transmission 930, “virtual” view synthesis 940, and 3D or 2D display 950.


The idea is that one starts with a set of 3D content 910, which comes from either a 3D recording 911 or a 2D-to-3D Content Conversion 912. This data is converted to either recorded 3D 913, meta data 914, or 3D out of 2D 915. The coding layer 920 then provides a 3D video coding in 921. The transmission layer 930 receives as input a 2D color video coded in MPEG-2 931 and depth information associated with this video coded in MPEG-4 or AVC 932. This information is provided as input to the digital video broadcasting (DVB) network 933. The DVB network 933 provides output to the synthesis layer 940. This output is read either by a standard DVB decoder 941 and then viewed on a standard 2D TV 951 or by a 3D-TV decoder with depth-image-based rendering (DIBR) 942, which provides output to either a single user 3D-TV 952 or a multiple user 3D-TV 953 in the display layer 950. Note that there is a difference between single user 3D-TVs 952 and multiple user 3D-TVs 953 because 3D televisions provide two outputs—one for the left eye and one for the right eye. Therefore, users must be seated in an optimal location, rather than in any arbitrary location, to optimize the viewing experience. Also note that, while on a 2D screen, every pixel must have a color only, on a 3D screen every pixel must have both a color and a depth. Depth is typically represented as an image in grayscale, where each grayscale color has a value between 0 and 255. Points that are very near to the camera appear white (z=0) while points that are far from the camera appear black (z=255). Fehn (2005).
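A per-pixel depth map of this kind can be produced by linearly quantizing metric depth into the 0-255 range, with near points rendered light and far points dark, as described above; the clipping distances in the sketch are assumed values.

    import numpy as np

    Z_NEAR, Z_FAR = 1.0, 10.0             # assumed metric depth range, in meters

    def depth_to_grayscale(z):
        """Map metric depth to an 8-bit value: the nearest points become white (255)
        and the farthest points become black (0)."""
        z = np.clip(z, Z_NEAR, Z_FAR)
        return np.round(255.0 * (Z_FAR - z) / (Z_FAR - Z_NEAR)).astype(np.uint8)

    print(depth_to_grayscale(np.array([1.0, 5.5, 10.0])))   # [255 128 0]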


Other methods of 2D-to-3D video conversion are known in the art. For graphics-heavy content, the geometry data needs to be re-rendered for two eyes' worth of perspective rather than one. This approach was used in the 3D remake of the movie The Polar Express. With traditionally captured images, the 2D-to-3D conversion is far more involved, and the results are far less predictable. For example, in the 3D remake of Tim Burton's The Nightmare Before Christmas, which the director created using stop-motion 2D image capture of miniature physical models, the studio did a frame-by-frame digitization of the entire film. This required morphing each frame into a two-eye counterpart. However, a problem with these methods of 2D-to-3D conversion is that they result in relatively poor depth perception and are not similar to real 3D images. Also, many modern 3D theaters require viewers to wear special-purpose glasses, and the image appears unusual unless the viewer holds his head still and looks directly at the screen through the entire program. See, generally, Dipert, Brian, "3-D: nifty," EDN, Feb. 22, 2006; Dipert, Brian, "3-D stop motion: well-deserved promotion," EDN, Oct. 31, 2007; Dipert, Brian, "Coming soon: 3-D TV," EDN, Apr. 8, 2010.


Example 2

Sports Game Modeling as a Feature of a “Smart” Television (internal video graphic processor card and DVR)


Another embodiment of this invention, as illustrated in FIG. 3, involves a “smart” television system 300 featuring a television screen 110 with an internal video graphic processor card 160 and DVR 150. One advantage of this system is that it can be designed specifically to implement the invention, and, therefore, the remote control can have buttons or commands to facilitate the user's operation of the invention. Alternatively, the television could have a touch screen or other user input, such as a six-axis input (x, y, z, phi, theta and rho, e.g., combining accelerometers or absolute positioning and gyroscopes) allowing the user to reposition the features thereon to implement the invention. In yet another embodiment, the television could, wirelessly or through wires, connect to a computer or smart phone with a touch screen or other suitable user input, on which the scene would be reconfigured and sent back to the television. Yet another embodiment involves a “smart” remote control, 310 with a touch screen element 312 featuring a miniature copy of the scene on the TV for user editing.


Optionally, the user can be provided with the ability to zoom in and to zoom out either using specially dedicated buttons or by moving two fingers toward each other to zoom in and moving two fingers away from each other to zoom out. (This is similar to the zoom control on the Apple iPhone). In one embodiment, when not in use, the smart remote control's mini display can be turned off. In another embodiment, the smart remote control's mini display can also function as an internet access tablet, similar to the display on an Apple iPad PDA or a Sony Dash clock radio; indeed, an iPod Touch itself could be used as the remote-control user input device.


The moving of objects in a paused scene would be done using a typical "drag and drop" mechanism. A user would pick up an object at a first location and drop it at a second location elsewhere on the screen. On a two-dimensional screen, the third dimension may be represented as the size of the object—as an object gets closer to the user, it appears bigger in size. Another issue is hidden surface extrapolation, which arises for surfaces that were located behind an object that is moved. In one embodiment, the processor predicts what might be found at that location (based on other images of the same scene in the same television program), or the processor could fill the gaps with a background (e.g., alternating light grey and dark grey lines) or even advertisements.
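The size cue follows directly from perspective projection: an object's apparent size scales inversely with its distance from the camera. A minimal sketch follows, in which the focal length and object size are assumed values.

    FOCAL_LENGTH_PX = 1000.0              # assumed focal length, in pixels

    def apparent_width_px(real_width_m, depth_m):
        """Perspective projection: on-screen width shrinks in proportion to the
        object's distance from the camera."""
        return FOCAL_LENGTH_PX * real_width_m / depth_m

    for depth in (2.0, 4.0, 8.0):
        print(depth, apparent_width_px(0.22, depth))   # a 22 cm ball: 110, 55, 27.5 px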



FIG. 5 is a flow chart describing an interaction of a user, Yuri. In step 510, Yuri is watching the Michigan-Ohio football game on his 3D TV at home. Yuri's TV is specially configured, as illustrated in FIG. 3, and includes an internal DVR and video card, which could of course also be external and either nearby or remote. Further, a 2D television may also implement this invention.


In step 520, Yuri notices that Michigan's quarterback attempted to pass the ball to the left but failed, leading to the Ohio team getting the ball. Yuri thinks that this was a stupid move, and that the quarterback would have been successful had he passed to the right instead. Yuri decides to recreate the play, with this alteration, to test his thesis.


In step 530, Yuri takes out his "smart" remote control, which includes a miniature copy of the TV screen 310 located thereon (as illustrated in FIG. 3). Because the game is being stored on the DVR as Yuri is watching, Yuri is able to rewind the game to immediately before the quarterback passed the ball to his left. Yuri zooms in to the quarterback's feet and selects the left foot. In a preferred embodiment, the selection is done on the miniature screen of the smart remote control, which can also function as a touch screen. However, persons skilled in the art will note alternative embodiments. For example, if the invention is implemented on a computer, such as an HP Pavilion DV4T laptop running Windows 7, the mouse can be used to zoom into and select an appropriate part of the image. Alternatively, if a video game console is providing the video card, the video game console's remote control could provide a method for zooming in and selecting a body part. In yet another embodiment, a 2D or 3D TV could have a touch screen located on the TV itself.


After Yuri selects the left foot, in step 540, Yuri is shown an arrow corresponding to the motion vector of the left foot and is given an opportunity to modify this motion vector. Yuri takes the arrow and moves it to the right, thereby re-aiming the pass in the other direction.


After Yuri makes the selection, in step 550 an alert appears on the screen that Yuri's selected motion vector conflicts with another pre-existing motion vector, such as a tackle going in for a sack whose own path intersects the new position of the quarterback's left foot. Yuri is asked to go back, cancel or modify one of the motion vectors, or keep both motion vectors and permit a collision. In step 560, Yuri selects to maintain the repositioning of the left foot and permit the collision. The processor then exercises the model in accordance with the revised starting conditions, the rules of play, the laws of physics (such as gravity, conservation of momentum, and conservation of energy and mass), and game logic, and allows the revised play to continue. The system then determines the likelihood that the pass is completed, the injuries, if any, suffered by the players, and possibly penalties. If the pass is probably caught, the visual depiction is, for example, of a completed play and the likely end of the down.


In one embodiment of the invention (not illustrated), the processor takes some time to make the calculations necessary to display the result of the model to the user. While the user is waiting, the processor may present the user with at least one advertisement. The advertisement could be downloaded from the television station, an on-line or local source, or provided by the manufacturer of the device. In one embodiment, the device is connected to the Internet and can download advertisements from the Internet. In one embodiment, the advertisements are interactive and allow the user to provide input through the remote control, video game controller, or other inputs during the presentation of the advertisement.


In step 570, Yuri watches the model execute the game under his simulated conditions. Yuri notes that the pass he had thought of would have been successful.


Combining an LCD screen and a DVR is known in the art. See, e.g., Park, US App. 2009/0244404, expressly incorporated herein by reference. Park teaches that such a device could be useful in a security/surveillance scenario. A plurality of cameras would be monitored on a single screen in a central location. The video of the activity at each of the areas would be stored on the DVR to be displayed on a television or provided to law enforcement authorities.



FIG. 10, copied from Park's (US App. 2009/0244404) FIG. 1, shows a block diagram of a digital video recorder (DVR)-integrated display device having a picture-in-picture (PIP) function. This device could be used to implement an embodiment of the invention, if coupled with an appropriate video graphic processor card and other computation and user interface hardware and software.


Referring to FIG. 10, the DVR-integrated display device includes a plurality of surveillance cameras 10, an image multiplexing unit 20, a recording/reproduction control unit 30, a picture frame output unit 40, an external image output unit 50, an output image generating unit 60, and a control unit 70.


The plurality of surveillance cameras 10 are installed in particular areas to be monitored and take pictures of the areas. Thus, an administrator can monitor the areas by pictures taken by the surveillance cameras through a monitor in real time.


The image multiplexing unit 20 multiplexes a visual signal output from each surveillance camera 10, and outputs a multiplexed image frame or an image frame selected by a user from the visual signals output from the plurality of surveillance cameras 10.


The recording/reproduction control unit 30 records an image frame output from the image multiplexing unit 20, reproduces the image frame later, or processes the image frame to be displayed in real time.


The picture frame output unit 40 reads data stored in a memory and outputs the data as a picture frame. Various advertising contents are stored in the memory and output as picture frames from the picture frame output unit 40.


When an external image is input to the external image output unit 50, the external image output unit 50 determines whether the external image is an analog image or a digital image. When it is determined that the external image is an analog image, the external image output unit 50 converts the input analog image into a digital image and outputs the converted digital image to the output image generating unit 60. The analog image or digital image input from the exterior may be a computer image or an image from a television tuner.


The output image generating unit 60 creates a composite PIP picture of an image frame output from the recording/reproduction control unit 30 and an image selected by a user and displays the composite PIP picture. That is, the output image generating unit 60 creates a PIP composite image frame with either the picture frame output from the picture frame output unit 40 or an external image output from the external image output unit 50, according to a user operation, and displays the composite PIP picture.


The control unit 70 controls the entire display device which includes the image multiplexing unit 20, the recording/reproduction control unit 30, the picture frame output unit 40, the external image output unit 50 and the output image generating unit 60. The control unit may control operations of other elements by use of a serial bus such as an integrated circuit (I2C) bus. The I2C bus, which is also referred to as an Inter-IC bus, is a two-wire bidirectional serial bus that provides a communication link between integrated circuits. Below, each element in FIG. 10 will be described in detail with reference to FIGS. 11 to 14.



FIG. 11, copied from Park's (US App. 2009/0244404) FIG. 2, is a block diagram of the image multiplexing unit 20 according to an exemplary embodiment. As shown in FIG. 11, the image multiplexing unit 20, which multiplexes images output from the respective surveillance cameras 10 or generates a camera image as an image frame, includes a camera image converting unit 21, a camera image scaler 22, and an image frame generating unit 23.


The camera image converting unit 21 converts a corresponding image output from each of the surveillance cameras 10 into a digital image in accordance with a standard format. For example, the camera image converting unit 21 may be a national television system committee (NTSC) decoder that converts an analog signal output from each of the surveillance cameras 10 into a digital signal.


The camera image scaler 22 adjusts the number of vertical and/or horizontal scanning lines of each digital image converted by the camera image converting unit 21 in accordance with an output format. For example, when an image from a surveillance camera 10 is displayed in full screen, or when the resolution of an image of the surveillance camera 10 is the same as the resolution of the monitor or the storage format, the camera image scaler 22 outputs the image from the surveillance camera 10 intact. However, when a screen divided into four sections is implemented on the monitor, the camera image scaler 22 interpolates and/or decimates the input image to reduce the resolution of the image from the surveillance camera 10 to a quarter of the original resolution. According to an operation mode, the camera image scaler 22 allows a monitor screen to be divided into a single section, two sections, four sections, eight sections, and so on to display a number of images from the plurality of surveillance cameras 10 in each divided section. The visual signal from each of the surveillance cameras 10 may be processed by the corresponding camera image converting unit 21 and camera image scaler 22.
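

A minimal sketch of this quarter-resolution scaling step is given below, assuming each camera frame is available as a NumPy array of shape (height, width, 3); the 2x2 block-averaging used for decimation is an illustrative choice, not the method prescribed by the specification.

```python
# Sketch of the scaling step: reduce a frame to a quarter of its original
# resolution (half the scanning lines in each direction) for a four-section
# screen, or pass it through intact for full-screen display.
import numpy as np

def scale_for_quad_view(frame: np.ndarray) -> np.ndarray:
    """Decimate a frame to one quarter of its original resolution."""
    h, w, c = frame.shape
    h, w = h - h % 2, w - w % 2                   # make dimensions even
    blocks = frame[:h, :w].reshape(h // 2, 2, w // 2, 2, c)
    return blocks.mean(axis=(1, 3)).astype(frame.dtype)  # average each 2x2 block

def passthrough(frame: np.ndarray) -> np.ndarray:
    """Full-screen mode: output the camera image intact."""
    return frame
```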


The image frame generating unit 23 combines digital images that have been scaled by the camera image scaler 22 according to the operation mode, or generates an image frame from a selected digital image. The image frame generating unit 23 combines the digital images frame by frame such that the digital image generated by each of the camera image scalers 22 is output at a designated location on the monitor screen. Accordingly, each digital image is multiplexed by the image frame generating unit 23, or only a selected digital image is generated as an image frame to be output on the monitor screen.
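

The multiplexing itself can be pictured as placing each scaled camera image at its designated location in a single output frame. The sketch below assumes a 2x2 (four-section) layout and quarter-resolution inputs, which are illustrative assumptions.

```python
# Sketch of the image frame generating unit for a four-section screen:
# each scaled camera image is copied to its designated quadrant of the
# combined image frame. Layout and sizes are example assumptions.
import numpy as np

def combine_quad(frames: list[np.ndarray], out_h: int, out_w: int) -> np.ndarray:
    """Multiplex up to four quarter-resolution frames into one image frame."""
    canvas = np.zeros((out_h, out_w, 3), dtype=np.uint8)
    slots = [(0, 0), (0, out_w // 2), (out_h // 2, 0), (out_h // 2, out_w // 2)]
    for frame, (top, left) in zip(frames, slots):
        h, w = frame.shape[:2]
        canvas[top:top + h, left:left + w] = frame  # designated screen location
    return canvas
```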



FIG. 12, copied from Park's (US App. 2009/0244404) FIG. 3, is a block diagram of the recording/reproduction control unit 30. As shown in FIG. 12, the recording/reproduction control unit 30 includes an encoder 31, an image storage unit 32, a decoder 33, and a format converting unit 34.


The encoder 31 encodes the image frame output from the image frame generating unit 23 according to an image compression standard such as MPEG-4 (moving picture experts group 4). MPEG-4 is an audio-visual (AV) data compression standard, enabling bidirectional multimedia, that encodes the visual signal based on image content. However, the compression method is not limited to MPEG-4; other methods such as wavelet-based compression and H.264 can be employed.


The image storage unit 32 stores the image frame encoded by the encoder 31. It may be a non-volatile memory that retains data even when the power is off; examples of such non-volatile memory include ROM (read only memory), flash memory, and a hard disk.


The decoder 33, which decompresses the compressed image, restores the image frame stored in the image storage unit 32. Such a decoder 33 is well-known technology and thus will not be described in detail.


The format converting unit 34 converts the image frame restored by the decoder 33 in accordance with a standard output format. Also, the format converting unit 34 converts a combined image frame or a selected image frame, each of which is output from the image frame generating unit 23, in accordance with a standard output format. That is, the format converting unit 34 converts the format of the image frame to output the visual signal in accordance with a standard output format such as RGB or a composite signal (Vcom). Then, the image frame converted by the format converting unit 34 is output through the output image generating unit 60 to a picture-frame-type liquid crystal display (LCD) or a monitor screen, together with a picture frame output from the picture frame output unit 40 or an external image output from the external image output unit 50, by a PIP technique.
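

A structural sketch of this record/reproduce path is shown below. It uses OpenCV JPEG encoding purely as a stand-in for the MPEG-4/H.264 encoder and decoder, and a file write as a stand-in for the non-volatile image storage unit; none of these library choices are dictated by the specification.

```python
# Structural sketch of the recording/reproduction path: encode an image frame,
# keep it in storage, then restore and format-convert it for output.
import cv2
import numpy as np

def record(frame: np.ndarray, path: str) -> None:
    ok, compressed = cv2.imencode(".jpg", frame)      # stand-in for encoder 31
    if ok:
        with open(path, "wb") as f:                   # stand-in for storage 32
            f.write(compressed.tobytes())

def reproduce(path: str) -> np.ndarray:
    data = np.fromfile(path, dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)      # stand-in for decoder 33
    # Stand-in for format converting unit 34: BGR -> RGB for the display path.
    return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
```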



FIG. 13, copied from Park's (US App. 2009/0244404) FIG. 4, is a block diagram of the picture frame output unit 40 according to an exemplary embodiment. As shown in FIG. 13, the picture frame output unit 40 includes an interface unit 41, a controller 42, and a picture frame decoder 43.


The interface unit 41 can establish communication with the memory through a parallel or serial bus.


The controller 42 reads data stored in the memory cluster by cluster, and transfers the read data to the picture frame decoder 43 cluster by cluster. The memory is installed inside or outside of the DVR-integrated display device, and the data stored in the memory may be various types of advertising contents such as multimedia including digital pictures, moving pictures, and music. As described above, when the internal or external memory is connected through the interface unit 41, the controller 42 outputs picture frame data stored in the memory according to an operation command of a user.


The picture frame decoder 43 reads and decodes the picture frame data output from the controller 42 and outputs the decoded picture frame image to the output image generating unit 60. Accordingly, the output image generating unit 60 outputs an image frame together with either the picture frame or the external image as a PIP picture using the elements shown in FIG. 14.



FIG. 14, copied from Park's (US App. 2009/0244404) FIG. 5, is a block diagram of the output image generating unit 60 according to an exemplary embodiment. As shown in FIG. 14, the output image generating unit 60 includes a first scaler 61, a second scaler 62, a third scaler 63, an image combining unit 64, and an image output unit 65.


The first scaler 61 scales an image frame converted by the format converting unit 34 of the recording/reproduction control unit 30.


The second scaler 62 scales an image frame output from the picture frame decoder 43 of the picture frame output unit 40, and the third scaler 63 scales an external image input from the external image output unit 50.


As described above, the external image output unit 50 determines whether the input external image is an analog image or a digital image. When it is determined that the input external image is a digital image, the external image output unit 50 transfers the digital image to the third scaler 63. Alternatively, when it is determined that the external image is an analog image, the external image output unit 50 converts the analog image into a digital image. More specifically, the external image output unit 50 converts an analog image input from a computer or a TV tuner into a digital image in a predetermined format such as YUV and outputs the converted digital image. The digital images output from the external image output unit 50 may be selectively scaled by the third scaler 63.
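

As a worked example of this color-format conversion, the sketch below maps an RGB frame into YUV using the classic BT.601 analog coefficients; the particular coefficient set and the float [0, 1] pixel range are assumptions, since the specification only says "a predetermined format such as YUV".

```python
# Sketch of RGB -> YUV conversion (BT.601 analog coefficients).
# Y = 0.299R + 0.587G + 0.114B; U and V are scaled colour differences.
import numpy as np

RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.147, -0.289,  0.436],
                       [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array in [0, 1] of shape (h, w, 3); returns YUV planes."""
    return rgb @ RGB_TO_YUV.T
```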


The above-described first scaler 61, second scaler 62 and third scaler 63 are selectively activated to perform scaling according to a user operation. That is, an image frame, a picture frame, and an external image are scaled by the respective scalers 61, 62, and 63 according to the user operation.


The scaled image frame, picture frame and external image are combined into a PIP picture by the image combining unit 64. In other words, according to the user operation, the image combining unit 64 combines the image frame and the picture frame into a PIP picture, or combines the image frame and the external image into a PIP picture. Subsequently, the image output unit 65 outputs the generated PIP picture on an LCD screen or a monitor screen.
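

A minimal sketch of this PIP combination is given below, assuming the primary and secondary images are NumPy arrays and that the secondary picture occupies the bottom-right quarter of the screen; both the placement and the nearest-neighbour resize are illustrative choices.

```python
# Sketch of the PIP combination performed by the image combining unit:
# the secondary image is resized and overlaid onto a corner of the primary.
import numpy as np

def combine_pip(primary: np.ndarray, secondary: np.ndarray,
                scale: float = 0.25) -> np.ndarray:
    """Overlay `secondary` onto the bottom-right corner of `primary`."""
    out = primary.copy()
    ph, pw = primary.shape[:2]
    sh, sw = int(ph * scale), int(pw * scale)
    # Nearest-neighbour resize of the secondary picture (stand-in for a scaler).
    ys = np.arange(sh) * secondary.shape[0] // sh
    xs = np.arange(sw) * secondary.shape[1] // sw
    small = secondary[ys][:, xs]
    out[ph - sh:, pw - sw:] = small
    return out
```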


In addition, the image output unit 65 may include a sharpness enhancement circuit and a contrast adjustment function to improve the image quality of the generated PIP picture displayed on the LCD screen or the monitor screen.


According to another exemplary embodiment, an event detecting unit 80 may be further included to issue an event detection signal to the control unit 70 when an abnormal event is detected. The event detecting unit 80 may detect events using, for example, a motion sensor, a door-open sensor, or a damage sensor. That is, the event detecting unit 80 issues an event detection signal to the control unit 70 upon detection of an event. In response, the control unit 70 controls the output image generating unit 60 to display, in full view on the LCD screen or the monitor screen, an image from the surveillance camera 10 installed in the area where the event is detected.


It will now be described how to output images scaled by the first scaler 61, the second scaler 62, and the third scaler 63 when the event detecting unit 80 detects an event.


According to an exemplary embodiment, when there is no event, the output image generating unit 60 outputs the picture frame scaled by the second scaler 62 or the external image scaled by the third scaler 63, according to a user operation, on the LCD screen or the monitor screen in full view. However, the event detecting unit 80 issues an event detection signal to the control unit 70 upon detection of an event. In response, the control unit 70 controls the output image generating unit 60 such that DVR images taken by the plurality of surveillance cameras 10 are displayed on the LCD screen or the monitor screen in full view. Accordingly, the image combining unit 64 stops the image output unit 65 from outputting the picture frame or the external image. Then, the image combining unit 64 transmits the image frame received from the first scaler 61 to the image output unit 65. In response, the image output unit 65 outputs a DVR image multiplexed into the image frame on the LCD screen or the monitor screen in full view. The image output unit 65 can output not only the multiplexed DVR image, but also only the image frames from the surveillance camera 10 at the area where the event is detected, on the LCD screen or the monitor screen in full view. As described above, an administrator can monitor events immediately, even when he/she is engaged in other activities using the LCD screen or the monitor screen, for example, watching TV.


According to another exemplary embodiment, when no event is detected, the image combining unit 64 of the output image generating unit 60 combines the picture frame scaled by the second scaler 62 and the image frame scaled by the first scaler 61 according to a user operation such that the picture frame is output as a primary display and the image frame is output as a secondary display. Accordingly, the picture frame is output as the primary display on the LCD screen or the monitor screen, and the DVR image is output as the secondary display. However, when the event detecting unit 80 detects an event, the control unit 70 controls the output image generating unit 60 to output the image frame as the primary display. Consequently, the first scaler 61 scales the multiplexed image frame to fit the size of the primary display, and the second scaler 62 scales the picture frame to fit the size of the secondary display. Furthermore, the first scaler 61 may scale an image frame of an image taken by the surveillance camera 10 at an area where the event occurs to fit the size of the primary display. Then, the image combining unit 64 overlays the scaled image frame and picture frame to combine them, and transmits the combined image to the image output unit 65. Accordingly, the image output unit 65 outputs a PIP picture generated by combining the two images on the LCD screen or the monitor screen.
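

The event-driven role swap described above reduces to a small piece of control logic. The sketch below, with hypothetical function names and a simple boolean event flag, shows which image is handed to the combining step as primary and which as secondary; the PIP sketch given earlier would then overlay them.

```python
# Sketch of the event-driven switching logic: when an event is detected, the
# DVR image frame and the picture frame exchange primary/secondary roles.
import numpy as np

def choose_layout(dvr_frame: np.ndarray, picture_frame: np.ndarray,
                  event_detected: bool) -> tuple[np.ndarray, np.ndarray]:
    """Return (primary, secondary) images for the image combining unit."""
    if event_detected:
        # Event: the surveillance (DVR) image takes over the primary display.
        return dvr_frame, picture_frame
    # No event: advertising picture frame primary, DVR image secondary.
    return picture_frame, dvr_frame
```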


This DVR system may be used to capture images from multiple perspectives of a non-professional sports game, such as a high school or college game. The images may be processed to generate a three- or four-dimensional model of the game, and used as discussed above.


For example, if an LCD screen is installed at a crowded location such as a subway station, the LCD screen can simultaneously provide advertising contents in a picture frame and DVR images taken by surveillance cameras at locations throughout the station. As such, since the advertising contents and the DVR images taken by the surveillance cameras are displayed simultaneously on an LCD screen, vandalization of the high-priced LCD screen can be avoided.


Additionally, by displaying advertising contents together with DVR images on an LCD screen located in a blind spot where no surveillance camera is installed, potential vandals, thieves, etc. may believe that a surveillance camera is monitoring the area and be deterred from committing crimes.


According to another exemplary embodiment, when there is no event, the image combining unit 64 of the output image generating unit 60 combines the external image scaled by the third scaler 63 and the image frame scaled by the first scaler 61 such that the external image is output as a primary display and the image frame is output as a secondary display. Accordingly, the external image is output as the primary display and the DVR image multiplexed into the image frame is output as the secondary display. However, when the event detecting unit 80 detects an event, the control unit 70 controls the output image generating unit 60 to output the image frame as the primary display. Accordingly, the first scaler 61 scales the multiplexed image frame to fit the size of the primary display and transmits the scaled image frame to the image combining unit 64, and the third scaler 63 scales the external image to fit the size of the secondary display and transmits the scaled external image to the image combining unit 64. Additionally, the first scaler 61 may scale an image frame of an image taken by the surveillance camera 10 installed at the area where the event occurs to fit the size of the primary display. Then, the image combining unit 64 combines the image frame scaled to the size of the primary display and the external image scaled to the size of the secondary display by overlaying the image frame with the external image. Then, the image combining unit 64 transmits the combined images to the image output unit 65. Accordingly, the image output unit 65 outputs the images combined in a PIP manner on the LCD screen or the monitor screen.


For example, a security guard who manages the coming and going of vehicles watches TV on a primary display of a monitor screen and monitors DVR images, taken by surveillance cameras installed on the premises, displayed on a secondary display of the monitor screen. When a vehicle enters the building, a motion sensor detects the motion of the vehicle, and the event detecting unit 80 detects the sensing signal from the motion sensor and issues an event detection signal to the control unit 70. In response, the control unit 70 switches the images previously displayed on the primary display and the secondary display with each other. The TV image, previously displayed on the primary display, is output on the secondary display, and the DVR image, previously displayed on the secondary display, is output on the primary display. The DVR image output on the primary display of the monitor screen may be a multiplexed DVR image or a DVR image taken by a surveillance camera located at the area where the event occurs. Since the TV image and the DVR image displayed respectively on the primary display and the secondary display are switched with each other, the security guard can monitor the vehicles coming and going on the monitor screen.


For another example, in a convenience store or a grocery shop, advertisement of new products and a DVR image of an installed surveillance camera can be displayed at the same time through a monitor screen. Thus, new products can be advertised to customers through the monitor screen while DVR images from a surveillance camera deter shoplifting, robbery, etc.


As described above, an administrator can view DVR images output from a surveillance camera, and external images and/or a picture frame output from a computer or a TV tuner, on a single monitor, and switch the images between the primary display and the secondary display according to circumstances. Moreover, the above-described images can be combined as a PIP picture and then displayed on a single monitor screen or an LCD screen.


According to an exemplary embodiment, a DVR-integrated display device with a PIP function allows an administrator to monitor a plurality of sites where surveillance cameras are installed on a single monitor in real time and carry out another activity simultaneously. In addition, a DVR-integrated display device installed in public places can simultaneously provide images taken by a surveillance camera and advertising contents.


For example, in a convenience store, the DVR-integrated display device allows a clerk to monitor customers from images taken by a surveillance camera and displayed on the same monitor which is displaying a TV program. Thus, crimes like theft which could occur while the clerk is distracted by TV can be avoided.


Moreover, the DVR-integrated display device can output advertising contents about new products while displaying DVR images taken by a surveillance camera through a single screen. For example, a high-priced LCD display installed in a crowded place such as a subway station can display advertisements about a plurality of goods and services along with DVR images taken by a surveillance camera to help prevent theft or vandalization of the LCD display.


Adding a video card to the invention of Park, US App. 2009/0244404, described above, would provide additional benefits. In addition to monitoring a location for criminal or violent activity, the DVR/monitor combination can be used to analyze alternative scenarios. For example, if a woman is robbed in a subway station, a DVR/monitor/video card combination could be used not only to identify and prosecute the robber, but also to predict the weight, strength, and running abilities of a police officer who would have been able to prevent either the robbery or the escape of the robber. This data could be of assistance to cities assigning police officers to patrol dangerous neighborhoods.


Example 3

Automobile Collision Modeling


In another embodiment, this invention could be useful in automobile collision modeling when photographs or videos of at least a part of the accident are available. Note that, unlike prior art methods, which relied on expert analysis and skid marks on the road in addition to the image data, this invention relies only on the photographs or video and the modeling system described and claimed herein.


An exemplary embodiment of the invention is illustrated in FIG. 7. In step 710, the processor implementing the invention is provided as input either a set of consecutive photographs or a video of a car accident. In step 720, on the basis of these images, the processor ascertains a model of the motion vectors for the cars involved in the accident. The motion vectors could include information about position, velocity, and acceleration. If the accident occurred on an incline, for example on a downhill roadway, the motion vectors may take the acceleration due to gravity into account. If the accident occurred on a windy day, a wind motion vector may be included in the calculation.
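

A minimal numerical sketch of step 720 is given below. It assumes the cars' positions have already been extracted from the photographs in metres at a known frame interval dt, and estimates velocity and acceleration by finite differences; the incline helper separates out the along-road component of gravity. All names and inputs are illustrative.

```python
# Sketch of step 720: estimate motion vectors (velocity, acceleration) for a
# car from its tracked positions, and the gravity component on an incline.
import numpy as np

def motion_vectors(positions: np.ndarray, dt: float):
    """positions: array of shape (n_frames, 2) of a car's (x, y) positions in metres."""
    velocity = np.gradient(positions, dt, axis=0)       # m/s
    acceleration = np.gradient(velocity, dt, axis=0)    # m/s^2
    return velocity, acceleration

def gravity_component(incline_deg: float, g: float = 9.8) -> float:
    """Along-road acceleration contributed by gravity on a downhill roadway."""
    return g * np.sin(np.radians(incline_deg))
```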


In step 730, the motion vectors are analyzed to determine at least one likely driver activity. For example, if the car rapidly decelerated, it is likely that the driver hit the brakes. Other driver activities include accelerating or turning. The motion vectors may also be analyzed to infer existing conditions, such as a failure of a car's brakes, black ice, an oil spill, etc. These conditions could have arisen before, during, or after the accident being analyzed.
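

Step 730 can be illustrated by a simple classifier over the estimated motion vectors: a strong deceleration along the direction of travel suggests braking, while a large lateral component suggests turning. The thresholds below are arbitrary example values, not figures from the specification.

```python
# Sketch of step 730: infer a likely driver activity from one instant's
# velocity and acceleration vectors. Thresholds are illustrative.
import numpy as np

def likely_activity(velocity: np.ndarray, acceleration: np.ndarray,
                    brake_thresh: float = 3.0, turn_thresh: float = 2.0) -> str:
    """velocity (m/s) and acceleration (m/s^2) as 2-D vectors."""
    speed = np.linalg.norm(velocity)
    if speed < 1e-6:
        return "stationary"
    heading = velocity / speed
    longitudinal = float(np.dot(acceleration, heading))        # along the path
    lateral = float(np.linalg.norm(acceleration - longitudinal * heading))
    if longitudinal < -brake_thresh:
        return "braking"           # rapid deceleration -> driver hit the brakes
    if longitudinal > brake_thresh:
        return "accelerating"
    if lateral > turn_thresh:
        return "turning"
    return "coasting"
```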


As demonstrated in step 740, the model can also be used to modify driver behavior and observe the resultant impact on the cars. For example, it could be ascertained whether one of the drivers could have swerved to avoid the accident, or whether the accident would have been preventable but for the black ice. Persons skilled in the art will recognize that this feature is useful not only for ascertaining fault in an accident but also in driver education courses and in automotive design. For example, a student driver could be asked to determine an optimal reaction in an accident simulation and be shown the result of his choices. Also, it could be ascertained whether safety features such as seatbelts, ABS brakes, and traction control would have prevented a given accident.


Step 750 suggests that the model could be used to determine which driver was in the best position to avoid the accident, or whether the driver could easily have avoided the accident. Such calculations are useful for assigning fault, when applied in conjunction with local vehicle and traffic laws.


Example 4

Modeling of Special Effects


While the events in most movies follow the laws of physics, some movies and other television presentations feature special effects that violate at least one law of physics. For example, a movie about Jesus might feature Jesus walking on water even though, under the traditional laws of physics, the man would be expected to sink because the water cannot support his weight.


In one embodiment, when asked to provide a model of the events in this movie, the modeler would cause Jesus to drown.


However, in a preferred embodiment, the modeler would detect that Jesus walking on water is a special effect. As such, the model would not conform the special effect to the laws of physics. In yet another embodiment, the modeler would present the user with an option to conform or not to conform the special effect to the laws of physics.


In yet another embodiment, the user could be given an option to rewrite the laws of physics. For example, the gravitational acceleration near Earth's surface is approximately g = 9.8 m/s². However, a user might wish to model what the game of football would be like if it were played on the moon, where the gravitational acceleration is approximately g_moon = 1.6 m/s². In this case, the user could reset the value of g and watch the processor develop the game under the modified conditions.
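

The effect of resetting g can be illustrated with a short ballistic calculation: the same kick simulated under Earth gravity and under lunar gravity. The kick speed and angle are example values chosen only for illustration.

```python
# Worked sketch of "rewriting the laws of physics": one kicked ball, two
# values of g. Drag is ignored; speed and angle are example values.
import numpy as np

def hang_time_and_range(speed: float, angle_deg: float, g: float):
    """Ballistic hang time (s) and range (m) of a kicked ball."""
    angle = np.radians(angle_deg)
    vy, vx = speed * np.sin(angle), speed * np.cos(angle)
    t_flight = 2 * vy / g
    return t_flight, vx * t_flight

for name, g in [("Earth", 9.8), ("Moon", 1.6)]:
    t, r = hang_time_and_range(speed=25.0, angle_deg=45.0, g=g)
    print(f"{name}: hang time {t:.1f} s, range {r:.0f} m")
```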


Hardware Overview



FIG. 4, copied from U.S. Pat. No. 7,702,660, issued to Chan, is a block diagram that illustrates a computer system 400 upon which an embodiment of the invention may be implemented. Computer system 400 includes a bus 402 or other communication mechanism for communicating information, and a processor 404 coupled with bus 402 for processing information. Computer system 400 also includes a main memory 406, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 402 for storing information and instructions to be executed by processor 404. Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Computer system 400 further includes a read only memory (ROM) 408 or other static storage device coupled to bus 402 for storing static information and instructions for processor 404. A storage device 410, such as a magnetic disk or optical disk, is provided and coupled to bus 402 for storing information and instructions.


Computer system 400 may be coupled via bus 402 to a display 412, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 414, including alphanumeric and other keys, is coupled to bus 402 for communicating information and command selections to processor 404. Another type of user input device is cursor control 416, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 404 and for controlling cursor movement on display 412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.


The invention is related to the use of computer system 400 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 400 in response to processor 404 executing one or more sequences of one or more instructions contained in main memory 406. Such instructions may be read into main memory 406 from another machine-readable medium, such as storage device 410. Execution of the sequences of instructions contained in main memory 406 causes processor 404 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.


The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 400, various machine-readable media are involved, for example, in providing instructions to processor 404 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 410. Volatile media includes dynamic memory, such as main memory 406. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. All such media must be tangible to enable the instructions carried by the media to be detected by a physical mechanism that reads the instructions into a machine.


Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.


Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 404 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 400 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 402. Bus 402 carries the data to main memory 406, from which processor 404 retrieves and executes the instructions. The instructions received by main memory 406 may optionally be stored on storage device 410 either before or after execution by processor 404.


Computer system 400 also includes a communication interface 418 coupled to bus 402. Communication interface 418 provides a two-way data communication coupling to a network link 420 that is connected to a local network 422. For example, communication interface 418 may be an Integrated Services Digital Network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 418 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 418 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.


Network link 420 typically provides data communication through one or more networks to other data devices. For example, network link 420 may provide a connection through local network 422 to a host computer 424 or to data equipment operated by an Internet Service Provider (ISP) 426. ISP 426 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet” 428. Local network 422 and Internet 428 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 420 and through communication interface 418, which carry the digital data to and from computer system 400, are exemplary forms of carrier waves transporting the information.


Computer system 400 can send messages and receive data, including program code, through the network(s), network link 420 and communication interface 418. In the Internet example, a server 430 might transmit a requested code for an application program through Internet 428, ISP 426, local network 422 and communication interface 418.


The received code may be executed by processor 404 as it is received, and/or stored in storage device 410, or other non-volatile storage for later execution. In this manner, computer system 400 may obtain application code in the form of a carrier wave.


In this description, several preferred embodiments of the invention were discussed. Persons skilled in the art will, undoubtedly, have other ideas as to how the systems and methods described herein may be used. It is understood that this broad invention is not limited to the embodiments discussed herein. Rather, the invention is limited only by the following claims.

Claims
  • 1. A method, comprising: recognizing at least one object presented in a stream of live action video images; performing a database lookup to retrieve an object model of the recognized at least one object; defining a physical model describing interactions over a time range of a plurality of real-world objects, comprising the recognized at least one object, within an environment, the physical model representing, for each of the plurality of real-world objects, at least a surface, a rotational and translational movement, and an interaction between the plurality of real-world objects; determining interaction logic of the plurality of real-world objects within the environment to define a real-world outcome; receiving a user input representing a synthetic change in at least one of the rotational and translational movement of at least one of the plurality of real-world objects, at a time within the time range, leading to a synthetic outcome according to the physical model and the interaction logic of the plurality of real-world objects within the environment, wherein the synthetic outcome is different from the real-world outcome; and generating a synthetic video stream, dependent on the real-world model and the received user input, representing the synthetic outcome.
  • 2. The method according to claim 1, wherein the synthetic outcome is further dependent on a random variable.
  • 3. The method according to claim 1, wherein the synthetic outcome is further dependent on a probability.
  • 4. The method according to claim 1, wherein the physical model is a texture mapping model, and the synthetic video stream comprises textures mapped from the texture mapping model.
  • 5. The method according to claim 1, wherein the physical model further represents a mass of a respective object, and the interaction between the plurality of real-world objects is momentum-dependent.
  • 6. The method according to claim 1, wherein the interaction logic comprises rules of a team sport.
  • 7. The method according to claim 1, wherein the synthetic video stream complies with the laws of physics.
  • 8. The method according to claim 1, wherein the user input causes the synthetic video stream to violate the laws of physics.
  • 9. The method according to claim 1, wherein the interaction comprises a collision of at least two objects.
  • 10. The method according to claim 1, wherein the user input comprises a modification of at least one motion vector of at least one object.
  • 11. The method according to claim 1, wherein the user input comprises a modification of at least one acceleration of at least one object.
  • 12. The method according to claim 1, wherein the object model of the recognized at least one object comprises a statistical performance descriptor.
  • 13. The method according to claim 1, wherein: the recognized at least one object comprises a player in a team sport game; and said performing a database lookup comprises accessing an Internet database representing characteristics of the player.
  • 14. The method according to claim 1, wherein the synthetic video stream comprises extrapolated hidden surfaces with respect to an input view of the interactions of the plurality of real-world objects within the environment over the time range.
  • 15. The method according to claim 1, wherein the synthetic video stream is from a synthetic perspective view with respect to the stream of live action video images.
  • 16. The method according to claim 1, wherein the synthetic video stream is obtained from a plurality of vantage points.
  • 17. A method, comprising: recognizing at least one object presented in a stream of live action video images; performing a database lookup to retrieve an object model of the recognized at least one object; defining physical models comprising the retrieved object model, describing a plurality of real-world objects within an environment over a time range, representing, for each object, at least a surface, a rotational and translational movement; defining real-world interactions of actual interactions between objects of the plurality of real-world objects; determining logic defining scoring of interaction between objects to define an outcome; receiving a synthetic change in at least one of the rotational and translational movement of at least one of the plurality of real-world objects, at a time before a termination of the time range, leading to a synthetic outcome according to the defined physical models describing the plurality of real-world objects within the environment over the time range, wherein the synthetic outcome is different from the real-world outcome; and generating a synthetic output, dependent on the defined physical models describing the plurality of real-world objects within the environment, dependent on the synthetic change, having a synthetic outcome different from the defined outcome.
  • 18. A system, comprising: a physical model describing actual interactions of a plurality of real-world objects within an environment over a time range, representing, for each of the plurality of real-world objects, at least a surface, a rotational and translational movement, and an interaction between the plurality of real-world objects; an input port configured to receive a user input representing a synthetic change in at least one of the rotational and translational movement of at least one of the plurality of real-world objects, at a time within the time range; at least one automated processor configured to: recognize an object of the plurality of objects in a stream of live action video images; and perform a database lookup for an object model of the recognized object, wherein the physical model comprises the object model; apply interaction logic of the objects within the environment to define a real-world outcome leading to a synthetic outcome according to the physical model and the interaction logic of the plurality of real-world objects within the environment, wherein the synthetic outcome is different from the real-world outcome; and generate a synthetic video stream, dependent on the real-world model and the received user input, representing the outcome.
  • 19. The system according to claim 18, wherein the physical model further represents a mass of a respective object, and the interaction between objects is momentum-dependent, wherein the interaction logic comprises rules of a team sport, and wherein the synthetic video stream complies with the laws of physics.
  • 20. The system according to claim 18, wherein the object model of the recognized at least one object comprises a statistical performance descriptor.
  • 21. The system according to claim 18, wherein the recognized object is a player in a team sport game, and the synthetic video stream comprises extrapolated hidden surfaces with respect to the live action video, derived from the object model.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a Continuation of U.S. patent application Ser. No. 16/153,627, filed Oct. 5, 2018, now U.S. Pat. No. 10,549,197, issued Feb. 4, 2020, which is a Continuation of U.S. patent application Ser. No. 15/789,621, filed Oct. 20, 2017, now U.S. Pat. No. 10,092,843, issued Oct. 9, 2018, which is a Continuation of U.S. patent application Ser. No. 14/851,860, filed Sep. 11, 2015, now U.S. Pat. No. 9,795,882, issued Oct. 24, 2017, which is a Continuation of U.S. patent application Ser. No. 13/161,820, filed Jun. 16, 2011, now U.S. Pat. No. 9,132,352, issued Sep. 15, 2015, which is a non-provisional of, and claims benefit of priority from U.S. Provisional Patent Application No. 61/358,232, filed Jun. 24, 2010, the entirety of which are each expressly incorporated herein by reference.

US Referenced Citations (782)
Number Name Date Kind
4843568 Krueger et al. Jun 1989 A
5021976 Wexelblat et al. Jun 1991 A
5130794 Ritchey Jul 1992 A
5275565 Moncrief Jan 1994 A
5307456 MacKay Apr 1994 A
5322441 Lewis et al. Jun 1994 A
5342047 Heidel et al. Aug 1994 A
5588139 Lanier et al. Dec 1996 A
5617515 MacLaren et al. Apr 1997 A
5619709 Caid et al. Apr 1997 A
5629594 Jacobus et al. May 1997 A
5691898 Rosenberg et al. Nov 1997 A
5721566 Rosenberg et al. Feb 1998 A
5734373 Rosenberg et al. Mar 1998 A
5739811 Rosenberg et al. Apr 1998 A
5742278 Chen et al. Apr 1998 A
5774357 Hoffberg et al. Jun 1998 A
5794178 Caid et al. Aug 1998 A
5805140 Rosenberg et al. Sep 1998 A
5850352 Moezzi et al. Dec 1998 A
5852450 Thingvold Dec 1998 A
5867386 Hoffberg et al. Feb 1999 A
5903395 Rallison et al. May 1999 A
5903454 Hoffberg et al. May 1999 A
5920477 Hoffberg et al. Jul 1999 A
5929846 Rosenberg et al. Jul 1999 A
5956484 Rosenberg et al. Sep 1999 A
5991085 Rallison et al. Nov 1999 A
6020876 Rosenberg et al. Feb 2000 A
6020931 Bilbrey et al. Feb 2000 A
6024576 Bevirt et al. Feb 2000 A
6028593 Rosenberg et al. Feb 2000 A
6057828 Rosenberg et al. May 2000 A
6061004 Rosenberg May 2000 A
6072504 Segen Jun 2000 A
6078308 Rosenberg et al. Jun 2000 A
6080063 Khosla Jun 2000 A
6084590 Robotham et al. Jul 2000 A
6088018 DeLeeuw et al. Jul 2000 A
6100874 Schena et al. Aug 2000 A
6101530 Rosenberg et al. Aug 2000 A
6104158 Jacobus et al. Aug 2000 A
6124862 Boyken et al. Sep 2000 A
6125385 Wies et al. Sep 2000 A
6133946 Cavallaro et al. Oct 2000 A
6144375 Jain et al. Nov 2000 A
6147674 Rosenberg et al. Nov 2000 A
6154201 Levin et al. Nov 2000 A
6160666 Rallison et al. Dec 2000 A
6161126 Wies et al. Dec 2000 A
6166723 Schena et al. Dec 2000 A
6169540 Rosenberg et al. Jan 2001 B1
6184868 Shahoian et al. Feb 2001 B1
6201533 Rosenberg et al. Mar 2001 B1
6219033 Rosenberg et al. Apr 2001 B1
6232891 Rosenberg May 2001 B1
6233389 Barton et al. May 2001 B1
6252579 Rosenberg et al. Jun 2001 B1
6252583 Braun et al. Jun 2001 B1
6256011 Culver Jul 2001 B1
6259382 Rosenberg Jul 2001 B1
6271828 Rosenberg et al. Aug 2001 B1
6271833 Rosenberg et al. Aug 2001 B1
6278439 Rosenberg et al. Aug 2001 B1
6285351 Chang et al. Sep 2001 B1
6288705 Rosenberg et al. Sep 2001 B1
6289299 Daniel, Jr. et al. Sep 2001 B1
6292170 Chang et al. Sep 2001 B1
6300936 Braun et al. Oct 2001 B1
6300937 Rosenberg Oct 2001 B1
6310605 Rosenberg et al. Oct 2001 B1
6343349 Braun et al. Jan 2002 B1
6348911 Rosenberg et al. Feb 2002 B1
6353850 Wies et al. Mar 2002 B1
6366272 Rosenberg et al. Apr 2002 B1
6374255 Peurach et al. Apr 2002 B1
6380925 Martin et al. Apr 2002 B1
6400352 Bruneau et al. Jun 2002 B1
6400996 Hoffberg et al. Jun 2002 B1
6411276 Braun et al. Jun 2002 B1
6421048 Shih et al. Jul 2002 B1
6424356 Chang et al. Jul 2002 B2
6429903 Young Aug 2002 B1
6437771 Rosenberg et al. Aug 2002 B1
6441846 Carlbom et al. Aug 2002 B1
6448977 Braun et al. Sep 2002 B1
6486872 Rosenberg et al. Nov 2002 B2
6580417 Rosenberg et al. Jun 2003 B2
6597347 Yasutake Jul 2003 B1
6616529 Qian et al. Sep 2003 B1
6624833 Kumar et al. Sep 2003 B1
6639581 Moore et al. Oct 2003 B1
6651044 Stoneman Nov 2003 B1
6671390 Barbour et al. Dec 2003 B1
6686911 Levin et al. Feb 2004 B1
6697043 Shahoian Feb 2004 B1
6697044 Shahoian et al. Feb 2004 B2
6697048 Rosenberg et al. Feb 2004 B2
6704001 Schena et al. Mar 2004 B1
6705871 Bevirt et al. Mar 2004 B1
6707443 Bruneau et al. Mar 2004 B2
6726567 Khosla Apr 2004 B1
6738065 Even-Zohar May 2004 B1
6801008 Jacobus et al. Oct 2004 B1
6823496 Bergman Reuter et al. Nov 2004 B2
6850222 Rosenberg Feb 2005 B1
6859819 Rosenberg et al. Feb 2005 B1
6876891 Schuler et al. Apr 2005 B1
6894678 Rosenberg et al. May 2005 B2
6903721 Braun et al. Jun 2005 B2
6928386 Hasser Aug 2005 B2
6979164 Kramer Dec 2005 B2
6982700 Rosenberg et al. Jan 2006 B2
7023423 Rosenberg Apr 2006 B2
7027032 Rosenberg et al. Apr 2006 B2
7038657 Rosenberg et al. May 2006 B2
7039866 Rosenberg et al. May 2006 B1
7054775 Rosenberg et al. May 2006 B2
7061467 Rosenberg Jun 2006 B2
7091950 Rosenberg et al. Aug 2006 B2
7102541 Rosenberg Sep 2006 B2
7106313 Schena et al. Sep 2006 B2
7113166 Rosenberg et al. Sep 2006 B1
7129951 Stelly, III Oct 2006 B2
7131073 Rosenberg et al. Oct 2006 B2
7136045 Rosenberg et al. Nov 2006 B2
7148875 Rosenberg et al. Dec 2006 B2
7149596 Berger et al. Dec 2006 B2
7158112 Rosenberg et al. Jan 2007 B2
7168042 Braun et al. Jan 2007 B2
7191191 Peurach et al. Mar 2007 B2
7202851 Cunningham et al. Apr 2007 B2
7209028 Boronkay et al. Apr 2007 B2
7209117 Rosenberg et al. Apr 2007 B2
7215326 Rosenberg May 2007 B2
7225404 Zilles et al. May 2007 B1
7242988 Hoffberg et al. Jul 2007 B1
7249951 Bevirt et al. Jul 2007 B2
7251637 Caid et al. Jul 2007 B1
7253803 Schena et al. Aug 2007 B2
7265750 Rosenberg Sep 2007 B2
7345672 Jacobus et al. Mar 2008 B2
7372463 Anand May 2008 B2
7403202 Nash Jul 2008 B1
7404716 Gregorio et al. Jul 2008 B2
7411576 Massie et al. Aug 2008 B2
7423631 Shahoian et al. Sep 2008 B2
7430490 Tong et al. Sep 2008 B2
7432910 Shahoian Oct 2008 B2
7446767 Takizawa et al. Nov 2008 B2
7457439 Madsen et al. Nov 2008 B1
7472047 Kramer et al. Dec 2008 B2
7489309 Levin et al. Feb 2009 B2
7535498 Segman May 2009 B2
RE40808 Shahoian et al. Jun 2009 E
7551780 Nudd et al. Jun 2009 B2
7561141 Shahoian et al. Jul 2009 B2
7561142 Shahoian et al. Jul 2009 B2
RE40891 Yasutake Sep 2009 E
7584077 Bergman Reuter et al. Sep 2009 B2
7587412 Weyl et al. Sep 2009 B2
7605800 Rosenberg Oct 2009 B2
7626569 Lanier Dec 2009 B2
7626589 Berger Dec 2009 B2
7636080 Rosenberg et al. Dec 2009 B2
7639387 Hull et al. Dec 2009 B2
7650319 Hoffberg et al. Jan 2010 B2
7669148 Hull et al. Feb 2010 B2
7672543 Hull et al. Mar 2010 B2
7675520 Gee et al. Mar 2010 B2
7682250 Ikebata et al. Mar 2010 B2
7696978 Mallett et al. Apr 2010 B2
7702660 Chan et al. Apr 2010 B2
7702673 Hull et al. Apr 2010 B2
7710399 Bruneau et al. May 2010 B2
7716224 Reztlaff, II et al. May 2010 B2
7728820 Rosenberg et al. Jun 2010 B2
7742036 Grant et al. Jun 2010 B2
7769772 Weyl et al. Aug 2010 B2
7791588 Tierling et al. Sep 2010 B2
7796155 Neely, III et al. Sep 2010 B1
7809167 Bell Oct 2010 B2
7812820 Schuler et al. Oct 2010 B2
7812986 Graham et al. Oct 2010 B2
7821407 Shears et al. Oct 2010 B2
7821496 Rosenberg et al. Oct 2010 B2
7825815 Shears et al. Nov 2010 B2
7885912 Stoneman Feb 2011 B1
7885955 Hull et al. Feb 2011 B2
7889174 Culver Feb 2011 B2
7889209 Berger et al. Feb 2011 B2
RE42183 Culver Mar 2011 E
7917554 Hull et al. Mar 2011 B2
7920759 Hull et al. Apr 2011 B2
7921309 Isbister et al. Apr 2011 B1
7944433 Schena et al. May 2011 B2
7944435 Rosenberg et al. May 2011 B2
7956758 Hattori Jun 2011 B2
7978081 Shears et al. Jul 2011 B2
7978183 Rosenberg et al. Jul 2011 B2
7982720 Rosenberg et al. Jul 2011 B2
7991778 Hull et al. Aug 2011 B2
8005831 Hull et al. Aug 2011 B2
8007282 Gregorio et al. Aug 2011 B2
8015129 Thiesson et al. Sep 2011 B2
8031181 Rosenberg et al. Oct 2011 B2
8032477 Hoffberg et al. Oct 2011 B1
8049734 Rosenberg et al. Nov 2011 B2
8059104 Shahoian et al. Nov 2011 B2
8059105 Rosenberg et al. Nov 2011 B2
8063892 Shahoian et al. Nov 2011 B2
8063893 Rosenberg et al. Nov 2011 B2
8072422 Rosenberg et al. Dec 2011 B2
8077145 Rosenberg et al. Dec 2011 B2
8086038 Ke et al. Dec 2011 B2
8131647 Siegel et al. Mar 2012 B2
8144921 Ke et al. Mar 2012 B2
8156116 Graham et al. Apr 2012 B2
8156427 Graham et al. Apr 2012 B2
8169402 Shahoian et al. May 2012 B2
8174535 Berger et al. May 2012 B2
8184094 Rosenberg May 2012 B2
8184155 Ke et al. May 2012 B2
8188981 Shahoian et al. May 2012 B2
8188989 Levin et al. May 2012 B2
8195659 Hull et al. Jun 2012 B2
8199107 Xu et al. Jun 2012 B2
8199108 Bell Jun 2012 B2
8207843 Huston Jun 2012 B2
8212772 Shahoian Jul 2012 B2
8217938 Chen et al. Jul 2012 B2
8230367 Bell et al. Jul 2012 B2
8234282 Retzlaff, II et al. Jul 2012 B2
8266173 Reztlaff, II et al. Sep 2012 B1
8276088 Ke et al. Sep 2012 B2
8284238 Stone et al. Oct 2012 B2
8315432 Lefevre et al. Nov 2012 B2
8330811 Macguire, Jr. Dec 2012 B2
8330812 Maguire, Jr. Dec 2012 B2
8332401 Hull et al. Dec 2012 B2
8335789 Hull et al. Dec 2012 B2
8341210 Lattyak et al. Dec 2012 B1
8341513 Lattyak et al. Dec 2012 B1
8352400 Hoffberg et al. Jan 2013 B2
8368641 Tremblay et al. Feb 2013 B2
8378979 Frid et al. Feb 2013 B2
8384777 Maguire, Jr. Feb 2013 B2
8385589 Erol et al. Feb 2013 B2
8402490 Hoffberg-Borghesani et al. Mar 2013 B2
8417261 Huston Apr 2013 B2
8436821 Plichta et al. May 2013 B1
8441437 Rank May 2013 B2
8446367 Benko et al. May 2013 B2
8456484 Berger et al. Jun 2013 B2
8462116 Bruneau et al. Jun 2013 B2
8467133 Miller Jun 2013 B2
8472120 Border et al. Jun 2013 B2
8477425 Border et al. Jul 2013 B2
8482859 Border et al. Jul 2013 B2
8487838 Lewis et al. Jul 2013 B2
8488246 Border et al. Jul 2013 B2
8500284 Rotschild et al. Aug 2013 B2
8508469 Rosenberg et al. Aug 2013 B1
8521737 Hart et al. Aug 2013 B2
8527873 Braun et al. Sep 2013 B2
8589488 Huston et al. Nov 2013 B2
8595218 Bell et al. Nov 2013 B2
8600989 Hull et al. Dec 2013 B2
8615374 Discenzo Dec 2013 B1
8617008 Marty Dec 2013 B2
8638308 Cunningham Jan 2014 B2
8656040 Bajaj et al. Feb 2014 B1
8686941 Rank Apr 2014 B2
8700005 Kiraly et al. Apr 2014 B1
8704710 Shen et al. Apr 2014 B2
8719200 Beilby et al. May 2014 B2
RE44925 Vincent Jun 2014 E
8747196 Rosenberg et al. Jun 2014 B2
8758103 Nicora et al. Jun 2014 B2
RE45062 Maguire, Jr. Aug 2014 E
RE45114 Maguire, Jr. Sep 2014 E
8838591 Hull et al. Sep 2014 B2
8842003 Huston Sep 2014 B2
8866602 Birnbaum et al. Oct 2014 B2
8886578 Galiana et al. Nov 2014 B2
8892495 Hoffberg et al. Nov 2014 B2
8894489 Lim Nov 2014 B2
8928558 Lewis et al. Jan 2015 B2
8933939 Li et al. Jan 2015 B2
8933967 Huston et al. Jan 2015 B2
8949287 Hull et al. Feb 2015 B2
8954444 Retzlaff, II et al. Feb 2015 B1
8957909 Joseph et al. Feb 2015 B2
8965807 Rykov et al. Feb 2015 B1
8976112 Birnbaum et al. Mar 2015 B2
8990215 Reztlaff, II et al. Mar 2015 B1
8994643 Massie et al. Mar 2015 B2
9035955 Keane et al. May 2015 B2
9035970 Lamb et al. May 2015 B2
9041520 Nakamura et al. May 2015 B2
RE45559 Williams Jun 2015 E
9055267 Raghoebardajal et al. Jun 2015 B2
9058058 Bell et al. Jun 2015 B2
9063953 Hull et al. Jun 2015 B2
9097890 Miller et al. Aug 2015 B2
9097891 Border et al. Aug 2015 B2
9105210 Lamb et al. Aug 2015 B2
9110504 Lewis et al. Aug 2015 B2
9111458 Michalowski et al. Aug 2015 B2
9128281 Osterhout et al. Sep 2015 B2
9129295 Border et al. Sep 2015 B2
9132352 Rabin et al. Sep 2015 B1
9134534 Border et al. Sep 2015 B2
9134803 Birnbaum et al. Sep 2015 B2
9135791 Nakamura et al. Sep 2015 B2
9142104 Nakamura et al. Sep 2015 B2
9171202 Hull et al. Oct 2015 B2
9171437 Nakamura et al. Oct 2015 B2
9178744 Lattyak et al. Nov 2015 B1
9182596 Border et al. Nov 2015 B2
9202443 Perez et al. Dec 2015 B2
9213405 Perez et al. Dec 2015 B2
9213940 Beilby et al. Dec 2015 B2
9218116 Benko et al. Dec 2015 B2
9223134 Miller et al. Dec 2015 B2
9229227 Border et al. Jan 2016 B2
9266017 Parker et al. Feb 2016 B1
9275052 Siegel et al. Mar 2016 B2
9280205 Rosenberg et al. Mar 2016 B2
9282927 Hyde et al. Mar 2016 B2
9285589 Osterhout et al. Mar 2016 B2
9317971 Lamb et al. Apr 2016 B2
9323325 Perez et al. Apr 2016 B2
9329689 Osterhout et al. May 2016 B2
9341843 Border et al. May 2016 B2
9344842 Huston May 2016 B2
9358361 Hyde et al. Jun 2016 B2
9366862 Haddick et al. Jun 2016 B2
9373029 Hull et al. Jun 2016 B2
9384737 Lamb et al. Jul 2016 B2
9405751 Hull et al. Aug 2016 B2
9449150 Hyde et al. Sep 2016 B2
9466146 Oka et al. Oct 2016 B2
9479591 Bajaj et al. Oct 2016 B1
9492847 Goldenberg et al. Nov 2016 B2
9495803 Nakamura et al. Nov 2016 B2
9495804 Nakamura et al. Nov 2016 B2
9498694 Huston et al. Nov 2016 B2
9504788 Hyde et al. Nov 2016 B2
9509981 Wilson et al. Nov 2016 B2
9524081 Keane et al. Dec 2016 B2
9541901 Rotschild et al. Jan 2017 B2
9547222 Smith et al. Jan 2017 B2
9547368 Vartanian et al. Jan 2017 B2
9560967 Hyde et al. Feb 2017 B2
9566494 Huston et al. Feb 2017 B2
9568984 Ryan et al. Feb 2017 B1
9582077 Rosenberg et al. Feb 2017 B2
9582178 Grant et al. Feb 2017 B2
9594347 Kaufman et al. Mar 2017 B2
9613261 Wilson et al. Apr 2017 B2
9619132 Ording Apr 2017 B2
9631931 Shen et al. Apr 2017 B2
9649469 Hyde et al. May 2017 B2
9662391 Hyde et al. May 2017 B2
9690379 Tremblay et al. Jun 2017 B2
9691299 Peters et al. Jun 2017 B2
9727042 Hoffberg-Borghesani et al. Aug 2017 B2
9740287 Braun et al. Aug 2017 B2
RE46548 Williams Sep 2017 E
9759917 Osterhout et al. Sep 2017 B2
9761056 Gentilin et al. Sep 2017 B1
9772772 Vartanian et al. Sep 2017 B2
9778840 Vartanian et al. Oct 2017 B2
9785238 Birnbaum et al. Oct 2017 B2
9788907 Alvi et al. Oct 2017 B1
9792833 Peters et al. Oct 2017 B2
9795882 Rabin et al. Oct 2017 B1
9811166 Bell et al. Nov 2017 B2
9836117 Shapira Dec 2017 B2
9836654 Alvi et al. Dec 2017 B1
9836994 Kindig et al. Dec 2017 B2
9875406 Haddick et al. Jan 2018 B2
9888005 Lattyak et al. Feb 2018 B1
9898864 Shapira et al. Feb 2018 B2
9898866 Fuchs et al. Feb 2018 B2
9907997 Cusey et al. Mar 2018 B2
9911232 Shapira et al. Mar 2018 B2
9921665 Scott et al. Mar 2018 B2
9922172 Alvi et al. Mar 2018 B1
9922463 Lanier et al. Mar 2018 B2
9946337 Gentilin et al. Apr 2018 B2
9959644 Monney et al. May 2018 B2
9959675 Gal et al. May 2018 B2
9965471 Huston et al. May 2018 B2
9977782 Huston et al. May 2018 B2
10002540 Michalowski et al. Jun 2018 B2
10019061 Birnbaum et al. Jul 2018 B2
10038966 Mehra Jul 2018 B1
10089454 Archambault et al. Oct 2018 B2
10092843 Rabin et al. Oct 2018 B1
10095033 Wang et al. Oct 2018 B2
10120335 Rotschild et al. Nov 2018 B2
10139901 Gentilin et al. Nov 2018 B2
10152131 Grant et al. Dec 2018 B2
10168531 Trail et al. Jan 2019 B1
10169917 Chen et al. Jan 2019 B2
10180572 Osterhout et al. Jan 2019 B2
10191652 Vartanian et al. Jan 2019 B2
10204529 Peters et al. Feb 2019 B2
10210665 Constantinides Feb 2019 B2
10216278 Nakamura et al. Feb 2019 B2
10223832 Geisner et al. Mar 2019 B2
10235808 Chen et al. Mar 2019 B2
10248203 Birnbaum et al. Apr 2019 B2
10248842 Bardagjy et al. Apr 2019 B1
10249095 Energin et al. Apr 2019 B2
10268888 Osterhout et al. Apr 2019 B2
10297129 Piccolo, III May 2019 B2
10311584 Hall et al. Jun 2019 B1
10317680 Richards et al. Jun 2019 B1
10317999 Ahne et al. Jun 2019 B2
10326977 Mercier et al. Jun 2019 B1
10338379 Yoon Jul 2019 B1
10341803 Mehra Jul 2019 B1
10345600 Chi et al. Jul 2019 B1
10349194 Hoffman et al. Jul 2019 B1
10359845 Sulai et al. Jul 2019 B1
10360732 Krishnaswamy et al. Jul 2019 B2
10365711 Fuchs et al. Jul 2019 B2
10373392 Christen et al. Aug 2019 B2
10389996 Mercier et al. Aug 2019 B1
10397727 Schissler Aug 2019 B1
10402950 Geng et al. Sep 2019 B1
10410372 Bapat et al. Sep 2019 B1
10412527 Miller et al. Sep 2019 B1
10415973 Shen et al. Sep 2019 B2
10416445 Yoon Sep 2019 B1
10416767 Nakamura et al. Sep 2019 B2
10417784 Cavin et al. Sep 2019 B1
10419701 Liu Sep 2019 B2
10425762 Schissler Sep 2019 B1
10429656 Sharma et al. Oct 2019 B1
10429657 Sharma et al. Oct 2019 B1
10429927 Sharma et al. Oct 2019 B1
10432908 Mercier et al. Oct 2019 B1
10440498 Amengual Gari et al. Oct 2019 B1
10451947 Lu et al. Oct 2019 B1
10453828 Ouderkirk et al. Oct 2019 B1
10462451 Trail et al. Oct 2019 B1
10466484 Yoon et al. Nov 2019 B1
10468552 Lutgen Nov 2019 B2
10489651 Luccin et al. Nov 2019 B2
10495798 Peng et al. Dec 2019 B1
10495882 Badino et al. Dec 2019 B1
10497295 Jia et al. Dec 2019 B1
10497320 Ouderkirk et al. Dec 2019 B1
10502963 Noble et al. Dec 2019 B1
10503007 Parsons et al. Dec 2019 B1
10504243 Linde et al. Dec 2019 B2
10506217 Linde et al. Dec 2019 B2
10509228 Sulai et al. Dec 2019 B1
10509467 Wei et al. Dec 2019 B1
10512832 Huston et al. Dec 2019 B2
10528128 Yoon et al. Jan 2020 B1
10529113 Sheikh et al. Jan 2020 B1
10529117 Hunt et al. Jan 2020 B2
10529290 Parsons et al. Jan 2020 B1
10539787 Haddick et al. Jan 2020 B2
10549197 Rabin et al. Feb 2020 B1
10553012 Hunt et al. Feb 2020 B2
10553013 Hunt et al. Feb 2020 B2
10558260 Miller Feb 2020 B2
10558341 Hinckley et al. Feb 2020 B2
10564431 Chao et al. Feb 2020 B1
10564731 Bell et al. Feb 2020 B2
10567731 Mercier et al. Feb 2020 B1
20010001303 Ohsuga et al. May 2001 A1
20010003715 Jutzi et al. Jun 2001 A1
20010035865 Utterback et al. Nov 2001 A1
20020126120 Snowdon et al. Sep 2002 A1
20030200513 Bergman Reuter et al. Oct 2003 A1
20040001647 Lake et al. Jan 2004 A1
20040018476 LaDue Jan 2004 A1
20040044589 Inoue et al. Mar 2004 A1
20040116181 Takizawa et al. Jun 2004 A1
20040179008 Gordon et al. Sep 2004 A1
20040221250 Bergman Reuter et al. Nov 2004 A1
20040243529 Stoneman Dec 2004 A1
20050021518 Snowdon et al. Jan 2005 A1
20050023763 Richardson Feb 2005 A1
20050032582 Mahajan Feb 2005 A1
20050041100 Maguire Feb 2005 A1
20050130725 Creamer et al. Jun 2005 A1
20050211768 Stillman Sep 2005 A1
20050261073 Farrington, Jr. et al. Nov 2005 A1
20060139355 Tak et al. Jun 2006 A1
20060146169 Segman Jul 2006 A1
20060219251 Ray Oct 2006 A1
20060262188 Elyada et al. Nov 2006 A1
20060271448 Inoue et al. Nov 2006 A1
20070021199 Ahdoot Jan 2007 A1
20070021207 Ahdoot Jan 2007 A1
20070061022 Hoffberg-Borghesani et al. Mar 2007 A1
20070061023 Hoffberg et al. Mar 2007 A1
20070091063 Nakamura et al. Apr 2007 A1
20070146372 Gee et al. Jun 2007 A1
20070157226 Misra Jul 2007 A1
20070296723 Williams Dec 2007 A1
20080018667 Cheng et al. Jan 2008 A1
20080040749 Hoffberg et al. Feb 2008 A1
20080146302 Olsen et al. Jun 2008 A1
20080186330 Pendleton et al. Aug 2008 A1
20080312010 Marty Dec 2008 A1
20090029754 Slocum Jan 2009 A1
20090040055 Hattori Feb 2009 A1
20090128548 Gloudemans et al. May 2009 A1
20090128549 Gloudemans et al. May 2009 A1
20090128563 Gloudemans et al. May 2009 A1
20090128568 Gloudemans et al. May 2009 A1
20090128577 Gloudemans et al. May 2009 A1
20090129630 Gloudemans et al. May 2009 A1
20090131189 Swartz May 2009 A1
20090150802 Do et al. Jun 2009 A1
20090244404 Park Oct 2009 A1
20090254417 Beilby et al. Oct 2009 A1
20090285444 Erol et al. Nov 2009 A1
20100017728 Cho et al. Jan 2010 A1
20100019992 Maguire, Jr. Jan 2010 A1
20100066701 Ningrat Mar 2010 A1
20100135509 Zeleny Jun 2010 A1
20100149329 Maguire, Jr. Jun 2010 A1
20100157045 Maguire, Jr. Jun 2010 A1
20100209007 Elyada et al. Aug 2010 A1
20100231706 Maguire, Jr. Sep 2010 A1
20100245237 Nakamura Sep 2010 A1
20110029922 Hoffberg et al. Feb 2011 A1
20110050404 Nakamura et al. Mar 2011 A1
20110128555 Rotschild et al. Jun 2011 A1
20120004956 Huston et al. Jan 2012 A1
20120007885 Huston Jan 2012 A1
20120017232 Hoffberg et al. Jan 2012 A1
20120075168 Osterhout et al. Mar 2012 A1
20120086725 Joseph et al. Apr 2012 A1
20120150698 McClements Jun 2012 A1
20120150997 McClements Jun 2012 A1
20120151320 McClements Jun 2012 A1
20120151345 McClements Jun 2012 A1
20120151346 McClements Jun 2012 A1
20120151347 McClements Jun 2012 A1
20120179636 Galiana et al. Jul 2012 A1
20120194418 Osterhout et al. Aug 2012 A1
20120194419 Osterhout et al. Aug 2012 A1
20120194420 Osterhout et al. Aug 2012 A1
20120194549 Osterhout et al. Aug 2012 A1
20120194550 Osterhout et al. Aug 2012 A1
20120194551 Osterhout et al. Aug 2012 A1
20120194552 Osterhout et al. Aug 2012 A1
20120194553 Osterhout et al. Aug 2012 A1
20120200488 Osterhout et al. Aug 2012 A1
20120200499 Osterhout et al. Aug 2012 A1
20120200601 Osterhout et al. Aug 2012 A1
20120206322 Osterhout et al. Aug 2012 A1
20120206323 Osterhout et al. Aug 2012 A1
20120206334 Osterhout et al. Aug 2012 A1
20120206335 Osterhout et al. Aug 2012 A1
20120206485 Osterhout et al. Aug 2012 A1
20120212398 Border et al. Aug 2012 A1
20120212399 Border et al. Aug 2012 A1
20120212400 Border et al. Aug 2012 A1
20120212406 Osterhout et al. Aug 2012 A1
20120212414 Osterhout et al. Aug 2012 A1
20120212484 Haddick et al. Aug 2012 A1
20120212499 Haddick et al. Aug 2012 A1
20120218172 Border et al. Aug 2012 A1
20120218301 Miller Aug 2012 A1
20120235883 Border et al. Sep 2012 A1
20120235884 Miller et al. Sep 2012 A1
20120235885 Miller et al. Sep 2012 A1
20120235886 Border et al. Sep 2012 A1
20120235887 Border et al. Sep 2012 A1
20120235900 Border et al. Sep 2012 A1
20120236030 Border et al. Sep 2012 A1
20120236031 Haddick et al. Sep 2012 A1
20120242678 Border et al. Sep 2012 A1
20120242697 Border et al. Sep 2012 A1
20120242698 Haddick et al. Sep 2012 A1
20120249797 Haddick et al. Oct 2012 A1
20120306874 Nguyen et al. Dec 2012 A1
20120306907 Huston Dec 2012 A1
20120327113 Huston Dec 2012 A1
20120331058 Huston et al. Dec 2012 A1
20130147598 Hoffberg et al. Jun 2013 A1
20130166047 Fernandez et al. Jun 2013 A1
20130166693 Fernandez et al. Jun 2013 A1
20130167162 Fernandez Jun 2013 A1
20130173673 Miller Jul 2013 A1
20130222369 Huston et al. Aug 2013 A1
20130265148 Nakamura et al. Oct 2013 A1
20130265149 Nakamura et al. Oct 2013 A1
20130265254 Nakamura et al. Oct 2013 A1
20130293939 Rotschild et al. Nov 2013 A1
20130300637 Smits et al. Nov 2013 A1
20130311868 Monney et al. Nov 2013 A1
20130314303 Osterhout et al. Nov 2013 A1
20140018135 Fernandez et al. Jan 2014 A1
20140025768 Huston et al. Jan 2014 A1
20140028832 Maguire, Jr. Jan 2014 A1
20140033052 Kaufman et al. Jan 2014 A1
20140033081 Fernandez et al. Jan 2014 A1
20140045597 Fernandez et al. Feb 2014 A1
20140063054 Osterhout et al. Mar 2014 A1
20140063055 Osterhout et al. Mar 2014 A1
20140074640 Gleadall et al. Mar 2014 A1
20140279072 Joseph Sep 2014 A1
20140279424 Holman et al. Sep 2014 A1
20140279426 Holman et al. Sep 2014 A1
20140279427 Holman et al. Sep 2014 A1
20140279428 Holman et al. Sep 2014 A1
20140297568 Beilby (nee Capper) et al. Oct 2014 A1
20140320529 Roberts et al. Oct 2014 A1
20140327609 Leroy et al. Nov 2014 A1
20150006639 Huston et al. Jan 2015 A1
20150097756 Ziarati et al. Apr 2015 A1
20150204559 Hoffberg et al. Jul 2015 A1
20150253854 Nakamura et al. Sep 2015 A1
20150287241 Huston et al. Oct 2015 A1
20150287246 Huston et al. Oct 2015 A1
20150309316 Osterhout et al. Oct 2015 A1
20150371510 Nakamura et al. Dec 2015 A1
20160035139 Fuchs et al. Feb 2016 A1
20160077489 Kaufman et al. Mar 2016 A1
20160187654 Border et al. Jun 2016 A1
20160209648 Haddick et al. Jul 2016 A1
20160217620 Constantinides Jul 2016 A1
20160220885 Huston et al. Aug 2016 A1
20160257000 Guerin et al. Sep 2016 A1
20170031443 Nakamura et al. Feb 2017 A1
20170090420 Rotschild et al. Mar 2017 A1
20170091998 Piccolo Mar 2017 A1
20170168566 Osterhout et al. Jun 2017 A1
20170203438 Guerin et al. Jul 2017 A1
20170206708 Gentilin et al. Jul 2017 A1
20170220112 Nakamura et al. Aug 2017 A1
20170228028 Nakamura et al. Aug 2017 A1
20170263053 Shah et al. Sep 2017 A1
20170301008 Houdek-Heis et al. Oct 2017 A1
20170301142 Gentilin et al. Oct 2017 A1
20170344114 Osterhout et al. Nov 2017 A1
20170353811 McGibney Dec 2017 A1
20170365102 Huston et al. Dec 2017 A1
20180005312 Mattingly et al. Jan 2018 A1
20180012408 Gentilin et al. Jan 2018 A1
20180024624 Gentilin et al. Jan 2018 A1
20180024625 Gentilin et al. Jan 2018 A1
20180025028 Fransen Jan 2018 A1
20180036531 Schwarz et al. Feb 2018 A1
20180040044 Mattingly et al. Feb 2018 A1
20180082478 Constantinides Mar 2018 A1
20180092698 Chopra et al. Apr 2018 A1
20180108172 Huston et al. Apr 2018 A1
20180113506 Hall Apr 2018 A1
20180122142 Egeler et al. May 2018 A1
20180143976 Huston May 2018 A1
20180164591 Saarikko Jun 2018 A1
20180172995 Lee et al. Jun 2018 A1
20180204276 Tumey Jul 2018 A1
20180210550 Tumey Jul 2018 A1
20180224932 Von Novak et al. Aug 2018 A1
20180224936 Tumey Aug 2018 A1
20180239145 Lanman et al. Aug 2018 A1
20180247128 Alvi et al. Aug 2018 A1
20180270609 Huston Sep 2018 A1
20180276841 Krishnaswamy et al. Sep 2018 A1
20180293798 Energin et al. Oct 2018 A1
20180300551 Luccin et al. Oct 2018 A1
20180300952 Evans et al. Oct 2018 A1
20180307311 Webb et al. Oct 2018 A1
20180335630 Lu et al. Nov 2018 A1
20180343432 Duan et al. Nov 2018 A1
20180344514 Sylvester Dec 2018 A1
20180349690 Rhee et al. Dec 2018 A1
20180364840 Alack, Jr. et al. Dec 2018 A1
20180365896 Jung Dec 2018 A1
20180374267 Yurkin Dec 2018 A1
20180376090 Liu Dec 2018 A1
20190012835 Bleyer et al. Jan 2019 A1
20190025587 Osterhout et al. Jan 2019 A1
20190035154 Liu Jan 2019 A1
20190049949 Moeller et al. Feb 2019 A1
20190104325 Linares et al. Apr 2019 A1
20190105572 Bell et al. Apr 2019 A1
20190107723 Lee et al. Apr 2019 A1
20190108652 Linde et al. Apr 2019 A1
20190110039 Linde et al. Apr 2019 A1
20190113971 Ahne et al. Apr 2019 A1
20190129089 Mohanty May 2019 A1
20190129180 Mohanty May 2019 A1
20190129506 Nakamura et al. May 2019 A1
20190141310 Pohl et al. May 2019 A1
20190156222 Emma et al. May 2019 A1
20190166447 Seldess May 2019 A1
20190171463 Energin et al. Jun 2019 A1
20190172267 Constantinides Jun 2019 A1
20190173467 Moseley Jun 2019 A1
20190178607 Johnson Jun 2019 A1
20190187779 Miller Jun 2019 A1
20190201783 Higgins et al. Jul 2019 A1
20190212821 Keller et al. Jul 2019 A1
20190212822 Keller et al. Jul 2019 A1
20190212823 Keller et al. Jul 2019 A1
20190212824 Keller et al. Jul 2019 A1
20190212827 Kin et al. Jul 2019 A1
20190212828 Kin et al. Jul 2019 A1
20190215938 Paolini Jul 2019 A1
20190227316 Lee et al. Jul 2019 A1
20190227321 Lee et al. Jul 2019 A1
20190227322 Schaub et al. Jul 2019 A1
20190227624 Holman et al. Jul 2019 A1
20190227625 Holman et al. Jul 2019 A1
20190227664 Holman et al. Jul 2019 A1
20190227665 Holman et al. Jul 2019 A1
20190244582 Fruchter et al. Aug 2019 A1
20190278091 Smits et al. Sep 2019 A1
20190283248 Guerin et al. Sep 2019 A1
20190285890 Lam et al. Sep 2019 A1
20190285891 Lam et al. Sep 2019 A1
20190295503 Ouderkirk et al. Sep 2019 A1
20190304166 Yu et al. Oct 2019 A1
20190305181 Lauermann et al. Oct 2019 A1
20190305183 Lutgen Oct 2019 A1
20190305185 Lauermann et al. Oct 2019 A1
20190305188 Lauermann et al. Oct 2019 A1
20190311232 Hall et al. Oct 2019 A1
20190317622 Leigh et al. Oct 2019 A1
20190317642 Moseley Oct 2019 A1
20190318526 Hunt et al. Oct 2019 A1
20190318528 Hunt et al. Oct 2019 A1
20190318529 Hunt et al. Oct 2019 A1
20190318530 Hunt et al. Oct 2019 A1
20190318678 Sears et al. Oct 2019 A1
20190319620 Moseley Oct 2019 A1
20190332175 Vaananen et al. Oct 2019 A1
20190333280 Egeler et al. Oct 2019 A1
20190339837 Furtwangler Nov 2019 A1
20190340306 Harrison et al. Nov 2019 A1
20190340816 Rogers Nov 2019 A1
20190340818 Furtwangler Nov 2019 A1
20190340833 Furtwangler et al. Nov 2019 A1
20190342647 Mehra et al. Nov 2019 A1
20190347981 Horowitz et al. Nov 2019 A1
20190355138 Hall et al. Nov 2019 A1
20190361518 Vakrat et al. Nov 2019 A1
20190361523 Sharma et al. Nov 2019 A1
20190369718 Wei et al. Dec 2019 A1
20190377176 Sharp Dec 2019 A1
20190377182 Sharp Dec 2019 A1
20190377183 Sharp Dec 2019 A1
20190377184 Sharp et al. Dec 2019 A1
20190377406 Steptoe et al. Dec 2019 A1
20190385613 Mindlin et al. Dec 2019 A1
20190387324 Porter Dec 2019 A1
20190391396 Saarikko Dec 2019 A1
20190394564 Mehra et al. Dec 2019 A1
20200004401 Hwang et al. Jan 2020 A1
20200005026 Andersen et al. Jan 2020 A1
20200005539 Hwang et al. Jan 2020 A1
20200007807 Liu Jan 2020 A1
20200012349 Nakamura et al. Jan 2020 A1
20200012946 Costa et al. Jan 2020 A1
20200018875 Mohanty et al. Jan 2020 A1
20200018962 Lu et al. Jan 2020 A1
20200037095 Seldess Jan 2020 A1
20200043219 Hunt et al. Feb 2020 A1
20200045491 Robinson et al. Feb 2020 A1
20200049872 Peng et al. Feb 2020 A1
20200049992 Peng et al. Feb 2020 A1
20200051483 Buckley Feb 2020 A1
20200057828 Harrison et al. Feb 2020 A1
20200057911 Harrison et al. Feb 2020 A1
20200058401 Harrison et al. Feb 2020 A1
20200060007 Harrison et al. Feb 2020 A1
Provisional Applications (1)
Number Date Country
61358232 Jun 2010 US
Continuations (4)
Number Date Country
Parent 16153627 Oct 2018 US
Child 16779533 US
Parent 15789621 Oct 2017 US
Child 16153627 US
Parent 14851860 Sep 2015 US
Child 15789621 US
Parent 13161820 Jun 2011 US
Child 14851860 US