Human body pose estimation

Information

  • Patent Grant
  • 10210382
  • Patent Number
    10,210,382
  • Date Filed
    Tuesday, December 22, 2015
  • Date Issued
    Tuesday, February 19, 2019
Abstract
Techniques for human body pose estimation are disclosed herein. Depth map images from a depth camera may be processed to calculate a probability that each pixel of the depth map is associated with one or more segments or body parts of a body. Body parts may then be constructed of the pixels and processed to define joints or nodes of those body parts. The nodes or joints may be provided to a system which may construct a model of the body from the various nodes or joints.
Description
BACKGROUND

In a typical computing environment, a user has an input device such as a keyboard, a mouse, a joystick or the like, which may be connected to the computing environment by a cable, wire, wireless connection or the like. If control of a computing environment were to be shifted from a connected controller to gesture- or pose-based control, the system would need effective techniques to determine what poses or gestures a person is making. Interpreting gestures or poses in a tracking and processing system without knowing the pose of a user's body may cause the system to misinterpret commands, or to miss them altogether.


Further, a user of a tracking and processing system may stand at one of various different possible angles with respect to a capture device, and the user's gesture may appear differently to the capture device depending upon the particular angle of the user with respect to the capture device. For example, if the capture device is unaware that the user is not directly facing the capture device, then the user extending his arm directly forward could possibly be misinterpreted by the capture device as the user extending his arm partially to the left or the right. Thus, the system may not work properly without body pose estimation.


Accordingly, there is a need for technology that allows a tracking and processing system to determine the position of a user's body, and to therefore better interpret the gestures that the user makes.


SUMMARY

Techniques for human body pose estimation are disclosed herein. Depth map images from a depth camera may be processed to calculate a probability that each pixel of the depth map is associated with one or more segments or body parts of a body. Body parts may then be constructed of the pixels and processed to define joints or nodes of those body parts. The nodes or joints may be provided to a system which may construct a model of the body from the various nodes or joints.


In an embodiment, a first pixel of a depth map may be associated with one or more body parts of one or more users. Association with a body part may mean that there is a high probability that the first pixel is located within the body part. This probability may be determined by measuring the background depth, the depth of the first pixel, and the depth of various other pixels around the first pixel.


The location and angle at which various other pixels around the first pixel may be measured for depth may be determined by a feature test training program. In one embodiment, each time the depth at a pixel is measured, a determination of whether the pixel is within the depth range of the body is made. Based on the determination, the distance and angle for the next test pixel may be provided. Selecting the test pixels in such a way may increase the efficiency and robustness of the system.


Body poses, which may include pointing, xyz coordinates, joints, rotation, area, and any other aspects of one or more body parts of a user, may be estimated for multiple users. In an embodiment, this may be accomplished by assuming a user segmentation. For example, values may be assigned to an image such that a value 0 represents background, value 1 represents user 1, value 2 represents user 2, etc. Given this player segmentation image, it is possible to classify all user 1 pixels and perform three dimensional centroid finding, and then repeat this process for subsequent users. In another embodiment, background subtraction may be performed and the remaining foreground pixels (belonging to the multiple users) may then be classified as associated with one or more body parts. In a further embodiment, the background may be considered another ‘body part’ and every pixel in the frame may be considered and associated with one or more body parts, including the background. When computing centroids, it may be ensured that each centroid is spatially localized, so that a respective body part is present for each user. The centroids may then be combined into coherent models by, for example, connecting neighboring body parts throughout each user's body.
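
For illustration only, the sketch below shows one way the player-segmentation idea described above could be realized in code: a label image marks each pixel as background (0) or as belonging to user 1, user 2, and so on, and body-part centroids are then found separately within each user's pixels. The array layouts, function names, and the classify_user_pixels helper are hypothetical and not taken from the disclosure.

```python
# Hedged sketch: per-user body-part centroids from a player segmentation image.
import numpy as np

def per_user_centroids(segmentation, xyz, classify_user_pixels):
    """segmentation: (H, W) int labels, 0 = background, k = user k.
    xyz: (H, W, 3) world coordinates per pixel.
    classify_user_pixels(user_mask) -> dict mapping body part name to a
    boolean (H, W) mask of pixels assigned to that part (hypothetical helper)."""
    models = {}
    for user_id in np.unique(segmentation):
        if user_id == 0:
            continue  # skip the background "user"
        user_mask = segmentation == user_id
        parts = classify_user_pixels(user_mask)
        models[user_id] = {
            name: xyz[mask & user_mask].mean(axis=0)
            for name, mask in parts.items() if (mask & user_mask).any()
        }
    return models
```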


In an embodiment, after one or more initial body part probabilities are calculated for each pixel, the initial probabilities for each pixel may be compared with the initial probabilities of one or more offset adjacent pixels to further refine the probability calculations. For example, if the initial probabilities suggest that adjacent pixels are in the same or adjacent body parts (e.g., head and neck), then this would increase the probabilities of the initial calculations. By contrast, if the initial probabilities suggest that adjacent pixels are in non-adjacent body parts (e.g., head and foot), then this would decrease the probabilities of the initial calculations.
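
As a rough, hedged sketch of the refinement pass just described, the code below nudges each pixel's initial probabilities up when neighboring pixels favor the same or an anatomically adjacent part, and down when they favor non-adjacent parts. The adjacency table, neighborhood, and weighting scheme are illustrative assumptions rather than the patent's exact rule.

```python
# Hedged sketch: refine per-pixel body-part probabilities using neighbor consistency.
import numpy as np

def refine_probabilities(probs, adjacency, weight=0.5):
    """probs: (H, W, P) initial per-pixel body-part probabilities.
    adjacency: (P, P) matrix, 1.0 where two parts are the same or adjacent
    (e.g., head/neck), a smaller value where they are not (e.g., head/foot)."""
    refined = probs.astype(float).copy()
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        # Note: np.roll wraps at the image border; good enough for a sketch.
        shifted = np.roll(probs, shift=(dy, dx), axis=(0, 1))
        neighbor_part = shifted.argmax(axis=-1)                   # (H, W)
        compat = adjacency[:, neighbor_part].transpose(1, 2, 0)   # (H, W, P)
        refined *= 1.0 + weight * (compat - 0.5)
    # Renormalize so each pixel's probabilities again sum to one.
    return refined / refined.sum(axis=-1, keepdims=True)
```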





BRIEF DESCRIPTION OF THE DRAWINGS

The file of this patent or application contains at least one drawing/photograph executed in color. Copies of this patent or patent application publication with color drawing(s)/photograph(s) will be provided by the Office upon request and payment of the necessary fee.


The systems, methods, and computer readable media for body pose estimation in accordance with this specification are further described with reference to the accompanying drawings in which:



FIGS. 1A, 1B, and 1C illustrate an example embodiment of a target recognition, analysis, and tracking system with a user playing a game.



FIG. 2 illustrates an example embodiment of a capture device that may be used in a target recognition, analysis, and tracking system.



FIG. 3 depicts an example embodiment of a depth image.



FIG. 4 illustrates an example embodiment of a computing environment that may be used to interpret one or more poses or gestures in a body pose estimation system.



FIG. 5 illustrates another example embodiment of a computing environment that may be used to interpret one or more poses or gestures in a body pose estimation system.



FIG. 6 depicts a flow diagram of an example method for body pose estimation.



FIG. 7 depicts a flow diagram of an example depth feature test.



FIG. 8 depicts an example embodiment of pixels measured in a depth feature/probability test.



FIG. 9 depicts a flow diagram of an example embodiment of a depth feature/probability test tree.



FIG. 10 depicts an example embodiment of a segmented body used in body pose estimation.



FIG. 11 depicts example embodiments of poses of a user and corresponding segmented images which may be used in a training program to create feature tests.



FIG. 12 depicts an example embodiment of assigning probabilities associated with body parts using multiple feature tests.



FIG. 13 depicts an example embodiment of centroids/joints/nodes of body parts in body pose estimation.





DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

As will be described herein, a tracking and processing system may perform body pose estimation. When a user makes a gesture or pose, the tracking and processing system may receive the gesture or pose and associate one or more commands with the user. In order to determine what response to provide the user of a computing environment, the system may need to be able to determine the body pose of the user. Body poses may also be used to determine skeletal models, determine the location of particular body parts and the like.


In an example embodiment, a tracking and processing system is provided with a capture device, wherein the capture device comprises a depth camera. The depth camera may capture a depth map of an image scene. The computing environment may perform one or more processes on the depth map to assign pixels on the depth map to segments of the user's body. From these assigned body parts, the computing environment may obtain nodes, centroids or joint positions of the body parts, and may provide the nodes, joints or centroids to one or more processes to create a 3-D model of the body pose. In one aspect, the body pose is the three dimensional location of the set of body parts associated with a user. In another aspect, pose includes the three dimensional location of the body part, as well as the direction it is pointing, the rotation of the body segment or joint, and any other aspects of the body part or segment.



FIGS. 1A and 1B illustrate an example embodiment of a configuration of a tracking and processing system 10 utilizing body pose estimation with a user 18 playing a boxing game. In an example embodiment, the tracking and processing system 10 may be used to, among other things, determine body pose, bind, recognize, analyze, track, associate to a human target, provide feedback, interpret poses or gestures, and/or adapt to aspects of the human target such as the user 18.


As shown in FIG. 1A, the tracking and processing system 10 may include a computing environment 12. The computing environment 12 may be a computer, a gaming system or console, or the like. According to an example embodiment, the computing environment 12 may include hardware components and/or software components such that the computing environment 12 may be used to execute applications such as gaming applications, non-gaming applications, or the like.


As shown in FIG. 1A, the tracking and processing system 10 may further include a capture device 20. The capture device 20 may be, for example, a detector that may be used to monitor one or more users, such as the user 18, such that poses performed by the one or more users may be captured, analyzed, processed, and tracked to perform one or more controls or actions within an application, as will be described in more detail below.


According to one embodiment, the tracking and processing system 10 may be connected to an audiovisual device 16 such as a television, a monitor, a high-definition television (HDTV), or the like that may provide game or application visuals and/or audio to the user 18. For example, the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with the feedback about virtual ports and binding, game application, non-game application, or the like. The audiovisual device 16 may receive the audiovisual signals from the computing environment 12 and may then output the game or application visuals and/or audio associated with the audiovisual signals to the user 18. According to one embodiment, the audiovisual device 16 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, a wireless connection or the like.


As shown in FIGS. 1A and 1B, the tracking and processing system 10 may be used to recognize, analyze, process, determine the pose of, and/or track a human target such as the user 18. For example, the user 18 may be tracked using the capture device 20 such that the position, movements and size of user 18 may be interpreted as controls that may be used to affect the application being executed by computer environment 12. Thus, according to one embodiment, the user 18 may move his or her body to control the application.


As shown in FIGS. 1A and 1B, in an example embodiment, the application executing on the computing environment 12 may be a boxing game that the user 18 may be playing. For example, the computing environment 12 may use the audiovisual device 16 to provide a visual representation of a boxing opponent 22 to the user 18. The computing environment 12 may also use the audiovisual device 16 to provide a visual representation of a user avatar 24 that the user 18 may control with his or her movements on a screen 14. For example, as shown in FIG. 1B, the user 18 may throw a punch in physical space to cause the user avatar 24 to throw a punch in game space. Thus, according to an example embodiment, the computer environment 12 and the capture device 20 of the tracking and processing system 10 may be used to recognize and analyze the punch of the user 18 in physical space such that the punch may be interpreted as a game control of the user avatar 24 in game space.


The user 18 may be associated with a virtual port in computing environment 12. Feedback of the state of the virtual port may be given to the user 18 in the form of a sound or display on audiovisual device 16, a display such as an LED or light bulb, or a speaker on the computing environment 12, or any other means of providing feedback to the user. The feedback may be used to inform a user when he is in a capture area of capture device 20, if he is bound to the tracking and processing system 10, what virtual port he is associated with, and when he has control over an avatar such as avatar 24. Gestures and poses by user 18 may change the state of the system, and thus the feedback that the user receives from the system.


Other movements by the user 18 may also be interpreted as other controls or actions, such as controls to bob, weave, shuffle, block, jab, or throw a variety of different power punches. Furthermore, some movements may be interpreted as controls that may correspond to actions other than controlling the user avatar 24. For example, the user may use movements to enter, exit, turn the system on or off, pause, volunteer, switch virtual ports, save a game, select a level, profile or menu, view high scores, communicate with a friend, etc. Additionally, a full range of motion of the user 18 may be available, used, and analyzed in any suitable manner to interact with an application.


In FIG. 1C, the human target such as the user 18 may have an object such as racket 21. In such embodiments, the user of an electronic game may be holding the object such that the motions of the user and the object may be used to adjust and/or control parameters of the game, such as, for example, hitting an onscreen ball 23. The motion of a user holding a racket 21 may be tracked and utilized for controlling an on-screen racket in an electronic sports game. In another example embodiment, the motion of a user holding an object may be tracked and utilized for controlling an on-screen weapon in an electronic combat game. Any other object may also be included, such as one or more gloves, balls, bats, clubs, guitars, microphones, sticks, pets, animals, drums and the like.


According to other example embodiments, the tracking and processing system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the target such as the user 18.


As shown in FIG. 2, according to an example embodiment, the image camera component 25 may include an IR light component 26, a three-dimensional (3-D) camera 27, and an RGB camera 28 that may be used to capture the depth image of a scene. For example, in time-of-flight analysis, the IR light component 26 of the capture device 20 may emit an infrared light onto the scene and may then use sensors (not shown) to detect the backscattered light from the surface of one or more targets and objects in the scene using, for example, the 3-D camera 27 and/or the RGB camera 28. In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the scene. Additionally, in other example embodiments, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location on the targets or objects.
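
The time-of-flight relationships described above reduce to simple formulas. The sketch below, with illustrative constants and function names, shows how a round-trip pulse delay or a measured phase shift could be converted into a physical distance; it is a didactic aid, not the capture device's implementation.

```python
# Hedged sketch: converting time-of-flight measurements into distance.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_pulse_delay(round_trip_seconds: float) -> float:
    """Distance = (speed of light * round-trip time) / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def distance_from_phase_shift(phase_shift_radians: float, modulation_hz: float) -> float:
    """Continuous-wave ToF: distance = c * phase / (4 * pi * f_mod),
    valid within one unambiguous range of the modulation frequency."""
    return SPEED_OF_LIGHT * phase_shift_radians / (4.0 * math.pi * modulation_hz)

# Example: a 20 ns round trip corresponds to roughly 3 meters.
print(distance_from_pulse_delay(20e-9))
```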


According to another example embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.


In another example embodiment, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the scene via, for example, the IR light component 26. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 27 and/or the RGB camera 28 and may then be analyzed to determine a physical distance from the capture device to a particular location on the targets or objects.


According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles, to obtain visual stereo data that may be resolved to generate depth information. Depth may also be determined by capturing images using one or more detectors that may be monochromatic, infrared, RGB or any other type of detector and performing a parallax calculation.
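
For the stereo approach mentioned above, depth follows from the disparity between the two camera views under a pinhole model. The sketch below assumes hypothetical calibration values (focal length in pixels and baseline in meters) purely for illustration.

```python
# Hedged sketch: depth from stereo disparity (parallax) under a pinhole model.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 525.0,
                         baseline_m: float = 0.075) -> float:
    """Depth (meters) = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")  # no parallax -> point is effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# Example: a 30-pixel disparity with the assumed calibration is about 1.3 m away.
print(depth_from_disparity(30.0))
```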


The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that may receive and convert sound into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the tracking and processing system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control applications such as game applications, non-game applications, or the like that may be executed by the computing environment 12.


The capture device 20 may further include a feedback component 31. The feedback component 31 may comprise a light such as an LED or a light bulb, a speaker or the like. The feedback device may perform at least one of changing colors, turning on or off, increasing or decreasing in brightness, and flashing at varying speeds. The feedback component 31 may also comprise a speaker which may provide one or more sounds or noises as a feedback of one or more states. The feedback component may also work in combination with computing environment 12 or processor 32 to provide one or more forms of feedback to a user by means of any other element of the capture device, the tracking and processing system or the like.


In an example embodiment, the capture device 20 may further include a processor 32 that may be in operative communication with the image camera component 25. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions that may include instructions for receiving the depth image, determining whether a suitable target may be included in the depth image, converting the suitable target into a skeletal representation or model of the target, determining the body pose, or any other suitable instruction.


The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera or RGB camera, user profiles or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. As shown in FIG. 2, in one embodiment, the memory component 34 may be a separate component in communication with the image capture component 25 and the processor 32. According to another embodiment, the memory component 34 may be integrated into the processor 32 and/or the image capture component 25.


As shown in FIG. 2, the capture device 20 may be in communication with the computing environment 12 via a communication link 36. The communication link 36 may be a wired connection including, for example, a USB connection, a Firewire connection, an Ethernet cable connection, or the like and/or a wireless connection such as a wireless 802.11b, g, a, or n connection. According to one embodiment, the computing environment 12 may provide a clock to the capture device 20 that may be used to determine when to capture, for example, a scene via the communication link 36.


Additionally, the capture device 20 may provide the depth information and images captured by, for example, the 3-D camera 27 and/or the RGB camera 28, and a skeletal model that may be generated by the capture device 20 or the computing environment to the computing environment 12 via the communication link 36. The computing environment 12 may then use the skeletal model, depth information, and captured images to, for example, create a virtual screen, adapt the user interface and control an application such as a game or word processor. For example, as shown in FIG. 2, the computing environment 12 may include a gestures library 190. The gestures library 190 may include a collection of gesture filters, each comprising information concerning a gesture that may be performed by the skeletal model (as the user moves). The data captured by the cameras 27, 28 and capture device 20 in the form of the skeletal model and movements associated with it may be compared to the gesture filters in the gesture library 190 to identify when a user (as represented by the skeletal model) has performed one or more gestures. Those gestures or poses may be associated with various controls of an application. Thus, the computing environment 12 may use the gestures library 190 to interpret movements of the skeletal model and to control an application based on the movements.



FIG. 3 illustrates an example embodiment of a depth image 60 that may be received by the tracking and processing system and/or the computing environment. According to an example embodiment, the depth image 60 may be an image or frame of a scene captured by, for example, the 3-D camera 27 and/or the RGB camera 28 of the capture device 20 described above with respect to FIG. 2. As shown in FIG. 3, the depth image 60 may include a human target 62 and one or more non-human targets 64 such as a wall, a table, a monitor, or the like in the captured scene. As described above, the depth image 60 may include a plurality of observed pixels where each observed pixel has an observed depth value associated therewith. For example, the depth image 60 may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may represent a depth value such as a length or distance in, for example, centimeters, millimeters, or the like of a target or object in the captured scene from the capture device.


According to one embodiment, a depth image such as depth image 60, an image from an RGB camera such as camera 28, or an image from any other detector may be processed and used to determine the shape and size of a target. In another embodiment, the depth image 60 may be used to determine the body pose of a user. The body may be divided into a series of segments and each pixel of a depth map 60 may be assigned a probability that it is associated with each segment. This information may be provided to one or more processes which may determine the location of nodes, joints, centroids or the like to determine a skeletal model and interpret the motions of a user 62 for pose or gesture based commands.


Referring back to FIG. 2, in one embodiment, upon receiving the depth image, the depth image may be downsampled to a lower processing resolution such that the depth image may be more easily used and/or more quickly processed with less computing overhead. Additionally, one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth image; portions of missing and/or removed depth information may be filled in and/or reconstructed; and/or any other suitable processing may be performed on the received depth information such that the depth information may be used to size a virtual screen on a user as described above.



FIG. 4 illustrates an example embodiment of a computing environment that may be used to interpret one or more gestures in a target recognition, analysis, and tracking system. The computing environment such as the computing environment 12 described above with respect to FIGS. 1A-2 may be a multimedia console 100, such as a gaming console. As shown in FIG. 4, the multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102, a level 2 cache 104, and a flash ROM (Read Only Memory) 106. The level 1 cache 102 and a level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput. The CPU 101 may be provided having more than one core, and thus, additional level 1 and level 2 caches 102 and 104. The flash ROM 106 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered ON.


A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus as well as to the CPU. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory).


The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.


System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).


The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.


The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.


The front panel I/O subassembly 130 may include LEDs, a visual display screen, light bulbs, a speaker or any other means that may provide audio or visual feedback of the state of control of the multimedia control 100 to a user 18. For example, if the system is in a state where no users are detected by capture device 20, such a state may be reflected on front panel I/O subassembly 130. If the state of the system changes, for example, a user becomes bound to the system, the feedback state may be updated on the front panel I/O subassembly to reflect the change in states.


The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.


When the multimedia console 100 is powered ON, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.


The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.


When the multimedia console 100 is powered ON, a set amount of hardware resources are reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.


In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.


With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render the popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.


After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.


When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.


Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of the input stream, without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The cameras 27, 28 and capture device 20 may define additional input devices for the console 100.



FIG. 5 illustrates another example embodiment of a computing environment that may be the computing environment 12 shown in FIGS. 1A-2 used to interpret one or more poses or gestures in a tracking and processing system. The computing system environment of FIG. 5 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 12 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment of FIG. 5. In some embodiments the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term circuitry used in the disclosure can include specialized hardware components configured to perform function(s) by firmware or switches. In other example embodiments the term circuitry can include a general purpose processing unit, memory, etc., configured by software instructions that embody logic operable to perform function(s). In example embodiments where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic and the source code can be compiled into machine readable code that can be processed by the general purpose processing unit. Since one skilled in the art can appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware versus software to effectuate specific functions is a design choice left to an implementer. More specifically, one of skill in the art can appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is one of design choice and left to the implementer.


In FIG. 5, the computing environment comprises a computer 241, which typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 241 and includes both volatile and nonvolatile media, removable and non-removable media. The system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 223 and random access memory (RAM) 260. A basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within computer 241, such as during start-up, is typically stored in ROM 223. RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259. By way of example, and not limitation, FIG. 5 illustrates operating system 225, application programs 226, other program modules 227, and program data 228.


The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 5 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254, and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235.


The drives and their associated computer storage media discussed above and illustrated in FIG. 5 provide storage of computer readable instructions, data structures, program modules and other data for the computer 241. In FIG. 5, for example, hard disk drive 238 is illustrated as storing operating system 258, application programs 257, other program modules 256, and program data 255. Note that these components can either be the same as or different from operating system 225, application programs 226, other program modules 227, and program data 228. Operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and pointing device 252, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). The cameras 27, 28 and capture device 20 may define additional input devices for the console 100. A monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232. In addition to the monitor, computers may also include other peripheral output devices such as speakers 244 and printer 243, which may be connected through an output peripheral interface 233.


The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in FIG. 5. The logical connections depicted in FIG. 5 include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.


When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 5 illustrates remote application programs 248 as residing on memory device 247. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.



FIG. 6 depicts a block diagram 300 whereby body pose estimation may be performed. In one embodiment, at 302, a depth map such as depth map 60 may be received by the tracking and processing system. Probabilities associated with one or more virtual body parts may be assigned to pixels on the depth map at 304. At 306, a centroid, which may represent a node or joint, may be calculated for the set of pixels associated with each virtual body part. Centroids may be representations of joints or nodes of a body, and may be calculated using any mathematical algorithm, including, for example, averaging the coordinates of every pixel in a depth map having a threshold probability that it is associated with a body part, or, as another example, a linear regression technique. At 308, the various nodes, joints or centroids associated with the body parts may be combined into a model, which may be provided to one or more programs in a tracking and processing system. The model may include not only the location in three dimensions of the joints or body parts, but may also include the rotation of a joint or any other information about the pointing of the body part.
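
A minimal sketch of the centroid step at 306, averaging the coordinates of every pixel whose probability for a body part exceeds a threshold, might look like the following. The array shapes and the 0.5 threshold are assumptions made for illustration.

```python
# Hedged sketch: probability-thresholded centroids for each body part.
import numpy as np

def body_part_centroids(xyz, probabilities, threshold=0.5):
    """xyz: (H, W, 3) world coordinates per pixel.
    probabilities: (H, W, num_parts) per-pixel body-part probabilities.
    Returns a (num_parts, 3) array of centroids (NaN where no pixels qualify)."""
    num_parts = probabilities.shape[-1]
    centroids = np.full((num_parts, 3), np.nan)
    for part in range(num_parts):
        mask = probabilities[..., part] > threshold
        if mask.any():
            centroids[part] = xyz[mask].mean(axis=0)
    return centroids

# The resulting centroids can then be connected (e.g., hand -> elbow -> shoulder)
# to form the joint model handed to later processing stages.
```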


Body poses may be estimated for multiple users. In an embodiment, this may be accomplished by assuming a user segmentation. For example, values may be assigned to an image such that a value 0 represents background, value 1 represents user 1, value 2 represents user 2, etc. Given this player segmentation image, it is possible to classify all user 1 pixels and perform centroid finding, and then repeat this process for subsequent users. In another embodiment, background subtraction may be performed and the remaining foreground pixels (belonging to the multiple users) may then be classified. When computing centroids, it may be ensured that each centroid is spatially localized, so that a respective body part is present for each user. The centroids may then be combined into coherent models by, for example, connecting neighboring body parts throughout each user's body.



FIG. 7 depicts a sample flow chart for assigning probabilities associated with virtual body parts to a depth map. In an example embodiment, the process of FIG. 7 may be performed at 304 of FIG. 6. Process 350 may employ a depth map received at 302 to assign probabilities associated with virtual body parts at 304. One or more background depths on a depth map may be established at 352. For example, one background depth may correspond to a wall in the back of a room, while other background depths may correspond to other humans or objects in the room. These background depths may be used later in the flowchart of FIG. 7 to determine if a pixel on the depth map is part of a particular user's body or whether the pixel may be associated with the background.


At 353, a first location may be selected in the depth map. The depth of the first location may be determined at 354. At 356, the depth of the first location may be compared with one or more background depths. If the first location depth is the same as, or within a specified threshold range of, a background depth, then, at 358, the first location is determined to be part of the background and not part of any body parts. If the first location is not at or within a specified threshold range of a background depth, an offset location, referenced with respect to the first location, may be selected at 360. At 362, the depth of the offset location may be determined and a depth test may be performed to determine if the offset location is background. At 364, it is determined whether any additional offset locations are desired.


The determination of whether or not to select additional offset locations, as well as the angle and distance of the additional offset locations from the first location, may be made based in part on the depth of the previous offset location(s) with respect to the first location and/or the background. These determinations may also be made based on additional factors such as the training module described below. In one embodiment, the offsets will scale with depth. For example, if a user is very close to a detector in a capture area, depth may be measured at large offset distances from the first pixel. If the user were to move twice as far from a detector, then the offset distances may decrease by a factor of two. In one embodiment, this scaling causes the depth offset tests to be invariant to the user's distance from the detector. Any number of offset locations may be selected and depth tested, after which a probability that the first location is associated with one or more body parts is calculated at 366. This calculation may be based in part on the depth of the first location and the offset locations with respect to the one or more background depths. This calculation may also be made based on additional factors such as the training module described below.
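
One common way to make offset depth tests scale with depth, consistent with the description above, is to divide the offsets by the depth at the first pixel, so that the same test covers a proportionally smaller image region when the user stands farther away. The sketch below is a hedged illustration of that idea; the feature response and the handling of off-image offsets are assumptions, not the patented implementation.

```python
# Hedged sketch: depth-scaled offset probes and a simple difference feature.
import numpy as np

BACKGROUND_DEPTH = 1e6  # large value returned for probes that fall off the map

def offset_depth(depth_map, x, y, offset, reference_depth):
    """Depth at (x, y) plus offset / reference_depth. Offsets are expressed in
    (pixels x depth units), so dividing by the reference depth yields pixels."""
    dx, dy = offset
    u = int(round(x + dx / reference_depth))
    v = int(round(y + dy / reference_depth))
    h, w = depth_map.shape
    if 0 <= v < h and 0 <= u < w:
        return depth_map[v, u]
    return BACKGROUND_DEPTH  # off-image probes behave like background

def depth_feature(depth_map, x, y, offset_a, offset_b):
    """Difference of two depth probes around (x, y); large magnitudes tend to
    indicate silhouette edges, small ones the interior of the body."""
    d = depth_map[y, x]
    return offset_depth(depth_map, x, y, offset_a, d) - offset_depth(depth_map, x, y, offset_b, d)
```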


In another embodiment, 352 may not be performed. In this embodiment, each pixel in a depth map is examined for depth at 354, and then the method proceeds directly to choosing offset locations at 360. In such an example, every pixel in a depth map may be examined for depth or for the probability that it is associated with one or more body parts and/or background. From the determinations made at the first pixel and the offset locations, probabilities may be associated with one or more pixels.



FIG. 8 depicts an instance of the flow chart referenced in FIG. 7. In the flow chart of FIG. 7, a series of feature tests may be used to determine the probability that a pixel in a depth map is associated with one or more body parts. A first location pixel is selected at 480. A first offset pixel is examined at 482, and a second offset pixel is examined at 484. As more pixels are examined for depth, the probability that a particular pixel is associated with a part of the body may decrease or increase. This probability may be provided to other processes in a tracking and processing system.


In another example depicted by FIG. 8, a first location pixel of a depth map is selected at 480, wherein the depth map has probabilities that each pixel in the depth map is associated with one or more body parts already assigned to each pixel. A second offset pixel is examined for its associated probability at 484. As more pixels are examined for their associated probabilities, a second pass at the probability associated with the first pixel may provide a more accurate determination of the body part associated with the pixel. This probability may be provided to other processes in a tracking and processing system.



FIG. 9 depicts a flow chart of another example implementation of feature testing in body pose estimation. A depth map is received and a first pixel location is selected at 502. This may be the pixel depicted at FIG. 8 as the first location. If the first pixel is at the background depth, then probabilities associated with each body part may be zero. If, however, the first pixel is not at the background depth, an angle and distance to a second pixel may be selected at 504.


In another embodiment, a background depth is not determined; instead, depth tests and the surrounding offset depth tree tests may be performed at each pixel, regardless of its depth.


In another embodiment, the depth map received at 502 already has the probability that each pixel is associated with one or more body parts assigned to each pixel. Accordingly, instead of testing depth at the first pixel and at offset locations, the probabilities may be tested.


A depth/probability test may be performed on the second pixel at 506. If the second pixel fails the depth/probability test (i.e. it is at the background depth/probability, the depth/probability of a second user, not within the range of a user's body or the like) then location F-1 is selected at 510. If, however, the second pixel passes the depth/probability test (i.e. it is within a threshold of the body depth/probability), then location P-1 is selected at 508. Depth/probability tests will then be performed on third pixels at 508 or 510, and based on whether the third pixels pass or fail the depth/probability test, other pixel locations will be selected at one of 512, 514, 516 or 518. While these locations may, in some cases, be the same, they may also vary widely in location based on the results of the depth/probability tests.
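
The traversal described above amounts to walking a decision tree in which each node probes an offset pixel and the pass/fail result selects the next node (e.g., P-1 or F-1), until a leaf holding per-body-part probabilities is reached. The sketch below, reusing the hypothetical depth_feature helper from the earlier sketch, shows one possible node structure; it is illustrative only.

```python
# Hedged sketch: traversing one depth/probability feature test tree for a pixel.

class TreeNode:
    def __init__(self, offset_a=None, offset_b=None, threshold=0.0,
                 pass_child=None, fail_child=None, leaf_probabilities=None):
        self.offset_a = offset_a            # first probe offset (encodes angle/distance)
        self.offset_b = offset_b            # second probe offset
        self.threshold = threshold          # pass/fail cutoff on the feature response
        self.pass_child = pass_child        # next node if the test passes (e.g., P-1)
        self.fail_child = fail_child        # next node if the test fails (e.g., F-1)
        self.leaf_probabilities = leaf_probabilities  # dict: body part -> probability

def classify_pixel(depth_map, x, y, root):
    """Walk the tree from the root until a leaf is reached, choosing the next
    offset test based on whether the current test passes or fails."""
    node = root
    while node.leaf_probabilities is None:
        response = depth_feature(depth_map, x, y, node.offset_a, node.offset_b)
        node = node.pass_child if response < node.threshold else node.fail_child
    return node.leaf_probabilities
```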


In an example embodiment, depth/probability tests on any number of pixels may be performed with reference to a single pixel. For example, 16 tests may be performed, where each depth/probability test is at a different pixel. By performing some quantity of depth/probability tests, the probability that a pixel is associated with each body part may be assigned to each pixel. As another example, only one test may need to be performed on a particular pixel in order to determine the probability that it is associated with one or more body parts.



FIG. 10 depicts an example image that may come from a capture device, such as capture device 20, a graphics package, or other 3-D rendering, along with a segmented body image of the example image. Original image 550 may be a depth map or other image from the capture device. In an example embodiment, the image of a body may be segmented into many parts as in segmented image 552, and each pixel in a depth map may be associated with a probability for each of the segments in FIG. 10. This probability may be determined using the methods, processes and systems described with respect to FIGS. 7, 8 and 9.



FIG. 11 depicts a series of images of poses from one or more users. For each pose, an image that may be received from a capture device such as capture device 20 is shown adjacent to an image of the pose that has been segmented into parts.


In a first embodiment, the tracking and processing system may receive the non-segmented images 602, 606, 610, and 614, and use the processes described at FIGS. 7, 8 and 9 to determine the probability that each pixel in the image is associated with each of the segmented body parts. The purpose of the processes described in FIGS. 7, 8 and 9 may be to segment the body into each of the parts shown at 604, 608, 612 and 616. These segmented parts may be used by one or more computer processes to determine the body pose of the user.


In a second embodiment, these images may be used in a feature test training module to determine the feature test of FIGS. 7, 8, and 9. Recall from FIGS. 7, 8, and 9 that a depth test may be performed on a pixel, and it either passes or fails, and based on the pass or fail, a next location will be selected. In one embodiment, the next location selected is not arbitrary, but is selected based on a training module. A training module may involve inputting a volume of thousands, hundreds of thousands, millions or any number of segmented poses such as those shown in FIG. 11 into a program. The program may perform one or more operations on the volume of poses to determine optimal feature tests for each pass or fail for the full volume, or some selection of poses. This optimized series of feature tests may be known as feature test trees.
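
Although the passage above does not spell out the training algorithm, one plausible and widely used approach for building such feature test trees is to evaluate many candidate offset tests on labeled training pixels (pixels whose body-part segment is known from segmented poses like those in FIG. 11) and keep the candidate that best separates the labels, for example by information gain. The sketch below is a simplified illustration of that idea; every name and the candidate-generation scheme are assumptions.

```python
# Hedged sketch: greedy selection of one feature test by information gain.
import math
import random
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(samples, labels, feature_fn, threshold):
    left = [lab for s, lab in zip(samples, labels) if feature_fn(s) < threshold]
    right = [lab for s, lab in zip(samples, labels) if feature_fn(s) >= threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

def best_candidate(samples, labels, make_feature, num_candidates=100):
    """samples: training pixels; labels: their body-part ids;
    make_feature(offset_a, offset_b): returns a feature function over a sample."""
    best, best_gain = None, -1.0
    for _ in range(num_candidates):
        offsets = [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(2)]
        threshold = random.uniform(-0.5, 0.5)
        gain = information_gain(samples, labels, make_feature(*offsets), threshold)
        if gain > best_gain:
            best_gain, best = gain, (offsets, threshold)
    return best, best_gain
```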


A volume of poses input into a feature test training module may not contain every possible pose by a user. Further, it may increase the efficiency of the program to create several feature test training modules, each of which are based on a separate volume of body poses. Accordingly, the feature tests at each step of a feature test tree may be different and the final probabilities associated with each segment of a body at the conclusion of a test tree may also be different. In one embodiment, several feature test trees are provided for each pixel and the probabilities output from each test tree may be averaged or otherwise combined to provide a segmented image of a body pose.
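
Combining several trees per pixel, as described above, can be as simple as averaging their per-body-part probability outputs. The sketch below assumes the classify_pixel function from the earlier sketch and that every leaf reports probabilities over the same set of body parts.

```python
# Hedged sketch: average the outputs of several feature test trees for one pixel.

def classify_with_forest(depth_map, x, y, trees):
    per_tree = [classify_pixel(depth_map, x, y, root) for root in trees]
    parts = per_tree[0].keys()
    return {part: sum(p[part] for p in per_tree) / len(per_tree) for part in parts}
```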



FIG. 12 depicts an example flow chart to determine body segment probabilities associated with each pixel in human body pose estimation. At 650 a depth map such as the depth map shown in FIG. 3 may be received from a capture device 20. This depth map may be provided to a series of feature test trees at 652. In FIG. 12, three feature test trees, each having been trained on a different volume of body poses, test each pixel of a depth map. The probability that each pixel is associated with each segment of the body is determined at 654 as the soft body parts. In an example embodiment, the process stops here and these probabilities may be used to obtain the joints/nodes/centroids of FIG. 6 at 306.


In another embodiment, at 656, the depth map may again be provided to a series of feature test trees, each of which may have been created using a different volume of body pose images. In FIG. 12, this second series of feature tests contains three trees, each of which may output a probability for each pixel of the depth map associated with each segment of a body. At 658, the probabilities from the second set of feature test trees 656 and the soft body parts from 654 may be combined by averaging or some other method to determine the second pass of the body parts. FIG. 12 shows two sets of three feature test trees; however, the number of feature test trees is not limited to three, nor is the number of passes limited by FIG. 12. There may be any number of feature test trees and any number of passes.


In another embodiment, at 656, the depth map provided to the series of feature test trees may have the probability that each pixel of a depth map is associated with one or more body parts already associated with each pixel. For example, the probability maps determined by the feature test trees at 652 may be provided to the feature test trees at 656. In such a circumstance, instead of depth test training programs and trees, the system instead utilizes probability test training programs and trees. The number of trees and passes is not limited in any way, and the trees may be any combination of depth and probability feature tests.



FIG. 13 depicts a segmented body pose image wherein each segment contains a node/joint/centroid, such as those described at 306 with reference to FIG. 6. These joints/nodes/centroids may be determined by taking the centroid of all of the pixels associated with a body part segment after performing the feature tests of FIGS. 7, 8, 9, and 12. Other methods may also be used to determine the location of the nodes/centroids/joints. For example, a filtering process may remove outlying pixels or the like, after which a process may take place to determine the location of the joints/nodes/centroids.
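
One possible filtering step of the kind mentioned above is to discard pixels that lie far from a body part's median position before taking the centroid, so that a few misclassified pixels do not drag the joint estimate away. The cutoff distance in the sketch below is an illustrative assumption.

```python
# Hedged sketch: outlier-filtered centroid for one body part's pixels.
import numpy as np

def robust_centroid(points_xyz, max_deviation_m=0.25):
    """points_xyz: (N, 3) coordinates of pixels assigned to one body part."""
    median = np.median(points_xyz, axis=0)
    distances = np.linalg.norm(points_xyz - median, axis=1)
    inliers = points_xyz[distances < max_deviation_m]
    if len(inliers) == 0:
        return median
    return inliers.mean(axis=0)
```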


The joints/nodes/centroids of FIG. 13 may be used to construct a skeletal model, or otherwise represent the body pose of a user. This model may be used by the tracking and processing system in any way, including determining the commands of one or more users, identifying one or more users and the like.


It should be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered limiting. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or the like. Likewise, the order of the above-described processes may be changed.


Additionally, the subject matter of the present disclosure includes combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as equivalents thereof.

Claims
  • 1. A method of generating a three dimensional model of at least part of a body, comprising: receiving, at a computing system, a depth image comprising pixels having coordinates with xyz values; determining, at the computing system, using the coordinates of the pixels that a first pixel in the depth image corresponds to the at least part of the body; determining, at the computing system, using the coordinates of the pixels that a second pixel in the depth image corresponds to the at least part of the body based on a position of the second pixel relative to the first pixel; selecting a third pixel among a plurality of pixels in the depth image for determination of whether the third pixel corresponds to the at least part of the body based on determining that the second pixel in the depth image corresponds to the at least part of the body, and based on a position of the third pixel relative to the second pixel; and generating the three dimensional model using the first pixel, the second pixel and the third pixel.
  • 2. The method of claim 1, wherein determining that the first pixel in the depth image corresponds to the at least part of the body comprises determining a first probability that the first pixel in the depth image corresponds to the at least part of the body, and wherein determining that the second pixel in the depth image corresponds to the at least part of the body comprises: determining that the second pixel in the depth image corresponds to the at least part of the body based on the position of the second pixel relative to the first pixel, and the first probability.
  • 3. The method of claim 2, further comprising: determining that the third pixel in the depth image has a zero probability of corresponding to the at least part of the body based on the third pixel having a depth value associated with a background.
  • 4. The method of claim 1, wherein determining that the second pixel in the depth image corresponds to the at least part of the body based on the position of the second pixel relative to the first pixel comprises determining that the second pixel in the depth image corresponds to the at least part of the body using a decision tree.
  • 5. The method of claim 1, wherein determining that the second pixel in the depth image corresponds to the at least part of the body comprises determining that the second pixel in the depth image corresponds to the at least part of the body based on an angle and a distance of the second pixel relative to the first pixel.
  • 6. The method of claim 5, wherein determining that the second pixel in the depth image corresponds to the at least part of the body comprises: determining that the second pixel in the depth image corresponds to the at least part of the body based on the angle and the distance of the second pixel relative to the first pixel, and a depth value of the first pixel.
  • 7. A system for generating a three dimensional model of at least part of a body, comprising: computing memory bearing instructions that cause the system to perform operations comprising: receive a depth image comprising pixels having coordinates with xyz values; determine using the coordinates of the pixels that a first pixel in the depth image corresponds to the at least part of the body; determine using the coordinates of the pixels that a second pixel in the depth image corresponds to the at least part of the body based on a position of the second pixel relative to the first pixel; select a third pixel among a plurality of pixels in the depth image for determination of whether the third pixel corresponds to the at least part of the body based on determining that the second pixel in the depth image corresponds to the at least part of the body, and based on a position of the third pixel relative to the second pixel; and generate the three dimensional model using the first pixel, the second pixel and the third pixel.
  • 8. The system of claim 7, wherein the memory further bears instructions that cause the system at least to: select the second pixel among the plurality of pixels in the depth image for determination of whether the second pixel corresponds to the at least part of the body based on the position of the second pixel relative to the first pixel and a threshold amount.
  • 9. The system of claim 8, wherein the threshold amount would increase where a depth value associated with the first pixel were to decrease.
  • 10. The system of claim 7, wherein the memory further bears instructions that cause the system at least to:
  • 11. The system of claim 10, wherein the memory further bears instructions that cause the system at least to:
  • 12. The system of claim 7, wherein the instructions that cause the system at least to determine that the first pixel in the depth image corresponds to the at least part of the body further cause the system at least to determine a first probability that the first pixel in the depth image corresponds to the at least part of the body, and wherein the instructions that cause the system at least to determine that the second pixel in the depth image corresponds to the at least part of the body further cause the system at least to: determine that the second pixel in the depth image corresponds to the at least part of the body based on the position of the second pixel relative to the first pixel, and the first probability.
  • 13. The system of claim 12, wherein the memory further bears instructions that cause the system at least to: determine that the third pixel in the depth image has a zero probability of corresponding to the at least part of the body based on the third pixel having a depth value associated with a background.
  • 14. A computer-readable storage device that is not a propagating signal comprising computer-executable instructions that cause a computing system to perform operations comprising: receiving a depth image comprising pixels having xyz coordinates; determining using the coordinates of the pixels that a first pixel in the depth image corresponds to at least part of a body; determining using the coordinates of the pixels that a second pixel in the depth image corresponds to the at least part of the body based on a position of the second pixel relative to the first pixel; selecting a third pixel among a plurality of pixels in the depth image for determination of whether the third pixel corresponds to the at least part of the body based on determining that the second pixel in the depth image corresponds to the at least part of the body, and based on a position of the third pixel relative to the second pixel; and generating a three dimensional model of the at least part of the body using the first pixel, the second pixel and the third pixel.
  • 15. The computer-readable storage device of claim 14, wherein determining that the first pixel in the depth image corresponds to the at least part of the body comprises determining a first probability that the first pixel in the depth image corresponds to the at least part of the body, and wherein determining that the second pixel in the depth image corresponds to the at least part of the body comprises: determining that the second pixel in the depth image corresponds to the at least part of the body based on the position of the second pixel relative to the first pixel, and the first probability.
  • 16. The computer-readable storage device of claim 15, further comprising computer-executable instructions that cause the computing system to perform operations comprising: determining that the third pixel in the depth image has a zero probability of corresponding to the at least part of the body based on the third pixel having a depth value associated with a background.
  • 17. The computer-readable storage device of claim 14, further comprising computer-executable instructions that, upon execution on the computing system, cause the computing system to perform operations comprising: selecting the second pixel among the plurality of pixels in the depth image for determination of whether the second pixel corresponds to the at least part of the body based on the position of the second pixel relative to the first pixel and a threshold amount.
  • 18. The computer-readable storage device of claim 17, wherein the threshold amount would increase if a depth value associated with the first pixel were to decrease.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 13/902,506 filed May 24, 2013, which is a continuation of U.S. patent application Ser. No. 12/454,628 filed May 20, 2009, now U.S. Pat. No. 8,503,720, which claims the benefit of U.S. Provisional Application No. 61/174,878, titled “Human Body Pose Estimation” filed May 1, 2009, the contents of each of which are hereby incorporated herein by reference in their entireties.

US Referenced Citations (322)
Number Name Date Kind
4215364 Takanashi et al. Jul 1980 A
4288078 Lugo Sep 1981 A
4627620 Yang Dec 1986 A
4630910 Ross et al. Dec 1986 A
4645458 Williams Feb 1987 A
4695953 Blair et al. Sep 1987 A
4702475 Elstein et al. Oct 1987 A
4711543 Blair et al. Dec 1987 A
4751642 Silva et al. Jun 1988 A
4796997 Svetkoff et al. Jan 1989 A
4809065 Harris et al. Feb 1989 A
4817950 Goo Apr 1989 A
4843568 Krueger et al. Jun 1989 A
4893183 Nayar Jan 1990 A
4901362 Terzian Feb 1990 A
4925189 Braeunig May 1990 A
5101444 Wilson et al. Mar 1992 A
5148154 MacKay et al. Sep 1992 A
5184295 Mann Feb 1993 A
5229754 Aoki et al. Jul 1993 A
5229756 Kosugi et al. Jul 1993 A
5239463 Blair et al. Aug 1993 A
5239464 Blair et al. Aug 1993 A
5288078 Capper et al. Feb 1994 A
5295491 Gevins Mar 1994 A
5320538 Baum Jun 1994 A
5347306 Nitta Sep 1994 A
5385519 Hsu et al. Jan 1995 A
5405152 Katanics et al. Apr 1995 A
5417210 Funda et al. May 1995 A
5423554 Davis Jun 1995 A
5454043 Freeman Sep 1995 A
5469740 French et al. Nov 1995 A
5495576 Ritchey Feb 1996 A
5516105 Eisenbrey et al. May 1996 A
5524637 Erickson et al. Jun 1996 A
5534917 MacDougall Jul 1996 A
5563988 Maes et al. Oct 1996 A
5577981 Jarvik Nov 1996 A
5580249 Jacobsen et al. Dec 1996 A
5594469 Freeman et al. Jan 1997 A
5597309 Riess Jan 1997 A
5616078 Oh Apr 1997 A
5617312 Iura et al. Apr 1997 A
5638300 Johnson Jun 1997 A
5641288 Zaenglein Jun 1997 A
5644689 Ban et al. Jul 1997 A
5682196 Freeman Oct 1997 A
5682229 Wangler Oct 1997 A
5690582 Ulrich et al. Nov 1997 A
5696591 Bilhorn et al. Dec 1997 A
5703367 Hashimoto et al. Dec 1997 A
5704837 Iwasaki et al. Jan 1998 A
5715834 Bergamasco et al. Feb 1998 A
5852672 Lu Dec 1998 A
5875108 Hoffberg et al. Feb 1999 A
5877803 Wee et al. Mar 1999 A
5913727 Ahdoot Jun 1999 A
5926568 Chaney et al. Jul 1999 A
5930392 Ho Jul 1999 A
5933125 Fernie Aug 1999 A
5980256 Carmein Nov 1999 A
5989157 Walton Nov 1999 A
5995649 Marugame Nov 1999 A
6005548 Latypov et al. Dec 1999 A
6009210 Kang Dec 1999 A
6009359 El-Hakim et al. Dec 1999 A
6054991 Crane et al. Apr 2000 A
6057909 Yahav et al. May 2000 A
6066075 Poulton May 2000 A
6072494 Nguyen Jun 2000 A
6073489 French et al. Jun 2000 A
6077201 Cheng et al. Jun 2000 A
6098458 French et al. Aug 2000 A
6100517 Yahav et al. Aug 2000 A
6100896 Strohecker et al. Aug 2000 A
6101289 Kellner Aug 2000 A
6128003 Smith et al. Oct 2000 A
6130677 Kunz Oct 2000 A
6141463 Covell et al. Oct 2000 A
6147678 Kumar et al. Nov 2000 A
6147764 Handa Nov 2000 A
6152856 Studor et al. Nov 2000 A
6159100 Smith Dec 2000 A
6173066 Peurach et al. Jan 2001 B1
6181343 Lyons Jan 2001 B1
6188777 Darrell et al. Feb 2001 B1
6215890 Matsuo et al. Apr 2001 B1
6215898 Woodfill et al. Apr 2001 B1
6226396 Marugame May 2001 B1
6229913 Nayar et al. May 2001 B1
6256033 Nguyen Jul 2001 B1
6256400 Takata et al. Jul 2001 B1
6283860 Lyons et al. Sep 2001 B1
6289112 Jain et al. Sep 2001 B1
6299308 Voronka et al. Oct 2001 B1
6308565 French et al. Oct 2001 B1
6316934 Amorai-Moriya et al. Nov 2001 B1
6363160 Bradski et al. Mar 2002 B1
6384819 Hunter May 2002 B1
6411744 Edwards Jun 2002 B1
6430997 French et al. Aug 2002 B1
6476834 Doval et al. Nov 2002 B1
6496598 Harman Dec 2002 B1
6498628 Iwamura Dec 2002 B2
6502515 Burckhardt et al. Jan 2003 B2
6503195 Keller et al. Jan 2003 B1
6504569 Jasinschi et al. Jan 2003 B1
6512838 Rafii et al. Jan 2003 B1
6539931 Trajkovic et al. Apr 2003 B2
6570555 Prevost et al. May 2003 B1
6633294 Rosenthal et al. Oct 2003 B1
6640202 Dietz et al. Oct 2003 B1
6661918 Gordon et al. Dec 2003 B1
6671049 Silver Dec 2003 B1
6674877 Jojic Jan 2004 B1
6681031 Cohen et al. Jan 2004 B2
6714665 Hanna et al. Mar 2004 B1
6731799 Sun et al. May 2004 B1
6738066 Nguyen May 2004 B1
6741756 Toyama et al. May 2004 B1
6765726 French et al. Jul 2004 B2
6771277 Ohba Aug 2004 B2
6781618 Beardsley Aug 2004 B2
6788809 Grzeszczuk et al. Sep 2004 B1
6801637 Voronka et al. Oct 2004 B2
6873723 Aucsmith et al. Mar 2005 B1
6876496 French et al. Apr 2005 B2
6879709 Tian Apr 2005 B2
6937742 Roberts et al. Aug 2005 B2
6940538 Rafey et al. Sep 2005 B2
6950534 Cohen et al. Sep 2005 B2
6963338 Bachelder et al. Nov 2005 B1
7003134 Covell et al. Feb 2006 B1
7006236 Tomasi et al. Feb 2006 B2
7036094 Cohen et al. Apr 2006 B1
7038855 French et al. May 2006 B2
7039676 Day et al. May 2006 B1
7042440 Pryor et al. May 2006 B2
7050177 Tomasi et al. May 2006 B2
7050606 Paul et al. May 2006 B2
7058204 Hildreth et al. Jun 2006 B2
7060957 Lange et al. Jun 2006 B2
7113918 Ahmad et al. Sep 2006 B1
7121946 Paul et al. Oct 2006 B2
7135992 Karlsson et al. Nov 2006 B2
7151530 Roeber et al. Dec 2006 B2
7162075 Littlefield et al. Jan 2007 B2
7167578 Blake et al. Jan 2007 B2
7170492 Bell Jan 2007 B2
7184047 Crampton Feb 2007 B1
7184048 Hunter Feb 2007 B2
7202898 Braun et al. Apr 2007 B1
7222078 Abelow May 2007 B2
7224384 Iddan et al. May 2007 B1
7227526 Hildreth et al. Jun 2007 B2
7259747 Bell Aug 2007 B2
7293356 Sohn et al. Nov 2007 B2
7308112 Fujimura et al. Dec 2007 B2
7310431 Gokturk et al. Dec 2007 B2
7317836 Fujimura et al. Jan 2008 B2
7340077 Gokturk et al. Mar 2008 B2
7348963 Bell Mar 2008 B2
7359121 French et al. Apr 2008 B2
7366325 Fujimura et al. Apr 2008 B2
7367887 Watabe et al. May 2008 B2
7379563 Shamaie May 2008 B2
7379566 Hildreth May 2008 B2
7389591 Jaiswal et al. Jun 2008 B2
7412077 Li et al. Aug 2008 B2
7421093 Hildreth et al. Sep 2008 B2
7430312 Gu Sep 2008 B2
7436496 Kawahito Oct 2008 B2
7450736 Yang et al. Nov 2008 B2
7452275 Kuraishi Nov 2008 B2
7460690 Cohen et al. Dec 2008 B2
7489812 Fox et al. Feb 2009 B2
7536032 Bell May 2009 B2
7555142 Hildreth et al. Jun 2009 B2
7560701 Oggier et al. Jul 2009 B2
7570805 Gu Aug 2009 B2
7574020 Shamaie Aug 2009 B2
7576727 Bell Aug 2009 B2
7590262 Fujimura et al. Sep 2009 B2
7593552 Higaki et al. Sep 2009 B2
7598942 Underkoffler et al. Oct 2009 B2
7607509 Schmiz et al. Oct 2009 B2
7620202 Fujimura et al. Nov 2009 B2
7627447 Marsh et al. Dec 2009 B2
7668340 Cohen et al. Feb 2010 B2
7680298 Roberts et al. Mar 2010 B2
7683954 Ichikawa et al. Mar 2010 B2
7684592 Paul et al. Mar 2010 B2
7697748 Dimsdale et al. Apr 2010 B2
7701439 Hillis et al. Apr 2010 B2
7702130 Im et al. Apr 2010 B2
7704135 Harrison, Jr. Apr 2010 B2
7710391 Bell et al. May 2010 B2
7729530 Antonov et al. Jun 2010 B2
7746345 Hunter Jun 2010 B2
7760182 Ahmad et al. Jul 2010 B2
7809167 Bell Oct 2010 B2
7834846 Bell Nov 2010 B1
7852262 Namineni et al. Dec 2010 B2
7860301 Se et al. Dec 2010 B2
RE42256 Edwards Mar 2011 E
7898522 Hildreth et al. Mar 2011 B2
7925081 Gupta et al. Apr 2011 B2
7974443 Kipman et al. Jul 2011 B2
8009880 Zhang et al. Aug 2011 B2
8031909 Se et al. Oct 2011 B2
8035612 Bell et al. Oct 2011 B2
8035614 Bell et al. Oct 2011 B2
8035624 Bell et al. Oct 2011 B2
8072470 Marks Dec 2011 B2
8103109 Winn et al. Jan 2012 B2
8144931 Hartman et al. Mar 2012 B1
8147066 Nozaki et al. Apr 2012 B2
8154590 Kressel et al. Apr 2012 B2
8290249 Mathe et al. Oct 2012 B2
8503720 Shotton et al. Aug 2013 B2
9165199 Zhu Oct 2015 B2
9247238 Izadi et al. Jan 2016 B2
9940553 Shotton et al. Apr 2018 B2
20020069013 Navab et al. Jun 2002 A1
20030067461 Fletcher et al. Apr 2003 A1
20040075738 Burke et al. Apr 2004 A1
20040104935 Williamson et al. Jun 2004 A1
20040145722 Uomori et al. Jul 2004 A1
20040167667 Goncalves et al. Aug 2004 A1
20040179728 Littlefield et al. Sep 2004 A1
20040207597 Marks Oct 2004 A1
20040233287 Schnell Nov 2004 A1
20050059488 Larsen et al. Mar 2005 A1
20050078178 Brown et al. Apr 2005 A1
20050082480 Wagner et al. Apr 2005 A1
20050238200 Gupta et al. Oct 2005 A1
20060188144 Sasaki et al. Aug 2006 A1
20060221250 Rossbach et al. Oct 2006 A1
20060239558 Rafii et al. Oct 2006 A1
20070013718 Ohba Jan 2007 A1
20070031001 Hamanaka Feb 2007 A1
20070052807 Zhou et al. Mar 2007 A1
20070060336 Marks et al. Mar 2007 A1
20070098222 Porter et al. May 2007 A1
20070116356 Gong et al. May 2007 A1
20070156286 Yamauchi Jul 2007 A1
20070195173 Nozaki et al. Aug 2007 A1
20070216894 Garcia et al. Sep 2007 A1
20070217676 Grauman et al. Sep 2007 A1
20070229498 Matusik et al. Oct 2007 A1
20070260984 Marks et al. Nov 2007 A1
20070279485 Ohba et al. Dec 2007 A1
20070283296 Nilsson Dec 2007 A1
20070298882 Marks et al. Dec 2007 A1
20080001951 Marks et al. Jan 2008 A1
20080026838 Dunstan et al. Jan 2008 A1
20080060854 Perlin Mar 2008 A1
20080062257 Corson Mar 2008 A1
20080100620 Nagai et al. May 2008 A1
20080126937 Pachet May 2008 A1
20080134102 Movold et al. Jun 2008 A1
20080137101 Spence et al. Jun 2008 A1
20080137956 Yang Jun 2008 A1
20080152191 Fujimura et al. Jun 2008 A1
20080152218 Okada Jun 2008 A1
20080215972 Zalewski et al. Sep 2008 A1
20080215973 Zalewski et al. Sep 2008 A1
20080304707 Oi et al. Dec 2008 A1
20080310757 Wolberg et al. Dec 2008 A1
20090033655 Boca et al. Feb 2009 A1
20090034622 Huchet et al. Feb 2009 A1
20090074238 Pfister et al. Mar 2009 A1
20090141933 Wagg Jun 2009 A1
20090167679 Klier et al. Jul 2009 A1
20090221368 Yen et al. Sep 2009 A1
20090231425 Zalewski Sep 2009 A1
20090252423 Zhu Oct 2009 A1
20100080415 Qureshi et al. Apr 2010 A1
20100085352 Zhou et al. Apr 2010 A1
20100085353 Zhou et al. Apr 2010 A1
20100093435 Glaser et al. Apr 2010 A1
20100094460 Choi et al. Apr 2010 A1
20100103196 Kumar et al. Apr 2010 A1
20100111370 Black et al. May 2010 A1
20100197393 Geiss Aug 2010 A1
20100201808 Hsu Aug 2010 A1
20100278384 Shotton et al. Nov 2010 A1
20100281432 Geisner et al. Nov 2010 A1
20100281439 Markovic et al. Nov 2010 A1
20100295783 El dokor et al. Nov 2010 A1
20100296724 Chang et al. Nov 2010 A1
20100302247 Perez et al. Dec 2010 A1
20100302395 Mathe et al. Dec 2010 A1
20110210915 Shotton et al. Sep 2011 A1
20110243386 Sofka et al. Oct 2011 A1
20110267344 Germann et al. Nov 2011 A1
20110293180 Criminisi et al. Dec 2011 A1
20120056982 Katz et al. Mar 2012 A1
20120075343 Chen et al. Mar 2012 A1
20120120199 Ben Himane May 2012 A1
20120147149 Liu et al. Jun 2012 A1
20120147152 Vogiatis et al. Jun 2012 A1
20120148162 Zhang et al. Jun 2012 A1
20120163656 Wang et al. Jun 2012 A1
20120194516 Newcombe et al. Aug 2012 A1
20120194517 Izadi et al. Aug 2012 A1
20120194644 Newcombe et al. Aug 2012 A1
20120194650 Izadi et al. Aug 2012 A1
20120195471 Newcombe et al. Aug 2012 A1
20120196679 Newcombe et al. Aug 2012 A1
20120212509 Benko et al. Aug 2012 A1
20120239174 Shotton et al. Sep 2012 A1
20120268940 Sahlin et al. Oct 2012 A1
20130051626 Abadpour et al. Feb 2013 A1
20130077059 Marti et al. Mar 2013 A1
20130251246 Tang et al. Sep 2013 A1
20130265502 Huebner Oct 2013 A1
20130266182 Shotton et al. Oct 2013 A1
20140079314 Yakubovich et al. Mar 2014 A1
20140241617 Shotton et al. Aug 2014 A1
20150029222 Hofmann Jan 2015 A1
Foreign Referenced Citations (29)
Number Date Country
1523352 Aug 2004 CN
101257641 Sep 2008 CN
201254344 Jun 2009 CN
101254344 Jun 2010 CN
101872491 Oct 2010 CN
102184558 Sep 2011 CN
102236912 Nov 2011 CN
102622762 Aug 2012 CN
102622776 Aug 2012 CN
102681661 Sep 2012 CN
0583061 Feb 1994 EP
2411532 Aug 2005 GB
8044490 Feb 1996 JP
2005210722 Aug 2005 JP
WO 1993010708 Jun 1993 WO
WO 1997017598 May 1997 WO
WO 1999015863 Apr 1999 WO
WO 1999044698 Sep 1999 WO
WO 2001059975 Jan 2002 WO
WO 2002082249 Oct 2002 WO
WO 2003001722 Mar 2003 WO
WO 2003046706 Jun 2003 WO
WO 2003073359 Nov 2003 WO
WO 2003054683 Dec 2003 WO
WO 2003071410 Mar 2004 WO
2007132451 Nov 2007 WO
WO 2009059065 May 2009 WO
2010095080 Aug 2010 WO
2010140613 Dec 2010 WO
Non-Patent Literature Citations (250)
Entry
“Virtual High Anxiety”, Tech Update, Aug. 1995, pp. 22.
Aggarwal et al., “Human Motion Analysis: A Review”, IEEE Nonrigid and Articulated Motion Workshop, 1997, University of Texas at Austin, Austin, TX.
Azarbayejani et al., “Visually Controlled Graphics”, Jun. 1993, vol. 15, No. 6, IEEE Transactions on Pattern Analysis and Machine Intelligence.
Breen et al., “Interactive Occlusion and Collusion of Real and Virtual Objects in Augmented Reality”, Technical Report ECRC-95-02, 1995, European Computer-Industry Research Center GmbH, Munich, Germany.
Brogan et al., “Dynamically Simulated Characters in Virtual Environments”, Sep./Oct. 1998, pp. 2-13, vol. 18, Issue 5, IEEE Computer Graphics and Applications.
Fisher et al., “Virtual Environment Display System”, ACM Workshop on Interactive 3D Graphics, Oct. 1986, Chapel Hill, NC.
Freeman et al., “Television Control by Hand Gestures”, Dec. 1994, Mitsubishi Electric Research Laboratories, TR94-24, Cambridge, MA.
Granieri et al., “Simulating Humans in VR”, The British Computer Society, Oct. 1994, Academic Press.
Hasegawa et al., “Human-Scale Haptic Interaction with a Reactive Virtual Human in a Real-Time Physics Simulator”, Jul. 2006, vol. 4, No. 3, Article 6C, ACM Computers in Entertainment, New York, NY.
He, “Generation of Human Body Models”, Apr. 2005, University of Auckland, New Zealand.
Hongo et al., “Focus of Attention for Face and Hand Gesture Recognition Using Multiple Cameras”, Mar. 2000, pp. 156-161, 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France.
Isard et al., “Condensation—Conditional Density Propagation for Visual Tracking”, 1998, pp. 5-28, International Journal of Computer Vision 29(1), Netherlands.
Kanade et al., “A Stereo Machine for Video-rate Dense Depth Mapping and Its New Applications”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1996, pp. 196-202,The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.
Kohler, “Special Topics of Gesture Recognition Applied in Intelligent Home Environments”, In Proceedings of the Gesture Workshop, 1998, pp. 285-296, Germany.
Kohler, “Technical Details and Ergonomical Aspects of Gesture Recognition applied in Intelligent Home Environments”, 1997, Germany.
Kohler, “Vision Based Remote Control in Intelligent Home Environments”, University of Erlangen-Nuremberg/Germany, 1996, pp. 147-154, Germany.
Livingston, “Vision-based Tracking with Dynamic Structured Light for Video See-through Augmented Reality”, 1998, University of North Carolina at Chapel Hill, North Carolina, USA.
Miyagawa et al., “CCD-Based Range Finding Sensor”, Oct. 1997, pp. 1648-1652, vol. 44 No. 10, IEEE Transactions on Electron Devices.
Pavlovic et al., “Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review”, Jul. 1997, pp. 677-695, vol. 19, No. 7, IEEE Transactions on Pattern Analysis and Machine Intelligence.
Rosenhahn et al., “Automatic Human Model Generation”, 2005, pp. 41-48, University of Auckland (CITR), New Zealand.
Shao et al., “An Open System Architecture for a Multimedia and Multimodal User Interface”, Aug. 24, 1998, Japanese Society for Rehabilitation of Persons with Disabilities (JSRPD), Japan.
Sheridan et al., “Virtual Reality Check”, Technology Review, Oct. 1993, pp. 22-28, vol. 96, No. 7.
Stevens, “Flights into Virtual Reality Treating Real World Disorders”, The Washington Post, Mar. 27, 1995, Science Psychology, 2 pages.
Wren et al., “Pfinder: Real-Time Tracking of the Human Body”, MIT Media Laboratory Perceptual Computing Section Technical Report No. 353, Jul. 1997, vol. 19, No. 7, pp. 780-785, IEEE Transactions on Pattern Analysis and Machine Intelligence, Cambridge, MA.
Zhao, “Dressed Human Modeling, Detection, and Parts Localization”, 2001, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.
Qian et al., “A Gesture-Driven Multimodal Interactive Dance System”, Jun. 2004, pp. 1579-1582, IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan.
Thayananthan et al., “Pose Estimation and Tracking Using Multivariate Regression,” Elsevier, http://svr-www.eng.cam.ac.uk/.about.bdrs2/papers/thayananthan.sub.--prl08-.pdf, Oct. 8, 2007, pp. 1-15.
Laxton, B., “Monocular Human Pose Estimation,” University of California, San Diego, http://vision.ucsd.edu/.about.blaxton/pagePapers/laxton.sub.--researchExa- m2007.pdf, downloaded Jun. 26, 2009, 16 pages.
Navaratnam et al., “Hierarchical Part-Based Human Body Pose Estimation,” http://mi.eng.cam.ac.uk/reports/svr-ftp/navaratnam.sub.--hierarchical.pdf- , downloaded Jun. 1, 2009, pp. 1-10.
Kanaujia et al., “Semi-Supervised Hierarchical Models for 3D Human Pose Reconstruction,” Rutgers University and TTI-Chicago, http://www.cs.toronto.edu/.about.crismin/PAPERS/Smi-cvpr07.pdf, downloaded Jun. 1, 2009, pp. 1-8.
Zhong, Y., “A Weighting Scheme for Content-Based Image Retrieval,” University of Wollongong, http://www.library.uow.edu.au/adt-NWU/uploads/approved/adt-NWU20080116.09- 1123/public/01Front.pdf, 2007, 13 pages.
Agarwal et al., “Learning Methods for Recovering 3D Human Pose from Monocular Images,” Technical Report 5333, INRIA Rhone-Alpes, Oct. 2004, pp. 1-25.
Shivappa et al., “Person Tracking with Audio-Visual Cues Using Iterative Decoding Framework”, IEEE Fifth International Conference on Advanced Video and Signal Based Surveillance, AVSS '08, Santa Fe, NM, Sep. 1-3, 2008, 260-267.
“International Search Report & Written Opinion for PCT Patent Application No. PCT/US2014/016749”, dated May 12, 2014, Filed Date: Feb. 18, 2014, 8 Pages.
Daniel, et al., “Pose Tracking from Natural Features on Mobile Phones”, in IEEE/ACM International Symposium on Mixed and Augmented Reality, Sep. 15, 2008, pp. 125-134.
Breiman, Leo, “Random Forests”, In Machine Learning, vol. 45, Issue 1, Oct. 2001, pp. 5-32.
Veas, et al., “Creating Meaningful Environment Models for Augmented Reality”, in IEEE Virtual Reality Conference, Mar. 8, 2008, pp. 295-296.
“Final Office Action Issued in U.S. Appl. No. 12/454,628”, dated Oct. 5, 2012, 9 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 12/454,628”, dated May 4, 2012, 9 Pages.
“Notice of Allowance Issued in U.S. Appl. No. 12/454,628”, dated Jan. 24, 2013, 8 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 13/902,506”, dated Mar. 13, 2015, 6 Pages.
“Notice of Allowance Issued in U.S. Appl. No. 13/902,506”, dated Sep. 23, 2015, 8 Pages.
Wren, et al., “Pfinder: Real-Time Tracking of the Human Body”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, Issue 7, Jul. 1997, 6 Pages.
Wu, et al., “3D Model Matching with Viewpoint-Invariant Patches (VIP)”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23, 2008, 8 Pages.
Wuhrer, et al., “Human Shape Correspondence with Automatically Predicted Landmarks”, In Journal of Machine Vision and Applications, vol. 22, Aug. 6, 2011, 9 Pages.
Wurm, et al., “OctoMap: A Probabilistic, Flexible, and Compact 3D Map Representation for Robotic Systems”, In Proceedings of Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation (ICRA), May 2010, 8 Pages.
Yu, et al., “Monocular Video Foreground/Background Segmentation by Tracking Spatial-Color Gaussian Mixture Models”, In Proceedings of Workshop on Motion and Video Computing, Feb. 2007, 8 Pages.
Zach, et al., “A Globally Optimal Algorithm for Robust TV-L1 Range Image Integration”, In IEEE 11th International Conference on Computer Vision, Oct. 2007, 2007, 8 Pages.
Zhao, Liang, “Dressed Human Modeling, Detection, and Parts Localization”, The Robotics Institute, Carnegie Mellon University, Jul. 2001, 121 Pages.
Zhou, et al., “Data-Parallel Octrees for Surface Reconstruction”, In Proceedings of IEEE Transactions on Visualization and Computer Graphics, vol. 17, Issue 5, May 27, 2011,13 Pages.
Bacon, Pierre-Luc, “Continuous Head Pose Estimation using Random Regression Forests”, Retrieved From: http://pierrelucbacon.com/assets/papers/rrfpose.pdf, Feb. 9, 2013, 6 Pages.
Baatz, et al., “Leveraging 3D City Models for Rotation Invariant Place-of-Interest Recognition”, In International Journal of Computer Vision, vol. 96, Issue 3, May 27, 2011, 20 Pages.
Baak, et al., “A Data-Driven Approach for Real-Time Full Body Pose Reconstruction from a Depth Camera”, In Proceedings with IEEE International Conference on Computer Vision, Nov. 13, 2011, 8 Pages.
Azarbayejani, et al., “Visually Controlled Graphics”, In Proceedings of the IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, Issue 6, Jun. 1, 1993, 4 Pages.
Anderson, et al., “Augmenting Depth Camera Output Using Photometric Stereo”, In Conference on Machine Vision Applications, Jun. 13, 2011, 4 Pages.
Amit, et al., “Shape Quantization and Recognition with Randomized Trees”, In Journal of Neural Computation, vol. 9, Issue 7, Oct. 1, 1997, 56 Pages.
Aggarwal, et al., “Human Motion Analysis: A Review”, In Proceedings of the IEEE Nonrigid and Articulated Motion Workshop, University of Texas at Austin, Austin, TX., Jun. 16, 1997, 13 Pages.
Agarwal, et al., “3D Human Pose from Silhouettes by Relevance Vector Regression”, In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, Jun. 27, 2004, 7 Pages.
“First Office Action and Search Report Issued in Chinese Patent Application No. 201480010236.1”, dated Jan. 19, 2018, 15 Pages.
“Third Office Action Issued in Chinese Patent Application No. 201380068406.7”, dated Feb. 27, 2018, 7 Pages.
“Second Office Action Issued in Chinese Patent Application No. 201380068406.7”, dated Sep. 20, 2017, 7 Pages.
“Third Office Action Issued in Chinese Patent Application No. 201210431972.7”, dated Nov. 26, 2015, 8 Pages.
“Channel Access Method”, Retrieved From: http://en.wikipedia.org/wiki/Mlultiple_access_protocol#Circuit_mode_and_channelization_methods., Oct. 26, 2011, 7 Pages.
“From Structure-from-Motion Point Clouds to Fast Location Recognition”, In Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, Jun. 20, 2009, 8 Pages.
“Kabsch algorithm”, Retrieved from:en.wikipedia.org/wiki/kabsch_algorithm, May 16, 2013, 3 Pages.
“MapReduce”, Retrieved from :http://web.archive.org/web/20120111070052/http://en.wikipedia.org/wiki/MapReduce, Jan. 11, 2012 , 5 Pages.
“Signed distance function”, Retrieved from :http://web.archive.org/web/20120120095205/http://en.wikipedia.org/wiki/Signed_distance_function, Jan. 20, 2012 , 1 Page.
“Simulation and Training”, In Journal of Division Interactive, 1994, 6 Pages.
“Virtual High Anxiety”, Retrieved from https://books.google.co.in/booksid=N2YEAAAAMBAJ&pg=PA22&lpg=PA22&dq=Virtual+High+Anxiety,+TECH+update,+August+1995&source=bl&ots=uJDb9jVCke&sig=9EZq7VIJ_mQ6kvGbPUnlyIsOHQ&hl=en&sa=X&ved=0ahUKEwiuio63pq3J AhWGco4KHXQxDe4Q6AEIHjAB#v=onepage, Aug. 1995, 2 Pages.
“Non-Final Office Action Issued in U.S. Appl. No. 12/972,837”, dated Jun. 26, 2013, 10 Pages.
“Final Office Action Issued in U.S. Appl. No. 13/017,518”, dated May 20, 2014, 14 Pages.
“Non-Final Office Action Issued in U.S. Appl. No. 13/017,518”, dated Sep. 25, 2013, 11 Pages.
“Non-Final Office Action Issued in U.S. Appl. No. 13/017,518”, dated Mar. 5, 2015, 19 Pages.
“Final Office Action Issued in U.S. Appl. No. 13/040,205”, dated Mar. 1, 2013, 7 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 13/040,205”, dated Aug. 21, 2012, 7 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 13/300,542”, dated Sep. 4, 2013, 10 Pages.
“Final Office Action Issued in U.S Appl. No. 13/729,324”, dated Nov. 13, 2015, 16 Pages.
“Final Office Action Issued in U.S Appl. No. 13/729,324”, dated Nov. 2, 2016, 18 Pages.
“Non-Final Office Action Issued in U.S Appl. No. 13/729,324”, dated May 10, 2016, 16 Pages.
“Non-Final Office Action Issued in U.S Appl. No. 13/729,324”, dated Jun. 2, 2015, 15 Pages.
“Non-Final Office Action Issued in U.S Appl. No. 13/729,324”, dated Apr. 6, 2017, 5 Pages.
“Final Office Action Issued in U.S. Appl. No. 13/749,497”, dated Nov. 10, 2014, 14 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 13/749,497”, dated Jun. 10, 2014, 15 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 13/749,497”, dated Mar. 5, 2015, 14 Pages.
“Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated Dec. 15, 2016, 18 Pages.
“Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated Mar. 20, 2015, 16 Pages.
“Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated May 3, 2016, 18 Pages.
“Non Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated May 19, 2017, 20 Pages.
“Non-Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated Nov. 6, 2014, 12 Pages.
“Non-Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated Oct 2, 2015, 15 Pages.
“Non-Final Office Action Issued in U.S. Appl. No. 13/774,145”, dated Aug. 24, 2016, 16 Pages.
“Office Action Issued in European Patent Application No. 14709030.2”, dated Feb. 24, 2017, 6 Pages.
“Office Action Issued in European Patent Application No. 14709030.2”, dated Sep. 29, 2017, 5 Pages.
“First Office Action Issued in Chinese Patent Application No. 201110427844.0”, dated Mar. 4, 2014, 10 Pages.
“Second Office Action Issued in Chinese Patent Application No. 201110427844.0”, dated Nov. 4, 2014, 6 Pages.
“First Office Action and Search Report Issued in Chinese Patent Application No. 201210021180.2”, dated Dec. 4, 2013, 10 Pages.
“First Office Action and Search Report Issued in Chinese Patent Application No. 201210021241.5”, dated Dec. 24, 2013, 17 Pages.
“Second Office Action Issued in Chinese Patent Application No. 201210021241.5”, dated Aug. 7, 2014, 14 Pages.
“First Office Action and Search Report Issued in Chinese Patent Application No. 201210431972.7”, dated Dec. 1, 2014, 13 Pages.
“Second Office Action Issued in Chinese Patent Application No. 201210431972.7”, dated Jul. 7, 2015, 7 Pages.
Ballan, “Marker-less motion capture of skinned models in a four camera set-up using optical flow and silhouettes”, 3DPVT, Atlanta, GA, USA, 2008, 8 Pages.
Baltzakis, et al., “Tracking of human hands and faces through probabilistic fusion of multiple visual cues”, In Proceedings of 6th International Conference on Computer Vision Systems(ICVS), May 12, 2008, 10 Pages.
Benko, et al., “Depth Touch: Using Depth-Sensing Camera to Enable Freehand Interactions On and Above the Interactive Surface”, In Proceedings of IEEE Tabletops and Interactive Surfaces, Oct. 1, 2008, 1 page.
Blais, et al., “Registering Multiview Range Data to Create 3D Computer Objects”, In Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, Issue 8, Aug. 1, 1995, 5 Pages.
Boehnke, Kay, “Fast Object Localization with Real Time 3D Laser Range Sensor Simulation”, In Book World Scientific and Engineering Academy and Society (WSEAS) Transactions on Electronics, Issue 3, vol. 5, Mar. 2008, 10 Pages.
Bolitho, et al., “Parallel Poisson Surface Reconstruction”, In Proceeding of the 5th International Symposium on Advances in Visual Computing (ISVC): Part I, Nov. 26, 2009, 12 Pages.
Bolitho, Matthew Grant, “The Reconstruction of Large Three-dimensional Meshes”, In Dissertation Submitted To the Johns Hopkins University in Conformity With The Requirements for the Degree of Doctor of Philosophy, Mar. 2010, 171 Pages.
Botterill, et al., “Bag-of-Words-driven Single Camera SLAM”, In Journal of Field Robotics, vol. 28, Issue 2, Oct. 22, 2010, 28 Pages.
Breen, et al.,“Interactive Occlusion and Collusion of Real and Virtual Objects in Augmented Reality”, In Proceedings of Technical Report ECRC-95-02, 1995, European Computer-Industry Research Center GmbH, Munich, Germany, 1995, 22 Pages.
Brogan, et al., “Dynamically Simulated Characters in Virtual Environments”, In Proceedings of the IEEE Computer Graphics and Applications, vol. 18, Issue 5, Sep. 1998, 12 Pages.
Broll, et al., “Toward Next-Gen Mobile AR Games”, In Proceedings of IEEE Computer Graphics and Applications, vol. 28, Issue 4, Jul. 9, 2008, 9 Pages.
Calonder, et al., “Brief: Binary Robust Independent Elementary Features”, In Proceedings of the 11th European Conference on Computer Vision: Part IV, Sep. 5, 2010, 14 Pages.
Campbell, et al., “Automatic 3D Object Segmentation in Multiple Views using Volumetric Graph-Cuts”, In Journal Image and Vision Computing, vol. 28, Issue 1, Jan. 1, 2010, 10 Pages.
Carmody, Tim, “How Motion Detection Works in Xbox Kinect”, Retrieved from Internet:http://gizmodo.com/5681078/how-motion-detection-works-in-xbox-kinect, Nov. 3, 2010, 4 Pages.
Castle, et al., “Towards Simultaneous Recognition,Localization and Mapping for Hand-Held and Wearable Cameras”, In Proceedings 2007 IEEE International Conference on Robotics and Automation, Apr. 10, 2007, 6 Pages.
Castle, et al., “Video-Rate Localization in Multiple Maps for Wearable Augmented Reality”, In 12th IEEE International Symposium on Wearable Computers., Sep. 28, 2008, 8 Pages.
Celik, et al., “Monocular Vision SLAM for Indoor Aerial Vehicles”, In Proceedings of the IEEEIRSJ Inter-national conference on Intelligent Robots and Systems, Oct. 11, 2009, 8 Pages.
Chen, et al., “Object Modeling by Registration of Multiple Range Images”, In Journal Image and Vision Computing—Special issue: range image understanding archive, vol. 10, Issue 3, Apr. 1, 1992, 6 Pages.
Cheung, et al., “Robust Background Subtraction with Foreground Validation for Urban Traffic Video”, In Journal EURASIP Journal on Applied Signal Processing, vol. 2005, Jan. 1, 2005, 12 Pages.
Chiu, et al., “Improving the Kinect by Cross-Modal Stereo”, In Proceedings of 22nd British Machine Vision Conference., Aug. 2011, 10 Pages.
Chum, et al., “Locally Optimized RANSAC”, In Proceeding of 25th DAGM Symposium, Sep. 10, 2003, 8 Pages.
Cohen, et al., “Interactive Fluid-Particle Simulation using Translating Eulerian Grids”, In Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games, Feb. 19, 2010, 8 Pages.
Comaniciu, et al., “Mean Shift: A Robust Approach Toward Feature Space Analysis”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, Issue 5, May 2002, 17 Pages.
Criminisi, et al., “Decision Forests: A Unified Framework for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning”, In Journal of Foundations and Trends in Computer Graphics and Vision, vol. 7, Issue 2-3, Mar. 29, 2012, 150 Pages.
Curless, et al., “A Volumetric Method for Building Complex Models from Range Images”, In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, Aug. 1, 1996, 10 Pages.
Cutts, Matt, “Gadgets, Google, and SEO”, Retrieved from Internet:https://web.archive.org/web/20101129163015/http://www.mattcutts.com/blog/, Nov. 18, 2010, 10 Pages.
Dam, et al., “Immersive VR for Scientific Visualization: A Progress Report”, In Journal IEEE Computer Graphics and Applications, vol. 20 Issue 6, Nov. 1, 2000, 27 Pages.
Daniele, Ravi, “Kinect: The Next Generation of Motion Control”, Retrieved from: http://www.dmi.unict.it/-battiato/CVision1112/Kinect.pdf, Feb. 9, 2013, 66 Pages.
Davison, et al., “Mobile Robot Localisation using Active Vision”, In Proceedings of 5th European Conference on Computer Vision, vol. II, Jun. 1998, 17 Pages.
Debevec, et al., “Acquiring the Reflectance Field of a Human Face”, In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, Apr. 4, 2004, 12 Pages.
Dong, et al., “Keyframe-Based Real-Time Camera Tracking”, In IEEE 12th International Conference on Computer Vision, Sep. 29, 2009, 8 Pages.
Eade, et al., “Unified Loop Closing and Recovery for Real Time Monocular Slam”, In Proceeding of 19th British Conference on Machine Vision, Sep. 1, 2008, 10 Pages.
Einarsson, et al., “Relighting Human Locomotion with Flowed Reflectance Fields”, In the Proceedings of Eurographics Symposium on Rendering, 2006, 2 Pages.
Elfes et al., “Sensor Integration for Robot Navigation: Combining Sonar and Stereo Range Data in a Grid-Based Representation”, In Proceedings of the 26th Conference on Decision and Control, Dec. 1987, 8 Pages.
Elgammal, “Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning”, In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, Jun. 27, 2004, 8 Pages.
Escalera, et al., “Automatic Chessboard Detection for Intrinsic and Extrinsic Camera Parameter Calibration”, In Sensors, vol. 10, Issue 3 Mar. 2010, 18 Pages.
Fanelli, et al., “Real Time Head Pose Estimation with Random Regression Forests”, In IEEE Conference on Computer Vision and Pattern Recognition, Jun. 20, 2011, 8 Pages.
Fisher, et al., “Virtual Environment Display System”, In Proceedings of the ACM Workshop on Interactive 3D graphics, Oct. 23, 1986, 12 Pages.
Fitzgibbon, “Robust Registration of 2D and 3D Point Sets”, In Proceedings of Image and Vision Computing, 2003, Dec. 1, 2003, 10 Pages.
Frahm, et al., “Building Rome on a Cloudless Day”, In Proceedings of European Conference on Computer Vision: Part IV, Sep. 2010, 14 Pages.
Freeman, et al., “Television Control by Hand Gestures”, In Proceedings of the IEEE International Workshop on Automatic Face and Gesture Recognition, Jun. 26, 1995, 5 Pages.
Fujii, et al., “Three-dimensional finger tracking using direct and reflected infrared images”, In Proceedings of ACM User Interface Software and Technology, Oct. 2002, 2 Pages.
Furukawa, et al., “Towards Internet-scale Multi-view Stereo”, In Proceedings of Conference on Computer Vision and Pattern Recognition, Jun. 2010, 8 Pages.
Gall, et al., “Hough Forests for Object Detection, Tracking, and Action Recognition”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, Issue 11, Nov. 2011, 15 Pages.
Gee, et al., “6D Relocalisation for RGBD Cameras Using Synthetic View Regression”, In Proceeding of British Machine Vision Conference, Sep. 3, 2012, 11 Pages.
Gemeiner, et al., “Improving Localization Robustness in Monocular SLAM Using a High-Speed Camera”, In Proceedings of Robotics: Science and Systems, Jun. 25, 2008, 8 Pages.
Gemme, et al., “Pose Refinement Using ICP Applied to 3-D LIDAR Data for Exploration Rovers”, In Proceedings of International Symposium on Artificial Intelligence, Robotics and Automation in Space, Sep. 4, 2012, 8 Pages.
Goesele, et al., “Multi-View Stereo Revisited”, In Proceedings of Conference on Computer Vision and Pattern Recognition, vol. 2, 8 Pages.
Hadwiger, et al., “Advanced Illumination Techniques for GPU-Based Volume Raycasting”, In International Conference on Computer Graphics and Interactive Techniques,, Aug. 2009, 56 Pages.
Harada, Takahiro, “Real-Time Rigid Body Simulation on GPUs”, In Journal of GPU Gems 3, Chapter 29, 2008, 2007, 25 Pages.
Nister, et al., “Scalable Recognition with a Vocabulary Tree”, In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 17, 2006, 8 Pages.
Osher, et al., “Level Set Methods and Dynamic Implicit Surfaces, Signed Distance Functions”, In Book Level Set Methods and Dynamic Implicit Surfaces, Series Title Applied Mathematical Sciences, Series vol. 153, Apr. 6, 2006, 7 Pages.
Parker, et al., “Interactive Ray Tracing for Isosurface Rendering”, In Proceedings of IEEE Computer Society, Conference on Visualization, Oct. 1998, 6 Pages.
Pascal, et al., “Vision Based 3D Tracking and Pose Estimation for Mixed Reality”, In Emerging Technologies of Augmented Reality: Interfaces and Design, 2007, 20 Pages.
Paul, et al., “A Method for Registration of 3-D Shapes”, In Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence-Special issue on interpretation of 3-D scenes-part-II, vol. 14, Issue 2, Feb. 1, 1992, 18 Pages.
Pavlovic, et al., “Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review”, In Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, Issue 7, Jul. 1, 1997, 19 Pages.
“International Search Report and Written Opinion Issued in PCT Application No. PCT/US2012/020687”, dated Aug. 30, 2012, 8 Pages.
“International Search Report & Written Opinion Received for PCT Application No. PCT/US2013/077736”, dated Mar. 27, 2014,15 Pages.
“International Preliminary Report on Patentability Issued in PCT Application No. PCT/US2014/012226”, dated Jul. 9, 2015,6 Pages.
“International Search Report & Written Opinion Issued in PCT Application No. PCT/US2014/012226”, dated May 12, 2014,8 Pages.
“Second Written Opinion Issued in PCT Application No. PCT/US2014/012226”, dated Apr. 7, 2015,5 Pages.
“International Preliminary Report on Patentability Issued in PCT Application No. PCT/US2014/016749”, dated Jun. 5, 2015, 13 Pages.
“Second Written Opinion Issued in PCT Application No. PCT/US2014/016749”, dated Aug. 21, 2014, 6 Pages.
Pollefeys., “Detailed Real-Time Urban 3D Reconstruction from Video”, In International Journal of Computer Vision, vol. 78 Issue 2-3 Jul. 2008, 43 Pages.
Purcell, et al., “Ray Tracing on Programmable Graphics Hardware”, Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, vol. 1, Issue 3, Jun. 23, 2002, 10 Pages.
Qian, et al., “A Gesture-Driven Multimodal Interactive Dance System”, In Proceedings of the IEEE International Conference on Multimedia and Expo, vol. 3, Jun. 27, 2004, 4 Pages.
Rosenhahn, et al., “Automatic Human Model Generation”, In International Conference on Computer Analysis of Images and Patterns, Sep. 5, 2005, pp. 41-48.
Rosten, et al., “Faster and Better: A Machine Learning Approach to Corner Detection”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, Issue 1, Jan. 2010,15 Pages.
Rublee, et al., “ORB: An Efficient Alternative to SIFT or SURF”, In Proceeding of IEEE International Conference on Computer Vision, Nov. 6, 2011, 8 Pages.
Rusinkiewicz, et al., “Efficient variants of the ICP algorithm”, In Third International Conference on 3-D Digital Imaging and Modeling, 2001, pp. 145-152.
Rusinkiewicz, et al., “Real-Time 3D Model Acquisition”, In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, Jul. 2002, 9 Pages.
Sattler, et al., “Fast Image-Based Localization Using Direct 2D-to-3D Matching”, In Proceeding of International conference on Computer Vision, Nov. 6, 2011, 8 Pages.
Schindler, et al., “City-Scale Location Recognition”, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Jun. 17, 2007, 7 Pages.
Se, et al., “Vision-Based Global Localization and Mapping for Mobile Robots”, In Journal of IEEE Transaction on Robotics, vol. 21, Issue 3, Jun. 2005, 12 Pages.
Seitz, et al., “A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms”, In Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, vol. 1, 2006, Jun. 2006, 8 Pages.
Sequeira, et al.,“Automated Reconstruction of 3D Models from Real Environments”, Retrieved from: https://www.sciencedirect.com/science/article/pii/S0924271698000264, Feb. 1999, 2 Pages.
Shao, et al., “An Open System Architecture for a Multimedia and Multimodal User Interface”, In Proceedings of 3rd TIDE Congress, Aug. 24, 1998, 8 Pages.
Sharp, et al., “Camera pose estimation for 3D reconstruction”, U.S. Appl. No. 13/749,497, Jan. 24, 2013, 27 Pages.
Sheridan, et al., “Virtual Reality Check”, In Journal of Technology Review, vol. 96, Issue 7, Oct. 1993, 9 Pages.
Shotton, et al.,“Computing pose and/or shape of modifiable entities”, U.S. Appl. No. 13/300,542, Nov. 18, 2011, 38 Pages.
Shotton, et al., “Efficient Human Pose Estimation from Single Depth Images”, In Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, Issue 12, Jun. 21, 2011, 21 Pages.
Shotton, et al., “Real-Time Human Pose Recognition in Parts from Single Depth Images”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jun. 21, 2011, 8 Pages.
Shotton, et al., “Scene Coordinate Regression Forests for Camera Relocalization in RGB-D Images”, In Proceedings with IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 23, 2013, 8 Pages.
Sminchisescu, et al., “Human Pose Estimation from Silhouettes a Consistent Approach using Distance Level Sets”, In Proceedings of WSCG International Conference on Computer Graphics, Visualization and Computer Vision, 2002, 8 Pages
Stein, et al., “Structural Indexing: Efficient 3-D Object Recognition”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, Issue 2, Feb. 1992, 21 Pages.
Stevens, Jane E, “Flights into Virtual Reality Treating Real World Disorders”, In the Washington Post on Science Psychology, Mar. 27, 1995, 2 Pages.
Stuhmer, et al., “Real-Time Dense Geometry from a Handheld Camera”, In Journal of Springer-Verlag Berlin, LNCS 6376, Conference on Pattern Recognition (DAGM, Sep. 22, 2010, 10 Pages.
Sun, et al., “Conditional Regression Forests for Human Pose Estimation”, In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 16, 2012, 8 Pages.
Taylor, et al., “The Vitruvian Manifold: Inferring Dense Correspondences for One-Shot Human Pose Estimation”, In IEEE Conference on Computer Vision and Pattern Recognition, Jun. 16, 2012, 8 Pages.
Thrun, et al., “Probabilistic Robotics”, The MIT Press, Chapter 9, Sep. 2005, 56 Pages.
Tykkala, et al., “Direct Iterative Closest Point for Real-Time Visual Odometry”, In Proceedings of the IEEE International Conference on Computer Vision Workshops, Nov. 13, 2011, 7 Pages.
Vaughan-Nicholas, “Game-Console Makers Battle over Motion-Sensitive Controllers”, In IEEE Computer Society, Computer, Aug. 2009, 3 Pages.
Vidal, et al., “Pursuit-Evasion Games with Unmanned Ground and Aerial Vehicles”, In Proceedings of IEEE Intl Conference on Robotics and Automation, May 2001, 8 Pages.
Vogiatzis, et al., “Reconstructing Relief Surfaces”, In Image and Vision Computing, vol. 26, Issue 3, Mar. 1, 2008, 10 Pages.
Welch, et al., “Motion Tracking: No Silver Bullet, but a Respectable Arsenal”, In IEEE Computer Graphics and Application, vol. 22 Issue 6, Nov. 2002, 15 Pages.
Whelan, et al., “Robust Tracking for Real-Time Dense RGB-D Mapping with Kintinuous”, In technical report of MIT., Sep. 2012, 10 Pages.
Winder, et al., “Learning Local Image Descriptors”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jul. 17, 2007, 8 Pages.
Williams, et al., “Real-Time SLAM Relocalisation”, In Proceedings of International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, Oct. 2007, 8 Pages.
Wilson, et al., “Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces”, In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, Oct. 3, 2010, 10 Pages.
Williams, et al., “Automatic Relocalization and Loop Closing for Real-Time Monocular Slam”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, Issue 9, Sep. 2011, 14 Pages.
Hasegawa, et al., “Human-Scale Haptic Interaction with a Reactive Virtual Human in a Real-Time Physics Simulator”, In Proceedings of the ACM Computers in Entertainment, vol. 4, Issue 3, Article 6C, Jul. 1, 2006, 12 Pages.
He, Lei, “Generation of Human Body Models”, In University of Auckland, New Zealand, Apr. 2005, 111 Pages.
Henry, et al., “RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments”, In Proceedings of the 12th International Symposium on Experimental Robotics, Dec. 2010, 15 Pages.
Henry, et al., “RGB-D mapping: Using Kinect-style Depth Cameras for Dense 3D Modeling of Indoor Environments”, In International Journal of Robotic Research, vol. 31, Issue 5,, Feb. 10, 2012, 17 Pages.
Herath, et al., “Simultaneous Localisation and Mapping: A Stereo Vision Based Approach”, In International Conference on Intelligent Robots and Systems, Oct. 2006, 6 Pages.
Higo, et al., “A Hand-held Photometric Stereo Camera for 3-D Modeling”, In Proceedings of IEEE 12th International Conference on Computer Vision, Sep. 29, 2009, 8 Pages.
Hile, et al., “Information Overlay for Camera Phones in Indoor Environments”, In Proceedings of the 3rd International Conference on Location-and Context-Awareness, Sep. 20, 2007, 17 Pages.
Hirsch, et al., “BiDi Screen: A Thin, Depth-Sensing LCD for 3D Interaction using Light Fields”, In ACM Transactions on Graphics, Proceedings of ACM SIGGRAPH Asia, vol. 28, Issue 5, Dec. 2009, 9 Pages.
Hogue, et al., “Underwater Environment Reconstruction using Stereo and Inertial Data”, In IEEE International Conference on Systems, Man and Cybernetics, Jan. 2008, 6 Pages.
Holzer, et al., “Learning to Efficiently Detect Repeatable Interest Points in Depth Data”, In Proceedings of the 12th European Conference on Computer Vision, vol. Part I, Oct. 7, 2012, 14 Pages.
Hongo, et al., “Focus of Attention for Face and Hand Gesture Recognition Using MultipleCameras”, 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France., Mar. 2000, pp. 156-161.
Isard, et al., “Condensation—Conditional Density Propagation for Visual Tracking”, In Proceedings of the International Journal of Computer Vision, vol. 29, Issue 01, Aug. 1998, 24 Pages.
Izadi, et al., “KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera”, In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Oct. 16, 2011, 10 Pages.
Jivet, et al. “Real Time Representation of 3D Sensor Depth Images”, WSEAS Transactions on Electronics, vol. 5, Issue 3, Mar. 2008, 7 Pages.
Kabsch., A Solution for the Best Rotation to Relate Two Sets of Vectors, Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, vol. 32, Issue 5., Sep. 1976, pp. 922-923.
Kanade, et al., “A Stereo Machine for Video-rate Dense Depth Mapping and Its New Applications”, In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 18, 1996, 7 Pages.
Kazhdan., “Poisson Surface Reconstruction”, In Proceedings of the fourth Eurographics symposium on Geometry processing. 2006, Jun. 2006, 10 Pages.
Kil, et al., “GPU-assisted Surface Reconstruction on Locally-Uniform Samples”, In Proceedings of the 17th International Meshing Roundtable, Oct. 12, 2008, 18 Pages.
Kim, et al., “Relocalization Using Virtual Keyframes for Online Environment Map Construction”, In Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology,, Nov. 2009, 8 Pages.
Klein, et al., “Improving the Agility of Keyframe-Based SLAM”, In Proceedings of the 10th European Conference on Computer Vision, Oct. 12, 2008, 14 Pages.
Klein, et al., “Parallel Tracking and Mapping for Small AR Workspaces”, In IEEE, International Symposium on Mixed and Augmented Reality,, Nov. 2007, 10 Pages.
Knoop, et al., “Sensor Fusion for 3D Human Body Tracking with an Articulated 3D Body Model”, In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), May 19, 2006, 7 Pages.
Kohler, Markus, “Special Topics of Gesture Recognition Applied in Intelligent Home Environments”, In Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction, Sep. 1998, 12 Pages.
Kohler, Markus, “Technical Details and Ergonomical Aspects of Gesture Recognition Applied in Intelligent Home Environments”, Retrieved from Internet:https://pdfs.semanticscholar.org/0210/ daa859d8574b1e1f98f809814e83022c75e4.pdf, Jan. 1997, 35 Pages.
Kohler, Markus, “Vision Based Remote Control in Intelligent Home Environments”, In Journal of 3D Image Analysis and Synthesis, vol. 96, Nov. 1996, 8 Pages.
Krainin, et al., “Manipulator and Object Tracking for in Hand Model Acquisition”, In Journal of ICRA 2010 Workshop Paper, 2010, 34 Pages.
Kurihara., “Modeling Deformable Human Hands from Medical Images”, Proceedings of the 2004 ACM SIGGRAPH, 2004, 9 Pages.
Lai, “Sparse Distance Learning for Object Recognition Combining RGB and Depth Information”, In IEEE International Conference on Robotics and Automation, May 9, 2011, pp. 4007-4013.
Le Grand, “Broad-Phase Collision Detection with CUDA”, In journal of GPU Gems 3, Chapter 32, 2008, 2007, 28 Pages.
LePetit, et al., “Keypoint Recognition using Randomized Trees”, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, Issue 9 Sep. 2006, 15 Pages.
LePetit, et al., “Randomized Trees for Real-Time Keypoint Recognition”, In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, Jun. 20, 2005, pp. 775-781.
Levoy, et al., “The Digital Michelangelo Project: 3D Scanning of Large Statues”, In Proceedings of ACM SIGGRAPH, Jul. 2000, 14 Pages.
Li, et al., “Location Recognition using Prioritized Feature Matching”, In Proceedings of the 11th European Conference on Computer Vision, Sep. 5, 2010, 14 Pages.
Liu, et al., “An Investigation of Practical Approximate Nearest Neighbor Algorithms”, Advances in neural information processing systems, 2004, 8 Pages.
Lorenson, et al., “Marching Cubes: A High Resolution 3D Surface Construction Algorithm”, In Proceedings of 14th Annual Conference on Computer Graphics and Interactive Techniques, vol. 21, Issue 4, Aug. 1, 1987, 7 Pages.
Lu, et al., “Multi-view human motion capture with an improved deformation skin model”, Digital Image Computing: Techniques and Applications, 2008, 8 Pages.
Lysenkov, et al., “Recognition and Pose Estimation of Rigid Transparent Objects with a Kinect Sensor”, In Robotics: Science and Systems 2012 ,Sydney, NSW, Australia., Jul. 13, 2012, 8 Pages.
Magnenat-Thalmann, et al., “Joint-dependent local deformations for hand animation and object grasping”, In Proceedings on Graphics interface , Canadian Information Processing Society, 1988, 12 Pages.
Mair, et al., “Efficient camera-based pose estimation for real-time applications”, In International Conference on Intelligent Robots and Systems, Oct. 10, 2009, 8 Pages.
Michel, et al., “GPU-Accelerated Real-time 3D Tracking for Humanoid Locomotion and Stair Climbing”, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2007, 7 Pages.
Miyagawa, et al., “CCD-Based Range Finding Sensor”, In Proceedings of IEEE Transactions on Electron Devices, vol. 44, Issue 10, Oct. 1997, pp. 1648-1652.
Molchanov, et al., “Non-iterative Second-order Approximation of Signed Distance Functions for Any Isosurface Representation”, In Eurographics/ IEEE-VGTC Symposium on Visualization, vol. 29, Issue 3, Nov. 2010, 10 Pages.
Montillo, et al., “Age Regression from Faces Using Random Forests”, In Proceedings of the 16th IEEE International Conference on Image Processing, Nov. 7, 2009, 4 Pages.
Newcombe et al., “DTAM: Dense Tracking and Mapping in Real-Time”, In Proceedings of IEEE International Conference on Computer Vision, Nov. 6, 2011, 8 Pages.
Newcombe, et al., “KinectFusion: Real-Time Dense Surface Mapping and Tracking”, In Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality, Oct. 26, 2011, 10 Pages.
Newcombe, et al., “Live Dense Reconstruction with a Single Moving Camera”, In Proceedings of the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, Jun. 13, 2010, 8 Pages.
Zhou, et al., “Highly Parallel Surface Reconstruction”, In Microsoft Research, Technical Report MSR-TR-2008-53,, Apr. 2008, 10 Pages.
Nguyan, “Gpu Gems 3. Addison-Wesley Professional”, Retrieved From: <<http://http.developer.nvidia.com/GPUGems3/gpugems3_pref01.html>>, 2007, 4 Pages.
Ni, et al., “Epitomic Location Recognition”, In Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23, 2008, 8 Pages.
Nister, David, “Preemptive RANSAC for Live Structure and Motion Estimation”, In Proceedings of the Ninth IEEE International Conference on Computer Vision, Oct. 13, 2003, 8 Pages.
Related Publications (1)
  - Number: 20160171295 A1; Date: Jun 2016; Country: US

Provisional Applications (1)
  - Number: 61174878; Date: May 2009; Country: US

Continuations (2)
  - Parent: 13902506, May 2013, US; Child: 14978123, US
  - Parent: 12454628, May 2009, US; Child: 13902506, US