Electronic gaming platforms may obtain user input from a number of sources. As one example, handheld controller devices may be utilized by game players to provide a control input to the electronic gaming platform. As another example, a player's body positioning may be obtained via one or more cameras or optical elements of the electronic gaming platform. Motion of the player's body may be tracked by the electronic gaming platform and utilized as a control input from the player.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of an electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs.
As described herein, a plurality of photographs may be captured of a player during active play of a game. The plurality of photographs may be scored to identify and present one or more higher scoring photographs. Scoring of captured photographs may be based on one or more event-based scoring parameters or photographic characteristics of the captured photographs, such as blur, sharpness, exposure, brightness, contrast, hue, temperature, saturation, or other suitable characteristics. For example, photographs captured by a camera may exhibit blur, particularly if a subject in the photograph is moving under poor ambient lighting conditions. A vector field or model may be utilized for scoring, where a higher score is provided to photographs exhibiting lower velocity or acceleration as indicated by blur, for example. A baseline photograph may be captured before game play to provide a control input for adjusting exposure and/or recording parameters of one or more cameras to reduce blur or optimize another suitable photographic characteristic in captured photographs.
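By way of a non-limiting illustration, the following Python sketch scores a photograph as a decreasing function of tracked subject motion, which serves here as a rough proxy for blur. The function names, the velocity-to-score mapping, and the `max_speed` parameter are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: score a photograph as a decreasing function of
# estimated subject motion, which tends to correlate with motion blur
# under fixed exposure settings.

def motion_blur_score(joint_velocities, max_speed=5.0):
    """Map per-joint speeds (m/s) from a tracked skeleton to a 0..1 score.

    joint_velocities: list of (vx, vy, vz) tuples for tracked joints.
    Slower subjects yield higher scores, on the assumption that less
    motion produces less blur in the captured photograph.
    """
    if not joint_velocities:
        return 1.0  # no detected motion; assume a sharp photograph
    speeds = [(vx * vx + vy * vy + vz * vz) ** 0.5
              for vx, vy, vz in joint_velocities]
    mean_speed = sum(speeds) / len(speeds)
    # Clamp to [0, 1]: zero motion -> 1.0, at/above max_speed -> 0.0.
    return max(0.0, 1.0 - mean_speed / max_speed)

print(motion_blur_score([(0.1, 0.0, 0.2), (1.5, 0.3, 0.0)]))
```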
Furthermore, event-based scoring parameters may include actions of a player, a group of players, and/or other observing users. For example, photographs capturing actions of a player during game play may be scored according to a pose of the player. In one particular example, a user's pose is scored according to how accurately the pose matches a predefined pose, such as a virtual pose displayed on a graphical display. As yet another example, photographs capturing reactions of other users observing a player during game play may be considered when scoring a photograph. In one particular example, a photograph capturing a player performing a high jump and a crowd of users reacting by cheering is scored based on the high jump of the player as well as the reaction of the crowd. In other words, capturing the actions of the crowd may be of interest, and thus may contribute to a higher score of the photograph. As yet another example, photographs capturing a facial expression of a player may be scored according to predefined criteria. In one particular example, facial recognition algorithms are performed on a photograph capturing a player's face to determine whether the player is smiling, and the score of the photograph is increased if the player is smiling.
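As one illustration of pose- and expression-based scoring, the sketch below combines a pose-match score with a smile bonus. The joint representation, the exponential decay, and the `smile_bonus` value are hypothetical choices made for the example only.

```python
# Hypothetical sketch: combine a pose-match score with a facial-expression
# bonus. Pose similarity is measured as mean distance between tracked
# joints and the joints of a predefined target pose.

import math

def pose_match_score(tracked, target):
    """Return 1.0 for a perfect match, decaying toward 0.0 with distance."""
    dists = [math.dist(a, b) for a, b in zip(tracked, target)]
    mean_dist = sum(dists) / len(dists)
    return math.exp(-mean_dist)  # illustrative decay, not from the disclosure

def photo_score(tracked, target, is_smiling, smile_bonus=0.2):
    score = pose_match_score(tracked, target)
    if is_smiling:  # e.g., the output of a facial-recognition classifier
        score += smile_bonus
    return score

target_pose = [(0.0, 1.6), (0.5, 1.2), (-0.5, 1.2)]     # head, hands
tracked_pose = [(0.1, 1.5), (0.45, 1.25), (-0.55, 1.1)]
print(photo_score(tracked_pose, target_pose, is_smiling=True))
```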
Range cameras 115, 116, for example, may provide depth sensing functionality. In at least some implementations, range cameras or depth sensors 115, 116 may comprise an infrared light projector and a sensor for capturing reflected infrared light and/or ambient light. RGB camera 117 may capture visible light from ambient light sources. In some implementations, vision subsystem 114 may further include an audio sensor 119 to detect audio signals in the gaming environment 100 during game play. In one example, the audio sensor 119 may take the form of a microphone array.
Gaming environment 100 further includes a graphical display 118. Graphical display 118 may be a stand-alone device from gaming system 110, or may alternatively comprise a component of gaming system 110. Game console 112 may communicate with vision subsystem 114 to receive input signals from range cameras or depth sensors 115, 116, RGB camera 117, and audio sensor 119. Game console 112 may communicate with graphical display 118 to present graphical information to players.
A human user, referred to herein as a player 120, may interact with gaming system 110 within capture volume 122. Capture volume 122 may correspond to a physical space which may be captured by one or more cameras or optical elements of vision subsystem 114. Player 120 may move within capture volume 122 to provide user input to game console 112 via vision subsystem 114. Player 120 may additionally utilize another user input device to interact with gaming system 110, such as a controller, a mouse, a keyboard, or a microphone, for example.
Computing device 210 may correspond to an example implementation of previously described gaming system 110, including at least game console 112 and vision subsystem 114. Computing device 210 may include one or more processors such as example processor 220 for executing instructions. Such processors may be single core or multicore. Computing device 210 may include computer readable storage media 222 having or including instructions 224 stored thereon executable by one or more processors such as example processor 220 to perform one or more operations, processes, or methods described herein. In some implementations, programs executed thereon may be configured for parallel or distributed processing.
Computer readable storage media 222 may include removable media and/or built-in devices. In some implementations, computer readable storage media 222 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others.
In some implementations, computer readable storage media 222 may include removable computer readable storage media, which may be used to store and/or transfer data and/or instructions executable to implement the herein described methods and processes. Removable computer-readable storage media may take the form of CDs, DVDs, HD-DVDs, Blu-Ray Discs, EEPROMs, and/or floppy disks, among others.
In some implementations, computer readable storage media 222 may include one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
As one example, computing device 210 may establish a criterion that defines one or more moments of interest which correspond to predefined player movements or positions occurring in a capture volume, such as previously described capture volume 122. In at least some implementations, the established criteria may correspond to an anticipated pose that potentially will be assumed by the player during play of a game. The anticipated pose may be game dependent and/or may be dependent on a particular phase of game play that the player is interacting with. In some implementations, a moment of interest may be defined by a phase of game play or an action of a virtual avatar. For example, in a skiing game, a moment of interest may be defined as a phase of the game where a virtual avatar jumps off of a ski jump. In some implementations, moments of interest may selectively trigger photograph capture and scoring during a player's game play. In some implementations, photographs may be generated and scored relatively continuously during a player's game play.
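One possible way to encode such moment-of-interest criteria is sketched below as named predicates over a game-state dictionary. The criterion names, state keys, and predicate form are illustrative assumptions.

```python
# Hypothetical sketch: represent moments of interest as named criteria
# that can be checked against the current game state to decide whether
# photograph capture should be triggered.

MOMENTS_OF_INTEREST = {
    # criterion name -> predicate over a game-state dictionary
    "ski_jump": lambda state: state.get("phase") == "ski_jump"
                              and state.get("avatar_airborne", False),
    "jumping_pose": lambda state: state.get("player_pose") == "jump",
}

def should_capture(state):
    """Return the names of any moments of interest matching this state."""
    return [name for name, matches in MOMENTS_OF_INTEREST.items()
            if matches(state)]

state = {"phase": "ski_jump", "avatar_airborne": True}
print(should_capture(state))  # -> ['ski_jump']
```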
A player may assume a number of different body positions during game play. For example, a player may indicate a jumping position by extending arms and legs outward, and/or with the player's feet off of the ground. As another example, a player may indicate a skiing position by posing in a tucked skiing position. As yet another example, a player may indicate plugging of virtual holes existing in a game by positioning the player's body into specific positions that correspond to the locations of the virtual holes. Two or more players may collaborate to indicate still other acts or actions within a game. Computing device 210 may interpret the player's position via input signals received from vision subsystem 230. Vision subsystem 230 may correspond to previously described vision subsystem 114, for example.
Computing device 210 may capture via one or more cameras of vision subsystem 230, a plurality of photographs of a player during play of a game. As one example, previously described RGB camera 117 may be utilized to capture photographs of the player or players during active game play. Photographs captured via vision subsystem 230 may be stored in a data store, such as a local data store 226, for example. Photographs captured via vision subsystem 230 may be additionally or alternatively stored in a remote data store of server device 214, for example.
Computing device 210 may score the captured photographs along a scale of desirability. In some implementations, the scale of desirability may be related to the criteria established to define the one or more moments of interest, where relatively higher scores are assigned to the captured photographs if the player's movements or positions correspond to the criteria established to define the one or more moments of interest. Computing device 210 may utilize a variety of information to score captured photographs, including information obtained from one or more depth or RGB cameras of vision subsystem 230.
In at least some implementations, computing device 210 may score the captured photographs based, at least in part, on one or more event-based scoring parameters or photographic characteristics of the photograph. Photographic characteristics may include, for example, one or more of blur, sharpness, exposure, brightness, contrast, hue, temperature, saturation, etc., among other suitable photographic characteristics. Note that event-based scoring parameters may include photographic characteristics. As one example, computing device 210 may assess blur for different photographic regions of the captured photographs, where blur occurring in certain photographic regions of the captured photographs may be weighted more heavily in the score than blur occurring in other photographic regions of the captured photographs.
Furthermore, event-based scoring parameters may include actions of a player, a group of players, and/or other observing users. For example, facial recognition and/or skeletal frame recognition of players and/or users may be utilized to identify one or more regions of the photograph corresponding to the player's face, arms, legs, torso, etc. As one example, a photograph capturing a player may be scored based on how accurately the player assumes a predefined pose, such as a virtual pose displayed on graphical display 118. In some implementations, the player's pose may be scored along a scale of desirability according to how closely different parts of the player's body align with a virtual position. In some implementations where multiple players interact with the computing device 210, the computing device 210 may score the pose of each player in a captured photograph. Moreover, score bonuses or multipliers may be achieved when multiple players score highly simultaneously. In some implementations, a facial expression of a player or a user may be scored based on established criteria to contribute to a score of a photograph. For example, the established criteria may dictate that the score of a photograph be increased if the photograph captures a player who is smiling.
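The bonus for multiple players scoring highly at the same time might be modeled as in the following sketch; the `high` threshold and the multiplier value are hypothetical.

```python
# Hypothetical sketch: apply a score multiplier when multiple players
# pose well simultaneously in the same captured photograph.

def group_pose_score(player_scores, high=0.8, multiplier=1.5):
    """player_scores: per-player pose scores in 0..1 for one photograph."""
    base = sum(player_scores) / len(player_scores)
    # Bonus multiplier only when every player scores highly at once.
    if all(s >= high for s in player_scores):
        base *= multiplier
    return base

print(group_pose_score([0.9, 0.85]))  # both pose well -> bonus applied
print(group_pose_score([0.9, 0.4]))   # one player off-pose -> no bonus
```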
Furthermore, scoring of photographs may be based not only on a player's actions, but also on the actions of other users in the room around the player. For example, if a player's photograph earns a high score because of an impressive high jump, the other users observing the player may react by jumping up and cheering; capturing the crowd of users jumping up may also be of interest and may contribute to a higher score for the photograph.
In some implementations, computing device 210 may score the captured photographs based, at least in part, on audio signals captured by audio sensor 119 at moments when a photograph is captured. For example, sound generated by a player, group of players, and/or other observing users may be identified and utilized to score a photograph. In one particular example, the score of a photograph may be increased based on captured audio exceeding a sound level threshold, such as when a group of people cheer. As another example, the score of a photograph may be increased based on captured audio matching a model audio signature, such as a player singing a song. As yet another example, photograph capture may be triggered in response to an audio level exceeding an audio threshold and/or captured audio matching a model audio signature.
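A minimal sketch of the sound-level case follows, using RMS level as a stand-in for whatever loudness measure an implementation would actually use; the threshold and bonus values are assumptions.

```python
# Hypothetical sketch: boost a photograph's score when the audio captured
# at the moment of capture exceeds a sound-level threshold (e.g., a
# cheering crowd). RMS level stands in for a real loudness measure.

def rms_level(samples):
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def audio_score_bonus(samples, threshold=0.5, bonus=0.25):
    """Return a score bonus if the audio at capture time was loud."""
    return bonus if rms_level(samples) > threshold else 0.0

cheer = [0.9, -0.8, 0.85, -0.95]     # loud, cheering-like samples
quiet = [0.05, -0.04, 0.03, -0.02]
print(audio_score_bonus(cheer))      # -> 0.25
print(audio_score_bonus(quiet))      # -> 0.0
```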
Computing device 210 may present scored photographs to the player via input/output devices 228. As one example, input/output devices 228 may comprise a graphical display, such as previously described graphical display 118.
In at least some implementations, scoring of the captured photographs may be performed remotely, for example, by server device 214. For example, computing device 210 may send one or more of the captured photographs to server device 214 via network 216 for scoring, whereby server device 214 may respond to computing device 210 with respective scores for the one or more of the captured photographs via network 216.
In at least some implementations, server device 214 may host a social networking platform that enables a player or user of computing device 210 to interact with a player or user of other computing device 212 via a social network. In at least some implementations, computing device 210 or server device 214 may identify one or more of the captured photographs that have been shared by a player of computing device 210 with one or more players within the social network. Scoring of the captured photographs may be further based, at least in part, on whether one or more of the captured photographs were shared by the player within the social network. For example, a score of a photograph may be increased in response to a player sharing the photograph with another player. Sharing of photographs may also occur via text messaging, email, or other suitable forms of communication. In some implementations, computing device 210 may score the captured photographs based, at least in part, on a number of people viewing and/or reacting to a player's captured performance.
In at least some implementations, computing device 210 or server device 214 may identify player or user interaction with captured photographs, and may vary a score of the captured photographs in response to such interactions. Examples of user interactions include player ratings, player commentary, sharing of photographs (e.g., as previously discussed), etc. For example, computing device 210 or server device 214 may identify one or more player ratings assigned to captured photographs, such as via the social network. Scoring of captured photographs may be further based, at least in part, on the one or more player ratings. For example, captured photographs that have been assigned higher player ratings may be scored relatively higher than captured photographs that have been assigned lower player ratings. Player ratings may associate thumbs up/thumbs down information, star rating information, number rating information, commentary, or other suitable information with a captured photograph as metadata, for example.
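One plausible way to fold shares and star ratings into a photograph's score is sketched below; the weighting scheme and parameter names are illustrative assumptions only.

```python
# Hypothetical sketch: adjust a photograph's base score using social
# interactions (shares and player star ratings) stored as metadata.

def social_adjusted_score(base_score, shares=0, star_ratings=(),
                          share_weight=0.1, rating_weight=0.05):
    """Increase the score per share and per point of average star rating."""
    adjusted = base_score + share_weight * shares
    if star_ratings:
        avg = sum(star_ratings) / len(star_ratings)  # e.g., 1..5 stars
        adjusted += rating_weight * avg
    return adjusted

print(social_adjusted_score(0.7, shares=3, star_ratings=[4, 5, 5]))
```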
A score for a captured photograph may be associated with the captured photograph in a data store (e.g., data store 226) as scoring information, for example. In some cases, scoring information may be utilized to select a subset of captured photographs for presentation to a player, for example, as previously discussed. Scoring information may also be utilized as feedback to computing device 210 that may be utilized to determine when or if additional photographs of a player are to be captured. For example, photographs that are associated with a relatively low score may cause computing device 210 to capture additional photographs of the player during subsequent game play in an attempt to capture a photograph having a higher score. As another example, photographs captured during a particular moment or moments in a game may be attributed to or correlated with a higher score relative to other moments in the game. Computing device 210 may capture photographs of the player during the particular moment or moments of subsequent game play that are attributed to or correlated with the higher score in an attempt to capture higher scoring photographs.
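The feedback idea might be realized as in the following sketch, which mines stored scoring information for game moments that have historically produced high-scoring photographs and therefore warrant additional captures; the data layout and threshold are hypothetical.

```python
# Hypothetical sketch: use stored scoring information as feedback,
# identifying game moments historically correlated with high-scoring
# photographs so that extra captures can be scheduled there.

from collections import defaultdict

def high_value_moments(score_log, min_score=0.8):
    """score_log: list of (game_moment, score) pairs. Return moments
    whose average historical score meets the threshold."""
    totals = defaultdict(list)
    for moment, score in score_log:
        totals[moment].append(score)
    return [m for m, scores in totals.items()
            if sum(scores) / len(scores) >= min_score]

log = [("ski_jump", 0.9), ("ski_jump", 0.85), ("menu", 0.2)]
print(high_value_moments(log))  # -> ['ski_jump']
```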
Operation 302 comprises establishing photographic baseline conditions via capturing of a baseline photograph prior to capturing a plurality of photographs. In at least some implementations, the baseline photograph may comprise a combination of optical information obtained from two or more cameras or optical elements. For example, optical information obtained from an RGB camera may be combined with optical information obtained from one or more range cameras. Operation 304 comprises adjusting exposure and/or recording parameters of the vision subsystem in response to the baseline photograph prior to capturing the plurality of photographs of the player during play of the game.
In at least some implementations, adjustment of the exposure and/or recording parameters of the camera may be based on the entire capture volume of the baseline photograph or may be based on a particular region within the capture volume. For example, the exposure and/or recording parameters may be adjusted in response to a region of the baseline photograph that corresponds to the player's face. In this way, a photograph of a player's face may be obtained during game play even under poor ambient lighting conditions.
In some implementations, the baseline process may be performed more frequently, such as each time a photograph of the user is taken. By performing the baseline process more frequently, the computing system can more quickly adapt to changing conditions, such as the user turning on the lights.
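A simplified sketch of the baseline-driven adjustment described above follows, metering either the full baseline photograph or only a face region. The grayscale brightness model, the target level, and the multiplicative correction are all assumptions made for illustration.

```python
# Hypothetical sketch: derive an exposure correction from a baseline
# photograph, optionally metering only the region corresponding to
# the player's face.

def mean_brightness(pixels):
    return sum(pixels) / len(pixels)   # grayscale values in 0..255

def adjust_exposure(baseline_pixels, face_region=None, target=128.0):
    """Return a multiplicative exposure correction toward a target level.

    face_region: optional subset of pixels to meter instead of the full
    capture volume, so a dimly lit face still comes out well exposed.
    """
    metered = face_region if face_region else baseline_pixels
    level = mean_brightness(metered)
    return target / max(level, 1.0)    # >1 brightens, <1 darkens

scene = [40, 60, 55, 200, 210]   # dim room with a bright display
face = [40, 60, 55]              # the player's face is in shadow
print(adjust_exposure(scene, face_region=face))
```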
Operation 306 comprises establishing criteria that define one or more moments of interest which correspond to predefined player movements or positions occurring in a capture volume and/or game play events that direct a player to assume a pose. In at least some implementations, the established criteria correspond to an anticipated pose that potentially will be assumed by the player during play of the game. For example, a player may perform a particular task within a game by moving or positioning the player's body into particular positions within the capture volume. The moments of interest at which these positions may be anticipated may be based on the game being played by the player and/or the particular phase of the game with which the player is interacting. For example, moments of interest may be initiated by triggers placed in different phases of a game where it is anticipated that the player will be in specific poses. As another example, moments of interest may be initiated by detection of a pose defined by the established criteria.
In some implementations, moments of interest may be utilized to selectively trigger initiation of photograph capture. As such, establishing criteria may include identifying a trigger event during play of the electronic game. For example, the trigger event may include identifying a pose assumed by a player. As another example, the trigger event may include a given phase of the electronic game. As yet another example, the trigger event may include an avatar representative of the player assuming a movement or pose that approximates the established criteria. In some implementations where audio is captured via an audio interface during play of the electronic game, the trigger event may include the captured audio exceeding a sound level threshold or exhibiting a model audio signature.
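The trigger events enumerated above might be folded into a single capture check, as in the sketch below; the trigger names, state keys, and audio threshold are illustrative.

```python
# Hypothetical sketch: a single trigger check combining the example
# trigger events: a detected player pose, a given game phase, an avatar
# pose, or captured audio exceeding a sound-level threshold.

def capture_triggered(state, audio_level=0.0,
                      trigger_phases=("ski_jump",),
                      trigger_poses=("jump", "tuck"),
                      audio_threshold=0.6):
    return (state.get("player_pose") in trigger_poses
            or state.get("phase") in trigger_phases
            or state.get("avatar_pose") in trigger_poses
            or audio_level > audio_threshold)

print(capture_triggered({"player_pose": "jump"}))             # -> True
print(capture_triggered({"phase": "menu"}, audio_level=0.9))  # -> True
```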
In some implementations, operation 306 may be omitted and photograph capture may be performed relatively continuously throughout game play. For example, in computing systems with a suitably large amount of computing resources, photographs may be captured every frame during game play.
Operation 308 comprises capturing a plurality of photographs of the player. As one example, the plurality of photographs may be captured via an RGB camera of the vision subsystem. In at least some implementations, capturing the plurality of photographs of the player includes capturing the plurality of photographs if the player's movements or positions correspond, at least in part, to the criteria established to define the one or more moments of interest.
Operation 310 comprises for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. As discussed above, the event-based scoring parameter may include various established criteria that may be compared to the events corresponding to the captured photographs. For example, the established criteria may define one or more predefined player movements or poses within the capture volume. As another example, the established criteria may define one or more predefined avatar movements or poses of an avatar within the electronic game.
In some implementations, the event-based scoring parameter may include a predefined movement or pose of one or more other players or persons within the capture volume. Correspondingly, a captured photograph may be scored based on actions of multiple players, and/or a response from other users observing a player or group of players. In some implementations where audio is captured via an audio interface during game play, at moments corresponding to the captured photographs, the event-based scoring parameter may include a sound level threshold or a model audio signature that may be compared to the captured audio corresponding to the captured photograph.
Operation 312 comprises assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The scores may be assigned from a scale of desirability which is related to the criteria established for the event-based scoring parameter, where relatively higher scores are assigned to the captured photographs if the player's movements or positions correspond to the established criteria. As one example, assigning respective scores to the captured photographs may include scoring a photograph of the captured photographs with a higher score if the photograph depicts the player attempting to assume a predefined body pose.
The scoring performed at operation 312 may be further based, at least in part, on one or more photographic characteristics identified in the captured photographs. Photographic characteristics may include, for example, one or more of blur, sharpness, exposure, brightness, contrast, hue, temperature, saturation, etc., among other suitable photographic characteristics. As one example, scoring the captured photographs may include scoring a captured photograph relatively higher if the captured photograph exhibits less blur, with relatively lower scores assigned to the captured photographs as an increasing function of blur in the captured photographs.
In at least some implementations, scoring the captured photographs may employ a weighting associated with different portions of a captured scene. As one example, photographic characteristics of a photo-representation of the player's face within the captured photographs may be weighted more heavily than photographic characteristics of a photo-representation of the player's lower body within the captured photographs. Operation 312 may comprise, for example, assessing blur or other photographic characteristic for different photographic regions of the captured photographs, where blur occurring in certain photographic regions of the captured photographs is weighted more heavily in the score than blur occurring in other photographic regions of the captured photographs. For example, blur may more heavily reduce a score of a photograph if the blur is within a photographic region of the photograph corresponding to the player's face.
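Region-weighted blur assessment might look like the following sketch, in which blur on the face reduces the score more than blur on the lower body; the region names and weights are hypothetical assumptions.

```python
# Hypothetical sketch: penalize blur more heavily in some photographic
# regions than others, e.g., blur on the face reduces the score more
# than blur on the lower body.

REGION_WEIGHTS = {"face": 3.0, "torso": 1.5, "lower_body": 0.5}

def weighted_blur_penalty(blur_by_region):
    """blur_by_region: region name -> blur estimate in 0..1."""
    total_weight = sum(REGION_WEIGHTS.values())
    penalty = sum(REGION_WEIGHTS[r] * b
                  for r, b in blur_by_region.items()
                  if r in REGION_WEIGHTS)
    return penalty / total_weight

def blur_score(blur_by_region):
    return 1.0 - weighted_blur_penalty(blur_by_region)

print(blur_score({"face": 0.1, "torso": 0.2, "lower_body": 0.8}))  # higher
print(blur_score({"face": 0.8, "torso": 0.2, "lower_body": 0.1}))  # lower
```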
In at least some implementations, blur or other photographic characteristic may be identified or assessed from a combination of optical information obtained from an RGB camera and one or more range cameras of the vision subsystem. For example, a function may be utilized to combine multiple scores, such as a skeletal motion score, an RGB score, a depth score, and a lighting score. High pass filtering or other suitable approach may be applied to optical signals or combinations thereof to identify or assess blur.
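Such a combining function might be as simple as a weighted average, as in the sketch below; the weights are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: combine skeletal-motion, RGB, depth, and lighting
# scores into a single quality estimate with a weighted average.

def combined_score(skeletal, rgb, depth, lighting,
                   weights=(0.4, 0.3, 0.2, 0.1)):
    scores = (skeletal, rgb, depth, lighting)
    return sum(w * s for w, s in zip(weights, scores))

print(combined_score(skeletal=0.9, rgb=0.7, depth=0.8, lighting=0.6))
```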
In at least some implementations, the scoring performed at operation 312 may further comprise identifying one or more of the captured photographs shared by the player with one or more other players within a social network. The scoring may be further based, at least in part, on whether one or more of the captured photographs were shared by the player within the social network. Photographs that are shared with a greater number of players may be increased in score to a greater extent than photographs that are not shared with other players or that are shared with a lesser number of players.
In at least some implementations, the scoring performed at operation 312 may further comprise identifying one or more player ratings assigned to a photograph of the captured photographs. The scoring may be further based, at least in part, on the one or more player ratings of the photograph. A score of a photograph may be increased in response to positive or higher player ratings, and may be reduced in response to negative or lower player ratings, for example.
Operation 314 comprises associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs. For example, the assigned scores and associated photographs may be stored in data store 226.
Operation 316 comprises presenting one or more scored photographs to the player. In at least some implementations, when the scored photographs are presented to the player, operation 316 may further comprise prompting the player with an inquiry about a user action that may be taken with respect to one or more relatively higher scoring photographs. Prompting may be performed, for example, via an output device such as a graphical display. The inquiry may include an inquiry into whether the player wants to save, upload, and/or send the scored photographs to a desired location or to a desired user.
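A minimal sketch of selecting and presenting the highest-scoring photographs, with a save/upload/send prompt, follows; the data layout, prompt text, and `count` parameter are hypothetical.

```python
# Hypothetical sketch: select the highest-scoring photographs for
# presentation and prompt the player with possible user actions.

def present_top_photos(scored_photos, count=3):
    """scored_photos: list of (photo_id, score) pairs from the data store."""
    top = sorted(scored_photos, key=lambda p: p[1], reverse=True)[:count]
    for photo_id, score in top:
        print(f"{photo_id}: score {score:.2f}")
        print("  Save, upload, or send this photograph? [s/u/n]")
    return top

present_top_photos([("photo_01", 0.91), ("photo_02", 0.55),
                    ("photo_03", 0.78), ("photo_04", 0.83)])
```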
If, for example, scoring of the captured photographs is based on an amount of blur or a location of blur in the captured photographs, then relatively higher scores may be assigned to photographs exhibiting less blur, or exhibiting blur outside of more heavily weighted photographic regions, such as a region corresponding to the player's face.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.