The present disclosure relates generally to video game systems and, more specifically, to an interactive video game system that enables simultaneous multi-player game play.
Video game systems generally enable players to control characters in a virtual environment to achieve predefined goals or objectives. Traditional video game systems generally rely on manual input devices, such as joysticks, game controllers, keyboards, and so forth, to enable players to control characters within the virtual environment of the game. Additionally, certain modern video game systems can include a camera capable of tracking the movements of players, enabling players to control video game characters based on their movements. However, these systems typically suffer from issues with occlusion, in which a portion of a player is at least temporarily obscured from the camera and, as a result, the system is no longer able to accurately track the position or movements of the player. For example, occlusion can cause jittering or stuttering in the movements of the characters in the virtual environment, as well as other imprecise or erroneous translations of player actions into character actions within the game. Additionally, for multi-player video game systems, the potential for occlusion dramatically increases with the number of players.
Present embodiments are directed to an interactive video game system that includes at least one sensor and at least one display device disposed near a play area. The system also includes a controller communicatively coupled to the at least one sensor and the at least one display device, wherein the controller is configured to: receive, from the at least one sensor, scanning data of a player in the play area; generate at least one model from the scanning data of the player; identify an action of the player in the play area based on the at least one model; generate a virtual representation for the player based on the at least one model and the action of the player; and present, on the display device, the virtual representation of the player in a virtual environment, wherein an action of the virtual representation is augmented relative to the action of the player.
Present embodiments are also directed to a method of operating an interactive video game system. The method includes: receiving, via processing circuitry of a controller of the interactive video game system, scanning data of a player positioned within a play area; generating, via the processing circuitry, a shadow model and a skeletal model of the player based on the scanning data; generating, via the processing circuitry, a virtual representation for the player based on the shadow model, wherein the virtual representation is associated with an augmented ability; identifying, via the processing circuitry, an action of the player in the play area based on the skeletal model, wherein the action triggers the augmented ability associated with the virtual representation; and presenting, via a display device of the interactive video game system, the virtual representation in a virtual environment performing the augmented ability.
Present embodiments are also directed to an interactive video game system that includes a controller configured to: receive, from at least one sensor of the interactive video game system, scanning data of a player in a play area; generate a shadow model and a skeletal model from the scanning data of the player; generate a virtual representation for the player based on the shadow model; identify an action of the player in the play area based on the skeletal model of the player, wherein the action triggers an augmented ability associated with the virtual representation; and present, on a display device of the interactive video game system, the virtual representation in a virtual environment performing the augmented ability.
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
As used herein, “scanning data” refers to two-dimensional (2D) or three-dimensional (3D) data collected by sensing (e.g., measuring, imaging, ranging) visible outer surfaces of players in a play area. More specifically, “volumetric scanning data,” as used herein, refers to 3D scanning data, such as point cloud data, and may be contrasted with “2D scanning data,” such as image data.
As used herein, a “player model” is a 2D or 3D model generated from the scanning data of a player that generally describes the outer surfaces of the player and may include texture data. More specifically, a “volumetric player model” or “volumetric model,” as used herein, refers to a 3D player model generated from volumetric scanning data of a player, and may be contrasted with a “2D player model” that is generated from 2D scanning data of a player.
A “shadow model,” as used herein, refers to a texture-less volumetric model of a player generated from the scanning data of a player, either directly or by way of the player model. As such, when presented on a 2D surface, such as a display device, the shadow model of a player has a shape substantially similar to a shadow or silhouette of the player when illuminated from behind.
A “skeletal model,” as used herein, refers to a 3D model generated from the scanning data of a player that defines predicted locations and positions of certain bones (e.g., bones associated with the arms, legs, head, spine) of a player to describe the location and pose of the player within a play area. As such, the skeletal model is used to determine the movements and actions of players in the play area to trigger events in a virtual environment and/or in the play area.
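To make the relationships among these models concrete, the following is a minimal sketch (in Python; the field names and representations are illustrative assumptions, not details prescribed by this disclosure) organizing scanning data and the derived shadow, skeletal, and player models as simple data structures:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class ScanningData:
    """Raw sensor output for one player, e.g., a point cloud collected
    by ranging the player's visible outer surfaces."""
    point_cloud: List[Point3D] = field(default_factory=list)
    timestamp: float = 0.0

@dataclass
class ShadowModel:
    """Texture-less volumetric model; presented on a 2D display it
    resembles the player's backlit silhouette."""
    surface_points: List[Point3D] = field(default_factory=list)

@dataclass
class SkeletalModel:
    """Predicted locations of certain bones, describing the player's
    location and pose within the play area."""
    bones: Dict[str, Point3D] = field(default_factory=dict)  # e.g., "spine"

@dataclass
class PlayerModel:
    """Outer-surface model of the player, optionally with texture data."""
    shadow: ShadowModel
    skeleton: SkeletalModel
    texture: bytes = b""
```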
Present embodiments are directed to an interactive video game system that enables multiple players (e.g., up to 12) to perform actions in a physical play area to control virtual representations of the players in a displayed virtual environment. The disclosed interactive video game system includes one or more sensors (e.g., cameras, light sensors, infrared (IR) sensors) disposed around the play area to capture scanning data (e.g., 2D or volumetric scanning data) of the players. For example, certain embodiments of the disclosed interactive video game system include an array having two or more volumetric sensors, such as depth cameras and Light Detection and Ranging (LIDAR) devices, capable of volumetrically scanning each of the players. The system includes suitable processing circuitry that generates models (e.g., player models, shadow models, skeletal models) for each player based on the scanning data collected by the one or more sensors, as discussed below. During game play, one or more sensors capture the actions of the players in the play area, and the system determines the nature of these actions based on the generated player models. Accordingly, the interactive video game system continuously updates the virtual representations of the players and the virtual environment based on the actions of the players and their corresponding in-game effects.
As mentioned, the disclosed interactive video game system includes one or more sensors arranged around the play area to monitor the actions of the players within the play area. For example, in certain embodiments, an array including multiple sensors may be used to generally ensure that a skeletal model of each player can be accurately generated and updated throughout game play despite potential occlusion from the perspective of one or more sensors of the array. In other embodiments, fewer sensors may be used (e.g., a single camera), and the data may be processed using a machine-learning agent that generates complete skeletal models for the players despite potential occlusion. For such embodiments, the machine-learning agent may be trained in advance using a corpus of scanning data in which the actual skeletal models of players are known (e.g., manually identified by a human, identified using another skeletal tracking algorithm) while portions of one or more players are occluded. As such, after training, the machine-learning agent may then be capable of generating skeletal models of players from scanning data despite potential occlusion.
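As a rough illustration of this training approach, the sketch below fits a generic regressor to predict complete joint positions from partially occluded observations. The joint count, feature layout, synthetic stand-in data, and choice of model are all assumptions made for illustration; the disclosure does not specify a particular learning method:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

N_JOINTS = 20  # assumed joint count; each joint contributes (x, y, z)

def features(pose: np.ndarray, occluded: np.ndarray) -> np.ndarray:
    """Zero out occluded joints and append a visibility mask so the model
    can distinguish 'joint at origin' from 'joint not observed'."""
    masked = pose.copy()
    masked[occluded] = 0.0
    return np.concatenate([masked.ravel(), (~occluded).astype(float)])

# Training corpus: poses whose true skeletons are known (e.g., labeled
# manually or by another skeletal tracking algorithm) while some joints
# were occluded from the sensor's perspective.
rng = np.random.default_rng(0)
true_poses = rng.normal(size=(500, N_JOINTS, 3))   # stand-in data only
occlusions = rng.random((500, N_JOINTS)) < 0.2     # ~20% of joints hidden
X = np.stack([features(p, o) for p, o in zip(true_poses, occlusions)])
y = true_poses.reshape(500, -1)

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300)
model.fit(X, y)
# After training, model.predict(features(pose, occluded).reshape(1, -1))
# yields a complete skeleton despite the occluded joints.
```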
Additionally, the processing circuitry of the system may use the scanning data to generate aspects (e.g., size, shape, outline) of the virtual representations of each player within the virtual environment. In certain embodiments, certain aspects (e.g., color, texture, scale) of the virtual representation of each player may be further adjusted or modified based on information associated with the player. As discussed below, this information may include information related to game play (e.g., items acquired, achievements unlocked), as well as other information regarding activities of the player outside of the game (e.g., player performance in other games, items purchased by the player, locations visited by the player). Furthermore, the scanning data collected by the sensors can be used by the processing circuitry of the game system to generate additional content, such as souvenir images in which a player model is illustrated as being within the virtual world.
Furthermore, the processing circuitry of the system may use the scanning data to augment movements of the virtual representations of each player. For example, in certain embodiments, the processing circuitry of the system may use the scanning data to generate a skeletal model indicating that a player is moving or posing in a particular manner. In response, the processing circuitry may augment the virtual representation of the player to enable the virtual representation to move or change in a manner that goes beyond the actual movement or pose of the player, such that the motion and/or appearance of the virtual representation is augmented or enhanced. For example, in an embodiment in which a virtual representation, such as a particular video game character, has particular enhanced abilities (e.g., an ability to jump extremely high, an ability to swim extremely fast, an ability to fly), then certain player movements or poses (e.g., a small hopping motion, a swim stroke through the air, a flapping motion) may be detected and may trigger these enhanced abilities in the virtual representations of the players.
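For instance, a minimal sketch of this kind of trigger logic might look like the following, where the hop threshold and the exaggeration factor are assumed example values rather than parameters taken from this disclosure:

```python
HOP_THRESHOLD_M = 0.08  # assumed: minimum upward pelvis displacement (meters)
JUMP_GAIN = 25.0        # assumed: exaggeration factor for the character

def detect_hop(prev_pelvis_y: float, pelvis_y: float) -> bool:
    """A modest real-world upward pelvis movement counts as a hop gesture."""
    return (pelvis_y - prev_pelvis_y) > HOP_THRESHOLD_M

def augmented_jump_height(player_hop_height: float) -> float:
    """Translate a small physical hop into an enhanced in-game jump."""
    return player_hop_height * JUMP_GAIN

# Example: an 8 cm hop by the player becomes a 2 m leap by the character.
print(augmented_jump_height(0.08))  # -> 2.0
```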
Accordingly, the disclosed interactive video game system enables an immersive and engaging experience for multiple simultaneous players.
With the foregoing in mind,
The embodiment of the interactive video game system 10 illustrated in
In the illustrated embodiment, each sensing unit 38 includes a respective sensor 40, which may be a volumetric sensor (e.g., an infra-red (IR) depth camera, a LIDAR device, or another suitable ranging device) or a 2D imaging device (e.g., an optical camera). For example, in certain embodiments, all of the sensors 40 of the sensing units 38 in the array 36 are either IR depth cameras or LIDAR devices, while in other embodiments, a mixture of IR depth cameras, LIDAR devices, and/or optical cameras is present within the array 36. It is presently recognized that both IR depth cameras and LIDAR devices can be used to volumetrically scan each of the players 12, and the collected volumetric scanning data can be used to generate various models of the players, as discussed below. For example, in certain embodiments, IR depth cameras in the array 36 may be used to collect data to generate skeletal models, while the data collected by LIDAR devices in the array 36 may be used to generate player models and/or shadow models for the players 12, which is discussed in greater detail below. It is also recognized that LIDAR devices, which collect point cloud data, are generally capable of scanning and mapping a larger area than depth cameras, typically with better accuracy and resolution. As such, in certain embodiments, at least one sensing unit 38 of the array 36 includes a corresponding volumetric sensor 40 that is a LIDAR device to enhance the accuracy or resolution of the array 36 and/or to reduce a total number of sensing units 38 present in the array 36.
Further, each illustrated sensing unit 38 includes a sensor controller 42 having suitable memory circuitry 44 and processing circuitry 46. The processing circuitry 46 of each sensing unit 38 executes instructions stored in the memory circuitry 44 to enable the sensing unit 38 to scan the players 12 to generate scanning data (e.g., volumetric and/or 2D scanning data) for each of the players 12. In the illustrated embodiment, the sensing units 38 are communicatively coupled to the primary controller 34 via a high-speed internet protocol (IP) network 48 that enables low-latency exchange of data between the devices of the interactive video game system 10. Additionally, in certain embodiments, the sensing units 38 may each include a respective housing that packages the sensor controller 42 together with the sensor 40.
It may be noted that, in other embodiments, the sensing units 38 may not include a respective sensor controller 42. For such embodiments, the processing circuitry 35 of the primary controller 34, or other suitable processing circuitry of the system 10, is communicatively coupled to the respective sensors 40 of the array 36 to provide control signals directly to, and to receive data signals directly from, the sensors 40. However, it is presently recognized that processing (e.g., filtering, skeletal mapping) the volumetric scanning data collected by each of these sensors 40 can be processor-intensive. As such, in certain embodiments, it can be advantageous to divide the workload by utilizing dedicated processors (e.g., processors 46 of each of the sensor controllers 42) to process the scanning data collected by the respective sensor 40, and then to send processed data to the primary controller 34. For example, in the illustrated embodiment, each of the processors 46 of the sensor controllers 42 processes the scanning data collected by its respective sensor 40 to generate partial models (e.g., partial volumetric or 2D models, partial skeletal models, partial shadow models) of each of the players 12, and the processing circuitry 35 of the primary controller 34 receives and fuses or combines the partial models to generate complete models of each of the players 12, as discussed below.
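A structural sketch of this division of labor follows; the dictionary-based partial model and the queue plumbing are assumptions for illustration, the point being simply that each sensor controller 42 reduces its raw scan to a compact partial model before anything crosses the network 48:

```python
import queue

def sensor_unit_process(raw_scan: dict) -> dict:
    """Runs on a sensor controller 42: reduce the raw scan from its own
    sensor 40 to a compact partial skeletal model before transmission, so
    the primary controller 34 never has to process the raw point cloud."""
    return {
        "bones": raw_scan.get("visible_bones", {}),
        "timestamp": raw_scan.get("timestamp", 0.0),
    }

def primary_controller_collect(partials: "queue.Queue[dict]", n_units: int) -> list:
    """Runs on the primary controller 34: gather one processed partial
    model per sensing unit for the current frame, ready to be fused."""
    return [partials.get() for _ in range(n_units)]
```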
Additionally, in certain embodiments, the primary controller 34 may also receive information from other sensing devices in and around the play area 16. For example, the illustrated primary controller 34 is communicatively coupled to a radio-frequency (RF) sensor 45 disposed near (e.g., above, below, adjacent to) the 3D play area 16A. The illustrated RF sensor 45 receives a uniquely identifying RF signal from a wearable device 47, such as a bracelet or headband having a radio-frequency identification (RFID) tag worn by each of the players 12. In response, the RF sensor 45 provides signals to the primary controller 34 regarding the identity and the relative positions of the players 12 in the play area 16. As such, for the illustrated embodiment, processing circuitry 35 of the primary controller 34 receives and combines the data collected by the array 36, and potentially other sensors (e.g., RF sensor 45), to determine the identities, locations, and actions of the players 12 in the play area 16 during game play. Additionally, the illustrated primary controller 34 is communicatively coupled to a database system 50, or any other suitable data repository storing player information. The database system 50 includes processing circuitry 52 that executes instructions stored in memory circuitry 54 to store and retrieve information associated with the players 12, such as various models (e.g., player, shadow, and/or skeletal models) associated with the player, player statistics (e.g., wins, losses, points, total game play time), player attributes or inventory (e.g., abilities, textures, items), player purchases at a gift shop, player points in a loyalty rewards program, and so forth. The processing circuitry 35 of the primary controller 34 may query, retrieve, and update information stored by the database system 50 related to the players 12 to enable the system 10 to operate as set forth herein.
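As an illustration of how the identity data from the RF sensor 45 might be combined with the position data from the array 36, the following sketch attaches each wearable-tag read to the nearest tracked body; the data layout and the 0.5 m association radius are assumptions for illustration:

```python
import math
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Position = Tuple[float, float, float]

@dataclass
class TrackedBody:
    position: Position               # from the sensor array 36
    player_id: Optional[str] = None  # filled in from the RF sensor 45

def associate_rf_reads(
    bodies: Dict[int, TrackedBody],
    rf_reads: Dict[str, Position],   # RFID tag id -> estimated position
    max_distance: float = 0.5,       # assumed association radius (meters)
) -> None:
    """Attach each wearable-device tag to the nearest tracked body so the
    controller knows which skeleton belongs to which identified player."""
    for tag_id, tag_pos in rf_reads.items():
        best, best_dist = None, max_distance
        for body in bodies.values():
            dist = math.dist(body.position, tag_pos)
            if dist <= best_dist:
                best, best_dist = body, dist
        if best is not None:
            best.player_id = tag_id
```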
Additionally, the embodiment of the interactive video game system 10 illustrated in
Additionally, the embodiment of the interactive video game system 10 illustrated in
In certain embodiments, the output devices 78 of the interface panel 74 include physical effect devices, such as an electronically controlled release valve coupled to a compressed air line, which provides bursts of warm or cold air or mist in response to a suitable control signal from the primary controller 34 or the output controller 56. It may be appreciated that the output devices are not limited to those incorporated into the interface panel 74. In certain embodiments, the play area 16 may include output devices that provide physical effects to players indirectly, such as through the air. For example, in an embodiment, when a player strikes a particular pose to trigger an ability or an action of the virtual representation, then the player may experience a corresponding physical effect. By way of specific example, in an embodiment in which a player has the ability to throw snowballs, the player may receive a cold blast of air on their exposed palm in response to the player extending their hands in a particular manner. In an embodiment in which a player has the ability to throw fireballs, the player may receive a warm blast of air or IR irradiation (e.g., heat) in response to the player extending their hands in a particular manner. In still other embodiments, players may receive haptic feedback (e.g., ultrasonic haptic feedback) in response to the virtual representation of a player interacting with an object in the virtual world. For example, when the virtual representation of the player hits a wall with a punch in the virtual environment, the player may receive some physically perceptible effect on a portion of their body (e.g., an extended fist) that corresponds to the activity in the virtual environment.
As illustrated in
As mentioned, the array 36 illustrated in
As mentioned, the scanning data collected by the array 36 of the interactive video game system 10 can be used to generate various models (e.g., a 2D or volumetric player model, a shadow model, a skeletal model) for each player. For example,
Additionally, embodiments of the interactive video game system 10 having the 3D play area 16A, as illustrated in
It may be noted that, for embodiments of the interactive video game system 10 having the 3D play area 16A, as represented in
For comparison,
It is presently recognized that embodiments of the interactive video game system 10 that utilize a 2D play area 16B, as represented in
Accordingly, it is recognized that the smaller array 36 of sensing units 38 used by embodiments of the interactive video game system 10 having the 2D play area 16B also generates considerably less data to be processed than embodiments having the 3D play area 16A. For example, because occlusion between players 12 is significantly more limited and predictable in the 2D play area 16B of
As mentioned, the interactive video game system 10 is capable of generating various models of the players 12. More specifically, in certain embodiments, the processing circuitry 35 of the primary controller 34 is configured to receive partial model data (e.g., partial player, shadow, and/or skeletal models) from the various sensing units 38 of the array 36 and fuse the partial models into complete models (e.g., complete volumetric, shadow, and/or skeletal models) for each of the players 12. Set forth below is an example in which the processing circuitry 35 of the primary controller 34 fuses partial skeletal models received from the various sensing units 38 of the array 36. It may be appreciated that, in certain embodiments, the processing circuitry 35 of the primary controller 34 may use a similar process to fuse partial shadow model data into a shadow model and/or to fuse partial volumetric model data into a volumetric player model.
In an example, partial skeletal models are generated by each sensing unit 38 of the interactive video game system 10 and are subsequently fused by the processing circuitry 35 of the primary controller 34. In particular, the processing circuitry 35 may perform a one-to-one mapping of corresponding bones of each of the players 12 in each of the partial skeletal models generated by different sensing units 38 positioned at different angles (e.g., opposite sides, perpendicular) relative to the play area 16. In certain embodiments, relatively small differences between the partial skeletal models generated by different sensing units 38 may be averaged when fused by the processing circuitry 35 to provide smoothing and prevent jerky movements of the virtual representations 14. Additionally, when a partial skeletal model generated by a particular sensing unit differs significantly from the partial skeletal models generated by at least two other sensing units, the processing circuitry 35 of the primary controller 34 may determine the data to be erroneous and, therefore, not include the data in the skeletal models 80. For example, if a particular partial skeletal model is missing a bone that is present in the other partial skeletal models, then the processing circuitry 35 may determine that the missing bone is likely the result of occlusion, and may discard all or some of the partial skeletal model in response.
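A minimal sketch of such a fusion step appears below, averaging bone estimates that agree and discarding estimates that differ markedly from the consensus of the other sensing units; the outlier tolerance and the median-based consensus are assumptions chosen for illustration:

```python
import numpy as np
from typing import Dict, List

PartialSkeleton = Dict[str, np.ndarray]  # bone name -> (x, y, z) estimate

def fuse_partial_skeletons(
    partials: List[PartialSkeleton],
    outlier_tolerance: float = 0.3,  # assumed tolerance (meters)
) -> PartialSkeleton:
    """One-to-one mapping of corresponding bones across partial skeletal
    models: small disagreements are averaged for smoothing, while an
    estimate far from the consensus of at least two other sensing units
    is treated as erroneous (e.g., caused by occlusion) and discarded."""
    fused: PartialSkeleton = {}
    for bone in {b for p in partials for b in p}:
        estimates = [p[bone] for p in partials if bone in p]
        consensus = np.median(np.stack(estimates), axis=0)
        kept = [e for e in estimates
                if np.linalg.norm(e - consensus) <= outlier_tolerance]
        if len(estimates) >= 3 and kept:
            estimates = kept  # drop outliers only when 2+ units agree
        fused[bone] = np.mean(np.stack(estimates), axis=0)  # smoothing
    return fused
```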
It may be noted that precise coordination of the components of the interactive video game system 10 is desirable to provide smooth and responsive movements of the virtual representations 14 in the virtual environment 32. In particular, to properly fuse the partial models (e.g., partial skeletal, volumetric, and/or shadow models) generated by the sensing units 38, the processing circuitry 35 may consider the time at which each of the partial models is generated by the sensing units 38. In certain embodiments, the interactive video game system 10 may include a system clock 100, as illustrated in
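For example, a sketch of the kind of timestamp bookkeeping this implies (the 30 Hz frame window is an assumed value) might bucket incoming partial models by frame before fusion:

```python
from collections import defaultdict
from typing import Any, Dict, List, Tuple

FRAME_WINDOW_S = 1.0 / 30.0  # assumed: one frame at 30 updates per second

def group_by_frame(
    stamped_partials: List[Tuple[float, Any]],  # (system clock 100 time, model)
) -> Dict[int, List[Any]]:
    """Bucket partial models by the frame window in which they were
    generated, so only contemporaneous partial models are fused."""
    frames: Dict[int, List[Any]] = defaultdict(list)
    for timestamp, model in stamped_partials:
        frames[int(timestamp // FRAME_WINDOW_S)].append(model)
    return dict(frames)
```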
The illustrated embodiment of the process 110 begins with the interactive video game system 10 collecting (block 112) scanning data for each player. In certain embodiments, as illustrated in
Next, the interactive video game system 10 generates (block 114) corresponding models for each player based on the scanning data collected for each player. As set forth above, in certain embodiments, the processing circuitry 35 of the primary controller may receive partial models for each of the players from each of the sensing units 38 in the array 36, and may suitably fuse these partial models to generate suitable models for each of the players. For example, the processing circuitry 35 of the primary controller 34 may generate a player model (e.g., a volumetric or 2D player model) for each player that generally defines a 2D or 3D shape of each player. Additionally or alternatively, the processing circuitry 35 of the primary controller 34 may generate a shadow model for each player that generally defines a texture-less 3D shape of each player. Furthermore, the processing circuitry 35 may also generate a skeletal model that generally defines predicted skeletal positions and locations of each player within the play area.
Continuing through the example process 110, next, the interactive video game system 10 generates (block 116) a corresponding virtual representation for each player based, at least in part, on the scanning data collected for each player and/or one or more of the models generated for each player. For example, in certain embodiments, the processing circuitry 35 of the primary controller 34 may use a shadow model generated in block 114 as a basis to generate a virtual representation of a player. It may be appreciated that, in certain embodiments, the virtual representations 14 may have a shape or outline that is substantially similar to the shadow model of the corresponding player, as illustrated in
It may be noted that, in certain embodiments, the virtual representations 14 of the players 12 may not have an appearance or shape that substantially resembles the generated player or shadow models. For example, in certain embodiments, the interactive video game system 10 may include or be communicatively coupled to a pre-generated library of virtual representations that are based on fictitious characters (e.g., avatars), and the system may select particular virtual representations, or provide recommendations of particular selectable virtual representations, for a player generally based on the generated player or shadow model of the player. For example, if the game involves a larger hero and a smaller sidekick, the interactive video game system 10 may select or recommend from the pre-generated library a relatively larger hero virtual representation for an adult player and a relatively smaller sidekick virtual representation for a child player.
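For instance, a selection of this kind could be as simple as the following sketch, where the library contents and the height threshold are purely illustrative assumptions:

```python
AVATAR_LIBRARY = {"hero": "larger fictitious character",
                  "sidekick": "smaller fictitious character"}  # assumed library

def recommend_avatar(shadow_model_height_m: float) -> str:
    """Recommend a pre-generated virtual representation based on the
    player's size as estimated from the generated shadow model."""
    return "hero" if shadow_model_height_m >= 1.5 else "sidekick"

# e.g., an adult (1.8 m) maps to "hero", a child (1.2 m) to "sidekick".
```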
The process 110 continues with the interactive video game system 10 presenting (block 118) the corresponding virtual representations 14 of each of the players in the virtual environment 32 on the display device 24. In certain embodiments, the actions in block 118 may also include presenting other introductory content, such as a welcome message or orientation/instructional information, to the players 12 in the play area 16 before game play begins. Furthermore, in certain embodiments, the processing circuitry 35 of the primary controller 34 may also provide suitable signals to set or modify parameters of the environment within the play area 16. For example, these modifications may include adjusting house light brightness and/or color, playing game music or game sound effects, adjusting the temperature of the play area, activating physical effects in the play area, and so forth.
Once game play begins, the virtual representations 14 generated in block 116 and presented in block 118 are capable of interacting with one another and/or with virtual objects (e.g., virtual objects 92 and 94) in the virtual environment 32, as discussed herein with respect to
The process 130 of
For the illustrated embodiment of the process 130, after receiving the partial models from the sensing units 38, the processing circuitry 35 fuses the partial models to generate (block 134) updated models (e.g., player, shadow, and/or skeletal) for each player based on the received partial models. For example, the processing circuitry 35 may update a previously generated model, such as an initial skeletal model generated in block 114 of the process 110 of
Next, the illustrated process 130 continues with the processing circuitry 35 identifying (block 136) one or more in-game actions of the corresponding virtual representations 14 of each of the players 12 based, at least in part, on the updated models of the players generated in block 134. For example, the in-game actions may include jumping, running, sliding, or otherwise moving of the virtual representations 14 within the virtual environment 32. In-game actions may also include interacting with (e.g., moving, obtaining, losing, consuming) an item, such as a virtual object in the virtual environment 32. In-game actions may also include completing a goal, defeating another player, winning a round, or other similar in-game actions.
Next, the processing circuitry 35 may determine (block 138) one or more in-game effects triggered in response to the identified in-game actions of each of the players 12. For example, when the determined in-game action is a movement of a player, then the in-game effect may be a corresponding change in position of the corresponding virtual representation within the virtual environment. When the determined in-game action is a jump, the in-game effect may include moving the virtual representation along the y-axis 20, as illustrated in
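One plausible way to organize this mapping, sketched below with hypothetical handler names, is a dispatch table from identified in-game actions to the in-game effects they trigger:

```python
from typing import Callable, Dict

def effect_jump(state: dict) -> None:
    """Move the virtual representation upward (along the y-axis 20)."""
    state["y"] += state.get("jump_height", 1.0)

def effect_move(state: dict) -> None:
    """Change the position of the virtual representation."""
    state["x"] += state.get("speed", 1.0)

# Hypothetical action-to-effect dispatch table.
IN_GAME_EFFECTS: Dict[str, Callable[[dict], None]] = {
    "jump": effect_jump,
    "move": effect_move,
}

def apply_in_game_action(action: str, state: dict) -> None:
    handler = IN_GAME_EFFECTS.get(action)
    if handler is not None:
        handler(state)
```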
The illustrated process 130 continues with the processing circuitry 35 generally updating the presentation to the players in the play area 16 based on the in-game actions of each player and the corresponding in-game effects, as indicated by bracket 122. In particular, the processing circuitry 35 updates (block 140) the corresponding virtual representations 14 of each of the players 12 and the virtual environment 32 based on the updated models (e.g., shadow and skeletal models) of each player generated in block 134, the in-game actions identified in block 136, and/or the in-game effects determined in block 138, to advance game play. For example, for the embodiments illustrated in
Additionally, the processing circuitry 35 may provide suitable signals to generate (block 142) one or more sounds and/or one or more physical effects (block 144) in the play area 16 based, at least in part, on the determined in-game effects. For example, when the in-game effect is determined to be a particular virtual representation of a player crashing into a virtual pool, the primary controller 34 may cause the output controller 56 to signal the speakers 62 to generate suitable splashing sounds and/or physical effects devices 78 to generate a blast of mist. Additionally, sounds and/or physical effects may be produced in response to any number of in-game effects, including, for example, gaining a power-up, losing a power-up, scoring a point, or moving through particular types of environments. Mentioned with respect to
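A sketch of this routing follows; the class and method names stand in for the output controller 56, speakers 62, and physical effect devices 78 described above and are assumptions, not an API defined by this disclosure:

```python
class OutputController:
    """Stand-in for output controller 56, which drives speakers 62 and
    physical effect devices 78 in the play area."""

    def play_sound(self, clip: str) -> None:
        print(f"speakers 62: playing '{clip}'")

    def trigger_physical_effect(self, effect: str) -> None:
        print(f"physical effect devices 78: '{effect}'")

def on_in_game_effect(effect: str, out: OutputController) -> None:
    """Map determined in-game effects to sounds and physical effects."""
    if effect == "crash_into_virtual_pool":
        out.play_sound("splash")
        out.trigger_physical_effect("mist_blast")
    elif effect == "throw_fireball":
        out.trigger_physical_effect("warm_air_blast")
    elif effect == "throw_snowball":
        out.trigger_physical_effect("cold_air_blast")
```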
Furthermore, it may be noted that the interactive video game system 10 can also enable other functionality using the scanning data collected by the array 36 of sensing units 38. For example, as mentioned, in certain embodiments, the processing circuitry 35 of the primary controller 34 may generate a player model (e.g., a volumetric or 2D player model) that includes both the texture and the shape of each player. At the conclusion of game play, the processing circuitry 35 of the primary controller 34 can generate simulated images that use the models of the players to render a 2D or 3D likeness of the player within a portion of the virtual environment 32, and these can be provided (e.g., printed, electronically transferred) to the players 12 as souvenirs of their game play experience. For example, this may include a print of a simulated image illustrating the volumetric model of a player crossing a finish line within a scene from the virtual environment 32.
With the foregoing in mind, in certain embodiments, virtual representations may be modified to appear and/or move differently from the corresponding players. That is, in certain embodiments, a virtual representation associated with a particular player may be able to transform or move in ways that do not directly correspond to (e.g., are not exactly the same as) the appearance or movement of the players. In certain embodiments, the virtual representations are not restricted by real world physical limitations imposed on the appearance or movement of the players, and, therefore, may be described as being associated with super-human abilities. For example, in certain embodiments, virtual representations may include characters having greater-than-normal or super-human abilities, such as characters that can jump higher or stretch farther than a realistic human can. In other embodiments, these super-human abilities may include super speed, super strength, size-altering abilities (e.g., to shrink and grow), abilities to shoot projectiles from various body parts (e.g., laser shooting eyes or hands, throwing fire or ice), and so forth. Accordingly, when players are in control of such virtual representations, then particular actual or real-world movements by the players trigger (e.g., are translated into) these super-human abilities of the virtual representations. By way of further example, in certain embodiments, the virtual representations may be representations of non-human entities. For example, in certain embodiments, the virtual representations may be animal-based representations of the players, wherein these representations have abilities (e.g., modes or styles of movement) that are distinct from, and/or augmented relative to, those of ordinary humans.
In one example illustrated in
For the example illustrated in
In certain embodiments, a virtual representation 14 may be associated with abilities that affect both the appearance and the movement of the virtual representation 14 in response to particular movements of the player 12. For the example of
For the example illustrated in
It may be appreciated that, for the example illustrated in
For the example of
For the embodiment illustrated in
In certain embodiments, rather than exactly reproducing the appearance and movements of the player, a virtual representation may appear and move like a real or fictitious non-human entity, such as an animal virtual representation. In certain embodiments, a player may select a particular animal-based virtual representation at the beginning of gameplay, while in other embodiments, the animal-based virtual representation may be assigned automatically based on scanning data and/or models associated with the player. In certain embodiments, once selected or assigned, the virtual representation may remain the same throughout gameplay, while in other embodiments, the virtual representation may change periodically, or in response to particular movements or achievements of the player (e.g., different animal representations for different terrains in the virtual environment or different levels). When the virtual representation takes the form of a particular animal, then the virtual representation may have particular types of abilities (e.g., types of movement) that are different from those of the player 12, including some that may be difficult or impossible for the player 12 to actually perform (e.g., trotting like a horse, hopping like a kangaroo, swimming like a fish, flying like a bird, and so forth). As such, the appearance and movements detected by the primary controller 34 may be augmented (e.g., exaggerated, enhanced), such that the player 12 can use feasible, realistic human poses and movements within the play area 16 that are augmented to generate movements of the animal-based virtual representation 14.
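As a further sketch, retargeting of this kind might map detected human bones onto an animal rig; the bone names, the mapping, and the gait gain are illustrative assumptions only:

```python
def retarget_to_quadruped(human_bones: dict, gait_gain: float = 2.0) -> dict:
    """Map human skeletal-model bones onto a quadruped rig, so feasible
    human movements drive augmented animal movements (e.g., arm swings
    drive the forelegs, and stride length is exaggerated by gait_gain)."""
    rig = {
        "foreleg_left": human_bones.get("left_arm"),
        "foreleg_right": human_bones.get("right_arm"),
        "hindleg_left": human_bones.get("left_leg"),
        "hindleg_right": human_bones.get("right_leg"),
        "head": human_bones.get("head"),
    }
    stride = human_bones.get("stride_length", 0.0)
    rig["stride_length"] = stride * gait_gain  # augmented, e.g., trotting gait
    return rig
```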
For the example illustrated in
In certain embodiments, one or more real world figures (e.g., robotic elements, animatronic devices) may be part of the interactive video game system 10. For example, in certain embodiments, in addition or alternative to the virtual representation 14, the interactive video game system 10 may include a robotic representation, such as a robotic stag representation. Like the stag virtual representation 14 discussed above, the robotic stag is controlled by the primary controller 34 based on the detected movements of the player 12, and the controller 34 may augment (e.g., enhance, exaggerate) the detected movements of the player 12 when determining how to move the robotic stag representation. Additionally, in certain embodiments, the interactive video game system 10 may include other robotic elements, such as the illustrated robotic rabbit 150 and robotic squirrel 152. In certain embodiments, the movements of these additional robotic elements 150, 152 may be controlled based on the movements of other players in the play area 16. In other embodiments, these additional robotic elements 150, 152 may move in response to things occurring in the virtual environment 32, the movement of a robotic or virtual representation 14, or a combination thereof, to provide a more immersive experience that includes the movement of 3D, real world figures.
For the example illustrated in
More specifically, for the example illustrated in
The technical effects of the present approach include an interactive video game system that enables multiple players (e.g., two or more, four or more) to perform actions in a physical play area (e.g., a 2D or 3D play area) to control corresponding virtual representations in a virtual environment presented on a display device near the play area. The disclosed system includes a plurality of sensors and suitable processing circuitry configured to collect scanning data and generate various models, such as player models, shadow models, and/or skeletal models, for each player. The system generates the virtual representations of each player based, at least in part, on the generated player models. Additionally, the interactive video game system may set or modify properties, such as size, texture, and/or color, of the virtual representations based on various properties, such as points, purchases, or power-ups, associated with the players. Moreover, the interactive video game system enables augmented movements (e.g., super-human abilities, animal-based movements) that are enhanced or exaggerated relative to the actual detected movements of the players in the play area. Further, embodiments of the interactive video game system may include robotic devices and/or physical effects devices that provide feedback relative to these augmented movements and abilities, to provide an immersive gameplay experience to the players.
While only certain features of the present technique have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the present technique. Additionally, the techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
This application is a continuation-in-part of U.S. patent application Ser. No. 15/833,839, entitled, “INTERACTIVE VIDEO GAME SYSTEM,” filed Dec. 6, 2017, which is herein incorporated by reference in its entirety for all purposes.