This invention relates generally to Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR), hereinafter collectively referred to as “XR,” and, more particularly, to methods, systems, and devices supporting gaming, entertainment, exercise, and training in a VR environment.
XR devices allow a user to view and interact with virtual and augmented environments. A user may effectively immerse themselves in a created digital environment and interact with that environment. For example, a user may interact (e.g., play a game) in a virtual environment, where the user's real-world movements are translated to movements and actions in the virtual world. Thus, e.g., a user may simulate a game of tennis or fencing or the like in a virtual environment by their real-world movements.
People should get regular exercise, and many attend group or individual exercise programs at gyms, schools, or the like. During these programs, users are instructed and guided through exercise routines, usually by a person (e.g., a coach) who can monitor, instruct, and encourage participants. A good coach or instructor may customize a routine for a user and may modify the routine based on that user's performance. However, in some situations (e.g., during quarantine for a pandemic), users may not be able or willing to attend such programs outside the comfort of their homes.
It is desirable, and an object of this invention, to provide instruction and guided personalized exercise routines.
It is a further object of this invention to provide various gameplay features that present new challenges and interest to the player during gameplay.
The present invention is specified in the claims as well as in the below description. Preferred embodiments are particularly specified in the dependent claims and the description of various embodiments.
A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that, in operation, causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
One general aspect includes a method for determining if a certain level of gameplay performance of a player has been reached while using a computer system to play a game within a virtual reality environment. The player wears a headset having a display for viewing the VR environment and an object-tracking system. The object-tracking system of the headset is capable of tracking movement of a portion of the player in 3-D space to establish an actual-response path by the player during gameplay. The method comprises a) calculating, at a first time, an ideal-response path of gameplay for the tracked portion of the player to follow in 3-D space in order for the player to achieve a certain level of gameplay performance. The method further comprises b) comparing, at a second time, the actual-response path with the calculated ideal-response path, and c) indicating, in response to a match in the comparing step, that the certain level of gameplay performance has been reached by the player.
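Purely by way of illustration, and not as the claimed implementation, the comparing and indicating steps could be realized as a point-wise deviation test between the two sampled paths, as in the following Python sketch; the function name, the assumption that both paths are sampled at the same time steps, and the tolerance value are all hypothetical.

```python
import math

def paths_match(actual_path, ideal_path, tolerance_m=0.10):
    """Compare two sampled 3-D paths (lists of (x, y, z) points).

    Returns True when the mean point-wise deviation between the
    actual-response path and the ideal-response path is within the given
    tolerance, i.e., the certain level of gameplay performance is reached.
    """
    if not actual_path or len(actual_path) != len(ideal_path):
        return False  # assumes both paths are sampled over the same time steps
    total = 0.0
    for actual_point, ideal_point in zip(actual_path, ideal_path):
        total += math.dist(actual_point, ideal_point)
    return (total / len(actual_path)) <= tolerance_m

# Example: a tracked hand path that stays within a couple of centimeters
# of the ideal path counts as a match.
ideal = [(0.0, 1.0, 0.5), (0.1, 1.1, 0.5), (0.2, 1.2, 0.5)]
actual = [(0.02, 1.0, 0.5), (0.1, 1.12, 0.5), (0.2, 1.2, 0.52)]
print(paths_match(actual, ideal))  # True
```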
Another general aspect includes a computer-implemented method for conveying information to a person within a virtual environment. The computer-implemented method also includes the person using a device in a real-world environment, where the device may include a virtual reality (VR) headset being worn by the person, the VR headset being capable of providing images of scenes and objects to the person through the VR headset to generate a visual representation of a virtual world. The method may include: (a) determining a visual routine for the person to view in the virtual world using the VR headset, the visual routine may include a series of images representing projectiles, including triangles, where a triangle has a shape with a defined perimeter and an apex, the visual routine lasting a predetermined period of time. The method may also include (b) presenting, based on the determining in (a), a first triangle to the person in the virtual world. The method may further include (c) generating a virtual graphical mark at a prescribed location about the defined perimeter of the first triangle within the virtual world, the virtual graphical mark on the defined perimeter being visible to the person, a relative position of the virtual graphical mark with respect to the apex conveying information relating to an aspect of the visual routine.
Other exemplary embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Another general aspect includes a computer-implemented method where a person uses a device in a real-world environment, where the device may include a virtual reality (VR) headset and at least one handheld controller, being worn by the person, the VR headset being capable of providing images of scenes and objects to the person through the VR headset to generate a visual representation of a virtual world, the handheld controller being able to translate a real-hand location of the person's hand in the real world to a corresponding VR-hand location in the virtual world. The method may also include (a) providing a graphically generated and dynamically located handheld implement at the VR-hand location in the virtual world, where the handheld implement is viewable in the VR/virtual world based on the person's real-hand location in the real world. The method may also include (b) determining a visual routine for the person to view in the virtual world using the VR headset, the visual routine may include at least one target projected towards the person in the virtual world, the at least one target being generated at a graphically generated portal within the virtual world at a portal location in front of the person in the virtual world, the at least one target being designed to be hit by the handheld implement in the virtual world when the target is located near the person in the virtual world. The method may also include (c) projecting, based on the determining in (b), at least one target towards the person in the virtual world. The method may also include (d) changing at least one aspect of the at least one target before the target reaches the person in the virtual world. Other exemplary embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Implementations may include one or more of the following features, alone and/or in combination(s):
Another general aspect includes a computer-implemented method where a person uses a device in a real-world environment, where the device may include a virtual reality (VR) headset and at least one handheld controller, being worn by the person, the VR headset being capable of providing images of scenes and objects to the person through the VR headset to generate a visual representation of a virtual world, the handheld controller being able to translate a real-hand location of the person's hand in the real world to a corresponding VR-hand location in the virtual world. The method may include (a) providing a graphically generated and dynamically located handheld implement at the VR-hand location in the virtual world, so that the handheld implement is viewable in the virtual world based on the person's real-hand location in the real world. The method may also include (b) determining a visual routine for the person to view in the virtual world using the VR headset, the visual routine may include at least one target projected towards the person in the virtual world, the at least one target being generated at a graphically generated portal within the virtual world at a portal location in front of the person, the at least one target being designed to be hit by the handheld implement in the virtual world when the target is located near the person in the virtual world. The method may also include (c) projecting, based on the determining in (b), at least one target towards the person in the virtual world. The method may also include (d) changing at least one aspect of the handheld implement before the at least one target reaches the person in the virtual world. Other exemplary embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Implementations may include one or more of the following features, alone and/or in combination(s):
Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
Below is a list of process (method) embodiments. Those will be indicated with the letter “P.” Whenever such embodiments are referred to, this will be done by referring to “P” embodiments.
Below are device embodiments, indicated with the letter “D.”
Below is an article of manufacture embodiment, indicated with the letter “M.”
Below are computer-readable recording medium embodiments, indicated with the letter “R.”
The above features, along with additional details of the invention, are described further in the examples herein, which are intended to further illustrate the invention but are not intended to limit its scope in any way.
Objects, features, and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structure, and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification.
As used herein, unless used otherwise, the following terms or abbreviations have the following meanings:
In the following, exemplary embodiments of the invention will be described, referring to the figures. These examples are given to provide further understanding of the invention, without limiting its scope. The exemplary embodiments described herein are described as being applied to virtual reality, including the use of a VR headset being worn by a VR player, which displays a VR game having a VR environment within a VR display. It is to be understood that the present technology, as described, may equally be applied to all XR devices without departing from the gist of the invention, and that the application to VR devices is just exemplary.
In the following description, a series of features and/or steps are described. The skilled person will appreciate that, unless required by the context, the order of features and steps is not critical for the resulting configuration and its effect. Further, it will be apparent to the skilled person that, irrespective of the order of features and steps, a time delay may or may not be present between some or all of the described steps.
It will be appreciated that variations to the foregoing embodiments of the invention can be made while still falling within the scope of the invention. Alternative features serving the same, equivalent, or similar purpose can replace features disclosed in the specification, unless stated otherwise. Thus, unless stated otherwise, each feature disclosed represents one example of a generic series of equivalent or similar features.
Although the term “game” is used throughout this description, the present technology can be applied to a variety of electronic devices and immersive experiences, including, but not limited to, games, educational interactions, such as online teaching of math or languages, fitness-related activities using electronic devices, such as the below-described fitness program, called “Supernatural,” for use with a virtual reality headset, and even business activities. For reasons of simplicity and clarity, all the applications of the present technology are referred to as “games” and “gaming” in this application.
With the development of virtual reality (VR) and with recent improvements in accurate inertial sensors, high-resolution displays, and specifically developed 3-D software, playing a VR game can become a truly immersive and often emotional experience. The hardware required to achieve a truly immersive, interactive, and realistic gaming experience lends itself perfectly to providing valuable biometric and accurate body-movement information, in real time, without adding additional sensors.
For example, a popular VR system called the Quest 2, designed and manufactured by Oculus, a brand of Facebook Technologies, LLC, located in Menlo Park, California, includes a VR headset and two handheld wireless controllers. The VR headset includes forward-facing cameras and a gyroscope and accelerometer (together called an IMU, or Inertial Measurement Unit), and an array of infrared LEDs and an IMU are located within each hand controller. These sensors are very accurate and provide precise orientation and position information for both the headset and each controller, in 3-D space, essentially in real time (updated 1,000 times per second) during gameplay. Developers of VR games use this information to effectively establish the location and orientation of the player's head, hands, and fingers in real time during a game. Software developers create virtual handheld objects, such as tennis rackets or batons, which appear in the VR/virtual world as seen by the player through the VR headset (and, with remote displays connected, may also be viewed by others not wearing the VR headset). The virtual handheld objects are directly controlled by the player's hand movements in the real world. Therefore, the location, orientation, speed, and direction of movement of the virtual handheld objects are known in real time as well.
The use of a VR system allows a player's senses to become truly isolated from the surrounding real-world environment. By wearing a typical VR headset, the player can only view the images that are presented by the VR system, similar to the focus an audience gains when watching a movie in a dark movie theater. The VR headset effectively provides a 3-Dimensional movie theater experience. The VR system also provides sound input for the player's ears, thereby further enhancing the sense that the experience is real. By controlling both visual and auditory inputs to a player, and effectively separating the player from real-world sensory inputs (i.e., real-world distractions), VR games offer a player a greater chance to focus, improve gaming performance, and, depending on the type of VR software being played, even provide an effective workout.
To help explain this inventive technology, a representative fitness game is illustrated in the accompanying figures. It should be noted that the present technology is meant to be applied to a specific type of 3-D virtual reality game that uses projectiles and geometric shapes, projected at a player, to encourage the player to move various muscle groups for the purposes of both entertainment and exercise. One exemplary such game to which the presently described embodiments could be applied is a virtual reality fitness game called “Supernatural.” It was developed by, and is currently available from, a company called Within, Inc., located in Los Angeles, California.
In this particular exemplary game, a player dons a suitable virtual reality headset and hand controllers, such as the above-identified Quest 2 by Facebook's Oculus brand. Once the Supernatural game begins, a three-dimensional environment image, such as a mountain setting, is automatically generated and displayed within the player's headset, placing the player at a center point within this computer-generated virtual environment, as is well known by those of ordinary skill in the art of VR technology. The player will experience this virtual environment as a realistic three-dimensional image, one that can be viewed in all directions, as if the player were standing in the same environment in the real world. As the player moves their head left and right, up and down, the above-described sensors located within the VR headset will detect this head movement in extremely fine resolution. The running software program (e.g., Supernatural) will collect and analyze this sensor data and adjust the displayed environment image in real time (effectively immediately) to match the minute increments, direction, and speed of the player's head movement, thereby accurately creating an illusion of presence within the environment. The illusion is sufficient to convince the player that they are truly part of the virtual world being displayed, literally right in front of the player's eyes.
Continuing with this example, in the fitness game called Supernatural, the player is meant to remain at a substantially fixed location in the real world during gameplay so that their VR presence remains at a central point within the VR environment.
A system supporting a real-time virtual reality (VR) environment 100 for a virtual and augmented reality fitness training system is described now with reference to
Sensors (not shown in the drawings) in the VR headset 104 and/or other sensors 110 in the user's environment may track the VR user's actual movements (e.g., head movements, etc.) and other information. The VR headset 104 preferably provides user tracking without external sensors. In a presently preferred implementation, the VR headset 104 is an Oculus Quest headset made by Facebook Technologies, LLC.
Tracking or telemetry data from the VR headset 104 may be provided in real-time (as all or part of data 118) to the training system 106.
Similarly, data from the sensor(s) 110 may also be provided to the training system 106 (e.g., via the access point 108).
The user 102 preferably has one or two handheld devices 114-1, 114-2 (collectively handheld device(s) and/or controller(s) 114) (e.g., Oculus Touch Controllers). Hand movement information and/or control information from the handheld controller(s) 114 may be provided with the data 118 to the training system 106 (e.g., via the access point 108).
In some embodiments, hand movement information and/or control information from the handheld controller(s) 114 may be provided to the VR headset 104 or to another computing device which may then provide that information to the training system 106. In such cases, the handheld controller(s) 114 may communicate wirelessly with the VR headset 104.
In some embodiments, at least some of a user's hand movement information may be determined by tracking one or both of the user's hands (e.g., if the user does not have a handheld controller 114 on/in one or both of their hands, then the controller-free hand(s) may be tracked directly, e.g., using 3D tracking).
Although described here as using one or two handheld controllers 114, those of skill in the art will understand, upon reading this description, that a user may have no handheld controllers or may have only one. Furthermore, even when a user has a handheld controller in/on their hand, that hand may also (or instead) be tracked directly.
The VR headset 104 presents the VR user 102 with a view 124 corresponding to that VR user's virtual or augmented environment.
Preferably, the view 124 of the VR user's virtual environment is shown as if seen from the location, perspective, and orientation of the VR user 102. The VR user's view 124 may be provided as a VR view or as an augmented view (e.g., an AR view).
In some embodiments, the user 102 may perform an activity such as an exercise routine or a game or the like in the VR user's virtual environment. The training system 106 may provide exercise routine information to the VR headset 104. In presently preferred embodiments, the activity system 126 may provide a so-called beat-map and/or other information 128 to the headset (e.g., via the network 119 and the access point 108).
As the user progresses through an activity such as an exercise routine, the VR headset 104 may store information about the position and orientation of the VR headset 104 and of the controllers 114 for the user's left and right hands.
In a present implementation, the user's activity (and a beat-map) is divided into sections (e.g., 20-second sections), and the information is collected and stored at a high frequency (e.g., 72 Hz) within a section. The VR headset 104 may also store information about the location of targets, portals, and all other temporally variant objects (where they are in 3-D space, whether any have been hit, etc.) at the same or a similar frequency. This collected information allows the fitness system to evaluate and/or recreate a scene at any moment in time within that section.
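For illustration only, the per-section collection described above might be organized along the lines of the following sketch, which buffers headset, controller, and target state and cuts a new section every 20 seconds; the class and field names are hypothetical and are not the actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Pose = Tuple[float, float, float, float, float, float, float]  # x, y, z plus orientation quaternion

@dataclass
class Frame:
    """One telemetry sample, nominally captured at about 72 Hz."""
    timestamp: float
    headset_pose: Pose
    left_hand_pose: Pose
    right_hand_pose: Pose
    target_states: List[dict] = field(default_factory=list)  # per-target position, hit flag, etc.

@dataclass
class Section:
    """A fixed-length slice of a workout (e.g., 20 seconds of frames)."""
    start_time: float
    frames: List[Frame] = field(default_factory=list)

class SectionRecorder:
    """Buffers frames and starts a new Section every section_seconds."""

    def __init__(self, section_seconds: float = 20.0):
        self.section_seconds = section_seconds
        self.sections: List[Section] = []
        self._current: Optional[Section] = None

    def record(self, frame: Frame) -> None:
        # Open a new section when none exists or the current one is full.
        if (self._current is None
                or frame.timestamp - self._current.start_time >= self.section_seconds):
            self._current = Section(start_time=frame.timestamp)
            self.sections.append(self._current)
        self._current.frames.append(frame)
```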
Collected information may then be sent to the training system 106, preferably in real-time, as all or part of data 118, as the user's activity/workout continues, and several of these sections may be sent to the training system 106 over the course of an activity/workout. The data 118 that are provided to the training system 106 preferably include beat-map information.
The training system 106 may be part of backend/cloud framework 120.
As explained in greater detail below, in some implementations/embodiments, the fitness training system provides a user with an individualized customized VR training routine, tracks the user as they carry out the routine (in VR), modifies the routine if needed, and provides guidance to the user. The routine may involve the user interacting (virtually) with various objects, and the system may monitor and evaluate the user's interactions and movements in order to determine possible modifications to the routine. The system may also use physiological data (e.g., heart rate data) to evaluate a user during a routine.
With reference to
Although only one user 102 is shown in
The training programs 210 of the training system 106 may include data collection mechanism(s) 212, movement/tracking mechanism(s) 214, mapping and transformation mechanism(s) 216, calibration mechanism(s) 218, routine generation mechanism(s) 220, and routine evaluation mechanism(s) 222.
The data structures 224 may include a routine data structure 226 and a user data structure 228.
In operation, the data collection mechanism(s) 212 obtains data 118 (
The movement/tracking mechanism(s) 214 determines or approximates, from that data, the user's actual movements in the user's real-world space 112. The user's movements may be given relative to a 3-D coordinate system 116 in the user's real-world space 112. If the data 118 includes data from the user's handheld controller(s) 114, the movement/tracking mechanism(s) 214 may also determine movement of one or both of the user's hands in the user's real-world space 112. In some cases, the user's headset 104 may provide the user's actual 3-D coordinates in the real-world space 112.
The movement/tracking mechanism(s) 214 may determine or extrapolate aspects of the user's movement based on machine learning (ML) or other models of user movement. For example, a machine learning mechanism may be trained to recognize certain movements and/or types of movements and may then be used to recognize those movements based on the data 118 provided by the user 102.
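As a loose, non-limiting sketch of the idea (and not the actual model used), even a trivially simple nearest-centroid classifier over feature vectors derived from windows of tracking data could recognize movement types; the class, labels, and feature choices below are hypothetical.

```python
import numpy as np

class MovementRecognizer:
    """Toy nearest-centroid recognizer for windows of movement features.

    Each example is a fixed-length feature vector derived from a window of
    headset/controller samples (e.g., head drop, lateral shift). A real
    system would use a properly trained model; this only illustrates the
    recognize-movements-from-telemetry idea.
    """

    def __init__(self):
        self.centroids = {}  # label -> mean feature vector

    def fit(self, examples):
        # examples: dict mapping label -> list of feature vectors
        for label, vectors in examples.items():
            self.centroids[label] = np.mean(np.asarray(vectors, dtype=float), axis=0)

    def predict(self, features):
        features = np.asarray(features, dtype=float)
        # Return the label whose centroid is closest to the observed features.
        return min(self.centroids, key=lambda lbl: np.linalg.norm(features - self.centroids[lbl]))

# Example: distinguish a "squat" from a "lunge" by (head drop, lateral shift).
recognizer = MovementRecognizer()
recognizer.fit({"squat": [[0.4, 0.0], [0.5, 0.05]], "lunge": [[0.2, 0.5], [0.25, 0.45]]})
print(recognizer.predict([0.45, 0.02]))  # "squat"
```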
With reference to
Those of skill in the art will understand, upon reading this description, that the mapping and transformation mechanism(s) 216 may operate prior to or in conjunction with the movement/tracking mechanism(s) 214. As with all mechanisms described herein, the logical boundaries are used to aid the description and are not intended to limit the scope hereof.
For the sake of this description, the user's movement data in the real-world space 112 are referred to as the user's real-world movement data, and the user's movement data in the virtual-world space 312 are referred to as the user's virtual movement data.
In some exemplary embodiments, the training system 106 may also receive or have other user data (e.g., physiological data or the like) and may use some of the physiological data (e.g., heart rate, temperature, sweat level, breathing rate, etc.) to determine or evaluate the user's movements and actions in the virtual space. Such physiological data may be obtained by one or more sensors 121 (
The training system 106 may be co-located with the user (e.g., in the same room), or it may be partially or wholly located elsewhere. For example, the training system 106 may be located at a location distinct from the user, in which case the user's data 118 may be sent to the training system 106 via a network 119 (e.g., the Internet). Although in preferred cases, the user's data 118 are provided to the training system 106 as the data are generated (i.e., in real time), in some cases, the user's data 118 may be collected and stored at the user's location, and then sent to the training system 106. When located apart from the user, and accessed via a network, the training system 106 may be considered to be a cloud-based system.
As noted above, the fitness training system may provide a user with an individualized customized VR training routine. A user's routine may be stored in a routine data structure 226 in the memory 204 of the training system 106.
With reference to
An object 406 may comprise a shape 408 and properties 410. Some properties may be shape-specific, as described below.
A shape 408 may be a hit shape 412 (e.g., an orb or circle or the like) or a squat shape 414 (e.g., a symmetric triangle) or a lunge shape 416 (e.g., an oblique or asymmetric triangle).
A lunge shape 416 may have a lunge direction 418 (left or right), and may thus be a left lunge shape or a right lunge shape.
A squat shape 414 or lunge shape 416 may also include a “hold” shape 420, 422, which may include a hold duration (not shown).
The properties 410 of a shape may include its speed 411 (i.e., the speed at which the object or shape approaches the user in VR).
A hit shape (i.e., a target) 412 may include a direction indicator 424, showing the direction in which the shape should be hit. A hit shape 412 may include a color 426 or other indicator showing which hand should be used to hit the shape.
Recall that the user preferably has two controllers 114-1 and 114-2 (see
A hit shape 412 may include an arc or tail 428, indicating the type of hit to be used to hit the shape (e.g., a flowing or follow-through hit).
Those of skill in the art will understand, upon reading this description, that different and/or other shapes and/or shape properties may be used.
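By way of illustration only, the routine data structure described above might be represented along the following lines; the class and field names are hypothetical and merely mirror the shapes and properties discussed here.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ShapeKind(Enum):
    HIT = "hit"      # orb/circle target
    SQUAT = "squat"  # symmetric triangle
    LUNGE = "lunge"  # oblique/asymmetric triangle

@dataclass
class RoutineObject:
    kind: ShapeKind
    speed: float                            # speed at which the object approaches the user in VR
    hit_direction: Optional[str] = None     # e.g., "up", "left" (hit shapes only)
    hand_color: Optional[str] = None        # e.g., "black" or "white" (hit shapes only)
    has_tail: bool = False                  # follow-through (flowing) hit indicator
    lunge_direction: Optional[str] = None   # "left" or "right" (lunge shapes only)
    hold_seconds: float = 0.0               # non-zero for "hold" squat/lunge shapes

@dataclass
class RoutineEvent:
    time_offset: float                                           # seconds from the start of the routine
    objects: List[RoutineObject] = field(default_factory=list)   # multiple objects means simultaneous events

@dataclass
class Routine:
    events: List[RoutineEvent] = field(default_factory=list)
```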
Example hit shapes 412-A-412-H are shown in
Various example interactions are shown with reference to
In the example in
When a user successfully hits a hit object in the correct direction with the correct controller (baton), the user's hit score may be increased. The user may be given a higher score based on how hard they hit the object.
In the example in
Note that in the example in
In the example in
The tail 625-D indicates that the user should follow through with the hit, preferably with a flowing motion generally following the shape of the particular tail.
In the example in
In the example in
In the example in
As should be appreciated, each of the shapes and/or objects discussed in these examples corresponds to an event in a routine. A routine may include multiple events, and a routine may include multiple simultaneous events. For example, a routine may send multiple hit objects to a user at the same time from the same or different sources.
As described above, during gameplay of the present gaming/workout system, hit-objects 412 may project from portal 602 and advance towards player 102, similar to a ball being thrown. As mentioned above, the player holds the baton 114 and uses it to hit objects when the objects enter a hitting zone adjacent to the player. Each hit-object 412 may include a hit-direction indicator 424, which adds complexity and interest to the game because, to receive full credit for a particular hit, the player not only has to hit a passing object but also hit it in the indicated direction.
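As a purely illustrative sketch of this rule (not the actual scoring code), the direction of the baton swing at the moment of impact could be compared against the hit-direction indicator, awarding full credit only when the two are sufficiently aligned; the angle threshold and partial-credit value below are arbitrary.

```python
import math

def hit_credit(baton_velocity, required_direction, max_angle_deg=45.0):
    """Return full credit only when the baton's swing direction is within
    max_angle_deg of the target's hit-direction indicator.

    baton_velocity and required_direction are 3-D vectors (x, y, z).
    """
    speed = math.sqrt(sum(c * c for c in baton_velocity))
    norm = math.sqrt(sum(c * c for c in required_direction))
    if speed == 0.0 or norm == 0.0:
        return 0.0
    cos_angle = sum(v * d for v, d in zip(baton_velocity, required_direction)) / (speed * norm)
    cos_angle = max(-1.0, min(1.0, cos_angle))
    angle = math.degrees(math.acos(cos_angle))
    return 1.0 if angle <= max_angle_deg else 0.5  # partial credit for a wrong-direction hit

print(hit_credit((0.0, 2.5, 0.0), (0.0, 1.0, 0.0)))  # 1.0 (swinging upward, as indicated)
```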
Every so often, as a player plays a game session, a lunge triangle 416 will project from portal 602 and advance towards player 102. According to the above-mentioned gameplay rules, in this instance, the player must lunge either left or right, or squat their body down towards the floor in the real world, so that their body “fits” within the triangle as the triangle passes the player in the virtual world. In this manner, the shape of the triangle in the virtual world is effectively able to control the shape of the player's body in the real world, at least as long as the player continues to play the game correctly.
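For illustration only, whether the player's body “fits” the passing triangle could be approximated by testing whether the tracked head position, projected onto the triangle's plane, lies inside the triangle; the following sketch uses a standard sign-of-cross-product point-in-triangle test with hypothetical coordinates.

```python
def head_fits_triangle(head_xy, apex, base_left, base_right):
    """Return True when the player's head position (projected onto the plane
    of the approaching triangle) lies inside the triangle, i.e., the player
    has squatted or lunged enough to "fit" as the triangle passes.

    All points are (x, y) coordinates in the triangle's plane.
    """
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    d1 = cross(apex, base_left, head_xy)
    d2 = cross(base_left, base_right, head_xy)
    d3 = cross(base_right, apex, head_xy)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)  # inside (or on an edge) when all signs agree

# A squat triangle with a low apex forces the player to duck beneath it:
print(head_fits_triangle((0.0, 0.9), apex=(0.0, 1.1),
                         base_left=(-0.8, 0.0), base_right=(0.8, 0.0)))  # True
```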
As the game session or workout is being played, the player can easily become fatigued, both physically and mentally. Applicants contend that it would be desirable for a player to know how long they have been playing a particular game or workout session and, perhaps more importantly, how much longer gameplay or the workout will continue before the session ends. Of course, a timer or clock display may be graphically generated within the virtual environment to directly provide this duration information to the player. Unfortunately, providing a simple clock display, even within the field of view of the player, would cause an unwanted diversion from the player's gameplay concentration whenever they just wanted to learn the duration information conveyed by the display. A simple glance at the clock display would require the player to divert their attention from hitting the continuous flow of objects before them, read the time shown on the display, understand its meaning (a diversion within the player's brain), and then quickly return their focus to the objects advancing before them. This entire process may take only a second or two, but that would be long enough to disrupt the player's gameplay focus. The player would likely miss at least one or two objects as they passed, would need a few seconds to reorient themselves to the task of hitting objects, and would likely miss a few more passing objects in the meantime. In this example, learning the time duration of the game would not be worth disrupting the player's flow.
To overcome this deficiency, and according to exemplary embodiments, specific information, such as duration information, is displayed not just in the field of view of the player, but at a known point of focus of the player, as the player plays the game. Some of the known points of focus of a player during gameplay include objects, as they advance, and the lunge and squat triangles. For example, when a triangle advances towards a player, the player will eventually, even if just for a moment, focus on the triangle to understand its shape and the location of its apex, so they can move their body to fit within the triangle as it passes, as the gameplay rules of this particular exemplary workout game require. As shown in
According to this embodiment, every time a triangle 902 appears in the sequence of projected objects and triangles, and in the user's view, the position of the notch 908 about the perimeter 904 would be updated to a new location.
For example, as mentioned above, the location of the notch 908 could indicate how much game-time has passed, and also how much time is remaining. Applicants prefer that this time-related information is conveyed only graphically and not numerically, so that the player does not have to read numbers. This would be similar to how an analog clock does not require numbers, whereas a digital clock does. For an analog clock, the reader of the time only has to see the relative locations of the two hands with respect to each other and with respect to the up (or twelve o'clock) position. If the player sees the notch at the bottom of the triangle, then they would understand that the game or workout is halfway complete. Similarly, if the notch is at the nine o'clock position, then the game is three-quarters complete, as illustrated in
As mentioned above, the location of notch 908 about triangle 902 may be used to convey other information to a player besides game duration, such as how many of the total number of objects have been hit, or how many objects have so far been projected towards the player, and also how many remain. Conveying this particular information can positively affect a player's motivation and hit efficiency, since if a player understands how much time is remaining, or how many objects have already been hit, they can better plan how to use their remaining strength and mental acuity—similar to how a runner in a race often finds a “second wind” of energy when they learn that they are close to the finish line.
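As a non-limiting sketch of this analog-clock-like mapping, a completion fraction (elapsed time, objects hit, or the like) could be converted into a point on the triangle's perimeter, starting at the apex and traveling clockwise; the geometry and names below are illustrative only.

```python
import math

def notch_position(vertices, fraction):
    """Return the (x, y) point on a triangle's perimeter that conveys a
    completion fraction: 0.0 places the notch at the apex, and the notch
    travels clockwise around the perimeter, so 0.5 sits roughly opposite
    the apex (the "bottom" of the triangle, like six o'clock).

    vertices: the triangle's corners as (x, y) tuples, starting at the apex
    and listed in clockwise order.
    """
    fraction = min(max(fraction, 0.0), 1.0)
    edges = [(vertices[i], vertices[(i + 1) % 3]) for i in range(3)]
    lengths = [math.dist(a, b) for a, b in edges]
    target = fraction * sum(lengths)
    for (a, b), length in zip(edges, lengths):
        if target <= length or (a, b) == edges[-1]:
            t = min(target / length, 1.0) if length else 0.0
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        target -= length

# A symmetric squat triangle with its apex at the top:
apex, right, left = (0.0, 1.0), (1.0, -1.0), (-1.0, -1.0)
print(notch_position((apex, right, left), 0.5))  # (0.0, -1.0): middle of the bottom edge
```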
Notch 908 can be graphically represented by the notch cutout, as shown in
Although other types of information conveyed in this manner could be considered distracting, Applicants further contemplate that numbers and even words can be graphically displayed on the perimeter of the triangles. This may include lyrics to a particular song being played during the game or workout, or immediately relevant coaching advice, such as “Squat lower” or “Swing the batons harder.” Of course, all this information may simply be announced by a gaming voice over the music being played, but Applicants have determined that providing additional information audibly is also distracting to the player. It must be appreciated that the player is immersed in a graphically intense experience, and conveying information graphically is less distracting, since it appears exactly where a player will be focused at certain times during gameplay.
According to another exemplary embodiment, as shown in
As mentioned above, in this particular VR game, one or more batons may be used by a player to selectively hit fast-approaching objects in the directions indicated by their respective hit-direction indicators, and according to the color of a particular object being hit. For example, a white-colored object must be hit with the white baton, and a black object with the black baton. According to another exemplary embodiment, as shown in
Also related to the batons, and according to another exemplary embodiment herein, one or both of the player's batons may exchange their colors at any time during gameplay to, again, provide additional challenges to the player. During a color change, the player's left black baton becomes white, and the player's right white baton becomes black. This provides an immediate challenge to the player, who must mentally reverse “muscle memory” that was established and set during earlier gameplay.
Continuing with another exemplary embodiment, the batons (i.e., the virtual representations of the batons) may become bent, broken, or otherwise damaged during gameplay, or even lost entirely, should a player exceed prescribed limits of baton usage, such as excessive power, insufficient power, hitting the batons together, or hitting the perimeter of passing triangles. For example, if a player hits too wildly and with too much speed and power, the baton may be programmed to bend or even break for a prescribed period of time. The same may occur in response to a player swinging a baton too weakly.
Alternatively, to help encourage a player to provide sufficient power when hitting the objects, according to additional embodiments, the system may be programmed so that the objects only explode when they are hit with the baton with sufficient power and speed.
If not, the objects will simply become dented or bounce away undamaged. To ensure the satisfaction of an object being hit and exploding, the player must hit the object with sufficient speed and power.
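Purely as an illustrative sketch of this behavior (with arbitrary thresholds, not actual tuning values), the outcome of a strike could be decided from the baton's speed at the moment of impact:

```python
def impact_result(impact_speed_mps, explode_threshold_mps=3.0):
    """Decide what happens to a target when it is struck.

    A sufficiently fast, powerful swing bursts the target apart; a weaker
    swing merely dents it or lets it bounce away undamaged. The threshold
    values here are hypothetical and purely illustrative.
    """
    if impact_speed_mps >= explode_threshold_mps:
        return "explode"
    if impact_speed_mps > 0.5:
        return "dent"
    return "bounce"

print(impact_result(4.2))  # "explode"
print(impact_result(1.1))  # "dent"
```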
According to yet other exemplary embodiments, referring now to
Furthermore, select objects may fade a prescribed amount, from slightly transparent to completely invisible (0% opaque), prior to reaching the player. This is illustrated in
According to yet another exemplary embodiment, objects that approach a player during gameplay may change speed and trajectory between the portal and the player. For example, some objects may follow a straight path between the portal and the player, while other objects may follow a parabolic (or some other curved) path, similar to the path that any thrown object follows due to gravity. This change in flight paths of select objects provides additional challenges to any player during gameplay.
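As an illustrative sketch only, a straight flight path and a parabolic one can both be produced by the same interpolation between the portal and the player, with an optional arc term; the names and values below are hypothetical.

```python
def object_position(portal, player, t, arc_height=0.0):
    """Position of a projected object at normalized flight time t (0..1).

    portal and player are (x, y, z) points in the virtual world. With
    arc_height == 0 the object travels in a straight line; a positive
    arc_height adds a parabolic rise-and-fall, similar to a thrown ball.
    """
    t = min(max(t, 0.0), 1.0)
    x = portal[0] + t * (player[0] - portal[0])
    # The 4*t*(1-t) term peaks at 1.0 halfway along the flight.
    y = portal[1] + t * (player[1] - portal[1]) + arc_height * 4.0 * t * (1.0 - t)
    z = portal[2] + t * (player[2] - portal[2])
    return (x, y, z)

# Halfway along a straight path versus an arced path from the portal to the player:
print(object_position((0, 1.5, -10), (0, 1.5, 0), 0.5))                  # (0.0, 1.5, -5.0)
print(object_position((0, 1.5, -10), (0, 1.5, 0), 0.5, arc_height=1.0))  # (0.0, 2.5, -5.0)
```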
According to yet other exemplary embodiments, referring to
According to yet other exemplary embodiments, objects, batons, environments, and other items used and seen during gameplay may be shaped, colored, or decorated following a common theme that aligns with events commonly occurring on calendar dates, such as Easter or the Fourth of July. For example, objects would resemble eggs at Easter, while at Christmas the batons would become candy canes and the objects would resemble ornaments. The themes may be generated automatically based on the current date. This may include the player's birthday and other more personal events captured from the player's profile. Additionally, location data (e.g., GPS data) read by the system may generate themes within the VR gaming world that are based on the location of the player. For example, a theme within the VR game could be directed to a local sports team.
According to yet other exemplary embodiments, the present VR system projects specifically identifiable objects towards the player in a manner that encourages the player to hit the object only when it actually passes the player, either on the player's left side, right side, or above them. According to these embodiments, these specifically identifiable objects would be differentiated from other types of objects through unique markings, unique illumination, or any other means that would allow a player to understand which objects are meant to be hit after passing the player.
For example, referring to
A player (not shown in the figure) is holding a left controller and a right controller in the real world, which are translated in the virtual world as a black baton 1410 for the left controller and a white baton 1412 for the right controller. The black baton 1410 in
According to this embodiment, and the rules of the game for this particular embodiment, the player would encounter a mix of objects, some colored completely black, some completely white, and other objects with mixed colors, black and white (or, as described above, objects with distinctive and different halves). As before, for solid colors, the player would hit the object with the correspondingly colored baton when the object reached a point in front of the player (not shown). For mixed-colored objects, such as object 1402, shown in
According to this one embodiment, the player may be given a choice to hit either half 1406 or 1408 as the object approaches, or may be instructed to hit one or the other half by the computer at a predetermined time before the object reaches the player in the virtual world. Such instruction may include illuminating one of the two batons to indicate to the player which half of the object 1402 should be hit.
Also, according to another aspect of this one embodiment of the invention, the object 1402 may be instructed to rotate with respect to the player (in any manner, about any axis), thereby making it more challenging for the player to hit either half of the object 1402 when the object reaches the player.
Although the terms “half” and “halves” are used in the previous description, this is done by way of example; those of skill in the art will understand, upon reading this description, that the object 1402 may be split into parts other than equal halves. Similarly, those of skill in the art will understand, upon reading this description, that colors other than black and white may be used to distinguish the parts of the object and the batons with which those parts should be hit.
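For illustration only, the check of whether a mixed-colored object was struck on the correct half with the correspondingly colored baton, including the effect of the object rotating as it approaches, could look like the following sketch; the names, colors, and rotation convention are hypothetical.

```python
def correct_half_hit(baton_color, hit_point_x, object_center_x, rotation_deg=0.0,
                     left_color="black", right_color="white"):
    """Check a hit on a split-colored object.

    The object's left half carries left_color and its right half right_color;
    rotation_deg rotates the dividing plane about the object's vertical axis,
    so the halves swap sides every 180 degrees as the object spins. Returns
    True when the striking baton's color matches the half that was struck.
    """
    flipped = (int(rotation_deg // 180) % 2) == 1   # halves mirrored after a half-turn
    struck_left = hit_point_x < object_center_x
    if flipped:
        struck_left = not struck_left
    struck_color = left_color if struck_left else right_color
    return baton_color == struck_color

print(correct_half_hit("black", hit_point_x=-0.1, object_center_x=0.0))  # True
print(correct_half_hit("white", hit_point_x=-0.1, object_center_x=0.0))  # False
```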
According to yet other exemplary embodiments, the objects may be projected toward the player, and the player may try to hit them based on their color and the direction indicator. If the player is successful, the object may be shown bursting apart with a sound and a flash of light. If the player misses, the object may simply continue along its trajectory, passing the player and then disappearing, never to reappear. According to these embodiments, however, missed objects may instead be recycled back into play so that the session will end only when all the starting objects have been successfully hit, however long that takes. The music track being played may simply continue as a remix until all the objects are eventually hit. Alternatively, new music tracks can be played. Any recycled object can either be identical to the other objects or be identified by a different color, blinking, wobbling, or the like. The recycled objects may reenter the game smaller in size than the original, to offer more of a challenge to the player.
According to yet other exemplary embodiments, the objects may be projected towards the player from the portal and either successfully hit by the player or missed. If an object is successfully hit by a baton, instead of bursting apart, as before, the object may project away following a specific new trajectory that depends on the angle and magnitude of impact by the baton. The object may then either impact a distant spot within the virtual environment, with a realistic or otherwise dramatic explosion, or it may ricochet along a new trajectory to a new impact spot in the environment, again and again. This feature may result in hundreds of objects flying around the player within the environment, providing challenges to the player as they struggle to concentrate on hitting newly projected objects.
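As a non-limiting sketch of the ricochet behavior, the object's velocity after impact could be computed by reflecting its incoming velocity about the struck surface's normal and applying a restitution factor; the function name and values below are illustrative only.

```python
def ricochet(velocity, surface_normal, restitution=0.8):
    """New velocity of an object after it strikes a surface in the environment.

    The incoming velocity is reflected about the surface normal and scaled by
    a restitution factor, so each successive impact sends the object along a
    new, slightly slower trajectory. Vectors are (x, y, z) tuples.
    """
    n_len = sum(c * c for c in surface_normal) ** 0.5
    n = tuple(c / n_len for c in surface_normal)                  # normalize the normal
    dot = sum(v * c for v, c in zip(velocity, n))
    return tuple(restitution * (v - 2.0 * dot * c) for v, c in zip(velocity, n))

# An object flying forward and downward bounces off a flat floor (normal pointing up):
print(ricochet((0.0, -3.0, 5.0), (0.0, 1.0, 0.0)))  # (0.0, 2.4, 4.0)
```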
According to yet other exemplary embodiments, at some point during the game session, a plurality of objects fly towards the player at once or sequentially in very fast succession, surprising the player with a kind of bonus opportunity to hit many objects as quickly as possible.
According to yet other exemplary embodiments, a uniquely identified object, when successfully hit, may offer the player a reward of a respite from projected objects for a predetermined period of time. This allows the player to relax and recharge for more intense gameplay, or a chance to just briefly dance along with the music.
According to yet other exemplary embodiments, the player may be able to change an aspect of gameplay simply by tapping the virtual batons against each other in the virtual world. For example, the action may change the song being played, or a particular mode of the gameplay, such as the speed of objects being projected, switching to a mode with no lunge triangles, or changing the type of handheld gaming implement from, for example, a baton to a boxing glove, and so on. In the latter example, both objects meant to be hit with batons and objects meant to be punched can be projected towards a player. The different types of objects may be identified and, following the rules of the game and these embodiments, the player may have to switch between batons and boxing gloves, depending on the type of object next to be hit. The player would make the switch by tapping either the virtual batons or the virtual boxing gloves against each other in the virtual world.
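Purely by way of illustration, detecting the baton tap and cycling a gameplay mode could be handled along the following lines; the mode names, touch distance, and cooldown are hypothetical.

```python
import math

class TapModeSwitcher:
    """Cycle gameplay modes when the two virtual batons are tapped together.

    A tap is registered when the baton tips come within touch_distance of
    each other; a short cooldown prevents one physical tap from registering
    several times. The mode names are purely illustrative.
    """

    def __init__(self, modes=("batons", "boxing_gloves"), touch_distance=0.05, cooldown_s=0.5):
        self.modes = list(modes)
        self.index = 0
        self.touch_distance = touch_distance
        self.cooldown_s = cooldown_s
        self._last_tap_time = float("-inf")

    @property
    def mode(self):
        return self.modes[self.index]

    def update(self, left_tip, right_tip, now_s):
        """Call every frame with the batons' tip positions (x, y, z) and the current time."""
        if (math.dist(left_tip, right_tip) <= self.touch_distance
                and now_s - self._last_tap_time >= self.cooldown_s):
            self._last_tap_time = now_s
            self.index = (self.index + 1) % len(self.modes)
        return self.mode

switcher = TapModeSwitcher()
print(switcher.update((0.0, 1.2, -0.3), (0.02, 1.2, -0.3), now_s=10.0))  # "boxing_gloves"
```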
Although different embodiments are described herein, those of skill in the art will understand, upon reading this description, that the various embodiments may be combined and/or mixed, and that such combination and/or mixing is contemplated herein. Thus, a particular system may include some or all of the above-described embodiments, alone or in various combinations. For example, and without limitation, a particular implementation may include aspects of one or more of:
Those of ordinary skill in the art will realize and understand, upon reading this description, that, as used herein, the term “real time” means near real time or sufficiently real time. It should be appreciated that there are inherent delays in electronic components and in network-based communication (e.g., based on network traffic and distances), and these delays may cause delays in data reaching various components. Inherent delays in the system do not change the real time nature of the data. In some cases, the term “real time data” may refer to data obtained in sufficient time to make the data useful for its intended purpose.
Although the term “real time” may be used here, it should be appreciated that the system is not limited by this term or by how much time is actually taken. In some cases, real-time computation may refer to an online computation, i.e., a computation that produces its answer(s) as data arrives and generally keeps up with continuously arriving data. The term “online” computation is used in contrast to an “offline” or “batch” computation.
In some cases, in the context of a Virtual Reality (VR), Mixed Reality (MR), or Augmented Reality (AR) system, the term “real-time” may mean sufficient time to allow a user's interactions and/or movements with the system to be reflected in the system in a manner that appears or is perceived to be immediate and without perceptible lag.
The applications, services, mechanisms, operations, and acts shown and described above are implemented, at least in part, by software running on one or more computers.
Programs that implement such methods (as well as other types of data) may be stored and transmitted using a variety of media (e.g., computer-readable media) in a number of manners. Hard-wired circuitry or custom hardware may be used in place of, or in combination with, some or all of the software instructions that can implement the processes of various embodiments. Thus, various combinations of hardware and software may be used instead of software only.
One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that the various processes described herein may be implemented by, e.g., appropriately programmed general-purpose computers, special-purpose computers and computing devices. One or more such computers or computing devices may be referred to as a computer system.
According to the present example, the computer system 1500 includes a bus 1502 (i.e., interconnect), one or more processors 1504, a main memory 1506, read-only memory 1508, removable storage media 1510, mass storage 1512, and one or more communications ports 1514. Communication port(s) 1514 may be connected to one or more networks (not shown) by way of which the computer system 1500 may receive and/or transmit data.
As used herein, a “processor” means one or more microprocessors, central processing units (CPUs), computing devices, microcontrollers, digital signal processors, or like devices or any combination thereof, regardless of their architecture. An apparatus that performs a process can include, e.g., a processor and those devices such as input devices and output devices that are appropriate to perform the process.
Processor(s) 1504 can be any known processor, such as, but not limited to, an Intel® Itanium® or Itanium 2® processor(s), AMD® Opteron® or Athlon MP® processor(s), or Motorola® lines of processors, and the like. Communications port(s) 1514 can be any of an Ethernet port, a Gigabit port using copper or fiber, or a USB port, and the like. Communications port(s) 1514 may be chosen depending on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system 1500 connects. The computer system 1500 may be in communication with peripheral devices (e.g., display screen 1516, input device(s) 1518) via Input/Output (I/O) port 1520.
Main memory 1506 can be Random Access Memory (RAM), or any other dynamic storage device(s) commonly known in the art. Read-only memory (ROM) 1508 can be any static storage device(s), such as Programmable Read-Only Memory (PROM) chips for storing static information such as instructions for processor(s) 1504. Mass storage 1512 can be used to store information and instructions. For example, hard disk drives, an optical disc, an array of disks such as Redundant Array of Independent Disks (RAID), or any other mass storage devices may be used.
Bus 1502 communicatively couples processor(s) 1504 with the other memory, storage, and communications blocks. Bus 1502 can be a PCI/PCI-X, SCSI, a Universal Serial Bus (USB) based system bus (or other) depending on the storage devices used, and the like. Removable storage media 1510 can be any kind of external storage, including hard-drives, floppy drives, USB drives, Compact Disc-Read Only Memory (CD-ROM), Compact Disc-Re-Writable (CD-RW), Digital Versatile Disk-Read Only Memory (DVD-ROM), etc.
Embodiments herein may be provided as one or more computer program products, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. As used herein, the term “machine-readable medium” refers to any medium, a plurality of the same, or a combination of different media, which participate in providing data (e.g., instructions, data structures) which may be read by a computer, a processor or a like device. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random-access memory, which typically constitutes the main memory of the computer. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor. Transmission media may include or convey acoustic waves, light waves, and electromagnetic emissions, such as those generated during radio frequency (RF) and infrared (IR) data communications.
The machine-readable medium may include, but is not limited to, floppy diskettes, optical discs, CD-ROMs, magneto-optical disks, ROMs, RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions. Moreover, embodiments herein may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., modem or network connection).
Various forms of computer-readable media may be involved in carrying data (e.g., sequences of instructions) to a processor. For example, data may be (i) delivered from RAM to a processor; (ii) carried over a wireless transmission medium; (iii) formatted and/or transmitted according to numerous formats, standards or protocols; and/or (iv) encrypted in any of a variety of ways well known in the art.
A computer-readable medium can store (in any appropriate format) those program elements which are appropriate to perform the methods.
As shown, main memory 1506 is encoded with application(s) 1522 that support(s) the functionality as discussed herein (the application(s) 1522 may be an application(s) that provides some or all of the functionality of the services/mechanisms described herein, e.g., VR sharing application 230,
During operation of one embodiment, processor(s) 1504 accesses main memory 1506 via the use of bus 1502 in order to launch, run, execute, interpret, or otherwise perform the logic instructions of the application(s) 1522. Execution of application(s) 1522 produces processing functionality of the service related to the application(s). In other words, the process(es) 1524 represent one or more portions of the application(s) 1522 performing within or upon the processor(s) 1504 in the computer system 1500.
For example, process(es) 1524 may include an AR application process corresponding to VR sharing application 230.
It should be noted that, in addition to the process(es) 1524 that carries (carry) out operations as discussed herein, other exemplary embodiments herein include the application(s) 1522 itself (i.e., the un-executed or non-performing logic instructions and/or data). The application(s) 1522 may be stored on a computer-readable medium (e.g., a repository) such as a disk or in an optical medium. According to other exemplary embodiments, the application(s) 1522 can also be stored in a memory type system such as in firmware, read-only memory (ROM), or, as in this example, as executable code within the main memory 1506 (e.g., within Random Access Memory or RAM). For example, application(s) 1522 may also be stored in removable storage media 1510, read-only memory 1508, and/or mass storage device 1512.
Those skilled in the art will understand that the computer system 1500 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources.
As discussed herein, embodiments of the present invention include various steps or acts or operations. A variety of these steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the operations. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware. The term “module” refers to a self-contained functional component, which can include hardware, software, firmware, or any combination thereof.
One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that embodiments of an apparatus may include a computer/computing device operable to perform some (but not necessarily all) of the described process.
Embodiments of a computer-readable medium storing a program or data structure include a computer-readable medium storing a program that, when executed, can cause a processor to perform some (but not necessarily all) of the described process.
Where a process is described herein, those of ordinary skill in the art will appreciate that the process may operate without any user intervention. In another exemplary embodiment, the process includes some human intervention (e.g., a step is performed by or with the assistance of a human).
Although embodiments hereof are described using an integrated device (e.g., a smartphone), those of ordinary skill in the art will appreciate and understand, upon reading this description, that the approaches described herein may be used on any computing device that includes a display and at least one camera that can capture a real-time video image of a user.
For example, the system may be integrated into a heads-up display of a car or the like. In such cases, the rear camera may be omitted.
Each of the following patent applications/publications is hereby fully incorporated herein by reference for all purposes and in its/their entirety:
As used herein, including in the claims, the phrase “at least some” means “one or more,” and includes the case of only one. Thus, e.g., the phrase “at least some ABCs” means “one or more ABCs,” and includes the case of only one ABC.
The term “at least one” should be understood as meaning “one or more,” and therefore includes both embodiments that include one or multiple components. Furthermore, dependent claims that refer to independent claims that describe features with “at least one” have the same meaning, both when the feature is referred to as “the” and “the at least one.”
As used in this description, the term “portion” means some or all. So, for example, “A portion of X” may include some of “X” or all of “X.” In the context of a conversation, the term “portion” means some or all of the conversation.
As used herein, including in the claims, the phrase “based on” means “based in part on” or “based, at least in part, on,” and is not exclusive. Thus, e.g., the phrase “based on factor X” means “based in part on factor X” or “based, at least in part, on factor X.” Unless specifically stated by use of the word “only,” the phrase “based on X” does not mean “based only on X.”
As used herein, including in the claims, the phrase “using” means “using at least,” and is not exclusive. Thus, e.g., the phrase “using X” means “using at least X.” Unless specifically stated by use of the word “only,” the phrase “using X” does not mean “using only X.”
As used herein, including in the claims, the phrase “corresponds to” means “corresponds in part to” or “corresponds, at least in part, to,” and is not exclusive. Thus, e.g., the phrase “corresponds to factor X” means “corresponds in part to factor X” or “corresponds, at least in part, to factor X.” Unless specifically stated by use of the word “only,” the phrase “corresponds to X” does not mean “corresponds only to X.”
In general, as used herein, including in the claims, unless the word “only” is specifically used in a phrase, it should not be read into that phrase.
As used herein, including in the claims, the phrase “distinct” means “at least partially distinct.” Unless specifically stated, distinct does not mean fully distinct. Thus, e.g., the phrase, “X is distinct from Y” means that “X is at least partially distinct from Y,” and does not mean that “X is fully distinct from Y.” Thus, as used herein, including in the claims, the phrase “X is distinct from Y” means that X differs from Y in at least some way.
It should be appreciated that the words “first” and “second” in the description and claims are used to distinguish or identify and not to show a serial or numerical limitation. Similarly, the use of letter or numerical labels (such as “(a),” “(b),” and the like) are used to help distinguish and/or identify and not to show any serial or numerical limitation or ordering.
No ordering is implied by any of the labeled boxes in any of the flow diagrams unless specifically shown and stated. When disconnected boxes are shown in a diagram, the activities associated with those boxes may be performed in any order, including fully or partially in parallel.
As used herein, including in the claims, singular forms of terms are to be construed as also including the plural form and vice versa unless the context indicates otherwise. Thus, it should be noted that as used herein, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Throughout the description and claims, the terms “comprise,” “including,” “having,” and “contain” and their variations should be understood as meaning “including but not limited to” and are not intended to exclude other components.
The present invention also covers the exact terms, features, values, and ranges, etc., in case these terms, features, values, and ranges, etc. are used in conjunction with terms such as about, around, generally, substantially, essentially, at least, etc. (i.e., “about 3” shall also cover exactly 3 or “substantially constant” shall also cover exactly constant).
Use of exemplary language, such as “for instance,” “such as,” “for example,” and the like, is merely intended to better illustrate the invention and does not indicate a limitation on the scope of the invention unless so claimed. Any steps described in the specification may be performed in any order or simultaneously, unless the context clearly indicates otherwise.
All of the features and/or steps disclosed in the specification can be combined in any combination, except for combinations where at least some of the features and/or steps are mutually exclusive. In particular, preferred features of the invention are applicable to all aspects of the invention and may be used in any combination.
Reference numerals have been used solely for reasons of quicker understanding and are not intended to limit the scope of the present invention in any manner.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
This application claims the benefit of U.S. provisional patent application No. 63/257,146, filed Oct. 19, 2021, the entire contents of which are hereby fully incorporated herein by reference for all purposes.
International Filing: PCT/US2022/046894, filed Oct. 17, 2022 (WO).
Related U.S. Provisional Application: No. 63/257,146, filed Oct. 2021 (US).