Video game interfaces have evolved with the advent of motion gaming, which permits users to interact with a video game through bodily movements. In such systems, input to the game may be spoken commands or bodily gestures.
As noted above, motion gaming permits interaction with video games through bodily movements. Virtual reality games may also allow users to navigate through various virtual scenes with gestures, such as swinging an arm or walking. However, a user playing a motion video game in a confined area may not have the physical space needed to make these large movements. Furthermore, large movements may be cumbersome for a user and may lead to fatigue. This may cause users to become irritated and give up playing the game.
In view of the foregoing, disclosed herein are an apparatus, method, and non-transitory computer readable medium for configuring three dimensional movement translations. In one aspect, an apparatus can comprise a sensor for generating information indicative of three dimensional movement of an object and a memory for storing at least one parameter. In a further aspect, the apparatus can comprise at least one processor configured to: receive from the sensor information indicative of three dimensional movement of the object; generate a displayable interface to permit configuration of at least one parameter for translating three dimensional movement of the object; store the at least one parameter in the memory; translate the three dimensional movement of the object in accordance with the at least one parameter; and generate an image corresponding to a translation of the three dimensional movement of the object with a displayable moving image.
In another example, the at least one processor can detect a control input comprising the at least one parameter and the at least one parameter can include an amplification parameter. In yet another example, the at least one processor can amplify movement of the moving image based at least partially on the amplification parameter and an acceleration of the three dimensional movement. In another aspect, the sensor can be a camera.
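By way of illustration only, the following sketch shows one way an amplification parameter and the sensed acceleration could jointly scale the displacement applied to the moving image. The function and argument names (amplify_displacement, accel_reference) are assumptions for illustration, not taken from the disclosure.

```python
# Minimal sketch (hypothetical names): scale a displayed displacement by an
# amplification parameter, weighted by the acceleration of the physical movement.

def amplify_displacement(delta, acceleration, amplification, accel_reference=1.0):
    """Return the on-screen displacement for a physical displacement `delta`.

    `amplification` is the user-configured amplification parameter (1.0 = no gain);
    `acceleration` is the magnitude of the sensed acceleration, normalised against
    `accel_reference` so that faster gestures are amplified more strongly.
    """
    gain = 1.0 + (amplification - 1.0) * min(acceleration / accel_reference, 1.0)
    return tuple(axis * gain for axis in delta)

# Example: a 2 cm hand movement at moderate acceleration with 3x amplification.
print(amplify_displacement((0.02, 0.0, 0.01), acceleration=0.5, amplification=3.0))
```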
In yet another aspect, the at least one parameter can comprise at least one of a seat mode parameter, a telescoping arm action parameter, a non-linear motion parameter, a z-axis boost parameter, a motion hysteresis parameter, and a hand balance parameter.
In a further aspect, a method for configuring three dimensional movement translations can include: generating a displayable interface to permit configuration of at least one parameter for translating three dimensional movement; storing the at least one parameter in a memory; detecting three dimensional movement captured by a sensor; translating the three dimensional movement in accordance with the at least one parameter; and generating a translation of the three dimensional movement with a displayable moving image.
In yet another example, a non-transitory computer readable medium can have instructions therein which, upon execution, can cause at least one processor to: generate a displayable interface to permit configuration of at least one parameter for translating three dimensional movement; store the at least one parameter in a memory; detect three dimensional movement captured by a sensor; translate the three dimensional movement in accordance with the at least one parameter; and generate a translation of the three dimensional movement with a displayable moving image.
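To make the sequence of operations above more concrete, the following is a minimal illustrative sketch of the configure, store, detect, translate, and display flow. All class and method names (MovementTranslator, configure, translate) are assumptions, not the disclosed implementation.

```python
# Illustrative sketch of the configure-store-translate flow described above.

class MovementTranslator:
    def __init__(self):
        self.parameters = {}          # stands in for the translator configuration memory

    def configure(self, **params):    # stands in for the displayable configuration interface
        self.parameters.update(params)

    def translate(self, movement):
        """Apply the stored parameters to a sensed (x, y, z) movement."""
        amp = self.parameters.get("amplification", 1.0)
        return tuple(axis * amp for axis in movement)

translator = MovementTranslator()
translator.configure(amplification=2.5)   # store the parameter
sensed = (0.01, 0.02, 0.0)                # movement captured by the sensor
print(translator.translate(sensed))       # translation driving the moving image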
The aspects, features and advantages of the present disclosure will be appreciated when considered with reference to the following description of examples and accompanying figures. The following description does not limit the application; rather, the scope of the disclosure is defined by the appended claims and equivalents.
Processor 110 can provide further support for image processor 104. Processor 110 can include integrated circuitry for managing the overall functioning of apparatus 100. Processor 110 can also be an ASIC or a processor manufactured by Intel® Corporation or Advanced Micro Devices. Three dimensional (“3D”) movement translator 106 can comprise circuitry, software, or both circuitry and software for receiving image characteristic data derived by image processor 104. 3D movement translator 106 can translate this data in accordance with the configuration contained in translator configuration database 108. While only two processors are shown in
Although the architecture of translator configuration database 108 is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having a plurality of different fields and records, in XML documents, or in flat files. The data can also be formatted in any computer-readable format. The data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, references to data stored in other areas of the same memory or different memories (including other network locations), or information that is used by a function to calculate the relevant data. As will be discussed in more detail below, translation configuration interface 114 can be generated to permit a user to change the parameters of the movement translation. This interface can have a number of parameters that can alter the way the physical movement or gestures detected by the sensor are portrayed on a display. Translation configuration interface 114 can also be implemented in software, hardware, or a combination of software and hardware.
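As one assumed example of the flat-file option, translator parameters could be persisted as a small JSON document; the file name and parameter keys below are purely illustrative.

```python
# One possible (assumed) way to persist translator parameters as a flat file.
import json

parameters = {
    "seat_mode": 0.4,
    "telescoping_arm_action": 0.7,
    "non_linear_velocity": 0.5,
    "z_axis_boost": 0.2,
    "motion_hysteresis": 0.3,
    "hand_balance": -0.1,
}

with open("translator_config.json", "w") as f:
    json.dump(parameters, f, indent=2)

with open("translator_config.json") as f:
    print(json.load(f))
```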
As noted above, 3D movement translator 106 and translation configuration interface 114 can also be implemented in software. In this instance, the computer readable instructions of the software can comprise any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by processor 110. The computer executable instructions can be written in any computer language or format, such as in object code or in modules of source code. The instructions can be stored in object code format for direct processing by the processor, or in any other computer language including, but not limited to, scripts or collections of independent source code modules that are interpreted on demand or compiled in advance.
In a software implementation, the computer executable instructions of 3D movement translator 106 and translation configuration interface 114 can be stored in a memory (not shown) accessible by processor 110 including, but not limited to, a random access memory ("RAM"), or can be stored in a non-transitory computer readable medium. Such non-transitory computer readable medium can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable non-transitory computer-readable media include, but are not limited to, a portable magnetic computer diskette such as a floppy diskette or a hard drive, a read-only memory ("ROM"), an erasable programmable read-only memory, a portable compact disc, or other storage devices that can be coupled to apparatus 100 directly or indirectly. The medium can also include any combination of one or more of the foregoing and/or other devices as well.
Display 112 can include, but is not limited to, a CRT, LCD, plasma screen monitor, TV, projector, or any other electronic device that is operable to display information. Display 112 can be integrated with apparatus 100 or can be a device separate from apparatus 100. When the display 112 is a separate device, display 112 and apparatus 100 can be coupled via a wired or wireless connection. In one example, display 112 can be integrated with a head mounted display or virtual reality goggles used for virtual reality applications. In this instance, there can be a display for each eye to provide the user with a sense of depth.
Working examples of the system, method, and non-transitory computer readable medium are shown in
Referring to
One example parameter shown in
The telescoping arm action parameter 304 can be used to translate a fully extended physical arm of a user to a continuous expansion of a virtual arm on a display. The length of the expansion can be adjusted anywhere between a weak expansion and a strong expansion. This parameter can be effective in particular video game situations. For example, a strong expansion can allow a user to reach virtual objects that are well beyond the user's reach in the virtual workspace. Therefore, a stronger telescoping parameter setting can eliminate the need to reduce the size of the virtual workspace. In another example, this feature can be triggered by fully extending the arm and can be turned off by pulling the arm back. In yet a further example, the telescoping arm action parameter 304 can control the speed of the telescoping action or control the degree to which the user has to extend the arm to trigger the telescoping action. In the event the seat mode parameter is adjusted to a narrower setting, a small movement can trigger the telescoping feature. For example, a user can trigger the telescoping feature by fully extending the finger rather than the arm.
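A minimal sketch of one way the telescoping behavior could work is shown below, assuming a trigger threshold on arm extension and a parameter-controlled expansion speed; the function and argument names are hypothetical.

```python
# Hedged sketch (names are assumptions): trigger the telescoping action when the
# measured arm extension exceeds a configurable fraction of full extension, and
# grow the virtual arm at a speed set by the telescoping parameter.

def update_virtual_arm(virtual_length, extension_ratio, telescoping,
                       trigger_ratio=0.95, dt=1 / 60, max_length=10.0):
    """Advance the virtual arm length for one frame.

    `extension_ratio` is the sensed arm extension (0 = retracted, 1 = fully extended);
    `telescoping` sets the expansion speed in virtual units per second.
    """
    if extension_ratio >= trigger_ratio:                     # arm fully extended: expand
        return min(virtual_length + telescoping * dt, max_length)
    return 1.0                                               # arm pulled back: feature off

length = 1.0
for frame in range(120):                                     # two seconds fully extended
    length = update_virtual_arm(length, extension_ratio=1.0, telescoping=2.0)
print(round(length, 2))                                      # ~5.0 virtual units
```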
The non-linear velocity parameter 306 can be used for non-linear amplification of a movement's velocity. The configuration can allow a reference velocity to be set. When a user moves a body part at or below the reference velocity, the amplification of the velocity can be at or close to the actual velocity. In contrast, when the user moves a body part at a velocity greater than the configured reference velocity, the velocity can be amplified to multiple times (e.g., three times) the actual velocity. The reference velocity can be configured anywhere between a weakest reference velocity and a strongest reference velocity. If a high reference velocity is configured, the user may need to move faster to exceed the higher threshold and trigger the non-linear amplification. In a further aspect, the weaker or stronger setting can change the equation used in the amplification. By way of example, a change in the setting can change the slope or breakpoint in a piecewise-linear function. A function ƒ(x) can have a unity slope for small values of x, but a slope greater than one for larger values of x. In this example, the setting can change the high-value slope from unity to ten, or can change the x threshold where the slope changes from unity to ten. In yet another example, the weaker and stronger settings can change the N in the equation ƒ(x) = x^N. Thus, a very weak setting can be ƒ(x) = x^1 and a very strong setting can be ƒ(x) = x^2.
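The two amplification forms described above can be sketched as follows; the concrete reference velocity, slope, and exponent values are illustrative only, and the function names are assumptions.

```python
# Worked sketch of the two amplification forms described above (values illustrative).

def piecewise_linear(v, reference, high_slope=10.0):
    """Unity gain at or below the reference velocity, steeper slope above it."""
    if v <= reference:
        return v
    return reference + high_slope * (v - reference)

def power_law(v, n):
    """f(v) = v**n; n = 1 is the weakest setting, n = 2 the strongest."""
    return v ** n

print(piecewise_linear(0.5, reference=1.0))        # 0.5  (below threshold, unchanged)
print(piecewise_linear(1.5, reference=1.0))        # 6.0  (above threshold, amplified)
print(power_law(1.5, n=1), power_law(1.5, n=2))    # 1.5, 2.25
```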
Also depicted in
The boundary repulsion parameter 310 can be used to trigger repulsion between a moving image, such as a cursor, and the boundary of the virtual 3D space on the screen. The boundary of the virtual 3D space can be defined by, for example, how far a user can comfortably swing the arm in actual physical space. In the event the seat mode is adjusted to a narrower setting, the physical boundary can be narrower. In this instance, the virtual 3D space can be defined by, for example, how far a user can swing a finger, a hand, etc. This parameter can be used to help a user become accustomed to keeping movements within a camera's purview, since the moving image will be repelled by the virtual three dimensional boundaries when the user moves outside the camera's purview.
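One possible (assumed) repulsion rule is to push the cursor back toward the interior once it enters a margin near a boundary, with the push scaled by the repulsion parameter; the single-axis sketch below uses hypothetical names and values.

```python
# Assumed sketch: push the cursor back toward the interior as it nears a boundary,
# with strength set by the boundary repulsion parameter.

def apply_boundary_repulsion(position, repulsion, bounds=(-1.0, 1.0), margin=0.1):
    """Return an adjusted position for one axis; near a boundary the cursor is repelled."""
    low, high = bounds
    if position > high - margin:
        position -= repulsion * (position - (high - margin))
    elif position < low + margin:
        position += repulsion * ((low + margin) - position)
    return position

print(apply_boundary_repulsion(0.98, repulsion=0.5))   # nudged back toward the interior
print(apply_boundary_repulsion(0.50, repulsion=0.5))   # far from the edge: unchanged
```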
The motion hysteresis parameter 312 can be configured to prevent an image on the screen from moving in response to slight inadvertent movements by a user. The motion hysteresis parameter can be adjusted anywhere between weak and strong. A stronger setting can prevent the image from moving in response to inadvertent movements; in this instance, an image can move in response to physical movement only if the physical movement surpasses a threshold. A weaker setting (e.g., 0 hysteresis) can render the moving image sensitive to even the slightest movements, such as an inadvertent shaking of the hand. The threshold can be a distance or velocity threshold.
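A minimal sketch of a distance-threshold hysteresis filter follows; a velocity threshold could be handled the same way. The names and values are assumptions.

```python
# Minimal sketch of a distance-threshold hysteresis filter (names are assumptions).

def filter_movement(delta, hysteresis_threshold):
    """Ignore physical movements whose magnitude is below the configured threshold."""
    magnitude = sum(axis ** 2 for axis in delta) ** 0.5
    if magnitude < hysteresis_threshold:
        return (0.0, 0.0, 0.0)           # inadvertent jitter: keep the image still
    return delta                          # deliberate movement: pass it through

print(filter_movement((0.001, 0.0, 0.0), hysteresis_threshold=0.01))  # suppressed
print(filter_movement((0.05, 0.0, 0.0), hysteresis_threshold=0.01))   # passed through
```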
The balance parameter 314 can be configured to bias the amplification of a movement to a particular side. This parameter can be configured if, for example, a user has more space on the left than on the right; in this instance, the user can bias the balance parameter toward the left. The same can be done with the right side.
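The bias could be realized, for example, as an asymmetric gain on horizontal movement; in the sketch below, a hypothetical balance value of -0.5 favors leftward movement, consistent with a user who has more room on the left. The function name and value range are assumptions.

```python
# Hedged sketch: bias horizontal amplification toward the side with more physical room.
# `balance` runs from -1.0 (favor the left) through 0.0 (neutral) to +1.0 (favor the right).

def biased_horizontal_gain(dx, base_gain, balance):
    """Amplify leftward movement more when balance < 0, rightward more when balance > 0."""
    if dx < 0:                                   # moving left
        return dx * base_gain * (1.0 - balance)
    return dx * base_gain * (1.0 + balance)      # moving right (or no movement)

print(biased_horizontal_gain(-0.02, base_gain=2.0, balance=-0.5))  # left biased: -0.06
print(biased_horizontal_gain(0.02, base_gain=2.0, balance=-0.5))   # right damped: 0.02
```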
Referring back to
Referring back to
Advantageously, the above-described apparatus, non-transitory computer readable medium, and method allow a user to configure various parameters for 3D motion. In this regard, a user can configure, for example, the amplification of a motion so that large movements on the screen can be triggered with small physical movements. In turn, users playing a game in a confined space can avoid disturbing others around them. Furthermore, users can enjoy generating large movements on the screen without fatigue.
Although the disclosure herein has been described with reference to particular examples, it is to be understood that these examples are merely illustrative of the principles of the disclosure. It is therefore to be understood that numerous modifications can be made to the examples and that other arrangements can be devised without departing from the scope of the disclosure as defined by the appended claims. Furthermore, while particular processes are shown in a specific order in the appended drawings, such processes are not limited to any particular order unless such order is expressly set forth herein. Rather, various steps can be handled in a different order or simultaneously, and steps can be omitted or added.