In the past, computing applications such as computer games and multimedia applications used controllers, remotes, keyboards, mice, or the like to allow users to manipulate game characters or other aspects of an application. More recently, computer games and multimedia applications have begun employing cameras and software gesture recognition engines to provide a natural user interface (“NUI”). With NUI, user gestures are detected, interpreted and used to control game characters or other aspects of an application.
When using a mouse or other integrated controller, only minor initial calibration is necessary. However, in a NUI system, the interface is controlled by the user's position in, and perception of, the 3-D space in which the user moves. Thus, many gaming and other NUI applications have an initial calibration process which correlates the user's 3-D real world movements to the 2-D screen space. In the initial calibration process, a user may be prompted to point at an object appearing at a screen boundary, and the user's movements to complete this action are noted and used for calibration. However, over a gaming or other session, a user may tire, become excited or otherwise alter the movements with which the user interacts with the system. In such instances, the system will no longer properly register movements that initially effected a desired interaction with the system.
Disclosed herein are systems and methods for periodically calibrating a user interface in a NUI system by performing periodic active calibration events. The system includes a capture device for capturing position data relating to objects in a field of view of the capture device, a display and a computing environment for receiving image data from the capture device and for running applications. The system further includes a user interface controlled by the computing environment and operating by mapping a 3-D position of a pointing object to a 2-D position on the display. In embodiments, the computing environment periodically recalibrates the mapping of the user interface while the computing environment is running an application.
In a further embodiment, the present technology relates to a method of active calibration of a user interface for a user to interact with objects on a display. The method includes the steps of running an application on a computing environment; receiving input for interacting with the application via the user interface; periodically performing an active calibration of the user interface while running the application; and recalibrating the user interface based at least in part on the performed active calibration.
In a further embodiment, the present technology relates to a method of active calibration of a user interface for a user to interact with objects on a display, including the steps of providing the user interface, the user interface mapping a position of a user interface pointer in 3-D space to a 2-D position on the display; displaying a target object on the display; detecting an attempt to select the target object on the display via the user interface and user interface pointer; measuring a 3-D position of the user interface pointer in selecting the target object; determining a 2-D screen position corresponding to the measured 3-D position of the user interface pointer; determining a disparity between the determined 2-D screen position and the 2-D screen position of the target object; and periodically repeating the above steps.
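By way of illustration only, and not as part of the claimed subject matter, one pass of such an active calibration might be sketched in Python as follows. The names active_calibration_pass, show_target, read_pointer_3d and map_to_screen are hypothetical stand-ins for whatever display, capture and mapping routines a given implementation provides.

def active_calibration_pass(show_target, read_pointer_3d, map_to_screen):
    # Display a target object and note its 2-D screen position.
    target_x, target_y = show_target()
    # Measure the 3-D position of the UI pointer as the user selects the target.
    px, py, pz = read_pointer_3d()
    # Determine the 2-D screen position the current mapping assigns to that 3-D position.
    screen_x, screen_y = map_to_screen(px, py, pz)
    # The disparity between the two 2-D positions is the basis for recalibration.
    return (target_x - screen_x, target_y - screen_y)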
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Embodiments of the present technology will now be described with reference to
Referring initially to
As shown in
Other movements by the user 18 may also be interpreted as other controls or actions, such as controls to bob, weave, shuffle, block, jab, or throw a variety of different power punches. The embodiment of
As shown in
As shown in
According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles, to obtain visual stereo data that may be resolved to generate depth information.
The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that may receive and convert sound into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the target recognition, analysis, and tracking system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control applications such as game applications, non-game applications, or the like that may be executed by the computing environment 12.
In an example embodiment, the capture device 20 may further include a processor 32 that may be in operative communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions that may include instructions for receiving the depth image, determining whether a suitable target may be included in the depth image, converting the suitable target into a skeletal representation or model of the target, or any other suitable instruction.
The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. As shown in
As shown in
Additionally, the capture device 20 may provide the depth information and images captured by, for example, the 3-D camera 26 and/or the RGB camera 28, and a skeletal model that may be generated by the capture device 20 to the computing environment 12 via the communication link 36. A variety of known techniques exist for determining whether a target or object detected by capture device 20 corresponds to a human target. Skeletal mapping techniques may then be used to determine various spots on that user's skeleton, such as joints of the hands, wrists, elbows, knees, nose, ankles, shoulders, and where the pelvis meets the spine. Other techniques include transforming the image into a body model representation of the person and transforming the image into a mesh model representation of the person.
The skeletal model may then be provided to the computing environment 12 such that the computing environment may perform a variety of actions. In accordance with the present technology, the computing environment 12 may use the skeletal model to map the 3-D position of a UI pointer, such as the user's hand, to a 2-D position on the display, as explained below. Although not central to the present technology, the computing environment may further track the skeletal model and render an avatar associated with the skeletal model on the display 14. The computing environment may further determine which controls to perform in an application executing on the computing environment based on, for example, gestures of the user that have been recognized from the skeletal model. For example, as shown in
A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the GPU 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM.
The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB host controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.
When the multimedia console 100 is powered ON, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
When the multimedia console 100 is powered ON, a set amount of hardware resources are reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render a popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of the input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The cameras 26, 28 and capture device 20 may define additional input devices for the console 100.
In
The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in
When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
Aspects of the present technology will now be explained with reference to the flowcharts of
This user may have interacted with a system 10 in the past. If so, calibration data may have been captured from these prior interaction sessions and stored as explained below. The calibration data may be stored in memory associated with the system 10 and/or remotely in a central storage location accessible by a network connection between the central storage location and the system 10. In step 406, the registration algorithm can check whether there is any stored calibration data for the registered user. If so, the calibration data for that user is retrieved in step 408. If there is no stored calibration data, step 408 is skipped. Steps 406 and 408 may be omitted in further embodiments.
In the event the calibration data was stored remotely in a central storage location, a user may obtain the calibration data using their own system 10 (i.e., the one previously used to generate and store calibration data), or another system 10 which they have not previously used. One advantage to having stored calibration data is that the system may automatically calibrate the interface to that user once the user begins use of a system 10, and no separate, initial calibration routine is needed. Even if there is no stored calibration data, the present technology allows omission of a separate, initial calibration routine, as calibration is performed “on the fly” in active calibration events as explained below. Although the present technology allows omission of a separate, initial calibration routine, it is conceivable that initial calibration data be obtained from a separate, initial calibration routine in further embodiments.
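As a non-limiting illustration of steps 406 and 408, the Python sketch below checks a local store and, optionally, a central storage location for a registered user's calibration data. The file layout, the user_id key and the remote_store object are assumptions made solely for this example and are not features of any particular system 10.

import json
import os

def load_calibration_data(user_id, local_dir, remote_store=None):
    # Step 406: check whether any calibration data is stored for the registered user.
    local_path = os.path.join(local_dir, f"{user_id}_calibration.json")
    if os.path.exists(local_path):
        # Step 408: retrieve the locally stored calibration data.
        with open(local_path) as f:
            return json.load(f)
    if remote_store is not None:
        # Optionally fall back to a central storage location reached over a network connection.
        return remote_store.get(user_id)
    # No stored data: step 408 is skipped and calibration proceeds "on the fly."
    return None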
In step 410, a user may launch an application over the computing environment 12. The application, referred to herein as the NUI application, may be a gaming application or other application where the user interface by which the user interacts with the application is the user himself moving in the space in front of the capture device 20. The capture device captures and interprets the movements as explained above. In the following description, a user's hand is described as the user interface (UI) pointer which controls the NUI application. However, it is understood that other body parts, including the feet, legs, arms and/or head, may also or alternatively be the UI pointer in further examples.
In the example shown in
In step 412, the NUI application runs normally, i.e., it runs according to its intended purpose without active calibration events. In step 414, the NUI application looks for a triggering event. If one is found, the NUI application performs an active calibration event as explained below. The triggering event may be any of a wide variety of different events. In embodiments, it may simply be a countdown of a system clock so that the triggering event automatically occurs every preset period of time. This period of time may vary in different embodiments, but may for example be every minute, two minutes, five minutes, etc. The countdown period may be shorter or longer than these examples. Thus, where the countdown period is one minute, the triggering event will occur, and the active calibration event will be performed, once every minute that the user is running the NUI application.
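By way of example and not limitation, the countdown-based triggering event might be implemented along the lines of the following Python sketch; the class name and the default one-minute period are illustrative assumptions only.

import time

class CountdownTrigger:
    # Fires a triggering event each time the preset period elapses while the NUI application runs.
    def __init__(self, period_s=60.0):  # e.g., one minute; two or five minutes are equally possible
        self.period_s = period_s
        self.last_fired = time.monotonic()

    def should_calibrate(self):
        now = time.monotonic()
        if now - self.last_fired >= self.period_s:
            self.last_fired = now  # restart the countdown
            return True
        return False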
The triggering event may be an event other than a countdown in further embodiments. In one such embodiment, the NUI application or another algorithm running on computing environment 12 may monitor success versus failure with respect to how often the user successfully selects or connects with an intended object, e.g., an object 19, on display 14 during normal game play. "Connects" in this context refers to a user successfully orienting his or her UI pointer, such as a hand, in 3-D space so as to accurately align with the 2-D screen location of an object on the display. Thus, in the example of
Those of skill in the art will appreciate that the above embodiment may be tuned with a wide variety of criteria, including the percentage drop used for the threshold and how long that drop needs to be seen. As one of many examples, the system may establish the baseline success rate over a first time period of five minutes. If, after that period, the system detects a drop in successful connections of, for example, 10% over a period of one minute, this may trigger the calibration step. The percentage drop and the time period over which it is seen may both vary above or below the example values set forth above in further embodiments. Other types of events are contemplated for triggering the active calibration step.
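The success-rate trigger of this embodiment could be sketched as follows, again in Python. The five-minute baseline window, one-minute recent window and 10% drop are merely the example tuning values noted above, and the class and method names are hypothetical.

import time
from collections import deque

class SuccessRateTrigger:
    # Triggers calibration when the recent connection success rate falls a set amount below a baseline.
    def __init__(self, baseline_window_s=300.0, recent_window_s=60.0, drop=0.10):
        self.baseline_window_s = baseline_window_s
        self.recent_window_s = recent_window_s
        self.drop = drop
        self.start = time.monotonic()
        self.attempts = deque()   # (timestamp, success) pairs recorded during normal game play
        self.baseline = None

    def record_attempt(self, success):
        self.attempts.append((time.monotonic(), bool(success)))

    def _rate(self, window_s):
        cutoff = time.monotonic() - window_s
        recent = [ok for t, ok in self.attempts if t >= cutoff]
        return sum(recent) / len(recent) if recent else None

    def should_calibrate(self):
        if self.baseline is None:
            # Establish the baseline success rate over the first time period.
            if time.monotonic() - self.start >= self.baseline_window_s:
                self.baseline = self._rate(self.baseline_window_s)
            return False
        recent_rate = self._rate(self.recent_window_s)
        # Trigger when the recent rate has dropped by at least `drop` relative to the baseline.
        return recent_rate is not None and recent_rate <= self.baseline - self.drop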
If no trigger event is detected in step 414, the NUI application performs its normal operations. However, if a trigger event is detected, the NUI application performs an active calibration event in step 416. Further details of the active calibration step 416 are described below with respect to the flowchart of
In general, the calibration event includes the steps of putting up a target object (e.g., target object 21,
The target is displayed in step 432.
As shown in
Once a target object 21 is displayed, the system detects a user's movement in step 434 to point to or connect with the target object 21. If the system does not detect the user moving to select the target object in step 434, the system may return to step 432 to display another target object 21.
Assuming the user moves to point at the target object, the system measures the X, Y and Z position of the UI pointer (the user's hand in this example) in 3-D space in step 438. The system may make separate, independent measurements of the X, Y and Z positions, and may recalibrate the X, Y and Z positions independently of each other. Assuming a reference system where the X direction is horizontal, the Y direction is vertical and the Z direction is toward and away from the capture device 20, the greatest deviation in movement may occur along the Y axis due to gravity-driven fatigue. This may not be the case in further examples.
Calibration of movements along the Z-axis may present a special case, in that these movements often represent a control action rather than translating to pure positional movement in 2-D screen space. For example, in the shooting embodiment of
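Purely as an illustration of treating the Z axis as a control threshold rather than as a positional mapping, a hypothetical "push" depth could be nudged toward the depths actually observed when the user selects calibration targets, as in the short Python sketch below; the blend factor and names are assumptions for this example.

def recalibrate_z_threshold(old_threshold_m, measured_push_depths_m, blend=0.5):
    # Average the Z depths measured during recent active calibration events.
    observed = sum(measured_push_depths_m) / len(measured_push_depths_m)
    # Move the control threshold only part way toward the observed depth rather than replacing it outright.
    return (1.0 - blend) * old_threshold_m + blend * observed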
Once the system measures the X, Y and Z position of the UI pointer in 3-D space, the system maps this to the corresponding position of the UI pointer in 2-D screen space in step 440. This determination may be made in one of two ways. It may be the actual 2-D position indicated by the 3-D world position of the UI pointer (i.e., without any calibration adjustment), or it may be the actual 2-D position adjusted based on a prior recalibration of the UI pointer to screen objects.
In step 442, the system determines any deviation between the 2-D position of the target and the determined 2-D position corresponding to the 3-D position of the UI pointer. This deviation represents the amount by which the system may recalibrate so that the 2-D position determined in step 440 matches the 2-D position of the target 21. As explained below with respect to the recalibration step, the amount of the recalibration that is performed may be less than indicated by step 442 in embodiments.
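Steps 438 through 442 might be expressed as the Python sketch below, in which per-axis scale and offset values stand for the current calibration state. The dictionary layout and function names are hypothetical and are chosen only to mirror the independent treatment of the X and Y axes described above.

def map_pointer_to_screen(px, py, pz, cal):
    # Step 440: map the measured 3-D pointer position to 2-D screen space,
    # applying any previously determined per-axis calibration.
    screen_x = cal["scale_x"] * (px + cal["offset_x"])   # X axis handled independently
    screen_y = cal["scale_y"] * (py + cal["offset_y"])   # Y axis often drifts most with fatigue
    return screen_x, screen_y                            # Z is treated separately, as discussed above

def screen_deviation(target_xy, pointer_xyz, cal):
    # Step 442: deviation between the target's 2-D position and the mapped pointer position.
    px, py, pz = pointer_xyz
    screen_x, screen_y = map_pointer_to_screen(px, py, pz, cal)
    return target_xy[0] - screen_x, target_xy[1] - screen_y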
Returning to the flowchart of
In further embodiments, instead of the most recent deviation being used as the sole correction factor, the system may average the most recent deviation together with prior determined deviations from prior active calibration events. In this example, the system may weight the data from the active calibration events (current and past) the same or differently. This process is explained in greater detail with respect to the flowchart of
As indicated, in embodiments, the recalibration step 418 may be performed by averaging weighted values for the current and past calibration events. The past calibration events are received from memory as explained below. If the user is using the same system in the same manner as in prior sessions, the past calibration events may be weighted the same or similarly to the current calibration event. The weighting assigned to the different calibration events (current and past) may be different in further embodiments. In embodiments where weights are different, the data for the current calibration event may be weighted higher than stored data for past calibration events. And of the stored values, the data for the more recent stored calibration events may be weighted more than the data for the older stored calibration events. The weighting may be tuned differently in further embodiments.
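A minimal Python sketch of such a weighted average appears below. The particular weight given to the current event and the decay applied to older stored events are illustrative tuning values only, not values taught by the present disclosure.

def weighted_deviation(current_dev, stored_devs, current_weight=2.0, decay=0.8):
    # current_dev is the (dx, dy) deviation from the current active calibration event;
    # stored_devs holds deviations from past events, ordered most recent first.
    weighted = [(current_weight, current_dev)]
    w = 1.0
    for dev in stored_devs:
        weighted.append((w, dev))
        w *= decay   # older calibration events contribute progressively less
    total = sum(wt for wt, _ in weighted)
    dx = sum(wt * d[0] for wt, d in weighted) / total
    dy = sum(wt * d[1] for wt, d in weighted) / total
    return dx, dy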
It may happen that some aspect has changed with respect to how the user is interacting with the system 10 in the current session in comparison to past sessions. It could be that an injury or other factor is limiting the user's movement and ability to interact with the system. It could be that a user wore flat shoes during prior sessions and is now wearing high heels. It could be that the user stood in prior sessions and is now seated. It could also be that the user is interacting with a new display 14 that is larger or smaller than the user is accustomed to. It could be a wide variety of other changes. Each of these changes may cause the X and/or Y position (and possibly the Z position) to change with respect to the capture device 20 in comparison to prior sessions.
Thus, in the embodiment described with respect to
Whether weighting according to some predetermined scheme or skewing the weight of the current active calibration event data more heavily in step 452, the system uses the weighted average of the current and stored active calibration events in step 456 to determine the recalibration of the interface. Thus, in an example, the interface may be recalibrated by only a portion of the total current deviation between the most recently determined 2-D position at which the user is pointing and the position of the target object 21. Or the interface may be recalibrated by an amount greater than the current measured deviation. The number of past calibration events which play into the recalibration of the interface may be limited to some number of the most recently stored active calibration events, for example only the most recent five to ten active calibration events. The number used may be more or less than that in further embodiments.
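Continuing the weighted_deviation sketch above (which is assumed to be in scope), steps 452 and 456 might be combined as follows; the change-detection flag, the particular weights and the ten-event cap are again assumptions made only for illustration.

def recalibration_correction(current_dev, stored_devs, session_changed=False, max_events=10):
    # Consider only some number of the most recently stored active calibration events.
    recent = stored_devs[:max_events]
    # Step 452: skew the weighting toward the current event when conditions appear to have changed
    # (new display, different stance or footwear, an injury, etc.); otherwise use the normal scheme.
    current_weight = 5.0 if session_changed else 2.0
    # Step 456: the weighted average determines how much of the deviation to correct.
    return weighted_deviation(current_dev, recent, current_weight=current_weight)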
As indicated above, the system may alternatively simply use the most current active calibration event for recalibration purposes. In this case, the system may recalibrate by the entire amount of the deviation between the currently determined 2-D position at which the user is pointing and the position of the target object 21, and use that as the sole basis for the correction. In such an embodiment, the steps shown in
Referring again to the flowchart of
However, upon completion of the boxing game, the user may choose to play the shooting game of
In addition to storing calibration event data locally, the determined calibration event data may be stored remotely in a central storage location. Such an embodiment may operate as described above, but may have the further advantage that stored calibration event data may be used for recalibration purposes when the user is interacting with a different system 10 than that which generated the stored calibration event data. Thus, as an example, a user may play a game at a friend's house, and the system would automatically calibrate the interface to that particular user when the user first starts playing, even if the user has never played on that system before. Further details relating to the remote storing of data and use of that data on other systems are disclosed, for example, in U.S. patent application Ser. No. 12/581,443 entitled "Gesture Personalization and Profile Roaming," filed on Oct. 19, 2009, which application is assigned to the owner of the current application and which application is incorporated herein by reference in its entirety.
In embodiments, the active calibration routine is built into the NUI application developed for use on system 10. In further embodiments, portions or all of the active calibration routine may be run from a system or other file in the computing environment 12 operating system, or some other algorithm running on computing environment 12 which is separate and distinct from the NUI application into which the active calibration events are inserted.
The foregoing detailed description of the inventive system has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the inventive system to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the inventive system and its practical application to thereby enable others skilled in the art to best utilize the inventive system in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the inventive system be defined by the claims appended hereto.