In the past, computing applications such as computer games and multimedia applications used controllers, remotes, keyboards, mice, or the like to allow users to manipulate game characters or other aspects of an application. More recently, computer games and multimedia applications have begun employing cameras and software gesture recognition engines to provide a natural user interface (“NUI”). With NUI, raw joint data and user gestures are detected, interpreted and used to control game characters or other aspects of an application.
One of the challenges of a NUI system is distinguishing a person in the field of view of an image sensor, and correctly identifying the positions of his or her body parts including hands and fingers within the field of view. Routines are known for tracking arms, legs, heads and torso. However, given the subtle detail and wide variety of positions of a user's hands, conventional systems are not able to satisfactorily recognize and track a user's body including finger and hand positions.
Disclosed herein are systems and methods for recognizing and tracking a user's skeletal joints, including hand and finger positions with a NUI system. In examples, the tracking of hand and finger positions may be used by NUI systems for triggering events such as selecting, engaging, or grabbing and dragging objects on a screen. A variety of other gestures, control actions and applications may be enabled by the present technology for recognizing and tracking hand and finger positions and motions. By determining states of a user's hand and fingers, interactivity of a user with a NUI system may be increased, and simpler and more intuitive interfaces may be presented to a user.
In one example, the present disclosure relates to a method for generating a model of a user's hand including one or more fingers for a natural user interface, comprising: (a) receiving image data of a user interacting with the natural user interface; and (b) analyzing the image data to identify the hand in the image data, said step (b) including the steps of: (b)(1) analyzing depth data from the image data captured in said step (a) to segment the image data into data of the hand, and (b)(2) extracting a shape descriptor by applying one or more filters to the image data of the hand identified in said step (b)(1), the one or more filters analyzing image data of the hand as compared to image data outside of a boundary of the hand to discern a shape and orientation of the hand.
In a further example, the present disclosure relates to a system for generating a model of a user's hand including one or more fingers for a natural user interface, the system comprising: a skeletal recognition engine for recognizing a skeleton of a user from received image data; an image segmentation engine for segmenting one or more regions of the body into a region representing a hand of the user; and a descriptor extraction engine for extracting data representative of a hand including one or more fingers and an orientation of the hand, the descriptor extraction engine applying a plurality of filters for analyzing pixels in the region representing the hand, each filter in the plurality of filters determining a position and orientation of the hand, the descriptor extraction engine combining the results of each filter to arrive at a best estimate of the position and orientation of the hand.
In another example, the present disclosure relates to a computer-readable storage medium not consisting of a modulated data signal, the computer-readable storage medium having computer-executable instructions for programming a processor to perform a method for generating a model of a user's hand including one or more fingers for a natural user interface, the method comprising: (a) receiving image data of a user interacting with the natural user interface; (b) analyzing the image data to identify the hand in the image data; and (c) comparing the image data of the identified hand against predefined hand positions to determine if the user has performed one of the following predefined hand gestures or control actions: (c)(1) counting on the user's fingers, (c)(2) performing an “a-okay” gesture, (c)(3) actuation of a virtual button, (c)(4) pinching together of the thumb and a finger of the hand, (c)(5) writing or drawing, (c)(6) sculpting, (c)(7) puppeteering, (c)(8) turning a knob or combination lock, (c)(9) shooting a gun, (c)(10) performing a flicking gesture, (c)(11) performing a gesture where a finger can be used on an open palm to scroll across and navigate through the virtual space, and (c)(12) moving fingers in a scissor motion to control the legs of a virtual character.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Embodiments of the present technology will now be described with reference to
Referring initially to
The system 10 further includes a capture device 20 for capturing image and audio data relating to one or more users and/or objects sensed by the capture device. In embodiments, the capture device 20 may be used to capture information relating to body and hand movements and/or gestures and speech of one or more users, which information is received by the computing environment and used to render, interact with and/or control aspects of a gaming or other application. Examples of the computing environment 12 and capture device 20 are explained in greater detail below.
Embodiments of the target recognition, analysis and tracking system 10 may be connected to an audio/visual (A/V) device 16 having a display 14. The device 16 may for example be a television, a phone, a monitor for a computer, a high-definition television (HDTV), or the like that may provide game or application visuals and/or audio to a user. For example, the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audio/visual signals associated with the game or other application. The A/V device 16 may receive the audio/visual signals from the computing environment 12 and may then output the game or application visuals and/or audio associated with the audio/visual signals to the user 18. According to one embodiment, the audio/visual device 16 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, a component video cable, or the like.
In embodiments, the computing environment 12, the A/V device 16 and the capture device 20 may cooperate to render an avatar or on-screen character 19 on display 14. For example,
As explained above, motion estimation routines such as skeleton mapping systems may lack the ability to detect subtle gestures of a user, such as for example the movement of a user's hand. For example, a user may wish to interact with NUI system 10 by scrolling through and controlling a user interface 21 with his hand as shown in
Accordingly, systems and methods, described below herein, are directed to determining a state of a hand of a user. For example, the action of closing and opening the hand may be used by such systems for triggering events such as selecting, engaging, or grabbing and dragging objects, e.g., object 27 (
Suitable examples of a system 10 and components thereof are found in the following co-pending patent applications, all of which are hereby specifically incorporated by reference: U.S. patent application Ser. No. 12/475,094, entitled “Environment and/or Target Segmentation,” filed May 29, 2009; U.S. patent application Ser. No. 12/511,850, entitled “Auto Generating a Visual Representation,” filed Jul. 29, 2009; U.S. patent application Ser. No. 12/474,655, entitled “Gesture Tool,” filed May 29, 2009; U.S. patent application Ser. No. 12/603,437, entitled “Pose Tracking Pipeline,” filed Oct. 21, 2009; U.S. patent application Ser. No. 12/475,308, entitled “Device for Identifying and Tracking Multiple Humans Over Time,” filed May 29, 2009; U.S. patent application Ser. No. 12/575,388, entitled “Human Tracking System,” filed Oct. 7, 2009; U.S. patent application Ser. No. 12/422,661, entitled “Gesture Recognizer System Architecture,” filed Apr. 13, 2009; and U.S. patent application Ser. No. 12/391,150, entitled “Standard Gestures,” filed Feb. 23, 2009.
As shown in
As shown in
In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the scene. Additionally, in other example embodiments, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device 20 to a particular location on the targets or objects.
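By way of a non-limiting illustration of the time-of-flight relationships described above, the following sketch computes distance from a measured pulse round-trip time and from a measured phase shift; the modulation frequency and measured values are hypothetical examples, not parameters of capture device 20.

```python
# Minimal sketch of the time-of-flight relationships described above. The
# round-trip time, phase shift and modulation frequency below are hypothetical
# example values, not parameters of capture device 20.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def distance_from_pulse(round_trip_seconds: float) -> float:
    """Distance from the time between an outgoing and a corresponding incoming light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0


def distance_from_phase(phase_shift_rad: float, modulation_hz: float) -> float:
    """Distance from the phase shift between the outgoing and incoming light waves."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * math.pi * modulation_hz)


if __name__ == "__main__":
    print(distance_from_pulse(20e-9))               # 20 ns round trip -> ~3.0 m
    print(distance_from_phase(math.pi / 2, 30e6))   # quarter-cycle shift at 30 MHz -> ~1.25 m
```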
According to another example embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
In another example embodiment, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the scene via, for example, the IR light component 24. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 26 and/or the RGB camera 28 and may then be analyzed to determine a physical distance from the capture device 20 to a particular location on the targets or objects.
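The following is a minimal sketch of how one form of pattern deformation (here simplified to a lateral shift of a pattern element) may be converted to depth by triangulation against a known reference plane; the baseline, focal length and reference depth are hypothetical values rather than properties of capture device 20 or its IR light component 24.

```python
# Simplified sketch: a lateral shift of a projected pattern element, relative to
# its position at a known reference depth, converted to depth by triangulation.
# The baseline, focal length and reference depth are hypothetical values, not
# properties of capture device 20 or its IR light component 24.

def depth_from_pattern_shift(shift_px: float, baseline_m: float,
                             focal_px: float, reference_depth_m: float) -> float:
    """Depth of the surface point that displaced the pattern element by shift_px."""
    # Shift relative to the reference plane: d = f * b * (1/Z - 1/Z_ref)
    inv_depth = shift_px / (focal_px * baseline_m) + 1.0 / reference_depth_m
    return 1.0 / inv_depth


if __name__ == "__main__":
    # A pattern element shifted by 10 pixels from its reference position.
    print(depth_from_pattern_shift(10.0, baseline_m=0.075,
                                   focal_px=580.0, reference_depth_m=2.0))
```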
According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles, to obtain visual stereo data that may be resolved to generate depth information. In another example embodiment, the capture device 20 may use point cloud data and target digitization techniques to detect features of the user. Other sensor systems may be used in further embodiments, such as for example an ultrasonic system capable of detecting x, y and z axes.
The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that may receive and convert sound into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the target recognition, analysis, and tracking system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control applications such as game applications, non-game applications, or the like that may be executed by the computing environment 12.
In an example embodiment, the capture device 20 may further include a processor 32 that may be in operative communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions that may include instructions for receiving the depth image, determining whether a suitable target may be included in the depth image, converting the suitable target into a skeletal representation or model of the target, or any other suitable instruction.
The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. As shown in
As shown in
Additionally, the capture device 20 may provide the depth information and images captured by, for example, the 3-D camera 26 and/or the RGB camera 28. With the aid of these devices, a partial skeletal model may be developed in accordance with the present technology, with the resulting data provided to the computing environment 12 via the communication link 36.
The computing environment 12 may further include a gesture recognition engine 190 for recognizing gestures as explained below. In accordance with the present system, the computing environment 12 may further include a skeletal recognition engine 192, an image segmentation engine 194, a descriptor extraction engine 196 and a classifier engine 198. Each of these software engines is described in greater detail below.
A model of a target can be variously configured without departing from the scope of this disclosure. In some examples, a model may include one or more data structures that represent a target as a three-dimensional model including rigid and/or deformable shapes, or body parts. Each body part may be characterized as a mathematical primitive, examples of which include, but are not limited to, spheres, anisotropically-scaled spheres, cylinders, anisotropic cylinders, smooth cylinders, boxes, beveled boxes, prisms, and the like.
For example, body model 70 of
A model including two or more body parts may also include one or more joints. Each joint may allow one or more body parts to move relative to one or more other body parts. For example, a model representing a human target may include a plurality of rigid and/or deformable body parts, wherein some body parts may represent a corresponding anatomical body part of the human target. Further, each body part of the model may include one or more structural members (i.e., “bones” or skeletal parts), with joints located at the intersection of adjacent bones. It is to be understood that some bones may correspond to anatomical bones in a human target and/or some bones may not have corresponding anatomical bones in the human target.
The bones and joints may collectively make up a skeletal model, which may be a constituent element of the body model. In some embodiments, a skeletal model may be used instead of another type of model, such as model 70 of
The above described body part models and skeletal models are non-limiting examples of types of models that may be used as machine representations of a modeled target. Other models are also within the scope of this disclosure. For example, some models may include polygonal meshes, patches, non-uniform rational B-splines, subdivision surfaces, or other high-order surfaces. A model may also include surface textures and/or other information to more accurately represent clothing, hair, and/or other aspects of a modeled target. A model may optionally include information pertaining to a current pose, one or more past poses, and/or model physics. It is to be understood that a variety of different models that can be posed are compatible with the herein described target recognition, analysis, and tracking system.
Software pipelines for generating skeletal models of one or more users within a FOV of capture device 20 are known. One such system is disclosed for example in U.S. patent application Ser. No. 12/876,418, entitled “System For Fast, Probabilistic Skeletal Tracking,” filed Sep. 7, 2010, which application is incorporated by reference herein in its entirety. Under certain conditions, for example where a user is sufficiently close to capture device 20 and at least one of the user's hands is distinguishable from other background noise, a software pipeline may further be able to generate hand models for the hand and/or fingers of one or more users within the FOV.
In step 204, the skeletal recognition engine 192 of the pipeline estimates a skeleton model of the user as described above to obtain a virtual skeleton from a depth image obtained in step 200. For example, in
In step 208, the pipeline segments a hand or hands of the user via the image segmentation engine 194 of the pipeline. In some examples, image segmentation engine 194 may additionally segment one or more regions of the body in addition to the hands. Segmenting a hand of a user includes identifying a region of the depth image corresponding to the hand, where the identifying is at least partially based on the skeleton information obtained in step 204.
Hands or body regions may be segmented or localized in a variety of ways and may be based on selected joints identified in the skeleton estimation described above. As one example, hand detection and localization in the depth image may be based on the estimated wrist and/or hand tip joints from the estimated skeleton. For example, in some embodiments, hand segmentation in the depth image may be performed using a topographical search of the depth image around the hand joints, locating nearby local extrema in the depth image as candidates for finger tips. The image segmentation engine 194 then segments the rest of the hand by taking into account a body size scaling factor as determined from the estimated skeleton, as well as depth discontinuities for boundary identification.
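A minimal sketch of such a topographical search is given below, assuming a depth image stored as a NumPy array with invalid samples encoded as 0; the window and neighborhood sizes are illustrative assumptions rather than values prescribed by the present system.

```python
# Hedged sketch of a topographical search for fingertip candidates: local depth
# extrema (points closest to the camera) in a window around the hand joint.
# The window and neighborhood sizes, and the use of 0 for invalid depth samples,
# are illustrative assumptions.
import numpy as np


def fingertip_candidates(depth: np.ndarray, hand_xy: tuple[int, int],
                         window: int = 40, neighborhood: int = 3) -> list[tuple[int, int]]:
    h, w = depth.shape
    x0, y0 = hand_xy
    candidates = []
    for y in range(max(0, y0 - window), min(h, y0 + window)):
        for x in range(max(0, x0 - window), min(w, x0 + window)):
            d = depth[y, x]
            if d == 0:                     # skip invalid depth samples
                continue
            patch = depth[max(0, y - neighborhood):y + neighborhood + 1,
                          max(0, x - neighborhood):x + neighborhood + 1]
            valid = patch[patch > 0]
            if valid.size and d <= valid.min():   # local extremum toward the camera
                candidates.append((x, y))
    return candidates
```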
As another example, a flood-fill approach may be employed to identify regions of the depth image corresponding to a user's hands. In a flood-fill approach, the depth image may be searched from a starting point and a starting direction, e.g., the starting point may be the wrist joint and the starting direction may be a direction from the elbow to the wrist joint. Nearby pixels in the depth image may be iteratively scored based on their projection onto the starting direction, giving preference to points moving away from the elbow and toward the hand tip, while depth consistency constraints such as depth discontinuities may be used to identify boundaries or extreme values of a user's hands in the depth image. In some examples, threshold distance values may be used to limit the depth map search in both the positive and negative directions of the starting direction; the thresholds may be fixed values or may be scaled based on an estimated size of the user, for example.
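The following sketch illustrates one possible form of the flood-fill approach under the assumptions noted in the comments; the depth-discontinuity and distance thresholds are illustrative and not values prescribed by the present system.

```python
# Hedged sketch of the flood-fill approach: starting at the wrist joint and
# growing along the elbow-to-wrist direction, neighboring pixels are accepted
# while depth stays continuous and the projection onto the starting direction
# stays within forward/backward limits. The thresholds, depth units (mm) and
# array conventions are illustrative assumptions.
from collections import deque

import numpy as np


def flood_fill_hand(depth: np.ndarray, wrist_xy: tuple[int, int],
                    elbow_xy: tuple[int, int],
                    depth_jump_mm: float = 50.0,
                    max_forward_px: float = 120.0,
                    max_backward_px: float = 10.0) -> np.ndarray:
    """Return a boolean mask of depth-image pixels assigned to the hand."""
    h, w = depth.shape
    direction = np.array(wrist_xy, dtype=float) - np.array(elbow_xy, dtype=float)
    direction /= np.linalg.norm(direction)

    mask = np.zeros((h, w), dtype=bool)
    mask[wrist_xy[1], wrist_xy[0]] = True
    queue = deque([wrist_xy])

    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < w and 0 <= ny < h) or mask[ny, nx]:
                continue
            if depth[ny, nx] == 0:                                  # invalid sample
                continue
            if abs(float(depth[ny, nx]) - float(depth[y, x])) > depth_jump_mm:
                continue                                            # depth discontinuity
            # Projection onto the starting direction gives preference to points
            # moving away from the elbow and toward the hand tip.
            offset = np.array([nx, ny], dtype=float) - np.array(wrist_xy, dtype=float)
            if -max_backward_px <= float(np.dot(offset, direction)) <= max_forward_px:
                mask[ny, nx] = True
                queue.append((nx, ny))
    return mask
```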
As still another example, a bounding sphere or other suitable bounding shape, positioned based on skeleton joints (e.g. wrist or hand tip joints), may be used to include all pixels in the depth image up to a depth discontinuity. For example, a window may be slid over the bounding sphere to identify depth discontinuities which may be used to establish a boundary in the hand region of the depth image.
The bounding shape method may also be used to place a bounding shape around a center of the palm of the hand which may be iteratively identified. One example of such an iterative bounding method is disclosed in a presentation by David Tuft, titled “Kinect Developer Summit at GDC 2011: Kinect for XBOX 360,” attached hereto as Attachment 1, and in a publication by K. Abe, H. Saito, S. Ozawa, titled “3D drawing system via hand motion recognition from cameras”, IEEE International Conference on Systems, Man, and Cybernetics, vol. 2, 2000, which publication is incorporated by reference herein in its entirety.
In general, such a method involves several iterative passes to cull pixels from the model. In each pass, the method culls pixels outside the sphere or other shape centered at the hand. Next, the method culls pixels too far from the tip of the hand (along the arm vector). Then the method performs an edge detection step to detect the hand boundary and remove unconnected islands. Example steps from such a method are shown in the flowchart of
It may happen that a user's hand is close to his or her body, or to the user's second hand, in the depth image, and the data from those other body portions will initially be included in the segmented image. Connected component labeling may be performed to label different centroids in the segmented image. The centroid most likely to be the hand is selected based on its size and the location of the hand joint. The centroids not selected may be culled. In step 230, pixels which are too far from the tip of the hand along a vector from the attached arm may also be culled.
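A minimal sketch of this selection step is shown below; it uses a standard connected-component labeling routine, and the scoring that trades off component size against distance to the hand joint is an illustrative assumption rather than the scoring used by the image segmentation engine 194.

```python
# Hedged sketch of the connected-component step: label the segmented image and
# keep the component that best matches an expected hand, judged by its size and
# its distance to the hand joint. The scoring weights and expected area are
# illustrative assumptions, not values used by image segmentation engine 194.
import numpy as np
from scipy.ndimage import center_of_mass, label


def select_hand_component(segmented: np.ndarray, hand_joint_xy: tuple[float, float],
                          expected_area_px: float = 2500.0) -> np.ndarray:
    labels, count = label(segmented)
    best_mask = np.zeros_like(segmented, dtype=bool)
    best_score = np.inf
    for i in range(1, count + 1):
        component = labels == i
        area = float(component.sum())
        cy, cx = center_of_mass(component)
        dist = np.hypot(cx - hand_joint_xy[0], cy - hand_joint_xy[1])
        # Penalize components far from the hand joint or far from the expected size.
        score = dist + 50.0 * abs(area - expected_area_px) / expected_area_px
        if score < best_score:
            best_mask, best_score = component, score
    return best_mask
```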
The skeletal data from the skeletal recognition engine 192 may be noisy, so the data for the hand is further refined to identify the center of the hand. This may be done by iterating over the image and measuring the distance of each pixel to the edge of the silhouette of the hand. The image segmentation engine 194 may then perform a weighted average to estimate the maximum/minimum distance. That is, in step 232, for each pixel in the segmented hand image, a maximum distance along the x and y axes to an edge of the hand silhouette is identified, and a minimum distance along the x and y axes to an edge of the hand silhouette is identified. The distance to the edge is taken as a weight, and a weighted average of the minimum determined distances is then taken across all measured pixels to estimate the likely center of the hand within the image (step 234). Using the new center, the process may be iteratively repeated until the change in the palm center from the previous iteration is within some tolerance.
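The following sketch illustrates such an iterative palm-center refinement under simplifying assumptions: the per-pixel distance to the silhouette edge is approximated with a Euclidean distance transform rather than the per-axis minimum/maximum distances described above, and the window and tolerance values are illustrative.

```python
# Hedged sketch of the iterative palm-center refinement. As a simplification,
# each hand pixel's distance to the silhouette edge is computed with a Euclidean
# distance transform (substituting for the per-axis minimum/maximum distances
# described above) and used as a weight for a centroid estimate, re-evaluated
# within a window around the current center until it converges.
import numpy as np
from scipy.ndimage import distance_transform_edt


def estimate_palm_center(hand_mask: np.ndarray, start_xy: tuple[float, float],
                         window_px: float = 60.0, tol_px: float = 1.0,
                         max_iters: int = 20) -> tuple[float, float]:
    dist_to_edge = distance_transform_edt(hand_mask)    # 0 outside the hand, grows inward
    ys, xs = np.nonzero(hand_mask)
    weights = dist_to_edge[ys, xs]
    center = np.array(start_xy, dtype=float)

    for _ in range(max_iters):
        near = np.hypot(xs - center[0], ys - center[1]) <= window_px
        if not near.any():
            break
        new_center = np.array([np.average(xs[near], weights=weights[near]),
                               np.average(ys[near], weights=weights[near])])
        if np.linalg.norm(new_center - center) < tol_px:
            center = new_center
            break
        center = new_center
    return float(center[0]), float(center[1])
```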
In some approaches, segmenting of hand regions may be performed when a user raises the hand outward from, above, or in front of the torso. In this way, identification of hand regions in the depth image may be less ambiguous since the hand regions may be distinguished from the body more easily. Hand images are particularly clear when a user's hand is oriented palm toward the capture device 20, at which point the features of that hand can be detected as a silhouette. Features may be noisy, but a silhouetted hand allows some informed decisions about what a hand is doing, based on, for example, detecting gaps between fingers, seeing the overall shape of the hand, and mapping that shape using a variety of different approaches. Detecting those gaps and other features allows recognition of particular fingers, and a general direction of where that finger is pointing.
It should be understood that the hand segmentation techniques described above are presented by way of example and are not intended to limit the scope of this disclosure. In general, any hand or body part segmentation method may be used alone or in combination with other methods and/or with one of the example methods described above.
Continuing with the pipeline of
The descriptor extraction engine 196 may use any of a variety of filters in step 210 to extract a shape descriptor. One filter may be referred to as a pixel classifier, which will now be described with reference to the flowchart of
In step 242, the pixel classifier filter determines how many edges of the box are intersected. An intersection is a point where the image transitions from foreground (on the hand) to background (not on the hand). For example,
In step 246, the pixel classifier filter determines whether the intersections are in the same or different edges. As seen in
As opposed to a fingertip, a pixel which intersects its box 280 at four points will be considered a finger for the purposes of defining hand centroids as explained below. For example,
In step 242 of the flowchart of
Referring again to 266, where two edges are intersected, it could be a fingertip, but it could also be a space between two adjacent fingers. The pixel classifier filter therefore checks the corners of the non-intersected edges (step 248). Where the corners of non-intersected edges are solid, this means the box lies on the hand at those corners and the intersection points define a valley between adjacent fingers. Conversely, where the corners of non-intersected edges are empty (as shown in the illustration associated with 266), this means the box lies on background pixels at those corners and the intersection points define a part of the hand.
If the corners are empty, the pixel classifier filter checks at 269 whether the distance, referred to as chord length, between the intersection points is less than the maximum width of a finger (step 252). That is, where there are two intersection points, it could be a fingertip as shown in
In addition to identifying a fingertip or finger, a two-point or a four-point intersection also can reveal a direction in which the fingertip/finger is pointing. For example, in
It may happen that a hand is held with two fingers together, three fingers together, or four fingers together. Thus, after the above steps are run using box 280 for each pixel in the hand, the process may be repeated using a box 280 that is slightly larger than the maximum width of two fingers together, and then repeated again using a box 280 that is slightly larger than the maximum width of three fingers together, etc.
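A simplified sketch of the per-pixel test performed by the pixel classifier filter is given below; as simplifications, all four box corners are checked rather than only the corners of the non-intersected edges, and the box size and finger-width threshold are illustrative assumptions.

```python
# Simplified sketch of the pixel classifier filter's per-pixel test: walk the
# perimeter of a box centered on a hand pixel, count foreground/background
# transitions (intersections), and resolve two-intersection cases into
# "fingertip" or "valley" using the box corners and the chord length between
# the intersection points. Box size and finger-width threshold are assumptions.
import numpy as np


def classify_pixel(hand_mask: np.ndarray, x: int, y: int,
                   half_size: int = 12, max_finger_width_px: float = 20.0) -> str:
    h, w = hand_mask.shape
    r = half_size

    def on_hand(px: int, py: int) -> bool:
        return 0 <= px < w and 0 <= py < h and bool(hand_mask[py, px])

    # Box perimeter walked as a closed loop: top, right, bottom, left edges.
    perimeter = ([(x + i, y - r) for i in range(-r, r)] +
                 [(x + r, y + i) for i in range(-r, r)] +
                 [(x - i, y + r) for i in range(-r, r)] +
                 [(x - r, y - i) for i in range(-r, r)])
    values = [on_hand(px, py) for px, py in perimeter]
    crossings = [i for i in range(len(values)) if values[i] != values[i - 1]]

    if len(crossings) == 4:
        return "finger"        # silhouette enters and leaves the box twice
    if len(crossings) == 2:
        (x1, y1), (x2, y2) = perimeter[crossings[0]], perimeter[crossings[1]]
        chord = float(np.hypot(x2 - x1, y2 - y1))
        corners_solid = all(on_hand(cx, cy) for cx, cy in
                            ((x - r, y - r), (x + r, y - r), (x - r, y + r), (x + r, y + r)))
        if corners_solid:
            return "valley"    # box lies on the hand around a gap between fingers
        if chord < max_finger_width_px:
            return "fingertip"
    return "other"
```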
Once the pixel classifier filter data is gathered, the pixel classifier filter next attempts to construct a hand model from the data in step 258 (
Next, the orientation of the finger region is used to project where the knuckle of that finger is believed to be, based on the skeleton size and an expected finger size. The size, position and orientation of any identified valleys between fingers can also be used to confirm the determined hand model. Next, the projected knuckle position is connected to the palm. Upon completion, the pixel classifier engine determines a skeleton hand model 284, two examples of which are shown in
The above steps will construct a hand model even if one or more portions of the hand are not detected. For example, a finger may have been occluded, or too close to the user's body or other hand to be detected. Or the user may be missing a finger. The pixel classifier filter will construct a hand model using the finger and hand positions that it does detect.
Another filter which may be run in addition to or instead of the pixel classifier filter may be referred to as a curvature analysis filter. This filter focuses on the curvature along the boundaries of the segmented hand silhouette to determine peaks and valleys in an attempt to differentiate fingers. Referring to the flowchart in
Peaks around the hand silhouette are identified in step 289, and each is analyzed with respect to various features of the peak. A peak may be defined by a start point, a peak point and an end point. These three points may form a triangle as explained below. The various features of a peak that may be examined include, for example:
This information may be run through various machine learning techniques in step 290, such as for example a support vector machine, to differentiate fingers and the hand. Support vector machines are known and are described, for example, in C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, 20(3):273-297, September 1995, and in Vladimir N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 1995, both of which are incorporated by reference herein in their entirety. In embodiments, noisy data may be smoothed using a Hidden Markov Model to maintain the state of the hands and filter out noise.
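The following sketch illustrates how per-peak features derived from the start point, peak point and end point might be fed to a support vector machine; the feature set, the use of scikit-learn's SVC, and the placeholder training values are illustrative assumptions rather than a trained classifier of the described system.

```python
# Hedged sketch: per-peak features derived from a peak's start point, peak point
# and end point (opening angle, triangle area, start-to-end span) classified
# with a support vector machine. The feature choice and the placeholder
# training values are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC


def peak_features(start: np.ndarray, peak: np.ndarray, end: np.ndarray) -> np.ndarray:
    a, b = start - peak, end - peak
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    opening_angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    triangle_area = 0.5 * abs(a[0] * b[1] - a[1] * b[0])
    span = np.linalg.norm(end - start)
    return np.array([opening_angle, triangle_area, span])


if __name__ == "__main__":
    # Placeholder training rows of [opening_angle, area, span]; 1 = finger peak.
    X_train = np.array([[0.4, 120.0, 18.0], [0.5, 150.0, 22.0],
                        [1.9, 400.0, 60.0], [2.2, 520.0, 75.0]])
    y_train = np.array([1, 1, 0, 0])
    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    sample = peak_features(np.array([0.0, 0.0]), np.array([8.0, 20.0]), np.array([16.0, 1.0]))
    print(clf.predict([sample]))
```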
The above-described filters may be referred to as silhouette filters in that they examine the data relating to the silhouette of a hand. A further filter which may be used is a histogram filter, referred to as a depth filter in that it uses depth data to construct a hand model. This filter may be used in addition to or instead of the above-described filters, and may be particularly useful when a user has his or her hand pointed toward the image capture device 20.
In the histogram filter, a histogram of distances in the hand region may be constructed. For example, such a histogram may include fifteen bins, where each bin includes the number of points in the hand region whose distance in the Z-direction (depth) from the closest point to the camera is within a certain distance range associated with that bin. For example, the first bin in such a histogram may include the number of points in the hand region whose distance from the closest point is between 0 and 0.40 centimeters, the second bin includes the number of points in the hand region whose distance from the closest point is between 0.40 and 0.80 centimeters, and so forth. In this way, a vector may be constructed to codify the shape of the hand. Such vectors may further be normalized based on estimated body size, for example.
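A minimal sketch of such a histogram filter is given below, assuming depth values in millimeters and bins measured as Z-distance from the closest hand point, per the fifteen-bin, 0.40 centimeter example above; normalization by estimated body size is omitted.

```python
# Hedged sketch of the histogram (depth) filter: a fifteen-bin histogram of each
# hand pixel's Z-distance from the closest point in the hand region, using the
# 0.40 centimeter bin width from the example above. Depth values in millimeters
# and the array conventions are assumptions; normalization by estimated body
# size is omitted.
import numpy as np


def hand_depth_histogram(depth_mm: np.ndarray, hand_mask: np.ndarray,
                         num_bins: int = 15, bin_width_cm: float = 0.40) -> np.ndarray:
    hand_depths_cm = depth_mm[hand_mask].astype(float) / 10.0
    offsets = hand_depths_cm - hand_depths_cm.min()         # distance behind the closest point
    edges = np.arange(num_bins + 1) * bin_width_cm          # 0, 0.4, 0.8, ... cm
    hist, _ = np.histogram(np.clip(offsets, 0.0, edges[-1] - 1e-6), bins=edges)
    return hist / max(hist.sum(), 1)                        # shape vector, normalized to sum to 1
```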
In another example approach, a histogram may be constructed based on distances and/or angles from points in the hand region to a joint, bone segment or palm plane from the user's estimated skeleton, e.g., the elbow joint, wrist joint, etc.
It should be understood that these example shape descriptor filters are exemplary in nature and are not intended to limit the scope of this disclosure. In general, any suitable shape descriptor for the hand region may be used alone or in combination with others and/or with one of the example methods described above. For example, shape descriptors such as the histograms or vectors described above may be mixed and matched, combined, and/or concatenated into larger vectors. This may allow the identification of new patterns that were not identifiable by examining any single descriptor in isolation. These filters may be augmented by the use of historical frame data, which can indicate whether an identified finger, for example, deviates too much from that finger as identified in a previous frame.
In addition to open or closed hand states, the present technology may be used to identify specific finger orientations, such as for example pointing in a particular direction with one or more fingers. The technology may also be used to identify various hand positions oriented at various angles within x, y, z Cartesian space.
In embodiments, various post-classification filtering steps may be employed to increase accuracy of the hand and finger position estimations in step 216 (
In step 220, the pipeline of
The present technology enables a wide variety of interactions with a NUI system such as for example shown in
Other finger and hand interactions are contemplated.
A graphics processing unit (GPU) 608 and a video encoder/video codec (coder/decoder) 614 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the GPU 608 to the video encoder/video codec 614 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 640 for transmission to a television or other display. A memory controller 610 is connected to the GPU 608 to facilitate processor access to various types of memory 612, such as, but not limited to, a RAM.
The multimedia console 600 includes an I/O controller 620, a system management controller 622, an audio processing unit 623, a network interface controller 624, a first USB host controller 626, a second USB host controller 628 and a front panel I/O subassembly 630 that are preferably implemented on a module 618. The USB controllers 626 and 628 serve as hosts for peripheral controllers 642(1)-642(2), a wireless adapter 648, and an external memory device 646 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 624 and/or wireless adapter 648 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 643 is provided to store application data that is loaded during the boot process. A media drive 644 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc. The media drive 644 may be internal or external to the multimedia console 600. Application data may be accessed via the media drive 644 for execution, playback, etc. by the multimedia console 600. The media drive 644 is connected to the I/O controller 620 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
The system management controller 622 provides a variety of service functions related to assuring availability of the multimedia console 600. The audio processing unit 623 and an audio codec 632 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 623 and the audio codec 632 via a communication link. The audio processing pipeline outputs data to the A/V port 640 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 630 supports the functionality of the power button 650 and the eject button 652, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 600. A system power supply module 636 provides power to the components of the multimedia console 600. A fan 638 cools the circuitry within the multimedia console 600.
The CPU 601, GPU 608, memory controller 610, and various other components within the multimedia console 600 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.
When the multimedia console 600 is powered ON, application data may be loaded from the system memory 643 into memory 612 and/or caches 602, 604 and executed on the CPU 601. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 600. In operation, applications and/or other media contained within the media drive 644 may be launched or played from the media drive 644 to provide additional functionalities to the multimedia console 600.
The multimedia console 600 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 600 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 624 or the wireless adapter 648, the multimedia console 600 may further be operated as a participant in a larger network community.
When the multimedia console 600 is powered ON, a set amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's point of view.
In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render the popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
After the multimedia console 600 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 601 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 642(1) and 642(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The cameras 26, 28 and capture device 20 may define additional input devices for the console 600.
In
The computer 741 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 741 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 746. The remote computer 746 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 741, although only a memory storage device 747 has been illustrated in
When used in a LAN networking environment, the computer 741 is connected to the LAN 745 through a network interface or adapter 737. When used in a WAN networking environment, the computer 741 typically includes a modem 750 or other means for establishing communications over the WAN 749, such as the Internet. The modem 750, which may be internal or external, may be connected to the system bus 721 via the user input interface 736, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 741, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
The foregoing detailed description of the inventive system has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the inventive system to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the inventive system and its practical application to thereby enable others skilled in the art to best utilize the inventive system in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the inventive system be defined by the claims appended hereto.
The present application claims priority to U.S. Provisional Patent Application No. 61/493,850, entitled “System for Finger Recognition and Tracking,” filed Jun. 6, 2011, which application is incorporated by reference herein in its entirety.