Embodiments of the present invention are related to the field of video image processing. More specifically, embodiments of the present invention relate to automatically identifying features of objects in an interactive video display system.
One aspect of image processing includes human-computer interaction by detecting human forms and movements to allow interaction with images. Applications of such processing can use efficient or entertaining ways of interacting with images to define digital shapes or other data, animate objects, create expressive forms, etc.
Detecting the position and movement of a human body is referred to as “motion capture.” With motion capture techniques, mathematical descriptions of a human performer's movements are input to a computer or other processing system. Natural body movements can be used as inputs to the computer to study athletic movement, capture data for later playback or simulation, enhance analysis for medical purposes, etc.
Motion capture techniques tend to be complex. Some techniques require the human actor to wear special suits with high-visibility points at several locations. Other approaches use radio frequency or other types of emitters, multiple sensors and detectors, blue screens, extensive post processing, etc. Techniques that rely on simple visible light image capture are not accurate enough to provide well-defined and precise motion capture.
Some motion capture applications allow an actor, or user, to interact with images that are created and displayed by a computer system. For example, an actor may stand in front of a large video screen projection of several objects. The actor can move, or otherwise generate, modify, and manipulate the objects by using body movements. Different effects based on an actor's movements can be computed by the processing system and displayed on the display screen. For example, the computer system can track a path of the actor in front of the display screen and render an approximation, or artistic interpretation of the path onto the display screen. The images with which the actor interacts can be, e.g., on the floor, wall, or other surface, suspended three-dimensionally in space, displayed on one or more monitors, projection screens or other devices. Any type of display device or technology can be used to present images with which a user can control or interact.
In some applications, such as point of sale, retail advertising, promotions, arcade entertainment sites, etc., it is desirable to capture the motion of an untrained user (e.g., a person passing by) in a very unobtrusive way. Ideally, the user will not need special preparation or training and the system will not use unduly expensive equipment. Also, the method and system used to capture the actor's motion should be invisible or undetectable to the user. Many real world applications must work in environments where there are complex and changing background and foreground objects, short capture intervals, and other factors that can make motion capture difficult.
Various embodiments of the present invention, a method and system for detecting a feature of an object in an interactive video display system, are described herein. In one embodiment of the invention, a tip of an object, e.g., a finger, is detected in a vision image. In another embodiment of the invention, a tip of a foot is detected in a vision image. In one embodiment of the invention, a template image stored in memory and comprising weighted values (e.g., a value image) is compared to pixels of a vision image to determine a feature of an object. In one embodiment of the invention, pixels of the foreground/background classification image (e.g., vision image) are multiplied by the corresponding values (for a particular orientation) of the value image to determine the degree to which the object matches the value image. In one embodiment of the invention, multiple orientations of the value image are multiplied by the vision image to determine an orientation of the value image that best matches a feature of an object.
More specifically, embodiments of the present invention include a method for processing captured image information in an interactive video display system. The method includes accessing a region of a vision image. The method further includes comparing the region of the vision image to a first orientation of a value image. The value image comprises a plurality of weighted values representing a feature to be detected. The method further includes comparing the region of the vision image to a second orientation of the value image. The method further includes determining which orientation of the value image best matches the feature to be detected.
Embodiments of the present invention further include a system for processing captured image information in an interactive video display system. The system includes an input for receiving a region of a vision image. The system also includes a comparer for comparing the region of the vision image to a plurality of orientations of a value image. The value image comprises a plurality of weighted values representing a feature of the vision image to be detected. The system further includes a determiner for determining which of the plurality of orientations of the value image best matches the feature to be detected.
Embodiments of the present invention further include a computer usable medium having computer-readable program code embedded therein for causing a computer system to perform a method for processing captured image information in an interactive video display system as described above.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention:
Reference will now be made in detail to various embodiments of the invention, a system and method for sensing features of objects in an interactive video display system, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it is understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the invention, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be recognized by one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the invention.
Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed on computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “sensing” or “comparing” or “multiplying” or “accessing” or “averaging” or “representing” or “transmitting” or “updating” or “identifying” or the like, refer to the action and processes of an electronic system (e.g., projection interactive video display system 100).
Various embodiments of the present invention in the form of one or more exemplary embodiments will now be described. The described embodiments may be implemented on an interactive video display system including a vision system that captures and processes information relating to a scene. The processed information is used to generate certain visual effects that are then displayed to human users via an interactive display device. Human users are able to interact with such visual effects on a real-time basis.
The local computer processes the camera 105 input to discern, on a pixel-by-pixel basis, what portions of the volume in front of surface 102 (e.g., interactive space 115) are occupied by people (or moving objects) and what portions of surface 102 are background (e.g., static images). The local computer may accomplish this by developing several evolving models of what the background is believed to look like, and then comparing its concepts of the background to what camera 105 is currently imaging. The components of the local computer that process camera 105 input are collectively referred to as the vision system. Various embodiments of projection interactive video display system 100 and the vision system are described in co-pending U.S. patent application Ser. No. 10/160,217, filed on May 28, 2002, entitled “INTERACTIVE VIDEO DISPLAY SYSTEM,” by Bell, and assigned to the assignee of the present application, and in co-pending U.S. Provisional Patent Application No. 60/514,024, filed on Oct. 24, 2003, entitled “METHOD AND SYSTEM FOR PROCESSING CAPTURED IMAGE INFORMATION IN AN INTERACTIVE VIDEO SYSTEM,” by Bell, and assigned to the assignee of the present application, both of which are herein incorporated by reference.
Various embodiments of self-contained interactive video display system 150 are described in co-pending U.S. patent application Ser. No. 10/946,263, filed on Sep. 20, 2004, entitled “SELF-CONTAINED INTERACTIVE VIDEO DISPLAY SYSTEM,” by Bell et al., and assigned to the assignee of the present application, co-pending U.S. patent application Ser. No. 10/946,084, filed on Sep. 20, 2004, entitled “SELF-CONTAINED INTERACTIVE VIDEO DISPLAY SYSTEM,” by Bell, and assigned to the assignee of the present application, and co-pending U.S. patent application Ser. No. 10/946,414, filed on Sep. 20, 2004, entitled “INTERACTIVE VIDEO WINDOW DISPLAY SYSTEM,” by Bell, and assigned to the assignee of the present application, all of which are herein incorporated by reference. Furthermore, various embodiments of the vision system are described in co-pending U.S. patent application Ser. No. 10/160,217, filed on May 28, 2002, entitled “INTERACTIVE VIDEO DISPLAY SYSTEM,” by Bell, and assigned to the assignee of the present application, and in co-pending U.S. Provisional Patent Application No. 60/514,024, filed on Oct. 24, 2003, entitled “METHOD AND SYSTEM FOR PROCESSING CAPTURED IMAGE INFORMATION IN AN INTERACTIVE VIDEO SYSTEM,” by Bell, and assigned to the assignee of the present application, both of which are herein incorporated by reference.
According to one embodiment of the interactive video display system (e.g., projection interactive video display system 100), a camera input image of the scene in front of the display is captured and processed by a computer vision system, as described below.
The camera input image (e.g., vision image) is an image representing a real world scene viewed by the camera. This real world scene contains a static background of unknown brightness and appearance as well as various foreground objects that are able to move, such as people and objects held or moved by people. The camera input image may be manipulated or cropped so that the area viewed by the camera corresponds to the boundaries of a display. Embodiments of the present invention determine a location and direction of a feature of an object in the foreground of the camera image.
The computer vision system outputs a foreground/background distinction image (also referred to as a vision image) that corresponds to the camera input image. Each pixel in this image is capable of assuming one of two values: one value for foreground and another value for background. A pixel's value indicates whether the vision system has determined that the pixel at the same position in the camera input image is foreground or background. In one exemplary embodiment, the foreground/background distinction image is an 8-bit grayscale image, with a pixel value of “0” for background and a pixel value of “255” for foreground. In other embodiments, the vision image may represent the foreground/background distinction differently, for example with gradated values representing probabilistic foreground/background assessments. In each case, the objective of the foreground/background distinction processing is to generate a data structure that indicates the position and shape of input objects (e.g., people) within the interactive space.
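By way of illustration, the following Python sketch shows one hypothetical way such a foreground/background vision image could be produced, by thresholding the difference between a camera frame and a background model. The function name make_vision_image and the diff_threshold parameter are illustrative assumptions; the vision system described in the incorporated applications uses more elaborate, evolving background models.

```python
import numpy as np

FOREGROUND = 255  # pixel value for foreground in the vision image
BACKGROUND = 0    # pixel value for background in the vision image

def make_vision_image(camera_frame: np.ndarray,
                      background_model: np.ndarray,
                      diff_threshold: int = 30) -> np.ndarray:
    """Classify each camera pixel as foreground or background.

    camera_frame and background_model are 8-bit grayscale images of the
    same size.  A pixel whose brightness differs from the background
    model by more than diff_threshold is marked as foreground (255);
    all other pixels are marked as background (0).
    """
    diff = np.abs(camera_frame.astype(np.int16) -
                  background_model.astype(np.int16))
    vision = np.where(diff > diff_threshold, FOREGROUND, BACKGROUND)
    return vision.astype(np.uint8)
```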
The camera input image may be preprocessed before being input into the vision system. For example, the image may be blurred slightly to reduce noise, or gamma corrected to increase or decrease the vision system's sensitivity to dark or light areas. In many cases, the camera input image may be cropped, linearly transformed, or otherwise calibrated. Other well-known methods of preprocessing the camera input image could also be used. In one embodiment of the invention, the resolution is decreased to save image processing time. In addition, if a pixel is determined not to resemble the feature being detected, it can usually be assumed that neighboring pixels do not resemble the feature either, so not every pixel needs to be examined.
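The preprocessing steps mentioned above (gamma correction, slight blurring, and resolution reduction) might be sketched as follows; the function preprocess and its parameter choices are hypothetical and are shown only to make the steps concrete.

```python
import numpy as np

def preprocess(camera_frame: np.ndarray,
               gamma: float = 0.8,
               downsample: int = 2) -> np.ndarray:
    """Illustrative preprocessing of an 8-bit grayscale camera image."""
    img = camera_frame.astype(np.float32) / 255.0

    # Gamma correction: gamma < 1 brightens dark areas, increasing the
    # vision system's sensitivity to them; gamma > 1 does the opposite.
    img = np.power(img, gamma)

    # Slight blur to reduce noise: a 3x3 box filter implemented by
    # averaging the image with shifted copies of itself.
    padded = np.pad(img, 1, mode='edge')
    img = np.mean([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)], axis=0)

    # Reduce resolution to save processing time by keeping every
    # `downsample`-th pixel in each dimension.
    img = img[::downsample, ::downsample]

    return (img * 255.0).astype(np.uint8)
```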
Alternatively, the feature matching system may only be run on pixels that fall on the border between a foreground area and a background area on the vision image. In one embodiment of the invention, the feature being detected is a tip, e.g., a finger tip.
It is appreciated that the term “tip” has been used for illustrative purposes only. Embodiments of the present invention can be used to detect the location and orientation of any feature of foreground objects of an interactive video display system. The foreground object can be a user or any object held or manipulated by a user of the interactive video display system. For example, the feature being detected may be the shape formed by two arms crossing in an “X” shape, or the feature may correspond to two users' hands reaching toward each other but not touching. The feature detector can detect any feature of the vision image viewable by the camera.
Embodiments of the present invention determine a tip of an object in the foreground of the vision image to improve user interaction with images displayed by the interactive video display system. Embodiments of the present invention compare the objects of the foreground to template images (e.g., value images) to determine if the object is like the feature represented by the value image. For example, a vision image of an object is compared to a value image comprising weighted regions representing a tip.
In one embodiment of the invention, the foreground object is compared to a plurality of orientations of the value image. The orientation of the value image that is most tip-like can be used to determine the direction of the tip. This further improves user interaction with the displayed objects e.g., menu items, graphical elements, etc.
System 190 further includes a comparer 192. The portions of the vision image are sent from the input 191 to the comparer. In one embodiment of the invention, the comparer 192 compares the portion of the vision image to a plurality of orientations of a value image 196.
A determiner 193 determines which of the plurality of orientations of the value image 196 best matches an object or feature of the portion of the vision image. Based on the match of the value image, a feature sensor 194 determines the location of the features detected by the determiner 193. The result is output to a graphical user interface 195.
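A minimal sketch of how the components of system 190 might map onto code is shown below; the class FeatureDetectionSystem and its method names are hypothetical and merely mirror the input/comparer/determiner roles described above.

```python
import numpy as np

class FeatureDetectionSystem:
    """Illustrative structure mirroring system 190: an input receives a
    region of the vision image, a comparer scores it against several
    orientations of a value image, and a determiner picks the best
    orientation (if any) that exceeds the feature threshold."""

    def __init__(self, value_orientations):
        self.value_orientations = value_orientations   # list of 2-D arrays

    def compare(self, region: np.ndarray):
        """Comparer: score the region against each orientation."""
        on = (region > 0).astype(np.float32)
        return [float(np.mean(on * v)) for v in self.value_orientations]

    def determine(self, scores, threshold: float = 1.0):
        """Determiner: report the index of the best-matching orientation,
        or None if no orientation exceeds the feature threshold."""
        idx = int(np.argmax(scores))
        return idx if scores[idx] > threshold else None
```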
It is appreciated that the values of zero and 255 are arbitrary and could be any values; however, in accordance with embodiments of the invention, background areas are set to a value of zero and foreground areas are set to a positive value. It is also appreciated that background regions 210 of the vision image 200 are considered “off” regions and objects 220 in the foreground are considered “on” regions. A perimeter 225 is defined as the boundary between on region 220 and off region 210. In one embodiment of the invention, the perimeter 225 is a single pixel wide.
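The perimeter described above (the one-pixel-wide boundary between the “on” and “off” regions) could be extracted as in the following sketch; the helper name perimeter_mask is an assumption made for illustration.

```python
import numpy as np

def perimeter_mask(vision: np.ndarray) -> np.ndarray:
    """Return a boolean mask of "on" (foreground) pixels that touch an
    "off" (background) pixel in one of the four cardinal directions.

    The resulting perimeter is one pixel wide and lies on the
    foreground side of the foreground/background boundary.
    """
    on = vision > 0
    # Pad with background so pixels on the image border count as
    # touching an off region.
    padded = np.pad(on, 1, mode='constant', constant_values=False)
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    has_off_neighbor = ~(up & down & left & right)
    return on & has_off_neighbor
```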
In one embodiment of the invention, a tip can be the tip of a finger, the tip of a foot, or the tip of any other object in the vision image. In most cases, a user interacts with displayed objects with the tip of an object, for example the tip of a finger or the tip of a foot (or shoe). However, it is appreciated that the tip could belong to any object used to interact with the displayed images.
It is also appreciated that locating the tip of an object improves recognition of user gestures in which tip movement may define the gesture. For example, in the case of an interactive video game, user gestures can be used to control and interact with the video game. Improved recognition of user gestures enhances the user's experience with the interactive video game.
The value image 300 includes a plurality of regions 320, 321, 323, 324 comprising weighted values (e.g., 20, 10, 0 and −10) around a center point 310. The weighted regions of the value image that fall inside the desired shape being matched have higher values (e.g., region 320 has a value of 20) than the regions outside the desired shape (e.g., region 324 has a value of zero). In one embodiment of the invention, regions closest to the center point 310 are assigned higher weighted values. Region 323 has a value of negative ten.
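A hypothetical construction of such a value image is sketched below. The wedge shape, the specific weights, and the function name build_tip_value_image are illustrative assumptions; the sketch only shows the general idea of higher weights inside the desired shape near the center, negative weights just outside the shape, and a zero average overall.

```python
import numpy as np

def build_tip_value_image(size: int = 15) -> np.ndarray:
    """Construct an illustrative value image for a tip-like feature.

    Weights (e.g., 20, 10, 0 and -10) are arranged around the center
    point: pixels inside the desired tip shape and near the center get
    the highest weights, pixels just outside the shape get negative
    weights, and far-away pixels get zero.  The values are then shifted
    so that the value image averages to zero."""
    v = np.zeros((size, size), dtype=np.float32)
    cy, cx = size // 2, size // 2
    yy, xx = np.mgrid[0:size, 0:size]
    dist = np.hypot(yy - cy, xx - cx)

    # A narrow wedge extending from the center point represents the tip.
    inside = (yy >= cy) & (np.abs(xx - cx) <= (yy - cy) // 2 + 1)
    v[inside & (dist <= 3)] = 20.0        # desired shape, near the center
    v[inside & (dist > 3)] = 10.0         # desired shape, further out
    v[~inside & (dist <= 5)] = -10.0      # just outside the desired shape

    return v - v.mean()                   # make the average value zero
```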
In one embodiment of the invention, individual frames of the vision image are multiplied, pixel by pixel, by the corresponding values of the value image (for a particular orientation) and a result is determined. In one embodiment of the invention, the average value of the value image is equal to zero. In one embodiment of the invention, tip-like features are distinguished from features that are not tip-like based on a threshold applied to the result of the multiplication. If the average value of the result of the multiplication is greater than the selected threshold value, the image is determined to match the value image (for that particular orientation); if the resulting value is less than the threshold value, the image is determined not to match.
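A minimal sketch of this multiply-and-threshold comparison, assuming a zero-mean value image and a threshold of one as in the example below, might look like the following; the names match_score and is_match are hypothetical.

```python
import numpy as np

def match_score(vision_region: np.ndarray, value_image: np.ndarray) -> float:
    """Multiply a region of the vision image, pixel by pixel, by the
    corresponding weights of the value image and return the average of
    the products.  Foreground pixels are treated as 1 and background
    pixels as 0, so the score depends only on which weighted regions
    the foreground overlaps."""
    on = (vision_region > 0).astype(np.float32)
    return float(np.mean(on * value_image))

def is_match(vision_region: np.ndarray, value_image: np.ndarray,
             threshold: float = 1.0) -> bool:
    """A region is considered a match for the feature (e.g., tip-like)
    when its average score exceeds the threshold."""
    return match_score(vision_region, value_image) > threshold
```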
In one embodiment of the invention, each pixel of the vision image is multiplied by the corresponding weighted value of the value image. In one embodiment of the invention, only the pixels on the perimeter of an “on” region of the vision image are multiplied by the corresponding values of the value image.
In one embodiment of the invention, at each pixel of the vision image, several different orientations of the value image are compared to the region surrounding the pixel. The “best fit” is the one with the highest value as computed by multiplying the vision image's region with the value image. If the value for the “best fit” is above the threshold value for tips, it is classified as a tip. If it is lower than the threshold value, it is not classified as a tip. In some cases, multiple tips are detected very close together (e.g., only a few pixels apart). In these cases, embodiments of the present invention may apply a tip thinning technique so that only one tip is recognized per arm or leg, for example. This can be performed many different ways. In one embodiment of the invention, tips that have another tip of a higher value (as computed by the value image multiplication) that is closer to it than a distance of N pixels are deleted.
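The best-fit search over orientations and the tip-thinning step might be sketched as follows; best_orientation and thin_tips are hypothetical helper names, and the greedy suppression shown is only one of the many possible ways to delete weaker tips within N pixels of a stronger one.

```python
import numpy as np

def best_orientation(vision_region, value_orientations):
    """Compare a region of the vision image to every orientation of the
    value image and return (best_score, best_orientation_index)."""
    on = (vision_region > 0).astype(np.float32)
    scores = [float(np.mean(on * v)) for v in value_orientations]
    idx = int(np.argmax(scores))
    return scores[idx], idx

def thin_tips(tips, min_distance: float):
    """Tip thinning: delete any candidate tip that lies within
    min_distance pixels of another candidate with a higher score.

    `tips` is a list of (score, (row, col)) tuples."""
    kept = []
    for score, pos in sorted(tips, key=lambda t: t[0], reverse=True):
        if all(np.hypot(pos[0] - p[0], pos[1] - p[1]) >= min_distance
               for _, p in kept):
            kept.append((score, pos))
    return kept
```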
In one embodiment of the invention, the vision image is multiplied by the value image in a plurality of orientations to determine the orientation of the value image that best matches the shape to be identified in the vision image. In one embodiment of the invention, the value image is compared to the vision image at sixteen different rotational offsets, stepped in a clockwise fashion. In one embodiment of the invention, the pixels on the perimeter of the vision image are multiplied by a plurality of orientations of the value image to determine the direction of the tip. The orientation that results in the highest value is determined to be the direction of the tip (assuming a tip exists at that location).
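Generating the rotated copies of the value image (e.g., sixteen orientations spaced 22.5 degrees apart) could be done as in the sketch below, which uses scipy.ndimage.rotate for the interpolation; the helper name value_image_orientations is an assumption.

```python
import numpy as np
from scipy import ndimage

def value_image_orientations(value_image: np.ndarray,
                             num_orientations: int = 16):
    """Generate rotated copies of the value image at evenly spaced
    angles (e.g., sixteen orientations, 22.5 degrees apart).  The
    orientation whose product with the vision image yields the highest
    value indicates the direction of the detected tip."""
    step = 360.0 / num_orientations
    return [ndimage.rotate(value_image, angle=i * step,
                           reshape=False, order=1, mode='nearest')
            for i in range(num_orientations)]
```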
It is appreciated that the manner in which the value image is scanned over the vision image can be any of many image comparison methods well known in the art.
As stated above, the value image can be weighted such that the average value of the value image is, for example, zero. In one embodiment of the invention, a threshold value is set for the result of multiplying the value image by the vision image (e.g., a threshold value of one). Resulting values of the multiplication that are greater than the threshold value are considered tip-like, and resulting values that are less than the threshold value are not considered tip-like.
In one embodiment of the invention, only the pixels on the perimeter of a foreground area are considered. In this embodiment of the invention, each pixel of the perimeter is compared to a plurality of orientations of the value image. The most positive result is assigned to the pixel and the pixel on the perimeter with the highest value is considered the tip (assuming this value exceeds the threshold value, e.g., threshold value of 0.5).
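Putting the pieces together, a perimeter-only tip search might look like the following sketch, which reuses the hypothetical helpers perimeter_mask and best_orientation from the earlier sketches; extract_region and find_tip are likewise illustrative names.

```python
import numpy as np

def extract_region(vision, center, shape):
    """Crop a (h, w) window of the vision image centered on `center`,
    padding with background (0) at the image borders."""
    h, w = shape
    r, c = center
    padded = np.pad(vision, ((h // 2, h // 2), (w // 2, w // 2)),
                    mode='constant', constant_values=0)
    return padded[r:r + h, c:c + w]

def find_tip(vision, value_orientations, threshold: float = 0.5):
    """Scan only the perimeter pixels of the foreground.  For each such
    pixel, take the best score over all value-image orientations; the
    perimeter pixel with the highest best score is reported as the tip,
    provided that its score exceeds the threshold."""
    h, w = value_orientations[0].shape
    best = (None, -np.inf, None)         # (position, score, orientation)
    for r, c in zip(*np.nonzero(perimeter_mask(vision))):
        region = extract_region(vision, (r, c), (h, w))
        score, idx = best_orientation(region, value_orientations)
        if score > best[1]:
            best = ((r, c), score, idx)
    if best[1] > threshold:
        return best                      # tip position, score, direction
    return None                          # no tip detected
```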
Multiple tips may be chosen for a single foreground area if they are far enough away from each other, or no tips may be chosen at all.
In one embodiment of the invention, locating a tip of an object improves user interaction with the interactive video display system. For example, determining a tip of an object improves recognizing user gestures. This becomes important in interactive games. Recognized user gestures can be used as user input for controlling an interactive game. A movement path of the tip of an object can define a gesture. In one embodiment of the invention, a user tip is interactive with a button, icon or other menu items displayed by the interactive video display system.
It is appreciated that embodiments of the present invention can be used to identify salient features of objects viewed by an interactive video projection system. For example, assume an interactive baseball game wherein a user physically holds a baseball and makes motions with the baseball to interact with projected images. Embodiments of the present invention can determine the position and orientation of the baseball to improve the user experience with the interactive game. In this embodiment of the invention, the value image comprises weighted regions that facilitate recognition of a baseball rather than a tip.
In another embodiment, the present invention can be used to create a gesture game in which the player can make different types of gestures (e.g., a pointing index finger, an “ok” sign, cupping hands together to form a “C,” crossing arms to form an “X,” etc.) to affect the game state. For example, in a fighting game the user could make one gesture to shoot and another to block.
In another embodiment of the invention, persistent attributes of a foreground image are used to identify features. For example, specific curvatures (e.g., small radius curves) of objects can indicate a tip. In another embodiment of the invention, the point furthest from a center point of an object is determined to be the tip of the object. In this embodiment of the invention, the end of an extended leg or arm is determined to be the tip.
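The furthest-point heuristic mentioned above could be sketched as follows; furthest_point_tip is a hypothetical name, and the center of mass of the foreground is used as the center point for illustration.

```python
import numpy as np

def furthest_point_tip(vision: np.ndarray):
    """Alternative heuristic: treat the foreground pixel furthest from
    the foreground's center of mass as the tip (e.g., the end of an
    extended arm or leg)."""
    rows, cols = np.nonzero(vision > 0)
    if rows.size == 0:
        return None                      # no foreground present
    center = (rows.mean(), cols.mean())
    dist = np.hypot(rows - center[0], cols - center[1])
    i = int(np.argmax(dist))
    return (int(rows[i]), int(cols[i]))
```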
At step 702, method 700 includes accessing a region of a vision image, e.g., digital image. In one embodiment of the invention, the vision image comprises background regions and foreground regions. In one embodiment of the invention, the foreground regions comprise a user of an interactive video display system.
At step 704, method 700 includes comparing a portion of the region of the vision image to a first orientation of a value image, wherein the value image comprises a plurality of weighted values representing a feature to be detected. In one embodiment of the invention, the value image represents a feature used to interact with the interactive video display system. For example, the value image can represent a human feature, such as the tip of a finger or the tip of a foot or shoe, or it can represent another object, such as a baseball, hockey stick, fishing rod, or any other object used to interact with the displayed objects of the interactive video display system.
In one embodiment of the invention, only pixels on a perimeter of a foreground object are examined. In this embodiment of the invention, zero or more pixels on the perimeter may be considered tip-like.
At step 706, method 700 includes comparing the portion of the vision image to a second orientation of the value image. In one embodiment of the invention, the direction of the tip is determined. In one embodiment of the invention, multiple orientations of the value image are compared to the portion of the vision image and the orientation that results in the highest value is used to determine the direction of the tip. Other techniques may be used, for example, multiple orientations with high values may be averaged.
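One hypothetical way to average multiple high-value orientations into a single direction estimate is to combine their angles as unit vectors, as in the sketch below; estimate_tip_direction and the top_k parameter are illustrative assumptions.

```python
import numpy as np

def estimate_tip_direction(scores, angles_deg, top_k: int = 3) -> float:
    """Estimate tip direction by averaging the angles of the top-k
    scoring orientations.  Angles are combined as score-weighted unit
    vectors so that the average behaves correctly across the 0/360
    degree wrap-around."""
    order = np.argsort(scores)[::-1][:top_k]
    weights = np.asarray(scores)[order]
    radians = np.deg2rad(np.asarray(angles_deg)[order])
    x = np.sum(weights * np.cos(radians))
    y = np.sum(weights * np.sin(radians))
    return float(np.rad2deg(np.arctan2(y, x)) % 360.0)
```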
At step 708, method 700 includes determining which orientation of the value image best matches said feature to be detected. The value image orientation that has the highest average resulting value when the value image and vision image are multiplied together is considered the most tip-like direction. The above is repeated for multiple different portions of the image in order to locate the best match.
At step 802, method 800 includes accessing a region of a vision image. In one embodiment of the invention, background regions of the vision image are considered “off” regions and foreground objects of the vision image are considered “on” regions. In one embodiment of the invention, on regions are assigned a higher value than off regions for purposes of detecting features in accordance with embodiments of the present invention.
At step 804, method 800 includes multiplying pixels of the region of the vision image by weighted values of a value image. In one embodiment of the invention, all pixels in the “on” regions are evaluated. In other embodiments of the invention, a subset, random or ordered, of the pixels in the “on” regions are evaluated. In another embodiment of the invention, only pixels on a perimeter of the “on” region are evaluated.
At step 806, method 800 includes determining an average value of the pixels after the multiplying of step 804. In one embodiment of the invention, the average value of the pixels is used to determine how well the region matches the value image.
At step 808, method 800 includes comparing the average value of the pixels to a feature threshold value wherein average values greater than the feature threshold value indicate a match between the region of the vision image and the value image.
At step 810, method 800 includes proceeding to the next region. In one embodiment of the invention, the entire vision image is scanned, region by region, in the fashion described above.
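Method 800 as a whole might be sketched as a sliding-window scan over the vision image; the function scan_vision_image and its step parameter are hypothetical, and the threshold of one follows the example given earlier.

```python
import numpy as np

def scan_vision_image(vision, value_image, threshold: float = 1.0,
                      step: int = 4):
    """Slide a window across the vision image: for each region,
    multiply its pixels by the value-image weights, average the
    products, and record the region as a feature match when the
    average exceeds the feature threshold."""
    h, w = value_image.shape
    matches = []
    for r in range(0, vision.shape[0] - h + 1, step):
        for c in range(0, vision.shape[1] - w + 1, step):
            region = (vision[r:r + h, c:c + w] > 0).astype(np.float32)
            avg = float(np.mean(region * value_image))
            if avg > threshold:
                matches.append(((r, c), avg))
    return matches
```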
Embodiments of the present invention may be implemented on a computer system, an example of which is exemplary computer system 900 described below.
Computer system 900 includes an address/data bus 901 for communicating information, a central processor 902 coupled with bus 901 for processing information and instructions, a volatile memory unit 903 (e.g., random access memory, static RAM, dynamic RAM, etc.) coupled with bus 901 for storing information and instructions for central processor 902 and a non-volatile memory unit 904 (e.g., read only memory, programmable ROM, flash memory, EPROM, EEPROM, etc.) coupled with bus 901 for storing static information and instructions for processor 902. Computer system 900 may also contain a display device 906 coupled to bus 901 for displaying information to the computer user. In one embodiment of the invention, display device 906 is a video display projector. Moreover, computer system 900 also includes a data storage device 905 (e.g., disk drive) for storing information and instructions.
Also included in computer system 900 are optional input/output devices coupled to bus 901, such as an input for receiving image information from camera 105.
Embodiments of the present invention, a system and method for sensing a feature of an object in an interactive video display system have been described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the following Claims.