This invention relates generally to computer user interface systems and methods, and more specifically to computer user interface systems and methods employing distance, depth and/or proximity sensors.
A wide variety of computer devices are known, including, but not limited to, personal computers, laptop computers, personal digital assistants (PDAs), and the like. Such computer devices are typically configured with a user interface that allows a user to input information, such as commands, data, control signals, and the like, via user interface devices, such as a keyboard, a mouse, a trackball, a stylus, a touch screen, and the like.
Other devices, such as kiosks, automated teller machines (ATMs), and the like, may include a processor or otherwise be interfaced with a processor. Accordingly, such devices may essentially include a computer and thus may include a similar user interface as employed for computer systems.
Various embodiments described herein are directed to computer user interface systems and methods that provide user input to a computer based on data obtained from at least one distance, depth and/or proximity sensor associated with the computer. In particular, various embodiments involve a plurality of distance, depth and/or proximity sensors associated with the computer.
Various embodiments contemplate computer user interface systems that allow a computer to be controlled or otherwise alter its operation based on information detected by such sensors. In some embodiments, an operating system of the computer may be controlled based on such information. Alternatively or additionally, an active application may be controlled based on such information.
In particular, the information detected by such sensors may define various user contexts. In other words, the sensor(s) may detect a user parameter relative to the computer such that different user contexts may be determined. For example, a user presence context may be determined to be presence of a user, absence of a user or presence of multiple users within a vicinity of the computer. Based on the determined user presence context, the computer may be placed in a particular mode of operation and/or may alter an appearance of information displayed by the computer.
Various other user contexts may be determined based on information or user parameters detected by one or more sensors. A user proximity context may be determined, for example, in terms of a location of a user relative to the computer or a distance of a user from the computer. In such embodiments, an appearance of information displayed by the computer may be altered or otherwise controlled based on the user proximity context. For example, a size and/or a content of the information displayed by the computer may be altered or controlled.
Also, in some embodiments, a user gesture context may be determined based on information or user parameters detected by one or more sensors. In such embodiments, one or more operations of the computer may be performed based on the determined user gesture context. For example, such operations may include, but are not limited to, scrolling, selecting, zooming, or the like. In various embodiments, such operations may be applied to an active application of the computer. In general, user gesture contexts may be employed to allow a user to operate the computer remotely via the one or more sensors, without a remote controller or other auxiliary user manipulated input device.
Thus, various embodiments contemplate a sensor-based computer user interface system. Other embodiments contemplate a method of user interaction with a computer system via one or more sensors. Still further embodiments contemplate a computer readable storage medium including stored instructions that, when executed by a computer, cause the computer to perform any of the various methods described herein and/or any of the functions of the systems disclosed herein.
These and other embodiments and features will be apparent to those of ordinary skill in the art upon reading this disclosure in its entirety, along with the appended claims.
Computer systems and methods disclosed herein may provide an entirely new category of user interface systems as compared to conventional user interfaces. Various embodiments contemplate using one or more sensors as a means for a user to interact with a computer system, either separately or in conjunction with conventional peripheral user input devices, such as mice, keyboards, and the like.
In some embodiments, such an approach may allow a user to interact with a computer remotely, that is, at a distance outside of physical reach, without using a remote controller or other user manipulated input device. In some embodiments, such an approach may allow a computer system to automatically perform various operations in response to changes in user contexts.
As used herein, a user context should be understood as defining how a user relates to a computer system in terms of relative location, distance or proximity, presence or absence, and/or movements or gestures. A user context should be understood as defining such relationships to the computer for multi-user settings as well. It should also be understood that additional information may form part of the user context, including but not limited to, information regarding the activity of the user, the identity of the user, the time and/or date, the location of the computer system, the environment of the computer system/user(s), active application(s), operating system variables, and the like. In other words, any relevant information available to the computer system regarding the user(s) and the user's environment that affects the user's interaction with the computer system may be included, as appropriate or desired, to define one or more user contexts. As described herein, sensor data may provide certain information regarding the user(s), while other information may be provided by other aspects of the computer system, such as user account settings and/or preferences, open applications and/or windows, current/active application, application history, time, date, location, IP address, and the like.
As used herein, a user parameter should be understood as including any detectable parameter associated with a user. This may include a user's physical being, for example, based on which presence, depth, distance, proximity, movements, gestures, and the like may be detected. In some cases, this may also include objects that a user may carry or keep on his person, for example, based on which presence, depth, distance, proximity and/or changes in such values may be detected. For example, a user's wireless communication device (e.g., cell phone, electronic identification card or tag, and the like) may be detected, for example, via wireless communication protocols, such as Bluetooth® or RFID (radio frequency identification).
The systems and methods described herein may provide a sensor-based user interface and/or ways for a user to interact with a computer system via one or more sensors. The sensor or sensors may be of any suitable type, either currently known or hereafter developed, that is configured to detect one or more of depth, distance, proximity, presence, or the like. For example, approaches such as near field radio frequency (RF), ultrasonic, infrared (IR), antenna diversity, or the like may be employed. This list is not intended to be exhaustive, and it should be understood that other sensors may be employed as well, including, but again not limited to, visible light sensors, ambient light sensors, mechanical vibration sensors, and the like. It should be understood that the term sensors as used herein is intended to include systems of emitters and sensors, for example, where the emitters “paint” a field of interest and the sensors detect resulting reflections of objects within the field. It should also be understood that the number of sensors, as well as the type, the sensitivity, the range, the field of detection (angle of incidence), and the like, may be determined based, for example, on the user interaction to be detected and/or the computer system control/response to be implemented. Further, it should be understood that the sensors contemplated herein are relatively “simple” sensors that do not generally require intensive data processing and may provide suitable “reaction time” to changing parameters that indicate changes in user contexts. This does not mean that more complex, processing-intensive systems may not be employed, but rather that such more complex systems may be invoked based on the relatively simple and reactive sensor systems contemplated herein.
In some embodiments, a plurality of sensors may be employed. Based on a known distribution of sensors, such as a linear or two-dimensional array of sensors, various algorithms may be employed to calculate relative states of the sensors and/or changes in the states of the sensors and to map or interpret the states/changes to determine user contexts and/or changes in user contexts. In general, sensor systems employing digital signal processing techniques and state machine logic may be employed to implement the various features described herein.
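By way of a non-limiting sketch, one simple algorithm for mapping the states of a known linear array of sensors to a coarse user location might look as follows. The sensor count, state encoding, and region boundaries are all hypothetical illustration choices, not part of any particular embodiment:

```python
def locate_user(states):
    """Map the binary states of a linear, left-to-right array of proximity
    sensors to a coarse user location: 'absent', 'left', 'center' or 'right'.
    states: list of booleans, one per sensor (hypothetical encoding)."""
    active = [i for i, s in enumerate(states) if s]
    if not active:
        return "absent"
    center = sum(active) / len(active)   # mean index of triggered sensors
    third = len(states) / 3              # split the array into three regions
    if center < third:
        return "left"
    if center < 2 * third:
        return "center"
    return "right"
```

A change in the returned value between successive readings is one simple way such an algorithm could signal a change in user context.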
For gesture extraction, it should be understood that the sensor system and/or context engine may be calibrated and trained to learn a specific set of gestures, for example, from a specific user. Such a process may be adaptive, for example, beginning with a relatively small set of gestures that are more readily detectable or recognizable/distinguishable, and building a user-dependent gesture database with more complex gestures through training and/or use.
Gestures may have time and/or sequence dependence, for example, such that a specific sequence of detected user movements may be interpreted as a gesture. Gestures may also be distance dependent, for example, such that gestures include a distance component. In general, it should be understood that the gesture contexts may be dependent on other user contexts, as discussed herein. The user contexts and changes in user contexts, whether gesture contexts or not, may have corresponding control instructions or commands that are provided to the computer system to effect user control and/or interaction with the computer system.
In some embodiments, the sensor(s) may be configured to measure values. Alternatively or additionally, the sensor(s) may be employed in a binary manner, either on or off. In general, the sensors contemplated herein may measure analog values that may be converted to digital signals and processed by a digital signal processor (DSP), which may be implemented in hardware and/or software. The processed digital signals may be provided to threshold detection logic, for example, to generate binary information about the user context (such as user present or absent), may be provided to multi-threshold logic, for example, to determine information having more than two states (such as, user near/intermediate/far or user left/center/right) and/or may be used as a digital measure (such as user distance or relative location).
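As a minimal sketch of the threshold detection and multi-threshold logic just described, the following illustrates how a filtered distance reading might be reduced to binary presence information or to a three-state near/intermediate/far value. The threshold values and units are hypothetical:

```python
def classify_distance(d, near=0.5, far=1.5):
    """Multi-threshold logic: map a filtered distance reading (hypothetical
    units, e.g. meters) to one of three states."""
    if d < near:
        return "near"
    if d < far:
        return "intermediate"
    return "far"

def is_present(d, max_range=2.0):
    """Binary threshold logic: any reading inside the sensor's working
    range is treated as user present."""
    return d < max_range
```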
As will be further understood from the following description, the user contexts that may be determined based on detected user parameter(s) may include, for example, a user presence context, a user proximity context and a user gesture context. While these contexts are described separately, it should be understood that one or more of such contexts may be combined to define a user context of interest. As discussed above, other information may form part of the user context(s) as well.
In the sensor-based computer user interface systems and methods described herein, the control of and/or response by the computer system may be at an operating system level or at an application level. For example, on the operating system level, the operating system may increase screen brightness if the user is determined to be far from the computer display. The operating system may transfer control of a displayed pointer from a mouse, for example, when the user is near, to gesture detection, for example, when the user is far. In some embodiments, an active application may be controlled or otherwise respond to the information detected by the sensor(s).
Turning now to particular embodiments that provide examples of how sensor-based computer user interface systems and methods may be implemented, a schematic representation of a monitor 100 of a computer system including a sensor-based user interface is shown in
The frame 104 of the monitor 100 may provide a suitable location for sensors 110. As a user or users may typically be positioned in a manner to view the screen 102, the sensors 110 may be mounted to the frame 104 or otherwise incorporated into the monitor 100 in a manner that will be effective for detecting the user(s) and/or user parameters.
Although the sensors 110 are shown as being located at top, bottom, left and right positions, it should be understood that other arrangements are possible and may be suitable for a given application. For example, an arrangement of a plurality of sensors 110 may be suitable for detecting user parameters and/or changes in three dimensions. Also, it should be understood that a single sensor 110 may be employed for some applications. For example, a single proximity sensor may be employed to detect user presence/absence and/or user near/far.
It should be understood that the sensor arrangement illustrated in
In general, the sensor(s) need only be configured to detect a user parameter and/or changes in a user parameter. For example, sensors that are not attached or incorporated into a structure of the computer system may be employed. For example, it may be sufficient to place the sensor(s) at a desired location of use to allow a user to control and/or interact with the computer system from the desired location. In some embodiments in which the sensor or sensors is/are not attached or incorporated into a structure of the computer system, a mechanism may be provided to detect or determine a location of the sensor(s) relative to the computer system or a part thereof. In other words, wired or wireless communication between the sensors and the rest of the system would be needed and, at least for some user contexts, the relative location(s) of the sensor(s) to the computer system and/or each other may be needed.
As discussed above, the sensor-based user interface system 300 may include one or more sensors 310. Operation of the sensors 310 may be supported by a sensor processing or data acquisition module 320. The sensor processing/data acquisition module 320 may be implemented as software and/or hardware, as appropriate or desired. For example, the sensor processing/data acquisition module 320 may include analog to digital conversion, digital signal processing (such as filtering, thresholds, and the like), such as illustrated in
Data or information acquired and/or processed from the sensor(s) 310 by the sensor processing/data acquisition module 320 may be provided to a user context engine 330. The user context engine 330 may be implemented as software and/or hardware, as appropriate or desired. For example, the user context engine 330 may be implemented as a state machine with various levels of complexity, may be based on or incorporate neural networks, fuzzy logic, statistical model building and reasoning based on Bayesian classifications, or the like. In general, the user context engine 330 may be configured to interpret the information from the sensor(s) 310 to determine a corresponding user context and/or changes in the user context.
In particular, the user context engine 330 may employ one or more algorithms to determine user contexts and/or changes in user contexts. The user context engine 330 may access a user context database 340 to determine whether the result(s) of the algorithm(s) correspond to defined user contexts and/or changes in user contexts. Once a user context or change in user context is determined, the user context engine 330 may access a corresponding control instruction, for example, stored in the user context database 340, and provide the corresponding control instruction to the computer system 302 to effect a desired control of and/or response by the computer system 302.
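The lookup step may be sketched, purely for illustration, as a table mapping context transitions to control instructions. The transition keys and instruction names below are hypothetical stand-ins for entries of the user context database 340:

```python
# Hypothetical user context database: maps (previous, current) context
# transitions to control instructions for the computer system.
CONTEXT_DB = {
    ("present", "absent"): "enter_sleep_mode",
    ("absent", "present"): "wake",
    ("near", "far"): "enlarge_display_content",
}

def control_for(previous, current):
    """Return the control instruction for a context transition, or None if
    the transition is not a defined change in user context."""
    return CONTEXT_DB.get((previous, current))
```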
Alternatively, the user context engine 330 may provide the determined user context and/or change in user context to the computer system 302, which may be configured to implement a corresponding control instruction. For example, the computer system 302 may be configured to map user contexts and/or changes in user contexts to corresponding control instructions.
In addition to the information provided by the sensor processing/data acquisition module 320, the user context engine 330 may be configured to receive information from the computer system 302 regarding its operating state. For example, information regarding the particular operational state of, a particular active application on, and/or particular information for display by the computer system 302 may be used by the user context engine 330 to interpret the information received from the sensor(s) to determine the user contexts and/or the changes in user contexts and/or to map the user contexts and/or the changes in user contexts to a particular set of control instructions.
The user context engine 330 may also be configured to “learn” or to be “trained” to recognize various user contexts and/or changes in user contexts, for example, by executing a training algorithm with a user providing various inputs via the sensor(s) 310. In such a manner, the user context database 340 may be populated or “educated.” As appropriate or desired, the user context database 340 may be populated for specific users so that the sensor-based user interface is tailored, for example, to the characteristics and/or mannerisms of the particular user to better define the user parameters to be detected by the sensor(s) 310. Any suitable learning approach may be employed, particularly adaptive learning approaches. In general, the approach should allow the system to be calibrated to decrease and/or minimize false detection and increase and/or maximize positive recognition of gestures.
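One very simple training approach, offered only as a sketch, is template matching: during training, labeled sensor traces are stored for a specific user, and at run time a new trace is matched to its nearest stored template. The distance measure, threshold, and class names are all hypothetical; a practical system might use a richer classifier:

```python
def trace_distance(a, b):
    """Mean absolute difference between two equal-length sensor traces."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

class GestureDB:
    """Hypothetical per-user gesture database populated by training."""
    def __init__(self):
        self.templates = []                 # (label, trace) pairs

    def train(self, label, trace):
        self.templates.append((label, trace))

    def recognize(self, trace, threshold=0.2):
        """Return the label of the nearest template, or None when no
        template is close enough (reducing false detections)."""
        if not self.templates:
            return None
        label, tmpl = min(self.templates,
                          key=lambda lt: trace_distance(trace, lt[1]))
        if trace_distance(trace, tmpl) <= threshold:
            return label
        return None
```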
As discussed above, the sensor-based user interface system 300 may include a plurality of sensors 3101, 3102 . . . 310N. As illustrated in
For example, the sensors 3101, 3102 . . . 310N may be analog sensors such that the respective sensor processing modules 3201, 3202 . . . 320N include an analog-to-digital (A/D) converter 322. Of course, the analog-to-digital converter 322 may be omitted if the corresponding sensor provides data in digital form. Digital data from the analog-to-digital converter 322 may be provided to either a slow filter 324 or a fast filter 326, depending on the reaction time, sensitivity or the like desired for the particular user parameter and/or user context of concern. As discussed above, various approaches to threshold settings may be employed, again as appropriate or desired for the particular user parameter and/or user context of concern. For example, an adaptive threshold circuit or module 327 and/or a multi-level detection logic circuit or module 328 may be employed.
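The slow and fast filters may be sketched, for example, as one-pole low-pass filters that differ only in their smoothing coefficient. The coefficient values below are illustrative assumptions, not prescribed by the described embodiments:

```python
class IIRFilter:
    """One-pole low-pass filter sketch. A small alpha yields a 'slow'
    filter (stable distance estimates); a large alpha yields a 'fast'
    filter (quick reaction, e.g. for gesture detection)."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.y = None

    def update(self, x):
        # First sample initializes the state; later samples are blended.
        self.y = x if self.y is None else self.alpha * x + (1 - self.alpha) * self.y
        return self.y

slow = IIRFilter(alpha=0.1)   # hypothetical slow filter 324
fast = IIRFilter(alpha=0.8)   # hypothetical fast filter 326
```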
Processed sensor information, such as presence, distance, and the like, as discussed herein, may be provided from the sensor processing modules 3201, 3202 . . . 320N to the user context engine 330. As discussed above, the user context engine 330 may employ a state machine, illustrated by the user context state module 332a, the next state logic 332b and the data flow lines. Inputs including the current state and user operating system variables, for example, may be provided to the next state logic 332b to determine the updating of the user context state 332a to the next state based on the processed sensor information provided.
Also, the user context engine 330 may include gesture detection logic 324 that may interpret the processed sensor information to determine whether such information corresponds to various gesture commands, for example, stored in a database of gesture commands as discussed above. The gesture detection logic 324 may base its interpretation of the processed sensor information on changes in user context state and/or other information, such as user OS variables as shown, provided from the computer system 302.
As will be understood from the foregoing, various methods may be envisioned for user interaction with a computer system via a sensor-based user interface. As such, the following description of methods should be understood to be for the sake of illustration, and not limiting.
At S420, an initial user parameter may be detected. As discussed above, the detected user parameter may be the user's body or a part thereof, or an object associated with the user. Based on the detected initial user parameter, an initial user context may be determined at S430. For example, a user presence context may be determined to be that a user or a specific user is present in a certain vicinity of the computer system.
Based on the determined initial user context, an operating system and/or an active application of the computer system may be controlled at S440. For example, when the user presence context is that a user is in the vicinity of the computer system, the computer system may be placed in a use mode—that is, an operational mode in which the computer is running normally.
At S450, a determination may be made as to whether or not a change in the user parameter or user context is detected by the sensor(s). This may involve evaluating information received from the sensor(s) to determine if state(s) of the sensor(s) has/have changed and/or whether such change corresponds to a change in the user context. For example, a user may change location relative to the computer system, which may change the user parameter. The change in the user parameter may or may not be sufficient to cause a change in the user context; on the other hand, the user may change his location relative to the computer significantly enough to result in a change in the user context.
If so, a new user context may be determined at S460. For example, the user may have changed his location relative to the computer system so as to no longer be within the certain vicinity of the computer system. Based on the new user context, the operating system and/or the active application of the computer system may be controlled at S470. For example, when the new user presence context is that a user is not in the certain vicinity of the computer system, the computer system may be placed in a non-use mode—that is, an operational mode in which the computer is running in a less active state, such as a sleep mode or a hibernate mode. Alternatively, the computer system may automatically log out the user so that another person cannot access the computer via that user's login.
It should be understood that such actions by the computer system may be implemented only after a predetermined period of time has elapsed after the change in user parameter is detected and/or the change in user context is determined, as appropriate or desired. For example, a user may be allowed to leave the certain vicinity for a certain amount of time without being logged out or having the computer enter a non-use mode.
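Such a grace period may be sketched as follows. The timeout value and class name are hypothetical; the non-use action fires only after the user has been continuously absent for the configured time:

```python
import time

class PresenceTimeout:
    """Sketch of a grace period before entering a non-use mode: the action
    is triggered only after `grace` seconds of continuous absence."""
    def __init__(self, grace=60.0):          # 60 s is an illustrative value
        self.grace = grace
        self.absent_since = None

    def update(self, present, now=None):
        """Feed the latest presence reading; returns True when the
        non-use action (sleep, logout, etc.) should be taken."""
        now = time.monotonic() if now is None else now
        if present:
            self.absent_since = None         # presence resets the timer
            return False
        if self.absent_since is None:
            self.absent_since = now          # absence just began
        return (now - self.absent_since) >= self.grace
```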
Control may return to S450 for detection of a further change in user parameter and/or user context. If no change is detected, for example, within a predetermined amount of time, control may continue to S480 where a determination may be made as to whether the sensor-based user interface is disabled. If not, control may return to S440 to continue control of the computer system based on the current context. If so, control may proceed to S490 and end. It should be understood, however, that ending the sensor-based user interface operation may be implemented at any time, for example, based on other user input.
While the flowchart in
One example of how a computer system may respond or be controlled based on a change in a user proximity context is illustrated by
When the user proximity context changes by the user moving to a location relatively far from the computer system (or the monitor thereof, for example), the relatively dense visual media 502 may be filtered and/or enlarged to provide information that is usable/viewable from the increased distance. As shown in
Another example of how a computer system may respond or be controlled based on a change in a user proximity context is illustrated by
When the user proximity context changes by the user moving to a location relatively far from the computer system (or the monitor thereof, for example), the relatively dense information 602 may be filtered and/or enlarged to provide information that is usable/viewable from the increased distance. As shown in
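The filtering and enlarging described above may be sketched, for example, as a lookup from the user proximity context to display settings. The specific font sizes and item counts below are purely hypothetical:

```python
# Hypothetical mapping from user proximity context to display settings:
# a far user gets fewer items rendered at a larger size.
DISPLAY_BY_PROXIMITY = {
    "near":         {"font_pt": 12, "max_items": 50},
    "intermediate": {"font_pt": 20, "max_items": 20},
    "far":          {"font_pt": 36, "max_items": 5},
}

def layout_for(proximity, items):
    """Return (font size, filtered item list) for the given proximity context."""
    settings = DISPLAY_BY_PROXIMITY[proximity]
    return settings["font_pt"], items[: settings["max_items"]]
```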
Other features of the display or the computer system may be controlled or otherwise may respond to changes in user proximity contexts and/or user presence contexts. For example, screen brightness may be controlled based on such user contexts to automatically adjust for changing viewing conditions for the user.
Also, for example, content may be hidden/viewable and/or available or active applications may be enabled/disabled based upon user presence contexts. Private or sensitive information may only be viewable when the user presence context is that a single user is present, defining a private user presence context. Such information may be automatically hidden, for example, when a change in the user presence context is determined by detecting the presence of another person in a vicinity of the computer system (or the monitor thereof, for example). Closing or hiding windows based on such a change in user presence context to a public user context may prevent the private or sensitive information from being viewed by others, even if the user is unaware of the other person's presence or forgets that the information is viewable, private or sensitive.
In general, it should be understood that any desired feature of a computer system may be controlled or enabled to automatically respond to changes in user contexts using a sensor-based user interface as described herein. Thus, the foregoing descriptions should be understood to be illustrative, and not exhaustive.
In particular, as discussed above, the sensor-based user interface systems and methods described herein may allow a user to control and/or to interact with a computer system remotely, without the use of a user manipulatable input device or remote controller. In some embodiments, the sensor-based user interface systems and methods may be configured to detect a variety of user movements and/or gestures to determine various user gesture contexts. The various determined user gesture contexts may have corresponding control instructions for the computer system to cause the computer system to perform various operations, which may depend, for example, on an application that is active on the computer system.
The user gesture contexts that are determined based on the detected movements and/or gestures of the user may define an entire set of “multi-touch” display gestures, that is, a set of gestures analogous to the multi-touch gestures developed in other technologies. For example, gestures analogous to those disclosed in the following U.S. patents and published U.S. patent applications may be employed, as appropriate or desired, with the computer user interface systems and methods described herein: U.S. Pat. Nos. 6,323,846, 6,570,557, 6,888,536, 7,030,861, 7,339,580 and RE40,153; and U.S. Patent Publication Nos. 2006/0026521, 2006/0026535, 2006/0026536, 2007/0257890, 2008/0036743, 2008/0158167, 2008/0158168, 2008/0158172, 2008/0204426, 2008/0211775, 2008/0211783, 2008/0211784 and 2008/0211785. For example, movement of the user's hand left-to-right, right-to-left, up-to-down, and down-to-up may effect a respective scrolling control. Movement of a user's hand from far-to-close may effect a selecting control. Movement of the user's hands from left and right-to-middle or from up and down-to-middle may effect a zooming control. The opposite directional gesture may zoom out.
A simple example of pseudo-code for a “scroll left-to-right gesture” may be as follows. A user interface system including three distance/depth sensors disposed on a computer monitor, for example, a left sensor (LS), a middle sensor (MS) and a right sensor (RS), may be configured to detect a user at distances up to a couple of feet, for example, with an angular directionality of less than five degrees. The sensor processing of the system may be configured to produce a distance function DISTANCE with values of NEAR, MIDRANGE and FAR from a set of distance measurements d and a presence function PRESENCE with values of YES and NO.
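One possible reading of such pseudo-code, offered only as an illustrative sketch, recognizes a left-to-right sweep when presence is detected at LS, then MS, then RS, each step within a short time window. The window length is a hypothetical value, and the function names are stand-ins for the DISTANCE/PRESENCE outputs described above:

```python
SWEEP_ORDER = ["LS", "MS", "RS"]   # left, middle, right sensors
WINDOW = 0.5                       # max seconds between steps (hypothetical)

def detect_scroll_right(events):
    """events: time-ordered list of (sensor_name, timestamp) presence
    detections. Returns True if they contain an in-order LS -> MS -> RS
    sweep with each successive step within WINDOW seconds of the last."""
    expected = 0        # index into SWEEP_ORDER of the next sensor awaited
    last_t = None
    for sensor, t in events:
        if sensor == SWEEP_ORDER[expected] and (last_t is None
                                                or t - last_t <= WINDOW):
            expected += 1
            last_t = t
            if expected == len(SWEEP_ORDER):
                return True         # full sweep seen: scroll left-to-right
    return False
```

A right-to-left sweep for the opposite scrolling control could be detected symmetrically by reversing the expected sensor order.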
The control instructions provided to the computer system for each of the user gesture contexts may be applied to an active (e.g., topmost) application, or to an entire desktop view, as appropriate or desired.
One example of a method of user interaction with a computer system via a sensor-based user interface system is illustrated in
It should be understood that the operations of S710 are optional and need not be included, for example, where the user gesture contexts are relatively simple and/or limited. Also, once the sensor-based user interface system has been “trained” for a particular user, the sensor-based user interface system may operate without such operations being performed, for example, with only a user identification operation that accesses user gesture context information stored based on the training. Alternatively or additionally, the “training” of the sensor-based user interface system may be performed and/or continue during its normal use.
Normal use may be defined by S720 through S780 illustrated in
Next, at S730, a user movement or gesture (user parameter) may be detected by the one or more sensors of the sensor-based user interface system. The data or information from the sensor(s) may be processed at S740, such as described above. Based on the processing, a determination may be made at S750 as to whether a user gesture context is recognized. If not, control may return to S730 to continue to detect user movements or gestures.
If a user gesture context is recognized at S750, control may proceed to S760 where a control instruction corresponding to the recognized gesture context may be obtained and output to the computer system. Thus, the computer system may implement an operation based on the control instruction corresponding to the recognized gesture context.
Next, at S770, a determination may be made as to whether gesture control has been ended. If not, control may return to S730 to continue to detect user movements or gestures. If so, control may proceed to S780 and end. In some embodiments, a gesture context may have a corresponding control instruction that ends gesture control. However, it should be understood that gesture control may be ended in any appropriate or desired manner.
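The normal-use loop of S730 through S780 may be sketched as follows. All function parameters are hypothetical callbacks standing in for the sensor, recognition, and control components described above:

```python
def gesture_loop(read_movement, recognize, execute, ended):
    """Sketch of the S730-S780 loop: detect a movement, process it, and
    execute the corresponding control instruction when a gesture context
    is recognized; repeat until gesture control is ended."""
    while not ended():                   # S770: has gesture control ended?
        movement = read_movement()       # S730: detect user movement
        context = recognize(movement)    # S740/S750: process and classify
        if context is not None:
            execute(context)             # S760: output control instruction
```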
The foregoing merely illustrates certain principles of the invention. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements and methods which, although not explicitly shown or described herein, embody the principles disclosed in this document and are thus within the spirit and scope of the present invention. From the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustration only and are not intended to limit the scope of the present invention. References to details of particular embodiments are not intended to limit the scope of the invention.
7964835 | Olsen et al. | Jun 2011 | B2 |
8044880 | Nakamura et al. | Oct 2011 | B2 |
20020021288 | Schug | Feb 2002 | A1 |
20020095222 | Lignoul | Jul 2002 | A1 |
20030038927 | Alden | Feb 2003 | A1 |
20030051181 | Magee et al. | Mar 2003 | A1 |
20030086013 | Aratani | May 2003 | A1 |
20030117343 | King | Jun 2003 | A1 |
20040193413 | Wilson et al. | Sep 2004 | A1 |
20050034147 | Best et al. | Feb 2005 | A1 |
20050132408 | Dahley et al. | Jun 2005 | A1 |
20050168583 | Thomason | Aug 2005 | A1 |
20050182962 | Given et al. | Aug 2005 | A1 |
20050243019 | Fuller et al. | Nov 2005 | A1 |
20050280786 | Moiroux et al. | Dec 2005 | A1 |
20060140452 | Raynor et al. | Jun 2006 | A1 |
20060197843 | Yoshimatsu | Sep 2006 | A1 |
20070027580 | Ligtenberg et al. | Feb 2007 | A1 |
20070069030 | Sauerwein et al. | Mar 2007 | A1 |
20070105072 | Koljonen | May 2007 | A1 |
20070177279 | Cho et al. | Aug 2007 | A1 |
20070236485 | Trepte | Oct 2007 | A1 |
20070300091 | Lee | Dec 2007 | A1 |
20070300312 | Chitsaz et al. | Dec 2007 | A1 |
20080062164 | Bassi et al. | Mar 2008 | A1 |
20080131107 | Ueno | Jun 2008 | A1 |
20080158362 | Butterworth | Jul 2008 | A1 |
20080174427 | Banerjee et al. | Jul 2008 | A1 |
20080191864 | Wolfson | Aug 2008 | A1 |
20090008683 | Nishizawa | Jan 2009 | A1 |
20090027337 | Hildreth | Jan 2009 | A1 |
20090051797 | Yao | Feb 2009 | A1 |
20090058842 | Bull et al. | Mar 2009 | A1 |
20090079765 | Hoover | Mar 2009 | A1 |
20090115915 | Steinberg et al. | May 2009 | A1 |
20090150551 | Pagan | Jun 2009 | A1 |
20090221368 | Yen et al. | Sep 2009 | A1 |
20090262306 | Quinn et al. | Oct 2009 | A1 |
20090262343 | Archibald | Oct 2009 | A1 |
20090273679 | Gere et al. | Nov 2009 | A1 |
20090296997 | Rocheford | Dec 2009 | A1 |
20090309826 | Jung et al. | Dec 2009 | A1 |
20100060803 | Slack et al. | Mar 2010 | A1 |
20100061659 | Slack et al. | Mar 2010 | A1 |
20100073499 | Gere et al. | Mar 2010 | A1 |
20100079426 | Pance et al. | Apr 2010 | A1 |
20100079468 | Pance et al. | Apr 2010 | A1 |
20100079653 | Pance | Apr 2010 | A1 |
20100079884 | Gere et al. | Apr 2010 | A1 |
20100103172 | Purdy | Apr 2010 | A1 |
20110074931 | Bilbrey et al. | Mar 2011 | A1 |
20110075055 | Bilbrey | Mar 2011 | A1 |
20110115964 | Gere | May 2011 | A1 |
20110149094 | Chen et al. | Jun 2011 | A1 |
20120044328 | Gere | Feb 2012 | A1 |
20120076363 | Kessler et al. | Mar 2012 | A1 |
Number | Date | Country |
---|---|---|
167314 | Jan 1986 | EP |
2053844 | Apr 2009 | EP |
2002354493 | Dec 2002 | JP |
WO9311631 | Jun 1993 | WO |
WO2007100057 | Sep 2007 | WO |
WO2009001512 | Dec 2008 | WO |
Entry |
---|
Author Unknown, “YCbCr,” http://en.wikipedia.org/wiki/Y%27CbCr, 4 pages, at least as early as Jun. 17, 2010. |
Koschan et al., “Finding Objects in a 3D Environment by Combining Distance Measurement and Color Indexing,” IEEE, vol. 1, pp. 858-861, Oct. 2001. |
Sokolova et al., “Experiments in Stereo Vision,” Computer Science 570, Final Project, http://disparity.wikidot.com/, 14 pages, at least as early as Jun. 16, 2010. |
Number | Date | Country |
---|---|---|
20100083188 A1 | Apr 2010 | US |