Image sensors are used in a wide variety of devices. Electrical signals from image sensors may be amplified and processed to provide a visual representation of a scene. The magnitude of the amplification (gain setting) may affect the appearance of the image.
For example, if a scene to be imaged contains both a bright region and a dark region, a gain setting may be calculated to provide a compromise between the appearance of the bright region and that of the dark region in the image. The dynamic range of the sensor (e.g., as limited by the charge storage capacity or by the bit width of an analog-to-digital converter used to measure the charge) may be insufficient to capture the worst-case dynamic range of a particular scene. As such, some regions of an image may not appear as desired.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
According to one aspect of this disclosure, calculating a gain setting for a primary image sensor includes receiving a test matrix of pixels from a test image sensor and receiving a first-frame matrix of pixels from the primary image sensor. A gain setting is calculated for the primary image sensor using the first-frame matrix of pixels, except those pixels imaging one or more exclusion regions identified from the test matrix of pixels.
The present description is related to calculating an exposure setting for a primary image sensor, including but not limited to a visible-light digital camera sensor. An exposure setting may include any number of settings, such as a gain setting, an aperture setting, or a shutter speed. As one example, a gain setting may be calculated in accordance with the present disclosure based on a subset of pixels that do not image one or more exclusion regions identified using a test image sensor, such as a depth camera.
At 110, the method 100 includes receiving a test matrix of pixels from a test image sensor. In some embodiments, the test image sensor is used to observe and output infrared light information in the form of a test matrix of pixels. Each pixel in the test matrix may include infrared light information indicating an intensity of infrared light observed at that pixel. However, in some embodiments the test image sensor may be a visible light image sensor, and each pixel in the test matrix may include visible light information indicating an intensity of visible light observed at that pixel.
The test matrix may be received from the test image sensor via any suitable communication channel. Wired or wireless connections may be used.
At 120, the method 100 includes receiving a first-frame matrix of pixels from a primary image sensor. The primary image sensor may be any sensor suitable for observing and outputting light information, such as a CCD or CMOS image sensor. In some embodiments, the primary image sensor is used to observe and output visible light information in the form of a first-frame matrix of pixels. In some embodiments, each pixel in the first-frame matrix includes visible light information indicating an intensity of visible light observed at that pixel. For example, each pixel may include an intensity value for each of one or more different color channels. The first-frame matrix of pixels, as well as the individual pixels, may be represented using any suitable data structure without departing from the scope of this disclosure.
The first-frame matrix may be received from the primary image sensor via any suitable communication channel. Wired or wireless connections may be used.
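As a nonlimiting illustration, the test matrix and first-frame matrix might be held as simple two-dimensional arrays. The sketch below assumes NumPy, and the resolutions, data types, and variable names are hypothetical rather than required by this disclosure.

```python
import numpy as np

# Assumed (hypothetical) sensor resolutions.
TEST_WIDTH, TEST_HEIGHT = 320, 240
PRIMARY_WIDTH, PRIMARY_HEIGHT = 1280, 720

# Test matrix: one infrared intensity value per pixel.
test_matrix = np.zeros((TEST_HEIGHT, TEST_WIDTH), dtype=np.uint16)

# First-frame matrix: one intensity value per color channel per pixel (RGB here).
first_frame = np.zeros((PRIMARY_HEIGHT, PRIMARY_WIDTH, 3), dtype=np.uint8)
```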
At 130, the method 100 includes identifying one or more exclusion regions imaged by a subset of the test matrix of pixels. Exclusion regions may include any portion of a scene that is not to be used for the purposes of calculating an exposure setting. As a nonlimiting example, a user may want to image an object in a scene which is backlit by a light source. The bright background of the scene may be identified as an exclusion region, and the exposure setting may be calculated without considering the bright background. Any suitable method and/or criteria may be utilized to identify exclusion regions.
For example, identifying exclusion regions may include finding a depth of an object observed at each pixel in the test matrix. The depth of an object may be found by using the test image sensor in a depth camera, for example. The depth camera may determine, for each pixel in the test matrix, the three dimensional depth of a surface in the scene relative to the depth camera. Virtually any depth finding technology may be used without departing from the scope of this disclosure (e.g., structured light, time-of-flight, stereo vision, etc.). When a depth camera is used, the three dimensional depth information determined for each pixel may be used to generate a depth image. Depth images may take the form of virtually any suitable data structure, including but not limited to, a matrix of pixels, where each pixel indicates a depth of an object observed at that pixel.
The pixel(s) from the depth image may be mapped to the corresponding pixel(s) from the primary image sensor in contemporaneous and/or subsequent frames. In this way, the depth of an object imaged by the primary image sensor may be determined. The exclusion regions may include a subset of pixels that image an object that is not within a threshold tolerance of a reference depth. For example, all pixels from the primary image sensor that image objects that are closer than a near-limit and/or are farther than a far-limit, as determined by the test matrix of pixels, may be effectively ignored for purposes of determining an exposure setting. As explained below, such exclusion regions may be applied to exposure calculations for one or more frames.
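As a nonlimiting illustration, the following sketch computes a depth-based exclusion mask once the depth image has been mapped to the primary sensor's pixel grid. The function name and the near/far limits are illustrative assumptions, not requirements of this disclosure.

```python
import numpy as np

def depth_exclusion_mask(depth_map, near_limit=0.8, far_limit=4.0):
    """Return a boolean mask that is True for pixels to exclude.

    depth_map: 2-D array of per-pixel depths (meters), already registered
    to the primary image sensor's pixel grid. Pixels imaging objects closer
    than near_limit or farther than far_limit are excluded from the
    exposure calculation.
    """
    return (depth_map < near_limit) | (depth_map > far_limit)
```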
As another example, identifying exclusion regions may include finding a surface temperature of an object observed at each pixel. The test matrix may be used to determine the surface temperature. The surface temperature may be found via any number of techniques, such as using a lookup table to correlate an intensity of infrared light to a surface temperature of an object.
The temperature(s) determined using the test matrix may be mapped to the pixel(s) from the primary image sensor. In this way, the surface temperature of an object imaged by the primary image sensor may be determined. The exclusion regions may include a subset of pixels that image an object that is not within a threshold tolerance of a reference surface temperature. For example, all pixels from the primary image sensor that image objects that are hotter than a high-limit and/or cooler than a cool-limit, as determined by the test matrix of pixels, may be effectively ignored for one or more frames when determining an exposure setting. The cool-limit could be utilized when focusing on human subjects, and the high-limit could be utilized to ignore a lamp, for example.
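As a nonlimiting illustration, a temperature-based exclusion mask might be computed as sketched below. The intensity-to-temperature calibration table, the limits, and the function name are hypothetical stand-ins for whatever calibration a given system uses.

```python
import numpy as np

# Hypothetical calibration: infrared intensity (sensor counts) -> temperature (deg C).
IR_COUNTS = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0])
TEMPS_C = np.array([5.0, 15.0, 25.0, 35.0, 60.0])

def temperature_exclusion_mask(ir_matrix, cool_limit=20.0, high_limit=45.0):
    """Exclude pixels whose estimated surface temperature falls outside the limits.

    ir_matrix: 2-D array of infrared intensities registered to the primary
    sensor's pixel grid. Temperatures below cool_limit or above high_limit
    mark the pixel for exclusion.
    """
    temps = np.interp(ir_matrix.astype(float), IR_COUNTS, TEMPS_C)
    return (temps < cool_limit) | (temps > high_limit)
```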
As another example, identifying exclusion regions may include finding regions of high active and/or ambient light. As a nonlimiting example, high active and/or ambient light may be determined by projecting infrared light onto a scene and imaging the scene with a first matrix of pixels while the infrared light is being projected. The same scene can also be imaged with a second matrix of pixels when the infrared light is not being projected onto the scene. By comparing the images with and without the projected infrared light, active and/or ambient light can be identified. Any suitable comparison technique may be used, including but not limited to subtracting the first matrix from the second matrix.
By comparing these matrices, it may be determined which pixels in the first-frame matrix image regions of high ambient light. High ambient light regions may result from a lamp, the sun, or anything else adding light to one or more regions of a scene. The exclusion regions may include a particular pixel if the intensity of infrared light observed at that pixel while infrared light is being projected to that pixel is within a threshold tolerance of the intensity of infrared light observed at that pixel while infrared light is not being projected to that pixel. In other words, if the pixel images an area that is saturated with enough infrared light that the addition of projected infrared light does not cause a significant difference, that pixel can be included in an exclusion region.
While the above testing for ambient light was described with reference to projecting and imaging infrared light, it is to be understood that a similar technique may be used with visible light.
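As a nonlimiting illustration, the comparison of the projected-light and non-projected-light matrices described above might be implemented as sketched below; the tolerance value and function name are assumptions.

```python
import numpy as np

def ambient_light_exclusion_mask(ir_with_projection, ir_without_projection,
                                 tolerance=50):
    """Exclude pixels that look nearly the same with and without projected IR.

    If adding the projected infrared light changes the observed intensity by
    no more than `tolerance`, the pixel is treated as imaging a region already
    saturated by active/ambient light (e.g., a lamp or sunlight) and excluded.
    """
    difference = np.abs(ir_with_projection.astype(int)
                        - ir_without_projection.astype(int))
    return difference <= tolerance
```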
At 140, the method 100 may include calculating an exposure setting for the primary image sensor using the first-frame matrix of pixels except those pixels imaging one or more exclusion regions identified from the test matrix of pixels. For example, a histogram of the first-frame matrix of pixels except the pixels imaging the one or more exclusion regions may be constructed. The exposure setting may be calculated based on this histogram.
A gain setting or other exposure control may be adjusted so that a histogram of pixel intensities has a desired character. As an example, the exposure can be increased and/or decreased so that an average intensity, as represented by the histogram, is within a desired range. Because the histogram does not consider pixels from the exclusion regions, those pixels do not influence the adjustments. As a result, the adjustments are tuned to improve the exposure of the pixels that are not part of the exclusion regions. It should be appreciated that virtually any technique for calculating an exposure setting based on a histogram may be used without departing from the scope of this disclosure.
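As a nonlimiting illustration, the sketch below builds a histogram from the non-excluded pixels and nudges the gain so that their average intensity approaches a target value. The target, gain limits, and function name are illustrative assumptions; virtually any histogram-based adjustment could be substituted.

```python
import numpy as np

def calculate_gain(first_frame, exclusion_mask, current_gain,
                   target_mean=110.0, min_gain=1.0, max_gain=16.0):
    """Compute a new gain from the non-excluded pixels of the first-frame matrix.

    first_frame:    2-D (grayscale) or 3-D (color) array from the primary sensor.
    exclusion_mask: boolean array, True where pixels should be ignored.
    """
    if first_frame.ndim == 3:
        intensity = first_frame.mean(axis=2)   # collapse color channels
    else:
        intensity = first_frame.astype(float)

    included = intensity[~exclusion_mask]      # drop exclusion-region pixels
    if included.size == 0:                     # nothing left to meter on
        return current_gain, None

    hist, _ = np.histogram(included, bins=256, range=(0, 255))

    # One simple "desired character": steer the mean intensity toward target_mean.
    mean_intensity = max(float(included.mean()), 1.0)
    new_gain = current_gain * (target_mean / mean_intensity)
    return float(np.clip(new_gain, min_gain, max_gain)), hist
```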
In this way, those pixels imaging exclusion regions may not be considered when calculating an exposure setting, and thus the exposure setting may provide a better image in regions of interest.
It is to be understood that the test matrix of pixels and the first-frame matrix of pixels may be sampled at different times relative to one another without departing from the scope of this disclosure. In some embodiments, the test matrix of pixels will be sampled prior to the first-frame matrix of pixels.
In some embodiments, the primary image sensor may provide sequential frames which collectively form a video representation of a scene. At 150, the method 100 may include applying the calculated exposure setting to the primary image sensor for subsequent matrices of pixels. The exposure setting may be recalculated periodically or in response to an event, such as every frame, every other frame, after a fixed duration, every time the gaming system is powered on, and/or when a threshold change in lighting or scene composition is detected, for example.
The period may be dependent on whether or not the primary image sensor is substantially stationary. For example, for a fixed camera, if the user frequently eclipses a light source from the camera perspective, that region may always be excluded to avoid variability in the exposure/gain settings.
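As a nonlimiting illustration, such periodic recalculation might be organized as sketched below. The sensor iterators, helper functions, and recalculation period are hypothetical placeholders; the exclusion and gain functions stand in for the techniques described above.

```python
RECALC_PERIOD = 30  # recompute the exposure setting every 30 frames (assumed)

def capture_loop(primary_frames, test_frames, exclusion_fn, gain_fn):
    """Illustrative frame loop; the arguments are hypothetical stand-ins."""
    gain = 1.0
    for frame_index, frame in enumerate(primary_frames):
        exposed = frame * gain                    # apply the current gain setting
        if frame_index % RECALC_PERIOD == 0:      # periodically recompute the gain
            test_matrix = next(test_frames)
            # Mask assumed already mapped to the primary sensor's pixel grid.
            mask = exclusion_fn(test_matrix)      # identify exclusion regions
            gain, _ = gain_fn(frame, mask, gain)  # meter on non-excluded pixels only
        yield exposed                             # new gain affects subsequent frames
```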
The display device 230 may be operatively connected to the gaming system 240 via a display output of the gaming system. For example, the gaming system may include an HDMI or other suitable display output. Likewise, the primary image sensor 210, test image sensor 220, and infrared light source 290 may be operatively connected to the gaming system 240 via one or more inputs. As a nonlimiting example, the gaming system 240 may include a universal serial bus to which a device including the image sensors may be connected. Additionally or alternatively, the image sensors and the gaming system 240 may be configured to wirelessly communicate with one another.
Gaming system 240 may be used to play a variety of different games, play one or more different media types, and/or control or manipulate non-game applications and/or operating systems.
In the illustrated embodiment, the test image sensor is used in a depth camera capable of generating depth maps and temperature maps and otherwise observing and outputting infrared light information, as described above.
The image of the scene may be displayed on the display device 230, further processed, or sent to another device, for example. Exposure settings for the primary image sensor may be calculated using a previously sampled image.
A simplified representation of an example first-frame image 300 is shown in the accompanying figure.
For example, exclusion region 380 includes pixels that image an object (e.g., the background person 280) not within a threshold tolerance of a reference depth. Exclusion region 370 includes pixels that image an object (e.g., the couch 270) not within a threshold tolerance of a reference surface temperature.
Exclusion region 360 includes pixels where the intensity of infrared light observed at that pixel while infrared light is being projected to that pixel is within a threshold tolerance of the intensity of infrared light observed at that pixel while infrared light is not being projected to that pixel. In other words, exclusion region 360 includes those pixels imaging the active light source 260.
A histogram of the first-frame matrix of pixels except the pixels imaging the one or more exclusion regions may be constructed. In the illustrated embodiment, the histogram does not include those pixels imaging the active light source 260, the couch 270, or the background person 280. Using the histogram, the exposure setting may be calculated and applied for subsequent matrices of pixels. As described above, this process may be repeated periodically.
In some embodiments, the above described methods and processes may be tied to a computing system including one or more computers. In particular, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product.
Computing system 400 includes a logic subsystem 402 and a data-holding subsystem 404. Computing system 400 may optionally include a display subsystem 406, a communication subsystem 408, a sensor subsystem 410, and/or other components not shown.
Logic subsystem 402 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors that are configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Data-holding subsystem 404 may include one or more physical, non-transitory, devices configured to hold data and/or instructions executable by the logic subsystem to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 404 may be transformed (e.g., to hold different data).
Data-holding subsystem 404 may include removable media and/or built-in devices. Data-holding subsystem 404 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 404 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 402 and data-holding subsystem 404 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
It is to be appreciated that data-holding subsystem 404 includes one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
When included, display subsystem 406 may be used to present a visual representation of data held by data-holding subsystem 404. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display subsystem 406 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 406 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 402 and/or data-holding subsystem 404 in a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 408 may be configured to communicatively couple computing system 400 with one or more other computing devices. Communication subsystem 408 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As nonlimiting examples, the communication subsystem may be configured for communication via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, etc. In some embodiments, the communication subsystem may allow computing system 400 to send and/or receive messages to and/or from other devices via a network such as the Internet.
In some embodiments, sensor subsystem 410 may include a depth camera 414. Depth camera 414 may include left and right cameras of a stereoscopic vision system, for example. Time-resolved images from both cameras may be registered to each other and combined to yield depth-resolved video.
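As a nonlimiting illustration, once the two stereo images are registered and a per-pixel disparity is computed, depth may follow from the standard rectified-stereo relation depth = focal_length × baseline / disparity; the sketch below assumes that relation, and the names and parameters are illustrative.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a per-pixel disparity map from rectified stereo cameras to depth.

    Applies depth = focal_length * baseline / disparity; pixels with zero or
    negative disparity are assigned an infinite depth.
    """
    disparity = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0,
                        focal_length_px * baseline_m / disparity,
                        np.inf)
```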
In other embodiments, depth camera 414 may be a structured light depth camera configured to project a structured infrared illumination comprising numerous, discrete features (e.g., lines or dots). Depth camera 414 may be configured to image the structured illumination reflected from a scene onto which the structured illumination is projected. Based on the spacings between adjacent features in the various regions of the imaged scene, a depth map of the scene may be constructed.
In other embodiments, depth camera 414 may be a time-of-flight camera configured to project a pulsed infrared illumination onto the scene. The depth camera may include two cameras configured to detect the pulsed illumination reflected from the scene. Both cameras may include an electronic shutter synchronized to the pulsed illumination, but the integration times for the cameras may differ, such that a pixel-resolved time-of-flight of the pulsed illumination, from the source to the scene and then to the cameras, is discernable from the relative amounts of light received in corresponding pixels of the two cameras.
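As a nonlimiting illustration only, one common pulsed time-of-flight formulation estimates depth from the ratio of light collected during two differently timed gates; the sketch below assumes that scheme and is not a description of the specific synchronization used by depth camera 414.

```python
C_M_PER_S = 3.0e8  # approximate speed of light in meters per second

def gated_tof_depth(q_short, q_delayed, pulse_width_s):
    """Estimate depth from two gated exposures of the same reflected IR pulse.

    q_short:   light collected during a gate synchronized with the emitted pulse.
    q_delayed: light collected during a gate delayed by one pulse width.
    The farther the surface, the larger the fraction of the reflected pulse that
    arrives during the delayed gate, so depth scales with the ratio below.
    """
    total = q_short + q_delayed
    ratio = q_delayed / total if total > 0 else 0.0
    return 0.5 * C_M_PER_S * pulse_width_s * ratio
```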
In some embodiments, sensor subsystem 410 may include a visible light camera 416. Virtually any type of digital camera technology may be used without departing from the scope of this disclosure. As a nonlimiting example, visible light camera 416 may include a CCD image sensor.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.