Some devices combine infrared information and color information to achieve various imaging effects. These devices may obtain the color information from a first CMOS sensor that includes photoreceptive elements that are selectively responsive to visible-spectrum electromagnetic radiation. These devices may obtain the infrared information from a second CMOS sensor that includes photoreceptive elements that are selectively responsive to infrared radiation.
However, the use of a separate infrared sensor and visible-spectrum sensor may lead to artifacts. Such problems are generally caused by the difficulty in precisely matching the IR information with the separately obtained color information. To counter these issues, the devices may resort to complex calibration and synchronization mechanisms. Nevertheless, artifacts can still occur.
To address the above issues, some devices use a hybrid sensor that includes both visible-spectrum photoreceptive elements and infrared photoreceptive elements. For instance, one design modifies a Bayer pattern by replacing one of the green photoreceptive elements with an infrared photoreceptive element, such that the hybrid sensor includes a domain that includes one red photoreceptive element, one green photoreceptive element, one infrared photoreceptive element, and one blue photoreceptive element.
A hybrid sensor is described herein that includes a plurality of photoreceptive element domains. Each domain includes: a first subset of infrared (IR) photoreceptive elements that are selectively receptive to infrared radiation; and a second subset of visible-spectrum photoreceptive elements that are selectively receptive to visible spectrum light. In one implementation, the number of IR photoreceptive elements in the first subset is equal to or greater than a number of visible-spectrum photoreceptive elements in the second subset. In other words, in one implementation, the percentage of IR photoreceptive elements is at least 50%.
In other implementations, the percentage of IR photoreceptive elements is even greater, such as, without limitation, at least 75%, or at least 80%, etc.
According to another illustrative aspect, each domain includes a mix of visible-spectrum photoreceptive elements that conforms to a Bayer pattern, or to some other color filter array pattern. With respect to the Bayer pattern, for instance, each domain includes an RGB mix that includes one red photoreceptive element, two green photoreceptive elements, and one blue photoreceptive element, or some multiple of that RGB mix.
According to another illustrative aspect, a data collection component collects IR information from the IR photoreceptive elements of the hybrid sensor, and collects color information from the visible-spectrum photoreceptive elements of the hybrid sensor. By virtue of the characteristics described above, the IR information can capture an imaged scene with a resolution that is equal to or greater than that of the color information.
According to another illustrative aspect, different applications can incorporate the hybrid sensor, including, but not limited to, scene reconstruction applications, object recognition applications, biometric authentication applications, night vision applications, and so on.
According to one technical merit among others, the hybrid sensor allows an application to construct a depth map (or some other processing result based on the infrared information) having particularly high resolution. For instance, a hybrid sensor which replaces one green photoreceptive element with an IR photoreceptive element has, overall, only 25% IR photoreceptive elements, whereas the hybrid sensor described herein has (in one implementation) at least 50% IR photoreceptive elements. According to another merit, the hybrid sensor allows an application to construct balanced color information, e.g., because it preserves the mix of color photoreceptive elements in a Bayer pattern (or some other standard pattern). For instance, a hybrid sensor which replaces one green photoreceptive element with an IR photoreceptive element has a reduced capability of detecting green light (which is the portion of the spectrum to which human eyes are most sensitive), whereas the hybrid sensor described herein can preserve the same mix of red, green, and blue photoreceptive elements found in the Bayer pattern.
Moreover, by virtue of the fact that the hybrid sensor combines IR photoreceptive elements with visible-spectrum photoreceptive elements, an application can combine the IR information with the color information without the types of problems described above (associated with devices which use an IR sensor that is separate from the visible-spectrum sensor). This is because, in the hybrid sensor described herein, the visible-spectrum photoreceptive elements and the IR photoreceptive elements have the same lens focal length and camera perspective, and are subject to the same distortion.
The above technique can be manifested in various types of systems, devices, components, methods, computer-readable storage media, data structures, graphical user interface presentations, articles of manufacture, and so on.
This Summary is provided to introduce a selection of concepts in a simplified form; these concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The same numbers are used throughout the disclosure and figures to reference like components and features. Series 100 numbers refer to features originally found in
This disclosure is organized as follows. Section A describes a hybrid sensor and various applications thereof. Section B sets forth illustrative methods which explain the operation and application of the hybrid sensor of Section A. And Section C describes illustrative computing functionality that can be used to implement any application of the hybrid sensor.
As a preliminary matter, some of the figures describe concepts in the context of one or more structural components, also referred to as functionality, modules, features, elements, etc. In one case, the illustrated separation of various components in the figures into distinct units may reflect the use of corresponding distinct physical and tangible components in an actual implementation. Alternatively, or in addition, any single component illustrated in the figures may be implemented by plural actual physical components. Alternatively, or in addition, the depiction of any two or more separate components in the figures may reflect different functions performed by a single actual physical component.
Other figures describe the concepts in flowchart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are illustrative and non-limiting. Certain blocks described herein can be grouped together and performed in a single operation, certain blocks can be broken apart into plural component blocks, and certain blocks can be performed in an order that differs from that which is illustrated herein (including a parallel manner of performing the blocks).
As to terminology, the phrase “configured to” encompasses various physical and tangible mechanisms for performing an identified operation. The mechanisms can be configured to perform an operation using, for instance, software running on computer equipment, or other logic hardware (e.g., FPGAs), etc., or any combination thereof.
The term “logic” encompasses various physical and tangible mechanisms for performing a task. For instance, each operation illustrated in the flowcharts corresponds to a logic component for performing that operation. An operation can be performed using, for instance, software running on computer equipment, or other logic hardware (e.g., FPGAs), etc., or any combination thereof. When implemented by computing equipment, a logic component represents an electrical component that is a physical part of the computing system, in whatever manner implemented.
Any of the storage resources described herein, or any combination of the storage resources, may be regarded as a computer-readable medium. In many cases, a computer-readable medium represents some form of physical and tangible entity. The term computer-readable medium also encompasses propagated signals, e.g., transmitted or received via a physical conduit and/or air or other wireless medium, etc. However, the specific terms “computer-readable storage medium” and “computer-readable storage medium device” expressly exclude propagated signals per se, while including all other forms of computer-readable media.
The following explanation may identify one or more features as “optional.” This type of statement is not to be interpreted as an exhaustive indication of features that may be considered optional; that is, other features can be considered as optional, although not explicitly identified in the text. Further, any description of a single entity is not intended to preclude the use of plural such entities; similarly, a description of plural entities is not intended to preclude the use of a single entity. Further, while the description may explain certain features as alternative ways of carrying out identified functions or implementing identified mechanisms, the features can also be combined together in any combination. Finally, the terms “exemplary” or “illustrative” refer to one implementation among potentially many implementations.
A. Illustrative System
More specifically, the domain 104 includes a first subset of infrared (IR) photoreceptive elements that are receptive to the infrared portion of the spectrum (e.g., 700-1000 nm), and a second subset of visible-spectrum photoreceptive elements that are receptive to the visible portion of the spectrum (e.g., 400-700 nm). The second subset of visible-spectrum photoreceptive elements, in turn, can include plural groups of photoreceptive elements that are receptive to different portions of the visible spectrum. For instance, in the illustrative case of
In one implementation, the hybrid sensor 102 corresponds to a complementary metal-oxide-semiconductor (CMOS) sensor, provided as an integrated circuit chip. In one implementation, the hybrid sensor 102 can include a sensor array 106 of sensor elements made of silicon material (and/or some other semiconductor material). The sensor elements are sensitive to the intensity of light that impinges upon them, producing grayscale output results. The hybrid sensor 102 includes a color filter array 108 which overlays the sensor array. Different cells of the color filter array 108 have receptivity to different portions of the electromagnetic spectrum, thereby operating as bandpass filters. A photoreceptive element that is receptive to a particular portion of the electromagnetic spectrum may correspond to a filter array cell in conjunction with a sensor element. The use of the color filter array 108, in conjunction with the sensor array 106, enables filtering in both the visible and IR realms, providing rich contextual information for depth sensing, semantic labeling, object recognition, and other computer vision applications.
In one case, the sensor array 106 and the color filter array 108 can be produced in a single semiconductor manufacturing process. In a second case, the sensor array 106 and the color filter array 108 can be produced in separate processes, and the color filter array 108 can then be combined with the sensor array 106. A color filter array can be conventionally produced using appropriate dyes or pigments.
A data collection component 110 collects infrared (IR) information from the IR photoreceptive elements, and collects color information from the visible-spectrum photoreceptive elements. In one case, the hybrid sensor 102 incorporates the data collection component 110 as a part thereof. For instance, in one implementation, the hybrid sensor 102 can correspond to an integrated circuit that includes a data collection component 110 in the form of reading circuitry. The reading circuitry can accept a column and row address and, in response, read data from the specific photoreceptive element associated with the specified column and row. A first data store 112 may store the IR information, while a second data store 114 may store the color information.
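As a rough illustration of the separation performed by the data collection component 110, the following Python sketch splits raw per-element readings into IR information and color information. The function and variable names are hypothetical, and the actual reading circuitry is implemented in hardware rather than software.

```python
def collect(raw, channel_map):
    """Split raw sensor readings into IR information and color information.

    raw         -- 2-D list of intensity values, one per photoreceptive element
    channel_map -- 2-D list of band labels: 'IR', 'R', 'G', or 'B'
    Returns (ir_info, color_info), each a dict keyed by (row, col) address,
    mirroring the two data stores (112 and 114) described above.
    """
    ir_info, color_info = {}, {}
    for r, row in enumerate(raw):
        for c, value in enumerate(row):
            if channel_map[r][c] == 'IR':
                ir_info[(r, c)] = value                 # first data store
            else:
                color_info[(r, c)] = (channel_map[r][c], value)  # second store
    return ir_info, color_info
```

For example, a 2x2 patch with IR elements on one diagonal yields two IR readings and two color readings, each retrievable by its row-and-column address.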
According to one aspect of the hybrid sensor 102, the number of IR photoreceptive elements in a domain (and in the hybrid sensor 102 overall) is equal to or greater than the number of visible-spectrum photoreceptive elements. In other words, at least 50% of the photoreceptive elements of the hybrid sensor 102 are IR photoreceptive elements. In other implementations, the percentage of IR photoreceptive elements is at least 75%, or at least 80%, or at least 90%, etc. For the particular configuration shown in
In any case, the hybrid sensor includes significantly more IR photoreceptive elements compared to other designs. For example, consider a hybrid sensor that modifies a conventional Bayer pattern by replacing one of the green photoreceptive elements with an IR photoreceptive element. This type of hybrid sensor allocates just 25% of a domain to collecting infrared radiation.
According to another illustrative aspect, each domain includes a mix of visible-spectrum photoreceptive elements that conforms to a Bayer pattern, or to some other color filter array pattern. With respect to the Bayer pattern, for instance, each domain includes an RGB mix that includes one red photoreceptive element, two green photoreceptive elements, and one blue photoreceptive element, or some multiple of that RGB mix. Further, each domain conforms to the Bayer pattern insofar as it preserves the relationship among photoreceptive elements having different colors, as specified in the Bayer pattern. For example, the domain 104 shown in
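The following sketch shows one hypothetical 4x4 domain that satisfies both properties described above: Bayer rows (RGRG / GBGB) are interleaved with full rows of IR elements, reaching 50% IR coverage while preserving the 1:2:1 red:green:blue mix of the Bayer pattern. This layout is illustrative only; the layouts depicted in the figures may differ.

```python
# Hypothetical 4x4 domain: two Bayer rows interleaved with two full IR rows.
DOMAIN = [
    ['R',  'G',  'R',  'G'],
    ['IR', 'IR', 'IR', 'IR'],
    ['G',  'B',  'G',  'B'],
    ['IR', 'IR', 'IR', 'IR'],
]

def band_counts(domain):
    """Count the photoreceptive elements of each band in a domain."""
    counts = {}
    for row in domain:
        for band in row:
            counts[band] = counts.get(band, 0) + 1
    return counts
```

Counting the bands of this domain confirms 8 of 16 elements (50%) are IR, and that the RGB mix of 2 red, 4 green, and 2 blue elements is a multiple of the 1:2:1 Bayer mix.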
In other implementations, the hybrid sensor can conform to other types of color filter arrays. Without limitation, illustrative color filter arrays that can be used include: (1) an RGBE filter pattern that includes an RGBE mix of one red photoreceptive element, one green photoreceptive element, one blue photoreceptive element, and one emerald photoreceptive element, or some multiple of that RGBE mix; (2) a CYYM filter pattern that includes a CYYM mix of one cyan photoreceptive element, two yellow photoreceptive elements, and one magenta photoreceptive element, or some multiple of that CYYM mix; (3) a CYGM filter pattern that includes a CYGM mix of one cyan photoreceptive element, one yellow photoreceptive element, one green photoreceptive element, and one magenta photoreceptive element, or some multiple of that CYGM mix; (4) an RGBW filter pattern that includes an RGBW mix of one red photoreceptive element, one green photoreceptive element, one blue photoreceptive element, and one white photoreceptive element, or some multiple of that RGBW mix, and so on.
In other cases, the hybrid sensor includes a first subset of IR photoreceptive elements and a second subset of monochrome (e.g., gray-scale-detecting) visible-spectrum photoreceptive elements.
The use of a relatively large number of IR photoreceptive elements has various advantages. Generally, any device or application that relies on active and/or passive IR imaging and/or multi-spectrum imaging can produce higher quality images due to the large number of IR photoreceptive elements provided by the hybrid sensor 102. For instance, consider an application (described more fully below) that relies on the IR information to produce a depth map (e.g., through a structured light technique, time-of-flight technique, stereo vision technique, etc.), and then uses the depth map to reconstruct the shapes of objects in a scene. That application can produce a high-resolution depth map by virtue of the fact that so many IR photoreceptive elements are allocated to detecting infrared radiation.
Further, the hybrid sensor 102 is capable of detecting infrared radiation that forms relatively small dots, lines, or other shapes on the surface of the hybrid sensor 102. This is because there is an increased chance that these small shapes will overlap with one or more IR photoreceptive elements. As a related advantage, a camera system that uses the hybrid sensor 102 can efficiently collect IR photons, again due to the increased probability that IR light will impinge on an IR photoreceptive element. For instance, this characteristic can enable the use of a reduced-power IR illumination source. Any application or device can also use various processing components to further control its levels of detection sensitivity and resolution (and the tradeoff between these two measures), such as pixel binning, filtering, de-noising, etc.
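As one example of the sensitivity/resolution tradeoff mentioned above, pixel binning aggregates neighboring readings into a single "super-pixel." The sketch below shows 2x2 binning in Python; it is a simplified illustration (real binning may be performed in the analog domain, before readout).

```python
def bin_2x2(ir_image):
    """Sum each 2x2 neighborhood of IR readings into one super-pixel.

    Binning trades resolution for sensitivity: the binned image has half the
    width and height, but each output value aggregates four photon counts.
    Assumes even image dimensions.
    """
    h, w = len(ir_image), len(ir_image[0])
    return [[ir_image[r][c] + ir_image[r][c + 1] +
             ir_image[r + 1][c] + ir_image[r + 1][c + 1]
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]
```

For instance, binning a 2x4 patch of readings produces a single row of two super-pixels, each four times as sensitive as an individual element.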
The use of a sensor array that conforms to a Bayer pattern or some other pattern has additional technical advantages. For example, the hybrid sensor 102 shown in
Further, the use of a single hybrid sensor eliminates various problems that may occur due to the use of a separate infrared sensor and a visible-spectrum sensor. These problems include, but are not limited to, artifacts caused by occlusions (in which one sensor detects a feature and the other does not), differences in perspective between the sensors, mechanical misalignment between the sensors, etc. The single hybrid sensor 102 eliminates or reduces these problems because it produces color information that is precisely linked to the infrared information, e.g., because all the information is captured at the same time by a single sensor having the same focal length and camera perspective. This design also eliminates the need for complex and costly calibration and synchronization of two image sensors to address the above-noted problems. Further, a single sensor can potentially be produced more cheaply and powered more efficiently compared to a design that uses two or more sensors.
Note, however, that while a camera system can employ a single hybrid sensor of the type described above, other implementations can employ two or more of the hybrid sensors of the type shown in
The post-processing components 116 can include a conventional demosaicing component for reconstructing the color of a single pixel based on the values extracted from neighboring color photoreceptive elements. The post-processing components 116 can also include any type of sampling components. One such sampling component can up-sample the color information to the same resolution as the IR information, e.g., using texture information extracted from the IR information as a guide.
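The up-sampling operation mentioned above can be illustrated with a minimal sketch. A real pipeline might use guided up-sampling, in which texture from the higher-resolution IR information steers the interpolation; the nearest-neighbor version below is a placeholder that only illustrates the resolution change.

```python
def upsample_nearest(color, factor):
    """Nearest-neighbor up-sampling of color information by an integer factor.

    Each low-resolution color value is replicated into a factor x factor
    block, bringing the color information to the same grid resolution as
    the IR information.
    """
    return [[color[r // factor][c // factor]
             for c in range(len(color[0]) * factor)]
            for r in range(len(color) * factor)]
```

Up-sampling a 2x2 color patch by a factor of 2, for example, yields a 4x4 patch in which each original value covers a 2x2 block.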
In a structured light technique, a diffraction grating 208 produces a pattern of infrared light. For instance, the illumination source 204 in conjunction with the diffraction grating 208 can produce a speckle pattern of infrared light. The speckle pattern may include a random arrangement of dots, optionally having different dot sizes. The camera system 202 stores information which describes the original speckle pattern that is emitted, which constitutes an undistorted source pattern. The speckle pattern impinges on different objects in the environment 206 and is distorted by the shapes of these objects, thereby producing a distorted reflected pattern, which is the counterpart of the undistorted source pattern.
The hybrid sensor 102 captures the infrared radiation 210 that is reflected from the objects in the scene, corresponding to the distorted reflected pattern. The hybrid sensor 102 may also receive ambient infrared radiation that is not attributed to the illumination source 204. The hybrid sensor 102 also captures the visible-spectrum light 212 that is reflected from the objects in the scene. The data collection component 110 (not specifically shown in
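A core step in the structured light technique is measuring how far the reflected pattern has shifted relative to the stored, undistorted source pattern, and converting that shift (disparity) into depth by triangulation. The one-dimensional sketch below is a simplified illustration with hypothetical names; practical systems match two-dimensional patches with sub-pixel precision.

```python
def disparity(source_row, reflected_row, max_shift):
    """Find the horizontal shift (in elements) that best aligns the reflected
    pattern with the stored source pattern, by minimizing the sum of
    absolute differences over candidate shifts."""
    n = len(reflected_row)
    best_shift, best_cost = 0, float('inf')
    for shift in range(max_shift + 1):
        cost = sum(abs(reflected_row[i] - source_row[i - shift])
                   for i in range(shift, n))
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

def depth_from_disparity(d, focal_length, baseline):
    """Triangulate depth: closer surfaces shift the pattern more, so depth
    is inversely proportional to disparity."""
    return focal_length * baseline / d if d else float('inf')
```

Here a bright dot shifted by two elements yields a disparity of 2, which, together with the focal length and the baseline between the illumination source and the sensor, gives the depth of the reflecting point.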
Another implementation of the camera system 202 employs a time-of-flight technique. Here, the illumination source can correspond to an infrared laser that emits a pulse of infrared light, or a series of pulses of infrared light. A diffuser can spread the infrared light across the environment 206. The hybrid sensor 102 captures the infrared radiation 210 that is reflected from objects in the environment 206, together with visible light 212. For each IR photoreceptive element, the hybrid sensor 102 can include timing circuitry which measures the difference in time between when the infrared radiation was emitted, and when the infrared radiation was captured by the IR photoreceptive element. That difference in time (or, equivalently, a difference in phase) relates to the distance traveled by the infrared light, which, in turn, is related to the depth of a point in the environment 206 from which the infrared light has been reflected.
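The depth computation in the time-of-flight technique follows directly from the round-trip travel of the light pulse: the measured time corresponds to the path out to the object and back, so the depth is half the total distance traveled. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(time_of_flight_s):
    """Convert a round-trip time-of-flight measurement to depth.

    The pulse travels to the reflecting point and back, so the depth of the
    point is half the total path length: depth = c * t / 2.
    """
    return SPEED_OF_LIGHT * time_of_flight_s / 2.0
```

For example, a measured round-trip time of 2 nanoseconds corresponds to a depth of roughly 0.3 meters. (This sketch ignores the phase-based measurement and timing-circuit details used in practice.)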
Another implementation uses a stereo vision technique. Here, two or more hybrid sensors capture IR information and color information that describes a scene from two or more respective vantage points. Stereo vision techniques can also reconstruct depth information based on image information collected by a single hybrid sensor, e.g., based on a sequence of images captured by the sensor at different respective times.
Finally,
Beginning with
An object reconstruction component 304 reconstructs the shapes of objects in the environment 206 based on the depth map. The object reconstruction component 304 can do so using machine-learned statistical models and/or segmentation algorithms to identify clusters of points that correspond to objects in a scene, and then linking those points together to describe the shapes of the objects, e.g., as a mesh of triangle vertices, voxel vertices, etc. The object reconstruction component 304 can then apply the color information as textures to the objects, in effect, by “pasting” the color information onto the identified shapes. Illustrative background information regarding the general topic of object reconstruction based on depth images can be found, for instance, in Keller, et al., “Real-time 3D Reconstruction in Dynamic Scenes using Point-based Fusion,” in Proceedings of the 2013 International Conference on 3D Vision, 2013, pp. 1-8, and Izadi, et al., “KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera,” in Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, October 2011, pp. 559-568.
In another use scenario, the object detection component 402 can perform a semantic labeling operation. Here, the object detection component 402 can use the IR information and the color information to annotate an image with meaningful labels. In another case, a Simultaneous Localization and Mapping (SLAM) application can use the object detection component 402 to identify features in an environment, e.g., for the purpose of creating a feature-based map of the environment. The SLAM application then tracks the location of a mobile camera system within the environment based on the thus-created map.
As noted above, each domain conforms to the Bayer pattern, insofar as it has the same proportion of red, green, and blue photoreceptive elements as a Bayer pattern, and because the arrangement of the red, green, and blue photoreceptive elements conforms to the Bayer pattern.
Also note that the hybrid sensor includes plural rows and columns of infrared photoreceptive elements. That is, each row or column forms a series of contiguous IR photoreceptive elements. The rows and columns of IR photoreceptive elements are interleaved with visible-spectrum photoreceptive elements. Overall, 75% of the photoreceptive elements in the hybrid sensor 102 are IR photoreceptive elements.
More generally, a hybrid sensor can include groups of rows of IR photosensitive elements and groups of columns of IR photosensitive elements, where each group has n contiguous lines (rows or columns), where n=1, 2, 3, etc. The groups of IR photosensitive elements are interleaved amongst the visible-spectrum photosensitive elements. Increases in n yield progressively higher ratios of IR photosensitive elements to visible-spectrum photosensitive elements.
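The effect of increasing n can be illustrated with a hypothetical parameterized domain: a 2x2 Bayer block in one corner of an (n+2)-sided square, with the remaining n contiguous rows and n contiguous columns filled by IR elements. This layout is an assumption for illustration only (the figures show the actual layouts), but it exhibits the stated trend: larger n yields a higher IR ratio, with n = 2 reproducing the 75% figure noted above.

```python
def make_domain(n):
    """Hypothetical domain: a 2x2 Bayer block (R G / G B) in the corner of an
    (n+2) x (n+2) square, with the remaining n contiguous rows and n
    contiguous columns filled by IR elements."""
    side = n + 2
    bayer = [['R', 'G'], ['G', 'B']]
    return [[bayer[r][c] if r < 2 and c < 2 else 'IR'
             for c in range(side)]
            for r in range(side)]

def ir_ratio(domain):
    """Fraction of photoreceptive elements in a domain that are IR."""
    total = sum(len(row) for row in domain)
    ir = sum(band == 'IR' for row in domain for band in row)
    return ir / total
```

For this layout, n = 2 gives a 4x4 domain with 12 of 16 elements (75%) allocated to IR, and n = 3 raises the ratio to 21/25 (84%).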
In the above examples, the hybrid sensors include photosensitive elements having the same physical sizes. But in other implementations, a hybrid sensor can include photosensitive elements of different sizes. For example, a hybrid sensor can include visible-spectrum photosensitive elements having a first physical size, and IR photosensitive elements having a second size, where the first size differs from the second size. Alternatively, or in addition, a hybrid sensor can include photosensitive elements of the same kind having different physical sizes, such as by including IR photosensitive elements of different sizes. Alternatively, or in addition, a hybrid sensor can create the equivalent of different-sized photosensitive elements through post-processing operations (e.g., via analog binning, digital binning, etc.). In the last-mentioned case, a hybrid sensor can also adjust the “sizes” of photosensitive elements in a dynamic manner.
Finally, the above examples emphasized cases in which the hybrid sensor includes at least 50% IR photosensitive elements. But other implementations of the hybrid sensor can include a ratio of IR photosensitive elements less than 50% but greater than 25%.
B. Illustrative Processes
C. Representative Computing Functionality
The computing functionality 1202 can include one or more hardware processor devices 1204, such as one or more central processing units (CPUs), and/or one or more graphical processing units (GPUs), and so on. The computing functionality 1202 can also include any storage resources (also referred to as computer-readable storage media or computer-readable storage medium devices) 1206 for storing any kind of information, such as machine-readable instructions, settings, data, etc. Without limitation, for instance, the storage resources 1206 may include any of RAM of any type(s), ROM of any type(s), flash devices, hard disks, optical disks, and so on. More generally, any storage resource can use any technology for storing information. Further, any storage resource may provide volatile or non-volatile retention of information. Further, any storage resource may represent a fixed or removable component of the computing functionality 1202. The computing functionality 1202 may perform any of the functions described above when the hardware processor device(s) 1204 carry out computer-readable instructions stored in any storage resource or combination of storage resources. For instance, the computing functionality 1202 may carry out computer-readable instructions to perform the computation of a depth map, the reconstruction of objects in a scene, the recognition of an object, etc. The computing functionality 1202 also includes one or more drive mechanisms 1208 for interacting with any storage resource, such as a hard disk drive mechanism, an optical disk drive mechanism, and so on.
The computing functionality 1202 also includes an input/output component 1210 for receiving various inputs (via input devices 1212), and for providing various outputs (via output devices 1214). One illustrative input device corresponds to the hybrid sensor 102 of
The communication conduit(s) 1222 can be implemented in any manner, e.g., by a local area computer network, a wide area computer network (e.g., the Internet), point-to-point connections, etc., or any combination thereof. The communication conduit(s) 1222 can include any combination of hardwired links, wireless links, routers, gateway functionality, name servers, etc., governed by any protocol or combination of protocols.
Alternatively, or in addition, any of the functions described in the preceding sections can be performed, at least in part, by one or more hardware logic components. For example, without limitation, the computing functionality 1202 (and its hardware processor) can be implemented using one or more of: Field-programmable Gate Arrays (FPGAs); Application-specific Integrated Circuits (ASICs); Application-specific Standard Products (ASSPs); System-on-a-chip systems (SOCs); Complex Programmable Logic Devices (CPLDs), etc. In this case, the machine-executable instructions are embodied in the hardware logic itself.
The following summary provides a non-exhaustive list of illustrative aspects of the technology set forth herein.
According to a first aspect, a hybrid sensor is described that has a plurality of domains that include photoreceptive elements. Each domain includes: a first subset of infrared (IR) photoreceptive elements that are selectively receptive to infrared radiation between 700 and 1000 nm; and a second subset of visible-spectrum photoreceptive elements that are selectively receptive to visible spectrum light. A number of IR photoreceptive elements in the first subset is equal to or greater than a number of visible-spectrum photoreceptive elements in the second subset.
According to a second aspect, the second subset of visible-spectrum elements includes at least: a first group of photoreceptive elements that are receptive to a first color of light; a second group of photoreceptive elements that are receptive to a second color of light; and a third group of photoreceptive elements that are receptive to a third color of light.
According to a third aspect, in the hybrid sensor according to the above-mentioned second aspect, the first color of light is green, the second color of light is red, and the third color of light is blue.
According to a fourth aspect, the photoreceptive elements in the second subset are arranged to have a same proportion of color photoreceptive elements as specified in a Bayer pattern, such that the second subset includes an RGB mix of one red photoreceptive element, two green photoreceptive elements, and one blue photoreceptive element, or some multiple of that RGB mix.
According to a fifth aspect, in the hybrid sensor according to the fourth aspect, the photoreceptive elements in the second subset are also configured such that the red photoreceptive element, the green photoreceptive elements, and the blue photoreceptive element are arranged with respect to each other in conformance with the Bayer pattern.
According to a sixth aspect, the photoreceptive elements in the second subset are arranged to have a same proportion of photoreceptive elements as specified in one of: an RGB (Bayer) pattern that includes an RGB mix of one red photoreceptive element, two green photoreceptive elements, and one blue photoreceptive element, or some multiple of that RGB mix; an RGBE filter pattern that includes an RGBE mix of one red photoreceptive element, one green photoreceptive element, one blue photoreceptive element, and one emerald photoreceptive element, or some multiple of that RGBE mix; or a CYYM filter pattern that includes a CYYM mix of one cyan photoreceptive element, two yellow photoreceptive elements, and one magenta photoreceptive element, or some multiple of that CYYM mix; or a CYGM filter pattern that includes a CYGM mix of one cyan photoreceptive element, one yellow photoreceptive element, one green photoreceptive element, and one magenta photoreceptive element, or some multiple of that CYGM mix; or an RGBW filter pattern that includes an RGBW mix of one red photoreceptive element, one green photoreceptive element, one blue photoreceptive element, and one white photoreceptive element, or some multiple of that RGBW mix.
According to a seventh aspect, the second subset of visible-spectrum photoreceptive elements includes a collection of monochrome photoreceptive elements.
According to an eighth aspect, the IR photoreceptive elements across the domains form a plurality of lines of IR photoreceptive elements, interspersed amongst visible-spectrum photoreceptive elements, wherein each line of IR photoreceptive elements forms a contiguous series of IR photoreceptive elements.
According to a ninth aspect, in the hybrid sensor according to the eighth aspect, the hybrid sensor includes a plurality of horizontal, vertical, and/or diagonal lines of IR photoreceptive elements.
According to a tenth aspect, in the hybrid sensor according to the eighth aspect, each line of IR photoreceptive elements has at least one neighboring line of IR photoreceptive elements that is contiguous thereto.
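One way to picture the eighth through tenth aspects is a grid in which contiguous rows of IR photoreceptive elements are interspersed among visible-spectrum rows. The 4x4 domain below is a hypothetical example, not a layout mandated by the description; it also happens to illustrate the 75% IR proportion of the eleventh aspect.

```python
# Hypothetical 4x4 domain: three contiguous horizontal lines of IR
# elements ("I"), each neighboring another IR line (tenth aspect),
# above one row of visible-spectrum elements with Bayer proportions
# (1 red, 2 green, 1 blue). 12 of 16 elements are IR, i.e. 75%.
DOMAIN = [
    "IIII",
    "IIII",
    "IIII",
    "RGGB",
]

def ir_fraction(domain):
    """Fraction of photoreceptive elements in a domain that are IR."""
    flat = "".join(domain)
    return flat.count("I") / len(flat)

print(ir_fraction(DOMAIN))  # 0.75
```

A taller domain with four IR rows per visible row would likewise illustrate the 80% figure of the twelfth aspect.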
According to an eleventh aspect, at least 75% of the photoreceptive elements in the hybrid sensor are IR photoreceptive elements.
According to a twelfth aspect, at least 80% of the photoreceptive elements in the hybrid sensor are IR photoreceptive elements.
According to a thirteenth aspect, a device is described herein for capturing images of an environment. The device provides a hybrid sensor that includes a plurality of domains that include photoreceptive elements. Each domain, in turn, includes: a first subset of infrared (IR) photoreceptive elements that are selectively receptive to infrared radiation; and a second subset of visible-spectrum photoreceptive elements that are selectively receptive to visible spectrum light. A number of IR photoreceptive elements in the first subset is equal to or greater than a number of visible-spectrum photoreceptive elements in the second subset. The device further includes a data collection component that is configured to collect IR information from the IR photoreceptive elements and color information from the visible-spectrum photoreceptive elements. The device also includes an application configured to process the IR information and the color information to provide at least one application end result.
According to a fourteenth aspect, the device further includes at least one infrared source for irradiating a scene with infrared radiation. The IR photoreceptive elements detect infrared radiation produced by the infrared source and reflected from objects in the scene. The application includes a depth determination component that produces a depth map of the scene based on the IR information.
According to a fifteenth aspect, the depth determination component uses one or more of a stereo vision technique, a structured light technique, and/or a time-of-flight technique to produce the depth map, based on the IR information.
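Of the techniques named in the fifteenth aspect, time-of-flight is the most direct to sketch: depth follows from the round-trip travel time of the emitted IR radiation. The function below is an illustrative sketch of that relationship, not a component of the described device.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_time_s):
    """Depth from a time-of-flight measurement.

    The emitted IR pulse travels to the object and back, so the
    depth is half the round-trip distance: d = c * t / 2.
    """
    return C * round_trip_time_s / 2.0

# A round trip of 20 nanoseconds corresponds to roughly 3 meters.
print(tof_depth(20e-9))
```

Stereo vision and structured light instead recover depth from triangulation, using disparity between views or deformation of a projected IR pattern, respectively.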
According to a sixteenth aspect, a method is described for collecting information from a scene. The method includes: collecting infrared (IR) information from IR photoreceptive elements provided by a hybrid sensor; and collecting color information from visible-spectrum photoreceptive elements provided by the hybrid sensor. The hybrid sensor has a plurality of domains, each domain including: a first subset of infrared (IR) photoreceptive elements that are selectively receptive to infrared radiation; and a second subset of visible-spectrum photoreceptive elements that are selectively receptive to visible spectrum light. The IR information captures the scene with a same or higher resolution compared to the color information.
According to a seventeenth aspect, in the method of the sixteenth aspect, the photoreceptive elements in the second subset are arranged to have a same proportion of color photoreceptive elements as specified in a Bayer pattern, such that the second subset includes an RGB mix of one red photoreceptive element, two green photoreceptive elements, and one blue photoreceptive element, or some multiple of that RGB mix.
According to an eighteenth aspect, in the method of the seventeenth aspect, the photoreceptive elements in the second subset are further configured such that the red photoreceptive element, the green photoreceptive elements, and the blue photoreceptive element are arranged with respect to each other in conformance with the Bayer pattern.
According to a nineteenth aspect, in the method according to the sixteenth aspect, at least 75% of the photoreceptive elements in the hybrid sensor are IR photoreceptive elements.
According to a twentieth aspect, in the method of the sixteenth aspect, at least 80% of the photoreceptive elements in the hybrid sensor are IR photoreceptive elements.
A twenty-first aspect corresponds to any combination (e.g., any permutation or subset that is not logically inconsistent) of the above-referenced first through twentieth aspects.
A twenty-second aspect corresponds to any method counterpart, device counterpart, system counterpart, means-plus-function counterpart, computer-readable storage medium counterpart, data structure counterpart, article of manufacture counterpart, graphical user interface presentation counterpart, etc. associated with the first through twenty-first aspects.
In closing, the description may have set forth various concepts in the context of illustrative challenges or problems. This manner of explanation is not intended to suggest that others have appreciated and/or articulated the challenges or problems in the manner specified herein. Further, this manner of explanation is not intended to suggest that the subject matter recited in the claims is limited to solving the identified challenges or problems; that is, the subject matter in the claims may be applied in the context of challenges or problems other than those described herein.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.