In optical systems, imaging lenses are utilized to collimate light, focus light, and the like. Despite the progress made in the development of optical systems, there is a need in the art for improved imaging lenses.
The present invention relates generally to imaging systems with a multiple f-number lens. According to an embodiment of the present invention, an imaging system includes a near infrared (NIR) light source configured to emit a plurality of NIR light pulses toward one or more first objects. A portion of each of the plurality of NIR light pulses may be reflected off of the one or more first objects. The imaging system further includes an imaging lens. The imaging lens includes one or more lens elements configured to receive and focus the portion of each of the plurality of NIR light pulses reflected off of the one or more first objects onto an image plane, and to receive and focus visible light reflected off of one or more second objects onto the image plane. The imaging lens further includes an aperture stop, and a filter positioned at the aperture stop. The filter includes a central region with a first linear dimension, and an outer region surrounding the central region with a second linear dimension greater than the first linear dimension. The central region of the filter is characterized by a first transmission band in an NIR wavelength range and a second transmission band in a visible wavelength range. The outer region of the filter is characterized by a third transmission band in the NIR wavelength range and substantially low transmittance values in the visible wavelength range. The imaging system further includes an image sensor positioned at the image plane. The image sensor includes a two-dimensional array of pixels. The image sensor is configured to detect a two-dimensional intensity image of the one or more second objects in the visible wavelength range at an unbinned pixel resolution, and detect a time-of-flight three-dimensional image of the one or more first objects in the NIR wavelength range at a binned pixel resolution.
According to another embodiment of the present invention, an imaging lens includes one or more lens elements configured to receive and focus light in a first wavelength range reflected off of one or more first objects onto an image plane, and to receive and focus light in a second wavelength range reflected off of one or more second objects onto the image plane. The imaging lens further includes an aperture stop, and a filter positioned at the aperture stop. The filter includes a central region with a first linear dimension, and an outer region surrounding the central region with a second linear dimension greater than the first linear dimension. The central region of the filter is characterized by a first transmission band in the first wavelength range and a second transmission band in the second wavelength range. The outer region of the filter is characterized by a third transmission band in the first wavelength range and substantially low transmittance values in the second wavelength range.
According to yet another embodiment of the present invention, a method of operating an imaging system is provided. The imaging system includes a near infrared (NIR) light source, an imaging lens, and an image sensor positioned at an image plane of the imaging lens. The method includes performing three-dimensional sensing using the imaging system by: emitting, using the NIR light source, a plurality of NIR light pulses toward one or more first objects, wherein a portion of each of the plurality of NIR light pulses is reflected off of the one or more first objects, receiving and focusing, using the imaging lens, the portion of each of the plurality of NIR light pulses reflected off of the one or more first objects onto the image sensor, and detecting, using the image sensor, a three-dimensional image of the one or more first objects by determining a time of flight for the portion of each of the plurality of NIR light pulses from emission to detection. The imaging lens includes an aperture stop and a wavelength-selective filter positioned at the aperture stop. The wavelength-selective filter has a first region and a second region surrounding the first region. The wavelength-selective filter is configured to transmit NIR light through the first region and the second region, and to transmit visible light through the first region only. The method further includes performing computer vision using the imaging system by: receiving and focusing, using the imaging lens, visible light from an ambient light source reflected off of one or more second objects onto the image sensor, and detecting, using the image sensor, a two-dimensional intensity image of the one or more second objects.
According to a further embodiment of the present invention, an image sensor for sensing light in a first wavelength range and a second wavelength range includes a two-dimensional array of pixels and a processor. The processor is configured to measure light intensity for each pixel of the array of pixels in the first wavelength range, and measure light intensities in the second wavelength range for a set of pixel groups. Each pixel group includes m×n pixels of the array of pixels, where m and n are integers, and at least one of m and n is greater than one. In some embodiments, the first wavelength range corresponds to visible wavelengths, and the second wavelength range corresponds to near infrared (NIR) wavelengths. In some embodiments, m is equal to two, and n is equal to two. In some embodiments, measuring light intensities in the second wavelength range for the set of pixel groups includes reading out a total amount of charge for each group of m×n pixels. In some alternative embodiments, measuring light intensities in the second wavelength range for the set of pixel groups includes reading out an amount of charge for each pixel of the array of pixels, and calculating a total amount of charge for each group of m×n pixels by summing the amount of charge of the m×n pixels in each group.
Numerous benefits are achieved by way of the present invention over conventional techniques. For example, embodiments of the present invention provide an imaging lens that may be characterized by a lower f-number for NIR light and a higher f-number for visible light by utilizing a wavelength-selective filter at its aperture stop. Moreover, embodiments of the present invention provide an image sensor that may be operated at a lower resolution mode for NIR light using pixel binning and at a higher resolution mode for visible light using native pixel resolution. The imaging lens and the image sensor may be suitable for use as a TOF depth sensor with active illumination in the NIR wavelength range where a faster lens and more light integration are desired, as well as a computer vision sensor with passive illumination in the visible wavelength range where higher image resolution and greater depth of field are desired. The imaging lens may thus be suitable for imaging both visible light at a slower photo speed and IR light at a faster photo speed. These and other embodiments of the invention along with many of its advantages and features are described in more detail in conjunction with the text below and attached figures.
The present invention relates generally to imaging systems with a multiple f-number lens. In optics, the f-number (sometimes referred to as the focal ratio, f-ratio, f-stop, or relative aperture) of a lens is the ratio of the lens's focal length to the diameter of the entrance pupil. The f-number is a dimensionless number that is a quantitative measure of lens speed. Thus, the f-number or ƒ/# is given by:

ƒ/# = ƒ/D,
where ƒ is the focal length, and D is the diameter of the entrance pupil (effective aperture). A higher f-number implies a smaller diameter stop for a given focal-length lens. Since a circular stop has area A = πr², doubling the aperture diameter and therefore halving the f-number will admit four times as much light into the system. Conversely, increasing the f-number of an imaging lens decreases the amount of light entering a camera by decreasing the aperture size. For example, doubling the f-number will admit ¼ as much light into the system.
To maintain the same photographic exposure when doubling the f-number, the exposure time would need to be four times as long, or alternatively, the illumination would need to be increased to a level four times as high as the original level. Increasing the f-number may have the benefit of increasing the depth of field (DoF) and increasing the spatial resolution of an image (e.g., as measured by modulation transfer function or MTF).
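The f-number and exposure relationships above can be checked numerically. The following sketch (function names are illustrative, not part of the embodiments described herein) confirms that doubling the f-number admits one quarter of the light:

```python
import math

def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    """f-number is the ratio of focal length to entrance-pupil diameter."""
    return focal_length_mm / pupil_diameter_mm

def relative_light(f_num_a: float, f_num_b: float) -> float:
    """Admitted light scales with aperture area, i.e. with (1/fnum)^2.

    Returns how much light a lens at f_num_b admits relative to f_num_a.
    """
    return (f_num_a / f_num_b) ** 2

# A 4 mm focal-length lens with a 2 mm entrance pupil is f/2.
assert math.isclose(f_number(4.0, 2.0), 2.0)

# Doubling the f-number (f/2 -> f/4) admits 1/4 as much light, so the
# exposure time must be 4x longer to maintain the same exposure.
assert math.isclose(relative_light(2.0, 4.0), 0.25)
```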
In one embodiment, the imaging system 102 and the illumination source 104 may be used for time-of-flight (TOF) depth sensing. The illumination source 104 can be configured to emit a plurality of laser pulses. A portion of each of the plurality of laser pulses may be reflected off of an object in front of the user. The portion of each of the plurality of laser pulses reflected off of one or more objects may be received and imaged by the imaging system 102. The imaging system 102 can be configured to determine a time of flight for each of the laser pulses from emission to detection, thereby determining the distance of the object from the user. The illumination source 104 may comprise a laser source, such as a vertical-cavity surface-emitting laser (VCSEL). In some embodiments, the laser source may be configured to emit laser pulses in the near infrared (NIR) wavelength range, for example in the wavelength range from about 750 nm to about 1400 nm. The illumination source 104 may also include a collimation lens for collimating the plurality of laser pulses.
In some embodiments, the imaging system 102 may also be used for computer vision. When used for computer vision, the imaging system 102 is configured to image objects in front of the user that are illuminated by passive ambient light in the visible wavelength range. By using a shared imaging system for both TOF depth sensing and computer vision, lower cost and more compact system design may be realized. It should be understood that, although the imaging system 102 is described above as part of an AR or VR system, the imaging system 102 may be used in other systems. In other embodiments, the world cameras (WC) 106 and 108, as well as the picture camera 110, may also be configured for dual functions, i.e., for imaging both visible and infrared light.
In some embodiments, the system 100 may operate the imaging system 102 in a time-shared fashion such that depth sensing and computer vision are alternately performed at different time slots. In some embodiments, the duration of each time slot may range from about 1 ms to about 50 ms, so that there is no significant latency in either depth sensing or computer vision. In other embodiments, the system 100 may operate the imaging system 102 to perform depth sensing and computer vision simultaneously, as described in more detail below.
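A minimal sketch of the time-shared operation described above, assuming a simple fixed-slot scheduler (the function and mode names are illustrative assumptions, not part of the disclosure):

```python
import itertools

def time_shared_schedule(slot_ms: float, num_slots: int):
    """Alternate depth sensing and computer vision in fixed time slots.

    Yields (start_time_ms, mode) tuples; slot_ms would typically fall
    in the 1-50 ms range described above.
    """
    modes = itertools.cycle(["depth_sensing", "computer_vision"])
    for i in range(num_slots):
        yield (i * slot_ms, next(modes))

# Four 10 ms slots: depth at 0 ms, vision at 10 ms, depth at 20 ms, ...
schedule = list(time_shared_schedule(slot_ms=10.0, num_slots=4))
```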
When the imaging system 200 is used for TOF depth sensing, it may be advantageous to configure the imaging lens 210 as a fast lens so that a relatively low power laser source may be used for active illumination. Lower power illumination may lead to lower cost, smaller form factor, and lower power consumption, among other advantages. In some cases, a relatively low ƒ/#, for example in a range from about ƒ/1 to about ƒ/1.4, may be desirable for TOF depth sensing. In contrast, when the imaging system 200 is used for computer vision, it may be advantageous to configure the imaging lens 210 as a slow lens so that higher spatial resolution and greater depth of field (DoF) may be achieved. In some cases, a relatively high ƒ/#, for example in a range from about ƒ/2 to about ƒ/2.8, may be desirable for computer vision. The imaging system 200 may be applied to other applications where it may be desirable to have different lens speeds for sensing light in different wavelength ranges (e.g., for infrared sensing and visible light sensing).
According to an embodiment of the present invention, the imaging lens 210 includes a filter 214 positioned at the aperture stop 212 that may function as a wavelength selective filter.
In some embodiments, the filter 214 may comprise a multilayer thin film stack formed on a surface of a transparent substrate such as glass. A multilayer thin film may comprise a periodic layer system composed from two or more materials of differing indices of refraction. This periodic system may be engineered to significantly enhance the transmittance of the surface in one or more desired wavelength ranges, while suppressing the transmittance of the surface in other wavelength ranges. The maximum transmittance may be increased up to nearly 100% with increasing number of layers in the stack. The thicknesses of the layers making up the multilayer thin film stack are generally quarter-wave, designed such that transmitted beams constructively interfere with one another to maximize transmission and minimize reflection. In one embodiment, the multilayer thin film stack in the central region 310 may be engineered to have two high transmittance bands, one in the visible wavelength range and the other in the NIR wavelength range, and have low transmittance for all other wavelengths. The multilayer thin film stack in the annular region 320 may be engineered to have only one high transmittance band in the NIR wavelength range, and have low transmittance for all other wavelengths. In other embodiments, other types of bandpass filters, such as metasurface filter, may be used.
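The quarter-wave layer thicknesses mentioned above can be computed as a sketch. The refractive indices below are typical textbook values for TiO2/SiO2 stacks, assumed for illustration and not taken from this disclosure:

```python
def quarter_wave_thickness_nm(design_wavelength_nm: float,
                              refractive_index: float) -> float:
    """Quarter-wave layer: physical thickness t such that the optical
    thickness n*t equals one quarter of the design wavelength."""
    return design_wavelength_nm / (4.0 * refractive_index)

# Alternating high-index (TiO2, n ~ 2.4) and low-index (SiO2, n ~ 1.46)
# layers designed for an 850 nm NIR passband.
t_high = quarter_wave_thickness_nm(850.0, 2.4)
t_low = quarter_wave_thickness_nm(850.0, 1.46)
```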
The first multilayer thin film 510 may be configured to have a transmittance curve that exhibits a first transmission band 430 in the NIR wavelength range (e.g., about 800 nm to about 950 nm) and a second transmission band 440 in the visible (VIS) wavelength range (e.g., about 400 nm to about 700 nm), as illustrated in
When the filter 214 or 500 is positioned at the aperture stop 212 in the imaging lens 210 as illustrated in
Assume that the imaging lens 210 has a focal length ƒ. When the imaging lens is used for imaging visible light, the imaging lens 210 may be characterized by a first ƒ/# for visible light given by,

ƒ/#VIS = ƒ/D1,

where D1 is the diameter of the central region of the filter, which serves as the effective aperture for visible light.
When the imaging lens is used for imaging NIR light, the imaging lens 210 may be characterized by a second ƒ/# for NIR light given by,

ƒ/#NIR = ƒ/D2,

where D2 is the outer diameter of the filter, which serves as the effective aperture for NIR light.
Thus, the imaging lens 210 can be configured to have a relatively low ƒ/#NIR for TOF depth sensing in the NIR wavelength range, and a relatively high ƒ/#VIS for computer vision in the visible wavelength range. For TOF depth sensing, a lower ƒ/# means that more active illumination NIR light can pass through the imaging lens 210. Therefore a relatively low power laser source may be used for illumination, which may lead to lower cost, smaller form factor, and lower power consumption, among other advantages. In some embodiments, the value of D2 may be chosen such that ƒ/#NIR is in a range from about ƒ/1 to about ƒ/1.4.
For computer vision in the visible wavelength range, a higher ƒ/# may afford higher spatial resolution at the image plane (e.g., as measured by MTF) and greater DoF, among other advantages. In fact, a lower ƒ/# may not be desired when imaging visible light in some cases. As described more fully below, image sensors typically have higher quantum efficiencies in the visible wavelength range than in the NIR wavelength range. Thus, the image sensor may be saturated when a fast lens is used for imaging visible light. In some embodiments, the value of D1 may be chosen such that ƒ/#VIS is in a range from about ƒ/2 to about ƒ/2.8. The intensity ratio between VIS and NIR modes can be controlled by setting the ratio D1/D2 accordingly. In some embodiments, a ratio of D1/D2 may be chosen to be in the range from about 0.4 to about 0.6. In one embodiment the ratio of D1/D2 may be chosen to be about 0.5, so that the value of ƒ/#VIS is about twice as large as the value of ƒ/#NIR.
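The dual f-number behavior described above can be illustrated with a short sketch. The numeric values below are assumptions chosen for illustration (ƒ = 4 mm, D2 = 4 mm, D1/D2 = 0.5), not values from the disclosure:

```python
def dual_f_numbers(focal_length_mm: float, d1_mm: float, d2_mm: float):
    """f-numbers seen by visible light (central region, diameter D1)
    and by NIR light (full filter aperture, diameter D2)."""
    return focal_length_mm / d1_mm, focal_length_mm / d2_mm

# With D1/D2 = 0.5, visible light sees an f-number twice that of NIR,
# and therefore about 1/4 of the light per unit area.
f_vis, f_nir = dual_f_numbers(4.0, 2.0, 4.0)
```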
As illustrated in
The filter 900 may further include a third thin film 910 formed on a back side of the substrate 602. The third thin film 910 may have an annular shape with an outer diameter D2 and an inner diameter D3. D3 may be slightly greater than the inner diameter D1 of the second multilayer thin film 606, so as not to block incoming light rays entering the imaging system through the central region (e.g., the first multilayer thin film 604) of the wavelength-selective filter 600. In some embodiments, the value of D3 may depend on the thickness of the substrate 602. For a relatively thin substrate 602, D3 may be comparable to D1. The third thin film 910 may be configured to have high absorption coefficients in the visible wavelength range and high transmittance values in the NIR wavelength range. Thus, the third thin film 910 may be referred to as a “black coating.” When visible light reflected off of the image sensor 620 is incident on the third thin film 910, a significant portion of it may be absorbed by the third thin film 910, and only a small portion of it may be transmitted by the third thin film 910 and incident on the back surface of the second multilayer thin film 606 as illustrated by the light path represented by the thinner dashed arrows in
Note that the “black coating” 1140 has both low reflectance values and low transmittance values in the visible wavelength range. Thus, the “black coating” 1140 may substantially absorb visible light, thereby preventing visible light reflected off of the image sensor 620 (as illustrated in
In some embodiments, the image sensor 220 in the imaging system 200 illustrated in
According to some embodiments of the present invention, the image sensor 220 may be operated at different resolution modes for the visible wavelength range and the NIR wavelength range. In one embodiment, the image sensor 220 may be operated at the native resolution for the visible wavelength range, i.e., at the maximum possible resolution that the physical pixel size of the image sensor can support. Thus, for computer vision in the visible wavelength range, the image sensor 220 may be operated such that the accumulated charge in each pixel cell 222 is read out.
For the NIR wavelength range, the image sensor 220 may be operated at a resolution that is lower than the native resolution for greater light integration.
In one embodiment, binning may be performed at the analog level, where the value of the total accumulated charge for the m×n pixels in each group is read out. In such cases, the readout noise is not added. In another embodiment, binning may be performed at the digital level, where the value of the accumulated charge for each pixel is read out, and the readout values for the m×n pixels in each group are then summed. In such cases, the readout noise is added in the summation process. Thus, the latter embodiment may be more appropriate where the readout noise is relatively low.
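The digital binning described above sums per-pixel readout values over each m×n group. A minimal NumPy sketch of that summation (an illustration, not the sensor's actual readout logic; analog binning would instead sum charge before readout, avoiding per-pixel readout noise):

```python
import numpy as np

def digital_bin(frame: np.ndarray, m: int = 2, n: int = 2) -> np.ndarray:
    """Sum the readout values of each m x n group of pixels.

    frame must have dimensions divisible by m and n.
    """
    h, w = frame.shape
    # Split the frame into (h//m, m, w//n, n) blocks, then sum each block.
    return frame.reshape(h // m, m, w // n, n).sum(axis=(1, 3))

# A 4x4 frame binned 2x2 yields a 2x2 output; each output pixel is the
# sum of a 2x2 block of the input.
frame = np.arange(16).reshape(4, 4)
binned = digital_bin(frame, 2, 2)
```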
As described above, the imaging system 200 illustrated in
In an embodiment, three-dimensional sensing may be performed by: emitting, using the NIR light source, a plurality of NIR light pulses toward one or more first objects (1910). A portion of each of the plurality of NIR light pulses may be reflected off of the one or more first objects. The method also includes receiving and focusing, using the imaging lens, the portion of each of the plurality of NIR light pulses reflected off of the one or more first objects onto the image sensor (1912). The imaging lens may include an aperture stop and a wavelength-selective filter positioned at the aperture stop. The wavelength-selective filter may have a first region and a second region surrounding the first region. In one embodiment, the wavelength-selective filter is configured to transmit NIR light through both the first region and the second region, and to transmit visible light through the first region only. The method further includes detecting, using the image sensor, a three-dimensional image of the one or more first objects by determining a time of flight for the portion of each of the plurality of NIR light pulses from emission to detection (1914).
The method 1900 further includes performing computer vision using the imaging system in a second time slot following the first time slot. In an embodiment, computer vision may be performed by receiving and focusing, using the imaging lens, visible light from an ambient light source reflected off of one or more second objects onto the image sensor (1916), and detecting, using the image sensor, a two-dimensional intensity image of the one or more second objects (1918). In some embodiments, some of the second objects can be the same as some of the first objects that were imaged in steps 1910-1914 described above.
According to an embodiment of the present invention, the image sensor includes a two dimensional array of pixels. In some embodiments, detecting the three-dimensional image of the one or more first objects is performed by reading out a total amount of charge for each group of m×n pixels, where m and n are integers, and at least one of m and n is greater than one. In some other embodiments, detecting the three-dimensional image of the one or more first objects is performed by reading out an amount of charge for each pixel of the two-dimensional array of pixels, and calculating a total amount of charge for each group of m×n pixels by summing the amount of charge of the m×n pixels in each group, where m and n are integers, and at least one of m and n is greater than one.
In one embodiment, detecting the two-dimensional intensity image of the one or more second objects is performed by reading out an amount of charge for each pixel of the two-dimensional array of pixels.
In some embodiments, the method 1900 may include alternately performing three-dimensional sensing and computer vision in sequential time slots, and the duration of each time slot may range from about 1 ms to about 50 ms.
In some other embodiments, the method 1900 may include performing three-dimensional sensing and computer vision simultaneously using an imaging system such as that illustrated in
It should be appreciated that the specific steps illustrated in
It is also understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims.
This application is a continuation of U.S. patent application Ser. No. 15/803,351, filed on Nov. 3, 2017, now U.S. Pat. No. 10,659,701, issued on May 19, 2020, entitled “METHOD AND SYSTEM FOR MULTIPLE F-NUMBER LENS,” which is a non-provisional of and claims the benefit of and priority to U.S. Provisional Patent Application No. 62/420,249, filed on Nov. 10, 2016, entitled “METHOD AND SYSTEM FOR MULTIPLE F-NUMBER LENS,” which are hereby incorporated by reference in their entirety for all purposes.