1. Field
Embodiments relate to an image sensor, a method of sensing an image, and an image capturing apparatus including the image sensor. Embodiments may also relate to an image sensor capable of, e.g., reducing influence of a change in integral time, a method of sensing an image, and an image capturing apparatus including the image sensor.
2. Description of the Related Art
Technologies relating to image capturing apparatuses and methods of capturing images have advanced at high speed. In order to sense more accurate image information, image sensors have been developed to sense depth information as well as color information of an object.
Embodiments may be realized by providing an image sensor that receives reflected light from an object having an output light incident thereon, the image sensor including a pixel array including pixels that sample a plurality of modulation signals having different phases from the reflected light and that output pixel output signals corresponding to the plurality of modulation signals, the pixel output signals being used to generate first images, and an integral time adjusting unit that detects a change in an integral time applied to generate the first images such that the integral time adjusting unit compares intensities of the first images to a reference intensity and determines an adjusted integral time when the change in the integral time is detected. When the change in the integral time is detected, the pixel array generates second images that are subsequent to the first images by applying the adjusted integral time determined by the integral time adjusting unit based on the first images.
The integral time adjusting unit may include an image condition detector that generates a control signal indicating whether the first images are excessively or insufficiently exposed, by comparing the intensities of the first images to the reference intensity, and an integral time calculator that calculates the adjusted integral time in response to the control signal. The image condition detector may compare a maximum image intensity among the intensities of the first images to the reference intensity. The integral time calculator may calculate the adjusted integral time by multiplying a non-adjusted integral time by a ratio of the maximum image intensity and the reference intensity.
The image condition detector may compare a ratio of a maximum image intensity among the intensities of the first images and the reference intensity with a reference value. The ratio of the maximum image intensity and the reference intensity may be equal to or greater than 1. The reference value may be equal to or greater than 0 and may be set as a value equal to or less than an inverse of a factor. The reference intensity may be equal to the factor multiplied by a maximum pixel output signal from among the pixel output signals in a normal state of the image sensor. The integral time calculator may calculate the adjusted integral time by multiplying a non-adjusted integral time by the ratio of the maximum image intensity and the reference intensity.
The image condition detector may compare a ratio of the reference intensity and a smoothed maximum image intensity to a reference value. The smoothed maximum image intensity may be calculated by smooth-filtering a maximum image intensity among the intensities of the first images. The integral time calculator may calculate the adjusted integral time by multiplying a non-adjusted integral time by the ratio of the smoothed maximum image intensity and the reference intensity.
The image sensor may include a depth information calculator that calculates depth information regarding the object by estimating a delay between the output light and the reflected light from the first images that have different phases and that have a same integral time as the second images. Each of the modulation signals may be phase-modulated from the output light by one of about 0°, 90°, 180°, and 270°.
The pixel array may include color pixels that receive wavelengths of the reflected light for detecting color information regarding the object and that generate pixel output signals of the color pixels corresponding to the received wavelengths, and depth pixels that receive wavelengths of the reflected light for detecting depth information regarding the object and that generate pixel output signals of the depth pixels corresponding to the received wavelengths. The image sensor may further include a color information calculator that receives the pixel output signals of the color pixels and calculates the color information. The image sensor may be a time of flight image sensor.
Embodiments may also be realized by providing an image sensing method using an image sensor that receives reflected light from an object having an output light incident thereon, and the image sensing method includes sampling, from the reflected light, a plurality of modulation signals having different phases, and sequentially generating first images by simultaneously outputting pixel output signals corresponding to the plurality of modulation signals, detecting a change in an integral time applied to generate the first images by comparing intensities of the first images to a reference intensity and determining an adjusted integral time when the change in the integral time is detected, and, when the change in the integral time is detected, forming second images that are subsequent to the first images by applying the adjusted integral time to the second images.
Embodiments may also be realized by providing an image sensor for sensing an object that includes a light source driver that emits output light toward the object, a pixel array including a plurality of pixels that convert light reflected from the object into an electric charge to generate first images, an integral time adjusting unit that is connected to the pixel array and that detects a change in an integral time applied to generate the first images such that the integral time adjusting unit compares intensities of the first images to a reference intensity and determines an adjusted integral time when the change in the integral time is detected, and when the change in the integral time is detected, the pixel array generates second images that are subsequent to the first images by applying the adjusted integral time determined by the integral time adjusting unit based on the first images.
When the change in the integral time is detected, the integral time adjusting unit may calculate a maximum image intensity among the intensities of the first images. When the maximum image intensity is less than the reference intensity, the pixel array may generate the second images by applying a non-adjusted integral time. When the maximum image intensity is greater than or equal to the reference intensity, the pixel array may generate the second images by applying the adjusted integral time.
When the change in the integral time is detected, the integral time adjusting unit may calculate a maximum image intensity among the intensities of the first images and calculate a ratio of the maximum image intensity and the reference intensity. When the ratio is less than a reference value, the pixel array may generate the second images by applying a non-adjusted integral time. When the ratio is greater than or equal to the reference value, the pixel array may generate the second images by applying the adjusted integral time.
When the change in the integral time is detected, the integral time adjusting unit may calculate a maximum image intensity among the intensities of the first images, calculate a smoothed maximum image intensity, and calculate a ratio of the smoothed maximum image intensity and the reference intensity. When the ratio is less than a reference value, the pixel array may generate the second images by applying a non-adjusted integral time. When the ratio is greater than or equal to the reference value, the pixel array may generate the second images by applying the adjusted integral time.
The integral time adjusting unit may include an image condition detector that compares the intensities of the first images to the reference intensity and outputs a corresponding signal, and an integral time calculator that receives the corresponding signal from the image condition detector and determines the adjusted integral time.
Features will become apparent to those of ordinary skill in the art by describing in detail exemplary embodiments with reference to the attached drawings in which:
Example embodiments will now be described more fully hereinafter with reference to the accompanying drawings; however, they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like reference numerals refer to like elements throughout.
Referring to
As shown in
The pixel array PA of
Although the color pixels PXc and the depth pixels PXd are separately arranged in
The depth pixels PXd may each include a photoelectric conversion element (not shown) for converting the reflected light RLIG into an electric charge. The photoelectric conversion element may be, e.g., a photodiode, a phototransistor, a photo-gate, a pinned photodiode, and so forth. Also, the depth pixels PXd may each include transistors connected to the photoelectric conversion element. The transistors may control the photoelectric conversion element or output an electric charge of the photoelectric conversion element as pixel signals. For example, a read-out transistor included in each of the depth pixels PXd may output, as a pixel signal, an output voltage corresponding to the reflected light received by the photoelectric conversion element of that depth pixel PXd. Also, the color pixels PXc may each include a photoelectric conversion element (not shown) for converting the visible light into an electric charge. A structure and a function of each pixel will not be explained in detail for clarity.
If the pixel array PA of the present embodiment separately includes the color pixels PXc and the depth pixels PXd, e.g., as shown in
Referring to
The timing generator TG may control the depth pixels PXd to be activated, e.g., so that the depth pixels PXd of the image sensor ISEN may demodulate the reflected light RLIG synchronously with the clock ‘ta’. The photoelectric conversion element of each of the depth pixels PXd may output electric charges accumulated with respect to the reflected light RLIG for a depth integration time Tint
The depth pixel signals POUTd of the image sensor ISEN may be output to correspond to a plurality of demodulated optical wave pulses from the reflected light RLIG that includes modulated optical wave pulses. For example,
Referring back to
The color information calculator CC may calculate the color information CINF from the color pixel signals POUTc converted to digital data by the analog-to-digital converter ADC.
The depth information calculator DC may calculate the depth information DINF from the depth pixel signals POUTd=A0 through A3 converted to digital data by the analog-to-digital converter ADC. For example, the depth information calculator DC estimates a phase delay φ between the output light OLIG and the reflected light RLIG as shown in Equation 1, and determines a distance D between the image sensor ISEN and the object OBJ as shown in Equation 2.
In Equation 2, the distance D between the image sensor ISEN and the object OBJ is measured in meters, Fm is the modulation wave period measured in seconds, and ‘c’ is the speed of light measured in m/s. Thus, the distance D between the image sensor ISEN and the object OBJ may be sensed as the depth information DINF from the depth pixel signals POUTd output from the depth pixels PXd of
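Equations 1 and 2 themselves are not reproduced in this text. For illustration only, a commonly used four-phase time-of-flight formulation (an assumption here, not necessarily the exact form of Equations 1 and 2) gives, with A0 through A3 denoting the pixel output signals for the 0°, 90°, 180°, and 270° modulation signals:

φ=arctan((A1−A3)/(A0−A2)) [cf. Equation 1]

D=(c·Fm/(4π))·φ [cf. Equation 2]

The sign convention of the numerator and denominator varies between implementations; the form of the distance relation follows because a full phase cycle of 2π corresponds to a round-trip path length of c·Fm.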
A method of calculating the depth information DINF in units of the pixels PX is described above. A method of calculating the depth information DINF in units of images each formed of the pixel output signals POUT from N*M pixels PX (N and M are integers equal to or greater than 2) will now be described.
Further, the pixel output signals POUT may be sensed by excessively or insufficiently exposed pixels PX. The pixel output signals POUT (image values) output from the excessively or insufficiently exposed pixels PX may be inaccurate. The image sensor ISEN may reduce the possibility of and/or prevent the above-described error by automatically detecting an integral time applied to the excessively or insufficiently exposed pixels PX (an image) and adjusting the integral time to a new integral time. A detailed description of the integral time will now be provided.
Referring to
In
In this case, similarly to the method of calculating the depth information DINF from the first through fourth pixel output signals A0 through A3 by using Equations 1 and 2, the depth information DINF (the distance D) at the time t5 may be calculated by calculating a phase delay φ0 at the time t5 according to Equation 3 below and substituting the phase delay φ0 into Equation 4.
In this manner, phase delays φ1, φ2, φ3, and φ4 may be calculated by substituting values of four images newly captured at subsequent times (an image Ai+1,1 at a time t6, an image Ai+1,2 at a time t7, an image Ai+1,3 at a time t8, and an image Ai+2,0 at a time t9) according to Equations 5 through 8, respectively. The phase delays φ1, φ2, φ3, and φ4 may be used to calculate the depth information DINF (the distance D) as shown in Equation 4.
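As a rough sketch of this sliding-window update (Equations 3 through 8 themselves are not reproduced here, so the arctangent form below is only the commonly used four-phase formulation, and all function and constant names are illustrative assumptions), the most recent image of each phase may be kept and the phase delay and distance recomputed whenever a new image arrives:

```python
import math

SPEED_OF_LIGHT = 3.0e8  # speed of light 'c' in m/s


def depth_from_window(window, modulation_period):
    """Compute the distance D from the four most recently captured images.

    window            -- dict mapping phase index 0..3 (0, 90, 180, 270 degrees)
                         to the value of the most recent image for that phase
                         (e.g., A(i,3) at time t4, A(i+1,0) at time t5)
    modulation_period -- modulation wave period Fm in seconds
    """
    # Phase delay per the common four-phase formulation (cf. Equations 3-8).
    phase_delay = math.atan2(window[1] - window[3], window[0] - window[2])
    phase_delay %= 2.0 * math.pi

    # Distance per the common time-of-flight relation (cf. Equation 4).
    return SPEED_OF_LIGHT * modulation_period * phase_delay / (4.0 * math.pi)


def on_new_image(window, phase_index, image_value, modulation_period):
    """Slide the four-image window forward and recompute the depth."""
    window[phase_index] = image_value
    return depth_from_window(window, modulation_period)
```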
Referring to
If a plurality of (four) images of different phases that are substituted into Equations 3 through 8 to calculate the depth information DINF have different integral times Tint, the depth information calculator DC may stop the calculation of the depth information DINF until the images have the same integral time Tint. For example, if the integral time Tint has changed from the first integral time Tint1 to the second integral time Tint2 at the time t6, as illustrated in
Further, if the integral time Tint has changed, an image to which the changed integral time Tint is applied may be excessively or insufficiently exposed. If an image is excessively or insufficiently exposed, the values of the images substituted into Equations 3 through 8 are not constant, and thus the depth information DINF may be calculated inaccurately or may not be calculated.
In this case, the image sensor ISEN may automatically detect the changed integral time Tint, may adjust the detected integral time Tint, and may accurately calculate the depth information DINF without stopping the calculation of the depth information DINF. A detailed description thereof will now be provided.
Referring to
The image condition detector ICD compares an intensity I of the image Aj,k to a reference intensity Iref and determines whether the image Aj,k is excessively or insufficiently exposed. For example, the image condition detector ICD detects the intensity I of the image Aj,k by using Equation 9 (operation S841).
As shown in Equation 9, the intensity I of the image Aj,k is an average value of the pixel output signals POUT output from N*M pixels PX for forming the image Aj,k. In Equation 9, (x,y) represents a coordinate in the image Aj,k (a coordinate of each pixel PX). It is assumed that the image Aj,k, of which the intensity I is currently calculated, has the same integral time Tint as a previously captured image Aj,k-1 or Aj-1,k.
Equation 9 shows a case when the image Aj,k has a value of zero (“0”) in a black level. However, the image Aj,k may have an arbitrary value B that is not zero (“0”) with respect to the reflected light RLIG in the black level. That is, if the image Aj,k has the arbitrary value B in the black level, the arbitrary value B has to be subtracted from a value of each pixel PX (each pixel output signal POUT) of the image Aj,k (error correction) before calculating the intensity I of the image Aj,k, as represented in Equation 10 (operation S842).
Hereinafter, for accuracy of calculation, it is assumed that the intensity I of the image Aj,k is calculated by using Equation 10.
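Expressed as code, the intensity calculation of operations S841 and S842 may be sketched as follows; this is a minimal illustration based on the description above (Equation 9 averages the N*M pixel output signals, and Equation 10 additionally subtracts the black-level value B), and the function name and the use of nested lists are assumptions:

```python
def image_intensity(pixel_outputs, black_level=0.0):
    """Intensity I of an image A(j,k) as the average of its pixel output signals.

    pixel_outputs -- N*M pixel output signals POUT, given as a list of rows
    black_level   -- arbitrary black-level value B; 0 corresponds to Equation 9,
                     a non-zero B to the corrected form of Equation 10
    """
    rows = len(pixel_outputs)
    cols = len(pixel_outputs[0])
    total = sum(value - black_level for row in pixel_outputs for value in row)
    return total / (rows * cols)
```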
The image condition detector ICD may calculate the intensities I of a plurality of (four) images having different phases by using Equation 10, and may select a maximum image intensity IM from among the intensities I. For example, the image condition detector ICD may calculate the maximum image intensity IM of images Aj,0, Aj,1, Aj,2, and Aj,3 having phases of about 0°, 90°, 180°, and 270°, respectively, by using Equation 11 (operation S843).
IM(j)=max(I(j,0), I(j,1), I(j,2), I(j,3)) [Equation 11]
Then, the image condition detector ICD may compare the maximum image intensity IM to the reference intensity Iref as represented in Inequation 12 (operation S844), and may detect whether the image Aj,k is excessively or insufficiently exposed.
IM≧Iref [Inequation 12]
In this case, the reference intensity Iref is a value obtained by multiplying a maximum pixel output signal pM by a factor α as represented in Equation 13, and corresponds to a certain ratio of the maximum pixel output signal pM.
Iref=α·pM, where 0<α<1 [Equation 13]
In Equation 13, the maximum pixel output signal pM is a maximum value of the pixel output signals POUT for forming a general image that is captured by a general image capturing apparatus and is not excessively or insufficiently exposed, and the factor α is a value between 0 and 1. For example, the maximum pixel output signal pM may be one of the pixel output signals output in a normal state of the image sensor.
In the above description, the maximum image intensity IM, i.e., the largest value among the intensities I of the plurality of (four) images having different phases, is compared to the reference intensity Iref to detect whether the image Aj,k is excessively or insufficiently exposed. If a smaller intensity I, i.e., a value less than the maximum image intensity IM, were compared to the reference intensity Iref instead, an image having an intensity I greater than that smaller intensity I could not be detected.
If the factor α in Equation 13 is set as a large value, a larger number of images are detected as being excessively or insufficiently exposed and thus the image condition detector ICD may more accurately detect the excessively or insufficiently exposed images. If the factor α is set as a small value, the integral time calculator ATC adjusts the integral time Tint less frequently and thus an operation speed of the image sensor ISEN may be increased.
Referring back to Inequation 12, if Inequation 12 is true (“YES” in operation S844), the image condition detector ICD determines that the image Aj,k is excessively or insufficiently exposed. Accordingly, the image condition detector ICD may transmit to the integral time calculator ATC information Inf_exp about the changed integral time Tint.
Continuously referring to
Tint,adj(j,k)=Tint(j,k)*(Iref/IM(j)) [Equation 14]
As such, the integral time calculator ATC may reduce an influence of the changed integral time Tint by adjusting the changed integral time Tint according to the ratio of the maximum image intensity IM and the reference intensity Iref. Thereafter, the pixel array PA may capture a subsequent image(s) by applying the adjusted integral time Tint,adj (operation S860).
If Inequation 12 is false (“NO” in operation S844), the pixel array PA may capture the subsequent image without adjusting the integral time Tint (i.e., while maintaining the integral time Tint) (operation S870). That is, the pixel array PA uses the adjusted integral time Tint,adj instead of the integral time Tint only when the adjusted integral time Tint,adj has been determined by the integral time calculator ATC.
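Putting the above operations together, a minimal sketch of the detection and adjustment decision of the image sensing method 800 (operations S843 through S870) may look as follows; the function and parameter names are illustrative assumptions, and the image intensities are assumed to be computed beforehand, e.g., with the image_intensity() helper sketched above:

```python
def next_integral_time(image_intensities, integral_time, max_pixel_output, alpha):
    """Return the integral time to apply to the subsequent images.

    image_intensities -- intensities I of the four phase images A(j,0)..A(j,3)
    integral_time     -- integral time Tint applied to those images
    max_pixel_output  -- maximum pixel output signal pM of a normally exposed image
    alpha             -- factor between 0 and 1 from Equation 13
    """
    # Maximum image intensity IM among the four phase images (Equation 11).
    max_intensity = max(image_intensities)

    # Reference intensity Iref = alpha * pM (Equation 13).
    reference_intensity = alpha * max_pixel_output

    if max_intensity >= reference_intensity:  # Inequation 12 true ("YES" in S844)
        # Adjusted integral time Tint,adj = Tint * (Iref / IM) (Equation 14).
        return integral_time * (reference_intensity / max_intensity)

    # Inequation 12 false ("NO" in S844): keep the integral time unchanged (S870).
    return integral_time
```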
The depth information calculator DC generates the depth information DINF regarding the captured images (operation S880).
The integral time adjusting unit TAU, e.g., according to the image sensing method 800 illustrated in
Referring to
R(j)≧TR [Inequation 15]
In Inequation 15, the ratio R between the maximum image intensity IM and the reference intensity Iref may be calculated according to Equation 16 and may have a value greater than 1.
The reference value TR in Inequation 15 may be equal to or greater than 0 and may be less than an inverse of the factor α that is multiplied by the maximum pixel output signal pM in Equation 13 above to calculate the reference intensity Iref, as represented in Inequation 17.
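Written out from the description above (Equations 16 and 17 themselves are not reproduced in this text, so the following expressions are reconstructions rather than the original equations):

R(j)=IM(j)/Iref [cf. Equation 16]

0≦TR<1/α [cf. Inequation 17]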
If Inequation 15 is true (“YES” in operation S944), the image condition detector ICD determines that the image Aj,k is excessively or insufficiently exposed, i.e., that the integral time Tint has changed. Accordingly, the image condition detector ICD may transmit to the integral time calculator ATC information Inf_exp about the changed integral time Tint.
The integral time calculator ATC receives the information Inf_exp, calculates the adjusted integral time Tint,adj by multiplying the integral time Tint of the image Aj,k by the ratio of the maximum image intensity IM and the reference intensity Iref as represented in Equation 14, and applies the adjusted integral time Tint,adj to the pixel array PA (operation S945).
The pixel array PA captures a subsequent image(s) by applying the adjusted integral time Tint,adj (operation S960). Otherwise, if Inequation 15 is false (“NO” in operation S944), the pixel array PA captures the subsequent image without adjusting the integral time Tint (i.e., while maintaining the integral time Tint) (operation S970).
Operation S980 of the depth information calculator DC is substantially the same as operation S880 in the image sensing method 800 illustrated in
Referring to
In Equation 18, a difference between a current maximum image intensity IM(j) regarding images Aj,0, Aj,1, Aj,2 and Aj,3 and a recent maximum image intensity IM(j-1) regarding images Aj-1,0, Aj-1,1, Aj-1,2 and Aj-1,3 may be reduced by multiplying the current maximum image intensity IM(j) and the recent maximum image intensity IM(j-1) by complementary weights determined by a smoothing coefficient β. The smoothing coefficient β has a value greater than 0 and equal to or less than 1.
Images captured initially, or images newly captured by using a new integral time, do not have a recent maximum image intensity IM(j-1) by which the current maximum image intensity IM(j) is to be smoothed; accordingly, the smoothed maximum image intensity IMA may be set equal to the maximum image intensity IM.
If the smoothing coefficient β in Equation 18 is set as a large value, a time for capturing an image and then capturing a subsequent image may be reduced. If the smoothing coefficient β is set as a small value, an operation of sequentially capturing images may be performed stably.
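A brief sketch of this smoothing step follows; Equation 18 is not reproduced here, so the complementary weighting below, IMA(j)=β·IM(j)+(1−β)·IM(j-1), is an assumption based on the description above, and the function and parameter names are illustrative:

```python
def smoothed_max_intensity(current_max, previous_max=None, beta=0.5):
    """Smoothed maximum image intensity IMA(j) (cf. Equation 18).

    current_max  -- maximum image intensity IM(j) of the current four images
    previous_max -- recent maximum image intensity IM(j-1); None for images
                    captured initially or with a new integral time, in which
                    case IMA is simply set to IM(j)
    beta         -- smoothing coefficient, greater than 0 and at most 1
    """
    if previous_max is None:
        return current_max
    return beta * current_max + (1.0 - beta) * previous_max
```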
R′(j)≧TR [Inequation 19]
The ratio R′ between the smoothed maximum image intensity IMA and the reference intensity Iref in Inequation 19 may be calculated by using Equation 20.
If Inequation 19 is true (“YES” in operation S1044), the image condition detector ICD determines that the image Aj,k is excessively or insufficiently exposed, i.e., that the integral time Tint has changed. Accordingly, the image condition detector ICD may transmit to the integral time calculator ATC information Inf_exp about the changed integral time Tint.
The integral time calculator ATC receives the information Inf_exp, calculates the adjusted integral time Tint,adj by multiplying the integral time Tint of the image Aj,k by the ratio R′ between the smoothed maximum image intensity IMA and the reference intensity Iref as represented in Equation 21, and applies the adjusted integral time Tint,adj to the pixel array PA (operation S1045).
Tint,adj(j,k)=Tint(j,k)*(Iref/IMA(j)) [Equation 21]
The pixel array PA captures a subsequent image(s) by applying the adjusted integral time Tint,adj (operation S1060). Otherwise, if Inequation 19 is false (“NO” in operation S1044), the pixel array PA captures the subsequent image without adjusting the integral time Tint (i.e., while maintaining the integral time Tint) (operation S1070).
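Combining the smoothed intensity with the ratio test, the decision of operations S1044 through S1070 may be sketched as below, again under the naming assumptions of the earlier sketches:

```python
def next_integral_time_smoothed(smoothed_max, reference_intensity,
                                integral_time, reference_value):
    """Return the integral time to apply to the subsequent images.

    smoothed_max        -- smoothed maximum image intensity IMA(j)
    reference_intensity -- reference intensity Iref
    integral_time       -- integral time Tint of the current images
    reference_value     -- reference value TR
    """
    ratio = smoothed_max / reference_intensity  # ratio R' (cf. Equation 20)

    if ratio >= reference_value:  # Inequation 19 true ("YES" in S1044)
        # Adjusted integral time Tint,adj = Tint * (Iref / IMA) (Equation 21).
        return integral_time * (reference_intensity / smoothed_max)

    # Inequation 19 false ("NO" in S1044): keep the integral time (S1070).
    return integral_time
```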
Operation S1080 of the depth information calculator DC is substantially the same as operation S880 in the image sensing method 800 illustrated in
As described above, depth information may be accurately calculated without stopping the calculation of the depth information by automatically detecting whether an integral time is changed and, if the integral time is changed, adjusting the changed integral time.
Referring back to
The image sensor ISEN senses both of the color information CINF and the depth information DINF in
Referring to
Referring to
Referring to
The computing system COM may further include a power supply PS. The computing system COM may also include a storing device RAM for storing the image information IMG transmitted from the image capturing apparatus CMR.
If the computing system COM is, e.g., a mobile apparatus, the computing system COM may additionally include a battery for applying an operational voltage to the computing system COM, and a modem such as a baseband chipset. Also, it is well known to one of ordinary skill in the art that the computing system COM may further include an application chipset, a mobile dynamic random access memory (DRAM), and the like, and thus detailed descriptions thereof are not provided here.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present invention as set forth in the following claims.