This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2023-0143171, filed on Oct. 24, 2023, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
Example embodiments of the disclosure relate to an image acquisition device that provides an image with a wide color gamut and an electronic device including the same.
Image sensors are devices that receive light incident from a subject and photoelectrically convert the received light to generate an electrical signal.
For color expression, image sensors typically use a color filter configured as an array of filter elements that selectively transmit red, green, and blue light, and sense the amount of light transmitted through each of the filter elements.
In such image acquisition, wavelength bands transmitted by the filter elements provided in the color filter are limited, which limits a color gamut for expressing a subject.
One or more example embodiments of the disclosure provide an image acquisition device that provides an image with a wide color gamut and an electronic device including the same.
Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments of the disclosure.
According to an aspect of an example embodiment, an image acquisition device includes a first image sensor having a first spatial resolution and configured to acquire a first image based on a first wavelength band, a second image sensor having a second spatial resolution lower than the first spatial resolution and configured to acquire a second image based on a second wavelength band wider than the first wavelength band, and a processor configured to register the first image and the second image in association with each other, by using relative position information between the first image sensor and the second image sensor; generate a third image, of which a color component is determined by replacing a color component of the first image with a color component of the second image; and generate a fourth image, of which a color component is determined by synthesizing the color component of the first image and the color component of the third image through alpha blending.
The processor may determine the color component of the fourth image by summing a value acquired by multiplying the color component of the first image by an alpha blending coefficient α and a value acquired by multiplying the color component of the third image by (1−α), and the alpha blending coefficient α may have a value of about 0 or more and about 1 or less.
The processor may calculate parameters for registering the first image and the second image based on at least one of the relative position information, a resolution, a field of view, or a focal length of each of the first image sensor and the second image sensor.
The third image may have the first spatial resolution and a color gamut that corresponds to a color gamut of the second image.
The processor may determine the color component of the third image by matching a color statistic of the first image with a color statistic of the second image.
The color statistic may correspond to an average value with respect to peripheral pixels adjacent to a central pixel.
The color statistic may correspond to a standard deviation value with respect to peripheral pixels adjacent to a central pixel.
The processor may calculate the average value by weighting a similarity between the central pixel and the peripheral pixels.
The processor may calculate the standard deviation value by weighting a standard deviation with a similarity between the central pixel and the peripheral pixels.
The similarity may be calculated by using, as feature vectors, a luminance value of the central pixel and luminance values of the peripheral pixels.
The processor may determine the alpha blending coefficient α as a logical sum of a first coefficient and a second coefficient, the first coefficient may represent information about an occlusion region according to a disparity between the first image and the second image, and the second coefficient may represent false color information of the color component of the third image.
The processor may set the first coefficient of a pixel of interest of the first image to 1 when no pixel of the second image corresponds to the pixel of interest of the first image, and set the first coefficient to 0 when at least one pixel of the second image corresponds to the pixel of interest of the first image.
The processor may determine a greater value among the first coefficient and the second coefficient as a value of the alpha blending coefficient α.
The second coefficient may be proportional to a color difference between the first image and the third image and have a value of about 0 or more and about 1 or less.
The second coefficient may have a value closer to 0 as a saturation of the first image increases, or closer to 1 as the saturation of the first image decreases.
The processor may set the second coefficient based on a difference in luminance values between corresponding pixels of the first image and the second image, such that the second coefficient is closer to 1 as the difference increases or closer to 0 as the difference decreases.
The processor may separate the first image and the second image into a luminance component and a color component, respectively, and generate the third image and the fourth image by using the separated first image and the separated second image.
The processor may register the first image and the second image by extracting at least one of an edge feature or a corner feature from the first image and the second image, and matching the extracted at least one of the edge feature or the corner feature between the first image and the second image.
According to an aspect of an example embodiment, an electronic device includes the image acquisition device described above.
According to an aspect of an example embodiment, a method of controlling an image acquisition device includes acquiring a first image from a first image sensor and acquiring a second image from a second image sensor; registering the acquired first image and the acquired second image in association with each other; generating a third image, of which a color component is determined by replacing a color component of the first image with a color component of the second image; and generating a fourth image, of which a color component is determined by synthesizing the color component of the first image and the color component of the third image through alpha blending.
The above and other aspects, features, and advantages of certain example embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the example embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects of the disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings. The example embodiments of the disclosure may be variously modified and may be embodied in many different forms. In the following drawings, like reference numerals refer to like components, and the size of each component in the drawings may be exaggerated for clarity and convenience of description.
Hereinafter, what is described as “upper” or “on” may include an element directly above and in contact, as well as an element above without contact.
Terms such as “first” or “second” used herein may be used to describe various components, but are used only for the purpose of distinguishing one component from another component. These terms do not limit the difference in the material or structure of the components.
The singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. When a part “comprises” or “includes” a component in the specification, unless otherwise defined, this does not exclude other components, and the part may further include other components.
Also, in the specification, the term “unit” or “module” denotes a unit or a module that processes at least one function or operation, and may be implemented by hardware, software, or a combination of hardware and software.
The term “above” and similar directional terms may be applied to both singular and plural.
In the disclosure, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. For example, the term “a processor” may refer to either a single processor or multiple processors. When a processor is described as carrying out an operation and is further described as performing an additional operation, the multiple operations may be executed by a single processor or by any one or a combination of multiple processors.
Reference throughout the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” or similar language may indicate that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present solution. Thus, the phrases “in one embodiment”, “in an embodiment,” “in an example embodiment,” and similar language throughout this disclosure may, but do not necessarily, all refer to the same embodiment. The embodiments described herein are example embodiments, and thus, the disclosure is not limited thereto and may be realized in various other forms. It will also be appreciated that one or more features included in one embodiment can be combined with one or more features included in other embodiments.
Operations of a method described herein may be performed in any suitable order unless explicitly stated that they must be performed in the order described. In addition, the use of all exemplary terms (e.g., etc.) is merely for describing the technical idea in detail, and unless explicitly limited by the claims, the scope of rights is not limited by these terms.
The image acquisition device 1000 includes a first image sensor 100 that acquires a first image IMG1 based on M (M being an integer) spectral bands, a second image sensor 200 that acquires a second image IMG2 based on N spectral bands, N being an integer greater than M, and a processor 500 that performs signal processing on the first image IMG1 and the second image IMG2 to form (or generate) a third image IMG3 and synthesize the first image IMG1 and the third image IMG3 to form (or generate) a fourth image IMG4.
The first image sensor 100 is a sensor employed in a general RGB camera, and may be a complementary metal oxide semiconductor (CMOS) image sensor using a Bayer color filter array. The first image IMG1 acquired by the first image sensor 100 may be a red, green, and blue (RGB)-based image, and may have a color gamut range acquired by the general RGB camera. For example, the first image IMG1 may have a standard RGB (s-RGB) color gamut, a BT.709 color gamut, a DCI-P3 color gamut, or an Adobe RGB color gamut.
The second image sensor 200 is a sensor that senses light of more types of wavelengths than the first image sensor 100. The second image sensor 200 may use, for example, 16 channels, or 31 channels, or another number of channels. A bandwidth of each channel may be set to be narrower than each of R, G, and B bands, and a total bandwidth which is the sum of bandwidths of all channels may include an RGB bandwidth, that is, a visible light bandwidth, and be wider than the RGB bandwidth. For example, the sum of bandwidths of all channels of the second image sensor 200 may correspond to a bandwidth of about 350 nm to about 1000 nm (see e.g.,
The first image sensor 100 and the second image sensor 200 may be configured as separate chips or a single chip.
The first image IMG1 and the second image IMG2 respectively acquired by the first image sensor 100 and the second image sensor 200 may be stored in a memory 300. The memory 300 may be a line memory that stores the first image IMG1 and the second image IMG2 in a line unit, or may be a frame buffer that entirely stores the first image IMG1 and the second image IMG2. The memory 300 may use a static random access memory (SRAM) or a dynamic random access memory (DRAM). The memory 300 may be located outside each of the first image sensor 100 and the second image sensor 200, or may be integrated into each of the first image sensor 100 and the second image sensor 200. When integrated into each of the first image sensor 100 and the second image sensor 200, the memory 300 may be integrated together with a sensor circuit. In this case, a pixel part may be configured as one stack, a circuit part together with the memory 300 may be configured as another stack, and the two stacks may be integrated to form one chip. Alternatively, according to a configuration, three-dimensional stacking with three layers of a pixel part, a circuit part, and the memory 300 is also possible.
The processor 500 uses the first image IMG1 as an input image and the second image IMG2 as a reference image to form the third image IMG3. The third image IMG3 has a spatial resolution corresponding to the first image IMG1 by the first image sensor 100 and a color gamut corresponding to the second image IMG2 by the second image sensor 200. The third image IMG3 may have the same spatial resolution as the first image IMG1 and also have an expanded color gamut, for example, a color gamut range of DCI-P3 or more in
The third image IMG3 may be acquired by replacing a color component of the first image IMG1 with a color component of the second image IMG2. To this end, corresponding relationships between pixels of the first image IMG1 and pixels of the second image IMG2 may be identified by image registering the first image IMG1 and the second image IMG2 on a two-dimensional plane.
The processor 500 may form the fourth image IMG4 by combining the first image IMG1 and the third image IMG3 through alpha blending. The fourth image IMG4 may have the same spatial resolution as the first image IMG1, and also have an expanded color gamut, for example, the color gamut range of DCI-P3 or more in
An image processing process of the processor 500 is described in more detail as follows.
Referring back to
The image acquisition unit 510 may acquire an image from each of the first image sensor 100 and the second image sensor 200, and then perform basic image processing before or after storing the image in the memory 300. For example, the image acquisition unit 510 may perform bad pixel correction, fixed pattern noise correction, crosstalk reduction, remosaicing, demosaicing, false color reduction, denoising, chromatic aberration correction, etc. In addition, the image acquisition unit 510 may perform the same image processing on images from the first image sensor 100 and the second image sensor 200, or may differently perform image processing on images from the first image sensor 100 and the second image sensor 200.
The image acquisition unit 510 may apply demosaicing to interpolate the one-channel raw images of the first image IMG1 and the second image IMG2 into an RGB image and a multi-channel spectral image, respectively. In addition, the image acquisition unit 510 may convert the first image IMG1, which is an RGB image, and the second image IMG2, which is a multi-channel spectral image, into a three-channel XYZ color space through image domain conversion processing. In order to accurately transfer color information of the second image IMG2 to the first image IMG1, the image acquisition unit 510 may normalize a luminance of the second image IMG2 with respect to the first image IMG1.
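As a minimal illustration of this step (assuming, for simplicity, a linear sRGB first image converted with a D65 matrix and a second image already projected into XYZ; the function names are illustrative and not part of the disclosure), the domain conversion and luminance normalization may look like:

```python
import numpy as np

# Linear sRGB (D65) to CIE XYZ matrix; the actual working color space may differ.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb_linear):
    """Convert an H x W x 3 linear-RGB image (values in [0, 1]) to XYZ."""
    return rgb_linear @ RGB_TO_XYZ.T

def normalize_luminance(xyz_img2, xyz_img1):
    """Scale the second image so its mean luminance (Y) matches the first image."""
    gain = xyz_img1[..., 1].mean() / max(xyz_img2[..., 1].mean(), 1e-8)
    return xyz_img2 * gain
```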
The image registration unit 520 may use relative position information between the first image sensor 100 and the second image sensor 200 to register the first image IMG1 and the second image IMG2. The image registration unit 520 may determine a positional relationship between pixels of the image acquired from each of the first image sensor 100 and the second image sensor 200 by considering a spatial resolution of the image, a field of view of an optical system used to acquire the image, a focal length, etc. At this time, the image registration unit 520 may overlay an image of one sensor (e.g., the first image of the first image sensor) on an image of another sensor (e.g., the second image of the second image sensor). For example, the image registration unit 520 may retrieve pixels of the second image IMG2 corresponding to each pixel of the first image IMG1 with respect to the first image IMG1 acquired by the first image sensor 100. To this end, the image registration unit 520 may perform processing such as scaling, translation, rotation, affine transform, perspective transform, etc. on the pixel of the second image IMG2.
One or more pixels of the second image IMG2 may correspond to a pixel of the first image IMG1. When a plurality of pixels of the second image IMG2 correspond to a pixel of the first image IMG1, the image registration unit 520 may calculate a pixel value of the second image IMG2 corresponding to the pixel of the first image IMG1 through a weighted sum of the plurality of pixels of the second image IMG2 according to positions of the plurality of pixels.
There may be no pixel of the second image IMG2 that corresponds to a pixel of the first image IMG1. This may occur due to a disparity between the first image sensor 100 and the second image sensor 200, and when there is no corresponding pixel in the second image IMG2 for a pixel of the first image IMG1, the image registration unit 520 may determine such pixel of the first image IMG1 to be an occlusion region. The image registration unit 520 may store information about the pixel determined to be the occlusion region in the form of a map. In an embodiment, the image registration unit 520 may generate an occlusion map by setting a value of the corresponding pixel to 1 when there is no corresponding pixel between the first image IMG1 and the second image IMG2, and by setting a value of the corresponding pixel to 0 when there is a corresponding pixel between the first image IMG1 and the second image IMG2.
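A minimal sketch of this correspondence step is shown below. It assumes the registration has been reduced to a single 3 x 3 homography H mapping IMG1 pixel coordinates to IMG2 coordinates (the general case may combine scaling, translation, rotation, affine, and perspective components). Bilinear sampling plays the role of the position-weighted sum over the corresponding IMG2 pixels, and positions that map outside IMG2 are flagged in the occlusion map; a disparity or consistency test would be needed to detect occlusions that fall inside the frame.

```python
import numpy as np

def register_reference(img2, H, out_shape):
    """Warp the reference image IMG2 onto the IMG1 pixel grid (illustrative sketch).

    img2:      H2 x W2 x C reference image (registered channels of IMG2).
    H:         3 x 3 homography mapping IMG1 pixel coordinates (x, y, 1) to IMG2.
    out_shape: (H1, W1) spatial size of the first image IMG1.
    Returns the warped image and an occlusion map (1 = no corresponding pixel).
    """
    h1, w1 = out_shape
    h2, w2 = img2.shape[:2]
    ys, xs = np.mgrid[0:h1, 0:w1]
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
    mapped = coords @ H.T
    u = mapped[..., 0] / mapped[..., 2]   # sub-pixel (real-valued) x position in IMG2
    v = mapped[..., 1] / mapped[..., 2]   # sub-pixel (real-valued) y position in IMG2

    # Pixels whose mapping falls outside IMG2 have no corresponding pixel.
    occlusion = ((u < 0) | (u > w2 - 1) | (v < 0) | (v > h2 - 1)).astype(np.uint8)

    u = np.clip(u, 0, w2 - 1)
    v = np.clip(v, 0, h2 - 1)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, w2 - 1), np.minimum(v0 + 1, h2 - 1)
    fu, fv = (u - u0)[..., None], (v - v0)[..., None]

    # Bilinear sampling: a position-weighted sum of the corresponding IMG2 pixels.
    warped = (img2[v0, u0] * (1 - fu) * (1 - fv) + img2[v0, u1] * fu * (1 - fv) +
              img2[v1, u0] * (1 - fu) * fv + img2[v1, u1] * fu * fv)
    return warped, occlusion
```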
The image registration unit 520 may perform registration in a sub-pixel unit to increase registration precision. In an embodiment, in registration of the sub-pixel unit, a position of a pixel may be expressed as a real number rather than an integer.
The image registration unit 520 may also increase registration efficiency by allowing the first image sensor 100 and the second image sensor 200 to focus on a subject at the same position through focus control. In addition, the image registration unit 520 may quickly and accurately perform image registration by ensuring that two sensors have the same field of view. For example, when imaging optical systems for forming optical images on the first image sensor 100 and the second image sensor 200 have the same focal length and the same field of view, only translation exists between the first image IMG1 and the second image IMG2, and related parameters may be calculated using relative positions between the first image sensor 100 and the second image sensor 200 and focal lengths of the imaging optical systems.
Before performing registration, the image registration unit 520 may correct aberrations included in the first image IMG1 and the second image IMG2. That is, the image registration unit 520 may perform registration after correcting effects of distortion, geometric aberration, chromatic aberration, etc. caused by lenses included in the imaging optical systems used when acquiring the first image IMG1 and the second image IMG2.
The image registration unit 520 may extract edge feature information within an image and match features between two images by using the edge feature information. When image registration is misaligned in an edge region of a subject, a color distortion may occur. Thus, the image registration unit 520 may extract edge information of two images and perform registration of the two images based on matching between the edge information thereof such that edges in the two images are aligned to prevent distortion from occurring in the edge region. In an embodiment, the image registration unit 520 may perform image registration using other image features such as corner features (or corner points), in addition to or alternative to the edge.
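As one hedged example of such feature-based matching, the sketch below uses corner-like ORB features and a RANSAC homography from OpenCV; these specific choices are illustrative assumptions rather than the method prescribed by the description above.

```python
import cv2
import numpy as np

def estimate_registration(img1_gray, img2_gray):
    """Feature-based registration sketch using corner-like ORB features.

    Edge maps (e.g., from cv2.Canny) could be matched instead, as described above;
    the detector, matcher, and motion model used here are illustrative choices.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1_gray, None)
    kp2, des2 = orb.detectAndCompute(img2_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate a homography mapping IMG1 coordinates to IMG2 coordinates.
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    return H, inlier_mask
```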
The color transfer unit 530 may use the first image IMG1 as an input image and the second image IMG2 as a reference image to change a color of the first image IMG1 to match a color of the second image IMG2 (that is, replacing a color component of the first image IMG1 with a color component of the second image IMG2) and form the third image IMG3 based thereon. In an embodiment, the color transfer unit 530 may form (or determine) a color component of the third image IMG3 by matching a color statistic of the first image IMG1 with a color statistic of the second image IMG2. The color statistic may be based on an average value with respect to peripheral pixels adjacent to a central pixel and/or a standard deviation value with respect to the peripheral pixels adjacent to the central pixel.
The color transfer unit 530 may form the third image IMG3 by respectively separating the first image IMG1 and the second image IMG2 into a luminance component and a color component. For example, the color transfer unit 530 may form the color component of the third image IMG3 by changing the color of the first image IMG1 to match the color of the second image IMG2 in a CIE LAB color space having a high independence of the luminance component and the color component, and form (or determine) a luminance component of the third image IMG3 by using the luminance component of the first image IMG1 as it is.
In an embodiment, the color transfer unit 530 may form the third image IMG3 using a color transfer function expressed in Equations 1 and 2 below.
First, the luminance component IMG3L(p) of the third image IMG3 may use a luminance component value IMG1L(p) of the first image IMG1 as shown in Equation 1 below.
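Based on this description, Equation 1 may plausibly be reconstructed as:

\[ \mathrm{IMG3}_{L}(p) = \mathrm{IMG1}_{L}(p) \]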
Next, the color component of the third image IMG3 may be independently defined for each channel as follows, using statistics of peripheral pixels q adjacent to a central pixel p, with respect to the color component {IMG3a(p), IMG3b(p)} at the central pixel p.
In an embodiment, the color component IMG3c(p) of the third image IMG3 may be calculated through Equation 2 below.
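Based on the statistics-replacement interpretation given below, Equation 2 may plausibly be reconstructed as a local statistics-matching transfer of the form:

\[ \mathrm{IMG3}_{C}(p) = \frac{\mathrm{STD}_{2C}(p)}{\mathrm{STD}_{1C}(p)}\left(\mathrm{IMG1}_{C}(p) - \mu_{1C}(p)\right) + \mu_{2C}(p) \]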
Here, “μ” is an average weighted by a similarity between the central pixel “p” and the peripheral pixels “q” surrounding the central pixel p, and “STD” is a standard deviation weighted by the same similarity. Equation 2 may be interpreted as a process of replacing the statistic {μ1C(p), STD1C(p)} in the periphery of the central pixel p of the first image IMG1 with the statistic {μ2C(p), STD2C(p)} of the second image IMG2.
The statistic {μc(p), STDC(p)} may be calculated as shown in Equations 3 and 4 below.
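Using the terms defined below, Equations 3 and 4 may plausibly be reconstructed as a similarity-weighted mean and standard deviation (the 1/|ω| normalization is an assumption suggested by the definition of |ω|):

\[ \mu_{C}(p) = \frac{1}{|\omega|}\sum_{q \in \omega_{p}} D_{C}(p,q)\,\mathrm{IMG}_{C}(q) \]

\[ \mathrm{STD}_{C}(p) = \sqrt{\frac{1}{|\omega|}\sum_{q \in \omega_{p}} D_{C}(p,q)\left(\mathrm{IMG}_{C}(q) - \mu_{C}(p)\right)^{2}} \]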
Here, ωp is a peripheral region set defined with respect to the central pixel p and may have a square window; alternatively, the peripheral region may have another shape. |ω| denotes the number of pixels in the peripheral region set ωp, and in an embodiment, when ωp is the set of peripheral pixels adjacent to the central pixel p in a square window, |ω| may be 8. Dc(p,q) is a Gaussian kernel that represents a similarity between the central pixel p and the peripheral pixels q.
The average μ and the standard deviation STD used in a transfer function (e.g., as expressed in Equation 2) to determine the color component of the third image IMG3 may be calculated by weighting the similarity of the central pixel p and the peripheral pixels q, and thus, the transfer function (e.g., Equation 2) for determining the color component of the third image IMG3 may depend on similar pixels between the first image IMG1 and the second image IMG2.
Dc(p,q), which represents the similarity between the central pixel p and the peripheral pixels q, may be calculated as shown in Equation 5 below.
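Equation 5 may plausibly be reconstructed as a Gaussian kernel over feature vectors F(·) of the two pixels:

\[ D_{C}(p,q) = \exp\!\left(-\frac{\lVert F(p) - F(q)\rVert^{2}}{\sigma^{2}}\right) \]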
Here, σ2 is a hyperparameter that represents smoothness of the kernel. In an example embodiment, the feature vector F(p) may simply use a luminance value as the feature. When the luminance value is used as the feature vector, an amount of similarity calculation may be reduced. However, the embodiments of the disclosure are not limited thereto, and any value may be used as long as the feature vector F(p) is a real-number vector that reflects the similarity of pixels.
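A compact sketch of this color transfer, following the plausible reconstructions of Equations 2 to 5 above, is given below for a single chroma channel. The 3 x 3 window, the luminance-only feature, and the weight-sum normalization are illustrative assumptions.

```python
import numpy as np

def transfer_color_channel(l1, c1, c2, sigma2=0.1, win=1):
    """Sketch of the similarity-weighted color transfer for one chroma channel.

    l1: H x W luminance of IMG1 (reused as the luminance of IMG3, per Equation 1).
    c1: H x W chroma channel of IMG1 (e.g., the a or b channel of CIE LAB).
    c2: H x W chroma channel of the registered second image IMG2.
    sigma2, win: illustrative kernel smoothness and half-window size (win=1 gives
    the 8 peripheral pixels of a 3 x 3 square window, i.e., |w| = 8).
    """
    l1 = np.asarray(l1, dtype=np.float64)
    c1 = np.asarray(c1, dtype=np.float64)
    c2 = np.asarray(c2, dtype=np.float64)
    h, w = c1.shape
    l1p, c1p, c2p = (np.pad(a, win, mode='edge') for a in (l1, c1, c2))

    wsum = np.zeros((h, w)); m1 = np.zeros((h, w)); m2 = np.zeros((h, w))
    s1 = np.zeros((h, w)); s2 = np.zeros((h, w))
    offsets = [(dy, dx) for dy in range(-win, win + 1)
               for dx in range(-win, win + 1) if (dy, dx) != (0, 0)]

    def shifted(a, dy, dx):
        return a[win + dy:win + dy + h, win + dx:win + dx + w]

    for dy, dx in offsets:                                        # peripheral pixels q
        d = np.exp(-(l1 - shifted(l1p, dy, dx)) ** 2 / sigma2)    # similarity D(p, q)
        wsum += d
        m1 += d * shifted(c1p, dy, dx)
        m2 += d * shifted(c2p, dy, dx)
    # Normalized by the weight sum here; the text's Equations 3 and 4 appear to
    # divide by |w| instead, which is a closely related variant.
    m1 /= wsum; m2 /= wsum
    for dy, dx in offsets:
        d = np.exp(-(l1 - shifted(l1p, dy, dx)) ** 2 / sigma2)
        s1 += d * (shifted(c1p, dy, dx) - m1) ** 2
        s2 += d * (shifted(c2p, dy, dx) - m2) ** 2
    s1 = np.sqrt(s1 / wsum) + 1e-8
    s2 = np.sqrt(s2 / wsum)

    return (c1 - m1) * (s2 / s1) + m2                             # Equation 2 (sketch)
```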
The image synthesis unit 540 may form (or determine) the color component of the fourth image IMG4 by synthesizing the color component of the first image IMG1 and the color component of the third image IMG3 through alpha blending.
A luminance component IMG4L(p) of the fourth image IMG4, like the luminance component IMG3L(p) of the third image IMG3, may use the luminance component value IMG1L(p) of the first image IMG1 as it is (see Equation 1).
A transfer function to determine a color component IMG4C(p) of the fourth image IMG4 may be expressed as Equation 6 below.
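Consistent with the summation described below, Equation 6 may plausibly be reconstructed as:

\[ \mathrm{IMG4}_{C}(p) = \alpha\,\mathrm{IMG1}_{C}(p) + (1-\alpha)\,\mathrm{IMG3}_{C}(p) \]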
The image synthesis unit 540 may determine the color component IMG4C(p) of the fourth image IMG4 by summing a value acquired by multiplying the color component of the first image IMG1 by an alpha blending coefficient α and a value acquired by multiplying the color component of the third image IMG3 by (1−α). The alpha blending coefficient α may have a value of about 0 or more and about 1 or less.
The image synthesis unit 540 may determine a value of the alpha blending coefficient α as a logical sum of a first coefficient α1 and a second coefficient α2. In an embodiment, the image synthesis unit 540 may set a greater value among the first coefficient α1 and the second coefficient α2 as the value of the alpha blending coefficient α. This may be expressed as shown in Equation 7 below.
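Expressing the logical sum as a maximum, Equation 7 may plausibly be reconstructed as:

\[ \alpha = \max(\alpha_{1}, \alpha_{2}) \]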
The first coefficient α1 may be a parameter related to the occlusion region that is determined due to the disparity between the first image IMG1 and the second image IMG2.
The image synthesis unit 540 may receive information OC about the occlusion region from the image registration unit 520. As described above, when no pixel of the second image IMG2 corresponds to the pixel of the first image IMG1, the image registration unit 520 may determine the corresponding pixel of the first image IMG1 to be the occlusion region. The image registration unit 520 may store information about the pixel determined to be the occlusion region in a form of a map. In an embodiment, the image registration unit 520 may generate an occlusion map by setting a value of the corresponding pixel to 1 when there is no corresponding pixel in the second image IMG2, and by setting a value of the corresponding pixel to 0 when there is a corresponding pixel in the second image IMG2. As a result, the first coefficient α1 may have a value of 1 when the pixel corresponds to the occlusion region (that is, when there is no corresponding pixel between the first image IMG1 and the second image IMG2), and have a value of 0 when the pixel does not correspond to the occlusion region (that is, when there is a corresponding pixel between the first image IMG1 and the second image IMG2).
The second coefficient α2 may be a parameter related to false color information of the third image IMG3. The second image sensor 200 may sense relatively more wavelength channels than the first image sensor 100. Accordingly, a distance between pixels of the same wavelength channel in the second image sensor 200 may be greater than a distance between pixels of the same wavelength channel in the first image sensor 100 (for example, see
The image synthesis unit 540 may set the second coefficient α2 to be proportional to a color difference between the first image IMG1 and the third image IMG3 and to have a value of about 0 or more and about 1 or less.
In addition, the image synthesis unit 540 may determine the second coefficient α2 based on a difference in luminance values between corresponding pixels of the first image IMG1 and the second image IMG2. This is based on the empirical observation that false colors are especially prominent in a region with a large difference in luminance values between the corresponding pixels of the first image IMG1 and the second image IMG2. Therefore, the image synthesis unit 540 may set the second coefficient α2 to be proportional to the difference in luminance values between the corresponding pixels of the first image IMG1 and the second image IMG2 and to have a value of about 0 or more and about 1 or less. In an embodiment, the image synthesis unit 540 may set the second coefficient α2 to be closer to 1 as the difference in luminance values between the corresponding pixels of the first image IMG1 and the second image IMG2 increases, and to be closer to 0 as the difference decreases.
In addition, the image synthesis unit 540 may set the second coefficient α2 to be closer to 0 as a saturation of the first image IMG1 increases, and to be closer to 1 as the saturation of the first image IMG1 decreases. This is based on the empirical fact that the false color information is especially prominent in an achromatic region. An example of a method performed by the image synthesis unit 540 of setting the second coefficient α2 is as shown in Equation 8 below.
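One plausible form of Equation 8, consistent with the behavior described above (α2 grows with the color difference and shrinks with the saturation; the exact combination of terms is an assumption), is:

\[ \alpha_{2}(p) = \mathrm{clip}\left(k_{1}\,\Delta C(p) - k_{2}\,S_{1}(p)\right) \]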
A clip function checks whether an input value is between 0 and 1, and outputs 0 when the input value is less than 0 and outputs 1 when the input value is greater than 1. As a result, the clip function sets the second coefficient α2 to have a value of about 0 or more and about 1 or less. ΔC(p) is a color difference value between the first image IMG1 and the third image IMG3. S1(p) is a saturation value indicating the vividness of the color component of the first image IMG1. k1 and k2 are heuristic hyperparameters.
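Putting the synthesis together, a minimal sketch of Equations 6 to 8 as reconstructed above might look like the following; the form of α2 and the values of k1 and k2 are illustrative assumptions rather than values given in the description.

```python
import numpy as np

def synthesize_fourth_chroma(c1, c3, occlusion, delta_c, sat1, k1=4.0, k2=2.0):
    """Sketch of the chroma synthesis of the fourth image (Equations 6 to 8).

    c1, c3:    chroma channels of IMG1 and IMG3 on the IMG1 pixel grid.
    occlusion: first coefficient alpha1 (1 where IMG2 has no corresponding pixel).
    delta_c:   per-pixel color difference between IMG1 and IMG3.
    sat1:      per-pixel saturation of IMG1.
    k1, k2:    heuristic hyperparameters (placeholder values, not from the text).
    """
    alpha2 = np.clip(k1 * delta_c - k2 * sat1, 0.0, 1.0)      # sketch of Equation 8
    alpha = np.maximum(occlusion.astype(np.float64), alpha2)  # Equation 7 ("logical sum")
    return alpha * c1 + (1.0 - alpha) * c3                    # Equation 6
```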
In an example embodiment, the image synthesis unit 540 may form (or generate) a final output image by converting the luminance component IMG4L(p) of the fourth image IMG4 and the color component IMG4C(p) of the fourth image IMG4 into an RGB image through image domain conversion processing.
As described above, the image acquisition device 1000 according to the embodiment may use the first image IMG1 with a high spatial resolution captured by the first image sensor 100 as an input image and the second image IMG2 with a wide color gamut captured by the second image sensor 200 as a reference image to transfer the color information of the second image IMG2 to the first image IMG1, thereby forming the final image having the high spatial resolution of the first image IMG1 and the wide color gamut of the second image IMG2.
Color information transfer is implemented by matching (or replacing) the color statistic of the first image IMG1 with the color statistic of the second image IMG2.
In addition, the image acquisition device 1000 may synthesize the first image and an intermediate image (third image) through alpha blending to form a final image (fourth image). To perform alpha blending, the image acquisition device 1000 may determine the alpha blending coefficient α as the logical sum of the first coefficient α1 which is a parameter related to the occlusion region and the second coefficient α2 which is a parameter related to the false color information of the intermediate image (third image) to correct an incorrect color information transfer and form the final image (fourth image).
The image acquisition device 1000 may include the first image sensor 100 that acquires the first image IMG1 based on a first wavelength band, the second image sensor 200 that acquires the second image IMG2 based on a second wavelength band that is wider than the first wavelength band, and the processor 500 that performs signal processing on the first image IMG1 and the second image IMG2 to form the fourth image IMG4. The image acquisition device 1000 may further include the memory 300 that stores data related to the first image IMG1 and the second image IMG2.
The image acquisition device 1000 may also include a first imaging optical system 190 that forms an optical image of an object OBJ on the first image sensor 100, and a second imaging optical system 290 that forms an optical image of the object OBJ on the second image sensor 200.
The first image sensor 100 may include a first pixel array PA1. The first pixel array PA1 may include a first sensor layer 110 in which a plurality of first sensing elements are arrayed, and a color filter 120 disposed on the first sensor layer 110. The color filter 120 may include red filters, green filters, and blue filters which are alternately arranged. A first micro lens array 130 may be disposed on the first pixel array PA1. Various examples of a pixel arrangement applied to the first pixel array PA1 will be described with reference to
The second image sensor 200 may include a second pixel array PA2. The second pixel array PA2 may include a second sensor layer 210 in which a plurality of second sensing elements are arrayed, and a spectral filter 220 disposed on the second sensor layer 210. The spectral filter 220 may include a plurality of filter groups. Each of the plurality of filter groups may include a plurality of unit filters having different transmission wavelength bands. The spectral filter 220 may be configured to subdivide and filter a wider wavelength band than that of the color filter 120, for example, filter a wavelength band including a wavelength band of an ultraviolet to infrared wavelength range. A second micro lens array 230 may be disposed on the second pixel array PA2. Various examples of a pixel arrangement applied to the second pixel array PA2 will be described with reference to
The first sensor layer 110 and the second sensor layer 210 may each include, but are not limited to, a charge coupled device (CCD) sensor or a CMOS sensor.
The first pixel array PA1 and the second pixel array PA2 may be disposed to be spaced apart horizontally, for example, in an X direction, on the same circuit substrate SU.
First circuit elements that process signals from the first sensor layer 110 and second circuit elements that process signals from the second sensor layer 210 may be provided in the circuit substrate SU. However, the first and second circuit elements are not limited thereto, and may be respectively provided on separate substrates.
The memory 300 that stores data of the first image IMG1 and the second image IMG2 may be separately provided from the circuit substrate SU, but this is an example, and the memory 300 may be disposed within the circuit substrate SU in the same layer as or a separate layer from the circuit elements. The memory 300 may be a line memory that stores the first image IMG1 and the second image IMG2 in a line unit, or may be a frame buffer that entirely stores the first image IMG1 and the second image IMG2. The memory 300 may use, for example, an SRAM or a DRAM.
Various circuit elements used in the image acquisition device 1000 may be integrated into and disposed in the circuit substrate SU. For example, a logic layer including various analog circuits and digital circuits, and a memory layer which stores data may be provided in the circuit substrate SU. The logic layer and the memory layer may be configured as different layers or as a single layer.
Referring to
A row decoder 202, an output circuit 203, and a timing controller (TC) 201 may be connected to the second pixel array PA2, and a signal from the second pixel array PA2 may be processed in a manner similar to that described above. In addition, a processor for processing the second image IMG2 output through the output circuit 203 may be implemented as a single chip along with the TC 201, the row decoder 202, and the output circuit 203.
The first pixel array PA1 and the second pixel array PA2 are illustrated to have different pixel sizes and numbers, but this is an example and embodiments of the disclosure are not limited thereto.
When operating two different types of sensors, timing control may be needed depending on different resolutions and output speeds, and the size of a region subject to image registration. For example, when one image string corresponding to a region is read out from the first image sensor 100, an image string of the second image sensor 200 corresponding to the region may already be stored in a buffer or may need to be read anew. Such timing needs to be correctly calculated, and readout needs to be performed according to the calculated timing. Alternatively, operations of the first image sensor 100 and the second image sensor 200 may be synchronized by using the same synchronization signal. For example, a TC 400 may be further provided to transmit a synchronization signal sync to each of the first image sensor 100 and the second image sensor 200.
Referring to
Referring to
For example, referring to
In addition, referring to
Referring to
The first and second unit filters F1 and F2 may respectively have central wavelengths UV1 and UV2 in an ultraviolet region, and the third to fifth unit filters F3 to F5 may respectively have central wavelengths B1 to B3 in a blue light region. The sixth to eleventh unit filters F6 to F11 may respectively have central wavelengths G1 to G6 in a green light region, and the twelfth to fourteenth unit filters F12 to F14 may respectively have central wavelengths R1 to R3 in a red light region. In addition, the fifteenth and sixteenth unit filters F15 and F16 may respectively have central wavelengths NIR1 and NIR2 in a near-infrared region.
The unit filters F1 to F25 provided in the spectral filter 220 may have a resonance structure with two reflectors, and a transmission wavelength band may be determined according to characteristics of the resonance structure. The transmission wavelength band may be adjusted according to a material of a reflector, a material of a dielectric material in a cavity, and a thickness of the cavity. In addition, a structure using a grating and a structure using a distributed Bragg reflector (DBR) may be applied to the unit filters F1 to F25.
In addition, pixels of the second pixel array PA2 may be arranged in various ways according to color characteristics of the second image sensor 200.
Referring to
The image acquisition device 1000 may perform basic image processing on the first image IMG1 and/or the second image IMG2 before or after storing the first image IMG1 and/or the second image IMG2 in the memory 300. For example, the image acquisition device 1000 may perform bad pixel correction, fixed pattern noise correction, crosstalk reduction, remosaicing, demosaicing, false color reduction, denoising, chromatic aberration correction, etc. (S1420).
The image acquisition device 1000 uses relative position information between the first image sensor 100 and the second image sensor 200 to register the first image IMG1 and the second image IMG2 (S1430). The image acquisition device 1000 determines a positional relationship between pixels of each of the first image IMG1 and the second image IMG2 respectively acquired from the first image sensor 100 and the second image sensor 200 by considering, for example, a spatial resolution of each of the first image IMG1 and the second image IMG2, a field of view of an optical system used to acquire each of the first image IMG1 and the second image IMG2, a focal length, etc. At this time, the image acquisition device 1000 may overlay an image of one sensor (e.g., the first image of the first image sensor) on an image of another sensor (e.g., the second image of the second image sensor). For example, the image acquisition device 1000 may retrieve pixels of the second image IMG2 corresponding to each pixel of the first image IMG1 with respect to the first image IMG1 acquired by the first image sensor 100. To this end, the image acquisition device 1000 may perform scaling, translation, rotation, affine transform, perspective transform, etc. on the pixel of the second image IMG2.
The image acquisition device 1000 may form a color component of the third image IMG3 by replacing, in the first image IMG1, a color component of the first image IMG1 with a color component of the second image IMG2 (S1440). The image acquisition device 1000 uses the first image IMG1 as an input image and the second image IMG2 as a reference image, and changes a color of the first image IMG1 to match a color of the second image IMG2 to form the third image IMG3. In an embodiment, the color transfer unit 530 forms a color component of the third image IMG3 by matching (or replacing) a color statistic of the first image IMG1 with a color statistic of the second image IMG2. The color statistic may be based on an average value with respect to peripheral pixels adjacent to a central pixel and/or a standard deviation value with respect to the peripheral pixels adjacent to the central pixel.
The image acquisition device 1000 forms the color component of the fourth image IMG4 by synthesizing the color component of the first image IMG1 and the color component of the third image IMG3 through alpha blending (S1450).
A luminance component of the fourth image IMG4, like a luminance component of the third image IMG3, uses a luminance component value of the first image IMG1 as it is.
The image acquisition device 1000 forms the color component of the fourth image IMG4 by summing a value acquired by multiplying the color component of the first image IMG1 by the alpha blending coefficient α and a value acquired by multiplying the color component of the third image IMG3 by (1−α). The alpha blending coefficient α may have a value of about 0 or more and about 1 or less. The image acquisition device 1000 determines a value of the alpha blending coefficient α as a logical sum of the first coefficient α1 and the second coefficient α2. In an example embodiment, the image acquisition device 1000 may determine a greater value among the first coefficient α1 and the second coefficient α2 as the value of the alpha blending coefficient α.
The first coefficient α1 is a parameter related to the occlusion region determined according to the disparity between the first image IMG1 and the second image IMG2. The second coefficient α2 is a parameter related to false color information of the third image IMG3.
The image acquisition device 1000 described above may be employed in various high-performance optical devices or high-performance electronic devices. The electronic devices may include, for example, smart phones, mobile phones, cell phones, personal digital assistants (PDAs), laptops, PCs, various portable devices, home appliances, security cameras, medical cameras, automobiles, Internet of Things (IoT) devices, and other mobile or non-mobile computing devices, but are not limited thereto.
In addition to the image acquisition device 1000, the electronic devices may further include a processor, for example, an application processor (AP), configured to control image sensors provided therein, and may control multiple hardware or software components by running an operating system or application program through the processor, and perform various data processing and calculation operations. The processor may further include, for example, a graphics processing unit (GPU) and/or an image signal processor. When the processor includes the image signal processor, an image (or video) acquired by the image sensor may be stored and/or output by using the processor.
The processor ED20 may control one or more components (e.g., hardware, software components, etc.) of the electronic device ED01 connected to the processor ED20 by executing software (e.g., program ED40, etc.), and may perform various data processes or operations. As a part of the data processing or operations, the processor ED20 may load a command and/or data received from another component (e.g., sensor module ED76, communication module ED90, etc.) to a volatile memory ED32, may process the command and/or data stored in the volatile memory ED32, and may store result data in a non-volatile memory ED34. The processor ED20 may include a main processor ED21 (e.g., central processing unit, application processor, etc.) and an auxiliary processor ED23 (e.g., graphic processing unit, image signal processor, sensor hub processor, communication processor, etc.) that may be operated independently from or along with the main processor ED21. The auxiliary processor ED23 may use less power than that of the main processor ED21, and may perform specified functions.
The auxiliary processor ED23, on behalf of the main processor ED21 while the main processor ED21 is in an inactive state (e.g., sleep state) or along with the main processor ED21 while the main processor ED21 is in an active state (e.g., application executed state), may control functions and/or states related to some of the components (e.g., display device ED60, sensor module ED76, communication module ED90, etc.) in the electronic device ED01. The auxiliary processor ED23 (e.g., image signal processor, communication processor, etc.) may be implemented as a part of another component (e.g., camera module ED80, communication module ED90, etc.) that is functionally related thereto.
The memory ED30 may store various data to be used by the components (e.g., processor ED20, sensor module ED76, etc.) of the electronic device ED01. The data may include, for example, input data and/or output data about software (e.g., program ED40, etc.) and commands related thereto. The memory ED30 may include the volatile memory ED32 and/or the non-volatile memory ED34. The non-volatile memory ED34 may include an internal memory ED36 fixedly installed in the electronic device ED01 and an external memory ED38 that is removable.
The program ED40 may be stored as software in the memory ED30, and may include an operating system ED42, middleware ED44, and/or an application ED46.
The input device ED50 may receive commands and/or data to be used in the components (e.g., processor ED20, etc.) of the electronic device ED01, from outside (e.g., user, a surrounding environment, etc.) of the electronic device ED01. The input device ED50 may include a microphone, a mouse, a keyboard, and/or a digital pen (e.g., stylus pen), a button, a switch, a camera, a virtual reality (VR) headset, haptic gloves, and the like.
The sound output device ED55 may output a sound signal to outside of the electronic device ED01. The sound output device ED55 may include a speaker and/or a receiver, a buzzer, an alarm, and the like. The speaker may be used for a general purpose such as multimedia reproduction or record play, and the receiver may be used to receive a call. The receiver may be coupled as a part of the speaker or may be implemented as an independent device.
The display device ED60 may provide visual information to outside of the electronic device ED01. The display device ED60 may include a display (e.g., a liquid crystal display (LCD), light-emitting diodes (LEDs), organic light emitting diodes (OLEDs), etc.), a hologram device, or a projector, and a control circuit for controlling the corresponding device. The display device ED60 may include a touch circuitry set to sense a touch, and/or a sensor circuit (e.g., pressure sensor, etc.) that is set to measure a strength of a force generated by the touch.
The audio module ED70 may convert sound into an electrical signal or vice versa. The audio module ED70 may acquire sound through the input device ED50, or may output sound via the sound output device ED55 and/or a speaker and/or a headphone of another electronic device (e.g., electronic device ED02, etc.) connected directly or wirelessly to the electronic device ED01.
The sensor module ED76 may sense an operating state (e.g., power, temperature, etc.) of the electronic device ED01, or an outer environmental state (e.g., user state, brightness level, time of day, geographic location, etc.), and may generate an electrical signal and/or data value corresponding to the sensed state. The sensor module ED76 may include a gesture sensor, a gyro-sensor, an accelerometer, a pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) ray sensor, a vivo sensor, a temperature sensor, a humidity sensor, an illuminance sensor, an actuator, a transducer, a contact sensor, a ranging device, a global positioning system (GPS) sensor, and the like.
The interface ED77 may support one or more designated protocols that may be used in order for the electronic device ED01 to be directly or wirelessly connected to another electronic device (e.g., electronic device ED02, etc.). The interface ED77 may include a high-definition multimedia interface (HDMI), a universal serial bus (USB) interface, an SD card interface, and/or an audio interface.
The connection terminal ED78 may include a connector by which the electronic device ED01 may be physically connected to another electronic device (e.g., electronic device ED02, etc.). The connection terminal ED78 may include an HDMI connector, a USB connector, an SD card connector, and/or an audio connector (e.g., headphone connector, etc.).
The haptic module ED79 may convert the electrical signal into a mechanical stimulation (e.g., vibration, motion, etc.) or an electric stimulation that the user may sense through a tactile or motion sensation. The haptic module ED79 may include a motor, a piezoelectric device, and/or an electric stimulus device.
The camera module ED80 may capture a still image and a video. The camera module ED80 may include the image acquisition device 1000 described above, an additional lens assembly, image signal processors, and/or flashes. The lens assembly included in the camera module ED80 may collect light emitted from a subject that is an object to be captured.
The power management module ED88 may manage the power supplied to the electronic device ED01. The power management module ED88 may be implemented as a part of a power management integrated circuit (PMIC).
The battery ED89 may supply electric power to components of the electronic device ED01. The battery ED89 may include a primary battery that is not rechargeable, a secondary battery that is rechargeable, and/or a fuel cell.
The communication module ED90 may support the establishment of a direct (e.g., wired) communication channel and/or a wireless communication channel between the electronic device ED01 and another electronic device (e.g., electronic device ED02, the electronic device ED04, the server ED08, etc.), and execution of communication through the established communication channel. The communication module ED90 may be operated independently from the processor ED20 (application processor, etc.), and may include one or more communication processors that support the direct communication and/or the wireless communication. The communication module ED90 may include a wireless communication module ED92 (e.g., cellular communication module (e.g., fifth generation (5G), long-term evolution (LTE), code division multiple access (CDMA), and the like), a short-range wireless communication module (e.g., FlashLinQ, WiMedia, Bluetooth™, Bluetooth™ Low Energy (BLE), ZigBee, Institute of Electrical and Electronics Engineers (IEEE) 802.11x (Wi-Fi), and the like), a global navigation satellite system (GNSS) communication module) and/or a wired communication module ED94 (e.g., local area network (LAN) communication module, a power line communication module, an IEEE 1394 (FireWire) module, etc.). From among the communication modules, a corresponding communication module may communicate with another electronic device via a first network ED98 (e.g., short-range communication network such as Bluetooth™, Wi-Fi direct, or infrared data association (IrDA)) or a second network ED99 (e.g., long-range communication network such as a cellular network, Internet, or computer network (e.g., LAN, WAN, etc.)). These various kinds of communication modules may be integrated as one component (e.g., single chip, etc.) or may be implemented as a plurality of components (e.g., a plurality of chips) separately from one another. The wireless communication module ED92 may identify and authenticate the electronic device ED01 in a communication network such as the first network ED98 and/or the second network ED99 by using subscriber information (e.g., international mobile subscriber identifier (IMSI), etc.) stored in the subscriber identification module ED96.
The antenna module ED97 may transmit or receive the signal and/or power to/from outside (e.g., another electronic device, etc.). An antenna may include a radiator formed as a conductive pattern formed on a substrate (e.g., printed circuit board (PCB), etc.). The antenna module ED97 may include one or more antennas. When the antenna module ED97 includes a plurality of antennas, from among the plurality of antennas, an antenna that is suitable for the communication type used in the communication network such as the first network ED98 and/or the second network ED99 may be selected by the communication module ED90. The signal and/or the power may be transmitted between the communication module ED90 and another electronic device via the selected antenna. Another component (e.g., a radio-frequency integrated circuit (RFIC), etc.) other than the antenna may be included as a part of the antenna module ED97.
Some of the components may be connected to one another via a peripheral communication method (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), mobile industry processor interface (MIPI), etc.) and may exchange signals (e.g., commands, data, etc.).
The command and/or data may be transmitted and/or received between the electronic device ED01 and the external electronic device ED04 via the server ED08 connected to the second network ED99. Other electronic devices ED02 and ED04 may be the same kind as or a different kind from the electronic device ED01. All or some of the operations executed in the electronic device ED01 may be executed in one or more devices among the other electronic devices ED02, ED04, and ED08. For example, when the electronic device ED01 is to perform a certain function or service, the electronic device ED01 may request one or more other electronic devices to perform part or all of the function or service, instead of executing the function or service by itself. The one or more electronic devices receiving the request may execute an additional function or service related to the request and may transfer a result of the execution to the electronic device ED01. To this end, for example, cloud computing, distributed computing, or client-server computing techniques may be used.
The image sensor CM30 may include the first image sensor 100 and the second image sensor 200 provided in the image acquisition device 1000 described above. The first image sensor 100 and the second image sensor 200 may convert light emitted or reflected from the subject and transmitted through the lens assembly CM10 into electrical signals, thereby acquiring an image corresponding to the subject. The first image sensor 100 may acquire an RGB image, and the second image sensor 200 may acquire a hyperspectral image in an ultraviolet to infrared wavelength range.
The image sensor CM30 may include one or a plurality of sensors selected from image sensors having different attributes such as an RGB sensor, a black and white (BW) sensor, an IR sensor, or UV sensor, in addition to the first image sensor 100 and the second image sensor 200. Each sensor included in the image sensor CM30 may be implemented by a CCD sensor and/or a CMOS sensor.
The lens assembly CM10 may collect light emitted from a subject for image capturing. The camera module ED80 may include a plurality of lens assemblies CM10, and in this case, the camera module ED80 may include, for example, a dual camera, a 360 degrees camera, or a spherical camera. Some of the lens assemblies CM10 may have the same lens attributes (e.g., a viewing angle, a focal length, auto focus, F Number, optical zoom, etc.), or different lens attributes. The lens assembly CM10 may include a wide angle lens or a telescopic lens.
The lens assembly CM10 may be configured and/or focus controlled so that two image sensors included in the image sensor CM30 may form an optical image of a subject at the same position.
The flash CM20 may emit light used to reinforce light emitted or reflected from a subject. The flash CM20 may include one or a plurality of light-emitting diodes (e.g., a red-green-blue (RGB) LED, a white LED, an infrared LED, an ultraviolet LED, etc.), and/or a xenon lamp.
The image stabilizer CM40, in response to a motion of the camera module ED80 or the electronic device ED01 including the camera module ED80, moves one or more lenses included in the lens assembly CM10 and/or the image sensor CM30 in a certain direction or controls the operating characteristics of the image sensor CM30 (e.g., adjusting of a read-out timing, etc.) in order to compensate for a negative influence of the motion. The image stabilizer CM40 may sense the movement of the camera module ED80 or the electronic device ED01 by using a gyro sensor (not shown) or an acceleration sensor (not shown) arranged in or out of the camera module ED80. The image stabilizer CM40 may be implemented as an optical type.
The memory CM50 may store a part or all of the data of an image acquired through the image sensor CM30 for a subsequent image processing operation. For example, when a plurality of images are acquired at high speed, only low-resolution images may be displayed while the acquired original data (e.g., Bayer-patterned data, high-resolution data, etc.) is stored in the memory CM50. Then, the memory CM50 may be used to transmit the original data of a selected (e.g., user selection, etc.) image to the image signal processor CM60. The memory CM50 may be incorporated into the memory ED30 of the electronic device ED01, or configured to be an independently operated separate memory.
The image signal processor CM60 may perform image processing on the image acquired through the image sensor CM30 or the image data stored in the memory CM50. As described in
The image processing performed by the image signal processor CM60 may include, for example but not limited to, depth map generation, three-dimensional modeling, panorama generation, feature point extraction, image synthesis, and/or image compensation (e.g., noise reduction, resolution adjustment, brightness adjustment, blurring, sharpening, softening, etc.). The image signal processor CM60 may perform control (e.g., exposure time control, read-out timing control, etc.) on constituent elements (e.g., the image sensor CM30, etc.) included in the camera module ED80. The image processed by the image signal processor CM60 may be stored again in the memory CM50 for additional processing and/or provided to external constituent elements (e.g., the memory ED30, the display device ED60, the electronic device ED02, the electronic device ED04, the server ED08, etc.) of the camera module ED80. The image signal processor CM60 may be incorporated into the processor ED20, or configured to be a separate processor operated independently of the processor ED20. When the image signal processor CM60 is configured as a separate processor from the processor ED20, the image processed by the image signal processor CM60 may undergo additional image processing by the processor ED20 and then be displayed through the display device ED60.
The electronic device ED01 may include a plurality of camera modules ED80 having different attributes or functions. In this case, one of the camera modules ED80 may be a wide angle camera, and another may be a telescopic camera. Similarly, one of the camera modules ED80 may be a front side camera, and another may be a rear side camera.
The image acquisition device 1000 according to some embodiments may be applied to a mobile phone or smart phone 5100m illustrated in
Furthermore, the image acquisition device 1000 may be applied to a smart refrigerator 5600 illustrated in
Furthermore, the image acquisition device 1000 may be applied to a vehicle 6000 as illustrated in
It should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should be considered as available for other similar features or aspects in other embodiments. While one or more example embodiments have been described with reference to the figures, it should be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined by the following claims.