This application claims priority from Korean Patent Application No. 10-2023-0092794 filed on Jul. 18, 2023 in the Korean Intellectual Property Office, the contents of which are incorporated by reference herein in their entirety.
The disclosure relates to an image sensor module assembly device and an assembly method for the same.
An image sensor tends to be developed with a fine pixel structure to provide a high resolution within the limited form factor of a mobile device. Across the whole pixel array, each pixel needs to maintain the same or similar sensitivity under the same optical conditions. However, as the pixels become finer and the N×N unit pixel size becomes larger, pixels in the sensor periphery show a large sensitivity difference, which causes a deterioration in image quality.
Due to the physical limit in the amount of light that may be received per micro-pixel, extended Bayer color filter arrays or color filter arrays having pixel structures of different forms depending on the positions even for the same color have been developed. In addition, a sensitivity difference may occur between same color pixels due to a sensor optical system of an image sensor. Such sensitivity difference may be calibrated after the module optical system is installed and assembled, and a signal compensation may be performed at an image signal processing step. However, according to this method, it is also necessary to compensate for a sensitivity difference that increases due to the module optical system depending on the position of the module lens, by acquiring information on the sensitivity difference after determining the influences of the sensor optical system and the module optical system.
If a certain level of image quality is not guaranteed even after the sensitivity difference in the module optical system is compensated for, the entire image sensor module in which the image sensor is mounted is determined to be defective, and the defective image sensor module is discarded after its assembly. This leads to an increase in cost.
Aspects of the disclosure provide an image sensor module assembly device and an assembly method thereof that may detect optimal positions of a module optical system and a sensor optical system.
Aspects of the disclosure also provide an image sensor module assembly device and an assembly method thereof that may assemble a module lens at a position where influences of a sensor optical system and a module optical system are minimized or optimized when the module lens is mounted, to improve a process yield of a sensor module.
One embodiment of the disclosure provides an assembly method of an image sensor module assembly device, the method including: setting an image sensor and a module lens of an image sensor module at a first position; inputting image data to the image sensor module set at the first position; pre-processing data that are output from the image sensor module based on sensing the input image data; obtaining, based on the pre-processed data, a module optical system reference value of the module lens and a sensor optical system reference value of the image sensor; and determining whether to maintain the first position based on the module optical system reference value and the sensor optical system reference value.
Another embodiment of the disclosure provides an image sensor module assembly device including: a device setting unit configured to assemble an image sensor module by setting an image sensor and a module lens at a first position and mounting the image sensor and the module lens, set at the first position, to a module body; and a test module configured to input image data to the image sensor module; obtain, based on data output from the image sensor module based on sensing the input image data, a module optical system reference value and a sensor optical system reference value; and determine whether to maintain the first position based on comparing each of the module optical system reference value and the sensor optical system reference value with a corresponding preset threshold range.
Another embodiment of the disclosure provides an image sensor module assembly device including at least one processor configured to: set an image sensor and a module lens at a first position and control to mount the image sensor and the module lens to a module body to firstly assemble an image sensor module; input image data to the firstly assembled image sensor module; pre-process data that are output from the image sensor module based on sensing the input image data; obtain, based on the pre-processed data, a module optical system reference value and a sensor optical system reference value; compare each of the module optical system reference value and the sensor optical system reference value with a corresponding preset threshold range; and set the image sensor and the module lens at a second position to secondly assemble the image sensor module depending on a comparison result.
However, aspects of the disclosure are not restricted to the one set forth herein. The above and other aspects of the disclosure will become more apparent to one of ordinary skill in the art to which the disclosure pertains by referencing the detailed description of the disclosure given below.
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings.
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the example embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the example embodiments are merely described below, by referring to the figures, to explain aspects of the disclosure. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
An image sensor module assembly device and an assembly method thereof according to some embodiments of the disclosure will be described below with reference to
Referring to
When the incident angle (chief ray angle) of the chief ray passing through the center of the module lens 10a is 0° (CRS=0°), the chief ray is perpendicularly incident on an upper face of the image sensor 30a. However, the incidence angle of the incident light increases toward the periphery of the image sensor 30a, and as the incidence angle increases, the sensitivity difference depending on the pixel position of the image sensor 30a increases. If the incidence angle of the chief ray is 0°, however, the center of the pixel array and the central axis of the module lens almost match each other; therefore, even if the sensitivity difference between pixels increases from the center toward the periphery of the pixel array, the difference is symmetrical relative to the center of the pixel array and may be resolved to some extent by a signal calibration.
However, if the module lens 10a is assembled misaligned, so that it does not coincide with the center of the image sensor 30a, that is, if the incidence angle CRS of the chief ray is not 0°, it becomes difficult to resolve the pixel sensitivity difference in the pixel array by the signal calibration.
For example, in the image sensor modules 100b and 100c, when the module lens 10b or 10c is assembled shifted to one side (e.g., to a left side in the shown examples), the central axis of the module lens 10b or 10c does not match the center of the pixel array, and the incidence angle of the incident light on the pixel array differs between the left and right sides of the pixel array. For example, an optical path of left incident light in the shift direction (that is, the left direction), in which the module lens 10b or 10c has been shifted, becomes relatively longer than an optical path of right incident light in the opposite direction (that is, the right direction), and the sensitivity difference between pixels in the pixel array further increases, compared to the image sensor module 100a. When comparing the optical path of the incident light in the image sensor module 100b with that in the image sensor module 100c, the degree of shift of the module lens 10c is larger than that of the module lens 10b. Therefore, in the image sensor module 100c, the degree of asymmetry of the optical path increases, and the sensitivity difference between the pixels caused by crosstalk also increases due to the asymmetry of the optical path. Although not shown, the optical path becomes asymmetric not only when the position of the module lens is shifted horizontally but also when it is tilted vertically or horizontally, and the sensitivity difference depending on the position in the pixel array may increase. In the case of a micro-pixel structure, a process or calibration for compensating for the pixel sensitivity differences due to a module lens needs to be additionally considered. However, as the pixels become finer to implement higher resolution, the process difficulty and the accuracy variance in performing such calibration increase.
Therefore, according to an example embodiment, the optimal position for the module lens and the image sensor is determined to minimize the influence of physical pixel sensitivity differences, prior to assembly of the image sensor module. Accordingly, yield of the image sensor modules may be improved, while increasing the resolution of the image being sensed.
Referring to
The image sensor module assembly device inputs image data for testing to the image sensor module in which the module lens is set at the position A (S20). The input image data may not be an image including a subject having a specific shape, but may be an image captured under a certain optical condition without a subject. The optical condition may be light that exhibits a certain fixed color depending on the setting, or may be light that exhibits at least two colors that change at a predetermined cycle.
The image sensor module assembly device analyzes and pre-processes the input data sensed by the image sensor module (S30).
Input data (or input image) may be expressed by using a pixel array including unit pixels of various patterns depending on the pattern of the color filter array of the image sensor. Although the pixel array including unit pixels is described, the unit pixel may be referred to as different terms such as a unit kernel, a unit window, or the like according to various embodiments.
The pixel array may be placed in a Bayer pattern according to some embodiments. The Bayer pattern includes a column in which R (red) sub-pixels and Gr (green) sub-pixels are repeatedly placed, and a column in which Gb (green) sub-pixels and B (blue) sub-pixels are repeatedly placed. Referring to
According to some embodiments, the unit pixel may refer to a smallest color pattern unit in which same color sub-pixels in the form of an N×N array (N is a natural number equal to or greater than 1) are placed in a Bayer pattern. As an example, the unit pixel K may be implemented in a 1×1 pattern including one R (red) sub-pixel, one Gr (green) sub-pixel, one Gb (green) sub-pixel, and one B (blue) sub-pixel. As an example, the unit pixel K may be implemented in a 2×2 pattern including four R (red) sub-pixels, four Gr (green) sub-pixels, four Gb (green) sub-pixels, and four B (blue) sub-pixels. As an example, the unit pixel K may be implemented in a 3×3 pattern including nine R (red) sub-pixels, nine Gr (green) sub-pixels, nine Gb (green) sub-pixels, and nine B (blue) sub-pixels. As an example, the unit pixel K may be implemented in a 4×4 pattern including sixteen R (red) sub-pixels, sixteen Gr (green) sub-pixels, sixteen Gb (green) sub-pixels, and sixteen B (blue) sub-pixels. Further, although not shown, same color pixel arrays of other sizes, such as 5×5 and 6×6, may be implemented depending on the degree of the pixel miniaturization process.
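The unit-pixel layouts above can be illustrated with a short Python sketch (illustrative only; the helper name and list-of-lists representation are assumptions, not part of the disclosure) that builds the color map of one unit-pixel group for a given N:

```python
def bayer_unit_pixel_map(n):
    """Return a 2n x 2n color map for one unit-pixel group, where each
    color of the Bayer quad (R, Gr, Gb, B) occupies an n x n block of
    same-color sub-pixels: n=1 gives the classic Bayer pattern, n=2 the
    2x2 pattern described above, and so on."""
    size = 2 * n
    grid = [[None] * size for _ in range(size)]
    for row in range(size):
        for col in range(size):
            top = row < n    # upper half of the unit pixel
            left = col < n   # left half of the unit pixel
            if top and left:
                grid[row][col] = "R"
            elif top:
                grid[row][col] = "Gr"
            elif left:
                grid[row][col] = "Gb"
            else:
                grid[row][col] = "B"
    return grid

# n=2 reproduces the 2x2 pattern: four R, four Gr, four Gb, four B sub-pixels.
quad = bayer_unit_pixel_map(2)
```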
Alternatively, according to some embodiments, the unit pixel refers to a smallest color pattern unit in which the same color sub-pixels in the form of an N×N array (N is a natural number equal to or greater than 1) are placed in a Bayer pattern and a microlens matches thereto. That is, the unit pixel may be implemented in various ways depending on the matching ratio and placement form of the microlens and the sub-pixels.
As an example, the unit pixels K may be implemented differently even in the N×N pattern depending on the matching ratio between the pixel array of the image sensor and the microlens included in the image sensor. As an example, referring to
Alternatively, according to some embodiments, the sub-pixel colors of the Bayer pattern constituting the unit pixel are implemented as C (cyan), M (magenta), and Y (yellow) colors instead of R, G, and B colors.
The image sensor module assembly device may generate pre-processed input data by pre-processing the input data in S30 according to the form of the input data. The pre-processed input data may be images that are intentionally processed to determine pixel sensitivity differences influenced by the module optical system and the sensor optical system.
According to some embodiments, the image sensor module assembly device may process the unit pixels of the N×N array pattern as shown in
Referring to
Alternatively, as shown in a right diagram of
In order to enhance the processing speed, as shown in
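The averaging option described above, applied to data laid out in N×N blocks of same color sub-pixels, may be sketched as follows (illustrative Python; the function name and plain-list representation are assumptions, not part of the disclosure):

```python
def preprocess_average(raw, n=2):
    """Pre-process raw sensor data (a 2D list) laid out in n x n
    same-color blocks: replace every sub-pixel with the average of its
    same-color block, keeping the original resolution, as in the
    averaging option described above."""
    h, w = len(raw), len(raw[0])
    out = [[0.0] * w for _ in range(h)]
    for by in range(0, h, n):            # iterate block rows
        for bx in range(0, w, n):        # iterate block columns
            block = [raw[by + dy][bx + dx]
                     for dy in range(n) for dx in range(n)]
            avg = sum(block) / len(block)
            for dy in range(n):
                for dx in range(n):
                    out[by + dy][bx + dx] = avg
    return out
```

In practice the same averaging could equally be applied to data first reduced to a preset ratio or cropped to a partial region, as the text notes, to shorten processing time.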
The image sensor module assembly device calculates a reference value for the module optical system from the pre-processed input data (S40 of
The reference value for the module optical system refers to a value expressing a number of factors or features that affect the image quality of an image sensed by the image sensor due to the influence of the module lens. In other words, the reference value may be a value that expresses spatial position-specific features of the pre-processed input data. For example, the reference value for the module optical system may include a value of an optical axis, a lens shading value, an image asymmetry (tilt) value, or the like of the pre-processed input data that changes depending on the position of the module lens. For example, the reference value for the module optical system may be a value of an optical axis, a lens shading value, or an image asymmetry (tilt) value that is not correlated with a color. As another example, the reference value may be a value of an optical axis, a lens shading value, or an image asymmetry (tilt) value of a particular color.
Referring to
For example, the image sensor module assembly device may calculate a reference value of the lens shading by applying a preset gain to an input image that is pre-processed to produce an intentional color fringe.
For example, referring to
While it is illustrated in
The image sensor module assembly device calculates a reference value for the sensor optical system from the pre-processed input data (S50 of
For example, the reference value for the sensor optical system may be expressed using a gain calculated from the pixel value of pre-processed input data, a sensitivity difference between adjacent pixels (e.g., differential), an absolute value of the sensitivity difference, a gradient value between adjacent pixels, and the like.
Referring to
The image sensor module assembly device calculates a monitoring value for each channel to check the pixel sensitivity difference of the sensor optical system. For example, the monitoring value may be determined for each color, based on at least one of an average value of same color sub-pixels, a normalized offset value, a pixel value change amount or a gain, and the monitoring value may be used as the reference value for the sensor optical system.
As an example, an average value of the pixel values of each channel is calculated for the input image according to each color channel (S110). The average value may be an average value of each color included in a unit pixel. For example, in a unit pixel structure of the 2×2 pattern (that is, the sub-pixels of each color form a 2×2 array), the average value of each color may be calculated as the average of the four sub-pixels of that color in the unit pixel.
The image sensor module assembly device generates sample gains of the input image for each color channel (S120). For example, the sample gain may be calculated as in Formula 1.
In Formula 1, for example, C denotes pixel values of R, B, Gr, and Gb color sub-pixels for each color channel, n is each index of a same color sub-pixel included in a unit pixel (for example, n is 0 to 3 in the 2×2 pattern), and M denotes a total number of same color sub-pixels included in the unit pixel (for example, M is 4 in the 2×2 pattern).
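Formula 1 itself is not reproduced in this text. Assuming it expresses each sub-pixel value C_n relative to the mean of the M same color sub-pixels in the unit pixel, which would be consistent with the variable descriptions above, a sketch could look like the following (the function name and the assumed form of the gain are hypothetical):

```python
def sample_gains(subpixels):
    """Hypothetical reading of Formula 1: the gain of the n-th same-color
    sub-pixel C_n is its value relative to the mean of all M same-color
    sub-pixels in the unit pixel (M = len(subpixels), e.g. M = 4 for the
    2x2 pattern). Identical sub-pixels therefore yield gains of 1.0."""
    m = len(subpixels)
    mean = sum(subpixels) / m
    return [c / mean for c in subpixels]

# Example: a red 2x2 block with one slightly brighter sub-pixel yields
# gains slightly below and above 1.0, exposing the sensitivity difference.
gains = sample_gains([100, 100, 100, 104])
```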
To explain the sample gain in more detail, calculation of the sample gain for a particular color (for example, a red color) in the input image captured under a white optical condition of 5100K is described with reference to
calculated in S110 in
The image sensor module assembly device may calculate a monitoring value from the sample gain image (S130 in
The image sensor module assembly device may calculate an analysis value for each channel from the monitoring value and the sample gain image, may compare the calculated analysis value with a preset threshold range or threshold value (S140 in
To explain the analysis value by using an example of the sample gain image shown in
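The monitoring and analysis steps (S130, S140) may be sketched as follows (illustrative Python; the chosen metrics and the band limits are hypothetical, not values from the disclosure):

```python
def channel_analysis(gain_image, low=0.95, high=1.05):
    """Compute simple monitoring values (min, max, mean) over a
    per-channel sample-gain image and flag the channel when any of them
    leaves an allowed band [low, high]; the bounds here are illustrative
    placeholders for a preset threshold range."""
    flat = [g for row in gain_image for g in row]
    monitoring = {
        "min": min(flat),
        "max": max(flat),
        "mean": sum(flat) / len(flat),
    }
    in_range = all(low <= v <= high for v in monitoring.values())
    return monitoring, in_range
```

A per-channel distribution or gradient statistic could be added to the dictionary in the same way, matching the other monitoring values mentioned in the text.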
The image sensor module assembly device according to some embodiments is configured to perform a process of firstly assembling the image sensor module by setting the image sensor and the module lens at a first position and attaching the image sensor and the module lens to a module body, a process of inputting the image data to the first assembled image sensor module, a process of pre-processing the input data that is output from the image sensor module, a process of calculating the module optical system reference value and the sensor optical system reference value from the pre-processed input data, a process of comparing the calculated module optical system reference value and the sensor optical system reference value with the preset threshold range (or preset threshold value), and according to a result of comparison, a process of setting the image sensor and the module lens at a second position to secondarily assemble the image sensor module. In some embodiments, the image sensor module assembly device may perform the above processes after assembling the image sensor module based on the second position.
According to some embodiments, the image sensor module assembly device 1000 is connected to an image sensor module 2000, and includes a device setting unit 1100 and a test module 1200.
The device setting unit 1100 mounts the image sensor and the module lens to the module body at the set position. For example, the device setting unit 1100 mounts the image sensor and the module lens to a temporarily set initial position S, and transmits the currently set position S to the test module 1200. Thereafter, the device setting unit 1100 may adjust the positions of the image sensor and the module lens in the image sensor module 2000 according to a notification signal C of the test module 1200, and mount the image sensor and the module lens again based on the adjusted positions.
The test module 1200 inputs image data I to the image sensor module 2000 in which the image sensor and the module lens are temporarily mounted in the device setting unit 1100. The image sensor module 2000 senses the input image data I and outputs input data O. The test module 1200 receives the input data O, pre-processes the input data O, and calculates a module optical system reference value and a sensor optical system reference value from the pre-processed input data.
The input data O may include a unit pixel of an N×N (N is an integer of 2 or more) array pattern, and the pre-processing of the input data O may include, as an example, a process of calculating the pixel average value of the same color sub-pixels belonging to the unit pixel, and a process of outputting the pre-processed input data including the pre-processed unit pixels of the N×N (e.g., 2×2) array pattern to which the pixel average value is mapped for each color. Alternatively, as another example, the pre-processing of the input data O may include a process of calculating one pseudo color value for all color sub-pixels belonging to the unit pixel on the image data including the unit pixels of the N×N (N is an integer of 2 or more) array pattern, and a process of outputting the pre-processed input data including the pre-processed unit pixel to which the pseudo color value is mapped. At this time, in order to calculate the N×N (e.g., 2×2) array pattern or the pseudo color value of the pre-processed unit pixel, the input data may be, according to various embodiments, the data as output from the image sensor module 2000 itself, the output data reduced to a preset ratio, or data obtained by cropping a partial region of the output data.
The test module 1200 may calculate the module optical system reference value from the pre-processed input data. The module optical system reference value is a value that changes according to the position of the module lens in the image sensor module 2000, and may be one of a value of the optical axis, a lens shading value, and an image asymmetry (tilt) value calculated from the pre-processed input data as explained in
The test module 1200 may calculate the sensor optical system reference value from the pre-processed input data. The sensor optical system reference value is a value that changes according to the position of the image sensor in the image sensor module 2000, and may be at least one of a pixel average value for each channel, a pixel minimum value for each channel, a pixel maximum value for each channel, a sensitivity difference (e.g., differential) between adjacent pixels for each channel, an absolute value of the sensitivity difference, and a gradient value between adjacent pixels for each channel calculated from pre-processed input data. The sensor optical system reference value may be calculated as explained in the embodiment of
The test module 1200 compares each of the calculated module optical system reference value and the sensor optical system reference value with a corresponding preset threshold range. As an example, preset threshold ranges may be stored for respective types of numerical values in a memory provided in the test module 1200. The preset threshold range may be a range in which the image sensor module 2000 may be discriminated as a good (or non-faulty) product, that is, a maximum value and a minimum value allowed for the numerical value are stored in a form of a mapping table, for each value including, for example, an optical axis, a lens shading value, an image asymmetry (tilt) value, a pixel average value for each channel, a pixel minimum value for each channel, a pixel maximum value for each channel, a sensitivity difference between adjacent pixels for each channel, an absolute value of the sensitivity difference, a gradient value between adjacent pixels for each channel, or the like. Further, as another example, the preset threshold range may be stored in consideration of a correlation between a first numerical value and a second numerical value. The first numerical value may be one of numerical values included in the module optical system reference value and the sensor optical system reference value, and the second numerical value may be a numerical value included in another module optical system reference value and another sensor optical system reference value that are different from the first numerical value.
The test module 1200 compares each of the calculated module optical system reference value and the sensor optical system reference value with a corresponding preset threshold range, and determines whether the currently set positions of the image sensor and the module lens are appropriate depending on the comparison results. If there is a value outside the corresponding preset threshold range among the module optical system reference value and the sensor optical system reference value (that is, at least one of the module optical system reference value and the sensor optical system reference value does not belong to the corresponding preset threshold range), the test module 1200 determines to change the currently set position(s) of the image sensor and the module lens.
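The comparison against preset threshold ranges stored as a mapping table may be sketched as follows (illustrative Python; the metric names and ranges are hypothetical, not values from the disclosure):

```python
# Illustrative mapping table of allowed (min, max) ranges per reference
# value, as might be stored in the test module's memory.
THRESHOLDS = {
    "optical_axis_offset": (-0.02, 0.02),
    "lens_shading":        (0.0, 0.15),
    "tilt":                (-0.5, 0.5),
    "channel_gain_spread": (0.0, 0.05),
}

def position_ok(reference_values):
    """Return True when every calculated module/sensor optical-system
    reference value lies inside its preset threshold range; a single
    out-of-range value means the currently set position must be changed
    and the module re-assembled at a second position."""
    for name, value in reference_values.items():
        lo, hi = THRESHOLDS[name]
        if not (lo <= value <= hi):
            return False
    return True
```

A correlation between a first and a second numerical value, as mentioned above, could be handled by storing a joint range keyed on the pair of metrics instead of per-metric bounds.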
The test module 1200 may output a notification signal C indicating that the currently set position(s) is not appropriate, according to an embodiment. Alternatively, according to another embodiment, information of a position to be changed is calculated based on the module optical system reference value and the sensor optical system reference value, and may be output to the device setting unit 1100, by including the calculated information of the position in the notification signal. The device setting unit 1100 outputs a control signal P for adjusting the positions of the image sensor and the module lens to the image sensor module 2000 according to the notification signal C.
For example, as in the test module 1200, the device setting unit 1100, and the image sensor module 2000, the term "unit" or "module" as used herein refers to a software or hardware component, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), and the "unit" or "module" performs certain roles. However, the meaning of the "unit" or "module" is not limited to software or hardware. The "unit" or "module" may be configured to reside in an addressable storage medium and configured to execute on one or more processors. Thus, as an example, the "unit" or "module" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. The functionality provided within the components and "units" or "modules" may be combined into fewer components and "units" or "modules", or may be further separated into additional components and "units" or "modules".
The image sensor module assembly device and the assembly method thereof according to the embodiments of the disclosure may calculate in advance an optimal module position that minimizes a pixel sensitivity difference, which is caused due to combined influence of the sensor optical system and the module optical system on fine pixels, and assemble the image sensor module by using the calculated optimal module position. Accordingly, the resolution of the image sensor module may be improved, and also the yield of the image sensor modules may be improved, which reduces costs.
While example embodiments of the disclosure have been described above with reference to the accompanying drawings, the disclosure is not limited to the above embodiments, and may be fabricated in various forms. Those skilled in the art will appreciate that the disclosure may be embodied in other specific forms without changing the technical spirit or essential features of the disclosure. Accordingly, the above-described embodiments should be understood in all respects as illustrative and not restrictive.
Number | Date | Country | Kind |
---|---|---|---
10-2023-0092794 | Jul 2023 | KR | national |