Embodiments of the present application relate to the technical field of an image sensor, and for example, to an image sensor, a method for manufacturing the image sensor, an image recognition method, and an electronic device.
An image sensor converts an optical image into an electrical signal. With the development of the computer and communication industries, there is an increasing need for high-performance image sensors in various fields such as a digital camera, a video recorder, a personal communication system (PCS), a game console, a camera, and a medical micro-camera.
In the related art, the image sensor may include an image sensing chip and a lens covering the image sensing chip. An imaging object is imaged on the image sensing chip through the lens, and then the image sensing chip is controlled to be exposed through a control unit disposed on the periphery of the image sensing chip, such that an optical signal is converted into an electric signal, and thus an image of the imaging object is obtained.
However, the image sensor in the related art requires an image sensing chip with a large area, and the image sensing chip is expensive, resulting in a high cost of the image sensor.
Embodiments of the present application provide an image sensor, a method for manufacturing the image sensor, an image recognition method, and an electronic device, to avoid the high manufacturing cost of the image sensor in the related art.
In a first aspect, an embodiment of the present application provides an image sensor. The image sensor includes a sensor unit array, an encapsulation layer, a rewiring layer and a circuit board. The sensor unit array includes multiple sensor units, the multiple sensor units are arranged in an array, each of the multiple sensor units is configured to generate a respective partial size image of an imaging object, and each of the multiple sensor units includes at least one interconnection structure. The encapsulation layer wraps the sensor unit array, and exposes the at least one interconnection structure of each of the multiple sensor units. The rewiring layer is disposed on a side of the encapsulation layer, and is electrically connected to the at least one interconnection structure. The circuit board is disposed on a side of the rewiring layer away from the encapsulation layer, and is electrically connected to the rewiring layer.
In a second aspect, an embodiment of the present application further provides a method for manufacturing an image sensor. The method includes that: a base substrate is provided; a sensor unit array is formed on the base substrate, where the sensor unit array includes multiple sensor units, the multiple sensor units are arranged in an array, each of the multiple sensor units is configured to generate a respective partial size image of an imaging object, and each of the multiple sensor units includes at least one interconnection structure; an encapsulation layer is prepared on the base substrate, where the encapsulation layer wraps the sensor unit array, and exposes the at least one interconnection structure of each of the multiple sensor units; a rewiring layer is prepared on a side of the encapsulation layer away from the base substrate, where the rewiring layer is electrically connected to the at least one interconnection structure; and a circuit board is prepared on a side of the rewiring layer away from the encapsulation layer, where the circuit board is electrically connected to the rewiring layer.
In a third aspect, an embodiment of the present application further provides an image recognition method. The image recognition method adopts the image sensor provided in the first aspect. The method includes that: multiple partial size recognition images generated by the sensor unit array are acquired; position information of at least two image feature points is acquired based on the multiple partial size recognition images; and an image feature point recognition algorithm is adopted to recognize a recognition image captured by the image sensor according to the position information of the at least two image feature points.
In a fourth aspect, an embodiment of the present application further provides an electronic device. The electronic device includes the image sensor provided in the first aspect.
Embodiments of the present application provide an image sensor. The image sensor includes a sensor unit array, an encapsulation layer, a rewiring layer and a circuit board. The sensor unit array includes multiple sensor units, the multiple sensor units are arranged in an array, each sensor unit is configured to generate a respective partial size image of an imaging object, and each sensor unit includes at least one interconnection structure. The encapsulation layer wraps the sensor unit array, and exposes the at least one interconnection structure of each sensor unit. The rewiring layer is disposed on a side of the encapsulation layer, and is electrically connected to the at least one interconnection structure. The circuit board is disposed on a side of the rewiring layer away from the encapsulation layer, and is electrically connected to the rewiring layer. In the technical scheme described above, the image sensor includes the sensor unit array, the sensor unit array includes the multiple sensor units arranged in the array, and each sensor unit generates the respective partial size image of the imaging object. Compared with a sensor chip disposed in a whole piece manner, the coverage area of the sensor chip can be saved, the total volume of the whole image sensor can be effectively reduced without affecting the imaging quality, the miniaturized design of the image sensor is easy to implement, and the manufacturing cost of the image sensor is saved. Meanwhile, each sensor unit includes at least one interconnection structure, the whole sensor unit array is connected to the rewiring layer and the circuit board through the interconnection structures, and the whole image sensor is encapsulated by adopting a fan-out process, so that a good encapsulation effect is ensured.
As shown in
Referring to
In summary, according to the image sensor provided in the embodiment of the present application, the sensor includes multiple sensor units arranged in the array, and each sensor unit generates the respective partial size image of the imaging object. Compared with a sensor chip arranged in a whole piece manner, the coverage area of the sensor chip can be saved, the total volume of the whole image sensor can be effectively reduced without affecting the imaging quality, the miniaturized design of the image sensor is easy to implement, and the manufacturing cost of the image sensor is saved. Meanwhile, each sensor unit includes at least one interconnection structure, the whole sensor unit array is connected to the circuit board through the rewiring layer, and the whole image sensor is encapsulated by adopting a fan-out process, so that a good encapsulation effect is ensured.
Exemplarily, the encapsulation cover plate 1011 may be a flexible substrate, and the material thereof may include at least one of polyimide, polyethylene terephthalate, polyethylene naphthalate, polycarbonate, polyarylate, or polyether sulfone. Alternatively, the encapsulation cover plate 1011 may be a rigid substrate, such as a silicon wafer, a glass substrate, or another rigid substrate. The type and material of the substrate are not limited in the embodiment of the present application.
The optical element 1013 is disposed corresponding to each sensor chip 1012. When the image sensor is in operation, the optical element 1013 receives a part of the incident light from the imaging object and images that part of the incident light on the sensor chip 1012 corresponding to the optical element 1013, and the sensor chip 1012 generates the partial size image of the imaging object.
A lens is used as an example. According to an imaging principle of an optical lens, 1/f = 1/u + 1/v, where f denotes a focal distance of the lens, u denotes an image distance, and v denotes an object distance. By adjusting the focal distance f of the lens and the distance v from the lens to an object to be imaged, the distance u between the optical element 1013 and the sensor chip 1012 may be adjusted, so that the area of the image is smaller than the area of the object by a certain factor and the size of the sensor chip 1012 is controlled, which provides a degree of freedom for the design of the sensor chip 1012 and ensures the flexibility of setting the size of each sensor chip 1012.
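As a minimal sketch of the relationship described above, the following Python snippet solves the thin-lens equation 1/f = 1/u + 1/v for the image distance u and reports the lateral magnification u/v (the image is smaller than the object when u < v). The numerical values of f and v are hypothetical and used only for illustration; they are not taken from the embodiment.

```python
def image_distance(f_mm: float, v_mm: float) -> float:
    """Solve 1/f = 1/u + 1/v for the image distance u (thin-lens equation),
    with f the focal distance of the lens and v the object distance."""
    if v_mm <= f_mm:
        raise ValueError("object distance must exceed the focal distance for a real image")
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

def lateral_magnification(u_mm: float, v_mm: float) -> float:
    """Magnification u / v; the image is smaller than the object when u < v."""
    return u_mm / v_mm

# Hypothetical numbers for illustration only (not taken from the embodiment):
f = 2.0    # focal distance of the optical element 1013, in mm
v = 20.0   # distance from the lens to the object to be imaged, in mm
u = image_distance(f, v)            # distance between optical element and sensor chip
m = lateral_magnification(u, v)     # how much smaller the image is than the object
print(f"image distance u = {u:.3f} mm, magnification = {m:.3f}")
# u is about 2.222 mm and m is about 0.111, so a 10 mm object region
# maps to roughly 1.1 mm on the sensor chip.
```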
In an embodiment, the optical element 1013 may be disposed between a film layer where the encapsulation cover plate 1011 is located and a film layer where the sensor chip 1012 is located, as shown in
In an embodiment, each sensor chip 1012 may correspond to at least one optical element 1013.
In an embodiment, the interconnection structure 1014 may include at least one of a metal solder ball, a metal pad or a metal bump, which is not limited in the embodiments of the present application. The interconnection structure 1014 only needs to satisfy electrical and mechanical connection functions, and the drawings in the embodiments of the present application are illustrated by only using an example in which the interconnection structure 1014 is the metal solder ball.
Exemplarily,
In an embodiment, in the sensor chip 101 provided in the embodiments of the present application, the optical element 1013 may be at least one of a lens, an imaging aperture or a collimator.
In an embodiment, referring to
In an embodiment, the image sensor provided in the embodiment of the present application includes the sensor unit array 10, the sensor unit array 10 includes multiple sensor units 101, each sensor unit 101 generates a respective partial size image of an imaging object, and the whole sensor unit array 10 may generate a complete size image of the imaging object or partial size images of the imaging object, which is not limited in the embodiment of the present application. When the image recognition is performed, in the case where the sensor unit array 10 generates the complete size image of the imaging object, the complete size image generated by the sensor unit array 10 is compared with a preset image of the imaging object, and then the image recognition may be performed, which is not detailed in the embodiment of the present application. The following focuses on describing how to perform the image recognition in the case where the sensor unit array 10 generates the partial size images of the imaging object.
Exemplarily, in the case where the distance L between the coverage areas S of two adjacent sensor units 101 is greater than 0, that is, L > 0, the effective visual angle of the sensor unit array 10 provided in the embodiment of the present application cannot completely cover the imaging object, and the sensor unit array 10 does not acquire a complete size image of the imaging object, so that the image recognition cannot be performed through a conventional image recognition method. Based on this, the embodiment of the present application creatively provides an image recognition method adopting "image feature point recognition".
In step S110, multiple partial size recognition images generated by a sensor unit array are acquired.
Exemplarily, the multiple partial size recognition images generated by the sensor unit array are acquired first, and this step is completed through image capturing by the image sensor provided in the embodiment of the present application.
In step S120, position information of at least two image feature points is acquired based on the multiple partial size recognition images.
Exemplarily, as shown in
Each sensor unit array may include M rows and N columns of sensor units, and each sensor unit may include X rows and Y columns of pixels. Thus, an image feature point falling within a coverage range of a sensor unit may be represented by a coordinate (x, y, m, n, a) located in a feature space. Here, x denotes an abscissa of the image feature point in a certain sensor unit, and 0 ≤ x ≤ X; y denotes an ordinate of the image feature point in a certain sensor unit, and 0 ≤ y ≤ Y; m denotes an abscissa of the sensor unit where the image feature point is located in the whole sensor unit array, and 0 ≤ m ≤ M; n denotes an ordinate of the sensor unit where the image feature point is located in the whole sensor unit array, and 0 ≤ n ≤ N; and a denotes a feature angle of the image feature point.
Since the position of each sensor unit in the whole sensor unit array is known, a set of all image feature points located within the coverage ranges of the sensor units may be determined and acquired.
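As a minimal sketch of the feature-space coordinate described above, the following Python snippet represents an image feature point as the tuple (x, y, m, n, a) and maps it to a position on the whole sensor unit array. The pixel pitch and the centre-to-centre spacing of adjacent sensor units are hypothetical parameters assumed to be known from the sensor geometry; they are not specified in the embodiment.

```python
from dataclasses import dataclass

@dataclass
class FeaturePoint:
    x: int    # abscissa of the feature point inside its sensor unit, 0 <= x <= X
    y: int    # ordinate of the feature point inside its sensor unit, 0 <= y <= Y
    m: int    # abscissa of the sensor unit within the array, 0 <= m <= M
    n: int    # ordinate of the sensor unit within the array, 0 <= n <= N
    a: float  # feature angle of the feature point, in degrees

def global_position(p: FeaturePoint,
                    pixel_pitch: float,
                    unit_pitch_x: float,
                    unit_pitch_y: float) -> tuple[float, float]:
    """Map a per-unit coordinate to a position on the whole sensor unit array.

    pixel_pitch, unit_pitch_x and unit_pitch_y (the centre-to-centre spacing
    of adjacent sensor units, including the gap L between their coverage
    areas) are hypothetical parameters, assumed known from the geometry.
    """
    gx = p.m * unit_pitch_x + p.x * pixel_pitch
    gy = p.n * unit_pitch_y + p.y * pixel_pitch
    return gx, gy

# Example: two minutiae captured by different sensor units of the array.
p1 = FeaturePoint(x=12, y=30, m=0, n=1, a=45.0)
p2 = FeaturePoint(x=5,  y=8,  m=2, n=1, a=90.0)
print(global_position(p1, pixel_pitch=0.05, unit_pitch_x=4.0, unit_pitch_y=4.0))
print(global_position(p2, pixel_pitch=0.05, unit_pitch_x=4.0, unit_pitch_y=4.0))
```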
In step S130, an image feature point recognition algorithm is adopted to recognize a recognition image captured by the image sensor according to the position information of the at least two image feature points.
Exemplarily, the image feature point recognition algorithm is adopted to recognize the recognition image captured by the image sensor according to the acquired position information of the at least two image feature points.
The image feature point recognition algorithm may adopt an image feature point recognition algorithm known in the art. For example, reference may be made to the document "Direct gray-scale minutiae detection in fingerprints" with doi: 10.1109/34.566808, the document "Pores and ridges: High-resolution fingerprint matching using level 3 features" with doi: 10.1109/TPAMI.2007.250596, the document "Fingerprint minutiae extraction from skeletonized binary images" with doi: 10.1016/S0031-3203(98)00107-1, and the document "Extraction of high confidence minutiae points from fingerprint images" with doi: 10.1109/ICCACS.2015.7361357.
According to the image recognition method provided in the embodiment of the present application, based on the recognition image captured by the image sensor provided in the embodiment of the present application, the multiple partial size recognition images generated by the sensor unit array are acquired, the position information of the at least two image feature points is acquired based on the multiple partial size recognition images, and the image feature point recognition algorithm is adopted to recognize the recognition image captured by the image sensor according to the position information of the at least two image feature points. Since the recognition image captured by the image sensor cannot include all the recognition image information, the image recognition method of "image feature point recognition" is creatively adopted in the embodiment of the present application. Therefore, the image recognition method is accurate and feasible, and the recognition image captured by the image sensor provided in the embodiment of the present application can be accurately recognized.
In an embodiment, the step in which the image feature point recognition algorithm is adopted to recognize the recognition image captured by the image sensor according to the position information of the at least two image feature points may include that: a distance between any two image feature points is calculated according to the position information of the at least two image feature points; and the image feature point recognition algorithm is adopted to recognize the recognition image captured by the image sensor according to the distance between any two image feature points.
Exemplarily, referring to
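The following Python snippet is a hedged sketch of the step described above: it computes the distance between any two image feature points from their positions on the whole sensor unit array (for example, positions produced by the global_position() helper sketched earlier) and applies a toy comparison rule. The comparison rule is an assumption for illustration only and is not the image feature point recognition algorithm of the embodiment.

```python
import math
from itertools import combinations

def pairwise_distances(points_xy):
    """Return the Euclidean distance between every pair of feature points.

    points_xy: list of (gx, gy) positions on the whole sensor unit array.
    The set of pairwise distances is invariant to translation and rotation
    of the capture, which is why distances between feature points remain
    usable even though each capture only covers part of the imaging object.
    """
    dists = {}
    for (i, (x1, y1)), (j, (x2, y2)) in combinations(enumerate(points_xy), 2):
        dists[(i, j)] = math.hypot(x2 - x1, y2 - y1)
    return dists

def match_score(dists_a, dists_b, tol=0.1):
    """Toy comparison rule (an assumption, not the embodiment's algorithm):
    the fraction of corresponding pairwise distances that agree within tol."""
    shared = set(dists_a) & set(dists_b)
    if not shared:
        return 0.0
    hits = sum(1 for k in shared if abs(dists_a[k] - dists_b[k]) <= tol)
    return hits / len(shared)

# Usage with two small sets of global positions:
d_recognition = pairwise_distances([(0.6, 5.5), (8.25, 4.4), (2.0, 1.0)])
d_entry = pairwise_distances([(0.62, 5.48), (8.30, 4.35), (2.05, 1.02)])
print(match_score(d_recognition, d_entry))
```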
In an embodiment, before the multiple partial size recognition images generated by the sensor unit array are acquired, the method may further include that: multiple partial size entry images generated by the sensor unit array are acquired multiple times, and a partial size entry image library is generated; and an image stitching algorithm is adopted to generate a complete size entry image according to the partial size entry image library.
Exemplarily, the image recognition may generally be divided into two processes, i.e., image entry and image recognition. In the image entry, the system may require the object to be entered to move multiple times over an image entry plane of the image sensor, multiple partial size entry images generated by the sensor unit array are acquired multiple times, and the partial size entry image library is generated. Then, according to the partial size entry image library, the image stitching algorithm is adopted to cut and stitch the partial size entry images, and the complete size entry image containing all the image feature point information is generated. In a subsequent image recognition process, the acquired recognition image containing part of the image feature points is compared with the entry image containing all the image feature points to perform the image recognition.
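The following Python sketch illustrates the entry stage described above under stated assumptions: partial size entry images collected over multiple captures are stored in a partial size entry image library, and the translation between two overlapping captures is estimated by phase correlation. The embodiment does not specify which image stitching algorithm is used, so phase correlation is only a stand-in for estimating the offsets that a cutting and stitching step could use to assemble the complete size entry image.

```python
import numpy as np

def estimate_shift(img_a: np.ndarray, img_b: np.ndarray) -> tuple[int, int]:
    """Estimate the (row, col) translation of img_b relative to img_a by
    phase correlation. Both images must have the same shape."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = np.conj(fa) * fb
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase term
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold wrap-around peaks back into signed shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return int(shifts[0]), int(shifts[1])

class EntryLibrary:
    """Partial size entry image library built over multiple captures."""
    def __init__(self):
        self.captures = []   # each capture: list of partial size images

    def add_capture(self, partial_images):
        self.captures.append(partial_images)

    def pairwise_offsets(self):
        """Offsets between consecutive captures of each sensor unit, which a
        cutting-and-stitching step could use to assemble the entry image."""
        offsets = []
        for prev, curr in zip(self.captures, self.captures[1:]):
            offsets.append([estimate_shift(a, b) for a, b in zip(prev, curr)])
        return offsets

# Quick check with synthetic data: a 64x64 image rolled by (5, -3).
rng = np.random.default_rng(0)
base = rng.random((64, 64))
print(estimate_shift(base, np.roll(base, shift=(5, -3), axis=(0, 1))))  # (5, -3)
```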
It is to be noted that the image recognition method provided in the embodiment of the present application is only explained by taking fingerprint recognition as an example. It can be understood that since the image distance of the sensor unit and the focal distance of the optical element in the image sensor provided in the embodiment of the present application are adjustable, the object distance of the sensor unit in the embodiment of the present application is also adjustable. Therefore, the image sensor provided in the embodiment of the present application may recognize objects with different object distances, for example, the image sensor provided in the embodiment of the present application may implement face recognition in combination with a face recognition algorithm, as shown in
An embodiment of the present application further provides a method for manufacturing an image sensor. As shown in
In step S210, a base substrate is provided.
In step S220, a sensor unit array is formed on the base substrate, where the sensor unit array includes multiple sensor units, the multiple sensor units are arranged in an array, each sensor unit is configured to generate a respective partial size image of an imaging object, and each sensor unit includes at least one interconnection structure.
In an embodiment, the sensor unit array 10 may be bonded to the base substrate 50 with glue.
In an embodiment, a material of the interconnection structure 1014 may be a solder metal, such as Sn, Ag, Cu, Pb, Au, Ni, Zn, Mo, Ta, Bi or In, or an alloy thereof.
In step S230, an encapsulation layer is prepared on the base substrate, where the encapsulation layer wraps the sensor unit array, and exposes the at least one interconnection structure of each sensor unit.
Exemplarily, the step in which the encapsulation layer wrapping the sensor unit array and exposing the at least one interconnection structure of each sensor unit is prepared on the base substrate may include that: the encapsulation layer wrapping the sensor unit array is prepared on the base substrate; and the encapsulation layer is thinned to expose the at least one interconnection structure of each sensor unit.
In step S240, a rewiring layer is prepared on a side of the encapsulation layer away from the base substrate, where the rewiring layer is electrically connected to the at least one interconnection structure of each sensor unit.
In step S250, a circuit board is prepared on a side of the rewiring layer away from the encapsulation layer, where the circuit board is electrically connected to the rewiring layer.
In summary, according to the method for manufacturing the image sensor provided in the embodiment of the present application, the sensor includes multiple sensor units arranged in the array, and each sensor unit generates the respective partial size image of the imaging object. Compared with a sensor chip disposed in a whole piece manner, the coverage area of the sensor chip can be saved, the total volume of the whole image sensor can be effectively reduced without affecting the imaging quality, the miniaturized design of the image sensor is easy to implement, and the manufacturing cost of the image sensor can be saved. Meanwhile, each sensor unit includes at least one interconnection structure, the whole sensor unit array is connected to the circuit board through the rewiring layer, and the whole image sensor is encapsulated by adopting a fan-out process, so that a good encapsulation effect is ensured.
In an embodiment, the method for manufacturing the image sensor provided in the embodiment of the present application may further include that: the base substrate is stripped.
Exemplarily,
An embodiment of the present application further provides an electronic device, and the electronic device may include the image sensor provided in the embodiments of the present application, which is not repeated herein. In an embodiment, the electronic device provided in the embodiment of the present application may be a camera, a video camera, an attendance machine, a lens module, or another electronic device needing to use an image sensor, and the embodiments of the present application do not list them one by one.
| Number | Date | Country | Kind |
|---|---|---|---|
| 201910160614.9 | Mar 2019 | CN | national |
This is a National Stage Application filed under 35 U.S.C. 371 based on International Patent Application No. PCT/CN2019/122025, filed on Nov. 29, 2019, which claims priority to Chinese Patent Application No. 201910160614.9 filed Mar. 4, 2019, the disclosures of both of which are incorporated herein by reference in their entireties.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/CN2019/122025 | 11/29/2019 | WO | 00 |