This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2021-0060329, filed on May 10, 2021, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
The inventive concept relates to a camera module and an imaging device including the camera module.
Recently, due to the expansion of camera functions of electronic devices such as smartphones, a plurality of camera modules, each including an image sensor, are being provided in smartphones. These camera modules may employ image stabilization (IS) technology to correct or prevent image shake due to camera movement caused by an unstable fixing device or a user's movement.
IS technology may include an optical image stabilizer (OIS), which corrects image quality by moving a lens or the image sensor of a camera to adjust the optical path. In particular, camera movement, such as that caused by shaking of a user's hand, is detected through a gyro sensor, the distance the lens or image sensor needs to move is calculated based on the detected camera movement, and the effects of the camera movement are compensated for using a lens movement method or a module tilting method.
The inventive concept provides a camera module for capturing an optimal scene by using a tilting device, and an imaging device including the camera module.
According to an aspect of the inventive concept, there is provided an imaging device including a lens module configured to receive an optical signal, an image sensor configured to generate image data based on the received optical signal, a tilting module configured to adjust a position of the image sensor, and a processor configured to control the tilting module based on the image data. The processor is further configured to recognize a target object from the image data and control the tilting module in response to comparing a location of the target object with reference location information including object location information.
According to another aspect of the inventive concept, there is provided an electronic device including a camera module configured to provide a selfie photograph assistance function, and a processor. The camera module includes a lens module configured to receive an optical signal, an image sensor configured to generate image data based on the received optical signal, a tilting module configured to adjust a position of the camera module, and an interface circuit configured to communicate with the processor, transmit the image data to the processor, and receive tilting control data for controlling the tilting module from the processor. The processor is configured to recognize a target object from the image data, compare a location of the target object with reference location information including object location information when capturing an image and generate the tilting control data, and transmit the tilting control data to the tilting module through the interface circuit.
According to another aspect of the inventive concept, there is provided a method of controlling a camera module supporting selfie photograph assistance, the method including: generating image data based on an optical signal received through the camera module, recognizing a target object based on the generated image data, and generating target object information; comparing the target object information with reference location information including object location information when capturing an image; and controlling a field of view for photographing by adjusting a position of the camera module in response to comparing the target object information with the reference location information.
Embodiments of the inventive concept will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings.
Hereinafter, embodiments of the inventive concept will be described in detail with reference to the accompanying drawings.
An imaging device 100 according to an example embodiment of the inventive concept may be provided in an electronic device having an image capturing or light sensing function. For example, the imaging device 100 may be provided in an electronic device such as a digital still camera, a digital video camera, a smartphone, a wearable device, an Internet of Things (IoT) device, a tablet personal computer (PC), a personal digital assistant (PDA), a portable multimedia player (PMP), and a navigation device. Also, the imaging device 100 may be provided in an electronic device provided as a component in a vehicle, furniture, a manufacturing facility, a door, and various measurement devices.
Referring to
The lens module 112 may include a lens. The lens may receive light reflected by an object located within a viewing angle. According to an embodiment, the lens module 112 may be arranged to face a specified direction (e.g., a front or rear direction of the imaging device 100). According to an embodiment, the lens module 112 may include an aperture that adjusts the amount of input light.
The lens module 112 may further include a sensor for detecting the movement of the imaging device 100 or the electronic device, such as a shaking motion, etc. In an embodiment, the lens module 112 may further include a gyro sensor, and the gyro sensor may detect a movement of the imaging device 100 or the electronic device and transmit a detection result to the tilting module 120 or the processor 130. The processor 130 may control the tilting module 120 to correct a shake, based on the detection result.
The lens may be mounted on an optical axis of the image sensor 114, and the optical axis may be in the front or rear direction of the imaging device 100. The lens module 112 may include one lens or a plurality of lenses.
The image sensor 114 may convert light input through the lens module 112 into an electrical signal. For example, the image sensor 114 may generate an image by using object information in light input through the lens module 112. The image sensor 114 may transmit the generated image data to the processor 130.
The image sensor 114 may include a pixel array that receives an optical signal. The pixel array may include, for example, a photoelectric conversion device such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS), and may include various types of photoelectric conversion devices. The pixel array may include a plurality of pixels that convert a received optical signal into an electrical signal, and the plurality of pixels may be arranged in a matrix. Each of the plurality of pixels includes a photo-sensing element. For example, the photo-sensing element may include a photodiode, a phototransistor, a photogate, a pinned photodiode, or the like.
The tilting module 120 may control an actuator driving module (not shown) to change the position of a lens of the lens module 112. By driving an actuator, the actuator driving module may change the position as well as the direction of the lens in the lens module 112. The tilting module 120 may control the actuator to change the location of the lens of the lens module 112, the location of the lens module 112, or the location of the camera module 110 including the lens module 112.
A target correction value for moving the lens may be calculated to correct noise caused by the movement of the imaging device 100 or the electronic device. The target correction value may be a value corresponding to a movement distance of the lens for shake correction (i.e., image stabilization) in a first direction or a second direction.
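As an illustration, a target correction value can be derived from gyro data under a small-angle model. The sketch below is a minimal single-axis example with hypothetical names and units; the specification does not define the actual computation used by the tilting module.

```python
import math

def target_correction_value(gyro_rate_dps, dt_s, focal_length_mm):
    """Estimate the lens shift (mm) needed to cancel a detected shake.

    gyro_rate_dps: angular velocity from the gyro sensor, degrees/second
    dt_s: sampling interval, seconds
    focal_length_mm: lens focal length, millimetres
    """
    # Integrate the angular rate over one sample to get the shake angle.
    angle_rad = math.radians(gyro_rate_dps * dt_s)
    # Image shift on the sensor plane is approximately f * tan(theta).
    return focal_length_mm * math.tan(angle_rad)
```

A separate correction value would be computed for each of the first and second directions from the corresponding gyro axis.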
The tilting module 120 may be a shake correction module, i.e., an image stabilization module. The image stabilization module may perform hand shake correction of the imaging device 100 or the electronic device in order to prevent image shake due to a user's hand shake during image capture.
The processor 130 may control the overall operation of the imaging device 100. According to an embodiment, the processor 130 may control each of the lens module 112, the image sensor 114, and the tilting module 120 to perform correction when capturing an image according to various embodiments of the inventive concept.
For example, the image sensor 114 may receive light reflected by an object through the lens module 112 to generate image data IDT. The image sensor 114 may transmit the generated image data IDT to the processor 130. The processor 130 may recognize a target object based on the received image data IDT, and compare the location of the target object with reference location information including optimal object location information when capturing an image and generate tilting control data TCD. The processor 130 may transmit the generated tilting control data TCD to the tilting module 120. The tilting module 120 may move the lens, the lens module 112, or the camera module 110 according to the received tilting control data TCD to generate improved image data. The terms “tilt” and “tilting”, as used herein, mean any type of movement and/or position adjustment of a lens, lens module 112, and camera module 110.
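A minimal sketch of how the processor might derive the tilting control data TCD from the gap between the recognized target location and the reference location follows; the normalized coordinate convention, the dictionary format, and the proportional gain are assumptions, not details from the specification.

```python
def generate_tilting_control_data(target_xy, reference_xy, gain=1.0):
    """Return per-axis tilt commands proportional to the location error.

    target_xy: (x, y) of the recognized target in normalized image coords
    reference_xy: (x, y) of the preferred object location
    """
    dx = reference_xy[0] - target_xy[0]
    dy = reference_xy[1] - target_xy[1]
    # Proportional control: steer the field of view toward the reference.
    return {"pan": gain * dx, "tilt": gain * dy}
```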
A camera module 200 according to an example embodiment of the inventive concept may be provided in an electronic device having an image capturing or light sensing function. For example, the camera module 200 may be provided in an electronic device such as a digital still camera, a digital video camera, a smartphone, a wearable device, an IoT device, a tablet PC, a PDA, a PMP, and a navigation device.
Referring to
An electronic device 300 may include a processor 310, a display 320, a vibration module 330 configured to generate vibration, and the camera module 200.
The processor 310 may control the overall operation of the electronic device 300. According to an embodiment, the processor 310 may control the camera module 200 to perform correction when capturing an image according to various embodiments of the inventive concept.
The display 320 may display an image captured by the camera module 200 or an image stored in a memory. According to an embodiment, the display 320 may be located on a first surface of the electronic device 300, and the camera module 200 may be located on the first surface on which the display 320 is located, or may be located on a second surface opposite to the first surface on which the display 320 is located.
An interface circuit (not shown) may be included in the camera module 200 to receive image data IDT and transmit the image data IDT to the processor 310 according to a set protocol. The interface circuit may generate data by packing the image data IDT in individual signal units, packet units, or frame units according to a set protocol and transmit the data to the processor 310. For example, the interface circuit may include a mobile industry processor interface (MIPI).
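As a rough illustration of packing image data into packet units before transmission, the framing below is invented for the example and does not follow the actual MIPI CSI-2 packet format.

```python
import struct

def pack_frame(image_bytes, frame_id, payload_size=512):
    """Split raw image data into packets, each prefixed by a 6-byte
    header carrying the frame id, sequence number, and payload length."""
    packets = []
    for seq, offset in enumerate(range(0, len(image_bytes), payload_size)):
        payload = image_bytes[offset:offset + payload_size]
        header = struct.pack("<HHH", frame_id, seq, len(payload))
        packets.append(header + payload)
    return packets
```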
For example, the image sensor 220 may receive light reflected by an object through the lens module 210 to generate image data IDT. The image sensor 220 may transmit the generated image data IDT to the processor 310. The processor 310 may recognize a target object based on the received image data IDT, compare the location of the target object with reference location information including optimal object location information when capturing an image, and generate tilting control data TCD. The processor 310 may transmit the generated tilting control data TCD to the tilting module 230. The tilting module 230 may move (i.e., adjust a position of) the lens, the lens module 210, or the camera module 200 according to the received tilting control data TCD to generate improved image data. In addition, the processor 310 may generate a photographing guide signal including vibration through the vibration module 330 when the location of a target object in acquired image data is within a preset range.
Referring to
Referring to
Referring to
Referring to
The imaging device recognizes the location of a target object in an output image generated through an image sensor, determines information about the scene being photographed, and controls a tilting module accordingly. For example, referring to
Referring to
Referring to
Referring to
The imaging device may capture an improved image by combining a tilting operation with a zoom function in the imaging device. For example, in
The imaging device may perform panoramic photographing through continuous photographing and synthesizing. For example, referring to
Referring to
The imaging device may generate image data by processing a received optical signal by an image sensor (operation S110).
The imaging device may recognize a target object from the generated image data (operation S120). The imaging device may recognize the target object based on a user's face information. For example, the imaging device may determine whether the target object is present by considering the gaze direction of a person's face in the image data, or by considering the movement or amount of movement of a person in the image data. The imaging device may also recognize the target object based on the location or type of an object in the image data. For example, when an object in the image data is a person located close to the center of the image, the imaging device may determine the object to be the target object.
The imaging device may generate tilting control data by comparing the location of the target object with reference location information including optimal object location information when capturing an image (operation S130).
The imaging device may control the tilting module based on the generated tilting control data (operation S140). The tilting module may be an optical image stabilization (OIS) module for compensating for shaking of the imaging device.
The imaging device may perform an operation of correcting distortion of image data generated while controlling the tilting module.
The imaging device may capture a plurality of pieces of image data by controlling the tilting module in stages based on the reference location information and the location of the target object, and may generate panoramic image data.
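The staged capture-and-synthesize idea can be sketched as follows, with stitching reduced to simple row-wise concatenation for brevity; real panorama synthesis would align and blend overlapping regions between the staged frames.

```python
def capture_panorama(capture_at_tilt, tilt_angles):
    """Capture one frame per tilt stage and join them side by side.

    capture_at_tilt: callable taking a tilt angle and returning a frame
        as a list of pixel rows (each row a list of pixel values)
    tilt_angles: the staged tilt positions to step through
    """
    frames = [capture_at_tilt(angle) for angle in tilt_angles]
    rows = len(frames[0])
    # Concatenate corresponding rows of every captured frame.
    return [sum((frame[r] for frame in frames), []) for r in range(rows)]
```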
The imaging device may perform a zoom-in or zoom-out function such that the target object has a target ratio in the image data.
The imaging device may generate a photographing guide signal including vibration when the location of the target object in the image data is within a threshold range.
Referring to
As illustrated in
Through vias extending in a third direction (a Z direction) may be arranged in the peripheral area PERR of the first chip CH1 and the peripheral area PEI of the second chip CH2. The first chip CH1 and the second chip CH2 may be electrically coupled to each other through the through vias. Wiring lines and vertical contacts extending in a first direction (an X direction) or a second direction (a Y direction) may be further formed in the peripheral area PERR of the first chip CH1.
A plurality of wiring lines extending in the first direction (the X direction) and the second direction (the Y direction) may also be arranged in a wiring layer of the second chip CH2, and the wiring lines may be connected to the logic circuit.
Although a structure in which the first chip CH1 and the second chip CH2 are electrically coupled to each other through the through vias has been described, the inventive concept is not limited thereto. For example, the first chip CH1 and the second chip CH2 may be implemented to have various coupling structures such as copper (Cu)—Cu bonding, coupling of a through via and a Cu pad, coupling of a through via and an external connection terminal, and coupling through an integral through via.
Referring to
The camera module group 1100 may include a plurality of camera modules 1100a, 1100b, and 1100c. Although three camera modules 1100a, 1100b, and 1100c are illustrated in
The detailed configuration of the camera module 1100b will be described with reference to
Referring to
The prism 1105 may include a reflective surface 1107 of a light reflecting material and may change the path of light L incident from outside.
In some embodiments, the prism 1105 may change the path of the light L incident in a first direction (an X direction) into a second direction (a Y direction) perpendicular to the first direction (the X direction). The prism 1105 may rotate the reflective surface 1107 of the light reflecting material in a direction A around a central shaft 1106 or rotate the central shaft 1106 in a direction B to change the path of the light L incident in the first direction (the X direction) into the second direction (the Y direction) perpendicular to the first direction (the X direction). In this case, the OPFE 1110 may move in a third direction (a Z direction), which is perpendicular to the first direction (the X direction) and the second direction (the Y direction).
In some embodiments, as illustrated in
In some embodiments, the prism 1105 may move by an angle of about 20 degrees, or in a range from about 10 degrees to about 20 degrees or from about 15 degrees to about 20 degrees, in a plus or minus B direction. In this case, the angle by which the prism 1105 moves in the plus B direction may be the same as, or similar to (within a difference of about 1 degree), the angle by which the prism 1105 moves in the minus B direction.
In some embodiments, the prism 1105 may move the reflective surface 1107 of the light reflecting material in the third direction (the Z direction) parallel with an extension direction of the central shaft 1106.
The OPFE 1110 may include, for example, “m” optical lenses, where “m” is a natural number. The “m” lenses may move in the second direction (the Y direction) and change an optical zoom ratio of the camera module 1100b. For example, when the default optical zoom ratio of the camera module 1100b is Z, the optical zoom ratio of the camera module 1100b may be changed to 3Z, 5Z, or greater by moving the “m” optical lenses included in the OPFE 1110.
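A multi-state zoom can be modeled as a lookup from lens position to effective focal length; the table values below are illustrative only and are chosen so that the states correspond to Z, 3Z, and 5Z.

```python
# Hypothetical lens-position -> focal-length table (mm); values are
# invented for illustration and are not taken from the specification.
ZOOM_STATES = {0: 4.0, 1: 12.0, 2: 20.0}

def optical_zoom_ratio(state, base_state=0):
    """Zoom ratio of a lens-position state relative to the default state Z."""
    return ZOOM_STATES[state] / ZOOM_STATES[base_state]
```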
The actuator 1130 may move the OPFE 1110 or an optical lens to a certain location. For example, the actuator 1130 may adjust the location of the optical lens such that an image sensor 1142 is located at a focal length of the optical lens for accurate sensing.
The image sensing device 1140 may include the image sensor 1142, a control logic 1144, and a memory 1146. The image sensor 1142 may sense an image of an object using the light L provided through the optical lens. The image sensor 1142 may generate image data having a high dynamic range by merging high conversion gain (HCG) image data with low conversion gain (LCG) image data.
The control logic 1144 may control all operations of the camera module 1100b. For example, the control logic 1144 may control operation of the camera module 1100b according to a control signal provided through a control signal line CSLb.
The memory 1146 may store information, such as calibration data 1147, necessary for the operation of the camera module 1100b. The calibration data 1147 may include information, which is necessary for the camera module 1100b to generate image data using the light L provided from outside. For example, the calibration data 1147 may include information about the degree of rotation, information about a focal length, information about an optical axis, or the like. When the camera module 1100b is implemented as a multi-state camera that has a focal length varying with the location of the optical lens, the calibration data 1147 may include a value of a focal length for each location (or state) of the optical lens and information about auto focusing.
The storage 1150 may store image data sensed by the image sensor 1142. The storage 1150 may be provided outside the image sensing device 1140 and may form a stack with a sensor chip of the image sensing device 1140. In some embodiments, although the storage 1150 may include electrically erasable programmable read-only memory (EEPROM), the inventive concept is not limited thereto.
Referring to
In some embodiments, one (e.g., the camera module 1100b) of the camera modules 1100a, 1100b, and 1100c may be of a folded-lens type including the prism 1105 and the OPFE 1110 while the other camera modules (e.g., the camera modules 1100a and 1100c) may be of a vertical type that does not include the prism 1105 and the OPFE 1110. However, the inventive concept is not limited thereto.
In some embodiments, one (e.g., the camera module 1100c) of the camera modules 1100a, 1100b, and 1100c may include a vertical depth camera, which extracts depth information using an infrared ray (IR). In this case, the application processor 1200 may generate a three-dimensional (3D) depth image by merging image data provided from the depth camera with image data provided from another camera module (e.g., the camera module 1100a or 1100b).
In some embodiments, at least two camera modules (e.g., the camera modules 1100a and 1100b) among the camera modules 1100a, 1100b, and 1100c may have different fields of view. In this case, for example, the two camera modules (e.g., the camera modules 1100a and 1100b) among the camera modules 1100a, 1100b, and 1100c may respectively have different optical lenses. However, the inventive concept is not limited thereto.
In some embodiments, the camera modules 1100a, 1100b, and 1100c may have different fields of view from one another. In this case, although the camera modules 1100a, 1100b, and 1100c may respectively have different optical lenses, the inventive concept is not limited thereto.
In some embodiments, the camera modules 1100a, 1100b, and 1100c may be physically separated from one another. In other words, the sensing area of the image sensor 1142 is not divided and used by the camera modules 1100a, 1100b, and 1100c, but the image sensor 1142 may be independently included in each of the camera modules 1100a, 1100b, and 1100c.
Referring back to
The image processing unit 1210 may include a plurality of sub-image processors 1212a, 1212b, and 1212c, an image generator 1214, and a camera module controller 1216.
The image processing unit 1210 may include as many sub-image processors 1212a, 1212b, and 1212c as the camera modules 1100a, 1100b, and 1100c.
Pieces of image data respectively generated by the camera modules 1100a, 1100b, and 1100c may be respectively provided to the sub-image processors 1212a, 1212b, and 1212c through image signal lines ISLa, ISLb, and ISLc separated from each other. For example, image data generated by the camera module 1100a may be provided to the sub-image processor 1212a through the image signal line ISLa, image data generated by the camera module 1100b may be provided to the sub-image processor 1212b through the image signal line ISLb, and image data generated by the camera module 1100c may be provided to the sub-image processor 1212c through the image signal line ISLc. Such image data transmission may be performed using, for example, a MIPI-based camera serial interface (CSI). However, the inventive concept is not limited thereto.
In some embodiments, a single sub-image processor may be provided for a plurality of camera modules. For example, differently from
The image data provided to each of the sub-image processors 1212a, 1212b, and 1212c may be provided to the image generator 1214. The image generator 1214 may generate an output image using the image data provided from each of the sub-image processors 1212a, 1212b, and 1212c according to image generation information or a mode signal.
In detail, the image generator 1214 may generate the output image by merging at least portions of respective pieces of image data, which are respectively generated by the camera modules 1100a, 1100b, and 1100c having different fields of view, according to the image generation information or the mode signal. Alternatively, the image generator 1214 may generate the output image by selecting one of the pieces of image data, which are respectively generated by the camera modules 1100a, 1100b, and 1100c having different fields of view, according to the image generation information or the mode signal.
In some embodiments, the image generation information may include a zoom signal or a zoom factor. In some embodiments, the mode signal may be based on a mode selected by a user.
When the image generation information includes a zoom signal or a zoom factor and the camera modules 1100a, 1100b, and 1100c have different fields of view, the image generator 1214 may perform different operations according to different kinds of zoom signals. For example, when the zoom signal is a first signal, the image generator 1214 may merge image data output from the camera module 1100a with image data output from the camera module 1100c and then generate an output image by using a merged image signal and image data output from the camera module 1100b and not used for merging. When the zoom signal is a second signal different from the first signal, the image generator 1214 may generate an output image by selecting one of the pieces of image data respectively output from the camera modules 1100a, 1100b, and 1100c, instead of performing the merging. However, the inventive concept is not limited thereto, and a method of processing image data may be changed whenever necessary.
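The merge-or-select policy can be sketched as below; the signal names and the tuple representation of "merging" are placeholders for whatever processing the image generator 1214 actually performs.

```python
def generate_output(zoom_signal, data_a, data_b, data_c):
    """First signal: merge modules a and c, then combine with b's data.
    Any other signal: select a single module's data (b here, arbitrarily)."""
    if zoom_signal == "first":
        merged_ac = ("merged", data_a, data_c)
        return ("output", merged_ac, data_b)
    return data_b
```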
In some embodiments, the image generator 1214 may receive a plurality of pieces of image data, which have different exposure times, from at least one of the sub-image processors 1212a, 1212b, and 1212c and perform high dynamic range (HDR) processing on the pieces of image data, thereby generating merged image data having an increased dynamic range.
The camera module controller 1216 may provide a control signal to each of the camera modules 1100a, 1100b, and 1100c. A control signal generated by the camera module controller 1216 may be provided to a corresponding one of the camera modules 1100a, 1100b, and 1100c through a corresponding one of control signal lines CSLa, CSLb, and CSLc, which are separated from one another.
One (e.g., the camera module 1100b) of the camera modules 1100a, 1100b, and 1100c may be designated as a master camera according to the mode signal or the image generation signal including a zoom signal, and the other camera modules (e.g., the camera modules 1100a and 1100c) may be designated as slave cameras. Such designation information may be included in a control signal and provided to each of the camera modules 1100a, 1100b, and 1100c through a corresponding one of the control signal lines CSLa, CSLb, and CSLc, which are separated from one another.
A camera module operating as a master or a slave may be changed according to a zoom factor or an operation mode signal. For example, when the field-of-view of the camera module 1100a is greater than that of the camera module 1100b and the zoom factor indicates a low zoom ratio, the camera module 1100a may operate as a master and the camera module 1100b may operate as a slave. Conversely, when the zoom factor indicates a high zoom ratio, the camera module 1100b may operate as a master and the camera module 1100a may operate as a slave.
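The master/slave switch by zoom factor might look like the sketch below, where the crossover threshold value is an assumption introduced for the example.

```python
def select_master(zoom_factor, fov_a_deg, fov_b_deg, threshold=2.0):
    """Return 'a' or 'b': the wider module is master at a low zoom ratio,
    the narrower (telephoto) module at a high zoom ratio."""
    wide, tele = ("a", "b") if fov_a_deg > fov_b_deg else ("b", "a")
    return wide if zoom_factor < threshold else tele
```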
In some embodiments, a control signal provided from the camera module controller 1216 to each of the camera modules 1100a, 1100b, and 1100c may include a sync enable signal. For example, when the camera module 1100b is a master camera and the camera module 1100a is a slave camera, the camera module controller 1216 may transmit the sync enable signal to the camera module 1100b. The camera module 1100b provided with the sync enable signal may generate a sync signal based on the sync enable signal and may provide the sync signal to the camera modules 1100a and 1100c through a sync signal line SSL. The camera modules 1100a, 1100b, and 1100c may be synchronized with the sync signal and may transmit image data to the application processor 1200.
In some embodiments, a control signal provided from the camera module controller 1216 to each of the camera modules 1100a, 1100b, and 1100c may include mode information according to the mode signal. The camera modules 1100a, 1100b, and 1100c may operate in a first operation mode or a second operation mode in relation with a sensing speed based on the mode information.
In the first operation mode, the camera modules 1100a, 1100b, and 1100c may generate an image signal at a first speed (e.g., at a first frame rate), encode the image signal at a second speed higher than the first speed (e.g., at a second frame rate higher than the first frame rate), and transmit an encoded image signal to the application processor 1200. In this case, the second speed may be at most 30 times the first speed.
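The constraint between the two speeds in the first operation mode can be checked directly, with frame rates standing in for the speeds:

```python
def valid_first_mode(sensing_fps, encoding_fps):
    """True when the encoding speed exceeds the sensing speed
    but is at most 30 times the sensing speed."""
    return sensing_fps < encoding_fps <= 30 * sensing_fps
```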
The application processor 1200 may store the received image signal, i.e., the encoded image signal, in the internal memory 1230 therein or an external memory 1400 outside the application processor 1200. Thereafter, the application processor 1200 may read the encoded image signal from the internal memory 1230 or the external memory 1400, decode the encoded image signal, and display image data generated based on a decoded image signal. For example, a corresponding one of the sub-image processors 1212a, 1212b, and 1212c of the image processing unit 1210 may perform the decoding and may also perform image processing on the decoded image signal.
In the second operation mode, the camera modules 1100a, 1100b, and 1100c may generate an image signal at a third speed lower than the first speed (e.g., at a third frame rate lower than the first frame rate) and transmit the image signal to the application processor 1200. The image signal provided to the application processor 1200 may not have been encoded. The application processor 1200 may perform image processing on the image signal or store the image signal in the internal memory 1230 or the external memory 1400.
The PMIC 1300 may provide power, e.g., a power supply voltage, to each of the camera modules 1100a, 1100b, and 1100c. For example, under the control of the application processor 1200, the PMIC 1300 may provide first power to the camera module 1100a through a power signal line PSLa, second power to the camera module 1100b through a power signal line PSLb, and third power to the camera module 1100c through a power signal line PSLc.
The PMIC 1300 may generate power corresponding to each of the camera modules 1100a, 1100b, and 1100c and adjust the level of the power, in response to a power control signal PCON from the application processor 1200. The power control signal PCON may include a power adjustment signal for each operation mode of the camera modules 1100a, 1100b, and 1100c. For example, the operation mode may include a low-power mode. In this case, the power control signal PCON may include information about a camera module to operate in the low-power mode and a power level to be set. The same or different levels of power may be respectively provided to the camera modules 1100a, 1100b, and 1100c. The level of power may be dynamically changed.
While the inventive concept has been particularly shown and described with reference to embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the scope of the following claims.