Apparatuses and methods consistent with exemplary embodiments relate to stabilizing images captured by one or more cameras by using a sensor fusion scheme.
In general, an image stabilization scheme, which corrects image trembling caused by various factors, uses a method of matching or tracking corresponding points between two images.
However, this image stabilization scheme is very susceptible to external impact, image quality degradation, and the like. Also, when a moving object exists in a captured image, image correction may not be properly performed with respect to the moving object.
In particular, in the case of a method that corrects motion based on image processing alone, a motion that deviates from the tracking region or exceeds the tracking window size may be difficult to correct.
Exemplary embodiments of the inventive concept provide apparatuses and methods of outputting stable images without image movement such as vibration by using a sensor fusion scheme, even when image quality degrades, an external impact is applied to the image capturing apparatus, or a moving object exists in a captured image.
Various aspects of the inventive concept will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
According to one or more exemplary embodiments, there is provided an image stabilizing apparatus which may include: a feature point extractor configured to extract a feature point in a first input image captured by an image capturing apparatus; a movement amount detector configured to detect a movement amount of the image capturing apparatus in response to movement of the image capturing apparatus; a movement position predictor configured to predict a movement position, to which the extracted feature point is expected to have moved by the movement of the image capturing apparatus, in a second input image captured by the image capturing apparatus; a corresponding position detector configured to determine a position of a corresponding feature point in the second input image corresponding to the extracted feature point in the first input image after the movement of the image capturing apparatus; a comparator configured to compare the predicted movement position and the position of the corresponding feature point; and an image stabilizer configured to correct image distortion in the second input image caused by the movement of the image capturing apparatus, based on a result of the comparison.
The comparator may determine that the position of the corresponding feature point is accurate if a distance between the predicted movement position and the position of the corresponding feature point is within a predetermined range, and the image stabilizer may correct the image distortion based on the position of the corresponding feature point if the distance is within the predetermined range. However, if the distance is out of the predetermined range, the position of the corresponding feature point is not used as valid data for correction of the image distortion.
According to one or more exemplary embodiments, there is provided a method of performing image stabilization in an image capturing apparatus. The method may include: extracting, by a feature point extractor, a feature point in a first input image captured by an image capturing apparatus; detecting, by a movement amount detector, a physical movement amount of the image capturing apparatus in response to movement of the image capturing apparatus; predicting, by a movement position predictor, a movement position, to which the extracted feature point is expected to have moved by the movement of the image capturing apparatus, in a second input image captured by the image capturing apparatus; determining, by a corresponding position detector, a position of a corresponding feature point in the second input image corresponding to the extracted feature point in the first input image after the movement of the image capturing apparatus; comparing, by a comparator, the predicted movement position with the position of the corresponding feature point; and correcting, by an image stabilizer, image distortion in the second input image caused by the movement of the image capturing apparatus, based on a result of the comparison.
The method may further include determining that the position of the corresponding feature point is accurate if a distance between the predicted movement position and the position of the corresponding feature point is within a predetermined range, and correcting the image distortion may be performed based on the position of the corresponding feature point if the distance is within the predetermined range. However, if the distance is out of the predetermined range, the position of the corresponding feature point is not used as valid data for correction of the image distortion.
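For illustration only, the following is a minimal Python sketch of the validation step performed by the comparator and image stabilizer described above. The function name, the pixel threshold, and the assumption that the gyro-measured motion has already been converted to pixel units are illustrative and not part of the disclosure.

```python
# Minimal sketch of the sensor-fusion validation step described above.
# All names and the default threshold are illustrative assumptions.
import numpy as np

def stabilize_step(feature_pos, gyro_motion_px, tracked_pos, threshold=3.0):
    """Validate a tracked feature point against the gyro prediction.

    feature_pos    : (x, y) of feature point P1 in the previous frame
    gyro_motion_px : per-frame camera motion from the motion sensor,
                     already converted to pixels (an assumption)
    tracked_pos    : (x, y) of the corresponding point P3 found in the
                     current frame by image-based tracking
    threshold      : the "predetermined range", in pixels
    Returns the motion vector to use for correction, or None if the
    tracked point is rejected (inaccurate, or on a moving object).
    """
    predicted_pos = np.asarray(feature_pos) + np.asarray(gyro_motion_px)  # P2
    distance = np.linalg.norm(predicted_pos - np.asarray(tracked_pos))
    if distance <= threshold:
        # P3 is consistent with the physical movement: valid data.
        return np.asarray(tracked_pos) - np.asarray(feature_pos)
    return None  # out of range: do not use P3 for correction
```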
These and/or other aspects will become apparent and more readily appreciated from the following description of the exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
Reference will now be made in detail to exemplary embodiments which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the exemplary embodiments are merely described below, by referring to the drawings, to explain various aspects of the inventive concept. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Referring to
When the image capturing apparatus is moved as shown in
According to an exemplary embodiment, an image stabilizing apparatus may stabilize an image by using a sensor fusion scheme. The image stabilizing apparatus may stabilize an image by removing distortion caused by various movements of the image capturing apparatus, such as wobbling or vibration, by using both an image sensor and a motion sensor. The image stabilizing apparatus may be implemented in an image capturing apparatus such as a robot, a vehicle, military equipment, a camera, a mobile phone, a smartphone, a laptop computer, a tablet, or a handheld apparatus, but is not limited thereto. According to an exemplary embodiment, the image capturing apparatus may capture an image by using an image sensor such as a complementary metal-oxide-semiconductor (CMOS) image sensor, but is not limited thereto.
A camera will be described below as an example of the image stabilizing apparatus.
Referring to
Also, the camera detects a physical movement amount of the camera by using a motion sensor when the camera is moved by wind, an external impact, or a hand shake of a user. Here, the physical movement may indicate intentional or unintentional tilting, panning, rotation, and/or any other movement of the camera. Next, the camera predicts a movement position P2 320, to which the feature point P1 310 is expected to have moved in a current image captured after the occurrence of the movement, on the basis of the detected physical movement amount.
Thereafter, the camera detects a corresponding feature point P3 330, which corresponds to the feature point P1 310 detected in the previous image, in the current image captured after the occurrence of the movement. The camera may detect the corresponding feature point P3 330 starting from the movement position P2 320 by using feature point tracking. In this case, the amount of calculation and the calculation time needed to obtain the corresponding feature point P3 330 may be reduced compared with the case in which the corresponding feature point P3 330 is searched for starting from the feature point P1 310.
When a distance between the predicted movement position P2 320 and the position of the corresponding feature point P3 330 is within a predetermined range, the camera determines that the position of the corresponding feature point P3 330 is accurate, and uses the same as valid data for distortion correction.
However, when the distance between the predicted movement position P2 320 and the position of the corresponding feature point P3 330 is out of the predetermined range, the camera determines that the position of the corresponding feature point P3 330 is inaccurate or that the corresponding feature point P3 330 is a feature point of a moving object, and does not use data of the corresponding feature point P3 330.
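As a hedged sketch of how tracking can be seeded from the predicted position P2 320 to reduce the search cost, the following uses OpenCV's pyramidal Lucas-Kanade tracker with an initial-flow guess. The disclosure does not specify a particular tracker, so this is one plausible realization, and the window size and pyramid depth are assumed values.

```python
# Sketch: seed Lucas-Kanade tracking with the gyro-predicted position P2
# so the search starts near the expected answer. The gyro-to-pixel
# conversion is assumed to have been done already.
import cv2
import numpy as np

def track_from_prediction(prev_gray, curr_gray, p1, gyro_shift_px):
    p1 = np.float32(p1).reshape(-1, 1, 2)         # feature points P1
    p2_guess = p1 + np.float32(gyro_shift_px)     # predicted positions P2
    p3, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p1, p2_guess.copy(),
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)       # start search at P2
    return p3, status                             # P3 candidates + validity
```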
Referring to
The feature point extractor 210 extracts a feature point from an image signal input to an image capturing apparatus. According to an exemplary embodiment, a feature point may be extracted in an input image by using a general feature point extraction scheme.
The movement amount detector 220 detects a physical movement amount of the image capturing apparatus by using a motion sensor installed in the image capturing apparatus when the image capturing apparatus moves intentionally or unintentionally. The motion sensor may be, for example, a gyro sensor. The gyro sensor may be a three-dimensional (3D) gyro sensor. Also, as illustrated in
The movement position predictor 240 determines a local motion vector S311 (see
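One way the local motion vector could be derived from the gyro reading is a pinhole-camera approximation, sketched below. The rotation-only model and the focal length expressed in pixels are assumptions, since the disclosure does not fix the conversion from angular movement to pixel displacement.

```python
# Sketch of converting the detected physical movement amount (gyro
# angles over one frame interval) into an expected pixel displacement
# under an assumed pinhole-camera, rotation-only model.
import numpy as np

def gyro_to_pixel_shift(yaw_rad, pitch_rad, focal_px):
    """Approximate image-plane shift caused by camera rotation.

    A yaw (pan) of d_theta moves image content by roughly
    focal_px * tan(d_theta) horizontally; pitch (tilt) acts vertically.
    """
    dx = focal_px * np.tan(yaw_rad)
    dy = focal_px * np.tan(pitch_rad)
    return np.array([dx, dy])  # local motion vector, in pixels
```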
The corresponding position detector 230 determines a position of a corresponding feature point P3 330 (see
In this case, the corresponding position detector 230 calculates a global motion vector S130 (see
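A sketch of one plausible way to aggregate per-feature displacements into a global motion vector follows; the median is an assumed choice of robust aggregation, not specified by the disclosure, chosen so that feature points on moving objects do not dominate the estimate.

```python
# Sketch: aggregate per-feature displacements into one global motion
# vector, keeping only points that passed the validation step.
import numpy as np

def global_motion_vector(p1_pts, p3_pts, valid_mask):
    disp = np.asarray(p3_pts) - np.asarray(p1_pts)   # per-feature vectors
    disp = disp[np.asarray(valid_mask, dtype=bool)]  # validated points only
    return np.median(disp, axis=0)                   # robust global vector
```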
According to another exemplary embodiment, the corresponding position detector 230 may include at least one of a low-pass filter (LPF) and a high-pass filter (HPF) to determine whether a distortion of the input image is caused by a user of the camera or by an external environment.
According to an exemplary embodiment, since a motion such as a hand shake of the user is likely to be distributed in a low-frequency domain, it may be extracted by an LPF. Also, since a motion caused by an external environment such as wind is likely to be distributed in a high-frequency domain, it may be extracted by an HPF.
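A minimal sketch of such a split follows, using a first-order IIR low-pass filter whose residual serves as the high-pass output; the filter order and the smoothing factor are assumptions, as the disclosure does not specify the filter design.

```python
# Sketch: split a motion signal into low-frequency (e.g., hand shake)
# and high-frequency (e.g., wind-induced vibration) components using a
# first-order IIR low-pass filter; the residual acts as the HPF output.
import numpy as np

def split_motion(motion_samples, alpha=0.1):
    samples = np.asarray(motion_samples, dtype=float)
    low, lp = [], None
    for m in samples:
        lp = m if lp is None else (1 - alpha) * lp + alpha * m  # LPF
        low.append(lp)
    low = np.array(low)
    high = samples - low  # high-frequency residual
    return low, high
```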
The comparator 250 compares the movement position P2 320 (see
According to another exemplary embodiment, the comparator 250 may compare the local motion vector S311 with the global motion vector S130 to determine whether a difference between the two vectors is within a given range. When the difference is within the given range, the comparator 250 may determine that the position of the corresponding feature point P3 330 is accurate, and when the difference is out of the given range, the comparator 250 may determine that the position of the corresponding feature point P3 330 is inaccurate.
The image stabilizer 260 corrects image distortion by using position information about the corresponding feature point P3 330 (see
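For illustration, the correction can be sketched as a translation warp that undoes the validated global motion. A pure translation is the simplest case; compensating the in-plane rotation the embodiments also contemplate would require a fuller transform (for example, one estimated with cv2.estimateAffinePartial2D).

```python
# Sketch of the correction step: shift the current frame back by the
# validated global motion vector (translation-only, the simplest case).
import cv2
import numpy as np

def correct_frame(frame, global_motion):
    dx, dy = global_motion
    h, w = frame.shape[:2]
    M = np.float32([[1, 0, -dx],
                    [0, 1, -dy]])          # undo the measured shift
    return cv2.warpAffine(frame, M, (w, h))
```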
Referring to
Also, the image processing apparatus 500 matches the feature point (510) extracted in the previous image with a corresponding feature point (513) extracted in the current image.
In order to determine whether the calculated local motion vector (S510) is accurate, it is determined whether a distance between the position of the movement point (511), which is predicted on the basis of the physical movement amount of the image processing apparatus 500 measured by the motion sensor, and the position of the corresponding feature point (513), which is extracted in the current image by the image sensor, is within a predetermined range (520 and 530).
If the distance is within the predetermined range, the image processing apparatus 500 corrects image distortion by using information about the corresponding feature point (513) extracted by the image sensor and then outputs a stabilized image (S520).
On the other hand, if the distance is out of the predetermined range, the image processing apparatus 500 determines that the information about the corresponding feature point (513) extracted in the current image is incorrect or that the feature point belongs to a moving object, and does not use the information (S531).
Referring to
Referring to
The movement position predictor (see 240 in
Also, according to an exemplary embodiment, when the image capturing apparatus is moved, the image capturing apparatus extracts a feature point in the current image at a time t2 by image processing in the corresponding position detector (see 230 in
The comparator of the image capturing apparatus determines whether the corresponding feature point is valid data, by determining whether a distance between the position of the corresponding feature point and the predicted movement position is within a predetermined range (S770). Thereafter, the image capturing apparatus corrects image distortion by using position information about the corresponding feature point within the predetermined range (S780).
Referring to
Also, the digital camera may further include a user input (UI) module 20 for inputting a user's operation signal, synchronous dynamic random access memory (SDRAM) 30 for temporarily storing input image data, data for processing operations, and processing results, a flash memory 40 for storing an algorithm needed for the operation of the digital camera, and a Secure Digital (SD)/Compact Flash (CF)/SmartMedia (SM) card 50 as a recording medium for storing image files.
Also, the digital camera may be equipped with a liquid crystal display (LCD) 60 as a display. Also, the digital camera may further include an audio signal processor 71 for converting sound into a digital signal or a digital signal from a sound source into an analog signal, and generating an audio file, a speaker (SPK) 72 for outputting sound, and a microphone (MIC) 73 for inputting sound. Also, the digital camera may further include a digital signal processor (DSP) 80 for controlling the operation of the digital camera.
The configuration and function of each component will now be described in more detail.
The motor 14 may be controlled by the driver 15. The driver 15 may control the operation of the motor 14 in response to a control signal received from the DSP 80.
The imaging sensor 12 may receive an optical signal from the optical module 11 and form an image of the object. The imaging sensor 12 may include a CMOS sensor or a charge-coupled device (CCD) sensor.
The input signal processor 13 may include an analog-to-digital (A/D) converter for converting the analog electrical signal supplied from the imaging sensor 12, such as a CMOS or CCD sensor, into a digital signal. Also, the input signal processor 13 may further include a circuit for performing signal processing, such as gain control or waveform shaping, on the electrical signal provided by the imaging sensor 12.
The DSP 80 may perform image signal processing operations on input image data. The image signal processing operations may include gamma correction, color filter array interpolation, color matrix, color correction, color enhancement, estimation of wobbling parameters, and image restoration based on the estimated wobbling parameters. Also, the DSP 80 may compress image data obtained by the image signal processing into an image file or restore the original image data from the image file. Images may be compressed by using a reversible or irreversible algorithm.
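As an illustration of one of the listed operations, a gamma-correction sketch using an 8-bit lookup table follows; the gamma value of 2.2 is an assumed, typical choice and is not taken from the disclosure.

```python
# Illustrative sketch of gamma correction via a lookup table over the
# 256 possible 8-bit values; gamma = 2.2 is an assumed default.
import cv2
import numpy as np

def gamma_correct(image_8u, gamma=2.2):
    lut = np.clip(((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255.0,
                  0, 255).astype(np.uint8)
    return cv2.LUT(image_8u, lut)  # apply the table to every pixel
```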
The DSP 80 may perform the above-described image signal processing and control each component according to the processing results or in response to a user control signal input through the UI module 20.
As described above, according to the above exemplary embodiments, the image stabilizing apparatus may compensate for image movement such as image translation, in-plane rotation, and vibration caused by external impact and camera movement such as three-axis rotation and translation.
Also, according to the above exemplary embodiments, the image stabilizing apparatus and method thereof may provide stable image signals in various intelligent image surveillance systems that are used in major national facilities, such as military airports, harbors, roads, and bridges, subways, buses, buildings' roofs, stadia, parking lots, cars, mobile devices, and robots.
According to exemplary embodiments, the methods described above with reference to the drawings may be realized as program code which is executable by a computer, and the program code may be stored in various non-transitory computer readable media and provided to each device so as to be executed by a processor. The non-transitory computer readable medium refers to a medium which stores data semi-permanently, rather than for a short time as a register, a cache, or a memory does, and which is readable by an apparatus. Specifically, the above-described various applications or programs may be stored in and provided on a non-transitory recordable medium such as a compact disc (CD), a digital versatile disc (DVD), a hard disk, a Blu-ray disc, a universal serial bus (USB) device, a memory card, or a read-only memory (ROM).
At least one of the components, elements or units represented by a block as illustrated in
The terminology used herein is for the purpose of describing exemplary embodiments only and is not intended to limit the meaning thereof or the scope of the inventive concept defined by the following claims.
It should be understood that exemplary embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each exemplary embodiment should typically be considered as available for other similar features or aspects in other exemplary embodiments.
While one or more exemplary embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims.
Number | Date | Country | Kind |
---|---|---|---|
10-2013-0082470 | Jul 2013 | KR | national |
10-2014-0007467 | Jan 2014 | KR | national |
10-2015-0040206 | Mar 2015 | KR | national |
This is a continuation of U.S. application Ser. No. 15/288,788 filed Oct. 7, 2016, which is a continuation-in-part (CIP) application of U.S. application Ser. No. 14/830,894 filed Aug. 20, 2015, U.S. application Ser. No. 14/601,467 filed Jan. 21, 2015 and published as US 2015/0206290 A1 on Jul. 23, 2015, and U.S. application Ser. No. 14/075,768 filed Nov. 8, 2013 and published as US 2015/0015727 A1 on Jan. 15, 2015, and which claims priority from Korean Patent Application No. 10-2015-0040206 filed on Mar. 23, 2015, Korean Patent Application No. 10-2014-0007467 filed on Jan. 21, 2014, and Korean Patent Application No. 10-2013-0082470 filed on Jul. 12, 2013, respectively, in the Korean Intellectual Property Office, the entire disclosures of which are incorporated herein in their entirety by reference.
Number | Date | Country
---|---|---
20190020821 A1 | Jan 2019 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 15288788 | Oct 2016 | US
Child | 16133196 | | US
Parent | 14830894 | Aug 2015 | US
Child | 15288788 | | US
Parent | 14601467 | Jan 2015 | US
Child | 14830894 | | US
Parent | 14075768 | Nov 2013 | US
Child | 14601467 | | US