This application claims the benefit of Korea Patent Application No. 10-2023-0018106 filed on Feb. 10, 2023, which is incorporated herein by reference for all purposes as if fully set forth herein.
The present disclosure relates to a method of deblurring motion in a light field (LF) image using coded exposure photography (CEP), a computational-photography technique, in order to restore the light field image.
Coded exposure photography (CEP), also known as a flutter shutter, is a method of effectively reducing motion blur in a photograph.
The theoretical background for the effectiveness of CEP is as follows.
A captured image b(x,y) in a spatial domain is defined as shown in Equation 1 below.
When this is converted into a frequency domain, Equation 2 below is obtained.
Image deblurring based on this is defined as shown in Equation 3 below.
Meanings of respective symbols in Equations 1 to 3 are as follows.
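The equation bodies and symbol definitions do not survive in this text. As a hedged reconstruction (an assumption based on the standard flutter-shutter formulation in the CEP literature, not reproduced from the original), Equations 1 to 3 are commonly written as:

```latex
% Eq. 1: spatial-domain image formation (convolution with the PSF plus noise)
b(x,y) = \big(i * h\big)(x,y) + n(x,y)

% Eq. 2: the same model in the frequency domain
B(u,v) = I(u,v)\, H(u,v) + N(u,v)

% Eq. 3: deblurring by inverse filtering (ignoring the noise term)
\hat{I}(u,v) = \frac{B(u,v)}{H(u,v)}
```

Here b is the captured (blurred) image, i the latent sharp image, h the point spread function (PSF), n sensor noise, and B, I, H, N their Fourier transforms; (x,y) are spatial coordinates and (u,v) spatial frequencies. Equation 3 fails wherever H(u,v) = 0, which is the situation CEP is designed to avoid.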
Incidentally, when a conventional shutter is used, the PSF (H(u,v)) takes the value of “0” at certain spatial frequencies, as schematically illustrated in the drawings, and thus the deconvolution of Equation 3 becomes ill-posed and image content at those frequencies cannot be recovered.
On the other hand, since the PSF (H(u,v)) does not have the value of “0” in the spatial frequency domain when CEP is applied, as schematically illustrated in the drawings, the deconvolution remains well-posed and the motion blur can be removed.
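The contrast can be made concrete with a short numerical sketch (an illustration under our own assumptions; the code sequence and blur length below are arbitrary examples, not values from the disclosure), comparing the MTF of a conventional box-shutter blur kernel with that of a coded-exposure kernel of the same extent:

```python
import numpy as np

N_FREQ = 256
BLUR_LEN = 16  # blur extent in pixels (hypothetical)

# Conventional shutter: the blur kernel is a normalized box whose
# transfer function (a sinc) has exact zeros in the frequency domain.
box = np.ones(BLUR_LEN) / BLUR_LEN

# Coded exposure: the shutter opens and closes within the exposure,
# so the kernel follows a binary code (an arbitrary example sequence).
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1], dtype=float)
coded = code / code.sum()

mtf_box = np.abs(np.fft.rfft(box, N_FREQ))
mtf_coded = np.abs(np.fft.rfft(coded, N_FREQ))

# The box MTF dips to (numerically) zero; the coded MTF stays above it,
# so inverse filtering remains stable at every frequency.
print(mtf_box.min(), mtf_coded.min())
```

The minimum of the box MTF is at machine-precision zero, while the coded kernel keeps every frequency above zero, which is exactly the well-posedness argument above.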
Meanwhile, the motion-deblurring technology called coded exposure photography (CEP) is effective in an environment where the speed and direction of the motion are controlled. This is because, in the related art, the code at the core of the CEP technology is generated and optimized on the basis of a single motion given in a scene.
However, in the case of a light field image, in which 3D data is acquired, a 3D image with a depth axis added to the 2D data axes is acquired even in an environment where the speed and direction are given; consequently, the motion may differ for each sub-aperture image of the light field image. In this case, the motion that will occur is more difficult to predict than in a general camera that acquires conventional 2D data, and it is therefore difficult to apply the CEP technology to a light field image.
An object of the present disclosure is to effectively deblur a motion by applying CEP to a light field image.
In order to solve the above technical problem, an embodiment of the present disclosure provides a method of deblurring motion in a light field image in a processing device including at least one processor, the method including: a first step of modeling a point spread function (PSF) for each sub-aperture image of an acquired light field image on the basis of optical parameters; and a second step of deriving an optimization code on the basis of the obtained PSF of each sub-aperture image.
According to the present disclosure, it is possible to provide a clearer image to a user of a light field camera by applying a CEP technology optimized for existing 2D imaging to a light field image in which 3D data is acquired. This makes it possible to provide clearer data in which a motion has been deblurred by using a 3D light field camera in various fields, such as medical/bio fields that require precise data.
The accompanying drawings, which are included to provide a further understanding of the present disclosure and constitute a part of the detailed description, illustrate embodiments of the present disclosure and serve to explain technical features of the present disclosure together with the description.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. However, detailed descriptions of known functions or configurations that may obscure the gist of the present disclosure are omitted in the following description and accompanying drawings. In addition, throughout the specification, ‘including’ a certain component does not mean excluding other components unless specifically stated to the contrary, but rather means that other components may be further included.
Further, terms such as first and second may be used to describe various components, but the components should not be limited by the terms. The terms may be used for the purpose of distinguishing one component from another component. For example, a first component may be named a second component, and similarly, the second component may also be named the first component without departing from the scope of the present disclosure.
The terms used in the present disclosure are only used to describe specific embodiments and are not intended to limit the present disclosure. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, it should be understood that terms such as “include” or “comprise” are intended to designate the presence of described features, numbers, steps, operations, components, parts, or combinations thereof, but do not exclude in advance a likelihood of the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Unless specifically defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as generally understood by those skilled in the art to which the present disclosure pertains. Terms such as those defined in generally used dictionaries should be construed as having meanings consistent with meanings in the context of the related technology, and should not be construed as having ideal or excessively formal meanings unless clearly defined in the present application.
The present disclosure described hereinafter relates to a method of effectively deblurring motion in a light field image using CEP and, more particularly, to a method of effectively optimizing the code in an environment where the distance between the camera sensor and an object is known.
CEP modulates a temporal domain sequence of light received by an image sensor of a camera by controlling a shutter, external lighting, or the like of the camera for an exposure time. In this case, since a motion in an image is modulated according to a sequence of the shutter or a light source, a key technology is to control the external shutter or light source so that good reversibility of the modulated motion is obtained.
Incidentally, the CEP technology has been developed for general 2D cameras and has been applied to various fields to date. However, since the technology is optimized for a single specific motion, different motions are generated in the respective sub-aperture images of a multi-viewpoint image such as a light field image, and thus there is a problem that it is difficult to generate a single code optimal for every motion; the present disclosure solves this problem.
Referring to the accompanying drawings, a deblurring method according to an embodiment of the present disclosure will now be described.
The deblurring method of the embodiment includes a step (S10) of modeling the PSF of each sub-aperture image of the acquired light field image on the basis of the optical parameters, and a step (S20) of deriving the optimization code on the basis of the obtained PSF of each sub-aperture image. Here, the optimization code is code that is used when the light field image is modulated (into a phase frequency domain) in a process of acquiring the light field image, and is a value that is equally applied to all the sub-aperture images.
Hereinafter, the respective steps will be described in detail. First, step S10 of modeling the PSF for each sub-aperture image on the basis of the optical parameters will be described with reference to the drawings.
θn: angle between the midpoint of the focal plane located on the front surface of each of the sub-aperture images S1 to S3 and the object at the object position (the green position in the drawing); that is, the tilt angle between the optical axes of the two views.
According to Equation 4, when the first and last positions of the object, the depth h, and the angle θ are known, the motion blur (PSF) of each sub-aperture image can be inferred.
In other words, the distances k1 to k3 between the green points and the red points in the drawing are the motion blurs of the respective sub-aperture images.
Each sub-aperture image is defined as sn, and the blur in the sub-aperture image is defined as kn, where n is the index of the sub-aperture image and n ∈ {1, …, N}. In the drawing, N = 3 since there are three sensors. When θ for the position of the object moving over time differs, the length of the blur may differ, and this can be expressed as shown in Equation 5.
According to Equation 5, when the object does not move, θ takes only the value θ1. On the other hand, when the object moves and θ2 is generated, motion blur occurs in each of the sub-aperture images S1 to S3, and the motion blur (or PSF) of each sub-aperture image can be obtained by using Equation 5.
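Since Equations 4 and 5 are not reproduced in this text, the following sketch only illustrates the kind of geometric model they describe. It assumes (our assumption, not the disclosure's) a simplified pinhole model in which the blur length of each view is the projected displacement f·|tan θ2 − tan θ1| between the object's angular positions at the start and end of the exposure; all numeric values are hypothetical:

```python
import math

def blur_length(f_mm, theta1_deg, theta2_deg, pixel_pitch_mm):
    """Projected motion blur (in pixels) for one sub-aperture view.

    f_mm: focal length; theta1/theta2: the object's angular position at
    the start/end of the exposure, measured from this view's optical
    axis. (Simplified pinhole geometry assumed for illustration.)
    """
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    disp_mm = f_mm * abs(math.tan(t2) - math.tan(t1))
    return disp_mm / pixel_pitch_mm

# Three sub-aperture views see the same motion under different angles,
# so their blur lengths k1..k3 differ (hypothetical angle pairs).
for n, (a1, a2) in enumerate([(0.00, 0.10), (0.05, 0.17), (0.10, 0.25)], start=1):
    k = blur_length(f_mm=50, theta1_deg=a1, theta2_deg=a2, pixel_pitch_mm=0.005)
    print(f"k{n} = {k:.1f} px")
```

This reflects the point of Equation 5: a stationary object (θ2 = θ1) produces zero blur, while the same motion yields a different kn in each view.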
Hereinafter, a step (S20) of deriving the optimization code through an argmax function on the basis of the obtained PSF of each sub-aperture image in the deblurring method of an embodiment will be described.
In the first step (S10), it has been described that, when the optical parameters are known, the PSF of each sub-aperture image can be obtained from Equations 4 and 5.
Thus, since PSF of each of the sub-aperture images S1 to S3 can be modeled in advance, an algorithm for generating an optimal code for all the sub-aperture images can be precisely applied. Optimization of the code is performed through reversibility analysis of PSF modulated with the code, and a code optimization algorithm for a large number of PSFs precisely predicted by using Equations 4 and 5 above is shown in Equation 6 below.
Here, U denotes the code to be optimized, F denotes the discrete Fourier transform (DFT), g denotes the PSF, and gncep is the PSF modulated with U, that is, the coded PSF of the motion blur of the n-th sub-aperture image. As described in the first step (S10), g can be precisely modeled by using Equation 5.
Meanwhile, the core of CEP is to modulate the exposure pattern of the shutter according to the code during the exposure time, as schematically illustrated in the drawings.
In the present disclosure, this is taken into account in deriving the optimization code through the argmax function. More specifically, code optimization is performed iteratively over a code candidate group (U ∈ Q). In Equation 6, for each sub-aperture image (SAI), the minimum value of the MTF (in the spatial frequency domain) of gncep modulated with U is taken, and these minima are summed and divided by N to obtain an average. The argmax function thus selects the code U that maximizes the average of the minimum MTF values of the motions modulated with U.
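The search described by Equation 6 can be sketched as follows (an illustrative sketch under our own assumptions: the code length, candidate count, and the mapping from temporal code to spatial coded PSF are hypothetical choices, not values from the disclosure):

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_psf(code, blur_len):
    # Map the temporal exposure code onto the spatial blur extent:
    # for linear motion, the coded blur kernel follows the code
    # resampled to blur_len taps (simplifying assumption).
    idx = np.floor(np.linspace(0, len(code), blur_len, endpoint=False)).astype(int)
    psf = code[idx].astype(float)
    s = psf.sum()
    return psf / s if s > 0 else psf

def score(code, blur_lengths, n_freq=64):
    # Equation-6-style score: average over sub-aperture images of the
    # minimum MTF magnitude of the coded PSF.
    mins = []
    for k in blur_lengths:
        mtf = np.abs(np.fft.fft(coded_psf(code, k), n_freq))
        mins.append(mtf.min())
    return float(np.mean(mins))

def optimize_code(blur_lengths, code_len=32, n_candidates=2000):
    # argmax over a random candidate group Q of binary codes.
    best, best_s = None, -1.0
    for _ in range(n_candidates):
        code = rng.integers(0, 2, code_len)
        if code.sum() == 0:
            continue  # a fully closed shutter captures nothing
        s = score(code, blur_lengths)
        if s > best_s:
            best, best_s = code, s
    return best, best_s

# Blur lengths k1..k3 of three sub-aperture images (hypothetical).
best_code, best_score = optimize_code([7, 9, 11])
print(best_score)
```

Because the score penalizes any code whose coded PSF has a near-zero MTF value in any view, the selected code keeps the deconvolution well-conditioned for all sub-aperture images simultaneously, which is the point of optimizing a single shared code.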
A memory 810 is configured to store a series of programs that implement the method of deblurring the motion in the light field image described above. The deblurring method of the embodiment described above is programmed in various computer-readable languages and stored in the memory 810.
A processor 820 is configured to execute the program for deblurring a motion by applying CEP in the light field image using the program stored on the memory 810. Here, the program executed by the processor 820 includes instructions for modeling the PSF for each sub-aperture image of the acquired light field image on the basis of the optical parameters, and deriving the optimization code through the argmax function on the basis of the obtained PSF of each sub-aperture image.
Meanwhile, the embodiments of the present disclosure can be implemented as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all types of recording devices that store data that can be read by a computer system.
Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device. Further, the computer-readable recording medium can be distributed to computer systems connected to a network, and computer-readable code can be stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present disclosure can be easily inferred by programmers in the technical field to which the present disclosure pertains.
The present disclosure has been described focusing on various embodiments. Those skilled in the art will understand that the present disclosure can be implemented in a modified form without departing from the essential characteristics of the present disclosure. Therefore, the disclosed embodiments should be considered from an illustrative perspective rather than a restrictive perspective. The scope of the present disclosure is indicated in the claims rather than the foregoing description, and all differences within the equivalent scope should be construed as being included in the present disclosure.
Number | Date | Country | Kind
---|---|---|---
10-2023-0018106 | Feb 2023 | KR | national