METHOD OF DEBLURRING MOTION IN LIGHT FIELD IMAGE AND DEVICE THEREFOR

Information

  • Patent Application
  • Publication Number: 20240273687
  • Date Filed: February 09, 2024
  • Date Published: August 15, 2024
Abstract
A deblurring method of an embodiment is a method for deblurring a motion in a light field image in a processing device including at least one processor, and includes a first step of modeling PSF for a sub-aperture image of an acquired light field image on the basis of optical parameters, and a second step of deriving an optimization code on the basis of the obtained PSF of each sub-aperture image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Korean Patent Application No. 10-2023-0018106 filed on Feb. 10, 2023, which is incorporated herein by reference for all purposes as if fully set forth herein.


BACKGROUND
Field

The present disclosure relates to a method of deblurring a motion in a light field image using coded exposure photography (CEP) based on computational photography to restore a light field (LF) image.


Related Art

Coded exposure photography (CEP), also known as a flutter shutter, is a method of effectively reducing motion blur in a photograph.


A theoretical background of effectiveness of the CEP is as follows.


A captured image b(x,y) in the spatial domain is defined as shown in Equation 1 below.

b(x, y) = h(x, y) * s(x, y) + η(x, y)   [Equation 1]







When this is converted into the frequency domain, Equation 2 below is obtained.

B(u, v) = H(u, v) S(u, v) + N(u, v)   [Equation 2]







Image deblurring based on this is defined as shown in Equation 3 below.

ŝ = F⁻¹{ S(u, v) + N(u, v) / H(u, v) }   [Equation 3]







Meanings of respective symbols in Equations 1 to 3 are as follows.

  • b(x, y): Captured image (B(u, v) in the frequency domain)
  • η(x, y): Noise (N(u, v))
  • h(x, y): Point spread function (PSF) (H(u, v))
  • F: Discrete Fourier transform (DFT)
  • s(x, y): Latent image (S(u, v))
  • F⁻¹: Inverse discrete Fourier transform (IDFT)
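As an illustration of Equations 1 to 3, the following sketch blurs a synthetic latent image with a motion PSF via the DFT and then inverts the blur in the frequency domain. The scene, kernel size, noise level, and the small stabilizing epsilon are assumptions added for the example, not part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent image s(x, y): a simple synthetic scene (any 2-D array would do).
s = np.zeros((64, 64))
s[24:40, 24:40] = 1.0

# PSF h(x, y): horizontal motion blur of length 9, embedded full-size.
h = np.zeros_like(s)
h[0, :9] = 1.0 / 9.0

# Equation 1: b = h * s + eta (circular convolution computed via the DFT).
H = np.fft.fft2(h)
S = np.fft.fft2(s)
eta = 1e-6 * rng.standard_normal(s.shape)
b = np.real(np.fft.ifft2(H * S)) + eta

# Equation 3: s_hat = IDFT{ B / H } = IDFT{ S + N / H }.
# A small epsilon guards the division where |H| is tiny -- exactly the
# failure mode the text attributes to a constant-exposure shutter.
B = np.fft.fft2(b)
eps = 1e-3
s_hat = np.real(np.fft.ifft2(B / (H + eps)))

print(float(np.abs(s - s_hat).mean()))  # small reconstruction error
```

With the mild blur and low noise assumed here the inverse filter recovers the latent image closely; the next paragraphs explain why this breaks down when H contains zeros.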








Incidentally, as schematically illustrated in FIG. 1, when the exposure time of the shutter is kept constant according to the traditional scheme and the DFT is performed, the PSF (H(u,v)) in the spatial frequency domain takes values of "0", and thus the noise N(u,v) is amplified in the deblurring process.


On the other hand, since the PSF (H(u,v)) does not take the value "0" in the spatial frequency domain when CEP is applied, as schematically illustrated in FIG. 2, the noise is not amplified, unlike in the traditional scheme, and as a result a motion can be deblurred effectively.
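The contrast drawn between FIGS. 1 and 2 can be checked numerically: the DFT magnitude (MTF) of a constant exposure contains exact zeros, while a fluttered exposure of the same duration keeps its minimum bounded away from zero. The particular bit pattern below is an arbitrary illustration, not a code produced by the disclosure.

```python
import numpy as np

N = 64  # DFT length (1-D temporal PSF, for illustration)

# Constant ("flat") exposure: shutter open for 16 consecutive time slots.
flat = np.zeros(N)
flat[:16] = 1.0

# A fluttered (coded) exposure over the same 16 slots, in the spirit of
# the pseudo-random chop patterns used in coded exposure photography.
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1], dtype=float)
coded = np.zeros(N)
coded[:16] = code

mtf_flat = np.abs(np.fft.fft(flat))
mtf_coded = np.abs(np.fft.fft(coded))

# The flat shutter's MTF dips to zero, so inversion amplifies noise there;
# the coded shutter keeps its minimum strictly positive.
print(mtf_flat.min(), mtf_coded.min())
```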


Meanwhile, the motion deblurring technology called coded exposure photography (CEP) is effective in an environment where the speed and direction are controlled. This is because, in the related art, the code at the core of the CEP technology is optimized and generated on the basis of a single motion given in a scene.


However, even in an environment where the speed and direction are given, a light field image acquires 3D data: a depth axis is added to the 2D data axes, and thus the motion may differ for each sub-aperture image of the light field image. In this case, it is more difficult to predict the motion that will occur than with a general camera that acquires 2D data, and it is therefore difficult to apply the CEP technology to a light field image.


SUMMARY

An object of the present disclosure is to effectively deblur a motion by applying CEP to a light field image.


In order to solve the above technical problem, an embodiment of the present disclosure is a method for deblurring a motion in a light field image in a processing device including at least one processor, the deblurring method including: a first step of modeling PSF for a sub-aperture image of an acquired light field image on the basis of optical parameters; and a second step of deriving an optimization code on the basis of the obtained PSF of each sub-aperture image.


According to the present disclosure, it is possible to provide a clearer image to a user of a light field camera by applying a CEP technology optimized for existing 2D imaging to a light field image in which 3D data is acquired. This makes it possible to provide clearer data in which a motion has been deblurred by using a 3D light field camera in various fields, such as medical/bio fields that require precise data.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding of the present disclosure and constitute a part of the detailed description, illustrate embodiments of the present disclosure and serve to explain technical features of the present disclosure together with the description.



FIGS. 1 and 2 are schematic diagrams illustrating a deblurring technology.



FIG. 3 is a conceptual diagram illustrating the present disclosure.



FIG. 4 is a diagram illustrating a sequence of a method of deblurring a motion in a light field image according to an embodiment.



FIGS. 5 and 6 are schematic diagrams illustrating a method of modeling PSF of each sub-aperture image in the light field image.



FIG. 7 is a diagram illustrating a configuration of a computing device for deblurring a motion in the light field image according to an embodiment of the present disclosure.





DESCRIPTION OF EXEMPLARY EMBODIMENTS

Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. However, detailed descriptions of known functions or configurations that may obscure the gist of the present disclosure are omitted in the following description and accompanying drawings. In addition, throughout the specification, ‘including’ a certain component does not mean excluding other components unless specifically stated to the contrary, but rather means that other components may be further included.


Further, terms such as first and second may be used to describe various components, but the components should not be limited by the terms. The terms may be used for the purpose of distinguishing one component from another component. For example, a first component may be named a second component, and similarly, the second component may also be named the first component without departing from the scope of the present disclosure.


The terms used in the present disclosure are only used to describe specific embodiments and are not intended to limit the present disclosure. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, it should be understood that terms such as “include” or “comprise” are intended to designate the presence of described features, numbers, steps, operations, components, parts, or combinations thereof, but do not exclude in advance a likelihood of the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.


Unless specifically defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as generally understood by those skilled in the art to which the present disclosure pertains. Terms such as those defined in generally used dictionaries should be construed as having meanings consistent with meanings in the context of the related technology, and should not be construed as having ideal or excessively formal meanings unless clearly defined in the present application.


The present disclosure described hereinafter relates to a method of effectively deblurring a motion in a light field image using CEP and, more particularly, to a method of effectively optimizing the code in an environment where the distance between the camera sensor and the object is known.


CEP modulates the temporal sequence of light received by the image sensor of a camera by controlling the camera's shutter, external lighting, or the like during the exposure time. Since a motion in the image is modulated according to the sequence of the shutter or light source, the key is to control the external shutter or light source so that the modulated motion is well invertible.
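This temporal modulation can be sketched as follows: the sensor integrates a moving scene only during the slots in which the shutter code is open, so the shutter sequence is imprinted on the motion blur. The one-dimensional scene, constant one-pixel-per-slot motion, and flutter pattern are illustrative assumptions.

```python
import numpy as np

# A 1-D "scene" moving 1 pixel per time slot during the exposure
# (constant velocity, as in the controlled environments discussed above).
scene = np.zeros(128)
scene[40:60] = 1.0

# Illustrative flutter pattern: 1 = shutter open, 0 = shutter closed.
shutter = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1])

# The sensor accumulates light only while the shutter bit is 1, so the
# effective motion PSF equals the shutter code laid along the motion path.
capture = np.zeros_like(scene)
for t, bit in enumerate(shutter):
    capture += bit * np.roll(scene, t)
capture /= shutter.sum()

print(capture.max())
```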


Incidentally, the CEP technology was developed for general 2D cameras and has since been applied in various fields. However, because the technology is optimized for a specific motion, a different motion arises in each sub-aperture image of a multi-viewpoint image such as a light field image, and it is therefore difficult to generate a single code that is optimal for every motion. The present disclosure solves this problem.



FIG. 3 is a conceptual diagram illustrating the present disclosure, and FIG. 4 is a diagram illustrating a sequence of a method for deblurring a motion in a light field image according to an embodiment.


Referring to FIG. 3, in the present disclosure, for motion deblurring, the light field camera derives the PSF of the sub-aperture images (SAIs) of the acquired light field image on the basis of prior information on the motion of the scene (for example, the PSF of a previously acquired frame) and the optical parameters of the light field camera, and deblurs the acquired light field image with an optimization code derived on the basis of the derived sub-aperture PSFs to obtain a clear image.


The deblurring method of the embodiment includes a step (S10) of modeling the PSF of each sub-aperture image of the acquired light field image on the basis of the optical parameters, and a step (S20) of deriving the optimization code on the basis of the obtained PSF of each sub-aperture image. Here, the optimization code is code that is used when the light field image is modulated (into a phase frequency domain) in a process of acquiring the light field image, and is a value that is equally applied to all the sub-aperture images.


Hereinafter, the respective steps will be described in detail, and step (S10) of modeling the PSF for each sub-aperture image on the basis of the optical parameters will be first described with reference to FIGS. 5 and 6.


In FIGS. 5 and 6, it is assumed that the light field image has three sub-aperture images S1 to S3, and that the motion of the object involves no movement in the y-axis direction on the three-dimensional axes x, y, and h.


In FIGS. 5 and 6, to more easily describe the movement of the same object, a first position (x1, h1), where the object starts moving, is indicated in green, and a second position (x2, h2), where the object is located at the last moment of the exposure time, is indicated in red.


In FIG. 6, a disparity δ between sub-aperture images can be expressed by Equation 4 below, which is presented in the paper “Baseline and triangulation geometry in a standard plenoptic camera” published on Jan. 20, 2021.









δ = -f·tan(θ) + (f/h)·W   [Equation 4]







    • θ: Angle between the midpoint of the focal plane located at the front surface of each of the sub-aperture images S1 to S3 and the object at the green object position; Tilt angle of the two views' optical axes

    • f: Distance between the focal plane and a sensor; Focal length
    • h: Distance between the focal plane and the object; Depth of the object
    • W: Distance between midpoints of adjacent sub-aperture images; Interval between adjacent sub-aperture images
    • δ: Disparity with a position of a previous frame for the same pixel in the adjacent sub-aperture image


According to Equation 4, when the first and last positions of the object, the depth h, and the angle θ are known, the motion blur (PSF) of each sub-aperture image can be inferred.


In other words, the distances k1 to k3 between the green and red points in FIG. 6 correspond to the motion blur. Note that, since it is assumed in advance that there is no motion along the y-axis, the motion is parallel to the x-axis of the lenslet image; this assumption is made only to keep the description simple, and the method can be extended to 3D motion.


Each sub-aperture image is defined as sn, and the blur in that sub-aperture image as kn, where n is the index of the sub-aperture image and n∈[1, . . . , N]. In the description of the drawing, N=3 since there are three sub-aperture images. When θ differs according to the position of the object moving over time, the length of the blur may differ, which can be expressed as shown in Equation 5.










kn = k1 + (n − 1)(δ1 − δ2)   [Equation 5]







According to Equation 5, when the object does not move, only θ1 exists as θ. On the other hand, when the object moves and θ2 arises, motion blur occurs in each of the sub-aperture images S1 to S3, and the motion blur (or PSF) of each sub-aperture image can be obtained by using Equation 5.
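A minimal numeric sketch of Equations 4 and 5, assuming the disparity relation δ = -f·tan(θ) + (f/h)·W; the focal length, viewpoint interval, depths, angles, and first-view blur length k1 below are all hypothetical values chosen for illustration.

```python
import math

# Illustrative optical parameters (all values hypothetical):
f = 0.05   # focal length
W = 0.001  # interval between adjacent sub-aperture viewpoints
N = 3      # number of sub-aperture images

def disparity(theta, h):
    """Equation 4 (as assumed): delta = -f*tan(theta) + (f/h)*W."""
    return -f * math.tan(theta) + (f / h) * W

# Object at the first (green) and last (red) positions of the exposure.
delta1 = disparity(theta=0.00, h=1.0)  # start of exposure
delta2 = disparity(theta=0.01, h=1.2)  # end of exposure (moved in x and depth)

k1 = 10.0  # blur length observed in the first sub-aperture image (assumed)

# Equation 5: k_n = k_1 + (n - 1) * (delta_1 - delta_2)
k = [k1 + (n - 1) * (delta1 - delta2) for n in range(1, N + 1)]
print(k)
```

The blur lengths k1 to k3 differ linearly with the view index, which is exactly why a single code must be optimized jointly over all sub-aperture PSFs in the next step.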


Hereinafter, a step (S20) of deriving the optimization code through an argmax function on the basis of the obtained PSF of each sub-aperture image in the deblurring method of an embodiment will be described.


It has been described above that, when the optical parameters are known, the PSF of each sub-aperture image can be obtained from Equations 4 and 5 in the first step (S10).


Thus, since PSF of each of the sub-aperture images S1 to S3 can be modeled in advance, an algorithm for generating an optimal code for all the sub-aperture images can be precisely applied. Optimization of the code is performed through reversibility analysis of PSF modulated with the code, and a code optimization algorithm for a large number of PSFs precisely predicted by using Equations 4 and 5 above is shown in Equation 6 below.









arg max_U ( Σ_{n=1}^{N} min |F(g_n^CEP)| / N )   [Equation 6]







Here, U denotes the code to be optimized, F denotes the discrete Fourier transform (DFT), g denotes the PSF, and g_n^CEP is the PSF modulated with U, that is, the coded PSF of the motion blur of the n-th sub-aperture image. As described in the first step (S10), g can be precisely modeled by using Equation 5.


Meanwhile, the core of CEP is to modulate the exposure pattern of the shutter. As illustrated in FIGS. 1 and 2, the best CEP code is the code that maximizes the minimum value in the spatial frequency domain (the modulation transfer function, MTF, |F(h)|).


In the present disclosure, this is taken into account in deriving the optimization code through the argmax function. More specifically, code optimization is performed iteratively over a code candidate group (U∈Q). In Equation 6, for each sub-aperture image (SAI), the minimum value of the MTF graph (spatial frequency domain) of g_n^CEP modulated with U is taken, and these minima are summed and divided by N to obtain an average. The argmax function maximizes this average of the minima of the MTFs of the motions modulated with U.
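The iterative optimization over the code candidate group can be sketched as an exhaustive search over short binary codes, scoring each candidate by the Equation 6 objective: the mean over sub-aperture images of the minimum DFT magnitude of the coded PSF. The box-blur PSFs, the elementwise modulation of each PSF by the code, and the minimum-open-time constraint on candidates are assumptions made for the example; the disclosure does not specify these details.

```python
import itertools
import numpy as np

L = 8        # code length (kept tiny so exhaustive search is feasible)
N_FFT = 32   # DFT length

# Hypothetical per-view motion PSFs g_n (Equation 5 would supply the real
# blur lengths); here, box blurs of slightly different lengths.
def psf(length):
    g = np.zeros(N_FFT)
    g[:length] = 1.0 / length
    return g

psfs = [psf(k) for k in (4, 5, 6)]  # three sub-aperture images

def score(code):
    """Equation 6 objective: mean over views of min |F(g_n^CEP)|."""
    mins = []
    for g in psfs:
        coded = np.zeros(N_FFT)
        # Modulate the PSF with the exposure code (a simple elementwise model).
        coded[:L] = g[:L] * code
        mins.append(np.abs(np.fft.fft(coded)).min())
    return sum(mins) / len(mins)

# argmax over the candidate group: all binary codes that keep the shutter
# open at least half the time (an assumed light-throughput constraint).
candidates = [np.array(c, float) for c in itertools.product([0, 1], repeat=L)
              if sum(c) >= L // 2]
best = max(candidates, key=score)
print(best, score(best))
```

A single code selected this way is, by construction, a compromise that keeps every view's coded PSF invertible, rather than being optimal for any one view's motion.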



FIG. 7 is a block diagram illustrating a processing device 800 that deblurs a motion in a light field image; it reconstructs, as a hardware configuration, the series of processes of deblurring a motion by applying CEP to the light field image described above. The processing device 800 may be a light field camera or a light field microscope. Accordingly, only an overview of the functions and operations of the respective components is given here in order to avoid duplication of description.


A memory 810 is configured to store a series of programs that implement the method of deblurring the motion in the light field image described above. The deblurring method of the embodiment described above is programmed in various computer-readable languages and stored in the memory 810.


A processor 820 is configured to execute the program stored in the memory 810 to deblur a motion by applying CEP to the light field image. Here, the program executed by the processor 820 includes instructions for modeling the PSF for each sub-aperture image of the acquired light field image on the basis of the optical parameters, and for deriving the optimization code through the argmax function on the basis of the obtained PSF of each sub-aperture image.


Meanwhile, the embodiments of the present disclosure can be implemented as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all types of recording devices that store data that can be read by a computer system.


Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device. Further, the computer-readable recording medium can be distributed to computer systems connected to a network, and the computer-readable code can be stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present disclosure can be easily inferred by programmers in the technical field to which the present disclosure pertains.


The present disclosure has been described focusing on various embodiments. Those skilled in the art will understand that the present disclosure can be implemented in a modified form without departing from the essential characteristics of the present disclosure. Therefore, the disclosed embodiments should be considered from an illustrative perspective rather than a restrictive perspective. The scope of the present disclosure is indicated in the claims rather than the foregoing description, and all differences within the equivalent scope should be construed as being included in the present disclosure.

Claims
  • 1. A method for deblurring a motion in a light field image in a processing device including at least one processor, the deblurring method comprising: a first step of modeling PSF for a sub-aperture image of an acquired light field image on the basis of optical parameters; and a second step of deriving an optimization code on the basis of the obtained PSF of each sub-aperture image.
  • 2. The deblurring method of claim 1, wherein the first step includes expressing a disparity δ between the sub-aperture images as shown in Equation 1 below.
  • 3. The deblurring method of claim 2, wherein a length Kn of motion blur in the sub-aperture image is expressed as Equation 2 below.
  • 4. The deblurring method of claim 1, wherein, in the second step, the optimization code is code for maximizing an average value obtained by dividing a minimum value in an MTF graph obtained through PSF of each sub-aperture image modeled in the first step by the number of sub-aperture images.
  • 5. A computer-readable recording medium having a program recorded thereon, the program causing a computer to execute the deblurring method according to claim 1.
  • 6. A computing device comprising: a memory configured to record a program for deblurring a motion in a light field image; and a processor configured to execute the program, wherein the program executed by the processor models PSF for sub-aperture images of an acquired light field image on the basis of optical parameters, and derives an optimization code on the basis of obtained PSF of each sub-aperture image.
  • 7. The computing device of claim 6, wherein a disparity δ between the sub-aperture images is expressed as shown in Equation 3 below.
  • 8. The computing device of claim 7, wherein a length (Kn) of motion blur in the sub-aperture image is expressed as shown in Equation 4 below.
  • 9. The computing device of claim 6, wherein the optimization code is code for maximizing an average value obtained by dividing a minimum value in an MTF graph obtained through PSF of each sub-aperture image modeled in the first step by the number of sub-aperture images.
Priority Claims (1)
Number Date Country Kind
10-2023-0018106 Feb 2023 KR national