Method and device for coding a digital hologram sequence

Information

  • Patent Grant
  • Patent Number
    12,132,933
  • Date Filed
    Wednesday, June 24, 2020
  • Date Issued
    Tuesday, October 29, 2024
Abstract
Disclosed is a method and a device for coding a sequence including first and second digital holograms representing respective scenes, the digital holograms being represented by a set of wavelets each defined by a multiplet of coordinates in multidimensional space. The first and second holograms are represented by first and second coefficients respectively associated with wavelets. The coding method includes the following steps: for each second coefficient, determining a remainder by a difference between the second coefficient concerned, associated with a first wavelet defined by a given multiplet, and the first coefficient associated with a second wavelet defined by a multiplet having as its image the given multiplet by a transform in the multidimensional space; and coding the determined remainders. The transform is determined by analysis of variation between the first scene represented by the first digital hologram and the second scene represented by the second digital hologram.
Description

This application is the U.S. national phase of International Application No. PCT/EP2020/067744 filed 24 Jun. 2020, which designated the U.S. and claims priority to FR Patent Application No. 1907555 filed 5 Jul. 2019, the entire contents of each of which are hereby incorporated by reference.


BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to the technical field of digital holography.


More particularly, it relates to a method and a device for encoding a digital hologram sequence.


Description of the Related Art

It has already been proposed, for example in the article “View-dependent compression of digital hologram based on matching pursuit”, by Anas El Rhammad, Patrick Gioia, Antonin Gilles, Marco Cagnazzo and Beatrice Pesquet-Popescu in Optics, Photonics, and Digital Technologies for Imaging Applications V. International Society for Optics and Photonics, 2018, vol. 10679, p. 106790L, to represent a digital hologram by means of a set of wavelets (for example Gabor wavelets).


Each wavelet is defined by several parameters characteristic of the wavelet concerned. The digital hologram is then represented by a set of coefficients respectively associated with the different wavelets.


The digital hologram can thus be easily reconstructed by summing the different wavelets, each time weighted by the associated coefficient.
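By way of illustration, this weighted-sum reconstruction can be sketched as follows in Python (a minimal sketch; the dictionary representation of the coefficients and the `wavelet` evaluator are illustrative assumptions, not the notation of the cited article):

```python
import numpy as np

def reconstruct_hologram(coefficients, wavelet, grid):
    """Rebuild a hologram as the coefficient-weighted sum of its wavelets.

    `coefficients` maps a wavelet parameter multiplet to its (real) coefficient,
    `wavelet(params, grid)` is a hypothetical evaluator returning the wavelet
    sampled on `grid`, an (H, W, 2) array of positions in the hologram plane."""
    field = np.zeros(grid.shape[:2], dtype=complex)
    for params, c in coefficients.items():
        field += c * wavelet(params, grid)   # each wavelet weighted by its coefficient
    return field
```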


SUMMARY OF THE INVENTION

In this context, the present invention proposes a method for encoding a sequence comprising at least a first digital hologram representing a first scene and a second digital hologram representing a second scene, the first digital hologram and the second digital hologram being represented by means of a set of wavelets each defined by a multiplet of coordinates in a multidimensional space,

    • the first hologram being represented by a set of first coefficients respectively associated with at least certain of the wavelets of said set of wavelets and the second hologram being represented by a set of second coefficients respectively associated with at least certain of the wavelets of said set of wavelets,
    • the encoding method comprising the following steps:
    • for each of a plurality of second coefficients, determining a residual by difference between the second coefficient concerned, associated with a first wavelet defined by a given multiplet, and the first coefficient associated with a second wavelet defined by a multiplet having for image the given multiplet by a transform in the multidimensional space;
    • encoding the determined residuals,
    • wherein the transform is determined by analysis of variation between the first scene represented by the first digital hologram and the second scene represented by the second digital hologram.


The transform makes it possible to assign at least certain of the first coefficients to wavelets other than those to which these first coefficients are assigned in the first hologram.


This transform thus makes it possible to construct (at least in part) a predicted hologram, which can be subtracted from the second hologram (coefficient by coefficient) in order to obtain residuals of lower value, whose encoding is more efficient.


Moreover, due to the fact that the transform is determined by analysis of variation between the first scene represented by the first digital hologram and the second scene represented by the second digital hologram, the predicted hologram will be as close as possible to the second hologram. This variation can correspond in practice to the movement of an object between the first scene and the second scene.


The encoding method can further comprise, for at least one second coefficient outside of said plurality of second coefficients, a step of determining a residual by difference between this second coefficient, associated with a third wavelet defined by another given multiplet, and the first coefficient associated with a fourth wavelet defined by another multiplet having for image the other given multiplet by another transform in the multidimensional space.


Another transform is thus used for other second coefficients, which makes it possible to refine the prediction of the second hologram by means of the first hologram.


This other transform is for example determined by analysis of another variation between the first scene and the second scene. This other variation can correspond in practice to the movement of another object (different from the above-mentioned object) between the first scene and the second scene.


The encoding method can moreover comprise the following steps:

    • distributing at least a part of the wavelets into different groups of wavelets respectively associated with different parts of the first scene or the second scene;
    • determining a transform of the multidimensional space for each group of wavelets;
    • for each of the second coefficients of a given group of wavelets, determining a residual by difference between the second coefficient concerned, associated with a fifth wavelet defined by a given multiplet, and the first coefficient associated with a sixth wavelet defined by a multiplet having for image this given multiplet by the transform associated with the given group of wavelets.


The above-mentioned transform can be determined in practice as a function of a movement, between the first scene and the second scene, of a set of connected points (set of points called “connected component” in the following description).


The transform can be determined, for example, on the basis of three-dimensional representations of the first scene and of the second scene.


According to another possible embodiment, the encoding method can comprise the following steps:

    • constructing a first depth map by means of the first digital hologram;
    • constructing a second depth map by means of the second digital hologram;
    • determining the transform on the basis of the first depth map and the second depth map.


According to a possible embodiment, the depth being defined in a given direction (here, a given direction of the three-dimensional space containing the scene represented by the first digital hologram), the step of constructing the first depth map (and/or the step of constructing the second depth map) can comprise the following steps:

    • reconstructing, by means of the first digital hologram (or, depending on the case, by means of the second digital hologram), the light field at a plurality of points;
    • for each of a plurality of depths, segmenting the points associated with the depth concerned into a plurality of segments, and determining values of a sharpness metric respectively associated with said segments on the basis of the light field reconstructed on the segment concerned;
    • for each element of the first (or second, depending on the case) depth map, determining the depth for which the sharpness metric is maximum among a set of segments aligned along said given direction and respectively associated with the different depths of the plurality of depths (the so-determined depth can thus be associated with the element concerned of the first depth map or, depending on the case, of the second depth map).


As described hereinafter, the coordinates of said multidimensional space can represent respectively a parameter representative of a first spatial coordinate in the plane of the hologram, a parameter representative of a second spatial coordinate in the plane of the hologram, a spatial frequency dilation parameter and an orientation parameter.


The invention also proposes a device for encoding a sequence comprising at least a first digital hologram representing a first scene and a second digital hologram representing a second scene, the first digital hologram and the second digital hologram being represented by means of a set of wavelets each defined by a multiplet of coordinates in a multidimensional space, the encoding device comprising:

    • a unit for storing a set of first coefficients, respectively associated with at least certain of the wavelets of said set of wavelets, and a set of second coefficients, respectively associated with at least certain of the wavelets of said set of wavelets, the set of first coefficients representing the first digital hologram and the set of second coefficients representing the second digital hologram;
    • a unit for determining, for each of a plurality of second coefficients, a residual by difference between the second coefficient concerned, associated with a first wavelet defined by a given multiplet, and the first coefficient associated with a second wavelet defined by a multiplet having for image the given multiplet by transform in the multidimensional space;
    • a unit for encoding the determined residuals,
    • wherein the determination unit is designed to determine the transform by analysis of variation between the first scene represented by the first digital hologram and the second scene represented by the second digital hologram.


The determination unit and the encoding unit can for example be implemented in practice by means of a processor of the encoding device, this processor being programmed (for example, by means of computer program instructions stored in a memory of the encoding device) to implement, respectively, the step of determining the residuals and the step of encoding the residuals.


The invention moreover proposes, independently, a method for distributing coefficients respectively associated with wavelets into a plurality of sets of coefficients, the coefficients associated with the wavelets representing a digital hologram intended to reproduce a scene comprising a plurality of parts, the method comprising the following steps implemented for each of a plurality of said coefficients:

    • determining a straight line corresponding to the light ray represented by the wavelet associated with the coefficient concerned;
    • assigning the coefficient concerned to a set associated with the part of the scene passed through by the determined straight line.


When each wavelet is defined by a multiplet of coordinates in a multidimensional space, the straight line can be determined using the coordinates of this multiplet.


For example, when these coordinates (defining the wavelet) comprise a first spatial coordinate in the plane of the hologram, a second spatial coordinate in the plane of the hologram, a spatial frequency dilation parameter and an orientation parameter, the orientation of the straight line corresponding to the light ray represented by the wavelet is determined as a function of the dilation parameter and the orientation parameter and/or the position of the straight line corresponding to the light ray represented by the wavelet is determined as a function of these first and second spatial coordinates.


The invention finally proposes, here again independently, a method for constructing a depth map related to a scene represented by a digital hologram, the depth being defined in a given direction of space (here, the three-dimensional space containing the scene), the method comprising the following steps:

    • reconstructing, by means of the digital hologram, the light field at a plurality of points in space;
    • for each of a plurality of depths, segmenting the points associated with the depth concerned into a plurality of segments, and determining values of a sharpness metric respectively associated with said segments on the basis of the light field reconstructed on the segment concerned (that is to say on the basis of the reconstructed light field values relative to the points of the segment concerned);
    • for each element of the depth map, determining the depth for which the sharpness metric is maximum among a set of segments aligned along said given direction and respectively associated with the different depths of the plurality of depths, and associating the so-determined depth with this element.


When the digital hologram is represented by coefficients respectively associated with wavelets, the light field reconstruction is made by means of these coefficients.


Of course, the different features, alternatives and embodiments of the invention can be associated with each other according to various combinations, insofar as they are not mutually incompatible or exclusive.





BRIEF DESCRIPTION OF THE DRAWINGS

Moreover, various other features of the invention will be apparent from the appended description made with reference to the drawings that illustrate non-limitative embodiments of the invention, and wherein:



FIG. 1 illustrates an encoding device according to an exemplary embodiment of the invention;



FIG. 2 illustrates steps of an encoding method in accordance with the teachings of the invention;



FIG. 3 illustrates the relative positioning of a digital hologram and of the scene that is represented by this digital hologram;



FIG. 4 schematically shows the calculation of the residuals during the encoding; and



FIG. 5 illustrates steps of a method for constructing a depth map from a digital hologram.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

The encoding device 1 of FIG. 1 comprises a processor 2 and a storage device 4 (such as a hard drive or a memory). The encoding device 1 can also comprise a communication circuit 6 allowing the processor 2 to exchange data with an external electronic device (not shown).


The storage device 4 stores at least two digital holograms H1, H2 (each represented by a set of coefficients, as explained hereinafter) belonging to a digital hologram sequence (this sequence being intended to reproduce the evolution over time of a given three-dimensional scene).


In the example described herein, the storage device 4 further stores three-dimensional representations S1, S2 of the three-dimensional scenes represented by the digital holograms H1, H2, respectively. However, as an alternative, no three-dimensional representation of the scene may be present within the encoding device 1. This is in particular the case when the digital holograms H1, H2 are received by the encoding device 1 via the communication circuit 6.


Indeed, the digital holograms H1, H2 can in practice be constructed (prior to the encoding method described hereinafter) within the encoding device 1 on the basis of the three-dimensional representations S1, S2 (as described, for example, in the above-mentioned article “View-dependent compression of digital hologram based on matching pursuit”), or be received from an external electronic device.


The storage device 4 also stores computer program instructions designed to implement a method as described hereinafter with reference to FIG. 2 when these instructions are executed by the processor 2.


In the remainder of the description, the context considered is that shown in FIG. 3: using a reference system (O, x, y, z), the digital holograms H1, H2 are defined in the plane of equation z=0.


The digital holograms H1, H2 are here respectively represented by two sets of real coefficients c1(k,s,X), c2(k,s,X), each coefficient c1(k,s,X), c2(k,s,X) being associated with a Gabor-Morlet wavelet Ψk,s,X defined by the parameters k, s, X, where

    • k is a parameter (integer) that defines the wavelet orientation θk, with θk=2πk/N (k varying between 0 and N−1);
    • s is a parameter (integer) that defines the spatial frequency dilation (s varying between 1 and a);
    • X is a pair of integers that define respectively the two-dimensional spatial coordinates in the plane of the digital hologram (that is to say the plane (O, x, y) in FIG. 3), with X∈[0,Nx[×[0,Ny[.


The values N, a, Nx and Ny are fixed for the representation considered.


In other words, each Gabor-Morlet wavelet Ψk,s,X is defined by a multiplet of coordinates k, s, X in a multidimensional space (here, four-dimensional).


Hereinafter, the coefficients c1(k,s,X) representing the digital hologram H1 will be called “first coefficients” and the coefficients c2(k,s,X) representing the digital hologram H2 will be called “second coefficients”.


The first and second components of X will be denoted Xx and Xy, respectively.


The digital holograms H1, H2 could thus be reconstructed as follows:







H1 = Σ_{k,s,X} c1(k,s,X) · Ψ_{k,s,X}

H2 = Σ_{k,s,X} c2(k,s,X) · Ψ_{k,s,X}










    • (the summation being made for all the integers k between 0 and N−1, for all the integers s between 1 and a and for all the pairs X of integers in [0,Nx[×[0,Ny[),

    • with Ψk,s,X the function defined by Ψk,s,X(Y) = (1/s)·Φ(Rk⁻¹[(Y−ηX)/(s·Δs)]) for Y∈R²,

    • where ηX=(Xx·Δx, Xy·Δy), Δx, Δy and Δs denote the discretization pitches of, respectively, the first spatial component in the plane of the hologram, the second spatial component in the plane of the hologram and the spatial frequency dilation, Φ(A)=exp(−|A|²/2)·exp(2iπ·Ax·f) for A∈R²,

    • where |A| and Ax respectively denote the norm (or modulus) of A and the first component thereof, exp is the exponential function (exp(ρ)=e^ρ), f is a parameter (predefined for the representation concerned) and










Rk = ( cos(2πk/N)   −sin(2πk/N)
       sin(2πk/N)    cos(2πk/N) ).
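As a concrete illustration of the above definition, a minimal NumPy sketch evaluating Ψk,s,X at a 2-D point is given below. The negative sign in the Gaussian envelope (standard for a Gabor wavelet) and the function and argument names are assumptions added on top of the text.

```python
import numpy as np

def gabor_morlet(Y, k, s, X, N, f, dx, dy, ds):
    """Evaluate the Gabor-Morlet wavelet Psi_{k,s,X} at the 2-D point Y.

    Sketch of the definition above; dx, dy, ds stand for the discretization
    pitches Δx, Δy, Δs and f is the predefined frequency parameter."""
    theta = 2.0 * np.pi * k / N                              # orientation theta_k
    R_inv = np.array([[ np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])      # R_k^{-1}, rotation by -theta_k
    eta = np.array([X[0] * dx, X[1] * dy])                   # spatial centre eta_X
    A = R_inv @ ((np.asarray(Y, dtype=float) - eta) / (s * ds))
    phi = np.exp(-0.5 * np.dot(A, A)) * np.exp(2j * np.pi * A[0] * f)
    return phi / s
```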





An example of encoding method in accordance with the invention will now be described with reference to FIG. 2. This method is aimed at a differential encoding of the digital hologram H2 on the basis of the digital hologram H1. In this differential encoding, the digital hologram H1 is used as the reference digital hologram.


The method here starts with a step E2 of segmenting the coefficients into sets of coefficients Ei respectively associated with parts Pi of the scene (which amounts to grouping the wavelets Ψk,s,X into groups of wavelets respectively associated with these parts Pi of the scene).


Each part Pi of the scene is formed by a set of points of the same region liable to have a similar movement. Such a part Pi of the scene is hereinafter called a “connected component”. In practice, it is for example an object of the scene.


In the example described herein, the connected components Pi are for example identified on the basis of the three-dimensional representation S1 of the scene (three-dimensional representation corresponding to the digital hologram H1).


As an alternative, the connected components Pi can be reconstructed from a digital hologram (here H1), for example by means of a depth map, as described hereinafter.


In step E2, for each coefficient c1(k,s,X) of the digital hologram H1, it is determined which part Pi (or connected component) of the scene is passed through by a straight line Δ (representing a light ray associated with the wavelet Ψk,s,X) passing through the point of coordinates X (in the plane of the digital hologram) and oriented along the direction vector Vk,s of coordinates:

(cos[θk]·sin[φs], sin[θk]·sin[φs], cos[φs]),

    • with φs=arcsin(λf/(s·Δs)) (where λ is the reference wavelength of the digital hologram).


The coefficient c1(k,s,X) is then placed in the set Ei associated with the part Pi passed through by this straight line Δ.


Hence, a plurality of sets Ei is constructed, each set Ei comprising coefficients c1(k,s,X) associated with wavelets Ψk,s,X that model light rays having an intersection with the part Pi associated with the set Ei concerned. In other words, each set Ei corresponds to a group of wavelets Ψk,s,X that model light rays having an intersection with the part Pi associated with the set Ei concerned.
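A possible sketch of this assignment is given below; the sampled test against axis-aligned bounding boxes of the parts Pi stands in for whatever exact geometric intersection test is used in practice, and all names are illustrative.

```python
import numpy as np

def ray_direction(k, s, N, wavelength, f, ds):
    """Direction vector V_{k,s} of the light ray modelled by the wavelet Psi_{k,s,X}."""
    theta = 2.0 * np.pi * k / N
    phi = np.arcsin(wavelength * f / (s * ds))
    return np.array([np.cos(theta) * np.sin(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(phi)])

def assign_to_part(k, s, X, parts_aabb, N, wavelength, f, dx, dy, ds,
                   z_max=1.0, n_samples=256):
    """Index of the first scene part whose bounding box is crossed by the ray, or None.

    `parts_aabb` is a list of (min_xyz, max_xyz) arrays; sampling the ray up to
    depth z_max is a crude stand-in for an exact intersection test."""
    origin = np.array([X[0] * dx, X[1] * dy, 0.0])           # point in the hologram plane
    v = ray_direction(k, s, N, wavelength, f, ds)
    for t in np.linspace(0.0, z_max / max(v[2], 1e-12), n_samples):
        p = origin + t * v
        for i, (lo, hi) in enumerate(parts_aabb):
            if np.all(p >= lo) and np.all(p <= hi):
                return i
    return None
```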


In certain embodiments (for example, when the scene contains a single object, i.e. a single connected component P1), the segmentation step E2 could be omitted. It is considered in this case hereinafter that a single set Ei of coefficients (herein the set E1) is processed.


The method of FIG. 2 continues with a step E4, in which a rigid transform Fi is determined for each connected component (or part) Pi of the scene.


This rigid transform Fi is for example determined by analyzing the movement of the connected component Pi between the scene represented by the hologram H1 and the scene represented by the hologram H2.


This movement analysis is for example made by comparing the three-dimensional representation S1 (scene represented by the digital hologram H1) and the three-dimensional representation S2 (scene represented by the digital hologram H2). On this subject, reference will be made for example to the article “A Hierarchical Method for 3D Rigid Motion Estimation”, by Srinark T., Kambhamettu C., Stone M. in Computer Vision—ACCV 2006. ACCV 2006 Lecture Notes in Computer Science, vol 3852. Springer, Berlin, Heidelberg.


As an alternative, this movement analysis could be made by comparing a first depth map derived (as explained hereinafter) from the digital hologram H1 and a second depth map derived (as explained hereinafter) from the digital hologram H2. Such depth maps make it possible to come back to the three-dimensional case mentioned above.


For each connected component Pi, the rigid transform Fi is conventionally decomposed into a translation ti=(tix, tiy, tiz) and a rotation ri that can be written (using the Euler angles) in matrix form, by means of the three following matrices:







Rxi = ( 1         0          0
        0         cos αi    −sin αi
        0         sin αi     cos αi )

Ryi = ( cos βi    0          sin βi
        0         1          0
       −sin βi    0          cos βi )

Rzi = ( cos γi   −sin γi     0
        sin γi    cos γi     0
        0         0          1 )
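For illustration, the rigid transform Fi can be applied as sketched below; the composition order Rzi·Ryi·Rxi is an assumption, since the text only gives the three factors.

```python
import numpy as np

def euler_rotation(alpha, beta, gamma):
    """Rotation r_i built from the three elementary matrices above (order assumed)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_rigid_transform(points, t, angles):
    """Apply F_i : p -> r_i p + t_i to an (N, 3) array of scene points."""
    return points @ euler_rotation(*angles).T + np.asarray(t)
```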





The method of FIG. 2 then comprises a step E6 of determining, for each set Ei of coefficients, a linear transform Ti of the space-frequency domain on the basis of the rigid transform determined at step E4 for the connected component Pi associated with the set Ei concerned.


In the example described herein, the linear transform Ti is defined as follows (on the basis of the corresponding rigid transform Fi):







Ωi = ( cos γi   −sin γi    0          0
       sin γi    cos γi    0          0
       0         0         cos γi    −sin γi
       0         0         sin γi     cos γi )

τi = ( I2    (2·tzi/λ)·I2
       0      I2 )

bi = ( txi
       tyi
       αi
       βi )

Ti: R⁴ → R⁴, w ↦ Ωi·τi·w + bi









    • where λ is the already mentioned reference wavelength and I2 the identity matrix with 2 rows and 2 columns.
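A minimal sketch of building Ti from the rigid-transform parameters is given below; the parameter packaging (translation tuple and Euler angles) is an assumption made for the example.

```python
import numpy as np

def space_frequency_transform(t, angles, wavelength):
    """Build T_i(w) = Omega_i tau_i w + b_i from the matrices of step E6.

    `t = (tx, ty, tz)` and `angles = (alpha, beta, gamma)` come from the rigid
    transform F_i; the returned callable acts on 4-vectors w."""
    tx, ty, tz = t
    alpha, beta, gamma = angles
    cg, sg = np.cos(gamma), np.sin(gamma)
    I2 = np.eye(2)
    Omega = np.array([[cg, -sg, 0.0, 0.0],
                      [sg,  cg, 0.0, 0.0],
                      [0.0, 0.0, cg, -sg],
                      [0.0, 0.0, sg,  cg]])
    tau = np.block([[I2, (2.0 * tz / wavelength) * I2],
                    [np.zeros((2, 2)), I2]])
    b = np.array([tx, ty, alpha, beta])
    return lambda w: Omega @ (tau @ np.asarray(w, dtype=float)) + b
```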





The method of FIG. 2 then comprises, at step E8, constructing a predicted digital hologram Hp as a function of the digital hologram H1 and by means of the linear transforms Ti determined at step E6.


For that purpose, for each coefficient c1(k,s,X) associated with a wavelet Ψk,s,X defined by the multiplet (k,s,X) within the digital hologram H1, it is determined, by means of the transform Ti associated with the set Ei containing this coefficient c1(k,s,X), to which wavelet Ψk′,s′,X′ this coefficient c1(k,s,X) applies within the predicted hologram Hp:


Considering ξ1 = f·cos(θk)/(s·Δs), ξ2 = f·sin(θk)/(s·Δs) and η = (ηx, ηy) = (Xx·Δx, Xy·Δy), we calculate







(η′, ξ′1, ξ′2) = Ti[(η, ξ1, ξ2)]







    • and, considering θ′ = atan2(ξ′1, ξ′2), then:

    • k′ = ent(Nθ′/2π), where ent is the “integer part” function, s′ = ent(f/[SQRT(ξ′1² + ξ′2²)·Δs]) and X′ = (ent(η′x/Δx), ent(η′y/Δy)),

    • with η′ = (η′x, η′y).
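The mapping Gi of a multiplet can be sketched as follows; the wrap-around of the orientation to [0, 2π) and the (y, x) argument order of NumPy's arctan2 are assumptions added to obtain a working example.

```python
import numpy as np

def map_multiplet(k, s, X, T_i, N, f, dx, dy, ds):
    """Image (k', s', X') = G_i(k, s, X) of a multiplet under the transform T_i.

    `T_i` is a callable acting on the 4-vector (eta_x, eta_y, xi1, xi2),
    e.g. as built by the `space_frequency_transform` sketch above."""
    theta = 2.0 * np.pi * k / N
    xi1 = f * np.cos(theta) / (s * ds)
    xi2 = f * np.sin(theta) / (s * ds)
    eta_x, eta_y = X[0] * dx, X[1] * dy
    eta_px, eta_py, xi1_p, xi2_p = T_i(np.array([eta_x, eta_y, xi1, xi2]))
    theta_p = np.arctan2(xi2_p, xi1_p) % (2.0 * np.pi)       # orientation of the mapped ray
    k_p = int(N * theta_p / (2.0 * np.pi))                   # ent(N * theta' / 2pi)
    s_p = int(f / (np.hypot(xi1_p, xi2_p) * ds))             # ent(f / (|xi'| * ds))
    X_p = (int(eta_px / dx), int(eta_py / dy))
    return k_p, s_p, X_p
```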





In other words, for each set Ei of coefficients, it is defined (as just indicated), using the transform Ti associated with this set Ei, a transform Gi in the multidimensional space (here four-dimensional) such that a coefficient c1(k,s,X) belonging to the set Ei and applied to the wavelet Ψk,s,X in the digital hologram H1 is applied to the wavelet ΨGi(k,s,X) in the predicted digital hologram Hp, as schematically illustrated in FIG. 4. (We hence have: (k′,s′,X′)=Gi(k,s,X).)


This transform Gi is thus the transform that corresponds, in the multidimensional space of the wavelet definition coordinates, to the rigid transform Fi of the connected component Pi. This linear transform Gi is valid for the coefficients of the set Ei associated with this connected component Pi.


The predicted digital hologram Hp can thus be written:







Hp = Σ_{k,s,X} c1(k,s,X) · Ψ_{Gi(k,s,X)}








In this summation, no account is taken of the coefficients c1(k,s,X) for which the image Gi(k,s,X) is outside the domain of the values used in the representation concerned, that is to say, here, outside the following part of the multidimensional space: [0,N−1]×[1,a]×[0,Nx[×[0,Ny[. These coefficients indeed correspond to rays that exit the digital hologram frame.


The method of FIG. 2 then comprises a step E10 of determining a set of residuals by difference between the digital hologram H2 (the digital hologram to be encoded) and the digital hologram Hp predicted on the basis of the digital hologram H1 (reference digital hologram).


Precisely, for each coefficient c2(k′,s′,X′) of the digital hologram H2 (this coefficient being relative to a wavelet Ψk′,s′,X′ defined by the multiplet (k′,s′,X′)), a residual Ik′,s′,X′ is determined by difference between this coefficient c2(k′,s′,X′) and the coefficient relative to the same wavelet Ψk′,s′,X′ in the predicted digital hologram Hp, i.e. c1(k,s,X), as illustrated in FIG. 4, with (k′,s′,X′)=Gi(k,s,X) as already indicated. We hence have:







Ik′,s′,X′ = c2(k′,s′,X′) − c1(k,s,X).






Each residual is hence determined by difference between a coefficient c2(k′,s′,X′), associated (in the digital hologram H2) with the wavelet Ψk′,s′,X′ defined by the multiplet (k′,s′,X′), and a coefficient c1(k,s,X) associated, in the digital hologram H1, to a wavelet Ψk,s,X defined by a multiplet (k,s,X) having for image the multiplet (k′,s′,X′) by the transform Gi associated with the set Ei comprising the coefficient c1(k,s,X).
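Steps E8 and E10 can be summarized by the following sketch; treating a second coefficient with no predicted counterpart as predicted by zero is an assumption made only to keep the example complete.

```python
def compute_residuals(c1, c2, sets, G, in_domain):
    """Residuals I_{k',s',X'} = c2(k',s',X') - c1(k,s,X), with (k',s',X') = G_i(k,s,X).

    `c1`, `c2` map multiplets to coefficients, `sets[i]` lists the multiplets of
    E_i, `G[i]` maps a multiplet to its image, and `in_domain(m)` checks that a
    multiplet stays in [0,N-1] x [1,a] x [0,Nx[ x [0,Ny[."""
    predicted = {}
    for i, multiplets in sets.items():
        for m in multiplets:
            m_image = G[i](m)
            if in_domain(m_image):               # rays leaving the hologram frame are dropped
                predicted[m_image] = c1[m]
    # second coefficients with no predicted counterpart are predicted as 0 (assumption)
    return {m: value - predicted.get(m, 0.0) for m, value in c2.items()}
```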


The method of FIG. 2 finally comprises a step E12 of encoding the residuals Ik′,s′,X′.


For example, this can be done as follows:

    • ordering the residuals Ik′,s′,X′ in a predetermined order of the multiplets (k′,s′,X′);
    • applying entropy encoding to the ordered residuals, using a method of the Huffman encoding or arithmetic encoding type.
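A minimal sketch of step E12 follows: the residuals are ordered by multiplet, quantized, and Huffman codes are built with the standard-library heapq. The quantization step and the per-symbol code table are illustrative choices; an arithmetic coder could be substituted.

```python
import heapq
from collections import Counter

def encode_residuals(residuals, quant_step=1.0):
    """Order the residuals, quantize them, and build a Huffman codebook.

    `residuals` maps a multiplet (k', s', X') to its residual value."""
    ordered = [residuals[m] for m in sorted(residuals)]        # predetermined multiplet order
    symbols = [int(round(r / quant_step)) for r in ordered]    # simple scalar quantization
    freqs = Counter(symbols)
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:                                       # classic Huffman merging
        w0, _, c0 = heapq.heappop(heap)
        w1, _, c1 = heapq.heappop(heap)
        merged = {sym: "0" + code for sym, code in c0.items()}
        merged.update({sym: "1" + code for sym, code in c1.items()})
        heapq.heappush(heap, (w0 + w1, next_id, merged))
        next_id += 1
    codebook = heap[0][2] if heap else {}
    if len(codebook) == 1:                                     # degenerate single-symbol case
        codebook = {sym: "0" for sym in codebook}
    bitstream = "".join(codebook[sym] for sym in symbols)
    return codebook, bitstream
```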


In the just described example, the differential encoding of the digital hologram H2 is made with reference to a single digital hologram H1. As an alternative, it could be provided to encode the digital hologram H2 with reference to two digital holograms respectively located before and after the digital hologram H2 in the digital hologram sequence.


In this case, the value of the bidirectionally predicted coefficients can be equal to the mean of the coefficients predicted from said two digital holograms.


For example, if H3 denotes a digital hologram posterior to the digital hologram H2 in the digital hologram sequence and c3 the coefficients of this digital hologram H3, the residual will be defined by:








Ik′,s′,X′ = c2(k′,s′,X′) − [c1(k,s,X) + c3(k″,s″,X″)]/2,






    • where, as previously, (k′,s′,X′)=Gi(k,s,X) and where (k′,s′,X′)=G′i(k″,s″,X″), with G′i a transform defined similarly to the transform Gi, but this time on the basis of a rigid transform F′i determined as a function of the evolution of a connected component Pi from the scene represented by the digital hologram H3 to the scene represented by the digital hologram H2.






FIG. 5 illustrates steps of a method for constructing a depth map from a digital hologram H (as already indicated, this method can be applied to the hologram H1 and/or to the hologram H2).


The depth is here understood in the direction (Oz).


Let Mx and My denote the horizontal and vertical resolutions desired for the depth map, and Mz the number of levels of the depth map.


Finally, let zmin and zmax denote the minimum and maximum values of the z coordinate in the scene (these values being predefined).


The method of FIG. 5 starts with a step E20 in which a variable d is initialized to the value 0.


The method then comprises a step E22 of reconstructing the light field U at the depth zd=d·(zmax−zmin)/Mz+zmin, for example using the propagation of the angular spectrum:







U = F⁻¹{ F(H) · exp(2πi·zd·SQRT[λ⁻² − fx² − fy²]) },






    • where SQRT is the square root function, F and F−1 are respectively the direct and inverse Fourier transforms, and fx and fy are the frequency coordinates of the hologram in the Fourier domain.
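A NumPy sketch of this reconstruction step is given below; it assumes H is the complex hologram field sampled on an (Ny, Nx) grid with given pixel pitches, and discarding the evanescent components (negative argument under the square root) is an added assumption.

```python
import numpy as np

def propagate_angular_spectrum(H, z_d, wavelength, pitch_x, pitch_y):
    """Light field U at depth z_d obtained from the hologram plane H by
    angular spectrum propagation, following the formula above."""
    ny, nx = H.shape
    fx = np.fft.fftfreq(nx, d=pitch_x)                 # frequency coordinates of the hologram
    fy = np.fft.fftfreq(ny, d=pitch_y)
    FX, FY = np.meshgrid(fx, fy)
    arg = wavelength ** -2 - FX ** 2 - FY ** 2
    kernel = np.exp(2j * np.pi * z_d * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0                              # evanescent components discarded (assumption)
    return np.fft.ifft2(np.fft.fft2(H) * kernel)
```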





The method then comprises a step E24 of segmenting the reconstructed field U into Mx·My rectangular segments, each segment having a horizontal resolution Kx and a vertical resolution Ky. (The field is reconstructed from the hologram H at a horizontal resolution Nx and a vertical resolution Ny, as already indicated, so that Mx·Kx=Nx and My·Ky=Ny.)


The method then comprises a step E26 of calculating a sharpness metric v for each of the segments obtained at step E24. If each segment is indexed by a horizontal index i and a vertical index j, the value v[i,j,d] of the sharpness metric is calculated for each segment of indices i, j, here by means of the normalized variance:







v[i,j,d] = (1/(Mx·My·µ[i,j])) · Σ_{n,m} (|U[i·Kx+n, j·Ky+m]|² − µ[i,j])²









    • where µ[i,j] is the mean intensity of the field on the segment concerned:










µ[i,j] = (1/(Mx·My)) · Σ_{n,m} |U[i·Kx+n, j·Ky+m]|².







As an alternative, another sharpness metric can be used, for example one of the metrics mentioned in the article “Comparative analysis of autofocus functions in digital in-line phase-shifting holography”, by E. S. R. Fonseca, P. T. Fiadeiro, M. Pereira, and A. Pinheiro in Appl. Opt., AO, vol. 55, no. 27, pp. 7663-7674, September 2016.


(Such a sharpness metric calculation is performed for all the segments, i.e. for any i between 0 and Mx−1 and for any j between 0 and My−1.)
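The per-segment computation of steps E24 and E26 can be sketched as follows; U is indexed as U[x, y] to match the formula, the 1/(Mx·My) normalization is kept exactly as written above (a per-segment average would more usually use 1/(Kx·Ky)), and the small guard on µ is only there to avoid division by zero.

```python
import numpy as np

def sharpness_by_segment(U, Mx, My):
    """Normalized-variance sharpness v[i, j] for each of the Mx*My segments of U."""
    Nx, Ny = U.shape
    Kx, Ky = Nx // Mx, Ny // My                              # segment resolution
    intensity = np.abs(U) ** 2
    v = np.empty((Mx, My))
    for i in range(Mx):
        for j in range(My):
            seg = intensity[i * Kx:(i + 1) * Kx, j * Ky:(j + 1) * Ky]
            mu = max(seg.sum() / (Mx * My), 1e-12)           # mean term as written in the text
            v[i, j] = ((seg - mu) ** 2).sum() / (Mx * My * mu)
    return v
```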


The processing related to the depth zd associated with the current variable d is then finished.


The method then comprises a step E28 of incrementing the variable d and a step E30 of testing the equality between the current value of the variable d and the number Mz of levels of the depth map.


In case of equality (arrow P), all the levels have been processed and the method continues at step E32 described hereinafter.


In the absence of equality, at step E30 (arrow N), the method loops to step E22 for processing the depth level zd corresponding to the (new) current value of the variable d.


The method can then construct, at step E32, the depth map D by choosing, for each element of the map (here indexed by the indices i, j), the depth (here denoted D[i,j]) for which the sharpness metric is maximum (among the different segments aligned along the axis Oz, here all of indices i, j, and respectively associated with the different depths for d varying from 0 to Mz−1). With the notations already used, we have:







D[i,j] = argmax_d v[i,j,d].






A depth value D[i,j] is hence obtained for all the elements of the depth map D, i.e. here for any i between 0 and Mx−1 and for any j between 0 and My−1.
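Putting steps E20 to E32 together gives the following sketch, which reuses the two callables sketched earlier (passed in as parameters so the example stays self-contained); the conversion from the winning index d back to a metric depth is the inverse of the zd formula above.

```python
import numpy as np

def build_depth_map(H, Mx, My, Mz, z_min, z_max, propagate, sharpness):
    """Depth map D[i, j]: depth of maximum sharpness over the Mz levels.

    `propagate(H, z_d)` reconstructs the field at depth z_d (e.g. a partial
    application of the angular-spectrum sketch) and `sharpness(U, Mx, My)`
    returns the per-segment metric (see the earlier sketch)."""
    v = np.empty((Mx, My, Mz))
    for d in range(Mz):
        z_d = d * (z_max - z_min) / Mz + z_min
        v[:, :, d] = sharpness(propagate(H, z_d), Mx, My)
    d_best = np.argmax(v, axis=2)                        # argmax_d v[i, j, d]
    return d_best * (z_max - z_min) / Mz + z_min         # back to a metric depth
```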


The so-obtained depth map D can be used, as already mentioned, to determine the connected components (or parts) Pi of the scene, for example by means of a partitioning algorithm (or “clustering algorithm”).


A k-means algorithm can be used for that purpose, as described for example in the article “Some methods for classification and analysis of multivariate observations”, by MacQueen, J. in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics, 281-297, University of California Press, Berkeley, Calif., 1967.


In this case, the partitioning algorithm makes it possible to group the connected segments (here of indices i, j) of close depth values (here D[i,j]), the so-produced groups forming the connected components Pi.
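For illustration, the grouping can be sketched with scikit-learn's k-means as below; the feature vector (segment indices plus depth) and the fixed number of clusters are assumptions, since the text only requires grouping connected segments of close depth values.

```python
import numpy as np
from sklearn.cluster import KMeans

def connected_components_from_depth(D, n_components):
    """Cluster the (Mx, My) depth map D into candidate connected components P_i."""
    Mx, My = D.shape
    ii, jj = np.meshgrid(np.arange(Mx), np.arange(My), indexing="ij")
    features = np.column_stack([ii.ravel(), jj.ravel(), D.ravel()])
    labels = KMeans(n_clusters=n_components, n_init=10).fit_predict(features)
    return labels.reshape(Mx, My)                        # label of each segment (i, j)
```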

Claims
  • 1. A method for encoding a sequence comprising at least a first digital hologram representing a first scene and a second digital hologram representing a second scene, the first digital hologram and the second digital hologram being represented by a set of wavelets, each of the wavelets being defined by a multiplet of coordinates in a multidimensional space, the first digital hologram being represented by a set of first coefficients respectively associated with at least certain of the wavelets of said set of wavelets and the second digital hologram being represented by a set of second coefficients respectively associated with at least certain of the wavelets of said set of wavelets, the encoding method comprising the following steps: for each given one of a plurality of the second coefficients, determining a residual as a difference between the given second coefficient, associated with a first said wavelet defined by a given said multiplet, and the first coefficient associated with a second said wavelet defined by a said multiplet having for image the given said multiplet by a transform in the multidimensional space; and encoding the determined residuals, wherein the transform is determined by analysis of variation between the first scene represented by the first digital hologram and the second scene represented by the second digital hologram.
  • 2. The method according to claim 1, wherein said variation corresponds to movement of an object between the first scene and the second scene.
  • 3. The method according to claim 2, further comprising, for at least one given said second coefficient outside of said plurality of the second coefficients, a step of determining a residual as a difference between the given said second coefficient, associated with a third said wavelet defined by another given said multiplet, and the first coefficient associated with a fourth said wavelet defined by another said multiplet having for image said another given multiplet by another transform in the multidimensional space.
  • 4. The method according to claim 2, wherein the transform is determined as a function of a movement, between the first scene and the second scene, of a set of connected points.
  • 5. The method according to claim 2, wherein the transform is determined on the basis of three-dimensional representations of the first scene and the second scene.
  • 6. The method according to claim 1, further comprising, for at least one given said second coefficient outside of said plurality of the second coefficients, a step of determining a residual as a difference between the given said second coefficient, associated with a third said wavelet defined by another given said multiplet, and the first coefficient associated with a fourth said wavelet defined by another said multiplet having for image said another given multiplet by another transform in the multidimensional space.
  • 7. The method according to claim 6, wherein said another transform is determined by analysis of another variation between the first scene and the second scene.
  • 8. The method according to claim 7, further comprising the following steps: distributing at least a part of the wavelets into different groups of said wavelets respectively associated with different parts of the first scene or the second scene; determining a transform of the multidimensional space for each said group of the wavelets; for each given one of the second coefficients of a given said group of the wavelets, determining a residual as a difference between the given said second coefficient, associated with a fifth said wavelet defined by a given said multiplet, and the first coefficient associated with a sixth said wavelet defined by a said multiplet having for image this given multiplet by the transform associated with the given group of the wavelets.
  • 9. The method according to claim 7, wherein the transform is determined as a function of a movement, between the first scene and the second scene, of a set of connected points.
  • 10. The method according to claim 6, further comprising the following steps: distributing at least a part of the wavelets into different groups of said wavelets respectively associated with different parts of the first scene or the second scene; determining a transform of the multidimensional space for each said group of the wavelets; for each given one of the second coefficients of a given said group of the wavelets, determining a residual as a difference between the given said second coefficient, associated with a fifth said wavelet defined by a given said multiplet, and the first coefficient associated with a sixth said wavelet defined by a said multiplet having for image this given multiplet by the transform associated with the given group of the wavelets.
  • 11. The method according to claim 10, wherein the transform is determined as a function of a movement, between the first scene and the second scene, of a set of connected points.
  • 12. The method according to claim 6, wherein the transform is determined as a function of a movement, between the first scene and the second scene, of a set of connected points.
  • 13. The method according to claim 6, wherein the transform is determined on the basis of three-dimensional representations of the first scene and the second scene.
  • 14. The method according to claim 1, wherein the transform is determined as a function of a movement, between the first scene and the second scene, of a set of connected points.
  • 15. The method according to claim 1, wherein the transform is determined on the basis of three-dimensional representations of the first scene and the second scene.
  • 16. The method according to claim 1, further comprising the following steps: constructing a first depth map by means of the first digital hologram; constructing a second depth map by means of the second digital hologram; determining the transform on the basis of the first depth map and the second depth map.
  • 17. The method according to claim 16, wherein, depth being defined along a given direction, the step of constructing the first depth map comprises the following steps: reconstructing, by means of the first digital hologram, a light field at a plurality of points; for each given one of a plurality of depths, segmenting those of the plurality of points that are associated with the given depth into a plurality of segments, and determining values of a sharpness metric respectively associated with said segments on the basis of the light field reconstructed on a respective said segment; for each element of the first depth map, determining a depth for which the sharpness metric is maximum among a set of the segments aligned along said given direction and respectively associated with depths of the plurality of depths.
  • 18. The method according to claim 1, wherein the coordinates of said multidimensional space represent respectively a parameter representative of a first spatial coordinate in a plane of the hologram, a parameter representative of a second spatial coordinate in the plane of the hologram, a spatial frequency dilation parameter (s) and an orientation parameter.
  • 19. A device for encoding a sequence comprising at least a first digital hologram representing a first scene and a second digital hologram representing a second scene, the first digital hologram and the second digital hologram being represented by means of a set of wavelets, each of the wavelets being defined by a multiplet of coordinates in a multidimensional space, the encoding device comprising: a unit for storing a set of first coefficients, respectively associated with at least certain of the wavelets of said set of wavelets, and a set of second coefficients, respectively associated with at least certain of the wavelets of said set of wavelets, the set of first coefficients representing the first digital hologram and the set of second coefficients representing the second digital hologram; a unit for determining, for each given one of a plurality of the second coefficients, a residual as a difference between the given second coefficient, associated with a first said wavelet defined by a given said multiplet, and the first coefficient associated with a second said wavelet defined by a said multiplet having for image the given said multiplet by a transform in the multidimensional space; and a unit for encoding the determined residuals, wherein the unit for determining is designed to determine the transform by analysis of variation between the first scene represented by the first digital hologram and the second scene represented by the second digital hologram.
  • 20. The device according to claim 19, wherein said variation corresponds to movement of an object between the first scene and the second scene.
Priority Claims (1)
Number Date Country Kind
1907555 Jul 2019 FR national
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2020/067744 6/24/2020 WO
Publishing Document Publishing Date Country Kind
WO2021/004797 1/14/2021 WO A
US Referenced Citations (12)
Number Name Date Kind
6621605 Grossetie Sep 2003 B1
7277209 Grossetie Oct 2007 B1
20030128760 Lee Jul 2003 A1
20040021917 Plesniak Feb 2004 A1
20080025624 Brady Jan 2008 A1
20090213443 Kang Aug 2009 A1
20110032994 Bar-On Feb 2011 A1
20120213447 Prakash Aug 2012 A1
20150029566 Gioia Jan 2015 A1
20160327905 Gioia Nov 2016 A1
20180267465 Viswanathan Sep 2018 A1
20210058639 Blinder Feb 2021 A1
Non-Patent Literature Citations (16)
Entry
Sivaramakrishnan et al., “A uniform transform domain video codec based on dual tree complex wavelet transform”, 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings, May 7-11, 2001, vol. 3, May 7, 2001 (May 7, 2001), pp. 1821-1824 (Year: 2001).
International Search Report for PCT/EP2020/067744, mailed Sep. 18, 2020, 7 pages.
Written Opinion of the ISA for PCT/EP2020/067744, mailed Sep. 18, 2020, 9 pages.
Sivaramakrishnan et al., “A uniform transform domain video codec based on dual tree complex wavelet transform”, 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings, May 7-11, 2001, vol. 3, May 7, 2001 (May 7, 2001), pp. 1821-1824.
Sim et al., “Reconstruction Depth Adaptive Coding of Digital Holograms”, IE/CE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Engineering Sciences Society, vol. E95A, No. 2, Feb. 1, 2012, pp. 617-620.
Blinder et al., “Signal processing challenges for digital holographic video display systems”, Signal Processing. Image Communication., vol. 70, Oct. 4, 2018, pp. 114-130.
Anas et al., “View-dependent compression of digital hologram based on matching pursuit”, Proceedings of SPIE [Proceedings of SPIE ISSN 0277-786X vol. 10524], SPIE, US, vol. 10679, May 24, 2018, pp. 106790L-106790L.
Blinder et al., “Global motion compensation for compressing holographic videos” Optics Express, vol. 26, No. 20, Sep. 17, 2018, pp. 25524-25533.
Fonseca et al., “Comparative analysis of autofocus functions in digital in-line phase-shifting holography”, Applied Optics, vol. 55, No. 27, Sep. 20, 2016, pp. 7663-7674.
Magarey et al., “Motion Estimation Using a Complex-Valued Wavelet Transform”, IEEE Transactions on Signal Processing, vol. 46, No. 4, Apr. 1, 1998.
Peixeiro et al., “Holographic Data Coding: Benchmarking and Extending HEVC with Adapted Transforms”, IEEE Transactions on Multimedia, Feb. 2018, vol. 20, No. 2, pp. 282-297 [submission pending].
Kambhamettu et al., “A Hierarchical Method for 3D Rigid Motion Estimation”, ACCV 2006, Lecture Notes in Computer Science, vol. 3852 [submission pending].
MacQueen, “Some Methods for Classification and Analysis of Multivariate Observations”, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, University of California, Berkeley, vol. 1, Statistics, pp. 281-297 [submission pending], 1967.
Peixeiro et al., “Holographic Data Coding: Benchmarking and Extending HEVC with Adapted Transforms,” IEEE Transactions on Multimedia, Feb. 2018, vol. 20, No. 2, pp. 282-297.
Srinark et al., “A Hierarchical Method for 3D Rigid Motion Estimation,” ACCV 2006, Lecture Notes in Computer Science, vol. 3852, 2006, pp. 791-800.
MacQueen, “Some Methods for Classification and Analysis of Multivariate Observations,” Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, University of California, Berkeley, vol. 1, 1967, Statistics, pp. 281-297.
Related Publications (1)
Number Date Country
20220272380 A1 Aug 2022 US