The field of the invention is that of digitally generating a hologram from a real or virtual three-dimensional scene, this hologram being intended to be reproduced for a user by a head-mounted-display-type device, worn on the head, which comprises a screen placed in front of the eyes.
The invention can in particular, but not exclusively, be applied to virtual reality, in which the observer is immersed in a three-dimensional virtual scene, or to augmented reality, in which the image reproduced for the user superimposes an image of a virtual scene on the real world that they perceive through the see-through screen of their headset.
For displaying colour and possibly animated holograms, a device is known which comprises one or more liquid crystal display (LCD) screens, called an SLM (Spatial Light Modulator), which modulate one or more laser beams in phase and/or in amplitude. The image thus produced is channelled into the visual field of the observer by a waveguide, or simply reflected by a beam splitter.
A disadvantage of this display device is the low resolution of LCD screens, which limits the visual field of the observer.
To overcome this disadvantage without increasing the resolution of the screens of an SLM device, it is known from the document by T. Ichikawa et al., entitled “CGH calculation with the ray tracing method for the Fourier transform optical system” and published in the journal Opt. Express, vol. 21, no. 26, pp. 32019-32031, December 2013, to place a convergent lens between the SLM device and the user so as to form an enlarged hologram, and therefore offer a wider visual field to the user. However, as the light waves modulated by the hologram pass through the lens, the rays are bent and the perspective of an object of the scene is thus no longer correct, i.e. it no longer corresponds to that which was calculated at the time of generating the hologram. Faced with this problem, the authors have proposed an adaptation of their technique for generating holograms, making it possible to compensate for the bending of the rays by using a points-based approach and a technique of tracing rays from the centre of the plane of the screen.
A disadvantage of this approach is that the generation of the hologram is based on a points-based approach, whose calculation time increases with the number of points in the scene.
The invention improves the situation.
The invention has in particular the aim of overcoming these disadvantages of the prior art.
More specifically, an aim of the invention is to propose a solution for generating and displaying a hologram which makes it possible to enlarge the visual field of a holographic screen, rapidly and in a non-complex manner.
Another aim of the invention is to generate a small hologram, adapted to the size of the screens of a head-mounted display, from a full-scale scene of larger size.
These aims, as well as others which will subsequently appear, are achieved using a method for digitally generating a hologram of a three-dimensional scene in a plane, called screen plane, of a screen of a hologram display device, intended to be worn by a user, said screen being illuminated by a plane coherent light wave, a convergent lens being arranged between the screen and said user, such that the hologram is formed in the plane of the screen, said method comprising the following steps:
According to another aspect of the invention, the propagation comprises a transformation of the light wave emitted by the object plane through a kernel, calculated according to the scale factor.
It is, for example, the Fresnel-Bluestein Transform. An advantage of this embodiment is that it makes it possible to control the magnification independently of the propagation distance, the wavelength and the resolution of the plane. In addition, it is expressed in the form of a convolution product, which can be calculated efficiently using a Fast Fourier Transform. Another advantage of a convolution product is that, since it requires a passage through the Fourier domain, a frequency filter can easily be added to it so as to remove artefacts, for example those due to undesired diffraction orders.
According to another aspect of the invention, the propagation step comprises a first propagation of the light wave emitted by the object plane to an intermediate virtual plane, then a second propagation from the intermediate virtual plane to the plane of the screen, the first and the second propagations being achieved using a transform such that the scale factor between a starting plane and an arrival plane depends on a distance between the planes, and the control of the scale factor comprises placing the intermediate virtual plane between the object plane and the plane of the screen so as to respect the scale factor between the object plane and the plane of the screen.
For example, the propagation is achieved using a transform called the Double Step Fresnel Transform.
An advantage of this method is that it makes it possible to control the scale factor using the position of the virtual plane, i.e. according to the distance travelled during the first then during the second propagation. In addition, the Fresnel Transform is expressed in the simple form of a complex multiplication followed by a Fourier Transform, and does not require doubling the number of samples of the reconstruction window. It can therefore be calculated rapidly.
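As an illustration, the Fresnel Transform mentioned here (a complex multiplication by a quadratic phase followed by a Fourier transform) can be sketched in a few lines of numpy. The function name, the square-grid assumption and the chirp normalisation are choices of this sketch, not taken from the description:

```python
import numpy as np

def fresnel_ssf(u0, wavelength, z, pitch):
    # Single-FFT (1FFT) Fresnel propagation: one input chirp
    # multiplication, one FFT, one output chirp. Square N x N grid assumed.
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    coords = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    # Complex multiplication by the quadratic (chirp) phase
    chirp_in = np.exp(1j * k * (X**2 + Y**2) / (2.0 * z))
    # One Fourier transform gives the propagated field
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u0 * chirp_in)))
    # Destination-plane sampling interval: lambda * z / (N * pitch)
    pitch_out = wavelength * z / (n * pitch)
    co = (np.arange(n) - n // 2) * pitch_out
    Xo, Yo = np.meshgrid(co, co)
    chirp_out = np.exp(1j * k * (Xo**2 + Yo**2) / (2.0 * z)) / (1j * wavelength * z)
    return U * chirp_out, pitch_out
```

Note that the destination sampling interval depends on the propagation distance z, which is precisely the property the Double Step Fresnel Transform exploits to control the scale factor.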
Advantageously, the intermediate virtual plane is placed at a distance from the plane of the screen corresponding to the focal distance of a virtual camera whose inverted perspective projection of the points of the scene would produce the plurality of modified planes, and the summation of the propagated light waves is achieved in the virtual plane.
An advantage of this embodiment is that it is much less complex. Indeed, the light waves of each plane are propagated to a single virtual plane and summed, then the resulting light wave is propagated to the screen plane. It is a particular use of the preceding transform, based on an approximation made possible by the relatively low resolution of current hologram devices with respect to the wavelength of the light.
The invention also relates to a device adapted to implement the method for generating a hologram according to any one of the particular embodiments defined above. This device can, of course, comprise the different features relating to the method according to the invention. Thus, the features and advantages of this device are the same as those of the generation method, and are not detailed further.
According to a particular embodiment of the invention, such a device is comprised in an item of terminal equipment.
The invention also relates to an item of terminal equipment comprising:
The invention also relates to a computer program comprising instructions for implementing the steps of a method for generating a hologram such as described above, when this program is executed by a processor.
These programs can use any programming language. They can be downloaded from a communication network and/or recorded on a computer-readable medium.
The invention finally relates to processor-readable recording media, whether or not integrated with the device for generating a hologram according to the invention, possibly removable, each storing a computer program implementing a method for generating a hologram as described above.
Other advantages and features of the invention will appear more clearly upon reading the following description of a particular embodiment of the invention, given as a simple illustrative and non-limiting example, and the appended drawings, from among which:
The general principle of the invention is based on the generation of a hologram from the perspective projection of a 3D scene, from the viewpoint of an observer, onto a plurality of planes parallel to the screen plane; a correction of the positions and sizes of the planes to compensate for the distortion introduced by the convergent lens coupled with the screen; and a propagation of the light waves emitted by the corrected planes towards the screen plane, the hologram being formed on the screen plane by the sum of the light waves thus propagated.
In relation to
The display device 10 also comprises a convergent lens 12 having a focal length f, a waveguide 13 and a coherent light point source 14. The convergent lens 12 is located between the SLM 11 and the inlet of the waveguide 13, the light source 14 is located in the focal plane of the lens 12, and the eye of the user 15 is located at the outlet of the waveguide.
The spherical light wave emitted by the point source 14 is transformed into a plane wave by the lens 12 and illuminates the SLM 11. This plane wave is thus modulated by the hologram H displayed on the SLM and it is reflected towards the lens 12. The modulated wave thus passes through the lens 12, then it is transmitted by the waveguide 13 up to the eye 15 of the user.
In relation to
The display device 10 is intended to be placed in front of the eyes of the user and worn on their head. Advantageously, it can be integrated into an item of terminal equipment ET of head-mounted-display type.
In relation to
In a second example illustrated by
The size of the screen 11 is given by (Sx,Sy)=(Nx·p,Ny·p), with (Nx,Ny) being the resolution of the screen, of around a few thousand pixels per dimension, and p being the size of the pixels, of around a few micrometres.
θ refers to the maximum diffraction angle of the screen 11. It is expressed as follows:
with λ being the wavelength of the light source 14, of around a few hundred nanometres.
The maximum viewing field is obtained when the eye of the observer is located in the plane (x,y,−f′). It is thus given
in the horizontal plane and
in the vertical plane, with
Consequently, the viewing field (ϕx,ϕy) of the hologram can be increased by decreasing the focal distance of the lens.
In return for the increase of the viewing field of the hologram, the lens has the effect of distorting the geometry of the virtual scene. This distortion must therefore be taken into account when calculating the holographic video stream to be displayed on the SLM screen 11.
In relation to
During a step E1, a colour intensity map and a depth map of the real or virtual 3D scene are obtained, corresponding to the viewpoint of the observer when they are placed in the plane (x, y, −f′). For this, a virtual or real 2D+Z camera is used, according to the nature of the scene, with a viewing field of ϕx and ϕy in the horizontal and vertical planes, respectively, and a resolution of (Nx,Ny). If the scene is virtual, it is, for example, described in the form of a mesh or of a point cloud, and a virtual camera is then used to construct the intensity and depth maps. If the scene is real, a real camera is used.
Each point of the 3D scene of coordinates (x,y,z) in a reference frame of the camera is thus projected on an image element or pixel of coordinates (u,v) in the image plane of the camera, such that:
where the symbol {tilde over ( )} means that the vector equality is defined up to a scalar factor, due to the homogeneous coordinates used (in a manner known to a person skilled in the art), and M corresponds to the projection matrix of this camera in homogeneous coordinates, given by:
where
are the coordinates of the principal point of the camera, expressed in the pixel coordinate frame (known to a person skilled in the art).
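By way of illustration, the projection of a scene point onto a pixel can be sketched as follows. The exact entries of the matrix M are not reproduced above, so a standard pinhole form with focal lengths (fx, fy) and principal point (cx, cy) is assumed here:

```python
import numpy as np

def project(points, fx, fy, cx, cy):
    # Pinhole projection in homogeneous coordinates. The entries of M
    # below are an assumed standard form, not taken from the description.
    M = np.array([[fx, 0.0, cx, 0.0],
                  [0.0, fy, cy, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    # Append 1 to each point: (x, y, z) -> (x, y, z, 1)
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    uvw = pts_h @ M.T
    # The "~" equality: divide by the scalar factor w (= z here)
    return uvw[:, :2] / uvw[:, 2:3]

# A point 2 m in front of a camera with hypothetical intrinsics
uv = project(np.array([[0.1, -0.2, 2.0]]),
             fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
```

The division by the homogeneous coordinate w is what makes the equality hold only "up to a scalar factor", as stated above.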
At the end of this step, an intensity map I is obtained in the form of an image having dimensions (Nx,Ny), whose intensity values lie between 0 and 255 for each colour, and a depth map D of the same dimensions, whose depth values are normalised between 0, for a real depth of zmin, and 255, for a real depth of zmax.
During a step E2, the intensity and depth map points are projected according to an inverted perspective projection model of the virtual or real camera in the 3D reference frame Rc of the camera, as illustrated by
As the depth D is encoded on 8 bits, each pixel (u,v) of the intensity I and depth d=D(u,v) map is projected into a point having coordinates P(xu, yu, zd) in the camera reference frame Rc=(O, x, y, z), as follows:
with d=D(u,v). The point cloud thus projected is therefore naturally split into a set of N=256 planes (Pd).
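The inverted projection of step E2 can be sketched as follows. The linear decoding of the 8-bit depth between zmin and zmax, and the pinhole intrinsics, are assumptions of this sketch (the description only states that 0 maps to zmin and 255 to zmax):

```python
import numpy as np

def unproject(D, zmin, zmax, fx, fy, cx, cy):
    # Inverted pinhole projection of a depth map into the camera frame Rc.
    # Linear 8-bit depth decoding is an assumption of this sketch.
    Ny, Nx = D.shape
    u, v = np.meshgrid(np.arange(Nx), np.arange(Ny))
    z = zmin + (D.astype(float) / 255.0) * (zmax - zmin)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # The cloud naturally splits into one plane per 8-bit depth level
    planes = {d: np.stack([x[D == d], y[D == d], z[D == d]], axis=1)
              for d in np.unique(D)}
    return planes
```

Each dictionary entry gathers the 3D points sharing the same quantised depth d, i.e. one plane (Pd) of the plurality.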
However, this point cloud cannot be directly used for calculating the hologram, due to the presence of the lens 12.
During a step E3, the coordinates of the points P(xu, yu, zd) of the plurality of planes (Pd) are modified, in order to compensate for the distortion induced by the convergent lens 12, as described below in relation to
A point P(xu, yu, zd) of the plane (Pd) is considered to be the image point of an object point P′ by the conjugation effect of the lens 12. As a result, for each image point P of coordinates (xu, yv, zd), the coordinates (x′u, y′v, z′d) of the corresponding object point P′ are given by the following expression:
This expression corresponds to the formula of conjugation by a convergent lens, known to a person skilled in the art.
As illustrated by
In
A scale factor γd is defined between the object plane (P′d) and the screen plane 11.
It is expressed as follows:
The size of the object plane (P′d) is therefore given by
γd has a value of less than 1. This is therefore a reduction.
The focal distance f″ of a virtual camera is now considered, whose inverted projection model would project the points of the 3D scene onto the plurality of object planes (P′d), with ψx the angle of the corresponding visual field.
By application of the intercept theorem, the following is obtained:
If Sx≥Sy, the following is obtained:
Conversely, if Sx<Sy, the following is obtained:
During a step E4, the light waves emitted by the object planes (P′d) are propagated to the plane of the screen.
The object plane (P′d) having depth d is considered as a surface light source which emits the light wave given by:
where ϕu,v∈[0,2π] is the initial phase making it possible to control the dispersion of light emitted by each point, h is a window function which makes it possible to control the size thereof, and δ is the Dirac impulse.
In an embodiment, the phase ϕu,v can be defined as a uniform random variable, providing a diffuse rendering of the scene, but other distributions can be used. In the same manner, several windowing functions can be used for h. An embodiment is to use a Gaussian distribution:
but a rectangular window or a Hann window could also have been used.
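As a sketch, the light wave od emitted by one object plane, with a uniform random initial phase and a Gaussian window, could be built as follows. Treating the square root of the intensity as the amplitude, and the width parameter sigma, are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def plane_wave(I, sigma):
    # Light wave emitted by one object plane: amplitude sqrt(I), uniform
    # random initial phase phi in [0, 2*pi) for a diffuse rendering, and
    # a Gaussian window h of width sigma centred on the plane.
    Ny, Nx = I.shape
    phi = rng.uniform(0.0, 2.0 * np.pi, size=I.shape)
    u, v = np.meshgrid(np.arange(Nx) - Nx // 2, np.arange(Ny) - Ny // 2)
    h = np.exp(-(u**2 + v**2) / (2.0 * sigma**2))  # Gaussian window
    return np.sqrt(I) * h * np.exp(1j * phi)

o = plane_wave(np.ones((8, 8)), sigma=2.0)
```

A rectangular or Hann window would simply replace the Gaussian array h; the random phase spreads the emitted light, as stated above.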
To simplify the calculations, od is sampled on a regular grid having a resolution (Nx,Ny). In relation to
The last step of calculating the hologram consists of propagating the light emitted by the scene to the plane of the screen 11. For this, the light waves emitted by each plane are digitally propagated to the plane of the screen and summed to obtain the hologram H of the scene, such that
with Pz′d,γd being the operator of propagation over a distance z′d with a scale factor γd.
According to a first embodiment of the invention, this propagation is achieved using a propagation technique called the Fresnel-Bluestein technique, known to a person skilled in the art and, for example, described in the document by Restrepo et al., entitled “Magnified reconstruction of digitally recorded holograms by Fresnel-Bluestein transform”, published in the journal “Appl. Opt.”, vol. 49, no. 33, pp. 6430-6435, in November 2010, given by
where ξ and η are the coordinates of a point in the object plane (P′d).
The propagation of a light wave od(ξ,η) of the object plane towards the plane of the hologram is calculated by achieving a convolution product of the wave with a kernel g(n,m) which depends on the scale factor between the two planes.
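The convolution-based propagation can be sketched as follows. The kernel g, whose exact scale-factor-dependent expression is given in the cited Fresnel-Bluestein reference, is here passed in as a parameter rather than reproduced:

```python
import numpy as np

def propagate_conv(o, g):
    # Convolution product of the object wave o with a kernel g, computed
    # through the Fourier domain. A frequency filter could be applied to
    # the product O * G here to suppress undesired diffraction orders.
    O = np.fft.fft2(o)
    G = np.fft.fft2(g)
    return np.fft.ifft2(O * G)
```

The passage through the Fourier domain is what makes the optional frequency filtering mentioned earlier essentially free.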
According to a second embodiment of the invention, which will now be described in relation to
Pz′d,γd{od}(x,y)=SSFzd,2{SSFzd,1{od}}(x,y)
where SSFz is the Fresnel propagation (1FFT), given by:
The DSF consists of successively applying two Fresnel propagations, first from the object plane (P′d) to an intermediate virtual plane (P′i), then from the intermediate virtual plane to the plane of the hologram. The sampling interval of the destination plane of the Fresnel propagation depends on the distance between the source plane and the destination plane. Consequently, resorting to an intermediate virtual plane makes it possible to select the two distances zd,1 and zd,2 so as to control the magnification γd between the sampling interval pd on the object plane and the sampling interval p on the plane of the hologram 11. The intermediate distances zd,1 and zd,2 are given by
For example, for z′d=1 m and γd=0.25, the following is obtained: zd,1=0.8 m and zd,2=0.2 m.
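A form of the two distances consistent with the constraints zd,1 + zd,2 = z′d and γd = zd,2/zd,1, and which reproduces the numerical example just given, can be sketched as follows (the closed form itself is an inference of this sketch, since the formula is not reproduced above):

```python
def dsf_distances(z_total, gamma):
    # Intermediate distances of the Double Step Fresnel propagation,
    # inferred from z1 + z2 = z'_d and gamma_d = z2 / z1 (consistent with
    # the example z'_d = 1 m, gamma_d = 0.25 -> 0.8 m and 0.2 m).
    z1 = z_total / (1.0 + gamma)
    z2 = z_total - z1
    return z1, z2
```

The ratio z2/z1 equals γd, matching the fact that the 1FFT sampling interval scales with the propagation distance of each step.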
According to a third embodiment of the invention, illustrated by
with an error of less than 3%.
Thus, the following is obtained:
The following can therefore be set:
Therefore, one single intermediate virtual plane (P′i) common to all the object planes can be defined.
Generally, f″≈10 cm.
Indeed, by using the Fresnel propagation (1FFT), the sampling interval (p′x,p′y) on the destination plane is given by the sampling interval (px,py) on the source plane such that
By putting px=py=p′d, the sampling interval (p″x,p″y) is obtained in the plane of the hologram by:
Therefore, it is verified that the sampling interval in the plane of the hologram is respected.
In this third embodiment, the light waves emitted by each object plane are thus digitally propagated to the virtual plane (P′i)=(P′f″), then propagated from this virtual plane to the plane of the hologram and finally summed to obtain the hologram H of the scene, such that
An advantageous option is to sum the waves digitally propagated to the virtual plane (P′f″), then to propagate the resulting wave by an SSF to the plane of the hologram, as follows:
The complexity of this embodiment is therefore divided by two with respect to the more general embodiment.
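The halving rests on the linearity of the Fresnel propagation: the common second propagation commutes with the summation in the virtual plane. A minimal numpy check, using a simplified SSF whose constant factors and output chirp are omitted since only linearity matters:

```python
import numpy as np

rng = np.random.default_rng(1)

def ssf(u, wavelength, z, pitch):
    # Simplified single-FFT Fresnel step: input chirp then FFT.
    # Constant factors and the output chirp are omitted; they do not
    # affect the linearity of the operator being checked here.
    n = u.shape[0]
    k = 2.0 * np.pi / wavelength
    c = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(c, c)
    return np.fft.fft2(u * np.exp(1j * k * (X**2 + Y**2) / (2.0 * z)))

waves = [rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
         for _ in range(4)]

# Propagating each wave over the common second leg and then summing ...
a = sum(ssf(w, 633e-9, 0.1, 8e-6) for w in waves)
# ... gives the same field as summing in the virtual plane first and
# propagating the resulting wave once:
b = ssf(sum(waves), 633e-9, 0.1, 8e-6)
```

Only one second-leg propagation is needed instead of one per object plane, which is the source of the reduced complexity.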
It will be noted that the invention which has just been described, can be implemented by means of software and/or hardware components. With this in mind, the terms “module” and “entity”, used in this document, can correspond, either to a software component, or to a hardware component, or also to a set of hardware and/or software components, capable of implementing the function(s) described for the module or the entity in question.
In relation to
For example, the device 100 comprises a processing unit 110, equipped with a processor μ1, and controlled by a computer program Pg1 120, stored in a memory 130 and implementing the method according to the invention.
Upon initialisation, the code instructions of the computer program Pg1 120 are, for example, loaded into a RAM memory before being executed by the processor of the processing unit 110. The processor of the processing unit 110 implements the steps of the method described above, according to the instructions of the computer program 120. In this embodiment example of the invention, the device 100 comprises a reprogrammable calculation machine or a dedicated calculation machine, capable of and being configured to:
Advantageously, such a device 100 can be integrated into an item of terminal equipment ET, for example of head-mounted-display type. The device 100 is then arranged to cooperate at least with the following modules of the terminal ET:
It goes without saying that the embodiments described above have been given purely by way of information and are in no way limiting, and that numerous modifications can easily be made to them by a person skilled in the art without departing from the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
1756004 | Jun 2017 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/065666 | 6/13/2018 | WO | 00 |