This application claims priority to foreign French patent application No. FR 2303528, filed on Apr. 7, 2023, the disclosure of which is incorporated by reference in its entirety.
The present invention relates in general to the projection of an image onto an eye, in augmented reality applications, and in particular to a projection device with an optimized emission point distribution on a discretized emission surface and based on an isoline configuration, and to a method for manufacturing such a device.
A portable optical augmented reality data display system superimposes, on the real-world view of a user of the system, an image containing information intended for the user, such as for example information about their environment, their position, their speed of movement, etc.
Some known portable optical augmented reality data display systems use an image projection device comprising a transparent integrated optical circuit composed of an array of nanometric light guides, an electrode array and a holographic film, as described for example in patent applications FR3122929A1 and FR3022642A1. Such an image projection device is implemented without a screen or an optical system, thereby making it possible to obtain a compact optical system and a wide field of view for the user. The intersections of the nanometric light guide circuit with the electrode array make it possible to define a set of emission points able to emit a light wave directed towards the pupil of the eye of the user. The set of emission points is subdivided into various subsets, each subset comprising emission points that are distributed as randomly as possible. The light waves associated with one and the same subset of emission points propagate in one and the same direction so as to form a single light spot at the retina of the eye of the user. A light spot corresponds to a pixel of an image to be projected.
However, such a device has the drawback of using emission point distributions that are defined according to certain spatial periodicities, which generate diffraction effects when the image is formed on the retina. Moreover, the density of the emission points in each subset used to form a light spot is limited. This results in a luminous halo associated with retinal projection, which degrades the contrast of the image to be displayed. As a result, the quality of the projection of images onto the retina of a user is insufficient.
There is thus a need for an improved image projection device that makes it possible to increase the density of emission points of a subdivided subset while at the same time improving the pseudo-random and aperiodic distribution of the emission points.
The present invention aims to improve the situation by proposing an image projection device for projecting an image onto an eye. The device is defined in an orthogonal reference system (X,Y,Z) and comprises an emission surface S extending generally in the plane (X,Y) of the orthogonal reference system (X,Y,Z). The emission surface S comprises a stack of elements, the elements comprising a set of Mx waveguides gp, a set of Mx×My diffraction gratings rpq and a set of My electrodes eq, Mx and My being positive integers whose product Mx×My is strictly greater than 1. Each diffraction grating rpq is positioned at the intersection of one of the waveguides gp and one of the electrodes eq so as to form an emission point EPpq for a light wave.
The emission surface S is discretized into a plurality of Lx×Ly elementary emission zones Zij in a continuous mesh in the plane (X,Y), each elementary emission zone Zij comprising a subset of mxij×myij emission points EPpqij among the Mx×My emission points EPpq of the emission surface S, the subset of mxij×myij emission points EPpq being distributed in a number ηxij×ηyij of emission point distributions EPDuvij. The emission points EPpqij of one and the same emission point distribution EPDuvij of the elementary emission zone Zij are configured to emit a resultant light wave directed with a wave vector {right arrow over (κ)}uvij contained in an angular domain defined based on the number Lx×Ly of elementary emission zones Zij discretizing the emission surface S and on the position of the elementary emission zone Zij on the emission surface S, the number ηxij×ηyij of emission point distributions EPDuvij corresponding to the number ηxij×ηyij of pixels of the image to be projected in the angular domain.
In some embodiments, the discretization of the emission surface S may be uniform in the plane (X,Y).
Alternatively, the discretization of the emission surface S may be non-uniform in the plane (X,Y).
According to some embodiments, for each elementary emission zone Zij, the distribution of the emission points EPpqij in an emission point distribution EPDuvij may be determined randomly or pseudo-randomly.
Advantageously, the stack of the emission surface S may furthermore comprise a set of Mx×My holograms hpq, each hologram hpq being positioned at the intersection between one of the waveguides gp and one of the electrodes eq so as to form the emission point EPpq. The holograms hpq associated with the emission points EPpqij of one and the same emission point distribution EPDuvij of the elementary emission zone Zij may be encoded such that the emission points EPpqij emit light waves that are angle-matched and phase-matched to one another so as to generate the resultant light wave defined according to the direction of the wave vector {right arrow over (κ)}uvij contained in an angular domain.
The device may furthermore comprise, in the plane (X,Y), at least one other emission surface Sxy distinct from the emission surface S. The other emission surface Sxy may be discretized into elementary emission zones comprising emission points designed to emit a light wave in a direction contained in a determined angular domain along an optical axis centred with respect to a point Prxy and directed towards the emission surface Sxy, the point Prxy being associated with the position of the eye, after the eye has rotated in its orbit towards the emission surface Sxy.
The device may furthermore comprise, in the plane (X,Y), at least one other emission surface S identical to the emission surface S. The other emission surface S may be discretized into elementary emission zones comprising emission points configured to emit a light wave in a direction contained in an angular domain defined along the axis Z and centred with respect to a point Ptxy associated with the translation of the eye in the plane (X,Y).
Advantageously, the elementary emission zones Zij may have a size in the plane (X,Y) of between 200 μm and 800 μm.
Another subject of the invention is a transparent portable optical data display system comprising an image projection device. The system is a glasses system or an augmented reality headset.
The invention also provides a method for manufacturing the image projection device. The method comprises a phase of designing the device and a phase of physically manufacturing the device thus designed. The design phase comprises the following steps: discretizing the emission surface S into Lx×Ly elementary emission zones Zij; distributing the emission points of each elementary emission zone Zij into emission point distributions EPDuvij; assigning, for each elementary emission zone Zij, the emission point distributions EPDuvij to the pixels of the image to be projected; and determining, for each emission point distribution EPDuvij, the direction of the wave vector {right arrow over (κ)}uvij of the resultant light wave emitted by the emission points of the distribution.
The image projection device according to the embodiments of the invention makes it possible to improve the quality, in particular the contrast, of the projection of images onto the retina of a user, based on an optimized aperiodic emission point distribution on a discretized emission surface, an increase in the densification of emission points on an emission surface and isoline configurations.
Such a device also makes it possible to facilitate and speed up the method for manufacturing the device.
Other features, details and advantages of the invention will become apparent on reading the description provided with reference to the appended drawings, which are given by way of example.
Identical references are used in the figures to denote identical or similar elements. For the sake of clarity, the elements that are shown are not to scale. Moreover, in the remainder of the description, unless indicated otherwise, the terms “substantially” and “generally” mean “to within plus or minus 10%”.
The numbers Mx and My are positive integers whose product (Mx×My) is strictly greater than 1. The parameter p denotes an index associated with the various waveguides gp, with p∈[1, Mx], and the parameter q denotes an index associated with the various electrodes eq, with q∈[1, My].
The image projection device 10 may be used for example in a portable optical data display system, in the field of augmented reality (AR) image display, and more generally in the field of virtual reality (VR) and mixed reality (MR).
The image projection device 10 comprises an emission surface S, generally extending in the plane (X,Y) and formed of (Mx×My) emission points, denoted EPpq. Each emission point EPpq corresponds to the intersection of a waveguide gp and an electrode eq. Each emission point EPpq is designed to emit a light wave propagating along a propagation axis (or wave vector) {right arrow over (κ)}, from the emission surface S to the pupil P, which then focuses the light onto the retina R of the eye. The emission surface S then corresponds to a retinal projection screen.
The image projection device 10 is designed to project an image of size (Nx×Ny) in terms of number of pixels and substantially defined in the plane of projection of the retina R of the eye.
The numbers Nx and Ny are positive integers whose product (Nx×Ny) is strictly greater than 1. The parameter u denotes an index associated with the pixels of the image along the axis X, with u∈[1, Nx], and the parameter ν denotes an index associated with the pixels of the image along the axis Y, with ν∈[1, Ny]. For example and without limitation, a minimum number of the product of the integers (Nx×Ny) may be equal to 100 pixels of an image to be projected.
The set of (Mx×My) emission points EPpq is then distributed into (Nx×Ny) subsets of emission points. These subsets of emission points are also called ‘emission point distributions’ and are denoted by the notation EPDuv. It should be noted that a minimum number of the product of the integers (Mx×My) may be defined based on the maximization of the number of emission point distributions (Nx×Ny) on the emission surface S (that is to say pixels to be projected) and of the number ηem/EPD of emission points EPpq per emission point distribution EPDuv. For example and without limitation, the number ηem/EPD may be between 50 and 200, and the minimum number of the product of the integers (Mx×My) may therefore be equal to 5000 emission points EPpq.
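By way of illustration, the minimum sizing stated above can be checked with a short computation; this is a minimal sketch (the function name is illustrative), assuming the minimum number of emission points is simply the product of the minimum pixel count and the number of points per distribution:

```python
# Minimum sizing of the emission surface (illustrative sketch).
# Assumption: minimum Mx*My = (minimum pixel count) * (emission points per distribution).

def min_emission_points(min_pixels: int, points_per_epd: int) -> int:
    """Lower bound on Mx*My for a given image size and EPD density."""
    return min_pixels * points_per_epd

# 100 pixels minimum, 50 emission points per distribution (lower end of the 50-200 range)
assert min_emission_points(100, 50) == 5000
```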
Each emission point distribution EPDuv may be formed such that all of the emission points EPpq of one and the same distribution all emit a phase-matched light wave along one and the same propagation axis {right arrow over (κ)}uv. It should be noted that a first light wave and a second light wave each having one and the same propagation axis {right arrow over (κ)}uv are phase-matched if, in a plane perpendicular to {right arrow over (κ)}uv, the value of the phase of the second light wave is substantially equal to the value of the phase of the first light wave modulo 2π. The light waves emitted by the emission points EPpq of one and the same distribution may therefore propagate in parallel between the image projection device 10 and the pupil P of the eye over the distance Zer, and converge substantially at the same point Ruv on the retina by virtue of the lens C of the eye, as shown in the appended drawings.
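The phase-matching condition modulo 2π can be expressed numerically; a minimal sketch (the function name and tolerance are illustrative, not part of the claimed device):

```python
import math

def phase_matched(phi1: float, phi2: float, tol: float = 1e-9) -> bool:
    """Two co-propagating waves are phase-matched if their phases agree modulo 2*pi."""
    diff = (phi2 - phi1) % (2 * math.pi)
    # distance to the nearest multiple of 2*pi
    return min(diff, 2 * math.pi - diff) < tol

assert phase_matched(0.3, 0.3 + 4 * math.pi)   # same phase modulo 2*pi
assert not phase_matched(0.0, math.pi)         # opposite phases: not matched
```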
The device 10 comprises a first layer corresponding to the set of Mx waveguides gp arranged parallel to the emission surface S. Each waveguide gp extends on either side of the emission surface S, from an initial position to a final position that are arranged along an axis of extension X′ substantially parallel to the longitudinal axis X.
The waveguides gp are configured to receive coherent light, emitted by one or more laser sources (not shown in the figures), and to propagate the coherent light along the emission surface S. The waveguides gp may for example be formed of silicon nitride, and have a width (gy) in the direction of the axis Y.
The device also comprises a second layer corresponding to the set of My electrodes eq arranged parallel to the emission surface S. Each electrode eq extends on either side of the emission surface S, from an initial position to a final position that are arranged along an axis of extension Y′ substantially parallel to the transverse axis Y.
The electrodes eq are configured to receive a specific bias voltage (or electrical modulation), managed by one or more power supplies (not shown in the figures), and to propagate the electrical modulation along the emission surface S. The electrodes eq are made of a conductive material; for transparent electrodes, this material may for example be indium tin oxide. The electrodes eq may have a width (ex) in the direction of the axis X.
The set of My electrodes eq is superimposed on the set of Mx waveguides gp, parallel to the emission surface S. Each electrode eq thus ‘crosses’ multiple waveguides gp, so as to define multiple intersections (or crossings). Each intersection corresponds to a position of an emission point EPpq. As used here, the term ‘intersection’ refers to a superposition of a waveguide gp (that is to say first layer) and an electrode eq (that is to say second layer) that are not parallel to one another.
The device 10 furthermore comprises a third layer, contained between the first and second layers, corresponding to a set of (Mx×My) diffraction gratings, denoted rpq. Each diffraction grating rpq is optically coupled to a waveguide gp and is joined to an electrode eq. Each diffraction grating rpq is formed from a periodic variation of at least one material with a refractive index able to be modulated by applying a bias voltage through the electrode eq.
The third layer comprising the set of diffraction gratings rpq may consist of a continuous structure in the plane (X,Y) of the emission surface S. Alternatively, the third layer may also consist of structures located in the plane (X,Y) of the emission surface S and positioned substantially at the intersections formed by the waveguides gp and the electrodes eq. The continuous structure or the localized structures may be formed for example by inclusions defining a pattern with a periodic variation in silicon oxide. The inclusions may then consist of any material having an electrically adjustable refractive index, such as for example a liquid crystal. When the wavelength of the light emitted by a laser source is 532 nm, the period of the pattern of the diffraction grating rpq may be between 300 nm and 400 nm. A diffraction grating may comprise a series of periodic patterns spread over a length (rx) and over a width (ry). These quantities (rx) and (ry) may be defined based on the zones of superposition formed by the waveguides gp and the electrodes eq. For example, the diffraction grating may be spread over 10 periodic patterns such that the length (rx) may be greater than or substantially equal to the width (ex) of the electrodes eq inducing the modulation of the refractive index. The third layer comprising the set of diffraction gratings may have a thickness (rz) of for example between 100 nm and 500 nm.
The device 10 may comprise a set of light wave orientation elements configured to control a light wave. As used here, the expression “light wave control” (also referred to as ‘light wave manipulation’) refers to various phenomena related to electromagnetic waves that may occur when an optical beam interacts notably with the material of a given object, as shown with the element hpq in the appended drawings.
Advantageously, the device 10 may comprise a fourth layer H, positioned on the second layer, comprising the set of light wave orientation elements configured to control a light wave. The fourth layer H may be a holographic film comprising a set of (Mx×My) holograms, denoted hpq. Each hologram hpq corresponds to a (reflective or transmissive) orientation element and is associated with a diffraction grating rpq along the axis Z. The holographic film may be a photopolymer, for example polymethyl methacrylate, or a photoresist, with a thickness (hz) of between 2 μm and 20 μm. A hologram recorded (or encoded) on the holographic film may extend over a length (hx) of between 2 μm and 20 μm, and a width (hy) of between 1 μm and 10 μm. The quantities (hx) and (hy) of the holograms may be defined based on the zones of superposition formed by the waveguides gp and the electrodes eq. Advantageously, the length (hx) may be greater than the width (ex) of the electrodes eq, inducing holograms that are contiguous or overlap in pairs, as shown in the appended drawings.
The stack of the various layers and elements of the structure of the device 10 is arranged on a support V. The support V may be a transparent support made of glass or polycarbonate and contained in a spectacle lens or a visor of the transparent portable optical data display system.
Each distribution EPDuv therefore makes it possible to form a light spot that is perceived by the user and associated with a pixel of an image. An image may be formed by successively illuminating various emission point distributions EPDuv, so as to form an image comprising a large number of pixels. The illumination frequency of the various emission point distributions EPDuv is dimensioned such that the user is able to experience the formation of a still image under the effect of retinal persistence, despite technically sequential formation of the various pixels of the image.
In a first embodiment, the emission surface S may be discretized into (Lx×Ly) elementary emission zones denoted Zij forming a continuous mesh of the emission surface S in the plane (X,Y), as shown in the appended drawings.
The numbers Lx and Ly are positive integers whose product (Lx×Ly) is strictly greater than 1. The parameter i is an index associated with a discretization of the emission surface S along the axis X, with i∈[1, Lx], and the parameter j is an index associated with a discretization of the emission surface S along the axis Y, with j∈[1, Ly]. For example and without limitation, a minimum number of the product of the integers (Lx×Ly) may be equal to (1×2) or (2×1) elementary emission zones.
Each elementary emission zone Zij may be associated with a number mxij of waveguides gpij, with mxij∈[1, Mx], and with a number myij of electrodes eqij, with myij∈[1, My], as illustrated in the appended drawings.
An elementary emission zone Zij therefore comprises a subset of (mxij×myij) emission points EPpqij defined among the (Mx×My) emission points EPpq of the emission surface S.
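The mapping of an emission point (p, q) to its elementary emission zone Zij can be sketched as follows, assuming a uniform discretization of the emission surface; the function name and the floor-based index mapping are illustrative, not part of the claimed device:

```python
def zone_of_point(p: int, q: int, Mx: int, My: int, Lx: int, Ly: int):
    """Map 1-based emission point indices (p, q) to 1-based zone indices (i, j),
    assuming the Mx x My points are split uniformly among Lx x Ly zones."""
    i = (p - 1) * Lx // Mx + 1
    j = (q - 1) * Ly // My + 1
    return i, j

# 160 x 120 emission points split into 16 x 12 zones (illustrative values)
assert zone_of_point(1, 1, 160, 120, 16, 12) == (1, 1)      # first corner
assert zone_of_point(160, 120, 160, 120, 16, 12) == (16, 12)  # last corner
assert zone_of_point(11, 1, 160, 120, 16, 12) == (2, 1)     # second column of zones
```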
Each elementary emission zone Zij may be associated with a finite number of emission point distributions, which are then denoted EPDuvij. Each emission point distribution EPDuvij of an elementary emission zone Zij consists of one or more emission points EPpqij determined only among the (mxij×myij) emission points EPpqij of the elementary emission zone Zij.
Advantageously, for each elementary emission zone Zij, the distribution of the emission points EPpqij for each emission point distribution EPDuvij may be determined randomly or pseudo-randomly. For example and without limitation, the distribution of the emission points EPpqij may be determined from the random or pseudo-random selection of mxij indices p and myij indices q associated respectively with the various waveguides gpij and with the various electrodes eqij making up the elementary emission zone Zij.
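One simple variant of this pseudo-random selection can be sketched in Python; the sampling of whole (p, q) cells shown here, the function name and the seed are illustrative assumptions:

```python
import random

def pick_epd_points(zone_rows, zone_cols, n_points, seed=0):
    """Pseudo-randomly pick n_points emission points (p, q) for one distribution
    EPD inside an elementary zone, avoiding any spatial periodicity.
    zone_rows / zone_cols: waveguide and electrode indices of the zone."""
    rng = random.Random(seed)                       # seeded for reproducibility
    cells = [(p, q) for p in zone_rows for q in zone_cols]
    return rng.sample(cells, n_points)              # distinct points, no repeats

# a 30 x 30 zone, 50 emission points per distribution (illustrative values)
pts = pick_epd_points(range(1, 31), range(1, 31), 50)
assert len(set(pts)) == 50                          # all points distinct
```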
Each emission point distribution EPDuvij is associated with the point projected onto the retina R, which is then denoted Ruvij. The number of emission point distributions EPDuvij of an elementary emission zone Zij is thus equal to the number (ηxij×ηyij) of pixels of the image to be projected associated with the points Ruvij projected onto the retina R by the elementary emission zone Zij. The integers ηxij and ηyij are numbers of pixels defined along the axis X and axis Y, respectively, with ηxij∈[1, Nx] and ηyij∈[1, Ny]. For example, the (ηxij×ηyij) pixels of the image to be projected may be identical for each elementary emission zone Zij of the emission surface S. The integers ηxij and ηyij may be defined based on the total number of pixels (Nx×Ny) of the image to be projected and the total number of elementary zones (Lx×Ly) discretizing the emission surface S, according to the following equations (01) and (02):

ηxij=⌈Nx/Lx⌉  (01)

ηyij=⌈Ny/Ly⌉  (02)

in which ⌈.⌉ denotes rounding up to the nearest integer.
For example, an image to be projected may comprise (400×200) pixels and an emission surface S may be discretized into (16×12) elementary emission zones Zij. In this case, each zone Zij may comprise for example (25×17) pixels of the image to be projected (that is to say points Ruvij projected onto the retina R) from a number of (25×17) corresponding emission point distributions EPDuvij.
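The pixel count per zone in this example can be verified with a short computation, assuming the image pixels are shared among the zones with a rounding-up so that every pixel is covered (the function name is illustrative):

```python
import math

def pixels_per_zone(Nx: int, Ny: int, Lx: int, Ly: int):
    """Number of image pixels (hence distributions EPD) handled by each zone,
    rounding up so that every pixel of the Nx x Ny image is covered."""
    return math.ceil(Nx / Lx), math.ceil(Ny / Ly)

# (400 x 200)-pixel image, (16 x 12) elementary zones -> (25 x 17) pixels per zone
assert pixels_per_zone(400, 200, 16, 12) == (25, 17)
```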
If, for each elementary emission zone Zij, the numbers (ηxij×ηyij) of pixels of the image to be projected are identical, the indices uij and νij corresponding to the index intervals may be defined based on the indices (i, j) of the zone Zij, the total number of pixels (Nx×Ny) of the image to be projected and the total number of elementary zones (Lx×Ly) discretizing the emission surface S, according to the following expressions (03) and (04):

uij∈[(i−1)×ηxij+1, i×ηxij]  (03)

νij∈[(j−1)×ηyij+1, j×ηyij]  (04)
For each elementary emission zone Zij, an emission point EPpqij of an emission point distribution EPDuvij may be associated with a wave vector {right arrow over (κ)}uvij contained in an angular domain, represented by the angular ranges Δφ and Δψ, and defined based on the total number of elementary zones (Lx×Ly) discretizing the emission surface S, according to the following equations (05) and (06):

Δφ=θx/Lx  (05)

Δψ=θy/Ly  (06)
In the above equations (05) and (06), the quantities θx and θy correspond to angular projections respectively onto the axis X and the axis Y of the viewing cone θ (also called field of view or FOV) of the emission surface S of the image projection device 10 by an eye, as illustrated in the appended drawings.
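Assuming each elementary zone covers an equal share of the viewing cone, the angular ranges per zone can be sketched as follows; the function name and the example values (12° × 6° field of view, 16 × 12 zones) are illustrative:

```python
def zone_angular_domain(theta_x: float, theta_y: float, Lx: int, Ly: int):
    """Angular ranges (delta_phi, delta_psi) of one elementary zone, assuming
    the field of view is split uniformly among the Lx x Ly zones."""
    return theta_x / Lx, theta_y / Ly

dphi, dpsi = zone_angular_domain(12.0, 6.0, 16, 12)
assert (dphi, dpsi) == (0.75, 0.5)   # degrees covered by each elementary zone
```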
In particular, for each elementary emission zone Zij, the angular domain (Δφ, Δψ) may be directed along a central vector {right arrow over (κ)}ij oriented from the elementary emission zone Zij to a central point of impact tij in the plane (X,Y) at the pupil P. The angular direction of the central vector {right arrow over (κ)}ij in the reference system (X,Y,Z) is characterized by angles φij and ψij in the planes (X,Z) and (Y,Z), respectively.
Advantageously, for each emission point EPpqij corresponding to the intersection of a waveguide gp and an electrode eq of an emission point distribution EPDuvij of an elementary emission zone Zij (as shown in the appended drawings), the angular directions of the emitted light wave may be defined according to the following equations (09) and (10):
In the above equations (09) and (10), the quantity δ corresponds to the angular resolution of the image and depends on the desired level of sharpness on the display. For example, an angular resolution δ considered to be a visual limit value of angular resolution (that is to say a separable limit) corresponds to an angle of one arc minute, that is to say (1/60)° ≈ 0.02° (converted into radians in the calculations in the above equations). It should be noted that the above equations (09) and (10) are therefore obtained with a small-angle approximation (that is to say tan δ≈δ). In this case, it may be considered that, below this value, the eye is not able to perceive the pixelation of an image to be projected onto the retina R.
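The arc-minute resolution limit and the validity of the small-angle approximation at that scale can be checked numerically (the tolerances chosen below are illustrative):

```python
import math

ARC_MINUTE_DEG = 1.0 / 60.0              # one arc minute, ~0.0167 degrees
delta = math.radians(ARC_MINUTE_DEG)     # ~2.9e-4 rad, visual acuity limit

# small-angle approximation: tan(delta) ~ delta holds to better than 1e-9 rad here
assert abs(math.tan(delta) - delta) < 1e-9
assert abs(ARC_MINUTE_DEG - 0.0167) < 1e-3   # the "~0.02 degrees" order of magnitude
```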
For each elementary emission zone Zij, the beams emitted by the emission points EPpqij may impact the plane of the pupil P around the central point of impact tij over an impact range wij. For example and without limitation, the impact range wij may be considered to be uniform for all of the elementary emission zones Zij of the emission surface S. The range wij of the impacts of the emission points EPpqij in the plane of the pupil P may notably take into account the range of the angular domain of the elementary emission zone Zij and the contribution of a diffraction of the emission points.
The coordinates txij and tyij of the central point of impact tij on the axis X and the axis Y, respectively, in the plane of the pupil P, may be expressed according to the following equations (11) and (12):
In the above equations (11) and (12), the quantities demxij and demyij correspond to the size of the elementary emission zone Zij along the axis X and the axis Y, respectively, as shown in the appended drawings.
The discretization of the emission surface S into elementary emission zones Zij may be uniform, the quantities demxij and demyij then being values of constant sizes for all of the elementary zones Zij, and able to be defined based on the size of the emission surface S and the total number of elementary zones (Lx×Ly) discretizing the surface S, according to the following equations (13) and (14):

demxij=Demx/Lx  (13)

demyij=Demy/Ly  (14)
For example and without limitation, the quantities demxij and demyij may be equivalent to one and the same quantity ϕ equal to 0.5 mm. In the above equations (13) and (14), the quantities Demx and Demy correspond to the size of the emission surface S along the axis X and the axis Y, respectively, as shown in the appended drawings.
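For a uniform discretization, the zone sizes follow directly from the surface dimensions and the zone counts; a short check, assuming for illustration an 8 mm × 6 mm emission surface split into 16 × 12 zones (this pairing of example values is an assumption, not fixed by the text):

```python
def zone_size_uniform(Dem_x: float, Dem_y: float, Lx: int, Ly: int):
    """Size (mm) of each elementary zone for a uniform discretization:
    the emission surface dimensions divided by the zone counts."""
    return Dem_x / Lx, Dem_y / Ly

# 8 mm x 6 mm surface, 16 x 12 zones -> 0.5 mm square zones
assert zone_size_uniform(8.0, 6.0, 16, 12) == (0.5, 0.5)
```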
The discretization of the emission surface S into elementary emission zones Zij may also be uniform, the number mxij of waveguides gp in an elementary zone and the number myij of electrodes eq in an elementary zone then being constant values for all of the elementary zones Zij and being able to be defined based on the total number of (Mx×My) emission points of the emission surface S and the total number of elementary zones (Lx×Ly) discretizing the surface S, according to the following equations (17) and (18):

mxij=Mx/Lx  (17)

myij=My/Ly  (18)
Alternatively, the discretization of the emission surface S into elementary zones Zij may be non-uniform, the number mxij of waveguides gp and the number myij of electrodes eq being defined for example according to increasing and/or decreasing functions as a function of the axes X and Y, respectively. The non-uniformity of the discretization of the emission surface S into elementary zones Zij may similarly result in non-homogeneous quantities demxij and demyij as a function of the axes X and Y, respectively.
In particular, it should be noted that the quantities associated with the emission surface S and with the various elementary zones are expressed, in the above equations, using the small-angle approximation for simplification. This approximation is applicable for a small emission surface S size (that is to say a small viewing cone θ) and/or an emission surface S centred with respect to the optical axis Z of the gaze of the user of the system. Alternatively, if the small-angle approximation might no longer be applicable (that is to say tan δ≠δ), the formulas of the above equations, notably (09), (10), (15) and (16), are more complex. In this case, some peripheral elementary zones of an emission surface S may be widened in the plane (X,Y) for high viewing angles compared to the elementary zones referred to as central zones with respect to the optical axis Z of this same emission surface S.
However, this effect is partly compensated for by the orientation of the eye in its orbit, leading to a displacement of the position of the pupil P, as illustrated by the circle P′ shown in dotted lines in the appended drawings.
Advantageously, in order to extend the viewing cone θ of the image projection device 10 for an eye, that is to say the angular range of image projection, the emission surface S may be reproduced in the plane (X,Y). In particular, the image projection device 10 may comprise a finite number of distinct emission surfaces Sxy distributed in the plane (X,Y). For each emission surface Sxy, each hologram hpq associated with an emission point EPpqij of an emission point distribution EPDuvij may encode the angular directions φuvij and ψuvij determined with respect to an optical axis Zxy rotated in the planes (X,Z) and (Y,Z) and centred with respect to the point Prxy corresponding to the position of the centre of the pupil of the eye after rotation in its orbit and in the angular direction of the zone of the emission surface Sxy in question.
In particular, in the case of the optical axes Zxy rotated with respect to the optical axis Z of the gaze of the user of the system, the small-angle approximation might no longer be applicable (that is to say tan δ≠δ) for high viewing angles. In this case, the emission surfaces Sxy may be widened in the plane (X,Y) for high viewing angles compared to what is referred to as a central emission surface with respect to the optical axis Z.
In addition, to extend the size of the “eye box”, that is to say the zone in which the user (and therefore their eye) is able to move in front of the image projection device 10 while still viewing the entire image, the emission surface S may also be duplicated in the plane (X,Y). In this case, the image projection device 10 may comprise a finite number of copies of one and the same emission surface S distributed in the plane (X,Y). For each distributed emission surface S, each hologram hpq associated with an emission point EPpqij of an emission point distribution EPDuvij may encode the angular directions φuvij and ψuvij determined with respect to the optical axis Z in the planes (X,Z) and (Y,Z) and centred with respect to the point Ptxy corresponding to the position of the centre of the pupil of the eye after translation in the plane (X,Y).
The discretization of the emission surface S into elementary emission zones Zij makes it possible to significantly increase the densification of the emission points EPpqij of one and the same emission point distribution EPDuvij. In particular, the density ρ of the emission points may be defined according to the following expression (19):
The number ηem/EPD of emission points EPpq of one and the same emission point distribution may be approximated based on the size of the emission surface S, the surface area sem of the emission point EPpq, and the number of pixels (Nx×Ny) of the image to be projected, as defined according to the following expression (20):
For example and without limitation, the number ηem/EPD may be equal to around 50. For an image to be projected of (400×200) pixels and a discretization of the emission surface S into (16×12) elementary emission zones Zij, the density ρ of emission points EPpqij of one and the same emission point distribution EPDuvij may be equal, according to the above equation (19), to around 0.24%. In the above equation (20), the surface area sem of the emission point EPpq may be defined based on the diameter ω1 of the emission point EPpq according to the following expression (21):

sem=π×ω1²/4  (21)
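Assuming the emission point is approximately a disc of diameter ω1, its surface area sem can be computed as follows; the 3 μm diameter used below is an illustrative value, not taken from the text:

```python
import math

def emission_point_area(omega_1: float) -> float:
    """Surface area s_em of a circular emission point of diameter omega_1
    (same unit squared as omega_1)."""
    return math.pi * omega_1 ** 2 / 4.0

# e.g. a 3 um diameter emission point -> ~7.07 um^2
assert abs(emission_point_area(3.0) - 7.0686) < 1e-3
```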
It should be noted that there is at least one criterion for quantifying the quality of the image formed on the retina R by self-focusing of the emission points EPpq of one and the same emission point distribution EPDuv. In particular, to quantify the efficiency of the self-focusing, a power ratio denoted γ may be defined based on the spatial distribution of the intensity of the impulse response (referred to as the point spread function, or PSF), formed by an emission point distribution EPDuv at a point Ruv on the retina R of the eye, as shown in the appended drawings.
In the above equation (22), the quantity P1 corresponds to the power of the central intensity peak formed by the emission point distribution EPDuv, that is to say the area of the central peak shown in light grey in the appended drawings, and the quantity P2 corresponds to the power of the intensity noise formed around this central peak.
The power ratio γ is optimum for a zero power P2 of the intensity noise formed by the emission point (value of γ tending towards 0).
As illustrated by the image simulation results in the appended drawings, the discretization of the emission surface S into elementary emission zones Zij improves the power ratio γ and therefore the contrast of the projected image.
The manufacture of an image projection device 10 is implemented based on a method comprising a design phase and a phase of physically manufacturing the device 10 thus designed.
The phase of designing the image projection device may for example be computer-implemented.
In step 120, a discretization of the emission surface S into (Lx×Ly) elementary emission zones Zij may be applied, such that each elementary emission zone Zij comprises a subset of (mxij×myij) emission points EPpqij.
In step 140, a distribution of the subset of (mxij×myij) emission points EPpqij into (ηxij×ηyij) emission point distributions EPDuvij is carried out.
In step 160, for each elementary emission zone Zij, the (ηxij×ηyij) emission point distributions EPDuvij are assigned to (that is to say associated with or allocated to) (ηxij×ηyij) pixels of the image to be projected.
In step 180, for each emission point distribution EPDuvij, the direction of the wave vector {right arrow over (κ)}uvij of the resultant light wave emitted by the emission points EPpqij of the emission point distribution EPDuvij is determined such that the direction {right arrow over (κ)}uvij is contained in an angular domain directed along a central vector {right arrow over (κ)}ij defined based on the number (Lx×Ly) of elementary emission zones Zij discretizing the emission surface S and the position of the elementary emission zone Zij on the emission surface S.
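Steps 120 to 160 above can be sketched as a small data-structure exercise. The sketch below is a hypothetical illustration, not the claimed method: the splitting rule (a shuffled round-robin) and all names are assumptions, and step 180 (wave vector directions) is omitted for brevity.

```python
import random

def design_phase(Lx, Ly, m_per_zone, n_dist_per_zone, seed=0):
    """Illustrative sketch of steps 120-160: discretize the emission
    surface into Lx x Ly zones Zij, split each zone's emission points
    into distributions EPDuv, and assign each distribution to one
    pixel of the image to be projected."""
    rng = random.Random(seed)
    zones = {}
    for i in range(Lx):
        for j in range(Ly):
            points = list(range(m_per_zone))   # emission points EPpq of Zij
            rng.shuffle(points)                # pseudo-random, aperiodic split
            # step 140: n_dist_per_zone distributions per zone;
            # step 160: distribution u encodes pixel u of zone (i, j)
            zones[(i, j)] = {u: points[u::n_dist_per_zone]
                             for u in range(n_dist_per_zone)}
    return zones

zones = design_phase(Lx=4, Ly=3, m_per_zone=100, n_dist_per_zone=10)
# each emission point of a zone belongs to exactly one distribution
print(sorted(p for d in zones[(0, 0)].values() for p in d) == list(range(100)))  # True
```

The partition property shown by the final check mirrors the constraint of sub-steps 122-124 that an emission point is associated with a single distribution.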
In sub-step 102, the resolution of the image to be projected may be determined. For example, the resolution (Nx×Ny) of the image to be projected may be chosen to be equal to (400×200). In sub-step 104, the size of the field of view may be determined. For example, the quantities θx and θy of the field of view may be chosen to be equal to 12° and 6°, respectively. In sub-step 106, the eye relief distance may be determined. For example, the eye relief distance Zer may be chosen to be equal to 20 mm. In sub-step 108, the size of the emission surface S may be calculated based on the values fixed in sub-steps 102 to 106. According to the equations (15) and (16), the quantities Demx and Demy of the emission surface S may be equal to around 8 mm and 6 mm, respectively.
In sub-step 110, the size of the elementary emission zones Zij may be determined. For example, the quantities demxij and demyij of the elementary emission zones may be chosen to be equal to 500 μm. In sub-step 112, the number of elementary emission zones Zij may be calculated from the quantities Demx and Demy and the values fixed in sub-step 110. According to the equations (13) and (14), the number (Lx×Ly) of elementary emission zones Zij may be equal to around (19×12). The number of pixels defined per elementary emission zone Zij may then be deduced from the results of sub-step 110 and from the values fixed in sub-step 102. For example, the number (ηxij×ηyij) of pixels, that is to say emission point distributions, to be encoded per elementary emission zone may be equal to around 350.
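The worked values of sub-steps 102 to 112 can be reproduced numerically. Since equations (13) to (16) are not reproduced in this excerpt, the sketch below assumes that the emission surface size is the pupil diameter plus the field-of-view footprint at the eye relief distance, with an assumed pupil diameter of about 4 mm; this form and that constant are assumptions, chosen only because they recover the stated ~8 mm and ~6 mm.

```python
import math

# Fixed design parameters from sub-steps 102-106
Nx, Ny = 400, 200             # image resolution (pixels)
theta_x, theta_y = 12.0, 6.0  # field of view (degrees)
Zer = 20.0                    # eye relief distance (mm)

# Assumed form of equations (15)-(16): emission surface size =
# pupil diameter + field-of-view footprint at the eye relief.
# The ~4 mm pupil diameter is an assumption, not from the source.
D_pupil = 4.0
Demx = D_pupil + 2 * Zer * math.tan(math.radians(theta_x / 2))
Demy = D_pupil + 2 * Zer * math.tan(math.radians(theta_y / 2))
print(round(Demx, 1), round(Demy, 1))  # 8.2 6.1 -> around 8 mm and 6 mm

# Sub-step 112 gives around (19 x 12) zones in the text; each zone
# must then encode about (Nx * Ny) / (Lx * Ly) pixels.
Lx, Ly = 19, 12
pixels_per_zone = Nx * Ny / (Lx * Ly)
print(round(pixels_per_zone))  # 351 -> around 350 pixels per zone
```

The final figure matches the ~350 emission point distributions per elementary emission zone stated above.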
Advantageously, in sub-step 114, the widths gy of the waveguides gp and ex of the electrodes eq, and their configurations, may be determined. In sub-step 116, the numbers (mxij×myij) of waveguides gp and electrodes eq per elementary emission zone Zij may be calculated based on the results of sub-step 114. In sub-step 118, the maximum number ηem/EPD of emission points per emission point distribution may be calculated based on the numbers of waveguides gp and electrodes eq, and on the resolution of the image to be projected (or the number (ηxij×ηyij) of pixels, that is to say emission point distributions, per elementary emission zone Zij, for example).
In sub-step 122, the number of waveguides gp and the number of electrodes eq may be determined by random and/or pseudo-random drawing, in particular in order to obtain (or approximate) the number ηem/EPD of emission points. For example, the maximum number ηem/EPD of emission points per emission point distribution calculated in sub-step 118 may be equal to 50. The number of waveguides may be equal to 10 and the number of electrodes may be equal to 5.
In sub-step 124, the position of the waveguides gp and the position of the electrodes eq among the (mxij×myij) waveguides gp and electrodes eq of the elementary emission zone Zij may be determined by random and/or pseudo-random drawing.
The sub-steps 122 and 124 may be applied sequentially for each emission point distribution EPDuvij, for example. The pseudo-random drawing carried out for a distribution EPDuvij may take into account the previous results of sub-steps 122 and 124 for one or more other emission point distributions of one and the same elementary emission zone Zij, such that a pair of a waveguide gp and an electrode eq representing a specific emission point EPpqij is able to be associated with only a single emission point distribution EPDuvij.
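The constraint of sub-steps 122 and 124 can be sketched with rejection sampling: each distribution draws its waveguides and electrodes pseudo-randomly, and a draw is rejected if any resulting (waveguide, electrode) intersection was already claimed. This is one possible implementation of the constraint, with illustrative names; the source does not prescribe rejection sampling.

```python
import random

def draw_distributions(m_x, m_y, n_guides, n_elec, n_dist, seed=0):
    """Sketch of sub-steps 122-124: for each distribution EPDuv, draw
    n_guides waveguides gp and n_elec electrodes eq pseudo-randomly,
    rejecting draws whose (gp, eq) intersections were already claimed
    by an earlier distribution of the same elementary zone."""
    rng = random.Random(seed)
    used = set()       # (gp, eq) pairs already assigned to a distribution
    dists = []
    for _ in range(n_dist):
        while True:
            guides = rng.sample(range(m_x), n_guides)
            elecs = rng.sample(range(m_y), n_elec)
            points = {(g, e) for g in guides for e in elecs}
            if not points & used:   # no emission point reused
                used |= points
                dists.append(points)
                break
    return dists

dists = draw_distributions(m_x=60, m_y=40, n_guides=10, n_elec=5, n_dist=3)
# 10 waveguides x 5 electrodes give the eta_em/EPD = 50 points of L86,
# and the distributions are pairwise disjoint by construction
print(all(len(d) == 50 for d in dists), len(set().union(*dists)) == 150)
```

With 10 waveguides and 5 electrodes per draw, each distribution indeed contains the 50 emission points mentioned in sub-step 118.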
The phase of physically manufacturing the device 10 takes into account the results from the design phase implemented beforehand. The physical manufacturing phase may comprise steps of manufacturing waveguides, diffraction gratings and electrodes using conventional techniques, such as for example, and without limitation, chemical or physical deposition techniques. The physical manufacturing phase may also comprise layering steps.
In addition, the physical manufacturing phase may comprise a step of recording holograms hpq, each hologram being associated with an emission point EPpq.
In particular, the step of recording the holograms hpq may take into account the results from step 180 of determining, for each emission point distribution EPDuvij, the direction of the wave vector {right arrow over (κ)}uvij of the resultant light wave emitted by the emission points EPpqij of the emission point distribution EPDuvij.
Recording holograms is based on the interference of two groups of light beams on the holographic film. The generated interference fringes are physically or chemically stored on the holographic film, resulting in a variation in the refractive index. The two groups of beams are conventionally referred to as ‘reference beams’ and ‘object beams’. To encode the holograms hpq of an emission point distribution EPDuv (or EPDuvij), the reference beams correspond to the light waves extracted from the diffraction gratings rpq optically coupled to the waveguides gp and to the electrodes eq selected to form the emission points EPpq (or EPpqij) of the distribution EPDuv (or EPDuvij). The hologram recording step requires an optical recording system that forms object beams having an encoding angle β with the emission surface S (or specifically the elementary emission zone Zij). The encoding angle β then determines the direction of the wave vector {right arrow over (κ)}uv (or {right arrow over (κ)}uvij). The object beams, coming from a coherent light source that is not shown in the figures, are obtained from the projection of a given mask through a converging lens (La or Lb), as shown in
The discretization of the emission surface S according to the embodiments of the invention makes it possible notably to facilitate and speed up the step of recording the holograms during the phase of physically manufacturing the device 10 compared to a non-discretized surface S. Indeed, since the emission points EPpqij of one and the same emission point distribution EPDuvij are contained within a single elementary emission zone Zij of size smaller than the overall size of the emission surface S, recording (or encoding) the direction of the wave vector {right arrow over (κ)}uvij in the holograms associated with the emission points EPpqij makes it possible to encode larger angle values. As shown by the optical hologram recording systems in
As a variant, the set of Mx waveguides gp of the image projection device 10 may be arranged in a configuration of non-rectilinear isolines in the plane defined by the emission surface S, as shown in
As used here, the expression “non-rectilinear isolines” refers to a set of at least two geometric curves f1(X,Y) and f2(X,Y), defined in a plane (X,Y), such that the distance between a point p1 belonging to the geometric curve f1(X,Y) and the geometric curve f2(X,Y) is the same regardless of the point p1, said distance being defined by the length of the shortest path between the point p1 and a point of the geometric curve f2(X,Y) according to a reference metric dref. Conversely, the distance between a point p2 belonging to the geometric curve f2(X,Y) and the geometric curve f1(X,Y) is the same regardless of the point p2, said distance being defined by the length of the shortest path between the point p2 and a point of the geometric curve f1(X,Y) according to the reference metric dref. Non-rectilinear isolines thus correspond to a set of geometric curves, such that the curves of the set are always separated in pairs by a constant distance. Such geometric curves may be parametric curves defined in the plane of the emission surface S. For a given geometric curve, an associated isoline may assume a geometric curve shape that is highly variable as a function of the value of the metric dref under consideration.
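The constant-separation property of the definition above can be checked numerically on sampled curves. The sketch below uses two concentric circles, a simple example of non-rectilinear isolines under the Euclidean metric (taken here as the reference metric dref by assumption); the helper name is illustrative.

```python
import math

def min_dist(p, curve):
    """Shortest Euclidean distance from point p to a sampled curve
    (discrete approximation of the shortest-path distance dref)."""
    return min(math.dist(p, q) for q in curve)

# Two concentric circles sampled at 100 points: every point of the
# inner circle f1 lies at the same shortest distance from f2, so the
# pair satisfies the isoline definition under the Euclidean metric.
f1 = [(math.cos(2 * math.pi * t / 100), math.sin(2 * math.pi * t / 100))
      for t in range(100)]
f2 = [(2 * x, 2 * y) for x, y in f1]    # radius-2 circle
dists = [min_dist(p, f2) for p in f1]
print(max(dists) - min(dists) < 0.01)   # True: constant separation
```

A pair of vertically offset sinusoids, by contrast, would fail this check under the Euclidean metric, which is why the admissible isoline shapes depend on the metric dref under consideration.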
Advantageously, the configuration of the electrodes on the emission surface S may be similar to the configuration of the waveguides. Thus, as shown in
If the isoline waveguide configuration in the plane (X,Y) is defined according to a constant reference metric value dref
The reference metric values dref
Advantageously, the isoline waveguide configuration and/or the isoline electrode configuration may be implemented using a fast-marching method (FMM) or level-set method (LSM).
It should be noted that the FMM and LSM methods (James Sethian, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision and Materials Science, Cambridge University Press, 1999) are numerical algorithms for solving boundary value problems of the Eikonal equation so as to track the evolution of interfaces.
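The core output of such methods is a distance map that grows outward from a seed interface. The sketch below uses Dijkstra-style propagation on a 4-neighbour grid as a simplified stand-in for true fast marching, which would instead solve |∇T| = 1 with upwind finite differences; it illustrates the front-evolution idea only, under those stated simplifications.

```python
import heapq

def distance_map(seeds, shape, step=1.0):
    """Dijkstra-style grid propagation from a seed interface: a
    simplified stand-in for the fast-marching method, which orders
    grid points by increasing arrival time in the same way."""
    ny, nx = shape
    dist = [[float("inf")] * nx for _ in range(ny)]
    heap = [(0.0, y, x) for (y, x) in seeds]
    for _, y, x in heap:
        dist[y][x] = 0.0
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue                      # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < ny and 0 <= x2 < nx and d + step < dist[y2][x2]:
                dist[y2][x2] = d + step
                heapq.heappush(heap, (d + step, y2, x2))
    return dist

# Distance from a horizontal seed line (a stand-in for an initial
# geometric curve f0) on a small 6 x 6 grid
d = distance_map(seeds=[(2, x) for x in range(6)], shape=(6, 6))
print(d[5][0])  # 3.0: three grid steps below the seed line
```

Isolines of such a distance map at fixed distance values are exactly the equidistant curves used to place the waveguides and electrodes in the steps below.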
In step 210, a calculation matrix is initialized. The calculation matrix corresponds to the emission surface S of size Demx and Demy, and is discretized according to predetermined calculation steps dcal
The calculation steps may be defined for example as a function of the width (gy) of the waveguides gp and the width (ex) of the electrodes eq, and/or as a function of the reference metric values dref
In step 230, for the isoline waveguide configuration, an initial geometric waveguide curve g0(X,Y) is determined and implemented in the initialized calculation matrix, as illustrated by the exemplary implementation in
The initial geometric waveguide and/or electrode curve f0(X,Y), also called ‘initial geometric curve’, has a plurality of construction characteristics. For example and without limitation, some construction characteristics may comprise:
Advantageously, the initial geometric curve f0(X,Y) may be defined randomly while complying with one or more of the above construction characteristics.
The initial geometric curve f0(X,Y) may for example consist of a succession of segments. In particular, the initial geometric curve f0(X,Y) may comprise segments inclined at an angle of inclination β(X, Y) with respect to the axis of extension X′ or Y′ substantially parallel to the axis X or Y. The angle of inclination β(X, Y) may correspond to a constant quantity β or to a variable function in the plane (X,Y), for example a function that is increasing, decreasing or any other function, and defined randomly, pseudo-randomly and/or as a function of the construction characteristics. Along the axis of extension X′ or Y′, the angle of inclination β(X, Y) may be positive or negative. For example, the angle of inclination may be a constant β that is alternately positive and negative. The size of each inclined segment may be determined randomly, pseudo-randomly and/or as a function of the construction characteristics. The segments of the initial geometric curve f0(X,Y) may then be connected to one another by curved segments with a radius of curvature greater than or equal to the minimum radius of curvature.
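As a minimal illustration of the constant, alternately signed inclination case, the sketch below generates the vertices of such a zigzag initial curve along the axis of extension X′. The curved junctions with a minimum radius of curvature, and any random segment sizing, are deliberately omitted; all names are illustrative.

```python
import math

def initial_curve(n_segments, seg_len, beta_deg, y0=0.0):
    """Vertices of a zigzag initial geometric curve f0(X,Y): segments
    of equal length alternately inclined at +beta and -beta with
    respect to the axis of extension X'."""
    slope = math.tan(math.radians(beta_deg))
    pts = [(0.0, y0)]
    x, y = 0.0, y0
    for k in range(n_segments):
        sign = 1 if k % 2 == 0 else -1    # alternate the inclination sign
        x += seg_len
        y += sign * slope * seg_len
        pts.append((x, y))
    return pts

pts = initial_curve(n_segments=4, seg_len=2.0, beta_deg=15.0)
print(pts[-1])  # (8.0, 0.0): the curve returns to y0 after an even
                # number of segments, staying centred about its axis
```

Keeping the curve centred about its axis of extension is one simple way to satisfy the initial-position convention of the next paragraph.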
The initial positions (0, Y0) and (X0,0), and/or similarly the “content zone” of the initial geometric curve, may be defined randomly, pseudo-randomly and/or defined for example so as to be centred in the calculation matrix initialized along the axis Y and the axis X, respectively.
In step 250, a distance function may be applied to the initial geometric curve f0(X,Y) so as to generate the isoline waveguide configuration and/or the isoline electrode configuration. Such a distance function corresponds to a succession of steps for applying one or more reference metric values dref between various curves forming a non-rectilinear isoline configuration based on the initial geometric curve f0(X,Y). For example and without limitation, one distance function used in step 250 may be the fast-marching method.
By way of illustration, in step 250 for the isoline waveguide configuration, a distance map may be calculated using the fast-marching (or level-set) method applied to the initial geometric curve f0(X,Y) in the calculation matrix. For each of the nbcal
In step 270, the value of a minimum distance dmin may be determined.
In particular, in the case of determining the isoline waveguide configuration, the minimum distance dmin corresponds to the constant and fixed spacing distance dmin-p between two waveguides gp on the emission surface S. The value of dmin-p may be defined with respect to the width (gy) of the waveguides gp, for example according to the following expression (25):
In the case of determining the isoline electrode configuration, the minimum distance dmin corresponds to the constant and fixed spacing distance dmin-q between electrodes eq on the emission surface S. The value of dmin-q may be defined with respect to the width (ex) of the electrodes eq, for example according to the following expression (26):
In step 270 as well, one or more multiples c of the value of the minimum distance dmin may be determined. The determination of these multiples may depend notably on the size Demx and Demy of the emission surface S, and on the initial position (X0, Y0) of the initial geometric curve f0(X,Y).
For example and without limitation, if the initial position (0, Y0) is fixed so as to be centred along the axis Y in the initialized calculation matrix, the multiples c × dmin may be determined for the interval indices c defined according to the following expression (27):
In step 290, the set of isolines (also called isoline paths) of a configuration may be determined based on the distance map and as a function of the multiples c of the minimum distance value dmin. In particular, for each order c, an isoline associated with the implemented initial geometric curve f0(X,Y) corresponds to the position of the points of the calculation matrix associated with the closest distance value c × dmin. Thus, in step 290, the waveguide configuration and/or the electrode configuration are determined.
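Step 290 can be sketched as a threshold pass over the distance map: for each order c, the grid points whose distance value is closest to c × dmin trace one isoline, that is to say the path of one waveguide or electrode. The tolerance rule and the names below are illustrative assumptions.

```python
def extract_isolines(dist, dmin, orders):
    """Sketch of step 290: for each order c, collect the points of the
    calculation matrix whose distance-map value is nearest to
    c * dmin -- those points trace the isoline path of one waveguide
    gp or electrode eq."""
    tol = dmin / 2.0                      # assumed nearest-value tolerance
    isolines = {}
    for c in orders:
        target = c * dmin
        isolines[c] = [(y, x)
                       for y, row in enumerate(dist)
                       for x, d in enumerate(row)
                       if abs(d - target) < tol]
    return isolines

# Distance map of a vertical seed column at x = 0 on a 4 x 8 grid:
# the distance simply equals the column index x
dist = [[float(x) for x in range(8)] for _ in range(4)]
iso = extract_isolines(dist, dmin=2.0, orders=[1, 2, 3])
print(sorted({x for _, x in iso[2]}))  # [4]: the c = 2 isoline lies
                                       # at distance 2 * dmin = 4
```

On a map produced from a non-rectilinear initial curve, the same extraction yields the curved, equidistant waveguide or electrode paths of the isoline configuration.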
The steps of generating the configurations of the set of waveguides gp and of the set of electrodes eq on the emission surface S in the design phase of the method for manufacturing the image projection device 10 make it possible in particular to minimize degradations of the projected image related to spatial periodicities between the emission points EPpq and to optimize the number of emission points EPpq on the emission surface S so as to maximize the resolution of the image formed on the retina R, so as for example to obtain an optimized contrast.
In particular, implementing fast-marching (or level-set) methods to calculate the distance maps in step 250 makes it possible to promote an aperiodic distribution of the intersections between the waveguides gp and the electrodes eq, and therefore the aperiodic distribution of the various emission points EPpq, thereby leading to an improvement in image quality.
Conversely,
Moreover, the number of intersections between the waveguides gp and the electrodes eq, and therefore of emission points EPpq, increases (notably by more than 14%) for an emission surface S associated with isoline waveguide and electrode configurations compared to an emission surface S associated with waveguide and electrode configurations based on “segment translations”, each emission surface S in this case comprising the same numbers Mx of waveguides gp and My of electrodes eq. This significant increase in emission points EPpq induces an increase in density and therefore an increase in the possible number of pixels of the image to be projected and/or an increase in the intensity of the illuminated points on the retina R, while at the same time improving the aperiodic distribution of the various emission points EPpq.
In particular,
According to some embodiments, the emission surface S may be discretized into (Lx×Ly) elementary emission zones Zij in the plane (X,Y), as illustrated by
Advantageously, for each elementary emission zone Zij, the isoline waveguide sub-configuration and/or the isoline electrode sub-configuration may be generated based on the application of the fast-marching (or level-set) method as described notably by the steps of the design phase for generating the configurations.
For example and without limitation, for an elementary emission zone Zij, a non-rectilinear isoline sub-configuration may be formed from an initial geometric curve extending in the plane of the emission surface S on either side of the elementary emission zone Zij, from an intermediate initial position to an intermediate final position that are arranged along an axis of extension X′ or Y′ substantially parallel to the axis X or to the axis Y.
In the embodiments described above, each emission point EPpq is formed from a hologram hpq inscribed in a holographic film H and illuminated by light extracted from a light guide gp. In these cases, each hologram hpq makes it possible to control the phase and to control the angle of the light wave extracted at the diffraction grating rpq. Other alternatives for forming (or configuring) the set of orientation elements and therefore the emission points may be envisaged.
In particular, the fourth layer (that is to say the holographic film H) may be replaced by a spatial light modulator (SLM) arranged facing the diffraction grating. An SLM located at an emission point may be formed by liquid crystals, for example. The SLM may then be configured (that is to say manufactured) to control the phase of the light wave extracted at the diffraction grating rpq. In this case, each diffraction grating rpq may be configured (that is to say manufactured) to control the angle of the light wave in addition to extracting the light wave. As a variant, the fourth layer may additionally comprise a holographic film whose holograms hpq are configured (that is to say manufactured) to control only the angle of the light wave.
It should be noted that some features of the invention may have advantages when considered separately.
Those skilled in the art will readily understand that some method steps and sub-steps described above may be carried out simultaneously and/or in a different order, for example in an order defined based on the characteristics of the image projection device 10.
The device and the methods described above according to the embodiments of the invention or sub-elements of this system may be implemented in various ways using hardware, software or a combination of hardware and software, notably in the form of program code able to be distributed in the form of a program product, in various forms.
The invention is not limited to the embodiments described above by way of non-limiting example. It encompasses all variant embodiments that might be envisaged by those skilled in the art. In particular, those skilled in the art will understand that the invention is not limited to the various elementary emission zones and to the various isoline configurations of the image projection device described by way of non-limiting example. In particular, some embodiments of the invention may be combined.
Number | Date | Country | Kind |
---|---|---|---|
2303528 | Apr 2023 | FR | national |