This application claims priority to European Patent Application No. 23 180 876.7 filed Jun. 22, 2023, the disclosure of which is incorporated herein by reference.
The present disclosed subject matter relates to a display apparatus comprising a light source configured to emit a light beam, a light source driver configured to drive the light source according to pixels of an image frame to be displayed on an image area within a frame duration, a mirror assembly with one or more mirrors configured to oscillate and deflect the light beam towards the image area according to a scan pattern, and a mirror driver configured to drive the mirror assembly according to said scan pattern.
Display apparatuses are commonly used in virtual reality (VR) or augmented reality (AR) glasses, helmets or head-up displays (HUDs) for a broad range of applications such as navigation, training, entertainment, education or work. A light source driver drives a light source to emit a mono- or multicoloured light beam carrying an image (frame) comprised of pixels onto a mirror assembly having one or more moving micro-electro-mechanical-system (MEMS) mirrors driven by a mirror driver. The mirror assembly has, e.g., one MEMS mirror oscillating about two axes or two MEMS mirrors each oscillating about a respective axis, to deflect the light beam into subsequent directions (angles) towards an image area, one direction (angle) per pixel of the image. In the following, a mirror assembly having one single mirror is described; however, the following applies equally to a mirror assembly having more than one mirror.
In raster scanning, the mirror oscillates fast about a vertical axis and slowly about a horizontal axis to sweep the directions and, thus, scans the light beam over the pixels of the image area column by column and line by line. For the fast axis oscillation, the mirror can be driven in resonance with the natural harmonics of its articulation. However, for the slow sweep about its other axis the mirror needs to be driven forcibly, off its resonance frequency, which either requires more power and a larger drive system or limits the scanning speed and hence the per-pixel refresh rate and frame rate (which is the inverse of the frame duration).
To overcome these miniaturisation and speed limits of raster scanning, other scan patterns may be employed. For instance, in so-called Lissajous scanning the mirror is driven according to a Lissajous pattern to oscillate resonantly—or near resonance—about both axes. The frequencies of oscillation about the two axes are greater than the frame rate and the beginnings of their respective oscillation periods usually meet only every one or more frames. In this way, each image frame is “painted” with the very complex, “dense” Lissajous pattern on the image area.
With Lissajous scanning, higher speeds of the light beam along the Lissajous pattern and hence higher frame rates can be achieved with low driving powers and small actuators because the resonance of the MEMS mirror is exploited. However, current Lissajous scanners still suffer from a complex and slow synchronisation of the light source driver with the mirror assembly movement via a frame buffer that stores the image frame and feeds the light source driver with pixels. To synchronise the pixel feeding with the MEMS mirror position, the mirror driver periodically provides a synchronisation signal indicating the current mirror position within the Lissajous pattern to the frame buffer. The frame buffer identifies the pixel currently needed for the indicated mirror position, retrieves that pixel and feeds it to the light source driver. While this setup ensures that each pixel provided to the light source driver matches the current MEMS mirror position, the buffer requires high processing power for identifying the currently needed pixel and for accessing the memory locations in the buffer that are scattered according to the Lissajous pattern.
As a result, the pixel feeding rate is limited by the buffer's size and latency. Displaying image frames at a high resolution and/or a high frame rate requires a large, low-latency buffer, which is expensive. Moreover, with the buffer's limited processing power, an on-the-fly adaptation of the pixels, e.g., of their colour or intensity values to account for changed ambient lighting or of their individual durations to account for geometric distortions of the displayed image, is impossible to implement.
It is an object of the present disclosed subject matter to provide a display apparatus which allows for displaying an image frame with a high resolution and/or at a high frame rate.
This object is achieved with a display apparatus, comprising a light source configured to emit a light beam;
The present display apparatus is based on a separation of time-critical real-time components, like the mirror driver, the light source driver and the buffer, that need to be exactly synchronised to one another, from a central processing unit (CPU) that may only be loosely synchronised to the real-time components. The CPU thus gains valuable headroom or “slack” for the task of determining the play-out order of the pixels and transferring the pixels in that order to the buffer. The CPU can, hence, be a commercially available general-purpose CPU that does not need to be real-time capable. Provided that the sequence of pixels is determined sufficiently fast so that the buffer is sufficiently filled for each next feeding, the CPU is free to perform additional tasks, e.g., computations of dynamic brightness or distortion corrections by altering the pixels of the image frame held in the memory on-the-fly.
The buffer receives the pixels from the CPU already in the correct order, i.e. as they are to be played out according to the scan pattern. The buffer can, thus, retrieve the buffered pixels with a fast sequential, contiguous (“linear”) buffer access and quickly feed them to the light source driver. Furthermore, as the buffer buffers the pixels in the correct order, it need not be synchronised each time a new pixel is to be played out but, e.g., only when the play-out of several pixels (a “batch” of pixels) shall be (re-)synchronised to the mirror movement. Hence, a synchronisation or trigger signal may be sent less often to the buffer. As a result, the buffer is relieved of processing frequent synchronisation signals and of identifying scattered memory addresses when retrieving the pixels from the buffer. The buffer can feed the pixels to the light source driver at a higher rate, and the present display apparatus is capable of displaying image frames with a higher resolution and/or a higher frame rate.
The scan pattern may be a non-raster scan pattern, e.g., a spiral pattern, or even a raster scan pattern. In a beneficial embodiment the scan pattern is a Lissajous pattern, which makes it possible to exploit resonances of the mirrors of the mirror assembly and, hence, to achieve higher speeds of the light beam and higher frame rates.
The mirror driver may synchronise the feeding of the pixels from the buffer, one by one or in batches, only once in a frame period. In an optional embodiment the mirror driver is configured to synchronise said feeding at least twice per frame duration. This ensures a tight synchronisation of the buffer read-out and light source driving to the mirror movement.
In an advantageous variant of this embodiment the mirror driver is configured to synchronise the buffer each time a periodic driving signal for driving one of said one or more mirrors about an axis reaches a predetermined level. In this way, the mirror driver employs the periodic driving signal to synchronise the buffer periodically, e.g., at every zero-crossing, maximum, minimum and/or turning point of a sine or cosine driving signal. Such a periodic and, hence, regular and more predictable synchronisation allows simpler circuitry in the buffer controller and faster processing in the buffer.
In a favourable embodiment the CPU is configured to transfer the sequence of pixels in at least two successive segments. By using several smaller segments instead of one large segment the buffer may be smaller and faster. For example, the buffer may be cost-efficiently embodied as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), and/or may even be integrated into the mirror driver or the light source driver.
The loose synchronisation of the CPU to the real-time components may be established in many ways. For instance, the CPU may be synchronised by the mirror driver, by means of a common system clock, by transferring a new segment every n-th clock cycle of the CPU, etc. The disclosed subject matter provides for two optional embodiments of a synchronisation of the CPU to the real-time components.
In the first optional embodiment the CPU is configured to transfer a new one of said segments to the buffer when a filling level of the buffer falls below a predetermined threshold. Thereby, the CPU is—via the buffer—indirectly synchronised by the mirror driver and, thus, in approximate synchronism with the mirror movement. Moreover, as the transfer of a new segment depends on the filling level of the buffer, both buffer overflow and buffer underflow are efficiently avoided.
In the second optional embodiment the CPU is configured to transfer a new one of said segments to the buffer when a predetermined time interval has elapsed. Thereby, the segments are transferred at a constant frequency, allowing the use of simple and fast timing circuitry in the buffer controller and the display of the image frame with a particularly high resolution and/or frame rate.
In a favourable variant of the second optional embodiment each segment comprises the number of pixels fed between two synchronisations, the time interval is equal to or smaller than a shortest duration between two synchronisations, and the central processing unit is configured to suspend transferring a new one of said segments when the filling level of the buffer exceeds a predetermined threshold. Thereby, the segments are transferred at a constant frequency (i.e., the inverse of the time interval) corresponding to the fastest possible mirror movement (i.e., to the shortest duration between two synchronisations) such that a buffer underflow is strictly avoided. Suspending the transfer of a new one of said segments when the filling level of the buffer exceeds a predetermined threshold inhibits any buffer overflow.
In a further embodiment the CPU is configured to determine the sequence of pixels in successive parts. In this way, the CPU determines the pixels of the sequence of pixels at several instances of time, e.g., one part every n-th clock cycle of the CPU. Hence, between each two of those instances the CPU may adapt the pixels of the image frame to dynamically correct the display brightness, for instance to account for a change in ambient lighting, or to correct for geometrical distortions, e.g., due to a change in image area geometry. To this end, each part optionally comprises at least one segment such that the pixels of a determined part may be promptly transferred to the buffer, e.g., within the same or the next CPU clock cycle of its determination.
The CPU may determine the sequence of pixels based on an on-the-fly calculation of the scan pattern. For a particularly fast determination the CPU is optionally configured to store a look-up table of indices of the pixels to be successively displayed according to said scan pattern and to determine the sequence of pixels by retrieving the pixels according to the look-up table from the memory. The CPU can easily and quickly determine the sequence of pixels by accessing the look-up-table to obtain the indices of the pixels and then the memory to retrieve the pixels according to the indices.
In an advantageous variant of this embodiment the CPU has a graphics processing unit (GPU) configured to process the indices as coordinates of a first texture and to retrieve the pixels by sampling the image frame according to the first texture. The processing of the indices as coordinates of a texture exploits the sophisticated texture mapping capability of modern GPUs.
Optionally, the GPU is configured to process the image frame as a second texture. A sampling of one texture, the image frame, according to another texture, the indices, by means of the, e.g., “texture” or “texture2D” functions in the GPU language standard OpenGL Shading Language (GLSL), results in a fast and efficient determination of the sequence.
In some embodiments the image frame may be displayed by a mono-coloured light beam. For displaying a multi-coloured image frame in optional embodiments, the light beam has at least two colours, e.g. the three colours red, green, blue, and each pixel comprises a colour value for each of said colours.
In some multi-colour embodiments the light beam may be comprised of coincident partial light beams, each of a respective one of said colours, for instance by merging partial light beams emitted at different locations by different sub-light sources. In order to reduce the beam-merging optics required, in an optional multi-colour embodiment the light beam is comprised of mutually spaced partial light beams, each of a respective one of said colours.
In a first variant of this embodiment, the central processing unit is configured to determine the sequence of pixels, for each of successive positions within said scan pattern starting from said initial position, by retrieving those pixels of the image frame that are to be displayed by the partial light beams at this position and using the respective colour values of the retrieved pixels for the pixel of the sequence to be displayed at that position. In this way, each pixel of the sequence of pixels holds the correct colour values to be concurrently displayed by the different partial light beams.
In a second variant of this embodiment with parallel partial light beams, the central processing unit is configured to establish, for each of said colours, a respective offset of the indices of the pixels to be displayed by the partial light beam of that colour from the indices of the look-up table, to write, into each pixel of the image frame, the respective colour value of those pixels of the image frame whose indices are offset by the respective offset from the index of that pixel, and to retrieve the pixels according to the look-up table from the memory. Thereby, the colour values of the pixels of the image frame are “re-sorted”, i.e., the image frame is pre-processed, to be read out according to a single scan pattern following the look-up table, even though the different light beams actually follow slightly offset scan patterns. On the one hand, the pre-processing can be carried out very fast, e.g., pixel-by-pixel of the image frame with a sequential memory access, or by merging the red, green, and blue colour values of three copies of the image frame which are mutually shifted by the offset. On the other hand, the accurate retrieval along the scan pattern needs to be carried out only once. Consequently, a fast and efficient determination of the sequence of pixels is obtained for a multi-colour embodiment of the display apparatus.
For a tight integration of the display apparatus into, e.g., a temple of a frame of AR or VR glasses, the mirror driver, the light source driver, the buffer, and the central processing unit may favourably be arranged on a single printed circuit board.
The disclosed subject matter will now be described by means of exemplary embodiments thereof with reference to the enclosed drawings, in which:
The image frame 2 is displayed for at least one frame duration Tfr and may be part of a movie M or be a single image, e.g., a photo to be displayed for a longer period of time. Instead of a wall 3, the display apparatus 1 could display the light beam 4 onto any kind of image area, such as a board, projection screen, poster, the retina of an eye, an augmented reality (AR) combiner waveguide, another combiner optics, or the like. Accordingly, the display apparatus 1 may be part of a projector, AR or VR (virtual reality) glasses, a helmet, a head-up display, etc.
With reference to
The mirror assembly 7 may either comprise one MEMS mirror 8 oscillating about the horizontal and vertical axes 10, 11, or two MEMS mirrors 8 arranged one after the other in the optical path of the light beam 4, each of which then oscillates about one of the horizontal and vertical axes 10, 11.
Depending on the Lissajous pattern 5 to be displayed, Th and Tv may be chosen such that the trajectory of the light beam 4 on the image frame 2 densely covers the entire image frame 2 during the frame duration Tfr of one image frame 2. Such a “complex” or “dense” Lissajous pattern 5 can be achieved when the frequencies fh=1/Th, fv=1/Tv are greater than the frame rate ffr=1/Tfr, e.g., greater than 1 kHz or tens of kHz, and the beginnings of their respective oscillation periods meet, e.g., only once every one or more image frames 2, in particular when the frequencies fh, fv are close to each other. To this end, frequencies fh, fv with a small greatest common divisor, e.g. smaller than 10, may be employed, for example.
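Purely as an illustration of the relationship between the oscillation frequencies fh, fv and the repeat period of the Lissajous pattern 5, the frequency choice described above can be sketched in a few lines of Python; the normalised beam-position model, the function names and the example frequencies are assumptions and not part of the disclosure:

```python
import math

def lissajous_position(t, fh, fv, width, height):
    """Beam position on the image area at time t for integer oscillation
    frequencies fh, fv in Hz (normalised, purely illustrative model)."""
    x = 0.5 * width * (1.0 + math.sin(2.0 * math.pi * fh * t))
    y = 0.5 * height * (1.0 + math.sin(2.0 * math.pi * fv * t))
    return x, y

def pattern_repeat_period(fh, fv):
    """The Lissajous pattern repeats after 1/gcd(fh, fv) seconds; a small
    greatest common divisor hence yields a long, dense pattern."""
    return 1.0 / math.gcd(fh, fv)

# e.g. fh = 23996 Hz, fv = 24003 Hz: gcd = 7, so the pattern repeats only
# about every 143 ms, i.e. only once every several image frames
print(round(pattern_repeat_period(23996, 24003), 4))  # -> 0.1429
```

With integer frequencies in Hz, the pattern only closes after 1/gcd(fh, fv) seconds, so frequencies with a small greatest common divisor keep the pattern dense over one or more frame durations.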
The light source 6 may be any light source known in the art, e.g., an incandescent lamp, a gas, liquid or solid laser, a laser diode, an LED, etc. The light source 6 is driven by a light source driver 12 according to the pixels Pi of the image frame 2. In case the light source 6 displays a mono-colour, black and white, or grey scale image frame 2 with a mono-coloured light beam 4, each pixel Pi comprises a single colour value, e.g., a brightness or intensity value, and in case the light source 6 displays a multi-colour image frame 2 with a multi-coloured light beam 4, each pixel comprises several colour values, e.g., RGB values indicating the brightness or intensity of a red, green, and blue colour, YPbPr values, etc. In addition to the colour value/s, each pixel Pi may comprise a duration di (
To synchronise the light source driver 12 and the mirror driver 9, the display apparatus 1 has a buffer 13 which is connected to the light source driver 12 and the mirror driver 9. The buffer 13 buffers pixels Pi of the image frame 2 in the correct order, i.e. in the order in which they are to be displayed. The buffer 13, e.g. by means of an internal buffer controller, feeds—synchronised by the mirror driver 9—the buffered pixels Pi successively to the light source driver 12. In one embodiment the buffer 13 feeds the buffered pixels Pi in batches 14 of one or more successive pixels Pi, one batch 14 each time a synchronisation or trigger signal trig is received. In another embodiment the buffer 13 feeds the pixels Pi successively according to an internal clock of the buffer 13, which internal clock is re-synchronised with the frequencies fh, fv of the mirror driver 9 each time it receives the trigger signal trig.
In any case, the light source driver 12 drives the light source 6 according to the pixels Pi fed thereto. The buffer 13, the light source driver 12 and the mirror driver 9 are, thus, tightly synchronised and form the time-critical real-time part of the display apparatus 1.
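The interaction of the buffer 13 with the trigger signal trig and the batch-wise feeding described above can be sketched as the following illustrative software model; the class and method names, the batch handling and the pixel labels are assumptions:

```python
from collections import deque

class PixelBuffer:
    """Illustrative software model of the buffer 13: pixels arrive from
    the CPU already in play-out order, so feeding the light source driver
    is a plain sequential read."""

    def __init__(self):
        self._pixels = deque()

    def transfer(self, segment):
        # CPU side: append a segment of pre-ordered pixels
        self._pixels.extend(segment)

    def fill_level(self):
        return len(self._pixels)

    def on_trigger(self, batch_size):
        # Mirror-driver side: each trigger signal releases one batch of
        # successive pixels to the light source driver
        n = min(batch_size, len(self._pixels))
        return [self._pixels.popleft() for _ in range(n)]

buf = PixelBuffer()
buf.transfer(["P33", "P43", "P54", "P64"])  # a segment in scan-pattern order
print(buf.on_trigger(2))  # -> ['P33', 'P43']
```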
To supply the buffer 13 with the pixels Pi in said correct order the display apparatus 1 has a central processing unit (CPU) 15. The CPU 15 transforms the image frame 2, whose pixels Pi are not ordered according to the Lissajous pattern 5, to a pixel sequence 16 whose pixels Pi are ordered according to the Lissajous pattern 5. The CPU 15 transfers the pixel sequence 16 to the buffer 13 for buffering. The CPU 15 holds the image frame 2 in a memory 17, e.g. an SRAM or DRAM memory, which may be part of the CPU 15 or external therefrom. The CPU 15 determines the sequence 16 of pixels Pi of the image frame 2 to be successively displayed according to the Lissajous pattern 5, e.g., as described below with reference to
To achieve a “loose” synchronisation between the CPU 15 and the real-time components per one (or more) image frame/s 2, the CPU 15 determines the sequence 16 starting from an initial position 19 within the Lissajous pattern 5. The initial position 19 can be chosen arbitrarily within the Lissajous pattern; however, it needs to be a common reference for both the CPU 15 when determining the pixel sequence 16 and the buffer 13 when feeding the pixels Pi to the light source driver 12. For example, the initial (“reference”) position 19 can correspond to the top left pixel Pi drawn on the wall 3, or any other selected pixel Pi within an image frame 2. The repeated synchronisation of the buffer 13 can thus be considered to “start” anew whenever the light beam 4 re-visits the initial position 19. Hence, initially the CPU 15 determines and transfers a first segment 18 of the sequence 16 to the buffer 13 so that it is available there for feeding to the light source driver 12 to start the displaying of the pixels along the Lissajous pattern 5 from the initial position 19 onwards.
The mirror driver 9 may employ a variety of timing schemes to synchronise the feeding of the pixels Pi from the buffer 13, e.g., regularly at a given frequency or irregularly, only once per frame duration Tfr (to feed the whole image frame 2 in one large batch 14, not shown) or several times per frame duration Tfr to feed the image frame 2 in several smaller batches 14 (
It shall be noted that the pixels Pi that are displayed between each two synchronisations may be regarded as a “line” BL of pixels Pi and the buffer 13 as a line buffer buffering one or more lines BL of pixels Pi. Two lines BL need not necessarily comprise the same number of pixels Pi due to the Lissajous oscillation. The CPU 15 may pad or discard pixels Pi in the sequence 16 or the segments 18 to generate lines BL for the buffer 13 that each have the same number of pixels Pi to simplify the implementation of the buffer 13 as a line buffer.
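The padding of the sequence 16 into equal-length lines BL might, purely as a sketch, look as follows; the function name, the padding value and the fixed line length are assumptions, and in the apparatus the line boundaries are set by the synchronisations rather than by a fixed count:

```python
def to_equal_lines(sequence, line_length, pad_pixel=None):
    """Split a pre-ordered pixel sequence into lines BL of equal length,
    padding the final line with dummy pixels (illustrative sketch; the
    CPU may also discard surplus pixels instead of padding)."""
    lines = []
    for start in range(0, len(sequence), line_length):
        line = list(sequence[start:start + line_length])
        line += [pad_pixel] * (line_length - len(line))
        lines.append(line)
    return lines

print(to_equal_lines(["Pa", "Pb", "Pc"], 2))  # -> [['Pa', 'Pb'], ['Pc', None]]
```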
The CPU 15 guarantees that the buffer 13 is always sufficiently filled. To this end, the CPU 15 may employ a variety of timing schemes to transfer the sequence 16, be it once per frame duration Tfr in one large segment 18 or several times per frame duration Tfr in several smaller segments 18, to the buffer 13.
In a first embodiment shown with a dashed line 22 in
In a second embodiment shown in
In a third exemplary embodiment, the CPU 15 transfers a new segment 18 each time a predetermined time interval Ttrs has elapsed, e.g., every n-th cycle of the clock of the CPU 15, and thus at a constant frequency ftrs=1/Ttrs.
In a variant of the third embodiment shown in
When the MEMS mirror 8 oscillates as fast as possible, the time interval Ttrs is equal to the duration Ttrg and the buffer 13 is substantially filled at a constant level L. When the MEMS mirror 8 oscillates slower than possible, the time interval Ttrs is shorter than the duration Ttrg and the filling level L of the buffer 13 rises. To avoid a buffer overflow in such a situation, the CPU 15 detects that the filling level L exceeds a predetermined threshold 25 of, e.g., 80% and suspends transferring a new segment 18, see crossed-out arrows trs in
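The overflow protection of this variant can be sketched as a single timer-tick function; the threshold, capacity and segment length below are illustrative numbers, not values from the disclosure:

```python
def transfer_step(fill_level, capacity, segment_len, threshold=0.8):
    """One tick of the constant-rate transfer timer: a new segment is
    transferred every Ttrs unless the filling level L already exceeds the
    threshold (e.g. 80% of capacity), in which case the transfer is
    suspended to inhibit a buffer overflow."""
    if fill_level > threshold * capacity:
        return fill_level, False   # suspend this transfer
    return fill_level + segment_len, True

print(transfer_step(50, 100, 10))  # -> (60, True): transfer proceeds
print(transfer_step(85, 100, 10))  # -> (85, False): transfer suspended
```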
The CPU 15 may determine the sequence 16 of pixels Pi in any time granularity, e.g., for each image frame 2 at once or successively in subsequent parts 16₁, 16₂, . . . , generally 16ⱼ. Each part 16ⱼ may comprise one or more segments 18. Moreover, the CPU 15 may determine the sequence 16 in many ways, e.g., on-the-fly by matching positions that follow each other in time along the Lissajous pattern 5 to pixels Pi in the image frame 2 occurring at these positions.
Alternatively, with reference to
As illustrated in
In the example of
As mentioned above, each pixel Pi may optionally contain a duration di indicating how long that pixel Pi is to be displayed by the light source 6. The duration di may be provided in a duration table 32 and included into the corresponding pixel Pi of the sequence 16 when determining the sequence 16. For instance, the duration d33 of the pixel P33 to be played out first may be included in the pixel P33 (see arrow 33), the duration d43 of the pixel P43 to be played out next may be included in the pixel P43 (see arrow 34), etc. Instead of using a pre-calculated duration table 32 the durations di may be calculated by the CPU 15 in dependence of the Lissajous pattern 5 on-the-fly.
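The look-up-table read-out and the optional inclusion of the durations di can be sketched as a simple gather; the dict-based pixel representation and the index values are assumptions:

```python
def determine_sequence(image_frame, lut, durations=None):
    """Gather pixels in play-out order via a look-up table of indices;
    if a duration table is supplied, each pixel of the sequence also
    carries its duration di (illustrative sketch)."""
    sequence = []
    for i in lut:
        pixel = {"value": image_frame[i]}
        if durations is not None:
            pixel["duration"] = durations[i]
        sequence.append(pixel)
    return sequence

frame = {33: (255, 0, 0), 43: (0, 255, 0)}   # pixels P33, P43 by index
durations = {33: 1.0, 43: 1.5}               # durations d33, d43
print(determine_sequence(frame, [33, 43], durations))
```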
The coordinates ui, vi of the texture 36 follow the Lissajous pattern 5 over the image frame 2 such that the GPU 35 can retrieve the pixels Pi by sampling the image frame 2 according to the texture 36 in a hardware texture mapping unit of the GPU 35. The GPU 35 may optionally further process the image frame 2 as another texture 39 which provides for an additional speed-up. For instance, the indices i of the look-up table 26 and the image frame 2 may each be processed with the “texture” or “texture2D” instruction according to the GPU standard OpenGL Shading Language (GLSL).
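On a CPU, the effect of this texture-based sampling can be emulated with a gather operation, e.g. NumPy fancy indexing; the following sketch only illustrates the principle, not the GPU implementation, and the shapes and coordinate values are assumptions:

```python
import numpy as np

# CPU emulation (NumPy fancy indexing / gather) of the GPU texture lookup:
# the image frame is sampled according to an index texture holding (v, u)
# coordinates in play-out order. On the GPU itself this would be a
# texture()/texture2D() lookup in a GLSL shader.
H, W = 4, 4
frame = np.arange(H * W, dtype=np.float32).reshape(H, W)  # frame as texture
uv = np.array([[3, 3], [2, 3], [3, 2]])                   # (v, u) per position
sequence = frame[uv[:, 0], uv[:, 1]]                      # one gather per pixel
print(sequence)  # -> [15. 11. 14.]
```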
With reference to
In a first multi-colour embodiment shown in
In a second multi-colour embodiment shown in
In the first variant shown in
In the second variant shown in
Firstly, the CPU 15 establishes for each colour red, green and blue a respective offset OR, OG, OB of the indices i of the pixels Pi to be displayed by the respective partial light beam 4R, 4G, 4B from the indices i of the look-up table 26. For instance, for the colour red, the CPU 15 establishes a red offset OR which indicates how the indices i of the pixels Pi that are displayed by the red partial light beam 4R are offset from the indices i of the look-up table 26 at the corresponding positions pos1 within the Lissajous pattern 5. In the example shown the CPU 15 establishes one single constant red offset OR (in
Depending on the format of the indices i of the pixels Pi, the offsets OR, OG, OB may, e.g., each be an integer when each index i is an integer or each be a vector when each index i is given as a composite index. Of course, any of the Lissajous pattern copies 5R, 5G or 5B may be used as the reference Lissajous pattern 5 when generating the look-up table 26, such that one of the offsets OR, OG or OB will be zero.
The CPU 15 may establish the offsets OR, OG, OB in many ways to relate the physical mutual displacement of the light beams 4R, 4G, 4B to the image frame 2, e.g., to best fit the image frame 2 with the Lissajous patterns 5R, 5G, 5B, or to optimise an overlap of the Lissajous patterns 5R, 5G, 5B on the image frame 2.
Secondly, the colour values Ri, Gi, Bi of the pixels Pi of the image frame 2 are now rewritten (“preprocessed”). To this end, the CPU 15 writes, into each pixel Pi of the image frame 2, the colour values Ri, Gi, Bi of those pixels Pi whose indices are offset by the respective offset OR, OG, OB from the index i of that pixel Pi. In the example of
The rewriting or preprocessing of the image frame 2 may be carried out using a linear memory access, not following the Lissajous pattern 5 but the plain order of the pixels Pi in the image frame 2, i.e. P1->P2->P3->P4->P5->P6-> . . . . The rewriting may as well be carried out by merging the corresponding colour values of offset red, green and blue copies of the image frame 2.
After the image frame 2 has been preprocessed in this way, the CPU 15 determines the sequence 16 of pixels Pi therefrom as mentioned above to follow the Lissajous pattern 5, e.g., by retrieving the colour values of the pixels Pi according to the indices i of the look-up table 26.
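The preprocessing and the subsequent single read-out can be sketched as follows; a flat frame layout, integer offsets and wrap-around at the frame edges are illustrative simplifications:

```python
def preprocess_colours(frame, offset_r, offset_g, offset_b):
    """Rewrite each pixel of the frame so that a single read-out along
    one look-up table yields the colour values needed by the three
    mutually offset partial beams. frame is a flat list of (R, G, B)
    tuples (illustrative sketch)."""
    n = len(frame)
    return [
        (frame[(i + offset_r) % n][0],   # red value from red-offset pixel
         frame[(i + offset_g) % n][1],   # green value from green-offset pixel
         frame[(i + offset_b) % n][2])   # blue value from blue-offset pixel
        for i in range(n)
    ]

frame = [(10, 11, 12), (20, 21, 22), (30, 31, 32)]
print(preprocess_colours(frame, 1, 0, 2)[0])  # -> (20, 11, 32)
```

Because the rewrite visits the pixels in their plain order, it needs only a sequential memory access; the accurate retrieval along the scan pattern is then performed once on the preprocessed frame.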
Besides determining and transferring the sequence 16 to the buffer 13 the CPU 15 may perform additional tasks. The CPU 15 may, e.g., generate the image frame 2, or may adapt the colour values Ri, Gi, Bi and/or the durations di of the pixels Pi, for instance according to ambient conditions such as ambient brightness, image area geometry etc. measured by sensors (not shown) connected to the CPU 15.
The disclosed subject matter is not restricted to the specific embodiments described above but encompasses all variants, modifications and combinations thereof that fall within the scope of the appended claims.