This relates generally to electronic devices, and, more particularly, to electronic devices with displays.
Electronic devices often include displays. In some cases, displays may include lenticular lenses that enable the display to provide three-dimensional content to the viewer. The lenticular lenses may be formed over an array of pixels such as organic light-emitting diode pixels or liquid crystal display pixels.
An electronic device may include a lenticular display. The lenticular display may have a lenticular lens film formed over an array of pixels. A plurality of lenticular lenses may extend across the length of the display. The lenticular lenses may be configured to enable stereoscopic viewing of the display such that a viewer perceives three-dimensional images.
The electronic device may also include an eye and/or head tracking system. The eye and/or head tracking system uses sensors to obtain sensor data regarding the position of a viewer of the display. The captured sensor data (e.g., images) may be used to determine a viewer's eye position.
The display may have a number of independently controllable viewing zones. Each viewing zone displays a respective two-dimensional image. Each eye of the viewer may receive a different one of the two-dimensional images, resulting in a perceived three-dimensional image.
The different viewing zones may account for horizontal parallax as a viewer moves horizontally relative to the display. To prevent visible artifacts caused by vertical parallax mismatch as a viewer moves vertically relative to the display, the displayed images may be compensated based on a vertical position of the viewer.
The display may be dimmed globally based on the vertical position of the viewer. The content on the display may be rendered for a baseline viewing angle (where there is no vertical parallax mismatch). The magnitude of dimming applied to the display may increase with increasing deviation of the viewer from the baseline viewing angle.
In another possible arrangement, the display may render content that compensates for the real-time vertical position of the viewer. Content rendering circuitry may render a plurality of two-dimensional images that are each associated with a respective viewing zone. The two-dimensional images that are each associated with a respective viewing zone may be two-dimensional images of the same content at different horizontal perspectives and a single vertical perspective. The single vertical perspective may be based on the vertical eye position determined using the eye tracking system. The single vertical perspective may be updated as the vertical eye position changes to provide the image with vertical parallax that matches the vertical eye position.
The lenticular lens film may include lenticular lenses that spread light in the horizontal direction but not the vertical direction. Another option for the stereoscopic display is to include a lens film that has an array of lenses. Each lens in the array of lenses spreads light in the horizontal direction and the vertical direction. In this way, the stereoscopic display may account for both horizontal parallax and vertical parallax as the viewer moves relative to the display.
An illustrative electronic device of the type that may be provided with a display is shown in
As shown in
To support communications between device 10 and external equipment, control circuitry 16 may communicate using communications circuitry 21. Circuitry 21 may include antennas, radio-frequency transceiver circuitry, and other wireless communications circuitry and/or wired communications circuitry. Circuitry 21, which may sometimes be referred to as control circuitry and/or control and communications circuitry, may support bidirectional wireless communications between device 10 and external equipment over a wireless link (e.g., circuitry 21 may include radio-frequency transceiver circuitry such as wireless local area network transceiver circuitry configured to support communications over a wireless local area network link, near-field communications transceiver circuitry configured to support communications over a near-field communications link, cellular telephone transceiver circuitry configured to support communications over a cellular telephone link, or transceiver circuitry configured to support communications over any other suitable wired or wireless communications link). Wireless communications may, for example, be supported over a Bluetooth® link, a WiFi® link, a 60 GHz link or other millimeter wave link, a cellular telephone link, or other wireless communications link. Device 10 may, if desired, include power circuits for transmitting and/or receiving wired and/or wireless power and may include batteries or other energy storage devices. For example, device 10 may include a coil and rectifier to receive wireless power that is provided to circuitry in device 10.
Input-output circuitry in device 10 such as input-output devices 12 may be used to allow data to be supplied to device 10 and to allow data to be provided from device 10 to external devices. Input-output devices 12 may include buttons, joysticks, scrolling wheels, touch pads, key pads, keyboards, microphones, speakers, tone generators, vibrators, cameras, sensors, light-emitting diodes and other status indicators, data ports, and other electrical components. A user can control the operation of device 10 by supplying commands through input-output devices 12 and may receive status information and other output from device 10 using the output resources of input-output devices 12.
Input-output devices 12 may include one or more displays such as display 14. Display 14 may be a touch screen display that includes a touch sensor for gathering touch input from a user or display 14 may be insensitive to touch. A touch sensor for display 14 may be based on an array of capacitive touch sensor electrodes, acoustic touch sensor structures, resistive touch components, force-based touch sensor structures, a light-based touch sensor, or other suitable touch sensor arrangements.
Some electronic devices may include two displays. In one possible arrangement, a first display may be positioned on one side of the device and a second display may be positioned on a second, opposing side of the device. The first and second displays therefore may have a back-to-back arrangement. One or both of the displays may be curved.
Sensors in input-output devices 12 may include force sensors (e.g., strain gauges, capacitive force sensors, resistive force sensors, etc.), audio sensors such as microphones, touch and/or proximity sensors such as capacitive sensors (e.g., a two-dimensional capacitive touch sensor integrated into display 14, a two-dimensional capacitive touch sensor overlapping display 14, and/or a touch sensor that forms a button, trackpad, or other input device not associated with a display), and other sensors. If desired, sensors in input-output devices 12 may include optical sensors such as optical sensors that emit and detect light, ultrasonic sensors, optical touch sensors, optical proximity sensors, and/or other touch sensors and/or proximity sensors, monochromatic and color ambient light sensors, image sensors, fingerprint sensors, temperature sensors, sensors for measuring three-dimensional non-contact gestures (“air gestures”), pressure sensors, sensors for detecting position, orientation, and/or motion (e.g., accelerometers, magnetic sensors such as compass sensors, gyroscopes, and/or inertial measurement units that contain some or all of these sensors), health sensors, radio-frequency sensors, depth sensors (e.g., structured light sensors and/or depth sensors based on stereo imaging devices), optical sensors such as self-mixing sensors and light detection and ranging (lidar) sensors that gather time-of-flight measurements, humidity sensors, moisture sensors, gaze tracking sensors, and/or other sensors.
Control circuitry 16 may be used to run software on device 10 such as operating system code and applications. During operation of device 10, the software running on control circuitry 16 may display images on display 14 using an array of pixels in display 14.
Display 14 may be an organic light-emitting diode display, a liquid crystal display, an electrophoretic display, an electrowetting display, a plasma display, a microelectromechanical systems display, a display having a pixel array formed from crystalline semiconductor light-emitting diode dies (sometimes referred to as microLEDs), and/or other display. Configurations in which display 14 is an organic light-emitting diode display are sometimes described herein as an example.
Display 14 may have a rectangular shape (i.e., display 14 may have a rectangular footprint and a rectangular peripheral edge that runs around the rectangular footprint) or may have other suitable shapes. Display 14 may be planar or may have a curved profile.
Device 10 may include cameras and other components that form part of eye and/or head tracking system 18. The camera(s) or other components of system 18 may face an expected location for a viewer and may track the viewer's eyes and/or head (e.g., images and other information captured by system 18 may be analyzed by control circuitry 16 to determine the location of the viewer's eyes and/or head). This head-location information obtained by system 18 may be used to determine the appropriate direction with which display content from display 14 should be directed. Eye and/or head tracking system 18 may include any desired number/combination of infrared and/or visible light detectors. Eye and/or head tracking system 18 may optionally include light emitters to illuminate the scene. Eye and/or head tracking system may include a light detection and ranging (lidar) sensor, a time-of-flight (ToF) sensor, an accelerometer (e.g., to detect the orientation of electronic device 10), a camera, or a combination of two or more of these components. Including sensors such as a light detection and ranging (lidar) sensor, a time-of-flight (ToF) sensor, or an accelerometer may improve acquisition speeds when tracking eye/head position of the viewer.
A top view of a portion of display 14 is shown in
Display driver circuitry may be used to control the operation of pixels 22. The display driver circuitry may be formed from integrated circuits, thin-film transistor circuits, or other suitable circuitry. Display driver circuitry 30 of
To display the images on display pixels 22, display driver circuitry 30 may supply image data to data lines D while issuing clock signals and other control signals to supporting display driver circuitry such as gate driver circuitry 34 over path 38. If desired, circuitry 30 may also supply clock signals and other control signals to gate driver circuitry on an opposing edge of display 14.
Gate driver circuitry 34 (sometimes referred to as horizontal control line control circuitry) may be implemented as part of an integrated circuit and/or may be implemented using thin-film transistor circuitry. Horizontal control lines G in display 14 may carry gate line signals (scan line signals), emission enable control signals, and other horizontal control signals for controlling the pixels of each row. There may be any suitable number of horizontal control signals per row of pixels 22 (e.g., one or more, two or more, three or more, four or more, etc.).
Display 14 may sometimes be a stereoscopic display that is configured to display three-dimensional content for a viewer. Stereoscopic displays are capable of displaying multiple two-dimensional images that are viewed from slightly different angles. When viewed together, the combination of the two-dimensional images creates the illusion of a three-dimensional image for the viewer. For example, a viewer's left eye may receive a first two-dimensional image and a viewer's right eye may receive a second, different two-dimensional image. The viewer perceives these two different two-dimensional images as a single three-dimensional image.
There are numerous ways to implement a stereoscopic display. Display 14 (sometimes referred to as stereoscopic display 14, lenticular display 14, three-dimensional display 14, etc.) may be a lenticular display that uses lenticular lenses (e.g., elongated lenses that extend along parallel axes), may be a parallax barrier display that uses parallax barriers (e.g., an opaque layer with precisely spaced slits to create a sense of depth through parallax), may be a volumetric display, or may be any other desired type of stereoscopic display. Configurations in which display 14 is a lenticular display are sometimes described herein as an example.
As shown in
The lenses 46 of the lenticular lens film cover the pixels of display 14. An example is shown in
Consider the example of display 14 being viewed by a viewer with a first eye (e.g., a right eye) 48-1 and a second eye (e.g., a left eye) 48-2. Light from pixel 22-1 is directed by the lenticular lens film in direction 40-1 towards left eye 48-2, light from pixel 22-2 is directed by the lenticular lens film in direction 40-2 towards right eye 48-1, light from pixel 22-3 is directed by the lenticular lens film in direction 40-3 towards left eye 48-2, light from pixel 22-4 is directed by the lenticular lens film in direction 40-4 towards right eye 48-1, light from pixel 22-5 is directed by the lenticular lens film in direction 40-5 towards left eye 48-2, and light from pixel 22-6 is directed by the lenticular lens film in direction 40-6 towards right eye 48-1. In this way, the viewer's right eye 48-1 receives images from pixels 22-2, 22-4, and 22-6, whereas left eye 48-2 receives images from pixels 22-1, 22-3, and 22-5. Pixels 22-2, 22-4, and 22-6 may be used to display a slightly different image than pixels 22-1, 22-3, and 22-5. Consequently, the viewer may perceive the received images as a single three-dimensional image.
Pixels of the same color may be covered by a respective lenticular lens 46. In one example, pixels 22-1 and 22-2 may be red pixels that emit red light, pixels 22-3 and 22-4 may be green pixels that emit green light, and pixels 22-5 and 22-6 may be blue pixels that emit blue light. This example is merely illustrative. In general, each lenticular lens may cover any desired number of pixels each having any desired color. The lenticular lens may cover a plurality of pixels having the same color, may cover a plurality of pixels each having different colors, may cover a plurality of pixels with some pixels being the same color and some pixels being different colors, etc.
Display 14 may be viewed by both a first viewer with a right eye 48-1 and a left eye 48-2 and a second viewer with a right eye 48-3 and a left eye 48-4. Light from pixel 22-1 is directed by the lenticular lens film in direction 40-1 towards left eye 48-4, light from pixel 22-2 is directed by the lenticular lens film in direction 40-2 towards right eye 48-3, light from pixel 22-3 is directed by the lenticular lens film in direction 40-3 towards left eye 48-2, light from pixel 22-4 is directed by the lenticular lens film in direction 40-4 towards right eye 48-1, light from pixel 22-5 is directed by the lenticular lens film in direction 40-5 towards left eye 48-4, light from pixel 22-6 is directed by the lenticular lens film in direction 40-6 towards right eye 48-3, light from pixel 22-7 is directed by the lenticular lens film in direction 40-7 towards left eye 48-2, light from pixel 22-8 is directed by the lenticular lens film in direction 40-8 towards right eye 48-1, light from pixel 22-9 is directed by the lenticular lens film in direction 40-9 towards left eye 48-4, light from pixel 22-10 is directed by the lenticular lens film in direction 40-10 towards right eye 48-3, light from pixel 22-11 is directed by the lenticular lens film in direction 40-11 towards left eye 48-2, and light from pixel 22-12 is directed by the lenticular lens film in direction 40-12 towards right eye 48-1. In this way, the first viewer's right eye 48-1 receives images from pixels 22-4, 22-8, and 22-12, whereas left eye 48-2 receives images from pixels 22-3, 22-7, and 22-11. Pixels 22-4, 22-8, and 22-12 may be used to display a slightly different image than pixels 22-3, 22-7, and 22-11. Consequently, the first viewer may perceive the received images as a single three-dimensional image. Similarly, the second viewer's right eye 48-3 receives images from pixels 22-2, 22-6, and 22-10, whereas left eye 48-4 receives images from pixels 22-1, 22-5, and 22-9. Pixels 22-2, 22-6, and 22-10 may be used to display a slightly different image than pixels 22-1, 22-5, and 22-9. Consequently, the second viewer may perceive the received images as a single three-dimensional image.
Pixels of the same color may be covered by a respective lenticular lens 46. In one example, pixels 22-1, 22-2, 22-3, and 22-4 may be red pixels that emit red light, pixels 22-5, 22-6, 22-7, and 22-8 may be green pixels that emit green light, and pixels 22-9, 22-10, 22-11, and 22-12 may be blue pixels that emit blue light. This example is merely illustrative. The display may be used to present the same three-dimensional image to both viewers or may present different three-dimensional images to different viewers. In some cases, control circuitry in the electronic device 10 may use eye and/or head tracking system 18 to track the position of one or more viewers and display images on the display based on the detected position of the one or more viewers.
It should be understood that the lenticular lens shapes and directional arrows of
The X-axis may be considered the horizontal axis for the display whereas the Y-axis may be considered the vertical axis for the display. As shown in
The example herein of the display having 14 independently controllable zones is merely illustrative. In general, the display may have any desired number of independently controllable zones (e.g., more than 2, more than 6, more than 10, more than 12, more than 16, more than 20, more than 30, more than 40, less than 40, between 10 and 30, between 12 and 25, etc.).
Each zone is capable of displaying a unique image to the viewer. The sub-pixels on display 14 may be divided into groups, with each group of sub-pixels capable of displaying an image for a particular zone. For example, a first subset of sub-pixels in display 14 is used to display an image (e.g., a two-dimensional image) for zone 1, a second subset of sub-pixels in display 14 is used to display an image for zone 2, a third subset of sub-pixels in display 14 is used to display an image for zone 3, etc. In other words, the sub-pixels in display 14 may be divided into 14 groups, with each group associated with a corresponding zone (sometimes referred to as viewing zone) and capable of displaying a unique image for that zone. The sub-pixel groups may also themselves be referred to as zones.
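For illustration only, the grouping of sub-pixels into viewing zones described above might be sketched as follows. This is a minimal sketch, not the described embodiment: the column-by-column modulo assignment and the zone count of 14 are assumptions made for clarity, whereas an actual display would assign sub-pixels to zones using calibrated pixel-map data.

```python
# Minimal sketch (assumed layout): assign sub-pixel columns to viewing zones.
NUM_ZONES = 14  # assumed number of independently controllable viewing zones

def zone_for_subpixel_column(column_index: int) -> int:
    """Return the viewing zone (1..NUM_ZONES) assumed for a sub-pixel column."""
    return (column_index % NUM_ZONES) + 1

# Group the first 28 sub-pixel columns by zone; each group can then display
# the two-dimensional image associated with its zone.
groups = {zone: [] for zone in range(1, NUM_ZONES + 1)}
for column in range(28):
    groups[zone_for_subpixel_column(column)].append(column)
```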
Control circuitry 16 may control display 14 to display desired images in each viewing zone. There is much flexibility in how the display provides images to the different viewing zones. Display 14 may display entirely different content in different zones of the display. For example, an image of a first object (e.g., a cube) is displayed for zone 1, an image of a second, different object (e.g., a pyramid) is displayed for zone 2, an image of a third, different object (e.g., a cylinder) is displayed for zone 3, etc. This type of scheme may be used to allow different viewers to view entirely different scenes from the same display. However, in practice there may be crosstalk between the viewing zones. As an example, content intended for zone 3 may not be contained entirely within viewing zone 3 and may leak into viewing zones 2 and 4.
Therefore, in another possible use-case, display 14 may display a similar image for each viewing zone, with slight adjustments for perspective between each zone. This may be referred to as displaying the same content at different perspectives, with one image corresponding to a unique perspective of the same content. For example, consider an example where the display is used to display a three-dimensional cube. The same content (e.g., the cube) may be displayed on all of the different zones in the display. However, the image of the cube provided to each viewing zone may account for the viewing angle associated with that particular zone. In zone 1, for example, the viewing cone may be at a −10° angle relative to the surface normal of the display (along the horizontal direction). Therefore, the image of the cube displayed for zone 1 may be from the perspective of a −10° angle relative to the surface normal of the cube (as in
There are many possible variations for how display 14 displays content for the viewing zones. In general, each viewing zone may be provided with any desired image based on the application of the electronic device. Different zones may provide different images of the same content at different perspectives, different zones may provide different images of different content, etc.
In one possible scenario, display 14 may display images for all of the viewing zones at the same time. However, this requires emitting light with all of the sub-pixels in the display in order to generate images for each viewing zone. Simultaneously providing images for all of the viewing zones therefore may consume more power than is desired. To reduce power consumption in the display, one or more of the zones may be disabled based on information from the eye and/or head tracking system 18.
Eye and/or head tracking system 18 (sometimes referred to as viewer tracking system 18, head tracking system 18, or tracking system 18) may use one or more cameras such as camera 54 to capture images of the area in front of the display 14 where a viewer is expected to be present. The example of eye and/or head tracking system 18 including a camera 54 is merely illustrative. Eye and/or head tracking system may include a light detection and ranging (lidar) sensor, a time-of-flight (ToF) sensor, an accelerometer (e.g., to detect the orientation of electronic device 10), a camera, or a combination of two or more of these components. Including sensors such as a light detection and ranging (lidar) sensor, a time-of-flight (ToF) sensor, or an accelerometer may improve acquisition speeds when tracking eye/head position of the viewer. The tracking system may use information gathered by the sensors (e.g., sensor data) to identify a position of the viewer relative to the viewing zones. In other words, the tracking system may be used to determine which viewing zone(s) the viewer is occupying. Each eye of the user may be associated with a different viewing zone (in order to allow three-dimensional content to be perceived by the user from the display). Based on the captured images, tracking system 18 may identify a first viewing zone associated with a left eye of the viewer and a second viewing zone associated with a right eye of the viewer. Tracking system 18 may use one camera, two cameras, three cameras, more than three cameras, etc. to obtain information on the position of the viewer(s). The cameras in the tracking system may capture visible light and/or infrared light images.
Control circuitry 16 may use information from tracking system 18 to selectively disable unoccupied viewing zones. Disabling unoccupied viewing zones conserves power within the electronic device. Control circuitry 16 may receive various types of information from tracking system 18 regarding the position of the viewer. Control circuitry 16 may receive raw data from head tracking system 18 and process the data to determine the position of a viewer, may receive position coordinates from head tracking system 18, may receive an identification of one or more occupied viewing zones from head tracking system 18, etc. If head tracking system 18 includes processing circuitry configured to process data from the one or more cameras to determine the viewer position, this portion of the head tracking system may also be considered control circuitry (e.g., control circuitry 16). Control circuitry 16 may include a graphics processing unit (GPU) that generates image data to be displayed on display 14. The GPU may generate image data based on the viewer position information.
In general, electronic device 10 includes one or more cameras 54 for capturing images of an environment around the display (e.g., an area in front of the display where viewers are expected to be located). Control circuitry (e.g., control circuitry 16) within the electronic device uses the images from the one or more cameras to identify which viewing zones are occupied by the viewer. The control circuitry then controls the display accordingly based on the occupied viewing zones. The control circuitry may include hard disk drive storage, nonvolatile memory, microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, and/or application specific integrated circuits.
A camera in head tracking system 18 may capture an image of the viewer and identify the location of eyes 48-1 and 48-2. Accordingly, control circuitry in the electronic device may determine that the user's eyes are present in viewing zones 3 and 5. In response, the control circuitry controls display 14 to display the desired images in viewing zones 3 and 5. However, the other viewing zones (e.g., zones 1, 2, 4, and 6-14) are disabled. In other words, the sub-pixels of the other zones are turned off so that they do not emit light and do not consume power. This reduces power consumption within the electronic device while providing a satisfactory user experience with the active zones 3 and 5. The zones where light is emitted (e.g., zones 3 and 5 in
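As a non-limiting sketch of the zone-selection logic described above (the zone indices and data structures are assumptions for illustration; the occupied zones would be supplied by eye and/or head tracking system 18):

```python
# Minimal sketch: enable only the viewing zones occupied by the viewer's eyes
# and disable the remaining zones to reduce power consumption.
NUM_ZONES = 14  # assumed zone count

def active_zone_mask(occupied_zones):
    """Return a per-zone enable flag; unoccupied zones are turned off."""
    return {zone: (zone in occupied_zones) for zone in range(1, NUM_ZONES + 1)}

# Example: tracking reports the left eye in zone 3 and the right eye in zone 5.
mask = active_zone_mask({3, 5})
# mask[3] and mask[5] are True; sub-pixels of all other zones stay off.
```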
The active zones may be updated based on the real-time position of the viewer. For example, the viewer may shift horizontally in direction 56 as shown in
Ideally, tracking system 18 would always quickly and accurately identify the position of the viewer. This information would then be used by the control circuitry to update the display in real time, such that the activated viewing zones always align with the viewer's eyes. In practice, however, there may be latency between a viewer changing position and the display being updated accordingly. If the user changes position quickly, they may move into an inactive zone and the display will appear dark (off) until the display updates. In other scenarios, due to a variety of possible factors the tracking system 18 may lose the position of the viewer in the scene. This is sometimes referred to as tracking loss. If tracking loss occurs, the viewer may shift position to a new viewing zone without being detected by the tracking system. This again may result in the viewer shifting to a position where the display appears to be dark (even though the display should be showing content to the user).
To prevent visible artifacts caused by latency and/or tracking loss, the display may emit light for viewing zones that are not occupied.
The arrangement of
It should be noted that each zone may have a corresponding image. As shown in
Because zones 3 and 5 are displaying images C and E at full brightness, if the user shifts position to zones 3 and 5 they will immediately perceive the images C and E (which have the correct perspective for those positions) without waiting for the display to update. Therefore, the user may seamlessly transition between viewing zones without visible artifacts caused by latency, loss of viewer tracking capabilities, etc.
In
In
Of course, the viewer's second eye may be present in a zone near the viewer's first eye. Unoccupied zones that are interposed between two eyes may have a brightness dictated by the dimming profile for the closer eye, may have the highest brightness of the two magnitudes associated with each respective eye's brightness profile, etc. The number of unoccupied zones between a user's eyes may depend upon the particular display design, the distance of the user from the display, etc. Therefore, for simplicity, the zone brightness profiles (as in
The specific characteristics of the brightness profile of
In other words, the number of adjacent zones on either side of Zn in
In the step function of
As shown in
To either side of the occupied zone Zn, the brightness decreases with increasing distance from zone Zn. As shown, a brightness level of BR3 may be used one zone from the occupied zone (e.g., zones Zn−1 and Zn+1), a brightness level of BR4 may be used two zones from the occupied zone (e.g., zones Zn−2 and Zn+2), a brightness level of BR5 may be used three zones from the occupied zone (e.g., zones Zn−3 and Zn+3), and a brightness level of BR2 may be used more than three zones from the occupied zone (e.g., zones Zn−4 and Zn+4). In
This example is merely illustrative. Brightness levels BR1-BR5 may have any desired magnitudes. The brightness level BR1 may be 100% or less than 100%. Brightness level BR2 may be 0% or greater than 0%. In general, the brightness level may gradually decrease with increasing distance from the closest occupied zone. The brightness level may decrease monotonically with increasing distance from the closest occupied zone (as in
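For illustration, the brightness-versus-zone behavior described above might be sketched as follows; the specific brightness levels, the Gaussian width, and the brightness floor are assumptions, and only the general shape (full brightness at the occupied zone, decreasing brightness with distance, and taking the higher level where two eyes' profiles overlap) is taken from the description above.

```python
import math

# Minimal sketch of per-zone brightness profiles around an occupied zone.
def step_profile(zone, occupied_zone, full=1.0, reduced=0.2):
    """Step profile: full brightness at the occupied zone, a lower non-zero
    brightness at every unoccupied zone."""
    return full if zone == occupied_zone else reduced

def gaussian_profile(zone, occupied_zone, full=1.0, floor=0.1, sigma=1.5):
    """Brightness that decreases monotonically with distance from the occupied zone."""
    distance = zone - occupied_zone
    return max(floor, full * math.exp(-(distance ** 2) / (2 * sigma ** 2)))

def combined_profile(zone, occupied_zones, profile=gaussian_profile):
    """With two occupied zones (one per eye), use the higher of the two levels."""
    return max(profile(zone, occupied) for occupied in occupied_zones)
```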
In addition to using information from eye and/or head tracking system 18 to reduce power consumption, information from eye and/or head tracking system 18 may be used to increase sharpness in the display.
As previously mentioned, an image intended for a given viewing zone may not be contained exclusively within that viewing zone. Crosstalk may occur between viewing zones within the display. To mitigate crosstalk, the images for unoccupied zones may be modified based on the viewer eye position. In
In
A similar concept as in
For example, as shown in
As shown in
In
Zone 14A may display image N. Accordingly, zones 3A and 4A may also be used to display image N. This causes adjacent, non-occupied secondary zones 3B and 4B to display image N, improving the sharpness of the display. Similarly, zone 2A may be used to display image N. The secondary zone 2B that is a duplicate of zone 2A overlaps primary zone 14A. Displaying image N in zone 2A therefore ensures that image N is also displayed in zone 2B (which overlaps primary zone 14A also displaying image N). If zone 2A displayed a different image (e.g., image B), then a combination of image N and image B would be perceived by eye 48-2, resulting in an unclear image.
To summarize, secondary viewing zones may be leveraged to improve the sharpness of the display when head tracking indicates the viewer is viewing from a high viewing angle as in
The techniques described thus far ensure that an image on the display has a desired appearance as the viewer moves in the horizontal direction (e.g., between viewing zones). However, a viewer may also move in the vertical direction (e.g., along the Y-direction) while viewing display 14. If care is not taken, the viewer's movement in the vertical direction may cause undesired artifacts from vertical parallax mismatch.
In the diagram of
As shown previously in connection with
There are multiple ways to compensate the image on the display to correct for vertical parallax mismatch. The eye and/or head tracking system 18 may detect the relative position of the viewer in the vertical direction. Based on the relative position in the vertical direction, control circuitry 16 may update display 14 to compensate the display for the vertical position of the viewer.
One option for compensating the display is to dim the display as a function of the vertical position of the viewer. At an on-axis vertical viewing angle, the display may operate at full brightness. As the viewing angle in the vertical direction increases in the off-axis direction, however, the display may be dimmed by greater and greater amounts. This mitigates the negative aesthetic effect of the vertical parallax mismatch to the viewer. This dimming based on the vertical viewing angle may be performed instead of or in addition to the dimming based on the horizontal viewing zone position (as already shown and discussed in connection with
Another option for compensating the display is to update the content to account for the vertical position of the viewer. In other words, the image on the display is updated in real time based on the detected vertical position.
There are numerous steps that may be involved in display pipeline circuitry 64 generating pixel data for the pixel array. First, the display pipeline circuitry may render content that is intended to be displayed by the three-dimensional display. The display pipeline circuitry may render a plurality of two-dimensional images of target content, with each two-dimensional image corresponding to a different view of the target content. In one example, the target content may be based on a two-dimensional (2D) image and a three-dimensional (3D) image. The two-dimensional image and the three-dimensional image may optionally be captured by a respective two-dimensional image sensor and three-dimensional image sensor in electronic device 10. This example is merely illustrative. The content may be rendered based on two-dimensional/three-dimensional images from other sources (e.g., from sensors on another device, computer-generated images, etc.). In some cases, the content may be rendered based on the viewer position detected by eye and/or head tracking system 18.
The two-dimensional images associated with different views may be compensated based on various factors. For example, the two-dimensional images associated with different views may be compensated based on a brightness setting for the device, ambient light levels, and/or a viewer position that is detected using eye tracking system 18. After the two-dimensional images of different views are compensated, the plurality of two-dimensional images may be combined and provided to the single pixel array 62. A pixel map (sometimes referred to as a display calibration map) may be used to determine which pixels in the pixel array correspond to each view (e.g., each of the plurality of two-dimensional images). Additional compensation steps may be performed after determining the pixel data for the entire pixel array. Once the additional compensation is complete, the pixel data may be provided to the display driver circuitry 30. The pixel data provided to display driver circuitry 30 includes a brightness level (e.g., voltage) for each pixel in pixel array 62. These brightness levels are used to simultaneously display a plurality of two-dimensional images on the pixel array, each two-dimensional image corresponding to a unique view of the target content that is displayed in a respective unique viewing zone.
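A minimal sketch of this pipeline ordering is shown below. Every function body is a placeholder standing in for the circuitry described above (content rendering, per-view compensation, pixel mapping, and panel-level compensation); the data structures are assumptions chosen only to make the ordering concrete.

```python
def render_views(target_content, viewer_position, num_views=14):
    # Placeholder: one two-dimensional image (dict of (u, v) -> brightness) per
    # viewing zone; viewer_position is where viewer-dependent rendering would apply.
    return [dict(target_content) for _ in range(num_views)]

def per_view_compensation(view):
    # Placeholder for tone mapping, ambient light adaptation, white point, dithering.
    return view

def map_views_to_pixels(views, pixel_map):
    # pixel_map: panel coordinate (x, y) -> (view index, u, v). Copy each view's
    # sample onto its calibrated subset of panel pixels.
    return {xy: views[view][(u, v)] for xy, (view, u, v) in pixel_map.items()}

def panel_level_compensation(panel):
    # Placeholder for color compensation, border masking, burn-in compensation, etc.
    return panel

def display_pipeline(target_content, viewer_position, pixel_map):
    views = [per_view_compensation(v) for v in render_views(target_content, viewer_position)]
    panel = panel_level_compensation(map_views_to_pixels(views, pixel_map))
    return panel  # per-pixel brightness values passed to display driver circuitry 30
```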
As shown in
Content rendering circuitry 102 may render content for the plurality of views based on a two-dimensional image and a three-dimensional image. The two-dimensional image and three-dimensional image may be images of the same content. In other words, the two-dimensional image may provide color/brightness information for given content while the three-dimensional image provides a depth map associated with the given content. The two-dimensional image only has color/brightness information for one view of the given content. However, content rendering circuitry 102 may render two-dimensional images for additional views (at different perspectives) based on the depth map and the two-dimensional image from the original view. Content rendering circuitry 102 may render as many two-dimensional images (views) as there are viewing zones in the display (e.g., more than 1, more than 2, more than 6, more than 10, more than 12, more than 16, more than 20, more than 30, more than 40, less than 40, between 10 and 30, between 12 and 25, etc.).
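As a simplified sketch of how additional horizontal views could be derived from one two-dimensional image and a depth map, the example below shifts pixels horizontally in proportion to depth. This is a basic depth-image-based-rendering assumption made for illustration (the gain, the depth convention, and the absence of hole filling are all simplifications); the actual rendering, which may use a machine learning model, is not specified here.

```python
def render_view(image, depth, view_offset, gain=4.0):
    """Generate one horizontal view by shifting pixels in proportion to depth.

    image, depth: lists of equal-sized rows; view_offset: signed view index
    relative to the original view; gain: assumed pixels of shift per unit depth.
    """
    height, width = len(image), len(image[0])
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Assumed convention: larger depth values are nearer and shift more.
            shift = int(round(view_offset * gain * depth[y][x]))
            src = min(max(x - shift, 0), width - 1)
            out[y][x] = image[y][src]
    return out

# Example: views = [render_view(img, depth_map, k) for k in range(-7, 7)]
```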
Content rendering circuitry 102 may optionally include a machine learning model. The machine learning model may use additional information (e.g., additional images of the content) to render two-dimensional images (views) for each viewing zone in the display.
In some possible arrangements, content rendering circuitry 102 may receive viewer position information from eye and/or head tracking system 18. To mitigate vertical parallax mismatch in the display, content rendering circuitry 102 may render content that accounts for the viewer's position in the vertical direction. If the viewer is positioned such that they are viewing the display from an on-axis direction (e.g., position B in
Content rendering circuitry 102 renders a plurality of two-dimensional images that are each associated with a respective viewing zone. The two-dimensional images that are each associated with a respective viewing zone may be two-dimensional images of the same content at different horizontal perspectives and a single vertical perspective (that is based on the vertical eye position determined using eye tracking system 18). The single vertical perspective may be updated as the vertical eye position changes to provide the image with vertical parallax that matches the vertical eye position (e.g., real-time updates to match the vertical eye position).
Additional per-view processing circuitry (sometimes referred to as per-2D-image compensation circuitry) may be included in the device if desired. The per-view processing circuitry may individually process each two-dimensional image rendered by circuitry 102 before the images are mapped by pixel mapping circuitry 104. The per-view processing circuitry is used to make content adjustments that are based on the perceived image that ultimately reaches the viewer (e.g., the pixels that are adjacent on the user's retina when viewing the display). As examples, the per-view processing circuitry may include one or more of tone mapping circuitry, ambient light adaptation circuitry, white point calibration circuitry, dithering circuitry, and/or any other desired processing circuitry.
After optional per-view processing is complete, the multiple 2D images from content rendering circuitry 102 may be provided to pixel mapping circuitry 104. Pixel mapping circuitry 104 may receive all of the two-dimensional images that are produced by content rendering circuitry 102. Pixel mapping circuitry 104 may also receive (or include) a pixel map (sometimes referred to as a display calibration map) from pixel map generation circuitry 152. Pixel mapping circuitry 104 may perform various steps (e.g., steps 112-118 in
As shown in
As an example, the pixel mapping circuitry may receive a first two-dimensional image that corresponds to a first view intended for viewing zone 1 of the display. The pixel map may identify a first subset of pixels in the pixel array that is visible at viewing zone 1. Accordingly, the first two-dimensional image is mapped to the first subset of pixels. Once displayed, the first two-dimensional image is viewable at viewing zone 1. The pixel mapping circuitry may also receive a second two-dimensional image that corresponds to a second view intended for viewing zone 2 of the display. The pixel map may identify a second subset of pixels in the pixel array that is visible at viewing zone 2. Accordingly, the second two-dimensional image is mapped to the second subset of pixels. Once displayed, the second two-dimensional image is viewable at viewing zone 2. This type of pixel mapping is repeated for every view included in the display. Once complete, pixel mapping circuitry 104 outputs pixel data for each pixel in the pixel array. The pixel data includes a blend of all the independent, two-dimensional images from content rendering circuitry 102.
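The mapping step described above can be illustrated with a small worked example. The zone assignment (alternating pixel columns) and the image contents are hypothetical; a real pixel map comes from display calibration and is considerably more complex.

```python
# Two per-view images (hypothetical 2x2 brightness values) for zones 1 and 2.
view_1 = {(u, v): 10 for u in range(2) for v in range(2)}
view_2 = {(u, v): 200 for u in range(2) for v in range(2)}
views = {1: view_1, 2: view_2}

# Hypothetical pixel map: panel coordinate (x, y) -> (view number, (u, v)).
# Even panel columns are assumed visible from zone 1, odd columns from zone 2.
pixel_map = {(x, y): (1 if x % 2 == 0 else 2, (x // 2, y))
             for x in range(4) for y in range(2)}

# Interleave the views onto the shared pixel array.
panel = {xy: views[view][uv] for xy, (view, uv) in pixel_map.items()}
# panel[(0, 0)] == 10 (view 1), panel[(1, 0)] == 200 (view 2), and so on.
```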
It should be understood that the subset of pixels used to display each view may be non-continuous. For example, the subset of pixels for each view may include a plurality of discrete vertical pixel strips. These discrete sections of pixels may be separated by pixels that are used to display other views to the viewer.
After pixel mapping is complete, panel-level processing circuitry may optionally be used to perform additional processing on the pixel data. Panel-level processing circuitry may include one or more of color compensation circuitry, border masking circuitry, burn-in compensation circuitry, and panel response correction circuitry. In contrast to the aforementioned per-view processing circuitry, panel-level processing circuitry may be used to make adjustments that are based on the pixels on the display panel (as opposed to perceived pixels at the user's eye).
After the panel-level processing is complete, the output pixel brightness values for the entire pixel array may be provided to the display driver circuitry 30, where it is subsequently displayed on pixel array 62.
It should be noted that per-view processing circuitry (e.g., processing in the view space) is used to process the pixel data before pixel mapping whereas panel-level processing circuitry (e.g., processing in the display panel space) is used to process the pixel data after pixel mapping. This allows processing that relies on the final view of the image (e.g., per-view processing) to be completed before the data is split to a subset of pixels on the panel and interleaved with other views during pixel mapping. Once pixel mapping is complete, the processing that relies on the full panel luminance values (e.g., panel-level processing) may be completed.
In addition to updating the content rendered by content rendering circuitry 102 to compensate for the vertical position of a viewer, the texture map 154 may intermittently be updated based on the viewer position determined by eye tracking system 18. Specifically, the texture map 154 may be updated based on the viewer's position in the vertical direction to help prevent vertical parallax mismatch in the display.
As previously discussed, pixel dimming may be used to control the brightness of the viewing zones in order to minimize power consumption, crosstalk, etc. This pixel dimming is based on the occupied viewing zones (and, accordingly, the viewer's position in the horizontal direction). As shown in
The texture information (u, v) is identified at step 112 based on each pixel coordinate and the pixel map. For example, a first pixel in the lenticular display may have a corresponding pixel coordinate. The pixel map may be used to identify a texture that corresponds to that particular pixel coordinate. The pixel map may have texture information for each pixel based on the texture map 154 (which is based on the 3D image that is used to generate the 2D images and/or the vertical position of the viewer). The texture information may sometimes be referred to as depth information.
The view number associated with a given pixel coordinate is identified at step 114 based on the pixel coordinate and the pixel map. For example, a first pixel in the lenticular display may have a corresponding pixel coordinate. The pixel map may be used to identify a viewing zone to which that particular pixel coordinate belongs.
The pixel map may have a viewing zone associated with each pixel based on calibration information (e.g., the display may be tested to determine the viewing zone to which each pixel in the display belongs). The viewing zone of each pixel does not change over time during operation of the display. However, the texture information (e.g., the UV map portion of the pixel map) may intermittently be updated at some frequency during operation of the display (e.g., to account for the vertical position of the viewer).
Next, at step 116, the pixel mapping circuitry may generate dimming factors for each pixel based on the view number and texture of each pixel as well as the real-time viewer position received from eye tracking system 18. As one example, the dimming factors may be between (and including) 0 and 1 and may be multiplied by the original brightness value. For example, a dimming factor of 0 would mean that the input brightness value is dimmed to 0 (e.g., that pixel has a brightness of 0 and is effectively turned off) at step 118. A dimming factor of 1 would mean that the input brightness value is unchanged (e.g., that pixel is not dimmed). A dimming factor of 0.9 would mean that an output brightness value has a brightness that is 90% of its corresponding input brightness value. These examples of possible values for the dimming factors are merely illustrative. Any possible values may be used for the dimming factors. As another possible example, the dimming factors may be subtracted from the input pixel brightness values to dim the pixel brightness values. For example, the input pixel brightness values may be between (and including) 0 and 255. Consider, as an example, an input pixel brightness value of 200. A dimming factor of 0 would mean that the pixel is not dimmed (because no brightness reduction occurs, and the brightness remains 200). The dimming factor may be 60, resulting in the brightness value being reduced to 140 (e.g., 200−60=140). In general, any scheme may be used for the magnitudes and application of the dimming factors (e.g., BrightnessOUTPUT=BrightnessINPUT−Dimming Factor, BrightnessOUTPUT=BrightnessINPUT×Dimming Factor, etc.). The output brightness for a pixel may be a function of the input brightness for that pixel and the dimming factor for that pixel.
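For illustration, the two dimming-factor conventions described above might be expressed as follows; the clamping to a minimum brightness code of 0 in the subtractive case is an assumption.

```python
def apply_multiplicative(brightness_in, dimming_factor):
    """Dimming factor in [0, 1]: 1 leaves the pixel unchanged, 0 turns it off."""
    return brightness_in * dimming_factor

def apply_subtractive(brightness_in, dimming_factor):
    """Dimming factor in brightness codes (0-255 scale): 0 leaves the pixel unchanged."""
    return max(0, brightness_in - dimming_factor)

# apply_multiplicative(200, 0.9) -> about 180 (90% of the input brightness)
# apply_subtractive(200, 60) -> 140 (200 - 60 = 140, as in the example above)
```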
At step 118, the dimming factors may be applied to the input pixel brightness values (e.g., using a function as described above). The input pixel brightness values may already have been mapped to the display panel space by pixel mapping circuitry 104. For each pixel coordinate, the input brightness value for that coordinate is dimmed by the corresponding dimming factor determined for that coordinate in step 116. Depending on the type of dimming factor used, the dimming factor may be multiplied by the input brightness value, subtracted from the input brightness value, etc.
There are many factors that may influence the magnitude of the dimming factor determined at step 116. First, the horizontal position of the viewer may be used to determine the occupied viewing zone(s). The dimming factor for a pixel may depend on the position of the occupied viewing zone relative to the view corresponding to that pixel. For example, unoccupied zones may be turned off (as in
In addition to or instead of the horizontal position of the viewer, the vertical position of the viewer may be used to determine the dimming factor. Dimming based on the vertical position of the viewer may be used to mitigate the effect of vertical parallax mismatch in the display. As the viewer's viewing angle increases in an off-axis vertical direction, the dimming factor for the display may increase. The dimming factor based on vertical viewer position may be determined globally. In other words, every pixel in the display may receive the same dimming factor based on the vertical position of the viewer.
As an example, if the viewer is at a first position aligned with the surface normal of the display (e.g., position B in
As shown, in
In the example of
The example of determining the dimming factor based on the vertical viewing angle is merely illustrative. It should be understood that the vertical viewing angle is a function of the vertical position of the viewer. Therefore, the dimming factor may instead be a function of the vertical position of the viewer (which is, necessarily, a function of the vertical viewing angle of the viewer). There may be a baseline vertical viewer position (associated with the baseline vertical viewing angle). The content rendered by content rendering circuitry 102 may be rendered for the baseline vertical viewer position (and baseline vertical viewing angle). There may be no vertical parallax mismatch when the viewer is at the baseline vertical viewer position. The magnitude of dimming applied to the display may increase with increasing deviation from the baseline vertical viewer position. For example, at the baseline vertical position, no dimming is performed. At a second vertical position that is a first distance from the baseline vertical position, a second amount of dimming is performed. At a third vertical position that is a second distance from the baseline vertical position, a third amount of dimming is performed. The second distance may be greater than the first distance and, accordingly, the third amount of dimming may be greater than the second amount of dimming.
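A minimal sketch of such a global dimming rule is given below; the linear ramp, its range, and the minimum factor are assumptions, and only the requirement that dimming increase with increasing deviation from the baseline vertical position is taken from the description above.

```python
def vertical_dimming_factor(vertical_position, baseline_position,
                            max_deviation=200.0, min_factor=0.4):
    """Multiplicative factor: 1.0 (no dimming) at the baseline vertical position,
    decreasing linearly toward min_factor at max_deviation (arbitrary units)."""
    deviation = abs(vertical_position - baseline_position)
    fraction = min(deviation / max_deviation, 1.0)
    return 1.0 - fraction * (1.0 - min_factor)

# The same factor is applied to every pixel in the array (global dimming).
```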
In arrangements where display dimming based on vertical viewer position is performed, content rendering circuitry 102 and texture map 154 may optionally omit the aforementioned viewer position compensation.
In some cases, pixel mapping circuitry 104 may generate dimming factors based solely on the horizontal viewer position. In these cases, content rendering circuitry 102 and texture map 154 may be the only sources of vertical viewer position compensation in the display pipeline. In other cases, pixel mapping circuitry 104 may generate dimming factors based only on the vertical viewer position (e.g., by increasing dimming with increasing deviation from a baseline vertical viewing angle). In yet other cases, pixel mapping circuitry 104 may generate dimming factors based on both the horizontal and vertical viewer position.
As one example, the dimming factor ultimately applied to a pixel may be a function of a horizontal dimming factor determined based on horizontal position and a vertical dimming factor determined based on vertical position (e.g., DF_FINAL = DF_VERTICAL + DF_HORIZONTAL, DF_FINAL = DF_VERTICAL × DF_HORIZONTAL, or DF_FINAL = DF_VERTICAL − DF_HORIZONTAL, where DF_FINAL is the total dimming factor applied to a pixel, DF_VERTICAL is the vertical dimming factor, and DF_HORIZONTAL is the horizontal dimming factor).
As yet another option, the dimming factor may be used to selectively dim portions of the displayed image that are susceptible to ghosting. The edge viewing zones of the display may be particularly susceptible to ghosting. To avoid excessively dimming the display, selective dimming may be performed only on content that is susceptible to ghosting. Ghosting may be particularly noticeable in areas of high contrast within the image (e.g., at borders), at areas of high luminance (e.g., bright objects) within the image, and/or at content-specific points of interest within the image (e.g., portions of the image that display important parts of the image). Portions of the image with low contrast and/or low luminance (e.g., portions of the image that are approximately the same across all of the viewing zones) may not be dimmed as these areas will not cause ghosting (or will not cause ghosting that detracts from the viewer experience). The pixel mapping circuitry may therefore factor in the content on the display, texture information from step 112, and/or viewing zone from step 114 to generate a content-based dimming factor that may also optionally be used when determining the dimming factor for each pixel (e.g., DF_FINAL = DF_VERTICAL + DF_HORIZONTAL + DF_CONTENT, DF_FINAL = DF_VERTICAL × DF_HORIZONTAL × DF_CONTENT, or DF_FINAL = DF_VERTICAL − DF_HORIZONTAL − DF_CONTENT, where DF_CONTENT is the content-based dimming factor).
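For illustration, one way to combine these contributions per pixel is sketched below using the multiplicative option listed above; the example values are hypothetical and the content term stands in for the contrast/luminance analysis described above.

```python
def combined_dimming_factor(df_vertical, df_horizontal, df_content=1.0):
    """DF_FINAL = DF_VERTICAL x DF_HORIZONTAL x DF_CONTENT (multiplicative option)."""
    return df_vertical * df_horizontal * df_content

def output_brightness(brightness_in, df_final):
    """Apply the final dimming factor to the mapped input brightness value."""
    return brightness_in * df_final

# Example: mild vertical dimming (0.8), strong dimming for a distant unoccupied
# zone (0.2), no content-based dimming (1.0) -> about 16% of the input brightness.
```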
When the display is updated based on the detected position of the viewer, changes may optionally be made gradually. For example, viewing zones that are turned on and off may fade in and fade out to avoid visible flickering. Global dimming applied based on vertical viewer position may be applied gradually. The control circuitry may gradually transition any portion of the display between two desired brightness levels any time the brightness level changes.
At step 144, the position of one or more viewers of the display may be determined. Control circuitry such as control circuitry 16 may use the captured images from the camera to determine how many viewers are present and the positions of the viewers. The example of using a camera to determine viewer position is merely illustrative. Eye and/or head tracking system may include a light detection and ranging (lidar) sensor, a time-of-flight (ToF) sensor, an accelerometer (e.g., to detect the orientation of electronic device 10), a camera, or a combination of two or more of these components. Based on sensor data from one or more sensors in the eye and/or head tracking system, the control circuitry may determine in which viewing zone each viewer eye is located (e.g., the horizontal position of each viewer eye). The control circuitry may also determine the vertical position of each viewer eye based on the sensor information. The gaze direction of the viewer need not be determined to identify which viewing zones the viewer eyes are located in. In other words, control circuitry 16 may, in some cases, use only the determined position of the user's eyes (e.g., in a plane in front of the display) for subsequent processing, and not the direction-of-gaze of the user's eyes.
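As an illustrative sketch of converting a tracked eye position into an occupied viewing zone, the example below assumes a simple angular zone layout; the zone pitch, zone count, and centering are hypothetical, and an actual device would use its calibrated zone geometry.

```python
import math

NUM_ZONES = 14        # assumed zone count
ZONE_PITCH_DEG = 1.5  # assumed angular width of one viewing zone

def zone_for_eye(horizontal_offset_mm, viewing_distance_mm):
    """Return a 1-based viewing-zone index for an eye at the given horizontal
    offset from the display center, clamped to the available zones."""
    angle_deg = math.degrees(math.atan2(horizontal_offset_mm, viewing_distance_mm))
    index = round(angle_deg / ZONE_PITCH_DEG) + (NUM_ZONES + 1) // 2
    return min(max(index, 1), NUM_ZONES)

# Example: eyes about 62 mm apart at 400 mm from the display occupy two
# different viewing zones (one per eye), as needed for stereoscopic viewing.
left_zone = zone_for_eye(-31.0, 400.0)
right_zone = zone_for_eye(31.0, 400.0)
```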
Finally, at step 146, based on the determined positions of the viewer, the brightness of one or more zones and/or the image displayed by one or more zones may be updated.
In the display described in connection with
However, another option for avoiding vertical parallax mismatch artifacts is to incorporate lenses in the display that spread light in both the horizontal and vertical directions. In this way, the lenses can provide multiple viewing zones in the vertical direction in addition to multiple viewing zones in the horizontal direction. The display viewing zones may then account for the vertical parallax such that the three-dimensional content on the display has an appropriate simulated real-life appearance regardless of the horizontal viewing angle and vertical viewing angle.
In
The example of
The example of
In
In
In general, film 42 may include an array of lenses with any desired arrangement (e.g., a square grid, offset grid, or another desired arrangement). Each lens 202 in the lens film 42 may have any desired footprint (e.g., circular, oval, square, non-square rectangular, hexagonal, octagonal, etc.).
Each lens 202 in
Each lens 202 in
The lenses in two-dimensional lens film 42 in
It has previously been discussed how dimming factors may be applied to different viewing zones of a display based on the position of a viewer relative to the display. For example, the viewing zones of the display may have a brightness profile of the type shown in
Consider the example of
Because the viewers are positioned in different viewing zones, different dimming profiles may be assigned to each viewer. For example, eyes 48-1 and 48-2 are provided with dimming factors across the viewing zones based on a Gaussian profile (as previously shown in
Additionally, the different viewers may be assigned different global dimming factors based on their respective vertical viewing angles. For example, eyes 48-1 and 48-2 may be at position B in
The number of viewing zones associated with each viewer may be the same or may be different. In general, each viewer may have any number of associated viewing zones. In
If desired, the same global dimming profile (e.g., the profile of
In accordance with an embodiment, an electronic device is provided that includes a display that includes an array of pixels and a lenticular lens film formed over the array of pixels, the lenticular lens film spreads light from the display in a horizontal direction and the display has a plurality of independently controllable viewing zones in the horizontal direction; at least one sensor configured to obtain sensor data; and control circuitry configured to: determine eye position information based on the sensor data, the eye position information includes a vertical eye position and a horizontal eye position; and dim at least one pixel in the array of pixels based on the vertical eye position.
In accordance with another embodiment, dimming the at least one pixel in the array of pixels based on the vertical eye position includes globally dimming all of the pixels in the array of pixels based on the vertical eye position.
In accordance with another embodiment, dimming the at least one pixel in the array of pixels based on the vertical eye position includes applying a dimming factor to all of the pixels in the array of pixels, the dimming factor is based on the vertical eye position, and the same dimming factor is used for every pixel in the array of pixels.
In accordance with another embodiment, dimming the at least one pixel in the array of pixels based on the vertical eye position includes applying a dimming factor to an input brightness value for the at least one pixel.
In accordance with another embodiment, the dimming factor is proportional to a deviation between the vertical eye position and a baseline vertical eye position.
In accordance with another embodiment, the dimming factor is a function of a horizontal dimming factor that is based on the horizontal eye position and a vertical dimming factor that is based on the vertical eye position.
In accordance with another embodiment, dimming the at least one pixel in the array of pixels based on the vertical eye position includes at a first time, while the vertical eye position differs from a baseline vertical eye position by a first magnitude, dimming the at least one pixel by a first amount; and at a second time subsequent to the first time, while the vertical eye position differs from the baseline vertical eye position by a second magnitude that is greater than the first magnitude, dimming the at least one pixel by a second amount that is greater than the first amount.
In accordance with another embodiment, dimming the at least one pixel in the array of pixels based on the vertical eye position includes at a third time subsequent to the second time, while the vertical eye position is equal to the baseline vertical eye position, operating the at least one pixel without any dimming.
In accordance with another embodiment, the control circuitry is configured to: determine additional eye position information based on the sensor data, the additional eye position information includes an additional vertical eye position and an additional horizontal eye position; and dim an additional pixel that is different than the at least one pixel based on the additional vertical eye position.
In accordance with an embodiment, an electronic device is provided that includes a display that includes an array of pixels and a lenticular lens film formed over the array of pixels, the lenticular lens film spreads light from the display in a horizontal direction and the display has a plurality of independently controllable viewing zones in the horizontal direction; at least one sensor configured to obtain sensor data; and control circuitry configured to: determine eye position information from the sensor data, the eye position information includes a vertical eye position and a horizontal eye position; and render content for the display based at least partially on the vertical eye position, the rendered content includes two-dimensional images that are each associated with a respective viewing zone.
In accordance with another embodiment, the control circuitry is further configured to map each two-dimensional image to respective pixels on the array of pixels to obtain pixel data for the array of pixels.
In accordance with another embodiment, the two-dimensional images that are each associated with a respective viewing zone are two-dimensional images of the same content at different horizontal perspectives.
In accordance with another embodiment, the two-dimensional images that are each associated with a respective viewing zone are two-dimensional images of the same content at different horizontal perspectives and a single vertical perspective that is based on the vertical eye position.
In accordance with another embodiment, the control circuitry is further configured to dim at least some of the pixels based on the horizontal eye position.
In accordance with another embodiment, dimming at least some of the pixels based on the horizontal eye position includes, for each pixel: determining a texture associated with the pixel; determining a viewing zone associated with the pixel; and generating a dimming factor based on the texture, the viewing zone, and the horizontal eye position.
In accordance with another embodiment, dimming at least some of the pixels based on the horizontal eye position includes, for each pixel, dimming the pixel based on the horizontal eye position and content information associated with the pixel.
In accordance with an embodiment, an electronic device is provided that includes a stereoscopic display that includes an array of pixels; and an array of lenses formed over the array of pixels, each lens in the array of lenses spreads light from the array of pixels in both a first direction and a second direction that is orthogonal to the first direction and the array of lenses directs the light to a plurality of independently controllable viewing zones; and content rendering circuitry configured to render content for the stereoscopic display, the rendered content includes two-dimensional images that are each associated with a respective viewing zone; and pixel mapping circuitry configured to map each two-dimensional image to respective pixels on the array of pixels to obtain pixel data for the array of pixels.
In accordance with another embodiment, each lens in the array of lenses has a circular footprint.
In accordance with another embodiment, each lens in the array of lenses has a square footprint.
In accordance with another embodiment, each lens in the array of lenses has a hexagonal footprint.
In accordance with another embodiment, each lens in the array of lenses has an upper surface that has first curvature along the first direction and has second curvature along the second direction.
The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
This application is a continuation of international patent application No. PCT/US2022/021558, filed Mar. 23, 2022, which claims priority to U.S. provisional patent application No. 63/172,508, filed Apr. 8, 2021, which are hereby incorporated by reference herein in their entireties.