There are currently numerous display devices for presenting three-dimensional (3D) images. Some systems use glasses or goggles, and other systems may be used without them. In either case, some technologies allow multiple users, and some technologies work only for a single user. Goggleless displays may offer a shared user experience without the obstructing structures that, at least to some degree, isolate the viewer from the surrounding real world. With head mounted displays (HMDs), the level of isolation ranges from complete blockage of the natural view, which is a property of virtual reality (VR) systems, to the mildly obstructing visors or lightguides placed in front of the eyes that allow augmented reality (AR) and mixed reality (MR) user experiences. Many companies developing MR systems are aiming for a user experience in which virtual objects are visually indistinguishable from real objects. Even if this goal is achieved, head mounted devices put the viewer behind a "looking glass" or a "window" that can make the experience feel artificial. One way to present a natural 3D scene is to do so without goggles.
Overall, goggleless 3D display solutions may be more technically challenging than systems with some kind of headgear. Visual information enters the human visual perception system through the eye pupils. Because HMDs are very close to the eyes, they may cover a large Field-Of-View (FOV) with much more compact optical constructions than goggleless displays. HMDs also may be more efficient in producing light because the "viewing window" is small and confined to a relatively fixed position. Goggleless displays may need to be physically large to cover a significant portion of the viewer's FOV, and goggleless systems may be more expensive to make. Because the user's position is not fixed to the display device, projected images may be spread over a large angular range to make the picture visible from multiple positions, which may result in wasting much of the emitted light. This issue may be especially challenging with mobile devices, which have very limited battery life and may be used in bright ambient conditions where high display brightness is needed to maintain image contrast.
HMDs also may use much less 3D image data than goggleless devices. A single user may not use more than one stereoscopic viewpoint into the 3D scene because the display system attached to the head moves together with the eyes. In contrast, a user without goggles is free to change position around the 3D display, so the goggleless system provides several different "views" of the same 3D scenery. This freedom multiplies the amount of 3D image information that is processed. To ease the burden of heavy data handling with goggleless displays, specialized eye tracking systems may be used to determine the position and line of sight of the user(s). In this case, 3D sub-images may be directed straight towards the pupils rather than spread out over the whole surrounding space. By determining the position of the eyes, the "viewing window" size may be greatly reduced. In addition to lowering the amount of data, eye tracking also may be used to reduce power consumption because the light may be emitted towards the eyes only. Use of such eye tracking and projection systems may call for more hardware and more processing power, which, e.g., may limit the number of viewers due to the limited performance of the sub-system.
A display device according to some embodiment comprises: a bendable light-emitting layer comprising an addressable array of light-emitting elements; and a deformable optical layer having a plurality of lens regions, the deformable optical layer overlaying the light-emitting layer and being bendable along with the light-emitting layer; wherein the deformable optical layer is configured such that optical powers of the lens regions change in response to bending of the optical layer.
In some embodiments, the deformable optical layer is configured such that, while the deformable optical layer is in at least a first curved configuration, the lens regions form a lenticular array of cylindrical lenses.
In some embodiments, the deformable optical layer is configured such that, while the deformable optical layer is substantially flat, the optical powers of the lens regions are substantially zero.
In some embodiments, the display device further includes a plurality of baffles provided between adjacent lens regions, wherein the baffles are more rigid than the deformable optical layer. The baffles may be transparent.
In some embodiments, the display device is operable as a 2D display in a substantially flat configuration and as a 3D display in at least a first curved configuration.
In some embodiments, the display device further comprises control circuitry operative to control the light-emitting elements to display a 2D image or a 3D image according to a selected display mode.
In some embodiments, the display device further comprises a sensor operative to determine a degree of bending of at least one of the deformable optical layer and the light-emitting layer, wherein the control circuitry is operative to select a 2D display mode or a 3D display mode based on the degree of bending.
In some embodiments, the control circuitry is operative to display an image in a privacy mode while the display device is in at least a second curved configuration.
A method of operating a display device, according to some embodiments, includes: determining a degree of bending of the display device; selecting a display mode based on the degree of bending, wherein the selection is made from among a group of display modes including at least a 2D display mode and a 3D display mode; and operating the display device according to the selected display mode.
In some embodiments, selecting a display mode comprises selecting the 2D display mode in response to a determination that the display device is in a substantially flat configuration.
In some embodiments, selecting a display mode comprises selecting the 3D display mode in response to a determination that the display device is in a first curved configuration.
In some embodiments, the group of display modes further includes a privacy mode, and selecting a display mode comprises selecting the privacy mode in response to a determination that the display device is in a second curved configuration.
In some embodiments, the display device includes a deformable optical layer having a plurality of lens regions, wherein the deformable optical layer is configured such that optical powers of the lens regions change in response to bending of the optical layer.
In some embodiments, determining a degree of bending of the display device comprises operating a bending sensor.
A 3D multi-view display may be created by bending a flexible 2D display. Ordered buckling of an elastic optical layer under mechanical stress may be used to generate a 3D multi-view display structure from the flexible 2D display structure. An example flexible display with a dense array of small pixels may be coated with an elastic layer of optical material that has a linear array of transparent and more rigid baffles. The frame around the display may enable bending of the device into a curved shape. Bending may impart mechanical stress to the elastic material and may cause the layer to buckle into an ordered lenticular shape guided by the baffle array. The lenticular shape collimates light emitted from display pixels into narrow light beams in one direction, enabling rendering of a multi-view 3D image. A display device with such a structure may be switched between a 2D mode with an outer optical layer that is flat and a 3D mode with an outer optical layer that has a lenticular structure. Such a display device enables use of the 2D mode without loss of resolution.
The entities, connections, arrangements, and the like that are depicted in—and described in connection with—the various figures are presented by way of example and not by way of limitation.
A wireless transmit/receive unit (WTRU) may be used, e.g., as a display, a multi-view display, a curved display, a 2D display, a 3D display, and/or a flexible display in some embodiments described herein.
As shown in
The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1×, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
The base station 114b in
The RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, mobility requirements, and the like. The CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in
The CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in
The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While
The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
Although the transmit/receive element 122 is depicted in
The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
The processor 118 may receive power from the power source 134 and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors, such as a gyroscope, an accelerometer, a Hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)) are non-simultaneous.
In view of
The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
One known technique for presenting three-dimensional (3D) images is stereoscopy. In this method, two two-dimensional (2D) images are displayed separately to the left and right eyes. In goggleless displays, the two views are commonly generated either by using a parallax barrier method (e.g., see U.S. Patent Application No. 2016/0116752) or lenticular sheets (e.g., see U.S. Pat. Nos. 6,118,584 and 6,064,424) that limit the visibility of a pair of light emitting pixels in such a way that each pixel is able to be seen only by the designated eye. Perception of depth is created when matrices of these pixel pairs are used to present images taken from slightly different viewing angles and the 3D image is combined in the brain. However, presentation of two 2D images is perceptually not the same thing as displaying an image in full 3D. One difference is the fact that head and eye movements do not give more information about the objects being displayed: the 2D images are able to present only the same two slightly different viewpoints. These types of systems are commonly called 3D displays, although "stereoscopic displays" would be the more accurate term because they merely present an image pair to the two eyes of the viewer. The use of only two views may cause the 3D image to be "flipped" if the viewer moves to a wrong position in front of the display. Also, the 3D illusion may not occur if the images are not properly visible to the correct eyes and the brain is unable to fuse the information. In the worst case, the viewer may even feel nauseated, and prolonged use of a low-quality display may lead to headaches and dizziness.
Multi-view systems are displays that have taken one step forward from common stereoscopic displays. In these devices, light is emitted from a pixelated layer, and a microlens or lenticular sheet collimates the emitted light into a set of beams that exit the lens aperture at different propagation directions. The beam directions create the stereoscopic 3D effect when several unique views of the same 3D image are projected to the different directions by modulating the pixels according to the image content. If only two pixels are used for one 3D scene, the result is a stereoscopic image for a single user standing in the middle of the FOV. If more than two pixels are used under one microlens that defines the boundaries of a multi-view display cell, the result is a set of unique views spread across the FOV, and multiple users may see stereoscopic images at different positions inside the predefined viewing zone. Each viewer may have his or her own stereoscopic viewpoint on the same 3D content, and perception of a three-dimensional image is generated for each viewer, enabling a shared visual experience. As the viewers move around the display, the image changes for each new viewing angle, making the 3D illusion much more robust and convincing for individual viewers and improving the perceived display quality considerably.
With many relatively low-density multi-view displays, the views change in a stepwise fashion as the viewer moves in front of the device. This feature lowers the quality of the 3D experience and may even cause a breakup of the 3D perception. In order to mitigate this problem, some Super Multi View (SMV) techniques have been tested with as many as 512 views. An extremely large number of views may be generated, making the transition between two viewpoints very smooth. If the light from at least two images from slightly different viewpoints enters the eye pupil almost simultaneously, a much more realistic visual experience follows, according to the journal article Yasuhiro Takaki, High-Density Directional Display for Generating Natural Three-Dimensional Images, 94:3 Proc. IEEE (2006).
At nominal illumination conditions, the human pupil is generally estimated to be ~4 mm in diameter. If the ambient light levels are high (e.g., sunlight), the diameter may be as small as 1.5 mm, and in dark conditions it may be as large as 8 mm. The maximum angular density achievable with SMV displays is generally limited by diffraction, and there is an inverse relationship between spatial resolution (pixel size) and angular resolution, according to the journal article A. Maimone, et al., Focus 3D: Compressive Accommodation Display, 32(5) ACM Trans. Graph. (2013).
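As a rough numeric illustration of this diffraction constraint (the aperture sizes below are illustrative assumptions, not values from this document), the angular spread of light leaving a small slit-like aperture grows as the aperture shrinks:

```python
import math

# Illustrative sketch: diffraction sets a floor on the angular spread of a
# beam exiting a small lens aperture, so shrinking pixels/apertures to gain
# spatial resolution degrades the achievable angular resolution.
WAVELENGTH_M = 550e-9  # green light

for aperture_um in (500.0, 250.0, 100.0, 50.0):
    aperture_m = aperture_um * 1e-6
    # Order-of-magnitude half-angle of the central diffraction lobe for a
    # slit-like (lenticular) aperture: theta ~ lambda / d (small angles).
    theta_rad = WAVELENGTH_M / aperture_m
    print(f"{aperture_um:6.0f} um aperture -> ~{math.degrees(theta_rad):.3f} deg divergence")
```

Halving the aperture doubles the diffraction-limited divergence, which is the inverse relationship between spatial and angular resolution referred to above.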
One potential method to create a multi-view 3D display suitable for a mobile device is to use a directional backlight structure behind an ordinary liquid crystal display (LCD). In this technique, two or more light sources (at least one for each eye's view) are used together with a lightguide. The lightguide has out-coupling structures that project the display back-illumination into two or more different directions according to which light source is used. By alternating the display image content in synchrony with the light sources, a stereoscopic view pair or a set of views of the 3D scene may be created.
One problem associated with many backlight systems is the use of relatively slow LCD panels. The backlight module produces a set of directional illumination patterns that go through a single LCD, which is used as a light valve that modulates the images going to different directions. LEDs commonly used as light sources may be modulated much faster than the few hundred cycles per second of which many LCDs are capable. But because all of the directional illumination patterns go through the same display pixels, the display refresh rate becomes the limiting factor for how many flicker-free views may be created. The threshold at which the human eye perceives light intensity modulation is generally set at 60 Hz, and from this threshold the maximum number of views may be calculated. For example, an LCD that modulates at a frequency of 240 Hz may generate only 4 unique views without inducing eye-straining flicker in the image. In general, the same refresh frequency limitation applies to 3D display systems that are based on the use of LCDs.
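The view-count arithmetic in this example may be sketched as follows; the only inputs are the 60 Hz flicker threshold and the 240 Hz refresh rate quoted above:

```python
# Minimal arithmetic sketch of the flicker-limited view count described above.
FLICKER_LIMIT_HZ = 60   # commonly cited threshold for visible flicker
LCD_REFRESH_HZ = 240    # example panel refresh rate from the text

# Every time-multiplexed view must itself be refreshed at >= 60 Hz, so the
# number of flicker-free views is the ratio of the two rates.
max_views = LCD_REFRESH_HZ // FLICKER_LIMIT_HZ
print(max_views)  # -> 4
```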
The functioning of currently available, flat-panel-type goggleless multi-view displays tends to be based on spatial multiplexing only. In the most common integral imaging approach, a row or matrix of light emitting pixels is placed behind a lenticular lens sheet or microlens array, and each pixel is projected to a unique view direction in front of the display structure. The more light emitting pixels there are on the light emitting layer, the more views may be generated. In order to obtain a high-quality 3D image, the angular resolution should be high, generally in the range of at least 1.0°-1.5° per view. This may create a problem with stray light because the neighboring views should be adequately separated from each other in order to create a clear stereoscopic image. At the same time, neighboring views should be very closely packed in order to offer high angular resolution and a smooth transition from one view to the next. Light-emitting sources also typically have quite wide emission patterns, which means that the light will easily spread over more than the aperture of the one lens intended for image projection. The light hitting neighboring lenses may cause secondary images that are projected to wrong directions. If a viewer simultaneously sees one of these secondary views with one eye and a correct view with the other eye, the perceived image may flip to the wrong orientation, and the 3D image will be severely distorted.
The size of the viewing zone may be designed by altering the fields of view of the beam bundles. This may be done by increasing the width of the light emitter row or by changing the focal length of the beam collimating optics. Smaller focal lengths may lead to larger projected voxels, so the focal length may be increased to obtain better spatial resolution. This relationship means that there may be a trade-off between optical design parameters (such as spatial/angular resolution, lens focal length, and FOV) and the design needs of a particular use case.
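A minimal paraxial sketch of this trade-off follows; the function name and the emitter-row widths and focal lengths are hypothetical, chosen only to illustrate the relationship:

```python
import math

def projector_cell_fov_deg(emitter_row_width_mm: float, focal_length_mm: float) -> float:
    """Approximate full field of view of one multi-view projector cell.

    Simplified paraxial model: the emitter row is assumed to sit in the focal
    plane of the collimating lens, so the beam bundle spans ~2*atan(w / (2*f)).
    """
    return math.degrees(2 * math.atan(emitter_row_width_mm / (2 * focal_length_mm)))

# Widening the emitter row (or shortening the focal length) widens the viewing
# zone; lengthening the focal length narrows it but improves spatial resolution.
print(projector_cell_fov_deg(emitter_row_width_mm=0.5, focal_length_mm=2.0))  # ~14.3 deg
print(projector_cell_fov_deg(emitter_row_width_mm=0.5, focal_length_mm=4.0))  # ~7.2 deg
```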
3D multi-view displays may offer a more engaging viewing experience than regular 2D displays. However, the specifications for display optics may be very different for a regular 2D display and a multi-view 3D display. A 2D display may need a very high spatial pixel resolution (e.g., in the range of ~500 pixels per inch (PPI)) to be considered high quality, and the image may be visible over a large field-of-view (FOV). In contrast, 3D display optics may restrict the FOV of single pixels considerably to enable showing different images in different angular directions at the same time. In integral imaging devices, these specifications may be met with a microlens or lenticular array that increases angular resolution and decreases spatial resolution. If attached to a high-end 2D display, such an optical component may make the resolution of the display unacceptably low for mobile device use. To resolve this issue, an optical layer attached to the light emitting pixels may be designed such that the optical layer transforms from an optically flat surface into a light collimating lens array.
Electrically-switchable liquid crystal (LC) lens systems are described in U.S. Pat. No. 9,709,851 and in the journal article Y-P. Huang, et al., Autostereoscopic 3D Display with Scanning Multi-Electrode Driven Liquid Crystal (MeD-LC) Lens, 1:1 3D Research (2010).
Thin-film buckling is a phenomenon described in, for example, the journal article B. Wang, et al., Buckling Analysis in Stretchable Electronics, 1:5 npj Flexible Electronics (2017).
In some embodiments, the buckling phenomenon is employed as a way to make a large number of small surface features. This approach may use ordered buckling that is controlled by a design parameter of the elastic layer. With proper control, surface structures may be created that have predetermined shape and slope distributions that perform a certain function. Buckling techniques that may be adapted for embodiments described herein include those described in the journal article D-Y. Khang, et al., Mechanical Buckling: Mechanics, Metrology, and Stretchable Electronics, 19:10 Adv. Funct. Mater. (2009).
If buckling occurs on a flat and unstructured substrate, the pattern is most likely random. However, there are several different methods available for controlling the buckling behavior of elastic surfaces. One method is to coat an elastic substrate like PDMS (polydimethylsiloxane) with a metallic mesh that induces stress in the material when the combination is cooled down and the two materials shrink differently. This stress is released when the elastic substrate material buckles. The resulting wrinkles may have a predetermined shape and amplitude controlled by the metallic coating mesh design, according to the journal article J. Yin, et al., Deterministic Order in Surface Micro-Topologies Through Sequential Wrinkling, 24(40) Adv. Mater. (2012).
When creating an ordered buckling pattern, if the local material bending radius is too small or the internal shearing forces are too high, ruptures and layer delamination may start to occur randomly as the material plasticity limits are exceeded. Design rules and, e.g., finite-element modeling of material deformation behavior under stress may be used when such structures are designed. Elastic surfaces tend to buckle into natural sinusoidal linear patterns that have a certain surface wavelength and amplitude, according to Khang. This shape may be easier to produce than other possible wrinkle formations. However, other ordered patterns may also be created by applying, e.g., bi-axial strain to the elastic material layer. With a suitable strain profile, it is even possible to create well-ordered two-dimensional herringbone structures in which the material buckles in a zigzag form, according to Yin and the journal article P-C. Lin & S. Yang, Spontaneous Formation of One-Dimensional Ripples in Transit to Highly Ordered Two-Dimensional Herringbone Structures Through Sequential and Unequal Biaxial Mechanical Stretching, 90 Appl. Phys. Lett. (2007).
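For reference, the classical small-deformation relations for sinusoidal wrinkling of a stiff film on a compliant substrate, as given in the buckling literature cited above (e.g., Khang), may be summarized as follows; the symbols are the literature's, quoted here only as a modeling aid and not as part of the claimed structure:

```latex
\lambda_0 = 2\pi h_f \left(\frac{\bar{E}_f}{3\bar{E}_s}\right)^{1/3},
\qquad
A = h_f \sqrt{\frac{\varepsilon_{\mathrm{pre}}}{\varepsilon_c} - 1},
\qquad
\varepsilon_c = \frac{1}{4}\left(\frac{3\bar{E}_s}{\bar{E}_f}\right)^{2/3}
```

Here \(\bar{E} = E/(1-\nu^2)\) denotes the plane-strain modulus of the film (subscript f) or substrate (subscript s), \(h_f\) is the film thickness, \(\varepsilon_{\mathrm{pre}}\) is the applied compressive strain, and \(\varepsilon_c\) is the critical strain for the onset of buckling. The wavelength \(\lambda_0\) sets the natural pitch of the wrinkles, and the amplitude \(A\) grows with the applied strain, which is the behavior the baffle array is used to order into a lenticular pattern.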
In some embodiments, a flexible 2D display is bent or curved to transform the display into a 3D multi-view display. The functionality may make use of ordered buckling of an elastic optical layer under mechanical stress. A flexible display (e.g., an OLED panel) with a dense array of small pixels may be coated with an elastic layer of optical material that has a linear array of transparent and more rigid baffles. A frame around the display may be provided to allow for bending of the device into a predetermined curved shape. This bending imparts compressive forces and mechanical stress on the elastic material, causing the layer to buckle into an ordered lenticular shape guided by the rigid baffle array. The lenticular shape collimates light emitted from display pixels into narrow light beams in one direction, enabling a multi-view 3D image to be rendered.
Such a display may be switched between 2D and 3D display modes. A standard 2D image may be shown when the device is kept flat. In this mode, the optical layer over the display pixel matrix may have no substantial surface features, and light emitted from a single pixel may exit the optical structure with a wide field of view. Emission patterns of pixels may overlap and cover both eyes of the viewer. In 2D mode, the display shows a single image with the full high spatial resolution determined by the display panel pixel pitch. A three-dimensional (3D) mode may be activated by mechanically bending the display to a predetermined radius of curvature. In 3D mode, the single pixel emission patterns may become narrower due to the buckled lenticular optical surface features. A limited beam FOV may enable different images to be shown to each eye of a viewer, and a 3D autostereoscopic image may be rendered. Ordered buckling may be used to operate a display device with different optical specifications for 2D and 3D display modes.
Such a display device may be switched mechanically between a 2D mode with an outer optical layer that is flat and a 3D mode with a layer that has a lenticular structure. This operation allows the use of the 2D mode without loss of display resolution because the optical structure functionality is added or removed by switching between modes mechanically.
Such a device may be used with mobile devices. A 3D image may be shown by interlacing a multi-view image using the same display panel that is used for standard 2D images. Mobile devices also contain front facing cameras that may be used to actively calibrate the displaying of a 3D image.
The ability of the buckled structure to limit the field of view may be used in some embodiments to create an adjustable privacy filter for the mobile device or to save power due to the emitted light energy being more concentrated to a narrower emission angle, making the image brighter in the direction of the projected pixel images.
The display 652 may be switched into 3D mode by bending the display. In some embodiments, the display is bent to a predetermined radius of curvature. Bending causes mechanical stress to the elastic optical layer 654, and the layer starts to buckle, forming an array of lenticular lenses on top of the pixel matrix. (The size of the lenticular lenses is exaggerated in the figure.)
In
For some embodiments, selecting the display mode may include selecting the display mode from between at least a wide viewing angle mode (such as a 2D display mode) and a limited viewing angle mode (such as a privacy display mode). For some embodiments, selecting the display mode may include selecting the display mode from between at least a wide viewing angle mode (such as a 2D display mode) and a multi-view three-dimensional (3D) mode. For some embodiments, the optical layer may be flexible, and the optical layer may switch between two states of deformation: (1) a first state of deformation such that the optical layer is substantially planar and (2) a second state of deformation such that the optical layer is curved.
For some embodiments, a display apparatus may include: a mechanical layer 780 with flexible joints 782; a flexible display emitter layer including individually addressable light-emitting elements 772, 774; and a flexible transparent layer 779 with optical properties that vary when flexed. For some embodiments, a display apparatus may include a light emitting layer that is deformable, and the light emitting layer may be configured to be deformed synchronously with the optical layer.
In the transverse direction, the emission patterns retain the wide FOV of the sources. The position of the emitter relative to the optical axis of the lenticular shape determines the projected beam tilt angle with respect to the local surface normal of the display. The narrow beams are located in the same directional plane as the viewer's eyes to create a correct parallax effect with multiple images. The display may create only horizontal parallax if linear buckles are used in the horizontal direction. However, both horizontal and vertical parallax images may be created by utilizing two-dimensional structures (e.g., herringbone structures) using techniques as described above (e.g., with regard to Lin) or by bending the display in a diagonal direction and forming diagonal lenticular shapes.
For some embodiments of a display structure, the optical layer may be compressible such that if the optical layer is in a first state of deformation, the optical layer is compressed, and if the optical layer is in a second state of deformation, the optical layer is relaxed compared to the first state of deformation. For some embodiments of a method using a display structure, the optical layer may be compressed in a first state of deformation, and the optical layer may be relaxed (in comparison with the first state of deformation) in a second state of deformation. The first state of deformation in which the optical layer is compressed may correspond to, e.g., a 3D display mode, and the second state of deformation in which the optical layer is relaxed may correspond to, e.g., a 2D display mode.
A further example optical elastic layer design case shown in
For some embodiments of a display structure, the optical layer may be stretchable such that if the optical layer is in a first state of deformation, the optical layer is stretched, and if the optical layer is in a second state of deformation, the optical layer is relaxed compared to the first state of deformation. For some embodiments of a method using a display structure, the optical layer may be stretched in a first state of deformation, and the optical layer may be relaxed (in comparison with the first state of deformation) in a second state of deformation. The first state of deformation in which the optical layer is stretched may correspond to, e.g., a 2D display mode, and the second state of deformation in which the optical layer is relaxed may correspond to, e.g., a 3D display mode.
As an example, light exiting the display through a lenticular lens region 1003 extends across a primary field of view 1004. Secondary views 1018 may be visible outside the primary field of view. Within the primary field of view 1004, light from one emitter may generate a beam 1010 that is visible to the right eye of the user 1008, and light from another emitter may generate a beam 1014 that is visible to the left eye of the user. Light exiting the display through a lenticular lens region 1005 extends across a primary field of view 1006. Secondary views 1020 may be visible outside the primary field of view. Within the primary field of view 1006, light from one emitter may generate a beam 1012 that is visible to the right eye of the user 1008, and light from another emitter may generate a beam 1016 that is visible to the left eye of the user.
Because the lenticular lens shape radius is connected to the display overall bending radius, the design shown in
Buckled lens shapes and display panel pixel layouts may be fitted together in order to meet the specifications for the 3D image. The number of pixels under each lens shape may determine how many different views may be created with the display structure. A direct trade-off between angular and spatial resolution may exist because the system may use only spatial multiplexing for 3D image creation. This trade-off leads to image spatial resolutions in 2D and 3D modes that differ from each other, and the total performance of the whole display system may be balanced between these two modes. The 3D image may have lower spatial resolution than the 2D image unless the 2D mode is artificially sampled down by, e.g., grouping pixels for a more balanced overall look. The display may be used with full display panel spatial resolution in 2D mode because there are no obstructing optical structures when the elastic optical layer is made flat.
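A small sketch of this spatial multiplexing trade-off follows; the ~500 PPI panel and 0.5 mm lenticular pitch are assumptions chosen to echo values mentioned elsewhere in this document, not a specified design:

```python
# Hedged sketch of the spatial/angular trade-off: with pure spatial
# multiplexing, the pixels under one lenticular lens are divided among views.
panel_ppi = 500                    # assumed high-end 2D panel resolution
pixel_pitch_mm = 25.4 / panel_ppi  # ~0.051 mm
lens_pitch_mm = 0.5                # assumed lenticular projector cell pitch

views = round(lens_pitch_mm / pixel_pitch_mm)  # pixels available per lens
ppi_3d = panel_ppi / views                     # effective 3D-mode resolution

print(f"{views} views per cell, ~{ppi_3d:.0f} PPI in 3D mode vs {panel_ppi} PPI in 2D mode")
```

With these assumed numbers, roughly ten views per cell would reduce an assumed 500 PPI panel to about 50 PPI in 3D mode, which is why the flat 2D mode retaining the full panel resolution is a notable property of the structure.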
In some embodiments, while the display is used in 2D mode, the display may have a shallow lenticular structure in front of the pixels that slightly limits the FOV. The display may be turned into a 3D display by curving the device, which causes the lenticular shapes to have a sharper curvature and narrower projected beams. The 3D image may be formed with the pixels whenever the size of a single projected beam at the viewing distance is below the distance between the eye pupils. Such a design may be used to adjust the FOV for, e.g., different viewing distances or numbers of viewers. In some embodiments, a front facing camera may be used for determining the eye locations and distance of one or more users for image rendering calculations.
Embodiments described herein that limit the field of view of the display may be used for purposes other than the creation of a 3D image, such as privacy mode and energy savings. Privacy mode may be used, e.g., in large crowds or in confined spaces, like in an airplane. Energy savings may be achieved by limiting the field of view because display brightness may be lowered if the light is concentrated into a narrower angular range. By bending the device, the field of view may be adjusted for some embodiments without an electrical control system change.
In addition to being compressed for a buckling effect, the display optical surface may instead be manufactured as a lenticular surface and turned into a flat surface by stretching it. Materials may behave differently when they are stretched or compressed. For example, mechanochromic materials may change their color or transparency under pressure, such as those described in Y. Jiang, et al., Dynamic Optics with Transparency and Color Changes under Ambient Conditions, 11 Polymers (2019).
Mechanical pressure that transforms the optically elastic material shape may be induced with methods other than bending. For example, a metallic mesh with high transparency may be coated onto the elastic layer, and the surface shape transformation may be made with heat driven by electric current resistance in the mesh. The surface may contain an array of, e.g., piezoelectric actuators that change the shape of the surface by compressing or stretching it locally. These example structures may be combined to create an elastic layer with more complex optical structures, such as, e.g., shapes that are sinusoidal in two directions or have locally alternating patterns.
In some embodiments, a rigid display is manufactured using deformation of an optical layer to generate a lenticular array. For example, an OLED display may be wrapped around a transparent cylinder, and the light emission may be directed towards the internal volume. An elastic optical polymer layer that buckles may be attached to the display to form a series of lenticular lenses that are used in creating a 3D image inside the cylinder. The same material layer may be adjusted for different use cases, e.g. to create cylinders with different curvatures. If, e.g., UV-curable material is used in the elastic layer, the optical shape may be fixed and may form complex rigid optical features without a mold.
For the example display structure shown in
The optical layer may have non-elastic transparent baffles 1512 made from, for example, the COP material Zeonex 480R. The space between the baffles may be filled with optically clear and elastic silicone or other transparent elastomeric material. Because both of these materials may have refractive indices of ~1.53 at 550 nm, the interface between them is optically transparent. The sheet may be made with a continuous extrusion process, and the display component may be cut to a rectangular piece that fits the OLED panel measurements. The baffles determine the lenticular lens pitch because ordered buckling shapes the lenticular silicone lenses during device bending. A full-color pixel may emit light with a primary beam 1514 that has a FOV of 8.8° when the 3D mode is activated. As a result, the image of a single pixel may be projected to a viewing distance of 300 mm such that a ~46 mm wide stripe is visible to only one eye in the horizontal direction.
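The stripe-width figure may be checked with simple trigonometry; a minimal sketch using only the 8.8° beam FOV and the 300 mm viewing distance quoted above:

```python
import math

# Numeric check of the example above: a beam with an 8.8 degree full FOV,
# observed at a 300 mm viewing distance, spans roughly a 46 mm wide stripe.
fov_deg = 8.8
viewing_distance_mm = 300.0

stripe_mm = 2 * viewing_distance_mm * math.tan(math.radians(fov_deg / 2))
print(f"~{stripe_mm:.0f} mm")  # ~46 mm, narrower than a typical ~64 mm interpupillary distance
```

Because the stripe is narrower than a typical interpupillary distance, the beam can indeed reach one eye without spilling into the other, as the text describes.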
For some embodiments, a display apparatus may include: a light emitting layer that includes individually controllable light emitting elements; a deformable optical layer that is configurable by a user into at least a first state of deformation and a second state of deformation, the optical layer having different optical properties in the first state of deformation compared to the second state of deformation; and control circuitry that is configured to control the light emitting elements to display imagery to the user, the apparatus configured to display two-dimensional (2D) imagery when the optical layer is configured to the first state of deformation, and the apparatus configured to display three-dimensional (3D) imagery when the optical layer is configured to the second state of deformation.
In the example of
The cross-sectional area of a region of the elastic optical polymer layer 1604 between adjacent baffles generally remains the same in the bent and the flat configurations. In the example of
To test optical functioning of the design, a set of raytrace simulations was performed with the commercial optical simulation software OpticStudio 19. One 16 μm wide source surface with green 550 nm light was projected through a 0.35 mm thick protective substrate layer and a 1.68 mm thick elastic optical polymer lenticular lens structure that had a surface curvature radius of 1.05 mm. Angular divergence of the sources was set to a Gaussian distribution with a full width at half maximum (FWHM) value of ±34°. With this angular distribution, light emitted by a single source was able to reach the next two neighboring lens apertures on both sides of the 0.5 mm wide selected projector cell. A 600 mm wide detector surface placed at the designated 300 mm viewing distance from the optical structure was used for collecting the simulation results into spatial irradiance and angular radiance distributions. Simulations were performed with both the flat 2D-mode and the buckled 3D-mode surface structures to see the FOV difference for each mode. The 3D mode functionality was analyzed with two separate simulations. The first simulation was made with a light source on the optical axis of the lens. The second simulation was made with a light source off the optical axis of the projector cell lens. The second simulation was used to simulate projector cells positioned at the edge of the curved display surface.
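As a hedged paraxial cross-check of these simulation parameters (a simplification that treats the buckled lens as a single refracting surface from the polymer into air and ignores thick-lens effects), one may estimate how far behind the curved surface an emitter would need to sit for perfectly collimated output:

```python
import math

# Simplified paraxial check: for a single refracting surface of radius R
# separating a medium of index n from air, an emitter embedded in the medium
# produces collimated output when it sits a distance s = n*R/(n-1) behind
# the surface (distance measured physically inside the medium).
n = 1.53      # assumed refractive index of the optical materials at 550 nm
R_mm = 1.05   # buckled lens surface curvature radius from the simulation

s_collimation_mm = n * R_mm / (n - 1)  # ~3.0 mm
layer_stack_mm = 1.68 + 0.35           # elastic layer + protective substrate

print(f"collimation distance ~{s_collimation_mm:.2f} mm, actual stack {layer_stack_mm:.2f} mm")
# The emitters sit inside this collimation distance, so the projected beams
# retain a finite divergence rather than being perfectly collimated, which is
# consistent with the per-pixel beam FOV reported above for the 3D mode.
```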
Overall, the simulation results of
For some embodiments of a display apparatus, the optical layer may include one or more sections of flexible optical material such that each of the sections is separated by non-flexible baffle material. For some embodiments of a method performed by a display apparatus, detecting the state of bending of the optical layer may include detecting the degree of bending of the optical layer.
For some embodiments, the optical layer may be configured by the user selecting a display mode in a user interface. Such a selection may select between 2D and 3D display modes. A privacy display setting may also be selected by the user via the user interface. A device may include a sensor, which may be used to determine whether the optical layer is configured in a first or second state of deformation. The first state of deformation may correspond, e.g., to 2D imagery, and the second state of deformation may correspond to 3D imagery. The device may be configured to display 2D or 3D imagery according to the state of deformation.

The state of deformation may be determined based on the amount of bending detected. For example, a small amount of bending up to a threshold may correspond to selecting the first state of deformation, and a larger amount of bending greater than the threshold may correspond to selecting the second state of deformation.

The renderer process or device may receive a display mode selection from a user via a user interface. Alternatively, a separate process or device may receive the display mode selection from the user via the user interface and communicate the display mode selection to the renderer. The renderer may configure the optical layer according to the display mode selection, which may be received by the renderer or determined locally to the renderer. The display mode may be selected from a group that includes 2D and 3D display modes. The group also may include privacy or other display mode settings. The optical layer may be configured according to the detected state of bending of the optical layer. The state of bending of the light emitter layer may also be detected, and the light emitter layer may be controlled so that it displays image content according to the detected state of bending. For example, a small amount of bending of the light emitter layer up to a threshold may correspond to a first state of bending, and a larger amount of bending greater than the threshold may correspond to a second state of bending. The first state of bending may be associated with a 2D display mode, and the second state of bending may be associated with a 3D display mode.
Stray light may be a general problem in multi-view displays. Some embodiments are implemented in devices that have a front facing camera, which may be used for viewer eye detection. The 3D image may be rendered in such a way that the secondary pixel images are directed away from the viewer's eyes.
In the configuration of
In the configuration of
In some embodiments, the control circuitry is operable in a privacy mode.
In some embodiments, the display configuration may be selected through user input. Some such embodiments may operate without the use of a bending sensor. User input may also be used to override a mode selected with the use of a sensor. When the display is in a curved configuration, user input may be used to determine whether a privacy mode or a 3D mode is selected. In some embodiments, the same levels of curvature are used for a 3D mode and a privacy mode. In other embodiments, different levels of curvature are used for a 3D mode and a privacy mode. For example, a slight curvature may be sufficient to impart an optical power to the lenticular array that is sufficient to prevent most undesired viewing of the display. A greater level of curvature may be desirable to impart an optical power to the lenticular array that is sufficient to prevent excessive overlap between angularly separated views. Below a first threshold level of curvature, the display may be operated in a 2D mode. Between the first threshold level of curvature and a second threshold level of curvature, the display may be operated in a privacy mode. At or above the second threshold level of curvature, the display may be operated in a 3D mode.
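A minimal control-logic sketch of this two-threshold scheme follows; the threshold values, the curvature units, and the user-override parameter are assumptions made for illustration, not values taken from this description:

```python
from enum import Enum, auto
from typing import Optional

class DisplayMode(Enum):
    MODE_2D = auto()
    PRIVACY = auto()
    MODE_3D = auto()

# Curvature expressed as 1/bend_radius (0 when flat); both thresholds below
# are hypothetical values chosen only for illustration.
T1_PER_MM = 1.0 / 500.0  # below this curvature: display treated as flat -> 2D
T2_PER_MM = 1.0 / 150.0  # at or above this curvature: full lenticular power -> 3D

def select_mode(curvature_per_mm: float,
                user_override: Optional[DisplayMode] = None) -> DisplayMode:
    """Pick a display mode from sensed curvature, honoring a UI override."""
    if user_override is not None:
        return user_override
    if curvature_per_mm < T1_PER_MM:
        return DisplayMode.MODE_2D
    if curvature_per_mm < T2_PER_MM:
        return DisplayMode.PRIVACY
    return DisplayMode.MODE_3D

print(select_mode(0.0).name)          # MODE_2D
print(select_mode(1.0 / 300.0).name)  # PRIVACY
print(select_mode(1.0 / 100.0).name)  # MODE_3D
```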
An apparatus according to some embodiments includes: a mechanical layer with flexible joints; a flexible display emitter layer; and a flexible transparent layer with optical properties that vary when flexed. Some such embodiments further include subpixels which alternate color in both horizontal and vertical spatial directions.
A method according to some embodiments includes: sensing a degree of bending of a flexible display; selecting a display mode based on the degree of bending; rendering image content based on the selected display mode; and displaying the rendered image content on the flexible display.
In some embodiments, the degree of bending is limited to one plane.
In some embodiments, selecting the display mode comprises selecting the display mode from a group comprising at least a wide viewing angle mode and a limited viewing angle mode.
In some embodiments, selecting the display mode comprises selecting the display mode from a group comprising at least a wide viewing angle mode and a multi-view three-dimensional (3D) mode.
An apparatus according to some embodiments includes: a light emitting layer comprising individually-controllable light emitting elements; a deformable optical layer configurable by a user into at least a first state of deformation and a second state of deformation, the optical layer having different optical properties in the first state of deformation compared to the second state of deformation; and control circuitry configured to control the light emitting elements to display imagery to the user, the apparatus configured to display two-dimensional (2D) imagery when the optical layer is configured to the first state of deformation, and the apparatus configured to display three-dimensional (3D) imagery when the optical layer is configured to the second state of deformation.
In some embodiments, the optical layer is flexible, and in the first state of deformation, the optical layer is configured into a substantially-planar shape, and in the second state of deformation, the optical layer is configured into a curved shape.
In some embodiments, the optical layer is stretchable, and in the first state of deformation, the optical layer is stretched, and in the second state of deformation, the optical layer is relaxed compared to when in the first state of deformation.
In some embodiments, the optical layer is compressible, in the first state of deformation, the optical layer is compressed, and in the second state of deformation, the optical layer is relaxed compared to when in the first state of deformation.
In some embodiments, when in the first state of deformation, the optical layer comprises a substantially flat surface. In some embodiments, when in the second state of deformation, the optical layer comprises a lenticular lens array configured for displaying 3D imagery.
In some embodiments, the optical layer is configured by bending the apparatus. In some embodiments, the optical layer is configured by selecting between 2D and 3D display modes in a user interface.
Some embodiments further include: a sensor, wherein the sensor is used for a determination of whether the optical layer is configured into the first state of deformation or the second state of deformation, and wherein the apparatus is configured to display either the 2D imagery or the 3D imagery based on the determination.
In some embodiments, the optical layer comprises a plurality of sections of flexible optical material, each of the plurality of sections separated by non-flexible baffle material.
In some embodiments, the light emitting layer is deformable, and the light emitting layer is configured to be deformed synchronously with the optical layer.
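For the sensor-based variant, the determination reduces to classifying a sensed deformation into the first or second state and dispatching the matching imagery. A sketch, assuming a hypothetical strain-gauge interface and an illustrative classification threshold:

```python
def in_second_state(strain: float, threshold: float = 0.01) -> bool:
    """True if the sensed strain indicates the second (curved/3D) state.

    Near-zero strain is taken as the first (flat/2D) state; the threshold
    is an illustrative placeholder, not a specified value."""
    return abs(strain) >= threshold

def refresh(apparatus, strain_sensor, scene):
    """Display 2D or 3D imagery according to the sensed deformation state."""
    if in_second_state(strain_sensor.read()):
        apparatus.display_3d(scene)
    else:
        apparatus.display_2d(scene)
```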
A method according to some embodiments includes: detecting a state of bending of an optical layer of a flexible display apparatus; and controlling a light emitting layer comprising a plurality of individually-controllable light emitting elements to display image content according to the state of bending of the optical layer detected.
Detecting the state of bending of the optical layer may include detecting a degree of bending of the optical layer.
In some embodiments, the optical layer is configurable into at least a first state of deformation and a second state of deformation. The first state of deformation may be associated with a two-dimensional image mode, and the second state of deformation may be associated with a three-dimensional image mode.
In some embodiments, the first state of deformation is associated with a first degree of bending of the optical layer, and the second state of deformation is associated with a second degree of bending of the optical layer, wherein the second degree of bending is greater than the first degree of bending.
In some embodiments, when the optical layer is in the first state of deformation, the optical layer is in a substantially planar shape, and when the optical layer is in the second state of deformation, the optical layer is in a curved shape.
In some embodiments, when the optical layer is in the first state of deformation, the optical layer is stretched, and when the optical layer is in the second state of deformation, the optical layer is relaxed compared to when in the first state of deformation.
In some embodiments, when the optical layer is in the first state of deformation, the optical layer is compressed, and when the optical layer is in the second state of deformation, the optical layer is relaxed compared to when in the first state of deformation.
Some embodiments further include: receiving a display mode selection; and configuring the optical layer according to the display mode selection.
In some embodiments, the display mode selection is selected from the group consisting of a 2D display mode and a 3D display mode.
Some embodiments further comprise configuring the optical layer according to the state of bending of the optical layer detected.
In some embodiments, the method further comprises: detecting a state of bending of the light emitting layer of the flexible display apparatus, wherein controlling the light emitting layer comprises displaying image content according to the state of bending of the light emitting layer.
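Where the apparatus can actuate its own shape, a received mode selection drives the configuration step rather than the other way around. A sketch with hypothetical actuator and rendering interfaces; the target curvature values are illustrative placeholders:

```python
def apply_mode_selection(mode: str, actuator, renderer):
    """Configure the optical layer for a selected display mode.

    `actuator.set_curvature` is a hypothetical bending mechanism; the
    target curvatures (1/m) are illustrative placeholders."""
    if mode == "2D":
        actuator.set_curvature(0.0)   # flatten: lens power near zero
        renderer.use_single_view()
    elif mode == "3D":
        actuator.set_curvature(2.5)   # curve: lenticular power active
        renderer.use_multiview()
    else:
        raise ValueError(f"unknown display mode: {mode!r}")
```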
A display device according to some embodiments includes: a light-emitting layer comprising an addressable array of light-emitting elements; a flexible optical layer overlaying the light-emitting layer, the flexible optical layer having a plurality of lens regions, wherein the flexible optical layer is configured such that optical powers of the lens regions change in response to changing levels of tensile or compressive force on the flexible optical layer.
In some embodiments, under a first amount of tensile or compressive force on the optical layer, the optical powers of the lens regions are substantially zero.
In some embodiments, under a second amount of tensile or compressive force on the optical layer, the lens regions are configured as a lenticular array, each lens region corresponding to a cylindrical lens within the lenticular array. In some embodiments, under the second amount of tensile or compressive force on the optical layer, the cylindrical lens regions are operative to substantially collimate light from the light-emitting layer along a horizontal direction.
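The dependence of optical power on deformation can be grounded in the standard thin-lens approximation, used here purely as an illustrative reference. For a plano-convex cylindrical lenslet of refractive index $n$ whose curved surface has radius $R$:

$$P \approx \frac{n - 1}{R}$$

When the layer is pulled or pressed flat, $R \to \infty$ and $P \to 0$, matching the substantially-zero-power state. When the layer relaxes into lenslets with, say, $n = 1.5$ and $R = 1\,\text{mm}$, the power is $P \approx 500\,\text{D}$ (focal length $f = 1/P = 2\,\text{mm}$), on the order needed to substantially collimate light from an emitter placed near the focal plane.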
In some embodiments, the lens regions are separated by substantially rigid baffles.
In some embodiments, the display device is configured to be bendable in at least one plane of principal curvature, and the device is configured such that the tensile or compressive force on the optical layer changes based on the amount of bending.
In some embodiments, the display device further comprises a sensor for determining the amount of bending.
In some embodiments, the display device further comprises control circuitry for controlling the display of light by the light-emitting layer, the control circuitry being operable to select a display mode based on the amount of bending.
Note that various hardware elements of one or more of the described embodiments are referred to as “modules” that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.
Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. § 119(e) from, U.S. Provisional Patent Application Ser. No. 62/894,417, entitled “METHOD FOR CREATING A 3D MULTIVIEW DISPLAY WITH ELASTIC OPTICAL LAYER BUCKLING,” filed Aug. 30, 2019, which is hereby incorporated by reference in its entirety.
Filing Document | Filing Date | Country
---|---|---
PCT/US2020/047663 | Aug. 24, 2020 | WO