The present disclosure relates to multi-view (MV) display systems, and more particularly, to extensible, precision MV display systems that can provide arbitrary (e.g., different) content to easily specified locations.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
A multi-view display simultaneously presents different content to different viewers based on the location of each viewer relative to the display. Novelty lenticular cards are a simple example of a multi-view system. When viewed from different angles they can reveal different images. They use a series of cylindrical lenslets placed over stripes of content to direct each content stripe in a unique angular range. A complete image is formed by having the stripes from a single image placed in the right locations under the lenslets. The stripe images can be provided by a printed sheet, or by a flat panel display.
There are significant limitations to the display system 2700. A viewer in viewing zone 3D would see the stripe-3 part of the zone 3C image and the stripe-D part of the zone 4D image. Far away from the array of stripes of content 2704, the zones 4D, 3C, 2B, and 1A are relatively wide. Nearer to the array of stripes of content 2704, viewers in zones 3D, 2C, and 1B would see a combination of parts of the multiple images intended for zones 4D, 3C, 2B, and 1A. When designing a printed lenticular display, one needs to know the expected viewing distance so that image stripes can be arranged to provide consistent images to the intended viewing zones, as opposed to providing a combination of parts of multiple images. For an electronic display, one may assign the stripes dynamically so as to create a consistent image at the locations where viewers are currently located.
If one attempts to increase the number of viewing zones by increasing the number of stripes underneath each lenticule, the number of distinct viewing zones grows rapidly, and the size of each shrinks. Targeting images to a particular location becomes increasingly challenging. Due to these and other limitations, current multi-view displays are typically limited to a very small number of viewing zones. Two to four viewing zones is common, and commercial units that are intended for three-dimensional (3D) viewing applications tend to max out in the small tens of stripes per lenslet.
Flat panel electronic display pixels are typically comprised of spatially distinct sub-pixels (e.g., red, green, and blue sub-pixels) that combine to create a range of colors. This technique depends on the limited ability of the human eye to resolve this level of detail. Unfortunately, the lenticules act as magnifiers and can make the sub-pixels quite evident. For example, if the red sub-pixels line up as a stripe under a lenticule, viewers at the locations to which that stripe is imaged might only be able to see red in the region of that lenticule. To overcome the sub-pixel problem, the lenticules may be angled relative to the underlying panel so as to cover different color sub-pixels along the long axis of the lens. Because the cylindrical lenticules do not magnify in that dimension, color mixing works appropriately.
Lenticular displays that use cylindrical lenses are limited to creating views in a single dimension, e.g., strictly horizontal or strictly vertical. So-called “Dot” or “Fly Eye” lenticulars use a 2-dimensional array of lenses to allow content to be directed in both dimensions. Unfortunately, there is no trick equivalent to angling the lenticules that allows sub-pixel mixing, because both dimensions are magnified.
There are alternative techniques to traditional lensing. For example, one company, LEIA, uses diffractive optics to create a display with sixty-four views (8 in each dimension). There are also techniques using parallax barriers, but those techniques lose significant brightness. Steerable backlights combined with time division multiplexed display have also been disclosed, but the number of views of such a system is limited by the lack of high speed liquid crystal display (LCD) panels. Up to 4 independent views have been reported using such systems.
To make large displays, it is common practice to tile smaller displays in the form of a grid. Video walls and large light emitting diode (LED) signs are often architected in this fashion. There are many advantages to this approach, including that the tiles are easier to ship, store, and generally handle than a single large display. Also, the tiles can be arranged in many different configurations. In addition, the tiles can be individually serviced or replaced without having to deal with the entire display. Moreover, the tiles are easier to manufacture because, given a certain defect density, a small tile has a much better chance of being defect free than a very large display. There are disadvantages to tiling a display versus simply building a larger one. For example, power and video signals must be created for, and routed to, each tile. In addition, each tile may have a different brightness or color, which may need to be corrected through calibration.
Specialized equipment has been created to address the needs of traditional tiled displays. For example, video wall controllers can rescale and segment a standard video stream for playback across tiled monitors. Color calibrators are used to maintain consistent brightness and color from tile to tile. Specialized mechanical mounting systems hold the tiles in place, and provide channels to manage the many electrical cables.
Although independent multi-view displays can be arranged to create the appearance of a larger display, the multi-view displays used to make such a tiled display do not include any features to make this sort of tiled display easier to construct or less costly.
Finally, most electronic multi-view displays are targeted at auto-stereo applications, and do not provide an interface for arbitrarily directing arbitrary content to multiple locations simultaneously.
What is needed is an extensible, precision multi-view display system that can provide arbitrary (e.g., different) content to easily specified locations to support location specific media experiences.
Various aspects of a precision multi-view display system are disclosed, which can accurately and simultaneously target content to individual viewers over a wide field of view. Larger displays may be created by tiling individual units, and various techniques are disclosed that are designed to make tiling easy and efficient. Also disclosed are a calibration procedure that enables the specification of content at precise viewing locations, as well as a simple interface that allows a user to graphically specify viewing zones and associate content that will be visible in those zones.
The lens array panel 112 is comprised of smaller lens assemblies 132 (see
To create larger displays with more multi-view (MV) pixels, the MV display device 100 may be used in tiled configurations as shown in
The MV display device 100 includes a number of features that make tiling easier and more effective. In one or more embodiments, there are no protrusions, vents, and cable connectors provided on the side edges of the rear cover 106 and front cover 108, which enables the MV display devices 100 to physically abut one another. Mounting points are provided on the rear of the MV display device 100 (see
The MV display system 122 includes a number of subsystems that work together to provide the intended multi-view functionality, including an optical system (a type of light field display specifically optimized for multi-view applications), a display controller, calibration, and a graphical interface. Each of those aspects is described in greater detail below.
Optical System
The MV display device 100 is a type of light field display. Each pixel of a conventional display is designed to display one color and intensity of light at a time, which is cast over the field of view of the display. In contrast, each multi-view (MV) pixel 102 of the MV display device 100 simultaneously projects different colors and intensities of light to various viewing zones. In this regard, the MV pixel 102 is more like a projector, sending individually controlled beamlets of light in numerous directions simultaneously.
In one or more embodiments of the present disclosure, the lens array panel 112 of the MV display device 100 includes an array of optical elements (an array of multiple-element lens systems), to be placed over the flat panel display (FPD) 110 including an array of display pixels. The multiple-element lens system of the lens array panel 112 is placed over a sub-array of display pixels (e.g., 100×100=10,000 display pixels) to collectively form one multi-view (MV) pixel 102, where each beamlet corresponds to one display pixel. In this example, each MV pixel 102 can emit 10,000 beamlets based on the 10,000 display pixels, where the direction, color and brightness of each of the beamlets are independently controllable. Thus, an array of MV pixels 102 can be considered as an array of small projectors, each of which uses a subsection of the flat panel display 110 as an imaging device. Alternatively, the configuration can be considered as an array of magnifying glasses (i.e., an array of multi-element lens systems) placed on the flat panel display 110. Each lens system magnifies each of the display pixels to fill the pupil of the multi-element lens system. The display pixel that a viewer sees magnified depends on the viewing angle, or angle of the viewer with respect to the optical axis of the lens system that is disposed over the display pixel. In other words, which display pixels are seen through the magnifying glass depends on the viewing angle. Thus, the magnification allows for both selection of (via viewing angle) which pixels are visible and enlargement of the selected visible pixels to cover a larger extent from the viewer's standpoint.
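For illustration, the relationship between a display pixel's position within one MV pixel 102 and the direction of the resulting beamlet can be sketched with a simplified paraxial (thin-lens) model, as shown below. The focal length, pixel pitch, and single-lens assumption are hypothetical and are not taken from the present disclosure, which uses a multi-element lens system.

```python
import math

# Hypothetical values for illustration only; not taken from the present disclosure.
FOCAL_LENGTH_MM = 25.0      # assumed effective focal length of one MV pixel's lens system
PIXEL_PITCH_MM = 0.05       # assumed display pixel pitch
PIXELS_PER_SIDE = 100       # 100 x 100 display pixels per MV pixel, per the example above

def beamlet_direction_deg(col, row):
    """Approximate emission angles of the beamlet produced by display pixel
    (col, row) within one MV pixel, using a paraxial thin-lens model: a pixel
    displaced a distance d from the optical axis produces a roughly collimated
    beamlet at angle atan(d / f) on the opposite side of the axis."""
    dx = (col - (PIXELS_PER_SIDE - 1) / 2.0) * PIXEL_PITCH_MM
    dy = (row - (PIXELS_PER_SIDE - 1) / 2.0) * PIXEL_PITCH_MM
    theta_x = math.degrees(math.atan2(-dx, FOCAL_LENGTH_MM))
    theta_y = math.degrees(math.atan2(-dy, FOCAL_LENGTH_MM))
    return theta_x, theta_y

print(beamlet_direction_deg(0, 0))    # a corner pixel: the widest angles
print(beamlet_direction_deg(49, 49))  # a pixel near the optical axis: nearly on-axis
```

Under this simplified model, the full angular field covered by one MV pixel is set by the sub-array size and pixel pitch relative to the focal length, which is the tradeoff between spatial and angular resolution discussed next.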
The FPD-based approach (i.e., a combination of an FPD 110 with a lens array panel 112) provides some advantages compared to using an array of discrete projectors. For a discrete projector design, drive electronics need to be created for each MV pixel separately, whereas in the FPD-based approach, all the MV pixels on the FPD 110 may use shared electronics. With an FPD-based approach wherein a fixed number of beamlets (to first order) are respectively provided by the fixed number of display pixels, one may trade off the number or spatial resolution of MV pixels 102 with the angular resolution of the MV display device 100.
Display “Sub-Pixels”
Many FPDs create color via the use of different colored sub-pixels (e.g., red, green, and blue sub-pixels). In other words, the color of each display pixel may be set by use of different colored display “sub-pixels” that collectively form the display pixel. When viewed from sufficiently far away, the display sub-pixels cannot be individually resolved, and thus create the effect of mixing the individual colors together for the corresponding display pixel. In MV applications, the magnification of the lens system may be set high to give distinct angular resolution, though this may make the individual display sub-pixels visible. If a viewer is in the path of a beamlet of only a given display sub-pixel and not of other display sub-pixels forming a display pixel, then the viewer can only see the color of that display sub-pixel (e.g., red, green or blue) and not the mixed color intended for the display pixel. A similar problem may occur even with monochrome displays where there is a gap between display sub-pixels.
To solve this problem, the MV display device 100 uses the diffuser 162 (see
There may be engineering tradeoffs in selecting the proper diffuser 162. A diffuser that provides wide lateral mixing will mix colors well, but will limit the achievable angular resolution of the display because of smear.
The sub-pixel pattern used on FPDs 110 varies. A typical pattern is shown in
Future FPDs may incorporate more amenable color mixing techniques (e.g., field sequential color), which may lessen the need for the diffuser. Thus, the use of a diffuser is preferable in FPDs that use typical color-filtered sub-pixel channels, and in general this diffuser will have an asymmetric scattering profile.
Lens Design and Intra-Array Mechanical Alignment and Fixture Features
In various exemplary embodiments, a multi-element lens (or a multi-element lens system) is employed. Using multiple elements to form a lens system allows one to achieve a much better tradeoff among focus, field of view, and fill factor. One could assemble each multi-element lens independently, including providing baffles to prevent stray light from crossing among MV pixels 102, and then array them on top of the flat panel display 110. Such a technique may be prohibitively expensive. Alternatively, using the example of lenticular lens sheets, one could imagine stacking sheets of lenses to create the individual lens elements in parallel.
There may be a number of problems with a naïve lens sheet approach. First, it may be difficult to maintain proper spacing among the lenses along the optical axis. Second, differential thermal expansion would make it difficult to keep the lenses centered over the correct display pixels as the temperature changes. For example, if the lens sheet were fixed to one edge of the flat panel display 110, thermal expansion would shift the MV pixels 102 on the opposite, unfixed edge much more than those on the constrained edge. Third, a sheet made of optical material may provide paths for stray light to pass parallel to the flat panel display 110, passing from one MV pixel 102 to another. Finally, there may be significant manufacturing challenges in molding a large sheet of precision lenses with arbitrary surfaces on both sides. As set forth below, MV display devices 100 according to the present disclosure overcome those issues.
Holding multiple sheets of lenses a constant distance away from each other may be challenging. FPDs can be quite large, and sheets of that size may exhibit significant sag. This could be overcome to some degree by holding the sheets under high tension from the edges. But this solution causes its own problems, including stretch of the lens sheet, and a need for a large mechanical frame that would cause large gaps in a tiled system. The present disclosure overcomes these two issues by including self-aligning features in the area between lenses that help maintain precise alignment. Those features will be described in detail below with reference to
One way of preventing sag is to limit the size of the sheets to something small, and then tile these pieces together. In exemplary embodiments, the lenses are constructed in 4×4 lens assemblies 132 which are held in place via a system of supporting rails 134, 136, as shown in
In one or more embodiments, the support structure includes a plurality of vertical rails 134 and a plurality of horizontal rails 136. For example, the vertical and horizontal rails 134, 136 may be integrally formed, or soldered together. Each of the vertical rails 134 has a plurality of apertures formed therein, wherein a plurality of internal threads is formed in each aperture. The lens assemblies 132 are coupled to the vertical rails 134 using a plurality of screws 138 having external threads. After the lens assemblies 132 are placed on the vertical and horizontal rails 134, 136, the screws 138 are inserted into the apertures formed in the vertical rails 134 and rotated, which causes the heads of the screws 138 to move toward the vertical rails 134 until the heads of the screws 138 contact the lens assemblies 132 and securely fasten (hold) them to the vertical rails 134.
In one or more embodiments, multiple lens assemblies 132 are tiled together to form a lens array panel 112 that covers the flat panel display 110. The lens array panel 112 includes features that aid in the alignment of the lens assemblies 132. It should be noted that other sizes of arrays and specific details of shapes can be modified and fall within the scope of this disclosure.
When the MV display device 100 is assembled, the flat panel display 110 is located behind the second side 144b of the third lens array 144, at or near the imaging plane; and viewers would be located in front of the first side 140a of the first lens array 140. As described below, the first lens array 140, second lens array 142, and third lens array 144 form a multi-element (triplet) optical system (or lens system).
Each lens assembly 132 needs to have its mechanical degrees of freedom constrained with respect to the flat panel display 110, as well as the other lens assemblies 132. This is accomplished using several features. A rail system as described above in reference to
To meet the design goal of having as large a fill factor as possible, the individual lenses within a lens assembly 132 are very closely abutted. This may have the effect of leaving very little space between each lens within the array, which drives the need for a mounting system that takes up very little space within the lens assembly. Further, the lens assemblies 132 are tiled in such a fashion that many of the lens assemblies 132 are “landlocked,” meaning they are completely surrounded by other lens assemblies 132. In exemplary embodiments, the mounting system for the lens assemblies 132 includes a set of rails 134, 136 (see
Kinematic mounting features are incorporated into interfaces between pairs of the lens arrays 140, 142, 144.
The quantity of the lenses 142c included in the second lens array 142 is the same as the number of lenses 140c included in the first lens array 140. A plurality of cylindrical or truncated cylindrical holes 142d extends into a surface at the first side 142a of the second lens array 142. A mating surface 142e is disposed at the bottom of each of the holes 142d. The posts 140d of the first lens array 140 are inserted into corresponding holes 142d of the second lens array 142 until the mating surfaces 140e, 142e abut each other, thereby constraining motion along the z-axis (or optical axis) of the lens arrays 140, 142, as well as the roll (rotation about the x-axis) and pitch (rotation about the y-axis) degrees of freedom.
When the posts 140d of the first lens array 140 are inserted into corresponding holes 142d of the second lens array 142, the outer cylindrical mating surface 140g abuts the inner cylindrical mating surface 142g, thereby constraining the x and y-axis degrees of freedom between these two lens arrays 140, 142. Additionally, the outer cylindrical mating surface 140h abuts the mating surfaces 142h, thereby constraining yaw, or rotation about the z-axis (optical axis), between the two lens arrays 140, 142.
The rail system described above (see
Finally, as in any optical system, the ability to adjust focus may be desirable. In some embodiments, the distance between the flat panel display 110 and the lens array panel 112 may be adjusted by the placement of shims between the flat panel display 110 mounting features and their respective seats. In the enclosure of the MV display device 100, the flat panel display 110 is mounted to a rigid plate to ensure that the flat panel display 110 remains planar. This rigid plate is then mounted to the enclosure itself (e.g., rear cover 106). Shims may be added or removed from this mechanical connection in order to adjust focus, or the distance between the lens assemblies 132 of the lens array panel 112 and the flat panel display 110.
Stray Light Management Techniques
Internal Baffles
Many optical systems are comprised of a series of lenses placed axially in relation to each other to achieve a desired optical performance. In that scenario, the lenses are often placed in a black barrel. The black barrel aids in blocking undesired light from entering the optical system, which may introduce ghost images, hot spots, and contrast reduction. In exemplary embodiments, an array of lenses (e.g., lens assembly 132) is used, which is formed of multiple (e.g., three) lens arrays 140, 142, 144 that are stacked together, in which it may be difficult to provide a black barrel structure for each of the 16 lenses (or 16 lens systems) of the 4×4 array. One possible avenue for stray light in the lens assembly 132 is light entering the surface of the lens assembly 132, propagating internally like a waveguide, and then exiting a different surface of the lens assembly 132. This is undesirable because such rays propagate into space and cannot be calibrated, since their exact origin is unknown. To reduce this “channel crosstalk,” some embodiments use a series of grooves or recesses 140i that act as internal baffles for the lens assemblies 132.
Along with painting of certain surfaces that will be discussed more in depth below, these internal baffles provided by the recesses 140i block light propagating in an undesirable manner within the slab of the lens assembly 132. These grooves/recesses 140i extend outwardly from a surface at the second side 140b of the first lens array 140, within the material of the first lens array 140. This has the effect of optically isolating each lens 140c within the first lens array 140, from a channel crosstalk point of view. It should be noted that other shapes and configurations are possible for these internal baffles 140i and are considered within the scope of this invention.
Painting of Surfaces
To further address stray light as well as visual appearance, as this is inherently a visual instrument, several surfaces of the first lens array 140 may be coated with a light-absorbing coating 148, for example, black paint. In one or more embodiments, the light-absorbing coating 148 absorbs a specific portion of the light incident thereon, for example, red paint or coating, or a substantial portion of the light incident thereon, for example, black paint or coating.
Alternative methods to achieve similar ends include bonding of a black material to these surfaces, and two-part injection molding, which are considered within the scope of the present disclosure.
While painting of surfaces can achieve the desired effect, the process of painting specific areas of the lens array may prove challenging. Other methods that can achieve black surfaces in molded lens areas include “overmolding” and “in-mold decorating” described below.
Overmolding and In-Mold Decorating of Lens Arrays
In one embodiment, a part (of a lens array) may be molded from a non-transparent medium, and then the optical surfaces of/around that part may be molded from a transparent medium. This process can either be done as two steps in the same molding process, or as separate molding processes, with the part molded in the first process thereafter placed into the mold for the second process.
In another embodiment, when the molding media such as polymer plastic is deposited in the mold for producing a part (of a lens array), an opaque film may be placed in the mold before the mold is closed such that the film will be registered and adhered to the molded part. Those with ordinary skill in the art will recognize this technique for applying decoration to molded plastic consumer goods. Typically, the film is fed from roll-to-roll during the time that the mold is open and secured to one side of the mold using a vacuum system. Typically, precise registration is required in order to form precise apertures for each optic in the lens array.
Painting Prior to Anti-Reflection Coating
During manufacture of an optical system, as discussed above, a series of lenses are typically placed into a black cylindrical housing. A multi-element lens assembly employs different approaches to common issues. One example arises in the normal manufacture of a lens element: the lens is ground or molded from, for example, glass or plastic. The optical element may then have an optical coating applied. For example, an anti-reflection (AR) coating or specific bandpass coating may be applied. Finally, the lens may have its edges painted black. Although it is common for lenses to be placed into a black housing, painting the edges of the lens black can help with stray light concerns.
In the present disclosure, the typical order of operations may cause undesirable effects. Therefore, it may be desirable to change the normative order of operations. Namely, in some exemplary embodiments, elements (e.g., the first lens array 140) of the lens assemblies 132 have their shapes defined first, then all painting operations of the light-absorbing coating material are performed, finally the optical (e.g., anti-reflection or bandpass) coating is applied. In the case of an AR coating with typical characteristics of very low reflectance over the visible spectrum, this has the effect of producing a visually darker black when looking at the lens assemblies 132 as less light is reflected and makes it back to the observer. If the AR coating is applied first followed by surface painting, color artifacts may be present and surfaces painted a given color may appear differently. This is due to the optical interface that is created between an AR coating and black paint, for example. It should be noted this is a general technique that may be applied to other coating and surface finishing solutions.
Aperture Arrays
Opaque apertures may be used for both managing stray light and defining the aperture stop and pupils of an optical system. The MV display device 100 may utilize three aperture arrays 220, 222, 224 integrated into the lens assembly 132, as shown in
As shown in
The individual lens arrays 140, 142, 144 of the assembly 132 include unique features for supporting, fixturing, and locating of the aperture arrays 220, 222, 224. As shown in
The first posts 144d of the third lens array 144 constrain several degrees of freedom of the third aperture array 224; namely, motion along the z-axis, as well as roll, pitch, and yaw. The second posts 144e of the third lens array 144 are used for locating and mounting of the second lens array 142 and the third lens array 144 relative to each other. Holes 224b formed in the third aperture array 224 fit over the second posts 144e, as shown in
Baffles
Ideally, each multi-element lens (or lens assembly) 132 only receives light from a section of the flat panel display 110 that is assigned to it. Theoretically one could assume that if the lens system were designed for a certain image height/field-of-view, then the light emanating from outside of the region would not pass through the system. In practice, however, this assumption may not hold true since these rays can cause scattered stray light that does pass through the system as well as causing contrast reduction. Since most FPDs have very large emission profiles, a field stop is not sufficient to address these issues. One solution is to cordon off each lens system (e.g., each lens assembly 132) near the flat panel display 110 with an opaque wall such that light from one lens's FPD region cannot transmit to another lens. To achieve this, as shown in
In one or more embodiments, each of the first baffles 150 includes a plurality of first slots, wherein each of the first slots extends through approximately one-half of the height of the first baffles 150. Additionally, each of the second baffles 152 includes a second slot, wherein the second slot extends through one-half of the height of second baffles 152. Each first baffle 150 is interlocked with a plurality of second baffles 152. The first and second baffles 150, 152 are interlocked at locations of the first and second slots such that portions of the first baffle 150 adjacent to each first slot are disposed around a portion of one of the second baffles 152, and portions of each second baffle 152 adjacent to its second slot are disposed around a portion of the first baffle 150.
The width of the slots 158 is approximately the same as the width of the baffles 150, 152 so that the walls 156 hold the baffles 150, 152 firmly in place. For each of the fixtures 154, a first baffle 150 is inserted into two collinear slots 158 of the fixture 154, and a second baffle 152 is inserted into the other two collinear slots 158 of the fixture 154. In one example, the first baffles 150 are inserted as rows into the horizontal slots 158, and the second baffles 152 are inserted as partial columns into the vertical slots 158 shown in
Another way of isolating each optical channel is to manufacture a single-piece baffle structure 151 that includes the baffles 150, 152, as shown in
The single-piece baffle structure 151 can be formed into a particular shape related to the lens assemblies 132.
Enclosure Front Aperture
Referring once again to
Another consideration for the apertures 108a of the front cover 108 is visual appearance. The lenses of the lens assemblies 132 may or may not have an optical coating applied. The presence of an optical coating, such as an AR coating, drastically changes the visual appearance of the lens elements themselves. To reduce the visual busyness of the front of the MV display device 100, it may be desirable that the apertures 108a of the front cover 108 have a dark color and reflectivity visually similar to that of the optical elements. Because the MV display device 100 is inherently a visual device designed to display information to viewers, features that distract from the optical elements or the MV pixels 102 also distract from the functionality of the MV display device 100.
Diffuser
In color filtered displays, color filters are placed over different display sub-pixels to create a larger display pixel. Most FPDs operate in this regime. The radiant exitance (radiant emittance) emitted from each display sub-pixel can be modulated to create different colors than that of the color primaries of the display sub-pixels. Three different examples of red, green, and blue (RGB) color primary display sub-pixel structures are shown in
One approach in designing a projection system utilizing an electronic imaging device would be to assume that no diffuser is needed, and simply place a lens at the proper distance from the imaging device to project an image to the desired plane. In the specific case of a stripe RGB color filter FPD (see
A more sophisticated approach would employ a diffuser, or scatterer, placed between the imaging device and the lens to help blend the spatially distinct regions of color primaries, or display sub-pixels. Examples of diffusers that can be used for this purpose are frosted glass, ground glass, diffuser film which is visually similar to frosted glass, etc. These diffusers often exhibit a scattering profile that is circularly symmetric arising from a stochastic process employed in their manufacture. This approach could lead to a more uniform color in a given region of the projected image with an inherent tradeoff. The tradeoff may come in the form of decreased spatial resolution, since the diffuser naturally causes loss of spatial fidelity in the image plane.
Various exemplary embodiments employ an engineered diffuser 162 with a non-circularly symmetric scattering profile, as shown in
The backlighting scheme or emission profile of the flat panel display 110 can also play a role in determining the ideal scattering angles of the diffuser 162. In an example flat panel display 110 with a stripe-style pixel structure, two examples of backlights that can be used are collimated and non-collimated. A collimated backlight would produce light travelling largely in a single direction impinging on the backside of the transmissive FPD. A non-collimated backlight would emit light into some larger cone or solid angle. These two examples would call for largely different diffuser scattering profiles. Therefore, the emission profile of the flat panel display 110 is an important input in the design of a diffuser scattering profile.
In general, the scattering profile of the engineered diffuser 162 is elliptical. The major and minor axes of the diffuser 162 may be aligned to the characteristic axes of the flat panel display's 110 sub-pixel structure. In a stripe sub-pixel arrangement, the major axis of the scattering profile will be aligned in the x-axis of
In the context of a multi-view display device 100 made up of a stripe RGB flat panel display 110 with lens assemblies 132 placed atop, the diffuser 162 may play an important role. Since the stripe RGB flat panel display 110 is made up of display pixels with spatially separated colored sub-pixels, light from these sub-pixels will be directed by the lens into different angular directions. An observer looking at this lens would therefore see a magnified portion of an individual display sub-pixel, thereby limiting the colors that can be displayed to the observer to those of the color primaries of the color filters. The practical application and purpose of the diffuser 162 is to scatter the light from the individual display sub-pixels, allowing for the blending of the three RGB display sub-pixels. As discussed earlier, this means a reduction in spatial resolution, or angular fidelity, of the MV pixel. From a practical standpoint, the needed amount of diffusion or blending is only over an individual display pixel, blending the display sub-pixels together. A diffuser placed over the flat panel display 110 will, in fact, blend more than just the display sub-pixels of a given display pixel. Since display sub-pixel spacing, say from a red sub-pixel to the next red sub-pixel, is different in the vertical and horizontal directions, it may be desirable to apply different color diffusion in the vertical and horizontal directions.
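For illustration, a rough estimate of the different scattering angles needed along the two axes can be made from this geometry: the scattering angle needed to blend a feature on the display plane is approximately the angle that the feature subtends at the lens. The focal length, pixel pitch, and row-gap values in the sketch below are hypothetical and are not taken from the present disclosure.

```python
import math

# Hypothetical values for illustration only; not taken from the present disclosure.
FOCAL_LENGTH_MM = 25.0    # assumed effective focal length of the lens system
PIXEL_PITCH_MM = 0.05     # assumed display pixel pitch
ROW_GAP_MM = 0.005        # assumed black-matrix gap between display pixel rows

def scatter_angle_deg(feature_mm, focal_length_mm):
    """Full angle subtended at the lens by a feature of the given size on the
    display plane; roughly the scattering angle needed to blur that feature."""
    return math.degrees(2.0 * math.atan2(feature_mm / 2.0, focal_length_mm))

# For a stripe RGB panel, the R, G, and B sub-pixels of one display pixel are
# separated horizontally, so the horizontal (major) axis of the diffuser must
# blend roughly one full pixel pitch; vertically, only the inter-row gap needs
# to be smoothed over.
print("horizontal scatter (deg):", scatter_angle_deg(PIXEL_PITCH_MM, FOCAL_LENGTH_MM))
print("vertical scatter (deg):  ", scatter_angle_deg(ROW_GAP_MM, FOCAL_LENGTH_MM))
```

The asymmetry of the two results is why an elliptical, non-circularly symmetric scattering profile is preferred over a conventional circularly symmetric diffuser.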
Another consideration in the optimal design of the diffuser 162, along with the backlight design, is the physical structure of the flat panel display 110. Many display panels include several layers of polarizers, cover glass, etc. All these elements are a consideration in the design of a diffuser 162 that will optimally blend the colors of individual display sub-pixels within the flat panel display 110.
Display Controller
The display controller 170 receives data from, for example, the host computer 182 via the network controller 178 and drives the flat panel display 110 to generate beamlets that create images directed towards viewing zone(s), as described below. When the MV display device 100 is one of many MV display devices 100 that are daisy chained (see
Pixel Processing Unit
The pixel processing unit (PPU) 172 computes and renders the beamlet patterns on the flat panel display 110 to show the appropriate images to the associated viewing zones. In other words, the PPU 172 identifies a first bundle of beamlets, which originate from a first set of display pixels on the FPD 110 and are directed to the pupil of a first viewer in a first viewing zone to form a first image in the first viewer's brain, and a second bundle of beamlets, which originate from a second set of display pixels (different from the first set of display pixels) and are directed to the pupil of a second viewer in a second viewing zone to form a second image in the second viewer's brain.
In various embodiments, the PPU 172 receives viewing zone coordinate data which defines locations of the first and second viewing zones, content stream data used to form the first and second images, viewing zone to content stream mappings that associate different content to different viewing zones, respectively, calibration parameters used to calibrate the MV display device 100, and/or color palette parameters from the host computer 182 to render the images on the flat panel display 110 that generate the appropriate beamlet patterns.
In various embodiments, viewing zones are described in a viewing zone coordinate system, such as the coordinate system of a camera (e.g., camera 104) looking at an environment in which the MV display device 100 is used. Beamlets generated by the flat panel display 110, on the other hand, are described in a beamlet coordinate system, such as X-Y display pixel coordinates or floating-point viewport coordinates of display pixels of the flat panel display 110. The PPU 172 applies mathematical transformations between the viewing zone coordinate system and the beamlet coordinate system to compute the corresponding beamlet coordinates for viewing zones. In other words, the PPU 172 applies mathematical transformations between the viewing zone coordinate system and the beamlet coordinate system to determine which display sub-pixels to activate to produce beamlets that are visible at corresponding locations (viewing zones) in the viewing zone coordinate system.
Each multi-view (MV) pixel 102 controlled by the PPU 172 has a unique mapping between the two coordinate systems, which is contained in its associated set of calibration parameters (p0, p1, . . . , p15). One embodiment of the mathematical mapping between the viewing zone coordinate system (X, Y, Z) and the beamlet coordinate system (U, V), which utilizes the calibration parameters (p0, p1, . . . , p15), is provided below in Equations 1-5. The PPU 172 uses Equations 1-5 to map between the viewing zone coordinate system (X, Y, Z) and the beamlet coordinate system (U, V).
In one or more embodiments, the PPU 172 includes a processor and a memory storing instructions that cause the PPU 172 to receive information regarding a set of coordinates in the viewing zone coordinate system, determine a corresponding set of coordinates in the beamlet coordinate system by evaluating Equations 1-5, and output information regarding the corresponding set of coordinates in the beamlet coordinate system, which is used to drive the flat panel display 110.
Those with ordinary skill in the art will recognize there are many alternative mathematical models and parameter sets that may be used to create a mapping between a viewing zone coordinate system and a beamlet coordinate system. The calibration parameters for each multi-view (MV) pixel are computed with a calibration procedure, as described below.
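As a purely illustrative example of one such model (Equations 1-5 themselves are not reproduced here), the sketch below applies a generic 16-parameter homogeneous (projective) transform to map a viewing zone coordinate (X, Y, Z) to a beamlet coordinate (U, V). The form of the mapping and the parameter values are assumptions made only to show how a per-MV-pixel set of calibration parameters (p0, p1, . . . , p15) might be used.

```python
# Illustrative only: one of many possible 16-parameter mappings from the viewing
# zone coordinate system (X, Y, Z) to the beamlet coordinate system (U, V).
# This is NOT the disclosure's Equations 1-5; it is a generic projective model
# shown to make the role of the calibration parameters (p0..p15) concrete.

def map_zone_to_beamlet(p, X, Y, Z):
    """Apply a 4x4 homogeneous transform (p[0]..p[15], row-major) to (X, Y, Z)
    and return the dehomogenized (U, V) beamlet coordinate."""
    u = p[0] * X + p[1] * Y + p[2] * Z + p[3]
    v = p[4] * X + p[5] * Y + p[6] * Z + p[7]
    # p[8]..p[11] could model a third output (e.g., depth); it is unused for (U, V).
    w = p[12] * X + p[13] * Y + p[14] * Z + p[15]
    if w == 0.0:
        raise ValueError("point maps to the plane at infinity")
    return u / w, v / w

# Placeholder parameters: an identity-like mapping that simply drops Z.
p_identity = [1, 0, 0, 0,
              0, 1, 0, 0,
              0, 0, 1, 0,
              0, 0, 0, 1]
print(map_zone_to_beamlet(p_identity, 1.5, -0.25, 2.0))  # -> (1.5, -0.25)
```

In practice, the calibration procedure described below would fit the parameter set for each MV pixel so that the computed (U, V) lands on the display pixel whose beamlet actually terminates at (X, Y, Z).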
To reduce the data bandwidth and storage requirements for content streams and frame buffers, the color bit width can be less than the native color bit width of the flat panel display 110. In some embodiments, color values are represented using 8 bits, while the flat panel display 110 is driven with 24-bit color values. The PPU 172 stores a color palette that converts between the stored color bit width and the native flat panel display 110 bit width. For example, the stored 8-bit color can be represented as a 0-255 grayscale, 3:3:2 RGB (i.e., three bits for red, three bits for green, and two bits for blue), or an alternative color representation. The color palette for each panel can also be tuned to provide color matching between multiple panels.
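A minimal sketch of such a palette lookup is shown below, assuming the 3:3:2 RGB representation mentioned above; the bit-expansion rule is one common choice and is illustrative only. Per-panel color matching could then be implemented by adjusting individual palette entries for each panel.

```python
def build_332_palette():
    """Build a 256-entry palette that expands a stored 3:3:2 RGB byte into the
    24-bit (8:8:8) color used to drive the flat panel display."""
    palette = []
    for code in range(256):
        r3 = (code >> 5) & 0x7          # top 3 bits: red
        g3 = (code >> 2) & 0x7          # middle 3 bits: green
        b2 = code & 0x3                 # bottom 2 bits: blue
        # Scale each field to the full 0-255 range.
        r8 = round(r3 * 255 / 7)
        g8 = round(g3 * 255 / 7)
        b8 = round(b2 * 255 / 3)
        palette.append((r8, g8, b8))
    return palette

palette = build_332_palette()
print(palette[0xFF])   # (255, 255, 255): white
print(palette[0xE0])   # (255, 0, 0): pure red
```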
In various embodiments, the PPU 172 is implemented in a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). Those with ordinary skill in the art will recognize there are many other alternative implementations, including a central processing unit (CPU), graphics processing unit (GPU), or a hybrid processor. In addition, multiple processors may be used together to perform the tasks of the PPU 172.
The PPU 172 communicates with the volatile memory 174 and/or non-volatile memory 176 to perform its tasks. The volatile memory 174 may comprise dynamic random-access memory (DRAM) and/or static random-access memory (SRAM), for example. The non-volatile memory 176 may include flash, Electrically Erasable Programmable Read-Only Memory (EEPROM), and/or a disk drive. In various embodiments, the PPU 172 communicates with the volatile memory 174 to store dynamic run-time data, including but not limited to viewing zone data, content stream data, viewing zone to content stream mappings, and/or frame buffer data. The PPU 172 communicates with the non-volatile memory 176 to store static data, including, but not limited to, calibration parameters, color palettes, firmware, identification numbers, and/or version numbers. The PPU 172 also can modify the contents of the non-volatile memory 176, for example, to set the stored parameters or update firmware. The ability to update firmware on-the-fly allows easier upgrades without having to plug in an additional programmer cable and run specialized software from the host computer 182.
The PPU 172 provides buffering in the system to allow graceful performance degradation in non-ideal situations. Typically, for a display such as an LCD, video data must be consistently sent at a fixed rate (e.g., 30 Hz, 60 Hz). However, due to the non-deterministic computations, rendering, and data transmission from the host computer 182, the PPU 172 may generate data at a non-fixed rate. Thus, the PPU 172 includes buffering when controlling the flat panel display 110 to, for example, hold the last frame's state if the data is too slow, or drop frames if the data is too fast.
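The buffering behavior described above can be sketched as follows; the single output buffer and the explicit refresh-tick loop are simplifications for illustration and do not represent the actual PPU 172 implementation.

```python
import collections

class FrameBuffer:
    """Toy model of output buffering: the FPD is refreshed at a fixed rate,
    while rendered frames arrive at a variable rate. If no new frame has
    arrived by the next refresh, the last frame is repeated; if several frames
    arrived, only the newest is shown and the rest are dropped."""

    def __init__(self):
        self.pending = collections.deque()
        self.last_shown = None

    def frame_ready(self, frame):
        # Called whenever the renderer finishes a frame (non-deterministic rate).
        self.pending.append(frame)

    def refresh_tick(self):
        # Called at the FPD's fixed refresh rate (e.g., 60 Hz).
        if self.pending:
            self.last_shown = self.pending.pop()  # newest frame
            self.pending.clear()                  # drop any older frames
        return self.last_shown                    # repeat last frame if none arrived

buf = FrameBuffer()
buf.frame_ready("frame 1")
print(buf.refresh_tick())  # frame 1
print(buf.refresh_tick())  # frame 1 again (renderer was too slow)
buf.frame_ready("frame 2")
buf.frame_ready("frame 3")
print(buf.refresh_tick())  # frame 3 (frame 2 dropped because data arrived too fast)
```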
The PPU 172 drives the flat panel display 110 through a FPD connector 184. In various embodiments, the FPD connector 184 is an embedded DisplayPort (eDP) interface. Those with ordinary skill in the art will recognize there are many alternative display interfaces that may be used, including but not limited to DisplayPort, High-Definition Multimedia Interface (HDMI), Digital Visual Interface (DVI), and Video Graphics Array (VGA). In one or more embodiments, the FPD connector 184 additionally contains connections for powering, controlling, and/or modulating a backlight of the flat panel display 110.
The PPU 172 communicates with the host computer 182 and/or other display controllers 170 (of other MV display devices 100) through the network controller 178. The PPU 172 sends and/or receives data through a network, including but not limited to viewing zone information, content stream data, viewing zone to content stream mappings, calibration parameters, color palette parameters, identification information, addressing information, status information, and/or other configuration information. In various embodiments, the network is an Ethernet® network, and the network controller 178 provides an Ethernet® physical layer interface. Those with ordinary skill in the art will recognize there are many alternative data interfaces that may be used, including but not limited to Universal Serial Bus (USB), Peripheral Component Interconnect (PCI), Infiniband®, and/or Thunderbolt®. Some data interfaces may be preferred over others for certain circumstances. For example, Ethernet® generally can span longer physical distances than USB, which may be advantageous in many installation configurations.
Multi-MV Display Device Tiling Features
Several features of the display controller 170 facilitate tiling of multiple MV display devices 100 to form a larger display. For example, in various embodiments, the features may be used to connect multiple MV display devices 100 in a daisy chain, to reduce the number of ports required by the host computer 182, reduce cable lengths, and simplify installation. Those with ordinary skill in the art will recognize there are many alternative connection architectures, including but not limited to buses, trees, stars, and/or meshes.
The network controller 178 contains two network interfaces 179a and 179b coupled to respective data connectors 120 to allow passing of received data to downstream MV display devices 100. In various embodiments, the network controller 178 comprises a dual Gigabit Ethernet® transceiver. The PPU 172 can receive data from a first network interface 179a and transmit data to a second interface 179b and vice versa. The transmitted data on the second interface 179b can be a direct copy of the received data on the first interface 179a, a filtered version of the received data, a transformed version of the received data, or entirely independent data.
For example, in various embodiments, viewing zone data sent by the host computer 182 is intended to be consumed by all MV display devices 100 in a MV display system 122 (see
The directionality of the network interfaces 179a, 179b of the network controller 178 can be programmed on-the-fly. This multi-way directionality allows flexibility in installation configurations. For example, one situation may require the host computer 182 to be placed within a daisy chain to the left of a MV display device 100, while another situation may require the computer 182 to be placed within a daisy chain to the right of the MV display device 100. This directionality programming can be done either passively or with an active command. In an example of the former, any data received on either network interface of the network controller 178 can be operated upon and forwarded to the other interface of the network controller 178. In an example of the latter, one network interface of the network controller 178 is designated as the upstream interface, while the other is designated as the downstream interface. If a “set direction” command is received on the downstream interface, the upstream/downstream designations can be flipped.
Some commands may be broadcast to all display controllers 170 in a chain. For example, in various embodiments, all display controllers 170 operate on the same set of viewing zone data, which is broadcasted to all display controllers 170. However, to allow different display controllers 170 in a daisy chain to operate on different data, the display controllers 170 may need to have distinct addresses. For example, each display controller 170 may use its own set of calibration parameters and may render from its own portion of the content stream. A straightforward method to assign distinct addresses is for each display controller 170 to have a globally unique ID. For example, a serial EEPROM with a pre-programmed globally unique ID can be read by the PPU 172. As another example, a unique ID number can be stored in the non-volatile memory 176. The host computer 182 can query the display controllers 170 in the daisy chain for their unique IDs, and map content stream portions to those unique IDs. However, these techniques require either separate ID memories or bookkeeping steps.
In various embodiments, temporary unique ID numbers are assigned at run-time. For example, the host computer 182 sends a “Set Address” command with a base address and increment value to a first display controller 170 in the daisy chain. The first display controller 170 sets its address to the given base address. Then, the first display controller 170 sends the base address with the increment value added to it to a second display controller 170 in the daisy chain along with the increment value. The second display controller 170 sets its address to the incremented base address, increments the address again, and sends the new address and increment value to a third display controller 170 in the daisy chain, and so on. This way, each display controller 170 is assigned a known, unique address within the daisy chain at run-time.
The host computer 182 can perform a query to determine the number of display controllers 170 in the chain at run-time. For example, each display controller 170 may be designed to respond to a ping command with its unique address. The ping command is broadcast by the host computer 182 to all display controllers 170 in a chain, and all of the display controllers 170 respond to the ping command with their unique addresses. Then the host computer 182 can simply count or check the number of ping responses to determine the number and addresses of the display controllers 170 in the chain. This way, applications can be adaptable to the number of MV display devices 100 in a chain, rather than requiring a fixed number of MV display devices 100.
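A minimal sketch of the run-time address assignment and ping-based enumeration described above follows; the class, method names, and message formats are hypothetical, and each display controller 170 is modeled as an object that forwards commands down the chain.

```python
class DisplayController:
    """Toy model of a daisy-chained display controller, illustrating both the
    run-time "Set Address" assignment and the ping-based enumeration described
    above. The class and message format are hypothetical."""

    def __init__(self):
        self.address = None
        self.downstream = None   # next controller in the daisy chain, if any

    def set_address(self, base, increment):
        # Adopt the given base address, then forward an incremented base downstream.
        self.address = base
        if self.downstream is not None:
            self.downstream.set_address(base + increment, increment)

    def ping(self):
        # Each controller answers a broadcast ping with its unique address.
        responses = [self.address]
        if self.downstream is not None:
            responses.extend(self.downstream.ping())
        return responses

# Host side: build a chain of four controllers, assign addresses, then count them.
chain = [DisplayController() for _ in range(4)]
for upstream, downstream in zip(chain, chain[1:]):
    upstream.downstream = downstream

chain[0].set_address(base=0x10, increment=1)   # host sends "Set Address" to the first unit
addresses = chain[0].ping()                    # broadcast ping, collect responses
print(len(addresses), [hex(a) for a in addresses])   # 4 ['0x10', '0x11', '0x12', '0x13']
```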
In addition to the network interfaces, power interfaces of the power controller 180 can be arranged to allow daisy chaining as well. For example, power can be received from a first interface 179a of the power controller 180 and transmitted to a second interface 179b of the power controller 180. Alternatively, the first and second interfaces of the power controller 180 can be directly connected such that power can be transmitted in either direction, to allow more flexible installation.
Programming Interface
In various embodiments, the primary method for controlling MV display devices 100 is through an Application Programming Interface (API) running on the host computer 182 attached to the display controllers 170 of the MV display devices 100 via Ethernet. The API is intended to be used by programmers to control the MV display devices 100. The primary purpose of the API is to enable users to do three things: (i) create and update (i.e., resize, move, etc.) viewing zones in the viewing zone coordinate system, (ii) create and update (i.e., change color, text, scroll direction, image) content streams that can be shown to viewing zones, and (iii) assign viewing zones to content streams.
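For illustration only, one possible shape of such an API is sketched below; the class and method names are hypothetical and are shown solely to make the three basic operations concrete.

```python
# Hypothetical API sketch; names and signatures are illustrative, not the
# disclosure's actual programming interface.

class MultiViewAPI:
    def __init__(self):
        self.zones = {}      # zone_id -> bounds in the viewing zone coordinate system
        self.streams = {}    # stream_id -> content description
        self.mapping = {}    # zone_id -> stream_id

    # (i) create and update viewing zones
    def create_zone(self, zone_id, bounds):
        self.zones[zone_id] = bounds

    def move_zone(self, zone_id, new_bounds):
        self.zones[zone_id] = new_bounds

    # (ii) create and update content streams
    def create_stream(self, stream_id, content):
        self.streams[stream_id] = content

    # (iii) assign viewing zones to content streams
    def assign(self, zone_id, stream_id):
        self.mapping[zone_id] = stream_id

api = MultiViewAPI()
api.create_zone("left_sidewalk", bounds=((-3.0, 0.0, 1.0), (-1.0, 2.0, 4.0)))
api.create_zone("right_sidewalk", bounds=((1.0, 0.0, 1.0), (3.0, 2.0, 4.0)))
api.create_stream("ad_a", {"type": "image", "path": "ad_a.png"})
api.create_stream("ad_b", {"type": "image", "path": "ad_b.png"})
api.assign("left_sidewalk", "ad_a")
api.assign("right_sidewalk", "ad_b")
```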
The API allows users to do these things both statically and dynamically. Listed below are a few examples of both static and dynamic operation to help illustrate the breadth of experiences that can be created using these three basic features.
Static operation may be used to create viewing zones at specified locations and show content to viewers based on where they are placed. For example, one or more MV display devices 100 may be statically configured to show different advertisements to different sides of a street, or show a red light to cars over a certain distance away from a traffic light and a green light to closer cars. Additionally, one or more MV display devices 100 may be statically configured to use a map of the world on a floor, to show text to a viewer in the native language of a country on top of which the viewer is standing.
Dynamic operation may use dynamic content and static viewing zones. Viewing zones may be created at specified locations, and external data may be used to decide what content to show to what viewing zone. For example, a person could walk up behind a podium, see content on a sign, and use a dial on the podium to select the language of information that is displayed to the person. Also, people sitting in seats at a movie theater could use their phones to enter their seat numbers and captioning preferences (i.e., no captioning, English, Spanish, German, etc.). In this case the viewing zone is statically set for each seat, but the content changes based on the user input. Any interaction device (e.g., dials, phones, remote controls, gestures, facial recognition) may be used to change what content is being shown to a static location like a chair.
Dynamic operation also may use static content and dynamic viewing zones. The viewing zones are changed based on external data, but content is set using only internal data. For example, the API may be used to create 3D viewing zones and assign content to them, and the display controller 170 only turns on beamlets that terminate inside the viewing zones (which can be determined based on a real-time point cloud, time-of-flight camera, or another 3D sensor, to be described below). This has the effect of dynamically updating viewing zones so that they are the exact size of the person (or people) standing inside of them. For example, a user may statically set a 3D region to be the bounding box of a viewing zone. When one or more people enter the bounding box, the viewing zone is updated in a way such that it fits exactly to the people in the viewing zone. In other words, the 3D viewing zone may be statically set and dynamically updated. Additionally, people may be tracked using wands, badges, phones, motion capture systems, vehicles, or visual tags, etc., and content is assigned without external data (i.e., based on location).
In addition, dynamic operation may be fully dynamic, wherein viewing zones are dynamically created and content is dynamically based on external data. For example, people may be tracked using wands, badges, phones, motion capture systems, vehicles, visual tags, etc., and content is assigned based on who the person is or input the person has given to the system (i.e., if a person walks into a mall and starts looking at a particular item). Additionally, computer-aided facial recognition of a face of a viewer may be used to set a viewing zone around the face, identify who the viewer is, and show the viewer specific content based on the identity of the viewer.
In addition to the three basic features, several enhancements allow for easier operation including: (a) auto-discovery, (b) manually specifying the content buffer-to-display panel mapping, (c) filtering viewing zones based on calibrated area, and (d) single-view mode, to be described below.
(a) Auto-Discovery
The host computer 182 executes software to perform an auto-discovery process to discover what MV display devices 100 are connected to it and how they are plugged into each other. Without this data, an operator would need to manually program addresses for each MV display device 100 and then inform the API of the addresses of the MV display devices 100. Instead, on startup, the API finds all attached MV display devices 100 and assigns each of them an address. It does this in a programmatic and repeatable way such that if the order in which the MV display devices 100 are plugged in does not change, the address of each MV display device 100 will stay the same. This is advantageous for being able to show content correctly, since the API divides up content based on the addresses assigned to the MV display devices 100. There are numerous other ways to accomplish assigning persistent addresses, such as setting unique identifiers (IDs) for each MV display device 100 in the factory, but those approaches would be less efficient than the auto-discovery method, which requires no unique IDs to be pre-assigned.
(b) Manually Specifying the Content Buffer-to-Display Panel Mapping
When creating content for the multi-view display devices 100, one might expect to be able to create a single image (or frame buffer) and then assign parts of that image to be displayed on each individual MV display device 100 based on the physical arrangement of the MV display devices 100. Since the addresses of the MV display devices 100 are dependent on the order in which they are plugged in, and users can plug in MV display devices 100 any way they choose, adjacent addresses may not necessarily correspond to adjacent panels. In various embodiments, the MV display system 122 enables users to manually specify which portions of a frame buffer map to which addresses. For example, a user may specify that the content delivered by multi-view (MV) pixels (0,0) through (27,15) maps to a first MV display device 100, while the content delivered by MV pixels (28,0) through (56, 16) maps to a second MV display device 100, etc. Enabling users to assign portions of content this way gives users greater creative freedom. Alternatively, it may be possible to assume the MV display devices 100 are plugged in a certain way and to auto-assign the MV display devices 100 certain regions of the content, but that may force users to think carefully about how they plug in the MV display devices 100. It may also not even be possible to plug in the MV display devices 100 in the required configuration given physical constraints of mounting, etc.
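A minimal sketch of such a user-specified mapping is shown below, routing rectangular regions of MV pixels in a frame buffer to display device addresses; the region coordinates reuse the example above, and the data layout and addresses are hypothetical.

```python
# Hypothetical mapping from rectangular MV-pixel regions of a frame buffer to
# display device addresses, following the example ranges given above.
# Regions are (col_min, row_min, col_max, row_max), inclusive.
buffer_to_device = [
    {"region": (0, 0, 27, 15), "address": 0x10},   # first MV display device
    {"region": (28, 0, 56, 16), "address": 0x11},  # second MV display device
]

def device_for_mv_pixel(col, row):
    """Return the address of the MV display device that shows frame-buffer
    content at MV pixel (col, row), or None if the pixel is unmapped."""
    for entry in buffer_to_device:
        c0, r0, c1, r1 = entry["region"]
        if c0 <= col <= c1 and r0 <= row <= r1:
            return entry["address"]
    return None

print(hex(device_for_mv_pixel(10, 5)))   # 0x10
print(hex(device_for_mv_pixel(30, 10)))  # 0x11
```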
(c) Filtering Viewing Zones Based on Calibrated Area
It is sometimes difficult for users to know exactly where a MV display device 100 has been calibrated (i.e., the precise locations in the viewing zone coordinate system at which beamlets from each of its MV pixels are known to terminate) and where it has not been calibrated. Generally the MV display device 100 performs better inside an area in which calibration was performed (e.g., inside the convex hull of all the points a calibration device 210 was placed during calibration; see
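One way such a filter could be implemented is sketched below: a two-dimensional check that warns when a corner of a requested viewing zone falls outside the convex hull of the calibration points. The hull is assumed to be given as a counter-clockwise list of vertices, and the example coordinates are hypothetical.

```python
def inside_convex_hull(point, hull):
    """Return True if a 2D point lies inside (or on) a convex hull whose
    vertices are listed counter-clockwise."""
    px, py = point
    for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1]):
        # Cross product of the hull edge and the vector to the point; a negative
        # value means the point is to the right of the edge, i.e., outside.
        if (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0) < 0:
            return False
    return True

def check_viewing_zone(zone_corners, calibrated_hull):
    """Warn about any corner of a requested viewing zone that lies outside the
    calibrated area (the convex hull of the calibration points)."""
    outside = [c for c in zone_corners if not inside_convex_hull(c, calibrated_hull)]
    if outside:
        print(f"warning: {len(outside)} zone corner(s) outside the calibrated area: {outside}")
    return not outside

# Hypothetical calibrated area and a requested zone that partially leaves it.
calibrated_hull = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
zone = [(3.5, 2.5), (4.5, 2.5), (4.5, 3.5), (3.5, 3.5)]
check_viewing_zone(zone, calibrated_hull)
```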
(d) Single-View Mode
When a designer is using the MV display devices 100 and trying to preview content, the designer may need to get up from the host computer 182 and physically stand in a viewing zone to verify that the right content is visible in that zone. To ease the design burden, the MV display system 122 may include a “single-view” mode. In this mode, designers can see a single content stream no matter where they physically stand, as long as they are inside the field-of-view of the MV display devices 100. While this mode is designed to assist designers and programmers, it may also be used in ultimate operation of the MV display system 122 (see
Graphical User Interface
For less technical users to be able to use the MV display devices 100, a graphical user interface 186 can be used, as shown in
The graphical user interface 186 enables an operator to specify and display a viewing space representation 194 in the viewing zone coordinate system pane 192. For example, the viewing space representation 194 may be a 3D model of a room in which the MV display device 100 will be used. When an operator uses a pointing device (e.g., a mouse) of the host computer 182 to perform graphical operations on a display device of the host computer 182, the host computer 182 converts locations on the display device to corresponding locations in a viewing zone coordinate system (e.g., coordinate system of the room in which the MV display system 122 will be used). The graphical user interface 186 also enables an operator to place and manipulate viewing zones within the viewing zone coordinate system pane 192. For example, an operator may use a pointing device to draw, resize, and move a first viewing zone representation 196a, a second viewing zone representation 196b, and a third viewing zone representation 196c within the viewing zone coordinate system pane 192. In one or more embodiments, each of the viewing zone representations 196a-196c appears as a three-dimensional bounding box. After the user specifies three viewing zones with the viewing zone representations 196a-196c, the host computer 182 displaying the graphical user interface 186 converts coordinates of the boundaries of the viewing zone representations 196a-196c into corresponding coordinates in the viewing zone coordinate system of boundaries of three viewing zones, and then stores the coordinates of the viewing zones.
Providing a visual representation of the viewing zone coordinate system in the viewing zone coordinate system pane 192 can be helpful for people to understand how to use the MV display device 100. The form of the visual representation depends on the sensor 104 being used on/with the MV display device (see
In addition to showing the generic coordinate system in the form of a camera feed, point cloud, etc., the graphical user interface 186 can also show what the maximum calibrated bounds are. (See "(c) Filtering viewing zones based on calibrated area" discussed above.) The fact that a sensor can sense in a particular region does not necessarily mean that a viewing zone can be placed there. This is because a user may not have calibrated the entire viewing space within the field of view of the display sensor 104. In order to help the user understand what area is calibrated and what area is not, the graphical user interface 186 includes a feature that overlays a rendering of the calibrated area/volume over the viewing zone coordinate system visualization. In various embodiments, this may be a shaded 2D/3D box.
With a representation of the viewing zone coordinate system, viewing zones may be placed and manipulated within it. In 2D, this can simply be drawing and manipulating rectangles (or potentially other 2D shapes) on top of a camera feed to which the MV display device 100 is calibrated. In 3D, this may be more complicated. For the 3D case, a volume in space to which content is shown must be defined. In various embodiments, an axis-aligned bounding box (i.e., a rectangular prism with all sides parallel to an axis of the coordinate system) may be used to speed up computations, though any 3D volume may be used. Moving and manipulating 3D volumes in 3D space on a 2D computer monitor may be more difficult than the 2D case, but can be accomplished using standard CAD methodologies.
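For illustration only, the following sketch shows why an axis-aligned bounding box keeps the computation cheap: testing whether a point in the viewing zone coordinate system lies inside a zone reduces to a handful of comparisons. The class and values are hypothetical.

```python
# Illustrative sketch: point-in-zone test for an axis-aligned bounding box,
# which is why AABBs are cheap compared with arbitrary 3D volumes.
from dataclasses import dataclass

@dataclass
class ViewingZoneAABB:
    min_corner: tuple  # (x_min, y_min, z_min) in viewing zone coordinates
    max_corner: tuple  # (x_max, y_max, z_max)

    def contains(self, point):
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_corner, point, self.max_corner))

zone = ViewingZoneAABB((0.0, 0.0, 1.0), (2.0, 1.5, 3.0))
print(zone.contains((1.0, 0.5, 2.0)))   # True
print(zone.contains((3.0, 0.5, 2.0)))   # False
```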
At 304, a display sensor (e.g., 104) captures sensor data of a space in which the MV display device 100 is viewable. For example, in response to the message from the host computer 182, the camera 104 of the MV display device 100 captures sensor data of a portion of a room in which viewers of the MV display device 100 are located.
At 306, the sensor data is received. For example, the host computer 182 receives, via the network, the sensor data captured by the camera 104 and transmitted from the network controller 178 of the MV display device 100. In one or more embodiments, the sensor data may be sent via Universal Serial Bus.
At 308, the sensor data and viewing zone data are rendered on a display device. For example, a memory of the host computer 182 stores software instructions that, when executed by a processor, cause the host computer 182 to process the sensor data captured by the camera 104 and transmit corresponding processed data to a display device coupled to the host computer 182. The data transmitted to the display device is in a format that causes the display device to display the graphical user interface 186 shown in
After the sensor data and viewing zone data are rendered in the graphical user interface 186 on the display device at 308, the user is able to visualize viewing zones represented by the viewing zone representations 196a, 196b, 196c in the context of the display sensor data that is displayed on the display device. After viewing the information displayed in the graphical user interface 186, the user may determine that the viewing zone represented by the viewing zone representation 196a, for example, needs to be adjusted by being moved and resized. The user may then perform graphical operations using a pointing device (e.g., a mouse) coupled to the host computer 182 to select the viewing zone representation 196a and then resize and move it on the display device.
At 310, user input is received. For example, the host computer 182 receives data corresponding to the graphical operations the user has made that cause the viewing zone representation 196a to be resized and moved on the display device.
At 312, new coordinates of one or more viewing zones are determined. For example, the memory of the host computer 182 stores software instructions that, when executed by the processor, cause the host computer 182 to determine new coordinates, in a viewing zone coordinate system, of the viewing zone represented by the viewing zone representation 196a, based on the user input received at 310.
At 314, an application programming interface is notified. For example, the memory of the host computer 182 stores software instructions that, when executed by the processor, cause the processor to send a message indicating a change in the coordinates of the viewing zone represented by the viewing zone representation 196a to an application programming interface executing on the host computer 182.
At 316, viewing zone data is updated. For example, the application programming interface executing on the host computer 182 causes data corresponding to the new coordinates of the viewing zone represented by the viewing zone representation 196a determined at 312 to be stored in a memory of the host computer 182.
At 318, updated data is transmitted to a display device. For example, the application programming interface executing on the host computer 182 causes the data corresponding to the new coordinates of the viewing zone represented by the viewing zone representation 196a determined at 312 to be transmitted to the MV display device 100.
At 320, the method 300 ends. For example, the display controller 170 of the MV display device 100 stores the data corresponding to the new coordinates of the viewing zone represented by the viewing zone representation 196a and uses it to determine which display pixels of the flat panel display 110 cause beamlets to be emitted to the viewing zone represented by the viewing zone representation 196a.
One feature of the graphical user interface 186 is the ability to create and assign content to viewing zones. Content designers can design images and videos for multi-view displays in other software programs and then import them. However, users can create simple content, such as scrolling and static text, with the graphical user interface 186. Once the content has been created, it can be assigned to a content group. A content group has one piece of content assigned to it and one or many viewing zones. While it is also possible to think about this as assigning content to viewing zones, it may be more beneficial to think about assigning viewing zones to content because in various embodiments far fewer content streams are supported than viewing zones. This is because for any reasonably sized MV display device 100 with a reasonable number of MV pixels 102, content streams take up more data bandwidth than viewing zones when being communicated from the host computer 182 to the display controller 170. As discussed above, in various embodiments users create a group for every content stream. Users can change what content is shown to what viewing zone by moving the viewing zones between groups.
It is also possible to save each "configuration," that is, a state defining which viewing zones are located where and which viewing zones are assigned to which content (or content group). The graphical user interface 186 provides a configuration list, in which all the saved configurations are put in order so that they can be switched between quickly and easily. With a configuration list, the graphical user interface 186 allows users to switch between configurations based on external triggers. For example, when a button is pressed in the environment (e.g., a visitor at an amusement park pressing a button located near the MV display devices 100), the MV display system 122 may move to the next configuration, which has a different set of content. Triggers from other systems can also be received, such as lighting consoles, various sensors, timers, or media servers. Another use of the ability to save configuration information from the graphical user interface 186 is to save just the viewing zone locations. Expanding the previous example, if a programmer wants to be able to dynamically change what content is shown when the button is pressed based on who pressed it, the programmer could write a program to do so using the application programming interface. As another example, a programmer could set up the viewing zones in the graphical user interface 186, name the viewing zones (e.g., "button 1," "button 2," etc.), and then load that file into the programming interface to assign the dynamic content to the viewing zones.
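For illustration only, the configuration-list behavior described above might be sketched as follows; the class and callback are hypothetical and stand in for the actual API.

```python
# Illustrative sketch: advance through saved configurations on an external
# trigger (e.g., a button press), in configuration-list order.
class ConfigurationList:
    def __init__(self, configurations):
        self.configurations = configurations   # ordered list of saved configurations
        self.current = 0

    def on_trigger(self, apply_configuration):
        """Move to the next configuration and hand it to the supplied callback."""
        self.current = (self.current + 1) % len(self.configurations)
        apply_configuration(self.configurations[self.current])

playlist = ConfigurationList(["First Configuration", "Second Configuration"])
playlist.on_trigger(lambda cfg: print("Switching to", cfg))   # -> "Second Configuration"
```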
At 334, first configuration data is created. For example, a user performs graphical operations using a pointing device (e.g., a mouse) coupled to the host computer 182 to create the viewing zone representation 196a and the viewing zone representation 196b in the viewing zone coordinate system pane 192 of the graphical user interface 186. A memory of the host computer 182 stores software instructions that, when executed by a processor, cause the host computer 182 to generate and store viewing zone data representing boundaries of a first viewing zone and a second viewing zone, in a viewing zone coordinate system, based on data indicating the graphical operations performed by the user.
The user also performs graphical operations using the pointing device and the content assignment pane 192 of the graphical user interface 186 to assign a first content stream to a first content group, and assign a second content stream to a second content group. In addition, the user performs graphical operations using the pointing device to assign a first viewing zone represented by the viewing zone representation 196a to the first content group, and to assign a second viewing zone represented by the viewing zone representation 196b to the second content group.
In one or more embodiments, the memory of the host computer 182 stores software instructions that, when executed by the processor, cause the host computer 182 to generate first configuration data including the viewing zone data representing the boundaries of the first and second viewing zones, data indicating content items that are included in the first content group, data indicating content items that are included in the second content group, data indicating that the first viewing zone is assigned to the first content group, and data indicating that the second viewing zone is assigned to the second content group.
For example, the memory of the host computer 182 stores instructions that, when executed by the processor, cause the host computer 182 to store the first configuration data in a table or other suitable data structure in which data representing coordinates of the boundaries of the first viewing zone are associated with an identifier of the first viewing zone (e.g., “Zone 1”), data representing coordinates of the boundaries of the second viewing zone are associated with an identifier of the second viewing zone (e.g., “Zone 2”), an identifier of a first content stream (e.g. file name 1) is associated with an identifier of the first content group (e.g., “Group 1”), an identifier of a second content stream (e.g. file name 2) is associated with an identifier of the second content group (e.g., “Group 2”), an identifier of the first viewing zone (e.g., “Zone 1”) is associated with an identifier of the first content group (e.g., “Group 1”), and an identifier of the second viewing zone (e.g., “Zone 2”) is associated with an identifier of the second content group (e.g., “Group 2”).
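For illustration only, such a table might be represented as nested associations like the following; the field names and coordinate values are hypothetical.

```python
# Illustrative sketch of the first configuration data as nested dictionaries.
first_configuration = {
    "zones": {
        "Zone 1": {"bounds": ((0.0, 0.0, 1.0), (1.0, 1.0, 2.0))},
        "Zone 2": {"bounds": ((2.0, 0.0, 1.0), (3.0, 1.0, 2.0))},
    },
    "groups": {
        "Group 1": {"content": "file name 1", "zones": ["Zone 1"]},
        "Group 2": {"content": "file name 2", "zones": ["Zone 2"]},
    },
}
```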
At 336, second configuration data is created. For example, the user performs graphical operations similar to those described above to generate third and fourth viewing zone data, assign a third content stream to a third content group, assign a fourth content stream to a fourth content group, assign the third viewing zone to the third content group, and assign the fourth viewing zone to the fourth content group. The host computer 182 then generates second configuration data including the viewing zone data representing the boundaries of the third and fourth viewing zones, data indicating the contents of the third and fourth content groups, data indicating that the third viewing zone is assigned to the third content group, and data indicating that the fourth viewing zone is assigned to the fourth content group.
At 338, first and second viewing zone data is transmitted. For example, the memory of the host computer 182 stores software instructions that, when executed by the processor, cause the host computer 182 to transmit the first and second viewing zone data identified in the first configuration data to the MV display device 100.
At 340, first and second viewing streams are transmitted. For example, the memory of the host computer 182 stores software instructions that, when executed by the processor, cause the host computer 182 to transmit the first and second viewing streams identified in the first configuration data to the MV display device 100.
The display controller 170 of the MV display device 100 uses the first and second viewing zone data transmitted at 338 and the first and second viewing streams transmitted at 340 to determine which beamlets (or corresponding display pixels) in a coordinate system of the flat panel display 110 to drive such that a viewer in the first viewing zone is able to view the first content stream and a viewer in the second viewing zone is able to view the second content stream.
At 342, trigger data is received. For example, at 342, the host computer 182 receives a signal from a sensor device or a message from a communication device that is located in a room in which the MV display device 100 is located. In one or more embodiments, the host computer 182 receives a message that includes data identifying particular configuration data. For example, at 342, the host computer 182 receives a message that includes data identifying or associated with the second configuration data (e.g., "Second Configuration").
At 344, an application programming interface is notified. For example, the memory of the host computer 182 stores software instructions that, when executed by the processor, cause the host computer 182 to send a message indicating a change in configuration data, which identifies the second configuration data, to an application programming interface executing on the host computer 182.
At 346, third and fourth viewing zone data are transmitted. For example, the application programming interface executing on the host computer 182 causes the host computer 182 to transmit to the MV display device 100 the viewing zone data included in the second configuration data, in response to receiving at 344 the message indicating the change in configuration data, which, for example, identifies the second configuration data or includes an identifier that is associated with an identifier of the second configuration data. In one or more embodiments, the third and fourth viewing zone data are transmitted along with one or more commands that instruct the display controller 170 to stop driving the display sub-pixels of the flat panel display 110 and to delete the viewing zone data that is currently stored in the non-volatile memory 176.
In one or more embodiments, the third and fourth viewing zone data are transmitted along with one or more commands that instruct the display controller 170 to store the third and fourth viewing zone data in the non-volatile memory 176, associate an identifier of the content stream of the third content group with an identifier of the third content group in a table or other suitable data structure stored in the non-volatile memory 176, and associate an identifier of the content stream of the fourth content group with an identifier of the fourth content group in a table or other suitable data structure stored in the non-volatile memory 176.
At 348, third and fourth viewing streams are transmitted. For example, the application programming interface executing on the host computer 182 causes the host computer 182 to transmit at 348 the third and fourth viewing streams identified in the second configuration data, in response to receiving at 344 the message indicating the change in configuration data received at 342.
At 350, the method 330 ends. For example, the display controller 170 of the MV display device 100 converts the coordinates included in the third and fourth viewing zone data transmitted at 346, which are in the viewing zone coordinate system, into corresponding coordinates in the beamlet coordinate system of the flat panel display 110, in order to drive the flat panel display 110 such that a viewer in the third viewing zone is able to view the third content stream and a viewer in the fourth viewing zone is able to view the fourth content stream.
Calibration
The MV display device 100 requires a calibration process. This is because users specify locations in a viewing zone coordinate system, and the MV display device 100 must know which beamlets of each MV pixel 102 to illuminate for those locations. If the exact way light bends in each lens, the exact location of each lens in relation to the display sensor (i.e., camera 104), and the exact location of the lens relative to the underlying display panel were known, the calibration process could theoretically be eliminated. In practice, those measurements are difficult to obtain and would be even harder to use in real-time to turn on the correct beamlet for a given viewing zone coordinate.
In various embodiments, a simplified mathematical model is used to approximate which beamlet to turn on for a given viewing zone coordinate. In the worst case, the approximation has an error on the order of a few display pixels between the intended beamlet and the actual beamlet, which is tolerable under normal circumstances. On average, the error is much smaller, at about 0.5 display pixels.
A calibration process determines coefficients and constants in the mathematical model that approximates the projection/mapping of locations in the viewing zone coordinate system to the beamlet coordinate system. To determine the coefficients and constants, the calibration device captures some ground truth mappings between the viewing zone coordinate system and the beamlet coordinate system. The collected data and a non-linear optimizer are used to find the coefficients and constants in the equation. Once the coefficients and constants are obtained, new mappings for any given viewing zone coordinate can be efficiently generated.
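For illustration only, the following sketch shows how such coefficients might be fitted with a general-purpose non-linear least-squares solver. The model function here is a placeholder standing in for the actual mathematical model, and the parameter count is an assumption.

```python
# Illustrative sketch: fit model parameters to ground-truth mappings collected
# during calibration, using non-linear least squares. The model form is a
# placeholder, not the actual projection used by the display.
import numpy as np
from scipy.optimize import least_squares

def model(params, zone_xyz):
    """Placeholder fractional-linear style mapping from viewing-zone coordinates
    to beamlet (display pixel) coordinates."""
    x, y, z = zone_xyz.T
    p = params
    denom = p[8] * x + p[9] * y + p[10] * z + 1.0
    u = (p[0] * x + p[1] * y + p[2] * z + p[3]) / denom
    v = (p[4] * x + p[5] * y + p[6] * z + p[7]) / denom
    return np.column_stack([u, v])

def residuals(params, zone_xyz, beamlet_uv):
    return (model(params, zone_xyz) - beamlet_uv).ravel()

def fit_mv_pixel(zone_xyz, beamlet_uv):
    """zone_xyz: N x 3 calibration-device locations; beamlet_uv: N x 2 observed
    beamlet coordinates for one MV pixel."""
    initial = np.zeros(11)
    initial[3] = initial[7] = 1.0   # arbitrary but reasonable starting point
    result = least_squares(residuals, initial, args=(zone_xyz, beamlet_uv))
    return result.x
```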
Physical Setup
To collect the ground truth mappings to solve for the coefficients, some hardware is needed. In various embodiments, three devices are used at minimum: a MV display device 100; a display sensor 226 (e.g., camera 104) attached to the MV display device 100 that creates a viewing zone coordinate space (e.g., a camera, a stereo camera, Light Detection and Ranging (LIDAR), time-of-flight camera, line scan camera, etc.); and a camera (the calibration device 210) that can view the MV display device 100, can be moved around the environment, and can be found by the display sensor 226, as shown in
In one implementation, the calibration device 210 takes the form of a camera with an attached checkerboard and a tablet computer (e.g., including a processor and a memory storing instructions that cause the tablet computer to perform a calibration procedure), and the display sensor 226 is a 2D camera. In an alternative implementation, the calibration device 210 is a camera with an attached infrared (IR) LED and a tablet computer, and the display sensor 226 is an IR-sensitive stereo camera. In any case, the calibration device 210 must be able to be found in the viewing zone coordinate system by the display sensor (e.g., camera 104). Some other examples of calibration device/display sensor combinations are: checkerboard/stereo camera, other printed pattern or tag/camera (or stereo camera), visible light LED/camera (or stereo camera), etc. The host computer 182 can additionally be used to control the MV display device 100, and a wireless network allows the calibration device 210 and the host computer 182 to communicate during the calibration procedure. In some embodiments, one may use one computer and eliminate the tablet, but that could potentially require that the camera have a cable run to the host computer 182. It is also possible that the display controller 170 could directly interface with the calibration device (camera) 210.
Calibration Procedure
During the calibration procedure, the host computer 182 transmits display pattern data 228 to the MV display device 100. In response, the MV display device 100 emits light forming display patterns 230 corresponding to the display pattern data 228. The calibration device 210 records which beamlets from the MV display device 100 are received. In the meantime, the calibration device 210 includes a checkerboard pattern 232 (e.g., displayable on a screen of the calibration device 210 or printed and attached to the calibration device 210). If the calibration device 210 is within the field of view of the display sensor 226 (i.e., the display sensor 226 can sense or detect the checkerboard pattern 232 of the calibration device 210), the display sensor 226 transmits calibration device location data 234 to the host computer 182. In one or more embodiments, the calibration device location data 234 indicates coordinates of the calibration device 210 in a viewing zone coordinate system that are based on the detected checkerboard pattern 232. The calibration device 210 transmits beamlet coordinate data 236 to the host computer 182, which the host computer 182 stores. As explained below, the host computer 182 uses the stored calibration device location data 234 and the beamlet coordinate data 236 to calculate calibration parameters (p0, p1, . . . , p15) that are used by the MV display device 100 to transform coordinates in the viewing zone coordinate system to corresponding coordinates in the beamlet (or display pixel) coordinate system of the flat panel display 110, so that the MV display device 100 can present different content to different viewers who are located in different viewing zones.
In one or more embodiments, the calibration device 210 includes a tablet computer having a memory that stores software instructions that, when executed by a processor of the tablet computer, cause the tablet computer to perform aspects of the calibration procedure. In addition, a memory of the host computer 182 stores software instructions that, when executed by a processor of the host computer 182, cause the host computer to perform other aspects of the calibration procedure.
The calibration procedure consists of capturing several mappings per MV pixel between a spatial 1D/2D/3D point in the viewing zone coordinate system and a beamlet in the beamlet coordinate system that, when turned on, illuminates the position of the spatial coordinate in the world. In various embodiments, these captured mappings are spread around the entire area that is to be used for viewing the MV display device 100. To capture these mappings, the MV display system 122 must do two things: find the calibration device 210 in the viewing zone coordinate space and enable the calibration device 210 to record which beamlet is hitting it at its current location.
In various embodiments, the calibration device 210 is found by locating the checkerboard pattern 232 in the feed of the display sensor 226. This gives spatial coordinates in the viewing zone coordinate system, which represent the current location of the calibration device 210 and which are included in the calibration device location data 234. As mentioned earlier, the display sensor 226 (e.g., camera 104) could be a 1D, 2D, or 3D sensor. Each of these has implications for how the MV display device 100 operates. The dimensionality of the display sensor 226 determines the dimensionality of the coordinate space in which the end user can define viewing zones. Thus, if the MV display device 100 is calibrated to a 2D display sensor 226, then viewing zones can only be defined as regions of a 2D surface, and all the locations at which the calibration device 210 is placed must be within that 2D surface. A downside to using a display sensor 226 that is 2D or 1D may be that the MV display device 100 will only work well on a corresponding plane or line, because the mathematical model assumes a viewer is standing in that plane or line. If the MV display device 100 is small in comparison to the distance of the viewer from the MV display device 100, then the difference between beamlets that hit a viewer on the plane and off the plane is small and can be ignored. However, as the MV display device 100 gets larger (e.g., multiple MV display devices 100 tiled together), the difference between the beamlets for someone standing on the calibrated surface and someone off of it might not be as small, and might lead to only some of the MV pixels appearing to be on for the viewer. To address this issue, in various embodiments, the display sensor 226 may include a 2D camera, and it is possible to measure the distance between the calibration device 210 and the display sensor 226. The distance is then used as the third coordinate to add an extra dimension, effectively turning the 2D display sensor 226 into a 3D sensor. The user could therefore specify a region of the 2D image and a distance from the camera.
At 364, the calibration device 210 is positioned within the field of view of the MV display device 100. The calibration device 210 may be located at any point within the viewing zone coordinate system defined by the display sensor 226.
At 366, the display sensor 226 determines a location of the calibration device 210. In one or more embodiments, a memory of the display sensor 226 stores instructions that, when executed by a processor, cause the display sensor 226 to capture an image of the checkerboard pattern 232 displayed by the calibration device 210, process corresponding image data, determine coordinates of the calibration device 210 in a viewing zone coordinate system based on the image data, and transmit calibration device location data 234 including the determined coordinates to the host computer 182. In some embodiments, the display sensor 226 sends sensor data to the host computer 182, and the host computer 182 processes the sensor data to determine coordinates of the calibration device 210 in the viewing zone coordinate system.
At 368, the MV pixels 102 of the MV display device 100 are located by the calibration device 210. In one or more embodiments, the host computer 182 generates display pattern data 228 that cause the MV display device 100 to turn all of the MV pixels 102 on, and then turn all of the MV pixels 102 off (see
At 370, each of the MV pixels 102 is identified. In one or more embodiments, the host computer 182 generates display pattern data 228 that cause the MV display device 100 to turn each of the MV pixels 102 on and off according to a unique code that is assigned to or associated with each of the MV pixels 102 (see
At 372, display pixel IDs (or beamlet IDs) corresponding to the location of the calibration device 210 are determined. In one or more embodiments, the host computer 182 generates display pattern data 228 that cause the MV display device 100 to turn each of the beamlets on and off according to a unique code that is assigned to each of the beamlets. This results in the calibration device 210 seeing MV pixels 102 turn “on” and “off” (see
In this stage (at 372), in one embodiment, the purpose is to find which of the (for example) ~10,000 beamlets under each MV pixel the MV display device 100 needs to turn on in order for the MV pixel 102 to appear “on” to the calibration device 210, wherever the calibration device 210 happens to be placed. In the ideal case, the MV pixel 102 will appear “off” when any but one of the beamlets is turned on, but will appear “on” when that one (correct) beamlet is turned on. The MV display device 100 displays patterns on the flat panel display 110 that encode an ID for each beamlet. Thus, for a given MV pixel and location in the viewing zone coordinate system, the calibration device 210 would see a pattern as shown in
At 374, a refinement process may be performed, as explained below with reference to
At 376, calibration parameters are determined, as explained below.
Once the location of the calibration device 210 is found, the MV display system 122 must determine which beamlet of each MV pixel hits the calibration device 210. To accomplish this, the host computer 182 may cause the MV display device 100 to display a series of patterns. Each pattern is used to give a specific piece of information to the calibration device 210. The patterns are listed below in the order of one exemplary embodiment, though other orders can be used.
Calibration Step 1: MV Pixel Locations are Found
Calibration Step 2: MV Pixel IDs are Found
Each of
The display pattern data 228 transmitted by the host computer 182 causes the MV display device 100 to display a series of images (patterns) using the MV pixels 102 to the calibration device 210. The images shown in
In various embodiments, each of
For example, the circled MV pixel 102 in
The calibration device 210 captures images corresponding to
Calibration Step 3: Display Pixel IDs are Found
A memory of the calibration device 210 stores software instructions that, when executed by a processor of the calibration device 210, cause the calibration device 210 to process image data corresponding to images of the MV display device 100 shown in
In this phase, one exemplary embodiment uses gray code encoding (though, again, other encodings could be used) to have each beamlet flash a particular sequence that is its unique ID. The ID number is simply the x-beamlet coordinate followed by the y-beamlet coordinate. For a given MV pixel 102, there is one “best” beamlet that best illuminates the location of the calibration device 210. In this phase, it is assumed that if the MV pixel 102 appears off or on (i.e., below or above a threshold brightness value) to the calibration device 210, that means that the “best” beamlet is off or on, and that data is used to decode the ID of that beamlet. Thus, in
Binary[9] = Graycode[9]   Equation 6
Binary[8] = Binary[9] ⊕ Graycode[8]   Equation 7
Binary[7] = Binary[8] ⊕ Graycode[7]   Equation 8
Binary[6] = Binary[7] ⊕ Graycode[6]   Equation 9
Binary[5] = Binary[6] ⊕ Graycode[5]   Equation 10
Binary[4] = Binary[5] ⊕ Graycode[4]   Equation 11
Binary[3] = Binary[4] ⊕ Graycode[3]   Equation 12
Binary[2] = Binary[3] ⊕ Graycode[2]   Equation 13
Binary[1] = Binary[2] ⊕ Graycode[1]   Equation 14
Binary[0] = Binary[1] ⊕ Graycode[0]   Equation 15
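Equations 6-15 are the standard Gray-code-to-binary conversion: the most significant bit is copied, and each subsequent binary bit is the XOR of the previous binary bit with the corresponding Gray-code bit. For illustration only, a compact sketch for a 10-bit beamlet coordinate follows; the observed bit pattern shown is arbitrary.

```python
# Illustrative sketch: decode a 10-bit Gray-coded beamlet coordinate captured as
# a sequence of on/off observations, per Equations 6-15 above.
def gray_to_binary(gray_bits):
    """gray_bits[0] is the most significant bit (Graycode[9] in the equations)."""
    binary = [gray_bits[0]]               # Binary[9] = Graycode[9]
    for g in gray_bits[1:]:
        binary.append(binary[-1] ^ g)     # Binary[k] = Binary[k+1] XOR Graycode[k]
    return binary

def bits_to_int(bits):
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

# Example: observed on/off pattern for one MV pixel across the ten encoding images
observed = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # Graycode[9] .. Graycode[0]
beamlet_coordinate = bits_to_int(gray_to_binary(observed))
print(beamlet_coordinate)
```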
Calibration Step 4: Calibration Refinement
In practice, the calibration device 210 may be between two (or even four) beamlets. This becomes even more likely when there is poor focus of the lenses on the MV display device 100, in which case the calibration device 210 that ideally sees (or identifies) only one beamlet as the “best” beamlet, at 372 of
After the MV pixel locations, MV pixel IDs, and display pixel IDs (or beamlet IDs) have been found at 368, 370, and 372, respectively, as described above, the calibration device 210 has enough information to estimate which beamlet best corresponds to the current location of the calibration device 210. To verify the accuracy of the estimation, in the refinement phase at 374, the calibration device 210 sends the beamlet coordinate data 236 to the host computer 182 (see
The calibration device 210 determines for each MV pixel 102 which of the nine refinement images shown in
There are many alternative ways to perform refinement as well. For example, while the embodiment illustrated above selects the 8 display pixels around the estimated best display pixel, a 25 display pixel region (5×5) centered on the estimated best display pixel may be used instead of the 9 display pixel region (3×3). An encoding method may also be used to decrease the number of images required for the refinement process. One such encoding entails showing each row and column in sequence, instead of each display pixel. In the case of the 9 display pixel region (3×3), use of such an encoding method reduces the required number of images from 9 to 6. This method finds which row image is the brightest and which column image is the brightest at the location of the MV pixel. Based on this information, the brightest display pixel for the MV pixel (i.e., the display pixel located in the row and the column that are the brightest) can be uniquely determined.
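For illustration only, the row/column refinement described above might be implemented as follows for the 3×3 case; the brightness values and region origin are hypothetical inputs derived from the calibration device's captures.

```python
# Illustrative sketch: identify the brightest display pixel in a 3x3 refinement
# region from 3 row images and 3 column images (6 captures instead of 9).
def refine_best_display_pixel(row_brightness, col_brightness, region_origin):
    """row_brightness/col_brightness: brightness of the MV pixel in each of the
    three row images and three column images; region_origin: (x, y) of the
    top-left display pixel of the 3x3 region."""
    best_row = max(range(3), key=lambda r: row_brightness[r])
    best_col = max(range(3), key=lambda c: col_brightness[c])
    x0, y0 = region_origin
    return (x0 + best_col, y0 + best_row)

# Example: the middle row and right column are brightest
print(refine_best_display_pixel([0.1, 0.9, 0.2], [0.1, 0.2, 0.8], (100, 200)))
# -> (102, 201)
```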
After the calibration procedure, the MV display system 122 knows which beamlet 216 corresponds to the location of the calibration device 210 and what coordinate in the viewing zone coordinate system corresponds to the location of the calibration device 210. In various embodiments, once the calibration procedure (364-374 of
Modifications
The calibration procedure described above can be time-intensive and prone to noise. For example, in one implementation, calibrating to a 2D camera may require the calibration device 210 to always be placed within a 2D plane (though this may not be a strict requirement, as the system could be adapted to allow for any 2D surface). To help alleviate some of these issues, a few changes to the process can be made to improve the results.
For example, inverse patterns may be used. When an encoding pattern is captured (while determining the MV pixel IDs and beamlet (display pixel) IDs, as described above), the inverse of the pattern can be captured as well. In other words, if an MV pixel is “on” in the pattern, then it would be “off” in the inverse image and vice versa. This allows the MV display system 122 to subtract the image of the inverse of the pattern from the image of the pattern to double the signal-to-noise ratio. This is because when the two images are subtracted, any baseline brightness in the image (i.e., light reflecting off the surface of the MV display device 100) is subtracted, and only the signal from the MV pixel 102 is left.
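For illustration only, the subtraction described above might look as follows, assuming the captured frames are available as arrays; the threshold and region are hypothetical.

```python
# Illustrative sketch: subtract the capture of the inverse pattern from the
# capture of the pattern, cancelling baseline/reflected light so only the
# MV pixel signal remains (strongly positive where the pattern was "on").
import numpy as np

def pattern_signal(pattern_image, inverse_image):
    return pattern_image.astype(np.int32) - inverse_image.astype(np.int32)

def mv_pixel_is_on(diff, pixel_region, threshold=20):
    ys, xs = pixel_region  # slices covering the MV pixel in the camera image
    return diff[ys, xs].mean() > threshold

# Example with synthetic 8-bit frames
pattern = np.full((4, 4), 30, dtype=np.uint8); pattern[1:3, 1:3] = 200
inverse = np.full((4, 4), 30, dtype=np.uint8)
diff = pattern_signal(pattern, inverse)
print(mv_pixel_is_on(diff, (slice(1, 3), slice(1, 3))))  # True
```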
As another example, aperture adjustment may be used. In order for the calibration procedure to work properly, the calibration device 210 may need to be able to tell the difference between when an MV pixel 102 is “on” and when it is “off”. Since “off” may not be a total absence of light (for example, light leakage from the backlight can cause an MV pixel to look “on”), the calibration device 210 may be adjusted to let in the proper amount of light such that “off” MV pixels are read as off and “on” MV pixels are read as on. In order to accomplish this, the MV display device 100 shows a pattern where half of the MV pixels are on and the other half are off. The user then adjusts the aperture ring on the camera until the off MV pixels appear off in the camera feed.
As yet another example, a calibration robot may be used. Since one implementation of the calibration uses a 2D camera 104 attached to the MV display device 100, it may be efficient to calibrate the MV display device 100 to the camera 104 without requiring a user to move the calibration device 210 relative to the camera 104 of the MV display device 100. The MV display devices 100 may also be pre-calibrated before installation. A calibration robot may be used to address these issues. The robot is configured to allow a MV display device 100 and/or the calibration device 210 to be placed in it. The robot then moves the MV display device 100 and the calibration device 210 in an automated fashion to capture mappings based on a supplied list of desired locations to place the calibration device 210. Once the robot finishes capturing mappings, it may calculate the coefficients and constants in the mathematical model and save them for use in subsequent processing.
One way this robot could be built is to leave the MV display device 100 stationary and move the calibration device 210 camera around in the viewing space. This may result in a very large robot that would have to take up much of a room. Instead, a robot could be built such that the calibration device 210 camera stays within a constant line, and the MV display device 100 pans and tilts to simulate the calibration device 210 camera moving around the MV display device 100. The calibration device 210 camera must still move back and forth along the line to ensure that the points captured around the MV display device 100 lie on a plane rather than a hemisphere. In this way, the number of actuators required for the robot to function is decreased. The software driving the robot may use a formula to convert the physical locations supplied to it (i.e., x, y, z offsets from the MV display device 100) into pan, tilt, and distance coordinates. This enables the calibration robot to calibrate the MV display device 100 to any set of points.
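For illustration only, one plausible form of such a conversion formula is sketched below; the axis convention and angle definitions are assumptions, since the actual robot geometry is not specified here.

```python
# Illustrative sketch: convert a desired calibration-point offset (x, y, z) from
# the MV display device into pan, tilt, and distance commands for the robot.
# Convention assumed here: z points out of the display, x to the right, y up.
import math

def offset_to_pan_tilt_distance(x, y, z):
    distance = math.sqrt(x * x + y * y + z * z)
    pan = math.degrees(math.atan2(x, z))                  # rotation about the vertical axis
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # elevation angle
    return pan, tilt, distance

print(offset_to_pan_tilt_distance(1.0, 0.5, 2.0))
# -> roughly (26.6 degrees pan, 12.6 degrees tilt, 2.29 distance)
```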
The robot can be placed in a controlled light environment such that the lights can change for different parts of the calibration process. This may ensure that the checkerboard on the calibration device 210 is well illuminated (thus making it easier for the display sensor 226 to see), which helps reduce noise in the measurements. The lights can be turned off for the part of the calibration process in which the calibration device 210 captures the patterns, reducing reflected light on the MV display device 100.
For an individual MV display device 100 with an attached camera 104, the MV display device 100 can be fully calibrated before it is installed. This is generally true when the MV display device 100 is relatively small and the camera 104 cannot move in relation to any of the MV pixels. If multiple MV display devices 100 are used, though, it may be difficult to fully pre-calibrate the MV display devices 100, as the exact location of each MV display device 100 in relation to the display sensor 104 may not be known ahead of time (e.g., before the MV display devices 100 are tiled together). In various embodiments, the robot can be used to partially calibrate a MV display device 100 before finishing its calibration in the field. The calibration robot may determine the intrinsic properties of the MV display device 100, and the extrinsic properties may be determined in the field. For example, in various embodiments, the radial distortion coefficients and the lens center constants (i.e., which display pixel the lens (lens system) is over) are calibrated with the calibration robot, since these do not change no matter where the MV display device 100 is placed or how it is oriented in relation to the display sensor 104. A fractional linear projective equation is then calibrated in the field to account for the location of the lens (lens system) in relation to the display camera 104. Since some of the coefficients and constants are pre-calibrated, the solver has fewer degrees of freedom in determining the remaining coefficients. This allows the capture of fewer points than if the entire calibration were performed in the field. Once the fractional linear projective equation coefficients are obtained, they can be combined with the pre-calibrated coefficients to get a full set of coefficients to be used in the mathematical model.
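For illustration only, the following sketch shows how factory-calibrated intrinsics (radial distortion coefficients and lens center) might be composed with field-calibrated fractional linear projective coefficients; the specific functional form and parameter layout are assumptions, not the actual model.

```python
# Illustrative sketch only: the factory-calibrated intrinsics (lens center,
# radial distortion) are held fixed, and only the fractional-linear coefficients
# are solved for in the field, reducing the number of free parameters.
def field_projective(p, x, y, z):
    """Fractional linear map from viewing-zone coordinates to ideal (undistorted)
    lens-plane coordinates; p holds the coefficients solved in the field."""
    denom = p[8] * x + p[9] * y + p[10] * z + 1.0
    return ((p[0] * x + p[1] * y + p[2] * z + p[3]) / denom,
            (p[4] * x + p[5] * y + p[6] * z + p[7]) / denom)

def apply_intrinsics(u, v, lens_center, k1, k2):
    """Apply factory-calibrated radial distortion about the lens center."""
    cx, cy = lens_center
    du, dv = u - cx, v - cy
    r2 = du * du + dv * dv
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + du * scale, cy + dv * scale

def viewing_zone_to_display_pixel(p, intrinsics, x, y, z):
    u, v = field_projective(p, x, y, z)
    u, v = apply_intrinsics(u, v, intrinsics["lens_center"],
                            intrinsics["k1"], intrinsics["k2"])
    return round(u), round(v)

# Example with hypothetical values (intrinsics from the robot, p from the field)
intrinsics = {"lens_center": (64.0, 64.0), "k1": 1e-4, "k2": 0.0}
p = [50.0, 0.0, 0.0, 64.0, 0.0, -50.0, 0.0, 64.0, 0.0, 0.0, 0.01]
print(viewing_zone_to_display_pixel(p, intrinsics, 0.2, -0.1, 2.0))
```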
The various embodiments described above can be combined to provide further embodiments.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.