1. Field of Invention
This invention relates to displaying three-dimensional moving images featuring binocular disparity and motion parallax that can be seen simultaneously by multiple viewers in different positions without requiring headgear.
2. Review of Related Art
Humans use several visual cues to recognize and interpret three-dimensionality in images. Monocular cues can be seen with just one eye. Binocular cues require two eyes. Monocular cues for three-dimensional images include: the relative sizes of objects of known size; occlusion among objects; lighting and shading; linear perspective; adjusting eye muscles to focus on an object at one distance while objects at other distances are out of focus (called “accommodation”); and objects moving relative to each other when one's head moves (called “motion parallax”). Binocular cues for three-dimensional images include: seeing different images from slightly different perspectives in one's right and left eyes (called “binocular disparity” or “stereopsis”); and intersection of the viewing axes from one's right and left eyes (called “convergence”). When a method of displaying three-dimensional images provides some of these visual cues, but not others, then the conflicting signals can cause eye strain and headaches for the viewer.
The ultimate goal for methods of displaying three-dimensional moving images is to provide as many of these visual cues for three-dimensionality as possible while also: being safe; providing good image quality and color; enabling large-scale applications; being viewable simultaneously by multiple viewers in different positions; not requiring special headgear; and being reasonably priced. This goal has not yet been achieved by current methods for displaying three-dimensional moving images. In this application, we present a taxonomy of current methods for displaying three-dimensional moving images, discuss limitations of these current methods, and then discuss how the present invention overcomes some key limitations.
Binocular disparity is a good starting point for a taxonomy of methods to display three-dimensional moving images. When images seen in the right and left eyes are different perspectives of the same scene, as would be seen if one were viewing the scene in the real world, then the brain interprets the two images as a single three-dimensional image. This process is “stereoscopic” vision. Since the invention disclosed herein provides stereoscopic vision, this discussion of related art focuses on methods that provide at least some degree of stereoscopic vision. The first branch in the taxonomy of three-dimensional display methods is between methods that require glasses or other headgear (called “stereoscopic”) vs. methods that do not require glasses or other headgear (called “autostereoscopic”).
The first type of stereoscopic imaging uses glasses or other headgear and a single image display that simultaneously displays encoded images for both the right and left eyes. These simultaneous images are decoded by lenses in the glasses for each eye so that each eye sees the appropriate image. Image encoding and decoding can be done by color (such as red vs. cyan) or polarization (such as linear polarization or circular polarization). A second type of stereoscopic imaging uses glasses or other headgear and a single image display source that sequentially displays images for the right and left eyes. These sequential images are each routed to the proper eye by alternating shutter mechanisms over each eye. The third type of stereoscopic imaging uses glasses or other headgear and two different image projectors, one for each eye, so that each eye receives a different image.
The main limitation of methods that display three-dimensional moving images through the use of glasses or other headgear is the inconvenience of glasses or other headgear for people who do not normally wear glasses and potential incompatibility with regular glasses for people who do normally wear glasses. Other potential limitations of these methods include: lack of motion parallax with side-to-side head movement, up-or-down head movement, or frontward-or-backward head movement; and lack of accommodation because all image points are on the same two-dimensional plane. The conflict between accommodation and convergence can cause eye strain and headaches.
There are many examples of methods using glasses or other headgear in the related art. However, the invention disclosed in this application does not require glasses or headgear, so these examples are not directly relevant and thus not listed here.
We now turn our attention to “autostereoscopic” methods for displaying three-dimensional moving images. Broadly defined, “autostereoscopic” refers to any method of displaying three-dimensional images that does not require glasses or headgear. Six general methods of autostereoscopic display are as follows: (1) methods using a “parallax barrier” to direct light from a display source in different directions by selectively blocking light rays; (2) methods using lenses (such as “lenticular” lenses) to direct light from a display source in different directions by selectively bending light rays; (3) methods using an array of “micromirrors” that can be tilted in real time to direct light from a display source in different directions by selectively reflecting light rays; (4) methods using sets of “sub-pixels” within each pixel in a display to emit light in different directions at the pixel level; (5) methods using a three-dimensional display volume or a moving two-dimensional surface to create a “volumetric” image in three-dimensional space; and (6) methods using laser light interference to create animated three-dimensional “holograms.” Some of these six methods can also be used together in various combinations. As we will discuss further, the invention disclosed in this application is autostereoscopic, but a significant improvement over current autostereoscopic methods.
We now discuss the six autostereoscopic methods in greater detail, starting with three-dimensional imaging methods using a parallax barrier. A parallax barrier selectively blocks light from a display surface (such as an LCD screen). Openings in the barrier that do not block light allow different images to reach the right and left eyes. For example, the display surface can show a composite image with vertical image stripes for right and left eye images and the parallax barrier can have vertical slits that direct the appropriate image stripes to reach the right and left eyes when the viewer is located in the right spot. When the viewer is not located in the right spot, the viewer can see “pseudoscopic” images (with reversed depth, double images, and black lines) that can cause eye strain and headaches. To partially address this problem, the system may track the location of the viewer's head and move the parallax barrier so that the viewer can see the image properly from a larger range of locations.
A basic parallax barrier system with vertical slits does not provide vertical motion parallax; there is no relative movement of objects in the image with up-or-down head movement. Also, it provides only limited horizontal motion parallax (no more than a few sequential views of objects with side-to-side head movement) due to spatial constraints between the slits and the display surface. A parallax barrier using an array of pinholes (a method with its roots in “integral photography”) can provide limited motion parallax in both vertical and horizontal directions, but has significant limitations in terms of image resolution and brightness.
Some distance between the light display surface and the parallax barrier is required so that the parallax barrier can direct light rays at different angles to the right and left eyes. However, this distance causes many of the limitations of the parallax barrier method. For example, it significantly restricts the area within which the viewer must be located in order to see the three-dimensional images correctly. It is also why parallax barriers do not work well, or at all, for simultaneous viewing by multiple viewers, and why the parallax barrier blocks so much of the light from the display surface, causing inefficient light use and relatively dim images. As we will discuss, the invention disclosed in this application does not require such a distance to direct light at different angles and thus avoids many of these problems.
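The role of this display-to-barrier distance can be illustrated with a simple similar-triangles sketch. The pixel pitch, gap, and eye-separation values below are hypothetical, chosen only to show why the gap fixes a narrow viewing "sweet spot":

```python
# Illustrative two-view parallax-barrier geometry (hypothetical numbers).
# Light from two adjacent pixel columns (pitch p) passes through one slit
# in a barrier at gap g from the display. By similar triangles, the two
# columns separate to the eye spacing e at viewing distance D: p/g = e/D.

def optimal_viewing_distance(pixel_pitch_mm, gap_mm, eye_separation_mm=65.0):
    """Distance at which adjacent pixel columns land on separate eyes."""
    return gap_mm * eye_separation_mm / pixel_pitch_mm

# Example: 0.25 mm pixel pitch and a 2 mm gap give D = 2 * 65 / 0.25 = 520 mm,
# i.e. the viewer must sit near a fixed sweet spot about half a meter away.
print(optimal_viewing_distance(0.25, 2.0))  # 520.0
```

Because the geometry is fixed by the gap and pitch, moving away from this distance (or laterally out of the zone) sends the wrong column to each eye, which is the origin of the pseudoscopic images described above.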
There are several limitations on using one or more parallax barriers to display three-dimensional moving images. One of the main limitations is the restricted size of the viewing area within which an observer sees the images properly. When a viewer moves outside this restricted viewing area, the viewer can see pseudoscopic images with depth reversal, double images, and black bands that can cause eye strain or headaches. Head tracking mechanisms can be used in an effort to expand the size of the proper viewing area, but such tracking mechanisms can be inconvenient and do not work well for multiple viewers.
Another limitation of parallax barriers arises because the barrier blocks much of the light from the display. Display light is used inefficiently and the image can be dim. Also, due to spatial constraints between the display surface and openings in the parallax barrier, there are only a limited number of different views for motion parallax. For barriers with vertical slits, there is no motion parallax at all for up-and-down head movement. Use of pinholes instead of slits can provide motion parallax for vertical as well as horizontal movement, but also can have severe problems in terms of low image resolution and image dimness. Finally, lack of accommodation because all image points are on the same two-dimensional plane can result in eye strain and headaches.
Examples in the related art that appear to use one or more parallax barriers to display three-dimensional moving images include the following: U.S. Pat. Nos. 5,300,942 (Dolgoff, 1994), 5,416,509 (Sombrowsky, 1995), 5,602,679 (Dolgoff et al., 1997), 5,855,425 (Hamagishi, 1999), 5,900,982 (Dolgoff et al., 1999), 5,986,804 (Mashitani et al., 1999), 6,061,083 (Aritake et al., 2000), 6,337,721 (Hamagishi et al., 2002), 6,481,849 (Martin et al., 2002), 6,791,512 (Shimada, 2004), 6,831,678 (Travis, 2004), 7,327,389 (Horimai et al., 2008), 7,342,721 (Lukyanitsa, 2008), 7,426,068 (Woodgate et al., 2008), and 7,532,225 (Fukushima et al., 2009); and U.S. Patent Applications 20030076423 (Dolgoff, Eugene, 2003), 20030107805 (Street, Graham, 2003), 20030206343 (Morishima et al., 2003), 20050219693 (Hartkop et al., 2005), 20050264560 (Hartkop et al., 2005), 20050280894 (Hartkop et al., 2005), 20060176541 (Woodgate et al., 2006), 20070058258 (Mather et al., 2007), 20080117233 (Mather et al., 2008), 20080150936 (Karman, Gerardus, 2008), and 20080231690 (Woodgate et al., 2008).
We now continue our discussion of autostereoscopic methods by discussing the use of lenses (especially arrays of lenticular lenses) for displaying three-dimensional moving images. Lenticular lenses are used to selectively bend light from a display surface to create the illusion of a three-dimensional image. These lenses may be bi-convex columns, semi-cylindrical columns, hemispheres, spheres, or other shapes. Lenticular lens column arrays may be arranged vertically or horizontally. Lenticular lenses may be configured in single or multiple layers. They may be static or move relative to the display surface or each other. There are also “active” or “dynamic” lenses whose focal length and/or curvature can be adjusted in real time.
There are several similarities between using parallax barriers and using lenticular lenses. For example, parallax barriers with parallel vertical slits allow strips of different-perspective images from a display surface to reach the right and left eyes by letting light through the vertical slits. By analogy, lenticular lenses with parallel vertical columns allow strips of different-perspective images from a display surface to reach the right and left eyes by bending light at different angles through the vertical lenses. Also, as is the case with using parallax barriers, there is a restricted area within which a viewer must be located in order to see the three-dimensional images properly when using lenticular lenses. Head tracking can be used to move the lenticular array to increase the size of this area, but the number of sequential views remains limited by spatial constraints. Analogous to the use of pinholes in a parallax barrier, spherical lenses called “fly's eye” lenses can be used in a lenticular array. Taking and displaying images with an array of small “fly's eye” lenses is called “integral photography.” As is the case with parallax barriers, there is also some distance between the display surface and the light-directing layer with the use of lenticular lenses.
However, lenticular lenses have some capabilities that are different than those possible with parallax barriers. This is because there are a greater variety of ways to bend light through a lens than there are ways to pass light through an empty opening. For example, there are “active” or “dynamic” lenses whose focal length and/or curvature can be changed in real time. Different methods for changing the optical characteristics of active lenses in real time include: applying an electric potential to a polymeric or elastomeric lens; mechanically deforming a liquid lens sandwiched within a flexible casing; and changing the temperature of the lens. With imaging systems that include head tracking, the focal lengths and/or curvatures of active lenses can be changed in response to movement of an observer's head.
Many of the limitations of using lenticular lenses to display three-dimensional moving images are similar to those for using parallax barriers and many of these common limitations come from the distance between the display surface and the light-guiding layer. As is the case with parallax barriers, display systems that use lenticular arrays have significant restrictions on the size of the viewing area and the number of observers. When viewers move outside this restricted area, they can see pseudoscopic images involving depth reversal, double images, and black bands that can cause eye strain and headaches. Using such systems for multiple viewers is difficult or impossible. Head tracking mechanisms used to try to expand the proper viewing area are often inconvenient and do not work well for multiple viewers. Further, the moving parts of head-tracking mechanisms are subject to wear and tear. Boundaries between light elements in lenticular display systems can create dark lines, graininess, and rough edges.
Due to spatial constraints between the display surface and the width of the lenticular lenses, there are a limited number of different views for motion parallax. With vertical columnar lenses, there is no vertical motion parallax at all. Fly's eye lens arrays can provide some vertical as well as horizontal motion parallax, but are expensive and can have significant problems in terms of low resolution and dim images. Using active lenses in lenticular displays can provide a wider range of motion parallax, but fluids or other moving materials may not change shape fast enough to display three-dimensional moving images. Lack of accommodation due to all image points being on the same plane can cause eye strain and headaches.
Examples in the related art that appear to use stationary lenticular lenses to display three-dimensional moving images include the following: U.S. Pat. Nos. 4,829,365 (Eichenlaub, 1989), 5,315,377 (Isono et al., 1994), 5,465,175 (Woodgate et al., 1995), 5,602,679 (Dolgoff et al., 1997), 5,726,800 (Ezra et al., 1998), 5,880,704 (Takezaki, 1999), 5,943,166 (Hoshi et al., 1999), 6,118,584 (Van Berkel et al., 2000), 6,128,132 (Wieland et al., 2000), 6,229,562 (Kremen, 2001), 6,437,915 (Moseley et al., 2002), 6,462,871 (Morishima, 2002), 6,611,243 (Moseley et al., 2003), 6,795,241 (Holzbach, 2004), 6,876,495 (Street, 2005), 6,929,369 (Jones, 2005), 7,142,232 (Kremen, 2006), 7,154,653 (Kean et al., 2006), 7,268,943 (Lee, 2007), 7,375,885 (Ijzerman et al., 2008), 7,400,447 (Sudo et al., 2008), 7,423,796 (Woodgate et al., 2008), 7,492,513 (Fridman et al., 2009), and 7,506,984 (Saishu et al., 2009); and U.S. Patent Applications 20040012671 (Jones et al., 2004), 20040240777 (Woodgate et al., 2004), 20050030308 (Takaki, Yasuhiro, 2005), 20050264560 (Hartkop et al., 2005), 20050264651 (Saishu et al., 2005), 20060012542 (Alden, Ray, 2006), 20060227208 (Saishu, Tatsuo, 2006), 20060244907 (Simmons, John, 2006), 20070058127 (Mather et al., 2007), 20070058258 (Mather et al., 2007), 20070097019 (Wynne-Powell, Thomas, 2007), 20070109811 (Krijn et al., 2007), 20070201133 (Cossairt, Oliver, 2007), 20070222915 (Niioka, Shinya, 2007), 20070258139 (Tsai et al., 2007), 20080068329 (Shestak et al., 2008), 20080117233 (Mather et al., 2008), 20080231690 (Woodgate et al., 2008), 20080273242 (Woodgate et al., 2008), and 20080297670 (Tzschoppe et al., 2008)
Examples in the related art that appear to use laterally-shifting lenticular lenses to display three-dimensional moving images include the following: U.S. Pat. Nos. 4,740,073 (Meacham, 1988), 5,416,509 (Sombrowsky, 1995), 5,825,541 (Imai, 1998), 5,872,590 (Aritake et al., 1999), 6,014,164 (Woodgate et al., 2000), 6,061,083 (Aritake et al., 2000), 6,483,534 (d'Ursel, 2002), 6,798,390 (Sudo et al., 2004), 6,819,489 (Harris, 2004), 7,030,903 (Sudo, 2006), 7,113,158 (Fujiwara et al., 2006), 7,123,287 (Surman, 2006), 7,250,990 (Sung et al., 2007), 7,265,902 (Lee et al., 2007), 7,375,885 (Ijzerman et al., 2008), 7,382,425 (Sung et al., 2008), and 7,432,892 (Lee et al., 2008); and U.S. Patent Applications 20030025995 (Redert et al., 2003), 20040178969 (Zhang et al., 2004), 20050041162 (Lee et al., 2005), 20050117016 (Surman, Philip, 2005), 20050219693 (Hartkop et al., 2005), 20050248972 (Kondo et al., 2005), 20050264560 (Hartkop, David; et al., 2005), 20050270645 (Cossairt et al., 2005), 20050280894 (Hartkop et al., 2005), 20060109202 (Alden, Ray, 2006), 20060244918 (Cossairt et al., 2006), 20070165013 (Goulanian et al., 2007), 20080204873 (Daniell, Stephen, 2008), 20090040753 (Matsumoto, Shinya, 2009), 20090052027 (Yamada et al., 2009), and 20090080048 (Tsao, Che-Chih, 2009).
Examples in the related art that appear to use active lenses to display three-dimensional moving images include the following: U.S. Pat. Nos. 5,493,427 (Nomura et al., 1996), 5,790,086 (Zelitt, 1998), 5,986,811 (Wohlstadter, 1999), 6,014,259 (Wohlstadter, 2000), 6,061,083 (Aritake et al., 2000), 6,437,920 (Wohlstadter, 2002), 6,533,420 (Eichenlaub, 2003), 6,683,725 (Wohlstadter, 2004), 6,714,174 (Suyama et al., 2004), 6,909,555 (Wohlstadter, 2005), 7,046,447 (Raber, 2006), 7,106,519 (Aizenberg et al., 2006), 7,167,313 (Wohlstadter, 2007), 7,297,474 (Aizenberg et al., 2007), 7,336,244 (Suyama et al., 2008), and 7,471,352 (Woodgate et al., 2008); and U.S. Patent Applications 20030058209 (Balogh, Tibor, 2003), 20040141237 (Wohlstadter, Jacob, 2004), 20040212550 (He, Zhan, 2004), 20050111100 (Mather et al., 2005), 20050231810 (Wohlstadter, Jacob, 2005), 20060158729 (Vissenberg et al., 2006), 20070058127 (Mather et al., 2007), 20070058258 (Mather et al., 2007), 20070242237 (Thomas, Clarence, 2007), 20080007511 (Tsuboi et al., 2008), 20080117289 (Schowengerdt et al., 2008), 20080192111 (Ijzerman, Willem, 2008), 20080204871 (Mather et al., 2008), 20080297594 (Hiddink et al., 2008), 20090021824 (Ijzerman et al., 2009), 20090033812 (Ijzerman et al., 2009), 20090052049 (Batchko et al., 2009), and 20090052164 (Kashiwagi et al., 2009).
Examples in the related art that appear to include head or eye tracking as part of a system to display three-dimensional moving images include the following: U.S. Pat. Nos. 5,311,220 (Eichenlaub, 1994), 5,712,732 (Street, 1998), 5,872,590 (Aritake et al., 1999), 5,959,664 (Woodgate, 1999), 6,014,164 (Woodgate et al., 2000), 6,061,083 (Aritake et al., 2000), 6,115,058 (Omori et al., 2000), 6,788,274 (Kakeya, 2004), 6,798,390 (Sudo et al., 2004), and 7,450,188 (Schwerdtner, 2008); and U.S. Patent Applications 20030025995 (Redert et al., 2003), 20070258139 (Tsai et al., 2007), and 20080007511 (Tsuboi et al., 2008).
Examples in the related art that appear to use a large rotating or tilting lens or prism as part of a system to display three-dimensional moving images include the following: U.S. Pat. Nos. 3,199,116 (Ross, 1965), 4,692,878 (Ciongoli, 1987), 6,061,489 (Ezra et al., 2000), 6,483,534 (d'Ursel, 2002), and 6,533,420 (Eichenlaub, 2003); and U.S. Patent Applications 20040178969 (Zhang et al., 2004), 20060023065 (Alden, Ray, 2006), and 20060203208 (Thielman et al., 2006).
Examples in the related art that appear to use multiple rotating or tilting lenses or prisms as part of a system to display three-dimensional moving images include the following: U.S. Pat. Nos. 7,182,463 (Conner et al., 2007), 7,300,157 (Conner et al., 2007), and 7,446,733 (Hirimai, 2008), and unpublished U.S. Patent Applications Ser. Nos. 12/317,856 (Connor, Robert, 2008) and 12/317,857 (Connor, Robert, 2008).
We now continue discussion of autostereoscopic methods by considering micromirror arrays. A micromirror array is a matrix of very tiny mirrors that can be individually controlled and tilted in real time to reflect light beams in different directions. Micromirror arrays are often used with coherent light, such as the light from lasers. Coherent light can be precisely targeted onto and reflected from moving mirrors. These redirected coherent light beams can be intersected to create a moving holographic image.
Although micromirror arrays offer some advantages over parallax barriers and lenticular arrays, they can be complicated and expensive to manufacture. They also have mechanical limitations with respect to speed and range of motion. If they are used with coherent light, then there can be expense and safety issues. If they are used with non-coherent light, then there can be issues with image quality due to the imprecision of reflecting non-coherent light from such tiny surface areas.
Examples in the related art that appear to use micromirror arrays to display three-dimensional moving images include the following: U.S. Pat. Nos. 5,689,321 (Kochi, 1997), 6,061,083 (Aritake et al., 2000), 6,304,263 (Chiabrera et al., 2001), 7,182,463 (Conner et al., 2007), 7,204,593 (Kubota et al., 2007), 7,261,417 (Cho et al., 2007), 7,300,157 (Conner et al., 2007), and 7,505,646 (Katou et al., 2009); and U.S. Patent Applications 20030058209 (Balogh, Tibor, 2003), 20040252187 (Alden, Ray, 2004), and 20050248972 (Kondo et al., 2005).
We now continue further along the autostereoscopic branch of our taxonomy to discuss the use of three-dimensional (3D) pixels. Each 3D pixel contains a set of sub-pixels, in different discrete locations, that each emit light in a different direction. For example, a 3D pixel can be made from a set of sub-pixels in proximity to a pixel-level microlens wherein the light from each sub-pixel enters and exits the microlens at a different angle. In another example, a 3D pixel can be made from a set of optical fibers that emit light at different angles.
The concept of 3D pixels has considerable appeal, but is complicated to implement. Manufacturing 3D pixels can be complex and expensive. There are spatial limits to how many discrete sub-pixels one can fit into a space the size of a pixel. This, in turn, limits image resolution and quality. Large displays can become bulky and expensive due to the enormous quantity of sub-pixels required and the complicated structures required to appropriately direct their light outputs. Microstructures (such as microdomes) to house multiple sub-pixels that protrude from the display surface can occlude the light from sub-pixels in adjacent pixels, limiting the size of the proper viewing zone.
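The sub-pixel budget implied by this approach can be shown with simple arithmetic; the display resolution and view count below are assumptions for illustration only:

```python
# Hypothetical 3D-pixel display: every pixel needs one sub-pixel emitter
# per discrete view direction, so totals grow multiplicatively.
def total_subpixels(h_pixels, v_pixels, views):
    return h_pixels * v_pixels * views

# A 1920 x 1080 panel with 16 discrete view directions per pixel already
# needs about 33 million individually addressable sub-pixels.
print(total_subpixels(1920, 1080, 16))  # 33177600
```

Doubling the number of views, or supporting vertical as well as horizontal parallax, multiplies this count again, which is why large 3D-pixel displays become bulky and expensive.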
Examples in the related art that appear to use 3D pixels containing sets of sub-pixels to display three-dimensional moving images include the following: U.S. Pat. Nos. 5,132,839 (Travis, 1992), 5,550,676 (Ohe et al., 1996), 5,993,003 (McLaughlin, 1999), 6,061,489 (Ezra et al., 2000), 6,128,132 (Wieland et al., 2000), 6,201,565 (Balogh, 2001), 6,329,963 (Chiabrera et al., 2001), 6,344,837 (Gelsey, 2002), 6,606,078 (Son et al., 2003), 6,736,512 (Balogh, 2004), 6,999,071 (Balogh, 2006), 7,084,841 (Balogh, 2006), 7,204,593 (Kubota et al., 2007), 7,283,308 (Cossairt et al., 2007), 7,425,951 (Fukushima et al., 2008), 7,446,733 (Hirimai, 2008), and 7,532,225 (Fukushima et al., 2009); and U.S. Patent Applications 20030071813 (Chiabrera et al., 2003), 20030103047 (Chiabrera et al., 2003), 20050053274 (Mayer et al., 2005), 20050285936 (Redert et al., 2005), 20060227208 (Saishu, Tatsuo, 2006), 20060279680 (Karman et al., 2006), 20080150936 (Karman, Gerardus, 2008), 20080266387 (Krijn et al., 2008), 20080309663 (Fukushima et al., 2008), 20090002262 (Fukushima et al., 2009), 20090046037 (Whitehead et al., 2009), 20090079728 (Sugita et al., 2009), 20090079733 (Fukushima et al., 2009), 20090096726 (Uehara et al., 2009), 20090096943 (Uehara et al., 2009), and 20090116108 (Levecq et al., 2009).
We now continue our review of autostereoscopic methods by discussing three-dimensional display volumes and moving two-dimensional surfaces that create a “volumetric” image in three-dimensional space. “Volumetric” means that the points that comprise the three-dimensional image are actually spread out in three-dimensions instead of on a flat display surface. In this respect, volumetric displays are not an illusion of three-dimensionality; they are actually three dimensional. Major types of volumetric displays are: (a) curved screen displays (such as a cylindrical or hemispherical projection surface); (b) static volumetric displays (such as an X,Y,Z matrix of light elements in 3D space or a series of parallel 2D display layers with adjustable transparency); and (c) dynamic volumetric displays with two-dimensional screens that rotate through space (such as a spinning disk or helix) while emitting, reflecting, or diffusing light.
Many planetariums use a dome-shaped projection surface as a form of volumetric display. The audience sits under the dome while light beams representing stars and planets are projected onto the dome, creating a three-dimensional image. Static volumetric displays can be made from a three-dimensional matrix of LEDs or fiber optics. Alternatively, a static volumetric display can be a volume of translucent substance (such as a gel or fog) into which light beams can be focused and intersected. One unusual version of a static volumetric display involves intersecting infrared laser beams to create a pattern of glowing plasma bubbles in mid-air. This plasma method is currently quite limited in terms of the number of display points, color, and safety issues, but is one of the few current display methods that genuinely projects images in “mid-air.”
There are several limitations of using volumetric methods to display three-dimensional moving images. Curved screen methods are significantly limited with respect to the shape of three-dimensional image that they can display; planetariums work because a dome-shaped display surface works as a proxy for the sky, but would not work well for projecting a 3D image of a car. Large static volumetric displays become very bulky, heavy, complex, and costly. Also, both static and dynamic volumetric displays generally create ghost-like images with no opacity, limited interposition, limited color, and low resolution. There are significant limitations on the size of dynamic volumetric displays due to the mass, inertia, and structural stress of large rapidly-spinning objects.
Examples in the related art that appear to use volumetric displays to display three-dimensional moving images include the following: U.S. Pat. Nos. 5,111,313 (Shires, 1992), 5,704,061 (Anderson, 1997), 6,487,020 (Favalora, 2002), 6,720,961 (Tracy, 2004), 6,765,566 (Tsao, 2004), 6,948,819 (Mann, 2005), 7,023,466 (Favalora et al., 2006), 7,277,226 (Cossairt et al., 2007), 7,364,300 (Favalora et al., 2008), 7,490,941 (Mintz et al., 2009), 7,492,523 (Dolgoff, 2009), and 7,525,541 (Chun et al., 2009); and U.S. Patent Applications 20050117215 (Lange, Eric, 2005), 20050152156 (Favalora et al., 2005), 20050180007 (Cossairt et al., 2005), and 20060109200 (Alden, Ray, 2006). A closely related method that involves using a vibrating projection screen is disclosed in U.S. Pat. Nos. 6,816,158 (Lemelson et al., 2004) and 7,513,623 (Thomas, 2009).
We now conclude our review of autostereoscopic methods by discussing holographic methods of displaying three-dimensional moving images. Holography involves recording and reconstructing the amplitude and phase distributions of an interference pattern of intersecting light beams. The light interference pattern is generally created by the intersection of two beams of coherent (laser) light: a signal beam that is reflected off (or passed through) an object and a reference beam that comes from the same source. When the interference pattern is recreated and viewed by an observer, it appears as a three-dimensional object that can be seen from multiple perspectives.
Holography has been used for many years to create three-dimensional static images and progress has been made toward using holographic images to display three-dimensional moving images, but holographic video remains quite limited. Limitations of using holographic technology for displaying three-dimensional moving images include the following: huge data requirements; display size limitations; color limitations; ghost-like images with no opacity and limited interposition; and cost and safety issues associated with using lasers.
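The scale of the “huge data requirements” noted above can be estimated from first principles: an interference pattern must be sampled at a pitch on the order of half the wavelength of visible light. The hologram dimensions and fringe pitch below are assumptions for a rough order-of-magnitude sketch:

```python
# Rough order-of-magnitude estimate of samples per holographic frame.
# A fringe pitch of ~0.25 micron corresponds roughly to half-wavelength
# sampling of visible laser light (illustrative assumption).
def hologram_samples(width_mm, height_mm, pitch_um=0.25):
    samples_per_mm = 1000.0 / pitch_um  # samples along one axis per mm
    return (width_mm * samples_per_mm) * (height_mm * samples_per_mm)

# A 100 mm x 100 mm hologram: 400,000 samples per axis, ~1.6e11 per frame,
# which must then be recomputed for every frame of a moving image.
print(f"{hologram_samples(100, 100):.1e}")  # 1.6e+11
```

Multiplying this per-frame count by a video frame rate makes clear why holographic video remains limited by data bandwidth and computation even before color, display size, and laser safety are considered.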
The invention disclosed herein, called “Holovision”™, is a novel autostereoscopic device that displays three-dimensional moving images featuring binocular disparity and motion parallax that can be seen simultaneously by multiple viewers in different positions without headgear. This device uniquely addresses many of the limitations of methods for displaying three-dimensional images in the related art.
The invention disclosed herein is a device comprising: (1) a plurality of longitudinal light-guiding members that rotate around their longitudinal axes; and (2) a plurality of light-emitting members inside, or attached to, each longitudinal light-guiding member. Light rays from the light-emitting members are guided through light-transmitting portions in the longitudinal light-guiding member so that the directions of these light rays change as the longitudinal light-guiding member rotates. Further, changes in the content of light rays from the light-emitting members are coordinated with changes in the directions of these light rays so that different viewers in different positions all see appropriate three-dimensional images.
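The coordination between ray direction and image content described above can be modeled conceptually. The following sketch is an illustration under assumed parameters, not the disclosed control system: it simply maps a rotating column's current exit angle to one of a set of stored perspective views, so that viewers at different angles each receive the perspective appropriate to their position.

```python
# Conceptual model (assumption, not the disclosed controller): each rotating
# column emits the view slice that matches its current exit angle, so a
# viewer at any angle sees the perspective correct for that position.
def view_index_for_angle(exit_angle_deg, num_views):
    """Map the column's current exit angle (degrees) to a stored view slice."""
    return int(exit_angle_deg % 360.0 / 360.0 * num_views) % num_views

# With 72 stored perspectives, a column pointing at 90 degrees shows view 18:
print(view_index_for_angle(90.0, 72))  # 18
```

In this model, updating the emitted content as the column turns is what couples "changes in the content of light rays" to "changes in the directions of these light rays," as stated in the summary.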
Advantages of the present invention over methods in the related art include the following. No current method in the related art offers all of these advantages.
The figures discussed herein show selected examples of how this invention may be embodied, but these figures do not limit the full generalizability of the claims.
In the example shown in
The sequence of
In this example, the light-transmitting portion 102 of column 101 runs through the entire cross-section of column 101 and allows light to exit in two opposite directions as the column rotates. This allows two complete angular sweeps of the light rays exiting the column with each rotation of the column. This two-opening configuration eliminates a “black-out period” (while the opening is on the side opposite the viewer) that would occur if there were only a single opening. In another example, if the rotation speed is so fast that a black-out period would not be noticed due to image persistence, then having a light-emitting member located off-center with only one opening could be advantageous for some applications. For example, a single-opening configuration could allow a longer light-transmitting portion for more precise ray direction and/or smaller-scale columns.
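The two-sweeps-per-rotation behavior, and the angular spacing between successive views, can be sketched numerically. The rotation rate and content update rate below are assumptions chosen for illustration:

```python
# Rotating column with a through-opening: light can exit on both sides,
# so each full rotation of the column produces two complete angular sweeps.
# Illustrative numbers only; rotation and update rates are assumptions.

def refresh_rate(rotations_per_sec, openings=2):
    """Times per second that any given viewing direction receives light."""
    return rotations_per_sec * openings

def degrees_per_view(rotations_per_sec, content_updates_per_sec):
    """Angular spacing between successive views as content is updated."""
    sweep_deg_per_sec = rotations_per_sec * 360.0
    return sweep_deg_per_sec / content_updates_per_sec

# At 30 rotations/s, a two-opening column refreshes each direction 60x/s
# (above the flicker-fusion threshold), and a 100 kHz content update rate
# spaces successive views about 0.108 degrees apart.
print(refresh_rate(30))               # 60
print(degrees_per_view(30, 100_000))  # 0.108
```

This illustrates the point made below: the effective number of views is bounded by how fast the light-emitting member's content can be changed, not by a fixed count of slits or sub-pixels.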
Using a rotating member to change the angle of emitted light rays can be an advantage over methods in the related art that use reciprocal movement of parallax barriers, lenticular lenses, or dynamic lens arrays. Rotating members do not expend energy or time overcoming inertia as is required with reciprocal movement. This can allow smoother and faster motion for guiding light rays along different angles. In this example, the columns rotate sufficiently fast that viewers see a continuous image, not a flickering image, due to persistence of vision.
Another advantage of using a rotating member to direct light rays at different angles, as compared to parallax barriers with vertical slits or 3D pixels with a limited number of sub-pixel light emitters, is that the change in angle is continuous rather than discrete. This allows a much greater degree of image precision. The only limitation is how rapidly the light content emitted by the light-emitting member can be changed, which is probably much less of a limitation than the number of vertical strips that can be used in a parallax barrier or the number of sub-pixel light emitters that one can fit into a pixel-size space.
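As an illustrative sketch only, the angular precision implied by this continuous sweep can be estimated from the emitter's modulation rate: the finest distinguishable angular step is the angle swept between successive content updates. The rotation speed and modulation rate below are assumed example values.

```python
# Hypothetical sketch: effective angular resolution of a rotating column.
# Because the sweep is continuous, precision is set by how fast the
# emitter's content can be modulated, not by a fixed count of slits or
# sub-pixels. The numbers used below are assumed illustrative values.

def angular_resolution_deg(rotations_per_second: float,
                           modulation_hz: float) -> float:
    """Degrees swept between successive content updates of the emitter."""
    degrees_per_second = 360.0 * rotations_per_second
    return degrees_per_second / modulation_hz

# At 30 rotations/s with a 100 kHz modulation rate, successive content
# updates are about 0.108 degrees apart -- far finer than the discrete
# steps of a typical parallax barrier or sub-pixel array.
```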
In this example, all five columns rotate in the same clockwise direction. In another example, adjacent columns may rotate in clockwise and counter-clockwise directions. This latter design may be useful if the columns touch each other, especially if the columns are interconnected by gears as part of a rotational drive mechanism. In another example, the columns may switch rotational direction over time, creating a “window washer” motion for the light-emitting portion. A “window washer” motion might be useful for avoiding a black-out period with a single light-emitting opening if this outweighs the disadvantage of having to overcome inertia to change rotational direction.
Another advantage of this current invention is that it avoids a gap between a light-emitting surface and a parallax member such as a parallax barrier, lenticular array, or dynamic lens array. In this invention, the light-emitting member is directly contained within a light-transmitting portion within a rotating longitudinal member. This current invention enables multiple viewers in multiple locations to always see the correct three-dimensional image, avoiding the pseudoscopic images with reversed depth, double images, or black lines that can occur in much of the related art.
In this example, the longitudinal axes of the columns are in parallel straight lines within the same plane. This creates a relatively flat display surface (albeit with rounded elements). Such a flat display surface can be useful for applications in laptop computers, cellular phones, and other portable electronics. In another example, the longitudinal axes of the columns may be in parallel straight lines along a curved surface in three-dimensional space. Such a curved display surface may be useful for providing some visual accommodation cues, in addition to motion parallax cues, for large-scale viewing applications. In another example, the axes of the columns need not be parallel. For example, the axes of the columns may be configured in a radial pattern. Radial columns could produce an image like that of a radar or sonar screen display, except that radial columns would display a three-dimensional image instead of a two-dimensional image like a conventional radar screen.
A single layer of parallel rotating columns, such as shown in this example, will only provide motion parallax along one axis of motion. For example, if the axes of the columns are generally vertical, then the resulting three-dimensional image will show motion parallax in response to horizontal (side to side) head movement by a viewer. If the axes of the columns are generally horizontal, then the resulting three-dimensional image will show motion parallax in response to vertical (up and down) head movement by the viewer. As we will discuss later, motion parallax that is responsive to both horizontal and vertical movement can be achieved with the addition of an optional second columnar layer.
Technically,
For example,
With a surface composed of only vertical-axis parallel rotating columns, the viewers will only see motion parallax with horizontal head movement (right and left on this diagram). We will discuss options that allow motion parallax with vertical head movement as well, but it is important to start with the basic concept before moving into options and greater complexity.
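As an illustrative sketch only, the reason vertical-axis columns yield only horizontal motion parallax can be seen from simple geometry: the horizontal angle from a vertical column to an eye depends only on the eye's sideways offset and distance, not on its height. The coordinates below are assumed example values in meters.

```python
# Hypothetical sketch: for a vertical-axis column, only horizontal head
# movement changes the column-to-eye viewing angle, so only horizontal
# movement produces parallax. Note that the eye's vertical position does
# not even enter the calculation. Positions are assumed example values.
import math

def viewing_angle_deg(column_x: float, eye_x: float, eye_z: float) -> float:
    """Horizontal angle (degrees) from a vertical column to an eye position,
    measured from the display normal."""
    return math.degrees(math.atan2(eye_x - column_x, eye_z))

# An eye directly in front of the column sees angle 0; moving the head
# sideways changes the angle (and thus the image presented), while moving
# it up or down leaves the angle unchanged.
```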
In this example, rotation of the columns and corresponding planes of light rays happens simultaneously for all the parallel columns forming the image surface. As mentioned previously, when this rotation is sufficiently rapid, persistence of vision causes the eye to continue to see the image at a certain angle until the ray sweeps around and the image at that angle is displayed again. In this case, the rotating planes of light are perceived as simultaneous by the viewer.
When the content of the light plane changes in synchronization with changes in the angle of the light plane, then this device can display three-dimensional moving images with some degree of motion parallax as viewed simultaneously by multiple viewers. This is possible to a greater extent than with most methods in the related art because of the integration of the light-guiding member (e.g., the light-transmitting portion in this invention) and the light-emitting member. This is an improvement over methods in the related art that have a gap between the light-emitting display surface and a light-guiding member such as a parallax barrier or lenticular lens.
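As an illustrative sketch only, this synchronization can be thought of as a lookup from each column's instantaneous angle to the stored view content for that angle. The quantization into a discrete number of views is an artifact of this sketch; the invention's angular sweep is continuous. The dictionary-based renderer stub below is an assumption for illustration.

```python
# Hypothetical sketch: synchronizing emitted content with column angle.
# A renderer is assumed to supply one column of pixels per (column, view)
# pair; here it is stubbed with a dictionary. Discretizing the continuous
# sweep into num_views stored views is only for this sketch.

def view_index(angle_deg: float, num_views: int) -> int:
    """Map a column's current angle to one of num_views stored views."""
    return int((angle_deg % 360.0) / 360.0 * num_views) % num_views

def content_for(column: int, angle_deg: float, views: dict, num_views: int):
    """Look up the pixel content to emit for this column at this angle."""
    return views[(column, view_index(angle_deg, num_views))]

# As each column rotates, the controller would repeatedly call content_for
# with the current angle, so content changes in lockstep with ray direction.
```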
The means by which longitudinal columns are rotated and power is delivered to the light-emitting members are not central to this invention. However, it is useful to discuss examples of how these functions may be achieved.
In the example in
In the examples shown in
An advantage of using a rotating column with a circular cross-section is a lower chance of damaging the rotating column by snagging it on an external object (such as someone accidentally touching the display surface with their finger if the display is not covered) or by collision between adjacent rotating columns (if inter-columnar alignment becomes slightly misaligned). Nonetheless, there may be some circumstances under which non-circular cross-sections may be preferred due to lower manufacturing cost or specialized function.
The addition of a second layer of rotating columns may allow motion parallax when a viewer moves their head vertically (up and down) in addition to motion parallax when a viewer moves their head horizontally (side to side).
However, the addition of a second layer of rotating columns raises some technical issues that must be addressed. For example, there is a gap between the second layer and the light-emitting members contained within columns in the first layer. This gap may cause problems similar to those that occur with methods in the related art involving a gap between parallax barriers, lenticular arrays, or dynamic lens arrays and a light-emitting surface. As the columns in the second layer rotate, they do not provide a consistent line-of-sight to the same light-emitting member in the first layer throughout their rotation. One way to address this problem is to have light-emitting members in the first layer relatively close to each other and to coordinate shifting image content across multiple light-emitting members as the column in the second layer rotates. A second way to address this problem is to have the columns in the second layer shift as well as rotate, as shown in
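As an illustrative sketch of the first approach only, the first-layer emitter to activate can be chosen by projecting the second-layer opening's line of sight back across the gap and selecting the nearest emitter. The gap distance and emitter pitch below are assumed example values in millimeters, not dimensions specified herein.

```python
# Hypothetical sketch: as a second-layer column rotates, the line of sight
# through its opening lands on different points of the first layer, so
# content shifts to whichever closely spaced emitter lies nearest that
# point. Gap and pitch are assumed illustrative values in millimeters.
import math

def nearest_emitter(angle_deg: float, gap_mm: float, pitch_mm: float) -> int:
    """Index offset of the first-layer emitter intersected by the line of
    sight through a second-layer opening tilted angle_deg from the normal."""
    offset_mm = gap_mm * math.tan(math.radians(angle_deg))
    return round(offset_mm / pitch_mm)

# At 0 degrees the opening faces the emitter directly behind it (offset 0);
# as the second-layer column rotates, content shifts to neighboring
# emitters, keeping the displayed ray consistent across the gap.
```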
In
In
In
Number | Name | Date | Kind |
---|---|---|---|
3199116 | Ross | Aug 1965 | A |
4692878 | Ciongoli | Sep 1987 | A |
4740073 | Meacham | Apr 1988 | A |
4829365 | Eichenlaub | May 1989 | A |
4853769 | Kollin | Aug 1989 | A |
5111313 | Shires | May 1992 | A |
5132839 | Travis | Jul 1992 | A |
5300942 | Dolgoff | Apr 1994 | A |
5311220 | Eichenlaub | May 1994 | A |
5315377 | Isono et al. | May 1994 | A |
5416509 | Sombrowsky | May 1995 | A |
5465175 | Woodgate et al. | Nov 1995 | A |
5493427 | Nomura et al. | Feb 1996 | A |
5550676 | Ohe et al. | Aug 1996 | A |
5602679 | Dolgoff et al. | Feb 1997 | A |
5689321 | Kochi | Nov 1997 | A |
5704061 | Anderson | Dec 1997 | A |
5712732 | Street | Jan 1998 | A |
5726800 | Ezra et al. | Mar 1998 | A |
5790086 | Zelitt | Aug 1998 | A |
5825541 | Imai | Oct 1998 | A |
5855425 | Hamagishi | Jan 1999 | A |
5872590 | Aritake et al. | Feb 1999 | A |
5880704 | Takezaki | Mar 1999 | A |
5900982 | Dolgoff et al. | May 1999 | A |
5943166 | Hoshi et al. | Aug 1999 | A |
5959664 | Woodgate | Sep 1999 | A |
5986804 | Mashitani et al. | Nov 1999 | A |
5986811 | Wohlstadter | Nov 1999 | A |
5993003 | McLaughlin | Nov 1999 | A |
6014164 | Woodgate et al. | Jan 2000 | A |
6014259 | Wohlstadter | Jan 2000 | A |
6061083 | Aritake et al. | May 2000 | A |
6061489 | Ezra et al. | May 2000 | A |
6115058 | Omori et al. | Sep 2000 | A |
6118584 | Van Berkel et al. | Sep 2000 | A |
6128132 | Wieland et al. | Oct 2000 | A |
6201565 | Balogh | Mar 2001 | B1 |
6219184 | Nagatani | Apr 2001 | B1 |
6229562 | Kremen | May 2001 | B1 |
6304263 | Chiabrera et al. | Oct 2001 | B1 |
6329963 | Chiabrera et al. | Dec 2001 | B1 |
6337721 | Hamagishi et al. | Jan 2002 | B1 |
6344837 | Gelsey | Feb 2002 | B1 |
6437915 | Moseley et al. | Aug 2002 | B2 |
6437920 | Wohlstadter | Aug 2002 | B1 |
6462871 | Morishima | Oct 2002 | B1 |
6481849 | Martin et al. | Nov 2002 | B2 |
6483534 | d'Ursel | Nov 2002 | B1 |
6487020 | Favalora | Nov 2002 | B1 |
6533420 | Eichenlaub | Mar 2003 | B1 |
6606078 | Son et al. | Aug 2003 | B2 |
6611243 | Moseley et al. | Aug 2003 | B1 |
6683725 | Wohlstadter | Jan 2004 | B2 |
6714174 | Suyama et al. | Mar 2004 | B2 |
6720961 | Tracy | Apr 2004 | B2 |
6736512 | Balogh | May 2004 | B2 |
6765566 | Tsao | Jul 2004 | B1 |
6788274 | Kakeya | Sep 2004 | B2 |
6791512 | Shimada | Sep 2004 | B1 |
6795241 | Holzbach | Sep 2004 | B1 |
6798390 | Sudo et al. | Sep 2004 | B1 |
6816158 | Lemelson et al. | Nov 2004 | B1 |
6819489 | Harris | Nov 2004 | B2 |
6831678 | Travis | Dec 2004 | B1 |
6876495 | Street | Apr 2005 | B2 |
6909555 | Wohlstadter | Jun 2005 | B2 |
6929369 | Jones | Aug 2005 | B2 |
6948819 | Mann | Sep 2005 | B2 |
6999071 | Balogh | Feb 2006 | B2 |
7023466 | Favalora et al. | Apr 2006 | B2 |
7030903 | Sudo | Apr 2006 | B2 |
7046447 | Raber | May 2006 | B2 |
7084841 | Balogh | Aug 2006 | B2 |
7106519 | Aizenberg et al. | Sep 2006 | B2 |
7113158 | Fujiwara et al. | Sep 2006 | B1 |
7123287 | Surman | Oct 2006 | B2 |
7142232 | Kremen | Nov 2006 | B2 |
7154653 | Kean et al. | Dec 2006 | B2 |
7167313 | Wohlstadter | Jan 2007 | B2 |
7182463 | Conner et al. | Feb 2007 | B2 |
7204593 | Kubota et al. | Apr 2007 | B2 |
7250990 | Sung et al. | Jul 2007 | B2 |
7261417 | Cho et al. | Aug 2007 | B2 |
7265902 | Lee et al. | Sep 2007 | B2 |
7268943 | Lee | Sep 2007 | B2 |
7277226 | Cossairt et al. | Oct 2007 | B2 |
7283308 | Cossairt et al. | Oct 2007 | B2 |
7297474 | Aizenberg et al. | Nov 2007 | B2 |
7300157 | Conner et al. | Nov 2007 | B2 |
7327389 | Horimai et al. | Feb 2008 | B2 |
7336244 | Suyama et al. | Feb 2008 | B2 |
7342721 | Lukyanitsa | Mar 2008 | B2 |
7364300 | Favalora et al. | Apr 2008 | B2 |
7375885 | Ijzerman et al. | May 2008 | B2 |
7382425 | Sung et al. | Jun 2008 | B2 |
7400447 | Sudo et al. | Jul 2008 | B2 |
7423796 | Woodgate et al. | Sep 2008 | B2 |
7425951 | Fukushima et al. | Sep 2008 | B2 |
7426068 | Woodgate et al. | Sep 2008 | B2 |
7432892 | Lee et al. | Oct 2008 | B2 |
7446733 | Hirimai | Nov 2008 | B1 |
7450188 | Schwerdtner | Nov 2008 | B2 |
7471352 | Woodgate et al. | Dec 2008 | B2 |
7490941 | Mintz et al. | Feb 2009 | B2 |
7492513 | Fridman et al. | Feb 2009 | B2 |
7492523 | Dolgoff | Feb 2009 | B2 |
7505646 | Katou et al. | Mar 2009 | B2 |
7506984 | Saishu et al. | Mar 2009 | B2 |
7513623 | Thomas | Apr 2009 | B2 |
7525541 | Chun et al. | Apr 2009 | B2 |
7532225 | Fukushima et al. | May 2009 | B2 |
7554625 | Koganezawa | Jun 2009 | B2 |
7688376 | Kondo et al. | Mar 2010 | B2 |
20030025995 | Redert et al. | Feb 2003 | A1 |
20030058209 | Balogh | Mar 2003 | A1 |
20030071813 | Chiabrera et al. | Apr 2003 | A1 |
20030076423 | Dolgoff | Apr 2003 | A1 |
20030103047 | Chiabrera et al. | Jun 2003 | A1 |
20030107805 | Street | Jun 2003 | A1 |
20030206343 | Morishima et al. | Nov 2003 | A1 |
20040012671 | Jones et al. | Jan 2004 | A1 |
20040141237 | Wohlstadter | Jul 2004 | A1 |
20040178969 | Zhang et al. | Sep 2004 | A1 |
20040212550 | He | Oct 2004 | A1 |
20040240777 | Woodgate et al. | Dec 2004 | A1 |
20040252187 | Alden | Dec 2004 | A1 |
20050030308 | Takaki | Feb 2005 | A1 |
20050041162 | Lee et al. | Feb 2005 | A1 |
20050053274 | Mayer et al. | Mar 2005 | A1 |
20050111100 | Mather et al. | May 2005 | A1 |
20050117016 | Surman | Jun 2005 | A1 |
20050117215 | Lange | Jun 2005 | A1 |
20050152156 | Favalora et al. | Jul 2005 | A1 |
20050180007 | Cossairt et al. | Aug 2005 | A1 |
20050219693 | Hartkop et al. | Oct 2005 | A1 |
20050231810 | Wohlstadter | Oct 2005 | A1 |
20050248972 | Kondo et al. | Nov 2005 | A1 |
20050264560 | Hartkop et al. | Dec 2005 | A1 |
20050264651 | Saishu et al. | Dec 2005 | A1 |
20050270645 | Cossairt et al. | Dec 2005 | A1 |
20050280894 | Hartkop et al. | Dec 2005 | A1 |
20050285936 | Redert et al. | Dec 2005 | A1 |
20060012542 | Alden | Jan 2006 | A1 |
20060023065 | Alden | Feb 2006 | A1 |
20060109200 | Alden | May 2006 | A1 |
20060109202 | Alden | May 2006 | A1 |
20060158729 | Vissenberg et al. | Jul 2006 | A1 |
20060176541 | Woodgate et al. | Aug 2006 | A1 |
20060203208 | Thielman et al. | Sep 2006 | A1 |
20060227208 | Saishu | Oct 2006 | A1 |
20060244907 | Simmons | Nov 2006 | A1 |
20060244918 | Cossairt et al. | Nov 2006 | A1 |
20060279680 | Karman et al. | Dec 2006 | A1 |
20070058127 | Mather et al. | Mar 2007 | A1 |
20070058258 | Mather et al. | Mar 2007 | A1 |
20070097019 | Wynne-Powell | May 2007 | A1 |
20070109811 | Krijn et al. | May 2007 | A1 |
20070165013 | Goulanian et al. | Jul 2007 | A1 |
20070201133 | Cossairt | Aug 2007 | A1 |
20070222915 | Niioka | Sep 2007 | A1 |
20070242237 | Thomas | Oct 2007 | A1 |
20070258139 | Tsai et al. | Nov 2007 | A1 |
20080007511 | Tsuboi et al. | Jan 2008 | A1 |
20080068329 | Shestak et al. | Mar 2008 | A1 |
20080117233 | Mather et al. | May 2008 | A1 |
20080117289 | Schowengerdt et al. | May 2008 | A1 |
20080150936 | Karman | Jun 2008 | A1 |
20080192111 | Ijzerman | Aug 2008 | A1 |
20080204871 | Mather et al. | Aug 2008 | A1 |
20080204873 | Daniell | Aug 2008 | A1 |
20080231690 | Woodgate et al. | Sep 2008 | A1 |
20080266387 | Krijn et al. | Oct 2008 | A1 |
20080273242 | Woodgate et al. | Nov 2008 | A1 |
20080297594 | Hiddink et al. | Dec 2008 | A1 |
20080297670 | Tzschoppe et al. | Dec 2008 | A1 |
20080309663 | Fukushima et al. | Dec 2008 | A1 |
20090002262 | Fukushima et al. | Jan 2009 | A1 |
20090021824 | Ijzerman et al. | Jan 2009 | A1 |
20090033812 | Ijzerman et al. | Feb 2009 | A1 |
20090040753 | Matsumoto et al. | Feb 2009 | A1 |
20090046037 | Whitehead et al. | Feb 2009 | A1 |
20090052027 | Yamada et al. | Feb 2009 | A1 |
20090052049 | Batchko et al. | Feb 2009 | A1 |
20090052164 | Kashiwagi et al. | Feb 2009 | A1 |
20090079728 | Sugita et al. | Mar 2009 | A1 |
20090079733 | Fukushima et al. | Mar 2009 | A1 |
20090080048 | Tsao | Mar 2009 | A1 |
20090096726 | Uehara et al. | Apr 2009 | A1 |
20090096943 | Uehara et al. | Apr 2009 | A1 |
20090116108 | Levecq et al. | May 2009 | A1 |