Consumer Video Devices
Consumers are increasingly using portable devices such as the Apple™ iPod™ Touch™, or smart phones such as the iPhone™ and Google™ Android™ devices, to store (or access via the web) television programs, movies, and other video content. The small form factor of these devices allows them to be carried conveniently in a pocket. However, the small screen size can provide a less than ideal viewing experience.
Head-mounted displays have been known for quite some time. Certain types of these displays are worn like a pair of eyeglasses. They may have a display element for both the left and right eyes to provide stereo video images. They may be designed to present a smoked-plastic “sunglasses” look to the outside world. Products on the market today can provide a reasonably immersive viewing experience in a small, portable, compact form factor, providing a “big screen” experience while at the same time remaining compatible with the portability of iPods and smart phones.
In these head mounted display devices, the optical imaging path for each eye typically consists of a Light Emitting Diode (LED) for backlight illumination, a polarizing film, and a micro-display Liquid Crystal Display (LCD) element in a molded plastic package. Among the pieces in the optical path, the micro-display element typically takes center stage. Suitably small color LCD panels are available from sources such as Kopin Corporation of Taunton, Mass. Kopin's displays such as the CyberDisplay® models can provide QVGA, VGA, WVGA, SVGA and even higher resolution depending on the desired quality of the resulting video.
Stereoscopic 3D Techniques
At the same time, full size flat panel televisions are increasingly available for viewing content in three dimensions. This is causing a corresponding increase in the availability of 3D content, making it desirable to view such content on portable devices as well.
Stereoscopy is a method of displaying three-dimensional images by presenting slightly different views to the left and right eyes. The most common methods use a single screen or display device, combined with some optical means to separate the images for the left and right eyes, such as anaglyph (color-filtered) glasses, polarized displays with passive polarized glasses, and time-multiplexed displays with active shutter glasses.
All of the above techniques are to some degree subject to crosstalk artifacts, in which each eye receives some light intended for the other. These artifacts reduce the perceived quality of the 3D image.
An alternative technique eliminates inter-eye crosstalk entirely, by using separate microdisplays in binocular eyewear. The eyewear is constructed such that each eye focuses on a single display, and the two displays are driven with separate video signals.
Stereo Video Signals and Formats
The installed base of video electronic equipment includes very little support for stereoscopic 3D. In most cases, it is therefore more desirable to adapt existing 2D equipment, signals, and formats to handle 3D content. Such methods include packing the left-eye and right-eye images into a single video frame, for example side-by-side or top-and-bottom, or alternating them in time as sequential frames or fields.
Various formats have proliferated as each display system has chosen the method most advantageous for the particular display technology used.
YouTube™ has introduced support for 3D video, and has selected “cross-eyed side-by-side” as its standard format for uploads. (The YouTube web site provides options for various other formats on playback.) Because of the vast popularity of YouTube, this format may become a de facto standard for exchanging 3D content.
The cross-eyed side-by-side format splits the image into left and right halves, and places left-eye data in the right half image and right-eye data in the left half image. The "parallel" side-by-side format puts left and right images in their corresponding halves. Both formats are well-suited for display on video eyewear, as format conversion can be accomplished without use of a frame buffer memory.
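A minimal sketch of the demultiplexing step just described, operating on one scan line of a side-by-side frame. The function and variable names are illustrative, not taken from any standard or from this document:

```python
def split_side_by_side(line, cross_eyed=True):
    """Split one scan line of pixels into (left_eye, right_eye) halves.

    In the "cross-eyed" layout the right-eye image occupies the left
    half of the frame; in the "parallel" layout each eye's image sits
    in its corresponding half.
    """
    mid = len(line) // 2
    first, second = line[:mid], line[mid:]
    if cross_eyed:
        return second, first   # left-eye data is in the right half
    return first, second       # left-eye data is in the left half

# Toy 8-pixel line: 'L' marks left-eye pixels, 'R' right-eye pixels,
# stored cross-eyed (right-eye data in the first half).
line = ["R"] * 4 + ["L"] * 4
left, right = split_side_by_side(line, cross_eyed=True)
# left == ["L", "L", "L", "L"], right == ["R", "R", "R", "R"]
```

Because each output line depends only on the current input line, no frame buffer is needed; a line buffer suffices.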
Various approaches have also been taken in the past to adapt eyewear to handle streaming video.
In a prior art implementation of eyewear adapted for showing 3D video, both displays also receive the same video signals 30, but they are driven with separate control signals 42-L, 42-R, as illustrated in the accompanying drawings.
This method may be adapted to other formats as well; the particular adaptations depend on the input format.
The present invention is a technique to drive three dimensional video eyewear from input video streams in various formats. In one embodiment, a common sampling clock is used to drive both a left and right display in parallel.
More particularly, a video eyewear device capable of displaying three dimensional video content receives a digital video signal having encoded therein information to be displayed on left and right displays. Left channel and right channel video drivers provide separate left and right video signals to the respective left and right displays. A clock signal is applied to the left and right video drivers such that an active sampling clock period for the left video driver occurs during the same time as an active sampling clock period for the right video driver.
In one configuration, a left and right digital scaler are connected to receive the respective left and right digital video streams, to apply a horizontal scale factor. The scale factor may depend on a ratio of the number of pixels in a horizontal line of one of the displays divided by the number of pixels in a horizontal line of the input digital video signal. The scalers may repeat pixels, or use linear or other pixel interpolation techniques. When the input signal is a color signal having three color channels, the scaling is performed independently for each channel, and for each eye.
In still other embodiments, a pixel position shifter may shift at least one of the left or right digital video signals horizontally with respect to the other. Such adjustments may be desirable to accommodate variations in the video source material or in the viewer's Inter-Pupillary Distance (IPD).
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
A description of example embodiments of the invention follows.
In a first example embodiment, the left and right displays of a head mounted video eyewear apparatus are driven with two respective video signals that are derived from an input video stream. The control signals, such as sampling clock inputs, to the two displays may be identical, and may have a clock period that is the same as for displaying 2D content.
In one embodiment, the displays 150-L, 150-R may each be a Kopin CyberDisplay® WVGA LVS Display with an 854×480 resolution in a 0.44″ diagonal form factor size. Such a display 150 may be driven by a video driver 140 such as Kopin's KCD-A251 display driver Application Specific Integrated Circuit (ASIC).
In this embodiment, the input video stream 110 may be a 480p color digital video signal with the pixels arranged in a "side-by-side" format. The color digital video signal consists of three channels (R, G, B), each channel having 8 bits of resolution.
In this case, the line buffer memory 120 is 24 bits wide. The line buffer 120 may be implemented as a single buffer, a double buffer, or a pair of FIFOs. The buffer 120, of whatever design, is often small enough to fit in an FPGA or similar programmable logic device, and may be integrated with other components of the display control logic.
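As a sketch of the buffering arrangement just described, the following models the line buffer 120 as a pair of FIFOs, each 24 bits wide (8 bits per R, G, B channel), routing side-by-side pixels to the left-eye or right-eye side as they arrive. All names are illustrative, not from this document:

```python
from collections import deque

def pack_rgb888(r, g, b):
    """Pack one pixel's three 8-bit channels into a 24-bit word."""
    return ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF)

def buffer_line(pixels, line_width, cross_eyed=True):
    """Route one side-by-side scan line into left and right FIFOs.

    `pixels` is a sequence of (r, g, b) tuples for one scan line.
    """
    left_fifo, right_fifo = deque(), deque()
    mid = line_width // 2
    for x, (r, g, b) in enumerate(pixels):
        word = pack_rgb888(r, g, b)
        in_first_half = x < mid
        # Cross-eyed layout: the first half of the line carries
        # right-eye data; parallel layout: it carries left-eye data.
        if in_first_half == cross_eyed:
            right_fifo.append(word)
        else:
            left_fifo.append(word)
    return left_fifo, right_fifo
```

In hardware, the two FIFOs would be drained in parallel by the left and right display paths on the shared sampling clock.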
In this example embodiment, there are a total of six digital scalers 130 used (one each for the R, G, and B channels for each of the left and right displays), since interpolation of each color channel is preferably done separately from the other channels. The digital scalers 130 "make up" the difference in horizontal resolution between the higher resolution display and the now lower resolution input signal fed to each eye, based on the ratio of the horizontal resolution of the two displays 150 to that of the input video stream 110. In a case where the scale factor is a simple integer reciprocal (such as 2:1), scaling 130 can be implemented as a simple repetition of pixels. However, in other cases, where the scale factor is not an integer reciprocal, more complicated scaling techniques such as linear interpolation may be used. In either case, scalers 130 are preferably implemented in the digital domain, which may achieve better results than possible with the prior art methods of resampling an analog signal.
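The two scaling approaches just described can be sketched as follows, operating on one color channel of one scan line (the full pipeline would apply this six times: three channels for each of two eyes). Function names are illustrative:

```python
def scale_repeat(channel, factor):
    """Integer-reciprocal scale (e.g. 2:1) by simple pixel repetition."""
    out = []
    for v in channel:
        out.extend([v] * factor)
    return out

def scale_linear(channel, out_width):
    """Non-integer scale by linear interpolation between neighbors."""
    in_width = len(channel)
    out = []
    for i in range(out_width):
        # Map output pixel i back to a fractional input position.
        pos = i * (in_width - 1) / (out_width - 1)
        lo = int(pos)
        hi = min(lo + 1, in_width - 1)
        frac = pos - lo
        out.append(round(channel[lo] * (1 - frac) + channel[hi] * frac))
    return out

# 2:1 repetition: 4 input pixels become 8 output pixels.
assert scale_repeat([10, 20, 30, 40], 2) == [10, 10, 20, 20, 30, 30, 40, 40]
```

In hardware, the interpolation would typically be done with fixed-point arithmetic rather than floating point, but the structure is the same.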
Several design considerations apply to the digital scalers 130.
After undergoing any necessary scaling by scalers 130-L, 130-R, the output streams pass to the left and right display drivers 140-L and 140-R. Each display driver 140 typically includes one or more D/A converters and one or more video amplifiers.
Selective Switching Between 2D and 3D Modes
In another embodiment, the 3D system may be selectively switched to a 2D mode by changing the scaling factor in the digital scalers. That is, instead of applying interpolation, the same buffer output, without scaling, is sent to each display 150-L, 150-R.
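The mode switch just described can be sketched as a single per-line decision: in 3D mode each eye receives its scaled half of the side-by-side line, while in 2D mode both eyes receive the same unscaled buffer output. A 2:1 pixel-repetition scale factor is assumed here for illustration; the names are not from this document:

```python
def drive_line(line, mode_3d, cross_eyed=True):
    """Return (left_line, right_line) for one buffered scan line."""
    if not mode_3d:
        # 2D mode: the same unscaled buffer output goes to both displays.
        return line, line
    # 3D mode: split the side-by-side line, then scale each half back
    # up to full line width (2:1 pixel repetition assumed).
    mid = len(line) // 2
    first, second = line[:mid], line[mid:]
    left, right = (second, first) if cross_eyed else (first, second)

    def upscale(half):
        return [p for v in half for p in (v, v)]

    return upscale(left), upscale(right)
```

Since both paths produce full-width lines, the downstream display drivers see identical timing in either mode.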
Soft IPD and Convergence Adjust
In another implementation, any of the 3D methods described above may be adapted to provide soft Inter-Pupillary Distance (IPD) or convergence adjustments.
In particular, it is not uncommon for the available resolution of the physical displays 150 to exceed that of the presented image in the input stream 110. For example, "wide VGA" displays such as the Kopin CyberDisplay® WVGA mentioned above may have up to 864 active columns, but are often used to display content with horizontal resolutions of only 854, 768, 720, or 640 pixels. In these situations, the drive electronics 100 will typically center the active image horizontally and drive the inactive "side" pixel columns to black. However, by varying the size of the resulting left and right black borders, the position of the image can be moved horizontally within the active pixel array.
Because the 3D methods described provide independent signals to the two displays, it is possible to control the border sizes on the left and right displays independently. For example, moving the left image to the right and the right image to the left would change the stereoscopic convergence and make the image appear closer to the viewer. In this way, the convergence of the stereoscopic images may be adjusted for optimal viewing via electronic controls, without requiring mechanical adjustments to the display or lens position. Such adjustments may be desirable to accommodate variations in the video source material or in the viewer's Inter-Pupillary Distance (IPD). This can then affect the 3D depth perceived by the viewer.
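The border arithmetic described above can be sketched as follows, using the 864-column display and 854-column image from the earlier example. The function name and sign convention are illustrative, not from this document:

```python
def borders(display_cols, image_cols, shift):
    """Left/right black border widths for one eye's display.

    shift > 0 moves the image right, shift < 0 moves it left, by
    |shift| pixel columns relative to the centered position.
    """
    slack = display_cols - image_cols
    left = slack // 2 + shift
    if not 0 <= left <= slack:
        raise ValueError("shift exceeds available border columns")
    return left, slack - left

# Converge by 2 columns: move the left eye's image right and the
# right eye's image left.
left_eye = borders(864, 854, +2)   # (7, 3)
right_eye = borders(864, 854, -2)  # (3, 7)
```

The adjustment range is bounded by the slack between display and image widths, here 10 columns per eye.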
Such an embodiment is illustrated in the accompanying drawings.
It should be understood that the IPD adjustment need not depend on a particular scale factor, and indeed can be applied to other 3D video eyewear systems, such as systems that do not apply scale factors at all. The horizontal shift may be performed before or after the scalers 130-L, 130-R, such as by changing the address from which the digital scalers 130-L, 130-R read from the line buffer memory 120.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/261,104, filed on Nov. 13, 2009, the entire teachings of which are incorporated herein by reference.
References Cited

U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
5075776 | Cheung | Dec 1991 | A |
5416510 | Lipton et al. | May 1995 | A |
5523886 | Johnson-Williams et al. | Jun 1996 | A |
5612708 | Ansley | Mar 1997 | A |
5677728 | Schoolman | Oct 1997 | A |
5751341 | Chaleki et al. | May 1998 | A |
5808591 | Mantani | Sep 1998 | A |
6268880 | Uomori et al. | Jul 2001 | B1 |
6437767 | Cairns et al. | Aug 2002 | B1 |
6518939 | Kikuchi | Feb 2003 | B1 |
6573819 | Oshima | Jun 2003 | B1 |
6636185 | Spitzer et al. | Oct 2003 | B1 |
6853935 | Satoh et al. | Feb 2005 | B2 |
6977629 | Weitbruch et al. | Dec 2005 | B2 |
7844001 | Routhier et al. | Nov 2010 | B2 |
8253760 | Sako et al. | Aug 2012 | B2 |
8754931 | Gassel et al. | Jun 2014 | B2 |
20010043266 | Robinson et al. | Nov 2001 | A1 |
20030080964 | Prache | May 2003 | A1 |
20050117637 | Routhier | Jun 2005 | A1 |
20060241792 | Pretlove et al. | Oct 2006 | A1 |
20060256140 | Turner | Nov 2006 | A1 |
20070120763 | De Paepe | May 2007 | A1 |
20070247477 | Lowry | Oct 2007 | A1 |
20070296713 | Kim et al. | Dec 2007 | A1 |
20080062143 | Shahoian et al. | Mar 2008 | A1 |
20080122736 | Ronzani et al. | May 2008 | A1 |
20080152072 | Herrmann | Jun 2008 | A1 |
20080259096 | Huston | Oct 2008 | A1 |
20090002482 | Cho et al. | Jan 2009 | A1 |
20090185030 | McDowall et al. | Jul 2009 | A1 |
20090219382 | Routhier et al. | Sep 2009 | A1 |
20090251531 | Marshall et al. | Oct 2009 | A1 |
20100157029 | MacNaughton | Jun 2010 | A1 |
20100271462 | Gutierrez Novelo | Oct 2010 | A1 |
20100309291 | Martinez et al. | Dec 2010 | A1 |
20110012845 | Rothkopf et al. | Jan 2011 | A1 |
20110169928 | Gassel et al. | Jul 2011 | A1 |
20110181707 | Herrmann | Jul 2011 | A1 |
20110187821 | Routhier et al. | Aug 2011 | A1 |
20110187840 | Chao et al. | Aug 2011 | A1 |
20110292169 | Jain | Dec 2011 | A1 |
20110292170 | Jain | Dec 2011 | A1 |
20120062710 | Lee et al. | Mar 2012 | A1 |
20120176411 | Huston | Jul 2012 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
101563722 | Oct 2009 | CN |
Other Publications

Entry |
---|
International Preliminary Report on Patentability and Written Opinion of PCT/US2010/056671 dated May 24, 2012. |
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority for Int'l Application No. PCT/US2010/056671; Date Mailed: Jan. 12, 2011. |
Prior Publication Data

Number | Date | Country |
---|---|---|
20110181707 A1 | Jul 2011 | US |
Related U.S. Application Data (Provisional)

Number | Date | Country |
---|---|---|
61261104 | Nov 2009 | US |