The present invention relates to apparatus and methods for glassless 3D display with an “unlimited” number of TV viewers and with flexibility in eye positions, which can be applied to any 3D display.
In current glassless 3D display systems, the technologies are mainly based on parallax barrier plates, lenticular plates, grating plates, micro-lens array plates (such as the IP lens array, i.e., integral photography), Fresnel lenses, spherical mirrors, holographic screens, etc., and the number of viewers is limited: for example, at FPD China 2011 a 3D TV with 9 viewers was shown, and Toshiba developed a prototype with only 3 viewers. Figure “prior arts” shows the glassless 3D display principle based on the parallax barrier method (with 6 viewers as an example). The TV screen is divided into 6 sub-screens (2 rows and 3 columns); every sub-screen displays the part of the image in its own area simultaneously, but the display of each sub-screen is rotated to each of the 6 viewers, respectively. As can be seen, the positions of the viewers' eyes are fixed, without flexibility.
The present invention relates to apparatus and methods for glassless 3D display with an “unlimited” number of TV viewers and with flexibility in eye positions. The fundamental elements of this invention are the introduction of the “eye space” and the circuit unit for shutter pupil control on the shutter screen. The methods can be applied to any 3D display, such as TVs, monitors, smart devices (iPhone, iPad, . . . ), movie theaters, games, etc. One of the methods is called the “mimic scene method”, based on the image depth map. The second one is called the “dynamic pinhole shutter method”.
The invention contains the following Figures:
FIG “Prior arts”: shows a typical prior glassless 3D display principle based on the parallax barrier method, with 6 viewers as an example.
In this invention, we propose two glassless 3D display methods. One is called the “mimic scene method”, based on the image depth map. The second is called the “dynamic pinhole shutter method”. Both are based on the concept of “eye space” and will be described in detail below.
<Mimic Scene Method>
As shown in
Solving equations (1) to (3), we get
For easier understanding, let us consider an imagined “Picture Space” (PS) 92, as shown in
Once display process unit 90 (in
Regarding the lighting screen 10 and shutters 20 and 30, as shown in
In cases (1) and (2), for each “object” on the depth profile 40 (i.e., each virtual pixel in PS), strips are scanned one by one from left to right, or pixels are scanned one by one from left to right and from top to bottom. As the scan passes over them, all the pixels or strips on the whole lighting screen 10 light with the same color and the same brightness as those of the “object” (uniform color and brightness over the whole screen, but only as the scan passes over, not lighting simultaneously).
In cases (3) and (4), i.e., the case of sub-screens, for each “object” on the depth profile 40, do the same as above: strips are scanned one by one from left to right, or pixels are scanned one by one from left to right and from top to bottom as the scan passes over, but all the pixels or strips on each sub-screen (not the whole screen) of lighting screen 10 light with the same color and brightness as those of the “object” (uniform color and brightness over each sub-screen). There are two choices: (a) in all sub-screens, strips or pixels are scanned simultaneously; because the corresponding strips or pixels (when switched on) in two neighboring sub-screens have a constant separation, the separation (i.e., the size of the sub-screen) must be large enough, so as to avoid the wrong rays as shown by the long dash lines dir1 and dir2 in
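Choice (a) can be illustrated with a small sketch: at every scan step, the strip switched on in each sub-screen sits at the same offset, so the lit strips in neighboring sub-screens keep a constant separation equal to the sub-screen width. The counts below are illustrative numbers, not values from the invention.

```python
# Hypothetical sketch of case (3), choice (a): all sub-screens scan
# their strips simultaneously, so the strips that are on at any instant
# in two neighboring sub-screens keep a constant separation equal to
# the sub-screen width (in strips).
SUBSCREENS = 4
STRIPS_PER_SUB = 8          # sub-screen width in strips (illustrative)

def strips_on_at(step):
    """Global indices of the strips lit at a given scan step."""
    return [k * STRIPS_PER_SUB + step for k in range(SUBSCREENS)]

print(strips_on_at(0))   # → [0, 8, 16, 24]
print(strips_on_at(3))   # → [3, 11, 19, 27]
```

The constant separation (here 8 strips) is what makes the sub-screen size the design parameter that must be large enough to avoid the wrong rays.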
PS can also be divided into multiple sub-screens. During real-time playing of the 3D display, each of the “virtual pixels” on PS is scanned row by row (horizontal scan) or column by column (vertical scan), and the scan runs over the whole screen of PS or over each independent sub-screen.
In the following example, we assume row-by-row scan. “A virtual pixel on PS is switched on” means triggering on simultaneously all 3 pixels in the pixel string of the first ray in the first row (the 3 pixels are on lighting screen 10 and on shutters 20 and 30, respectively), then triggering on all 3 pixels in the string of the 2nd ray in the first row, and so on to the last ray in the first row, . . . then to the 2nd row and 3rd row, . . . , and finally to the last row; then continue with the next pixel on PS, and so on . . .
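The row-by-row scan just described can be sketched as follows; the data structures (`Pixel`, `rays_for`) and the toy geometry are hypothetical stand-ins for the pixel strings on lighting screen 10 and shutters 20 and 30, not part of the claimed apparatus.

```python
# Hypothetical sketch: switching on a "virtual pixel" on PS means
# triggering, ray by ray, all 3 pixels of each ray's pixel string
# (one pixel on lighting screen 10, one each on shutters 20 and 30).

class Pixel:
    def __init__(self):
        self.lit = 0            # count of trigger events
    def on(self, color, brightness):
        self.lit += 1

def scan_picture_space(rows, rays_for):
    """rows: rows of virtual pixels on PS (dicts with color/brightness);
    rays_for(vp): the pixel strings of vp, each a 3-tuple of Pixels."""
    for row in rows:                    # row-by-row (horizontal) scan
        for vp in row:
            for string in rays_for(vp):    # first ray to last ray
                for px in string:          # all 3 triggered together
                    px.on(vp["color"], vp["brightness"])

# toy usage: one row, 2 virtual pixels, 2 rays (strings of 3) each
pixels = [Pixel() for _ in range(12)]
def rays_for(vp):
    i = vp["base"]
    return [tuple(pixels[i:i + 3]), tuple(pixels[i + 3:i + 6])]
rows = [[{"color": "r", "brightness": 1.0, "base": 0},
         {"color": "g", "brightness": 1.0, "base": 6}]]
scan_picture_space(rows, rays_for)
print([p.lit for p in pixels])   # → [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```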
In summary, either lighting screen 10 (together with shutter screens 20 and 30), or PS, or both of them can be divided into multiple sub-screens (multiple zones). The scan mentioned above can be applied to each zone simultaneously, so as to meet the requirement for high-speed processing, to increase the brightness under otherwise equal conditions, and to avoid the wrong rays mentioned above.
The rays cannot be infinitely dense. A maximum allowed divergence angle can be defined, such as one tenth of a/z for example, which determines Der and the designs for Ws1, Ws2, D1, and D2 as shown in
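As a numeric illustration of this limit, assuming a denotes an aperture and z a viewing distance (their exact definitions are given in the figure, which is not reproduced here), the one-tenth rule yields:

```python
# Illustrative numbers only; the meanings of a and z are assumed, and
# the one-tenth factor follows the example given in the text.
a = 0.005                  # aperture, metres (assumed value)
z = 2.0                    # eye-to-screen distance, metres (assumed value)
theta_max = (a / z) / 10   # maximum allowed divergence angle, radians
print(theta_max)           # → 0.00025
```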
<Dynamic Pinhole Shutter Method>
All
Important statement: different people may use different names for the same items or the same concepts defined by these terminologies in this invention.
As shown in
Image pixels can be manufactured by any technology, such as liquid crystal display (LCD) based, including regular or conventional LCD or TFT-LCD (thin film transistor), or organic light emitting diodes (OLED), or surface-conduction electron-emitter display (SED), or plasma, or field emission display (FED), etc., or any other technology which has not been invented yet.
The front polarizer of the LCD-based screen can be dropped if the valve pixels of shutter screen 300 are polarizing-light based.
On image pixel screen 100, the image pixel scanning procedure can be the same as in conventional 3D display with glasses, or scanning with multiple zones [within each zone, scan the images which belong to this zone and rotate the display in each zone to different viewers, but scan all the zones simultaneously]. During the scanning, when an image pixel is selected, i.e., the pixel is lighting, all color pixels in this image pixel can light simultaneously or light in serial order. Whichever method for lighting the image pixel is used, as shown in
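The bracketed multi-zone option can be sketched as a rotation schedule: each zone shows one viewer's image per frame and advances to the next viewer on the next frame, with all zones scanned at once. The zone and viewer counts are illustrative, and the modular schedule is an assumed realization of “rotating the display in each zone to different viewers”, not the invention's stated circuit.

```python
# Hypothetical rotation schedule for multi-zone scanning: at every
# frame, each zone displays one viewer's image, and the assignment
# rotates so every viewer is eventually served by every zone.
ZONES, VIEWERS = 3, 6       # illustrative counts

def image_shown(zone, frame):
    """Index of the viewer whose image a zone displays on a frame."""
    return (zone + frame) % VIEWERS

print([image_shown(z, 0) for z in range(ZONES)])   # → [0, 1, 2]
print([image_shown(z, 1) for z in range(ZONES)])   # → [1, 2, 3]
```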
In eye space, from bottom to top, the eye-to-screen distance increases, so the aperture size on eye projection plane 400 decreases and the eye density should also increase correspondingly. However, gradually increasing the density or decreasing the aperture size from bottom to top is very hard for the design of the control circuits, so we can use multiple zones (2, or 3, . . . ) with different densities from bottom to top, while within each zone the density and aperture size are uniform. Therefore, for each zone we need one group of address control matrixes (circuits), and so we need n groups of address matrixes for n zones (n=1, 2, 3, . . . ). We also need a total of 3˜4 address matrixes in each group: 2 row address matrixes and 1 or 2 column address matrixes. The former are built into address drivers 600, or are calculated by address drivers 600 or by the process units mentioned above, one for all right eyes and one for all left eyes; the latter are built into address drivers 700, or are calculated by address drivers 700 or by the process units mentioned above, for either (or both) of all right eyes and all left eyes. Usually, we only have one group of address matrixes (only one zone). However, we can build or design 2 or 3 or more groups of address matrixes for 2 or 3 or more zones in eye space (with a little overlap between neighboring zones), so as to increase the total eye depth (the distance from the nearest eye to the most distant eye relative to the screen) in eye space, or to increase the eye-motion tolerance [because, for a given total eye depth in eye space, the eye depth in each of zone 1, zone 2, . . . (corresponding to group 1, group 2, . . . , respectively) is reduced].
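A minimal sketch of the zoning idea, assuming the zones are defined by height ranges on the eye projection plane with a little overlap between neighbors; the boundary values, aperture sizes, and densities below are made-up illustrative numbers:

```python
# Hypothetical zoning of the eye space: each zone has uniform aperture
# size and eye density, with density increasing (and aperture size
# decreasing) from bottom to top, and small overlaps between zones.
ZONES = [
    {"y_min": 0.00, "y_max": 0.35, "aperture": 1.0, "density": 1.0},  # bottom
    {"y_min": 0.33, "y_max": 0.68, "aperture": 0.7, "density": 1.4},  # middle
    {"y_min": 0.66, "y_max": 1.00, "aperture": 0.5, "density": 2.0},  # top
]

def zone_for(y):
    """Index of the zone (group of address matrixes) serving an eye
    projected at normalized height y on eye projection plane 400."""
    for i, z in enumerate(ZONES):
        if z["y_min"] <= y <= z["y_max"]:
            return i                 # first match wins in the overlap
    raise ValueError("y outside eye space")

print(zone_for(0.1), zone_for(0.5), zone_for(0.9))   # → 0 1 2
```

Each returned index would select one group of address matrixes for drivers 600 and 700, matching the n-groups-for-n-zones scheme described above.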
As we know, in
The eye motion can be easily detected by using optical image correlation (via FFT) of two images taken by the eye-tracking camera at two neighboring moments in time. So, the locations and sizes of the shutter pupils on the shutter screen for the right and left eyes can be easily determined; further, the row addresses and column addresses for the shutter pixels (valves) in each of these shutter pupils can be calculated and dynamically updated in the address buffers, which provide steady address data streams for row address matrix 600 and column address matrix 700.
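The FFT-based correlation mentioned here is commonly realized as phase correlation; a minimal NumPy sketch (with a synthetic circular shift standing in for the two camera frames) could look like:

```python
# Sketch of eye-motion detection by FFT-based phase correlation of two
# frames from the eye-tracking camera. A synthetic shift of a random
# image stands in for real camera frames.
import numpy as np

def detect_shift(frame_a, frame_b):
    """Return (dy, dx) such that frame_b is frame_a shifted by (dy, dx)."""
    Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12         # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real        # peak marks the translation
    dy, dx = (int(v) for v in np.unravel_index(np.argmax(corr), corr.shape))
    h, w = frame_a.shape                   # wrap to signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, -5), axis=(0, 1))       # frame moved by (+3, -5) pixels
print(detect_shift(a, b))                  # → (3, -5)
```

The detected (dy, dx) would then be used to move the shutter pupils and recompute the row and column addresses fed to matrixes 600 and 700.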
To increase the brightness, the screen may be divided into multiple zones. The image pixel scan happens in each zone simultaneously, as shown in
(Important notice: Various changes, modifications, alterations, decorations, and extensions in the structures, embodiments, apparatuses, algorithms, procedures of operation and methods of data processing of this invention will be apparent to those skilled in this art without departing from the scope and spirit of the invention. Although the invention has been described in connection with specific preferred embodiments, apparatuses, numbers, it should be understood that the invention as claimed should not be unduly limited to such specific embodiments and apparatuses and the appended claims are therefore intended to cover all such changes and modifications as fall within the true spirit and scope of the invention.)
This application claims priority from U.S. provisional application No. 61/744,786, filed on Oct. 4, 2012 with a postal mailing date of Oct. 1, 2012, and titled “Method of Glassless 3D Display”.