Method of glassless 3D display

Information

  • Patent Application Publication Number
    20150054927
  • Date Filed
    August 23, 2013
  • Date Published
    February 26, 2015
Abstract
The present invention relates to apparatus and methods for glassless 3D display with an “unlimited” number of TV viewers and with flexibility in eye positions. The fundamental elements of this invention are the introduction of an “eye space” and a circuit unit for shutter pupil control on the shutter screen. The methods apply to any 3D display, such as TVs, monitors, smart devices (iPhone, iPad, . . . ), movie theaters, games, etc. The first method is called the “mimic scene method” and is based on the image depth map. The second is called the “dynamic pinhole shutter method”.
Description
FIELD OF INVENTION

The present invention relates to apparatus and methods for glassless 3D display with an “unlimited” number of TV viewers and with flexibility in eye positions, applicable to any 3D display.


BACKGROUND OF INVENTION

Current glassless 3D display systems are mainly based on parallax barrier plates, lenticular plates, grating plates, micro-lens array plates (such as IP lens arrays, i.e. integral photography), Fresnel lenses, spherical mirrors, holographic screens, etc. In these systems the number of viewers is limited: for example, at FPD China 2011 a 3D TV supporting 9 viewers was shown, and Toshiba developed a prototype supporting only 3 viewers. The figure “prior arts” shows the glassless 3D display principle based on the parallax barrier method (with 6 viewers as an example). The TV screen is divided into 6 sub-screens (2 rows and 3 columns); every sub-screen displays the part of the image in its own area simultaneously, but the display of each sub-screen is rotated to each of the 6 viewers, respectively. As one can see, the positions of the viewers' eyes are fixed, without flexibility.


RELATED U.S. PATENT DOCUMENTS

Publication/Application Number  Date  Inventors

4,180,313  December 1979  Inuiya
4,829,365  May 1989  Eichenlaub
4,987,487  January 1991  Ichinose et al.
5,270,818  December 1993  Ottenstein
5,606,455  February 1997  Eichenlaub
5,774,262  June 1998  Schwerdtner et al.
5,833,507  November 1998  Woodgate et al.
5,850,269  December 1998  Kim
5,852,512  December 1998  Chikazawa
5,867,322  February 1999  Morton
5,896,225  April 1999  Chikazawa
5,929,859  July 1999  Meijers
5,936,774  August 1999  Street
5,991,073  November 1999  Woodgate et al.
6,094,216  July 2000  Taniguchi et al.
6,097,554  August 2000  Watkins
6,302,541  October 2001  Grossmann
6,337,675  January 2002  Toffolo et al.
6,445,406  September 2002  Taniguchi et al.
6,573,928  June 2003  Jones et al.
6,700,701  March 2004  Son et al.
6,791,570  September 2004  Schwerdtner et al.
2004/0174604  September 2004  Brown
2004/0218245  November 2004  Kean et al.
2004/0252374  December 2004  Saishu et al.
2004/0257531  December 2004  Hattori et al.
2005/0007453  January 2005  Ahiska
2005/0062869  March 2005  Zimmermann et al.
2005/0111100  May 2005  Mather et al.
2005/0146662  July 2005  Inoue et al.
2005/0237471  October 2005  Kawamura
2006/0008175  January 2006  Tanaka et al.
2006/0082586  April 2006  Noorbakhsh et al.
7,098,824  August 2006  Yang et al.
2006/0181610  August 2006  Carlsson et al.
7,154,653  December 2006  Kean et al.
2007/0058127  March 2007  Mather et al.
2007/0182885  August 2007  Egi et al.
7,492,348  February 2009  Matsuda
2009/0040426  February 2009  Mather et al.
2009/0102950  April 2009  Ahiska
2009/0109126  April 2009  Stevenson et al.
7,551,353  June 2009  Kim et al.
7,623,090  November 2009  Ijzerman et al.
2010/0014313  January 2010  Tillin et al.
2010/0073332  March 2010  Gettemy et al.
8,144,079  March 2012  Mather et al.
8,154,686  April 2012  Mather et al.
8,154,800  April 2012  Kean et al.
8,427,538  April 2013  Ahiska
8,446,461  May 2013  Jian
8,456,514  June 2013  Leister
8,482,508  July 2013  Wong et al.
8,503,079  August 2013  Lin


FOREIGN PATENT DOCUMENTS

Number  Date  Country

60-224383  November 1985  JP
63-105583  May 1988  JP
02-185175  July 1990  JP
06-186526  July 1994  JP
2,278,223  November 1994  GB
08-331600  December 1996  JP
09-021979  January 1997  JP
0,752,610  January 1997  EP
9-9297  January 1997  JP
2,305,048  March 1997  GB
2,309,609  July 1997  GB
2,311,905  October 1997  GB
0,822,441  February 1998  EP
2,315,902  February 1998  GB
2,317,734  April 1998  GB
2,320,156  June 1998  GB
10-221644  August 1998  JP
0,953,962  November 1999  EP
2,336,963  November 1999  GB
2,341,033  March 2000  GB
2000-078617  March 2000  JP
1,120,746  August 2001  EP
02/062056  August 2002  WO
03/005338  January 2003  WO
2003-029205  January 2003  JP
10-0400221  September 2003  KR
1,341,383  September 2003  EP
2,389,730  December 2003  GB
2,396,070  June 2004  GB
2004-206050  July 2004  JP
2004-239932  August 2004  JP
2004-287440  October 2004  JP
2004-294862  October 2004  JP
2004/088996  October 2004  WO
2,403,367  December 2004  GB
2,403,637  January 2005  GB
2,403,863  January 2005  GB
2,403,864  January 2005  GB
2005-010303  January 2005  JP

SUMMARY OF THE INVENTION

The present invention relates to apparatus and methods for glassless 3D display with an “unlimited” number of TV viewers and with flexibility in eye positions. The fundamental elements of this invention are the introduction of an “eye space” and a circuit unit for shutter pupil control on the shutter screen. The methods apply to any 3D display, such as TVs, monitors, smart devices (iPhone, iPad, . . . ), movie theaters, games, etc. The first method is called the “mimic scene method” and is based on the image depth map. The second is called the “dynamic pinhole shutter method”.





BRIEF DESCRIPTION OF THE FIGURES

The invention is illustrated by the following figures:


FIG. “Prior arts” shows a typical prior glassless 3D display principle based on the parallax barrier method, with 6 viewers as an example.


FIG. 1 shows display based on the “mimic scene method” for the case where the scene point is inside (behind) the screen.


FIG. 2 shows display based on the “mimic scene method” for the case where the scene point is out of (in front of) the screen.


FIG. 3 shows an example configuration of the glassless 3D display process and control of this invention, for the “mimic scene method”.


FIG. 4 shows an example of eye space and its relation to the display and shutter screens.


FIG. 5 shows two examples of eye space and eye tolerance ranges.


FIG. 6 shows another two examples of eye space and eye tolerance ranges.


FIG. 7 shows eye tolerance ranges at different locations.


FIG. 8 shows eye tolerance ranges at different view depths.


FIG. 9 shows how to determine the pupil locations and sizes on the shutter screen for all right eyes.


FIG. 10 shows how to determine the pupil locations and sizes on the shutter screen for all left eyes.


FIG. 11 shows a zoomed-in view of image pixels on the image pixel plane, and of shutter pupils and shutter pixels on the shutter plane.


FIG. 12 shows a configuration of the glassless 3D display process and control of this invention, for both methods.


FIG. 13 shows an example configuration of the glassless 3D display process and control of this invention, for the “dynamic pinhole shutter method”, with built-in designs for switch units.


FIG. 14 shows another example configuration of the glassless 3D display process and control of this invention, for the “dynamic pinhole shutter method”, with built-in designs for switch units.


FIG. 15 shows a third example configuration of the glassless 3D display process and control of this invention, for the “dynamic pinhole shutter method”, with a dynamic control design for switch units.





DETAILED DESCRIPTION OF THE INVENTION

In this invention, we propose two glassless 3D display methods. The first is called the “mimic scene method” and is based on the image depth map. The second is called the “dynamic pinhole shutter method”. Both are based on the concept of “eye space” and will be described in detail below.


<Mimic Scene Method>


As shown in FIGS. 1 and 2, the mimic scene method uses a depth map of the scene. FIG. 1 shows the case where the scene point is inside (behind) the screen; FIG. 2 shows the case where the scene point is out of (in front of) the screen. The depth map can be (1) obtained by 2D-to-3D conversion technology, or (2) reconstructed by FFT correlation analysis of the two stereo pictures (for left and right eyes), as shown in FIG. 1, or (3) obtained by a depth map camera (in this case, the TV station sends one 2D TV picture with 3 colors plus one depth map with one color, saving the bandwidth of two color channels; for 3D TV we generally need bandwidth for 6 color channels in total for the two stereo pictures). For the second case, the first step is the matching process, i.e. finding the positions of the corresponding points x′1 and x′2 on the two stereo pictures (details are described below); then, using the camera setup parameters (eye or camera (51, 52) separation a, eye or camera (51, 52) looking directions α1=α2, and camera focal length f), the image depth z=z0+dk is constructed using the following formulas










$$x'_1 \;=\; f\cdot\frac{\cos\alpha_1 \;-\; \dfrac{z}{x + a/2}\,\sin\alpha_1}{\dfrac{z}{x + a/2}\,\cos\alpha_1 \;+\; \sin\alpha_1} \qquad (1)$$

$$x'_2 \;=\; f\cdot\frac{\cos\alpha_2 \;-\; \dfrac{z}{x - a/2}\,\sin\alpha_2}{\dfrac{z}{x - a/2}\,\cos\alpha_2 \;+\; \sin\alpha_2} \qquad (2)$$

$$y'_1 \;=\; y'_2 \;=\; -\,y\,\frac{f}{z} \qquad (3)$$

Solving equations (1) to (3), we get









$$z \;=\; a\cdot\left[\frac{x'_1\cos\alpha_1 + f\sin\alpha_1}{-\,x'_1\sin\alpha_1 + f\cos\alpha_1} \;-\; \frac{x'_2\cos\alpha_2 - f\sin\alpha_2}{x'_2\sin\alpha_2 + f\cos\alpha_2}\right]^{-1} \qquad (4)$$

$$x \;=\; z\cdot\frac{x'_2\cos\alpha_2 - f\sin\alpha_2}{x'_2\sin\alpha_2 + f\cos\alpha_2} \;+\; \frac{a}{2} \qquad (5)$$

$$y \;=\; -\,y'_2\,\frac{z}{f} \qquad (6)$$


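As a minimal numerical sketch of equations (4) to (6) (the function and variable names are illustrative only, not part of the invention; matched points and camera parameters are assumed already known):

```python
import math

def depth_from_disparity(x1p, x2p, y2p, a, f, alpha1, alpha2):
    """Recover the scene point (x, y, z) from matched image
    coordinates x'_1, x'_2, y'_2 via equations (4)-(6).

    a      -- camera (eye) separation
    f      -- camera focal length
    alpha1 -- looking direction of camera 1 (radians)
    alpha2 -- looking direction of camera 2 (radians)
    """
    term1 = (x1p * math.cos(alpha1) + f * math.sin(alpha1)) / \
            (-x1p * math.sin(alpha1) + f * math.cos(alpha1))
    term2 = (x2p * math.cos(alpha2) - f * math.sin(alpha2)) / \
            (x2p * math.sin(alpha2) + f * math.cos(alpha2))
    z = a / (term1 - term2)          # equation (4)
    x = z * term2 + a / 2.0          # equation (5)
    y = -y2p * z / f                 # equation (6)
    return x, y, z
```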
For easier understanding, consider an imagined “Picture Space” (PS) 92, as shown in FIG. 3, which contains a pixel matrix of the same size (N1×N2) as either of the two stereo pictures, with a one-to-one location correspondence between a virtual pixel on PS and a pixel on either of the two stereo pictures. For each given virtual pixel at location (i, j) on PS [i.e. at the i-th row and the j-th column, so x=j and y=i], to find the positions of the corresponding points (x′1, y′1) and (x′2, y′2) on the two stereo pictures, we take an (n1×n2) sub-matrix of pixels around the pixel (i, j) on each of the two stereo pictures [i.e. centered at location (x, y); n1 and n2 must be large enough, but must satisfy n1 << N1 and n2 << N2] and apply FFT correlation analysis to these two sub-matrices to obtain the image shift (dX, dY) between them. Then, as a first-order approximation, x′1=x+dX/2, x′2=x−dX/2, y′1=y+dY/2 and y′2=y−dY/2, and z can be calculated from equation (4) above. If necessary, we can iterate to correct (x′1, y′1), (x′2, y′2) and z: solve x′2 and y′2 from equations (5) and (6) for the given z, mix the old (x′2, y′2) with the new (x′2, y′2) with proper weights to obtain a corrected (x′2, y′2), which gives a corrected (x′1, y′1) via x′1=x′2+dX and y′1=y′2+dY; finally, put the corrected (x′1, y′1) and (x′2, y′2) back into equation (4) to get a corrected z, and so on, until the difference between new and old values is small enough. A special treatment [using multiple subdivided (such as ¼, 1/16, 1/64, . . . ) sub-pictures for the FFT correlation analysis] is needed when there is a large area of uniform background in the two stereo pictures.
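The FFT correlation step can be realized with standard phase correlation; a minimal sketch follows, assuming numpy is available (the function name and peak handling are illustrative, not prescribed by the invention):

```python
import numpy as np

def fft_shift_estimate(patch1, patch2):
    """Estimate the (dY, dX) shift between two equally sized image
    sub-matrices via FFT-based phase correlation."""
    F1 = np.fft.fft2(patch1)
    F2 = np.fft.fft2(patch2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative shifts.
    dy = peak[0] if peak[0] <= patch1.shape[0] // 2 else peak[0] - patch1.shape[0]
    dx = peak[1] if peak[1] <= patch1.shape[1] // 2 else peak[1] - patch1.shape[1]
    return dy, dx
```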


Once display process unit 90 (in FIG. 3) receives the signals of the left-eye and right-eye pictures (or of one picture with a depth map), the process unit uses any method such as those mentioned above to get the depth profile 40 (more complex technology is needed to deal with the missing-part problem). Display process unit 90 then calculates the pixel positions in each pixel string (each ray to each eye in eye space corresponds to a pixel string; a pixel string for a ray to a given eye consists of 3 pixels aligned along the ray: one pixel (the lighting pixel) on lighting screen 10, a 2nd pixel (a shutter pixel) on inner shutter 20, and a 3rd pixel (a shutter pixel) on outer shutter 30), prepares the cell-selection addresses for these pixels to switch on, and finally organizes all this information into display streams and sends them to buffers 60, 70, 80 for temporary storage (ready to release for display once the trigger signal is received). There are 3 display streams, i.e. one for lighting screen 10, one for inner direction shutter 20 and one for outer direction shutter 30, respectively. The pixel (with color and brightness) on lighting screen 10 and the pixels on direction shutters 20 and 30 are switched on and off synchronously for a given direction or given ray. A virtual pixel on PS is distinguished from a pixel on display lighting screen 10 and shutter screens 20 and 30. Each virtual pixel on PS has a one-to-one correspondence with an “object” on the depth profile 40, with the same color and brightness as this “object”, which “radiates” rays in all directions, i.e. to all eyes in eye space [the eye space for the dynamic pinhole shutter method is shown in FIG. 4; the eye space for the mimic scene method is similar to that in FIG. 4, but not exactly the same: in the mimic scene method we do not need to distinguish left and right eyes in eye space, and the eye density can be much higher than in the dynamic pinhole shutter method]. So each virtual pixel on PS carries the information (dk, R, B, G, I), where dk is the scene depth, (R, B, G) are the color values and I (set by the TV setup) is the brightness value for this virtual pixel; the position, color and brightness of the pixel on lighting screen 10 are determined by the signal of the “virtual pixel” on PS. The density of virtual pixels in “Picture Space” is the same as that for a regular 2D display or a conventional 3D display. However, the pixel densities on lighting screen 10 and on shutters 20 and 30 are determined by the eye density in eye space and should be much higher than that on PS.
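To make the pixel-string idea concrete, here is a minimal geometric sketch, modeling each screen as a plane at a given depth and the ray as running from a depth-profile point to an eye position (the names, coordinates and plane depths are illustrative assumptions, not values from the invention):

```python
import numpy as np

def pixel_string(scene_point, eye, screen_depths):
    """Return the intersection points of the ray scene_point -> eye
    with each screen plane (lighting screen 10, inner shutter 20,
    outer shutter 30), i.e. the 3 aligned pixels of one pixel string.

    scene_point   -- (x, y, z) of the "object" on depth profile 40
    eye           -- (x, y, z) of one eye in eye space
    screen_depths -- z-coordinates of screens 10, 20 and 30
    """
    p = np.asarray(scene_point, dtype=float)
    e = np.asarray(eye, dtype=float)
    string = []
    for zs in screen_depths:
        t = (zs - p[2]) / (e[2] - p[2])   # ray parameter at plane z = zs
        string.append(p + t * (e - p))    # intersection point on that plane
    return string

# Example: object 0.3 m behind the screen stack, eye 3 m in front.
print(pixel_string((0.1, 0.0, -0.3), (0.0, 0.0, 3.0), [0.00, 0.01, 0.02]))
```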


Regarding the lighting screen 10 and shutters 20 and 30, as shown in FIGS. 1 and 2, each can consist of (1) a whole matrix filled with pixels, or (2) a whole matrix filled with vertical lines (or strips), or (3) segmented matrixes filled with pixels, or (4) segmented matrixes filled with vertical strips; the segments can be considered as sub-screens.


In cases (1) and (2), for each “object” on the depth profile 40 (i.e. each virtual pixel in PS), strips are scanned one by one from left to right, or pixels are scanned one by one from left to right and from top to bottom. As the scan passes over them, all the pixels or strips on the whole lighting screen 10 light with the same color and brightness as that of the “object” (uniform color and brightness over the whole screen, but only as the scan passes, not lighting simultaneously).


In cases (3) and (4), i.e. the case of sub-screens, the procedure for each “object” on the depth profile 40 is the same as above, i.e. strips are scanned one by one from left to right, or pixels are scanned one by one from left to right and from top to bottom as the scan passes over them, but all the pixels or strips on each sub-screen (not the whole screen) of lighting screen 10 light with the same color and brightness as that of the “object” (uniform color and brightness over each sub-screen). There are two choices: (a) in all sub-screens, strips or pixels are scanned simultaneously; because the corresponding strips or pixels (when switched on) in two neighboring sub-screens have a constant separation, the separation (i.e. the size of a sub-screen) must be large enough to avoid the wrong rays shown by the long dashed lines dir1 and dir2 in FIG. 1; (b) no scan within each sub-screen, but instead all strips or pixels in the sub-screen are switched on or off simultaneously, and any two neighboring sub-screens are never lit at the same time, lighting on and off alternately to avoid the wrong rays mentioned above.


PS can also be divided into multiple sub-screens. During real-time playing of the 3D display, each of the “virtual pixels” on PS is scanned row by row (horizontal scan) or column by column (vertical scan), either over the whole screen of PS or within each independent sub-screen.


In the following example, we assume a row-by-row scan. “A virtual pixel on PS is switched on” means triggering on simultaneously all 3 pixels in the pixel string of the first ray in the first row (the 3 pixels being on lighting screen 10 and on shutters 20 and 30, respectively), then triggering on all 3 pixels in the string of the 2nd ray in the first row, and so on to the last ray in the first row, . . . then the 2nd row, the 3rd row, . . . , and finally the last row; the scan then continues with the next pixel on PS, and so on, as sketched below.
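A compact sketch of this nested scan order (illustrative only; `trigger_string` stands in for the hardware drivers and buffers of FIG. 3):

```python
def scan_virtual_pixels(picture_space, eye_rows, trigger_string):
    """Row-by-row scan: for each virtual pixel on PS, fire the pixel
    string of every ray (one ray per eye), eye-row by eye-row."""
    for virtual_pixel in picture_space:        # scanned row by row on PS
        for row in eye_rows:                   # rows of eyes in eye space
            for eye in row:                    # one ray per eye in this row
                # Switch on, simultaneously, the 3 pixels of the pixel
                # string on screens 10, 20 and 30 for this ray.
                trigger_string(virtual_pixel, eye)
```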


In summary, either lighting screen 10 (together with shutter screens 20 and 30), or PS, or both can be divided into multiple sub-screens (multiple zones), and the scan described above can be applied to each zone simultaneously, so as to meet the requirement for high-speed processing, to increase the brightness under otherwise equal conditions, and to avoid the wrong rays mentioned above.


The rays cannot be infinitely dense. A maximum allowed divergence angle can be defined, such as one tenth of a/z for example, which determines Der and the designs for Ws1, Ws2, D1, and D2 as shown in FIG. 1.
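As a rough worked example (the numbers are illustrative assumptions, not design values from the invention): for an eye separation $a \approx 65\ \mathrm{mm}$ and a viewing distance $z = 3\ \mathrm{m}$,

$$\theta_{\max} \;\approx\; \frac{1}{10}\cdot\frac{a}{z} \;=\; \frac{0.065}{10 \times 3} \;\approx\; 2.2\ \mathrm{mrad},$$

so each ray would have to stay within a couple of milliradians of divergence if neighboring eyes are not to see it.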


<Dynamic Pinhole Shutter Method>


All of FIGS. 4 to 15 are used to illustrate the dynamic pinhole shutter method. Some terminology is introduced below for convenience of description:

    • 1. eye space 500—in the dynamic pinhole shutter method, this means all possible (left and right) eye-pair locations; eye pairs can number up to 1360 for a 70″ TV in a family room, and up to 5000 for a theater, so the method can serve an unlimited number of viewers [in the mimic scene method, however, it means all possible eye locations (not eye pairs—no distinction between left and right eyes); eyes can number up to 10,000 for a 70″ TV, and up to 40,000 for a theater; in fact, there is no limit on the number of eyes as long as the control circuit is achievable]
    • 2. eye projection plane 400
    • 3. shutter screen 300, on which the shutter windows (or shutter pupils) correspond to the eye pattern on eye projection plane 400
    • 4. pinhole plane 200
    • 5. image pixel screen 100; an image pixel consists of 3 or 4 color pixels
    • 6. lighting—means that the light valve is switched on and passes back light (if the pixel is a passive light source, such as LED back lighting in an LCD display), or means emitting light if the pixel is an active light source (such as a plasma display)
    • 7. shutter/valve pixel—the minimum physical light valve on shutter screen 300
    • 8. shutter pupil—may contain 1, 2, 3 or more shutter pixels or valve pixels
    • 9. shutter pupil row-address driver 600, which may or may not include a row address matrix
    • 10. shutter pupil column-address driver 700, which may or may not include a column address matrix


Important note: different people may use different names for the same items or the same concepts defined by this terminology in this invention.


As shown in FIG. 4, any pixel on image pixel plane 100 has many light (beam) connections to each eye in eye space 500. A beam from a pixel on image pixel plane 100 (at a given moment) passes through a pinhole (such as 201, 202, etc.; the pinhole is imagined, not materialized) on pinhole plane 200, passes through pupils (such as 311, etc.) on shutter plane 300, then passes through imagined apertures (such as 411, etc.) on eye projection plane 400, and finally reaches the corresponding eye in eye space 500. The apertures on eye projection plane 400 and the shutter pupils on shutter plane 300 have a one-to-one correspondence, both in location and in size.
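The aperture-to-pupil correspondence follows from similar triangles along the beam; a minimal sketch is below, assuming the pixel plane sits at z = 0 and the plane depths, names and values are illustrative, not taken from the figures:

```python
def pupil_from_aperture(pixel_xy, aperture_xy, aperture_size,
                        z_shutter, z_projection):
    """For one lighting image pixel (on plane 100 at z = 0) and one
    aperture on eye projection plane 400 (at z = z_projection), return
    the center and size of the corresponding pupil on shutter plane
    300 (at z = z_shutter), by similar triangles along the beam."""
    s = z_shutter / z_projection        # scaling ratio, 0 < s < 1
    cx = pixel_xy[0] + s * (aperture_xy[0] - pixel_xy[0])
    cy = pixel_xy[1] + s * (aperture_xy[1] - pixel_xy[1])
    return (cx, cy), aperture_size * s  # pupil shrinks toward the pixel
```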



FIGS. 5 and 6 show the allowable sizes or ranges and the visible zones of the apertures on eye projection plane 400. Eyes farther away have smaller but denser apertures. FIG. 7 shows the tolerance range for eye or head movement without changing the viewer's location or posture, and even without jumping to the next zone. As can be seen in FIG. 8, a closer viewer has more tolerance on the outside but less tolerance on the inside, while a farther viewer has more tolerance on the inside but less tolerance on the outside.



FIG. 9 shows how to determine the pupil locations and sizes on shutter screen 300 for all right eyes, and FIG. 10 shows the pupil locations and sizes for all left eyes, such as the row locations 311′, 321′, 331′, etc., and the column locations in each row, such as 311, 321 and 331, etc.



FIG. 11 shows a zoomed-in view of the image pixels on image pixel plane 100, and of the shutter pupils and shutter pixels on shutter plane 300. One image pixel consists of 3 or 4 color pixels, such as R, B, G (or Red, Green, Blue, with one more different color if needed), and R, B, G can be in a horizontal configuration as shown in FIG. 11 (as used in current TVs) or in a vertical configuration (not shown in the figures, but the first option of this invention). One shutter pupil may contain 1, 2, 3 or more shutter pixels or valve pixels. Smaller valve pixels in a pupil (i.e. more valve pixels for a given pupil size) are better for size and location control, as long as they are not so small that the cost becomes too high or the light diffraction effect for red color becomes important. Pupil size and density have a distribution over shutter plane 300.
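As a rough estimate of where red-light diffraction starts to matter (the numbers are illustrative assumptions, not design values): for red light $\lambda \approx 650\ \mathrm{nm}$ passing a valve aperture of width $d = 0.1\ \mathrm{mm}$, the single-slit divergence is of order

$$\theta \;\approx\; \frac{\lambda}{d} \;=\; \frac{650\times 10^{-9}}{1\times 10^{-4}} \;=\; 6.5\ \mathrm{mrad},$$

which is already above the couple-of-milliradian ray divergence discussed for the mimic scene method, so valve pixels much smaller than this would begin to blur the directional control.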


Image pixels can be manufactured with any technology, such as liquid crystal display (LCD) based, including regular or conventional LCD and TFT-LCD (thin-film transistor), or organic light emitting diodes (OLED), or surface-conduction electron-emitter display (SED), or plasma, or field emission display (FED), etc., or any other technology not yet invented.


The front polarizer of an LCD-based screen can be dropped if the valve pixels of shutter screen 300 are based on polarized light.


On image pixel screen 100, the image pixel scanning procedure can be the same as for conventional glasses-based 3D display, or scanning can use multiple zones [within each zone, scan the images belonging to that zone, rotating the display in each zone to different viewers, with all zones scanned simultaneously]. During the scanning, when an image pixel is selected, i.e. the pixel is lighting, all color pixels in this image pixel can light simultaneously or in serial order. Whichever lighting method is used, as shown in FIG. 12, when an image pixel on image pixel screen 100 for the right-eye picture is lighting, all the corresponding pupils (for all right eyes) on shutter plane 300 are switched on simultaneously (i.e. all valve pixels in each of these right-eye pupils pass light), controlled by row address driver 600 and column address driver 700; similarly, when an image pixel on image pixel screen 100 for the left-eye picture is lighting, the corresponding pupils (for all left eyes) on shutter plane 300 are switched on simultaneously (i.e. all valve pixels in each of these left-eye pupils pass light). Without an eye-tracking system, the address (i.e. location) matrix of pupils for all right eyes and the address matrix of pupils for all left eyes are pre-determined in the circuit design (i.e. built into address drivers 600 and 700). With an eye-tracking system (camera and processing units), these address matrixes are dynamically calculated and dynamically controlled by address drivers 600 and 700 according to the data from, or as calculated by the processing units of, the eye-tracking system.
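A sketch of how the pupil addresses for one lit image pixel could be derived from known eye positions (all geometry, pitch values and sample eyes below are invented for illustration; the actual invention builds these addresses into, or computes them for, drivers 600 and 700):

```python
import numpy as np

def pupil_addresses(pixel_xy, eyes_xyz, z_shutter, valve_pitch_mm):
    """For one lit image pixel (on screen 100 at z = 0) and a set of
    eye positions, return the (row, col) valve-pixel addresses of the
    pupil centers on shutter plane 300."""
    p = np.asarray(pixel_xy, dtype=float)
    eyes = np.asarray(eyes_xyz, dtype=float)      # shape (n_eyes, 3)
    s = z_shutter / eyes[:, 2]                    # similar-triangle ratios
    centers = p + s[:, None] * (eyes[:, :2] - p)  # (n_eyes, 2), in mm
    return np.round(centers / valve_pitch_mm).astype(int)

# Right-eye picture lit: open the pupil for every right eye at once.
right_eyes = [(-200.0, 0.0, 3000.0), (150.0, 50.0, 3500.0)]
print(pupil_addresses((10.0, 5.0), right_eyes, 5.0, 0.1))
```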


In eye space, from bottom to top, the eye-to-screen distance increases, so the aperture size on eye projection plane 400 decreases and the eye density should increase correspondingly. However, gradually increasing the density or decreasing the aperture size from bottom to top is very hard for the development of the control circuits, so we can use multiple zones (2, 3, . . . ) with different densities from bottom to top, where within each zone the density and aperture size are uniform. Therefore, for each zone we need one group of address control matrixes (circuits), and so n groups of address matrixes for n zones (n=1, 2, 3, . . . ). We also need 3 to 4 address matrixes in each group: 2 row address matrixes (built into address drivers 600, or calculated by address drivers 600 or by the processing units mentioned above), one for all right eyes and one for all left eyes; and 1 or 2 column address matrixes (built into address drivers 700, or calculated by address drivers 700 or by the processing units mentioned above), for either of (or both of) all right eyes and all left eyes. Usually we have only one group of address matrixes (only one zone). However, we can build or design 2, 3 or more groups of address matrixes for 2, 3 or more zones in eye space (with a little overlap between neighboring zones), so as to increase the total eye depth (the distance from the nearest eye to the most distant eye relative to the screen) in eye space, or to increase the eye-motion tolerance [because, for a given total eye depth in eye space, the eye depth within each of zone 1, zone 2, . . . (corresponding to group 1, group 2, . . . , respectively) is reduced].
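One way to picture this zone arrangement as a configuration (a sketch only; every number below is an invented example, not a design value of the invention):

```python
from dataclasses import dataclass

@dataclass
class EyeZone:
    """One depth zone in eye space with its own uniform aperture size,
    served by its own group of address matrixes."""
    z_near: float       # nearest eye distance of the zone (m)
    z_far: float        # farthest eye distance of the zone (m)
    aperture_mm: float  # uniform aperture size within this zone
    group_id: int       # which group of address matrixes serves it

# Three zones with a little overlap between neighbors:
zones = [EyeZone(2.0, 3.1, 12.0, 1),
         EyeZone(3.0, 4.6, 8.0, 2),
         EyeZone(4.5, 7.0, 5.0, 3)]

def group_for_eye(z):
    """Pick the address-matrix group for an eye at distance z."""
    for zone in zones:
        if zone.z_near <= z <= zone.z_far:
            return zone.group_id
    return None
```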



FIG. 13 shows an example of how built-in address drivers (dipole-based circuits) work. The address driver contains two address matrixes: one is row address matrix 600, and the other is column address matrix 700. An address matrix is a metal mesh, and some nodes at the mesh crossings have a switch unit (such as the dipole shown in FIG. 13, but not limited to this) connected to the two metal mesh lines at the crossing. A node at a metal line crossing with a solid circle means there is a connection through a dipole (the direction is shown in the chart); a node without a solid circle means there is no connection (no dipole) at the crossing. When an image pixel, say at (2, 3) [row 2 and column 3] for example, is lighting, the trigger signal on row 2 (i.e. 702) triggers switch 701 for row line 2 in address matrix 700; then all the column lines in 700 with solid circles crossing row line 2 are switched on, which in turn switches on all the corresponding column wires in shutter screen 300, i.e. these column lines on 300 are connected to a voltage source +V 703. Meanwhile, the trigger signal on column line 3 (i.e. 602) triggers switch 601 for column line 3 in address matrix 600; then all the row lines with solid circles crossing column line 3 are switched on, which in turn switches on all the corresponding row wires in shutter screen 300, i.e. these row lines on 300 are connected to a voltage source −V 603. As a result, all the shutter valve pixels at the crossings of these rows (at voltage −V) and these columns (at voltage +V) in the shutter screen, which together form a rectangular shutter pupil, are opened (pass light). In this way, for any given image pixel on image pixel screen 100, the pupil functions for all right eyes or all left eyes can be achieved. All the switch units (such as dipoles, etc.) mentioned above are pre-built-in by design.
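A toy simulation of this crossbar behavior, assuming invented wire counts and connection patterns for the solid circles (a sketch of the logic in FIG. 13, not the actual circuit):

```python
import numpy as np

N_IMG_ROWS, N_IMG_COLS = 4, 5   # image-pixel trigger lines (made up)
N_SH_ROWS, N_SH_COLS = 6, 8     # shutter-screen wires (made up)

# conn700[i, j] = True: image row line i connects (via a dipole) to
# shutter column wire j; conn600[k, m] likewise for column line k and
# shutter row wire m. These "solid circle" patterns are illustrative.
conn700 = np.zeros((N_IMG_ROWS, N_SH_COLS), dtype=bool)
conn700[2, [1, 2, 6]] = True
conn600 = np.zeros((N_IMG_COLS, N_SH_ROWS), dtype=bool)
conn600[3, [0, 4]] = True

def light_image_pixel(row, col):
    """Simulate lighting the image pixel at (row, col): return the
    boolean map of shutter valve pixels driven open, i.e. crossings
    of a +V column wire and a -V row wire on shutter screen 300."""
    cols_plusV = conn700[row]    # columns pulled to +V via 702/701
    rows_minusV = conn600[col]   # rows pulled to -V via 602/601
    return np.outer(rows_minusV, cols_plusV)

print(light_image_pixel(2, 3).astype(int))  # the rectangular pupil
```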



FIG. 14 shows an example of the case with voltage-controlled switches. As in FIG. 13, a node with a solid circle means there is a connection through a voltage-controlled switch (such as a transistor, but not limited to this); a node without a solid circle means there is no connection at the crossing. In this case, the switch is controlled by the voltage signal at 602 or 702. All the voltage-controlled switches are pre-built-in by design.


FIGS. 13 and 14 thus show two examples of built-in designs for switch units, while FIG. 15 shows an example of a dynamic control design for switch units. Here, every node (with or without a solid circle) has a voltage-controlled switch. The voltage-controlled switch here differs from that in FIG. 14: in FIG. 14 the switch units are controlled by the voltage signals from 602 or 702, but in FIG. 15 they are controlled by both the voltage signals from 602 or 702 and the dynamic control voltages (Vi or Vj). Note that the symbol definition in the figure is changed: a node with a solid circle NOW means the voltage-controlled switch at that node is ready to switch ON, and a node without a solid circle NOW means the switch at that node is NOT ready to switch on. Therefore, whether the switch at a node is ready to switch on can be controlled dynamically by the dynamic control voltages Vi and Vj from the dynamic control lines, according to the data from the eye-tracking camera.
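Continuing the toy crossbar model from FIG. 13 above, the FIG. 15 behavior amounts to AND-ing each trigger line with a dynamically updated ready mask (a sketch; in practice the masks would be set through the control voltages Vi, Vj by the eye-tracking processing):

```python
import numpy as np

def light_image_pixel_dynamic(triggered_rows, triggered_cols,
                              ready_rows, ready_cols):
    """FIG. 15 style: a shutter valve opens only where a triggered
    AND ready row wire crosses a triggered AND ready column wire.
    All four arguments are boolean arrays over the shutter wires."""
    rows_minusV = triggered_rows & ready_rows
    cols_plusV = triggered_cols & ready_cols
    return np.outer(rows_minusV, cols_plusV)
```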


Eye motion can easily be detected using optical image correlation (via FFT, as in the matching process described above) between two images taken by the eye-tracking camera at two neighboring moments in time. The locations and sizes of the shutter pupils on the shutter screen for the right and left eyes can then easily be determined, and further, the row and column addresses for the shutter pixels (valves) in each of these shutter pupils can be calculated and dynamically updated in the address buffers, which provides steady address data streams for row address matrix 600 and column address matrix 700.


To increase the brightness, the screen may be divided into multiple zones, with image pixel scans happening in each zone simultaneously, as shown in FIG. 16, with zone rotation. In this case, the design should be well optimized to avoid optical crosstalk. Brightness may also be increased by using vertical zone rotation only.


(Important notice: Various changes, modifications, alterations, decorations, and extensions in the structures, embodiments, apparatuses, algorithms, procedures of operation and methods of data processing of this invention will be apparent to those skilled in this art without departing from the scope and spirit of the invention. Although the invention has been described in connection with specific preferred embodiments, apparatuses, numbers, it should be understood that the invention as claimed should not be unduly limited to such specific embodiments and apparatuses and the appended claims are therefore intended to cover all such changes and modifications as fall within the true spirit and scope of the invention.)

Claims
  • 1. A glassless 3D display method based on the eye space concept, and its corresponding embodiments, comprising an image pixel screen or lighting screen, at least one or two shutter screen(s), address drivers of the shutter(s) to form shutter pupils, and methods and algorithms for image screen or lighting screen control and shutter screen control, wherein the method and embodiments are applicable to either the “mimic scene method” or the “dynamic pinhole shutter method”, and applicable to any 3D display, such as TVs, monitors, smart devices (iPhone, iPad, . . . ), movie theaters, games, etc., with an unlimited number of TV viewers for a given space and with flexibility for eye or head motions; the eye space concept provides the key foundation for shutter pixel control; the system embodiments may or may not include an eye-tracking device;
  • 2. A method and algorithm of image screen or lighting screen control, and shutter screen(s) control, for glassless 3D display based on the eye space concept, wherein the eye locations in eye space determine the locations and sizes of shutter pupils on the shutter screen for a given image pixel on the image screen in the “dynamic pinhole shutter method”, or determine the locations of lighting pixels and sizes of lighting pupils on the lighting screen and the locations/sizes of the corresponding shutter pupils on the inner and outer shutter screens for a given virtual pixel in picture space in the “mimic scene method”; on the image pixel screen or in picture space, image pixels are scanned with the same procedures as conventional glasses-based 3D display, or scanned in a single screen or in multiple sub-screens (zones) synchronously, where within each sub-screen the images belonging to that sub-screen are scanned and the display in each sub-screen is rotated to different zones in eye space; during the scanning, when an image pixel is selected, i.e. the pixel is lighting, all color pixels of this image pixel can light simultaneously or in serial order; when an image pixel on the image pixel screen for the right-eye picture or the left-eye picture is lighting, all the corresponding pupils for all right eyes or all the corresponding pupils for all left eyes on the shutter plane are switched on synchronously, i.e. all shutter valves or pixels in each of these pupils pass light through, controlled by the row and column address drivers for all right eyes or all left eyes respectively; when a virtual pixel in picture space is selected, each of all eyes over eye space is scanned for this virtual pixel, and for a given scanned eye, all the corresponding pixels on the lighting screen light synchronously and all the corresponding pupils on the inner and outer shutter screens are switched on synchronously, i.e. all shutter valves or pixels in each of these pupils for this eye pass light through, controlled by the row and column address drivers for all eyes;
  • 3. Address drivers (circuit boards) of shutter pixels to form shutter pupils for glassless 3D display, wherein the address drivers drive the shutter pixels on the shutter screen in the “dynamic pinhole shutter method”, or drive the shutter pixels on the inner and outer shutter screens and the lighting pixels on the lighting screen in the “mimic scene method”; each address driver consists of two parts: a row address driver with a row address matrix and a column address driver with a column address matrix; an address matrix is a conductor mesh, and each of all nodes, or each of some nodes, at the mesh crossings has a voltage-controlled switch unit or a directional-conductive unit connected to the two conductor mesh lines at the crossing; the open/closed status of the shutter pixels in a shutter pupil on the shutter screen(s), and/or the lighting status of pixels on the lighting screen, is determined by the voltage difference between the row mesh lines in the row address matrix and the column mesh lines in the column address matrix; the voltages of the column mesh lines in the column address matrix are controlled by the conduction status of the mentioned directional-conductive units or voltage-controlled switches, which are further controlled, through the row mesh lines of the column address matrix, by the same signal triggering the image pixel on the image screen or the virtual pixel in picture space, and, if with an eye-tracking system, also controlled by dynamic control voltages from dynamic control lines; the voltages of the row mesh lines in the row address matrix are controlled by the conduction status of the mentioned directional-conductive units or switches, which are further controlled, through the column mesh lines of the row address matrix, by the same signal triggering the image pixel on the image screen or the virtual pixel in picture space, and, if with an eye-tracking system, also controlled by dynamic control voltages from dynamic control lines; the address matrix of pupils is pre-determined in the circuit design if without an eye-tracking system, or dynamically calculated and dynamically controlled if with an eye-tracking system; in the “dynamic pinhole shutter method”, one row address matrix and one column address matrix are needed for all right eyes in each zone if using more than one zone in eye space; in the “dynamic pinhole shutter method”, one row address matrix and one column address matrix are needed for all left eyes in each zone if using more than one zone in eye space; in the “mimic scene method”, one row address matrix and one column address matrix are needed for all eyes (no distinction between right and left eyes) in each zone if using more than one zone in eye space;
  • 4. The method of claim 1, wherein the “mimic scene method”, based on an image depth map, comprises at least a lighting pixel screen, at least one inner shutter screen, at least one outer shutter screen, address drivers and a stream processing unit, and the algorithm and control procedures for glassless 3D display, wherein the locations of lighting pixels and sizes of lighting pupils on the lighting screen, and the locations/sizes of the corresponding shutter pupils on the inner and outer shutter screens, are determined by the eye location in eye space and by the depth and location of the given image pixel;
  • 5. The method of claim 1, wherein the “dynamic pinhole shutter method” comprises at least an image pixel screen, at least one shutter screen, address drivers and a stream processing unit, and the algorithm and control procedures for glassless 3D display, wherein the locations and sizes of the corresponding shutter pupils on the shutter screen are determined by the eye location in eye space and the location of the given image pixel;
  • 6. The image pixel screen or lighting screen of claim 1, wherein pixel lighting can be based on any technology, such as liquid crystal display (LCD) based, including regular or conventional LCD, LED-LCD and TFT-LCD (thin-film transistor), or organic light emitting diodes (OLED), or surface-conduction electron-emitter display (SED), or plasma, or field emission display (FED), etc., or any other technology not yet invented, wherein the front polarizer of the LCD-based screen can be dropped if the valves of the shutter are based on polarized light;
  • 7. The eye space of claim 2, wherein the eyes are classified into a set of right eyes and a set of left eyes in the “dynamic pinhole shutter method”, which leads to two kinds of address drivers, for right eyes and left eyes respectively, and the eyes are not distinguished as left or right eyes in the “mimic scene method”, which needs only one kind of address driver;
  • 8. The address drivers of claim 3, wherein each of the row and column address drivers comprises one group (row and column for all eyes, or row and column for all right eyes and all left eyes) or multiple groups of address matrixes or circuit boards, for one or multiple zones respectively in eye space, so as to increase the total eye depth in eye space or to increase the eye-motion tolerance in each zone, wherein, in the zones from bottom to top in eye space, the eye-to-screen distance increases, and so the shutter pupil size decreases and the eye density and shutter pupil density increase correspondingly;
  • 9. The address drivers of claim 3, wherein the directional-conductive unit is any one-directional conduction device, such as dipole-based or transistor-based, but not limited to these;
  • 10. The address drivers of claim 3, wherein the voltage-controlled switch is any voltage-controlled directional conduction device, of which the one-directional conductivity (i.e. conduction is ready but the first voltage must also satisfy a certain condition, or is not ready at all) is controlled by a second, so-called control voltage, such as transistor-based, but not limited to this, wherein the address matrixes of pupils are dynamically controlled by the control voltage if with an eye-tracking system;
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority from U.S. provisional application No. 61/744,786, filed on Oct. 4, 2012 (with postal mailing date Oct. 1, 2012) and titled “Method of Glassless 3D Display”.