Method to control point spread function of an image

Information

  • Patent Grant
  • Patent Number
    7,742,239
  • Date Filed
    Monday, March 17, 2003
  • Date Issued
    Tuesday, June 22, 2010
Abstract
A method of controlling the point spread function of a projected image, said image being diffused by a filter; said point spread function is a result of the application of spatial filter(s) on said image; with said control of the point spread function effected by varying the distance between such image and said spatial filter(s) and varying the bidirectional scattering transmission function of the spatial filter(s). Said spatial filter may be a holographic diffuser, which by its method of manufacture has a well defined bidirectional scattering transmission function. Control of said point spread function is particularly useful to maintain image quality while abating moiré interference in situations where two periodic patterns are layered, causing moiré interference.
Description
TECHNICAL FIELD

This invention relates to the field of improved imaging technology. In particular, this invention will be discussed in relation to display technology with multiple image layers.


Reference shall now be made to use of the present invention in relation to multiple layered display technology.


BACKGROUND ART

There are two main types of displays used in computer monitors, passive matrix and active matrix. Passive-matrix displays use a simple grid to supply the charge to a particular pixel on the display. Creating the grid starts with two glass layers called substrates. One substrate is given columns and the other is given rows made from a transparent conductive material. This is usually indium tin oxide. The rows or columns are connected to integrated circuits that control when a charge is sent down a particular column or row. The electro-optical material is often sandwiched between the two glass substrates.


A pixel is defined as the smallest resolvable area of an image, either on a screen or stored in memory. Each pixel in a monochrome image has its own brightness, from 0 for black to the maximum value (e.g. 255 for an eight-bit pixel) for white. In a colour image, each pixel has its own brightness and colour, usually represented as a triple of red, green and blue intensities. To turn on a pixel, the integrated circuit sends a charge down the correct column of one substrate and a ground activated on the correct row of the other. The row and column intersect at the designated pixel and that delivers the voltage to untwist the liquid crystals at that pixel.


The passive matrix system has significant drawbacks, notably slow response time and imprecise voltage control. Response time refers to the display's ability to refresh the image displayed. Imprecise voltage control hinders the passive matrix's ability to influence only one pixel at a time.


When voltage is applied to change the optical state of one pixel, the pixels around it also partially change, which makes images appear un-sharp and lacking in contrast.


Active-matrix displays depend on thin film transistors (TFT). Thin film transistors are tiny switching transistors and capacitors. They are arranged in a matrix on a glass substrate. To address a particular pixel, the proper row is switched on, and then a charge is sent down the correct column. Since all of the other rows that the column intersects are turned off, only the capacitor at the designated pixel receives a charge. The capacitor is able to hold the charge until the next refresh cycle; and if the amount of voltage supplied to the crystal is carefully controlled, it can be made to untwist only enough to allow some light through. By doing this in very exact, very small increments, displays can create a grey scale. Most displays today offer 256 levels of brightness per pixel.


Displays that can show colours may have three sub-pixels with red, green and blue colour filters to create each colour pixel. Through the careful control and variation of the voltage applied, the intensity of each sub-pixel can range over 256 shades. Combining the sub-pixels produces a possible palette of 16.8 million colours (256 shades of red×256 shades of green×256 shades of blue). These filters are arranged such that they form vertical red, green and blue stripes across the panel.


The frequency spectrum of radiation incident upon a detector depends on the properties of the light source, the transmission medium and possibly the properties of the reflecting medium. If one considers the eye as a detector, the human visual system can sense radiation with a wavelength between approximately 380 nm and 760 nm; hence this is described as the visual part of the electromagnetic spectrum. Humans perceive certain frequency distributions as having different colours and brightness. A scheme was devised to describe any perceived colour and brightness by adding three basis spectral distributions with various weights. For example, in the 1931 CIE colour space any perceivable colour may be described by the following equation:

$C = X_r X + Y_r Y + Z_r Z$

Where C is the colour being described, X_r, Y_r and Z_r are the weights and X, Y and Z are the 1931 CIE tristimulus curves, which are graphs of the relative sensitivity of the eye vs wavelength. For any given colour, the weights may be determined by the following equations:







$$X_r = \int C(\lambda)\,X(\lambda)\,d\lambda \qquad Y_r = \int C(\lambda)\,Y(\lambda)\,d\lambda \qquad Z_r = \int C(\lambda)\,Z(\lambda)\,d\lambda$$





The 1931 co-ordinates are formed via the following normalisation:







$$x_r = \frac{X_r}{X_r + Y_r + Z_r} \qquad y_r = \frac{Y_r}{X_r + Y_r + Z_r} \qquad z_r = 1 - x_r - y_r$$






These may be plotted on the 1931 CIE diagram. The spectral locus defines the pure spectral colours, that is, the perception of radiation with a specific wavelength. Colour co-ordinates that are closer to or farther from pure spectral colours are described as being more or less saturated respectively. The value of the Y_r weight multiplied by 683 is also referred to as the luminance, denoted by the symbol L.
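By way of illustration, the following MATLAB fragment evaluates the tristimulus weights and chromaticity co-ordinates numerically. It is a minimal sketch: the spectrum and the tristimulus curves below are placeholder data, not the tabulated CIE values, which real work would load from the published tables.

% Minimal sketch: chromaticity co-ordinates of a measured spectrum.
% lambda (nm), C (spectral power) and the 1931 CIE curves Xbar, Ybar,
% Zbar are assumed to be tabulated on the same wavelength grid.
lambda = 380:5:760;                       % wavelength samples, nm
C      = ones(size(lambda));              % hypothetical flat spectrum
Xbar   = exp(-((lambda-600)/60).^2);      % placeholder tristimulus curves;
Ybar   = exp(-((lambda-550)/60).^2);      % real work would load the CIE tables
Zbar   = exp(-((lambda-450)/40).^2);

Xr = trapz(lambda, C .* Xbar);            % weights, as in the integrals above
Yr = trapz(lambda, C .* Ybar);
Zr = trapz(lambda, C .* Zbar);

xr = Xr / (Xr + Yr + Zr);                 % 1931 chromaticity co-ordinates
yr = Yr / (Xr + Yr + Zr);
zr = 1 - xr - yr;
L  = 683 * Yr;                            % luminance as defined in the text
fprintf('x = %.3f, y = %.3f, L = %.1f\n', xr, yr, L);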


The perception model described above accurately predicts that colours on addressable objects can be formed by mixing small areas of three basis colours with modulated intensities in close spatial or temporal proximity. If the basis colours are plotted on the CIE diagram then the enclosed triangle contains all the colours producible by the system. The enclosed area is called the colour gamut; hence an addressable object with a larger enclosed area can display a greater variation in colour and has a greater colour gamut.


Displays employ several variations of liquid crystal technology, including super twisted nematics, dual scan twisted nematics, ferroelectric liquid crystal and surface stabilized ferroelectric liquid crystal. They can be lit using ambient light in which case they are termed as reflective, or backlit and termed transmissive. There are also emissive technologies and reflective technologies such as Organic Light Emitting Diodes and electronic ink which are addressed in the same manner as Liquid Crystal displays.


At present there exist displays that by various means enable the stacking of addressable object planes at set distances. As well as the binocular depth cue, they feature intrinsic motion parallax, where the x and y distance between objects displayed on different planes changes depending on viewing angle. Additionally, separate focal planes may literally be brought in and out of focus depending on the focal length of the lens in the viewer's eye. These displays consist of a high-brightness backlight, a rear image panel which is usually an active matrix, colour liquid crystal display, a diffuser, a refractor and a front image plane, which are laminated to form a stack. There are generally colour filter stripes as mentioned above, and a black matrix on each display which defines the borders of the pixels. However it should be appreciated that the following discussion applies to all addressable object planes that are addressed by passive or active matrices, have colour filters arranged in any periodic pattern, or have any optically active periodic pattern. Because the displays are close to each other, as far as the viewer is concerned they form two similar, but not identical, periodic patterns on the retina. This is because the solid angle subtended by the repeating patterns is different, which causes the colour stripes and black matrix boundaries to have slightly different pitches when projected onto the retina.


These conditions are sufficient to cause a phenomenon called moiré interference, which is characterized by large, annoying vertical red, green and blue stripes. The diffuser combats the interference by spreading the intensity distribution of the image formed by the colour filters. However while this may help remove moiré it has the effect of changing the bidirectional scattering transmission function of the sub-pixels, smearing them to a point spread function thus effectively reducing the resolution of the display. Therefore to make a good display or optical system where the image remains sharp and the amplitude of the moiré interference is hardly noticeable, these two conflicting factors must be carefully controlled.


Typically the diffuser is of the form of a chemically etched series of surface features on a thin (0.000115 meter), birefringent substrate such as polyester. If the pattern were viewed under a microscope at 1000× magnification it would be undulating in topology. Because of the polarised nature of the displays this can cause the total luminance, which is evaluated at the front display by the viewer, to be reduced, because it changes the degree and orientation of polarization from the optimum. A similar pattern is available on a non-birefringent substrate such as acrylic, but this substrate cannot be made thin enough so as not to over-blur the rear-most pixels. In general one cannot easily control the angular distribution of the light as it exits a typical diffuser. Also, because there is an extra layer in the optical stack, extra air-plastic or air-glass interfaces are formed, causing back reflections. These decrease the brightness of the display because at least 4% of the light is directed towards the backlight, as opposed to the viewing direction. The ratio of the reflected and transmitted radiation is given by Fresnel's equations, which are well known in the art. Note that if a ray is at some angle from the normal, significantly more than 4% of the light may be reflected. This reflected light may also be re-reflected out to the viewer, but may not appear to come from the correct origin, reducing the contrast of the display. Also, because the film is on a separate sheet it has a tendency to deform due to the heat from the high-brightness backlight, which is visible to the viewer and can exacerbate the sharpness problem described above. Point spread functions for typical, commercially available diffusers are circularly symmetric, that is, their gain is constant for a given radius.


A holographic diffuser is a transparent or translucent structure having an entrance surface, an exit surface, and light shaping structures formed on its entrance surface and/or in its interior. These light shaping structures are random, disordered, and non-planar micro sculpted structures.


These structures are created during recording of the medium by illuminating the medium with a speckle pattern produced in conjunction with coherent light, or the combination of incoherent light and a computer-generated mask which simulates speckle. The speckle produces changes in the refractive index of the medium which, when developed, are the micro-sculpted structures. These light shaping structures diffract light passing through the holographic diffuser so that the beam of light emitted from the holographic diffuser's exit surface exhibits a precisely controlled energy distribution along horizontal and vertical axes. Holographic diffusers can be used to shape a light beam so that over 90% (and up to 95%-98%) of the light beam entering the holographic diffuser is directed towards and into contact with a target located downstream of the holographic diffuser. A holographic diffuser can be made to collect incoming light and either (1) distribute it over a circular area from a fraction of a degree to over 100 degrees, or (2) send it into an almost unlimited range of elliptical angles. For example, a 2 degree × 50 degree holographic diffuser will produce a line when illuminated by an LED or laser, as will a 35 degree × 0.90 degree diffuser. Thus a holographic diffuser is not a typical example of a diffuser, since it may send most of the incoming light out at elliptical angles and these particular angles may be finely controlled.


The following discussion describes pixel patterns used in the imaging industry. For the purposes of illustration it is assumed a sub-pixel is a 0.1 mm×0.3 mm rectangle, with the long axis of the rectangle in the y direction and a pixel is a 0.3 mm×0.3 mm square, however it should be appreciated that a pixel can be any shape that is possible to tessellate and a sub pixel can be any one of a set of shapes which are possible to tessellate in combination. To define this rigorously consider a set of regular points in 2D space forming a lattice and the same collection of pixels or sub-pixels at these points. Then the pixel pattern is wholly described by the lattice and the collection of sub-pixels or pixels at that point which are called a basis. The lattice can in turn be described by a primitive lattice cell comprised of two linearly independent vectors which form two sides of a parallelogram.


The following radiometric quantities, used throughout this specification, are defined below:


Luminous Flux is the flow rate of visual energy and is measured in lumens.


Illuminance is a measure of photometric flux per unit area, or visible flux density. Illuminance is measured in lux (lumens per square meter).


Luminance is the illuminance per solid angle.

    • To appreciate the solid angle concept consider a spherical surface of radius r containing an area element ΔA. The solid angle at the centre of the sphere is defined to be






$$\Delta\Omega = \frac{\Delta A}{r^2}.$$





Pixels on a transmissive addressable object will be capable of maximum and minimum luminous states. Labelling the maximum state as Lb and the minimum as Ld then the contrast ratio is described by







$$C_r = \frac{L_b}{L_d}$$






The term contrast ratio is usually abbreviated to just contrast.


From http://www.cquest.utoronto.ca/psych/psy280f/ch5/csf.html “The contrast sensitivity function (CSF) plots the contrast sensitivity for the human visual system (1/(contrast threshold)) for all spatial frequencies. Viewers are most sensitive to intermediate frequencies (˜4-8 cycles per degree). Viewers are less sensitive to lower frequencies, and less sensitive to higher frequencies.


The CSF shows us the observer's window of visibility. Points below the CSF are visible to the observer (those are the points that have even higher contrasts than the threshold level). Points above the CSF are invisible to the observer (those are the points that have lower contrasts than the threshold level). The lowest visible frequency (at 100% contrast) is the low frequency cut-off, and the highest visible frequency (at 100% contrast) is the high frequency cut-off.”


DISCLOSURE OF INVENTION

According to one aspect of the present invention there is a method of controlling the point spread function;


The term ‘point spread function’ is defined as the output of the imaging system for an input point source, in particular it describes the distribution of a single object point after passing through a filter with a particular spatial frequency response;


in an optical system consisting of an object, at least one spatial filter, and the image projected by that object with said spatial filter(s) located between said object and said image where said point spread function is a representation of the application of spatial filter(s) on said image;


with said point spread function controlled by varying the distance between said image and said spatial filter(s) and bidirectional scattering transmission function of the spatial filter(s).


The spatial filter is characterised by the bidirectional scattering transmission distribution function, which describes how a small cone of light rays around a single point is transmitted through a surface. Said function is defined as








$$f_s(\omega_i \rightarrow \omega_o) = \frac{L_o(\omega_o)}{L_i(\omega_i)\,\Omega(\omega_i)}$$










where the left hand side of the equation is the observed radiance in the direction of ω_o, per unit of irradiance arriving from ω_i. The arrow on the left hand side of the equation symbolises the direction of the light flow. The right hand side of the equation is the ratio of the luminance out to the illuminance in, contained in a small solid angle around ω_i.
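Since the bidirectional transmission function of a well-characterised holographic diffuser is often summarised by its spreads along two axes, one hedged way to represent it in a soft prototype is an elliptical Gaussian model, as sketched below in MATLAB. The widths here are assumed values, not data for any particular diffuser.

% Sketch of a separable elliptical-Gaussian BTDF model, f_s(theta_x, theta_y),
% normalised so it integrates to 1 over exit angle. The widths are assumed,
% not taken from any particular diffuser datasheet.
fwhm_x = deg2rad(30);  fwhm_y = deg2rad(2);          % hypothetical spreads
sx = fwhm_x / (2*sqrt(2*log(2)));                    % FWHM -> standard deviation
sy = fwhm_y / (2*sqrt(2*log(2)));
fs = @(tx, ty) exp(-tx.^2/(2*sx^2) - ty.^2/(2*sy^2)) / (2*pi*sx*sy);

% Example: relative luminance transmitted 5 degrees off-axis horizontally.
gain = fs(deg2rad(5), 0) / fs(0, 0);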


The, or each, spatial filter may be any type of spatial filter used in known optical technology such as, for example, a holographic diffuser, prismatic filter, a hologram, or any filter that changes the direction of light in a defined way. Reference throughout this specification to a spatial diffuser is purely exemplary and should not be viewed as limiting in any way.


Said point spread function can be determined by the following equation










$$\mathrm{PSF}(x_D, y_D, Z_{OD}, Z_{LD}) = \frac{f_s(\bar\omega_i \rightarrow \bar\omega_o)\, L_O(\theta_H, \theta_V)\, A_{pupil}^2}{\left(\frac{(x_D - x_O)^2}{M^2} + \frac{(y_D - y_O)^2}{M^2} + Z_{OD}^2\right)\left(\frac{x_R^2}{M^2} + \frac{y_R^2}{M^2} + Z_{DL}^2\right) M^2} \tag{1}$$

where

$$\theta_H = \tan^{-1}\!\left(\frac{x_R}{M Z_{OD}}\right) \qquad \theta_V = \tan^{-1}\!\left(\frac{y_R}{M Z_{OD}}\right)$$
)






Table 1 introduces all of the relevant notation; note that one example of a projective system is the object, the cornea/lens and the retina, but the discussion pertains to any system containing an object, a lens and an image:


















x, y, z     x, y, z coordinates
Z_OD        Distance along z axis between object and diffuser
Z_DL        Distance along z axis between diffuser and lens
Z_LR        Distance along z axis between lens and retina
Ω_OD        Solid angle formed between point source and δA_D
Ω_DL        Solid angle formed between the intersection of the ray we are following and the diffuser plane
M           Magnification of the lens system
| |         Modulus of a vector
|| ||       Norm of a vector
→           Direction of optic flow
ω_i         ||R*_OD||
ω_o         ||R*_DL||










This analysis enables one to take a spatial filter of a particular type, spaced at a particular distance from the display and to derive a linear transform to be used on a representation of the image. So the presence of the physical apparatus is not required to evaluate the correct BTDF, or portion thereof, and distance to provide the optimum image quality, saving development time and cost.
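A rough MATLAB sketch of this soft-prototyping step is given below: it builds a PSF kernel with the general geometric form of equation (1) and applies it to a pixelised image as a linear transform. The Gaussian stand-in for f_s and all of the dimensions are assumptions for illustration only.

% Sketch: build a PSF kernel on the object plane and apply it as a linear
% transform (2-D convolution) to a sub-pixel image. The BTDF stand-in and
% the geometry below are illustrative assumptions.
Z_OD = 1.5e-3;                 % object-to-diffuser distance, m (assumed)
pitch = 0.1e-3;                % sub-pixel pitch, m (from the 0.1 mm example)
half = 10;                     % kernel half-width in samples
[xD, yD] = meshgrid((-half:half)*pitch);

% Stand-in BTDF: elliptical Gaussian in exit angle, wider in x than y.
thH = atan(xD ./ Z_OD);  thV = atan(yD ./ Z_OD);
fs  = exp(-(thH/deg2rad(15)).^2 - (thV/deg2rad(2)).^2);

% Geometric fall-off term of equation (1) (other constants absorbed).
psf = fs ./ (xD.^2 + yD.^2 + Z_OD^2);
psf = psf / sum(psf(:));       % normalise so luminance is conserved

img     = kron(eye(8), ones(10));      % toy pixelised image
blurred = conv2(img, psf, 'same');     % the linear transform on the image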


Images that are portrayed and viewed using typical display technology are discrete approximations of the real world and are said to be pixelised. The diffusion of said pixelisation, so that the sharp boundaries between pixels are no longer perceptible, is one method to make the object appear more natural. Prior art diffusion and de-pixelisation methods are subject to trial and error. Upon the application of a diffuser, or other spatial filter, to an image, the luminance as measured substantially in the direction perpendicular to the surface of the image layer (the viewer looking substantially in this direction when viewing or using the display) will be reduced, and consequently the contrast will be reduced. Additionally, if there are any interstitial elements between the image layer and the diffuser then it may not be possible to get the diffuser close enough to the image layer so as not to over-blur the image. Prior art provides no way to finely control said de-pixelisation, nor is there a way to predict, for a given spatial filter at a given distance, what the effect on the image will be in terms of contrast, luminance, point spread and, most importantly, the viewer's perception of the image. The current invention describes a way to pre-determine, control and, most importantly, optimize the de-pixelisation process.


The preferred point spread function should remove frequencies higher than the sub-pixel frequency, but none below. The ideal filter would be a perfect cut-off filter at the sub-pixel frequency. However, given a range of physically realisable filters with known BTDFs, a difference metric, such as the difference between square root integrals weighted by the contrast sensitivity function of the human visual system, can be used to determine the departure from the ideal, which can be minimised so as to pick the best physical embodiment.
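One possible realisation of that selection step is sketched below: each candidate PSF is scored by its CSF-weighted departure from an ideal cut-off at the sub-pixel frequency. The candidate list psfs and the CSF weighting used here are placeholders, not prescriptions.

% Sketch: score candidate PSFs against an ideal brick-wall cut-off at the
% sub-pixel frequency. psfs is a cell array of kernels (assumed available,
% e.g. from the construction above); csf is a placeholder weighting.
f_cut = 1/0.1;                          % cut-off, cycles/mm, for 0.1 mm sub-pixels
N = 64;  f = (-N/2:N/2-1) / (N*0.01);   % frequency axis for 0.01 mm samples
[fx, fy] = meshgrid(f);
ideal = double(hypot(fx, fy) <= f_cut); % perfect low-pass target
csf   = hypot(fx, fy) .* exp(-hypot(fx, fy)/8);   % placeholder CSF weighting

best = inf;
for k = 1:numel(psfs)
    H = abs(fftshift(fft2(psfs{k}, N, N)));
    H = H / max(H(:));                  % normalised MTF of candidate k
    score = sum(sum(csf .* (H - ideal).^2));      % CSF-weighted departure
    if score < best, best = score; pick = k; end
end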


According to another aspect of this invention it is desirable to maintain as far as practical the contrast and luminance characteristics of the original object. Also it is preferable, because of the presence of interstitial optical films of significant thickness, to be able to have the spatial filter as far from the image as possible. This is achieved when the bi-directional transmission distribution function is narrow for all input angles.


According to another aspect of the present invention point spread function can be pre-determined and the trade off between moiré interference and image clarity abated;


in an optical system consisting of at least two addressable object planes with periodicity and at least one spatial filter between at least two of the addressable object planes where said point spread function is a result of the application of spatial filter(s) on said image,


with said point spread function being controlled by varying the distance between an object and said spatial filter(s) and varying bidirectional scattering transmission function characteristic of the spatial filter(s).


In typical multilayered technologies moiré interference is caused due to the periodicity of the layers. Diffusion techniques can be employed to abate moiré interference. However methods employed in prior art are subject to trial and error and result in residual moiré interference and an unnecessarily large drop in the perceived quality of the image.


The moiré interference as it appears on the image plane can be characterised by the following equation which describes the luminous intensity of the final image evaluated at the retina or any other image plane










$$E(x,y) = \sum_{i=1}^{m} \sum_{j=1}^{n} BL_0 \cdot \left( \mathrm{PSF}(x,y) * \begin{bmatrix} R(x,y)_R\,T(\lambda)_{Red,R} \\ R(x,y)_G\,T(\lambda)_{G,R} \\ R(x,y)_B\,T(\lambda)_{B,R} \end{bmatrix} \right) \circ \begin{bmatrix} F(x,y)_R\,T(\lambda)_{Red,F} \\ F(x,y)_G\,T(\lambda)_{G,F} \\ F(x,y)_B\,T(\lambda)_{B,F} \end{bmatrix} \frac{M^2 A_{lens} \cos^4(\theta)}{z'^2} \tag{2}$$







Where BL_0 is the radiance of the backlight, PSF(x,y) is the point spread function described earlier, T_Red, T_G and T_B are the spectral transmission functions of the dye layers, where the second subscripts R and F designate the rear and front imaging layers respectively, M is the magnification of the thin lens system given by z′/z_o and A_lens is the area of the lens in the system.


Once the intensity distribution on the back of the retina is known the distortion to the subjective image quality is described by the following metric











$$D_1 = \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_{D_o}(v)}{M_t(v)}}\; d(\ln(v)) \;-\; \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_D(v)}{M_t(v)}}\; d(\ln(v))$$

$$D_2 = \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_{M_o}(v)}{M_t(v)}}\; d(\ln(v)) \;-\; \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_M(v)}{M_t(v)}}\; d(\ln(v))$$

$$D = D_1 + D_2 \tag{3}$$






where







$$M(v) = \frac{F_X'(v)/F_X'(0)}{F_X(v)/F_X(0)}$$











is defined as the modulation transfer function, where FX′(v) and FX(v) are representations of the distorted and undistorted images in the frequency domain respectively, and FX(0) = FX′(0) are the average luminances of the distorted and undistorted images.







$$M_t = \frac{1}{\mathrm{CSF}(\omega(v))}$$








where CSF(ω) is the contrast sensitivity function

    • MD(v) is the filtered image and MD0(v) is the unfiltered image that is being compared
    • MM(v) is the image with moiré and MM0(v) is the ideal image without moiré.


Preferably, there is little perceptible moiré interference present and the object maintains its original image quality, and as such the BTDF of the spatial filter and the distance of said spatial filter from the display are picked such that the value of D in equation (3) above is at a minimum for any given set of pixel patterns.
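Assuming a hypothetical helper computeD(filter, gap) that builds the PSF for a given BTDF and distance and evaluates equation (3), the selection reduces to a simple search over the candidate set, sketched here in MATLAB:

% Sketch: pick the (filter, distance) pair minimising the distortion D of
% equation (3). computeD is a hypothetical helper that builds the PSF for
% the given BTDF and distance, forms the retinal images, and returns D.
filters   = {'hd_2x50deg', 'hd_5x30deg', 'hd_circ_10deg'};  % assumed catalogue
distances = (0.5:0.25:3.0) * 1e-3;                          % metres, assumed range

bestD = inf;
for i = 1:numel(filters)
    for j = 1:numel(distances)
        Dij = computeD(filters{i}, distances(j));
        if Dij < bestD
            bestD = Dij;  bestFilter = filters{i};  bestGap = distances(j);
        end
    end
end
fprintf('best: %s at %.2f mm (D = %.2f JND)\n', bestFilter, bestGap*1e3, bestD);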


The advantages will become clear when the modern product development process is considered. Nowadays this can be broken down into the following five phases:


i. Specification


ii. Soft prototyping


iii. Hard prototyping


iv. Validation


v. Manufacturing


Within the development process, steps (ii) to (iv) are repeated many times to get the product to the point where it is suitable for manufacture. Step (ii) in the modern process requires the use of computer aided design tools, which significantly reduce the number of iterations at step (iii). There exist no specialised tools in the prior art for the purposes of multi-layered optics; typically those available use Monte Carlo ray tracing techniques which involve large numbers of calculations.


According to another embodiment of this invention the results contained within equations (1), (2) or (3) are incorporated into an algorithm where


(i) The distance between the spatial filter and the object


(ii) The pixel structure for the object layers


(iii) The available spatial filters


(iv) The refractive indices within the optical stack are entered and the algorithm provides


(a) a subjective image quality value for each combination of the above parameters


(b) the best configuration of the distance between layers and the spatial filter.


The algorithm provides a procedure to preserve the image quality and abate moiré interference by the manipulation and optimization of the point spread function acting on the image caused by the spatial filter, and additionally provides a simple, timely means to do this in the absence of a group of observers and at the soft prototype stage.


To further appreciate the advantages of a soft prototyping system, in terms of cost and time, over a trial and error approach, consider the following example: in multi-layered displays the gap between the diffuser and object layer is controlled by "adjusting" the thickness of a birefringence-free substrate such as glass or acrylic. In reality this adjustment is not trivial. Because layers in the stack are viewed through polarizers, any stress on or within these layers causes birefringence, which appears as coloured or dark patches within the display. So cast acrylic is generally used, as extruding the material induces stress, introducing unwanted birefringence into the multi-layered display stack. On the other hand, if the casting method of forming the acrylic is used there is no birefringence present; however the thickness across the sheet can vary by millimetres, resulting in variable image quality. There exist proprietary methods to overcome this dilemma, however there is no "real time" adjustment possible. In order to change the thickness of the substrate, die and machine set-ups need to be altered, resulting in considerable delays and expense.


Additionally there is the problem that one needs to have a physical object; that is, one cannot determine the correct thickness of acrylic to be used from the specification of the object alone. If the object is a flat panel display then it is necessary that the display be first constructed, which can take between 6 and 12 months and incur large costs, typically in the order of millions of $USD. This implies that there is otherwise no way of determining the optimum object specification such that a display optimised for the purposes of layering can be specified correctly the first time.


According to yet another aspect of the present invention an image will have a periodic pattern which is asymmetric. Pixel patterns which are commonly employed in display technologies are asymmetric. For example the red, green, blue stripe pattern commonly found in liquid crystal displays and other display technologies are asymmetric in their arrangement.


Commonly available filters are circularly symmetric. The result of applying a circularly symmetric filter on an asymmetric pixel image pattern is a circularly symmetric point spread function—resulting in over-blurring of the initial image and over degradation of the image quality.


In another embodiment of the invention the control of the point spread function caused by a spatial diffuser acting upon an image with an asymmetric pattern can be soft and hard prototyped by varying the distance between such image and said spatial filter(s) and by varying the bidirectional scattering transmission function of the spatial filter(s). Preferably the spatial diffuser employed is of known characteristics such that its bidirectional scattering transmission function is asymmetric.


Preferably the spatial diffuser used in the present invention will be a holographic diffuser.


According to yet another aspect of the present invention point spread function can be soft and hard prototyped and moiré interference abated;


in an optical system consisting of at least two addressable object planes with periodicity and at least one spatial filter between at least two of the addressable object planes, where said point spread function is a result of the application of spatial filter(s) on said image,


with said point spread function being controlled by varying the distance between such image and said spatial filter(s) and independently varying the BTDF characteristic of the spatial filter(s) along any axis in the x-y plane.


In typical display technologies there is a black matrix defining the borders of the sub-pixels on an object layer in order to hide any manufacturing defects. Two, or more, of these layers combined when imaged on the retina produce a pattern composed of lines of greater and lesser spatial density. This pattern is perceived as bright and dark stripes. The black matrix is approximately an order of magnitude smaller than the size of the sub-pixel and requires only a small amount of spreading in the x and y directions.


On the other hand, in a stripe pattern for example, the coloured sub-pixels require spreading across their nearest neighbours in the x direction.


So the stripe pixel pattern is a superposition of these two situations and ideally is spread exactly over the width of two sub-pixels in the x direction and exactly the width of the black matrix in the y direction.


According to one aspect of the present invention coloured and black moiré is overcome by control of an asymmetric point spread function.


According to yet another aspect of the present invention point spread function can be soft and hard prototyped and moiré interference abated; whilst maintaining image quality.


Generally, for a given point within a region on an image plane surrounded by a group of regions with substantially the same optical emission spectra or absorption profiles, it is preferable, to control the point spread function to the extent practical, to spread the point no further than half the distance to the nearest point on the boundary of the nearest neighbouring region, to avoid moiré interference with another image layer and maintain the image quality on the image plane. The half factor is incorporated because the nearest neighbouring region is spread also.
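A small worked sketch of the half-distance rule, using the 0.1 mm × 0.3 mm sub-pixel example introduced earlier; the black-matrix width here is an assumed value:

% Sketch: target spread widths from the half-distance rule, using the
% 0.1 mm x 0.3 mm sub-pixel example. The black-matrix width is assumed
% to be an order of magnitude below the sub-pixel size, per the text.
subpx_x = 0.1e-3;                        % sub-pixel width in x, m
bm      = 0.01e-3;                       % assumed black-matrix width, m
d_same_colour = 3 * subpx_x;             % same-colour stripes repeat every 3 sub-pixels
spread_x = d_same_colour / 2;            % spread no further than half that distance
spread_y = bm / 2;                       % and half the black-matrix gap in y
fprintf('max spread: %.2f mm in x, %.3f mm in y\n', spread_x*1e3, spread_y*1e3);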


Preferably the spatial filter is a holographic diffuser, with full width half maximum less than 30 degrees on either the x or the y axis.


Whilst the holographic diffusion pattern may be recorded using a laser and mask arrangement, it can be reproduced, and indeed any diffusion pattern may be produced to within a given tolerance, by many different methods. One such method is calendering, where an adhesive, usually an epoxy that is curable by ultra-violet radiation, is applied to the desired surface and a 3D negative impression of the surface to be reproduced, on a transparent substrate, is pushed into the adhesive. The adhesive is then cured by applying the UV radiation through the substrate, and the substrate removed, leaving a surface impression. Also the pattern may be applied to the surface during its manufacturing process, such as embossing the pattern onto a plastic sheet whilst the surface is still soft. It also may be applied using material removal systems such as acid etching or abrasion. Thus a pattern may be applied to any surface within the optical stack.





BRIEF DESCRIPTION OF DRAWINGS

Further aspects of the present invention will become apparent from the following description which is given by way of example only and with reference to the accompanying drawings in which:



FIG. 1 shows the setup for the working that shows how the point spread function can be calculated from the bi-directional transmission distribution function of the spatial filter



FIG. 2 shows the setup for the working that shows how the moiré interference is described



FIG. 3 is a flow chart describing the software embodiment of the present invention



FIG. 4 shows the effect of the two different bidirectional spread functions of figure two on a simplified version of the vertical stripe pattern.



FIG. 5 is a graph of the bidirectional spread function, where the two horizontal axes give the row and column number of the element respectively and the vertical axis portrays the weight.



FIG. 6 shows the effect of narrower vs wider bi-directional spread functions on the moiré interference



FIG. 7 is a diagram disclosing the hardware embodiment of the present invention



FIG. 8 shows a microscopic image of the holographic diffuser and its effect on an incident laser beam.



FIG. 9 is a schematic view of a typical stripe pattern used in the art. Red, green and blue sub-pixels are arranged in a matrix where each pixel is formed from a triple of a red sub-pixel, a green sub-pixel and a blue sub-pixel. A sensor dragged across the display in the horizontal direction would detect red, green and blue sub-pixels if not on a horizontal strip of the black matrix. Conversely a sensor dragged in the vertical direction would only see red sub-pixels for example.





BEST MODES FOR CARRYING OUT THE INVENTION

At present there exist methods to produce displays where several imaging planes are stacked with set distances between them. These imaging planes may also be stacked as closely as possible. These displays consist of a high-brightness backlight, a rear image panel which is usually an active matrix, colour display, a diffuser and a front image plane, which are laminated to form a stack. There are generally colour filter stripes, and a black matrix on each display which defines the borders of the pixels. However it should be appreciated that the following discussion applies to all addressable object planes that are addressed by passive or active matrices or have colour filters, or any periodic pattern. For the purposes of the present invention these addressable object planes need not be addressable at all.


It is an object of the present invention to address the foregoing problems or at least to provide the public with a useful choice.


Further aspects and advantages of the present invention will become apparent from the ensuing description which is given by way of example only.


Reference throughout this specification will now be made to the present invention as applying to video display systems. However, it should be appreciated by those skilled in the art that other types of display and imaging systems may be used in conjunction with the invention, not necessarily being video screens.


According to one embodiment of the present invention there is provided a mathematical model for predicting and optimising the trade-off between moiré interference and image quality by use of a spatial filter within a multi-layered image system where said layers contain periodic elements.


To determine the effect of a diffusion element on small features, such as font elements, on an imaging layer, consider the somewhat simplistic optical system shown in FIG. 1. For the purposes of the metric and further modelling, the point spread function of the system in terms of the BTDF of the diffuser and its distance from the rear display is required. Since the image quality on the rear imaging layer is being examined, the front imaging layer is not needed. The eye is modelled as a thin lens system where the images are focused on the fovea (1), a small region of the retina with very high cone density, at an angle of about 6-8 degrees to the optic axis. These "flat retina" and "thin lens" approximations are acceptable because the fovea forms such a small portion of the curved retina. Table 1 introduces all of the relevant notation.


















x, y, z     x, y, z coordinates
Z_OD        Distance along z axis between object and diffuser
Z_DL        Distance along z axis between diffuser and lens
Z_LR        Distance along z axis between lens and retina
R*_OD       Ray from object to diffuser
R*_DL       Ray from diffuser to lens
R*_LR       Ray from lens to retina
δA_D        Small area surrounding the intersection of the ray we are following with the diffuser plane
δA_R        Small area surrounding the intersection of the ray we are following with the retina plane
Ω_OD        Solid angle formed between point source and δA_D
Ω_DL        Solid angle formed between the intersection of the ray we are following and the diffuser plane
M           Magnification of the lens system
| |         Modulus of a vector
|| ||       Norm of a vector
→           Direction of optic flow
ω_i         ||R*_OD||
ω_o         ||R*_DL||










To begin the analysis the object layer is partitioned into a fine grid










$$G_O = \begin{Bmatrix} (x_1, y_1) & (x_2, y_1) & \cdots & (x_m, y_1) \\ (x_1, y_2) & (x_2, y_2) & \cdots & (x_m, y_2) \\ \vdots & & & \vdots \\ (x_1, y_n) & (x_2, y_n) & \cdots & (x_m, y_n) \end{Bmatrix} \tag{1}$$








which is mapped by the thin lens, focused to give a magnification M, to a grid on the retina










$$G_R = -M \begin{Bmatrix} (x_1, y_1) & (x_2, y_1) & \cdots & (x_m, y_1) \\ (x_1, y_2) & (x_2, y_2) & \cdots & (x_m, y_2) \\ \vdots & & & \vdots \\ (x_1, y_n) & (x_2, y_n) & \cdots & (x_m, y_n) \end{Bmatrix} \tag{2}$$







The negative sign implies that the x coordinate has been reflected in the y axis and vice versa. Initially the position is required, so follow the path of a generalised ray R* from a point at (x_O*, y_O*) (2) behind the diffuser, which is broken into three sections, R̃*_OD (3), R̃*_DL (4), and R̃*_LR (5), each a vector. R̃*_OD, whose direction is described by the inclination angles (θ_H, θ_V) (5) resolved into components in the horizontal z-x plane and vertical z-y plane respectively, is redirected by the diffuser at the point

$$(x_D^*, y_D^*) = \left(Z_{OD}\tan(\theta_H),\; Z_{OD}\tan(\theta_V)\right) \tag{3}$$


Think of the point (x_D*, y_D*) (6) as a new object which is then imaged by the thin lens. To determine the surrounding grid element at the retina, take the imaged point and divide the x and y coordinates at the fovea by the grid size and round up to the nearest integer










$$(x_R^*, y_R^*) = \left(\frac{x_D^*}{M},\; \frac{y_D^*}{M}\right) \tag{4}$$

$$(x_R^*, y_R^*) = \left(\operatorname{ceil}\!\left\{\frac{x_F^*}{\delta x_F}\right\},\; \operatorname{ceil}\!\left\{\frac{y_F^*}{\delta y_F}\right\}\right) \tag{5}$$







If the irradiance of the light entering the diffuser contained within a small solid angle is known, the output luminance on the exit side at that point is determined from














$$\begin{aligned} L_D(\omega_o) &= L_O(\bar\omega_i)\, f_s(\bar\omega_i \rightarrow \omega_o)\, \Omega_D(\bar\omega_i) \\ &= L_O(\bar\omega_i)\, f_s(\bar\omega_i \rightarrow \omega_o)\, \frac{\delta A_D}{\left| R_{OD} \right|^2} \\ &= L_O(\bar\omega_i)\, f_s(\bar\omega_i \rightarrow \omega_o)\, \frac{\delta A_D}{\frac{(x_D - x_O)^2}{M^2} + \frac{(y_D - y_O)^2}{M^2} + Z_{OD}^2} \end{aligned} \tag{6}$$

$$f_s = \frac{L_O(\bar\omega_o)}{L_i(\omega_i)\,\Omega(\bar\omega_i)}$$
)










is the bidirectional transmittance function for the diffuser element where

$$\omega_i = \left\|R^*_{OD}\right\| \qquad \omega_o = \left\|R^*_{DL}\right\| \tag{7}$$


Now the illuminance at the lens is













$$E_L = L(\theta_H, \theta_V)\,\delta\Omega_{DL} = \frac{L_O(\theta_H, \theta_V)\, A_{lens}}{\left\| R_{DL}^* \right\|^2} = \frac{L_O(\theta_H, \theta_V)\, A_{lens}}{4\pi\left[\frac{x_D^2}{M^2} + \frac{y_D^2}{M^2} + Z_{DL}^2\right]} \tag{8}$$








And the flux through the lens is













$$\Phi_{lens} = E_L\, A_{lens} = \frac{L_O(\theta_H, \theta_V)\, A_{lens}^2}{4\pi\left[\frac{x_D^2}{M^2} + \frac{y_D^2}{M^2} + Z_{DL}^2\right]} \tag{9}$$







To find the illuminance imaged at each grid area on the retina by the lens, which is the stimulus at that area, take the flux through the lens and divide it by the area of the corresponding grid element







$$E_v(x_R, y_R) = \frac{\Phi_{lens}}{\delta A_R}$$










So










$$E_v(x_R, y_R) = \frac{f_s(\bar\omega_i \rightarrow \bar\omega_o)\, L_O(\theta_H, \theta_V)\, A_{lens}^2}{\left(\frac{(x_D - x_O)^2}{M^2} + \frac{(y_D - y_O)^2}{M^2} + Z_{OD}^2\right)\left(\frac{x_R^2}{M^2} + \frac{y_R^2}{M^2} + Z_{DL}^2\right)} \cdot \frac{\delta A_O}{\delta A_R} \tag{10}$$







However the result is required to be independent of the grid, so resolve the ratio on the right hand side. In the limit,













$$\lim_{\delta A_R \rightarrow 0} \frac{\delta A_D}{\delta A_R} = \frac{A_O}{A_R} \tag{11}$$








for δAF=f(δAO) and since













$$\delta A_R = \delta x_R\, \delta y_R = \left(M\,\delta x_O\right)\left(M\,\delta y_O\right) = M^2\, \delta x_O\, \delta y_O = M^2\, \delta A_O \tag{12}$$

$$\frac{A_F}{A_O} = M^2 \tag{13}$$








and finally











$$\mathrm{PSF}(x_D, y_D, Z_{OD}, Z_{LD}) = \frac{f_s(\bar\omega_i \rightarrow \bar\omega_o)\, L_O(\theta_H, \theta_V)\, A_{pupil}^2}{\left(\frac{(x_D - x_O)^2}{M^2} + \frac{(y_D - y_O)^2}{M^2} + Z_{OD}^2\right)\left(\frac{x_R^2}{M^2} + \frac{y_R^2}{M^2} + Z_{DL}^2\right) M^2}$$

where

$$\theta_H = \tan^{-1}\!\left(\frac{x_R}{M Z_{OD}}\right) \qquad \theta_V = \tan^{-1}\!\left(\frac{y_R}{M Z_{OD}}\right) \tag{14}$$







An intuitive way to think about moiré is shown in FIG. 2 which shows two gratings of slightly different wavelengths (7) overlaid. When each of these screens is imaged by the lens on the same plane there is interference (8) between two square waveforms of slightly different frequency—which, considering how the average density varies across the screen, produces a beating pattern.


The following describes, in a rigorous way, how this beating phenomenon occurs in multi-layered imaging systems. The situation presented in FIG. 2a is similar for LCD panels that are spaced apart and have regularly sized and spaced apertures containing red, green and blue filters.


Each filter of each layer is modelled separately with the spectral transmission function. This is overlaid upon a scaled 2D square wave with a Lambertian luminance of BL0 or zero for the rear imaging layer, and a transmittance of one or zero for the front imaging plane. The origin of this wave is at the optical axis, shown as the vertical centre line on FIG. 2c. The black matrix (9) is included implicitly in this setup. The luminance/transmission functions are expressed mathematically in Equations 15 and 16, where the symbols are defined in FIG. 3. The multiplication is expressed in the horizontal and vertical vectors in Equation 3.










$$\begin{bmatrix} F(x,y)_{Red} \\ F(x,y)_{G} \\ F(x,y)_{B} \end{bmatrix} =
\begin{bmatrix}
\begin{cases} 1 & \text{if } M_F\!\left(BM_{X,F} + 0\,P_{X,F} + n_x P_{X,F}\right) < x < M_F\!\left(BM_{X,F} + \left(P_{X,F} - 2BM_{X,F}\right) + n_x P_{X,F}\right) \\ & \text{and } M_F\,BM_{Y,F} < y < M_F\!\left(n_y P_{Y,F} - BM_{Y,F}\right) \\ 0 & \text{otherwise} \end{cases} \\
\begin{cases} 1 & \text{if } M_F\!\left(BM_{X,F} + 1\,P_{X,F} + n_x P_{X,F}\right) < x < M_F\!\left(BM_{X,F} + \left(2P_{X,F} - 2BM_{X,F}\right) + n_x P_{X,F}\right) \\ & \text{and } M_F\,BM_{Y,F} < y < M_F\!\left(n_y P_{Y,F} - BM_{Y,F}\right) \\ 0 & \text{otherwise} \end{cases} \\
\begin{cases} 1 & \text{if } M_F\!\left(BM_{X,F} + 2\,P_{X,F} + n_x P_{X,F}\right) < x < M_F\!\left(BM_{X,F} + \left(3P_{X,F} - 2BM_{X,F}\right) + n_x P_{X,F}\right) \\ & \text{and } M_F\,BM_{Y,F} < y < M_F\!\left(n_y P_{Y,F} - BM_{Y,F}\right) \\ 0 & \text{otherwise} \end{cases}
\end{bmatrix} \tag{15}$$








This idea is obviously portable to other technologies and optical configurations which could form the basis of multi-layered displays.


The assignments in Equations 15 and 16 may look confusing to the reader, but the pattern within the braces is rather simple, with the general form












$$\underbrace{\phi}_{\text{initial phase shift}} + n\,\underbrace{P}_{\text{period}} \;<\; z \;<\; \underbrace{\phi}_{\text{initial phase shift}} + \underbrace{\Delta z}_{\text{change in distance}} + n\,\underbrace{P}_{\text{period}}$$






$$\begin{bmatrix} R(x,y)_{Red} \\ R(x,y)_{G} \\ R(x,y)_{B} \end{bmatrix} =
\begin{bmatrix}
\begin{cases} 1 & \text{if } M_R\!\left(BM_{X,R} + 0\,P_{X,R} + n_x P_{X,R}\right) < x < M_R\!\left(BM_{X,R} + \left(P_{X,R} - 2BM_{X,R}\right) + n_x P_{X,R}\right) \\ & \text{and } M_R\,BM_{Y,R} < y < M_R\!\left(n_y P_{Y,R} - BM_{Y,R}\right) \\ 0 & \text{otherwise} \end{cases} \\
\begin{cases} 1 & \text{if } M_R\!\left(BM_{X,R} + 1\,P_{X,R} + n_x P_{X,R}\right) < x < M_R\!\left(BM_{X,R} + \left(2P_{X,R} - 2BM_{X,R}\right) + n_x P_{X,R}\right) \\ & \text{and } M_R\,BM_{Y,R} < y < M_R\!\left(n_y P_{Y,R} - BM_{Y,R}\right) \\ 0 & \text{otherwise} \end{cases} \\
\begin{cases} 1 & \text{if } M_R\!\left(BM_{X,R} + 2\,P_{X,R} + n_x P_{X,R}\right) < x < M_R\!\left(BM_{X,R} + \left(3P_{X,R} - 2BM_{X,R}\right) + n_x P_{X,R}\right) \\ & \text{and } M_R\,BM_{Y,R} < y < M_R\!\left(n_y P_{Y,R} - BM_{Y,R}\right) \\ 0 & \text{otherwise} \end{cases}
\end{bmatrix} \tag{16}$$







The layers are then set up as shown in FIG. 2c. This situation can be simplified by considering the same image produced on the retina by the rear layer, but where the rear layer is moved very slightly behind the front layer and suitably scaled. This will allow us to determine the final image with point by point multiplication of the separate layers. The use of radiometric rather than photometric quantities is required since the various filters need to be modelled. To scale this object such that the image on the retina remains the same, examine FIG. 2c. Start with the fact that the area has to be preserved, and from thin lens theory it is known that







$$x_R = \frac{z_0}{z_R}\, x_o = \frac{z'}{z_R}\, x' \qquad\Rightarrow\qquad x' = \frac{z_0}{z'}\, x_0$$








and similarly







$$y' = \frac{z_0}{z'}\, y_0.$$







The illuminance on a plane a distance z away from a flat surface imaged by a thin lens (10) is












$$E = \frac{\Phi}{a} = \frac{L\,A\,S}{a\,z'^2}\cos^4(\theta) = L\left(\frac{z_o}{z}\right)^2 \frac{A\,S\,\cos^4(\theta)}{a\,z'^2} \tag{17}$$

$$= \frac{L\,A\,S\,\cos^4(\theta)}{a\,z^2} \tag{18}$$








where S is the area of the lens (10), L is the luminance of the rear display (11), A is the area of a small element on the rear display (12), a is a small element on the lens, Φ is the flux and E is the irradiance. This turns out to be the same no matter how far the rear panel is away from the lens. Take the diffuser into account by convolving the rear display image with the point spread function before changing its scale. To get the irradiance at the retina, combine all of the separate layers using point by point multiplication and sum them as in Equation 19.










$$E(x,y) = \sum_{i=1}^{m} \sum_{j=1}^{n} BL_0 \cdot \left( \mathrm{PSF}(x,y) * \begin{bmatrix} R(x,y)_R\,T(\lambda)_{Red,R} \\ R(x,y)_G\,T(\lambda)_{G,R} \\ R(x,y)_B\,T(\lambda)_{B,R} \end{bmatrix} \right) \circ \begin{bmatrix} F(x,y)_R\,T(\lambda)_{Red,F} \\ F(x,y)_G\,T(\lambda)_{G,F} \\ F(x,y)_B\,T(\lambda)_{B,F} \end{bmatrix} \frac{M^2 A_{lens} \cos^4(\theta)}{z'^2} \tag{19}$$







Where BL_0 is the radiance of the backlight, PSF(x,y) is the point spread function described earlier, T_Red, T_G and T_B are the spectral transmission functions of the dye layers, where the second subscripts R and F designate the rear and front imaging layers respectively, M is the magnification of the thin lens system given by z′/z_O and A_lens is the area of the lens.
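The composition step of Equation 19 can be sketched compactly in MATLAB as follows; the per-channel layer images and the PSF kernel are assumed inputs on a common retinal grid, and the photometric constants are folded into a single gain term.

% Sketch: retinal irradiance of a two-layer stack per Equation (19).
% rearR/G/B and frontR/G/B are per-channel images on a common grid and
% psf is the kernel from earlier; all are assumed inputs. The constants
% (BL0, M, A_lens, cos^4(theta)/z'^2) are folded into one gain term.
gain = 1;                                 % assumed photometric constant
E = zeros(size(rearR));
rear  = {rearR, rearG, rearB};            % rear layer, dye transmissions applied
front = {frontR, frontG, frontB};         % front layer likewise
for c = 1:3
    spread = conv2(rear{c}, psf, 'same'); % diffuse the rear layer only
    E = E + gain * spread .* front{c};    % point-by-point multiply, then sum
end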


Since visual perception is very difficult to measure directly, and varies from person to person, an external reference to measure the image quality of a display is required. In most work the external reference is a physical artefact that transforms radiometric quantities into numbers. For these numbers to be useful they should have at least the following two properties: (a) if the numbers describing an aspect of image quality are the same for two displays, then observers comparing the two displays should report that this aspect of image quality is the same; (b) if the numbers associated with two different displays are different, then an observer should be able to report which has the greater magnitude. A third useful property is that the numbers describing the magnitude of a measured quantity agree with the magnitude described by an observer. Here we are interested in two conflicting quantities: (a) the image "clarity" of background layers, which is compromised in a MLD to reduce (b) the saliency of the moiré interference produced. Vision science provides a description of the response to spatial frequency, the contrast sensitivity function (CSF), which is a good starting point for producing the map between the response of the physical artefact and the "useful numbers" described. The CSF plots the contrast sensitivity (1/cutoff contrast) against angular frequency for human observers.












$$\mathrm{CSF}(\omega) = a\,\omega\, e^{-b\omega}\sqrt{1 + 0.06\,e^{b\omega}}$$

$$a(\omega, L) = \frac{540\left(1 + 0.7/L\right)^{-0.2}}{1 + \dfrac{12}{p\left(1 + \omega/3\right)^2}} \qquad b(L) = 0.3\left(1 + 100/L\right)^{0.15} \tag{20}$$








Where L is the average display luminance, p is the angular display size in degrees and ω is the angular frequency in cycles per radian, related to the spatial frequency of the display v by

ω=dv  (21)


Additionally, there is no widely agreed standard for the contrast sensitivity function which describes the spatial response of the human visual system. At present there are several contenders which could be named as standard by the CIE, so in a preferred embodiment the CSF that becomes the standard for the art would be incorporated.
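For soft prototyping, Equation (20) can be coded directly. The sketch below reads the two parenthesised terms as 0.7/L and 100/L, which is the usual form of this CSF model; that reading should be treated as an assumption about the garbled source formula.

% Sketch of the CSF of Equation (20). L: mean luminance (cd/m^2),
% p: angular display size (degrees), w: angular frequency (units as in
% Equation (21)). The 0.7/L and 100/L readings are assumptions.
csf = @(w, L, p) (540*(1 + 0.7./L).^-0.2 ./ (1 + 12./(p*(1 + w/3).^2))) ...
      .* w .* exp(-0.3*(1 + 100./L).^0.15 .* w) ...
      .* sqrt(1 + 0.06*exp(0.3*(1 + 100./L).^0.15 .* w));

w = linspace(0.5, 60, 200);          % frequency sweep
plot(w, csf(w, 100, 40));            % sensitivity peaks at intermediate frequencies
xlabel('angular frequency'); ylabel('contrast sensitivity');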


A naïve display engineer about to place one image layer upon another would not expect Moiré interference or the spatial qualities of the image on the rear display to differ. This is the ideal specification. This is better expressed by stating that only the difference between two sets of images matters.


An image of text on the rear display diffused and the ideal un-diffused case, with the representations FD′, FD in the frequency domain.


An image of the moiré interference produced between the two layers and the ideal case of a uniform white surface the same area as the front most imaging layer, with the representation FM,FM′ in the frequency domain.


The square root integral function compares an original and a distorted image where signal components are weighted by the contrast sensitivity function, and the result is expressed in just noticeable differences (JND). One JND between the two images would mean a difference that was noticeable half of the time. The Square Root Integral (SQRI) is calculated as









$$J = \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M(v)}{M_t(v)}}\; d(\ln(v)) \tag{22}$$








where







$$M(v) = \frac{F_X'(v)/F_X'(0)}{F_X(v)/F_X(0)}$$











is defined as the modulation transfer function, where FX′(v) and FX(v) are representations of the distorted and undistorted images in the frequency domain respectively, and FX(0) = FX′(0) are the average luminances of the distorted and undistorted images.







$$M_t = \frac{1}{\mathrm{CSF}(\omega(v))}$$








where CSF(ω) is the contrast sensitivity function


To find the distortion of our multi-layered display from the ideal in terms of JNDs, use the following sum:











$$D_1 = \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_{D_o}(v)}{M_t(v)}}\; d(\ln(v)) \;-\; \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_D(v)}{M_t(v)}}\; d(\ln(v))$$

$$D_2 = \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_{M_o}(v)}{M_t(v)}}\; d(\ln(v)) \;-\; \frac{1}{\ln(2)} \int_{v_o}^{v_{max}} \sqrt{\frac{M_M(v)}{M_t(v)}}\; d(\ln(v))$$

$$D = D_1 + D_2 \tag{23}$$








where vo is the smallest spatial frequency viewable on the display corresponding to the size of the display and vmax is the maximum frequency viewable by the human visual system.


The first term of the metric compares the filtered layer to the unfiltered layer. The second term compares the retinal representation of the front and rear screen combination to the retinal representation of the front screen by itself. In a preferred embodiment each sub-term of the metric is calculated via the square root integral.
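Numerically, Equation (23) reduces to two differences of square root integrals. A sketch, assuming the modulation transfer functions are sampled as vectors on a common frequency axis v:

% Sketch: distortion D of Equation (23) by trapezoidal integration over
% ln(v). Mt, MD0, MD, MM0, MM are assumed vectors sampled at v.
sqri = @(M, Mt, v) trapz(log(v), sqrt(M ./ Mt)) / log(2);

D1 = sqri(MD0, Mt, v) - sqri(MD, Mt, v);   % loss of clarity from the filter
D2 = sqri(MM0, Mt, v) - sqri(MM, Mt, v);   % residual moire visibility
D  = D1 + D2;                              % total departure from ideal, in JNDs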


Many other metrics incorporating the contrast sensitivity function of the human visual system may be employed including the Modulation Transfer Function Area.


According to another embodiment of the present invention there is provided an algorithm for predicting and optimising the trade-off between moiré interference and image quality by use of a spatial filter within a multi-layered image system where said layers contain periodic elements.


As shown in FIG. 2b, the display architecture is specified to the algorithm and may include, but is not limited to, the following:


(a) the shape and dimensions of each pixel or sub-pixel (13,14), and the width of the black matrix (15,16) that surrounds the pixel or sub-pixel, which form the cell attached to each point on the lattice


(b) The chromaticity co-ordinates of combinations of pixels or sub-pixels (17) of substantially the same or of different spectral absorption or emission characteristics on different layers.


For example with two layers, each layer with RGB sub-pixels, the chromaticity co-ordinates of the following combinations are necessary

















R
G
B





















R
RR
RG
RB




(x, y)
(x, y)
(x, y)



G
GR
GG
GB




(x, y)
(x, y)
(x, y)



B
BR
BG
BB




(x, y)
(x, y)
(x, y)










Where, for example RR denotes a red sub-pixel is in front of a red sub-pixel, and (x,y) denotes the chromaticity co-ordinates measured when illuminated by a standard D65 source. Note that the combinations can be measured by using a commercially available photo-spectrometer, or calculated using Beer's law.
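The Beer's-law route can be sketched as multiplying the dye transmission spectra of the stacked sub-pixels before computing the chromaticity. In the fragment below the spectra are placeholders, and Xbar, Ybar, Zbar are the tabulated tristimulus curves from the earlier sketch.

% Sketch: chromaticity of a front/rear sub-pixel pair via Beer's law:
% the stacked transmission is the product of the individual dye spectra.
% T_front, T_rear and the illuminant S are placeholder spectra on lambda.
lambda  = 380:5:760;
T_front = exp(-((lambda-610)/40).^2);     % hypothetical red dye, front layer
T_rear  = exp(-((lambda-540)/45).^2);     % hypothetical green dye, rear layer
S       = ones(size(lambda));             % stand-in for the D65 illuminant

C  = S .* T_front .* T_rear;              % light reaching the viewer
Xr = trapz(lambda, C .* Xbar);            % Xbar/Ybar/Zbar as tabulated earlier
Yr = trapz(lambda, C .* Ybar);
Zr = trapz(lambda, C .* Zbar);
xy = [Xr, Yr] / (Xr + Yr + Zr);           % one entry of the combination matrix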


Note that because of the effects of interstitial optical layers the co-ordinates of RB do not equal BR for example. In the general 2D case the matrix is not symmetric about its diagonal, and in the nD case the elements of the matrix transpose are not equal.


(d) The distance of one display layer to the next and the refractive index of interstitial elements between layers


(e) An approximation to the bidirectional transmission distribution function of each filter


(f) The distances of each spatial filter from each display layer
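By way of a first-order sketch only, the Beer's law calculation mentioned above may be approximated by multiplying the per-layer transmission spectra and converting the result to chromaticity co-ordinates; the interface assumes the spectra, the D65 illuminant and the CIE 1931 colour matching functions are sampled on a common wavelength grid. Because a simple product of transmittances is order-independent, this sketch cannot by itself reproduce the RB/BR asymmetry caused by interstitial optical layers.

function xy = stackedChromaticity(T1, T2, S, xbar, ybar, zbar)
% First-order Beer's law model of a stacked sub-pixel pair: the combined
% transmittance is the product of the individual layer transmittances.
spd = S .* T1 .* T2;           % spectral power reaching the observer
X = sum(spd .* xbar);          % CIE tristimulus values
Y = sum(spd .* ybar);
Z = sum(spd .* zbar);
xy = [X Y] ./ (X + Y + Z);     % chromaticity co-ordinates (x, y)
end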


The following MATLAB® code is provided by way of example only and details a possible implementation of the algorithm. The algorithm is in no way required to be implemented in MATLAB®, and it should be appreciated that the comments, in conjunction with FIG. 3, are sufficient to teach someone skilled in the art an implementation that is portable to any virtual or Turing machine. The MATLAB® documentation and the http links detailed in the notes section of each function are incorporated herein by way of reference.


The first section of code provides a top-down script controlling the execution of the functions detailed below



FIG. 7 illustrates yet another preferred embodiment of the present invention, implemented with a dual screen display (32) composed of a plurality of transparent imaging screens in the form of a front LCD screen (33) parallel to, but spaced apart from, a rear display screen (34) provided with a backlight (35), and a spatial filter (36) between the imaging screens.


It should be apparent to one skilled in the art that a number of alternative display technologies may be utilised in place of the LCD screens. Furthermore, although FIG. 7 shows a single screen (33) in front of the rear display (34) for the sake of clarity and convenience, any number of additional, at least partially transparent, imaging screens may be incorporated. Although the rear screen (34) may also be an LCD screen, it will be apparent that alternative, non-transparent display technology may be employed.


Such displays provide a three dimensional quality to the scene viewed by an observer, as described in the applicant's co-pending applications PCT/NZ98/00098 and PCT/NZ99/00021, incorporated by reference herein.


Thus for the sake of clarity and to aid understanding of the present invention, the display (32) and associated display screens (33,34) are shown in simplified schematic form in the drawings; elements not essential to illustrate the present invention are omitted from the drawings to aid comprehension.


In this embodiment the point spread function acting upon the image is controlled by varying the apparent distance between the holographic diffuser and the object layer, which is determined by the index of refraction of the interstitial element, and by varying the characteristics of the holographic diffuser.
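A minimal sketch of this relationship, assuming small scattering angles and a Gaussian profile: the lateral spread at the object layer scales with the apparent distance, i.e. the physical separation divided by the refractive index of the interstitial element. All numerical values below are illustrative only.

z_physical = 1.5e-3;                    % diffuser-to-object separation (m)
n          = 1.49;                      % refractive index of interstitial acrylic
theta      = deg2rad(5);                % diffuser scattering half-angle
z_apparent = z_physical / n;            % apparent distance after refraction
fwhm  = 2 * z_apparent * tan(theta);    % PSF full width at half maximum
sigma = fwhm / (2 * sqrt(2 * log(2)));  % equivalent Gaussian sigma
fprintf('PSF sigma = %.3g mm\n', sigma * 1e3);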


To aid understanding of the effect of a holographic diffuser acting upon a single ray, consider FIG. 8, where a single ray strikes the screen at position (37), producing a 2D Dirac delta function distribution. When the screen is moved to position (38), after the diffuser, a 2D distribution depending on the scattering profile of the diffuser is formed. The profiles are typically Gaussian along any line in the x-y plane. Preferably the contours (39) are elliptical. More preferably the contours are rectangular (40).
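A sketch of such a distribution at position (38), with unequal spreads along x and y producing the elliptical contours (39); the grid and spread values are illustrative assumptions.

[x, y] = meshgrid(linspace(-1, 1, 256));
sx = 0.20;  sy = 0.08;                   % unequal spreads give elliptical contours
psf = exp(-0.5 * ((x ./ sx).^2 + (y ./ sy).^2));
psf = psf ./ sum(psf(:));                % normalise to unit energy
contour(x, y, psf, 8); axis equal;       % visualise the elliptical contours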


(41) shows a microscopic view of a circularly symmetric holographic diffuser and (42) shows a microscopic view of a non-circularly symmetric holographic diffuser.


Aspects of the present invention have been described by way of example only and it should be appreciated that modifications and additions may be made thereto without departing from the scope thereof.

Claims
  • 1. A method of controlling a point spread function of an object, said method comprising: determining at least one characteristic of an optical component of a multi-component display, said multi-component display further comprising a first display screen and a second display screen, wherein said first and second display screens overlap as viewed by a viewer; determining a distance from an origin of said object to a position based upon said at least one characteristic; and positioning said optical component at said position for controlling said point spread function of said object.
  • 2. The method of claim 1, wherein said object comprises a displayed image.
  • 3. The method of claim 1 further comprising: adjusting said at least one characteristic of said optical component; and determining an updated position for said optical component based upon a new value of said at least one characteristic.
  • 4. The method of claim 1, wherein said origin is associated with said first display screen, and wherein said object is displayed on said first display screen.
  • 5. The method of claim 1, wherein said determining said distance comprises determining said distance based upon a refractive index of a display screen selected from a group consisting of said first display screen and said second display screen.
  • 6. The method of claim 1, wherein said first and second display screens are selected from a group consisting of a liquid crystal display, an organic light emitting diode display, and a transparent organic light emitting diode display.
  • 7. The method of claim 1, wherein said at least one characteristic of said optical component is associated with optical transmission through said optical component.
  • 8. The method of claim 1, wherein said optical component is selected from a group consisting of a spatial filter, a prismatic filter, a spatial diffuser, and a holographic diffuser.
  • 9. The method of claim 1, wherein said positioning said optical component at said position further comprises positioning said optical component at said position for altering an optical characteristic associated with said object, and wherein said optical characteristic is selected from a group consisting of Moiré interference and blurriness.
  • 10. A method of controlling a point spread function of an object, said method comprising: determining at least one characteristic of an optical component of a projection system, said projection system operable to project an image onto an image layer; determining a distance from an origin of said object to a first position based upon said at least one characteristic; and positioning said optical component at said first position for controlling said point spread function of said object, and further wherein said first position reduces Moiré interference without over-blurring said image.
  • 11. The method of claim 10, wherein said object comprises a projected object generated by said projection system.
  • 12. The method of claim 10, further comprising: adjusting said at least one characteristic of said optical component; and re-positioning said optical component at a second position based upon a second value of said at least one characteristic.
  • 13. The method of claim 10, wherein said at least one characteristic of said optical component is associated with optical transmission through said optical component.
  • 14. The method of claim 10, wherein said optical component is selected from a group consisting of a spatial filter, a prismatic filter, a spatial diffuser, and a holographic diffuser.
  • 15. A method of controlling a point spread function of an object, said method comprising: determining at least one characteristic of an optical component of a projection system, said projection system operable to project an image onto an image layer; determining a distance from an origin of said object to a first position based upon said at least one characteristic; positioning said optical component at said first position for controlling said point spread function of said object; adjusting said at least one characteristic of said optical component; and re-positioning said optical component at a second position based upon a second value of said at least one characteristic for further controlling said point spread function of said object.
  • 16. The method of claim 15, wherein said object comprises a projected object generated by said projection system.
  • 17. The method of claim 15, wherein said at least one characteristic of said optical component is associated with optical transmission through said optical component.
  • 18. The method of claim 15, wherein said optical component is selected from a group consisting of a spatial filter, a prismatic filter, a spatial diffuser, and a holographic diffuser.
  • 19. The method of claim 15, wherein said positioning said optical component at said first position further comprises positioning said optical component at said first position for altering an optical characteristic associated with said object, and wherein said optical characteristic is selected from a group consisting of Moiré interference and blurriness of said object.
Priority Claims (1)
Number Date Country Kind
517457 Mar 2002 NZ national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/NZ03/00046 3/17/2003 WO 00 11/7/2005
Publishing Document Publishing Date Country Kind
WO03/079094 9/25/2003 WO A
US Referenced Citations (124)
Number Name Date Kind
2543793 Marks Mar 1951 A
2961486 Marks Nov 1960 A
3536921 Caulfield Oct 1970 A
3605594 Gerritsen Sep 1971 A
3622224 Wysocki et al. Nov 1971 A
3863246 Trcka, et al. Jan 1975 A
3891305 Fader Jun 1975 A
3918796 Fergason Nov 1975 A
3940788 Abe et al. Feb 1976 A
3955208 Wick et al. May 1976 A
3992082 Katz Nov 1976 A
4153654 Maffitt et al. May 1979 A
4165922 Morrissy Aug 1979 A
4190856 Ricks Feb 1980 A
4239349 Scheffer Dec 1980 A
4281341 Byatt Jul 1981 A
4294516 Brooks Oct 1981 A
4333715 Brooks Jun 1982 A
4447141 Eisenkraft May 1984 A
4448489 Sato et al. May 1984 A
4472737 Iwasaki Sep 1984 A
4523848 Gorman et al. Jun 1985 A
4541692 Collins et al. Sep 1985 A
4613896 Takita et al. Sep 1986 A
4648691 Oguchi et al. Mar 1987 A
4649425 Pund Mar 1987 A
4670744 Buzak Jun 1987 A
4734295 Liu Mar 1988 A
4736214 Rogers Apr 1988 A
4768300 Rutili Sep 1988 A
4792850 Liptoh et al. Dec 1988 A
5032007 Silverstein et al. Jul 1991 A
5046826 Iwamoto et al. Sep 1991 A
5046827 Frost et al. Sep 1991 A
5086354 Bass et al. Feb 1992 A
5107352 Fergason Apr 1992 A
5112121 Chang et al. May 1992 A
5132839 Travis Jul 1992 A
5132878 Carey Jul 1992 A
5261404 Mick et al. Nov 1993 A
5337181 Kelly Aug 1994 A
5367801 Ahn Nov 1994 A
5432626 Sasuga et al. Jul 1995 A
5473344 Bacon et al. Dec 1995 A
5537233 Miura et al. Jul 1996 A
5557684 Wang et al. Sep 1996 A
5583674 Mosley Dec 1996 A
5585821 Ishikura et al. Dec 1996 A
5589980 Bass et al. Dec 1996 A
5600462 Suzuki et al. Feb 1997 A
5689316 Hattori et al. Nov 1997 A
5695346 Sekiguchi et al. Dec 1997 A
5706139 Kelly Jan 1998 A
5745197 Leung et al. Apr 1998 A
5751385 Heinze May 1998 A
5764317 Sadovnik et al. Jun 1998 A
5796455 Mizobata et al. Aug 1998 A
5796509 Doany et al. Aug 1998 A
5822021 Johnson et al. Oct 1998 A
5825436 Knight Oct 1998 A
5838308 Knapp et al. Nov 1998 A
5924870 Brosh et al. Jul 1999 A
5956180 Bass et al. Sep 1999 A
5976297 Oka et al. Nov 1999 A
5990990 Crabtree Nov 1999 A
6005654 Kipfer et al. Dec 1999 A
6018379 Mizobata et al. Jan 2000 A
6061110 Hisatake et al. May 2000 A
6067137 Ohnishi et al. May 2000 A
6100862 Sullivan Aug 2000 A
6114814 Shannon et al. Sep 2000 A
6122103 Perkins et al. Sep 2000 A
6141067 Ikka Oct 2000 A
6147741 Chen et al. Nov 2000 A
6204902 Kim et al. Mar 2001 B1
6239852 Oono et al. May 2001 B1
6287712 Bulovic et al. Sep 2001 B1
6300990 Yamaguchi et al. Oct 2001 B1
6326738 McAndrew Dec 2001 B1
6341439 Lennerstad Jan 2002 B1
6351298 Mitsui et al. Feb 2002 B1
6392725 Harada et al. May 2002 B1
6412953 Tiao et al. Jul 2002 B1
6443579 Myers Sep 2002 B1
6489044 Chen et al. Dec 2002 B1
6504587 Morishita et al. Jan 2003 B1
6512559 Hashimoto et al. Jan 2003 B1
6515881 Chou et al. Feb 2003 B2
6557999 Shimizu May 2003 B1
6562440 Tsuchiya et al. May 2003 B1
6573961 Jiang et al. Jun 2003 B2
6578985 Seraphim et al. Jun 2003 B1
6590605 Eichenlaub Jul 2003 B1
6593904 Marz et al. Jul 2003 B1
6609799 Myers Aug 2003 B1
6639349 Bahadur Oct 2003 B1
6679613 Mabuchi Jan 2004 B2
6693692 Kaneko et al. Feb 2004 B1
6771327 Sekiguchi Aug 2004 B2
6812649 Kim Nov 2004 B2
6845578 Lucas Jan 2005 B1
6897855 Matthies et al. May 2005 B1
6906762 Witehira et al. Jun 2005 B1
6947024 Lee et al. Sep 2005 B2
7072095 Liang et al. Jul 2006 B2
7205355 Liang et al. Apr 2007 B2
7262752 Weindorf Aug 2007 B2
7352424 Searle Apr 2008 B2
7372447 Jacobsen et al. May 2008 B1
20010040652 Hayashi Nov 2001 A1
20020027608 Johnson et al. Mar 2002 A1
20020047601 Shannon et al. Apr 2002 A1
20020064037 Lee May 2002 A1
20020075211 Nakamura Jun 2002 A1
20020105516 Tracy Aug 2002 A1
20020111195 Repin et al. Aug 2002 A1
20020163728 Myers Nov 2002 A1
20020163729 Myers Nov 2002 A1
20030043106 Woo Mar 2003 A1
20030132895 Berstis Jul 2003 A1
20030184665 Berstis Oct 2003 A1
20040012708 Matherson Jan 2004 A1
20050146787 Lukyanitsa Jul 2005 A1
20060103951 Bell et al. May 2006 A1
Foreign Referenced Citations (129)
Number Date Country
2480600 Jul 2000 AU
2453800 Aug 2000 AU
6821901 Dec 2001 AU
2009960 Sep 1990 CA
2104294 Aug 1992 CA
1356584 Jul 2002 CN
1369997 Sep 2002 CN
2730785 Jan 1979 DE
19757378 Jul 1998 DE
29912074 Nov 1999 DE
19920789 May 2000 DE
19916747 Oct 2000 DE
76651 Apr 1983 EP
0 195 584 Sep 1986 EP
0 336 351 Oct 1989 EP
0389123 Sep 1990 EP
454423 Oct 1991 EP
0573433 Dec 1993 EP
595387 May 1994 EP
0802684 Oct 1997 EP
0999088 May 2000 EP
1151430 Aug 2000 EP
1155351 Aug 2000 EP
1046944 Oct 2000 EP
1081774 Mar 2001 EP
1093008 Apr 2001 EP
20000733927 Jul 2001 EP
1231757 Aug 2002 EP
1287401 Mar 2003 EP
1923860 May 2008 EP
1 448 520 Sep 1976 GB
2107482 Apr 1983 GB
2312584 Oct 1997 GB
2314943 Jan 1998 GB
2347003 Aug 2000 GB
2372618 Aug 2002 GB
56-007916 Jan 1981 JP
60024502 Feb 1985 JP
60-103895 Jun 1985 JP
60-122920 Jul 1985 JP
60-233684 Nov 1985 JP
60-244924 Dec 1985 JP
61-166524 Jul 1986 JP
61-200783 Sep 1986 JP
63-067094 Mar 1987 JP
62-122494 Jun 1987 JP
62-161294 Jul 1987 JP
62-191819 Aug 1987 JP
62-191820 Aug 1987 JP
62-235929 Oct 1987 JP
63-100898 May 1988 JP
63-203088 Aug 1988 JP
63-274918 Aug 1988 JP
63-318856 Dec 1988 JP
2-262119 Oct 1990 JP
03-002835 Jan 1991 JP
3021902 Jan 1991 JP
3-101581 Apr 1991 JP
3174580 Jul 1991 JP
3-233548 Oct 1991 JP
3226095 Oct 1991 JP
4-034521 Feb 1992 JP
4-034595 Feb 1992 JP
04-107540 Apr 1992 JP
4191755 Jul 1992 JP
5-007373 Jan 1993 JP
5-091545 Apr 1993 JP
5-142515 Jun 1993 JP
6-233328 Aug 1994 JP
63-039299 Nov 1994 JP
8-076139 Mar 1995 JP
7146473 Jun 1995 JP
07-198921 Aug 1995 JP
07-198942 Aug 1995 JP
7-209573 Aug 1995 JP
7-222202 Aug 1995 JP
8-036375 Feb 1996 JP
08-335043 Dec 1996 JP
09-033858 Feb 1997 JP
9-043540 Feb 1997 JP
9-096789 Apr 1997 JP
9-102969 Apr 1997 JP
9-133893 May 1997 JP
09211392 Aug 1997 JP
9-282357 Oct 1997 JP
9-308769 Dec 1997 JP
10-003355 Jan 1998 JP
10-039821 Feb 1998 JP
10-105829 Apr 1998 JP
10-228347 Aug 1998 JP
10232304 Sep 1998 JP
10-312033 Nov 1998 JP
11-066306 Mar 1999 JP
11-205822 Jul 1999 JP
2000-075135 Mar 2000 JP
2000-111940 Apr 2000 JP
2000-113988 Apr 2000 JP
2000-142173 May 2000 JP
2001-56410 Feb 2001 JP
2001-215332 Apr 2002 JP
2002-097269 Apr 2002 JP
2002-156608 May 2002 JP
2002-258284 Sep 2002 JP
2002-287144 Oct 2002 JP
2002-350772 Dec 2002 JP
2003-015555 Jan 2003 JP
2002-099223 Oct 2003 JP
20005178 Apr 2001 NO
343229 Apr 2001 PL
9112554 Aug 1991 WO
9115930 Oct 1991 WO
9209003 May 1992 WO
9215170 Sep 1992 WO
9627992 Sep 1996 WO
9804087 Jan 1998 WO
9816869 Apr 1998 WO
9847106 Oct 1998 WO
9942889 Aug 1999 WO
9944095 Sep 1999 WO
WO 0017708 Mar 2000 WO
0036578 Jun 2000 WO
0048167 Aug 2000 WO
0049453 Aug 2000 WO
0115128 Mar 2001 WO
0195019 Dec 2001 WO
0235277 May 2002 WO
02091033 Nov 2002 WO
03003109 Jan 2003 WO
9703025 Nov 1997 ZA
Related Publications (1)
Number Date Country
20060103951 A1 May 2006 US