Input output matching in optical processing

Information

  • Patent Grant
  • 7119941
  • Patent Number
    7,119,941
  • Date Filed
    Friday, May 19, 2000
  • Date Issued
    Tuesday, October 10, 2006
Abstract
The invention relates to determining a matching between discrete optical elements, such as an SLM and a CCD, and continuous optical systems, subsystems and/or elements, such as a Fourier lens and individual lenslets of lenslet arrays. Exemplary processing performed by such matched systems includes discrete transforms, such as the discrete Fourier transform, the discrete cosine transform and the discrete wavelet transform.
Description
FIELD OF THE INVENTION

The present invention relates to the field of optical data processing.


BACKGROUND OF THE INVENTION

Many optical processing systems use an SLM (spatial light modulator) to modulate a beam of light, which modulated beam is then processed by various optical elements. The processed light is often detected using a CCD array detector or another type of PDA (planar array detector). Such optical systems utilize discrete optical elements, such as the SLM and detector, even when the nature of the processing is continuous, for example using a Fourier lens. In some systems, this results in a mismatch between the input and output. In some cases, some of the optical processing elements are also discrete.


SUMMARY OF THE INVENTION

An aspect of some embodiments of the invention relates to determining a matching between discrete optical elements, such as an SLM and a CCD, and continuous optical systems, subsystems and/or elements, such as a Fourier lens and individual lenslets of lenslet arrays. Exemplary processing performed by such matched systems includes discrete transforms, such as the discrete Fourier transform, the discrete cosine transform and the discrete wavelet transform.


In an exemplary embodiment of the invention, the matching is between a continuous signal produced by a processing element (or subsystem) and some kind of non-uniform detector, for example a discrete detector array. Exemplary detectors include CCD detectors and optical fiber array detectors (e.g., for further optical processing). The matching may, for example, take the shape of an optical element between the processing system and the detector, an optical element in the subsystem and/or a selection of data sampling using the detector.


In one embodiment of the invention, the active size of elements in the CCD and the SLM is matched. Alternatively or additionally, one or more binary, continuous or semi-continuous spatially varying neutral density filters are used to match the CCD and/or SLM to the rest of the optical system. Alternatively or additionally, not all the CCD pixels are used to detect the processed light, effectively describing a windowing filter on the CCD. In some embodiments, a spatially varying filter can be emulated by suitable weighting of the driving voltages of the SLM or weighting the received signals on the CCD.


In an exemplary embodiment of the invention, the elements are matched for a general system and not for a dedicated system. Optionally, off-the-shelf elements are used and a matching is determined and implemented, instead of tweaking component parameters or designing custom components. Alternatively or additionally, the matching parameters are changed dynamically, as various optical system parameters change.


The matching may be determined at system construction. Alternatively or additionally, the matching is determined on the fly when the system is reprogrammed or responsive to a particular data set or transform to be implemented by the system.


There is thus provided in accordance with an exemplary embodiment of the invention, a method of designing an optical processing system, comprising:


providing an optical processor comprising at least one discrete element, comprising at least a discrete detector and at least one optical processing element which produces a spatially continuous signal;


determining a matching between said discrete detector and said continuous signal, to provide a desired output on said discrete detector, responsive to a data input; and


applying said determined matching in said system. Optionally, determining a matching comprises modeling a behavior of at least one discrete pixel portion of said detector. Alternatively, said detector is an electro-optical detector.


In an exemplary embodiment of the invention, applying said matching comprises selecting a plurality of weights for a plurality of discrete pixel portions of said detector. Optionally, at least 50% of said pixels are ignored. Alternatively or additionally, at least 90% of said pixels are ignored.


In an exemplary embodiment of the invention, applying said matching comprises setting up a matching element. Optionally, said matching element is between said processing element and said detector element. Alternatively, said matching element is between said processing element and an input element, which provides said data.


In an exemplary embodiment of the invention, said element is a flat plate. Alternatively, said matching element is an LCD.


In an exemplary embodiment of the invention, said matching element comprises a mask. Optionally, said mask is a binary mask. Alternatively, said mask is a multi-density mask.


In an exemplary embodiment of the invention, said mask has a plurality of holes defined therein, each hole corresponding to a single discrete pixel of the discrete detector. Optionally, not all the pixels of said discrete detector have a corresponding hole.


In an exemplary embodiment of the invention, an input to said optical processing element comprises an SLM.


In an exemplary embodiment of the invention, an input to said optical processing element comprises a spatially discrete modulated light source.


In an exemplary embodiment of the invention, said optical processing element comprises a lens. Alternatively or additionally, said optical processing element comprises a lenslet array. Alternatively or additionally, said desired output comprises a desired optical transform of said data. Optionally, said transform comprises a Fourier-derived transform. Optionally, said transform comprises a DCT transform. Alternatively, said transform comprises a discrete linear transform.


There is also provided in accordance with an exemplary embodiment of the invention, an optical processing system comprising:


a first, continuous, optical element which generates a continuous spatially modulated beam;


a second, discrete, optical element, which receives said beam; and


a matching element, which matches said discrete element to said continuous element to provide a desired optical behavior of the system. Optionally, said matching element comprises a mask. Alternatively, said mask comprises a binary mask.


In an exemplary embodiment of the invention, said mask comprises a plurality of apertures, one per pixel of said discrete optical element, at least for some of the pixels of the optical element.


In an exemplary embodiment of the invention, said matching element comprises a controllable element.


In an exemplary embodiment of the invention, said matching element is between said discrete and said continuous elements. Alternatively or additionally, at least one matching element is between said continuous elements and an input element. Optionally, said input element comprises an SLM (spatial light modulator).


In an exemplary embodiment of the invention, said continuous element comprises a lens. Alternatively, said continuous element comprises a lenslet array.


There is also provided in accordance with an exemplary embodiment of the invention, a method of determining a matching between optical system elements, comprising:


determining a spatial optical distribution field of an input to a processing system;


determining the effect of the processing system on the field;


selecting a desired discrete transform to be applied by the system; and


determining a desired matching between a detector of said system and the rest of said system to achieve said desired discrete transform. Optionally, determining a spatial distribution field comprises modeling at least one SLM (spatial light modulator) pixel. Alternatively or additionally, determining a desired matching comprises modeling at least one detector pixel.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will now be described in the following detailed description of some embodiments of the invention and from the attached drawings, in which:



FIG. 1 is a schematic diagram of an optical processing system, in accordance with an exemplary embodiment of the invention; and



FIG. 2 is a schematic functional flow diagram of a processing in a matched optical system, in accordance with an exemplary embodiment of the invention.





DETAILED DESCRIPTION OF SOME EMBODIMENTS


FIG. 1 is a schematic diagram of an optical processing system 100, in accordance with an exemplary embodiment of the invention. A light source 102 generates a beam of light, which is spatially modulated in a plurality of SLM pixels 105, by an SLM 104. In an exemplary embodiment of the invention, the light is collimated, for example using a lens (not shown). A driver 106 drives the SLM to have a desired spatial modulation effect on the light, for example, corresponding to a set of input data. The modulated beam is then processed by an optical element 108, for example one or more lenses or a lenslet array. The processed beam is detected and converted into electrical signals by a detector 110, which typically comprises a plurality of detector pixel elements 111. The electrical signals are then provided to electrical circuitry 112, for example for output to an electrical circuit. System 100 may be varied in many ways within the scope of the invention; for example, the modulated light source may be an array (one- or two-dimensional) of light sources, rather than a combination of a light source and an SLM. Various types of optical units may be provided, including, for example, masks, holographic lenses and/or an SLM. In some embodiments, detector 110 is an optical detector, for further optical processing of the results, rather than electrical processing. System 100 as a whole may be coherent, incoherent or partially coherent.


In one embodiment of the invention, in order to achieve a discrete transform using unit 108, the data is optionally provided as an impulse image, with each image pixel being a spatial delta function, and with optical element 108 comprising, for example, a single or an array of Fourier transform lenses or a 4-f system, which implements a continuous convolution or correlation, which, with matching implemented, can be used to apply a discrete convolution. Spatial delta function-type data provision can be achieved using an SLM with a pinhole filter (e.g., an array of pin-holes), for example a filter 107.


In one embodiment of the invention, for example for DCT transforming, both the SLM and the CCD are spatially matched according to the following formula:









$$\begin{cases}\Delta\bar{x}=0.5\cdot\Delta x\\[4pt]\Delta u=\dfrac{\lambda f}{2\,\Delta x\cdot N}\end{cases}\qquad(1)$$








in which Δx defines the distances between centers of pixels 105, e.g., the distance between the delta functions (pinholes) in the SLM, and Δu defines the distances between the centers of pixels 111, e.g., the center-to-center distance between the delta-function receptors in the CCD. Optionally, an ideal (delta function) CCD detector is modeled by providing a pinhole filter in front of the CCD, for example a filter 114, or by creating pinhole detectors on the CCD plane during its manufacture. In these formulae, f is the focal length of optical element 108, λ is the wavelength of the light, N is the number of elements in the discrete sequence of numbers, for example 8 for an 8×8 DCT block transform, and $\Delta\bar{x}$ is the placement of the delta function in the interval between pixel centers (spatial shift) in the SLM, e.g., relative to pixel borders.
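The Eq. (1) matching condition can be sketched numerically. The function below is an illustration only; the 100 mm focal length used in the example call is an assumed value, not taken from the text:

```python
def matching_condition(wavelength, focal_length, pixel_pitch, n_points):
    """Return (delta_x_bar, delta_u) per the Eq. (1) matching condition:
    the SLM delta function sits mid-interval, and the CCD pitch follows
    from the Fourier-plane scaling lambda*f / (2*N*delta_x)."""
    delta_x_bar = 0.5 * pixel_pitch
    delta_u = wavelength * focal_length / (2.0 * pixel_pitch * n_points)
    return delta_x_bar, delta_u

# Values echoing the simulation reported later in the text:
# 670 nm light, 20 um SLM pitch, 8-point DCT blocks, f = 100 mm (assumed).
dx_bar, du = matching_condition(670e-9, 0.1, 20e-6, 8)
# dx_bar = 10 um; du is roughly 209 um, the required CCD sampling pitch.
```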


Since the pixel intervals in the CCD and the SLM are not necessarily the same, some of the CCD pixels may be ignored. In multi-wavelength based embodiments, different pinholes may be designated for different wavelengths, for example using active filter elements, such as LCA arrays, that can provide multiple pinhole sizes and/or locations. Alternatively or additionally, circuitry 112 may selectively read signals from different ones of pixels 111, to provide different effective pixel distributions. A high resolution CCD is optionally provided to allow a significant degree of freedom with respect to pixel selection. Alternatively or additionally, a dedicated CCD may be designed, for example including concentric pixels (physically concentric or read as concentric), so that a variable size pixel may be selected electronically. Optionally, different CCD pixels are ignored for different wavelengths, data sets, desired accuracy, expected post-processing and/or transforms. Optionally, a lens or other optical element is used to match the scaling of the system and the detector and/or to allow more efficient use of the desirable (from a matching point of view) CCD detector elements. The percentage of pixels ignored can vary, for example, being greater than 50%, 80%, 90%, 95% or 99%. Alternatively or additionally, the portion of the pixel masked is greater than 50%, 80%, 90%, 95% or 99%, for example. Suitable selection of a matching condition allows a non-vanishing width to be used for the SLM and/or CCD apertures while still achieving substantially exact processing.
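The selective read-out of a high-resolution CCD described above can be sketched as simple index decimation: keep the detector pixels whose spacing best approximates the required output pitch and ignore the rest. The numbers below (10 µm CCD pitch, 1024 pixels) are illustrative assumptions:

```python
def select_pixels(ccd_pitch, required_du, n_ccd_pixels):
    """Pick the subset of CCD pixel indices whose spacing best
    approximates the required output pitch; the rest are ignored."""
    step = max(1, round(required_du / ccd_pitch))
    kept = list(range(0, n_ccd_pixels, step))
    ignored_fraction = 1.0 - len(kept) / n_ccd_pixels
    return kept, ignored_fraction

# A 10 um pitch CCD read out at an effective ~209 um pitch: most pixels
# are ignored, consistent with the high percentages quoted in the text.
kept, frac = select_pixels(ccd_pitch=10e-6, required_du=209.4e-6,
                           n_ccd_pixels=1024)
```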


The above matching condition may be derived using the following analysis (for a one-dimensional case). In one exemplary embodiment of the invention, the modulated light source and system response are mathematically modeled and then, based on the desired transform to be measured, a desired sampling behavior is determined. This sampling behavior may require changes in the system or an additional matching element. In an exemplary embodiment of the invention, depending on the type of matching, one or more of the following may be modeled: the light source distribution, intra-pixel effects on the detector and between pixel effects (e.g., pixel size and dead zone) on the detector.


The following equation defines an exemplary JPEG-DCT, which is to be achieved using optical processing system 100:










$$F(k)=\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{\pi k(2n+1)}{2N}\right)\qquad(2)$$







Assuming symmetric input, where every block of 16×16 samples (8×8 mirrored twice) is represented as a combination of delta functions, spaced at intervals of size Δx, and transmitted from a Δ{overscore (x)} position inside each interval:










$$s(x)=\frac{1}{2}\left(\sum_{n=0}^{N-1} f(n)\cdot\delta\!\left(x-n\cdot\Delta x-\Delta\bar{x}\right)+\sum_{n=0}^{N-1} f(n)\cdot\delta\!\left(x+n\cdot\Delta x+\Delta\bar{x}\right)\right)\qquad(3)$$








where s(x) is the field distribution function. Applying the optical Fourier transform:











$$\tilde{s}(u)=\int_{-\infty}^{\infty} s(x)\cdot e^{-j\frac{2\pi u x}{\lambda f}}\,dx\qquad(4)$$








The imaginary parts cancel out (due to the input being symmetric):











$$\tilde{s}(u)=\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi u\left(n\cdot\Delta x+\Delta\bar{x}\right)}{\lambda f}\right)\qquad(5)$$








Assuming accurate sampling at the Fourier plane (the CCD) (or, alternatively, compensating for it using suitable matching):











$$\bar{s}(k)=\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi\,k\cdot\Delta u\left(n\cdot\Delta x+\Delta\bar{x}\right)}{\lambda f}\right)\qquad(6)$$








Since equation (2) is desired, we match:











$$\tilde{s}(k)=\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi\,k\cdot\Delta u\left(n\cdot\Delta x+\Delta\bar{x}\right)}{\lambda f}\right)=\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{\pi k(2n+1)}{2N}\right)\qquad(7)$$








One matching condition is:










$$\cos\!\left(\frac{2\pi\,k\cdot\Delta u\left(n\cdot\Delta x+\Delta\bar{x}\right)}{\lambda f}\right)=\cos\!\left(\frac{\pi k(2n+1)}{2N}\right)\qquad(8)$$








Leading to:











$$\frac{\Delta u\cdot\Delta x}{\lambda f}=\frac{1}{2N}\qquad(9)$$








and












$$\frac{2\cdot\Delta u\cdot\Delta\bar{x}}{\lambda f}=\frac{1}{2N}\qquad(10)$$








resulting in the above (Eq. 1) matching condition:









$$\begin{cases}\Delta\bar{x}=0.5\cdot\Delta x\\[4pt]\Delta u=\dfrac{\lambda f}{2\,\Delta x\cdot N}\end{cases}\qquad(11)$$







In some cases, it may not be convenient, possible or desirable to provide delta functions (pinholes or other optical elements) on one or both of the SLM and CCD. The following analysis shows a method of matching a CCD and an SLM, by spatially modulating the light in a less drastic manner, for example using continuous neutral density filters, for filter elements 107 and 114.


The following equation describes a one-dimensional SLM-like object, e.g., a pixelated source:










$$s(x)=\frac{1}{2}\left(\sum_{n=0}^{N-1} f(n)\cdot l\!\left(x-n\cdot\Delta x\right)+\sum_{n=0}^{N-1} f(n)\cdot l\!\left(x+n\cdot\Delta x\right)\right)\qquad(12)$$







where l(x) is a general transmission function of each pixel in the SLM, assumed identical for all pixels and symmetric, so that it can be mirrored. However, it should be noted that a similar but more complex analysis can also be performed in the case where not all the pixels are identical in size, location, intensity or uniformity of transmission, as will be provided below.


After applying the optical (and continuous) Fourier transform:











$$\tilde{s}(u)=\sum_{n=0}^{N-1} f(n)\cdot L(u)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right)\qquad(13)$$







where L(u) is the Fourier transform of l(x). Since the actual sampling is done by summing all intensities on a detector cell (i.e., a CCD pixel cell), equation (5) transforms to:











$$\tilde{s}(k)=\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2}\tilde{s}(u)\cdot W\!\left(u-k\cdot\Delta u\right)du\qquad(14)$$








It should be noted that equation (14) is slightly different from that stated for coherent systems: $\tilde{s}(u)$ denotes field distribution rather than intensity. In equation (14), W(u) is the CCD's pixel detection weight function. Again, it is assumed that W is the same for all pixels, but this assumption is not essential, except that it simplifies the calculations and is often a relatively correct assumption. Using equation (13):











$$\tilde{s}(k)=\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} W\!\left(u-k\cdot\Delta u\right)\cdot\left\{\sum_{n=0}^{N-1} f(n)\cdot L(u)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right)\right\}du\qquad(15)$$








In the DCT example, we match:










$$\cos\!\left(\frac{\pi k(2n+1)}{2N}\right)=\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} W\!\left(u-k\cdot\Delta u\right)\cdot L(u)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right)du\qquad(16)$$








We define:











$$R_k(u)=\frac{1}{\Delta\bar{u}}\,L(u)\,W\!\left(u-k\cdot\Delta u\right)\qquad(17)$$








The matching requirement is thus:











$$\cos\!\left(\frac{\pi k(2n+1)}{2N}\right)=\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} R_k(u)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right)du,\qquad n,k=0,1,\ldots,N-1\qquad(18)$$








This results in N×N Fredholm equations of the first kind, for n,k=0,1,…,N−1 (for the 1D case; in 2D there are N×N×N×N equations).


If one sets Δu to be:










$$\Delta u=\frac{\lambda f}{\Delta x}\qquad(19)$$








then, one may write











$$R_k(u)=\sum_{n=0}^{N-1}\cos\!\left(\frac{\pi k(2n+1)}{2N}\right)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right)\qquad(20)$$







It should be noted that Eq. (19) defines a different matching condition than Eq. (11). This condition was obtained in order to simplify Eq. (18) and assure a simple solution of it. This solution defines a matching between individual pixels in the SLM (n) and the CCD (k), with $u_k\in\left[k\cdot\Delta u-\Delta\bar{u}/2,\;k\cdot\Delta u+\Delta\bar{u}/2\right]$. This matching may be applied, for example, by mounting suitable masks on the SLM and/or the CCD, in order to get a proper $R_k(u)$.
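The mask profile $R_k(u)$ of Eq. (20) is just a finite cosine series and can be evaluated directly. The sketch below (with an assumed 100 mm focal length) shows the k=0 profile; note that for other k the series takes negative values, which is why the text later introduces a non-negative bias function:

```python
import numpy as np

def rk_profile(k, u, dx, wavelength, focal_length, n_points=8):
    """Evaluate R_k(u) per Eq. (20): a cosine series whose coefficients
    are the DCT kernel entries for output pixel k."""
    n = np.arange(n_points)
    dct_kernel = np.cos(np.pi * k * (2 * n + 1) / (2 * n_points))
    carriers = np.cos(2 * np.pi * np.outer(u, n) * dx
                      / (wavelength * focal_length))
    return carriers @ dct_kernel

# Profile over one CCD interval, using the Eq. (19) pitch choice.
wl, f, dx, N = 670e-9, 0.1, 20e-6, 8   # f = 100 mm is assumed
du = wl * f / dx                        # Eq. (19)
u = np.linspace(0, du, 64)
r0 = rk_profile(0, u, dx, wl, f, N)     # r0[0] = N, all carriers in phase
```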


It should be noted that equation 20 actually defines a family of solutions, thus, in some embodiments of the invention, standard geometries of SLMs and CCDs are used, while in others one or both of the SLM and CCD are modified to better fit a particular matching solution. In the general case, the matching may be performed by using neutral filters and by matching at least the locations, if not the sizes of CCD and SLM pixels. Sizes of the complete SLM and detector arrays may be matched, for example using a beam expander.


In an exemplary embodiment of the invention, the above matching condition(s) are applied towards other discrete linear transforms which are to be applied using a Fourier lens:










$$F(k)=\sum_{n=0}^{N-1} f(n)\cdot C(k,n)\qquad(21)$$







Applying the same procedure as in equations (12)–(20), equation (18) now reads:











$$C(k,n)=\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} R_k(u)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right)du,\qquad n,k=0,1,\ldots,N-1\qquad(22a)$$







So for the general 1D linear transform:









$$R_k(u)=\sum_{n=0}^{N-1} C(k,n)\cdot\cos\!\left(\frac{2\pi\,u\cdot n\cdot\Delta x}{\lambda f}\right),\qquad u_k\in\left[k\cdot\Delta u-\Delta\bar{u}/2,\;k\cdot\Delta u+\Delta\bar{u}/2\right]$$






In the context of matching conditions, it should be noted that a matrix arrangement of sub-elements is not required. Rather, it is sufficient that there be a correspondence between the pixels in the SLM and the pixels in the CCD. A construction that is simple to manufacture, however, is a matrix of elements.


The use of the above matching condition may depend on the type of detector used. A standard CCD detector measures power (amplitude squared). Thus, a square root of the measurement may need to be determined. A related issue is that a CCD detector integrates the square of the amplitude, so even after taking a square root the result is not precise. However, in many cases the effect of the error is negligible and usually smaller than that allowed by the JPEG standard, for compression applications. This error is especially small if most of the CCD area (for each pixel) is ignored, e.g., using a suitable mask. Ignoring most of the CCD area is also useful in that it reduces noise, albeit usually requiring more signal strength.
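The square-root error described above can be illustrated with two amplitude samples across one pixel aperture: the CCD integrates the squares, so the root of the integral differs from the mean amplitude. Masking most of the pixel makes the samples nearly constant, shrinking this error, which is consistent with the text. A minimal sketch:

```python
import math

# A standard CCD integrates the *square* of the amplitude over the pixel.
samples = [1.0, 3.0]                   # amplitude across one pixel aperture
power = sum(a * a for a in samples) / len(samples)   # what the CCD reports
recovered = math.sqrt(power)           # sqrt of the power reading
true_mean = sum(samples) / len(samples)  # what a linear detector would give
error = recovered - true_mean          # nonzero: sqrt(mean of squares) > mean
```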


Alternatively, an amplitude (rather than power) detector is used, for example using a detector with a gamma of 0.5.


Following is a description of an exemplary coherent optical processing system including sign retrieval and the squared Fourier nonlinear channel. A Fourier plane bias term is provided to provide sign retrieval, implemented via a central pixel in each channel (other pixels are mirrored, whereas the bias pixel is centered and unique), as follows:










$$s(x)=\frac{1}{2}\left(\sum_{n=0}^{N-1} f(n)\cdot l\!\left(x-\left(n+\tfrac{1}{2}\right)\cdot\Delta x\right)+\sum_{n=0}^{N-1} f(n)\cdot l\!\left(x+\left(n+\tfrac{1}{2}\right)\cdot\Delta x\right)\right)+A\cdot l(x)\qquad(23)$$

$$\bar{s}(u)=L(u)\cdot\left\{A+\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)\right\}\qquad(24)$$








The actual PDA measurement is:











$$\bar{s}(k)=\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2}\left|\tilde{s}(u)\right|^{2}\cdot W(u)\,du\qquad(25)$$








that is,











$$\tilde{s}(k)=\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} W(u)\cdot\left|L(u)\cdot\left\{A+\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)\right\}\right|^{2} du\qquad(26)$$








Since all the terms are real, the $|\cdot|^{2}$ is a regular square:











$$\tilde{s}(k)=\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} W(u)\cdot\left|L(u)\right|^{2}\cdot\left\{A+\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)\right\}^{2} du\qquad(27)$$








Now, we define a new R:











$$R_{2}(u)\equiv W(u)\cdot\left|L(u)\right|^{2}\qquad(28)$$








Substituting the equation (28) into equation (27), and opening the quadratic term in the integrand gives:











$$\begin{aligned}\tilde{s}(k)=&\;\frac{A^{2}}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} R_{2}(u)\,du\\&+\frac{2A}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} R_{2}(u)\cdot\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)du\\&+\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} R_{2}(u)\cdot\sum_{n=0}^{N-1}\sum_{m=0}^{N-1} f(m)\,f(n)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)\cdot\cos\!\left(\frac{2\pi\,u\left(m+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)du\end{aligned}\qquad(29)$$

I.e., $\tilde{s}(k)=A^{2}\cdot I_{1}(k)+2A\cdot I_{2}(k)+I_{3}(k)$.















To recover the sign, equate (29) with the form $(F(k)+B)^{2}=B^{2}+2B\cdot F(k)+F(k)^{2}$. $I_{1}$ can be seen to be a scaling factor for the $B^{2}$ constant bias term. $I_{2}$ is the linear term described above (and solved by a cosine series).
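The bias-based sign retrieval above amounts to: add a bias B large enough that F+B is non-negative, measure the square, then take the root and subtract B. A minimal numerical sketch (B = 10 is an arbitrary illustrative choice):

```python
import math

def recover_signed(measurement, bias):
    """Recover the signed value F from a squared measurement (F + B)^2,
    valid whenever the bias B exceeds |F| so that F + B >= 0."""
    return math.sqrt(measurement) - bias

B = 10.0
for F in (-3.5, 0.0, 2.25):
    m = (F + B) ** 2          # what the square-law detector reports
    assert abs(recover_signed(m, B) - F) < 1e-9
```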


The previously described solution for the linear channel still performs well even as a solution for (29). However, due to (28), R2 is inherently non-negative, contrary to the cosine series. To avoid this issue, a solution of the type R2=R+φ, where φ is a general non-negative bias-like function, can be used in some embodiments of the invention. Equation (29) now reads:












$$\begin{aligned}\tilde{s}(k)=&\;C_{R}(k)+2A\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{\pi\,k\left(n+\tfrac{1}{2}\right)}{N}\right)+C_{\varphi}(k)\\&+2A\sum_{n=0}^{N-1} f(n)\cdot\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2}\varphi(u)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)du\\&+\sum_{n=0}^{N-1}\sum_{m=0}^{N-1} f(m)\,f(n)\cdot\frac{1}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2}\varphi(u)\cdot\cos\!\left(\frac{2\pi\,u\left(n+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)\cdot\cos\!\left(\frac{2\pi\,u\left(m+\tfrac{1}{2}\right)\Delta x}{\lambda f}\right)du,\end{aligned}\qquad(30)$$

$$C_{R}(k)\equiv\frac{A^{2}}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2} R(u)\,du,\qquad C_{\varphi}(k)\equiv\frac{A^{2}}{\Delta\bar{u}}\int_{k\cdot\Delta u-\Delta\bar{u}/2}^{k\cdot\Delta u+\Delta\bar{u}/2}\varphi(u)\,du.$$


















Choices of φ spawn various system architectures: in the most trivial case, φ = const., the φ terms combine nicely; however, a varying φ can be used in alternative embodiments:












$$\bar{s}(k)=C_{R+\varphi}(k)+2A\sum_{n=0}^{N-1} f(n)\cdot\cos\!\left(\frac{\pi\,k\left(n+\tfrac{1}{2}\right)}{N}\right)+\frac{2A\varphi}{\pi}\sum_{n=0}^{N-1} f(n)\cdot\frac{(-1)^{n+k}}{n+\tfrac{1}{2}}+\varphi\sum_{n=0}^{N-1} f(n)^{2},\qquad C_{R+\varphi}(k)\equiv C_{\varphi}(k)+C_{R}(k).\qquad(31)$$








Hence, the DCT can be computed by subtracting the last terms (which can be optionally calculated via electrical logic circuits) from the PDA result.



FIG. 2 is a schematic functional flow diagram 200 of a processing in a matched optical system, in accordance with an exemplary embodiment of the invention, for the above example.


Following are the results of a simulation of matching between an SLM and a CCD, under the assumption that the system is an exact continuous Fourier relation with scaling of λf, where λ is the wavelength of the illuminating source and f is the focal length of the Fourier lens.


The goal is to match the SLM and the CCD sufficiently well (e.g., below a desired error threshold), so that the symmetry in the input plane will allow us to obtain a discrete cosine transform (DCT) relation between the values of the SLM pixels and those of the CCD. Had these pixels been modeled as impulses, the desired result would have been obtained. As described above, however, neither the pixels in the SLM nor those in the CCD are usually impulses (very small apertures); rather, they often have a finite size with a limited dead zone.
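The statement that impulse-modeled pixels yield exactly the desired DCT can be checked numerically: sample the continuous cosine spectrum of Eq. (5) at the CCD positions given by the Eq. (1) matching condition and compare against the DCT sum of Eq. (2). The 100 mm focal length below is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f_n = rng.random(N)                    # one 8-sample input block

# Reference: the JPEG-style DCT sum of Eq. (2).
n = np.arange(N)
k = np.arange(N)[:, None]
F_ref = (f_n * np.cos(np.pi * k * (2 * n + 1) / (2 * N))).sum(axis=1)

# "Optical" side: sample the cosine spectrum of Eq. (5) at the matched
# CCD positions u_k = k * du, with dx_bar = dx/2 and
# du = wavelength * focal / (2 * dx * N) per Eq. (1).
wl, focal, dx = 670e-9, 0.1, 20e-6     # focal length assumed
dx_bar = 0.5 * dx
du = wl * focal / (2 * dx * N)
u_k = np.arange(N) * du
F_opt = np.array([
    (f_n * np.cos(2 * np.pi * u * (n * dx + dx_bar) / (wl * focal))).sum()
    for u in u_k
])

# The phase 2*pi*u_k*(n*dx + dx/2)/(wl*focal) reduces exactly to
# pi*k*(2n+1)/(2N), so the two agree to machine precision.
assert np.allclose(F_ref, F_opt)
```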


Assume that the masks to be generated are simple binary masks (one for the SLM and the other for the CCD) consisting of rectangular holes that appear in the middle of each pixel (SLM and CCD). Following is an attempt to determine the expected performance as a function of the width of each rectangular hole. It should be noted, however, that other patterns may be used, including non-binary patterns, such as a Gaussian density, and other geometric shapes, such as a plurality of pinholes, circular holes and rectangular holes. The shape of the hole may or may not match the shape of the discrete optical element.


If the width of each pixel is reduced (i.e., the dead zone is increased) by placing the masks, better results as indicated above may be obtained. However, less light will be detected and a low signal-to-noise ratio (SNR) can be expected. Therefore, simulations were carried out to determine the maximum active part of each pixel that should be used, in an exemplary system, in order to obtain an error that is less than 0.1% (9-bit precision). Different results may be expected for different systems, as indicated above.


One hundred random 8-bit series and an additional 10 known functions (exponential, sine, cosine, rectangular and linear) were tested. It was found that for a wavelength of 670 nm and a pixel size of 20 μm, substantially any active size in the input plane (SLM) could be tolerated, but a 90% dead zone in the CCD pixels is desirable. The error (in percent) between the obtained results and the exact DCT was determined to be less than 0.09% in the 90% dead zone embodiment simulated.


The above description has focused on matching for a Fourier system. The following is a presentation of designing a matched optical system for an arbitrary discrete linear transformation of the form:










$$g_{k}=\sum_{n=0}^{N-1} C_{k,n}\,f_{n}\qquad(32)$$








where $\{f_{n}\}_{n=0}^{N-1}$ and $\{g_{k}\}_{k=0}^{N-1}$ are the input and output series, respectively.


In order to implement such a transformation optically, let us consider an arbitrary optical system (coherent or incoherent), which is linear in terms of field or intensity distributions, i.e., such a system satisfies the superposition integral:

$$g(x) = \int h(x,\bar{x})\, f(\bar{x})\, d\bar{x} \tag{33}$$

where $f(\bar{x})$ is the continuous input signal and $h(x,\bar{x})$ is the response of the system to a shifted impulse function.


Exemplary special cases of the above general equation are the Fresnel, Fourier and fractional Fourier transforms, which can be easily realized by free-space propagation and a lens. Another exemplary case is the convolution integral, which is obtained for shift-invariant systems, i.e., whenever $h(x,\bar{x}) = h_0(x-\bar{x})$. Convolution can be realized optically with 4-f or imaging systems, under both spatially coherent and incoherent illumination.
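The superposition integral of Eq. (33) becomes a matrix product once it is discretized on a grid, and for a shift-invariant kernel the same product reduces to a discrete convolution. The following is a minimal numerical sketch; the grid, kernel and signal are illustrative assumptions, not quantities from the patent.

```python
import numpy as np

# Discretize g(x) = integral of h(x, xbar) f(xbar) dxbar on a uniform grid.
M = 257                                   # odd, so the grid contains x = 0
x = np.linspace(-5, 5, M)
dx = x[1] - x[0]

h0 = lambda t: np.exp(-t ** 2)            # example shift-invariant kernel
f = np.exp(-((x - 1) ** 2) / 0.5)         # example input signal

# General linear system: H[i, j] = h(x_i, xbar_j); here h(x, xbar) = h0(x - xbar)
H = h0(x[:, None] - x[None, :]) * dx
g_matrix = H @ f                          # superposition integral as g = H f

# Shift-invariant special case: the same result as a discrete convolution
g_conv = np.convolve(f, h0(x), mode="same") * dx

print(np.max(np.abs(g_matrix - g_conv)))  # agreement up to tiny edge terms
```

For a general (non-shift-invariant) kernel only the matrix form applies, which is why the derivation below treats the kernel $h(x,\bar{x})$ in full generality before specializing.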


In order to realize a discrete linear transformation using this class of optical systems, consider an input distribution composed of N pixels located at











$$-\frac{N-1}{2}\,\Delta\bar{x},\; -\frac{N-3}{2}\,\Delta\bar{x},\; \ldots,\; \frac{N-3}{2}\,\Delta\bar{x},\; \frac{N-1}{2}\,\Delta\bar{x}:$$

$$f(\bar{x}) = \sum_{n=0}^{N-1} f_n\, L_n\!\left(\bar{x} + \frac{N-1}{2}\,\Delta\bar{x} - n\,\Delta\bar{x}\right) \tag{34}$$














Each pixel is characterized by a weighting function $L_n(\bar{x})$, which is assumed to be identically zero for any $|\bar{x}| > \frac{1}{2}\Delta\bar{x}$.
.







Further assume that each input pixel consists of N rectangular functions with equal widths and different heights, i.e., defining:










$$\mathrm{rect}(x) = \begin{cases} 1, & |x| \le \tfrac{1}{2} \\ 0, & |x| > \tfrac{1}{2} \end{cases} \tag{35}$$








yields that











$$L_n(\bar{x}) = \sum_{m=0}^{N-1} w_{m,n}\, \mathrm{rect}\!\left(\frac{\bar{x} - m\,\Delta + \frac{N-1}{2}\,\Delta}{\Delta}\right) \tag{36}$$








where $\Delta = \dfrac{\Delta\bar{x}}{N}$.





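To make the pixel model concrete, the following sketch (all values illustrative) builds the piecewise-constant input profile of Eqs. (34)-(36): N pixels of pitch $\Delta\bar{x}$, each split into N strips of width $\Delta = \Delta\bar{x}/N$ with heights $w_{m,n}$, scaled by the data values $f_n$.

```python
import numpy as np

# Illustrative synthesis of the multi-rectangular input f(xbar) of Eq. (34):
# pixel n carries f_n times the weighting function L_n of Eq. (36), which is
# N strips of width delta = dxbar / N with heights w[m, n].
N = 4
dxbar = 1.0               # pixel pitch (Delta x-bar), arbitrary units
delta = dxbar / N         # strip width (Delta)

rng = np.random.default_rng(0)
w = rng.random((N, N))    # w[m, n]: height of strip m within pixel n
f = rng.random(N)         # input samples f_n

S = 100                   # grid samples per strip
xbar = (np.arange(N * N * S) + 0.5) / S * delta - N * dxbar / 2

def profile(xb):
    shifted = xb + N * dxbar / 2                       # map onto [0, N*dxbar)
    pix = np.floor(shifted / dxbar).astype(int)        # pixel index n
    strip = np.floor(shifted / delta).astype(int) % N  # strip index m
    return w[strip, pix] * f[pix]

vals = profile(xbar)
print(vals[0], w[0, 0] * f[0])  # first strip is constant at w[0, 0] * f[0]
```

Each strip of the sampled profile is constant, so the synthesized $f(\bar{x})$ is exactly the multi-rectangular function that the weight design below assumes.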

Introducing the latter expression in Eq. (33) yields










$$g(x) = \sum_{n=0}^{N-1} f_n \sum_{m=0}^{N-1} w_{m,n} \int \mathrm{rect}\!\left(\frac{\bar{x} + \frac{N-1}{2}\,N\Delta - n N \Delta - m\,\Delta + \frac{N-1}{2}\,\Delta}{\Delta}\right) h(x,\bar{x})\, d\bar{x} \tag{37}$$








If the output plane is sampled at:








$$-\frac{N-1}{2}\,\Delta\bar{x},\; -\frac{N-3}{2}\,\Delta\bar{x},\; \ldots,\; \frac{N-1}{2}\,\Delta\bar{x}$$







then the following is obtained:










$$g_k = \sum_{n=0}^{N-1} f_n \sum_{m=0}^{N-1} w_{m,n} \int \mathrm{rect}\!\left(\frac{\bar{x} + \frac{N-1}{2}\,(N+1)\,\Delta - n N \Delta - m\,\Delta}{\Delta}\right) h\!\left(k N \Delta - \frac{N-1}{2}\,N\Delta,\; \bar{x}\right) d\bar{x} \tag{38}$$








where $k = 0, 1, \ldots, N-1$. Comparing Eq. (38) with Eq. (32) yields:










$$C_{k,n} = \sum_{m=0}^{N-1} w_{m,n} \int \mathrm{rect}\!\left(\frac{\bar{x} + \frac{N-1}{2}\,(N+1)\,\Delta - n N \Delta - m\,\Delta}{\Delta}\right) h\!\left(k N \Delta - \frac{N-1}{2}\,N\Delta,\; \bar{x}\right) d\bar{x} \tag{39}$$








Defining a set of N matrices $R_n$, each of size N×N, by:











$$r_n(m,k) = \int \mathrm{rect}\!\left(\frac{\bar{x} + \frac{N-1}{2}\,(N+1)\,\Delta - n N \Delta - m\,\Delta}{\Delta}\right) h\!\left(k N \Delta - \frac{N-1}{2}\,N\Delta,\; \bar{x}\right) d\bar{x} \tag{40}$$








yields that the heights $w_{m,n}$ may be found, for example, by solving a set of N linear equations for each pixel. In other words, in order to find the shape of the n-th pixel, the following may be solved:










$$C_{k,n} = \sum_{m=0}^{N-1} w_{m,n}\, r_n(m,k) \quad\Longrightarrow\quad \bar{w}_n = R_n^{-1} \cdot \bar{C}_n \tag{41}$$







It should be appreciated that the above selection of a form for the masking element is arbitrary and is meant to reflect a particular manufacturing capability on hand. Gray-scale, half-tone and binary masks are all contemplated for various embodiments of the invention.


Up to this point, an arbitrary linear optical system was analyzed. The following is a brief analysis of the shift-invariant case, i.e., where $h(x,\bar{x}) \rightarrow h(x-\bar{x})$. By defining:










$$s(x) = \mathrm{rect}\!\left(\frac{x}{\Delta}\right) * h(x) \tag{42}$$








we obtain











$$r_n(m,k) = s\!\left((k-n)\,N\Delta - m\,\Delta + \frac{N-1}{2}\,\Delta\right) \tag{43}$$








and Eq. (41) still holds.


This particular embodiment is useful for some applications as it includes several interesting optical systems as special cases (e.g., free space propagation, coherent and incoherent imaging). In any particular solution, it may be desirable to check that Rn is not singular and/or that Eq. (41) has a solution.


In order to test the above described matching method, computer simulations were performed. The goal was to realize the discrete cosine transform (DCT) with the help of a quasi-monochromatic, spatially incoherent imaging system with a rectangular aperture, i.e., in this case:










$$g_k = \sum_{n=0}^{N-1} f_n \cos\!\left[\frac{\pi}{2N}\,(2n+1)\,k\right] \tag{44}$$

$$h(x) = \mathcal{F}^{-1}\{\mathrm{OTF}\} = \mathrm{sinc}^2\!\left(\frac{xD}{\lambda f}\right) \tag{45}$$








where N=8, D=5 mm (aperture width), λ=0.5 μm (wavelength) and f=5 cm. The size of each pixel in the input plane was chosen to be 20 μm. The input data was a random i.i.d. sequence, uniformly distributed between zero and one. A suitable (multi-rectangular type) input mask was designed and a system including the input mask was simulated. The absolute error relative to an exact DCT of the random input (assuming a high-quality input mask could be realized) was found to be less than $10^{-6}$.
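The reported simulation can be sketched numerically as follows. This is an independent, reduced illustration rather than the patent's original code: N is lowered from 8 to 4 for brevity, the convolution of Eq. (42) is done by coarse midpoint quadrature, and Eq. (41) is solved with a least-squares routine. The sketch builds $s(x)$ for the sinc² PSF of Eq. (45), forms $R_n$ per Eq. (43), solves for the mask weights $w_{m,n}$, and checks that the weighted sub-pixel rectangles reproduce the DCT matrix of Eq. (44).

```python
import numpy as np

# Reduced (N = 4) sketch of the reported DCT simulation; physical parameters
# follow the text, while the reduced N and quadrature settings are assumptions.
N = 4
wavelength, D, f_len = 0.5e-6, 5e-3, 5e-2   # 0.5 um, 5 mm, 5 cm
a = wavelength * f_len / D                  # sinc scale lambda*f/D = 5 um
pixel = 20e-6                               # input pixel pitch (Delta x-bar)
delta = pixel / N                           # sub-rectangle width (Delta)

def h(x):
    return np.sinc(x / a) ** 2              # incoherent PSF, Eq. (45)

def s(x):
    # Eq. (42): rect(./Delta) convolved with h, by midpoint-rule quadrature
    u = (np.arange(200) + 0.5) / 200 * delta - delta / 2
    return h(x[..., None] - u).mean(axis=-1) * delta

kk = np.arange(N)[:, None]
nn = np.arange(N)[None, :]
C = np.cos(np.pi * (2 * nn + 1) * kk / (2 * N))   # target DCT matrix, Eq. (44)

W = np.zeros((N, N))                              # mask weights w[m, n]
worst = 0.0
for n in range(N):
    ki, mi = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    arg = (ki - n) * N * delta - mi * delta + (N - 1) / 2 * delta
    R = s(arg)                                    # R_n[k, m], Eq. (43)
    w_n, *_ = np.linalg.lstsq(R, C[:, n], rcond=None)  # Eq. (41)
    W[:, n] = w_n
    worst = max(worst, np.max(np.abs(R @ w_n - C[:, n])))

print(worst)  # reconstruction error of the DCT matrix; tiny if R_n invertible
```

As noted above, any such design should verify that each $R_n$ is well-conditioned; the least-squares solve used here simply makes that failure mode visible as a large residual rather than an exception.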


The present application is related to the following four PCT applications filed on same date as the instant application in the IL receiving office, by applicant JTC2000 Development (Delaware), Inc.: PCT/IL00/00283 which especially describes various optical processor designs, PCT/IL00/00285 which especially describes reflective and incoherent optical processor designs, PCT/IL00/00284 which especially describes a method of optical sign extraction and representation, and PCT/IL00/00286 which especially describes a method of processing by separating a data set into bit-planes and/or using feedback. The disclosures of all of these applications are incorporated herein by reference.


It should be appreciated that the assumptions on the behavior of the various optical elements described above were solely for simplicity of mathematical description and should not be construed as unnecessarily limiting the application of the invention, in some embodiments thereof. A same mathematical analysis may be performed without the above assumption, to yield solutions for various exemplary systems where the above assumptions do not hold.


It will be appreciated that the above described methods of optical matching may be varied in many ways, including, changing the order of steps, which steps are performed using electrical components and which steps are performed using optical components, the representation of the data and/or the hardware design. In addition, various distributed and/or centralized hardware configurations may be used to implement the above invention. In addition, a multiplicity of various features, both of methods and of devices, have been described. It should be appreciated that different features may be combined in different ways. In particular, not all the features shown above in a particular embodiment are necessary in every similar embodiment of the invention. Further, combinations of the above features are also considered to be within the scope of some embodiments of the invention. In addition, the scope of the invention includes methods of using, constructing, calibrating and/or maintaining the apparatus described herein. When used in the following claims, the terms “comprises”, “comprising”, “includes”, “including” or the like mean “including but not limited to”.

Claims
  • 1. A method of calculating parameters for a matching mask, comprising: providing an optical processor having a given system design and including a continuous optical element performing a continuous optical transform and at least one discrete input or output array component comprising a plurality of discrete elements; determining a discrete general linear transform to be performed by said processor after said processor is provided with a matching mask to be determined, said mask being associated with the discrete array component for matching discretization behavior of the component with continuous behavior of said continuous element; providing a non-pinhole geometry of a mask element of said matching mask; and calculating a plurality of mask element weights that define a transmittance of said mask element in said geometry, such that said processor at least approximates said general linear transform, by modeling a behavior of the processor including the mask.
  • 2. A method according to claim 1, comprising constructing a mask to have said mask element weights.
  • 3. A method according to claim 1, comprising driving said input or said output array components to apply said mask.
  • 4. A method according to claim 1, wherein said mask element is associated with said input component.
  • 5. A method according to claim 1, wherein said mask element is associated with said output component.
  • 6. A method according to claim 1, wherein said mask element is a multi-density mask element.
  • 7. A method according to claim 1, wherein said mask element is a half-tone mask.
  • 8. A method according to claim 1, wherein said mask element is a binary mask.
  • 9. A method according to claim 1, wherein said mask element is controllable.
  • 10. A method according to claim 1, wherein not all of said discrete elements have a matching transmitting mask element.
  • 11. A method according to claim 1, wherein modeling a behavior comprises modeling a behavior of said input component.
  • 12. A method according to claim 1, wherein modeling a behavior comprises modeling a behavior of said output component.
  • 13. A method according to claim 1, wherein said continuous optical transform is not a continuous version of said discrete general transform.
  • 14. A method according to claim 1, wherein said calculation is dependent on the transform to be performed.
  • 15. A method of determining at least one characteristic of a mask element, comprising: (a) providing an optical processor for performing a given discrete general linear transform and having a given system design and including: a continuous optical element performing a continuous optical transform; at least one discrete input or output array component comprising a plurality of discrete elements; and a matching mask associated with the discrete array component for matching discretization behavior of the component with continuous behavior of said continuous element, said mask having a plurality of mask elements; (b) determining a desired error behavior of said processor; (c) selecting, for each of said mask elements, a value for said at least one characteristic of the mask element; (d) simulating the behavior of said processor with said element characteristic value for at least one representative input data set to said processor; (e) evaluating an error behavior of said processor from said simulating; and (f) repeating (c), (d) and (e) if said error behavior does not meet said desired error behavior.
  • 16. A method according to claim 15, wherein said element has a binary aperture geometry.
  • 17. A method according to claim 15, wherein said element has a non-binary pattern aperture geometry.
  • 18. A method according to claim 15, wherein said element has a rectangular aperture geometry.
  • 19. A method according to claim 15, wherein said error behavior comprises an error rate.
  • 20. A method according to claim 15, wherein said representative input data comprises random data.
  • 21. A method according to claim 15, wherein said representative input data comprises a plurality of known functions.
  • 22. An optical processing system for performing a discrete general linear transform, comprising: an input component and an output component, at least one of the components comprising a discrete array comprising a plurality of discrete elements; a continuous optical element that performs a continuous optical transform; and a matching mask having a masking function defined by a spatially varying transmittance over a plurality of mask elements having a non-pinhole geometry, said mask being associated with said array, for matching discretization behavior of said discrete array with continuous behavior of said continuous element, wherein the masking function of said mask optically interacts with said processing system to cause said processing system to at least approximate said discrete general linear transform.
  • 23. A system according to claim 22, wherein said geometry comprises at least three different density levels.
  • 24. A system according to claim 22, wherein said mask is a binary mask.
  • 25. A system according to claim 22, wherein said mask is a grey-scale mask.
  • 26. A system according to claim 22, wherein said mask is a half-tone mask.
  • 27. A system according to claim 22, wherein said mask is a continuous varying density mask.
  • 28. A system according to claim 22, wherein said masking function defines a rectangular aperture.
  • 29. A system according to claim 22, wherein said masking function defines a plurality of apertures for a single mask element.
  • 30. A system according to claim 22, wherein said mask is controllable.
  • 31. A system according to claim 22, wherein said continuous element comprises a lens.
  • 32. A system according to claim 22, wherein said continuous element comprises a lenslet array.
  • 33. A method according to claim 15, wherein said at least one characteristic comprises a geometry.
  • 34. A method according to claim 15, wherein said at least one characteristic comprises a size.
Priority Claims (2)
Number Date Country Kind
130038 May 1999 IL national
131094 Jul 1999 IL national
RELATED APPLICATIONS

This application is a U.S. national filing of PCT Application No. PCT/IL00/00282, filed May 19, 2000. This application is also a continuation in part of PCT application No. PCT/IL99/00479, filed Sep. 5, 1999, designating the US, the disclosure of which is incorporated herein by reference.

PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/IL00/00282 5/19/2000 WO 00 2/21/2002
Publishing Document Publishing Date Country Kind
WO00/72105 11/30/2000 WO A
US Referenced Citations (29)
Number Name Date Kind
3891968 McMahon Jun 1975 A
3969699 McGlaughlin Jul 1976 A
4005385 Joynson et al. Jan 1977 A
4460969 Chen et al. Jul 1984 A
4590608 Chen et al. May 1986 A
4615619 Fateley Oct 1986 A
4847796 Aleksoff et al. Jul 1989 A
4892370 Lee Jan 1990 A
4892408 Pernick et al. Jan 1990 A
4972498 Leib Nov 1990 A
5072314 Cheng Dec 1991 A
5080464 Toyoda Jan 1992 A
5107351 Leib et al. Apr 1992 A
5166508 Davis et al. Nov 1992 A
5216529 Paek et al. Jun 1993 A
5227886 Efron et al. Jul 1993 A
5235439 Stoll Aug 1993 A
5262979 Chao Nov 1993 A
5274716 Mitsuoka et al. Dec 1993 A
5317651 Refregier et al. May 1994 A
5327286 Sampsell et al. Jul 1994 A
5339305 Curtis et al. Aug 1994 A
5454047 Chang et al. Sep 1995 A
5485312 Horner et al. Jan 1996 A
5537492 Nakajima et al. Jul 1996 A
5659637 Bagley, Jr. et al. Aug 1997 A
5675670 Koide Oct 1997 A
5694488 Hartmann Dec 1997 A
5790686 Koc et al. Aug 1998 A
Continuation in Parts (1)
Number Date Country
Parent PCT/IL99/00479 Sep 1999 US
Child 09979178 US