SPATIAL MODE PROCESSING FOR HIGH-RESOLUTION IMAGING

Information

  • Patent Application
  • 20240242495
  • Publication Number
    20240242495
  • Date Filed
    May 06, 2022
  • Date Published
    July 18, 2024
  • CPC
    • G06V10/92
    • G06V10/60
    • G06V10/764
  • International Classifications
    • G06V10/88
    • G06V10/60
    • G06V10/764
Abstract
Optical imaging includes: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
Description
TECHNICAL FIELD

This disclosure relates to spatial mode processing for high-resolution imaging.


BACKGROUND

Various techniques can be used to capture images of objects in an object plane. For example, a camera or other imaging device can be placed in a focal plane of a lens system in a direct imaging approach. In some situations (e.g., when there are features with extent smaller than the Rayleigh diffraction limit), the optical collection hardware used for direct imaging approaches may be insufficient to resolve the relevant object features, or may require a relatively long integration time for estimation tasks involving discriminating different classes of images.


SUMMARY

In one aspect, in general, a method for optical imaging includes: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.


Aspects can include one or more of the following features.


The method further includes: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among the first set of two or more predetermined target images based at least in part on information derived from the processing.


The method further includes: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among a second set of two or more predetermined target images based at least in part on information derived from the processing.


The processing includes: determining, based at least in part on the set of output optical signals, information that is dependent on a second moment of a transverse spatial distribution of the input optical signal; and performing a statistical analysis of the determined information based on a decision rule that provides a discrimination among the two or more predetermined target images.


The determined information further comprises information that is dependent on a first moment of a spatial distribution of the input optical signal.


The statistical analysis includes additional information obtained by prior measurement or prior estimation.


The decision rule comprises a comparison between the determined information and a set of second moments of the transverse spatial distributions of each of the predetermined target images.


The determined information is dependent on a third moment of a transverse spatial distribution of the input optical signal.


The decision rule comprises a comparison between the determined information and a set of third moments of the transverse spatial distributions of each of the predetermined target images.


The set of target spatial modes includes: a zero-order radially symmetric spatial mode, and two first-order spatial modes that represent transverse spatial distributions along orthogonal axes.


A subset of the set of target spatial modes are Hermite-Gaussian modes.


A subset of the set of target spatial modes are distorted Hermite-Gaussian modes.


A subset of the set of target spatial modes are matched to the spatial mode of a point spread function of an imaging system.


The set of target spatial modes is modified to compensate for misalignment of the spatial mode sorter with respect to the received input optical signal.


The set of target spatial modes is modified to compensate for optical aberrations distorting the received input optical signal.


The method further includes spatially aligning the spatial mode sorter to compensate for changes in a spatial or angular position of the received optical signal.


The two or more predetermined target images represent images of different types of vehicles.


The two or more predetermined target images represent images of different celestial bodies.


The two or more predetermined target images represent images of different biological structures.


The processing includes assigning classification labels to an input optical signal from a set of two or more predetermined classification labels.


In another aspect, in general, one or more non-transitory computer-readable media, having instructions stored thereon that, when executed by a computer system, cause the computer system to perform operations including: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.


In another aspect, in general, an apparatus for imaging a distribution of one or more optical sources includes: a spatial mode sorter that is configurable based on a set of target spatial modes onto which an input optical signal is projected; and a control module. The control module is configured to: configure the spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in the set of target spatial modes; receive a set of output optical signals from the spatial mode sorter during a detection interval of time; process information based at least in part on the set of output optical signals received in the detection interval of time; and provide an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.


Aspects can have one or more of the following advantages.


In implementations of some of the techniques described herein, optical receiver frameworks can classify known objects to a desired accuracy benchmark with substantially less integration time than that possible with an idealized focal-plane camera (e.g., direct imaging).


Some of the techniques described herein are performed in a binary object classification context, where optimal classification performance can be achieved for any two diffraction-limited objects using a spatial mode analyzer or spatial parity sorter. The systems can be used for 2D object classification. In some implementations, the system is able to achieve or approach the best classification accuracy allowed by physics and can reduce the integration time required for a desired accuracy by multiple orders of magnitude over direct imaging. Examples of applications of such systems include detection of exoplanets in extrasolar systems, and diagnosis of medical conditions based on binary fluorescence signaling in cellular biostructures. These techniques can be utilized in high-stability contexts where 1-dimensional or 2-dimensional visual codes or other objects are to be read or identified using small optics at large distances, for example. The techniques may be particularly advantageous in contexts where the use of RF or other active signaling is precluded, for example, in automated sensing contexts.


Other features and advantages will become apparent from the following description, and from the figures and claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, according to common practice, the various features of the drawings are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.



FIG. 1 is a schematic diagram of an example system for spatial mode processing.



FIG. 2 is a schematic diagram illustrating direct imaging and spatial mode sorting.



FIG. 3 is a set of images including subsets (a, b, c, d) of images corresponding to four pairs of objects imaged with direct imaging, before and after each image is convolved with a Gaussian point spread function.



FIG. 4 is a set of four plots (a, b, c, d) comparing direct imaging and spatial mode sorting by plotting a scaling factor related to the success probability of object discrimination.



FIG. 5 is a plot showing the maximum number of objects that can be distinguished at a given threshold error rate with a 2D Gaussian aperture.



FIG. 6A is a schematic diagram of an implementation of a spatial mode sorter system.



FIG. 6B is a schematic diagram illustrating the sorting of a first spatial mode.



FIG. 6C is a schematic diagram illustrating the sorting of a second spatial mode.



FIG. 6D is a schematic diagram illustrating the sorting of a third spatial mode.



FIG. 7 is a schematic diagram of a second implementation of a spatial mode sorter.



FIG. 8 is a schematic diagram of a third implementation of a spatial mode sorter.



FIG. 9 shows a flowchart for an example spatial mode sorting procedure.





DETAILED DESCRIPTION

Object discrimination is at the heart of decision making in medical diagnostics, extrasolar astronomy, and autonomous sensing. For incoherent imaging with large standoff distances, small objects, and/or aperture-limited imaging systems, the physical principle of diffraction impedes accurate discrimination between spatially distinct objects. A classic heuristic criterion, attributed to Rayleigh, holds that two objects cannot be discriminated when their distinguishing features exhibit length scales smaller than the width of the system point spread function. More quantitatively, for hypothesis tests between such “sub-Rayleigh” objects, the probability of correct identification degrades as the PSF more severely perturbs the measured images.


A paradigm shift for sub-Rayleigh imaging has emerged from the calculation of task-specific error bounds that optimize over all measurements permitted by quantum mechanics. These “quantum limits” revealed that direct measurements of the optical intensity profile are responsible for the catastrophic degree of error implied by the Rayleigh criterion, whereas alternative measurements yield far lower error than direct imaging for many tasks. Quantum limits, and “quantum-optimal” measurements that achieve them, were found for specific hypothesis tests including one-vs-two point source discrimination and exoplanet detection. However, no general results exist that broadly apply to real-world object discrimination settings.


Referring to FIG. 1, an example of a system 100 for spatial mode processing includes an optical imaging system 102 that includes an optical processing module 104 (e.g., including an optical front-end and a processing module implemented on a special-purpose or general-purpose processor) for receiving an optical input 103 and producing measurement information 105 as output. The optical processing module 104 receives image information 106 to configure the optical imaging system 102 to discriminate among different images. For example, a set of predetermined target images 108 can be stored in a storage system 110. Between a series of detection intervals of time, the optical processing module 104 is, in some implementations, able to configure a configurable spatial mode sorter 112 to provide separate output optical signals for each spatial mode in a set of target spatial modes, as described in more detail herein. In some implementations, the spatial mode sorter 112 is initially configured and then used in a single detection interval to provide information for discriminating (e.g., for binary discrimination between two predetermined target images). Examples of some aspects of this part of the procedure (e.g., spatial-mode sorting) are described in more detail below.


For hypothesis tests between any two incoherent, quasi-monochromatic 2D objects in the sub-Rayleigh regime, examples are described herein for techniques to 1) compute the quantum Chernoff bound on asymptotic discrimination error, 2) quantify the sub-optimal error rate of direct imaging, and 3) identify a quantum-optimal measurement whose linear-optical design does not depend on the object models. The results of prophetic examples included herein extend to M-ary discrimination: the same object-independent measurement is quantum-optimal for any database of M>2 objects.


Without intending to be bound by theory, for describing some examples, we let $H_j$, $j \in [1, M]$, denote a hypothesis corresponding to one of $M$ candidate objects. Under $H_j$, the quantum state $\eta_j$ on a Hilbert space $\mathcal{H}$ describes one temporal mode of the quasi-monochromatic optical field collected by an imaging system. Many naturally occurring incoherent sources exhibit a small mean photon flux $\varepsilon \ll 1$ per temporal mode, such that multi-photon detection within the optical coherence time is vanishingly rare. In this case, a weak-source approximation uses the Fock expansion $\eta_j = (1-\varepsilon)\,|0\rangle\langle 0| + \varepsilon\rho_j + O(\varepsilon^2)$, where $|0\rangle\langle 0|$ is the quantum vacuum state and the single-photon state $\rho_j$ carries all of the spatial information about the object under $H_j$. Since $\rho_j$ is restricted to single-photon (unary) excitation, its infinite-dimensional spatial-mode structure can be mapped to a Hilbert space $\mathcal{H}^{(1)}$.
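To make the weak-source expansion concrete, the truncated state can be represented as a block-diagonal matrix over the vacuum and single-photon subspaces. The following minimal sketch (not from the patent; the 2×2 single-photon state is a hypothetical illustration) checks that the truncated state remains unit-trace:

```python
import numpy as np

# Minimal sketch of the weak-source approximation: to first order in the mean
# photon flux eps, the per-mode state is eta_j = (1 - eps)|0><0| + eps * rho_j,
# block-diagonal between the vacuum and single-photon subspaces. The 2x2 rho_j
# below is a hypothetical single-photon state used only for illustration.
eps = 1e-3
rho_j = np.array([[0.6, 0.2],
                  [0.2, 0.4]])        # Hermitian, unit trace
eta_j = np.zeros((3, 3))
eta_j[0, 0] = 1 - eps                 # vacuum-state weight
eta_j[1:, 1:] = eps * rho_j           # single-photon block
print(np.trace(eta_j))                # unit trace after dropping O(eps^2)
```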


Let an imaging system with a 2D coherent PSF $\psi(\vec{x})$ relate object- and image-plane position vectors $\vec{x}_{\mathrm{obj}} = \{x_{\mathrm{obj}}, y_{\mathrm{obj}}\}$ and $\vec{x} = \mu\vec{x}_{\mathrm{obj}}$ by the transverse magnification $\mu$. We model the spatial irradiance of the object under $H_j$ by a normalized radiant exitance profile $m_j(\vec{x}_{\mathrm{obj}})$. The state of the collected optical field on $\mathcal{H}^{(1)}$ is then












$$\rho_j = \iint_{-\infty}^{\infty} \frac{1}{\mu^2}\, m_j\!\left(\frac{\vec{x}}{\mu}\right) \big|\psi_{\vec{x}}\big\rangle\!\big\langle\psi_{\vec{x}}\big|\, d^2\vec{x}, \qquad (1)$$
where the pure state $|\psi_{\vec{x}}\rangle = \iint_{-\infty}^{\infty} \psi(\vec{a}-\vec{x})\,|\vec{a}\rangle\, d^2\vec{a}$ encodes the effect of the aperture and $|\vec{x}\rangle$ is a single-photon eigenket at image-plane position $\vec{x}$. In a basis of eigenvectors $|\phi_m\rangle = \iint_{-\infty}^{\infty} \phi_m(\vec{x})\,|\vec{x}\rangle\, d^2\vec{x}$ on $\mathcal{H}^{(1)}$ set by orthogonal 2D functions $\phi_m(\vec{x})$, the density matrix












$$\rho_j = \sum_{m,n=0}^{\infty} d_{j,m,n}\, |\phi_m\rangle\langle\phi_n| \qquad (2)$$
has elements $d_{j,m,n} = \iint_{-\infty}^{\infty} \mu^{-2}\, m_j(\vec{x}/\mu)\, c_{m,n}(\vec{x})\, d^2\vec{x}$, where $c_{m,n}(\vec{x}) = \langle\phi_m|\psi_{\vec{x}}\rangle\langle\psi_{\vec{x}}|\phi_n\rangle$.


Consider a binary hypothesis test between objects m1({right arrow over (x)}obj) and m2({right arrow over (x)}obj) with equal prior probabilities. To make a decision Z∈[1,2], a receiver measures the state η1custom-character or η2custom-character acquired over custom-character temporal modes and then applies a pre-determined decision rule on the outcome(s). If the conditional probability of deciding Hj′ under true hypothesis Hj is Pcustom-character(Z=j′|Hj), the average error probability Perr,custom-character=[Pcustom-character(Z=1|H2)+Pcustom-character(Z=2|H1)]/2 is a symmetric performance metric for the measurement/decision rule scheme.
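The symmetric average error probability can be estimated by simulation for any concrete measurement and decision rule. The sketch below (a toy illustration, not the patent's receiver) uses hypothetical three-outcome statistics under each hypothesis and a maximum-likelihood decision over the copies:

```python
import numpy as np

# Sketch: Monte Carlo estimate of the symmetric average error probability
# P_err = [P(Z=1|H2) + P(Z=2|H1)] / 2 for a toy binary test. Each of M
# temporal modes yields one of three outcomes with hypothetical statistics
# p1 (under H1) or p2 (under H2); the decision rule is maximum likelihood.
rng = np.random.default_rng(0)
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.5, 0.3, 0.2])
M, trials = 50, 2000

def decide(samples):
    # Compare total log-likelihoods of the observed outcomes under H1 and H2
    return 1 if np.log(p1[samples]).sum() >= np.log(p2[samples]).sum() else 2

err_2_given_1 = np.mean([decide(rng.choice(3, M, p=p1)) == 2 for _ in range(trials)])
err_1_given_2 = np.mean([decide(rng.choice(3, M, p=p2)) == 1 for _ in range(trials)])
p_err = (err_2_given_1 + err_1_given_2) / 2
print(p_err)   # decays exponentially with M at the Chernoff-exponent rate
```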


Optimizing over all such schemes, the quantum-limited minimum average error $P_{\mathrm{err,min},\mathcal{M}} \sim e^{-\xi_Q \mathcal{M}}$ follows an exponential decay when $\mathcal{M} \gg 1$, where the quantum Chernoff exponent (QCE) $\xi_Q$ quantifies how efficiently each additional copy of the received state $\eta_j$ suppresses the minimum error. We later show that the quantum limit can be written as $P_{\mathrm{err,min},\mathcal{M}} \sim e^{-\xi_Q^{(1)} N}$, where $N = \varepsilon\mathcal{M}$ is the average photon number of $\eta_j^{\otimes\mathcal{M}}$ and where the per-photon QCE










$$\xi_Q^{(1)} = -\log\!\left[\min_{0\le s\le 1} \mathrm{Tr}\!\left(\rho_1^{s}\, \rho_2^{1-s}\right)\right] \qquad (3)$$
obeys $\xi_Q \approx \varepsilon\,\xi_Q^{(1)}$ for weak-source sub-Rayleigh objects.
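The per-photon QCE of Eq. (3) can be evaluated numerically for small density matrices by taking fractional matrix powers via eigendecomposition and minimizing over $s$ on a grid. A minimal sketch (the two 2×2 states are hypothetical illustrations):

```python
import numpy as np

# Sketch: numerically evaluate the per-photon QCE of Eq. (3),
# -log min_{0<=s<=1} Tr(rho1^s rho2^(1-s)), for two small density matrices.
def psd_power(rho, s):
    """Fractional power of a positive semi-definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)              # guard against tiny negative eigenvalues
    return (v * w**s) @ v.conj().T

def qce(rho1, rho2, num_s=201):
    ss = np.linspace(0.0, 1.0, num_s)
    overlaps = [np.trace(psd_power(rho1, s) @ psd_power(rho2, 1.0 - s)).real
                for s in ss]
    return -np.log(min(overlaps))

rho1 = np.diag([0.9, 0.1])                 # hypothetical single-photon states
rho2 = np.diag([0.8, 0.2])
print(qce(rho1, rho2))                     # > 0 for distinct states
```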


The most general description of a measurement, a positive operator-valued measure (POVM), consists of a set of positive semi-definite operators $\{\Pi_z\}_z$ on $\mathcal{H}$, linked to measurement outcomes $\{z\}$ on an outcome space $\mathcal{Z}$, that resolve the identity operator as $\sum_z \Pi_z = I$. For a particular measurement performed on $\eta_j^{\otimes\mathcal{M}}$, the minimum average error probability among all decision rules goes as $P_{\mathrm{err,min,Meas},\mathcal{M}} \sim e^{-\xi_{\mathrm{Meas}}\mathcal{M}}$, where $\xi_{\mathrm{Meas}}$ is the Chernoff exponent (CE) for the chosen measurement. The quantum and classical statistics are related by the achievable quantum Chernoff bound $\xi_{\mathrm{Meas}} \le \xi_Q$; that is, the QCE automatically optimizes over the CEs of all POVMs on $\mathcal{H}^{\otimes\mathcal{M}}$. Under the weak-source approximation, we show that the minimal error of any measurement that uses temporally-resolved photon counting goes as $P_{\mathrm{err,min,Meas},\mathcal{M}} \sim e^{-\xi_{\mathrm{Meas}}^{(1)} N}$, where $\xi_{\mathrm{Meas}} \approx \varepsilon\,\xi_{\mathrm{Meas}}^{(1)}$ in the sub-Rayleigh regime and where










$$\xi_{\mathrm{Meas}}^{(1)} = -\log\!\left[\min_{0\le s\le 1} \sum_{z\in\mathcal{Z}^{(1)}} P(z|\rho_1)^{s}\, P(z|\rho_2)^{1-s}\right] \qquad (4)$$
is the per-photon CE, which depends on the probabilities $P(z|\rho_j) = \mathrm{Tr}(\Pi_z^{(1)}\rho_j)$ of outcomes, in a single-photon outcome space $\mathcal{Z}^{(1)}$, of the reduced POVM $\{\Pi_z^{(1)}\}_{z\in\mathcal{Z}^{(1)}}$ on $\mathcal{H}^{(1)}$.
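For a measurement with a discrete outcome set, Eq. (4) reduces to a one-dimensional minimization over the outcome probabilities. A short sketch (the two distributions are hypothetical statistics for a three-outcome mode sorter):

```python
import numpy as np

# Sketch: the per-photon classical Chernoff exponent of Eq. (4), computed
# from the outcome probabilities P(z|rho_1) and P(z|rho_2) of a discrete
# measurement. The distributions below are hypothetical.
def chernoff_exponent(p, q, num_s=201):
    p, q = np.asarray(p, float), np.asarray(q, float)
    ss = np.linspace(0.0, 1.0, num_s)
    vals = [np.sum(p**s * q**(1.0 - s)) for s in ss]
    return -np.log(min(vals))

p = [0.90, 0.05, 0.05]
q = [0.80, 0.15, 0.05]
print(chernoff_exponent(p, q))   # > 0; bounded above by the QCE
```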


A measurement whose CE matches the QCE ($\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$) is considered to be quantum-optimal for the given hypothesis test. Conversely, a relative gap ($\xi_{\mathrm{Meas}}^{(1)} < \xi_Q^{(1)}$) indicates a fundamental sub-optimality in the measurement that cannot be remedied by data post-processing.


Our goals are twofold: compute the QCE $\xi_Q^{(1)}$ for generalized sub-Rayleigh object discrimination and find a universally optimal measurement for which $\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$. As a first step, for an arbitrary object $m_2(\vec{x}_{\mathrm{obj}})$ and an object $m_1(\vec{x}_{\mathrm{obj}})$ that is a single point source at position $\vec{x}_{1,\mathrm{obj}} = \vec{x}_1/\mu$, we find that the QCE is exactly











$$\xi_Q^{(1)} = -\log\!\left[\iint_{-\infty}^{\infty} \frac{1}{\mu^2}\, m_2\!\left(\frac{\vec{x}-\vec{x}_1}{\mu}\right) \big|\Gamma(\vec{x})\big|^2\, d^2\vec{x}\right], \qquad (5)$$
where $\Gamma(\vec{x}) = \langle\psi_{\vec{\Omega}}|\psi_{\vec{x}}\rangle$ is the 2D autocorrelation of the PSF and $\vec{\Omega}$ denotes the origin of the image-plane coordinate system. In this case, $\xi_{\mathrm{BSPADE}}^{(1)} = \xi_Q^{(1)}$ is achieved by a 2D binary spatial mode demultiplexing (BSPADE) device that passively couples the PSF-matched spatial mode (i.e., $|\psi_{\vec{\Omega}}\rangle$) to one shot-noise-limited photon-counting detector and all other light to a second, identical detector. As an example, for discriminating one vs. two point sources with a 2D Gaussian PSF $\psi(\vec{x}) = (2\pi\sigma^2)^{-1/2}\exp(-(x^2+y^2)/4\sigma^2)$, where $d$ is the source separation under $H_2$, we confirm that the BSPADE CE enjoys a quadratic ($d^2$) scaling advantage for $d \ll \sigma$ over the CE of idealized 2D direct imaging (an infinite spatial bandwidth, unity fill factor, unity quantum efficiency photon-counting detector array).
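For the Gaussian PSF above, direct integration gives the closed-form autocorrelation $\Gamma(\vec{x}) = \exp(-|\vec{x}|^2/8\sigma^2)$, which the sketch below checks numerically for a displacement $d$ along $x$ (the values of $\sigma$, $d$, and the grid are arbitrary choices for the check):

```python
import numpy as np

# Sketch: verify that for psi(x, y) = (2*pi*sigma^2)^(-1/2)
# * exp(-(x^2 + y^2)/(4*sigma^2)), the overlap <psi_0 | psi_d> equals
# exp(-d^2/(8*sigma^2)). Grid and parameter values are arbitrary.
sigma, d = 1.0, 0.5
ax = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(ax, ax, indexing="ij")

def psi(x, y):
    return (2.0 * np.pi * sigma**2) ** -0.5 * np.exp(-(x**2 + y**2) / (4.0 * sigma**2))

dA = (ax[1] - ax[0]) ** 2
gamma_numeric = np.sum(psi(X, Y) * psi(X - d, Y)) * dA   # discrete overlap integral
gamma_closed = np.exp(-d**2 / (8.0 * sigma**2))
print(gamma_numeric, gamma_closed)                       # should agree closely
```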


We now generalize to arbitrary $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$, with applications in bioimaging, astronomy, and computer vision. We focus on the sub-Rayleigh limit $\gamma \ll 1$, where $\gamma = \mu\theta/\sigma$ quantifies the geometric ratio between the magnified spatial extent $\theta$ of the object(s) and the PSF width $\sigma$.


We also define $\tilde{m}_j(\vec{x}_{\mathrm{obj}}) = \theta^2 m_j(\theta\vec{x}_{\mathrm{obj}})$, $\tilde{\psi}(\vec{x}) = \sigma\psi(\sigma\vec{x})$, and $\tilde{\Gamma}(\vec{x}) = \Gamma(\sigma\vec{x})$ as non-dimensionalized representations of the object(s), the coherent PSF, and the PSF autocorrelation function, respectively, to isolate the influence of diffraction (i.e., $\gamma$) from that of the object and aperture. In some implementations, the objects' 2D centroids coincide at a location known to the receiver, either from prior knowledge or a preliminary measurement, such that the task is object identification rather than localization, and the PSF $\psi(\vec{x})$ is even in $x$ and $y$, as with a circularly symmetric aperture.
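The non-dimensionalized PSF also determines a PSF-adapted orthonormal mode basis, generated by Gram-Schmidt orthonormalization of successive derivatives of the PSF. A minimal 1D numerical sketch, assuming a Gaussian amplitude PSF (not the patent's implementation):

```python
import numpy as np

# Sketch: build the first three PSF-adapted (PAD) modes in 1D by Gram-Schmidt
# orthonormalization of derivatives of a Gaussian amplitude PSF sampled on a
# grid. For a Gaussian PSF these modes are Hermite-Gaussian functions.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
psf = np.exp(-x**2 / 4.0)                # unnormalized Gaussian amplitude

derivs = [psf]
for _ in range(2):                       # zeroth, first, second derivatives
    derivs.append(np.gradient(derivs[-1], dx))

def inner(f, g):
    return np.sum(f * g) * dx            # discrete L2 inner product

pad_modes = []
for deriv in derivs:
    v = deriv.copy()
    for b in pad_modes:
        v -= inner(b, v) * b             # remove components along earlier modes
    pad_modes.append(v / np.sqrt(inner(v, v)))

gram = np.array([[inner(a, b) for b in pad_modes] for a in pad_modes])
print(np.round(gram, 8))                 # approximately the 3x3 identity
```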


To derive the generalized QCE, we represent $\rho_1$ and $\rho_2$ [Eq. (2)] in a basis of PSF-adapted (PAD) eigenvectors $|\tilde{\phi}_m\rangle = \iint_{-\infty}^{\infty} \tilde{\phi}_m(\vec{x})\,|\vec{x}\rangle\, d^2\vec{x}$ on $\mathcal{H}^{(1)}$ via Gram-Schmidt orthogonalization of the 2D Cartesian derivatives of the non-dimensionalized PSF $\tilde{\psi}(\vec{x})$. For a 2D Gaussian PSF, the PAD basis functions $\tilde{\phi}_m(\vec{x})$ are Hermite-Gaussian functions. After expanding $\rho_1$ and $\rho_2$ in powers of $\gamma \ll 1$ and truncating to finite dimensions, we use operator perturbation theory to find











$$\xi_Q^{(1)} = \max_{0\le s\le 1}\Big[\big(s\,m_{1,x^2} + (1-s)\,m_{2,x^2} - m_{1,x^2}^{s}\,m_{2,x^2}^{1-s}\big)\,\Gamma_{x^2} + \big(s\,m_{1,y^2} + (1-s)\,m_{2,y^2} - m_{1,y^2}^{s}\,m_{2,y^2}^{1-s}\big)\,\Gamma_{y^2}\Big]\,\gamma^2 + O(\gamma^3), \qquad (6)$$
where $m_{j,x^k y^l} = \iint_{-\infty}^{\infty} x_{\mathrm{obj}}^{k}\, y_{\mathrm{obj}}^{l}\, \tilde{m}_j(\vec{x}_{\mathrm{obj}})\, d^2\vec{x}_{\mathrm{obj}}$ are spatial moments of the non-dimensionalized object models and $\Gamma_{x^k y^l} = -\big[\mathrm{Re}\big(\partial^{k+l}\tilde{\Gamma}(\vec{x})/\partial x^k \partial y^l\big)\big]_{\vec{x}=\vec{\Omega}}$ are derivatives of the PSF autocorrelation function. The QCE of Eq. (6) represents the quantum limit for discrimination between any two incoherent objects in the sub-Rayleigh limit.
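The second moments entering Eq. (6) are straightforward to compute for a discretized object model. A short sketch (the filled ellipse is a hypothetical stand-in for a non-dimensionalized target image):

```python
import numpy as np

# Sketch: second spatial moments m_{x^2} and m_{y^2} of a discretized,
# normalized object model, as used in Eq. (6). The filled ellipse is a
# hypothetical test object.
n = 256
ax = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(ax, ax, indexing="ij")
m = np.where((X / 0.6)**2 + (Y / 0.3)**2 <= 1.0, 1.0, 0.0)
m /= m.sum()                     # normalize total irradiance to one

m_x2 = float(np.sum(X**2 * m))   # second moment along x
m_y2 = float(np.sum(Y**2 * m))   # second moment along y
print(m_x2, m_y2)                # the wider axis has the larger second moment
```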


The CE for direct imaging with a zeroless PSF that is separable in $x$ and $y$ is given by











$$\xi_{\mathrm{Direct}}^{(1)} = (1/32)\,(\mathcal{K}_x + \mathcal{K}_y)\,\gamma^4 + O(\gamma^5), \qquad (7)$$
with $\mathcal{K}_a = (m_{1,a^2} - m_{2,a^2})^2 \iint_{-\infty}^{\infty} \psi_{a^2}(\vec{x})^2 / |\tilde{\psi}(\vec{x})|^2\, d^2\vec{x}$ for $a \in \{x, y\}$ and where $\psi_{x^k y^l}(\vec{x}) = \partial^{k+l}|\tilde{\psi}(\vec{x})|^2/\partial x^k \partial y^l$ are derivatives of the incoherent PSF. Eqs. (6) and (7) reveal a quadratic scaling sub-optimality in direct imaging ($\xi_{\mathrm{Direct}}^{(1)} \sim \gamma^4$ vs. $\xi_Q^{(1)} \sim \gamma^2$) for all binary discrimination tasks. Alternatively, a “TriSPADE” measurement sorts the collected light between the PSF-matched spatial mode and the first-order PAD-basis modes in two perpendicular dimensions, using only linear optics and photodetectors to implement a POVM $\Pi_0 = |\tilde{\phi}_0\rangle\langle\tilde{\phi}_0|$, $\Pi_1 = |\tilde{\phi}_1\rangle\langle\tilde{\phi}_1|$, and $\Pi_2 = |\tilde{\phi}_2\rangle\langle\tilde{\phi}_2|$ that does not depend on the candidate object models. The resulting CE $\xi_{\mathrm{TriSPADE}}^{(1)}$ achieves the QCE when $\gamma \ll 1$.
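The leading-order QCE of Eq. (6) is simple to evaluate once the object moments and PSF-autocorrelation derivatives are known. In the sketch below the moment values are hypothetical placeholders, and $\Gamma_{x^2} = \Gamma_{y^2} = 1/4$ follows from the non-dimensionalized Gaussian PSF autocorrelation $\tilde{\Gamma}(\vec{x}) = \exp(-|\vec{x}|^2/8)$:

```python
import numpy as np

# Sketch: evaluate the leading-order QCE of Eq. (6) by maximizing the
# bracketed expression over s on a grid. Moment inputs are hypothetical;
# gx2 = gy2 = 1/4 corresponds to a Gaussian PSF autocorrelation.
def qce_leading_order(m1x2, m1y2, m2x2, m2y2, gx2, gy2, gamma, num_s=1001):
    s = np.linspace(0.0, 1.0, num_s)
    tx = (s * m1x2 + (1 - s) * m2x2 - m1x2**s * m2x2**(1 - s)) * gx2
    ty = (s * m1y2 + (1 - s) * m2y2 - m1y2**s * m2y2**(1 - s)) * gy2
    return float(np.max(tx + ty) * gamma**2)

# "Wide" vs. "tall" object pair: the x and y second moments are exchanged
xi = qce_leading_order(0.09, 0.0225, 0.0225, 0.09, 0.25, 0.25, gamma=0.1)
print(xi)   # positive whenever the objects' second moments differ
```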



FIG. 2 shows an example of a direct imaging system 200 that collects incoming light and images the light onto a detector 202. In contrast, a spatial mode sorting system 210 collects incoming light and a spatial mode sorter 212 projects it onto a first spatial mode 214A that is detected by a first detector 213A, a second spatial mode 214B that is detected by a second detector 213B, and a third spatial mode 214C that is detected by a third detector 213C.


The upper two images of FIG. 3a show the object irradiance of a vertical and horizontal ellipse while the lower two images of FIG. 3a show their Gaussian point spread function convolved image-plane intensity profiles. The upper two images of FIG. 3b show the object irradiance of a filled and hollow pore while the lower two images of FIG. 3b show their Gaussian point spread function convolved image-plane intensity profiles. The upper two images of FIG. 3c show the object irradiance of two possible exoplanet detection scenarios while the lower two images of FIG. 3c show their Gaussian point spread function convolved image-plane intensity profiles. The upper two images of FIG. 3d show the object irradiance of two different QR codes while the lower two images of FIG. 3d show their Gaussian point spread function convolved image-plane intensity profiles.


To illustrate our results, in FIGS. 4a-4d we numerically evaluate $\xi_Q^{(1)}$, $\xi_{\mathrm{Direct}}^{(1)}$, and $\xi_{\mathrm{TriSPADE}}^{(1)}$ for the examples depicted in FIGS. 3a-3d, respectively. Thick lines represent the analytical lowest-order (in $\gamma$) results for $\xi_Q^{(1)}$ (solid) and $\xi_{\mathrm{Direct}}^{(1)}$ (dashed). Thin lines represent numerical results for $\xi_Q^{(1)}$ (solid), $\xi_{\mathrm{Direct}}^{(1)}$ (dotted), and $\xi_{\mathrm{TriSPADE}}^{(1)}$ (dashed). A misalignment of $\theta/10$ is used for the lower TriSPADE CE in FIG. 4c. The lowest-order behavior of the QCE in $\gamma$ [Eq. (6)] is an excellent approximation for both the full QCE and the TriSPADE CE throughout the sub-Rayleigh regime ($\gamma < 1$), and the results clearly exhibit the expected $O(\gamma^2)$ scaling gap. We also find TriSPADE to be robust to optical misalignment; a mode sorter that is misaligned from the mutual object centroid retains the quadratic scaling advantage over direct imaging. These results suggest that TriSPADE can perform a wide range of sub-Rayleigh hypothesis tests with substantially less error than conventional imaging methods.


We now extend our analysis to $M > 2$ equiprobable objects, such as a database of QR codes. The M-ary QCE $\xi_{Q,M}^{(1)} = \min_{i\ne j} \xi_{Q,i,j}^{(1)}$, which characterizes the quantum-limited asymptotic error for discriminating $M$ states, is found by minimizing the pairwise QCEs $\xi_{Q,i,j}^{(1)}$ over each pair of states $\{\rho_i, \rho_j\}$. The similarly defined M-ary CE $\xi_{\mathrm{Meas},M}^{(1)} = \min_{i\ne j} \xi_{\mathrm{Meas},i,j}^{(1)}$ obeys the multiple quantum Chernoff bound $\xi_{\mathrm{Meas},M}^{(1)} \le \xi_{Q,M}^{(1)}$. We have shown that $\xi_{\mathrm{TriSPADE},i,j}^{(1)} = \xi_{Q,i,j}^{(1)}$ for any two states when $\gamma \ll 1$. Therefore, the TriSPADE POVM, which does not depend on the candidate states, simultaneously achieves the QCE for all pairs of states in a database. It follows that $\xi_{\mathrm{TriSPADE},M}^{(1)} = \xi_{Q,M}^{(1)}$. We conclude that TriSPADE is a quantum-optimal measurement for any M-object database in the sub-Rayleigh limit.
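The M-ary exponent reduces to a minimum over all object pairs, which the following toy sketch illustrates (`pairwise_exponent` is a hypothetical stand-in for a routine computing the pairwise exponent; the scalar "states" and squared-difference surrogate are for illustration only):

```python
import itertools

# Sketch: the M-ary exponent is the minimum pairwise exponent over all object
# pairs in the database.
def mary_exponent(states, pairwise_exponent):
    return min(pairwise_exponent(a, b)
               for a, b in itertools.combinations(states, 2))

# Toy illustration with scalar stand-in "states" and a squared-difference
# surrogate for the pairwise exponent
states = [0.10, 0.40, 0.45, 0.90]
xi_M = mary_exponent(states, lambda a, b: (a - b) ** 2)
print(xi_M)   # dominated by the closest pair in the database
```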


Finally, in FIG. 5 we show how many objects can be distinguished to a desired accuracy with a conventional or quantum-optimal measurement. The inset of FIG. 5 shows the error probability vs. mean detected photon number. We find that TriSPADE resolves more objects than direct imaging when $\gamma < \sqrt{2}/(\sqrt{m_{x^2,\max}} + \sqrt{m_{x^2,\min}})$, regardless of the threshold error rate $\xi_{\mathrm{Thresh}}^{(1)}$. As the threshold is relaxed, meaning more photons are available and/or more error can be tolerated (inset), the gap between TriSPADE and direct imaging grows to over two orders of magnitude for small $\gamma$. We conclude that TriSPADE significantly increases the complexity of distinguishable sub-Rayleigh object databases without compromising performance.


The examples described herein show that a realizable optical receiver could substantially enhance decision-making accuracy for super-resolution biological, astronomical, and terrestrial imaging.


The spatial mode sorting may be performed with various optical configurations, as discussed below.



FIG. 6A shows an example of a spatial mode sorting system 600. The incoming beam 602 reflects off a spatial light modulator 604 containing five independently controlled regions that modify the intensity or phase of the incoming beam 602. A mirror 606 reflects the incoming beam 602 after it has interacted with one or more of the regions of the spatial light modulator 604. The incoming beam 602 is then sent to a detector 608, such as an EMCCD or CMOS camera. The information produced by the detector is then sent to a processor 610, such as an FPGA, which can then control how the spatial light modulator 604 modifies the intensity and phase of subsequently arriving light in the incoming beam 602.



FIG. 6B shows the spatial mode sorting system 600 with an incoming beam containing a first mode 620A and sorting it to a first region of the detector image 622A.



FIG. 6C shows the spatial mode sorting system 600 with an incoming beam containing a second mode 620B and sorting it to a second region of the detector image 622B.



FIG. 6D shows the spatial mode sorting system 600 with an incoming beam containing a third mode 620C and sorting it to a third region of the detector image 622C.


If a superposition of the modes 620A, 620B, and 620C is received in the beam 602, the ratio of the spot intensities on the resulting detector image can be used to infer the relative strength of the modes in the received beam 602.
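This inference step amounts to normalizing the per-region spot intensities. A minimal sketch (the photon counts are hypothetical):

```python
import numpy as np

# Sketch: infer relative mode weights from the spot intensities registered in
# the three detector regions 622A-622C. The photon counts are hypothetical.
counts = np.array([900.0, 60.0, 40.0])   # spots for modes 620A, 620B, 620C
weights = counts / counts.sum()          # relative strength of each mode
print(weights)
```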



FIG. 7 shows a second example of a spatial mode sorting system 700. A first spatial light modulator 701 reflects and modifies the intensity or phase of the incoming beam 710. A second spatial light modulator 702, a third spatial light modulator 703, a fourth spatial light modulator 704, and a fifth spatial light modulator 705 further reflect and modify the intensity or phase of the incoming beam 710.



FIG. 8 shows a third example of a spatial mode sorting system 800. A first spatial light modulator 801 transmits and modifies the intensity or phase of the incoming beam 810. A second spatial light modulator 802, a third spatial light modulator 803, a fourth spatial light modulator 804, and a fifth spatial light modulator 805 further transmit and modify the intensity or phase of the incoming beam 810.



FIG. 9 shows a flowchart for an example spatial mode sorting procedure 900 for discriminating among a first set of predetermined target images. The procedure 900 includes configuring (902) a spatial mode sorter to provide, in response to receiving (904) an input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes. The procedure 900 includes processing (906) information based at least in part on the set of output optical signals received from the spatial mode sorter during a detection interval of time. The procedure 900 includes providing (908) an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. The procedure 900 may be performed when, during the detection interval of time, a total number of the output optical signals is greater than two and less than ten. The procedure 900 may be iterated multiple times until a goal is reached (e.g., until no further imaging is allowed), with each iteration providing (908) an estimated measurement for discriminating among the first set of two or more predetermined target images. The procedure 900 may alternatively be iterated multiple times to provide (908) a plurality of estimated measurements for discriminating among a plurality of sets of two or more predetermined target images.
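The iterated configure/receive/process/provide loop of procedure 900 can be sketched in code. This is a hedged outline under assumptions: the four callables standing in for steps 902-908, the stopping signal returned by the decision step, and the iteration cap are all hypothetical interfaces introduced for illustration.

```python
def run_sorting_procedure(configure, receive, process, decide, max_iterations=10):
    """Outline of procedure 900: each iteration configures the spatial mode
    sorter (902), receives output optical signals for one detection interval
    (904), processes them (906), and provides an estimated measurement (908).
    Iteration stops early once the decision step reports the goal is reached."""
    estimates = []
    for _ in range(max_iterations):
        configure()                        # step 902: set target spatial modes
        signals = receive()                # step 904: one detection interval
        features = process(signals)        # step 906: e.g., modal intensities
        estimate, done = decide(features)  # step 908: discriminate target images
        estimates.append(estimate)
        if done:
            break
    return estimates
```

Each entry of the returned list corresponds to one iteration's estimated measurement, matching the multiple-iteration variants described for the procedure.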


The techniques described above for controlling and configuring a spatial mode sorting system can be implemented using software for execution on a computer system. For example, the software can define procedures in one or more computer programs that execute on one or more programmed or programmable computer systems (e.g., desktop, distributed, client/server computer systems) each including at least one processor, at least one data storage system (e.g., including volatile and non-volatile memory and/or storage elements), at least one input device (e.g., keyboard and mouse) or port, and at least one output device (e.g., monitor) or port. The software may form one or more modules of a larger program.


The software may be provided on a non-transitory medium such as a computer-readable storage medium (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer system, or delivered over a communication medium (e.g., encoded in a propagated signal) such as a network to a computer system where it is stored in a non-transitory medium and executed. Each such computer program can be used to configure and operate the computer system when the non-transitory medium is read by the computer system to perform the procedures of the software.


While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.

Claims
  • 1. A method for optical imaging, the method comprising: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
  • 2. The method of claim 1, further comprising: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among the first set of two or more predetermined target images based at least in part on information derived from the processing.
  • 3. The method of claim 1, further comprising: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among a second set of two or more predetermined target images based at least in part on information derived from the processing.
  • 4. The method of claim 1, wherein the processing includes: determining, based at least in part on the set of output optical signals, information that is dependent on a second moment of a transverse spatial distribution of the input optical signal; and performing a statistical analysis of the determined information based on a decision rule that provides a discrimination among the two or more predetermined target images.
  • 5. The method of claim 4, wherein the determined information further comprises information that is dependent on a first moment of a spatial distribution of the input optical signal.
  • 6. The method of claim 4, wherein the statistical analysis includes additional information obtained by prior measurement or prior estimation.
  • 7. The method of claim 4, wherein the decision rule comprises a comparison between the determined information and a set of second moments of the transverse spatial distributions of each of the predetermined target images.
  • 8. The method of claim 4, wherein the determined information is dependent on a third moment of a transverse spatial distribution of the input optical signal.
  • 9. The method of claim 6, wherein the decision rule comprises a comparison between the determined information and a set of third moments of the transverse spatial distributions of each of the predetermined target images.
  • 10. The method of claim 1, wherein the set of target spatial modes includes: a zero-order radially symmetric spatial mode, and two first-order spatial modes that represent transverse spatial distributions along orthogonal axes.
  • 11. The method of claim 1, wherein a subset of the set of target spatial modes are Hermite-Gaussian modes.
  • 12. The method of claim 1, wherein a subset of the set of target spatial modes are distorted Hermite-Gaussian modes.
  • 13. The method of claim 1, wherein a subset of the set of target spatial modes are matched to the spatial mode of a point spread function of an imaging system.
  • 14. The method of claim 1, wherein the set of target spatial modes is modified to compensate for misalignment of the spatial mode sorter with respect to the received input optical signal.
  • 15. The method of claim 1, wherein the set of target spatial modes is modified to compensate for optical aberrations distorting the received input optical signal.
  • 16. The method of claim 1, further comprising spatially aligning the spatial mode sorter to compensate for changes in a spatial or angular position of the received optical signal.
  • 17. The method of claim 1, wherein the two or more predetermined target images represent images of different types of vehicles.
  • 18. The method of claim 1, wherein the two or more predetermined target images represent images of different celestial bodies.
  • 19. The method of claim 1, wherein the two or more predetermined target images represent images of different biological structures.
  • 20. The method of claim 11, wherein the processing includes assigning classification labels to an input optical signal from a set of two or more predetermined classification labels.
  • 21. One or more non-transitory computer-readable media, having instructions stored thereon that, when executed by a computer system, cause the computer system to perform operations comprising: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
  • 22. An apparatus for imaging a distribution of one or more optical sources, the apparatus comprising: a spatial mode sorter that is configurable based on a set of target spatial modes onto which an input optical signal is projected; and a control module configured to: configure the spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in the set of target spatial modes; receive a set of output optical signals from the spatial mode sorter during a detection interval of time; process information based at least in part on the set of output optical signals received in the detection interval of time; and provide an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing; wherein during the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
CROSS-REFERENCE TO RELATED APPLICATION(S)

This application claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/187,264, entitled “SPATIAL MODE PROCESSING FOR HIGH-RESOLUTION IMAGING,” filed May 11, 2021, the entire disclosure of which is hereby incorporated by reference.

STATEMENT AS TO FEDERALLY SPONSORED RESEARCH

This invention was made with government support under Grant No. W911NF-20-1-0039 awarded by ARMY/ARO, and Grant No. HR0011-20-9-0128 awarded by DARPA. The government has certain rights in the invention.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2022/027996 5/6/2022 WO
Provisional Applications (1)
Number Date Country
63187264 May 2021 US