This disclosure relates to spatial mode processing for high-resolution imaging.
Various techniques can be used to capture images of objects in an object plane. For example, a camera or other imaging device can be placed in a focal plane of a lens system in a direct imaging approach. In some situations (e.g., when there are features with extent smaller than the Rayleigh diffraction limit), the optical collection hardware used for direct imaging approaches may be insufficient to resolve the relevant object features, or may require a relatively long integration time for estimation tasks involving discriminating different classes of images.
In one aspect, in general, a method for optical imaging includes: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
Aspects can include one or more of the following features.
The method further includes: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among the first set of two or more predetermined target images based at least in part on information derived from the processing.
The method further includes: receiving a second set of output optical signals from the spatial mode sorter during a second detection interval of time; processing information based at least in part on the second set of output optical signals received in the second detection interval of time; and providing a second estimated measurement for discriminating among a second set of two or more predetermined target images based at least in part on information derived from the processing.
The processing includes: determining, based at least in part on the set of output optical signals, information that is dependent on a second moment of a transverse spatial distribution of the input optical signal; and performing a statistical analysis of the determined information based on a decision rule that provides a discrimination among the two or more predetermined target images.
The determined information further comprises information that is dependent on a first moment of a spatial distribution of the input optical signal.
The statistical analysis includes additional information obtained by prior measurement or prior estimation.
The decision rule comprises a comparison between the determined information and a set of second moments of the transverse spatial distributions of each of the predetermined target images.
The determined information is dependent on a third moment of a transverse spatial distribution of the input optical signal.
The decision rule comprises a comparison between the determined information and a set of third moments of the transverse spatial distributions of each of the predetermined target images.
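A minimal sketch of such a moment-based decision rule (illustrative only: the mode-to-moment mapping, the stored target moments, and the measured powers below are assumptions, not part of the disclosure). Normalized first-order mode powers stand in for second-moment estimates, and a nearest-match rule discriminates between two stored target images:

```python
import numpy as np

def estimate_second_moments(mode_powers):
    """Map normalized three-mode sorter outputs to crude (x^2, y^2) moment estimates.

    Assumes mode_powers = [p0, p1, p2] for a zero-order mode and two
    first-order modes along orthogonal axes; the power diverted into the
    first-order modes tracks the object's transverse second moments.
    """
    p = np.asarray(mode_powers, dtype=float)
    p = p / p.sum()                      # normalize to unit total power
    return np.array([p[1], p[2]])        # first-order powers ~ x^2 / y^2 moments

def classify(mode_powers, target_moments):
    """Pick the target image whose stored second moments are nearest."""
    est = estimate_second_moments(mode_powers)
    dists = [np.linalg.norm(est - np.asarray(t)) for t in target_moments]
    return int(np.argmin(dists))

# Hypothetical stored second moments for two target images
# (e.g., a round object vs. an x-elongated object):
targets = [(0.02, 0.02), (0.10, 0.02)]
label = classify([0.90, 0.08, 0.02], targets)
```

The decision rule here is a simple minimum-distance comparison in moment space; any statistical decision rule over the determined moments could be substituted.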
The set of target spatial modes includes: a zero-order radially symmetric spatial mode, and two first-order spatial modes that represent transverse spatial distributions along orthogonal axes.
A subset of the set of target spatial modes are Hermite-Gaussian modes.
A subset of the set of target spatial modes are distorted Hermite-Gaussian modes.
A subset of the set of target spatial modes are matched to the spatial mode of a point spread function of an imaging system.
The set of target spatial modes is modified to compensate for misalignment of the spatial mode sorter with respect to the received input optical signal.
The set of target spatial modes is modified to compensate for optical aberrations distorting the received input optical signal.
The method further includes spatially aligning the spatial mode sorter to compensate for changes in a spatial or angular position of the received optical signal.
The two or more predetermined target images represent images of different types of vehicles.
The two or more predetermined target images represent images of different celestial bodies.
The two or more predetermined target images represent images of different biological structures.
The processing includes assigning classification labels to an input optical signal from a set of two or more predetermined classification labels.
In another aspect, in general, one or more non-transitory computer-readable media, having instructions stored thereon that, when executed by a computer system, cause the computer system to perform operations including: configuring a spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in a set of target spatial modes; receiving a set of output optical signals from the spatial mode sorter during a detection interval of time; processing information based at least in part on the set of output optical signals received in the detection interval of time; and providing an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
In another aspect, in general, an apparatus for imaging a distribution of one or more optical sources includes: a spatial mode sorter that is configurable based on a set of target spatial modes onto which an input optical signal is projected; and a control module. The control module is configured to: configure the spatial mode sorter to provide, in response to a received input optical signal, a separate output optical signal for each spatial mode in the set of target spatial modes; receive a set of output optical signals from the spatial mode sorter during a detection interval of time; process information based at least in part on the set of output optical signals received in the detection interval of time; and provide an estimated measurement for discriminating among a first set of two or more predetermined target images based at least in part on information derived from the processing. During the detection interval of time, a total number of the output optical signals is greater than two and less than ten.
Aspects can have one or more of the following advantages.
In implementations of some of the techniques described herein, optical receiver frameworks can classify known objects to a desired accuracy benchmark with substantially less integration time than that possible with an idealized focal-plane camera (e.g., direct imaging).
Some of the techniques described herein are performed in a binary object classification context, where optimal classification performance can be achieved for any two diffraction-limited objects using a spatial mode analyzer or spatial parity sorter. The systems can be used for 2D object classification. In some implementations, the system is able to achieve or approach the best classification accuracy allowed by physics and can reduce the integration time required for a desired accuracy by multiple orders of magnitude over direct imaging. Examples of applications of such systems include detection of exoplanets in extrasolar systems, and diagnosis of medical conditions based on binary fluorescence signaling in cellular biostructures. These techniques can be utilized in high-stability contexts where 1-dimensional or 2-dimensional visual codes or other objects are to be read or identified using small optics at large distances, for example. The techniques may be particularly advantageous in contexts where the use of RF or other active signaling is precluded, for example, in automated sensing contexts.
Other features and advantages will become apparent from the following description, and from the figures and claims.
The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawing. It is emphasized that, according to common practice, the various features of the drawing are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.
Object discrimination is at the heart of decision making in medical diagnostics, extrasolar astronomy, and autonomous sensing. For incoherent imaging with large standoff distances, small objects, and/or aperture-limited imaging systems, the physical principle of diffraction impedes accurate discrimination between spatially distinct objects. A classic heuristic criterion, attributed to Rayleigh, holds that two objects cannot be discriminated when their distinguishing features exhibit length scales smaller than the width of the system point spread function (PSF). More quantitatively, for hypothesis tests between such “sub-Rayleigh” objects, the probability of correct identification degrades as the PSF more severely perturbs the measured images.
A paradigm shift for sub-Rayleigh imaging has emerged from the calculation of task-specific error bounds that optimize over all measurements permitted by quantum mechanics. These “quantum limits” revealed that direct measurements of the optical intensity profile are responsible for the catastrophic degree of error implied by the Rayleigh criterion, whereas alternative measurements yield far lower error than direct imaging for many tasks. Quantum limits, and “quantum-optimal” measurements that achieve them, were found for specific hypothesis tests including one-vs-two point source discrimination and exoplanet detection. However, no general results exist that broadly apply to real-world object discrimination settings.
Referring to
For hypothesis tests between any two incoherent, quasi-monochromatic 2D objects in the sub-Rayleigh regime, examples are described herein for techniques to 1) compute the quantum Chernoff bound on asymptotic discrimination error, 2) quantify the sub-optimal error rate of direct imaging, and 3) identify a quantum-optimal measurement whose linear-optical design does not depend on the object models. The results of prophetic examples included herein extend to M-ary discrimination: the same object-independent measurement is quantum-optimal for any database of M>2 objects.
Without intending to be bound by theory, for describing some examples, we let $H_j$, $j\in[1,M]$, denote a hypothesis corresponding to one of $M$ candidate objects. Under $H_j$, the quantum state $\eta_j$ on Hilbert space $\mathcal{H}$ describes one temporal mode of the quasi-monochromatic optical field collected by an imaging system. Many naturally occurring incoherent sources exhibit a small mean photon flux $\epsilon \ll 1$ per temporal mode, such that multi-photon detection within the optical coherence time is vanishingly rare. In this case, a weak-source approximation uses the Fock expansion $\eta_j = (1-\epsilon)\,|0\rangle\langle 0| + \epsilon\,\rho_j + O(\epsilon^2)$, where $|0\rangle\langle 0|$ is the quantum vacuum state and the single-photon state $\rho_j$ carries all of the spatial information about the object under $H_j$. Since $\rho_j$ is restricted to single-photon (unary) excitation, its infinite-dimensional spatial-mode structure can be mapped to a Hilbert space $\mathcal{H}^{(1)}$.
Let an imaging system with a 2D coherent PSF $\psi(\vec{x})$ relate object- and image-plane position vectors $\vec{x}_{\mathrm{obj}} = \{x_{\mathrm{obj}}, y_{\mathrm{obj}}\}$ and $\vec{x} = \mu\vec{x}_{\mathrm{obj}}$ by the transverse magnification $\mu$. We model the spatial irradiance of the object under $H_j$ by a normalized radiant exitance profile $m_j(\vec{x}_{\mathrm{obj}})$. The state of the collected optical field on $\mathcal{H}^{(1)}$ is then

$$\rho_j = \iint_{-\infty}^{\infty} \mu^2\, m_j(\vec{x}/\mu)\,|\psi_{\vec{x}}\rangle\langle\psi_{\vec{x}}|\,d^2\vec{x}, \qquad (1)$$

where the pure state $|\psi_{\vec{x}}\rangle = \iint_{-\infty}^{\infty} \psi(\vec{a}-\vec{x})\,|\vec{a}\rangle\,d^2\vec{a}$ encodes the effect of the aperture and $|\vec{x}\rangle$ is a single-photon eigenket at image-plane position $\vec{x}$. In a basis of eigenvectors $|\phi_m\rangle = \iint_{-\infty}^{\infty} \phi_m(\vec{x})\,|\vec{x}\rangle\,d^2\vec{x}$ on $\mathcal{H}^{(1)}$ set by orthogonal 2D functions $\phi_m(\vec{x})$, the density matrix

$$\rho_j = \sum_{m,n} d_{j,m,n}\,|\phi_m\rangle\langle\phi_n| \qquad (2)$$

has elements $d_{j,m,n} = \iint_{-\infty}^{\infty} \mu^2\, m_j(\vec{x}/\mu)\,c_{m,n}(\vec{x})\,d^2\vec{x}$, where $c_{m,n}(\vec{x}) = \langle\phi_m|\psi_{\vec{x}}\rangle\langle\psi_{\vec{x}}|\phi_n\rangle$.
Consider a binary hypothesis test between objects $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$ with equal prior probabilities. To make a decision $Z\in[1,2]$, a receiver measures the state $\eta_1^{\otimes N}$ or $\eta_2^{\otimes N}$ acquired over $N$ temporal modes and then applies a pre-determined decision rule on the outcome(s). If the conditional probability of deciding $H_{j'}$ under true hypothesis $H_j$ is $P(Z=j'|H_j)$, the average error probability $P_{\mathrm{err},N} = [P(Z=1|H_2)+P(Z=2|H_1)]/2$ is a symmetric performance metric for the measurement/decision rule scheme. Optimizing over all such schemes, the quantum-limited minimum average error decays as $P_{\mathrm{err,min},N} \sim e^{-N\xi_Q}$, where the quantum Chernoff exponent (QCE) $\xi_Q$ obeys $\xi_Q \approx \epsilon\,\xi_Q^{(1)}$ for weak-source sub-Rayleigh objects, with $\xi_Q^{(1)}$ the per-photon QCE.
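As an illustrative sketch (not part of the disclosure), the quantum Chernoff exponent for a pair of density matrices can be evaluated numerically by minimizing $\mathrm{Tr}(\rho_1^s \rho_2^{1-s})$ over $s$; the two example qubit states below are arbitrary stand-ins:

```python
import numpy as np

def mat_power(rho, s):
    """Fractional power of a positive semi-definite density matrix."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)            # guard against tiny negative eigenvalues
    return (v * w**s) @ v.conj().T

def quantum_chernoff_exponent(rho1, rho2, num_s=201):
    """xi_Q = -ln min_{0<=s<=1} Tr(rho1^s rho2^(1-s))."""
    s_grid = np.linspace(0.0, 1.0, num_s)
    vals = [np.trace(mat_power(rho1, s) @ mat_power(rho2, 1 - s)).real
            for s in s_grid]
    return -np.log(min(vals))

# Two arbitrary full-rank example qubit states (made-up values):
rho1 = np.array([[0.9, 0.1], [0.1, 0.1]])
rho2 = np.array([[0.4, 0.0], [0.0, 0.6]])
xi_q = quantum_chernoff_exponent(rho1, rho2)   # P_err,min,N ~ exp(-N * xi_q)
```

The grid minimization over $s$ is a simple stand-in for a proper one-dimensional optimization; for the weak-source states in the text, the exponent would additionally carry the factor $\epsilon$.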
The most general description of a measurement, a positive operator-valued measure (POVM), consists of a set of positive semi-definite operators $\{\Pi_z\}_z$ on $\mathcal{H}$, linked to measurement outcomes $\{z\}$ on an outcome space $\mathcal{Z}$, that resolve the identity operator as $\sum_z \Pi_z = I$. For a particular measurement performed on $\eta_j^{\otimes N}$, the minimum average error probability among all decision rules goes as $P_{\mathrm{err,min,Meas},N} \sim e^{-N\xi_{\mathrm{Meas}}}$, where $\xi_{\mathrm{Meas}} \approx \epsilon\,\xi_{\mathrm{Meas}}^{(1)}$ and

$$\xi_{\mathrm{Meas}}^{(1)} = -\ln\left[\min_{0\le s\le 1} \sum_{z} P(z|\rho_1)^s\,P(z|\rho_2)^{1-s}\right]$$

is the per-photon Chernoff exponent (CE), which depends on the probabilities $P(z|\rho_j) = \mathrm{Tr}(\Pi_z^{(1)}\rho_j)$ of outcomes, in a single-photon subspace $\mathcal{H}^{(1)}$, of the reduced POVM $\{\Pi_z^{(1)}\}$. A measurement whose CE matches the QCE ($\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$) is considered to be quantum-optimal for the given hypothesis test. Conversely, a relative gap ($\xi_{\mathrm{Meas}}^{(1)} < \xi_Q^{(1)}$) indicates a fundamental sub-optimality in the measurement that cannot be remedied by data post-processing.
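The per-photon CE can be computed directly from the outcome probabilities of any candidate measurement. A short sketch (the outcome distributions are made up, and strictly positive to avoid $0^0$ edge cases):

```python
import numpy as np

def chernoff_exponent(p1, p2, num_s=401):
    """Classical CE: xi = -ln min_{0<=s<=1} sum_z p1(z)^s * p2(z)^(1-s)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    s_grid = np.linspace(0.0, 1.0, num_s)
    vals = [np.sum(p1**s * p2**(1 - s)) for s in s_grid]
    return -np.log(min(vals))

# Hypothetical outcome distributions P(z|rho_1) and P(z|rho_2)
# for a three-outcome spatial mode sorter:
p_rho1 = [0.85, 0.10, 0.05]
p_rho2 = [0.60, 0.25, 0.15]
xi = chernoff_exponent(p_rho1, p_rho2)
```

Note that the exponent is zero only when the two distributions coincide, and the minimum over $s$ always lies at or below the Bhattacharyya value at $s = 1/2$.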
Our goals are twofold: compute the QCE $\xi_Q^{(1)}$ for generalized sub-Rayleigh object discrimination and find a universally optimal measurement for which $\xi_{\mathrm{Meas}}^{(1)} = \xi_Q^{(1)}$. As a first step, for an arbitrary object $m_2(\vec{x}_{\mathrm{obj}})$, if object $m_1(\vec{x}_{\mathrm{obj}})$ is a single point source at position $\vec{x}_{1,\mathrm{obj}} = \vec{x}_1/\mu$, we find that the QCE is exactly
where $\Gamma(\vec{x}) = \langle\psi_{\vec{\Omega}}|\psi_{\vec{x}}\rangle$ is the 2D autocorrelation of the PSF and $\vec{\Omega}$ denotes the origin of the image-plane coordinate system. In this case, $\xi_{\mathrm{BSPADE}}^{(1)} = \xi_Q^{(1)}$ is achieved by a 2D binary spatial mode demultiplexing (BSPADE) device that passively couples the PSF-matched spatial mode (i.e., $|\psi_{\vec{\Omega}}\rangle$) to one shot-noise-limited photon-counting detector and all other light to a second identical detector. As an example, for discriminating one vs. two point sources with a 2D Gaussian PSF $\psi(\vec{x}) = (2\pi\sigma^2)^{-1/2}\exp(-(x^2+y^2)/4\sigma^2)$, where $d$ is the source separation under $H_2$, we confirm that the BSPADE CE enjoys a quadratic ($d^2$) scaling advantage when $d \ll \sigma$ over the CE of idealized 2D direct imaging (an infinite spatial bandwidth, unity fill factor, unity quantum efficiency photon-counting detector array).
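To make the scaling concrete, the following sketch (illustrative, assuming unit PSF width and the Gaussian PSF above, for which the autocorrelation works out to $\Gamma(\vec{r}) = e^{-|\vec{r}|^2/8\sigma^2}$) computes the BSPADE outcome statistics for the one-vs-two point source test and evaluates the resulting CE, which approaches $d^2/(16\sigma^2)$ for $d \ll \sigma$:

```python
import numpy as np

sigma = 1.0                                    # PSF width (assumed)

def psf_autocorrelation(r):
    """Gamma(r) for the 2D Gaussian PSF: overlap of PSFs offset by r."""
    return np.exp(-r**2 / (8 * sigma**2))

def bspade_probs_two_sources(d):
    """[matched-mode, bucket] click probabilities under H2: sources at +/- d/2."""
    p_match = psf_autocorrelation(d / 2) ** 2  # photon couples to |psi_0>
    return np.array([p_match, 1.0 - p_match])

def chernoff_exponent(p1, p2, s_grid=np.linspace(1e-4, 1.0, 500)):
    return -np.log(min(np.sum(p1**s * p2**(1 - s)) for s in s_grid))

p_h1 = np.array([1.0, 0.0])                    # single on-axis source under H1
d = 0.2 * sigma                                # sub-Rayleigh separation
xi_bspade = chernoff_exponent(p_h1, bspade_probs_two_sources(d))
```

Increasing $d$ increases the exponent, and for small $d$ the numerical value tracks the quadratic law, in contrast to the much weaker small-$d$ exponent of direct imaging.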
We now generalize to arbitrary $m_1(\vec{x}_{\mathrm{obj}})$ and $m_2(\vec{x}_{\mathrm{obj}})$, with applications in bioimaging, astronomy, and computer vision. We focus on the sub-Rayleigh limit $\gamma \ll 1$, where $\gamma = \mu\theta/\sigma$ quantifies the geometric ratio between the magnified spatial extent of the object(s) $\theta$ and the PSF width $\sigma$.
We also define $\tilde{m}_j(\vec{x}_{\mathrm{obj}}) = \theta^2 m_j(\theta\vec{x}_{\mathrm{obj}})$, $\tilde{\psi}(\vec{x}) = \sigma\psi(\sigma\vec{x})$, and $\tilde{\Gamma}(\vec{x}) = \Gamma(\sigma\vec{x})$ as non-dimensionalized representations of the object(s), the coherent PSF, and the PSF autocorrelation function, respectively, to isolate the influence of diffraction (i.e., $\gamma$) from that of the object and aperture. In some implementations, the objects' 2D centroids coincide at a location known to the receiver, either from prior knowledge or a preliminary measurement, such that the task is object identification rather than localization, and the PSF $\psi(\vec{x})$ is even in $x$ and $y$, as with a circularly symmetric aperture.
To derive the generalized QCE, we represent $\rho_1$ and $\rho_2$ [Eq. (2)] in a basis of PSF-adapted (PAD) eigenvectors $|\tilde{\phi}_m\rangle = \iint_{-\infty}^{\infty} \tilde{\phi}_m(\vec{x})\,|\vec{x}\rangle\,d^2\vec{x}$ on $\mathcal{H}^{(1)}$ via Gram-Schmidt orthogonalization of the 2D Cartesian derivatives of the non-dimensionalized PSF $\tilde{\psi}(\vec{x})$. For a 2D Gaussian PSF, the PAD basis functions $\tilde{\phi}_m(\vec{x})$ are Hermite-Gauss polynomials. After expanding $\rho_1$ and $\rho_2$ in powers of $\gamma \ll 1$ and truncating to finite dimensions, we use operator perturbation theory to find
where mj,x
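As a numerical illustration of the PAD construction (not from the disclosure), the 1D analogue below orthonormalizes successive derivatives of a Gaussian PSF by Gram-Schmidt and checks that the first-order member matches the corresponding Hermite-Gauss mode, which is proportional to $x\,\psi(x)$:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Successive derivatives of the PSF (0th, 1st, 2nd) via finite differences:
derivs = [psi]
for _ in range(2):
    derivs.append(np.gradient(derivs[-1], dx))

# Gram-Schmidt orthonormalization under the discrete L2 inner product:
basis = []
for f in derivs:
    for b in basis:
        f = f - np.sum(b * f) * dx * b
    basis.append(f / np.sqrt(np.sum(f**2) * dx))

# For a Gaussian PSF the first-order PAD function is the first-order
# Hermite-Gauss mode, proportional to x * psi(x):
hg1 = x * psi
hg1 /= np.sqrt(np.sum(hg1**2) * dx)
overlap = abs(np.sum(basis[1] * hg1) * dx)   # close to 1 up to discretization
```

The 2D construction in the text proceeds identically, with Cartesian partial derivatives $\partial_x^m \partial_y^n \tilde{\psi}$ in place of the 1D derivatives.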
The CE for direct imaging with a zeroless PSF that is separable in $x$ and $y$ is given by
with a=(m1,a
The upper two images of
To illustrate our results, in
We now extend our analysis to $M>2$ equiprobable objects, such as a database of QR codes. The M-ary QCE $\xi_{Q,M}^{(1)} = \min_{i\neq j}\xi_{Q,i,j}^{(1)}$, which characterizes the quantum-limited asymptotic error for discriminating $M$ states, is found by minimizing the pairwise QCEs $\xi_{Q,i,j}^{(1)}$ over each pair of states $\{\rho_i, \rho_j\}$. The similarly defined M-ary CE $\xi_{\mathrm{Meas},M}^{(1)} = \min_{i\neq j}\xi_{\mathrm{Meas},i,j}^{(1)}$ obeys the multiple quantum Chernoff bound $\xi_{\mathrm{Meas},M}^{(1)} \le \xi_{Q,M}^{(1)}$. We have shown that $\xi_{\mathrm{TriSPADE},i,j}^{(1)} = \xi_{Q,i,j}^{(1)}$ for any two states when $\gamma \ll 1$. Therefore, the TriSPADE POVM, which does not depend on the candidate states, will simultaneously achieve the QCE for all pairs of states in a database. It follows that $\xi_{\mathrm{TriSPADE},M}^{(1)} = \xi_{Q,M}^{(1)}$. We conclude that TriSPADE is a quantum-optimal measurement for any M-object database in the sub-Rayleigh limit.
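The M-ary rule above reduces to a minimum over pairwise exponents: the hardest-to-distinguish pair limits the asymptotic error. A sketch with hypothetical three-outcome statistics for M = 3 candidate objects:

```python
import numpy as np
from itertools import combinations

def chernoff_exponent(p1, p2, num_s=401):
    """Pairwise CE: -ln min_{0<=s<=1} sum_z p1(z)^s * p2(z)^(1-s)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    s_grid = np.linspace(0.0, 1.0, num_s)
    return -np.log(min(np.sum(p1**s * p2**(1 - s)) for s in s_grid))

def m_ary_exponent(dists):
    """xi_M = min_{i != j} xi_{i,j}: the hardest pair dominates."""
    return min(chernoff_exponent(p, q) for p, q in combinations(dists, 2))

# Hypothetical measurement statistics for three candidate objects:
dists = [[0.80, 0.15, 0.05],
         [0.60, 0.30, 0.10],
         [0.55, 0.30, 0.15]]
xi_m = m_ary_exponent(dists)
```

Here the second and third distributions are the closest pair, so they set the M-ary exponent.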
Finally, in
The examples described herein show that a realizable optical receiver could substantially enhance decision-making accuracy for super-resolution biological, astronomical, and terrestrial imaging.
The spatial mode sorting may be performed with various optical configurations, as discussed below.
If a superposition of the modes 620A, 620B, and 620C is received in the beam 602, the ratio of the spot intensities on the resulting detector image can be used to infer the relative strength of the modes in the received beam 602.
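A minimal sketch of that inference (the counts below are made up): the relative modal powers follow from normalizing the integrated spot intensities.

```python
import numpy as np

# Hypothetical integrated spot intensities (detector counts) at the output
# locations corresponding to modes 620A, 620B, and 620C:
spot_counts = np.array([120.0, 45.0, 15.0])

# Relative strength of each mode in the received beam 602:
mode_weights = spot_counts / spot_counts.sum()
```

In practice, each spot would be integrated over its own detector region after background subtraction before normalizing.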
The techniques described above for controlling and configuring a spatial mode sorting system can be implemented using software for execution on a computer system. For example, the software can define procedures in one or more computer programs that execute on one or more programmed or programmable computer systems (e.g., desktop, distributed, client/server computer systems) each including at least one processor, at least one data storage system (e.g., including volatile and non-volatile memory and/or storage elements), at least one input device (e.g., keyboard and mouse) or port, and at least one output device (e.g., monitor) or port. The software may form one or more modules of a larger program.
The software may be provided on a non-transitory medium such as a computer-readable storage medium (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer system, or delivered over a communication medium (e.g., encoded in a propagated signal) such as a network to a computer system where it is stored in a non-transitory medium and executed. Each such computer program can be used to configure and operate the computer system when the non-transitory medium is read by the computer system to perform the procedures of the software.
While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
This application claims priority to and the benefit of U.S. Provisional Application Patent Ser. No. 63/187,264, entitled “SPATIAL MODE PROCESSING FOR HIGH-RESOLUTION IMAGING,” filed May 11, 2021, the entire disclosure of which is hereby incorporated by reference.
This invention was made with government support under Grant No. W911NF-20-1-0039 awarded by ARMY/ARO, and Grant No. HR0011-20-9-0128 awarded by DARPA. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/027996 | 5/6/2022 | WO |

Number | Date | Country
---|---|---
63187264 | May 2021 | US