This application generally relates to metamaterial elements, machine learning, artificial intelligence, and convolutional neural networks. Various embodiments of this application are more specifically related to convolution lenses, the mathematical operations of convolution and Fourier transform, and optical computing architectures.
Various examples of the systems and methods described herein relate to optical computer architectures to compute convolutional neural networks at increased speeds and reduced power consumption compared to digital computers. According to various embodiments, the proposed optical computer architecture utilizes dynamic metasurfaces to encode digital information in the optical domain and computes convolutions using a four-focal length (4F) optical system.
According to various embodiments, convolutional neural networks can be understood as a standard set of algorithms used for voice recognition, object detection, classification and tracking, image segmentation, and more general image processing operations. While extremely useful, these algorithms are computationally costly when implemented on digital electronic hardware, to the point where the computational power and latency costs make them difficult or even impossible to implement for many applications.
In many embodiments, most of the computational cost of implementing a neural network is associated with computing the convolution operations (e.g., 120 and 140 in the illustrated example). Other operations, such as threshold comparisons, nonlinear activation and MaxPooling, can be performed relatively quickly and with minimal computational cost compared to calculating the convolution operations.
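As a rough illustration of why the convolutions dominate, the multiply-accumulate (MAC) count of a single convolution layer can be compared with the per-element cost of its activation. The layer dimensions below are assumptions for illustration and are not taken from this disclosure:

```python
# Rough operation counts for one CNN layer, illustrating why convolution
# dominates the cost. Layer dimensions are illustrative assumptions.

def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count for a k x k convolution layer."""
    return h * w * c_in * c_out * k * k

def relu_ops(h, w, c_out):
    """One comparison per output element for a ReLU activation."""
    return h * w * c_out

h, w, c_in, c_out, k = 224, 224, 64, 64, 3
macs = conv_macs(h, w, c_in, c_out, k)
acts = relu_ops(h, w, c_out)
print(macs, acts, macs // acts)
```

For these assumed dimensions the convolution requires 576 times as many operations as the activation that follows it, and the ratio grows with kernel size and channel count.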
The presently described systems and methods propose an optical computing approach for convolution operations to increase the speed and reduce the power consumption associated with the convolution operations of a convolutional neural network 100. In various embodiments, the optical computing components for implementing the convolutional operations may be integrated within a digital computing system that performs other computation operations using digital computing techniques (e.g., via application-specific integrated circuits (ASICs) or microprocessors). Such a system may be referred to as an optoelectronic system that implements the convolutions optically and the nonlinear activations and MaxPool operations electronically.
Existing optoelectronic systems utilize relatively high optical powers that render them infeasible or inferior to digital electronic approaches with respect to power consumption. For example, existing optical 4F system architectures, such as those used to reconstruct synthetic aperture radar images before digital computers, have proved impractical for convolutional neural networks due to the lack of fast and efficient spatial light modulators. Various embodiments of the presently described systems and methods propose the use of dynamically tunable metasurfaces instead of spatial light modulators. Dynamically tunable metasurfaces can be implemented with three orders of magnitude higher switching speed, smaller pixel sizes, at a lower cost, and with a lower switching power consumption compared to traditional spatial light modulators.
The presently described systems and methods include various embodiments of optoelectronic computing systems, optoelectronic convolutional neural network (CNN) computing systems, and other systems to perform convolution operations in the optical domain using optical fields. Additionally, some specific optical systems and lens element layouts are described for performing optical convolutions of optical fields. In one example, an optoelectronic computing system includes an electronic subsystem, spatial light modulators, an optical subsystem to perform an optical convolution, an optical detection system, and a digital processing system.
In various embodiments, the optoelectronic computing system includes an electronic subsystem to receive input digital data and a first spatial light modulator to transmit a coherent optical field encoded with the input digital data. An optical subsystem may include one or more lenses and/or additional spatial light modulators to implement a first Fourier transform of the coherent optical field encoded with the input digital data. The transformed optical field can then be modulated with kernel data, such that the kernel data is modulated onto the coherent optical field, along with the encoded input digital data. The optical subsystem may then implement a second Fourier transform of the coherent optical field to generate an output optical field that is encoded with a convolution of the input digital data and the kernel data. The optoelectronic computing system may further include an optical detection subsystem to convert the output optical field to an output digital signal. The output digital signal represents the convolution of the input digital signal with the kernel data, as optically computed by the optical subsystem. In various embodiments, the optoelectronic computing system may further include a digital processing subsystem to perform at least one mathematical operation on the output digital signal.
In various embodiments described herein, the spatial light modulators may be embodied as tunable optical metasurfaces, digital micromirror devices, and/or liquid crystal on silicon devices. In some embodiments, such as in the folded 4F systems described below, reflective metasurfaces, digital micromirror devices, liquid crystal on silicon devices, and/or other spatial light modulators may be utilized. As described herein, various lenses, lens layouts, mirrors, and other optical elements may be combined to form the optical subsystem that implements the operations described herein. Specific examples of suitable lens configurations are described herein, but it is appreciated that alternative configurations may be used.
The presently described systems and methods may also be used to form an optoelectronic convolutional neural network (CNN) computing system. A CNN computing system may include, for example, tunable optical metasurfaces for encoding input data into the optical domain and then modulating kernel data onto the light field. The optical subsystem (e.g., a folded 4F optical subsystem) may compute a convolution of the input data and the kernel data in the optical domain. A detector subsystem can then convert the computed convolution in the optical domain into a digital signal for subsequent processing by a digital electronic subsystem.
In some embodiments, the optical system described herein for performing convolution operations in the optical domain may be used for any of a wide variety of alternative purposes and is not limited to usage in conjunction with CNN computing systems. Various systems are described herein to perform convolution operations in the optical domain (e.g., using modulated light or optical fields that are passed through lenses configured to implement Fourier transformations). The convolution operation may include, for example, encoding (e.g., via a first modulator) input data to be convolved onto an object field. The systems compute (e.g., via a first Fourier transform lens assembly) a first Fourier transform of the object field. A second modulator is used to apply a kernel modulation to the optical field and then a second Fourier transform of the modulated field is computed using the same Fourier transform lens assembly (in a folded 4F optical assembly) or a different Fourier transform lens assembly (in a non-folded 4F optical assembly). The resulting convolution light field represents the convolution of the input data and kernel data. A detector may then be used to generate a digital version of the convolution data for digital processing by digital electronic components or computing systems.
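The encode, transform, modulate, and transform-again pipeline above can be sketched numerically, with discrete FFTs standing in for the lens-implemented Fourier transforms. Array sizes and data below are illustrative assumptions; note that two forward transforms in series yield a spatially flipped, scaled copy of the circular convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.standard_normal((8, 8))        # input data encoded on the object field
kernel = rng.standard_normal((8, 8))     # kernel data on the same sampling grid

field = np.fft.fft2(obj)                 # first lens: Fourier transform of the object field
field = field * np.fft.fft2(kernel)      # kernel-plane modulation by the kernel's transform
out = np.fft.fft2(field)                 # second lens: second forward Fourier transform

# Two forward transforms in a row invert the coordinates, so the detected
# pattern is a spatially flipped, scaled copy of the circular convolution.
flip = lambda a: np.roll(a[::-1, ::-1], 1, axis=(0, 1))
conv = np.real(flip(out)) / obj.size

# Compare against the standard FFT/IFFT circular convolution.
direct = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(kernel)))
assert np.allclose(conv, direct)
```

The flip-and-rescale step models only the coordinate inversion of a cascaded Fourier transform pair; a physical detector would see this directly as an inverted image.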
A second lens 237 implements a second Fourier transform to deliver the inversion of the convolution of the image 201 with the kernel to a detector array (e.g., a metasurface detector array 239, as illustrated). A sensor array (e.g., an array of contact image sensors (CISs) and complementary analog-to-digital converters (ADCs)) may be used to convert the convolution for digital processing (e.g., rectified linear unit operation (ReLU), digital nonlinear activation 285, pooling 286, and the like). The result of the process may be stored in a memory (not shown).
The illustrated optoelectronic computing system 200 architecture represents one layer of the neural network. The metasurfaces 231, 235 can be dynamically tuned and the optical subsystem 230 and digital subsystem 280 can be used again for each additional layer of a multilayer convolutional neural network. The illustrated example might be modeled using a field-programmable gate array, programmable logic array, or an ASIC. An ASIC might be particularly useful in some embodiments to reduce power consumption and/or increase the speed of the digital computations. The optical subsystem 230 may be implemented in an in-reflection architecture to reduce space and eliminate or reduce communication overhead between separate chips.
The optoelectronic computing system 200 may perform a linear operation on an input vector of size N, which is presented as a two-dimensional √N×√N image. Data is represented on a two-dimensional surface on the object plane of the metasurface 231, and computation happens while the light propagates in the third dimension. The lenses 233 and 237 and the metasurface 235 scattering the kernel for convolution operate to implement the convolution computations optically.
In an alternative embodiment, a volumetric metamaterial may be used to implement a (tunable) discrete linear operator of a given size. However, tuning the volumetric metamaterial requires modulating O(N^(3/2)) elements and may not be as computationally cost effective as the illustrated optoelectronic computing system 200, or even digital computing. The illustrated optoelectronic computing system 200 can implement convolutions (the specific subset of linear operators used in convolutional neural networks) by tuning O(N) metamaterial elements. This is because all operators in the convolution class of linear operators share a common space of eigenvectors, which happen to be the basis of plane waves. Accordingly, any convolution operator A can be decomposed as the product of three linear operators, such that:
Ĥ=U†DU   Equation 1
In Equation 1, U is a unitary operator defined by the eigen basis of Ĥ which is a Fourier transform. D is a diagonal matrix with the entries being the eigenvalues of Ĥ, and for a convolution operator these entries are the Fourier transform of the kernel. Accordingly, the optoelectronic computing system 200 can implement any convolution operator by changing the diagonal elements of D to implement a different kernel, which is the same number of entries as the input vector to the system. Therefore, only O(N) elements need to be modulated to span the space of convolution operations, significantly reducing the time, energy, and data bandwidth required to change between convolution operators.
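As a numerical illustration of Equation 1 (a sketch with assumed sizes and random data, not a description of the physical system), a 1-D circular convolution matrix can be assembled from the unitary DFT matrix U and a diagonal matrix D holding the Fourier transform of the kernel:

```python
import numpy as np

# Sketch of Equation 1 for a 1-D circular convolution: the convolution
# operator diagonalizes in the plane-wave (DFT) basis, so changing the
# kernel only changes the N diagonal entries of D. N is illustrative.

N = 8
rng = np.random.default_rng(1)
kernel = rng.standard_normal(N)

U = np.fft.fft(np.eye(N), norm="ortho")    # unitary DFT matrix (the eigenbasis)
D = np.diag(np.fft.fft(kernel))            # eigenvalues: Fourier transform of the kernel
H = U.conj().T @ D @ U                     # H = U† D U, per Equation 1

# Applying H to a vector reproduces circular convolution with the kernel.
x = rng.standard_normal(N)
direct = np.array([sum(kernel[(n - m) % N] * x[m] for m in range(N)) for n in range(N)])
assert np.allclose(H @ x, direct)
```

Swapping in a different kernel only changes the N entries of D, mirroring the statement that only O(N) elements need to be modulated to span the space of convolution operators.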
In a practical optical system, each of the components U†, D, and U of the decomposition of the convolution operator can be modeled as three separate optical components aligned in series along the optical path, with only D being a dynamically reconfigurable component, and the others being static. The lenses 233 and 237 can implement Fourier transforms of the U† and U operators. The metasurface 235 may implement D as a pixelated scattering surface that can be controlled in both the amplitude and phase of its transmission (or reflection) for each pixel. The metasurface 235 can be implemented with microsecond switching speeds and sub-micron resolution, while still delivering complete amplitude and phase control.
The optical subsystem 230 implements the convolution using coherent light, such as a laser, which is sent into a feed structure (e.g., a waveguide or backplane feed structure) that spreads the incident beam across the surface of the metasurface 235. The metasurface 235 contains a set of optical scattering elements (e.g., tunable optical metamaterial scattering elements with sub-wavelength interelement spacings). The optical scattering elements can be tuned (e.g., voltage controlled) in both amplitude and phase. According to various embodiments, the metasurface may comprise a two-dimensional array of the optical scattering elements arranged with sub-wavelength interelement spacings on a transmissive or reflective surface. Depending on the desired resolution, the two-dimensional array of optical scattering elements may be between, for example, 1 megapixel and 30 megapixels.
According to some embodiments, an input vector, or input image 201, is used as the control signal of the tuning mechanism of the scattering elements on the two-dimensional surface, such that each scattering element's polarizability or tuning state is adjusted to take a complex value of a particular component of the input vector, and the scattering from these elements forms an image that traverses the modified 4F optical subsystem 230, as discussed above.
In terms of time, computation of a convolution with a digital computing system is an O(NM) operation, where N is the input vector size and M is the kernel size, which may be up to the image size. While this can be parallelized with a GPU or an ASIC with a multiply and accumulate (MAC) array, the number of parallel pipelines is severely limited by the space on a chip. In realistic, state-of-the-art chips, this is typically limited to on the order of 10,000 MAC units in a custom neural network accelerator ASIC. However, using the metasurface optical computing approach described herein (e.g., the optical subsystem 230), the amount of time required to compute a convolution is based on the time it takes a wave phenomenon to propagate through the system, which is independent of both the input vector and kernel vector size, i.e., O(N0).
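An order-of-magnitude comparison makes the scaling concrete. All figures below are assumptions for illustration (image and kernel sizes, clock rate, and optical path length) except the ~10,000 MAC units noted above:

```python
# Order-of-magnitude timing comparison; all figures are illustrative
# assumptions except the ~10,000 MAC units cited in the text.
N = 10**7            # input vector size (~a 10-megapixel image)
M = 10**4            # kernel size (100 x 100)
macs = N * M         # direct digital convolution cost, O(N*M)

mac_units = 10_000   # parallel MAC units in an accelerator ASIC
clock_hz = 10**9     # assumed 1 GHz clock, one MAC per unit per cycle
digital_time_s = macs / (mac_units * clock_hz)

# Optical propagation time: path length over the speed of light,
# independent of both N and M (an assumed ~1 m optical path).
optical_time_s = 1.0 / 3e8
print(digital_time_s, optical_time_s)
```

Under these assumptions the digital accelerator needs about 10 ms per convolution while the optical propagation takes a few nanoseconds, and the optical figure does not grow with N or M.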
Accordingly, the presently described systems and methods represent a massive increase in computational speed for any convolution-dominated algorithm, resulting in improvements in both latency and throughput for convolutional neural networks. Moreover, when data is represented in two dimensions on the surface of an optical metasurface, as described herein, the size of the input vectors (i.e., images) can be on the order of 10^7.
The architectures described above utilize analog optical systems for computing convolutions. The precision of a digital computing system is determined by the number of bits of precision used for computations. In contrast, analog computers represent numbers as continuous physical quantities. Therefore, errors can be introduced into the computation by any aberrations or manufacturing errors that deviate the operation of the system from the intended operation. Unlike many computational tasks, many neural networks can tolerate a limited amount of error in their implementation since the learning mechanism of the network offers an ability to self-correct.
The presently described systems and methods utilize an optical subsystem (such as optical subsystem 230) that may be implemented using various configurations of lenses and tunable metasurfaces. In some embodiments, the optical subsystem 230 may be implemented using a reflective spatial light modulator, rather than transmissive spatial light modulators.
The object plane of the input object 410 is a field of coherent radiation in which a signal to be processed is encoded. The object plane may be implemented with an SLM. The source of illumination may be provided by an essentially monochromatic laser which illuminates the object plane by means of an off-axis beam or a fixed hologram, by way of example. As previously described, the first lens 420 performs the first Fourier transform operation of the field from the object 410 to the kernel plane 430 with each placed one focal length away from the first lens 420. At the kernel plane 430, the Fourier transform of the object 410 is modulated by, for example, another SLM. The pattern encoded onto the kernel plane 430 is the Fourier transform of the kernel to be convolved with the object field.
As stated above, the second lens 440 performs a second Fourier transform operation from the field after passing through the kernel plane 430 to the output image plane 450, both of which are also placed one focal length from the second lens 440, so the desired convolution of the object plane 410 is encoded onto the field at the image plane 450. The output image at the image plane 450 may be detected interferometrically, for example, by means of interference with a reference beam which illuminates a detector using an off-axis beam or a fixed hologram. The detector may be, for example, an array detector such as a CMOS (Complementary Metal-Oxide Semiconductor) or CCD (Charge Coupled Device) array of photosensitive pixels. The amplitude and phase of the field may be inferred from the recorded field intensity on the detector, for example, by means of a digital Fourier spatial filter of the intensity pattern, or measuring the interference pattern with several phase shifts between the reference and image fields which are combined to estimate the amplitude and phase.
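One standard way to combine phase-shifted interference measurements, as described above, is the four-step algorithm. The sketch below uses assumed field values and a unit-amplitude reference beam; it is an illustration of the measurement principle, not of any particular detector hardware:

```python
import numpy as np

# Four-step phase-shifting interferometry: recover the complex field from
# four recorded intensity frames. Field values and reference amplitude
# are illustrative assumptions.

rng = np.random.default_rng(2)
E = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # unknown field
R = 1.0                                                             # reference amplitude

# Record intensity with reference phase shifts of 0, pi/2, pi, 3pi/2.
I0, I1, I2, I3 = [np.abs(E + R * np.exp(1j * phi)) ** 2
                  for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

# Standard four-step reconstruction of the field's real and imaginary parts.
E_est = ((I0 - I2) + 1j * (I1 - I3)) / (4 * R)
assert np.allclose(E_est, E)
```

The amplitude and phase follow directly as `np.abs(E_est)` and `np.angle(E_est)`, matching the interferometric inference described above.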
In the paraxial approximation of the 4F lens system 400, for which the f/# of the system is sufficiently large, and in a small neighborhood of points close to the optical axis, this system performs a nearly ideal, error-free operation. However, this limits the usable field size to only a small fraction of the lens diameter and requires a very long optical system length.
The presently described systems and methods propose alternative 4F optical systems that provide a usable field that is a significant fraction of the optical element diameters and a shorter length. Different 4F optical systems can be characterized and compared with respect to a space-bandwidth product (SBP) that is a dimensionless quantity counting the number of effectively independent points that may be transmitted through the optical system. The SBP is approximately four times the area of the object field, multiplied by the input numerical aperture (NA) squared, divided by the wavelength squared.
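The SBP estimate above can be made concrete with a worked example. The parameter values below (wavelength, field size, and numerical aperture) are illustrative assumptions, not specifications of the described systems:

```python
# Worked example of the space-bandwidth product estimate:
#   SBP ~ 4 * (object field area) * NA^2 / wavelength^2
# All parameter values are illustrative assumptions.
wavelength = 1.0e-6      # 1 um operating wavelength
field_side = 1.0e-2      # 1 cm square object field
area = field_side ** 2
na = 0.1                 # input numerical aperture

sbp = 4 * area * na ** 2 / wavelength ** 2
print(sbp)               # effectively independent points through the system
```

For these assumed values the SBP is on the order of 4 million points, consistent with the megapixel-scale modulator resolutions discussed elsewhere herein.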
The computational capacity of the optical system scales with the SBP, as a Fourier transform lens system with an SBP of N performs a computation roughly analogous to the Fast Fourier Transform (FFT) over a vector of length N. The FFT has an energy cost scaling as N log N and a speed scaling with log N, while an optical system has a power consumption that scales with N and computes in constant time. Likewise, a conventional digital convolution system scales with the product of the sizes of both functions to be convolved and therefore also scales unfavorably compared to an optical Fourier transform computing system.
Various aberrations are identified as scaling by various exponents of field size and NA. Therefore, as the SBP is increased, particular aberrations of an optical system also increase with the field size and NA. Unlike an ordinary lens that forms an image of a point from an object point, a Fourier transform lens projects a collimated beam or a plane-like wave at a particular angle. Aberrations of the plane-like wave projected as the image of a point are measured in the number of waves of deviation of the wavefront from the ideal plane wave. For a lens to be diffraction-limited, it is required that these aberrations be only a small fraction of a wave, typically less than one-quarter wave. For use in an analog computer, the maximum permissible aberration may be much smaller, such as, for example, one-tenth or even one-hundredth of a wave.
A Fourier-transform lens that can perform larger computations with an increased SBP must be generally held to a low level of aberrations over a wide field and large numerical aperture. Many of the techniques typically used to minimize aberrations may not be employed. For example, many lenses employ selective vignetting of the most aberrated rays to preserve image quality. However, convolution is a space-invariant operation, and vignetting would selectively block rays from certain points and not others, breaking space invariance. Therefore, the lens is designed such that the image is a limiting field stop and the kernel plane is a limiting pupil stop. According to various embodiments, all the lens elements in between the object, image, and kernel planes conduct all rays. Additionally, the lens is telecentric in the object/image space and afocal in the kernel space. Telecentricity ensures that the field captured from each object point is identical. An afocal image space, with the pupil stop at the kernel plane, ensures that the extent of the collimated beam formed by each object point coincides at the kernel plane.
In some embodiments, the optical system may utilize Spatial Light Modulators (SLMs) instead of, or in addition to, metasurfaces, as described herein. In such embodiments, the Fourier lenses interface with the SLMs. An SLM is a device used to modulate a pattern onto an optical wavefront, as a modulation of amplitude, phase, polarization, or a combination of these. An SLM may be used to modulate a field incident on an optical processor as well as within an optical processor, for example, at the object and kernel planes of a 4F optical system. A common application of SLMs is inside display devices, such as liquid crystal display screens and projectors. To fully utilize the SBP of a Fourier transform lens, the number of pixels of the SLM in the field of the lens corresponds to the SBP. In various embodiments, the SBP of the Fourier transform lenses is on the order of 1 million to 100 million. Accordingly, an SLM may need millions of pixels in a small area. To achieve this, SLMs with a very high density of pixels, such as digital micromirror devices (DMD) or liquid-crystal on silicon (LCOS) modulators manufactured using microfabrication methods, may be utilized.
These SLMs may be reflective rather than transmissive because the silicon substrate used therein is opaque. Accordingly, in some embodiments of the presently described systems and methods, an optoelectronic system utilizes a 4F optical arrangement for use with reflective SLMs.
The intensities at the pixels of the detector are converted to electrical samples 691, either analog or digital. As electrical signals, nonlinear computations may be applied to this data, for example, ReLU, which is a rectifier, and MaxPool, at 692, which selects the maximum value of a set of samples. The results of these operations may be stored in memory 693 for future retrieval, at 694. The retrieved results, at 694, may be converted to the optical domain, at 605, and utilized for a subsequent convolution by configuring the object plane SLM elements 610 to encode these data onto the optical field.
As discussed above, to increase the speed of the computation, the object SLM 610, image detector 675, and electrical computation including analog to digital (ADC) or digital to analog (DAC) operations, mathematical operations, memory storage, and data transmission (691, 692, 693, 694, and 605) may be integrated into a single electrical package such as a chip carrier with an interposer.
The optical designs described below for folded 4F systems are suitable for use in optoelectronic convolutional neural networks described herein, including those using reflective SLMs. The folded 4F optical systems detailed below include a spherical glass lens layout, a ZnSe lens layout, and a silicon lens layout.
The folded 4F lens layout 700 includes seven spherical elements of optical glass with a prescription as follows:
The surface data summary of the folded 4F lens layout 700 is specified in Table 1 below:
The design tolerances of the folded 4F lens layout 700 can be achieved with typical precision optics manufacturing and passive placement of components in a machined optical barrel. To facilitate manufacturing, the refraction invariant is minimized at each surface. That is, the refraction of rays is divided more equally among the elements rather than a particular surface bending rays sharply. As spherical aberration, coma, and astigmatism are dependent on the refraction invariants of the marginal and chief rays, these aberrations are minimized using this approach. As the radiation is monochromatic, it can be favorable to use higher-refractive-index glasses, as the dispersion they cause is not a concern, and using such glasses allows the curvature of each surface to be minimized so as to reduce field curvature.
The surface data summary of the folded 4F ZnSe lens layout 800 is specified in Table 2 below:
In Table 2 above, the Surface 1 Evenasph is defined as:
In Table 2 above, the Surface 3 Evenasph is defined as:
In Table 2 above, the Surface 8 Evenasph is defined as:
The folded 4F silicon lens layout 900 is specified, including the coefficients of aspheric polynomial surfaces as follows:
The surface data summary of the folded 4F silicon lens layout 900 is specified in Table 3 below:
In Table 3 above, the Surface 1 Evenasph is defined as:
In Table 3 above, the Surface 3 Evenasph is defined as:
In Table 3 above, the Surface 6 Evenasph is defined as:
In the above examples, a Fourier transform lens reduces or minimizes the RMS wavefront error, which is intended to minimize an absolute error in computation of the Fourier transform. However, in some embodiments, it is desirable to minimize an angular spot radius instead. When the angular spot radius is minimized, a corresponding angular spectrum of each object/image point in the kernel plane is minimized. When acted on by spatial frequencies of a kernel plane SLM, the angular spectrum is only broadened by the kernel bandwidth. Once the image is formed by passing back through the Fourier transform lens, a locality of a point is preserved, and possibly broadened by the intended convolution kernel. For operations that select a maximum value, such as MaxPool, maintaining the locality of the signal may be more important than preserving a quantitative phase of the result.
According to various embodiments, an SLM may be used at each plane, a first SLM to encode data onto the optical field and a second SLM as part of an array detector to record the intensity of the optical field. As described herein, the SLM may be illuminated with a coherent light beam to be modulated. The optical field at the detector typically is superimposed with a reference beam to facilitate an interferometric reconstruction of the optical field from its intensity. In each plane of the Fourier transform lens, the SLM and array detector are advantageously placed adjacent to each other so that the recorded signal from the detector may be transferred to the SLM over a minimum distance which reduces energy consumption and delay. As illustrated, the optoelectronic convolution system 1000 includes an interposer 1010, or other fabric with electrical processing capabilities. The interposer 1010 connects a memory 1020, SLM or metasurface 1030, and an array detector 1040.
By placing the components in close proximity, interconnection distances are minimized. Reducing the distances reduces the amount of energy and decreases the delay needed to transfer data to the SLM 1030 from the array detector 1040 or the memory 1020. The interposer 1010 may also contain electronic signal processing or computation to modify the data being moved, for example, to add a carrier or to apply nonlinear operations such as ReLU or MaxPool, or to extract data to be moved off the interposer 1010 into other storage.
A further advantageous variation of the embodiments described herein is to implement compute-in-memory methods which minimize the amount of data transferred to and from a Fourier transform computation system. As described above, the Fourier transform of the input function is placed on the upper SLM 1110. The Fourier transform of the input function is convolved with a kernel function, which has its Fourier transform optically computed, and the convolution recorded on the lower detector 1125. The upper SLM 1110 serves as a memory for input data so that multiple convolutions can be performed on the same input data. The process may be repeated with many kernels, each being prepared on the lower SLM 1120 with the resulting convolution detected on the lower detector 1125. The repeated retrieval of the input function is avoided as the upper SLM 1110 retains the input function for many convolutions. As convolution is commutative with respect to the input and kernel functions, the input and kernel functions may be exchanged, and for example, the same kernel may be convolved with many input functions by placing the Fourier transform of the kernel on the upper SLM 1110.
If any of the constituent functions are smaller than 1000 by 1000 samples, zero padding is added so that each function fills its respective block of the larger function. A second large sampled function contains kernels that are uniformly and equally spaced with an identical periodicity to the first large sampled function, with the position of each respective kernel function in the second array corresponding to that of the input function in the first array. Again, the kernels are zero padded so as to fill the space in between the kernels. The array of tiled input functions and the array of tiled kernel functions are convolved using the optical convolution system.
The overall convolution of the two arrays produces the sum of the convolutions of the individual input functions and their respective kernels at the position in the convolution corresponding to zero shift between the input and kernel fields, which is shown in the array of tiled kernel functions 1202 in the figure in the upper left of the convolution function, but in general depends on the origin of the optical axis of the Fourier transforms relative to the SLMs and detectors. The convolution, at 1203, of the array of tiled input functions with the array of tiled kernel functions is detected, at 1204, at the image plane by a detector.
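The tiling scheme above can be checked numerically in one dimension. The block sizes and data below are illustrative assumptions; the cross-correlation form (convolution with a flipped kernel, as a cascaded forward-transform optical system produces) is used so that the same-index input/kernel terms all align around zero shift:

```python
import numpy as np

rng = np.random.default_rng(3)
B, T, K = 4, 8, 3                      # block size, tile period (with zero padding), block count
inputs  = rng.standard_normal((K, B))
kernels = rng.standard_normal((K, B))

# Tile each function into its zero-padded slot of a large array.
F = np.zeros(K * T)
G = np.zeros(K * T)
for i in range(K):
    F[i * T : i * T + B] = inputs[i]
    G[i * T : i * T + B] = kernels[i]

# One large circular cross-correlation computed via FFTs.
corr = np.real(np.fft.ifft(np.conj(np.fft.fft(F)) * np.fft.fft(G)))

# Around zero shift, only same-index pairs contribute, because the
# zero padding pushes all cross terms to offsets of at least one tile
# period: the result is the sum over blocks of each input correlated
# with its own kernel.
lag0 = sum(float(np.dot(inputs[i], kernels[i])) for i in range(K))
lag1 = sum(float(np.dot(inputs[i][:-1], kernels[i][1:])) for i in range(K))
assert np.isclose(corr[0], lag0)
assert np.isclose(corr[1], lag1)
```

This reproduces the behavior described above: the neighborhood of the zero-shift position holds the sum of the individual block-wise results, while mismatched input/kernel pairs land a full tile period away.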
Many existing computing systems, methods, and devices may be used in combination with the presently described systems and methods. Some of the infrastructure that can be used with embodiments disclosed herein is already available, such as general-purpose computers, computer programming tools and techniques, digital storage media, and communication links. A computing device or controller may include a processor, such as a microprocessor, a microcontroller, logic circuitry, or the like. Various technologies, systems, architectures, and applications are relevant to the presently described embodiments. Examples of such technologies, systems, architectures, and applications include, but are not limited to, certain aspects of deep neural networks, image recognition, recommender systems, medical diagnosis, language processing, and the like.
A processor may include a special-purpose processing device, such as application-specific integrated circuits (ASIC), programmable array logic (PAL), programmable logic array (PLA), programmable logic device (PLD), field programmable gate array (FPGA), or other customizable and/or programmable device. The computing device may also include a machine-readable storage device, such as non-volatile memory, static RAM, dynamic RAM, ROM, CD-ROM, disk, tape, magnetic, optical, flash memory, or other machine-readable storage medium. Various aspects of certain embodiments may be implemented using hardware, software, firmware, or a combination thereof.
The components of the disclosed embodiments, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Furthermore, the features, structures, and operations associated with one embodiment may be applicable to or combined with the features, structures, or operations described in conjunction with another embodiment. In many instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of this disclosure.
This disclosure has been made with reference to various exemplary embodiments, including the best mode. However, those skilled in the art will recognize that changes and modifications may be made to the exemplary embodiments without departing from the scope of the present disclosure. While the principles of this disclosure have been shown in various embodiments, many modifications of structure, arrangements, proportions, elements, materials, and components may be adapted for a specific environment and/or operating requirements without departing from the principles and scope of this disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure.
This disclosure is to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope thereof. Likewise, benefits, other advantages, and solutions to problems have been described above with regard to various embodiments. However, benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element.
This application is a continuation of PCT Application No. PCT/US2022/015159 filed on Feb. 3, 2022 titled “Optoelectronic Computing Systems and Folded 4F Convolution Lenses,” which application claims benefit of and priority to U.S. Provisional Patent Application No. 63/145,350 titled “Opto-Electronic Accelerator for Convolutional Neural Networks Using Metamaterials” filed on Feb. 3, 2021, and claims benefit of and priority to U.S. Provisional Patent Application No. 63/160,276 titled “Folded 4-F Convolution Lenses” filed on Mar. 12, 2021, and claims benefit of and priority to U.S. Provisional Patent Application No. 63/162,392 titled “Folded 4-F Convolution Lens System with Dual Spatial Light Modulators” filed Mar. 17, 2021, which applications are hereby incorporated by reference in their entireties.
Number | Date | Country
--- | --- | ---
63145350 | Feb 2021 | US
63160276 | Mar 2021 | US
63162392 | Mar 2021 | US
Relation | Number | Date | Country
--- | --- | --- | ---
Parent | PCT/US2022/015159 | Feb 2022 | US
Child | 18362301 | | US