The present invention generally relates to optical sensing and, more particularly, to active optical compressive sensing.
Compressive sensing techniques allow direct measurement of compressed data products with significant reductions in both data and optical energy for sparse signals of interest. For traditional active remote sensing systems, the required optical energy scales linearly with the covered area. The size, weight and power (SWaP) of typical optical sensing systems are constrained by the signal requirements and radiometry of the sensing scenario. For a given aperture size, desired resolution, distance from the object, and waveform, a radiometry analysis can yield the required optical pulse energy to achieve a desired signal-to-noise ratio. Existing active space-borne optical sensing and imaging systems have large SWaP because they house elaborate optical designs, and they are costly because of the exacting optical engineering needed to meet the design requirements.
The application requirements usually determine the coverage rate, which in turn yields the required transmitter power. Once the fundamental noise limits (e.g., the shot-noise limit) are reached, the transmitter requirements can be reduced only by compromising on the application parameters. Traditional active remote sensing systems can be based on a single pixel measured with a flying spot or on a multipixel receiver with wide-area illumination. In either case, the total energy requirements of the system remain the same: the flying-spot and multipixel-receiver architectures merely trade power against time under coverage-rate constraints. In these examples, the active sensor collects data used to generate an image. However, because that data is sparse, it can be significantly compressed through postprocessing.
According to various aspects of the subject technology, methods and configurations for providing active optical compressive sensing are disclosed. The techniques of the subject technology provide a method of directly measuring sparse data with orders-of-magnitude reduction in total energy for wide area coverage. The disclosed solution provides a significant reduction in the transmitter requirements for applications such as wide area search with very sparse signals.
In one or more aspects, an active optical compressive sensing system includes an optical source to generate light for illuminating a target and a pattern generator to generate a pattern. A pattern controller controls operation of the pattern generator to generate a desired pattern. The pattern is a spatial filtering pattern that enables data compression by generating sparse scattered data. Applying the pattern allows a logarithmic resource scaling.
In other aspects, an active optical remote sensing system includes a waveform encoder to modulate a phase or an amplitude of source light generated by an optical source to generate modulated light. The system further includes an optical mixer to receive the source light and the modulated light in two different paths and to generate a phase-error signal. Relay optics projects the modulated light onto a target and relays resulting scattered light from the target to an optical heterodyne detector to generate a detector signal. A processor receives the phase-error signal and the detector signal and generates a processed phase signal.
In yet other aspects, a method of implementing active optical compressive sensing includes producing light for illuminating a target, generating a pattern to optically modulate the produced light using a desired pattern, and enabling data compression by generating sparse scattered data from the target using the pattern. The method further includes detecting the sparse scattered data and processing the detected sparse scattered data to reconstruct an image of the target with a logarithmic resource scaling.
The foregoing has outlined rather broadly the features of the present disclosure so that the following detailed description can be better understood. Additional features and advantages of the disclosure, which form the subject of the claims, will be described hereinafter.
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions to be taken in conjunction with the accompanying drawings describing specific aspects of the disclosure, wherein:
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, it will be clear and apparent to those skilled in the art that the subject technology is not limited to the specific details set forth herein and can be practiced using one or more implementations. In one or more instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
In some aspects of the present technology, methods and configurations for providing active optical compressive sensing are disclosed. The techniques of the subject technology are based on directly measuring sparse data with orders-of-magnitude reduction in total energy for wide area coverage. The disclosed techniques achieve a significant reduction in the transmitter optical energy requirements for applications such as wide area search with very sparse signals. An illustration of what is meant by sparse and compressible is shown in the next section. For traditional active remote sensing systems, the required optical energy scales linearly with the covered area. The techniques of the subject technology demonstrate that this scaling can be changed to logarithmic, which provides significant advantages for wide area active sensing.
Convex optimizations for sparse data recovery have been implemented by others for passive imaging architectures. Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Applying convex optimization to compressive sensing reduces the total energy used to measure the data of interest by directly measuring the signal in a compressed basis set. The disclosed solution applies convex optimizations to active optical remote sensing. As stated earlier, the transmitter is typically the SWaP driver for an active optical system, with the transmitter power requirements set by the radiometry for the sensing geometry and voxel parameters, which include aperture diameter (DA), object range (R), wavelength (λ), object reflectivity (ρ) and pixel resolution (dx). The signal-to-noise requirements depend on the noise and clutter, and the sensor parameters together with the signal-to-noise requirement determine the required pulse energy. Because typical active sensor data is very sparse, there are significant opportunities for reducing the total transmitted energy by applying compressive sensing techniques.
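To illustrate how the radiometry sets the transmitter requirement, the following is a minimal link-budget sketch, assuming a monostatic geometry, a Lambertian target that fills one resolution cell, a shot-noise-limited receiver in which the signal-to-noise ratio equals the detected photoelectron count, and a lumped efficiency eta; the function name and the numeric values are illustrative and not taken from the disclosure.

```python
import numpy as np

H, C = 6.626e-34, 3.0e8  # Planck constant [J*s], speed of light [m/s]

def required_pulse_energy(snr, d_a, r, lam, rho, eta=0.5):
    """Pulse energy [J] for a shot-noise-limited return of `snr` detected
    photoelectrons from a Lambertian target filling one resolution cell."""
    a_rx = np.pi * d_a**2 / 4.0                # receive aperture area [m^2]
    frac = rho * a_rx / (np.pi * r**2)         # Lambertian capture fraction
    return (snr / eta) * (H * C / lam) / frac  # photons -> joules

# Illustrative: 10 cm aperture, 10 km range, 1.55 um, 10% reflectivity
print(required_pulse_energy(snr=100, d_a=0.10, r=10e3, lam=1.55e-6, rho=0.10))
```

Under these assumptions the answer is roughly 10 microjoules per resolution cell; a raster-scanned sensor must multiply this per-pixel energy by the number of pixels n, which is the linear scaling with covered area noted above.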
As shown by the disclosure herein, wide area imaging can obtain an orders-of-magnitude reduction over a traditional ladar sensing architecture with minimal system changes. Specifically, the subject technology can be applied in a large number of applications and provides architectures for active optical compressive spatial imaging, active optical remote sensing, coherent detection, linear mode detection and Geiger mode detection implementation. The disclosure further includes algorithms for image recovery, compressive sensing for efficient change detection, estimation of data requirements based on measured variance, expansion to a wavelet basis set, implementing a deterministic basis set, reduction of platform pointing requirements and additional reductions for change detection for imaging. The disclosed subject matter further includes algorithms for compressive sensing of ladar ranging data; nonlinear fast Fourier transform (FFT) for efficient sparsely sampled signal recovery; combination of spatial and temporal active compressive sensing for 3D surface recovery; basis projection optimization for Geiger mode saturation mitigation; recovering sparse, wide-area data in high-clutter environments; and recovery of sparse, wide-area data in low clutter environments.
The above techniques have been applied to passive imaging systems and to one-dimensional signal processing and recovery. The subject technology uses these mathematical constructs to reduce optical energy and data requirements for active sensing and imaging systems. The energy (and data) requirements for a traditional ladar sensor scale with the number of pixels, n, in the scene. With a compressive sensing architecture, the number of measurements, and hence the energy, instead scales as a polynomial in log n. For a large number of pixels, this can be an orders-of-magnitude reduction in system requirements. It is important to note that the scaling and coefficients depend on the scene complexity and noise. Approaching these information-theory limits requires an optimized choice of projection measurement basis sets. The techniques of the subject technology work even without a priori knowledge of the scene. There are additional improvements when information about the scene is exploited, as discussed in more detail herein.
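The size of this reduction can be gauged with the standard compressive sensing rule of thumb m ≈ C·k·log(n/k) measurements for a k-sparse scene of n pixels; the constant C and the exact form depend on the projection basis and noise, as noted above. A sketch under that assumption:

```python
import numpy as np

def cs_measurement_count(n, k, c=4.0):
    """Rule-of-thumb compressive measurement count m ~ C*k*log(n/k); the
    constant C depends on scene statistics and the projection basis."""
    return int(np.ceil(c * k * np.log(n / k)))

n, k = 1_000_000, 100        # one million pixels, one hundred sparse targets
print(n, cs_measurement_count(n, k))  # ~1,000,000 dwells vs ~3,700 projections
```

For these illustrative numbers the measurement (and energy) count drops from about one million raster dwells to a few thousand projections, consistent with the orders-of-magnitude claim above.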
The optical source 102 is a light source, with examples including a laser diode, a fiber laser, a super-luminescent LED, or an LED. The pattern generator 104 can be a micro-electromechanical system (MEMS) mirror array or liquid crystal spatial light modulator and can spatially modulate the intensity of the light generated by the optical source 102. The relay optics 106 can be a single lens or a lens system that relays the spatially modulated light from the pattern generator 104 to the target 108 so that the generated pattern is scaled to the desired size and in focus at the target 108. The receiver optics 110 can include the same or a separate aperture that relays the light scattered from the target 108 (illuminated scene) to a light detector within the receiver optics 110. The electrical signal generated by the detector is then passed to the digitizer 112 (through an optional amplifier when needed) to convert the detector electrical signal to a digital signal. The digital signal is processed by the processor 114, which performs the desired digital signal processing for the convex optimization to generate the image as described in more detail herein.
The pattern controller 116 can be a separate interface or combined with the processor 114 and can control the pattern generator 104. In some implementations, the pattern controller 116 may cause the pattern generator 104 to select a desired pattern from a list of patterns, for example, stored in memory, or select a pattern generation algorithm stored in the memory for execution by the pattern generator 104 or the processor 114. The pattern generation algorithm can generate a desired pattern based on a number of parameters including parameters of the scene and/or based on prior image data of the scene.
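A toy forward model of the architecture just described may help fix ideas: each pattern placed on the illumination by the pattern generator multiplies the scene pixel by pixel, and the single detector integrates the scattered return, so each pattern yields one compressed measurement. This is a sketch with hypothetical names and sizes, not the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1024, 120, 5                 # scene pixels, patterns, sparse targets

x = np.zeros(n)                        # sparse scene (reflectivity per pixel)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

phi = rng.integers(0, 2, size=(m, n)).astype(float)  # binary on/off patterns
y = phi @ x + 0.01 * rng.standard_normal(m)          # one sample per pattern
```

The image is then recovered from (phi, y) by the convex optimization described below; note that m is much smaller than n, which is where the energy and data savings arise.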
The system architecture 100B corresponds to an active optical compressive spatial imaging. The system architecture 100B uses a bistatic setup with patterns on the received light as discussed herein. The system architecture 100B includes the optical source 102, a transmit relay optics 120, the target 108, a receiver optics 110, the pattern generator 104, a receive relay optics 122, a receiver 124, a processor 114, a pattern controller 116, and a resulting image 128. The transmit relay optics 120 illuminates the region of interest and the receiver optics 110 maps the region of interest onto the pattern generator 104, which could be a MEMS array or amplitude spatial light modulator. The receive relay optics 122 relays the image on the pattern generator 104 to a detector within the receiver 124. The functionalities of the processor 114 and the pattern controller 116 are as discussed above.
The techniques of the subject technology are based on using the patterns generated by the pattern generator 104 to provide sparse data from the illuminated scene and to directly measure that sparse data with orders-of-magnitude reduction in total energy and a significant reduction in the amount of data processing for wide area coverage.
This system architecture 200 can be used for applications in optical remote sensing. The optical source 202 can be a laser diode, fiber laser, super-luminescent LED, or LED. The first and second optical splitters 204 and 208 can be free-space beam splitters or fiber-based splitters. The waveform encoder 206 can be a direct modulator of the optical source, an acousto-optic modulator, a Mach-Zehnder-based amplitude modulator, a phase modulator, or a combination of these devices to create the desired waveform. The optical mixer 210 receives signals before and after the waveform encoder 206 in two different paths and can be a beam splitter with the appropriate delays on each path in order to create a phase signal to measure the waveform characteristics. The optical mixer 210 generates measured phase errors 212, which are passed to the processor 224. The transmit/receive switch 214 can be a nonpolarizing beam splitter, a polarizing beam splitter, or a Faraday crystal-based device, which can be either free-space or fiber-based components. The relay optics 216 maps the light from the transmit/receive switch 214 to a region of interest at the location of the target 218 and relays scattered light from the target 218 to the transmit/receive switch 214.
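To make the mixer's role concrete, the following sketch assumes a linear-FM (chirp) waveform and models the phase-error measurement as a delayed self-heterodyne beat; the delay, bandwidth, and sample rate are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

fs, bw, t_chirp = 1.0e6, 100.0e3, 1.0e-3     # sample rate, chirp bandwidth/length
t = np.arange(0.0, t_chirp, 1.0 / fs)
ideal_phase = np.pi * (bw / t_chirp) * t**2  # linear-FM phase ramp
tx = np.exp(1j * ideal_phase)                # encoded waveform (noise-free here)

d = 20                                        # reference-arm delay in samples
beat = tx[d:] * np.conj(tx[:-d])              # delayed self-heterodyne mixing
ideal_beat = ideal_phase[d:] - ideal_phase[:-d]
phase_err = np.unwrap(np.angle(beat)) - ideal_beat  # ~0 for an ideal encoder
```

Any deviation of the encoder from the ideal phase ramp appears directly in phase_err, which is the correction signal passed to the processor 224.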
The optical heterodyne module 220 is a heterodyne detector that can use either free space or fiber-based components, including beam splitters, to mix the scattered signals received from the transmit/receive switch 214 in a heterodyne, balanced heterodyne, quadrature, or balanced quadrature configuration. The signal from the heterodyne detector can then be passed through an amplifier (not shown for simplicity) prior to the digitizer 222. The digitized signal from the digitizer 222 is then passed to the processor 224 for processing. The sampling control module 226 controls sampling of the digitizer 222. The sampling control module 226 can take either regularly sampled data or randomly spaced samples from the processor 224, with a controlled distribution setting the moments of the time spacing. The measured waveform phase errors 212 are sent by the processor 224 to the phase correction module 228 as a correction to the sampled data either in the time or frequency domain or a combination of time and frequency domains. The corrected data is then windowed, by the window function module 230, using any (or no) window function. The windowed corrected data is then passed to the Fourier transform module 232. The output of the Fourier transform module 232 is a recovered signal 234 that is a corrected signal representing the desired ladar data product. Examples of the recovered signal 234 include amplitude as a function of range in the region of interest.
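As a sketch of the corresponding digital chain (phase correction, window function, Fourier transform), assuming the time-domain form of the correction and substituting a direct nonuniform DFT for the FFT when the samples are randomly spaced:

```python
import numpy as np

def recover_profile(samples, t, phase_err):
    """Sketch of the receive chain above: subtract the mixer-measured phase
    error in the time domain, apply a window, and Fourier transform the
    (possibly nonuniformly sampled) heterodyne data into an amplitude
    profile versus frequency/range."""
    corrected = samples * np.exp(-1j * phase_err)      # time-domain phase fix
    windowed = corrected * np.hanning(len(corrected))  # any (or no) window
    f = np.fft.rfftfreq(len(t), d=np.mean(np.diff(t)))
    dft = np.exp(-2j * np.pi * np.outer(f, t)) @ windowed  # nonuniform DFT
    return f, np.abs(dft)
```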
In one or more implementations, the sampling control module 226, the phase correction module 228, the window function module 230 and the Fourier transform module 232 can be implemented in hardware, firmware or software executable by the processor 224 or any other processor of the system, including a general processor.
At operation block 618, it is determined whether the required variance parameters for the number of measurements are met. If the required variance parameters are not met, control is passed to operation block 612. If the required variance parameters are met, control is passed to operation block 620, where the same convex optimization is implemented to reconstruct the image based on the desired scene metric (e.g., entropy). Optional improvements can be achieved by applying sharpness filters (at operation block 622) and median filters (at operation block 626) to the reconstructed image to form the final image output 628.
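The disclosure does not fix a particular solver; as one common instance of the convex optimization named above, the following is an iterative soft-thresholding (ISTA) sketch of L1-regularized least-squares recovery, with illustrative data and parameters.

```python
import numpy as np

def ista(phi, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for
    min_x 0.5*||phi @ x - y||^2 + lam*||x||_1,
    one standard convex-optimization route to a sparse reconstruction."""
    step = 1.0 / np.linalg.norm(phi, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(phi.shape[1])
    for _ in range(iters):
        g = x - step * (phi.T @ (phi @ x - y))     # gradient step on LS term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(1)
n, m, k = 1024, 120, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(phi, phi @ x_true)
print(np.flatnonzero(x_true), np.flatnonzero(np.abs(x_hat) > 0.5))
```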
At operation block 806, a pseudo-random pattern for the next index is generated. At operation block 808, these patterns are transmitted to the target (or used as a filter for the return light) and the resulting amplitude of the scattered light from each pattern is measured. At operation block 810, the variance of the amplitude data is measured. The variance requirements from earlier are modified to match the expected variance for the lowest resolution parts of the scene, since these parts have a lower variance and therefore require a larger number of measurements. At operation block 812, it is determined whether the variance meets the requirements for the lowest resolution portions of the scene. If the variance does not meet the requirements, control is passed to operation block 804. If the variance meets the requirements, control is passed to operation block 814, where the measured amplitudes and projections are processed using convex optimization to reconstruct the image. The reconstruction works in the same way as described earlier, with the convex optimization providing a reconstructed image with nonuniform resolution.
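A minimal sketch of the variance test in operation blocks 810-812 is shown below, assuming the requirement is expressed as a relative standard error; the tolerance and minimum count are placeholder values, not requirements from the disclosure.

```python
import numpy as np

def enough_measurements(amplitudes, rel_tol=0.05, min_count=16):
    """Heuristic for blocks 810-812: stop adding projections once the
    standard error of the measured amplitudes is small relative to their
    mean level (matched to the lowest-resolution scale of the scene)."""
    if len(amplitudes) < min_count:
        return False
    std_err = np.std(amplitudes, ddof=1) / np.sqrt(len(amplitudes))
    return std_err <= rel_tol * max(np.mean(np.abs(amplitudes)), 1e-12)
```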
The optical source 1102 can be a laser diode, fiber laser, super-luminescent LED, or LED. The first and second optical splitters 1104 and 1108 can be free space beam splitters or fiber-based splitters. The waveform encoder 1106 can be a direct modulator of the optical source, an acousto-optic modulator, a Mach-Zehnder-based amplitude modulator, a phase modulator, or a combination of these devices to create the desired waveform. The optical mixer 1110 receives signals before and after the waveform encoder 1106 in two different paths and can be a beam splitter with the appropriate delays on each path in order to create a phase signal to measure the waveform characteristics. The optical mixer 1110 generates measured waveform phase errors 1112, which are passed to the processor 1128. The pattern generator 1114 can be a MEMS mirror array or liquid crystal spatial light modulator and can spatially modulate the intensity of the light generated by the optical source 1102. The transmit/receive switch 1116 can be a nonpolarizing beam splitter, a polarizing beam splitter, or a Faraday crystal-based device, which can be either free-space or fiber-based components. The relay optics 1118 maps the light from the transmit/receive switch 1116 to a region of interest at the location of the target 1120.
The optical heterodyne module 1122 is a heterodyne detector that can use either free space or fiber-based components, including beam splitters, to mix the signals received from the transmit/receive switch 1116 in a heterodyne, balanced heterodyne, quadrature, or balanced quadrature configuration. The signal from the heterodyne detector can then be passed through an amplifier 1124 prior to the digitizer 1126. The digitized signal from the digitizer 1126 is then passed to the processor 1128 for processing. The sampling control module 1130 controls sampling of the digitizer 1126. The sampling control module 1130 can take either regularly sampled data or randomly spaced samples from the processor 1128, with a controlled distribution setting the moments of the time spacing. The measured waveform phase errors 1112 are sent by the processor 1128 to the phase correction module 1134 after convex optimization processing for each range slice by the convex optimization processing module 1132. The phase correction module 1134 makes a correction to the sampled data either in the time domain or frequency domain or in a combination of time and frequency domains. The corrected data is then windowed, by the window function module 1136, using any (or no) window function. The windowed corrected data is then passed to the Fourier transform module 1138. The output of the Fourier transform module 1138 is used by the 3D image reconstruction module 1140 to reconstruct the 3D image from the spatial and amplitude pattern at each range slice.
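Schematically, the 3D image reconstruction of module 1140 can be viewed as running the convex solver slice by slice and stacking the results; a sketch, reusing any L1 solver such as the ista() example above:

```python
import numpy as np

def reconstruct_3d(y_slices, phi, solve_l1):
    """Run the convex (L1) solver on the pattern measurements of each range
    slice and stack the recovered slices into a cube of shape (n_ranges, n).
    `y_slices` has shape (n_ranges, m); `solve_l1(phi, y)` returns one
    n-pixel slice."""
    return np.stack([solve_l1(phi, y) for y in y_slices])
```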
In one or more implementations, the sampling control module 1130, the convex optimization processing module 1132, the phase correction module 1134, the window function module 1136, the Fourier transform module 1138 and the 3D image reconstruction module 1140 can be implemented in hardware, firmware or software executable by the processor 1128 or any other processor of the system including a general processor.
In one or more implementations, the sampling control module 1220, the convex optimization processing module 1222, the window function module 1224, the Fourier transform module 1226 and the 3D image reconstruction module 1228 can be implemented in hardware, firmware or software executable by the processor 1218 or any other processor of the system, such as a general processor.
The optical source 1202 can be a laser diode, fiber laser, super-luminescent LED, or LED. The waveform encoder 1204 can be a direct modulator of the optical source, an acousto-optic modulator, a Mach-Zehnder-based amplitude modulator, a phase modulator, or a combination of these devices to create the desired waveform. The pattern generator 1206 can be a MEMS mirror array or liquid crystal spatial light modulator and can spatially modulate the intensity of the light generated by the optical source 1202. The transmit/receive switch 1208 can be a nonpolarizing beam splitter, a polarizing beam splitter, or a Faraday crystal-based device, which can be either free-space or fiber-based components. The relay optics 1210 maps the light from the transmit/receive switch 1208 to a region of interest at the location of the target 1212.
The amplifier 1214 amplifies the signal from the transmit/receive switch 1208, which is converted to a digital signal by the digitizer 1216. The digitized signal from the digitizer 1216 is then passed to the processor 1218 for processing. The sampling control module 1220 controls sampling of the digitizer 1216. The sampling control module 1220 can take either regularly sampled data or randomly spaced samples from the processor 1218, with a controlled distribution setting the moments of the time spacing. The digitized signal is processed by the processor 1218, and the processed signal is sent as inputs to the sampling control module 1220 and the convex optimization processing module 1222, which performs the convex optimization for each range slice. The output of the convex optimization processing module 1222 is then windowed, by the window function module 1224, using any (or no) window function. The windowed corrected data is then passed to the Fourier transform module 1226. The output of the Fourier transform module 1226 is used by the 3D image reconstruction module 1228 to reconstruct the 3D image from the spatial amplitude pattern at each range.
The pattern controller 1318 can be a separate interface or combined with the processor 1316 and can control the pattern generator 1304. In some implementations, the pattern controller 1318 may cause the pattern generator 1304 to select a desired pattern from a list of patterns, for example, stored in memory or select a pattern generation algorithm stored in the memory for execution by the pattern generator 1304 or the processor 1316.
The flow chart 1300B shows an algorithm for implementation with the Geiger mode detector 1312. The algorithm implements a loop to generate the amplitude for a given pattern using multiple measurements. The algorithm can be adapted to use patterns on either the transmit or receive paths. The algorithm starts at operation block 1330, where the next projection basis element set is prepared. At operation block 1332, the pulses are transmitted to illuminate the scene. At operation block 1334, the amplitudes of return signals at each range within the region of interest are measured. At operation block 1336, the amplitude variations are checked for the projection pattern. At operation block 1338, it is checked whether the desired amplitude fidelity is met. If the desired amplitude fidelity is not met, control is passed to operation block 1332. If the desired amplitude fidelity is met, at operation block 1340 it is checked whether the last projection basis element is projected. If the last projection basis element is not yet projected, control is passed to operation block 1330. If the last projection basis element is projected, control is passed to operation block 1342, where average returns within each projection basis data set at each range slice are determined. At operation block 1344, the resulting data at each range slice is processed using a convex optimization algorithm. The output is a 3D image 1346 reconstructed using range and spatial data. The pulse repetition frequency is set to allow the desired number of photons to be counted in the desired timeframe while keeping the average count below the saturation threshold. The data is a sequence of timestamps 1313 that are translated into range.
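For the amplitude estimate in blocks 1334-1342, one standard route, sketched here as an assumption rather than the disclosed method, is to histogram the timestamp-derived ranges over many pulses and invert the Geiger-mode click probability p = 1 - exp(-mu) to obtain a mean photon number per range bin:

```python
import numpy as np

def amplitude_from_counts(ranges, n_pulses, bin_edges):
    """Histogram timestamp-derived ranges over n_pulses pulses and invert
    the Geiger-mode click statistics, p(click) = 1 - exp(-mu), bin by bin."""
    hist, _ = np.histogram(ranges, bins=bin_edges)
    p_click = np.clip(hist / n_pulses, 0.0, 0.999)  # keep below saturation
    return -np.log1p(-p_click)                       # mean photons mu per bin
```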
Because the amplitudes must be recovered over time to avoid saturation of the Geiger mode detector, scene clutter impacts a Geiger mode detection system differently than it does a coherent detection setup. This provides an additional opportunity to optimize the basis set by maximizing the variance of the measured projections.
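One simple way to act on this, sketched under the assumption that a pool of candidate patterns and representative scene draws are available (both hypothetical here), is to greedily select the candidate whose projections vary the most:

```python
import numpy as np

def pick_next_pattern(candidates, scene_draws):
    """Greedy heuristic: among candidate patterns (each of length n), pick
    the one whose projections onto sample scenes (shape (d, n)) have the
    largest variance, i.e., are most informative per count-limited dwell."""
    variances = [np.var(scene_draws @ p) for p in candidates]
    return candidates[int(np.argmax(variances))]
```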
The resulting images 1500B show about a six-times (6×) reduction in total energy and total data. A further two-times (2×) reduction in total energy and total data can be achieved, as shown by the images 1500C, by using a smaller set of projections to construct the image of test target #2.
Bus 1608 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of electronic system 1600. In one or more implementations, bus 1608 communicatively connects processing unit(s) 1612 with ROM 1610, system memory 1604, and permanent storage device 1602. From these various memory units, processing unit(s) 1612 retrieve(s) instructions to execute and data to process in order to execute the processes of the subject disclosure. The processing unit(s) 1612 can be a single processor or a multicore processor in different implementations.
ROM 1610 stores static data and instructions that are needed by processing unit(s) 1612 and other modules of the electronic system. Permanent storage device 1602, on the other hand, is a read-and-write memory device. This device is a nonvolatile memory unit that stores instructions and data even when electronic system 1600 is off. One or more implementations of the subject disclosure use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as permanent storage device 1602.
Other implementations use a removable storage device (such as a flash drive and its corresponding drive) as permanent storage device 1602. Like permanent storage device 1602, system memory 1604 is a read-and-write memory device. However, unlike storage device 1602, system memory 1604 is a volatile read-and-write memory, such as random access memory (RAM). System memory 1604 stores any of the instructions and data that processing unit(s) 1612 need(s) at runtime. In one or more implementations, the processes of the subject disclosure are stored in system memory 1604, permanent storage device 1602, and/or ROM 1610. From these various memory units, processing unit(s) 1612 retrieve(s) instructions to execute and data to process in order to execute the processes of one or more implementations. In one or more implementations, the processing unit(s) 1612 execute(s) various processes and algorithms of the subject technology, including the algorithms and methods described above.
Bus 1608 also connects to input and output device interfaces 1614 and 1606. Input device interface 1614 enables a user to communicate information and select commands to the electronic system. Input devices used with input device interface 1614 include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). Output device interface 1606 enables, for example, the display of images generated by electronic system 1600. Output devices used with output device interface 1606 include, for example, printers and display devices such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat-panel display, a solid-state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback, and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, bus 1608 also couples electronic system 1600 to one or more networks (not shown) through one or more network interfaces.
In some other aspects, the subject technology may be used in various markets, including, for example, and without limitation, sensor technology, signal processing and communication markets.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software or a combination of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way), all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that all illustrated blocks may be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single hardware and software product or packaged into multiple hardware and software products.
The description of the subject technology is provided to enable any person skilled in the art to practice the various aspects described herein. While the subject technology has been particularly described with reference to the various figures and aspects, it should be understood that these are for illustration purposes only and should not be taken as limiting the scope of the subject technology.
A reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more.” The term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the above description.
Although the invention has been described with reference to the disclosed aspects, one having ordinary skill in the art will readily appreciate that these aspects are only illustrative of the invention. It should be understood that various modifications can be made without departing from the spirit of the invention. The particular aspects disclosed above are illustrative only, as the present invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular illustrative aspects disclosed above may be altered, combined, or modified and all such variations are considered within the scope and spirit of the present invention. While compositions and methods are described in terms of “comprising,” “containing,” or “including” various components or steps, the compositions and methods can also “consist essentially of” or “consist of” the various components and operations. All numbers and ranges disclosed above can vary by some amount. Whenever a numerical range with a lower limit and an upper limit is disclosed, any number and any subrange falling within the broader range are specifically disclosed. Also, the terms in the claims have their plain, ordinary meanings unless otherwise explicitly and clearly defined by the patentee. If there is any conflict in the usage of a word or term in this specification and one or more patent or other documents that may be incorporated herein by reference, the definition that is consistent with this specification should be adopted.