The present disclosure is directed to an exit-pupil expander used to expand light over a liquid-crystal variable retarder. In one embodiment, an optical device includes a liquid-crystal variable retarder and an exit-pupil expander optically coupled to the liquid-crystal variable retarder. The exit-pupil expander includes at least one optical input feature that receives reference light from a reference light source, and one or more optical coupling elements coupled to receive the reference light from the reference light source and expand the reference light to one or more spatially-separated regions of the liquid-crystal variable retarder.
In another embodiment, reference light is coupled into an optical input of an exit-pupil expander. The reference light is expanded to one or more spatially-separated regions of the exit-pupil expander. The expanded reference light is passed through a liquid-crystal variable retarder. Based on detecting the expanded reference light that passes through the liquid-crystal variable retarder, a spatially-dependent retardance of the liquid-crystal variable retarder is determined.
These and other features and aspects of various embodiments may be understood in view of the following detailed discussion and accompanying drawings.
The discussion below makes reference to the following figures, wherein the same reference number may be used to identify a similar or identical component in multiple figures. The drawings are not necessarily to scale.
The present disclosure relates to liquid-crystal devices used for optical retardance control. Generally, liquid-crystal (LC) materials are liquids having some crystalline properties (e.g., orientation of internal structures, such as the LC director that indicates the local average alignment of LC molecules) that can be selectably altered by applying an external stimulus, such as an electric field or a magnetic field. A change in orientation of the LC director alters the optical properties of the LC materials, e.g., changing the optical axis of the LC birefringence. While the selectable orientation of liquid crystals has a wide range of applications (e.g., electronic displays), the present disclosure is directed to a class of devices known as variable optical retarders, or LC variable retarders (LCVRs).
An LCVR generates a variable optical path delay, or a variable retardance, between two orthogonal polarizations of light that travel through the liquid crystal. One or more liquid-crystal cells within the LCVR function as electrically tunable birefringent elements. Varying the voltage across the electrodes of a liquid-crystal cell changes the orientation of the LC molecules, making it possible to create a variable optical path delay between first rays in an incident polarization direction and second rays in an orthogonal polarization (e.g., ordinary and extraordinary rays). This path delay causes a wavelength-dependent phase shift between the first and second rays.
Because LCVRs generate an electrically-controllable optical path delay, they are sometimes used within interferometers, specifically polarization interferometers. Polarization interferometers are common-path interferometers (meaning that both arms of the interferometer follow the same geometrical path) that combine polarizing elements with birefringent elements to generate interferograms, whereby the optical path delay induced by the birefringent elements varies spatially and/or temporally.
To create a polarization interferometer with an LCVR, the LCVR is placed between a first polarizer and a second polarizer with nominally parallel or perpendicular polarization axes. The slow axis of the LCVR (the polarization axis with the variable optical path delay) is oriented nominally 45 degrees with respect to the polarization direction of the first polarizer. Incoming light is polarized to an incident polarization direction by the first polarizer. Because the slow axis of the LCVR is at 45 degrees with respect to this incident polarization direction, the polarized incident light can be described in terms of a portion of light polarized parallel to the slow axis of the LCVR and a portion of light polarized perpendicular to this axis.
As the light passes through the LCVR, it acquires a wavelength-dependent relative phase shift between the first and second polarizations, thereby leading to a wavelength-dependent change in the polarization state. The second polarizer, or analyzer, oriented either parallel or perpendicular to the first polarizer, interferes the portion of light polarized parallel to the slow axis of the LCVR with the portion of light polarized perpendicular, changing the wavelength-dependent polarization state at the output of the LCVR into a wavelength-dependent intensity pattern that can be sensed by an optical detector or a focal plane array. By sensing this intensity while varying the retardance of the LCVR, it is possible to measure an interferogram of the incoming light, which can be used to ascertain spectral properties of the incoming light.
A polarization interferometer based on an LCVR may have a number of uses. For example, such a device may be used in hyperspectral imaging applications because of its ability to encode spectral information of the incident light into an intensity pattern that is easily measured with a non-spectrally-resolving detector. Hyperspectral imaging refers to methods and devices for acquiring hyperspectral datasets or data-cubes, which may include images where densely sampled, finely resolved spectral information is provided at each pixel.
The wavelength-dependent intensity pattern provided by the polarization interferometer corresponds approximately to a cosine transform of the spectrum of the incident light. By recording the spatially-dependent intensity pattern at the output of a polarization interferometer as a function of the LCVR's retardance, the interferograms generated by all points of a scene imaged through the LCVR can be sampled simultaneously. From this, the hyperspectral data-cube can be nominally recovered by applying a transform, such as an inverse cosine transform or Fourier transform along the retardance axis, to the recorded spatially-dependent interferogram.
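By way of a purely illustrative sketch of this recovery step (the array layout, sampling assumptions, and function names below are hypothetical, not the disclosed implementation), an interferogram cube that has been sampled on a uniform retardance grid can be transformed along its retardance axis to give a per-pixel spectral estimate:

```python
import numpy as np

def recover_spectra(interferograms: np.ndarray, retardance_step_m: float):
    """interferograms: shape (rows, cols, n_retardance_samples), assumed to be
    uniformly sampled in retardance. Returns wavenumber bins and a per-pixel
    spectral magnitude estimate obtained by transforming along retardance."""
    # Remove the mean (DC) term so only the oscillatory fringe part is transformed.
    ac = interferograms - interferograms.mean(axis=-1, keepdims=True)
    spectra = np.abs(np.fft.rfft(ac, axis=-1))
    # Wavenumbers (cycles per unit optical path delay) associated with each bin.
    wavenumbers = np.fft.rfftfreq(interferograms.shape[-1], d=retardance_step_m)
    return wavenumbers, spectra
```

In practice the transform may be an inverse cosine transform and may include apodization and phase correction; the sketch only illustrates the basic retardance-to-spectrum relationship.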
To accurately calculate the hyperspectral data-cube, the processing apparatus used to apply the above transform should have precise knowledge of the optical path delay of the LCVR over its clear aperture for each individual interferogram sample. This can be done, for example, with a monochromatic reference light source, calibration light source, or laser that is pointed through the LCVR and linearly polarized at 45 degrees with respect to the LCVR's slow axis. The intensity of the light that has passed through the LCVR and is polarized parallel to the source polarization is recorded, and the phase (and thus the optical path delay) is calculated via methods known in the art, such as the Takeda Fourier-transform method. For practical reasons, however, this measurement is typically done only at one location of the LCVR, under the assumption that the retardance of the LCVR has no spatial dependence. In reality, the LCVR can have significant spatially-dependent retardance variation, so this assumption can lead to errors in calculating portions of the hyperspectral data-cube from regions of the spatially-dependent interferogram that are imaged through positions of the LCVR that differ substantially in retardance from where the retardance is actually measured.
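As a rough sketch of this phase-extraction step (a generic one-sideband, Fourier-transform fringe-analysis approach in the spirit of the Takeda method; the details below are assumptions rather than the specific calibration procedure used), the phase of a recorded reference-laser fringe signal can be estimated by keeping only its positive-frequency content and taking the angle of the resulting analytic signal:

```python
import numpy as np

def estimate_phase(intensity: np.ndarray) -> np.ndarray:
    """Estimate the unwrapped interferometric phase of a 1-D fringe record
    (reference-light intensity vs. sample index while retardance is swept)."""
    n = intensity.size
    spectrum = np.fft.fft(intensity - intensity.mean())
    # Keep only the positive-frequency sideband; this assumes a monotonic
    # retardance sweep so the fringe carrier lies on one side of DC.
    analytic = spectrum.copy()
    analytic[n // 2 + 1:] = 0.0      # discard negative frequencies
    analytic[1:(n + 1) // 2] *= 2.0  # compensate for the discarded half
    signal = np.fft.ifft(analytic)
    return np.unwrap(np.angle(signal))
```

The unwrapped phase is proportional to the optical path delay at the measured location, up to the modulo-2π ambiguity discussed later.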
The LCVRs typically used for a hyperspectral imager generally comprise thick LC layers in order to access a high level of optical retardance. This required LC layer thickness leads to LCVRs that switch slowly, potentially much more slowly than is desirable in a hyperspectral imaging arrangement. It is possible to switch the LCVR much faster than its natural relaxation time by dynamically driving the LCVR with an appropriate voltage waveform. However, the faster the LCVR is driven, the more likely it is that a spatial dependence is introduced into the instantaneous retardance. This is because the LC cells in the LCVR are generally not perfectly flat or homogeneous, and each position responds differently depending on its thickness or other position-dependent parameters. In order to remedy this, the spatially-dependent retardance can be measured at many points across the LCVR as the nominal retardance is changed in order to improve accuracy of the transform operations used to calculate the hyperspectral data-cube. More details of the hyperspectral imaging process can be found in U.S. Publication 2016/0123811, dated May 5, 2016, as well as A. Hegyi and J. Martini, Opt. Express 23, 28742-28754 (2015).
One technique for combining two images in a compact form factor is referred to as “waveguiding.” Generally, light from a display is coupled into a glass “waveguide” that forms the display window using some form of coupling element, e.g., a diffractive optical element. It is then coupled out of the display window using a second coupling element. The coupling element could be, for example, a diffractive element, partially reflective element, waveguide coupler, etc. This arrangement functions as a periscope, and it expands the exit-pupil of the display. It is known in the art as an “exit-pupil expander.” The present disclosure relates to a device and a method for combining an exit-pupil expander with a liquid-crystal variable retarder.
In
The plurality of coupling elements 110 are spatially dispersed over a major surface of the glass substrate 108. In the illustrated example, the major surface corresponds to an xy-plane. There may be a detector 114, such as a focal-plane array (FPA), behind the LCVR 100, for example when used for hyperspectral imaging. The coupling of the light 105 to different regions of the LC layer 101 allows the instantaneous optical path delay to be measured at different xy-coordinates through the LCVR 100. The detector 114 has corresponding sensors (e.g., pixels) at these locations, and a separate measurement of optical path delay can be made at each xy-coordinate, in a preferred embodiment by analyzing the intensity of light polarized parallel to the polarization of the coupled light 109.
The coupling elements 110 cause light emitted from the exit-pupil expander 102 to be dispersed through one or more spatially unique regions of the LCVR 100. The one or more coupling elements 110 may be arranged such that the exit-pupil expander 102 emits light 109 in the z-direction and from a one-dimensional pattern, e.g., from a line parallel to the y-direction. In other embodiments, the exit-pupil expander 102 may emit a two-dimensional pattern. The diagrams of
In
A plurality of second coupling elements 212 are arranged along the second light paths 210 and configured to reflect part of light incident on the elements 212 towards an output surface 200a of the substrate 200. The second coupling elements 212 are shown arranged in a rectangular grid, although other patterns are possible. For example, the coupling elements 212 can be non-evenly spatially distributed based on a priori knowledge of the LCVR, e.g., increasing density of the elements 212 in regions that are to experience larger spatial gradients of retardance. Note that each of the coupling elements 208, 212 will be configured to reflect a first portion of light and transmit a second portion of light further along the light path. For example, assuming uniform illumination over the grid is desired and assuming no optical loss, the leftmost optical element 208 would be configured to reflect ⅙ of the incident light in the negative y-direction and transmit ⅚ of the incident light in the x-direction. The next optical element would reflect ⅕ and pass ⅘, with each element reflecting a relatively greater portion until the rightmost element 208 reflects all of the light. These values can be adjusted for optical losses, manufacturing tolerances, a desired non-uniform intensity distribution, etc.
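To make the uniform-illumination example concrete, a short sketch (a simplified lossless model, not a design tool) computes the reflection fraction needed at each successive coupling element so that every element directs the same power toward the output:

```python
def reflection_fractions(n_elements: int) -> list:
    """Reflection fraction for each element in a lossless row so that every
    element redirects an equal share of the originally coupled light."""
    # Element i receives the light remaining after i upstream reflections, so
    # reflecting 1/(n - i) of it sends exactly 1/n of the original power out.
    return [1.0 / (n_elements - i) for i in range(n_elements)]

# Six elements, as in the example above: 1/6, 1/5, 1/4, 1/3, 1/2, and 1.
print(reflection_fractions(6))
```

Optical losses or a deliberately non-uniform target distribution would modify these fractions, as noted above.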
In
An optical device as shown in
In
In
Care should be taken that an exit-pupil expander does not unduly obscure the external light that is to pass through the LCVR, e.g., from an object to be imaged, as represented by arrows 510 in
In
Note that, even with a coverage as small as 1% of the LCVR's clear aperture, the exit pupil expander 600 can distribute reference light over enough spatially-distributed regions to estimate the spatially-dependent retardance to a high degree of accuracy. For example, consider a 30 mm×20 mm (600 mm2) clear aperture of an LCVR that is divided into 600 squares (30×20 grid) in which retardance is to be separately measured. Further assume for this example that the exit-pupil expander distributes the reference light to a 30×20 grid of square output couplers, each approximately (0.1 mm)2=0.01 mm2 in area. In such a case, the total area of the output couplers would be 600×0.01 mm2, which is (6 mm2)/(600 mm2)=1% of the total area of the clear aperture. Note that for simplicity we do not include in the above calculation the obscuration of the clear aperture by waveguides and coupling elements other than the output couplers, although their effects on the clear aperture can also be included.
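The arithmetic of this example can be restated in a few lines (the numbers are simply those from the paragraph above, not measured values):

```python
# Clear aperture and output-coupler grid from the example above.
aperture_area_mm2 = 30.0 * 20.0          # 600 mm^2 clear aperture
grid_rows, grid_cols = 30, 20            # one output coupler per measurement square
coupler_area_mm2 = 0.1 * 0.1             # each coupler roughly (0.1 mm)^2

total_coupler_area_mm2 = grid_rows * grid_cols * coupler_area_mm2   # 6 mm^2
coverage = total_coupler_area_mm2 / aperture_area_mm2               # 0.01
print(f"obscured fraction: {coverage:.1%}")                         # -> 1.0%
```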
In combination with or separate from the above, it may be useful to divide the laser light into two predefined polarization states, e.g., linear and circular polarization states, such that the in-phase and quadrature components of the interferograms used to sample each position of the retardance can be measured. These two measurements can be done using time-multiplexing or space-multiplexing. For example, as shown in
The feature 606 may be a waveguide-based polarizing beam splitter followed by a 90° phase shifter of one polarization and a recombination of the two polarizations. The split light beams 608, 610 would couple into adjacent waveguides that travel through the exit-pupil expander 600, and are coupled out of the exit-pupil expander 600 using common or separate coupling elements 605 for each polarization. Each set of adjacent waveguides produces two separate arrays or patterns of dots or lines, the light from one array or pattern being polarization-rotated relative to the other. As seen in the figures, the separate arrays or patterns (or elements thereof) may be located in close proximity so that the in-phase and quadrature interferogram components can be measured as close to each other as possible via detector 615. Note that the arrangement of beam-splitting and polarizing components should be such that the polarization states of split light beams 608, 610 should be as described, e.g., linear and circular polarization, upon exiting the exit-pupil expander.
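As a minimal sketch of why having both components helps (idealized, noise-free signals are assumed, and the variable names are hypothetical), the wrapped interferometric phase at each sampled position can be recovered from the in-phase and quadrature intensities with a two-argument arctangent:

```python
import numpy as np

def phase_from_iq(i_signal: np.ndarray, q_signal: np.ndarray) -> np.ndarray:
    """Recover the wrapped interferometric phase from in-phase and quadrature
    intensity records measured at adjacent output-coupler locations."""
    # Remove the DC offsets so both signals oscillate about zero.
    i_ac = i_signal - i_signal.mean(axis=-1, keepdims=True)
    q_ac = q_signal - q_signal.mean(axis=-1, keepdims=True)
    # With both components available, the phase remains well defined even where
    # one component alone passes through a zero crossing.
    return np.arctan2(q_ac, i_ac)
```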
The feature 606 can be used in any of the embodiments described herein. In other embodiments, two light sources (e.g., two different light sources 304 as shown in
Note that because the measurement of retardance described above relies on detecting the relative phase of two polarizations of light passing from the exit-pupil expander through the LCVR, and because phase can only be directly measured modulo 2π, there is an inherent ambiguity in the measurement of retardance. In some embodiments, the measurement using an exit-pupil expander is combined with an alternate modality that gives an absolute (though potentially less accurate) measure of retardance. One example of such an alternate measurement is shown in
In
In
The apparatus includes an optical section 806 that includes an external optical interface 808 that receives light from outside the apparatus 800. The external optical interface 808 may include windows, lenses, filters, apertures, etc., suitable for passing light from outside the apparatus 800 to internal optical components. In this example, the external optical interface 808 is shown coupled to an external lens 810.
A polarization interferometer 812 is located in the optical section 806 of the apparatus 800. The polarization interferometer 812 is coupled to the controller 802, e.g., via electrical signal lines. The controller 802 applies signals to the polarization interferometer 812 to cause a time-varying optical path delay or retardance in an LCVR 812a that is part of the interferometer 812. This time-varying optical path delay creates an interferogram that varies as a function of the optical path delay. The interferogram is detected by an image sensor 814 (e.g., an array of sensor pixels, focal plane array) which is also coupled to the controller 802.
The polarization interferometer 812 includes LCVR 812a that may be configured similar to previously described embodiments. Between the external optical interface 808 and the LCVR 812a is an exit-pupil expander 812b. The exit-pupil expander 812b receives reference light 817a (e.g., polarized and/or collimated monochromatic light) from a light source 816 and expands the light across a major surface of the LCVR 812a. A photodetector (e.g., sensor 814 or optional separate detectors 815) on the other side of the interferometer 812 detects this expanded light 817b and produces a spatially-dependent photodetector signal, e.g., a signal that represents separate light intensity measurements each obtained at a plurality of locations on the detector 814, 815. The controller 802 extracts a spatially-dependent retardance measurement from the photodetector signal.
Note that the expanded reference light 817b may be detected together with or separate from other light 809 (e.g., from an image received from the lens 810) that passes through the LCVR 812a. For example, the optical interface 808 may include a shutter that blocks incoming light for sufficient time to make measurements of the expanded light. In other embodiments, the expanded reference light 817b could be time multiplexed with the image light 809. In such a case, the light source 816 is pulsed at a high intensity in conjunction with a very short exposure at the detector 814, 815. The high intensity of the expanded reference light 817b would minimize the influence of the image light 809. For example, if the intensity of the expanded reference light 817b were 100× or 10,000% of the intensity of the image light 809, then the very short exposure described above to measure the expanded reference light 817b could have an exposure time of 0.01× or 1% of the exposure time used to measure the image light 809 in order to produce the same time-integrated intensity. Therefore, the image light 809 would cause at most a 1% error in the measurement of the expanded reference light 817b.
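The exposure bookkeeping in this example can be sketched as follows (the 10 ms image exposure is an assumed placeholder; only the ratios come from the paragraph above):

```python
image_exposure_s = 10e-3      # assumed exposure for the image light
intensity_ratio = 100.0       # reference light 100x brighter than the image light

# An exposure shorter by the same factor yields the same time-integrated intensity.
reference_exposure_s = image_exposure_s / intensity_ratio    # 0.1 ms, i.e. 1%

# Image light accumulated during that short exposure, relative to the reference
# signal, bounds the measurement error.
worst_case_error = 1.0 / intensity_ratio                     # 1%
print(reference_exposure_s, worst_case_error)
```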
In other embodiments, the intensity of the expanded reference light 817b could be adjusted to be similar to the image light 809 such that both could be simultaneously captured in the same exposure. If the expanded reference light 817b has sufficient intensity, it should cause a measurable spectral peak in the spectral data recovered via the detector 710, the peak corresponding to the known wavelength of the monochromatic light source 816. The difference between the measured peak wavelength and the known wavelength of the light source 816 could be used, for example, to calibrate the wavelength error of a hyperspectral imager as a function of image position.
The spatially-dependent retardance measurement, as a function of time, can be used by an image processor 820 to calculate a hyperspectral data-cube from recorded interferograms. Generally, the retardance controller 818 instructs the device controller 802 to apply a control signal to the LCVR 812a to achieve a time-varying retardance trajectory, generating spatially-dependent interferograms at the image sensor 814, also as a function of time. The image processor 820 can combine the spatially- and temporally-dependent retardance measurements and interferograms to first calculate the interferogram at each position as a function of retardance, and then to calculate the hyperspectral data-cube by Fourier-transforming all interferograms with respect to retardance. Some or all of this image processing may be performed by an external device, such as computer 824 that is coupled to the apparatus 800 via a data transfer interface 822. In such a case, the computer 824 may also receive spatially-dependent retardance measurements obtained via the exit-pupil expander 812b.
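One way to picture the first of these steps (a hedged sketch; the array shapes, interpolation strategy, and loop structure are assumptions rather than the image processor's actual pipeline) is to resample each pixel's temporal interferogram onto a shared, uniform retardance axis using the retardance measured for that position:

```python
import numpy as np

def resample_to_retardance(frames: np.ndarray, retardance: np.ndarray, n_samples: int = 512):
    """frames: (n_t, rows, cols) interferogram samples vs. time.
    retardance: (n_t, rows, cols) measured retardance for each frame and position.
    Returns a uniform retardance grid and the resampled per-pixel interferograms."""
    r_grid = np.linspace(retardance.min(), retardance.max(), n_samples)
    n_t, rows, cols = frames.shape
    out = np.empty((rows, cols, n_samples))
    for r in range(rows):
        for c in range(cols):
            order = np.argsort(retardance[:, r, c])           # np.interp needs ascending x
            out[r, c] = np.interp(r_grid, retardance[order, r, c], frames[order, r, c])
    return r_grid, out
```

The resampled cube can then be transformed along the retardance axis, as in the earlier sketch, to yield the hyperspectral data-cube.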
If there are errors in the expected-versus-actual retardance of the LCVR 812a, there will be errors in the resulting spectral data or hyperspectral data-cube if the expected rather than actual retardance is used to calculate those data. The retardance controller 818 can use the spatially-dependent retardance measured via the exit-pupil expander 812b and photodetector 814, 815 as an input to feedback or feedforward control models in order to cause the actual retardance to more closely follow the expected or desired retardance as a function of time. The spatially-dependent retardance measurements could be combined (e.g., spatially averaged) so that deviations from a desired target retardance trajectory can be detected and compensation provided, e.g., by adjusting electrical signals applied to the LCVR 812a. These control models could also utilize capacitance measurements of one or more of the LC cells within the LCVR 812a, e.g., as shown in
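A very simple way to picture such a correction (a sketch only; the proportional form, gain value, and sign convention are assumptions, since the actual relationship between drive signal and retardance depends on the LC cell and its drive waveform) is a loop that nudges a drive parameter based on the spatially averaged retardance error:

```python
def update_drive(drive_level: float, measured_retardances, target_retardance: float,
                 gain: float = 0.05) -> float:
    """One proportional-feedback step: adjust a drive parameter so the spatially
    averaged measured retardance tracks the target retardance trajectory.
    The sign and magnitude of `gain` depend on how the cell responds to the drive."""
    avg_measured = sum(measured_retardances) / len(measured_retardances)
    error = target_retardance - avg_measured
    return drive_level + gain * error
```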
The use of the multiple electrode pairs 812aa-812ac can enable more precise retardance control of different regions of the LCVR 812a, which can reduce errors in the spectral data resulting from spatially-dependent retardance variation of the LCVR 812a. If the image light 809 passing through the LCVR 812a is focused by the lens 810 onto the image sensor 814 and the lens 810 has a large aperture-stop, then the rays of image light corresponding to one image sensor position will pass through different portions of the LCVR 812a and may therefore experience different retardances. This would reduce interferogram contrast and degrade the measurement of the spectral data in such a way that knowledge of the spatially-dependent retardance of the LCVR 812a would not be useful to compensate the degradation. In this case, the spatially-dependent retardance control enabled by the multiple electrode pairs 812aa-812ac could be used to increase the spatial homogeneity of the instantaneous retardance of the LCVR 812a in order to maintain high interferogram contrast and prevent this kind of measurement degradation.
In
In
The various embodiments described above may be implemented using circuitry, firmware, and/or software modules that interact to provide particular results. One of skill in the relevant arts can readily implement such described functionality, either at a modular level or as a whole, using knowledge generally known in the art. For example, the flowcharts and control diagrams illustrated herein may be used to create computer-readable instructions/code for execution by a processor. Such instructions may be stored on a non-transitory computer-readable medium and transferred to the processor for execution as is known in the art. The structures and procedures shown above are only representative examples of embodiments that can be used to provide the functions described hereinabove.
Unless otherwise indicated, all numbers expressing feature sizes, amounts, and physical properties used in the specification and claims are to be understood as being modified in all instances by the term “about.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the foregoing specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by those skilled in the art utilizing the teachings disclosed herein. The use of numerical ranges by endpoints includes all numbers within that range (e.g. 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5) and any range within that range.
The foregoing description of the example embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Any or all features of the disclosed embodiments can be applied individually or in any combination; they are not meant to be limiting, but purely illustrative. It is intended that the scope of the invention be limited not with this detailed description, but rather determined by the claims appended hereto.
Number | Name | Date | Kind |
---|---|---|---|
4342516 | Chamran et al. | Aug 1982 | A |
4461543 | Mcmahon | Jul 1984 | A |
4812657 | Minekane | Mar 1989 | A |
4848877 | Miller | Jul 1989 | A |
4905169 | Buican et al. | Feb 1990 | A |
5126869 | Lipchak et al. | Jun 1992 | A |
5247378 | Miller | Sep 1993 | A |
5347382 | Rumbaugh | Sep 1994 | A |
5592314 | Ogasawara et al. | Jan 1997 | A |
5619266 | Tomita et al. | Apr 1997 | A |
5642214 | Ishii | Jun 1997 | A |
5784162 | Cabib et al. | Jul 1998 | A |
5856842 | Tedesco | Jan 1999 | A |
5953083 | Sharp | Sep 1999 | A |
6169594 | Aye et al. | Jan 2001 | B1 |
6330097 | Chen et al. | Dec 2001 | B1 |
6421131 | Miller | Jul 2002 | B1 |
6552836 | Miller | Apr 2003 | B2 |
6576886 | Yao | Jun 2003 | B1 |
6774977 | Walton et al. | Aug 2004 | B1 |
6992809 | Wang et al. | Jan 2006 | B1 |
7067795 | Yan et al. | Jun 2006 | B1 |
7116370 | Huang | Oct 2006 | B1 |
7167230 | Klaus et al. | Jan 2007 | B2 |
7196792 | Drevillon et al. | Mar 2007 | B2 |
7339665 | Imura | Mar 2008 | B2 |
7630022 | Baur et al. | Dec 2009 | B1 |
7999933 | Mcclure | Aug 2011 | B2 |
8422119 | Keaton | Apr 2013 | B1 |
9631973 | Dorschner | Apr 2017 | B2 |
9864148 | Ishikawa | Jan 2018 | B1 |
20020181066 | Miller | Dec 2002 | A1 |
20040036876 | Davis et al. | Feb 2004 | A1 |
20040165101 | Miyanari et al. | Aug 2004 | A1 |
20050036143 | Huang | Feb 2005 | A1 |
20050190329 | Okumura | Sep 2005 | A1 |
20060141466 | Pinet et al. | Jun 2006 | A1 |
20060187974 | Dantus | Aug 2006 | A1 |
20060279732 | Wang | Dec 2006 | A1 |
20070003263 | Nomura | Jan 2007 | A1 |
20070030551 | Oka et al. | Feb 2007 | A1 |
20070070260 | Wang | Mar 2007 | A1 |
20070070354 | Chao et al. | Mar 2007 | A1 |
20080158550 | Arieli et al. | Jul 2008 | A1 |
20080212874 | Steib | Sep 2008 | A1 |
20080266564 | Themelis | Oct 2008 | A1 |
20080278593 | Cho et al. | Nov 2008 | A1 |
20090168137 | Wen et al. | Jul 2009 | A1 |
20090284708 | Abdulhalim | Nov 2009 | A1 |
20100056928 | Zuzak | Mar 2010 | A1 |
20100296039 | Zhao et al. | Nov 2010 | A1 |
20110012014 | Livne et al. | Jan 2011 | A1 |
20110170098 | Normand | Jul 2011 | A1 |
20110205539 | Cattelan et al. | Aug 2011 | A1 |
20110273558 | Subbiah et al. | Nov 2011 | A1 |
20110279744 | Voigt | Nov 2011 | A1 |
20110299089 | Wang et al. | Dec 2011 | A1 |
20120013722 | Wong et al. | Jan 2012 | A1 |
20120013922 | Wong et al. | Jan 2012 | A1 |
20120188467 | Escuti et al. | Jul 2012 | A1 |
20120268745 | Kudenov | Oct 2012 | A1 |
20120300143 | Voigt | Nov 2012 | A1 |
20130010017 | Kobayashi et al. | Jan 2013 | A1 |
20130027516 | Hart | Jan 2013 | A1 |
20130107260 | Nozawa | May 2013 | A1 |
20140125990 | Hinderling et al. | May 2014 | A1 |
20140257113 | Panasyuk et al. | Sep 2014 | A1 |
20140354868 | Desmarais | Dec 2014 | A1 |
20140362331 | Shi | Dec 2014 | A1 |
20150022809 | Marchant et al. | Jan 2015 | A1 |
20150168210 | Dorschner | Jun 2015 | A1 |
20150206912 | Kanamori | Jul 2015 | A1 |
20160123811 | Hegyi et al. | May 2016 | A1 |
20160127660 | Hegyi et al. | May 2016 | A1 |
20160127661 | Hegyi et al. | May 2016 | A1 |
20160231566 | Levola | Aug 2016 | A1 |
20160259128 | Wagener et al. | Sep 2016 | A1 |
20170017104 | Lin et al. | Jan 2017 | A1 |
20170264834 | Hegyi et al. | Sep 2017 | A1 |
20170264835 | Hegyi et al. | Sep 2017 | A1 |
20170363472 | Abdulhalim | Dec 2017 | A1 |
20170366763 | Lin et al. | Dec 2017 | A1 |
20180088381 | Lin et al. | Mar 2018 | A1 |
20180095307 | Herloski | Apr 2018 | A1 |
20180120566 | Macnamara | May 2018 | A1 |
20190121191 | Hegyi | Apr 2019 | A1 |
Entry |
---|
File History for U.S. Appl. No. 14/527,347, 504 pages. |
File History for U.S. Appl. No. 14/527,378, 354 pages. |
File History for U.S. Appl. No. 14/883,404, 256 pages. |
File History for U.S. Appl. No. 15/605,625, 116 pages. |
File History for U.S. Appl. No. 15/605,642, 129 pages. |
Hegyi et al., “Hyperspectral imaging with a liquid crystal polarization interferometer”, Optics Express, vol. 23, No. 22, Oct. 26, 2015, 13 pages. |
Jullien et al., “High-resolution hyperspectral imaging with cascaded liquid crystal cells”, Optica, Vol. 4, No. 4, Apr. 2017, pp. 400-405. |
EP Patent Application No. 18212122.8; European Search Report dated Jun. 26, 2019; 9 pages. |
File History for U.S. Appl. No. 15/827,204, 128 pages. |
File History for U.S. Appl. No. 15/858,609, 110 pages. |
File History for U.S. Appl. No. 14/527,347, 546 pages. |
File History for U.S. Appl. No. 14/527,378, 407 pages. |
File History for U.S. Appl. No. 14/883,404, 298 pages. |
File History for U.S. Appl. No. 15/605,625, 155 pages. |
File History for U.S. Appl. No. 15/605,642, 183 pages. |
File History for U.S. Appl. No. 15/858,354, 96 pages. |
File History for EP App. No. 15190915.7 as retrieved from the European Patent Office electronic filing system on Sep. 25, 2018, 306 pages. |
Office action dated Aug. 8, 2018 from CN App. No. 201510710643.X, 16 pages. |
U.S. Appl. No. 15/827,204, filed Nov. 30, 2017. |
U.S. Appl. No. 15/858,338, filed Dec. 29, 2017. |
U.S. Appl. No. 15/858,354, filed Dec. 29, 2017. |
U.S. Appl. No. 15/858,609, filed Dec. 29, 2017. |
File History for U.S. Appl. No. 14/527,347, 420 pages. |
File History for U.S. Appl. No. 14/527,378, 205 pages. |
File History for U.S. Appl. No. 14/883,404, 294 pages. |
Itoh et al., “Liquid-crystal imaging Fourier-spectrometer array”, Optics Letters, 15:11, 652-654, Jun. 1, 1990. |
Li et al., “GPU accelerated parallel FFT processing for Fourier transform hyperspectral imaging”, Applied Optics, vol. 54, No. 13, pp. D91-D99, May 1, 2015. |
Persons et al., “Automated registration of polarimetric imagery using Fourier transform techniques”, Proceedings of SPIE, vol. 4819, 2002. |
Porter et al., “Correction of Phase Errors in Fourier Spectroscopy”, International Journal of Infrared and Millimeter Waves, vol. 4, No. 2, 273-298, 1983. |
Smith et al., “Increased acceptance bandwidths in optical frequency conversion by use of multiple walk-off-compensating nonlinear crystals”, J. Opt. Soc. Am. B, Vol. 15, No. 1, Jan. 1998. |
Number | Date | Country | |
---|---|---|---|
20190204594 A1 | Jul 2019 | US |