In many manufacturing processes there is a need for automated non-contact 3-D surface profile measuring instruments with improved accuracy and speed. Typical uses of these instruments are for quality control inspection of, for example, stamped metal parts, welds, automobile door closures, bored hole position and dimensional accuracy, and automobile wheel alignment. Such instruments may be stationary and take measurements of stationary or moving objects, or may be mounted on robotic arms to rapidly scan stationary or moving objects from different angles. Robotic arms with many degrees of freedom are now common in automated factories and present optimum mounting platforms for accurate and fast optical profiling instruments. The 3-D profiling instrument itself must represent the best compromise between measurement accuracy, volume coverage rate (speed), reliability, size, weight and cost.
The present state of the art in 3-D surface profiling for manufacturing purposes is exemplified by two general approaches: 1) conventional optical stereo photogrammetry with two or more digital video cameras separated by known baselines (triangulation is used for the depth coordinate) and 2) a combination of one or more digital cameras and a “structured light” projector. The term structured light refers to optical projection of a sequence of one or more optical images having a known spatial structure for the purpose of determining the angular position of an object, or part of an object, in the beam. For surface profiling, what is measured is the intensity of the reflected projector light on each of a large number of camera pixels for a sequence of projected patterns. When the intensity sequence for each pixel is decoded, it provides a two-dimensional measurement of the angular position of that pixel's image on the object with respect to the projector's optical axis. The two-dimensional angular position of the same pixel with respect to the camera's optical axis is determined simply by its location coordinates in the focal plane array.
A structured light projector with absolute position encoding can either replace one of the cameras in stereo photogrammetry or improve operation with the same number of cameras. An example of this is described in U.S. Pat. Nos. 8,233,156 B2 and 8,243,289 B2 for wheel alignment. Even if the structured light cannot measure absolute angles with respect to the projector, the patterns reflected from the object can provide a means for faster and more reliable image registration between two or more cameras. However, overall speed and accuracy can benefit if the structured light can measure absolute angles. It is therefore an object of this invention to describe a faster and more accurate means for measuring absolute two-dimensional angular positions of small regions of a surface, both with respect to the optical axes of one or more structured light projectors and also with respect to the optical axes of one or more electronic cameras, where the small regions of the surface are defined by camera pixel images.
U.S. Pat. No. 3,662,180, 1972, may have been the first to describe means for projecting a sequence of binary Gray code intensity patterns for determining the absolute angular position of a remote receiver with respect to a projector's optical axis.
Prior art U.S. Pat. No. 3,799,675, 1974 describes a method of projecting Gray code pattern slides interspersed with clear reference slides such that the amplitude of the Gray code pulses at the receiver can be normalized by dividing by the amplitude of the most recent reference pulse. This was found to be a valuable technique for minimizing measurement errors caused by the amplitude modulation effect of atmospheric turbulence (scintillation). In terms of the present need for accurate 3-D surface profiling, atmospheric turbulence is not normally an issue. But variations of surface reflectivity and slope on the object can cause similar temporal variations in the energy received by the camera pixels when there is even a small amount of relative rotational or translational motion between a projector/camera assembly and the object being profiled. Some means for intensity normalization are therefore required even when atmospheric turbulence effects are negligible. Interspersing clear reference slides between the Gray code slides as in U.S. Pat. No. 3,799,675 (1974) as well as in U.S. Pat. No. 4,100,404 (1978) and U.S. Pat. No. 5,410,399 (1995) is effective, but it either requires a larger code disk or limits the size of the individual patterns to be projected. When encoding a small angular field this is not a problem, but for 3-D surface profiling it would be a serious limitation. It is therefore an object of this invention to provide a method of minimizing errors caused by variations in surface reflectivity and slope by intensity normalization without requiring clear reference slides.
U.S. Pat. No. 4,100,404 (1978) describes a Gray code structured light projector that encodes two orthogonal dimensions by projecting two sequences of one-dimensional bar pattern slides on a spinning disk. One sequence has the center bar edges radially oriented and the other with the center bar edges tangentially oriented. A very short (60 nanosecond) pulse-width laser diode source was required to freeze the motion of the radially oriented edges. Interspersed clear reference slides were used as discussed. There were 60 discrete slide positions on the disk, of which 16 carried Gray code slides. One clear reference slide was added between each group of four Gray code slides.
The laser diode source described in U.S. Pat. No. 4,100,404 is actually a stack of five small edge-emitting laser diodes with a total of 100 W of peak output power in a 60 ns pulse, or six microjoules (μJ) of energy. Unfortunately that pulse energy is nearly three orders of magnitude too low for 3-D surface profiling, because a much wider angle beam must be projected and then reflected by the object and scattered into an even wider angle before a camera lens can collect a small portion of its energy and focus it onto a single pixel. To meet surface profiling requirements, a 60 ns pulse-width laser source would need a peak power of nearly 10,000 watts, making it too large and expensive for commercial application. It is therefore an object of this invention to define a structured light projector that can encode two orthogonal dimensions without requiring radial-edged slides, and as a result is able to operate with commercially available longer pulse-width and typically 1,000 watt peak power laser diode array sources that are very efficient in electrical to optical power conversion.
U.S. Pat. No. 4,175,862 (1979) appears to have been the first description of a structured light projector used in conjunction with a system of passive camera sensors for the purpose of measuring 3-D surface profiles. This patent describes several different space coding methods, including one using natural binary code, but does not describe a binary Gray code. Natural binary code is not desirable when used in structured light projectors, because multiple pattern edge transitions can occur at some angular positions, whereas the Gray code allows no more than one edge transition to occur at any angle [Reference 3]. It is an object of this invention to continue to exploit the Gray code and any variations that retain its benefits.
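The Gray code's single-transition property described above can be demonstrated with a short sketch (Python is used here for illustration only; it is not part of the invention):

```python
def gray_encode(n: int) -> int:
    """Convert a natural binary integer to its reflected Gray code."""
    return n ^ (n >> 1)

def bits_changed(a: int, b: int) -> int:
    """Count the bits that differ between two codes."""
    return bin(a ^ b).count("1")

# Natural binary: adjacent codes can flip many bits at once (e.g. 7 -> 8
# flips four bits), so several projected patterns would have edges at the
# same angular position. Gray code flips exactly one bit between
# neighbors, so at most one pattern edge falls at any angle.
max_natural = max(bits_changed(n, n + 1) for n in range(15))
max_gray = max(bits_changed(gray_encode(n), gray_encode(n + 1)) for n in range(15))
print(max_natural, max_gray)  # -> 4 1
```

This is why at most one pattern in a Gray code sequence is in transition at any angular position, whereas natural binary can place several edges at the same angle.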
U.S. Pat. No. 4,871,256 (1989) is useful prior art for the present invention in that it describes the basic means by which a projector with a spinning disk carrying a sequence of one-dimensional patterns with no radially oriented edges can be used to encode two dimensions. This is accomplished by use of two or more strobe light sources and two or more projection lenses, with the optical axes of each source and projection lens assembly spaced by 90 degrees with respect to the center of the disk. This is directly applicable to the present invention. However, it does not describe the detailed means for creating a practical projector usable for high speed 3-D surface profiling. It is therefore another object of this invention to provide a detailed description of a practical projector.
U.S. Pat. No. 5,410,399 (1995) describes a method for improving the accuracy of Gray code encoding by interpolating a remote receiver's position within each coarse quantization element defined by a sequence of projected bar patterns. The general concept is applicable to 3-D surface profiling, but the specific method claimed is based on assumptions that the projector's illumination source emits very coherent and long wave infrared laser radiation, and that the projection optics provide perfectly diffraction limited optical resolution over the entire projected field. These are very restrictive and undesirable constraints with respect to 3-D surface profiling. Coherent laser illumination is not desirable for surface profiling, not only because it results in unwanted speckle intensity variations in the light reflected from the surface, but primarily because when used to project the image of a sharp edge it creates an image intensity at the geometrical location of the edge that is only 25% of maximum intensity, instead of 50% of maximum intensity as with incoherent illumination. As shown in U.S. Pat. No. 5,410,399, it is still possible to accurately locate edge positions by projecting complementary pairs of patterns. This is done by defining the edge location as the position where the intensity measured for both patterns in a pair is equal. However, the asymmetrical shape of the intensity curves created by coherent illumination increases the difficulty of achieving accurate interpolation inside of a digital resolution element. Coherent illumination also requires the projection of clear reference slides for intensity normalization in any interpolation algorithm.
It is an object of the present invention to minimize the coherence of laser illumination used in the projector. It is also an object of this invention to provide an intensity-normalizing function without the use of clear reference slides.
There have been several alternative coding schemes reported in the literature and in issued U.S. patents regarding 3-D surface profiling. Relatively long period phase shifted intensity sinusoids and relatively long period phase shifted trapezoidal or triangular waves have been discussed. With the recent availability of electronically controlled digital scene projectors, researchers have experimented with appending sinusoidal “phase shift” intensity patterns to a Gray code sequence and using phase shifted long period triangular wave intensity patterns instead of Gray code. For example, see Reference [1] on sinusoidal phase shift coding and Reference [2] for triangular wave phase shifting.
In general, phase-shifted sinusoidal intensity waveforms or phase-shifted long period triangular waveforms can provide improved immunity to defocus in a 3-D profiling application, as well as reducing the required number of projected patterns. However, they are inherently susceptible to reduced accuracy when sensed by the newer high frame rate but lower sensitivity CMOS cameras, simply as a result of low intensity gradients in the projected images and higher readout noise in CMOS cameras. That is, when the projected intensity is made to vary gradually from minimum to maximum over a longer distance on the object, the intensity slope is lower and, given the same amount of camera pixel readout noise, there is a greater position uncertainty in the recorded data. Despite improved immunity to defocus, these coding schemes make it difficult to achieve best accuracy unless the signal to noise ratio is very high.
Future high speed surface profiling instruments will very likely need CMOS cameras in order to meet frame rate requirements. Even though CMOS cameras can use larger pixel dimensions to somewhat mitigate the sensitivity problem, a surface profiling system using them for high frame rate may be forced to operate its projection lens at a lower f-number (larger relative aperture), which may eliminate any advantage in defocus immunity that might be thought to occur from the sinusoidal or long period triangular intensity patterns. It is therefore an object of the present invention to take advantage of the much higher frame rates available with CMOS cameras and at the same time achieve maximum accuracy in the presence of higher readout noise by making use of a larger number of projected patterns with higher intensity gradients.
There are also now commercial 3-D profiling products that make use of laser-illuminated line patterns, such as in U.S. Pat. Nos. 6,191,850 and 8,233,156. Although these types of patterns require only modest laser power, they inherently provide only a sparse sampling of an object surface, in other words, they do not encode the space between the lines. It is an object of this invention to provide a method for uniform and dense surface profile measurement data over all of the interior of a defined-coverage solid angle.
Many current structured light projectors for 3-D surface profiling make use of Digital Mirror Device (DMD) technology, such as in the Texas Instruments' DLP®. This technology avoids the sparse spatial sampling problem of projected line patterns, but because the DMD mirror-switching rate for an XGA format (1024×768 mirrors) is typically limited to below 5,000 Hz, a structured light projector using this technology for future 3-D profiling would seriously limit 3-D measurement rate, even with laser illumination. That means that future profiling systems using DMD technology would not be able to take advantage of the very high frame rates achievable with CMOS digital cameras. For example, the commercially available Vision Research Phantom v1610 widescreen CMOS camera can operate at a frame rate of 19,800 Hz for a focal plane size of 1,024×800 pixels, and can provide even higher rates at lower resolutions. This is about four times the frame rate available with a micro-mirror DMD array at the same resolution, illustrating the need for an improved and different projection method that can take advantage of high CMOS camera rates. It is therefore an object of this invention to provide means to create a pattern projection rate of at least 10,000 Hz, double the available DMD rate of 5,000 Hz for the same resolution and compatible with commercially available CMOS cameras.
A compound structured light projector system for the purpose of 3-D surface profiling consists of a compound projector assembly, a camera assembly, and a digital processor. The compound projector consists of four or more sub-projectors having optical axes parallel to each other and also to the axis of a circular spinning code disk, on which there are a number of slide transparencies in the form of discrete periodic bar patterns organized according to an extended complementary Gray code sequence. The illumination source in each sub-projector is a stacked array of low coherence, high power, pulsed edge-emitting laser diodes, each of which has its output radiation collimated by one of several miniature cylindrical lenses arranged in an array. The collimated light produced by each laser diode and its associated cylindrical lens is then integrated in a reflective light pipe and focused on the slide disk by a biconvex condensing lens. The result is uniform, incoherent, and intense illumination of the slides.
The bars in each slide on the spinning code disk are all tangentially oriented; that is, all have their long edges perpendicular to a radius of the code disk at their centers. As a result, translational motion of the projected patterns on the object being measured is negligible for a laser pulse duration of a few microseconds, making it possible to use commercially available “quasi-CW” mode high power and high efficiency laser diode arrays.
Each periodic bar pattern on the code disk has a unique combination of spatial periods and phase shifts, defined by the new extended complementary Gray code sequence of this invention. The extended Gray code sequence makes use of the fact that all of the projected bar images will be detected and measured by digital cameras, each containing a focal plane array of square pixels spaced at a consistent pixel pitch. Each square camera pixel spatially integrates the energy reflected from a small part of the object being measured, that small part being defined by a back-projected image of the pixel itself. The result is that the measured pixel signal changes as a linear function of its position relative to a projected bar pattern edge over a fixed distance equal to the width of the projected pixel image, a fact that is used to advantage in this invention to provide for optimum interpolation in the receiver decoding process. The extended complementary Gray code is defined such that the bar patterns have the correct spatial periods and phase shifts to allow for optimum decoding and interpolation of received pixel signals in the system's digital processor.
Correct periods and phases of the bar code patterns are assured when the following rules are observed: 1) Starting the sequence at the least significant bit end, there must be six slides consisting of three phase-shifted complementary pairs, each of which when projected has the same spatial period of eight times a magnified camera pixel size on the object. 2) The fourth and all subsequent pattern pairs in the extended Gray code sequence have no phase shifts, and have periods that double with respect to the preceding pattern until the last period is equal to or greater than the required field of view, as in standard complementary Gray code.
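The two rules above can be sketched as a small generator of (period, phase-shift) values in stripel units; the phase shifts of −1, 0, and +1 stripels for the three shortest-period pairs follow the detailed sequence definition given later in this description (Python for illustration only; the function name is ours):

```python
def extended_gray_sequence(n_pairs: int, stripel: float = 1.0):
    """Return (period, phase_shift) for each complementary pair, in
    stripel units: three 8-stripel pairs with phase shifts -1, 0, +1,
    followed by unshifted pairs whose periods double."""
    seq = [(8 * stripel, -1 * stripel),
           (8 * stripel, 0.0),
           (8 * stripel, +1 * stripel)]
    period = 8 * stripel
    for _ in range(n_pairs - 3):
        period *= 2                      # rule 2: periods double
        seq.append((period, 0.0))        # rule 2: no phase shift
    return seq

pairs = extended_gray_sequence(11)       # 11 pairs in the preferred embodiment
print(pairs[:4])     # first three pairs share the minimum 8-stripel period
print(pairs[-1][0])  # -> 2048.0, covering the 1,024-stripel field twice over
```

With 11 pairs the final period is 2,048 stripels, satisfying the requirement that the last period be equal to or greater than the 1,024-stripel field of view.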
The sub-projectors are equally spaced at 90 degree positions with respect to the disk axis so that during a disk rotation any single pattern on the spinning code disk will be imaged on the object being profiled at least twice in each of two orthogonal dimensions. The camera assembly consists of at least two, and desirably four, high frame rate CMOS digital cameras spaced at 90 degree angles with respect to the center of the code disk. Each camera is separated laterally by a calibrated baseline from the axis of the code disk, and views essentially the same angular space as the sub-projectors so as to provide depth measurement to points on the surface by triangulation. A digital processor receives, stores, decodes and processes measured data from the cameras and provides synchronization between projector and camera assemblies.
In order to allow each of the four or more cameras viewing reflected light from the object being profiled to maintain a constant frame rate and minimize readout noise, the camera exposure times are made only slightly longer than a projected pulse duration, which requires the pulses from each laser source to be precisely multiplexed to occur at equal intervals. In addition, each laser pulse is made to occur at the time when a slide is exactly centered on the optical axis of the sub-projector emitting the pulse. These conditions are created by ensuring that the disk rotation rate is constant and the total number of slide positions on the disk is given by the formula NPOSITIONS=4m+1, m being an integer, for the case of four equally spaced sub-projectors and 3-D surface profiling.
Referring now to
In electronics subsystem 50, power supply 52 converts available input power to various direct current (DC) voltages. Laser diode pulse generator 54 creates pulses at a rate of 12,000 Hz to serve four laser diode stacks in sequence, each being pulsed at 3,000 Hz with approximately 130 amperes of current at 16 volts for a duration of 4 μs. Microcontroller 56 is a small electronic processor that controls the operation of the compound structured light projector 60, including motor speed control and communication with the overall system's 3-D processor and information storage computer 100. Micro electro-mechanical system (MEMS) inertial measurement unit (IMU) 58 provides measurement of the overall system's rotational and translational motion with respect to inertial coordinates. Thermal monitoring function 59 provides temperature measurements to allow correction for absolute and differential thermal deformations of the mounting structure.
The laser end of a general sub-projector is shown in more detail in cross-section c-c of
Projection lenses 3802, 3803, and 3804 for the identical sub-projectors are shown in
Returning to the block diagram in
Again in
The preferred laser diode stack for each of the four mini-projectors is a commercially available DILAS Conduction-Cooled QCW (quasi-continuous wave) Vertical Diode Laser Stack operating at 808 nm wavelength with nominally 8 laser bars of 11 mm length spaced apart by 1.7 mm and collimated by miniature cylindrical lens arrays 3300 shown in the expanded cross-section of
The overall pattern projection rate for the four sub-projectors is 12,000 Hz, slightly greater than twice the maximum rate of current DLP® micro-mirror arrays, achieving one of the objects of this invention. Preferred peak output power from each diode stack is 1,000 W. The average output power per stack is equal to the peak power times the duty cycle factor, or 1,000 W×0.012=12 W. This type of stack has a high power conversion efficiency, approximately 50%, so the power input for each of the four laser diode stacks is 24 W and the heat dissipation is 12 W. The four stacks in the overall compound projector will therefore require 96 W of input pulsed electrical power and dissipate 48 W of that as heat at the stacks themselves. There will be an additional roughly 12 W heat load because roughly half of the average power incident on the patterns on the code disk will be reflected by the chrome in the individual slide patterns, with perhaps half of that returned to the diode stacks. This moderate heat load is removed by small fan 23 as seen in
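The duty-cycle arithmetic in this paragraph can be verified with a short calculation (a checking sketch only; the variable names are ours, not terms of the invention):

```python
# Worked check of the laser power budget stated above (values from the text).
pulse_rate_per_stack = 3_000          # Hz, each stack pulsed at 3,000 Hz
pulse_width = 4e-6                    # s, 4 microsecond pulse duration
peak_power = 1_000.0                  # W, peak optical output per stack
duty_cycle = pulse_rate_per_stack * pulse_width   # 0.012 duty cycle factor
avg_optical = peak_power * duty_cycle             # 12 W average per stack
efficiency = 0.5                                  # electrical-to-optical
input_power = avg_optical / efficiency            # 24 W input per stack
heat_at_stack = input_power - avg_optical         # 12 W dissipated per stack
n_stacks = 4
print(round(n_stacks * input_power, 3), round(n_stacks * heat_at_stack, 3))
# -> 96.0 48.0 (W) for the four-stack compound projector
```

The 96 W pulsed electrical input and 48 W of heat at the stacks match the figures given in the text.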
The two cameras and two projectors shown in the plane c-c will provide four two-dimensional angular measurements to use in the depth estimation for any arbitrary point Q on the surface, within the limitations of the beam extents of the projectors and the field of view limits of the cameras. That is, two-dimensional independent absolute angle measurements are made with respect to the optical axes of each. When the two cameras and the two projectors in the orthogonal plane are also considered, it can be seen that there is a potential for this invention to significantly improve 3-D measurement accuracy through averaging. Note that accuracy improvement through averaging independent measurements is additional to that resulting from improved position interpolation. Interpolation will be described in later paragraphs and by
Now referring to
Each even numbered slide shown in
Note that there is a small opaque timing mark 461 associated with every slide position on the disk and two additional master index marks 462, one placed midway between slides 4621 and 4622 and the other midway between slides 4622 and 4623. These timing marks are sensed by optical timing sensor 47 shown in
With respect to intensity normalization, it can be seen that there are no clear reference patterns between the complementary pairs of Gray code patterns on the code disk shown in
Clear reference slides are not needed for intensity normalization in this invention because the signal measured by a given camera pixel as a function of its distance from the ideal sharp edge in the projected patterns has a profile with odd vertical symmetry about a 50% intensity level. Another way of stating this is that the measured intensity is always 50% of maximum at the location of a geometrical edge, unlike the case for highly coherent illumination as in prior art of U.S. Pat. No. 5,410,399. In that prior art invention the intensity at the edge location is only 25% of maximum. The location of edges could still be found in that case by the condition of equal measured signals for each slide of a complementary pair, but the sum of intensities was definitely not constant in the region of the edge, leading to a requirement in that prior art for clear reference slides in order to perform accurate interpolation. Using incoherent light as in the present invention provides a high degree of certainty that the sum of received signals from the two patterns in each complementary pair will be a constant, and the same as would have been measured by projection of a clear slide, leading to improved interpolation and more efficient use of space on the code disk.
Very low coherence in the projected light of this invention is assured by the inherent low coherence of the laser diode emission itself plus further integration and scrambling by rectangular light pipes 3400. The result is the desired intensity transition curve with odd symmetry about each projected edge in a complementary pair and no need for clear reference slides. Those who are versed in the art of optical lens design may note that asymmetrical aberrations in the projection lens at large field angles may create some asymmetry in the intensity transition curves. However, accurate location of the edge positions can still be performed. This allows accurate determination of a receiver's location to within one part in 1,024 across the field of view, sufficiently accurate to calculate predictive corrections in the interpolation algorithm of the 3-D processor.
The intensity normalizing process for a sequence of received pulses associated with the projection of a sequence of complementary Gray code pairs on the code disk is defined by the following steps:
Detecting and storing as a first electrical signal the pixel output from the first coded pattern in a first complementary pair;
Detecting and storing as a second electrical signal the output from the same pixel and the second coded pattern in the first complementary pair;
Deriving a normalizing factor R1 for the first complementary pair that is the sum of the first electrical signal and the second electrical signal;
Repeating the above process for second, third, and additional projected complementary pairs of patterns to calculate second, third, and additional pair-normalizing factors Rn up to an N'th value;
Calculating an N-pair average intensity-normalizing factor RN by averaging the number N of said pair-normalizing factors, the averaging formula being RN=(R1+R2+ . . . +RN)/N;
Using the normalizing factor RN to calculate normalized amplitudes of received pulses from each individual pattern by dividing each individual measured pulse amplitude by RN.
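The five normalization steps above can be sketched as follows (Python for illustration only; function and variable names are ours, not terms of the invention):

```python
def normalize_pulses(pulse_pairs):
    """Intensity-normalize one camera pixel's pulse sequence.

    pulse_pairs: list of (a, b) amplitudes measured for the two slides of
    each complementary pair. Each pair sum a + b is a pair-normalizing
    factor R_n; their average is the N-pair factor R_N, by which every
    individual amplitude is divided.
    """
    pair_factors = [a + b for a, b in pulse_pairs]   # R_1 .. R_N
    r_n = sum(pair_factors) / len(pair_factors)      # N-pair average R_N
    return [(a / r_n, b / r_n) for a, b in pulse_pairs]

# Example: a pixel near an edge splits each pair's energy differently,
# but with incoherent illumination every pair sums to the same total.
pairs = [(0.9, 0.1), (0.25, 0.75), (0.5, 0.5)]
normalized = normalize_pulses(pairs)
print(normalized[0])  # -> (0.9, 0.1), since R_N is 1.0 for this example
```

Because every complementary pair sums to the same constant under incoherent illumination, averaging over N pairs reduces the noise in the normalizing factor itself.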
It is convenient for further mathematical derivations and descriptions to introduce the name “stripel” in place of the terms “projector resolution element” or “quantization increment” that are used in prior art patents. It has a close analogy to a camera's focal plane “pixel”, although stripels are one-dimensional long thin strips instead of squares.
Stripel width S is defined at the projector's focal plane in order to maintain the best analogy to camera pixels. The magnified width of a stripel as projected onto an object is defined as SOBJ.
Unlike physical camera pixels, stripels are virtual instead of physical entities. Decoding an entire sequence of light pulses received at a camera pixel is generally required to define the 1-D stripel which contains the pixel's centroid. Each of the two edges of a stripel is defined by a single bar edge somewhere in the sequence. Which patterns they are in and which bars of the various patterns define their edges must be determined during the encoding and decoding processes.
The extended Gray code of this invention as exemplified by the pattern sequence illustrated on code disk 46 in
The two essential requirements of the extended Gray code sequence are that stripel width S on the code disk is made proportional to the system's camera pixel pitch pp, and that the minimum bar pattern period on the code disk is made to be eight times S. The proportionality constant is equal to the ratio of camera magnification MCAM to projector magnification MPROJ, where each of these magnifications is defined by equating the field of coverage on an object to be the same for both the system's projectors and cameras, as illustrated in
Again referring to
Provided that both projector and camera field of view are at least as large as WOBJECT, the number of magnified stripels NSTRIPELS across width WOBJECT in a projected image is the same as the number of magnified pixels NPIXELS across the same width. For a CMOS camera the pixel size and the imaging lens diameter must currently be made large to achieve best sensitivity. The sub-projectors do not have a sensitivity requirement, and furthermore have high brightness laser sources that can use small projection apertures and focal lengths. As a result, the projector's slide dimensions and focal length may be made much smaller and yet provide the same total number of projector stripels as the camera's number of pixels in one dimension. This is illustrated in
S=pp(MCAM/MPROJ)

where pp is the camera pixel pitch.
The entire extended Gray code sequence of the invention is defined in terms of integer multiples of stripel width S, which is defined in the above equation. There is an additional and important requirement that the minimum spatial period in the sequence of bar patterns must be equal to 8S. In addition, the extended Gray code sequence requires that the slides on the disk be arranged in complementary pairs, where the second slide of a pair will have an optical transmission waveform that is 180 degrees out of phase with that of the first slide; that is, the second slide has a clear strip where the first slide has an opaque bar, and the widths of clear strips and opaque bars are equal. When maximum transmission is 1.0 and the minimum transmission is 0.0, the transmission of the second slide of a pair is simply one minus the transmission of the first slide at the same distance from the edge.
Further to define the extended Gray code sequence of this invention, starting at the least significant (shortest period) end of the sequence there are three pairs of phase-shifted bar-pattern slides, with the phase shift of the first pair in multiples of one stripel being −1, the phase shift of the second pair being zero, and the phase shift of the third pair being +1. The phase shift for each of the remaining slide pattern pairs is zero. For a total number NP of pairs in this sequence, the total number of stripels NSTRIPEL is the even integer 2^(NP−1). For the preferred embodiment of the invention, NP is 11 pairs and NSTRIPEL is 1,024 stripels. The stripel width S in the preferred embodiment of the invention is ten micrometers (0.01 mm), such that each slide has an encoded width of 10.24 mm. The stripel lengths and physical bar lengths are each 10.24 mm so that the slides are all squares.
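The preferred-embodiment numbers above can be checked with a short calculation (illustrative only; the variable names are ours):

```python
# Consistency check of the preferred-embodiment dimensions.
n_pairs = 11                          # NP: complementary pattern pairs
n_stripels = 2 ** (n_pairs - 1)       # NSTRIPEL = 2^(NP-1) = 1,024
stripel_width_mm = 0.01               # S = ten micrometers
slide_width_mm = n_stripels * stripel_width_mm
print(n_stripels, slide_width_mm)     # -> 1024 10.24
```

The 10.24 mm encoded slide width follows directly from 1,024 stripels of 10 micrometers each.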
It is convenient to define the entire sequence of the extended Gray code patterns on the code disk in terms of optical transmission square waves as illustrated in
Since this definition results in transmission values of only one or zero, the transmission Tp2(x) of the second slide pattern in a complementary pair is given by the Excel® worksheet formula

Tp2(x)=IF(Tp1(x)=1,0,1).
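The complementary-pair relationship can be sketched as follows; the exact square-wave phase convention of the worksheet definition is an assumption here (Python for illustration only):

```python
def t_p1(x: float, period: float, phase: float = 0.0) -> int:
    """Transmission of the first slide of a pair: a unit square wave
    (1 = clear, 0 = opaque) with the given period and phase shift, both
    in stripel units. The phase convention is assumed, not quoted."""
    return 1 if ((x - phase) % period) < (period / 2) else 0

def t_p2(x: float, period: float, phase: float = 0.0) -> int:
    """Second slide of a complementary pair: one where the first is
    zero and zero where the first is one."""
    return 0 if t_p1(x, period, phase) == 1 else 1

# The two slides of a pair are everywhere complementary, so their
# transmissions always sum to one:
assert all(t_p1(x, 8) + t_p2(x, 8) == 1 for x in range(64))
```

This complementarity is what guarantees, under incoherent illumination, that the summed pixel signals of a pair equal the signal a clear slide would have produced.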
Waveforms for
For a real square camera pixel of width pp, the sharp edged square wave 9601 becomes slope-edged trapezoidal wave 10601. This can be seen in
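The pixel-integration effect described above can be sketched numerically, assuming an ideal box-shaped pixel aperture (an illustrative model in Python, not the patented optics):

```python
import numpy as np

# A square pixel of width pp integrates the projected square wave; the
# sharp edge becomes a linear ramp of width pp (a trapezoidal wave), and
# the signal passes through 50% of maximum where the pixel straddles the
# geometrical edge. Units below are arbitrary samples.
samples_per_stripel = 100
period = 8 * samples_per_stripel                     # minimum 8S bar period
x = np.arange(4 * period)
square = ((x % period) < period // 2).astype(float)  # ideal sharp-edged wave
pp = samples_per_stripel                             # pixel width = one stripel
pixel = np.ones(pp) / pp                             # box aperture
trapezoid = np.convolve(square, pixel, mode="same")  # slope-edged response

complement = 1.0 - square                            # second slide of the pair
trap_complement = np.convolve(complement, pixel, mode="same")
# Away from the array ends, the complementary pair sums to a constant,
# which is why no clear reference slide is needed for normalization.
interior = slice(pp, -pp)
print(np.allclose(trapezoid[interior] + trap_complement[interior], 1.0))  # -> True
```

The linear ramp over exactly one projected pixel width is the property exploited for optimum interpolation in the decoding process.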
It should be noted that good design of any electronic camera will ensure that diffraction and lens aberration blur diameter are considerably less than a pixel width. Although it might seem that ignoring lens aberrations and diffraction blur when deriving the 8S bar pattern wavelength dimension could seriously affect decoding accuracy, it is important to note that a small amount of blur will not affect the accuracy of determining in which stripel a camera pixel's center is located, although it could affect interpolation accuracy inside the stripel. Future interpolation algorithms can minimize errors using predictive models of blur size and symmetry as a function of field angle.
Referring now to the timing diagram of
There are only certain numbers of slides on the code disk that will create the desired condition of constant camera frame rate for multiple lasers spaced 90 degrees apart with respect to disk center. Referring to
α(m+¼)=90° (four lasers)
Using the relation α=360°/NSLIDES, the number of slides allowable on the disk for the requirement of having four lasers equally spaced by 90° can now be written as a function of m, assuming that the pulses from four lasers located at 0°, 90°, 180°, and 270° are to be multiplexed:
NSLIDES=4m+1 (four lasers)
Letting m take on integer values 1, 2, 3, 4, 5, 6, 7 . . . etc., it can be seen that the only allowable numbers of slides on the disk for the preferred embodiment with four lasers (four mini-projectors) are 5, 9, 13, 17, 21, 25, 29 . . . , etc. For the preferred embodiment of this invention, m is 6 and NSLIDES is 25.
For an alternate embodiment in which there are only two mini-projectors located at 0° and 90° as for a remote receiver application, the timing of the laser pulses at the 90° position diagram will only show pulses at integral multiples of T/2 instead of T/4. The equations in previous paragraphs will have the term m+¼ replaced by m+½, such that the expression for allowable numbers of slides becomes
NSLIDES=4m+2 (two lasers at 90°)
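The allowable slide counts for both laser arrangements can be enumerated with a short sketch (Python for illustration only; the helper name is ours):

```python
def allowable_slide_counts(offset: int, m_max: int = 7):
    """Enumerate N_SLIDES = 4m + offset for integer m: offset 1 for four
    lasers spaced 90 degrees apart, offset 2 for two lasers at 90 degrees."""
    return [4 * m + offset for m in range(1, m_max + 1)]

print(allowable_slide_counts(1))  # -> [5, 9, 13, 17, 21, 25, 29]  four lasers
print(allowable_slide_counts(2))  # -> [6, 10, 14, 18, 22, 26, 30]  two lasers

# Check the angular condition alpha*(m + 1/4) = 90 degrees for the
# preferred embodiment, where m = 6 and the disk carries 25 slides:
alpha = 360 / 25
assert abs(alpha * (6 + 0.25) - 90.0) < 1e-9
```

The preferred embodiment's choice of 25 slides (m = 6) appears in the four-laser list and satisfies the 90° spacing condition exactly.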
This application claims the benefit of Provisional Patent Application Ser. No. 61/641,083 filed May 1, 2012, the entire contents of which are hereby incorporated by reference.
This invention was made without United States Government assistance.