This disclosure relates to optic systems and, more particularly, to compact near infrared illumination systems and methods that enable control of the illumination intensity in separately defined zones across the field of view of an electronic imaging system.
Near infrared (NIR) illumination is increasingly popular for enhancing the performance and utility of imaging sensors in automotive, mobile and consumer applications. The image sensors are used for object detection, driver monitoring, gesture recognition and other similar user interface functions, with significant use of computational image processing. In addition to adding low light and nighttime capabilities, the illumination can be used to highlight regions of interest and enable filtering out the ambient lighted background for the benefit of image processing algorithms. A major complication for these image processing applications is the modest dynamic range of current electronic image sensors. Subjects or areas of interest captured by the electronic image sensors are often too bright, saturating the detector so that detail is not visible. In other cases, subjects or areas of interest may be too dark, also limiting useful image detail, unless the gain or shutter duration for the imaging system is adjusted so that the brightly illuminated or highly reflective regions are again saturated.
Most current NIR illumination systems are based on light-emitting diodes or LEDs. LEDs have the advantage of low cost and freedom from speckle or coherence noise, which can seriously complicate image processing. The disadvantages of LEDs in this role include a very broad emission profile that is difficult to concentrate onto a smaller field, and limited optical conversion efficiency at higher powers. See Overton, G., “High-power VCSELs rule IR illumination,” Laser Focus World, Aug. 2013, pp. 29-30. LEDs also have a very broad spectral output, which complicates filtering out solar background and means that some of the light can be visible to the subjects being illuminated, which can be distracting. Conventional laser diode sources can be used for illumination with narrow spectral emission, well-defined beams and higher efficiency. However, a single laser source with sufficient power for illuminating the image field will have significant “speckle” or coherence noise from mutual interference of the beam with its own scattered light. In addition, the point-source characteristics of single laser sources result in low eye-safe exposure levels.
Devices, systems and methods are described herein for separately controlling illumination levels of multiple illumination sources directed at different regions or zones of an area or volume, such as a field of view of a camera, to provide adjustable illumination or light to the separate zones. In some aspects, the illumination sources, and the amount of light they produce, may be driven by software that receives inputs from one or more image sensors, such as one or more cameras, to enable a variety of additional or enhanced functions of the image sensor(s). These functions may include enhanced object tracking, driver monitoring, gesture recognition and other similar user interface functions that may be enhanced by specific lighting characteristics in one or more zones or subdivisions of the field of view of the image sensor.
In some instances, the illumination sources may include near infrared (NIR) illumination sources, such as a laser array. In one example, the multiple illumination sources may include vertical-cavity surface-emitting laser (VCSEL) arrays that use integrated micro-lenses for beam shaping. Such VCSELs can provide a much more usable emission profile and even provide separate, narrow illumination beams for independently addressable control of the illumination field. The combination of many low-power incoherent emitters greatly reduces coherence noise compared to conventional laser diodes and acts as an extended source with higher eye-safe intensities.
VCSEL arrays can be designed to operate at different wavelengths, thus making them particularly useful for a wide variety of applications. The most common sensors that would be used with this type of illuminator are silicon complementary metal-oxide semiconductor (CMOS) or charge-coupled device (CCD) image sensors. Responsivity in the NIR falls off rapidly in CMOS sensors due to the thin photoabsorption layers that are compatible with the CMOS process. Wavelengths in the range of 830 nm to 940 nm are most compatible with these silicon sensors. VCSELs additionally can be fabricated at longer infrared wavelengths for applications where longer wavelength sensors may be desired.
In one aspect, multiple flip-chip bonded VCSEL devices, for example in an array and/or organized into sub arrays, may be placed on one or more sub-mounts. This design may provide greater flexibility in configuring these devices for a wide range of applications and interconnection options. The ability to interconnect separate groups of lasers in the array can be combined with the integrated micro-lenses to produce a “smart illumination” system that not only provides an optimum illumination pattern over a given area or volume, but can be actively controlled to “track” an area or object of interest to the imaging system, based on feedback from that system. The overall illumination pattern is defined by subdividing the laser array into a number of subarrays that each have one or a combination of micro-lenses with offsets calculated to provide spatially separated (but, in some cases, overlapping) illumination fields from the emitters in the subarray. A single field will typically be a limited part of the overall illumination pattern. Each subarray may be independently addressable through the sub-mount so that it can be switched on and off, and the intensity controlled by the system processor. Each subarray may illuminate a part of the overall field of view of the imaging system. Multiple subarrays can be combined to illuminate larger parts of the system field of view, including turning on all of them when required or for initial identification of the area of interest from within a large field of view. In some aspects, individual VCSEL devices may comprise a subarray, such that individual VCSELs are addressable and configurable.
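The relationship between a micro-lens offset and the resulting beam deflection can be sketched with a simple thin-lens model, in which the deflection angle satisfies tan(θ) = offset / focal length. The following Python sketch is illustrative only; the focal-length and angle values are hypothetical and not taken from any embodiment:

```python
import math

def microlens_offset_um(deflection_deg, focal_length_um):
    """Lateral offset of a micro-lens relative to its emitter needed to
    steer the beam by a given angle, under a thin-lens approximation:
    tan(theta) = offset / focal_length."""
    return focal_length_um * math.tan(math.radians(deflection_deg))

# Steering a subarray's illumination field 10 degrees off-axis with a
# hypothetical 100 um focal-length micro-lens requires roughly a 17.6 um
# offset; larger offsets map to zones farther from the center of the field.
offset = microlens_offset_um(10.0, 100.0)
```

Each subarray would use a different offset (or combination of offsets) calculated this way to place its illumination field within the overall pattern.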
Illuminating arrays can be designed to produce illumination zones covering arbitrary areas of different shapes or aspect ratios. In most cases, the illumination area would be a rectangular field with the 4:3 and 16:9 aspect ratios commonly used by commercial imaging sensors. Different numbers of illumination zones can be configured, limited by the die size and complexity that is cost-effective for the application. With these designs, all the beam shaping can be accomplished with the micro-lenses, such that there is no need for an external lens to cover fields of view of up to 50 degrees. Larger fields of view can be addressed by larger offsets of the micro-lenses that illuminate the edge of the field of view; however, total internal reflection and off-axis aberrations will cause some loss of illumination power. Larger fields of view can be addressed more efficiently by addition of an external optic to increase the angular spread provided by the micro-lenses. In some cases, a holographic diffuser can be added for additional smoothing of the beam profile and for eye safety benefits.
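A minimal sketch of the zone geometry described above, assuming equal angular slices of the field of view and a 16:9 aspect ratio (the grid size and angles are illustrative, not limiting):

```python
import math

def zone_centers_deg(total_fov_deg, n_zones):
    """Center angle of each of n equal zones spanning a field of view,
    measured from the optical axis."""
    width = total_fov_deg / n_zones
    half = total_fov_deg / 2
    return [-half + width * (i + 0.5) for i in range(n_zones)]

def vertical_fov_deg(horizontal_fov_deg, aspect_w=16, aspect_h=9):
    """Vertical field of view implied by a horizontal FOV and an
    aspect ratio."""
    return math.degrees(2 * math.atan(
        math.tan(math.radians(horizontal_fov_deg) / 2) * aspect_h / aspect_w))

# A 50-degree horizontal field split into 4 zones:
centers = zone_centers_deg(50.0, 4)   # [-18.75, -6.25, 6.25, 18.75]
# The matching vertical field for a 16:9 sensor is about 29.4 degrees.
v_fov = vertical_fov_deg(50.0)
```

The micro-lens offsets for each subarray would then be chosen to aim its beams at the corresponding zone center.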
In some aspects, the eye-safe tolerance for the illumination array can be increased (e.g., enabling higher power illumination without exceeding eye safety regulations) by not locating the sub arrays that illuminate adjacent zones in the field of view next to each other on the laser array die, or sub mount in cases where multiple die are used to realize the illuminator.
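One simple way to realize such a placement, assuming zones are equal slices along one axis, is to interleave the subarray positions on the die so that field-adjacent zones are never die-adjacent. The layout below is an illustrative sketch, not a layout from any embodiment:

```python
def interleaved_layout(n_zones):
    """Order of zone indices along the die such that zones adjacent in
    the field of view are not adjacent on the die: even-indexed zones
    first, then odd-indexed zones."""
    return list(range(0, n_zones, 2)) + list(range(1, n_zones, 2))

# For 8 zones, the die order is [0, 2, 4, 6, 1, 3, 5, 7]; no two
# neighboring subarrays on the die illuminate neighboring zones.
layout = interleaved_layout(8)
```

Spreading the emitters that serve adjacent zones across the die keeps the apparent source extended even when only neighboring zones are active, which supports the higher eye-safe power levels noted above.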
Some aspects may utilize a “smart illuminator,” where sub arrays can be controlled via software that incorporates the image processing needed for the sensor application. The software can detect areas of over-illumination (resulting in saturation of the imaging sensor) and under-illumination (resulting in little signal in the imaging sensor) and adjust the individual zones in the sensor field of view to provide a more uniform signal level across the whole image. This will improve the image processing software performance and allow for enhanced features for various applications.
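The detection step can be sketched as per-zone statistics on a captured frame. The grid size and the saturation/dark thresholds below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def classify_zones(image, grid=(2, 2), sat_level=250, dark_level=10):
    """Split an 8-bit image into a grid of zones and flag each zone as
    'saturated', 'dark', or 'ok' based on its mean pixel value.
    Zones are listed row by row, left to right."""
    zones = []
    for row_block in np.array_split(image, grid[0], axis=0):
        for zone in np.array_split(row_block, grid[1], axis=1):
            mean = zone.mean()
            zones.append('saturated' if mean >= sat_level
                         else 'dark' if mean <= dark_level else 'ok')
    return zones
```

Each flagged zone maps back to the subarray that illuminates it, so the corresponding drive level can be lowered or raised.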
The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the disclosure.
Devices, systems and methods are described herein for separately controlling illumination levels of multiple illumination sources directed at different regions or zones of an area or volume, such as a field of view of a camera, to provide adjustable illumination to the separate zones. A multi-zone illuminator, as described herein, may enable a variety of additional or enhanced functions of the image sensor(s) or cameras, such as object tracking, driver monitoring, gesture recognition, etc., that may be enhanced by specific lighting characteristics in one or more zones or subdivisions of the field of view of the image sensor.
In some instances, the multi-zone illuminator, also referred to herein as one or more illumination sources or arrays, may include near infrared (NIR) illumination sources, such as a laser array including multiple laser devices. In one example, the multiple illumination sources may include vertical-cavity surface-emitting laser (VCSEL) arrays that use integrated micro-lenses for beam shaping. Such VCSELs can provide a usable emission profile and even provide separate, narrow illumination beams for independently addressable control of the illumination field. The combination of many low-power incoherent emitters greatly reduces coherence noise compared to conventional laser diodes and acts as an extended source with higher eye-safe intensities.
In the embodiment, VCSEL array device 100 includes a substrate 102 which includes Gallium Arsenide (GaAs), although other materials such as Indium Phosphide (InP), Indium Arsenide (InAs), Silicon (Si), an epitaxially grown material, and the like, could be used to form the substrate 102. It will also be understood that substrate 102 typically includes a lattice constant chosen to minimize defects in a material layer subsequently grown thereon. It will also be understood that the choice of at least one of the compositions and the thicknesses of the subsequently grown material layers will provide a desired wavelength of operation. Subsequent layers are deposited on the substrate 102 via epitaxial growth using Molecular Beam Epitaxy (MBE), Metal-Organic Chemical Vapor Deposition (MOCVD), and the like.
In the embodiment, a lattice-matched lower Distributed Bragg Reflector (DBR) 104 is epitaxially deposited on substrate 102 to form the first of the raised layers of the VCSEL mesas 103 and the short-circuiting/shorting/grounding mesa 105. The lower DBR 104 is formed from multiple layers of alternating materials with varying (a high and a low) indices of refraction, or by periodic variation of some characteristic, such as height, of a dielectric waveguide, resulting in periodic variation in the effective refractive index in the guide. Each layer boundary causes a partial reflection of an optical wave, with the resulting combination of layers acting as a high-quality reflector at a desired wavelength of operation. Thus, while the lower DBR 104 (and upper DBR 108, as further described below) includes more than one material layer, it is illustrated in
In the embodiment, an active region 106 is epitaxially deposited on lower DBR 104. Although shown as a single layer (again for simplicity and ease of discussion), active region 106 comprises cladding (and/or waveguiding) layers, barrier layers, and an active material capable of emitting a substantial amount of light at a desired wavelength of operation. In the embodiment, the wavelength of operation is within a range from approximately 620 nm to 1600 nm (for a GaAs substrate). However, it will be understood that other wavelength ranges may be desired and will depend on the application.
As is understood by those skilled in the art, the wavelength of emission is substantially determined according to the choice of materials used to create lower DBR 104 and upper DBR 108, as well as the composition of the active region 106. Further, it will be understood that active region 106 can include various light emitting structures, such as quantum dots, quantum wells, or the like. In the embodiment, upper DBR 108 is positioned on active region 106, and like lower DBR 104, is electrically conductive to allow ohmic electrical connections to be formed (not shown). In some embodiments, lower DBR 104 is n-doped and upper DBR 108 is p-doped, but this can be reversed, where lower DBR 104 is p-doped and upper DBR 108 is n-doped. In other embodiments, electrically insulating DBRs can be employed (not shown), which utilize intra-cavity contacts and layers closer to the active region.
In some embodiments, an upper mirror contacting layer 109 is positioned on upper DBR 108. Contacting layer 109 is typically heavily doped so as to facilitate ohmic electrical connection to a metal deposited on contacting layer 109, and hence to an electrical circuit (not shown). In some embodiments, contacting layer 109 can be formed as part of upper DBR 108.
Lithography and etching can be used to define each of the mesas 103 and 105 and their structures stated above. This can be achieved by patterning the epitaxially-grown layers through a common photolithography step, such as coating, exposing, and developing a positive thick resist. The thickness of the resist can be varied as is known in the art, depending on etch-selectivity between the resist and the epitaxial layers, and the desired mesa geometry.
For GaAs-based materials, etching is usually accomplished using a Chlorine (Cl) based dry etch plasma, such as Cl2:BCl3, but any number of gases or mixtures thereof could be used. Etching can also be accomplished by many wet etchants. Other forms of etching, such as ion milling or reactive ion beam etching and the like can also be used. The depth of the etch is chosen to be deep enough to isolate the active regions of mesas in the array. The etch stops either on the N mirror (lower DBR 104), an etch stop/contact layer formed in the N mirror (lower DBR 104), or through the N mirror (lower DBR 104) into the substrate 102. After etching to form the mesas, the remaining photoresist is removed. This can be achieved using a wet solvent clean or dry Oxygen (O2) etching or a combination of both.
A confinement region 110 can also be formed within each of the mesas. Within the VCSEL mesas 103, the confinement region 110 defines an aperture 112 for the device. The confinement region 110 can be formed as an index guide region, a current guide region, and the like, and provides optical and/or carrier confinement to aperture 112. Confinement regions 110 can be formed by oxidation, ion implantation or etching. For example, oxidation of a high-Aluminum (Al) content layer (or layers) can be achieved by timing the placement of the wafer or sample in an environment of heated Nitrogen (N2) bubbled through Water (H2O) and injected into a furnace generally over 400° C. A photolithographic step to define an ion implant area for current confinement, and combinations of these techniques and others known in the art, can also be used.
It will be understood that confinement region 110, defining aperture 112, can include more than one material layer, but is illustrated in the embodiment as including one layer for simplicity and ease of discussion. It will also be understood that more than one confinement region can be used.
In the embodiments shown in the Figures, the mesa size and apertures of the light-producing VCSELs are the same and have uniform spacing. However, in some embodiments, the individual VCSEL mesa size for the devices in an array can differ. Furthermore, the VCSEL mesa spacing in the array can differ. In some embodiments, the separation of the light-producing VCSEL mesas in an array 100 is between approximately 20 μm and 200 μm. However, larger and smaller separations are also possible.
A dielectric layer can then be deposited and patterned to define an opening for a contact surface. First, the deposition of a dielectric material 114 over the entire surface of the device 100 is usually accomplished by Plasma Enhanced Chemical Vapor Deposition (PECVD), but other techniques, such as Atomic Layer Deposition (ALD), can be used. In the embodiment, the dielectric coating 114 is a conformal coating over the upper surface (including the mesa sidewalls) and is sufficiently thick so as to prevent current leakage through pinholes from subsequent metal layers.
Other properties to consider while choosing the thickness of this film include the capacitance created between the plated metal heat sink 124 (further described below with reference to
Turning now to
Once the opened areas in the photoresist are defined, metallization can be performed, typically with a p-type metal, over the opened areas. The p-metal contact layer 120 is usually a multilayer deposition that is deposited by E-beam, resistive evaporation, sputter, or any other metal deposition technique. A thin Titanium (Ti) layer is first deposited for adhesion of the next layer. The thickness of this adhesion layer can vary greatly, but is generally chosen to be between about 50 Å and about 400 Å, as the Ti films are highly stressed and more resistive than the subsequent layers. In an embodiment, the adhesion layer is approximately 200 Å thick. Other adhesive metal layers, such as Chromium (Cr), Palladium (Pd), Nickel (Ni), and the like, can be substituted for this layer. This layer can also serve as a reflector layer to increase the reflectance of the contacting mirror.
The next layer is deposited directly on top of the adhesion layer without breaking vacuum during the deposition. In many cases this layer acts as a guard against the Gold (Au) or other top metals from diffusing too far into the contact (a diffusion barrier) because of excessive heating at the bonding stage. Metals chosen are generally Pd, Platinum (Pt), Ni, Tungsten (W), or other metals or combinations of these metals chosen for this purpose. The thickness chosen should depend upon specific bonding temperatures needed in the flip chip process. The thickness of this layer is typically between about 1,000 Å and about 10,000 Å. In embodiments where a low temperature bonding process is used, for example, in an Indium bonding process, a diffusion barrier layer can be optional, and not deposited as part of the metal contact stack.
The next layer is generally Au but can be Pd or Pt or mixtures such as Gold Beryllium (AuBe) or Gold Zinc (AuZn). In the embodiment described below, the thickness of this layer is approximately 2,000 Å. However, it can generally have a wide range of thicknesses depending on the photoresist properties and heating characteristics of the deposition. In some embodiments, another metal can also be deposited at this time to increase metal thickness and to form the metal heat sink at this stage, thereby reducing the number of processing steps; however, this technique is not necessary and was not utilized in the demonstration devices described below.
Generally, a common liftoff technique is chosen for this photolithographic process so that the metal deposited on the surface can easily be separated from the areas of the surface covered with photoresist, such that any metal on the photoresist is removed without sticking to or affecting the adhesion of the metal to the semiconductor. As noted above, a photolithographic process is then used to define the openings over various portions of the substrate 102 and the shorted n-contact mesas 105, where the dielectric was opened in a previous step. In an embodiment, the opened area in the photoresist corresponding to the n-metal deposition should be slightly larger than the corresponding openings in the dielectric. N-metal layer 122 is then deposited and can form an electrical circuit with the substrate 102 either through the lower DBR 104 (if an n-mirror), an etch stop and contact layer which is generally heavily doped within lower DBR 104, or to substrate 102 itself. The process to form the n-metal layer 122 is similar to that for the p-metal layer 120. The metal layers can be chosen from combinations such as Ni/Ge/Au or Ge/Au/Ni/Au. In some embodiments, the first layer or layers are chosen to reduce contact resistance by diffusion into the n-doped epitaxial material of the substrate 102. In other embodiments, the first layer of the multi-layer metal stack can also be chosen as a diffusion-limiting layer, such as Ni, so that in the annealing process the metals do not “clump” and separate due to the various diffusion properties of the materials. Evenly distributed diffusion of these metals is desired, as it lowers the contact resistance, which in turn reduces heating. The thickness of this multi-layer metal stack can vary greatly. In the embodiment to be described, a Ni/Ge/Au metal stack with thicknesses of 400 Å/280 Å/2,000 Å, respectively, was used.
A Rapid Thermal Anneal (RTA) step is then performed on the wafer in order to lower contact resistance. For the embodiment described, the process temperature is rapidly ramped up to ˜400° C., held for about 30 seconds and ramped down to room temperature. The temperature and time conditions for the RTA step depend on the metallization, and can be determined using a Design Of Experiment (DOE), as known to those of ordinary skill in the art.
In other embodiments, this step can be performed at an earlier or later stage of the process flow, but is generally done before solder is deposited so as to reduce oxidation of the solder or adhesive metal. A photolithographic process (using a thin layer of photoresist, typically around 1 μm to 3 μm) is used and developed to define the contact openings over the substrate 102, shorted N contact mesas 105, and active mesas 103 where the heat sink structures will be plated or built up. The next step is deposition of the metal seed layer, which is usually a multilayer deposition and deposited by E-beam, resistive evaporation, sputter or any other metal deposition technique. The metal layers can be chosen, for example, as Ti/Au at 20 Å/600 Å, or many such combinations, where the first layer or layers are deposited for adhesion and the subsequent layer for conductivity, with both chosen for ease of etch removal. The seed layer is continuous over the surface, allowing electrical connections for plating, if this technique is used for building up the heat sinks.
In an embodiment, a thick metal is then deposited by plating to form heat sink 124. However, other methods of deposition can also be used, in which case the metal seed layer is not required. For plating, a photolithographic process is used to define the openings over the openings defined with the previous seed layer resist. The photoresist is removed in the areas where the deposition will occur. The thickness of the photoresist must be chosen so that it will lift off easily after the thick metal is defined, and typically ranges in thickness from about 4 μm to about 12 μm. A plasma clean using O2, or water in combination with Ammonium Hydroxide (NH4OH), is performed to clear any of the resist left on the gold seed layer. The heat sink 124 metal is plated next by means of a standard plating procedure. In the embodiment described, Copper (Cu) was chosen as the metal for plating due to its thermal conductance properties, but non-oxidizing metals, such as Au, Pd, Pt, or the like, that provide good thermal conductance and an interface that does not degrade device reliability, could be more appropriate. Plating thicknesses can vary. In the embodiment described, an approximately 3 μm thickness was used.
Next the wafer or sample is placed in a solder plating solution such as Indium (In) plating to form a bonding layer 126. Other metals can be chosen at this step for their bonding characteristics. The thickness can vary greatly. In the embodiment described, approximately 2 μm of plated In was deposited on the heat sinks. However, other solders such as Gold Tin (AuSn) alloys can also be used, and alternative deposition techniques such as sputtering can also be used. After metal deposition is complete, the photoresist is then removed using solvents, plasma cleaned, or a combination of both, as previously described, and the seed layer is etched with a dry or wet etch that etches Au, then etched in a dry or wet etch that etches Ti and/or removes TiO2. The seed layer photoresist is then cleaned off with standard resist cleaning methods. At this point, the VCSEL array substrate is complete and ready for bonding.
The full encasement of the mesas with a thick heat sink material is an important aspect of the embodiment. Since the active regions of the mesas are closest to the edge where the thick heat sink material is formed, there is good thermal conductance, thereby enabling the design of the embodiment to efficiently and effectively remove heat generated by those active regions. As previously noted, this is a significant departure from existing VCSEL array device heat reduction techniques, which place the heat sink material on top of the mesa. These existing or prior designs require heat to move through a series of less thermally conductive materials (mirrors) or dielectrics, thereby resulting in less efficient and effective heat reduction.
Although some existing designs encompass the mesa with a thin layer of heat sink material for the purpose of reducing heat, these designs do not take into consideration the height of the resulting heat sink. By using a thick heat sink layer and adding to the distance between the n-substrate ground potential and the p-contact plane on the heat sink substrate, present embodiments decrease the parasitic capacitance of the system as the height of the heat sink layer is increased. Further, in addition to reducing heat, the build-up of additional material increases the frequency response. In another embodiment, the dielectric layer 114 covers the entire n-mirror or substrate around the mesas and is not opened, so that the heat sink material can completely encompass all mesas and form one large heat sink structure, instead of individual heat sinks per mesa. In this case, the n-contacts would only need to extend from the short-circuited mesas to the substrate. The heat sinks of the embodiment also improve the operation of the VCSEL array by reducing the amount of heat generated by neighboring mesas. A reduction in thermal resistance within most electrical devices will increase the frequency response of each device. By improving the thermal performance of the VCSEL array device, a significant increase in high-speed performance is made possible. Furthermore, in this embodiment, the extra height given to the mesas by the thickened heat-sink build-up, compared to existing array circuits, reduces capacitance by increasing the distance between the substrate ground plane and the positive contact plate connecting all active mesas in parallel. The resultant effect is a reduction in the parasitic impedance of the circuit, which also increases the frequency response of the entire array.
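The capacitance argument can be illustrated with a parallel-plate estimate, C = ε0·εr·A/d: raising the heat-sink height d between the ground plane and the contact plane lowers C proportionally. The plate area and heights below are illustrative values only, not dimensions from the embodiments:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parasitic_capacitance_f(area_m2, separation_m, eps_r=1.0):
    """Parallel-plate estimate of the capacitance between the substrate
    ground plane and the p-contact plane separated by the heat-sink
    height: C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area_m2 / separation_m

# Doubling the plated heat-sink height halves the estimated parasitic
# capacitance (illustrative 1 mm^2 plate area, 3 um vs 6 um heights):
c_thin = parasitic_capacitance_f(1e-6, 3e-6)
c_thick = parasitic_capacitance_f(1e-6, 6e-6)
```

A lower parasitic capacitance reduces the RC time constant of the drive circuit, which is the mechanism behind the frequency-response improvement described above.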
Also, the short-circuited mesa design, which forms a sub-array surrounding the active regions, allows current to flow directly from the fabricated VCSEL substrate to the ground plane on the heat spreader without forming multiple wire bonds. This aspect of the embodiment reduces the complexity of fabrication, and also reduces the parasitic inductance from the multiple wire bonds exhibited in existing arrays. The short-circuited mesa design, when flip-chip bonded to the heat spreader substrate, forms a coplanar waveguide, which is beneficial to the frequency response of the array. This design feature also enables simpler packaging designs that do not require raised wire bonds, which can impact reliability and positioning.
To allow for control of the optical properties of the VCSEL devices in the array, micro-lenses can be fabricated on the substrate side of the array. Since all the electrical contacts are made to the epitaxial side of the semiconductor wafer, the other side of the wafer is available for modifications, including thinning the substrate, polishing the substrate surface, fabrication of micro-lenses or other optical elements, including diffractive optical elements, metal or dielectric gratings or special coatings, such as anti-reflection coatings to reduce reflection losses at the substrate surface.
A simplified process sequence to produce the VCSEL array device with micro-lenses on the substrate (light-emitting surface) side of the device, as depicted in
The micro-lenses can be fabricated on the substrate surface by a variety of different processes. Refractive micro-lenses can be patterned in photoresist as cylindrical structures that can be reflowed at high temperature. Surface tension in the melted photoresist causes it to have a hemispheric shape. That shape can be etched into the substrate surface with appropriate plasma or reactive ion etch processes that erode the photoresist at a controlled rate while etching the underlying substrate. Other techniques for fabricating refractive micro-lenses include grey-scale lithography and jet deposition. Diffractive lenses (Fresnel lenses) can also be fabricated on the substrate using fabrication processes including grey-scale lithography, embossing of a polymer coating, or binary lithographic patterning.
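The reflow step can be approximated by volume conservation: the patterned resist cylinder melts into a spherical cap with the same base radius and volume, and the cap's radius of curvature sets the lens power. A sketch under that idealized assumption (the dimensions are illustrative, not process values from the disclosure):

```python
import math

def reflowed_lens(cyl_radius_um, cyl_height_um):
    """Estimate the reflowed micro-lens shape by volume conservation:
    a resist cylinder melts into a spherical cap with the same base
    radius and volume.  Returns (cap height, radius of curvature) in um."""
    a = cyl_radius_um
    volume = math.pi * a**2 * cyl_height_um
    # Solve pi*h*(3a^2 + h^2)/6 = volume for the cap height h by bisection.
    lo, hi = 0.0, 2.0 * a
    for _ in range(100):
        h = (lo + hi) / 2.0
        if math.pi * h * (3 * a**2 + h**2) / 6.0 < volume:
            lo = h
        else:
            hi = h
    radius_of_curvature = (a**2 + h**2) / (2.0 * h)
    return h, radius_of_curvature

# A 10 um radius, 5 um tall resist cylinder reflows into a cap roughly
# 8.2 um tall with about a 10.2 um radius of curvature.
cap_h, roc = reflowed_lens(10.0, 5.0)
```

The subsequent transfer etch scales this profile into the substrate according to the etch-rate ratio between resist and semiconductor.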
In some aspects, the image processor may also be in communication with one or more imaging sensors 210. Imaging sensor 210 may include a lens 215 that may capture image data corresponding to a camera field of view 225. In some cases, the data captured by the image sensor 210 may be enhanced by specific illumination or light provided to various zones or areas/volumes of the field of view 225. In some cases, the zones 240-255 may correspond to areas at a certain distance from the image sensor 210/lens 215, or may correspond to volumes within the field of view 225.
In some cases, the image processor may obtain information from the image sensor 210, including information defining or specifying the field of view 225 of the image sensor 210, such as by angle, distance, area, or other metrics. In some cases, the information may include a subset of the total field of view 225 that is of particular interest, such as including one or more objects 220, defined by a distance from the image sensor 210, a certain angle range of the field of view 225, etc. In some cases, this information may change and be periodically or constantly sent to the image processor 205, such as in cases of tracking one or more objects 220. The image processor may receive this information, and in conjunction with laser driver electronics 230, may control the laser array 235 to provide different illumination intensities to different zones 240-255. In some cases, the laser array 235 may be controlled to provide a determined optimal level of illumination to different zones 240-255. The optimal level may be determined based on any number of factors, including physical characteristics of the image sensor 210/lens 215, characteristics of the object or objects of interest 220, certain areas of interest within the field of view 225, other light characteristics of the field of view 225, and so on.
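Mapping an object of interest to the zones that should be illuminated can be sketched as an angular overlap test, assuming zones are equal angular slices of the field of view (the geometry below is hypothetical):

```python
def zones_for_object(obj_min_deg, obj_max_deg, total_fov_deg, n_zones):
    """Indices of the illumination zones (equal angular slices of the
    field of view, measured from the optical axis) that overlap an
    object's angular extent."""
    half = total_fov_deg / 2
    width = total_fov_deg / n_zones
    return [i for i in range(n_zones)
            if (-half + i * width) < obj_max_deg
            and (-half + (i + 1) * width) > obj_min_deg]

# An object spanning -5 to +12 degrees in a 50-degree, 4-zone field
# overlaps zones 1 and 2, so only those zones need extra illumination.
selected = zones_for_object(-5.0, 12.0, 50.0, 4)
```

As the tracked object moves, re-running this test on each frame yields the updated set of zones to drive, which is the "tracking" behavior described above.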
In some aspects, the applications or devices, such as imaging sensor(s) used for detecting or tracking moving objects, such as object 220, tracking or detecting gestures of a user, etc., that can utilize the described multi-zone illuminator 235 may already have a computational unit (e.g., corresponding to image processor 205) processing the image data. In these cases, the multi-zone illuminator 235 may be connected to existing systems and function via a software/hardware interface. The software interface may be modified to include detection of the illumination level of different zones of the image field of view, corresponding to the illumination zones that are provided by the illuminator 235, and provide feedback signals to the laser array driver electronics 230 to modulate the light intensity for each zone.
In one example, the modulation of the light intensity can be performed through typical laser or LED driver electronic circuits 230 that control the direct drive current to each laser of array 235 or to a commonly-connected group of lasers (sub-array), that use pulse-width modulation of a fixed current drive to each laser or commonly-connected group of lasers, or that use other current modulation approaches. Since the illumination zones 240-255 are separately connected to the driver electronics, they can be modulated independently, including in synchronization with electronic shutters in the imaging electronics. The illumination zones 240-255 can also be driven sequentially or in any other timing pattern preferred by the image processing electronics.
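The two drive approaches above can be sketched as follows. This is a minimal illustrative model, not an actual driver API: the `ZoneDriver` class, its register resolution, and the current limit are all assumptions made for the example.

```python
# Hypothetical sketch: per-zone intensity control for a multi-zone laser
# illuminator, assuming each zone (e.g., zones 240-255) maps to one driver
# channel supporting either direct current control or pulse-width
# modulation (PWM) of a fixed drive current. All names and values here
# are illustrative, not taken from a real driver chip.

class ZoneDriver:
    """Models one driver channel for a laser or commonly-connected sub-array."""

    def __init__(self, max_current_ma: float, pwm_steps: int = 256):
        self.max_current_ma = max_current_ma  # fixed drive current for PWM mode
        self.pwm_steps = pwm_steps            # PWM duty-cycle resolution
        self.duty = 0                         # current PWM duty in [0, pwm_steps - 1]
        self.current_ma = 0.0                 # direct-drive current setting

    def set_intensity_pwm(self, fraction: float) -> int:
        """Set zone intensity as a fraction of full power via PWM duty cycle."""
        fraction = min(max(fraction, 0.0), 1.0)
        self.duty = round(fraction * (self.pwm_steps - 1))
        return self.duty

    def set_intensity_current(self, fraction: float) -> float:
        """Set zone intensity by scaling the direct drive current instead."""
        fraction = min(max(fraction, 0.0), 1.0)
        self.current_ma = fraction * self.max_current_ma
        return self.current_ma
```

Because each zone owns its own channel, independent or synchronized modulation amounts to calling these setters per zone on whatever timing pattern the imaging electronics prefer.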
The system 200 depicted in
In some aspects, individual illumination zones, such as zones 240-255, may be dynamically controlled, such that one or more zones are turned on and off, or the illumination intensity of one or more zones is modified, in response to feedback from the image sensor 210. Dynamic adjustment of the illumination pattern resulting from the multiple zones 240-255 may be carried out or controlled by the image processor 205.
Process 300 may begin at operation 305, where the full field of view of the system (e.g., field of view 225 of image sensor 210) may be illuminated by all of the illuminator zones 240-255 at once. In some cases, the illumination power level utilized at operation 305 may have been previously calibrated to provide an efficient starting point, may be selected based on the field of view 225 of the image sensor 210, may be full intensity, etc. The image processor 205 may detect, from the image sensor 210, which zones 240-255 are saturated (high signal without any range for further increase), at operation 310, and/or which zones have a low (intensity) signal, at operation 320. Each zone 240-255 can be adjusted to reduce illumination, and thus the signal level, in the saturated zones, at operation 315, and to increase illumination in the low-signal zones, at operation 325. The process of adjusting illumination in each of n zones can proceed in sequence through the zones, one at a time, or, if the image processor has the capability, can be performed simultaneously for multiple zones.
In the example illustrated, a first zone n may be selected at operation 330. It may then be determined whether the zone is saturated, at operation 310. If the zone is saturated, the illumination level or intensity for that zone may be reduced at operation 315. Process 300 may then loop back to operation 310 to re-check whether the adjustment at operation 315 reduced the signal from that zone to an appropriate level. If the zone has a low signal, as determined at operation 320, the illumination level for that zone may be increased at operation 325. Once the selected zone is adjusted to provide an optimized illumination level, a new zone may be selected at operation 335, and operations 310, 315, 320, 325, and 335 may be repeated until there are no zones left, until a select number of zones are calibrated, etc. In some cases, an area of interest may be selected or configured such that only zones in the area of interest are adjusted via operations 310-335. In other cases, the area of interest may be automatically detected/configured, for example, based on detected movement in the field of view 225.
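The per-zone feedback loop of operations 310-335 can be sketched as follows. The normalized signal scale, thresholds, and step size below are illustrative assumptions, not values from the specification.

```python
# Illustrative sketch of the per-zone feedback loop (operations 310-335),
# assuming the image processor can report a mean signal level per zone,
# normalized to [0, 1] with 1.0 meaning fully saturated. The thresholds
# and adjustment step are made-up values for this example.

SATURATION_LEVEL = 0.95  # operation 310: signal at or above this is "saturated"
LOW_SIGNAL_LEVEL = 0.20  # operation 320: signal at or below this is "low"
STEP = 0.1               # fractional illumination adjustment per pass

def adjust_zone(illumination: float, signal: float) -> float:
    """One pass of operations 310-325 for a single zone."""
    if signal >= SATURATION_LEVEL:
        # Operation 315: reduce illumination for a saturated zone.
        illumination = max(0.0, illumination - STEP)
    elif signal <= LOW_SIGNAL_LEVEL:
        # Operation 325: increase illumination for a low-signal zone.
        illumination = min(1.0, illumination + STEP)
    return illumination

def adjust_all_zones(illum_levels, signals):
    """Operations 330/335: step through the n zones one at a time."""
    return [adjust_zone(i, s) for i, s in zip(illum_levels, signals)]
```

In practice the loop would re-read each zone's signal after every adjustment (the loop back to operation 310) until the zone settles between the two thresholds.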
The image processor 205, for many applications, may detect objects of interest at operation 340, such as a human subject's eyes or face. As the illumination is adjusted at operations 310-325, the image processor 205 may alternatively or additionally identify one or more subjects or objects of interest, at operation 340, and identify the illumination zones corresponding to the location of the subject(s) or object(s), at operation 345. The identification process may be more efficient when the illumination level is optimized for maximum signal-to-noise ratio(s). Once the object of interest is identified, the illumination levels for zones that do not contain the object can be reduced, at operation 350, for example, to save power and reduce heating in the laser array and the laser driver electronics.
In some aspects, the object of interest may be tracked by the illumination system, for example, by first detecting whether the object is moving, at operation 355. As the object moves within the camera field of view 225, the image processor 205 may determine in what direction the object is moving, at operation 360. Based on the locations of the illumination zones 240-255, the image processor 205 may determine towards which zones the object is moving. The image processor 205 may turn on or adjust (e.g., increase) the illumination level of this zone or zones to provide optimized illumination levels to better capture images of the object and/or to better track the object as it continues to move, at operation 365. Because the relative efficiency of each zone can be stored in and accessed from memory, the initial illumination level of the zone the object is moving into or towards can be set relative to the level that was previously used in the zone that was illuminating the object before it moved. Stated another way, the illumination level for a zone the object is moving into or towards (a future zone) may be set based on the illumination level in the current or previous zone.
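Setting the future zone's initial level from the previous zone's level and the stored relative efficiencies can be sketched as below. The zone labels and efficiency values are hypothetical calibration data invented for the example.

```python
# Hedged sketch of the hand-off at operation 365: when an object moves
# toward a new zone, scale the previous zone's drive level by the stored
# relative efficiency of each zone so the delivered optical output stays
# roughly constant. Zone names and efficiencies are illustrative.

zone_efficiency = {"A": 1.00, "B": 0.80, "C": 0.90}  # relative optical output per unit drive

def initial_level_for_next_zone(prev_zone: str, next_zone: str,
                                prev_level: float) -> float:
    """Drive level for the zone the object is moving into, from memory."""
    scale = zone_efficiency[prev_zone] / zone_efficiency[next_zone]
    # Clamp to full drive; a less efficient zone may need more current
    # than the previous zone used.
    return min(1.0, prev_level * scale)
```

A zone that is 80% as efficient as the previous one is driven proportionally harder, so the tracked object sees approximately the same illumination as it crosses the zone boundary.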
The image processor 205 may periodically readjust illumination levels for each zone while looking for additional objects of interest or changes in the image that may affect system performance. For example, process 300 may continue to cycle through operations 355, 360, and 365 for an object until the object is no longer moving, at which point process 300 may end at 370, or may loop back to operation 330 or 310 and continue adjusting illumination levels and/or tracking the same or different objects. In some cases, multiple objects may be identified, and operations 340-365 may be performed in parallel for each of the multiple objects. In this instance, illumination levels for different zones may be adjusted in view of the illumination zones being adjusted for the other objects.
One embodiment of the illumination module is depicted in
An alternative embodiment of the illumination module is shown in
In either of these embodiments, the submount 535 allows for separate electrical contact to individual lasers or groups of lasers through the patterned contact metal 525, 530 on the surface of the submount 535. This allows for separate driver circuit current channels for each independently addressed laser or group of lasers.
Each separately addressed laser or group of lasers can have an output beam whose direction and angular spread is determined by the micro-optical elements 540.
If the micro-optical element 605 is a diffractive structure, similar to a Fresnel lens or curved diffraction grating, then the physical offset in position of the micro-optic is not necessary, and the diffractive structure is designed to produce the desired angular direction of the beam by well-documented mathematical techniques.
The angular spread of the emitted beam 610 from each individual laser 645, or from a group of lasers, can also be controlled by the micro-optical elements. Each micro-lens 605 can produce a beam 610 that has a lower angular divergence, or a larger angular divergence, than that of the laser 645 itself, through the choice of focal length 650 of the lens. The focal length 650 is determined by the radius of curvature and the index of refraction of the micro-lens. A micro-lens can decrease the divergence of the light emitted from a single emitter by acting as a collimating lens. This effect is limited by the size of the emitting aperture of a single laser relative to the size of the micro-lens. A larger source size relative to the micro-lens aperture will increase the divergence of the beam even when the source is located at the focal distance from the lens for best collimation. If the micro-lens is fabricated so that its focal length is shorter or longer than the best-collimation focal length, the beam from that emitter will diverge more rapidly than the beam from the same laser emitter without a micro-lens.
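The source-size limit on collimation described above can be estimated with simple geometry. This is a rough first-order sketch, assuming the emitter sits at the micro-lens focal plane; the aperture and focal-length values used are illustrative, not design values from the specification.

```python
import math

# Rough geometric estimate of the residual divergence of a beam
# "collimated" by a micro-lens: for an extended emitter of width a at the
# focal plane of a lens with focal length f, the full-angle divergence is
# approximately 2 * atan((a / 2) / f). Example values (a 10 um aperture,
# a 100 um focal length) are assumptions for illustration only.

def residual_divergence_deg(aperture_um: float, focal_um: float) -> float:
    """Full-angle divergence (degrees) of a collimated extended source."""
    return math.degrees(2.0 * math.atan((aperture_um / 2.0) / focal_um))
```

Consistent with the text, a larger emitting aperture (or a shorter focal length) increases this residual divergence even at the best-collimation spacing.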
In addition, a group of lasers can have a collective beam that has greater divergence in the far field by a radial offset of the micro-lenses, as shown in
Micro-lenses can be used with combined linear (between each micro-lens) and radial offsets (described above) to produce beams from a group of lasers that have both a specified angle of propagation in the field of view of the imaging detector and a specified angular spread. By designing a laser die that has several separately connected groups of lasers, each with micro-lenses aligned to produce a beam to illuminate a separate angular zone within the field of view of the detector, a complete illumination system may be fabricated in a single compact module.
As depicted in
The center laser of array 710 may have a zero radial offset, while the six outer laser devices of array 710 may have a radial offset relative to their micro-lenses that places the lasers toward the center of array 710 (i.e., toward the center laser). Array 710 may produce beams 755 that diverge to produce an illumination pattern that expands as the distance from the array 710 increases. The six outer micro-lenses are offset away from the center axis by a fixed offset that is a fraction of the micro-lens diameter, so that significant amounts of light from the lasers are not incident outside the corresponding micro-lenses. If the array is larger, then the next ring of micro-lenses (12 additional micro-lenses in a hexagonal array layout) will be offset by two times the offset value relative to the corresponding laser axes. This radial offset can be easily realized in designing the array by using a different pitch for the hexagonal array of lasers than for the hexagonal array of micro-lenses and aligning the central laser and micro-lens to each other. The result is a radial offset between the micro-lenses and laser emitters that increases by the pitch difference for each larger ring of the array. The example shows a radial offset that places the micro-lenses farther from the array center than the emitter apertures, by using a larger pitch for the micro-lens array than for the laser array. This will result in a combined beam that diverges more rapidly than the beam divergence due to just the micro-lens focal length. An alternative design can use a smaller pitch for the micro-lens array than for the laser array. That design will create a combined beam that converges for a short distance before the beams cross each other and diverge apart. That approach may have utility for illumination of objects at a short distance for microscopy, material heating, or other applications.
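The pitch-difference construction above reduces to one line of arithmetic: with the central laser and micro-lens aligned, the radial offset at ring k is k times the pitch difference. The pitch values in the sketch below are illustrative, not design values.

```python
# Sketch of the pitch-difference layout described above. The central laser
# and micro-lens are aligned (ring 0), and each successive hexagonal ring
# adds one pitch difference of radial offset between a laser emitter and
# its micro-lens. Pitch values in the test are assumptions for example.

def ring_offset_um(ring: int, laser_pitch_um: float, lens_pitch_um: float) -> float:
    """Radial micro-lens offset at hexagonal ring `ring` (0 = center laser).

    Positive values place the micro-lenses farther from the array center
    than the emitters (diverging combined beam); negative values place
    them closer (beams converge, then cross and diverge).
    """
    return ring * (lens_pitch_um - laser_pitch_um)
```

With a lens pitch larger than the laser pitch the offsets grow outward ring by ring, giving the expanding illumination pattern of array 710; swapping the pitches flips the sign and yields the short-range converging design.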
All of the lasers of array 715 may be globally offset in the same direction and by the same distance, for example, to produce beams 760 that are all directed in the same direction, offset from the beams 745 of array 705. As described previously, the offset of the micro-lenses relative to the location of the laser emitting apertures causes the beam to be emitted at an angle defined, to first order, by the direction of the chief ray. This allows calculation of how much offset is required to achieve a desired angle of deviation from the perpendicular to the illuminator surface. More precise calculation of the global offset needed to direct a combined beam of emitters in a desired direction can be done with ray tracing techniques or beam propagation calculations. Both radial and global offsets can be combined in a single array (e.g., combining aspects of arrays 710 and 715), so that both the divergence and the direction of the combined beams may be simultaneously determined by the design of the micro-lenses and the laser array. It should be appreciated that arrays 705, 710, and 715 are given only by way of example. Other configurations, other numbers of lasers, etc., are contemplated herein.
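The first-order chief-ray relation mentioned above can be written down directly: a lateral offset d between an emitter and a lens of focal length f steers the beam by roughly atan(d / f). This is only the first-order estimate the text describes; as noted, precise designs would use ray tracing or beam propagation. The numeric values are illustrative.

```python
import math

# First-order chief-ray steering estimate for a laterally offset
# micro-lens: beam angle ~ atan(offset / focal_length), measured from the
# perpendicular to the illuminator surface. The inverse gives the offset
# needed for a desired deviation angle. Values used are illustrative.

def steering_angle_deg(offset_um: float, focal_um: float) -> float:
    """Approximate beam deviation (degrees) for a given lateral offset."""
    return math.degrees(math.atan(offset_um / focal_um))

def offset_for_angle_um(angle_deg: float, focal_um: float) -> float:
    """Inverse: lateral offset needed for a desired deviation angle."""
    return focal_um * math.tan(math.radians(angle_deg))
```

A global offset applied identically to every lens of a sub-array (as in array 715) steers the whole group's combined beam by this angle, while per-ring radial offsets (as in array 710) set its divergence.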
The example in
As depicted in diagram 1000a of
As depicted in diagram 1000b of
As depicted in diagram 1000c of
As depicted in diagram 1000d of
As depicted in diagram 1000e of
As depicted in diagram 1000f of
Further eye safety improvements can be made for higher power operation for longer ranges by adding a diffuser 1005 in front of the illuminator/laser 1200 as shown in
The techniques described in U.S. Pat. No. 9,232,592B2 may be combined with the multi-zone illuminator described herein. The individual zones of the illuminator may be controlled electronically (e.g., by the image processor 205 and laser drive electronics 230 illustrated in
There may be limits to the size of angular field that the illuminator can cover using only the integrated micro-optical elements. If the micro-optical elements are micro-lenses 540, they will have losses from internal reflections and beam profile distortions from off-axis aberration as the deflection angles increase. Similarly, diffractive elements will have higher diffraction losses and become more difficult to fabricate at large deflection angles. In order to increase the field coverage of an illuminator 1200, an external optic can be added in addition to the integrated micro-lenses 540.
An external optic 1305, which may be a larger aperture device that can change the beam direction and divergence properties for all of the beams at once, may be placed after the micro-lenses 540, as illustrated in the illuminator 1300 of
For situations where higher power from larger laser arrays is required, for longer distance illumination or for illuminating very large fields of view, a multiple-substrate approach may be required.
While the present disclosure has been illustrated and described herein in terms of several alternatives, it is to be understood that the techniques described herein can have a multitude of additional uses and applications. Accordingly, the disclosure should not be limited to just the particular description, embodiments, and various drawing figures contained in this specification that merely illustrate one or more embodiments, alternatives, and applications of the principles of the disclosure.
This application is a continuation-in-part application taking priority from U.S. patent application Ser. No. 14/813,011, filed Jul. 29, 2015, which claims benefit under 35 U.S.C. § 119(e) of Provisional U.S. Patent Application No. 62/030,481, filed Jul. 29, 2014, entitled “Laser Arrays For Variable Optical Properties,” and is also a continuation-in-part application taking priority from U.S. patent application Ser. No. 13/902,555, filed May 24, 2013, now U.S. Pat. No. 8,995,493, issued Mar. 31, 2015, entitled “Microlenses for Multibeam Arrays of Optoelectronic Devices for High Frequency Operation,” and is also a continuation-in-part application taking priority from U.S. patent application Ser. No. 13/868,034, filed Apr. 22, 2013, now U.S. Pat. No. 9,232,592, issued Jan. 5, 2016, entitled “Addressable Illuminator with Eye-Safety Circuitry,” which claims benefit under 35 U.S.C. § 119(e) of Provisional U.S. Patent Application No. 61/636,570, filed Apr. 20, 2012, entitled “Addressable Illuminator with Eye-Safety Circuitry.” The U.S. patent application Ser. No. 14/813,011 is also a continuation-in-part application taking priority from U.S. patent application Ser. No. 13/077,769, filed Mar. 31, 2011, now U.S. Pat. No. 8,848,757, issued Sep. 30, 2014, entitled “Multibeam Arrays of Optoelectronic Devices for High Frequency Operation,” which is a continuation application taking priority from U.S. patent application Ser. No. 12/707,657, filed Feb. 17, 2010, now U.S. Pat. No. 7,949,024, issued May 24, 2011, entitled “Multibeam Arrays of Optoelectronic Devices for High Frequency Operation,” which takes priority from Provisional U.S. Patent Application No. 61/153,190, filed Feb. 17, 2009, entitled “Multibeam Arrays of Optoelectronic Devices for High Frequency Operation.” Each of these applications is hereby incorporated by reference in its entirety. This application is also a continuation-in-part of U.S. patent application Ser. No. 14/946,730, filed Nov. 19, 2015, which is a divisional of U.S. patent application Ser. No. 13/594,714, filed Aug. 24, 2012, which claims benefit under 35 U.S.C. § 119(e) of Provisional Application No. 61/528,119, filed Aug. 26, 2011, and of Provisional Application No. 61/671,036, filed Jul. 12, 2012, each of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3805255 | Baker | Apr 1974 | A |
4127322 | Jacobsen et al. | Nov 1978 | A |
4447136 | Kitamura | May 1984 | A |
4448491 | Okubo | May 1984 | A |
4714314 | Yang et al. | Dec 1987 | A |
4822755 | Hawkins et al. | Apr 1989 | A |
4827482 | Towe et al. | May 1989 | A |
4850685 | Kamakura et al. | Jul 1989 | A |
4870652 | Thornton | Sep 1989 | A |
4881237 | Donnelly | Nov 1989 | A |
4971927 | Leas | Nov 1990 | A |
5034344 | Jewell et al. | Jul 1991 | A |
5070399 | Martel | Dec 1991 | A |
5073041 | Rastani | Dec 1991 | A |
5098183 | Sonehara | Mar 1992 | A |
5164949 | Ackley et al. | Nov 1992 | A |
5258316 | Ackley et al. | Nov 1993 | A |
5325385 | Kasukawa | Jun 1994 | A |
5325386 | Jewell et al. | Jun 1994 | A |
5328854 | Vakhshoori et al. | Jul 1994 | A |
5359618 | Lebby et al. | Oct 1994 | A |
5383200 | Barrett et al. | Jan 1995 | A |
5402436 | Paoli | Mar 1995 | A |
5420879 | Kawarada et al. | May 1995 | A |
5422753 | Harris | Jun 1995 | A |
5422903 | Yamada et al. | Jun 1995 | A |
5457561 | Taneya et al. | Oct 1995 | A |
5504767 | Jamison et al. | Apr 1996 | A |
5557115 | Shakuda et al. | Sep 1996 | A |
5574738 | Morgan | Nov 1996 | A |
5581571 | Holonyak et al. | Dec 1996 | A |
5640188 | Andrews | Jun 1997 | A |
5680241 | Sakanaka | Oct 1997 | A |
5707139 | Haitz | Jan 1998 | A |
5745515 | Mart et al. | Apr 1998 | A |
5758951 | Haitz | Jun 1998 | A |
5781671 | Li | Jul 1998 | A |
5801666 | Macfarlane | Sep 1998 | A |
5812571 | Peters et al. | Sep 1998 | A |
5825803 | Labranche et al. | Oct 1998 | A |
5896408 | Corzine et al. | Apr 1999 | A |
5918108 | Peters | Jun 1999 | A |
5930279 | Apollonov et al. | Jul 1999 | A |
5976905 | Cockerill et al. | Nov 1999 | A |
5991318 | Caprara et al. | Nov 1999 | A |
6007218 | German et al. | Dec 1999 | A |
6044101 | Luft | Mar 2000 | A |
6075804 | Deppe et al. | Jun 2000 | A |
6125598 | Lanphier | Oct 2000 | A |
6128131 | Tang | Oct 2000 | A |
6136623 | Hofstetter et al. | Oct 2000 | A |
6154480 | Magnusson et al. | Nov 2000 | A |
6167068 | Caprara et al. | Dec 2000 | A |
6215598 | Hwu | Apr 2001 | B1 |
6259715 | Nakayama | Jul 2001 | B1 |
6353502 | Marchant et al. | Mar 2002 | B1 |
6393038 | Raymond et al. | May 2002 | B1 |
6446708 | Lai | Sep 2002 | B1 |
6493368 | Chirovsky et al. | Dec 2002 | B1 |
6608849 | Mawst et al. | Aug 2003 | B2 |
6661820 | Camilleri et al. | Dec 2003 | B1 |
6728289 | Peake et al. | Apr 2004 | B1 |
6757314 | Kneissl et al. | Jun 2004 | B2 |
6775308 | Hamster et al. | Aug 2004 | B2 |
6775480 | Goodwill | Aug 2004 | B1 |
6898222 | Hennig et al. | May 2005 | B2 |
6922430 | Biswas et al. | Jul 2005 | B2 |
6943875 | DeFelic et al. | Sep 2005 | B2 |
6947459 | Kurtz et al. | Sep 2005 | B2 |
6959025 | Jikutani et al. | Oct 2005 | B2 |
6974373 | Kriesel | Dec 2005 | B2 |
7016381 | Husain et al. | Mar 2006 | B2 |
7087886 | Almi et al. | Aug 2006 | B2 |
7126974 | Dong et al. | Oct 2006 | B1 |
7232240 | Kosnik et al. | Jun 2007 | B2 |
7257141 | Chua | Aug 2007 | B2 |
7262758 | Kahen et al. | Aug 2007 | B2 |
7315560 | Lewis et al. | Jan 2008 | B2 |
7357513 | Watson et al. | Apr 2008 | B2 |
7359420 | Shchegrov et al. | Apr 2008 | B2 |
7385229 | Venugopalan | Jun 2008 | B2 |
7386025 | Omori et al. | Jun 2008 | B2 |
7388893 | Watanabe et al. | Jun 2008 | B2 |
7430231 | Luo et al. | Sep 2008 | B2 |
7471854 | Cho et al. | Dec 2008 | B2 |
7568802 | Phinney et al. | Aug 2009 | B2 |
7613215 | Kim | Nov 2009 | B2 |
7680168 | Uchida | Mar 2010 | B2 |
7688525 | Hines et al. | Mar 2010 | B2 |
7742640 | Carlson et al. | Jun 2010 | B1 |
7751716 | Killinger | Jul 2010 | B2 |
7787767 | Wang | Aug 2010 | B2 |
7796081 | Breed | Sep 2010 | B2 |
7834302 | Ripingill et al. | Nov 2010 | B2 |
7911412 | Benner | Mar 2011 | B2 |
7925059 | Hoyos et al. | Apr 2011 | B2 |
7949024 | Joseph | May 2011 | B2 |
7970279 | Dress | Jun 2011 | B2 |
8396370 | Mu | Mar 2013 | B2 |
8995485 | Joseph | Mar 2015 | B2 |
8995493 | Joseph | Mar 2015 | B2 |
20010040714 | Sundaram et al. | Nov 2001 | A1 |
20010043381 | Green et al. | Nov 2001 | A1 |
20020034014 | Gretton et al. | Mar 2002 | A1 |
20020041562 | Redmond et al. | Apr 2002 | A1 |
20020129723 | Beier | Sep 2002 | A1 |
20020141902 | Ozasa et al. | Oct 2002 | A1 |
20030035451 | Ishida et al. | Feb 2003 | A1 |
20030091084 | Sun et al. | May 2003 | A1 |
20030095800 | Finizio | May 2003 | A1 |
20030215194 | Kuhmann et al. | Nov 2003 | A1 |
20040120717 | Clark et al. | Jun 2004 | A1 |
20040207926 | Buckman et al. | Oct 2004 | A1 |
20040208596 | Bringans et al. | Oct 2004 | A1 |
20050019973 | Chua | Jan 2005 | A1 |
20050025210 | Aoyagi et al. | Feb 2005 | A1 |
20050025211 | Zhang | Feb 2005 | A1 |
20050122720 | Shimonaka et al. | Jun 2005 | A1 |
20050147135 | Kurtz et al. | Jul 2005 | A1 |
20060109883 | Lewis et al. | May 2006 | A1 |
20060268241 | Watson et al. | Nov 2006 | A1 |
20060274918 | Amantea et al. | Dec 2006 | A1 |
20060280219 | Shchegrov et al. | Dec 2006 | A1 |
20070052660 | Montbach et al. | Mar 2007 | A1 |
20070099395 | Sridhar et al. | May 2007 | A1 |
20070153862 | Shchegrov et al. | Jul 2007 | A1 |
20070153866 | Shchegrov et al. | Jul 2007 | A1 |
20070242958 | Ieda | Oct 2007 | A1 |
20070273957 | Zalevsky et al. | Nov 2007 | A1 |
20080008471 | Dress | Jan 2008 | A1 |
20080205462 | Uchida | Aug 2008 | A1 |
20080273830 | Chen et al. | Nov 2008 | A1 |
20080317406 | Santori et al. | Dec 2008 | A1 |
20090027778 | Wu et al. | Jan 2009 | A1 |
20090141242 | Silverstein et al. | Jun 2009 | A1 |
20090245312 | Kageyama et al. | Oct 2009 | A1 |
20090278960 | Silverbrook | Nov 2009 | A1 |
20090284713 | Silverstein et al. | Nov 2009 | A1 |
20100066889 | Ueda | Mar 2010 | A1 |
20100129083 | Mu | May 2010 | A1 |
20100129946 | Uchida | May 2010 | A1 |
20100265975 | Baier et al. | Oct 2010 | A1 |
20110002355 | Jansen | Jan 2011 | A1 |
20110148328 | Joseph | Jun 2011 | A1 |
20110164880 | Davidson et al. | Jul 2011 | A1 |
20110176567 | Joseph | Jul 2011 | A1 |
20120120976 | Budd et al. | May 2012 | A1 |
20120169669 | Lee | Jul 2012 | A1 |
20120232536 | Liu et al. | Sep 2012 | A1 |
20120281293 | Gronenborn | Nov 2012 | A1 |
20130076960 | Bublitz et al. | Mar 2013 | A1 |
20130278151 | Lear | Oct 2013 | A1 |
Number | Date | Country |
---|---|---|
101375112 | Feb 2009 | CN |
102008030844 | Dec 2009 | DE |
1317038 | Jun 2003 | EP |
1411653 | Apr 2004 | EP |
1024399 | Dec 2005 | EP |
1871021 | Dec 2007 | EP |
04-062801 | Feb 1992 | JP |
05-092618 | Apr 1993 | JP |
H05-308327 | Nov 1993 | JP |
06-020051 | Mar 1994 | JP |
07-506220 | Jul 1995 | JP |
H08-213954 | Aug 1996 | JP |
H08-237204 | Sep 1996 | JP |
H09-139963 | May 1997 | JP |
H11-017615 | Jan 1999 | JP |
2001-246776 | Sep 2001 | JP |
2002-002015 | Jan 2002 | JP |
2005-102171 | Apr 2005 | JP |
2006-032885 | Feb 2006 | JP |
2006-067542 | Mar 2006 | JP |
2006-109268 | Apr 2006 | JP |
2006-310913 | Nov 2006 | JP |
2008-118542 | May 2008 | JP |
2008-277615 | Nov 2008 | JP |
2010-522493 | Jul 2010 | JP |
2010-531111 | Sep 2010 | JP |
2012-089564 | May 2012 | JP |
WO 1993021673 | Oct 1993 | WO |
WO 2000016503 | Mar 2000 | WO |
WO 2002010804 | Feb 2002 | WO |
WO 2003003424 | Jan 2003 | WO |
WO 2005057267 | Jun 2005 | WO |
WO 2006082893 | Jun 2008 | WO |
WO 2008115034 | Sep 2008 | WO |
WO 2011021140 | Feb 2011 | WO |
WO 2012106678 | Aug 2012 | WO |
Entry |
---|
Warren et al., Integration of Diffractive Lenses with Addressable Vertical-Cavity Laser Arrays, Sandia National Laboratories, Albuquerque, NM 87185, Apr. 1, 1995, Conf-950226-38, Sand 95-0360C, 11 pages. |
Gadallah, “Investigations into Matrix-Addressable VCSEL Arrays”, Annual Report 2008, Institute of Optoelectronics, Ulm University, 6 pages. |
International Patent Application No. PCT/US2013/42767; Int'l Preliminary Report on Patentability; dated May 6, 2015; 33 pages. |
Overton; High-Power VCSELs Rule IR Illumination; Laser Focus World; Aug. 2013; 2 pages. |
European Patent Application No. 13882974.2; Extended Search Report; dated Oct. 5, 2016; 9 pages. |
European Patent Application No. 18183404.5; Extended Search Report; dated Aug. 16, 2018; 7 pages. |
Number | Date | Country | |
---|---|---|---|
20160164261 A1 | Jun 2016 | US |
Number | Date | Country | |
---|---|---|---|
61153190 | Feb 2009 | US | |
62030481 | Jul 2014 | US | |
61671036 | Jul 2012 | US | |
61636570 | Apr 2012 | US | |
61528119 | Aug 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13594714 | Aug 2012 | US |
Child | 14946730 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12707657 | Feb 2010 | US |
Child | 13077769 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14946730 | Nov 2015 | US |
Child | 15040975 | US | |
Parent | 14813011 | Jul 2015 | US |
Child | 14946730 | US | |
Parent | 13902555 | May 2013 | US |
Child | 14946730 | US | |
Parent | 13868034 | Apr 2013 | US |
Child | 13902555 | US | |
Parent | 13077769 | Mar 2011 | US |
Child | 14813011 | US |