1. Field of the Invention
This invention relates to a multi-aperture imaging system that uses multiple apertures of different f-numbers to estimate depth of an object.
2. Description of Related Art
A dual-aperture camera has two apertures. A narrow aperture, typically at one spectral range such as infrared (IR), produces relatively sharp images over a long depth of focus. A wider aperture, typically at another spectral range such as RGB, produces images in which out-of-focus objects may be blurred. The pairs of images captured using the two different apertures can be processed to generate distance information for an object, for example as described in U.S. patent application Ser. No. 13/579,568, which is incorporated herein by reference. However, conventional processing methods can be computationally expensive.
Therefore, there is a need to improve approaches for depth map generation.
The present disclosure overcomes the limitations of the prior art by using partial blurring over less than 360 degrees, for example single-sided blur kernels, rather than using full blurring in all directions. For example, a first image may contain an edge and a second image may contain the same edge as the first image. The two images may be captured by imaging systems with blur characteristics that vary differently as a function of object depth. For example, a dual-aperture system may simultaneously capture a faster f-number visible image and a slower f-number infrared image. Depth information may be generated by comparing blurring of one side of the same edge in the two images.
In one aspect, single-sided blur kernels are used to accommodate edges, for example edges caused by occlusions where the two sides of the edge are at different depths. In such a case, one side of the edge is at a closer object distance and the other side is at a further object distance. The two different object distances correspond to different blur kernels. Applying a full blur kernel to both sides of the edge will not give good results, because when one side is matched to the blur kernel, the other side will not be. Single-sided blur kernels can be used instead. One single-sided blur kernel may be used for one side of the edge, and a different single-sided blur kernel may be used for the other side of the edge.
In other aspects, single-sided blur kernels and edges may be processed to reduce the required computation. For example, blurring may be estimated by convolution. This may be simplified by binarizing the edges, reducing the window and/or blur kernel sizes, and/or using only mathematical summing operations. Binarized edges can be used to reduce computationally expensive convolutions into simpler summing operations. Edges may first be normalized to phase match and/or equate energies between the two images.
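By way of illustration only, the following sketch shows how a binarized edge window can reduce a blur-kernel comparison from multiply-and-add convolution to pure summing. The binarization threshold, function names and data layout are assumptions made for this sketch rather than the specific implementation of the disclosed embodiments.

```python
import numpy as np

def binarize_edge(window, threshold=0.5):
    """Binarize an edge window: normalize to [0, 1], then map pixels at or
    above the threshold to 1 and the rest to 0."""
    w = window.astype(np.float64)
    rng = w.max() - w.min()
    if rng == 0:
        return np.zeros_like(w, dtype=np.uint8)
    w = (w - w.min()) / rng
    return (w >= threshold).astype(np.uint8)

def correlate_binary(binary_window, kernel):
    """Correlate a binarized window with a blur kernel using only additions:
    at each position, sum the kernel taps that fall on 1-pixels (no multiplies)."""
    kh, kw = kernel.shape
    h, w = binary_window.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = binary_window[i:i + kh, j:j + kw]
            out[i, j] = kernel[patch == 1].sum()
    return out
```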
Other aspects include components, devices, systems, improvements, methods, processes, applications, computer readable mediums, and other technologies related to any of the above.
Embodiments of the disclosure have other advantages and features which will be more readily apparent from the following detailed description and the appended claims, when taken in conjunction with the accompanying drawings, in which:
The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
The multi-aperture system 120 includes at least two apertures, shown in
The sensor 130 detects both the visible image corresponding to aperture 122 and the infrared image corresponding to aperture 124. In effect, there are two imaging systems that share a single sensor array 130: a visible imaging system using optics 110, aperture 122 and sensor 130; and an infrared imaging system using optics 110, aperture 124 and sensor 130. The imaging optics 110 in this example is fully shared by the two imaging systems, but this is not required. In addition, the two imaging systems do not have to be visible and infrared. They could be other spectral combinations: red and green, or infrared and white (i.e., visible but without color), for example.
The exposure of the image sensor 130 to electromagnetic radiation is typically controlled by a shutter 170 and the apertures of the multi-aperture system 120. When the shutter 170 is opened, the aperture system controls the amount of light and the degree of collimation of the light exposing the image sensor 130. The shutter 170 may be a mechanical shutter or, alternatively, the shutter may be an electronic shutter integrated in the image sensor. The image sensor 130 typically includes rows and columns of photosensitive sites (pixels) forming a two-dimensional pixel array. The image sensor may be a CMOS (complementary metal oxide semiconductor) active pixel sensor or a CCD (charge coupled device) image sensor. Alternatively, the image sensor may be based on other image sensor structures, e.g. Si (such as a-Si), III-V (such as GaAs) or conductive polymer based structures.
When the light is projected by the imaging optics 110 onto the image sensor 130, each pixel produces an electrical signal, which is indicative of the electromagnetic radiation (energy) incident on that pixel. In order to obtain color information and to separate the color components of an image which is projected onto the imaging plane of the image sensor, typically a color filter array 132 is interposed between the imaging optics 110 and the image sensor 130. The color filter array 132 may be integrated with the image sensor 130 such that each pixel of the image sensor has a corresponding pixel filter. Each color filter is adapted to pass light of a predetermined color band onto the pixel. Usually a combination of red, green and blue (RGB) filters is used. However other filter schemes are also possible, e.g. CYGM (cyan, yellow, green, magenta), RGBE (red, green, blue, emerald), etc. Alternately, the image sensor may have a stacked design where red, green and blue sensor elements are stacked on top of each other rather than relying on individual pixel filters.
Each pixel of the exposed image sensor 130 produces an electrical signal proportional to the electromagnetic radiation passed through the color filter 132 associated with the pixel. The array of pixels thus generates image data (a frame) representing the spatial distribution of the electromagnetic energy (radiation) passed through the color filter array 132. The signals received from the pixels may be amplified using one or more on-chip amplifiers. In one embodiment, each color channel of the image sensor may be amplified using a separate amplifier, thereby allowing the ISO speed to be controlled separately for different colors.
Further, pixel signals may be sampled, quantized and transformed into words of a digital format using one or more analog to digital (A/D) converters 140, which may be integrated on the chip of the image sensor 130. The digitized image data are processed by a processor 180, such as a digital signal processor (DSP) coupled to the image sensor, which is configured to perform well known signal processing functions such as interpolation, filtering, white balance, brightness correction, and/or data compression techniques (e.g. MPEG or JPEG type techniques).
The processor 180 may include signal processing functions 184 for obtaining depth information associated with an image captured by the multi-aperture imaging system. These signal processing functions may provide a multi-aperture imaging system with extended imaging functionality including variable depth of focus, focus control and stereoscopic 3D image viewing capabilities. The details and the advantages associated with these signal processing functions will be discussed hereunder in more detail.
The processor 180 may also be coupled to additional compute resources, such as additional processors, storage memory for storing captured images and program memory for storing software programs. A controller 190 may also be used to control and coordinate operation of the components in imaging system 100. Functions described as performed by the processor 180 may instead be allocated among the processor 180, the controller 190 and additional compute resources.
As described above, the sensitivity of the imaging system 100 is extended by using infrared imaging functionality. To that end, the imaging optics 110 may be configured to allow both visible light and infrared light or at least part of the infrared spectrum to enter the imaging system. Filters located at the entrance aperture of the imaging optics 110 are configured to allow at least part of the infrared spectrum to enter the imaging system. In particular, imaging system 100 typically would not use infrared blocking filters, usually referred to as hot-mirror filters, which are used in conventional color imaging cameras for blocking infrared light from entering the camera. Hence, the light entering the multi-aperture imaging system may include both visible light and infrared light, thereby allowing extension of the photo-response of the image sensor to the infrared spectrum. In cases where the multi-aperture imaging system is based on spectral combinations other than visible and infrared, corresponding wavelength filters would be used.
In order to take advantage of the spectral sensitivity provided by the image sensor as illustrated by
An infrared pixel may be realized by covering a pixel with a filter material that substantially blocks visible light and substantially transmits infrared light, preferably infrared light within the range of approximately 700 to 1100 nm. The infrared-transmissive pixel filter may be provided in an infrared/color filter array (ICFA) and may be realized using well known filter materials having a high transmittance for wavelengths in the infrared band of the spectrum, for example a black polyimide material sold by Brewer Science under the trademark “DARC 400”.
Such filters are described in more detail in US2009/0159799, “Color infrared light sensor, camera and method for capturing images,” which is incorporated herein by reference. In one design, an ICFA contains blocks of pixels, e.g. a block of 2×2 pixels, where each block comprises a red, a green, a blue and an infrared pixel. When exposed, such an ICFA image sensor produces a raw mosaic image that includes both RGB color information and infrared information. After processing the raw mosaic image, an RGB color image and an infrared image may be obtained. The sensitivity of such an ICFA image sensor to infrared light may be increased by increasing the number of infrared pixels in a block. In one configuration (not shown), the image sensor filter array uses blocks of sixteen pixels, with four color pixels (RGGB) and twelve infrared pixels.
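For illustration, a minimal sketch of separating such a raw ICFA mosaic into color and infrared planes is shown below. The 2×2 block layout assumed here (R, G in the first row; B, IR in the second) is hypothetical; the actual filter arrangement is sensor-specific.

```python
import numpy as np

def split_icfa_mosaic(raw):
    """Split a raw ICFA mosaic into quarter-resolution R, G, B and IR planes.
    Assumes a hypothetical 2x2 block layout (R G / B IR)."""
    r  = raw[0::2, 0::2]
    g  = raw[0::2, 1::2]
    b  = raw[1::2, 0::2]
    ir = raw[1::2, 1::2]
    return r, g, b, ir
```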
Instead of an ICFA image sensor (where color pixels are implemented by using color filters for individual sensor pixels), in a different approach, the image sensor 130 may use an architecture where each photo-site includes a number of stacked photodiodes. Preferably, the stack contains four stacked photodiodes responsive to the primary colors RGB and infrared, respectively. These stacked photodiodes may be integrated into the silicon substrate of the image sensor.
The multi-aperture system, e.g. a multi-aperture diaphragm, may be used to improve the depth of field (DOF) or other depth aspects of the camera. The DOF determines the range of distances from the camera that are in focus when the image is captured. Within this range the object is acceptably sharp. For moderate to large distances and a given image format, DOF is determined by the focal length of the imaging optics N, the f-number associated with the lens opening (the aperture), and/or the object-to-camera distance s. The wider the aperture (the more light received) the more limited the DOF. DOF aspects of a multi-aperture imaging system are illustrated in
Consider first
The pixels of the image sensor may thus receive a wider-aperture optical image signal 352B for visible light, overlaying a second narrower-aperture optical image signal 354B for infrared light. The wider-aperture visible image signal 352B will have a shorter DOF, while the narrower-aperture infrared image signal 354B will have a longer DOF. In
Objects 150 close to the plane of focus N of the lens are projected onto the image sensor plane 330 with relatively small defocus blur. Objects away from the plane of focus N are projected onto image planes that are in front of or behind the image sensor 330. Thus, the image captured by the image sensor 330 is blurred. Because the visible light 352B has a faster f-number than the infrared light 354B, the visible image will blur more quickly than the infrared image as the object 150 moves away from the plane of focus N. This is shown by
Most of
The DSP 180 may be configured to process and combine the captured color and infrared images. Improvements in the DOF and the ISO speed provided by a multi-aperture imaging system are described in more detail in U.S. application Ser. No. 13/144,499, “Improving the depth of field in an imaging system”; U.S. application Ser. No. 13/392,101, “Reducing noise in a color image”; U.S. application Ser. No. 13/579,568, “Processing multi-aperture image data”; U.S. application Ser. No. 13/579,569, “Processing multi-aperture image data”; and U.S. application Ser. No. 13/810,227, “Flash system for multi-aperture imaging.” All of the foregoing are incorporated by reference herein in their entirety.
In one example, the multi-aperture imaging system allows a simple mobile phone camera with a typical f-number of 2 (e.g. a focal length of 3 mm and a diameter of 1.5 mm) to improve its DOF via a second aperture with an f-number varying e.g. from 6 for a diameter of 0.5 mm up to 15 or more for diameters equal to or less than 0.2 mm. The f-number is defined as the ratio of the focal length f to the effective diameter of the aperture. Preferable implementations include optical systems with an f-number for the visible aperture of approximately 2 to 4 for increasing the sharpness of near objects, in combination with an f-number for the infrared aperture of approximately 16 to 22 for increasing the sharpness of distant objects.
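As a simple numerical illustration of the f-number relationship N = f/D using the example values above (a sketch only, not part of the claimed system):

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """f-number N = focal length / effective aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

print(f_number(3.0, 1.5))   # 2.0  (visible aperture in the example above)
print(f_number(3.0, 0.5))   # 6.0  (infrared aperture, 0.5 mm diameter)
print(f_number(3.0, 0.2))   # 15.0 (infrared aperture, 0.2 mm diameter)
```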
The multi-aperture imaging system may also be used for generating depth information for the captured image. The DSP 180 of the multi-aperture imaging system may include at least one depth function, which typically depends on the parameters of the optical system and which in one embodiment may be determined in advance by the manufacturer and stored in the memory of the camera for use in digital image processing functions.
If the multi-aperture imaging system is adjustable (e.g., a zoom lens), then the depth function typically will also include the dependence on the adjustment. For example, a fixed lens camera may implement the depth function as a lookup table, while a zoom lens camera may have multiple lookup tables corresponding to different focal lengths, possibly interpolating between the lookup tables for intermediate focal lengths. Alternately, it may store a single lookup table for a specific focal length but use an algorithm to scale the lookup table for different focal lengths. A similar approach may be used for other types of adjustments, such as an adjustable aperture. In various embodiments, when determining the distance or change of distance of an object from the camera, a lookup table or a formula provides an estimate of the distance based on one or more of the following parameters: the blur kernel providing the best match between IR and RGB image data; the f-number or aperture size for the IR imaging; the f-number or aperture size for the RGB imaging; and the focal length. In some imaging systems, the physical aperture is constrained in size, so that as the focal length of the lens changes, the diameter of the aperture remains unchanged and the f-number changes accordingly. The formula or lookup table could also take this effect into account.
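A minimal sketch of one way such a lookup table might be used is shown below; interpolation between tables for two stored focal lengths is assumed, and the numerical values are placeholders rather than measured calibration data.

```python
import numpy as np

# Hypothetical calibration data: for each stored focal length (mm), a table
# mapping best-matching blur kernel index -> estimated object distance (m).
# The values below are placeholders, not measured data.
DEPTH_TABLES = {
    3.0: np.array([30.0, 10.0, 5.0, 3.0, 2.0, 1.5, 1.2, 1.0, 0.8]),
    4.5: np.array([40.0, 14.0, 7.0, 4.2, 2.8, 2.1, 1.7, 1.4, 1.1]),
}

def estimate_distance(kernel_index, focal_length):
    """Look up the object distance for the best-matching blur kernel, linearly
    interpolating between the tables of the two nearest stored focal lengths."""
    focals = sorted(DEPTH_TABLES)
    if focal_length <= focals[0]:
        return DEPTH_TABLES[focals[0]][kernel_index]
    if focal_length >= focals[-1]:
        return DEPTH_TABLES[focals[-1]][kernel_index]
    lo = max(f for f in focals if f <= focal_length)
    hi = min(f for f in focals if f >= focal_length)
    if lo == hi:
        return DEPTH_TABLES[lo][kernel_index]
    t = (focal_length - lo) / (hi - lo)
    return (1 - t) * DEPTH_TABLES[lo][kernel_index] + t * DEPTH_TABLES[hi][kernel_index]
```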
In certain situations, it is desirable to control the relative size of the IR aperture and the RGB aperture. This may be desirable for various reasons. For example, adjusting the relative size of the two apertures may be used to compensate for different lighting conditions. In some cases, it may be desirable to turn off the multi-aperture aspect. As another example, different ratios may be preferable for different object depths, or focal lengths or accuracy requirements. Having the ability to adjust the ratio of IR to RGB provides an additional degree of freedom in these situations.
As described above in
Here, the sharpness parameter may relate to the circle of confusion, which corresponds to the blur spot diameter measured by the image sensor. As described above in
Hence, in a multi-aperture imaging system, the increase or decrease in sharpness of the RGB components of a color image relative to the sharpness of the IR components in the infrared image is a function of the distance to the object. For example, if the lens is focused at 3 meters, the sharpness of the RGB components and the IR components may be the same. In contrast, for objects at a distance of 1 meter, the sharpness of the RGB components may be significantly less than that of the infrared components, due to the smaller aperture used for the infrared image. This dependence may be used to estimate the distances of objects from the camera.
In one approach, the imaging system is set to a large (“infinite”) focus point. That is, the imaging system is designed so that objects at infinity are in focus. This point is referred to as the hyperfocal distance H of the multi-aperture imaging system. The system may then determine the points in an image where the color and the infrared components are equally sharp. These points in the image correspond to objects that are in focus, which in this example means that they are located at a relatively large distance (typically the background) from the camera. For objects located away from the hyperfocal distance H (i.e., closer to the camera), the relative difference in sharpness between the infrared components and the color components will change as a function of the distance s between the object and the lens.
The sharpness may be obtained empirically by measuring the sharpness (or, equivalently, the blurriness) for one or more test objects at different distances s from the camera lens. It may also be calculated based on models of the imaging system. In one embodiment, sharpness is measured by the absolute value of the high-frequency infrared components in an image. In another approach, blurriness is measured by the blur size or point spread function (PSF) of the imaging system.
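For illustration, one possible sharpness measure of the kind described above (the sum of absolute high-frequency content) might be computed as follows; the choice of a Laplacian high-pass filter is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import laplace

def high_frequency_sharpness(window):
    """One possible sharpness measure: total absolute high-frequency content,
    obtained here with a Laplacian high-pass filter (illustrative choice)."""
    return np.abs(laplace(window.astype(np.float64))).sum()
```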
Now consider the object distance sx. At this object distance, the infrared image is produced with a blur spot 410 and the visible image is produced with a blur spot 420. Conversely, if the blur spot sizes were known, or the ratio of the blur spot sizes were known, this information could be used to estimate the object distance sx. Recall that the blur spot, also referred to as the point spread function, is the image produced by a single point source. If the object were a single point source, then the infrared image would be a blur spot of size 410 and the corresponding visible image would be a blur spot of size 420.
More generally, the captured infrared and visible images can be modeled as the ideal (unblurred) image convolved with the respective point spread functions:
I_ir = I_ideal * PSF_ir   (1)
I_vis = I_ideal * PSF_vis   (2)
where * is the convolution operator. Manipulating these two equations yields
I_vis = I_ir * B   (3)
where B is a blur kernel that, in effect, removes the infrared blur (PSF_ir) and applies the visible blur (PSF_vis). The blur kernels B can be calculated in advance or empirically measured as a function of object depth s, producing a table as shown in
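A minimal sketch of matching a bank of precomputed blur kernels B_k against a pair of image windows, in the spirit of Eq. (3), is shown below. The error metric (sum of absolute differences) and the function names are illustrative assumptions; the best-matching kernel index k would then be mapped to an object distance via the calibration table.

```python
import numpy as np
from scipy.signal import fftconvolve

def best_kernel_index(ir_window, vis_window, kernel_bank):
    """Evaluate e(k) = || I_vis - I_ir * B_k || over a window for every blur
    kernel B_k in the bank; the index of the minimum error identifies the
    best-matching kernel (and hence the estimated object distance)."""
    errors = []
    for B in kernel_bank:
        blurred_ir = fftconvolve(ir_window, B, mode="same")
        errors.append(np.abs(vis_window - blurred_ir).sum())  # SAD error
    k = int(np.argmin(errors))
    return k, errors
```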
In
The infrared image I_ir and visible image I_vis in
The approach of
One advantage of this approach is that down-sampled blur kernels are smaller and therefore require less computation for convolution and other operations. The table below shows a set of 9 blur kernels, ranging in size from 3×3 for blur kernel 1, to 25×25 for blur kernel 9. In the approach of
In
In
These figures effectively illustrate different sampling approaches to find the extremum of the error function e(k). As another variation, the error function e(k) may be coarsely sampled at first in order to narrow the range of k where the minimum error e exists. Finer and finer sampling may be used as the range is narrowed. Other sampling approaches can be used to find the value of kernel number k (and the corresponding object distance) where the extremum of the error function e(k) occurs. Down-sampling can also be implemented in other ways. For example, the visible images may be down-sampled first. The blur kernels are then down-sampled to match the down-sampling of the visible images. The down-sampled blur kernels are applied to the full resolution IR images. The result is an intermediate form which retains the full resolution of the IR image but is then down-sampled to match the resolution of the down-sampled visible images. This method is not as efficient as fully down-sampling the IR image, but it reduces computation relative to using no down-sampling at all while maintaining a finer resolution.
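A minimal sketch of the coarse-then-fine sampling of the error function e(k) described above might look like the following; the stride of 4 and the memoization of e(k) are illustrative choices.

```python
from functools import lru_cache

def coarse_to_fine_search(error_fn, num_kernels, stride=4):
    """Sample the error function e(k) coarsely (every `stride`-th kernel index),
    then refine around the coarse minimum. error_fn(k) returns the matching
    error for blur kernel k and is cached so no value is computed twice."""
    cached = lru_cache(maxsize=None)(error_fn)
    coarse_ks = range(0, num_kernels, stride)
    k0 = min(coarse_ks, key=cached)                 # coarse minimum
    lo = max(0, k0 - stride + 1)
    hi = min(num_kernels - 1, k0 + stride - 1)
    return min(range(lo, hi + 1), key=cached)       # refined minimum
```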
Another aspect is that the approach of
In one approach, the windows are selected to include edges. Edge identification can be accomplished using known algorithms. Once identified, edges preferably are processed to normalize variations between the different captured images.
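For illustration, edge identification and normalization of edge windows might be sketched as follows; the Sobel gradient detector and the zero-mean, unit-energy normalization are assumptions standing in for whichever known algorithms are actually used.

```python
import numpy as np
from scipy import ndimage

def edge_strength(image):
    """Simple gradient-magnitude edge map (Sobel); any standard edge detector
    could be substituted."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

def normalize_edge_window(window):
    """Normalize an edge window so that windows from the two images can be
    compared: remove the local mean (rough offset/phase match) and scale to
    unit energy. The exact normalization used in practice may differ."""
    w = window.astype(np.float64)
    w -= w.mean()
    energy = np.sqrt((w ** 2).sum())
    return w / energy if energy > 0 else w
```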
The second row of
Note that the IR edge looks like a line source. This is not uncommon since the IR point spread function is small and fairly constant over a range of depths, compared to the color point spread function. Also recall that in
Edges in an image may be caused by a sharp transition within an object, for example the border between black and white squares on a checkerboard. In that case, the approach shown in
Single-sided blur kernels can be used instead. A single-sided blur kernel is half a blur kernel instead of an entire blur kernel.
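By way of illustration, one way to form a pair of single-sided blur kernels from a full blur kernel is sketched below; the choice of a vertical split (appropriate for a vertical edge) is an assumption of this sketch.

```python
import numpy as np

def single_sided_kernels(full_kernel):
    """Split a full blur kernel into two single-sided halves, one for each side
    of a (here, vertical) edge: zero out one side of the kernel and renormalize
    each half so it still sums to 1. How the split orientation is chosen (e.g.
    from the local edge direction) is an assumption of this sketch."""
    k = np.asarray(full_kernel, dtype=np.float64)
    mid = k.shape[1] // 2
    left = k.copy()
    left[:, mid + 1:] = 0.0          # keep the left half plus the center column
    right = k.copy()
    right[:, :mid] = 0.0             # keep the right half plus the center column
    return left / left.sum(), right / right.sum()
```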
However, note that the blur kernels 1210A-D differ only within the frequency range 1220. Outside this frequency range 1220, all of the blur kernels 1210A-D in the bank have the same behavior, so content outside the frequency range 1220 will not distinguish between the different blur kernels 1210A-D; it will only add to the background noise. Therefore, in one approach, frequency filtering is applied to reduce energy and noise from outside the frequency range 1220. In one approach, the original images are frequency filtered. In another approach, the blur kernels themselves may be frequency-filtered versions. The frequency filtering may be low pass filtering to reduce frequency content above frequency 1220B, high pass filtering to reduce frequency content below frequency 1220A, or bandpass filtering to reduce both the low frequency and high frequency content. The filtering may take different forms and may be performed regardless of whether down-sampling is also used. When it is used, down-sampling is itself a type of low pass filtering.
The filtering may also be applied to less than or more than all the blur kernels in a bank. For example, a narrower bandpass filter may be used if it is desired to distinguish only blur kernels 1210A and 1210B (i.e., to determine the error gradient between blur kernels 1210A-1210B). Most of the difference between those two blur kernels occurs in the frequency band 1230, so a bandpass filter that primarily passes frequencies within that range and rejects frequencies outside that range will increase the relative signal available for distinguishing the two blur kernels 1210A and 1210B.
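For illustration, a frequency-domain bandpass of a window to the band in which the blur kernels of interest differ might be sketched as follows; the radial-frequency mask and the band edges passed by the caller are assumptions of this sketch.

```python
import numpy as np

def bandpass_window(window, low_frac, high_frac):
    """Bandpass a window in the frequency domain, keeping only radial spatial
    frequencies between low_frac and high_frac of the half-band limit, i.e. the
    band in which the blur kernels being compared actually differ. The band
    edges must be chosen by the caller (e.g. from the kernel spectra)."""
    f = np.fft.fftshift(np.fft.fft2(window.astype(np.float64)))
    h, w = window.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0) / (min(h, w) / 2.0)  # 0..~1 radius
    mask = (r >= low_frac) & (r <= high_frac)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```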
Window sizes and locations preferably are selected based on the above considerations, and the window size may be selected independent of the blur kernel size. For example, window size may be selected to be large enough to contain features such as edges, small enough to avoid interfering features such as closely spaced parallel edges, and generally only large enough to allow processing of features since larger windows will add more noise. The size of the blur kernel may be selected to reduce computation (e.g., by down-sampling) and also possibly in order to provide sufficient resolution for the depth estimation. As a result, the window size may be different (typically, larger) than the size of the blur kernels.
The number of windows and window locations may also be selected to contain features such as edges, and to reduce computation. A judicious choice of windows can reduce power consumption by having fewer pixels to power up and to read out, which in turn can be used to increase the frame rate. A higher frame rate may be advantageous for many reasons, for example in enabling finer control of gesture tracking.
Embodiments of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Moreover, the invention is not limited to the embodiments described above, which may be varied within the scope of the accompanying claims. For example, aspects of this technology have been described with respect to different f-number images captured by a multi-aperture imaging system. However, these approaches are not limited to multi-aperture imaging systems. They can also be used in other systems that estimate depth based on differences in blurring, regardless of whether a multi-aperture imaging system is used to capture the images. For example, two images may be captured in time sequence but at different f-number settings. Another approach is to capture two or more images of the same scene with different focus settings, or to rely on differences in aberrations (e.g., chromatic aberrations) or other phenomena that cause the blurring of the two or more images to vary differently as a function of depth, so that these variations can be used to estimate depth.
This application is a continuation of U.S. patent application Ser. No. 14/832,062, “Multi-Aperture Depth Map Using Blur Kernels and Down-Sampling,” filed Aug. 21, 2015; which claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 62/121,203, “Dual-Aperture Depth Map Using Adaptive PSF Sizing,” filed Feb. 26, 2015. The subject matter of all of the foregoing is incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
2921509 | Freund | Jan 1960 | A |
3971065 | Bayer | Jul 1976 | A |
4651001 | Harada et al. | Mar 1987 | A |
4878113 | Nakamura | Oct 1989 | A |
4965840 | Subbarao | Oct 1990 | A |
4987295 | Kinnard et al. | Jan 1991 | A |
5148209 | Subbarao | Sep 1992 | A |
5231443 | Subbarao | Jul 1993 | A |
5280388 | Okayama et al. | Jan 1994 | A |
5631703 | Hamilton, Jr. et al. | May 1997 | A |
5998090 | Sabnis et al. | Dec 1999 | A |
6008484 | Woodgate et al. | Dec 1999 | A |
6034372 | LeVan | Mar 2000 | A |
6292212 | Zigadlo et al. | Sep 2001 | B1 |
6316284 | Perregaux et al. | Nov 2001 | B1 |
6393160 | Edgar | May 2002 | B1 |
6459450 | Bawolek et al. | Oct 2002 | B2 |
6590679 | Edgar et al. | Jul 2003 | B1 |
6632701 | Merrill | Oct 2003 | B2 |
6727521 | Merrill | Apr 2004 | B2 |
6768565 | Perregaux et al. | Jul 2004 | B1 |
6771314 | Bawolek et al. | Aug 2004 | B1 |
6783900 | Venkataraman | Aug 2004 | B2 |
6930336 | Merrill | Aug 2005 | B1 |
6998660 | Lyon et al. | Feb 2006 | B2 |
7053908 | Saquib et al. | May 2006 | B2 |
7053928 | Connors et al. | May 2006 | B1 |
7057654 | Roddy et al. | Jun 2006 | B2 |
7072508 | Dance | Jul 2006 | B2 |
7123298 | Schroeder et al. | Oct 2006 | B2 |
7132724 | Merrill | Nov 2006 | B1 |
7164444 | Merrill | Jan 2007 | B1 |
7176962 | Ejima | Feb 2007 | B2 |
7199348 | Olsen et al. | Apr 2007 | B2 |
7224540 | Olmstead | May 2007 | B2 |
7247851 | Okada et al. | Jul 2007 | B2 |
7248297 | Catrysse et al. | Jul 2007 | B2 |
7269295 | Keshet et al. | Sep 2007 | B2 |
7274383 | Brown Elliot | Sep 2007 | B1 |
7400458 | Farr | Jul 2008 | B2 |
7579577 | Ono | Aug 2009 | B2 |
7773317 | Duparre | Aug 2010 | B2 |
8018509 | Numata | Sep 2011 | B2 |
8406564 | Sun | Mar 2013 | B2 |
8467088 | Hosaka | Jun 2013 | B2 |
8917349 | Wajs | Dec 2014 | B2 |
9077916 | Wajs | Jul 2015 | B2 |
20010033326 | Goldstein et al. | Oct 2001 | A1 |
20020149691 | Pereira et al. | Oct 2002 | A1 |
20020171740 | Seo | Nov 2002 | A1 |
20020186976 | Seo | Dec 2002 | A1 |
20030052992 | Nakata | Mar 2003 | A1 |
20030112353 | Morris | Jun 2003 | A1 |
20040097021 | Augusto et al. | May 2004 | A1 |
20040169749 | Acharya | Sep 2004 | A1 |
20040174446 | Acharya | Sep 2004 | A1 |
20040246338 | Lieberman et al. | Dec 2004 | A1 |
20040252867 | Lan et al. | Dec 2004 | A1 |
20050046969 | Beatson et al. | Mar 2005 | A1 |
20050088550 | Mitsunaga et al. | Apr 2005 | A1 |
20050146634 | Silverstein | Jul 2005 | A1 |
20050157376 | Huibers et al. | Jul 2005 | A1 |
20060071156 | Masaki | Apr 2006 | A1 |
20060132642 | Hosaka et al. | Jun 2006 | A1 |
20060171704 | Bingle et al. | Aug 2006 | A1 |
20060182364 | John | Aug 2006 | A1 |
20060197006 | Kochi | Sep 2006 | A1 |
20060221216 | Hattori | Oct 2006 | A1 |
20060261280 | Oon et al. | Nov 2006 | A1 |
20060285741 | Subbarao | Dec 2006 | A1 |
20070019087 | Kuno et al. | Jan 2007 | A1 |
20070024614 | Tam et al. | Feb 2007 | A1 |
20070035852 | Farr | Feb 2007 | A1 |
20070102622 | Olsen et al. | May 2007 | A1 |
20070127908 | Oon et al. | Jun 2007 | A1 |
20070133983 | Traff | Jun 2007 | A1 |
20070145273 | Chang | Jun 2007 | A1 |
20070153099 | Ohki et al. | Jul 2007 | A1 |
20070183049 | Cieslinski | Aug 2007 | A1 |
20070183680 | Aguilar | Aug 2007 | A1 |
20070189750 | Wong et al. | Aug 2007 | A1 |
20070201738 | Toda et al. | Aug 2007 | A1 |
20080013943 | Rohaly et al. | Jan 2008 | A1 |
20080055400 | Schechterman et al. | Mar 2008 | A1 |
20080297648 | Furuki et al. | Dec 2008 | A1 |
20080308712 | Ono | Dec 2008 | A1 |
20080316345 | Onodera | Dec 2008 | A1 |
20090015979 | Fukano et al. | Jan 2009 | A1 |
20090102956 | Georgiev | Apr 2009 | A1 |
20090115780 | Varekamp et al. | May 2009 | A1 |
20090159799 | Copeland | Jun 2009 | A1 |
20090175535 | Mattox | Jul 2009 | A1 |
20090268895 | Emerson | Oct 2009 | A1 |
20090285476 | Choe et al. | Nov 2009 | A1 |
20100033604 | Solomon | Feb 2010 | A1 |
20100033617 | Forutanpour | Feb 2010 | A1 |
20100066854 | Mather et al. | Mar 2010 | A1 |
20100157019 | Schwotzer et al. | Jun 2010 | A1 |
20100225755 | Tamaki et al. | Sep 2010 | A1 |
20100238343 | Kawarada | Sep 2010 | A1 |
20100309350 | Adams et al. | Dec 2010 | A1 |
20110064327 | Dagher et al. | Mar 2011 | A1 |
20110069189 | Venkataraman et al. | Mar 2011 | A1 |
20110080487 | Venkataraman et al. | Apr 2011 | A1 |
20120008023 | Wajs | Jan 2012 | A1 |
20120140099 | Kim et al. | Jun 2012 | A1 |
20120154596 | Wajs | Jun 2012 | A1 |
20120155785 | Banner | Jun 2012 | A1 |
20120189293 | Cao et al. | Jul 2012 | A1 |
20120249833 | Li | Oct 2012 | A1 |
20120281072 | Georgiev et al. | Nov 2012 | A1 |
20130016220 | Brown | Jan 2013 | A1 |
20130033578 | Wajs | Feb 2013 | A1 |
20130033579 | Wajs | Feb 2013 | A1 |
20130063566 | Morgan-Mar | Mar 2013 | A1 |
20130107002 | Kikuchi | May 2013 | A1 |
20130107010 | Hoiem et al. | May 2013 | A1 |
20130176396 | Cohen et al. | Jul 2013 | A1 |
20130307933 | Znamenskiy et al. | Nov 2013 | A1 |
20140078376 | Shuster | Mar 2014 | A1 |
20140152886 | Morgan-Mar et al. | Jun 2014 | A1 |
20150261299 | Wajs | Sep 2015 | A1 |
20160255323 | Wajs | Sep 2016 | A1 |
20160267667 | Wajs | Sep 2016 | A1 |
20160269600 | Wajs | Sep 2016 | A1 |
20160269710 | Wajs | Sep 2016 | A1 |
Number | Date | Country |
---|---|---|
1610412 | Apr 2005 | CN |
0318188 | May 1989 | EP |
0858208 | Aug 1998 | EP |
2034971 | Jun 1980 | GB |
2463480 | Mar 2010 | GB |
5552277 | Apr 1980 | JP |
H08-140116 | May 1996 | JP |
2002-207159 | Jul 2002 | JP |
2007-174277 | Jul 2007 | JP |
2008-005213 | Jan 2008 | JP |
2008-252639 | Oct 2008 | JP |
2008-268868 | Nov 2008 | JP |
2009-232348 | Oct 2009 | JP |
WO 0172033 | Sep 2001 | WO |
WO 02059692 | Aug 2002 | WO |
WO 2007015982 | Feb 2007 | WO |
WO 2007104829 | Sep 2007 | WO |
WO 2008036769 | Mar 2008 | WO |
WO 2009097552 | Aug 2009 | WO |
WO 2010081556 | Jul 2010 | WO |
WO 2011023224 | Mar 2011 | WO |
WO 2011101035 | Aug 2011 | WO |
WO 2011101036 | Aug 2011 | WO |
WO 2014008939 | Jan 2014 | WO |
Entry |
---|
United States Office Action, U.S. Appl. No. 14/832,062, Jun. 3, 2016, 11 pages. |
United States Office Action, U.S. Appl. No. 14/832,062, Jan. 22, 2016, 12 pages. |
United States Office Action, U.S. Appl. No. 15/162,147, Nov. 21, 2016, 21 pages. |
United States Office Action, U.S. Appl. No. 15/163,435, Nov. 21, 2016, 21 pages. |
U.S. Office Action, U.S. Appl. No. 15/162,147, dated Mar. 28, 2017, 13 pages. |
Number | Date | Country | |
---|---|---|---|
20160269710 A1 | Sep 2016 | US |
Number | Date | Country | |
---|---|---|---|
62121203 | Feb 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14832062 | Aug 2015 | US |
Child | 15162154 | US |