Images produced by a wide field of view lens vary in quality depending on the field angle. This is a physical limitation of such lenses.
A WFOV, fish-eye or similar non-linear imaging system incorporates a lens assembly and a corresponding image sensor which is typically more elongated than a conventional image sensor. An indicative embodiment is provided in
An example expanded view of such a non-linear lens geometry is illustrated in
Taking a typical lens to sensor mapping of a rectangular grid will yield a pattern similar to
Radial distortion patterns are the easiest to manufacture, and most lenses used in consumer imaging will exhibit one of the radial distortion patterns illustrated in
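Radial distortion of this kind is commonly modeled with a polynomial in the squared radius. The following Python function is a purely illustrative sketch of such a model; the coefficients k1 and k2 are hypothetical values, not measurements of any lens described in this disclosure.

```python
def radial_distort(x, y, k1, k2):
    """Polynomial radial distortion model (illustrative only):
    r_d = r * (1 + k1*r^2 + k2*r^4), applied to normalized image
    coordinates centered on the optical axis. A negative k1 gives
    barrel distortion, typical of WFOV and fish-eye lenses."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

With barrel coefficients, points far from the optical center are displaced far more strongly than points near it, which is why peripheral regions of such images require geometric reconstruction.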
Global motion can affect and induce errors in such an imaging system. This is illustrated in
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
FIGS. 4(a)-4(i) illustrate various non-linear distortion patterns for a rectangular grid mapped onto an imaging sensor.
FIGS. 7(a) and 7(b) illustrate that motion vectors arising from global motion are more emphasized towards the center of a typical non-linear lens (RHS), whereas they are uniform across a conventional (linear) lens.
FIGS. 8(a) and 8(b) illustrate three different 4×3 ROIs within the FOV of the non-linear lens of (i) an exemplary fish-eye imaging system and (ii) an exemplary non-linear WFOV imaging system.
a) illustrates a wide horizontal scene mapped onto a full extent of an image sensor.
b) illustrates a wide horizontal scene not mapped onto a full extent of an image sensor, and instead a significant portion of the sensor is not used.
Within an image acquisition system comprising a non-linear, wide-angled lens and an imaging sensor, a method is provided to enhance a scene containing one or more off-center peripheral regions. An initial distorted image is acquired with a large field of view using a non-linear, wide-angled lens and imaging sensor. An off-center region of interest (hereinafter “ROI”) is determined and extracted within the image. Geometric correction is applied to reconstruct the off-center ROI into an approximately rectangular frame of reference as a first reconstructed ROI. A quality of reconstructed pixels is determined within the first reconstructed ROI. Object tracking is applied to one or more regions within the first reconstructed ROI, respectively adapted to one or more local reconstructed pixel qualities within the one or more regions. The method also includes determining if undetected objects below a predetermined size threshold are likely to exist in one or more reduced quality regions of the first reconstructed ROI. Responsive to the determining if undetected objects exist, one or more additional initially distorted images are acquired, and the method includes extracting and reconstructing matching additional ROIs to combine with reduced quality pixels of the first reconstructed ROI to provide one or more enhanced ROIs. A further action is performed based on a value of a parameter of an object within the one or more enhanced ROIs.
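The sequence of operations just described may be visualized as a small control loop. The Python sketch below is purely illustrative: each helper callable (acquire, extract_roi, correct, quality_map, needs_more_frames, combine) is a hypothetical stand-in for the corresponding subsystem described herein, not an implementation from this disclosure.

```python
def enhance_off_center_roi(acquire, extract_roi, correct, quality_map,
                           needs_more_frames, combine, max_extra=4):
    """Illustrative outline of the enhancement flow: acquire a distorted
    frame, extract and geometrically correct an off-center ROI, assess
    reconstructed-pixel quality, and, if undetected objects may lurk in
    reduced-quality regions, fetch additional frames to combine."""
    first = correct(extract_roi(acquire()))   # first reconstructed ROI
    quality = quality_map(first)              # per-pixel quality estimate
    rois = [first]
    extra = 0
    while needs_more_frames(first, quality) and extra < max_extra:
        # acquire matching ROIs from additional initially distorted images
        rois.append(correct(extract_roi(acquire())))
        extra += 1
    # combine (e.g., via super-resolution) only when extra ROIs were taken
    return combine(rois) if len(rois) > 1 else first
```

The combine step is where a super-resolution technique, as mentioned below, would fuse the matching ROIs into an enhanced output.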
The method may include using a super-resolution technique to generate at least one of the one or more enhanced ROIs.
The method may include applying additional object tracking to the enhanced ROI to confirm a location of an object below the predetermined size threshold.
The method may include compensating for global motion of the imaging device.
The parameter of the object may be or may include location.
Prior to the applying geometric correction, an initial object detection process may be applied to the off-center ROI. The applying geometric correction may be performed in response to an initial determination that an object exists within the off-center ROI. The object tracking may be applied to the reconstructed ROI to refine or confirm, or both, the initial object detection process.
A digital image acquisition device is also provided including a non-linear, wide-angled lens and an imaging sensor configured to capture digital images of scenes containing one or more off-center peripheral regions, including an initial distorted image with a large field of view, a processor, and a memory having code embedded therein for programming the processor to perform any of the methods described herein.
One or more non-transitory, processor-readable storage media is/are also provided having code embedded therein for programming a processor to perform any of the methods described herein.
Moreover, within a digital image acquisition system comprising a non-linear, wide-angled lens and an imaging sensor, a method is provided for enhancing a scene containing one or more off-center peripheral regions, including acquiring an initial distorted image with a large field of view, including using a non-linear, wide-angled lens and imaging sensor. The method includes determining and extracting an off-center region of interest (hereinafter “ROI”) within said image. Geometric correction is applied to reconstruct the off-center ROI into a rectangular or otherwise undistorted or less distorted frame of reference as a reconstructed ROI. A quality of reconstructed pixels within said reconstructed ROI is determined. Image analysis is selectively applied to the reconstructed ROI based on the quality of the reconstructed pixels.
The method may include compensating for global motion of the image acquisition system.
The method may also include repeating the method for a second distorted image, and generating a second reconstructed ROI of approximately a same portion of an image scene as the first reconstructed ROI. Responsive to analysis of the first and second reconstructed ROIs, both the first and second reconstructed ROIs may be processed to generate an enhanced output image of substantially the same portion of the image scene.
The method may include, based on the selectively applying image analysis, adjusting an image acquisition parameter and repeating the method for a second distorted image, and generating, based on the second distorted image, a second reconstructed ROI of approximately a same portion of an image scene as the first reconstructed ROI. The first and second reconstructed ROIs of approximately the same portion of the image scene may be processed to generate, based on the processing, an enhanced output image of substantially the same portion of the image scene.
Responsive to the analysis and the pixel quality, image enhancement may be selectively applied to generate an enhanced output image.
A digital image acquisition device is also provided including a non-linear, wide-angled lens and an imaging sensor configured to capture digital images of scenes containing one or more off-center peripheral regions, including an initial distorted image with a large field of view, a processor, and a memory having code embedded therein for programming the processor to perform any of the methods described herein.
One or more non-transitory, processor-readable storage media is/are also provided having code embedded therein for programming a processor to perform any of the methods described herein.
In certain embodiments, the idea is to vary the type and amount of image correction depending on the location within the source image, as well as on the final projection of an image created by projecting the source image (in part or in whole) to a new coordinate system.
Now certain embodiments are configured to address a different problem, namely that of tracking faces in off-center portions of the imaged area based on a geometric correction engine and knowledge of one or more regions of interest (ROIs) within the overall field of view of the imaging system which contain or contains at least one face. An example of three different ROIs of similar 4×3 “real” dimensions is illustrated in
In certain embodiments, it may be an effect of the geometric remapping of the image scene, or portions thereof, that the removal of purple fringes (due to blue shift) or the correction of chromatic aberrations may be desired. US published patent application no. US2009/0189997 is incorporated by reference as disclosing embodiments to detect and correct purple fringing and chromatic aberrations in digital images.
Referring now to
Other factors may affect the quality of reconstruction. For example, regions with relatively homogeneous texture can be reconstructed from significantly less than 0.5 original pixels of data per reconstructed pixel, whereas regions with substantial fine detail may require greater than 1.0 original pixels of equivalent data.
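One plausible way to derive such a per-pixel quality figure is to estimate, for each reconstructed pixel, how much original sensor data backs it, using the local Jacobian of the destination-to-source mapping. The sketch below is an assumption-laden illustration: the mapping callable and the HQ/NQ/RQ cut-offs are hypothetical, though the thresholds echo the 0.5 and 1.0 pixel figures above.

```python
def source_density(mapping, x, y, eps=0.5):
    """Estimate how many original (source) pixels back one reconstructed
    pixel at destination (x, y), via a finite-difference Jacobian of a
    hypothetical calibrated dest->source mapping function."""
    sx1, sy1 = mapping(x + eps, y)
    sx0, sy0 = mapping(x - eps, y)
    tx1, ty1 = mapping(x, y + eps)
    tx0, ty0 = mapping(x, y - eps)
    dxdx, dydx = (sx1 - sx0) / (2 * eps), (sy1 - sy0) / (2 * eps)
    dxdy, dydy = (tx1 - tx0) / (2 * eps), (ty1 - ty0) / (2 * eps)
    return abs(dxdx * dydy - dydx * dxdy)  # |det J|: source area per dest pixel

def quality_label(density):
    # Illustrative cut-offs: < 0.5 original pixels -> reduced quality (RQ),
    # >= 1.0 -> high quality (HQ), otherwise normal quality (NQ).
    if density < 0.5:
        return "RQ"
    return "HQ" if density >= 1.0 else "NQ"
```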
In certain embodiments, a geometric reconstruction engine can provide information on the quality of areas of the image, or even at the level of individual pixels. In the example of
As a wide field of view (WFOV) optical system may be configured to image a horizontal field of 90-100 degrees or more, it may be desired to process the scene captured by the system to present an apparently "normal" perspective on the scene. There are several approaches to this, as exemplified by the example drawn from the architectural perspective of a long building described in Appendix A. In the context of our WFOV camera, this disclosure is primarily directed at considering how facial regions will be distorted by the WFOV perspective of the camera. One can consider such facial regions to suffer distortions similar to those of the frontage of the building illustrated in the attached Appendix. Thus the problem of obtaining geometrically consistent face regions across the entire horizontal range of the WFOV camera is substantially similar to the architectural problem described therein.
Thus, in order to obtain reasonable face regions, it is useful to alter or re-map the raw image obtained from the original WFOV horizontal scene so that faces appear undistorted. Alternatively, face classifiers may be altered according to the location of the face regions within an unprocessed (raw) image of the scene.
In a first preferred embodiment, the center region of the image, representing up to 100 degrees of the horizontal field of view (FOV), is mapped using a squeezed rectilinear projection. This may be obtained using a suitable non-linear lens design to directly project the center region of the scene onto the middle ⅔ of the image sensor. The remaining approximately ⅓ portion of the image sensor (i.e. ⅙ at each end of the sensor) has the horizontal scene projected using a cylindrical mapping. Again, in this first preferred embodiment, the edges of the wide-angle lens are designed to optically effect said projection directly onto the imaging sensor.
Thus, in a first embodiment, the entire horizontal scene is mapped onto the full extent of the image sensor, as illustrated at
Naturally, the form and structure of such a complex hybrid optical lens may not be conducive to mass production; thus, in an alternative embodiment, a more conventional rectilinear wide-angle lens is used and the squeezing of the middle ⅔ of the image is achieved by post-processing the sensor data. Similarly, the cylindrical projections of the outer regions of the WFOV scene are performed by post-processing. In this second embodiment the initial projection of the scene onto the sensor does not cover the full extent of the sensor, and thus a significant portion of the sensor area does not contain useful data. The overall resolution of this second embodiment is reduced, and a larger sensor would be used to achieve accuracy similar to that of the first embodiment, as illustrated at
In a third embodiment some of the scene mappings are achieved optically, but some additional image post-processing is used to refine the initial projections of the image scene onto the sensor. In this embodiment the lens design can be optimized for manufacturing considerations, a larger portion of the sensor area can be used to capture useful scene data and the software post-processing overhead is similar to the pure software embodiment.
In a fourth embodiment multiple cameras are configured to cover overlapping portions of the desired field of view and the acquired images are combined into a single WFOV image in memory. These multiple cameras may be configured to have the same optical center, thus mitigating perspective related problems for foreground objects. In such an embodiment techniques employed in panorama imaging may be used advantageously to join images at their boundaries, or to determine the optimal join line where a significant region of image overlap is available. The following cases belong to the same assignee and relate to panorama imaging and are incorporated by reference: U.S. Ser. Nos. 12/636,608, 12/636,618, 12/636,629, 12/636,639, and 12/636,647, as are US published apps nos. US2006/0182437, US2009/0022422, US2009/0021576 and US2006/0268130.
In one preferred embodiment of the multi-camera WFOV device, three or more standard cameras, each with a 60 degree FOV, are combined to provide an overall horizontal WFOV of 120-150 degrees with an overlap of 15-30 degrees between cameras. The field of view for such a device can be extended horizontally by adding more cameras; it may be extended vertically by adding an identical array of 3 or more horizontally aligned cameras facing in a higher (or lower) vertical direction, with a similar vertical overlap of 15-30 degrees, offering a vertical FOV of 90-105 degrees for two such WFOV arrays. The vertical FOV may be increased further by adding more horizontally aligned camera arrays. Such configurations have the advantage that all individual cameras can be conventional wafer-level cameras (WLC) which can be mass-produced.
In an alternative multi-camera embodiment, a central WFOV camera has its range extended by two side cameras. The WFOV camera can employ an optical lens optimized to provide a 120 degree compressed rectilinear mapping of the central scene. The side cameras can be optimized to provide a cylindrical mapping of the peripheral regions of the scene, thus providing a similar result to that obtained in
After image acquisition and, depending on the embodiment, additional post-processing of the image, we arrive at a mapping of the image scene with three main regions. Over the middle third of the image there is a normal rectilinear mapping and the image is undistorted compared to a standard FOV image; over the next ⅓ of the image (i.e. ⅙ of image on either side) the rectilinear projection becomes increasingly squeezed as illustrated in
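The three-region hybrid projection just described can be illustrated with a one-dimensional angle-to-sensor mapping: rectilinear (tangent) near the center, continued by a cylindrical (linear-in-angle) mapping matched in value and slope at the junction so the transition is seamless. This is a simplified sketch, not an actual lens prescription; the focal length and junction angle are arbitrary assumptions.

```python
import math

def hybrid_map(theta_deg, f=1.0, theta1_deg=30.0):
    """Illustrative piecewise horizontal mapping: x = f*tan(theta) inside
    +/-theta1, continued outside by a cylindrical (linear-in-angle) term
    matched in value and slope at the junction for C1 continuity."""
    t = math.radians(abs(theta_deg))
    t1 = math.radians(theta1_deg)
    if t <= t1:
        x = f * math.tan(t)
    else:
        slope = f / math.cos(t1) ** 2           # d/dtheta of f*tan at t1
        x = f * math.tan(t1) + slope * (t - t1)  # cylindrical continuation
    return math.copysign(x, theta_deg)
```

Compared with a pure rectilinear projection, the cylindrical continuation keeps the sensor coordinate from diverging at wide field angles, which is what allows the full horizontal scene to fit the sensor.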
a) illustrates one embodiment where this can be achieved using a compressed rectilinear lens in the middle, surrounded by two cylindrical lenses on either side. In a practical embodiment all three lenses could be combined into a single lens structure designed to minimize distortions where the rectilinear projection of the original scene overlaps with the cylindrical projection.
A standard face-tracker can now be applied to the WFOV image as all face regions should be rendered in a relatively undistorted geometry.
In alternative embodiments the entire scene need not be re-mapped, but instead only the luminance components are re-mapped and used to generate a geometrically undistorted integral image. Face classifiers are then applied to this integral image in order to detect faces. Once faces are detected those faces and their surrounding peripheral regions can be re-mapped on each frame, whereas it may be sufficient to re-map the entire scene background, which is assumed to be static, only occasionally, say every 60-120 image frames. In this way image processing and enhancement can be focused on the people in the image scene.
In alternative embodiments it may not be desirable to completely re-map the entire WFOV scene due to the computational burden involved. In such embodiments, techniques described in U.S. Pat. Nos. 7,460,695, 7,403,643, 7,565,030, and 7,315,631 and US published app no. US2009/0263022, which are incorporated by reference along with US2009/0179998, US2009/0080713, US2009/0303342 and U.S. Ser. No. 12/572,930, filed Oct. 2, 2009 by the same assignee, may be employed. These references describe predicting face regions (determined from the previous several video frames). The images may be transformed using either cylindrical or squeezed rectilinear projection prior to applying a face tracker to the region. In such an embodiment, it may be necessary from time to time to re-map the full WFOV scene in order to make an initial determination of new faces within the WFOV image scene. However, after such initial determination, only the region immediately surrounding each detected face need be re-mapped.
In certain embodiments, the remapping of the image scene, or portions thereof, involves the removal of purple fringes (due to blue shift) or the correction of chromatic aberrations. The following case belongs to the same assignee is incorporated by reference and relates to purple fringing and chromatic aberration correction: US2009/0189997.
In other embodiments a single mapping of the input image scene is used. If, for example, only a simple rectilinear mapping were applied across the entire image scene, the edges of the image would be distorted and a conventional face tracker could be used only across the middle 40% or so of the image. Accordingly, the rectangular classifiers of the face tracker are modified to take account of the scene mappings across the remaining 60% of the image scene: over the middle portion of the image they can be applied unaltered; over the next 30% they are selectively expanded or compressed in the horizontal direction to account for the degree of squeezing of the scene during the rectilinear mapping process; finally, in the outer 30%, the face classifiers are adapted to account for the cylindrical mapping used in this region of the image scene.
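The band-dependent classifier adaptation above can be sketched as a simple scale-factor lookup. The band boundaries (40%/30%/30%) follow the text, but the numeric scale factors below are illustrative assumptions, not trained values.

```python
def classifier_h_scale(x_norm):
    """Illustrative horizontal scale factor for a rectangular face
    classifier as a function of normalized horizontal position
    x_norm in [0, 1] (0.5 = image center). Middle ~40% of the image
    is left unaltered; the next band compensates for the rectilinear
    squeeze; the outer band is adapted to the cylindrical mapping.
    The numeric factors are assumptions for illustration only."""
    d = abs(x_norm - 0.5)            # distance from image center, 0..0.5
    if d <= 0.20:                    # middle 40%: rectilinear, unaltered
        return 1.0
    if d <= 0.35:                    # next 30%: increasingly squeezed
        return 1.0 - 0.8 * (d - 0.20)
    return 0.88 - 0.5 * (d - 0.35)   # outer 30%: cylindrical adaptation
```

A detector would multiply each candidate window's width by this factor before evaluating its rectangular features, so a single trained cascade can serve all three bands.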
In order to transform standard rectangular classifiers of a particular size, say 32×32 pixels, it may be advantageous in some embodiments to increase the size of face classifiers to, for example, 64×64. This larger size of classifier would enable greater granularity, and thus improved accuracy in transforming normal classifiers to distorted ones. This comes at the expense of additional computational burden for the face tracker. However we note that face tracking technology is quite broadly adopted across the industry and is known as a robust and well optimized technology. Thus the trade off of increasing classifiers from 32×32 to 64×64 for such faces should not cause a significant delay on most camera or smartphone platforms. The advantage is that pre-existing classifier cascades can be re-used, rather than having to train new, distorted ones.
Having greater granularity for the classifiers is particularly advantageous when starting to rescale features inside the classifier individually, based on the distance to the optical center. In another embodiment, one can scale the whole 22×22 classifier (a very good size for face classifiers) with a fixed dx,dy computed as a function of distance from the optical center. Having larger classifiers does not put excessive strain on the processing; advantageously, the opposite is true, because there are fewer scales to cover. In this case, the distance to subject is reduced.
In an alternative embodiment an initial, shortened chain of modified classifiers is applied to the raw image (i.e. without any rectilinear or cylindrical re-mapping). This chain is composed of some of the initial face classifiers from a normal face detection chain. These initial classifiers are also, typically, the most aggressive to eliminate non-faces from consideration. These also tend to be simpler in form and the first four Haar classifiers from the Viola-Jones cascade are illustrated in
Where a compressed rectilinear scaling would have been employed (as illustrated in
This short classifier chain is employed to obtain a set of potential face regions which may then be re-mapped (using, for example, compressed rectilinear and/or cylindrical mapping) to enable the remainder of a complete face detection classifier chain to be applied to each potential face region. This embodiment relies on the fact that 99.99% of non-face regions are eliminated by applying the first few face classifiers; thus only a small number of potential face regions would be re-mapped, rather than the entire image scene, before applying a full face detection process.
In another embodiment, distortion may be compensated by applying geometrical adjustments (as a function of distance to the optical center) when an integral image is computed (in cases where template matching is done using the integral image), or by compensating for the distortion when computing the sub-sampled image used for face detection and face tracking (in cases where template matching is done directly on Y data).
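Template matching with rectangular (Haar-type) classifiers rests on the integral image, or summed-area table, which makes any rectangle sum a constant-time operation. A minimal self-contained sketch, without the geometrical adjustments discussed above:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1].
    The extra leading row/column of zeros makes rectangle sums
    branch-free at the image borders."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of img over the w-by-h rectangle with top-left (x, y),
    computed in O(1) from four table look-ups (the basis of
    evaluating Haar features)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```

A geometry-aware variant would warp the sampling positions as a function of distance to the optical center before accumulation, as the embodiment above suggests.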
Note that face classifiers can be divided into symmetric and non-symmetric classifiers. In certain embodiments it may be advantageous to use split classifier chains. For example, right and left-hand face detector cascades may report detection of a half-face region; this may indicate that a full face is present but the second half is more or less distorted than would be expected, perhaps because it is closer to or farther from the lens than is normal. In such cases a more relaxed half-face or full-face detector may be employed to confirm if a full face is actually present, or a lower acceptance threshold may be set for the current detector. The following related applications belong to the same assignee and are incorporated by reference: US2007/0147820, US2010/0053368, US2008/0205712, US2009/0185753, US2008/0219517 and US2010/0054592, and U.S. Ser. No. 61/182,625, filed May 29, 2009 and U.S. Ser. No. 61/221,455, filed Jun. 29, 2009.
In certain embodiments, a first image of a scene is reconstructed from sensor data. This first image is then analyzed using a variety of image analysis techniques and at least a second set of main image data is acquired and used to reconstruct at least a second image of substantially the same scene. The second image is then analyzed and the results of these at least two analyses are used to create an enhanced image of the original scene. Examples of various image analysis techniques include: (i) foreground/background separation; (ii) face detection and facial feature detection including partial or occluded faces or features and peripheral face regions; (iii) indoor/outdoor image classification; (iv) global luminance analysis; (v) local luminance analysis; (vi) directional luminance analysis; (vii) image blur analysis—global and local; (viii) image gradient analysis; (ix) color filtering & segmentation including color correlogram analysis; (x) image variance analysis; (xi) image texture filtering & segmentation.
The following belong to the same assignee as the present application and are incorporated by reference, particularly as describing alternative embodiments:
US published patent applications nos. 20110053654, 20110013044, 20110025886, 20110013043, 20110002545, 20100328486, 20110025859, 20100329549, 20110033112, 20110002506, 20110055354, 20100260414, 20110050919, 20110043648, 20100329582, 20110026780, 20100238309, 20110007174, 20100202707, 20100328472, 20100194895, 20100182458, 20100165140, 20100146165, 20100321537, 20100141798, 20100295959, 20100201826, 20100259622, 20100201827, 20100220899, 20100141787, 20100141786, 20100165150, 20100060727, 20100271499, 20100039525, 20100231727, 20100066822, 20100053368, 20100053367, 20100053362, 20100054592, 20090304278, 20100026833, 20100026832, 20100026831, 20100014721, 20090303343, 20090303342, 20090238419, 20090238410, 20100272363, 20090189998, 20090189997, 20090190803, 20090179999, 20090167893, 20090179998, 20090040342, 20090002514, 20090003661, 20100054549, 20100054533, 20100039520, 20080267461, 20080317379, 20080317339, 20090003708, 20080316328, 20080316327, 20080317357, 20080317378, 20080309769, 20090185753, 20080266419, 20090263022, 20080219518, 20080232711, 20080220750, 20080219517, 20080205712, 20080186389, 20090196466, 20080143854, 20090123063, 20080112599, 20090080713, 20090080797, 20090080796, 20080219581, 20080049970, 20080075385, 20090115915, 20080043121, 20080013799, 20080309770, 20080013798, 20070296833, 20080292193, 20070269108, 20070253638, 20070160307, 20080175481, 20080240555, 20060093238, 20050140801, 20050031224, and 20060204034; and
U.S. Pat. Nos. 7,536,061, 7,683,946, 7,536,060, 7,746,385, 7,804,531, 7,847,840, 7,847,839, 7,697,778, 7,676,108, 7,620,218, 7,860,274, 7,848,549, 7,634,109, 7,809,162, 7,545,995, 7,855,737, 7,844,135, 7,864,990, 7,684,630, 7,869,628, 7,787,022, 7,822,235, 7,822,234, 7,796,816, 7,865,036, 7,796,822, 7,853,043, 7,551,800, 7,515,740, 7,466,866, 7,693,311, 7,702,136, 7,474,341, 7,460,695, 7,630,527, 7,469,055, 7,460,694, 7,403,643, 7,773,118, 7,852,384, 7,702,236, 7,336,821, 7,295,233, 7,469,071, 7,868,922, 7,660,478, 7,844,076, 7,315,631, 7,551,754, 7,804,983, 7,792,335, 7,680,342, 7,619,665, 7,692,696, 7,792,970, 7,599,577, 7,689,009, 7,587,085, 7,606,417, 7,747,596, 7,506,057, 7,685,341, 7,436,998, 7,694,048, 7,715,597, 7,565,030, 7,639,889, 7,636,486, 7,639,888, 7,536,036, 7,738,015, 7,590,305, 7,352,394, 7,551,755, 7,558,408, 7,587,068, 7,555,148, 7,564,994, 7,424,170, 7,340,109, 7,308,156, 7,310,450, 7,206,461, 7,369,712, 7,676,110, 7,315,658, 7,630,006, 7,362,368, 7,616,233, 7,315,630, 7,269,292, 7,471,846, 7,574,016, 7,440,593, 7,317,815, 7,042,505, 6,035,072, and 6,407,777.
U.S. patent application Ser. Nos. 13/077,936 and 13/077,891 are also incorporated by reference as disclosing alternative embodiments.
In the following examples, embodiments involving a rectangular face detector will be described. However, the invention is not limited to detecting faces, and other objects may be detected, and such objects may also be tracked. Thus, where face detection or face tracking is mentioned herein, it is to be understood that the described features may be applied to objects other than faces. The face or other object detector may be based on variations of the Viola-Jones method where a cascade of rectangular classifiers is applied in sequence to a test region in the integral-image domain. Some approaches use a pass/fail cascade, while others employ a cumulative probability which allows the test region to fall below acceptance for some classifiers as long as it compensates by scoring above a threshold for the majority of the classifiers in the cascade.
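The two cascade acceptance policies mentioned above can be contrasted in a few lines. This is an illustrative sketch only: real detectors use trained per-stage weights and thresholds, which are stand-in values here.

```python
def cascade_accepts(scores, thresholds, cumulative_threshold):
    """Compare the two cascade policies: (1) pass/fail, rejecting a test
    region as soon as any classifier score misses its per-stage
    threshold, and (2) cumulative, letting individual stages fall short
    as long as the running total stays above a global threshold.
    Returns (pass_fail_result, cumulative_result)."""
    pass_fail = all(s >= t for s, t in zip(scores, thresholds))
    cumulative = sum(scores) >= cumulative_threshold
    return pass_fail, cumulative
```

The cumulative policy is more forgiving of a single weak stage, at the cost of evaluating every classifier in the cascade rather than exiting early.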
Different types of classifiers may be used in a cascade. For example, one combination uses Haar-classifiers (see
Embodiments are described above and below herein involving face or other object detection in a portion of an image acquired with a nonlinear lens system. Typically the region of interest, or ROI, lies in the periphery of an ultra wide-angle lens such as a fish-eye with field of view, or FOV, of upwards of 180 degrees or greater. A geometric correction engine may be pre-calibrated for the particular lens in use.
In accordance with certain embodiments, a main image is acquired, and mapped onto an image sensor by a non-linear lens creating a distorted representation of the image scene. This distortion can be, for example, any of the types illustrated at
As the full image frame is not processed, this is significantly faster than applying the engine to the entire acquired, distorted original image frame. This is highly advantageous for portable and even handheld devices, wherein efficient use of computational resources is at a premium.
In one embodiment, the relevant ROI is reconstructed from the main distorted image and regions of different quality are determined. A measure of reconstructed pixel quality may be available. The image is partitioned into a number of regions of differing quality. A number of face (or other object) detector cascades of varying granularity are also available. In one embodiment, several cascades of different sized classifiers are available e.g. 32×32, 24×24 and 14×14 pixel classifiers.
In an alternative, but related embodiment, a hardware resizing engine is used to upscale or downscale the ROI image to match with a fixed size face detector cascade, say 22×22 pixel, but having the same effect as applying different sizes of cascaded classifiers. See U.S. Pat. Nos. 7,460,695, 7,403,643 and 7,315,631, incorporated by reference, for detailed explanations of advantageous face detecting and tracking embodiments. Once a face (or other object) is detected, a history of that face may be recorded over a sequence of image frames and on each new frame acquisition a face candidate area is marked indicating a region of the frame where there is a very high probability of finding a face because a face was detected at or near the center of this region in the previous image frame, or some estimated movement distance from where it was detected in the previous frame. According to this embodiment, a face-detection/tracking process is next applied to the reconstructed ROI image. This process may be modified according to the determined pixel quality of different portions of the image. Thus in regions where the image quality is high quality, or HQ, and normal quality, or NQ, all three sizes of face detector may be used in the face detection/tracking process. However in regions of reduced quality, or RQ, there may typically not be sufficient pixel resolution to use the smaller size(s) of face classifier. The face detection/tracking process in accordance with certain embodiments determines these regions and understands not to apply smaller classifiers thus eliminating potential false positives and saving time.
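The policy of withholding the smallest classifiers from reduced-quality regions can be expressed compactly. The classifier size set echoes the 32×32, 24×24 and 14×14 examples above, but the RQ cut-off is an illustrative assumption.

```python
CLASSIFIER_SIZES = (32, 24, 14)   # example cascade sizes from the text

def sizes_for_region(quality):
    """Choose which classifier cascade sizes to run in a region. HQ and
    NQ regions support every cascade; RQ (reduced-quality) regions lack
    the pixel resolution for the smallest classifiers, which would
    mostly produce false positives there. The >= 24 cut-off is an
    assumption for illustration."""
    if quality in ("HQ", "NQ"):
        return list(CLASSIFIER_SIZES)
    return [s for s in CLASSIFIER_SIZES if s >= 24]   # drop 14x14 in RQ
```

Skipping the 14×14 cascade in RQ regions both eliminates potential false positives and saves the time those extra evaluations would cost, as described above.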
A particular complication arises where a face region overlaps between two different regions of image quality as illustrated in
In certain embodiments, the smaller sizes of face detector will not be applied to the RQ (reduced quality) regions of the image. However, in other embodiments, (i) the device determines whether a tracked object region is likely to have moved into a RQ region of the image, and (ii) the device determines whether a new face of small size is likely to have entered the image frame. If neither of these applies, then in certain embodiments the RQ region will not be searched and no action will be taken.
If, however, the tracking history, or some knowledge of the region appearing in the RQ portion of the image (e.g., that a door is located in that area) suggests that it is desirable to search in this region, then the face or other object detection module and/or face or other object tracking module may be used to initiate enhancement of the RQ portions of the image frame as follows: Referring back now to
Responsive to an indication from the face or other object detection/tracking subsystem, additional image frames are acquired in certain embodiments with a close temporal proximity to the original acquisition. Depending on the level of desired quality and the speed at which image frames can be acquired, at least 2, and even 4-8, or perhaps more, additional image frames may be obtained. After each acquisition the main image buffer may be cleared as in certain embodiments only the extracted ROI(s) are buffered.
Super-resolution processing is then applied in certain embodiments to generate a single, enhanced output ROI for each sequence of extracted ROI(s). Referring to
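The accumulation step at the heart of such multi-frame fusion can be sketched with a simple shift-and-add over co-registered ROI crops (shown for 1-D rows for brevity). This is not the super-resolution method of this disclosure: real super-resolution also performs sub-pixel registration and deconvolution, and the integer shifts here are assumed known.

```python
def shift_and_add(rois, shifts):
    """Minimal shift-and-add fusion: align each ROI row by its known
    integer shift, accumulate values, and average wherever samples
    overlap. Positions covered by no ROI are left at 0.0."""
    width = max(len(r) + s for r, s in zip(rois, shifts))
    acc = [0.0] * width
    cnt = [0] * width
    for roi, s in zip(rois, shifts):
        for i, v in enumerate(roi):
            acc[i + s] += v
            cnt[i + s] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]
```

Averaging overlapping samples from several extracted ROIs is what suppresses noise in the reduced-quality pixels before the enhanced output ROI is produced.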
In certain embodiments, geometric correction is applied to each ROI within a main acquired image prior to applying face or other object detection and/or tracking. A distorted scene may be “undistorted,” in certain embodiments, and in other embodiments, distortion is actually applied to the face or other object classifiers, such that they may be applied to raw image data. In the latter embodiments, the resulting classifiers are non-rectangular. For the more non-linear regions towards the periphery of the image sensor, it becomes increasingly complicated to apply modified classifiers within a cascade in a consistent manner. Thus, the use of classifiers in these embodiments may be similar to or differ somewhat from the use of classifiers described in U.S. Ser. Nos. 12/959,089, 12/959,137, and 12/959,131, which belong to the same assignee and are incorporated by reference. These describe use of cylindrical and hybrid rectangular-cylindrical classifiers.
In order to transform standard rectangular classifiers of a particular size, say 32×32 pixels, it is advantageous in some embodiments to increase the classifier size to, for example, 64×64. This larger classifier size enables greater granularity, and thus improved accuracy, in transforming normal classifiers into distorted ones, particularly for the most distorted regions towards the periphery of the imaging sensor. The advantage is that preexisting classifier cascades can be re-used, rather than having to train new, distorted ones.
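To make the transformation concrete, the sketch below maps the corners of a rectangular classifier window (e.g., 64×64) through a simple one-parameter radial model, r′ = r(1 + k·r²), yielding the non-rectangular region the classifier covers on the raw sensor. The specific polynomial and the parameter `k` are illustrative assumptions; the document does not fix a particular distortion model.

```python
import math

def distort_point(x, y, cx, cy, k):
    """Map an ideal (rectilinear) point to the distorted sensor frame
    using the illustrative radial model r' = r * (1 + k * r^2)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    scale = 1.0 + k * r * r
    return cx + dx * scale, cy + dy * scale

def distort_window(corners, cx, cy, k):
    """Transform the corners of a rectangular classifier window into
    the non-rectangular region it occupies on the raw image."""
    return [distort_point(x, y, cx, cy, k) for (x, y) in corners]
```

At the finer 64×64 granularity, each sampled point of the classifier can be pushed through the same mapping, which is why the larger window tracks the peripheral distortion more accurately than a 32×32 one.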
In certain embodiments, an initial, shortened chain of modified classifiers is applied to the raw image. This approach is particularly advantageous when it is not practical to perform a full face or other object detection on an uncorrected ROI. In these embodiments, an initial detection of likely face or other object regions is performed on the uncorrected raw image. Subsequently, in regions where a face or other object is initially detected, the uncorrected ROI is passed to the geometric correction engine and transformed into a rectangular frame of reference, where detection is completed using a more straightforward cascade of rectangular classifiers.
A shortened, distorted classifier chain in accordance with certain embodiments may be composed of the first few face or other object classifiers from a normal face or other object detection chain. These initial classifiers may be the most aggressive to eliminate non-faces from consideration. These may also tend to be less complex in form, such as the first four Haar classifiers from the Viola-Jones cascade that are illustrated in
This short classifier chain is employed in certain embodiments to obtain a set of potential face regions which may then be re-mapped, using, for example, compressed rectilinear and/or cylindrical mapping, to enable the remainder of a complete face or other object detection classifier chain to be applied to each potential face or other object region. This embodiment is particularly advantageous when a large percentage, such as more than 95% or even more than 98% or 99%, or even 99.99%, of non-face or non-object regions may be eliminated by applying the first few face or other object classifiers. Thus, it is advantageous in certain embodiments to re-map a small number of potential face or other object regions, rather than the entire image scene, before applying a full face or other object detection process. This approach is particularly useful in applications where it is only desired to transform and analyze the tracked face or other object region or regions, e.g., security applications. In certain embodiments, portions of the image scene around a tracked individual or object may optionally be left uncorrected, such that significant computational savings are achieved.
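The two-stage flow described above can be sketched as follows. All callables here (`short_chain`, `full_chain`, `remap`) are placeholders for the document's modules, not real APIs: the short distorted chain rejects most raw regions cheaply, and only survivors are remapped and passed to the full rectangular cascade.

```python
def detect_faces(raw_regions, short_chain, full_chain, remap):
    """Two-stage detection sketch.

    short_chain : cheap distorted classifiers applied to raw regions
    remap       : geometric correction into a rectangular frame
    full_chain  : complete cascade of rectangular classifiers
    """
    # Stage 1: eliminate the vast majority of non-face regions on raw data.
    candidates = [region for region in raw_regions if short_chain(region)]
    # Stage 2: remap only the survivors, then finish detection.
    detections = []
    for region in candidates:
        rectified = remap(region)
        if full_chain(rectified):
            detections.append(rectified)
    return detections
```

The computational saving comes from stage 1: if 99% of regions are rejected there, only 1% ever pay the cost of geometric correction.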
In certain embodiments, face classifiers are divided into symmetric and non-symmetric classifiers (see e.g., U.S. Ser. No. 61/417,737, incorporated by reference). In certain embodiments, it may be advantageous to use split classifier chains. For example, right- and left-hand face or other object detector cascades may report detection of a half-face or other object region. This may be used to indicate that a full face is present, while the second half may be more or less distorted than would be expected, in one example because it may be closer to or farther from the lens than is normal. In such cases, a more relaxed half- or full-face or other object detector may be employed to confirm whether a full face or other object is actually present and/or whether a lower acceptance threshold may be set for the current detector (see, e.g., U.S. Pat. No. 7,692,696, and US published applications nos. 2011/0050938, 2008/0219517 and 2008/0205712, and U.S. Ser. Nos. 13/020,805, 12/959,320, 12/825,280, 12/572,930, 12/824,204 and 12/944,701, which belong to the same assignee and are incorporated by reference).
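The split-chain decision logic can be sketched as below. The detector callables and the relaxed threshold value are hypothetical placeholders: if only one half-face detector fires (the other half perhaps being more distorted), a relaxed full-face check is applied before accepting.

```python
def split_face_check(region, left_detector, right_detector, full_detector,
                     relaxed_threshold=0.5):
    """Confirm a face from split left/right half-face detectors.

    Both halves firing is accepted outright; a single half triggers a
    re-test with a relaxed full-face score threshold; neither half
    firing is rejected.  Threshold value is illustrative.
    """
    left_hit = left_detector(region)
    right_hit = right_detector(region)
    if left_hit and right_hit:
        return True
    if left_hit or right_hit:
        # one half may be distorted; fall back to a relaxed full check
        return full_detector(region) >= relaxed_threshold
    return False
```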
In certain embodiments, the entire scene is not re-mapped; instead, only the luminance components are remapped and used to generate a geometrically undistorted integral image. Face or other object classifiers are then applied to this integral image in order to detect faces or other objects, respectively. Once faces or other objects are detected, only those faces or other objects, with or without their surrounding peripheral regions, are re-mapped on each frame. In certain embodiments, it may be sufficient to re-map the entire scene background, which can be assumed in these embodiments to be static, only occasionally, say every 60-120 image frames. Image processing and enhancement is focused in certain embodiments on the people, faces or other objects of interest in the image scene. In alternative embodiments, to save computational resources, only one or more portions of the scene are re-mapped. In such embodiments, only the predicted face candidate areas, determined from one or more previous frames (see U.S. Pat. No. 7,460,695, incorporated by reference), may be transformed by the geometric correction engine, prior to applying a face or other object tracker to the region. In these embodiments, the entire main acquired image may be fully re-mapped at selected times in order to make an initial determination of new faces or other tracked objects within the entire image scene. After such initial determination, only the region immediately surrounding each detected face or other object is generally re-mapped.
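The luminance-only integral image mentioned above can be sketched as follows. The Rec. 601 luma weights are a standard choice, not one mandated by the text; the integral image then lets Haar-style rectangular classifiers evaluate any box sum in constant time.

```python
import numpy as np

def luminance_integral(rgb):
    """Build an integral image from the luminance channel only.

    Only luminance is remapped/summed, which is the cheap input a
    cascade of rectangular classifiers needs.  Rec. 601 weights used
    here are an illustrative assumption.
    """
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return luma.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of luminance over an inclusive rectangle in O(1)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Because each classifier stage reduces to a handful of `box_sum` calls, the cost of detection becomes independent of window size once the integral image is built.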
In certain embodiments, when a face is tracked across the scene, it may be desired to draw particular attention to that face and to emphasize it against the main scene. In one exemplary embodiment, suitable for applications in video telephony, there may be one or more faces in the main scene, while one (or more) of these may be speaking. In this case, a stereo microphone may be used to locate the speaking face. This face region, and optionally other foreground regions (e.g., neck, shoulders and torso, shirt, chair-back, and/or desk-top) may be further processed to magnify them (e.g., by a factor of 1.8) against the background. In certain embodiments, the magnified face may be composited onto the background image in the same location as the unmagnified original. In another embodiment, the other faces and the main background of the image are de-magnified and/or squeezed in order to keep the overall image size self-consistent. This may lead to some image distortion, particularly surrounding the “magnified” face. This can help to emphasize the person speaking (see, e.g.,
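The magnify-and-composite step can be sketched minimally as below: the face box is enlarged by the given factor (nearest-neighbour resampling for brevity) and pasted back centred on its original location, cropped at the frame edges. This is a simplification under stated assumptions; the document's compositing may blend edges or squeeze the background rather than hard-paste.

```python
import numpy as np

def magnify_and_composite(frame, box, factor=1.8):
    """Magnify the region `box` = (top, left, bottom, right) by `factor`
    and composite it back centred on its original location.  Uses
    nearest-neighbour resampling; cropping handles frame overrun."""
    top, left, bottom, right = box
    patch = frame[top:bottom, left:right]
    h, w = patch.shape[:2]
    nh, nw = int(h * factor), int(w * factor)
    # nearest-neighbour index maps for the enlarged patch
    ys = np.arange(nh) * h // nh
    xs = np.arange(nw) * w // nw
    big = patch[ys][:, xs]
    # paste centred on the original box centre, clipped to the frame
    cy, cx = (top + bottom) // 2, (left + right) // 2
    t, l = max(cy - nh // 2, 0), max(cx - nw // 2, 0)
    b = min(t + nh, frame.shape[0])
    r = min(l + nw, frame.shape[1])
    out = frame.copy()
    out[t:b, l:r] = big[: b - t, : r - l]
    return out
```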
Embodiments have been described as including various operations. Many of the processes are described in their most basic form, but operations can be added to or deleted from any of the processes without departing from the scope of the invention.
The operations of the invention may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the operations. Alternatively, the steps may be performed by a combination of hardware and software. The invention may be provided as a computer program product that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process according to the invention. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of media/machine-readable medium suitable for storing electronic instructions. Moreover, the invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection). All operations may be performed at the same central site or, alternatively, one or more operations may be performed elsewhere.
While exemplary drawings and specific embodiments of the present invention have been described and illustrated, it is to be understood that the scope of the present invention is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by workers skilled in the art without departing from the scope of the present invention.
In addition, in methods that may be performed according to preferred embodiments herein and that may have been described above, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations, except for those where a particular order may be expressly set forth or where those of ordinary skill in the art may deem a particular order to be necessary.
In addition, all references cited above and below herein, as well as the background, invention summary, abstract and brief description of the drawings, are all incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments.
This application is a continuation-in-part (CIP) of U.S. patent application Ser. No. 13/077,891, filed Mar. 31, 2011. This application is also related to U.S. Ser. No. 13/077,936, also filed Mar. 31, 2011. This application is also related to U.S. Ser. Nos. 12/959,089, 12/959,137 and 12/959,151, each filed Dec. 2, 2010. All of these applications belong to the same assignee and are incorporated by reference. Another related application by the same assignee and same inventors, entitled FACE AND OTHER OBJECT DETECTION AND TRACKING IN OFF-CENTER PERIPHERAL REGIONS FOR NONLINEAR LENS GEOMETRIES, Ser. No. 13/078,971, is filed contemporaneously with the present application.
Number | Name | Date | Kind |
---|---|---|---|
1906509 | Claus | May 1933 | A |
3251283 | Wood | May 1966 | A |
3356002 | Raitiere | Dec 1967 | A |
4555168 | Meier et al. | Nov 1985 | A |
5000549 | Yamazaki | Mar 1991 | A |
5359513 | Kano et al. | Oct 1994 | A |
5508734 | Baker et al. | Apr 1996 | A |
5526045 | Oshima et al. | Jun 1996 | A |
5579169 | Mouri | Nov 1996 | A |
5585966 | Suzuki | Dec 1996 | A |
5633756 | Kaneda et al. | May 1997 | A |
5675380 | Florent et al. | Oct 1997 | A |
5850470 | Kung et al. | Dec 1998 | A |
5960108 | Xiong | Sep 1999 | A |
5986668 | Szeliski et al. | Nov 1999 | A |
6035072 | Read | Mar 2000 | A |
6044181 | Szeliski et al. | Mar 2000 | A |
6078701 | Hsu et al. | Jun 2000 | A |
6219089 | Driscoll, Jr. et al. | Apr 2001 | B1 |
6222683 | Hoogland et al. | Apr 2001 | B1 |
6392687 | Driscoll, Jr. et al. | May 2002 | B1 |
6407777 | DeLuca | Jun 2002 | B1 |
6466254 | Furlan et al. | Oct 2002 | B1 |
6664956 | Erdem | Dec 2003 | B1 |
6750903 | Miyatake et al. | Jun 2004 | B1 |
7042505 | DeLuca | May 2006 | B1 |
7058237 | Liu et al. | Jun 2006 | B2 |
7206461 | Steinberg et al. | Apr 2007 | B2 |
7269292 | Steinberg | Sep 2007 | B2 |
7280289 | Yamakawa | Oct 2007 | B2 |
7295233 | Steinberg et al. | Nov 2007 | B2 |
7308156 | Steinberg et al. | Dec 2007 | B2 |
7310450 | Steinberg et al. | Dec 2007 | B2 |
7315630 | Steinberg et al. | Jan 2008 | B2 |
7315631 | Corcoran et al. | Jan 2008 | B1 |
7315658 | Steinberg et al. | Jan 2008 | B2 |
7317815 | Steinberg et al. | Jan 2008 | B2 |
7327899 | Liu et al. | Feb 2008 | B2 |
7336821 | Ciuc et al. | Feb 2008 | B2 |
7340109 | Steinberg et al. | Mar 2008 | B2 |
7352394 | DeLuca et al. | Apr 2008 | B1 |
7362368 | Steinberg et al. | Apr 2008 | B2 |
7369712 | Steinberg et al. | May 2008 | B2 |
7403643 | Ianculescu et al. | Jul 2008 | B2 |
7424170 | Steinberg et al. | Sep 2008 | B2 |
7436998 | Steinberg et al. | Oct 2008 | B2 |
7440593 | Steinberg et al. | Oct 2008 | B1 |
7460694 | Corcoran et al. | Dec 2008 | B2 |
7460695 | Steinberg et al. | Dec 2008 | B2 |
7466866 | Steinberg | Dec 2008 | B2 |
7469055 | Corcoran et al. | Dec 2008 | B2 |
7469071 | Drimbarean et al. | Dec 2008 | B2 |
7471846 | Steinberg et al. | Dec 2008 | B2 |
7474341 | DeLuca et al. | Jan 2009 | B2 |
7495845 | Asami | Feb 2009 | B2 |
7499638 | Arai et al. | Mar 2009 | B2 |
7506057 | Bigioi et al. | Mar 2009 | B2 |
7515740 | Corcoran et al. | Apr 2009 | B2 |
7536036 | Steinberg et al. | May 2009 | B2 |
7536060 | Steinberg et al. | May 2009 | B2 |
7536061 | Steinberg et al. | May 2009 | B2 |
7545995 | Steinberg et al. | Jun 2009 | B2 |
7551754 | Steinberg et al. | Jun 2009 | B2 |
7551755 | Steinberg et al. | Jun 2009 | B1 |
7551800 | Corcoran et al. | Jun 2009 | B2 |
7555148 | Steinberg et al. | Jun 2009 | B1 |
7558408 | Steinberg et al. | Jul 2009 | B1 |
7564994 | Steinberg et al. | Jul 2009 | B1 |
7565030 | Steinberg et al. | Jul 2009 | B2 |
7574016 | Steinberg et al. | Aug 2009 | B2 |
7587068 | Steinberg et al. | Sep 2009 | B1 |
7587085 | Steinberg et al. | Sep 2009 | B2 |
7590305 | Steinberg et al. | Sep 2009 | B2 |
7599577 | Ciuc et al. | Oct 2009 | B2 |
7606417 | Steinberg et al. | Oct 2009 | B2 |
7609850 | Tapson | Oct 2009 | B2 |
7612946 | Kweon et al. | Nov 2009 | B2 |
7613357 | Owechko et al. | Nov 2009 | B2 |
7616233 | Steinberg et al. | Nov 2009 | B2 |
7619665 | DeLuca | Nov 2009 | B1 |
7620218 | Steinberg et al. | Nov 2009 | B2 |
7630006 | DeLuca et al. | Dec 2009 | B2 |
7630527 | Steinberg et al. | Dec 2009 | B2 |
7634109 | Steinberg et al. | Dec 2009 | B2 |
7636486 | Steinberg et al. | Dec 2009 | B2 |
7639888 | Steinberg et al. | Dec 2009 | B2 |
7639889 | Steinberg et al. | Dec 2009 | B2 |
7660478 | Steinberg et al. | Feb 2010 | B2 |
7676108 | Steinberg et al. | Mar 2010 | B2 |
7676110 | Steinberg et al. | Mar 2010 | B2 |
7680342 | Steinberg et al. | Mar 2010 | B2 |
7683946 | Steinberg et al. | Mar 2010 | B2 |
7684630 | Steinberg | Mar 2010 | B2 |
7685341 | Steinberg et al. | Mar 2010 | B2 |
7689009 | Corcoran et al. | Mar 2010 | B2 |
7692696 | Steinberg et al. | Apr 2010 | B2 |
7693311 | Steinberg et al. | Apr 2010 | B2 |
7694048 | Steinberg et al. | Apr 2010 | B2 |
7697778 | Steinberg et al. | Apr 2010 | B2 |
7702136 | Steinberg et al. | Apr 2010 | B2 |
7702236 | Steinberg et al. | Apr 2010 | B2 |
7715597 | Costache et al. | May 2010 | B2 |
7738015 | Steinberg et al. | Jun 2010 | B2 |
7747596 | Bigioi et al. | Jun 2010 | B2 |
7773118 | Florea et al. | Aug 2010 | B2 |
7787022 | Steinberg et al. | Aug 2010 | B2 |
7792335 | Steinberg et al. | Sep 2010 | B2 |
7792970 | Bigioi et al. | Sep 2010 | B2 |
7796816 | Steinberg et al. | Sep 2010 | B2 |
7796822 | Steinberg et al. | Sep 2010 | B2 |
7804531 | DeLuca et al. | Sep 2010 | B2 |
7804983 | Steinberg et al. | Sep 2010 | B2 |
7809162 | Steinberg et al. | Oct 2010 | B2 |
7822234 | Steinberg et al. | Oct 2010 | B2 |
7822235 | Steinberg et al. | Oct 2010 | B2 |
7835071 | Izumi | Nov 2010 | B2 |
7843652 | Asami et al. | Nov 2010 | B2 |
7844076 | Corcoran et al. | Nov 2010 | B2 |
7844135 | Steinberg et al. | Nov 2010 | B2 |
7847839 | DeLuca et al. | Dec 2010 | B2 |
7847840 | DeLuca et al. | Dec 2010 | B2 |
7848548 | Moon et al. | Dec 2010 | B1 |
7848549 | Steinberg et al. | Dec 2010 | B2 |
7852384 | DeLuca et al. | Dec 2010 | B2 |
7853043 | Steinberg et al. | Dec 2010 | B2 |
7855737 | Petrescu et al. | Dec 2010 | B2 |
7860274 | Steinberg et al. | Dec 2010 | B2 |
7864990 | Corcoran et al. | Jan 2011 | B2 |
7865036 | Ciuc et al. | Jan 2011 | B2 |
7868922 | Ciuc et al. | Jan 2011 | B2 |
7869628 | Corcoran et al. | Jan 2011 | B2 |
7907793 | Sandrew | Mar 2011 | B1 |
7929221 | Nong | Apr 2011 | B2 |
8090148 | Asari et al. | Jan 2012 | B2 |
8094183 | Toyoda et al. | Jan 2012 | B2 |
8134479 | Suhr et al. | Mar 2012 | B2 |
8144033 | Chinomi et al. | Mar 2012 | B2 |
8194993 | Chen et al. | Jun 2012 | B1 |
8218895 | Gleicher et al. | Jul 2012 | B1 |
8264524 | Davey | Sep 2012 | B1 |
8311344 | Dunlop et al. | Nov 2012 | B2 |
8340453 | Chen et al. | Dec 2012 | B1 |
8358925 | Gutierrez et al. | Jan 2013 | B2 |
8379014 | Wiedemann et al. | Feb 2013 | B2 |
20020063802 | Gullichsen et al. | May 2002 | A1 |
20020114536 | Xiong et al. | Aug 2002 | A1 |
20030103063 | Mojaver et al. | Jun 2003 | A1 |
20040061787 | Liu et al. | Apr 2004 | A1 |
20040233461 | Armstrong et al. | Nov 2004 | A1 |
20050031224 | Prilutsky et al. | Feb 2005 | A1 |
20050140801 | Prilutsky et al. | Jun 2005 | A1 |
20050166054 | Fujimoto | Jul 2005 | A1 |
20050169529 | Owechko et al. | Aug 2005 | A1 |
20050196068 | Kawai | Sep 2005 | A1 |
20060093238 | Steinberg et al. | May 2006 | A1 |
20060140449 | Otsuka et al. | Jun 2006 | A1 |
20060182437 | Williams et al. | Aug 2006 | A1 |
20060204034 | Steinberg et al. | Sep 2006 | A1 |
20060268130 | Williams et al. | Nov 2006 | A1 |
20070147820 | Steinberg et al. | Jun 2007 | A1 |
20070160307 | Steinberg et al. | Jul 2007 | A1 |
20070172150 | Quan et al. | Jul 2007 | A1 |
20070206941 | Maruyama et al. | Sep 2007 | A1 |
20070253638 | Steinberg et al. | Nov 2007 | A1 |
20070269108 | Steinberg et al. | Nov 2007 | A1 |
20070296833 | Corcoran et al. | Dec 2007 | A1 |
20080013799 | Steinberg et al. | Jan 2008 | A1 |
20080043121 | Prilutsky et al. | Feb 2008 | A1 |
20080075352 | Shinuya et al. | Mar 2008 | A1 |
20080075385 | David et al. | Mar 2008 | A1 |
20080112599 | Nanu et al. | May 2008 | A1 |
20080143854 | Steinberg et al. | Jun 2008 | A1 |
20080175436 | Asari et al. | Jul 2008 | A1 |
20080186389 | DeLuca et al. | Aug 2008 | A1 |
20080205712 | Ionita et al. | Aug 2008 | A1 |
20080218606 | Yoda | Sep 2008 | A1 |
20080219517 | Blonk et al. | Sep 2008 | A1 |
20080219518 | Steinberg et al. | Sep 2008 | A1 |
20080219581 | Albu et al. | Sep 2008 | A1 |
20080232711 | Prilutsky et al. | Sep 2008 | A1 |
20080240555 | Nanu et al. | Oct 2008 | A1 |
20080266419 | Drimbarean et al. | Oct 2008 | A1 |
20080267461 | Ianculescu et al. | Oct 2008 | A1 |
20080292193 | Bigioi | Nov 2008 | A1 |
20080309770 | Florea et al. | Dec 2008 | A1 |
20080316327 | Steinberg et al. | Dec 2008 | A1 |
20080316328 | Steinberg et al. | Dec 2008 | A1 |
20080317357 | Steinberg et al. | Dec 2008 | A1 |
20080317378 | Steinberg et al. | Dec 2008 | A1 |
20080317379 | Steinberg et al. | Dec 2008 | A1 |
20090002514 | Steinberg et al. | Jan 2009 | A1 |
20090003708 | Steinberg et al. | Jan 2009 | A1 |
20090021576 | Linder et al. | Jan 2009 | A1 |
20090022422 | Sorek et al. | Jan 2009 | A1 |
20090040342 | Drimbarean et al. | Feb 2009 | A1 |
20090074323 | Utsugi | Mar 2009 | A1 |
20090080713 | Bigioi et al. | Mar 2009 | A1 |
20090080796 | Capata et al. | Mar 2009 | A1 |
20090080797 | Nanu et al. | Mar 2009 | A1 |
20090115915 | Steinberg et al. | May 2009 | A1 |
20090167893 | Susanu et al. | Jul 2009 | A1 |
20090179998 | Steinberg et al. | Jul 2009 | A1 |
20090179999 | Albu et al. | Jul 2009 | A1 |
20090180713 | Bucha et al. | Jul 2009 | A1 |
20090185753 | Albu et al. | Jul 2009 | A1 |
20090189997 | Stec et al. | Jul 2009 | A1 |
20090189998 | Nanu et al. | Jul 2009 | A1 |
20090190803 | Neghina et al. | Jul 2009 | A1 |
20090196466 | Capata et al. | Aug 2009 | A1 |
20090220156 | Ito et al. | Sep 2009 | A1 |
20090238410 | Corcoran et al. | Sep 2009 | A1 |
20090238419 | Steinberg et al. | Sep 2009 | A1 |
20090263022 | Petrescu et al. | Oct 2009 | A1 |
20090303342 | Cocoran et al. | Dec 2009 | A1 |
20090303343 | Drimbarean et al. | Dec 2009 | A1 |
20090310828 | Kakadiaris et al. | Dec 2009 | A1 |
20100002071 | Ahiska | Jan 2010 | A1 |
20100014721 | Steinberg et al. | Jan 2010 | A1 |
20100026831 | Ciuc et al. | Feb 2010 | A1 |
20100026832 | Ciuc et al. | Feb 2010 | A1 |
20100026833 | Ciuc et al. | Feb 2010 | A1 |
20100033551 | Agarwala et al. | Feb 2010 | A1 |
20100039520 | Nanu et al. | Feb 2010 | A1 |
20100039525 | Steinberg et al. | Feb 2010 | A1 |
20100046837 | Boughorbel | Feb 2010 | A1 |
20100053362 | Nanu et al. | Mar 2010 | A1 |
20100053367 | Nanu et al. | Mar 2010 | A1 |
20100053368 | Nanu et al. | Mar 2010 | A1 |
20100054533 | Steinberg et al. | Mar 2010 | A1 |
20100054549 | Steinberg et al. | Mar 2010 | A1 |
20100054592 | Nanu et al. | Mar 2010 | A1 |
20100060727 | Steinberg et al. | Mar 2010 | A1 |
20100066822 | Steinberg et al. | Mar 2010 | A1 |
20100141786 | Bigioi et al. | Jun 2010 | A1 |
20100146165 | Steinberg et al. | Jun 2010 | A1 |
20100165140 | Steinberg | Jul 2010 | A1 |
20100165150 | Steinberg et al. | Jul 2010 | A1 |
20100166300 | Spampinato et al. | Jul 2010 | A1 |
20100182458 | Steinberg et al. | Jul 2010 | A1 |
20100194895 | Steinberg | Aug 2010 | A1 |
20100201826 | Steinberg et al. | Aug 2010 | A1 |
20100201827 | Steinberg et al. | Aug 2010 | A1 |
20100202707 | Costache | Aug 2010 | A1 |
20100215251 | Klein Gunnewiek et al. | Aug 2010 | A1 |
20100220899 | Steinberg et al. | Sep 2010 | A1 |
20100238309 | Florea et al. | Sep 2010 | A1 |
20100259622 | Steinberg et al. | Oct 2010 | A1 |
20100260414 | Ciuc | Oct 2010 | A1 |
20100271499 | Steinberg et al. | Oct 2010 | A1 |
20100272363 | Steinberg et al. | Oct 2010 | A1 |
20100295959 | Steinberg et al. | Nov 2010 | A1 |
20100303381 | Koehler et al. | Dec 2010 | A1 |
20100321537 | Zamfir | Dec 2010 | A1 |
20100328486 | Steinberg et al. | Dec 2010 | A1 |
20100329582 | Albu et al. | Dec 2010 | A1 |
20110002071 | Zhang et al. | Jan 2011 | A1 |
20110002506 | Ciuc et al. | Jan 2011 | A1 |
20110002545 | Steinberg et al. | Jan 2011 | A1 |
20110007174 | Bacivarov et al. | Jan 2011 | A1 |
20110013043 | Corcoran et al. | Jan 2011 | A1 |
20110013044 | Steinberg et al. | Jan 2011 | A1 |
20110025859 | Steinberg et al. | Feb 2011 | A1 |
20110025886 | Steinberg et al. | Feb 2011 | A1 |
20110026780 | Corcoran et al. | Feb 2011 | A1 |
20110033112 | Steinberg et al. | Feb 2011 | A1 |
20110050919 | Albu et al. | Mar 2011 | A1 |
20110050938 | Capata et al. | Mar 2011 | A1 |
20110053654 | Petrescu et al. | Mar 2011 | A1 |
20110055354 | Bigioi et al. | Mar 2011 | A1 |
20110063446 | McMordie et al. | Mar 2011 | A1 |
20110085049 | Dolgin et al. | Apr 2011 | A1 |
20110102553 | Corcoran et al. | May 2011 | A1 |
20110116720 | Gwak et al. | May 2011 | A1 |
20110216156 | Bigioi et al. | Sep 2011 | A1 |
20110216157 | Bigioi et al. | Sep 2011 | A1 |
20110216158 | Bigioi et al. | Sep 2011 | A1 |
20110234749 | Alon | Sep 2011 | A1 |
20110298795 | Van Der Heijden et al. | Dec 2011 | A1 |
20120119425 | Gutierrez et al. | May 2012 | A1 |
20120119612 | Gutierrez et al. | May 2012 | A1 |
20120249725 | Corcoran et al. | Oct 2012 | A1 |
20120249726 | Corcoran et al. | Oct 2012 | A1 |
20120249727 | Corcoran et al. | Oct 2012 | A1 |
20120249841 | Corcoran et al. | Oct 2012 | A1 |
20120250937 | Corcoran et al. | Oct 2012 | A1 |
20130070125 | Albu | Mar 2013 | A1 |
20130070126 | Albu | Mar 2013 | A1 |
Number | Date | Country |
---|---|---|
11-298780 | Oct 1999 | JP |
2011107448 | Sep 2011 | WO |
2011107448 | Nov 2011 | WO |
Entry |
---|
Co-pending U.S. Appl. No. 12/572,930, filed Oct. 2, 2009. |
Co-pending U.S. Appl. No. 12/636,608, filed Dec. 11, 2009. |
Co-pending U.S. Appl. No. 12/636,618, filed Dec. 11, 2009. |
Co-pending U.S. Appl. No. 12/636,629, filed Dec. 11, 2009. |
Co-pending U.S. Appl. No. 12/636,639, filed Dec. 11, 2009. |
Co-pending U.S. Appl. No. 12/636,647, filed Dec. 11, 2009. |
Co-pending U.S. Appl. No. 12/790,594, filed May 28, 2010. |
Co-pending U.S. Appl. No. 12/825,280, filed Jun. 28, 2010. |
Co-pending U.S. Appl. No. 12/944,701, filed Nov. 11, 2010. |
Co-pending U.S. Appl. No. 12/959,089, filed Dec. 2, 2010. |
Co-pending U.S. Appl. No. 12/959,137, filed Dec. 2, 2010. |
Co-pending U.S. Appl. No. 12/959,151, filed Dec. 2, 2010. |
Co-pending U.S. Appl. No. 12/959,320, filed Dec. 2, 2010. |
Co-pending U.S. Appl. No. 13/020,805, filed Feb. 3, 2011. |
Co-pending U.S. Appl. No. 13/077,891, filed Mar. 31, 2011. |
Co-pending U.S. Appl. No. 13/077,936, filed Mar. 31, 2011. |
Non-Final Rejection, dated May 22, 2013, for U.S. Appl. No. 13/078,971, filed Apr. 2, 2011. |
PCT Invitation to Pay Additional Fees and, Where Applicable, Protest Fee, for PCT Application No. PCT/EP2011/052970, dated Jul. 14, 2011, 5 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2011/052970, dated Sep. 27, 2011, 19 pages. |
Patent Abstracts of Japan, publication No. 11-298780, publication date: Oct. 29, 1999, Wide-Area Image-Pickup Device and Spherical Cavity Projection Device. |
Notice of Allowance, dated Jun. 11, 2013, for U.S. Appl. No. 13/234,139, filed Sep. 15, 2011. |
Notice of Allowance, dated Jun. 12, 2013, for U.S. Appl. No. 13/234,146, filed Sep. 15, 2011. |
Non-Final Rejection, dated Mar. 15, 2013, for U.S. Appl. No. 12/959,089, filed Dec. 2, 2010. |
Non-Final Rejection, dated Mar. 29, 2013, for U.S. Appl. No. 12/959,137, filed Dec. 2, 2010. |
Non-Final Rejection, dated Mar. 15, 2013, for U.S. Appl. No. 12/959,151, filed Dec. 2, 2010. |
Non-Final Rejection, dated Jun. 7, 2013, for U.S. Appl. No. 13/077,891, filed Mar. 31, 2011. |
Non-Final Rejection, dated May 17, 2013, for U.S. Appl. No. 13/077,936, filed Mar. 31, 2011. |
Non-Final Rejection, dated Dec. 7, 2012, for U.S. Appl. No. 13/234,139, filed Sep. 15, 2011. |
Non-Final Rejection, dated Dec. 20, 2012, for U.S. Appl. No. 13/234,146, filed Sep. 15, 2011. |
tawbaware.com: “PTAssembler Projections”, Projections , Jun. 12, 2009, pp. 1-15, XP002641900, Retrieved from the Internet: URL:http://web.archive.org/web/20090612020605/http:/v/projections.htm [retrieved on Jun. 14, 2011]. |
U.S. Appl. No. 13/541,650, filed Jul. 3, 2012. |
U.S. Appl. No. 13/596,044, filed Aug. 27, 2012. |
U.S. Appl. No. 13/862,372, filed Apr. 12, 2013. |
Number | Date | Country | |
---|---|---|---|
20120249725 A1 | Oct 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13077891 | Mar 2011 | US |
Child | 13078970 | US |