Vehicular vision system with multiple cameras

Information

  • Patent Grant
  • Patent Number
    11,553,140
  • Date Filed
    Monday, December 14, 2020
  • Date Issued
    Tuesday, January 10, 2023
Abstract
A vehicular vision system includes first and second cameras disposed at a vehicle and having respective overlapping fields of view that include a road surface of a road along which the vehicle is traveling. Image data captured by the cameras is provided to an image processor and is processed to determine relative movement of a road feature present in the captured image data. The determined movement of the road feature relative to the vehicle in first image data captured by the first camera is compared to the determined movement of the road feature relative to the vehicle in second image data captured by the second camera, and at least a rotational offset of the second camera at the vehicle relative to the first camera at the vehicle is determined and the image data are remapped to at least partially accommodate misalignment of the second camera relative to the first camera.
Description
FIELD OF THE INVENTION

The present invention relates to multi-camera systems for use in vehicles, and more particularly multi-camera systems for use in vehicles wherein image manipulation is carried out on the images prior to displaying the images to a vehicle occupant.


BACKGROUND OF THE INVENTION

There are few multi-camera systems currently available in vehicles. Such systems typically incorporate four cameras and provide a vehicle occupant with a composite image that is generated from the images taken by the four cameras. However, such systems can require a relatively large amount of processing power to generate the image that is displayed to the vehicle occupant, particularly in situations where manipulation of the images is carried out. Such manipulation of the images may include dewarping, among other things.


It would be beneficial to provide a multi-camera system for a vehicle that requires relatively little processing power.


SUMMARY OF THE INVENTION

In a first aspect, the invention is directed to a method of establishing a composite image for displaying in a vehicle, comprising:


a) providing a first camera and a second camera, each camera having a field of view;


b) positioning the cameras so that the fields of view of the cameras overlap partially, wherein the cameras together have a combined field of view;


c) recording preliminary digital images from the cameras, each preliminary digital image being made up of a plurality of pixels; and


d) generating a final composite digital image that corresponds to a selected digital representation of the combined field of view of the cameras by remapping selected pixels from each of the preliminary digital images into selected positions of the final composite digital image.


In a second aspect, the invention is directed to a method of establishing a composite image for displaying in a vehicle, comprising:


a) providing a first camera and a second camera, a third camera and a fourth camera, each camera having a field of view, wherein the cameras together have a combined field of view that is a 360 degree field of view around the vehicle;


b) positioning the cameras so that the field of view of each camera overlaps partially with the field of view of two of the other cameras;


c) recording preliminary digital images from the cameras, each preliminary digital image being made up of a plurality of pixels; and


d) generating a final composite digital image that corresponds to a selected digital representation of the combined field of view of the cameras by remapping selected pixels from each of the preliminary digital images into selected positions of the final composite digital image,


wherein the preliminary digital images each have associated therewith a preliminary apparent camera viewpoint and the final composite digital image has associated therewith a final apparent camera viewpoint,


and wherein the selected pixels from the preliminary digital images are selected so that the final apparent camera viewpoint associated with the final composite digital image is higher than the preliminary apparent camera viewpoints associated with the preliminary digital images,


and wherein the selected pixels from the preliminary digital images are selected so that any misalignment between the overlapping portions of the preliminary digital images is substantially eliminated,


and wherein the selected pixels from the preliminary digital images are selected so that the final composite digital image is dewarped as compared to each of the preliminary digital images.


In a third aspect, the invention is directed to a system for establishing a composite image for displaying in a vehicle, comprising a first camera and a second camera and a controller. Each camera has a field of view that overlaps partially with the field of view of the other camera. Each camera has an imager for generating a preliminary digital image. The cameras together have a combined field of view. The controller is programmed to generate a final composite digital image that corresponds to a selected digital representation of the combined field of view of the cameras by using a remapping table to remap selected pixels from each of the preliminary digital images into selected positions of the final composite digital image.


In a fourth aspect, the invention is directed to a method of generating a remapping table for use in mapping pixels from a plurality of preliminary digital images into a final composite image, comprising:


a) driving a vehicle having a first camera and a second camera thereon, each camera having a field of view that overlaps partially with the field of view of the other camera, each camera having an imager for generating one of the preliminary digital images, wherein the cameras together have a combined field of view, wherein the vehicle further includes a controller;


b) detecting a target feature along the path of the vehicle during driving, using the controller;


c) providing a first preliminary digital image from the first camera, wherein the first preliminary digital image contains a first representation of the target feature at a first point in time;


d) determining the position of the first representation of the target feature in the first preliminary digital image;


e) providing a second preliminary digital image from the second camera, wherein the second preliminary digital image contains a second representation of the target feature at a second point in time;


f) determining the position of the second representation of the target feature in the second preliminary digital image;


g) comparing the positions of the first and second representations of the target feature; and


h) generating at least one value for the remapping table based on the result of the comparison in step g).





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will now be described by way of example only with reference to the attached drawings, in which:



FIG. 1 is a plan view of a vehicle with a camera system in accordance with an embodiment of the present invention;



FIG. 2 is a schematic illustration of the camera system shown in FIG. 1;



FIGS. 3a-3d are images taken by cameras that are part of the camera system shown in FIG. 1;



FIG. 4 is a magnified view of the image shown in FIG. 3d;



FIG. 5 is a composite final image generated by the camera system shown in FIG. 1;



FIG. 6a is a remapping table used to generate the final composite image shown in FIG. 5 from the images shown in FIGS. 3a-3d;



FIG. 6b is a graphical representation of the remapping that takes place using the remapping table shown in FIG. 6a;



FIG. 6c is a graphical representation of a step that takes place prior to the remapping illustrated in FIG. 6b;



FIG. 7 is a plan view of a vehicle in a test area used to calibrate the camera system shown in FIG. 1;



FIG. 8a is a preliminary image from a camera from the camera system shown in FIG. 1;



FIGS. 8b and 8c are images formed by progressive remapping of the image shown in FIG. 8a;



FIG. 8d illustrates the analysis performed by the camera system shown in FIG. 1, to stitch together several remapped images;



FIG. 8e is a final composite image generated using the analysis shown in FIG. 8d;



FIGS. 9a-9c are remapping tables used to generate the images shown in FIGS. 8b, 8c and 8e from the preliminary image shown in FIG. 8a;



FIG. 10 shows target features on a road that can be used to assist in calibrating the camera system shown in FIG. 1 during driving;



FIG. 11 is a composite image formed using default remapping values, prior to the calibration of the camera system shown in FIG. 1 during driving; and



FIGS. 12a-12c are illustrations of events that would trigger adjustment of the remapping values used to generate the composite image shown in FIG. 11.





DETAILED DESCRIPTION OF THE INVENTION

Reference is made to FIG. 1, which shows a vehicle 10 that includes a vehicle body 12, and a multi-camera system 14 in accordance with an embodiment of the present invention. The multi-camera system 14 includes four cameras 16 and a controller 18. The multi-camera system 14 is configured to display a composite image that is generated using all four cameras 16 on an in-cabin display, shown at 20 in FIG. 2. The four cameras 16 include a front camera 16F, a rear camera 16R, and driver's side and passenger side cameras 16D and 16P.


Referring to FIG. 1, each camera 16 has a field of view 22. The field of view of each camera 16 overlaps with the fields of view 22 of the two cameras 16 on either side of it. Preferably, the field of view of each camera 16 is at least about 185 degrees horizontally. Referring to FIG. 2, each camera 16 includes an image sensor 24, which is used to generate a digital image taken from the camera's field of view 22. The image sensor 24 may be any suitable type of image sensor, such as, for example, a CCD or a CMOS image sensor.


The digital image generated from the image sensor 24 may be referred to as a preliminary digital image, an example of which is shown at 26 in FIGS. 3a-3d. FIGS. 3a-3d show the preliminary digital images 26 from the four cameras 16. The images 26 are correspondingly identified individually at 26F, 26R, 26D and 26P.


Each preliminary digital image 26 is made up of a plurality of pixels, which are shown at 28 in the magnified image shown in FIG. 4. It will be noted that the pixels 28 are enlarged in FIG. 4 for the sake of clarity. The actual image sensor 24 may have any suitable resolution. For example, it may generate a digital image that is 640 pixels wide by 480 pixels high, or optionally an image that is 720 pixels wide by 480 pixels high, or an image that is 1280 pixels wide by 960 pixels high, or even higher. The output signals from the cameras 16 to the controller 18 may be in analog form, such as in NTSC or PAL format, or in digital form using, for example, LVDS format or Ethernet.


The controller 18 is programmed to generate a final composite digital image, shown at 30 in FIG. 5, that corresponds to a selected digital representation of the combined field of view of the cameras 16 by using a remapping table 32 shown in FIG. 6a to remap selected pixels 28 from each of the preliminary digital images 26 into selected positions of the final composite digital image 30.
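
By way of illustration only, the per-pixel remapping described above can be sketched as a table lookup. The following Python/NumPy snippet is a minimal, hypothetical rendering of that idea; the table layout (a camera index plus a source row and column for every composite pixel) and all array names are assumptions, not the patented implementation.

```python
import numpy as np

def apply_remapping_table(preliminary_images, remap_table):
    """Build a final composite image from preliminary camera images.

    preliminary_images: list of HxWx3 uint8 arrays, one per camera.
    remap_table: Hc x Wc x 3 integer array; for each composite pixel it stores
                 (camera index, source row, source column). A camera index of
                 -1 marks pixels filled elsewhere (e.g. the vehicle overlay).
    """
    h, w, _ = remap_table.shape
    composite = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            cam, sy, sx = remap_table[y, x]
            if cam >= 0:
                composite[y, x] = preliminary_images[cam][sy, sx]
    return composite

# Two tiny stand-in "camera" images mapped side by side into a 4x6 composite:
# the left half comes from camera 0, the right half from camera 1.
cams = [np.full((4, 6, 3), 50, np.uint8), np.full((4, 6, 3), 200, np.uint8)]
table = np.zeros((4, 6, 3), dtype=int)
for y in range(4):
    for x in range(6):
        table[y, x] = (0, y, x) if x < 3 else (1, y, x)
print(apply_remapping_table(cams, table)[:, :, 0])
```

In a real system the table would be precomputed once (during calibration) so that composing each video frame reduces to this simple lookup.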


The digital representation may incorporate one or more operations on the original preliminary digital images 26. For example, the pixels 28 from the original images 26 may be remapped in such a way as to dewarp the images 26. As can be seen in FIG. 5, the warpage present in the images 26 is reduced (in this case it is substantially eliminated) in the final composite digital image 30.


Another operation that may be carried out through the remapping is viewpoint adjustment. Each preliminary digital image 26 has associated therewith an apparent viewpoint, which is the viewpoint from which the camera 16 appears to have captured the image 26. In the preliminary digital images 26, the apparent viewpoint of the camera 16 is the same as the actual viewpoint of the camera 16 because no manipulation of the image 26 has been carried out. However, it may be preferable, when presenting a 360 degree view around the vehicle to the vehicle driver, to present a bird's eye view. To accomplish this, the perspective of the image is adjusted by adjusting the relative sizes of portions of the preliminary images when remapping them to the final composite image 30. For example, objects that are closer to the camera 16 appear larger in the image 26 than objects that are farther from the camera 16. After the apparent viewpoint has been raised, however, as shown in the final composite digital image 30, objects closer to the camera 16 are shrunk so that they are not larger than objects farther from the camera 16.


A graphical representation of the remapping that is carried out is shown in FIG. 6b. In the exemplary embodiment of the present invention, the preliminary digital images 26 were of sufficiently high resolution as compared to the resolution of the final composite image 30 that there is not a need for the controller to ‘stretch’ portions of the preliminary images 26 when generating the map for pixels in the final image 30. In other words, in this particular embodiment, the controller 18 is not required to process a row of 10 pixels from the preliminary image 26 and convert it to a row of 20 pixels in the final image 30. Thus, no pixels in the final image 30 are ‘fabricated’ or generated by the controller 18. Put another way, the preliminary images 26 are of sufficiently high resolution that the image manipulation that is carried out to arrive at the final composite image 30 involves varying amounts of compression of portions of the preliminary image (i.e. removing or skipping pixels), but does not involve stretching of any portions of the preliminary image (which could involve interpolating between pixels and thus ‘creating’ pixels). It is conceivable, however, that the preliminary images would be of relatively lower resolution such that the controller 18 would be relied upon in some instances to stretch portions of the preliminary images 26 when creating the final composite image 30. It will be noted that in the exemplary embodiment, the resolution of each of the preliminary images is 720 pixels wide by 480 pixels high, while the resolution of the final composite image is about 320 pixels wide by 480 pixels high. As can be seen in the image in FIGS. 5 and 6b, a representation of the vehicle 10 itself is inserted in the final composite image 30. While it is preferred that none of the pixels in the final image 30 be ‘created’ through interpolation between adjacent pixels, it is contemplated that in certain situations some pixels may be generated that way (i.e. by interpolating between adjacent pixels) so as to provide a relatively smooth transition between them.
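
The compression-only remapping described above (selecting or skipping source pixels, never interpolating) can be illustrated with a short sketch. The 720-to-320 widths echo the figures mentioned in the paragraph; the function itself is a simplified assumption, not the controller's actual algorithm.

```python
import numpy as np

def compress_row_by_skipping(src_row, dst_width):
    """Map a wider source row onto a narrower destination row by selecting
    (skipping) source pixels, with no interpolation, so every destination
    pixel is an unmodified source pixel."""
    src_width = src_row.shape[0]
    # Nearest source column for each destination column (dst_width <= src_width).
    src_cols = (np.arange(dst_width) * src_width) // dst_width
    return src_row[src_cols]

row_720 = np.arange(720)             # stand-in for one 720-pixel-wide image row
row_320 = compress_row_by_skipping(row_720, 320)
print(row_320[:5], len(row_320))     # e.g. [0 2 4 6 9] 320
```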


Referring to FIG. 5, in the exemplary embodiment, given that the final composite image 30 is only 320 pixels wide, a somewhat-dewarped rear view is also displayed on the display 20 for the vehicle driver, adjacent the 360 degree view.


Referring to FIG. 6c, in some cases, the portion of the preliminary digital image 26 from each individual camera that is used as part of the final composite image 30 may be a selected subset of the pixels of the preliminary digital image 26. The particular subset used from each preliminary digital image is shown in a dashed box at 29 and will vary in position from camera to camera. It will be noted that the dashed box represents the subset of pixels of the associated preliminary digital image 26 that is involved in the generation of image 30, which, for greater certainty, is not to say that each pixel from subset 29 will necessarily be a pixel in the image 30; rather, it is to say that the image 30 contains pixels that relate to or are taken from portion 29 and not from the portion of the preliminary digital image that is outside portion 29. The rest of the image pixels (i.e. the pixels that are outside the portion 29 that is used to generate the composite image 30) are not needed and can be discarded. Only the pixels in the portions 29 are streamed into the memory of the image engine (the term used in this description for the module that generates the composite image 30 using the methods described herein). By discarding the pixels that are outside the portions 29, the memory bandwidth in the image engine can be reduced, so that a slower memory can be utilized, which may have advantages in terms of reducing system cost and/or increasing reliability.
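
A minimal sketch of the bandwidth saving from discarding pixels outside the portions 29 is shown below; the region-of-interest coordinates are hypothetical.

```python
import numpy as np

def stream_subset(preliminary_image, roi):
    """Keep only the subset of pixels (the dashed box 29) that contributes to
    the composite; everything outside the box is discarded before the frame
    is written into the image engine's memory."""
    top, left, height, width = roi
    return preliminary_image[top:top + height, left:left + width].copy()

frame = np.zeros((480, 720, 3), dtype=np.uint8)      # full 720x480 camera frame
roi = (120, 80, 240, 560)                             # hypothetical box for one camera
subset = stream_subset(frame, roi)
full_bytes, kept_bytes = frame.nbytes, subset.nbytes
print(f"kept {kept_bytes} of {full_bytes} bytes "
      f"({100 * kept_bytes / full_bytes:.0f}% of the original bandwidth)")
```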


Aspects of the calibration of the multi-camera system 14 will now be discussed. This calibration is used in order to assist in determining the remapping values in the remapping table 32 (FIG. 6a).


Initially, the cameras 16 are mounted to the vehicle body 12 and the vehicle 10 is positioned at a location (as shown in FIG. 7) whereat there is a predetermined test arrangement 34 of alignment landmarks 36, and dewarping landmarks 38.


In the exemplary test arrangement 34 shown in FIG. 7, it can be seen that the landmarks 38 are straight lines. The preliminary digital image from one of the cameras 16 (e.g., the rear camera) is shown at 40 in FIG. 8a. Three functions are carried out on the preliminary digital images 40 to prepare the final composite image 30 shown in FIG. 8e. The functions are: dewarping, viewpoint adjustment, and offset correction. These functions may be carried out sequentially, and an intermediate remapping table may be generated in association with each function. Referring to FIG. 8a, it can be seen that there is substantial warping in the representations 42 of the landmarks 38 in the preliminary digital image 40. Knowing that the actual landmarks 38 are straight lines, this warping can be compensated for when determining the remapping of the pixels from the preliminary digital image 40 into the dewarped intermediate image shown at 44 in FIG. 8b. As can be seen, the representations shown at 45 of the landmarks 38 are dewarped substantially completely. The remapping necessary to generate the dewarped image 44 may be stored in a first intermediate remapping table shown at 46 in FIG. 9a. It will be understood that a preliminary digital image 40 from each camera 16 will be dewarped to generate a dewarped image 44 and so four first intermediate remapping tables 46 will be generated (i.e. one table 46 for each camera 16).
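
One way such a dewarping step could be derived from known-straight landmarks is to search for a radial correction that makes their observed representations straight again. The sketch below assumes a single-parameter radial model and synthetic data; it is only an illustration of the principle, not the remapping actually used to build table 46.

```python
import numpy as np

def radial_correct(pts, k, center):
    """Apply a one-parameter radial correction p -> center + (p - center)*(1 + k*r^2).
    This simple model is an assumption for illustration; a production system
    would use the camera's actual lens model."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d * (1.0 + k * r2)

def straightness_error(pts):
    """RMS distance of points from their best-fit straight line."""
    p = pts - pts.mean(axis=0)
    return np.linalg.svd(p, compute_uv=False)[-1] / np.sqrt(len(p))

def fit_dewarp_coefficient(observed_lines, center, candidates):
    """Pick the correction k that makes the observed landmark lines straightest."""
    errors = [sum(straightness_error(radial_correct(line, k, center))
                  for line in observed_lines) for k in candidates]
    return candidates[int(np.argmin(errors))]

# Synthetic check: barrel-distort points that lie on a straight landmark line,
# then search for the correction coefficient that best straightens them.
center = np.array([360.0, 240.0])
line = np.stack([np.linspace(60, 660, 25), np.full(25, 100.0)], axis=1)
d = line - center
distorted = center + d * (1.0 - 4e-7 * (d ** 2).sum(axis=1, keepdims=True))
k_best = fit_dewarp_coefficient([distorted], center, np.linspace(0, 1e-6, 201))
print("estimated radial correction:", k_best)
```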


The dewarped image 44 may then be viewpoint adjusted so as to move the apparent viewpoint of the camera 16 upwards to generate a resulting ‘dewarped and viewpoint-adjusted’ image 48 in FIG. 8c, using a second intermediate remapping table shown at 49 in FIG. 9b. The remapping data to be inserted in the second intermediate remapping table 49 may be generated relatively easily by determining what adjustments need to be applied to the longitudinal representations 45a to make them parallel to each other, what adjustments need to be applied to the transverse representations 45b to make them parallel to each other (in this case virtually no adjustment in that regard is required), what adjustments need to be applied to the representations 45 so that they are spaced appropriately from each other, and to make the longitudinal representations 45a extend perpendicularly to the transverse representations 45b, so as to match the known angles at which the actual longitudinal landmarks 38 intersect with the actual transverse landmarks 38. It will be understood that each image 44 will be viewpoint-adjusted to generate a dewarped and viewpoint-adjusted image 48 and so four second intermediate remapping tables 49 will be generated. The representations of the landmarks 38 in FIG. 8c are shown at 50.
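
The parallel/perpendicular adjustment described above amounts to a plane-to-plane perspective transform. As one possible illustration (not the method prescribed by the patent), a homography can be estimated from four landmark intersections using a direct linear transform; the coordinates below are invented for the example.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 plane-to-plane transform (direct linear transform) that
    maps observed landmark intersections (src) to their known top-down ground
    positions (dst). src and dst are Nx2 arrays with N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def warp_point(h, pt):
    """Apply the homography to a single (x, y) image point."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical example: four dewarped landmark intersections seen by one camera,
# and where those same intersections should lie in the bird's-eye (top-down) view.
seen = np.array([[210.0, 300.0], [510.0, 300.0], [120.0, 460.0], [600.0, 460.0]])
topdown = np.array([[200.0, 100.0], [440.0, 100.0], [200.0, 380.0], [440.0, 380.0]])
H = homography_from_points(seen, topdown)
print(warp_point(H, seen[0]))   # approximately [200. 100.]
```

In this sketch, filling the second intermediate remapping table would amount to applying the inverse transform to every destination pixel to find its source pixel.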


The dewarped and viewpoint-adjusted image 48 shown in FIG. 8c from one camera 16 may be compared to the other dewarped and viewpoint-adjusted images 48 from the other cameras 16 to determine whether there is any offset adjustment necessary. This comparison is illustrated in FIG. 8d. The versions of the images 48 shown in FIG. 8d have been greatly simplified and only include a few representations 50 of landmarks 38 and representations 54 of landmarks 36, so as to facilitate explanation and illustration of the comparison that is being carried out. It will be understood however, that the actual comparison that is carried out may be done with all of the representations 50 in the images 48 shown in FIG. 8c.


As can be seen in FIG. 7, the alignment landmarks 36 are arranged in groups 52, shown individually at 52a, 52b, 52c and 52d. Each group 52 is visible to at least two of the cameras 16. As shown in FIG. 8d, each image 48 contains representations 53 of some of the alignment landmarks 36. The groups of representations are identified at 54. It can be seen that the images shown at 48F (front) and 48D (driver's side) both contain representations 54 of the group 52a of landmarks 36. Similarly the images shown at 48F and 48P (passenger side) both contain representations 54 of the group 52b of landmarks 36. Similarly the images shown at 48P and 48R (rear) both contain representations 54 of the group 52c of landmarks 36. Finally, the images shown at 48R and 48D both contain representations 54 of the group 52d of landmarks 36. An X axis and a Y axis are shown at 56 and 58 respectively in FIG. 8d. The X axis and Y axis offsets between the representations 54 of group 52a in image 48F and the representations 54 of group 52a in image 48D are determined, and these offsets can be taken into account when remapping pixels from these two images 48 into the final composite image 30 shown in FIG. 8e to ensure that the final composite image 30 transitions smoothly from pixels taken from image 48F to pixels taken from image 48D. Similarly, the offsets can easily be determined between the representations 54 shown in any two adjacent images 48, and this information can be taken into account when remapping the pixels from the images 48 into the final composite image 30. The remapping information from the images 48 to the final composite image 30 may be stored in a third intermediate remapping table 60 shown in FIG. 9c. It will be understood that only a single remapping table 60 is generated, which remaps pixels from each of the four images 48 into the final composite image 30.
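
A simple way to express the X axis and Y axis offset determination is to average the displacement between matched landmark representations in the two overlapping images, as in the hypothetical sketch below.

```python
import numpy as np

def xy_offset(reps_in_image_a, reps_in_image_b):
    """Average X and Y offset between matched representations 54 of the same
    group of alignment landmarks as seen in two adjacent viewpoint-adjusted
    images; the points must be listed in the same order in both arrays."""
    a = np.asarray(reps_in_image_a, dtype=float)
    b = np.asarray(reps_in_image_b, dtype=float)
    return b.mean(axis=0) - a.mean(axis=0)

# Hypothetical landmark-group positions (group 52a) in images 48F and 48D.
group_in_48F = [(118.0, 42.0), (130.0, 42.0), (118.0, 54.0), (130.0, 54.0)]
group_in_48D = [(121.5, 39.0), (133.5, 39.0), (121.5, 51.0), (133.5, 51.0)]
dx, dy = xy_offset(group_in_48F, group_in_48D)
print(f"shift image 48D by ({-dx:+.1f}, {-dy:+.1f}) pixels before stitching")
```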


Once the four first remapping tables 46, the four second remapping tables 49 and the third remapping table 60 are generated, the remapping table 32 shown in FIG. 6a can be generated by combining the remapping information in all these tables 46, 49 and 60. Once generated, the remapping table 32 may be stored in the permanent storage memory (shown at 80 in FIG. 2) that is part of the camera system 14.
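
Combining the intermediate tables into the single remapping table 32 is essentially a composition of lookups. The sketch below illustrates the idea for two same-sized tables; the per-camera bookkeeping (table 60 selecting among the four source images) is omitted, and the tiny tables are invented for the example.

```python
import numpy as np

def compose_lookup(outer, inner):
    """Chain two per-pixel lookup tables.

    Each table is an H x W x 2 integer array giving, for every destination
    pixel, the (row, col) of its source pixel one stage earlier. Composing
    'outer' (the later stage) with 'inner' (the earlier stage) yields a single
    table that goes straight from the final image back to the earliest image,
    which is how intermediate tables can be collapsed into one stored table.
    """
    rows = outer[..., 0]
    cols = outer[..., 1]
    return inner[rows, cols]

# Tiny hypothetical 2x2 tables: t_60 maps final -> viewpoint-adjusted,
# t_49 maps viewpoint-adjusted -> dewarped.
t_60 = np.array([[[0, 1], [0, 0]],
                 [[1, 1], [1, 0]]])
t_49 = np.array([[[1, 0], [1, 1]],
                 [[0, 0], [0, 1]]])
t_combined = compose_lookup(t_60, t_49)   # final -> dewarped in one step
print(t_combined[0, 0])                   # source pixel in the dewarped image
```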


However, the controller 18 may additionally store one or more of the individual remapping tables for use in generating and displaying an intermediate image. For example, it may be desired to show a dewarped rear view from the vehicle 10 in some instances, such as when the driver is backing the vehicle 10 up. The preliminary digital image 40 from the rear camera 16 can be remapped quickly and easily using the first intermediate remapping table 46 to generate the dewarped rear view image 44. Other viewing modes are also possible and would benefit from having one or more of the intermediate remapping tables stored in the memory 80. For example, a split view showing images from the driver's side and passenger side cameras could be provided.


In the above example, the test arrangement 34 of landmarks 36 and 38 was provided as images painted on the floor of an indoor test area. It will be noted that other means of providing the test arrangement 34 can be used. For example, the test arrangement can be provided on mats placed on the floor of the test area. Alternatively, the test arrangement 34 could be projected onto the floor of the test area using any suitable means, such as one or more lasers, or one or more projectors, or some combination of both.


In the example described above, four cameras are used to generate a 360 degree view around the vehicle, using pixel remapping. It will be understood that the advantages of pixel remapping are not limited to camera systems that employ four cameras. For example, in an alternative embodiment that is not shown, the vehicle may include cameras 16 mounted at each of the front corners and each of the rear corners of the vehicle. Depending on whether the vehicle is leaving a parking spot by driving forward or by backing up, the two front corner cameras or the two rear corner cameras could be used to form a view that shows cross-traffic in front and to the sides of the vehicle, or behind and to the sides of the vehicle. In such an embodiment, a final composite image can be generated using pixel remapping, but would be generated based on images from only two cameras (i.e. the cameras at the two front corners of the vehicle, or alternatively the cameras at the two rear corners of the vehicle).


It will be noted that, while the lines 38 in the test arrangement have been shown as straight lines, they need not be. They may be any suitable selected shape, which is then compared to its representation in the images 40 and 44 to determine how to remap the pixels to reduce warping and to carry out viewpoint adjustment.


In the test arrangement 34 shown in FIG. 7 the alignment landmarks 36 are the intersections between the lines 38. It will be understood however that the alignment landmarks could be other things, such as, for example, a group of unconnected dots arranged in a selected arrangement (e.g., arranged to form a square array).


The above description relates to the calibration of the camera system 14 in a controlled environment using a test arrangement 34 to generate the remapping table 32 for storage in the memory 80.


It may be desirable to permit the controller 18 to calibrate or recalibrate the camera system 14 during driving. To do this, the controller 18 identifies a target feature that appears in an image from one of the cameras 16. The target feature is shown in FIG. 10 at 61 and may be, for example, a crack in the pavement, a lane marker or a piece of gravel. In FIG. 10 numerous examples of possible target features are shown, although the controller 18 need only work with one target feature 61 that will pass on one side of the vehicle, in order to calibrate three of the cameras 16 to each other (i.e. the front camera, the camera on whichever side of the vehicle that the target feature 61 will pass, and the rear camera). At least one target feature 61 needs to be identified that will pass on the other side of the vehicle 10 (although not necessarily at the same time as the first target feature 61), in order to calibrate the camera on the other side of the vehicle to the other three cameras.


As the vehicle 10 is driven (preferably below a selected speed) past the target feature 61, the target feature 61 moves through the field of view of the front camera 16, through the field of view of one of the side cameras 16 and finally through the field of view of the rear camera 16. A representation of the target feature 61 will thus move through images from the front camera 16, then through images from one of the side cameras 16, and then through images from the rear camera 16. By analyzing the movement of the representation of the target feature 61 (e.g. its position, its direction of travel and its speed of movement), particularly as it transitions from the images from one camera into the images from a subsequent camera, the controller 18 can determine X and Y offsets, angular offsets, differences in scale, and possibly other differences, between images from one camera and another. This analysis may be carried out as follows: The controller 18 may start with a default set of remapping values for the remapping table 32 to generate a final composite image 30 from the four images. This default set of remapping values may be based on a simple algorithm to crop the preliminary digital images as necessary, rotate them as necessary and scale them as necessary to fit them in allotted zones 63 (shown individually at 63F, 63R, 63D and 63P) of a preliminary composite image 65 shown in FIG. 11. Optionally, the default remapping values may also achieve dewarping and viewpoint adjustment, based on information obtained during testing in a test area similar to the test area shown in FIG. 7. Alternatively, the default remapping values may be the values in the remapping table 32 from a previous calibration (e.g. a calibration performed at a test area shown in FIG. 7, or a previous calibration performed during driving). As shown in FIG. 11, demarcation lines, shown at 67, mark the boundaries between the zones 63.
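
As a rough illustration of such a default remapping, the following sketch crops and scales each preliminary image into an allotted zone of a preliminary composite image. The zone layout, image sizes and the omission of any rotation step are assumptions made for brevity; they are not taken from the description above.

```python
import numpy as np

def nearest_scale(img, out_h, out_w):
    """Nearest-neighbour scaling (selecting pixels, no interpolation)."""
    rows = (np.arange(out_h) * img.shape[0]) // out_h
    cols = (np.arange(out_w) * img.shape[1]) // out_w
    return img[rows][:, cols]

def default_composite(images, zones, canvas_hw):
    """Fill each allotted zone 63 of the preliminary composite image 65 with a
    cropped-and-scaled preliminary image, as a stand-in for default remapping
    values. zones maps camera name -> (top, left, height, width) on the canvas."""
    canvas = np.zeros((*canvas_hw, 3), dtype=np.uint8)
    for name, (top, left, h, w) in zones.items():
        canvas[top:top + h, left:left + w] = nearest_scale(images[name], h, w)
    return canvas

# Hypothetical layout: front zone on top, rear on the bottom, sides in between.
images = {n: np.full((480, 720, 3), v, np.uint8)
          for n, v in [("front", 60), ("rear", 120), ("driver", 180), ("pass", 240)]}
zones = {"front": (0, 0, 120, 320), "rear": (360, 0, 120, 320),
         "driver": (120, 0, 240, 160), "pass": (120, 160, 240, 160)}
print(default_composite(images, zones, (480, 320)).shape)
```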



FIGS. 12a, 12b and 12c show two adjacent zones 63 and the demarcation line 67 between them, to illustrate the analysis of the movement of the representation of the target feature 61. The adjacent zones in FIGS. 12a, 12b and 12c, are zones 63D and 63R. It will be understood however, that these figures are provided solely to illustrate the analysis that is carried out by the controller 18 on the movement of the representation of the target feature 61 between all applicable pairs of adjacent zones 63.


In FIGS. 12a-12c, the representation is shown at 69 and is shown at two different instants in time in each of the FIGS. 12a-12c. The position of the representation 69 at the first, earlier instant of time is shown at 70a, and at the second, later instant of time at 70b. At the first instant in time, the representation 69 is in the zone 63D. At the second instant of time, the representation 69 is in the zone 63R.


While tracking the movement of the representation 69, if the controller 18 detects that the representation 69 shifts horizontally by some number of pixels (as shown in FIG. 12a) as it crosses the demarcation line 67 (by comparing the positions 70a and 70b of the representation 69 at the two instants in time), then the controller 18 can adjust the remapping values accordingly for one or both of the images that are mapped to the zones 63D and 63R.


With reference to FIG. 12b, while tracking the movement of the representation 69, the controller 18 may store an expected position 70c for the representation 69 at the second instant of time, based on the speed and direction of travel of the representation 69. The controller 18 may compare the actual detected position 70b of the representation 69 at the second instant of time with the expected position 70c of the representation 69 at the second instant of time, and, if there is a vertical offset, the controller 18 can adjust the remapping values accordingly for one or both of the images that are mapped to the zones 63D and 63R.


With reference to FIG. 12c, while tracking the movement of the representation 69, if the controller 18 detects that the representation 69 changes its direction of travel by some angle as it crosses the demarcation line 67 (by deriving a first direction of travel based on positions 70a and 70a′, deriving a second direction of travel based on positions 70b and 70b′, and by comparing the two directions of travel), then the controller 18 can adjust the remapping values accordingly for one or both of the images that are mapped to the zones 63D and 63R.
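
The three checks illustrated in FIGS. 12a-12c (a horizontal jump, a vertical offset from the expected position 70c, and a change in direction of travel) can be summarized in a small helper like the one below. The coordinate convention, parameter names and the sample track are assumptions for illustration only.

```python
import numpy as np

def crossing_offsets(p_a1, p_a2, p_b1, p_b2, dt_cross):
    """Compare how a tracked road-feature representation 69 moves on the two
    sides of a demarcation line 67.

    p_a1, p_a2: its last two observed (x, y) positions in the first zone (e.g. 63D).
    p_b1, p_b2: its first two observed (x, y) positions in the next zone (e.g. 63R).
    dt_cross:   frames elapsed between p_a2 and p_b1.
    Returns the horizontal/vertical offsets of p_b1 from where the first zone's
    motion predicted it, and the change in direction of travel in degrees.
    """
    p_a1, p_a2, p_b1, p_b2 = map(np.asarray, (p_a1, p_a2, p_b1, p_b2))
    velocity = p_a2 - p_a1                       # pixels per frame in the first zone
    expected = p_a2 + velocity * dt_cross        # expected position (70c)
    dx, dy = p_b1 - expected
    angle_a = np.arctan2(*(p_a2 - p_a1)[::-1])   # arctan2(dy, dx)
    angle_b = np.arctan2(*(p_b2 - p_b1)[::-1])
    return dx, dy, np.degrees(angle_b - angle_a)

# Hypothetical track of one feature crossing from zone 63D into zone 63R.
dx, dy, dtheta = crossing_offsets((100, 200), (100, 210), (104, 221), (105, 231), 1)
print(f"horizontal offset {dx:+.1f}px, vertical offset {dy:+.1f}px, rotation {dtheta:+.1f} deg")
```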


It may be that only the remapping values associated with pixels in the immediate vicinity of the representation 69 are adjusted. Thus, the vehicle 10 may drive along while the controller 18 scans for and detects target features 61 at different lateral positions on the road, so that different portions of the remapping table 32 are adjusted. Alternatively, the vehicle 10 may drive along while the controller 18 scans for and detects multiple target features 61 at different lateral positions across each demarcation line 67. At a selected point in time (e.g., after having detected target features 61 at a selected number of lateral positions along the demarcation line 67), the controller 18 may then determine a formula (or set of formulas) that could be used to remap the entire area along the demarcation line 67 as a whole, based on the changes in the positions of the representations 69. The controller 18 then uses that formula (or set of formulas) to remap the entire area around the border. For greater certainty, the formula or formulas may be linear or nonlinear.
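
A hypothetical sketch of fitting such a formula to the offsets observed at several lateral positions along a demarcation line is shown below; the measurements and the choice of a linear fit are invented for the example.

```python
import numpy as np

# Hypothetical measurements: lateral position (in pixels along the demarcation
# line 67) of several detected target features, and the horizontal offset that
# was observed for each one as its representation crossed the line.
lateral_positions = np.array([ 20.0,  60.0, 110.0, 170.0, 230.0, 290.0])
observed_offsets  = np.array([  1.2,   1.9,   3.1,   4.8,   6.4,   8.1])

# Fit a simple linear formula offset(l) = a*l + b; a quadratic or other
# nonlinear fit could be used instead, as the description allows.
a, b = np.polyfit(lateral_positions, observed_offsets, deg=1)

def correction_at(lateral):
    """Offset to apply when remapping pixels at this position along the line."""
    return a * lateral + b

print(f"offset(l) = {a:.4f}*l + {b:.2f}; at l=150: {correction_at(150.0):.2f} px")
```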


After detecting a target feature 61 at a particular lateral position on the road, and adjusting a portion of the remapping table 32 through the techniques described above, the controller 18 may also scan for and detect a second target feature 61 at approximately the same lateral position on the road and apply these techniques again, in order to improve the accuracy of the adjustments to the remapping values.


In many situations (e.g. after a malfunctioning or damaged camera has been replaced in the vehicle, or simply due to a shift in the position of a camera over time in the vehicle) it may be that a camera is no longer in the same position and orientation as it was before. As a result, during the calibration procedure some pixels will require a change in their remapping due to new changes that occur to representations 69 as they cross demarcation lines 67. If the changes to the remapping are only carried out in the immediate vicinity of the affected pixels, then there will be a misalignment of those pixels with other pixels that are not changed. If the changes are made to all the pixels in an image 26, then this could cause a problem with the remapping of pixels at the other demarcation line 67 at the other end of the image 26. To address this issue, when a new remapping is carried out on a selected pixel, the remapping is carried out in progressively diminishing amounts on a range of adjacent pixels. For example, if during a calibration it is determined that a particular pixel should be shifted 5 pixels laterally, a selected first number of pixels longitudinally adjacent to that pixel will be shifted 5 pixels laterally, a selected second number of pixels longitudinally adjacent to the first number of pixels will be shifted 4 pixels laterally, a selected third number of pixels adjacent to the second number of pixels will be shifted 3 pixels laterally, and so on until there is no lateral shift to carry out. This effectively smooths out the remapping of the pixels. As an example, in a car in which the front camera is damaged in a traffic accident and is replaced, a recalibration will be carried out, and the controller 18 may detect that the remapping that applied at the front left and right demarcation lines 67 no longer works. The controller 18 may determine a new remapping for these regions. However, the remapping that occurs at the rear left and right demarcation lines is still good, since the left, right and rear cameras have not been moved. To address this, the controller 18 may remap a selected number of pixels (e.g. 50 pixels) rearward of the newly remapped pixels along the front left and right demarcation lines 67, in groups, by progressively smaller amounts, eventually reducing the remapping to zero. No remapping of pixels takes place along the rear left and right demarcation lines 67.
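
The progressively diminishing remapping described above can be illustrated as a simple taper, sketched below using the 5-pixel shift and 50-pixel groups from the example; the helper is illustrative only.

```python
import numpy as np

def tapered_shifts(peak_shift, group_size):
    """Spread a remapping change over neighbouring pixel groups in progressively
    diminishing amounts, as in the 5-4-3-2-1-0 example above: the first group
    gets the full shift, each following group one pixel less, down to zero."""
    steps = np.arange(peak_shift, -1, -1)            # e.g. 5, 4, 3, 2, 1, 0
    return np.repeat(steps, group_size)

# A pixel column near a front demarcation line needs a 5-pixel lateral shift;
# taper it out over groups of 50 pixels rearward so the unchanged rear
# demarcation lines are left untouched.
shifts = tapered_shifts(5, 50)
print(shifts[:3], shifts[50:53], shifts[-3:], len(shifts))   # [5 5 5] [4 4 4] [0 0 0] 300
```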


After a selected period of time of driving, or after detecting enough target features at enough lateral positions to ensure that a sufficient amount of adjustment of the remapping table has been made, the controller 18 may end the calibration process.


The particular cameras 16 that are used in the camera system 14 may be any suitable cameras. One example of an acceptable camera is a ReversAid camera made by Magna Electronics, an operating unit of Magna International Inc. of Aurora, Ontario, Canada.


The camera or vision system includes a display screen that is in communication with a video line and that is operable to display images captured by the camera or camera module. The display screen may be disposed in an interior rearview mirror assembly of the vehicle, and may comprise a video mirror display screen, with video information displayed by the display screen being viewable through a transflective mirror reflector of the mirror reflective element of the interior rearview mirror assembly of the vehicle. For example, the camera or camera module may be disposed at a rearward portion of the vehicle and may have a rearward facing field of view. The display screen may be operable to display images captured by the rearward viewing camera during a reversing maneuver of the vehicle.


Surround view/panoramic vision/birds-eye vision multi-camera systems are known, such as described in U.S. Pat. Nos. 6,275,754; 6,285,393; 6,483,429; 6,498,620; 6,564,130; 6,621,421; 6,636,258; 6,819,231; 6,917,378; 6,970,184; 6,989,736; 7,012,549; 7,058,207; 7,071,964; 7,088,262; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,280,124; 7,295,227; 7,295,229; 7,301,466; 7,317,813; 7,369,940; 7,463,281; 7,468,745; 7,519,459; 7,592,928; 7,680,570; 7,697,027; 7,697,029; 7,742,070; 7,768,545 and/or 7,782,374, and/or U.S. Publication Nos. 2003/0137586; 2005/0030379; 2005/0174429; 2005/0203704; 2007/0021881; 2007/0165909; 2008/0036857; 2008/0144924; 2009/0179773 and/or 2010/0013930, and/or International Publication Nos. WO 2000/064175; WO 2005/074287; WO 2007/049266; WO 2008/044589; WO 2009/095901; WO 2009/132617; and/or WO 2011/014482, and/or European Pat. Publication Nos. EP1022903; EP1179958; EP1197937; EP1355285; EP1377062; EP1731366 and/or EP1953698, and/or MURPHY, TOM, “Looking Back to the Future—How hard can it be to eliminate a driver's blindspot?”, Ward's AutoWorld, May 1, 1998, which are all hereby incorporated herein by reference in their entireties. Such systems benefit from the present invention.


The video display is operable to display a merged or composite image to provide a panoramic or surround view for viewing by the driver of the vehicle. The vision system may utilize aspects of the vision and display systems described in U.S. Pat. Nos. 5,550,677; 5,670,935; 6,498,620; 6,222,447 and/or 5,949,331, and/or PCT Application No. PCT/US2011/061124, filed Nov. 17, 2011, and/or PCT Application No. PCT/US2010/025545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, which are hereby incorporated herein by reference in their entireties.


Optionally, the video display may display other images, and may display a surround view or bird's-eye view or panoramic-view images or representations at the display screen, such as by utilizing aspects of the display systems described in PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or U.S. provisional applications, Ser. No. 61/540,256, filed Sep. 28, 2011; Ser. No. 61/466,138, filed Mar. 22, 2011; Ser. No. 61/452,816, filed Mar. 15, 2011; and Ser. No. 61/426,328, filed Dec. 22, 2010, which are all hereby incorporated herein by reference in their entireties. Examples of bird's eye view systems and associated techniques are described in U.S. Pat. Nos. 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466 and/or 7,592,928, and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, which are hereby incorporated herein by reference in their entireties. Optionally, the camera and video display may operate to display other images, and may display a trailer angle or the like of a trailer behind the vehicle.


The vision display system may operate to display the rearward images at the video mirror display, and may do so responsive to the driver of the vehicle shifting the vehicle into a reverse gear (such as by utilizing aspects of the vision systems described in U.S. Pat. Nos. 5,550,677; 5,670,935; 6,498,620; 6,222,447 and/or 5,949,331, and/or PCT Application No. PCT/US2011/056295, filed Oct. 14, 2011, which are hereby incorporated herein by reference in their entireties).


Optionally, the system of the present invention may utilize aspects of the vision systems and lane departure systems and/or lane change aids and/or side object detection systems of the types described in U.S. Pat. Nos. 7,914,187; 7,720,580; 7,526,103; 7,038,577; 7,004,606; 6,946,978; 6,882,287 and/or 6,396,397, and/or PCT Application No. PCT/US2011/059089, filed Nov. 3, 2011, which are hereby incorporated herein by reference in their entireties.


The imaging sensor or camera that captures the image data for image processing may comprise any suitable camera or sensing device, such as, for example, an array of a plurality of photosensor elements arranged in 640 columns and 480 rows (a 640×480 imaging array), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. The camera or imaging sensor and/or the logic and control circuit of the imaging sensor may function in any known manner, such as by utilizing aspects of the vision or imaging systems described in U.S. Pat. Nos. 6,806,452; 6,690,268; 7,005,974; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454; 6,824,281; 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094 and/or 6,396,397, and/or PCT Application No. PCT/US2010/028621, filed Mar. 25, 2010, which are all hereby incorporated herein by reference in their entireties.


The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras and vision systems described in U.S. Pat. Nos. 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454 and 6,824,281, and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010, and/or U.S. patent application Ser. No. 12/508,840, filed Jul. 24, 2009, and published Jan. 28, 2010 as U.S. Pat. Publication No. US 2010-0020170, which are all hereby incorporated herein by reference in their entireties. The camera or cameras may comprise any suitable cameras or imaging sensors or camera modules, and may utilize aspects of the cameras or sensors described in U.S. patent application Ser. No. 12/091,359, filed Apr. 24, 2008 and published Oct. 1, 2009 as U.S. Publication No. US-2009-0244361, and/or U.S. Pat. Nos. 7,965,336 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties. The imaging array sensor may comprise any suitable sensor, and may utilize various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like, such as the types described in U.S. Pat. Nos. 7,965,336; 5,550,677; 5,670,935; 5,760,962; 5,715,093; 5,877,897; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 6,498,620; 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 6,806,452; 6,396,397; 6,822,563; 6,946,978; 7,339,149; 7,038,577; 7,004,606 and/or 7,720,580, and/or PCT Application No. PCT/US2008/076022, filed Sep. 11, 2008 and published Mar. 19, 2009 as International Publication No. WO 2009/036176, and/or PCT Application No. PCT/US2008/078700, filed Oct. 3, 2008 and published Apr. 9, 2009 as International Publication No. WO 2009/046268, which are all hereby incorporated herein by reference in their entireties.


The camera module and circuit chip or board and imaging sensor may be implemented and operated in connection with various vehicular vision-based systems, and/or may be operable utilizing the principles of such other vehicular systems, such as a vehicle headlamp control system, such as the type disclosed in U.S. Pat. Nos. 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 7,004,606; 7,339,149 and/or 7,526,103, which are all hereby incorporated herein by reference in their entireties, a rain sensor, such as the types disclosed in commonly assigned U.S. Pat. Nos. 6,353,392; 6,313,454; 6,320,176 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties, a vehicle vision system, such as a forwardly, sidewardly or rearwardly directed vehicle vision system utilizing principles disclosed in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,877,897; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978 and/or 7,859,565, which are all hereby incorporated herein by reference in their entireties, a trailer hitching aid or tow check system, such as the type disclosed in U.S. Pat. No. 7,005,974, which is hereby incorporated herein by reference in its entirety, a reverse or sideward imaging system, such as for a lane change assistance system or lane departure warning system or for a blind spot or object detection system, such as imaging or detection systems of the types disclosed in U.S. Pat. Nos. 7,881,496; 7,720,580; 7,038,577; 5,929,786 and/or 5,786,772, which are hereby incorporated herein by reference in their entireties, a video device for internal cabin surveillance and/or video telephone function, such as disclosed in U.S. Pat. Nos. 5,760,962; 5,877,897; 6,690,268 and/or 7,370,983, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties, a traffic sign recognition system, a system for determining a distance to a leading or trailing vehicle or object, such as a system utilizing the principles disclosed in U.S. Pat. Nos. 6,396,397 and/or 7,123,168, which are hereby incorporated herein by reference in their entireties, and/or the like.


Optionally, the circuit board or chip may include circuitry for the imaging array sensor and/or other electronic accessories or features, such as by utilizing compass-on-a-chip or EC driver-on-a-chip technology and aspects such as described in U.S. Pat. Nos. 7,255,451 and/or 7,480,149; and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, and/or Ser. No. 12/578,732, filed Oct. 14, 2009 and published Apr. 22, 2010 as U.S. Publication No. US-2010-0097469, which are hereby incorporated herein by reference in their entireties.


Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device disposed at or in the interior rearview mirror assembly of the vehicle, such as by utilizing aspects of the video mirror display systems described in U.S. Pat. Nos. 6,690,268; 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,677,851; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,508; 6,222,460; 6,513,252; 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,446,924; 7,370,983; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187; 7,338,177; 5,910,854; 6,420,036 and/or 6,642,851, and/or European patent application, published Oct. 11, 2000 under Publication No. EP 0 1043566, and/or PCT Application No. PCT/US2011/056295, filed Oct. 14, 2011, and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008; and/or Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, and/or U.S. provisional applications, Ser. No. 61/466,138, filed Mar. 22, 2011; Ser. No. 61/452,816, filed Mar. 15, 2011; and Ser. No. 61/426,328, filed Dec. 22, 2010, which are hereby incorporated herein by reference in their entireties.


Optionally, the display or displays and any associated user inputs may be associated with various accessories or systems, such as, for example, a tire pressure monitoring system or a passenger air bag status or a garage door opening system or a telematics system or any other accessory or system of the mirror assembly or of the vehicle or of an accessory module or console of the vehicle, such as an accessory module or console of the types described in U.S. Pat. Nos. 7,289,037; 6,877,888; 6,824,281; 6,690,268; 6,672,744; 6,386,742 and 6,124,886, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties.


While the above description constitutes a plurality of embodiments of the present invention, it will be appreciated that the present invention is susceptible to further modification and change without departing from the fair meaning of the accompanying claims.

Claims
  • 1. A vehicular vision system, the vehicular vision system comprising: a first camera disposed at a vehicle and having a first field of view that includes a road surface of a road along which the vehicle is traveling, the first camera capturing first image data while the vehicle is moving along the road; a second camera disposed at the vehicle and having a second field of view that includes the road surface of the road along which the vehicle is traveling, the second camera capturing second image data while the vehicle is moving along the road; wherein the first field of view of the first camera at least partially overlaps the second field of view of the second camera; an image processor disposed at the vehicle; wherein first image data captured by the first camera is provided to the image processor, and wherein second image data captured by the second camera is provided to the image processor; wherein, while the vehicle is moving along the road, the image processor processes the provided first image data captured by the first camera and the provided second image data captured by the second camera; wherein, while the vehicle is moving along the road, the vehicular vision system, via processing by the image processor of first image data captured by the first camera, determines relative movement of a road feature present in first image data captured by the first camera, the determined relative movement of the road feature present in first image data including speed and direction of travel of the road feature relative to the vehicle; wherein, while the vehicle is moving along the road, the vehicular vision system, via processing by the image processor of second image data captured by the second camera, determines relative movement of the road feature relative to the vehicle when the road feature is present in second image data captured by the second camera, the determined relative movement of the road feature present in second image data including speed and direction of travel of the road feature relative to the vehicle; wherein the determined movement of the road feature relative to the vehicle in first image data captured by the first camera is compared to the determined movement of the road feature relative to the vehicle in second image data captured by the second camera; wherein, based on the determined relative movement of the road feature present in the first image data captured by the first camera, an expected position of the road feature relative to the vehicle at a first time is determined, the expected position of the road feature being representative of a position of the road feature when the road feature is in the second field of view of the second camera; wherein, while the vehicle is moving along the road, and via processing by the image processor of the second image data captured by the second camera, the vehicular vision system determines position of the road feature relative to the vehicle at the first time; wherein the expected position of the road feature relative to the vehicle at the first time is compared to the determined position of the road feature relative to the vehicle at the first time, and wherein a vertical offset of the first image data and the second image data is determined based on the comparison of the expected position of the road feature and the determined position of the road feature; wherein, responsive to comparison of the determined movements of the road feature relative to the vehicle, at least a rotational offset of the second camera at the vehicle relative to the first camera at the vehicle is determined; wherein, responsive to determination of the rotational offset of the second camera at the vehicle relative to the first camera at the vehicle and determination of the vertical offset of the first image data and the second image data, the first image data and the second image data are remapped based at least in part on the determined rotational offset and determined vertical offset to at least partially accommodate misalignment of the second camera relative to the first camera; wherein the remapped first image data and the remapped second image data are stored in a remapping table; wherein the vehicular vision system generates composite images based at least in part on the remapping table and the first image data captured by the first camera and second image data captured by the second camera; and a video display screen disposed in the vehicle and viewable by a driver of the vehicle, wherein the video display screen displays surround view video images derived from at least the composite images.
  • 2. The vehicular vision system of claim 1, wherein the first camera is mounted at the vehicle so as to have at least a sideward field of view, and wherein the second camera is mounted at the vehicle so as to have at least a rearward field of view.
  • 3. The vehicular vision system of claim 2, wherein the first camera is mounted at an exterior mirror assembly mounted at a side portion of the vehicle, and wherein the second camera is mounted at a rear portion of the vehicle.
  • 4. The vehicular vision system of claim 1, wherein the first camera is mounted at the vehicle so as to have at least a forward field of view, and wherein the second camera is mounted at the vehicle so as to have at least a sideward field of view.
  • 5. The vehicular vision system of claim 4, wherein the first camera is mounted at a front portion of the vehicle, and wherein the second camera is mounted at an exterior mirror assembly mounted at a side portion of the vehicle.
  • 6. The vehicular vision system of claim 4, comprising a third camera disposed at the vehicle and having at least a rearward field of view.
  • 7. The vehicular vision system of claim 6, wherein, while the vehicle is moving along the road, (i) the vehicular vision system, via processing by the image processor of third image data captured by the third camera, determines movement of the road feature relative to the vehicle when the road feature is present in third image data captured by the third camera, (ii) the vehicular vision system compares the determined movement of the road feature relative to the vehicle in second image data captured by the second camera to the determined movement of the road feature relative to the vehicle in third image data captured by the third camera, and (iii) responsive to comparing the determined movements of the road feature relative to the vehicle, at least a rotational offset of the third camera at the vehicle relative to the second camera at the vehicle is determined.
  • 8. The vehicular vision system of claim 1, wherein, responsive to comparison of the determined movements of the road feature relative to the vehicle, a translational offset of the second camera at the vehicle relative to the first camera at the vehicle is determined, and wherein, responsive to determination of the translational offset of the second camera at the vehicle relative to the first camera at the vehicle, the first image data and the second image data are remapped based at least in part on the determined translational offset to at least partially accommodate misalignment of the second camera relative to the first camera.
  • 9. The vehicular vision system of claim 8, wherein the vehicular vision system compares the determined movements of the road feature relative to the vehicle to determine a horizontal shift of the road feature relative to the vehicle while the vehicle is moving along the road.
  • 10. The vehicular vision system of claim 8, wherein the vehicular vision system compares the determined movements of the road feature relative to the vehicle to determine the vertical offset of the road feature relative to the vehicle in the second image data captured by the second camera as compared to the first image data captured by the first camera while the vehicle is moving along the road.
  • 11. The vehicular vision system of claim 1, wherein the vehicular vision system compares the determined movements of the road feature relative to the vehicle to determine a rotation of the road feature relative to the vehicle in the second image data captured by the second camera as compared to the first image data captured by the first camera while the vehicle is moving along the road.
  • 12. The vehicular vision system of claim 1, wherein processing by the image processor of first image data captured by the first camera and second image data captured by the second camera includes dewarping the first image data and the second image data.
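Claims 7 through 11 recover the relative rotational offset and the horizontal and vertical shifts between two cameras by comparing how the same road feature moves in each camera's image data. One way such a comparison could be carried out, offered only as a hedged sketch, is to fit a 2-D rigid transform between the matched feature positions from the two cameras (here assumed already projected to a common ground-plane view) using a least-squares, Procrustes-style estimate; the synthetic track data and the rigid-transform model are illustrative assumptions, not the patented method.

```python
import numpy as np

def estimate_offset(pts_cam_a, pts_cam_b):
    """Fit a 2-D rigid transform pts_b ~= R @ pts_a + t.

    pts_cam_a, pts_cam_b: (N, 2) arrays of the same road feature's
    positions over time as seen (and ground-projected) by each camera.
    Returns (rotation_degrees, horizontal_shift, vertical_shift).
    """
    a = np.asarray(pts_cam_a, dtype=float)
    b = np.asarray(pts_cam_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                            # best-fit rotation
    if np.linalg.det(R) < 0:                  # guard against reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return angle, t[0], t[1]

# Synthetic check: camera B sees the same track rotated by 2 degrees and
# shifted by (3, -1) relative to camera A.
rng = np.random.default_rng(0)
track_a = rng.uniform(0, 100, size=(50, 2))
theta = np.radians(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
track_b = track_a @ R_true.T + np.array([3.0, -1.0])
rot, dx, dy = estimate_offset(track_a, track_b)
# rot ~= 2.0, dx ~= 3.0, dy ~= -1.0
```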
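Claim 12 recites dewarping of the first and second image data. A minimal sketch of one common dewarping approach, an inverse remap assuming an equidistant fisheye lens model with nearest-neighbor sampling, is shown below; the focal lengths and the synthetic input frame are illustrative values only and are not taken from the patent.

```python
import numpy as np

def dewarp_equidistant(fisheye_img, f_out, f_fish):
    """Remap an equidistant-fisheye image onto a rectilinear view.

    For each output pixel, compute its viewing angle through a pinhole
    model with focal length f_out, then look up the corresponding
    fisheye pixel at radius r = f_fish * theta (equidistant model).
    """
    h, w = fisheye_img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = xx - cx, yy - cy
    r_out = np.hypot(dx, dy)
    theta = np.arctan2(r_out, f_out)          # angle off the optical axis
    r_fish = f_fish * theta                   # equidistant projection
    scale = np.where(r_out > 0, r_fish / np.maximum(r_out, 1e-9), 0.0)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    return fisheye_img[src_y, src_x]

# Usage on a synthetic gradient frame standing in for captured image data.
frame = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (480, 1))
flat = dewarp_equidistant(frame, f_out=400.0, f_fish=300.0)
```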
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 15/899,105, filed Feb. 19, 2018, now U.S. Pat. No. 10,868,974, which is a continuation of U.S. patent application Ser. No. 13/990,902, filed May 31, 2013, now U.S. Pat. No. 9,900,522, which is a 371 national phase application of PCT Application No. PCT/US2011/062834, filed Dec. 1, 2011, which claims the priority benefit of U.S. provisional applications, Ser. No. 61/482,786, filed May 5, 2011, and Ser. No. 61/418,499, filed Dec. 1, 2010.

US Referenced Citations (585)
Number Name Date Kind
2632040 Rabinow Mar 1953 A
2827594 Rabinow Mar 1958 A
3985424 Steinacher Oct 1976 A
4037134 Loper Jul 1977 A
4200361 Malvano et al. Apr 1980 A
4214266 Myers Jul 1980 A
4218698 Bart et al. Aug 1980 A
4236099 Rosenblum Nov 1980 A
4247870 Gabel et al. Jan 1981 A
4249160 Chilvers Feb 1981 A
4254931 Aikens et al. Mar 1981 A
4266856 Wainwright May 1981 A
4277804 Robison Jul 1981 A
4281898 Ochiai et al. Aug 1981 A
4288814 Talley et al. Sep 1981 A
4355271 Noack Oct 1982 A
4357558 Massoni et al. Nov 1982 A
4381888 Momiyama May 1983 A
4390895 Sato et al. Jun 1983 A
4420238 Felix Dec 1983 A
4431896 Lodetti Feb 1984 A
4443057 Bauer et al. Apr 1984 A
4460831 Oettinger et al. Jul 1984 A
4481450 Watanabe et al. Nov 1984 A
4491390 Tong-Shen Jan 1985 A
4512637 Ballmer Apr 1985 A
4521804 Bendell Jun 1985 A
4529275 Ballmer Jul 1985 A
4529873 Ballmer et al. Jul 1985 A
4532550 Bendell et al. Jul 1985 A
4546551 Franks Oct 1985 A
4549208 Kamejima et al. Oct 1985 A
4571082 Downs Feb 1986 A
4572619 Reininger et al. Feb 1986 A
4580875 Bechtel et al. Apr 1986 A
4600913 Caine Jul 1986 A
4603946 Kato et al. Aug 1986 A
4614415 Hyatt Sep 1986 A
4620141 McCumber et al. Oct 1986 A
4623222 Itoh et al. Nov 1986 A
4626850 Chey Dec 1986 A
4629941 Ellis et al. Dec 1986 A
4630109 Barton Dec 1986 A
4632509 Ohmi et al. Dec 1986 A
4638287 Umebayashi et al. Jan 1987 A
4645975 Meitzler et al. Feb 1987 A
4647161 Muller Mar 1987 A
4653316 Fukuhara Mar 1987 A
4669825 Itoh et al. Jun 1987 A
4669826 Itoh et al. Jun 1987 A
4671615 Fukada et al. Jun 1987 A
4672457 Hyatt Jun 1987 A
4676601 Itoh et al. Jun 1987 A
4690508 Jacob Sep 1987 A
4692798 Seko et al. Sep 1987 A
4693788 Berg et al. Sep 1987 A
4697883 Suzuki et al. Oct 1987 A
4701022 Jacob Oct 1987 A
4713685 Nishimura et al. Dec 1987 A
4717830 Botts Jan 1988 A
4727290 Smith et al. Feb 1988 A
4731669 Hayashi et al. Mar 1988 A
4731769 Schaefer et al. Mar 1988 A
4741603 Miyagi et al. May 1988 A
4758883 Kawahara et al. Jul 1988 A
4768135 Kretschmer et al. Aug 1988 A
4772942 Tuck Sep 1988 A
4789904 Peterson Dec 1988 A
4793690 Gahan et al. Dec 1988 A
4817948 Simonelli Apr 1989 A
4820933 Hong et al. Apr 1989 A
4825232 Howdle Apr 1989 A
4833534 Paff et al. May 1989 A
4838650 Stewart et al. Jun 1989 A
4847772 Michalopoulos et al. Jul 1989 A
4855822 Narendra et al. Aug 1989 A
4859031 Berman et al. Aug 1989 A
4862037 Farber et al. Aug 1989 A
4867561 Fujii et al. Sep 1989 A
4871917 O'Farrell et al. Oct 1989 A
4872051 Dye Oct 1989 A
4881019 Shiraishi et al. Nov 1989 A
4882565 Gallmeyer Nov 1989 A
4886960 Molyneux et al. Dec 1989 A
4891559 Matsumoto et al. Jan 1990 A
4892345 Rachael, III Jan 1990 A
4895790 Swanson et al. Jan 1990 A
4896030 Miyaji Jan 1990 A
4900133 Berman Feb 1990 A
4907870 Brucker Mar 1990 A
4910591 Petrossian et al. Mar 1990 A
4916374 Schierbeek et al. Apr 1990 A
4917477 Bechtel et al. Apr 1990 A
4937796 Tendler Jun 1990 A
4953305 Van Lente et al. Sep 1990 A
4956591 Schierbeek et al. Sep 1990 A
4961625 Wood et al. Oct 1990 A
4966441 Conner Oct 1990 A
4967319 Seko Oct 1990 A
4970653 Kenue Nov 1990 A
4971430 Lynas Nov 1990 A
4974078 Tsai Nov 1990 A
4987357 Masaki Jan 1991 A
4987410 Berman et al. Jan 1991 A
5001558 Burley et al. Mar 1991 A
5003288 Wilhelm Mar 1991 A
5012082 Watanabe Apr 1991 A
5016977 Baude et al. May 1991 A
5027001 Torbert Jun 1991 A
5027200 Petrossian et al. Jun 1991 A
5044706 Chen Sep 1991 A
5050966 Berman Sep 1991 A
5055668 French Oct 1991 A
5059877 Teder Oct 1991 A
5064274 Alten Nov 1991 A
5072154 Chen Dec 1991 A
5075768 Wirtz et al. Dec 1991 A
5086253 Lawler Feb 1992 A
5096287 Kakinami et al. Mar 1992 A
5097362 Lynas Mar 1992 A
5121200 Choi Jun 1992 A
5124549 Michaels et al. Jun 1992 A
5130709 Toyama et al. Jul 1992 A
5148014 Lynam et al. Sep 1992 A
5166681 Bottesch et al. Nov 1992 A
5168378 Black Dec 1992 A
5170374 Shimohigashi et al. Dec 1992 A
5172235 Wilm et al. Dec 1992 A
5172317 Asanuma et al. Dec 1992 A
5177606 Koshizawa Jan 1993 A
5177685 Davis et al. Jan 1993 A
5182502 Slotkowski et al. Jan 1993 A
5184956 Langlais et al. Feb 1993 A
5189561 Hong Feb 1993 A
5193000 Lipton et al. Mar 1993 A
5193029 Schofield et al. Mar 1993 A
5204778 Bechtel Apr 1993 A
5208701 Maeda May 1993 A
5208750 Kurami et al. May 1993 A
5214408 Asayama May 1993 A
5243524 Ishida et al. Sep 1993 A
5245422 Borcherts et al. Sep 1993 A
5253109 O'Farrell et al. Oct 1993 A
5276389 Levers Jan 1994 A
5285060 Larson et al. Feb 1994 A
5289182 Brillard et al. Feb 1994 A
5289321 Secor Feb 1994 A
5305012 Faris Apr 1994 A
5307136 Saneyoshi Apr 1994 A
5309137 Kajiwara May 1994 A
5313072 Vachss May 1994 A
5325096 Pakett Jun 1994 A
5325386 Jewell et al. Jun 1994 A
5329206 Slotkowski et al. Jul 1994 A
5331312 Kudoh Jul 1994 A
5336980 Levers Aug 1994 A
5341437 Nakayama Aug 1994 A
5343206 Ansaldi et al. Aug 1994 A
5351044 Mathur et al. Sep 1994 A
5355118 Fukuhara Oct 1994 A
5359666 Nakayama et al. Oct 1994 A
5374852 Parkes Dec 1994 A
5386285 Asayama Jan 1995 A
5394333 Kao Feb 1995 A
5406395 Wilson et al. Apr 1995 A
5408346 Trissel et al. Apr 1995 A
5410346 Saneyoshi et al. Apr 1995 A
5414257 Stanton May 1995 A
5414461 Kishi et al. May 1995 A
5416313 Larson et al. May 1995 A
5416318 Hegyi May 1995 A
5416478 Morinaga May 1995 A
5424952 Asayama Jun 1995 A
5426294 Kobayashi et al. Jun 1995 A
5430431 Nelson Jul 1995 A
5434407 Bauer et al. Jul 1995 A
5440428 Hegg et al. Aug 1995 A
5444478 Lelong et al. Aug 1995 A
5451822 Bechtel et al. Sep 1995 A
5457493 Leddy et al. Oct 1995 A
5461357 Yoshioka et al. Oct 1995 A
5461361 Moore Oct 1995 A
5469298 Suman et al. Nov 1995 A
5471515 Fossum et al. Nov 1995 A
5475494 Nishida et al. Dec 1995 A
5487116 Nakano et al. Jan 1996 A
5498866 Bendicks et al. Mar 1996 A
5500766 Stonecypher Mar 1996 A
5510983 Lino Apr 1996 A
5515448 Nishitani May 1996 A
5521633 Nakajima et al. May 1996 A
5528698 Kamei et al. Jun 1996 A
5529138 Shaw et al. Jun 1996 A
5530240 Larson et al. Jun 1996 A
5530420 Tsuchiya et al. Jun 1996 A
5535144 Kise Jul 1996 A
5535314 Alves et al. Jul 1996 A
5537003 Bechtel et al. Jul 1996 A
5539397 Asanuma et al. Jul 1996 A
5541590 Nishio Jul 1996 A
5550677 Schofield et al. Aug 1996 A
5555312 Shima et al. Sep 1996 A
5555555 Sato et al. Sep 1996 A
5559695 Daily Sep 1996 A
5568027 Teder Oct 1996 A
5574443 Hsieh Nov 1996 A
5581464 Woll et al. Dec 1996 A
5594222 Caldwell Jan 1997 A
5614788 Mullins Mar 1997 A
5619370 Guinosso Apr 1997 A
5634709 Iwama Jun 1997 A
5638116 Shimoura et al. Jun 1997 A
5642299 Hardin et al. Jun 1997 A
5648835 Uzawa Jul 1997 A
5650944 Kise Jul 1997 A
5657073 Henley Aug 1997 A
5660454 Mori et al. Aug 1997 A
5661303 Teder Aug 1997 A
5666028 Bechtel et al. Sep 1997 A
5668663 Varaprasad et al. Sep 1997 A
5670935 Schofield et al. Sep 1997 A
5675489 Pomerleau Oct 1997 A
5677851 Kingdon et al. Oct 1997 A
5699044 Van Lente et al. Dec 1997 A
5715093 Schierbeek et al. Feb 1998 A
5724187 Varaprasad et al. Mar 1998 A
5724316 Brunts Mar 1998 A
5737226 Olson et al. Apr 1998 A
5757949 Kinoshita et al. May 1998 A
5760826 Nayar Jun 1998 A
5760828 Cortes Jun 1998 A
5760931 Saburi et al. Jun 1998 A
5760962 Schofield et al. Jun 1998 A
5761094 Olson et al. Jun 1998 A
5765116 Wilson-Jones et al. Jun 1998 A
5781437 Wiemer et al. Jul 1998 A
5786772 Schofield et al. Jul 1998 A
5790403 Nakayama Aug 1998 A
5790973 Blaker et al. Aug 1998 A
5793308 Rosinski et al. Aug 1998 A
5793420 Schmidt Aug 1998 A
5796094 Schofield et al. Aug 1998 A
5798575 O'Farrell et al. Aug 1998 A
5835255 Miles Nov 1998 A
5837994 Stam et al. Nov 1998 A
5844505 Van Ryzin Dec 1998 A
5844682 Kiyomoto et al. Dec 1998 A
5845000 Breed et al. Dec 1998 A
5848802 Breed et al. Dec 1998 A
5850176 Kinoshita et al. Dec 1998 A
5850254 Takano et al. Dec 1998 A
5867591 Onda Feb 1999 A
5877707 Kowalick Mar 1999 A
5877897 Schofield et al. Mar 1999 A
5878370 Olson Mar 1999 A
5883684 Millikan et al. Mar 1999 A
5883739 Ashihara et al. Mar 1999 A
5884212 Lion Mar 1999 A
5890021 Onoda Mar 1999 A
5896085 Mori et al. Apr 1999 A
5899956 Chan May 1999 A
5904725 Iisaka et al. May 1999 A
5914815 Bos Jun 1999 A
5920367 Kajimoto et al. Jul 1999 A
5923027 Stam et al. Jul 1999 A
5938810 De Vries, Jr. et al. Aug 1999 A
5940120 Frankhouse et al. Aug 1999 A
5956181 Lin Sep 1999 A
5959367 O'Farrell et al. Sep 1999 A
5959555 Furuta Sep 1999 A
5963247 Banitt Oct 1999 A
5964822 Alland et al. Oct 1999 A
5971552 O'Farrell et al. Oct 1999 A
5986668 Szeliski et al. Nov 1999 A
5986796 Miles Nov 1999 A
5990469 Bechtel et al. Nov 1999 A
5990649 Nagao et al. Nov 1999 A
6001486 Varaprasad et al. Dec 1999 A
6009336 Harris et al. Dec 1999 A
6020704 Buschur Feb 2000 A
6049171 Stam et al. Apr 2000 A
6052124 Stein et al. Apr 2000 A
6066933 Ponziana May 2000 A
6084519 Coulling et al. Jul 2000 A
6087953 DeLine et al. Jul 2000 A
6091833 Yasui et al. Jul 2000 A
6097024 Stam et al. Aug 2000 A
6100811 Hsu et al. Aug 2000 A
6116743 Hoek Sep 2000 A
6124647 Marcus et al. Sep 2000 A
6124886 DeLine et al. Sep 2000 A
6139172 Bos et al. Oct 2000 A
6144022 Tenenbaum et al. Nov 2000 A
6148120 Sussman Nov 2000 A
6158655 DeVries, Jr. et al. Dec 2000 A
6172613 DeLine et al. Jan 2001 B1
6173087 Kumar et al. Jan 2001 B1
6175164 O'Farrell et al. Jan 2001 B1
6175300 Kendrick Jan 2001 B1
6184781 Ramakesavan Feb 2001 B1
6198409 Schofield et al. Mar 2001 B1
6201642 Bos Mar 2001 B1
6222460 DeLine et al. Apr 2001 B1
6226061 Tagusa May 2001 B1
6243003 DeLine et al. Jun 2001 B1
6250148 Lynam Jun 2001 B1
6259412 Duroux Jul 2001 B1
6259423 Tokito et al. Jul 2001 B1
6266082 Yonezawa et al. Jul 2001 B1
6266442 Laumeyer et al. Jul 2001 B1
6285393 Shimoura et al. Sep 2001 B1
6285778 Nakajima et al. Sep 2001 B1
6291906 Marcus et al. Sep 2001 B1
6294989 Schofield et al. Sep 2001 B1
6297781 Turnbull et al. Oct 2001 B1
6302545 Schofield et al. Oct 2001 B1
6310611 Caldwell Oct 2001 B1
6313454 Bos et al. Nov 2001 B1
6317057 Lee Nov 2001 B1
6320176 Schofield et al. Nov 2001 B1
6320282 Caldwell Nov 2001 B1
6326613 Heslin et al. Dec 2001 B1
6329925 Skiver et al. Dec 2001 B1
6333759 Mazzilli Dec 2001 B1
6341523 Lynam Jan 2002 B2
6353392 Schofield et al. Mar 2002 B1
6359392 He Mar 2002 B1
6366213 DeLine et al. Apr 2002 B2
6370329 Teuchert Apr 2002 B1
6396397 Bos et al. May 2002 B1
6411204 Bloomfield et al. Jun 2002 B1
6411328 Franke et al. Jun 2002 B1
6420975 DeLine et al. Jul 2002 B1
6424273 Gutta et al. Jul 2002 B1
6428172 Hutzel et al. Aug 2002 B1
6430303 Naoi et al. Aug 2002 B1
6433676 DeLine et al. Aug 2002 B2
6433817 Guerra Aug 2002 B1
6442465 Breed et al. Aug 2002 B2
6477464 McCarthy et al. Nov 2002 B2
6485155 Duroux et al. Nov 2002 B1
6497503 Dassanayake et al. Dec 2002 B1
6498620 Schofield et al. Dec 2002 B2
6513252 Schierbeek Feb 2003 B1
6515378 Drummond et al. Feb 2003 B2
6516664 Lynam Feb 2003 B2
6523964 Schofield et al. Feb 2003 B2
6534884 Marcus et al. Mar 2003 B2
6539306 Turnbull Mar 2003 B2
6547133 Devries, Jr. et al. Apr 2003 B1
6553130 Lemelson et al. Apr 2003 B1
6559435 Schofield et al. May 2003 B2
6570998 Ohtsuka et al. May 2003 B1
6574033 Chui et al. Jun 2003 B1
6578017 Ebersole et al. Jun 2003 B1
6587573 Stam et al. Jul 2003 B1
6589625 Kothari et al. Jul 2003 B1
6593011 Liu et al. Jul 2003 B2
6593565 Heslin et al. Jul 2003 B2
6593698 Stam et al. Jul 2003 B2
6594583 Ogura et al. Jul 2003 B2
6611202 Schofield et al. Aug 2003 B2
6611610 Stam et al. Aug 2003 B1
6627918 Getz et al. Sep 2003 B2
6631316 Stam et al. Oct 2003 B2
6631994 Suzuki et al. Oct 2003 B2
6636258 Strumolo Oct 2003 B2
6648477 Hutzel et al. Nov 2003 B2
6650233 DeLine et al. Nov 2003 B2
6650455 Miles Nov 2003 B2
6672731 Schnell et al. Jan 2004 B2
6674562 Miles Jan 2004 B1
6678056 Downs Jan 2004 B2
6678614 McCarthy et al. Jan 2004 B2
6680792 Miles Jan 2004 B2
6690268 Schofield et al. Feb 2004 B2
6691464 Nestell et al. Feb 2004 B2
6693524 Payne Feb 2004 B1
6700605 Toyoda et al. Mar 2004 B1
6703925 Steffel Mar 2004 B2
6704621 Stein et al. Mar 2004 B1
6710908 Miles et al. Mar 2004 B2
6711474 Treyz et al. Mar 2004 B1
6714331 Lewis et al. Mar 2004 B2
6717610 Bos et al. Apr 2004 B1
6735506 Breed et al. May 2004 B2
6741377 Miles May 2004 B2
6744353 Sjonell Jun 2004 B2
6757109 Bos Jun 2004 B2
6762867 Lippert et al. Jul 2004 B2
6794119 Miles Sep 2004 B2
6795221 Urey Sep 2004 B1
6802617 Schofield et al. Oct 2004 B2
6806452 Bos et al. Oct 2004 B2
6807287 Hermans Oct 2004 B1
6822563 Bos et al. Nov 2004 B2
6823241 Shirato et al. Nov 2004 B2
6824281 Schofield et al. Nov 2004 B2
6831261 Schofield et al. Dec 2004 B2
6847487 Burgner Jan 2005 B2
6864930 Matsushita et al. Mar 2005 B2
6882287 Schofield Apr 2005 B2
6889161 Winner et al. May 2005 B2
6891563 Schofield et al. May 2005 B2
6909753 Meehan et al. Jun 2005 B2
6946978 Schofield Sep 2005 B2
6953253 Schofield et al. Oct 2005 B2
6968736 Lynam Nov 2005 B2
6975775 Rykowski et al. Dec 2005 B2
7004593 Weller et al. Feb 2006 B2
7004606 Schofield Feb 2006 B2
7005974 McMahon et al. Feb 2006 B2
7038577 Pawlicki et al. May 2006 B2
7046448 Burgner May 2006 B2
7062300 Kim Jun 2006 B1
7065432 Moisel et al. Jun 2006 B2
7085637 Breed et al. Aug 2006 B2
7092548 Laumeyer et al. Aug 2006 B2
7113867 Stein Sep 2006 B1
7116246 Winter et al. Oct 2006 B2
7123168 Schofield Oct 2006 B2
7133661 Hatae et al. Nov 2006 B2
7149613 Stam et al. Dec 2006 B2
7151996 Stein Dec 2006 B2
7167796 Taylor et al. Jan 2007 B2
7195381 Lynam et al. Mar 2007 B2
7202776 Breed Apr 2007 B2
7224324 Quist et al. May 2007 B2
7227459 Bos et al. Jun 2007 B2
7227611 Hull et al. Jun 2007 B2
7249860 Kulas et al. Jul 2007 B2
7253723 Lindahl et al. Aug 2007 B2
7255451 McCabe et al. Aug 2007 B2
7307655 Okamoto et al. Dec 2007 B1
7311406 Schofield et al. Dec 2007 B2
7325934 Schofield et al. Feb 2008 B2
7325935 Schofield et al. Feb 2008 B2
7338177 Lynam Mar 2008 B2
7339149 Schofield et al. Mar 2008 B1
7344261 Schofield et al. Mar 2008 B2
7360932 Uken et al. Apr 2008 B2
7370983 DeWind et al. May 2008 B2
7375803 Bamji May 2008 B1
7380948 Schofield et al. Jun 2008 B2
7388182 Schofield et al. Jun 2008 B2
7402786 Schofield et al. Jul 2008 B2
7423248 Schofield et al. Sep 2008 B2
7423821 Bechtel et al. Sep 2008 B2
7425076 Schofield et al. Sep 2008 B2
7459664 Schofield et al. Dec 2008 B2
7526103 Schofield et al. Apr 2009 B2
7541743 Salmeen et al. Jun 2009 B2
7561181 Schofield et al. Jul 2009 B2
7565006 Stam et al. Jul 2009 B2
7566851 Stein et al. Jul 2009 B2
7602412 Cutler Oct 2009 B2
7605856 Imoto Oct 2009 B2
7616781 Schofield et al. Nov 2009 B2
7619508 Lynam et al. Nov 2009 B2
7633383 Dunsmoir et al. Dec 2009 B2
7639149 Katoh Dec 2009 B2
7655894 Schofield et al. Feb 2010 B2
7676087 Dhua et al. Mar 2010 B2
7710463 Foote May 2010 B2
7720580 Higgins-Luthman May 2010 B2
7786898 Stein et al. Aug 2010 B2
7792329 Schofield et al. Sep 2010 B2
7843451 Lafon Nov 2010 B2
7855778 Yung et al. Dec 2010 B2
7859565 Schofield et al. Dec 2010 B2
7877175 Higgins-Luthman Jan 2011 B2
7881496 Camilleri et al. Feb 2011 B2
7914187 Higgins-Luthman et al. Mar 2011 B2
7914188 DeLine et al. Mar 2011 B2
7929751 Zhang et al. Apr 2011 B2
7930160 Hosagrahara et al. Apr 2011 B1
7949486 Denny et al. May 2011 B2
7982766 Corghi Jul 2011 B2
7991522 Higgins-Luthman Aug 2011 B2
7994462 Schofield et al. Aug 2011 B2
8017898 Lu et al. Sep 2011 B2
8064643 Stein et al. Nov 2011 B2
8082101 Stein et al. Dec 2011 B2
8095310 Taylor et al. Jan 2012 B2
8098142 Schofield et al. Jan 2012 B2
8100568 DeLine et al. Jan 2012 B2
8150210 Chen et al. Apr 2012 B2
8164628 Stein et al. Apr 2012 B2
8203440 Schofield et al. Jun 2012 B2
8222588 Schofield et al. Jul 2012 B2
8224031 Saito Jul 2012 B2
8233045 Luo et al. Jul 2012 B2
8254635 Stein et al. Aug 2012 B2
8300886 Hoffmann Oct 2012 B2
8314689 Schofield et al. Nov 2012 B2
8324552 Schofield et al. Dec 2012 B2
8378851 Stein et al. Feb 2013 B2
8386114 Higgins-Luthman Feb 2013 B2
8421865 Euler et al. Apr 2013 B2
8452055 Stein et al. May 2013 B2
8534887 DeLine et al. Sep 2013 B2
8543330 Taylor et al. Sep 2013 B2
8553088 Stein et al. Oct 2013 B2
8564657 Michalke Oct 2013 B2
8643724 Schofield et al. Feb 2014 B2
8676491 Taylor et al. Mar 2014 B2
8692659 Schofield et al. Apr 2014 B2
8699821 Orr, IV et al. Apr 2014 B2
9900522 Lu Feb 2018 B2
10868974 Lu Dec 2020 B2
20010002451 Breed May 2001 A1
20020005778 Breed et al. Jan 2002 A1
20020011611 Huang et al. Jan 2002 A1
20020113873 Williams Aug 2002 A1
20030068098 Rondinelli et al. Apr 2003 A1
20030085999 Okamoto May 2003 A1
20030103142 Hitomi et al. Jun 2003 A1
20030137586 Lewellen Jul 2003 A1
20030222982 Hamdan et al. Dec 2003 A1
20040164228 Fogg et al. Aug 2004 A1
20050163343 Kakinami Jul 2005 A1
20050219852 Stam et al. Oct 2005 A1
20050237385 Kosaka et al. Oct 2005 A1
20050259158 Jacob et al. Nov 2005 A1
20060013438 Kubota Jan 2006 A1
20060018511 Stam et al. Jan 2006 A1
20060018512 Stam et al. Jan 2006 A1
20060029255 Ozaki Feb 2006 A1
20060050018 Hutzel et al. Mar 2006 A1
20060066730 Evans et al. Mar 2006 A1
20060091813 Stam et al. May 2006 A1
20060103727 Tseng May 2006 A1
20060125921 Foote Jun 2006 A1
20060250501 Wildmann et al. Nov 2006 A1
20070024724 Stein et al. Feb 2007 A1
20070046488 Fair Mar 2007 A1
20070104476 Yasutomi et al. May 2007 A1
20070165909 Leleve et al. Jul 2007 A1
20070236595 Pan et al. Oct 2007 A1
20070242339 Bradley Oct 2007 A1
20070291189 Harville Dec 2007 A1
20080043099 Stein et al. Feb 2008 A1
20080147321 Howard et al. Jun 2008 A1
20080181488 Ishii Jul 2008 A1
20080192132 Bechtel et al. Aug 2008 A1
20080253606 Fujimaki et al. Oct 2008 A1
20080266396 Stein Oct 2008 A1
20090022422 Sorek et al. Jan 2009 A1
20090113509 Tseng et al. Apr 2009 A1
20090153549 Lynch et al. Jun 2009 A1
20090160987 Bechtel et al. Jun 2009 A1
20090179773 Denny et al. Jul 2009 A1
20090190015 Bechtel et al. Jul 2009 A1
20090243889 Suhr et al. Oct 2009 A1
20090256938 Bechtel et al. Oct 2009 A1
20090290032 Zhang et al. Nov 2009 A1
20100014770 Huggett et al. Jan 2010 A1
20100235095 Smitherman Sep 2010 A1
20110115912 Kuehnle May 2011 A1
20110156887 Shen et al. Jun 2011 A1
20110164108 Bates et al. Jul 2011 A1
20110216201 McAndrew et al. Sep 2011 A1
20120045091 Kaganovich Feb 2012 A1
20120045112 Lundblad et al. Feb 2012 A1
20120069185 Stein Mar 2012 A1
20120200707 Stein et al. Aug 2012 A1
20120314071 Rosenbaum et al. Dec 2012 A1
20120320209 Vico et al. Dec 2012 A1
20130141580 Stein et al. Jun 2013 A1
20130147957 Stein Jun 2013 A1
20130162828 Higgins-Luthman Jun 2013 A1
20130169812 Lu et al. Jul 2013 A1
20130286193 Pflug Oct 2013 A1
20140015976 DeLine et al. Jan 2014 A1
20140022378 Higgins-Luthman Jan 2014 A1
20140043473 Gupta et al. Feb 2014 A1
20140063254 Shi et al. Mar 2014 A1
20140098229 Lu et al. Apr 2014 A1
20140247352 Rathi et al. Sep 2014 A1
20140247354 Knudsen Sep 2014 A1
20140320658 Pliefke Oct 2014 A1
20140333729 Pflug Nov 2014 A1
20140347486 Okouneva Nov 2014 A1
20140350834 Turk Nov 2014 A1
20150049193 Gupta Feb 2015 A1
Foreign Referenced Citations (66)
Number Date Country
3248511 Jul 1984 DE
4107965 Sep 1991 DE
4124654 Jan 1993 DE
0202460 Nov 1986 EP
0353200 Jan 1990 EP
0361914 Apr 1990 EP
0450553 Oct 1991 EP
0492591 Jul 1992 EP
0513476 Nov 1992 EP
0527665 Feb 1993 EP
0605045 Jul 1994 EP
0640903 Mar 1995 EP
0697641 Feb 1996 EP
1022903 Jul 2000 EP
1065642 Jan 2001 EP
1074430 Feb 2001 EP
1115250 Jul 2001 EP
1170173 Jan 2002 EP
2181417 May 2010 EP
2377094 Oct 2011 EP
2523831 Nov 2012 EP
2710340 Mar 2014 EP
3189497 Jul 2017 EP
2233530 Jan 1991 GB
S5539843 Mar 1980 JP
58110334 Jun 1983 JP
59114139 Jul 1984 JP
6080953 May 1985 JP
6079889 Oct 1986 JP
6216073 Apr 1987 JP
6272245 May 1987 JP
62131837 Jun 1987 JP
6414700 Jan 1989 JP
01123587 May 1989 JP
H1168538 Jul 1989 JP
H236417 Aug 1990 JP
H2117935 Sep 1990 JP
03099952 Apr 1991 JP
04114587 Apr 1992 JP
04239400 Aug 1992 JP
0577657 Mar 1993 JP
05050883 Mar 1993 JP
05213113 Aug 1993 JP
06107035 Apr 1994 JP
6227318 Aug 1994 JP
06267304 Sep 1994 JP
06276524 Sep 1994 JP
06295601 Oct 1994 JP
07004170 Jan 1995 JP
0732936 Feb 1995 JP
0747878 Feb 1995 JP
07052706 Feb 1995 JP
0769125 Mar 1995 JP
07105496 Apr 1995 JP
H730149 Jun 1995 JP
2630604 Jul 1997 JP
200274339 Mar 2002 JP
200383742 Mar 2003 JP
20041658 Jan 2004 JP
1994019212 Sep 1994 WO
1996021581 Jul 1996 WO
1996038319 Dec 1996 WO
2001080353 Oct 2001 WO
2012139636 Oct 2012 WO
2012139660 Oct 2012 WO
2012143036 Oct 2012 WO
Non-Patent Literature Citations (46)
Entry
Achler et al., “Vehicle Wheel Detector using 2D Filter Banks,” IEEE Intelligent Vehicles Symposium of Jun. 2004.
Behringer et al., “Simultaneous Estimation of Pitch Angle and Lane Width from the Video Image of a Marked Road,” pp. 966-973, Sep. 12-16, 1994.
Borenstein et al., “Where am I? Sensors and Method for Mobile Robot Positioning”, University of Michigan, Apr. 1996, pp. 2, 125-128.
Bow, Sing T., “Pattern Recognition and Image Preprocessing (Signal Processing and Communications)”, CRC Press, Jan. 15, 2002, pp. 557-559.
Broggi et al., “Automatic Vehicle Guidance: The Experience of the ARGO Vehicle”, World Scientific Publishing Co., 1999.
Broggi et al., “Multi-Resolution Vehicle Detection using Artificial Vision,” IEEE Intelligent Vehicles Symposium of Jun. 2004.
Brown, A Survey of Image Registration Techniques, vol. 24, ACM Computing Surveys, pp. 325-376, 1992.
Burger et al., “Estimating 3-D Egomotion from Perspective Image Sequences”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, No. 11, pp. 1040-1058, Nov. 1990.
Cucchiara et al., Vehicle Detection under Day and Night Illumination, 1999.
Dickmanns et al., “A Curvature-based Scheme for Improving Road Vehicle Guidance by Computer Vision,” University of Bundeswehr München, 1986.
Dickmanns et al., “Recursive 3-D road and relative ego-state recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, No. 2, Feb. 1992.
Dickmanns et al.; “An integrated spatio-temporal approach to automatic visual guidance of autonomous vehicles,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 20, No. 6, Nov./Dec. 1990.
Dickmanns, “4-D dynamic vision for intelligent motion control”, Universitat der Bundeswehr Munich, 1991.
Donnelly Panoramic Vision™ on Renault Talisman Concept Car At Frankfort Motor Show, PR Newswire, Frankfort, Germany Sep. 10, 2001.
Franke et al., “Autonomous driving approaches downtown”, Intelligent Systems and Their Applications, IEEE 13 (6), 40-48, Nov./Dec. 1999.
Greene et al., Creating Raster Omnimax Images from Multiple Perspective Views Using the Elliptical Weighted Average Filter, IEEE Computer Graphics and Applications, vol. 6, No. 6, pp. 21-27, Jun. 1986.
Honda Worldwide, “Honda Announces a Full Model Change for the Inspire.” Jun. 18, 2003.
International Search Report and Written Opinion dated Mar. 2, 2012 from corresponding PCT Application No. PCT/US2011/062834.
Kan et al., “Model-based vehicle tracking from image sequences with an application to road surveillance,” Purdue University, XP000630885, vol. 35, No. 6, Jun. 1996.
Kastrinaki et al., “A survey of video processing techniques for traffic applications”, copyright 2003.
Kluge et al., “Representation and Recovery of Road Geometry in YARF,” Carnegie Mellon University, pp. 114-119.
Koller et al., “Binocular Stereopsis and Lane Marker Flow for Vehicle Navigation: Lateral and Longitudinal Control,” University of California, Mar. 24, 1994.
Kuhnert, “A vision system for real time road and object recognition for vehicle guidance,” in Proc. SPIE Mobile Robot Conf, Cambridge, MA, Oct. 1986, pp. 267-272.
Lu, M., et al. On-chip Automatic Exposure Control Technique, Solid-State Circuits Conference, 1991. ESSCIRC '91. Proceedings—Seventeenth European (vol. 1) with abstract.
Malik et al., “A Machine Vision Based System for Guiding Lane-change Maneuvers,” Sep. 1995.
Mei Chen et al., AURORA: A Vision-Based Roadway Departure Warning System, The Robotics Institute, Carnegie Mellon University, published Aug. 9, 1995.
Morgan et al., “Road edge tracking for robot road following: a real-time implementation,” vol. 8, No. 3, Aug. 1990.
Philomin et al., “Pedestrian Tracking from a Moving Vehicle”.
Porter et al., “Compositing Digital Images,” Computer Graphics (Proc. Siggraph), vol. 18, No. 3, pp. 253-259, Jul. 1984.
Pratt, “Digital Image Processing, Passage—ED.3”, John Wiley & Sons, US, Jan. 1, 2001, pp. 657-659, XP002529771.
Sahli et al., “A Kalman Filter-Based Update Scheme for Road Following,” IAPR Workshop on Machine Vision Applications, pp. 5-9, Nov. 12-14, 1996.
Sun et al., “On-road vehicle detection using optical sensors: a review”.
Szeliski, Image Mosaicing for Tele-Reality Applications, DEC Cambridge Research Laboratory, CRL 94/2, May 1994.
Toyota Motor Corporation, “Present and future of safety technology development at Toyota.” 2004.
Tremblay, M., et al. High resolution smart image sensor with integrated parallel analog processing for multiresolution edge extraction, Robotics and Autonomous Systems 11 (1993), pp. 231-242, with abstract.
Tsugawa et al., “An automobile with artificial intelligence,” in Proc. Sixth IJCAI, 1979.
Turk et al., “VITS-A Vision System for Autonomous Land Vehicle Navigation,” IEEE, 1988.
Van Leeuwen et al., “Motion Estimation with a Mobile Camera for Traffic Applications”, IEEE, US, vol. 1, Oct. 3, 2000, pp. 58-63.
Van Leeuwen et al., “Motion Interpretation for In-Car Vision Systems”, IEEE, US, vol. 1, Sep. 30, 2002, p. 135-140.
Van Leeuwen et al., “Real-Time Vehicle Tracking in Image Sequences”, IEEE, US, vol. 3, May 21, 2001, pp. 2049-2054, XP010547308.
Van Leeuwen et al., “Requirements for Motion Estimation in Image Sequences for Traffic Applications”, IEEE, US, vol. 1, May 24, 1999, pp. 145-150, XP010340272.
Vellacott, Oliver, “CMOS in Camera,” IEE Review, pp. 111-114 (May 1994).
Vlacic et al., (Eds), “Intelligent Vehicle Technologies, Theory and Applications”, Society of Automotive Engineers Inc., edited by SAE International, 2001.
Wolberg, “A Two-Pass Mesh Warping Implementation of Morphing,” Dr. Dobb's Journal, No. 202, Jul. 1993.
Wolberg, Digital Image Warping, IEEE Computer Society Press, 1990.
Zheng et al., “An Adaptive System for Traffic Sign Recognition,” IEEE Proceedings of the Intelligent Vehicles '94 Symposium, pp. 165-170 (Oct. 1994).
Related Publications (1)
Number Date Country
20210105420 A1 Apr 2021 US
Provisional Applications (2)
Number Date Country
61482786 May 2011 US
61418499 Dec 2010 US
Continuations (2)
Number Date Country
Parent 15899105 Feb 2018 US
Child 17247488 US
Parent 13990902 US
Child 15899105 US