Vehicular vision system with image manipulation

Information

  • Patent Grant
  • Patent Number
    11,577,645
  • Date Filed
    Monday, February 22, 2021
  • Date Issued
    Tuesday, February 14, 2023
Abstract
A vehicular vision system includes a camera disposed at a front portion of a vehicle, a display screen and a processor for processing image data captured by the camera. The processor performs first, second and third image manipulations on first, second and third portions of the image data to generate first, second and third region manipulated image data. The display screen displays first, second and third images derived from the manipulated image data at respective display regions. The displayed images are discontinuous at a first seam between first and second display regions and discontinuous at a second seam between first and third display regions. An object present in first and second regions of the view of the camera is displayed as discontinuous at the first seam and an object present in the first and third regions of the view of the camera is displayed as discontinuous at the second seam.
Description
FIELD OF THE INVENTION

This disclosure relates to vehicle vision systems and, more particularly, to a vehicle vision system that displays images derived from image data captured by one or more vehicle cameras.


BACKGROUND OF THE INVENTION

Vehicle camera systems can provide vehicle operators with valuable information about driving conditions. For example, a typical vehicle camera system can aid a driver in parking her automobile by alerting her to hazards around her automobile that should be avoided. Other uses for vehicle camera systems are also known. However, a typical vehicle camera system may not be able to provide video that is quickly and reliably comprehensible to the driver.


SUMMARY OF THE INVENTION

The present invention provides a vision system having a camera that captures image data representative of a scene exterior of a vehicle equipped with the vision system. Different regions of the image data captured by a single vehicular camera can be manipulated by different image manipulation techniques before the captured image is displayed at a display for viewing by a driver of the equipped vehicle.


These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The drawings illustrate, by way of example only, embodiments of the present disclosure.



FIG. 1 is a perspective view of a vehicle having a vehicle camera system;



FIG. 2 is a functional block diagram of the vehicle camera system;



FIG. 3 is a flowchart of a method for manipulating an image captured by the vehicle camera system;



FIGS. 4A-C are diagrams showing image manipulation of an original image captured by the vehicle camera to obtain a manipulated image;



FIG. 5 is a diagram showing another manipulated image having trapezoidal right and left manipulated regions;



FIG. 6 is a diagram showing a manipulated image having rectangular manipulated regions;



FIG. 7 is a diagram showing a manipulated image having triangular and trapezoidal manipulated regions;



FIG. 8 is a diagram showing a manipulated image obtained by continuous dewarping and reshaping; and



FIG. 9 is a diagram of a remapping table.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to the drawings and the illustrative embodiments depicted therein, and with reference to FIG. 1, a vehicle 10, such as a car, truck, van, bus, or other type of vehicle, includes a camera 12. The camera 12 is configured to be positioned on the vehicle 10 to face away from the bulk of the body 14 of the vehicle 10 to capture video of the environment outside of the vehicle 10 to, for example, aid the operator of the vehicle 10, such as, for example, when executing a reversing maneuver or a parking maneuver of the vehicle.


In this example, the camera 12 is positioned at a rear portion of the body 14 of the vehicle 10 and is rear-facing to capture video of the scene behind the vehicle 10. In another example, the camera 12 can be positioned at a rear bumper of the vehicle 10. In still other examples, the camera can be forward-facing and can be positioned, for example, at the front windshield, at the rear-view mirror, or at the grille of the vehicle 10. For example, the camera may comprise a forward facing camera, such as for assisting the driver of the vehicle during forward parking maneuvers and/or to detect cross traffic at intersections and parking lots and the like. Optionally, the camera may be part of a multi-camera system of the vehicle, such as for a surround view or top-down view system of the vehicle or the like, such as discussed below.


The camera 12 is a single imager or camera comprising a single photosensor array, and the camera may include a wide-angle lens, such as a lens with a horizontal field of view of at least about 120 degrees to about 180 degrees or more than about 180 degrees. In this way, the camera 12 can capture the scene directly behind or ahead of the vehicle 10 as well as areas to the right and left of the vehicle 10. When the camera 12 is rear-facing and has a lens with a horizontal field of view of more than about 180 degrees, the horizontal extents of the field of view of the camera 12 are shown at 13 in FIG. 1. Such a field of view can encompass a wide range of potential hazards including objects directly in the vehicle's rear path of travel, objects in rear blind spots, as well as objects at a distance to the far left and far right of the vehicle 10, such as an approaching vehicle on a perpendicular path of travel. A similar field of view may be established when the camera 12 is a forward-facing camera disposed at a front portion of the vehicle. Optionally, the processor may be part of the camera or camera module or may be separate from the camera. Optionally, the processor may receive other image data, such as image data captured by one or more other cameras of a multi-camera system of the vehicle.


The camera 12 is coupled via a line 16 (such as, for example, conductive wires) to a controller 18 located at a forward portion of the vehicle 10, such as under the hood or below the dash. In other examples, the camera 12 can be coupled to the controller 18 via a wireless communications technique instead of via the line 16. Moreover, the controller 18 can be positioned elsewhere in the vehicle 10. The controller may also be inside the camera 12 or incorporated into the camera or camera module. The processor in the controller may comprise any suitable processing device, such as an ASIC, a digital signal processor (DSP), an FPGA, a system-on-chip (SOC), or any other suitable digital processing unit. The controller also includes a video signal generator/converter, which converts video image data from digital data format to output video format, such as NTSC analog video, LVDS digital video, MOST digital video or Ethernet digital video format or the like.


As shown in FIG. 2, the camera 12 and controller 18 can form at least part of a vehicle camera system 20. The vehicle camera system 20 is described herein as capturing images and video, and captured images can be considered, for explanatory purposes, frames of captured video.


The controller 18 includes a processor 22 and connected memory 24. The controller 18 is operatively coupled to both the camera 12, as mentioned above, and to a display 30.


The display 30 is configured to be positioned inside the cabin of the vehicle 10. The display 30 is coupled to the controller 18 by way of, for example, conductive lines. The display 30 can include an in-vehicle display panel situated in the dash of the vehicle 10. The display 30 can include a liquid-crystal display (LCD) panel, a light-emitting diode (LED) display panel, an active-matrix organic LED (AMOLED) display panel, or the like, as well as a circuit to drive the display panel with a video signal received from the controller 18. The display 30 can include a touch-screen interface to control how the video is displayed by, for example, outputting a mode signal to the controller 18.


The processor 22 can execute program code stored in the memory 24. The memory 24 can store program code, such as a first image manipulation routine 26 and a second image manipulation routine 28. As will be discussed in detail below, the processor 22 can be configured by the first and second image manipulation routines 26, 28 to manipulate an image received from the camera 12 to generate a manipulated image. The first and second image manipulation routines 26, 28 are different, such as by applying different types of processing, so that pixels of one region of the image are manipulated by a different manipulation than pixels of another region of the image. Performing such manipulations to a consecutive series of images captured by the camera 12 results in manipulated video being displayed on the display 30 to aid the driver in operating the vehicle 10.


The image processing or manipulation may be performed on any given frame of captured image data or a series of frames of captured image data or intervals or sequences of frames or the like of captured image data. For example, the camera may be operable to capture frames of image data at a rate of about 15 frames per second or about 30 frames per second or more, and the system may be operable to manipulate the image data of each frame of captured image data, or optionally every other frame of captured image data or every third frame of captured image data or the like may be processed and manipulated (depending on the particular application of the system) in accordance with the present invention.
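
By way of illustration only (the patent specifies no implementation), the frame-interval processing described above might be sketched in Python as follows; `manipulate_frame` and the use of OpenCV for capture are hypothetical stand-ins:

```python
import cv2  # assumption: OpenCV is available for capture and display

FRAME_INTERVAL = 2  # manipulate every other frame, per the example above

def run(camera_index=0, manipulate_frame=lambda frame: frame):
    # manipulate_frame is a placeholder for the per-region manipulations
    # described below; it defaults to a pass-through.
    cap = cv2.VideoCapture(camera_index)
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if count % FRAME_INTERVAL == 0:
            frame = manipulate_frame(frame)
        cv2.imshow("display", frame)
        if cv2.waitKey(1) == 27:  # ESC exits
            break
        count += 1
    cap.release()
    cv2.destroyAllWindows()
```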


The first image manipulation routine 26 includes instructions executable by the processor 22 to perform a first image manipulation on a first region of an image. The first image manipulation routine 26 can define the first region of the image as well as the first image manipulation to be performed on the first region. The first region can be defined by a first set of coordinates of a first set of pixels, one or more boundaries or partitions that fence in a first set of pixels, or the like.


Similarly, the second image manipulation routine 28 includes instructions executable by the processor 22 to perform a second image manipulation on a second region of an image. The second image manipulation routine 28 can define the second region of the image as well as the second image manipulation to be performed on the second region. The second region can be defined by a second set of coordinates of a second set of pixels, one or more boundaries or partitions that fence in a second set of pixels, or the like.


Selection of the two or more regions (such as by partitioning the captured image into a plurality of distinct regions, such as a left region and a right region or such as a center region and left region and right region or the like) may be a predetermined or preset decision based on the known field of view optics/parameters of a given camera at a given location at a vehicle (such as a rearward facing camera at the rear of the vehicle) or may be dynamically or automatically selected based at least in part on the given camera at a given location at a vehicle or the like, and/or based at least in part on the environment or lighting conditions at the scene being imaged and/or based at least in part on the type of driving maneuver being performed by the driver of the vehicle.


Each of the first and second image manipulations can be defined by one or more of a remapping table, function, algorithm, or process that acts on the respective first or second set of pixels to generate a respective first or second manipulated region. In one example, a remapping table (see FIG. 9) correlates X and Y coordinates of source pixels with X and Y coordinates of destination pixels, where color values of each source pixel are set at the X and Y coordinates of each corresponding destination pixel. In this case, the first and second image manipulation routines 26, 28 include instructions for carrying out the remapping of pixels, and can further include the remapping tables themselves. Of course, a remapping table can be stored separately in the memory 24.
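
A minimal Python sketch of applying such a remapping table follows, assuming the table is a list of (source X, source Y, destination X, destination Y) rows; this row format is invented here for illustration, as the patent does not prescribe one:

```python
import numpy as np

def apply_remap_table(src, table, dst_shape):
    # `table` holds (src_x, src_y, dst_x, dst_y) rows; the color value of
    # each source pixel is written at the corresponding destination
    # coordinates, as described for FIG. 9.
    dst = np.zeros(dst_shape, dtype=src.dtype)
    for sx, sy, dx, dy in table:
        dst[dy, dx] = src[sy, sx]  # arrays index as [row, col] = [y, x]
    return dst
```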


In another example, a remapping function takes as input source pixel coordinates and color values and outputs destination pixel coordinates and color values. In this case, the first and second image manipulation routines 26, 28 include instructions that define the respective remapping function. Each of the first and second image manipulation routines 26, 28 can use interpolation or extrapolation to output color values for pixels that do not directly correlate to pixels in the captured image. Although interpolation or extrapolation may result in blur or an apparent loss of image fidelity, it can also result in a larger or more easily comprehensible image.
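
As an illustrative sketch of a remapping function with interpolation, OpenCV's cv2.remap can interpolate color values at fractional source coordinates; the `inverse_fn` parameter below is a hypothetical stand-in for whatever mapping a given routine defines:

```python
import numpy as np
import cv2  # assumption: OpenCV provides the interpolating remap

def remap_with_interpolation(src, dst_w, dst_h, inverse_fn):
    # inverse_fn maps destination (x, y) grids to fractional source
    # coordinates; it stands in for whatever remapping function a
    # manipulation routine defines.
    xs, ys = np.meshgrid(np.arange(dst_w), np.arange(dst_h))
    map_x, map_y = inverse_fn(xs, ys)
    # INTER_LINEAR bilinearly interpolates color values for destination
    # pixels that do not correlate directly to a single source pixel.
    return cv2.remap(src, map_x.astype(np.float32),
                     map_y.astype(np.float32), cv2.INTER_LINEAR)
```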


In other examples, other techniques can alternatively or additionally be used for the first and second image manipulation routines 26, 28.


The first and second image manipulations defined by the respective first and second image manipulation routines 26, 28 are different. In one example, the first image manipulation routine 26 includes dewarping instructions that, for example, flatten the first region of the image to reduce the apparent curvature in the image resulting from optical distortion caused by the wide-angle lens 32. In the same example, the second image manipulation includes reshaping instructions that reshape the second region of the image by one or more of enlarging, moving, cropping, stretching, compressing, skewing, rotating, and tilting, for example, parts of the second region or the entire second region. In addition to reshaping, the second image manipulation routine 28 can further perform dewarping in the second region, similar to that performed in the first region. The second image manipulation routine 28 can be configured to move an apparent viewpoint of the camera 12 along a path of travel of the vehicle, so that if the camera 12 is rear-facing the apparent viewpoint of the camera 12 is moved rearward, and if the camera 12 is forward-facing then the apparent viewpoint of the camera 12 is moved forward. Furthermore, although the different manipulations may comprise similar types of manipulation (such as, for example, dewarping or the like), the character or degree or technique of the particular type of manipulation (such as, for example, dewarping or the like) may be different between the two manipulations.
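
One way to picture a reshaping manipulation of this kind is a perspective warp that stretches a region into a trapezoid, as in FIG. 4C. The sketch below is illustrative only; the corner coordinates are invented, and a production dewarp would use the camera's calibrated lens model rather than this simple warp:

```python
import numpy as np
import cv2

def reshape_left_region(region, out_w, out_h):
    # Stretch a rectangular left region into a trapezoid by mapping its
    # four corners to a trapezoidal outline (corner values are arbitrary
    # illustrations, not taken from the patent).
    h, w = region.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[0, out_h * 0.2], [out_w, 0],
                      [out_w, out_h], [0, out_h * 0.8]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(region, M, (out_w, out_h))
```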


Similarly, a third image manipulation routine can also be stored in the memory 24. In this example, the third image manipulation routine performs the same manipulation as the second image manipulation routine but on a different, third region of the image. The third region of the image has a shape that is similar, and preferably mirror-symmetric, to a shape of the second region, and accordingly, the third image manipulation routine generates a third manipulated region that has a shape that is mirror-symmetric to a shape of the second manipulated region. Accordingly, the second and third image manipulation routines can be the same routine executed with different parameters. For example, when a remapping table is referenced to generate the third manipulated region, a parameter can be used to indicate that the remapping table is to be traversed differently than when the remapping table is used to generate the second manipulated region. When a remapping function is used, the remapping function can be passed a parameter that identifies whether pixels of the second region or the third region are being remapped, so that the remapping function can operate on such pixels accordingly. However, in other examples, the third image manipulation routine can be a separate routine from the second image manipulation routine 28.
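
A minimal sketch of this mirror-symmetric reuse, assuming the second routine is available as a Python callable (all names here are hypothetical):

```python
import numpy as np

def manipulate_right_region(right_region, manipulate_left_region):
    # Reuse the second (left) routine for the mirror-symmetric third
    # (right) region: flip horizontally, run the left routine, then flip
    # the result back. This flip-based approach is one possibility; the
    # text above instead suggests parameterizing the routine or
    # traversing the remapping table differently.
    flipped = np.fliplr(right_region)
    return np.fliplr(manipulate_left_region(flipped))
```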


In another example, the second image manipulation routine 28 applies a remapping table that includes coordinates of pixels of both the second and third regions. This is analogous to the second and third regions being parts of the same discontinuous region.


In still another example, the first and second image manipulation routines 26, 28 can be a single routine that applies a single remapping table to an image to generate a manipulated image. The two or more different types of manipulations performed to the two or more different regions of the image are realized by the selected coordinate values of the remapping table.
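
For illustration, a single full-frame remap of this kind might be assembled from per-region maps, with each region contributing source coordinates for the destination pixels it owns; the mask-and-map inputs below are assumptions, not taken from the patent:

```python
import numpy as np

def combine_region_maps(dst_h, dst_w, regions):
    # `regions` is a list of (mask, map_x, map_y) tuples: `mask` marks
    # the destination pixels belonging to one display region, and
    # map_x/map_y give the source coordinates for those pixels. The
    # combined maps realize all region manipulations in a single remap
    # (e.g., one cv2.remap call).
    full_x = np.zeros((dst_h, dst_w), np.float32)
    full_y = np.zeros((dst_h, dst_w), np.float32)
    for mask, mx, my in regions:
        full_x[mask] = mx[mask]
        full_y[mask] = my[mask]
    return full_x, full_y
```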


The processor 22 can be configured to generate the manipulated image based on image data received from only the camera. That is, in this example, the processor 22 does not use image data provided by other cameras, if any other cameras are provided to the vehicle 10, to carry out the image manipulations described herein.


Referring to FIG. 3, a method 40 of displaying an image captured by a vehicle camera, such as the camera 12, is illustrated. The method 40 will be described in the context of the vehicle camera system 20 of FIG. 2 and can take the form of instructions executable by the processor 22 of the controller 18. The method 40 will also be described with reference to FIGS. 4A-C, which show original and manipulated images. It is noted that the gridlines in FIGS. 4A-C are intended to illustrate any warping that is present in the images and do not themselves form part of the images.


At 42, an image 60, such as a frame of video, is captured by the camera 12. FIG. 4A shows an example of the image 60 received from the camera 12 equipped with the wide-angle lens 32. Warping in the image 60 as a result of the lens 32 can be seen in the curvature of the gridlines. In the example scene, a road 62 is perpendicular to a driveway 64 that the vehicle 10 is leaving. An approaching vehicle 78 travelling on the road 62 presents a potential collision hazard to the vehicle 10.


Next, at 44, the processor 22 receives image data from the camera 12. The image data is representative of the image 60 captured by the camera 12, and may be a series of pixel color values of the image, a compressed stream of pixel color values, pixel color values of a frame of video differentially encoded with respect to a previous frame (such as, for example, an MPEG video P-frame or B-frame that refers back to a previous frame, such as an I-frame), or the like. Irrespective of the form of the image data, the processor 22 can be considered to have received the image 60 and to have access to all the pixels of the image 60 for the purposes of image manipulation.


Referring to FIGS. 4B and 4C, at 46, the processor 22 performs a first image manipulation on a first region 70 of the image 60 to generate a first manipulated region 80. The first image manipulation can be defined by the first image manipulation routine 26 and can act on a first portion of the image data corresponding to the first region 70 to generate first region manipulated image data representative of a first manipulated region 80. In this example, the first region 70 is a central region 70 of the image 60, as shown in FIG. 4B. The first image manipulation routine 26 includes instructions that dewarp the central region 70 to generate central region manipulated image data representative of a central manipulated region 80, as shown in FIG. 4C. The dewarping of the central manipulated region 80 is illustratively shown as the gridlines being straightened.


At 48, the processor 22 performs second and third image manipulations on respective second and third regions 72, 74 of the image 60 to generate second and third manipulated regions 82, 84. The second and third image manipulations can be defined by the second image manipulation routine 28 (and, optionally, by the third image manipulation routine), and can act on second and third portions of the image data corresponding to the second and third regions 72, 74 to generate second and third region manipulated image data representative of respective second and third manipulated regions 82, 84. In this example, the second and third regions 72, 74 are respectively the left and right regions 72, 74 of the image 60, as shown in FIG. 4B. The second image manipulation routine 28 (and, if used, the third image manipulation routine) includes instructions that reshape and dewarp the left and right regions 72, 74 to generate left and right region manipulated image data representative of respective left and right manipulated regions 82, 84, as shown in FIG. 4C. The reshaping and dewarping of the left and right manipulated regions 82, 84 is illustratively shown as the gridlines being straightened and the left and right manipulated regions 82, 84 being larger and polygonal when compared to the rounded, non-manipulated left and right regions 72, 74.


In this example, the left and right regions 72, 74 are different regions that undergo the same type of manipulation, albeit in a mirror-symmetric manner about a central vertical axis 76 of the image 60. This is because the camera 12 is forward- or rear-facing and the left and right directions generally have about the same importance to the operator of the vehicle 10 when assessing potential external hazards.


Next, at 50, the processor 22 composes the first (central), second (left), and third (right) manipulated regions 80, 82, 84 into the larger manipulated image 86, shown in FIG. 4C. Such composing can be inherent to the image manipulation performed at 46 and 48. It can be seen in this particular example image that the manipulated image 86 is generally or at least substantially discontinuous at seams 87 between the central manipulated region 80 and the left and right manipulated regions 82, 84, and objects in the manipulated image 86 do not line up at the seams 87. Moreover, the seams 87 can be graphically enhanced by way of the processor 22 overlaying lines or other graphical elements. It is desirable, however, in certain other applications, to have continuous stitching (such as shown in FIG. 8) between the regions 82, 80 and 84, so that there is no blind spot in the whole image coverage area or field of view of the camera or displayed portion of the field of view of the camera. This can be accomplished with proper tuning of pixel manipulation formulas for each region with the architecture and processing of the present invention.
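
As a rough sketch of composing the three manipulated regions and graphically enhancing the seams (simplifying the trapezoidal regions of FIG. 4C to padded rectangles; region shapes and the seam color are invented for illustration):

```python
import numpy as np
import cv2

def compose_display(left, center, right, seam_color=(0, 255, 255)):
    # Pad the three manipulated regions to a common height, place them
    # side by side, and overlay vertical lines at the two seams as a
    # simple stand-in for graphical seam enhancement.
    h = max(r.shape[0] for r in (left, center, right))
    def pad(region):
        return cv2.copyMakeBorder(region, 0, h - region.shape[0], 0, 0,
                                  cv2.BORDER_CONSTANT, value=(0, 0, 0))
    canvas = np.hstack([pad(left), pad(center), pad(right)])
    x1 = left.shape[1]               # seam between left and center
    x2 = x1 + center.shape[1]        # seam between center and right
    for x in (x1, x2):
        cv2.line(canvas, (x, 0), (x, h), seam_color, 2)
    return canvas
```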


Regarding the shape of the manipulated image 86, it can be seen that the central region 80 is rectangular while the left and right manipulated regions 82, 84 are complementary shaped trapezoids. These are merely example shapes. Other shapes of image 86 and/or other shapes of individual regions may also be possible or suitable with this architecture (such as by utilizing aspects of the vision systems described in U.S. provisional applications, Ser. No. 61/745,864, filed Dec. 26, 2012; Ser. No. 61/700,617, filed Sep. 13, 2012; and Ser. No. 61/616,855, filed Mar. 28, 2012, which are hereby incorporated herein by reference in their entireties).


Finally, at 52, the processor 22 outputs manipulated image data, including the first (central) region manipulated image data and the second and third (left and right) region manipulated image data, to the display 30 to cause the display 30 to display the manipulated image 86 to the operator of the vehicle 10.


It can be seen from FIGS. 4B and 4C that the left and right manipulated regions 82, 84 are enlarged relative to the manipulated central region 80 when compared with the original left, right, and central regions 72, 74, 70. That is, the left and right manipulated regions 82, 84 are increased in size in order to emphasize to the driver hazards or other information that may be contained in these regions. This is not to say that the central region is unimportant, but rather, referring to FIG. 4A showing the original image, that the left and right regions 72, 74 are relatively small and hazards therein may not be readily noticed by drivers. Moreover, the reshaping of the left and right regions 72, 74 can further emphasize to the driver that these regions are more lateral to the vehicle than may be apparent from the original image 60. For example, the approaching vehicle 78 in the original image 60 may have a position or may be moving in a way that is misperceived by the driver due to the distortion caused by the wide-angle lens 32. While the vehicle 78 is indeed a hazard to the driver wishing to enter the roadway 62, the distortion of the image 60 in the left region 72 may be confusing to the driver and may cause the driver to not fully realize the approaching hazard. However, the image in FIG. 4C reshapes the image of the vehicle 78 to make it larger and to emphasize that the path of the vehicle 78 will intersect with the driver's intended path of the vehicle 10. The hazard is the same, but its significance is highlighted. The second (and third) image manipulation routine 28 can be configured to highlight hazards in the left and right regions by selecting specific image manipulation techniques (such as, for example, specific kinds of reshaping) that are found to quickly and coherently inform drivers of such hazards.


In this example, the image manipulation used on the left and right regions 72, 74 is one that moves an apparent viewpoint of the camera along the path of travel of the vehicle. That is, it gives the driver the impression of peeking around the corner behind (in the case of the camera 12 being rear-facing) or ahead (in the case of the camera 12 being front-facing) of the vehicle 10. Although no additional information is added to the left and right manipulated regions 82, 84 (at most, interpolation or extrapolation may be used to enlarge these regions), the reshaping performed can alter the driver's perception of these regions in a way that better alerts the driver to hazards. The presence of the seams 87, whether enhanced or not, can also contribute to increased hazard perception.


Although showing the left and right manipulated regions 82, 84 has the advantage of alerting drivers to oncoming cross-traffic or other hazards that may be obstructed by blind spots or obstacles, showing the central manipulated region 80 as well provides a further advantage even if the scene of the central region is clearly directly visible via the rear-view mirror or front windshield. This advantage is that the driver does not have to switch his/her attention between the display 30 and the rear view mirror or front windshield, and can thus observe the entire scene on the display 30.


In the examples described herein, the field of view shown on the display 30 has no gaps. More particularly, there is no gap in the field of view between the fields of view displayed in the left and right views and the central view on the in-cabin display. There is, in at least some embodiments, overlap between the central view and the respective left and right views so as to ensure that the views (such as the left, right and central views) represent a continuous field of view without gaps on the display. Some known systems omit a horizontal angular region, such as the central region, in order to have more room to display left and right regions. However, omitting any such region from the display may result in a safety concern, in that the driver may not be able to properly see the omitted region by another means. Moreover, the driver may incorrectly assume that an omitted region, by virtue of its omission, is unimportant to safe operation of the vehicle. To address this, the examples described herein show a continuous horizontal field of view in one place on the display 30.


Steps of the method 40 can be performed in orders different from that described and can be aggregated together or further separated.


In another example, dewarping is performed on the entire image 60 while reshaping is only performed on the left and right regions 72, 74. In that case, the first region is substantially the entire image 60, while the second and third regions are smaller regions of the full image 60. In short, the regions of the image being differently manipulated can overlap.



FIG. 5 shows another example of a manipulated image 90. In this example, a different manipulation routine is used to reshape left and right manipulated regions 92, 94 differently when compared to the example of FIGS. 4A-C. The approaching vehicle 98 appears larger and travels in a direction more aligned with the portion of the road 99 shown in the central manipulated region 96. Gridlines are omitted from this figure for clarity.


The manipulated images 86, 90 of FIGS. 4C and 5 are merely illustrative, and manipulated images of other shapes can be generated. The below-described manipulated images 100, 110, 120 can equally be used in place of the manipulated images 86, 90 in the above description.



FIG. 6 shows a manipulated image 100 having a rectangular manipulated central region 102 positioned above and spanning the widths of two smaller, rectangular, and mirror-symmetric left and right manipulated regions 104, 106. The image manipulation performed on the central region 70 of the image 60 to obtain the region 102 can include dewarping and reshaping that includes stretching and cropping an upper portion away. The different image manipulation performed on the left and right regions 72, 74 to obtain the regions 104, 106 can include dewarping and reshaping that includes cropping upper portions away.



FIG. 7 shows a manipulated image 110 having a triangular manipulated central region 112 positioned above and nested between two smaller, trapezoidal, and mirror-symmetric left and right manipulated regions 114, 116. The image manipulation performed on the central region 70 of the image 60 to obtain the region 112 can include dewarping and reshaping that includes horizontally stretching an upper portion and horizontally compressing a lower portion to obtain the triangular shape as well as cropping an upper portion away. The different image manipulation performed on the left and right regions 72, 74 to obtain the regions 114, 116 can include dewarping and reshaping that includes vertically stretching outer portions and vertically compressing inner portions to obtain the trapezoidal shapes that match the triangular shape of the region 112.



FIG. 8 shows a manipulated image 120 obtained by performing a continuous dewarping manipulation on the entire image 60, which produces a dewarped central region 122 and dewarped left and right regions 124, 126, and by performing reshaping of the left and right regions 72, 74 by stretching to provide image information at the outer corners 129 of the left and right manipulated regions 124, 126. The left and right manipulated regions 124, 126 can be mirror-symmetric. Continuous seams 128 are produced by the continuous dewarping manipulation, and it can be seen that objects in the manipulated image 120 meet at the continuous seams 128 as they do in the original image 60. The continuous seams 128 are not graphically enhanced in this example, but may be visually apparent to the driver due to the shapes of objects in the vicinity of the seams 128.



FIG. 9 shows a remapping table 134 that correlates X and Y coordinates of source pixels of a source image 130 to X and Y coordinates of destination pixels of a destination image 132. The remapping table 134 allows color values A-L of each source pixel to be set at the X and Y coordinates of a corresponding destination pixel. In this example, the corner pixels of the source image 130 are not used, so the remapping table 134 references color values of neighboring pixels to populate the destination image 132. Although simplified to 16 pixels for explanatory purposes, the remapping table 134 corresponds to a reshaping operation that increases the size of the destination image 132 as well as makes the destination image 132 rectangular when compared to the source image 130, which is nominally round. This technique, scaled up to a greater number of pixels, can be used to achieve any of the image manipulations discussed herein.
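
A toy version of this 16-pixel remapping can be expressed with numpy fancy indexing; the coordinates below are invented for illustration, since the actual table values appear only in the figure:

```python
import numpy as np

src = np.arange(16).reshape(4, 4)   # stand-in color values, 4x4 source

# Source (x, y) for each destination pixel, row by row. The corner
# entries borrow a nearby in-bounds neighbor, mimicking how the table
# references neighboring pixels where source corner pixels are unused.
map_x = np.array([[1, 1, 2, 2],
                  [0, 1, 2, 3],
                  [0, 1, 2, 3],
                  [1, 1, 2, 2]])
map_y = np.array([[1, 0, 0, 1],
                  [1, 1, 1, 1],
                  [2, 2, 2, 2],
                  [2, 3, 3, 2]])
dst = src[map_y, map_x]             # numpy fancy indexing does the remap
```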


The image manipulation and display system of the present invention may utilize aspects of the systems described in U.S. provisional applications, Ser. No. 61/745,864, filed Dec. 26, 2012; Ser. No. 61/700,617, filed Sep. 13, 2012; and Ser. No. 61/616,855, filed Mar. 28, 2012, which are hereby incorporated herein by reference in their entireties.


According to one aspect of this disclosure, a vehicle camera system includes a camera configured to be positioned on a vehicle, a display configured to be positioned in a cabin of the vehicle, and a processor operatively coupled to the camera and the display. The processor is configured to receive image data from the camera, the image data being representative of an image captured by the camera, and perform a first image manipulation on a first portion of the image data corresponding to a first region of the image to generate first region manipulated image data. The processor is further configured to perform a second image manipulation on a second portion of the image data corresponding to a second region of the image to generate second region manipulated image data. The second region is different from the first region, and the second image manipulation is of a type different from the first image manipulation. The processor is further configured to output to the display manipulated image data including the first region manipulated image data and the second region manipulated image data to cause the display to display a manipulated image based on the manipulated image data. The manipulated image has a first manipulated region corresponding to the first region manipulated image data and a second manipulated region corresponding to the second region manipulated image data.


The first image manipulation can include dewarping.


The second image manipulation can include reshaping, and further, can include dewarping.


The second image manipulation can be configured to move an apparent viewpoint of the camera along a path of travel of the vehicle.


The processor can be further configured to perform a third image manipulation on a third portion of the image data corresponding to a third region of the image to generate third region manipulated image data. The third region is different from the first region and the second region. The third image manipulation is of a same type as the second image manipulation. The manipulated image data includes the third region manipulated image data, which corresponds to a third manipulated region that forms part of the manipulated image.


The third region of the image can have a shape that is mirror-symmetric to a shape of the second region of the image.


The third manipulated region of the manipulated image can have a shape that is mirror-symmetric to a shape of the second manipulated region of the manipulated image.


The first region can be a central region, the second region can be a left region, and the third region can be a right region of a scene captured by the camera.


The manipulated image can include a generally discontinuous seam between the first manipulated region and the second manipulated region.


The processor can be configured to graphically enhance the discontinuous seam.


The image data can be representative of at least a frame of video captured by the camera.


The camera can include a wide-angle lens.


The processor can be configured to generate the manipulated image based on image data received from only the camera.


The camera can be configured to be rear-facing on the vehicle.


The camera can be configured to be forward-facing on the vehicle.


The manipulated image can include substantially the entire horizontal field of view of the camera.


According to another aspect of this disclosure, a vehicle camera system includes a camera configured to be positioned on a vehicle, a display configured to be positioned in a cabin of the vehicle, and a processor operatively coupled to the camera and the display. The processor is configured to manipulate an image received from the camera to generate a manipulated image. Pixels of a central region of the image are manipulated by a different type of manipulation than pixels of left and right regions of the image. The pixels of the left and right regions of the image are manipulated symmetrically about a vertical axis central to the central region.


According to another aspect of this disclosure, a method includes capturing an image with a camera positioned on a vehicle, performing a first image manipulation on a first region of the image to generate a first manipulated region, and performing a second image manipulation on a second region of the image to generate a second manipulated region. The second region is different from the first region. The second image manipulation is of a type different from the first image manipulation. The method further includes displaying a manipulated image including the first manipulated region and the second manipulated region.


The first image manipulation can include dewarping.


The second image manipulation can include reshaping, and further, can include dewarping.


The second image manipulation can be configured to move an apparent viewpoint of the camera along a path of travel of the vehicle.


The method can further include performing a third image manipulation on a third region of the image to generate a third manipulated region. The third region is different from the first region and the second region. The third image manipulation is of a same type as the second image manipulation. The manipulated image further includes the third manipulated region.


The third region of the image can have a shape that is mirror-symmetric to a shape of the second region of the image.


The third manipulated region of the manipulated image can have a shape that is mirror-symmetric to a shape of the second manipulated region of the manipulated image.


The first region can be a central region, the second region can be a left region, and the third region can be a right region of a scene captured by the camera.


The manipulated image can include a generally discontinuous seam between the first manipulated region and the second manipulated region.


The method can further include graphically enhancing the discontinuous seam.


The image can be a frame of video captured by the camera.


The image can be captured with only the camera.


The manipulated image can include substantially the entire horizontal field of view of the camera.


The camera or cameras may include or may be associated with an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an EYEQ2 or EYEQ3 image processing chip available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.


The camera or imager or imaging sensor may comprise any suitable camera or imager or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012, and published on Jun. 6, 2013 as International Publication No. WO 2013/081985, which is hereby incorporated herein by reference in its entirety.


The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ladar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, an array of a plurality of photosensor elements arranged in at least about 640 columns and 480 rows (at least about a 640×480 imaging array), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data. For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, PCT Application No. PCT/US2010/047256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686 and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US2012/048800, filed Jul. 30, 2012, and published on Feb. 7, 2013 as International Publication No. WO 2013/019707, and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012, and published on Jan. 31, 2013 as International Publication No. WO 2013/016409, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012, and published Nov. 1, 2012 as International Publication No. WO 2012/145822, and/or PCT Application No. PCT/US2012/056014, filed Sep. 19, 2012, and published Mar. 28, 2013 as International Publication No. WO 2013/043661, and/or PCT Application No. PCT/US12/57007, filed Sep. 25, 2012, and published Apr. 4, 2013 as International Publication No. WO 2013/048994, and/or PCT Application No. PCT/US2012/061548, filed Oct. 24, 2012, and published on May 2, 2013 as International Publication No. WO 2013/063014, and/or PCT Application No. PCT/US2012/062906, filed Nov. 1, 2012, and published May 1, 2013 as International Publication No. WO 2013/067083, and/or PCT Application No. PCT/US2012/063520, filed Nov. 5, 2012, and published May 16, 2013 as International Publication No. WO 2013/070539, and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012, and published May 23, 2013 as International Publication No. WO 2013/074604, and/or PCT Application No. PCT/US2012/066570, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081984, and/or PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081985, and/or PCT Application No. PCT/US2012/068331, filed Dec. 7, 2012, and published Jun. 13, 2013 as International Publication No. WO 2013/086249, and/or PCT Application No. PCT/US2012/071219, filed Dec. 
21, 2012, and published Jul. 11, 2013 as International Publication No. WO 2013/103548, and/or PCT Application No. PCT/US2013/022119, filed Jan. 18, 2013, and published Jul. 25, 2013 as International Publication No. WO 2013/109869, and/or PCT Application No. PCT/US2013/026101, filed Feb. 14, 2013, and published Aug. 22, 2013 as International Publication No. WO 2013/123161, and/or U.S. patent application Ser. No. 13/681,963, filed Nov. 20, 2012, now U.S. Pat. No. 9,264,673; Ser. No. 13/660,306, filed Oct. 25, 2012, now U.S. Pat. No. 9,146,898; Ser. No. 13/653,577, filed Oct. 17, 2012, now U.S. Pat. No. 9,174,574; and/or Ser. No. 13/534,657, filed Jun. 27, 2012, and published on Jan. 3, 2013 as U.S. Patent Publication No. US-2013-0002873, and/or U.S. provisional applications, Ser. No. 61/766,883, filed Feb. 20, 2013; Ser. No. 61/760,368, filed Feb. 4, 2013; Ser. No. 61/760,364, filed Feb. 4, 2013; Ser. No. 61/758,537, filed Jan. 30, 2013; Ser. No. 61/754,8004, filed Jan. 21, 2013; Ser. No. 61/745,925, filed Dec. 26, 2012; Ser. No. 61/745,864, filed Dec. 26, 2012; Ser. No. 61/736,104, filed Dec. 12, 2012; Ser. No. 61/736,103, filed Dec. 12, 2012; Ser. No. 61/735,314, filed Dec. 10, 2012; Ser. No. 61/734,457, filed Dec. 7, 2012; Ser. No. 61/733,598, filed Dec. 5, 2012; Ser. No. 61/733,093, filed Dec. 4, 2012; Ser. No. 61/727,912, filed Nov. 19, 2012; Ser. No. 61/727,911, filed Nov. 19, 2012; Ser. No. 61/727,910, filed Nov. 19, 2012; Ser. No. 61/718,382, filed Oct. 25, 2012; Ser. No. 61/710,924, filed Oct. 8, 2012; Ser. No. 61/696,416, filed Sep. 4, 2012; Ser. No. 61/682,995, filed Aug. 14, 2012; Ser. No. 61/682,486, filed Aug. 13, 2012; Ser. No. 61/680,883, filed Aug. 8, 2012; Ser. No. 61/676,405, filed Jul. 27, 2012; Ser. No. 61/666,146, filed Jun. 29, 2012; Ser. No. 61/648,744, filed May 18, 2012; Ser. No. 61/624,507, filed Apr. 16, 2012; Ser. No. 61/616,126, filed Mar. 27, 2012; Ser. No. 61/615,410, filed Mar. 26, 2012; Ser. No. 61/613,651, filed Mar. 21, 2012; Ser. No. 61/607,229, filed Mar. 6, 2012; Ser. No. 61/602,876, filed Feb. 24, 2012; and/or Ser. No. 61/601,651, filed Feb. 22, 2012, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in PCT Application No. PCT/US10/038477, filed Jun. 14, 2010, and/or U.S. patent application Ser. No. 13/202,005, filed Aug. 17, 2011, now U.S. Pat. No. 9,126,525, which are hereby incorporated herein by reference in their entireties.


The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras and vision systems described in U.S. Pat. Nos. 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454 and 6,824,281, and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or U.S. patent application Ser. No. 12/508,840, filed Jul. 24, 2009, and published Jan. 28, 2010 as U.S. Pat. Publication No. US 2010-0020170, and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012, and published on Jan. 31, 2013 as International Publication No. WO 2013/016409, and/or U.S. patent application Ser. No. 13/534,657, filed Jun. 27, 2012, and published on Jan. 3, 2013 as U.S. Patent Publication No. US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The camera or cameras may comprise any suitable cameras or imaging sensors or camera modules, and may utilize aspects of the cameras or sensors described in U.S. patent application Ser. No. 12/091,359, filed Apr. 24, 2008 and published Oct. 1, 2009 as U.S. Publication No. US-2009-0244361, and/or Ser. No. 13/260,400, filed Sep. 26, 2011, now U.S. Pat. Nos. 8,542,451, and/or 7,965,336 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties. The imaging array sensor may comprise any suitable sensor, and may utilize various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like, such as the types described in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,715,093; 5,877,897; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 6,498,620; 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 6,806,452; 6,396,397; 6,822,563; 6,946,978; 7,339,149; 7,038,577; 7,004,606 and/or 7,720,580, and/or U.S. patent application Ser. No. 10/534,632, filed May 11, 2005, now U.S. Pat. No. 7,965,336; and/or PCT Application No. PCT/US2008/076022, filed Sep. 11, 2008 and published Mar. 19, 2009 as International Publication No. WO 2009/036176, and/or PCT Application No. PCT/US2008/078700, filed Oct. 3, 2008 and published Apr. 9, 2009 as International Publication No. WO 2009/046268, which are all hereby incorporated herein by reference in their entireties.


The camera module and circuit chip or board and imaging sensor may be implemented and operated in connection with various vehicular vision-based systems, and/or may be operable utilizing the principles of such other vehicular systems, such as a vehicle headlamp control system, such as the type disclosed in U.S. Pat. Nos. 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 7,004,606; 7,339,149 and/or 7,526,103, which are all hereby incorporated herein by reference in their entireties, a rain sensor, such as the types disclosed in commonly assigned U.S. Pat. Nos. 6,353,392; 6,313,454; 6,320,176 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties, a vehicle vision system, such as a forwardly, sidewardly or rearwardly directed vehicle vision system utilizing principles disclosed in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,877,897; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978 and/or 7,859,565, which are all hereby incorporated herein by reference in their entireties, a trailer hitching aid or tow check system, such as the type disclosed in U.S. Pat. No. 7,005,974, which is hereby incorporated herein by reference in its entirety, a reverse or sideward imaging system, such as for a lane change assistance system or lane departure warning system or for a blind spot or object detection system, such as imaging or detection systems of the types disclosed in U.S. Pat. Nos. 7,720,580; 7,038,577; 5,929,786 and/or 5,786,772, and/or U.S. patent application Ser. No. 11/239,980, filed Sep. 30, 2005, now U.S. Pat. No. 7,881,496, and/or U.S. provisional applications, Ser. No. 60/628,709, filed Nov. 17, 2004; Ser. No. 60/614,644, filed Sep. 30, 2004; Ser. No. 60/618,686, filed Oct. 14, 2004; Ser. No. 60/638,687, filed Dec. 23, 2004, which are hereby incorporated herein by reference in their entireties, a video device for internal cabin surveillance and/or video telephone function, such as disclosed in U.S. Pat. Nos. 5,760,962; 5,877,897; 6,690,268 and/or 7,370,983, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties, a traffic sign recognition system, a system for determining a distance to a leading or trailing vehicle or object, such as a system utilizing the principles disclosed in U.S. Pat. Nos. 6,396,397 and/or 7,123,168, which are hereby incorporated herein by reference in their entireties, and/or the like.


Optionally, the circuit board or chip may include circuitry for the imaging array sensor and or other electronic accessories or features, such as by utilizing compass-on-a-chip or EC driver-on-a-chip technology and aspects such as described in U.S. Pat. Nos. 7,255,451 and/or 7,480,149; and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, and/or Ser. No. 12/578,732, filed Oct. 14, 2009, now U.S. Pat. No. 9,487,144, which are hereby incorporated herein by reference in their entireties.


Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device disposed at or in the interior rearview mirror assembly of the vehicle, such as by utilizing aspects of the video mirror display systems described in U.S. Pat. No. 6,690,268 and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011, now U.S. Pat. No. 9,264,672, which are hereby incorporated herein by reference in their entireties. The video mirror display may comprise any suitable devices and systems and optionally may utilize aspects of the compass display systems described in U.S. Pat. Nos. 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,677,851; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,508; 6,222,460; 6,513,252 and/or 6,642,851, and/or European patent application, published Oct. 11, 2000 under Publication No. EP 0 1043566, and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, which are all hereby incorporated herein by reference in their entireties. Optionally, the video mirror display screen or device may be operable to display images captured by a rearward viewing camera of the vehicle during a reversing maneuver of the vehicle (such as responsive to the vehicle gear actuator being placed in a reverse gear position or the like) to assist the driver in backing up the vehicle, and optionally may be operable to display the compass heading or directional heading character or icon when the vehicle is not undertaking a reversing maneuver, such as when the vehicle is being driven in a forward direction along a road (such as by utilizing aspects of the display system described in PCT Application No. PCT/US2011/056295, filed Oct. 14, 2011 and published Apr. 19, 2012 as International Publication No. WO 2012/051500, which is hereby incorporated herein by reference in its entirety).


Optionally, the vision system (utilizing the forward facing camera and a rearward facing camera and other cameras disposed at the vehicle with exterior fields of view) may be part of or may provide a display of a top-down view or birds-eye view system of the vehicle or a surround view at the vehicle, such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or PCT Application No. PCT/US2011/062834, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012/075250, and/or PCT Application No. PCT/US2012/048993, filed Jul. 31, 2012, and published on Feb. 7, 2013 as International Publication No. WO 2013/019795, and/or PCT Application No. PCT/US11/62755, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012/075250, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012, and published on Nov. 1, 2012 as International Publication No. WO 2012/145822, and/or PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081985, and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011, now U.S. Pat. No. 9,264,672, and/or U.S. provisional applications, Ser. No. 61/615,410, filed Mar. 26, 2012, which are hereby incorporated herein by reference in their entireties.


Optionally, a video mirror display may be disposed rearward of and behind the reflective element assembly and may comprise a display such as the types disclosed in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,370,983; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187 and/or 6,690,268, and/or in U.S. patent application Ser. No. 12/091,525, filed Apr. 25, 2008, now U.S. Pat. No. 7,855,755; Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008; and/or Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are all hereby incorporated herein by reference in their entireties. The display is viewable through the reflective element when the display is activated to display information. The display element may be any type of display element, such as a vacuum fluorescent (VF) display element, a light emitting diode (LED) display element, such as an organic light emitting diode (OLED) or an inorganic light emitting diode, an electroluminescent (EL) display element, a liquid crystal display (LCD) element, a video screen display element or backlit thin film transistor (TFT) display element or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status, and/or the like. The mirror assembly and/or display may utilize aspects described in U.S. Pat. Nos. 7,184,190; 7,255,451; 7,446,924 and/or 7,338,177, which are all hereby incorporated herein by reference in their entireties. The thicknesses and materials of the coatings on the substrates of the reflective element may be selected to provide a desired color or tint to the mirror reflective element, such as a blue colored reflector, such as is known in the art and such as described in U.S. Pat. Nos. 5,910,854; 6,420,036 and/or 7,274,501, which are hereby incorporated herein by reference in their entireties.


Optionally, the display or displays and any associated user inputs may be associated with various accessories or systems, such as, for example, a tire pressure monitoring system or a passenger air bag status or a garage door opening system or a telematics system or any other accessory or system of the mirror assembly or of the vehicle or of an accessory module or console of the vehicle, such as an accessory module or console of the types described in U.S. Pat. Nos. 7,289,037; 6,877,888; 6,824,281; 6,690,268; 6,672,744; 6,386,742 and 6,124,886, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties.


While the foregoing provides certain non-limiting example embodiments, it should be understood that combinations, subsets, and variations of the foregoing are contemplated. The monopoly sought is defined by the claims.

Claims
  • 1. A vehicular vision system, said vehicular vision system comprising:
  a camera disposed at a front portion of a vehicle, wherein said camera has a forward field of view of at least 180 degrees so as to capture image data representative of a view of cross traffic when the vehicle is at an intersection;
  a display screen disposed in a cabin of the vehicle and viewable by a driver of the vehicle;
  a processor disposed at the vehicle for processing image data captured by said camera;
  wherein image data captured by said camera is representative of the view of said camera;
  wherein image data captured by said camera is provided to said processor;
  wherein said processor processes the provided captured image data;
  wherein said processor processes the provided captured image data via a first image manipulation of a first portion of the provided captured image data that corresponds to a first region of the view of said camera to generate first region manipulated image data;
  wherein said processor processes the provided captured image data via a second image manipulation of a second portion of the provided captured image data that corresponds to a second region of the view of said camera to generate second region manipulated image data, the second region being different from the first region;
  wherein said processor processes the provided captured image data via a third image manipulation of a third portion of the provided captured image data that corresponds to a third region of the view of said camera to generate third region manipulated image data, the third region being different from the first region and being different from the second region;
  wherein the first region comprises a central region of the view of said camera, the second region comprises a left region of the view of said camera, and the third region comprises a right region of the view of said camera;
  wherein said processor generates manipulated image data that comprises the first region manipulated image data, the second region manipulated image data and the third region manipulated image data;
  wherein, responsive to the generated manipulated image data, said display screen displays images based at least in part on the generated manipulated image data, and wherein the displayed images include (i) a first image at a first display region of said display screen, the first image at least partially derived from the first region manipulated image data, (ii) a second image at a second display region of said display screen, the second image at least partially derived from the second region manipulated image data and (iii) a third image at a third display region of said display screen, the third image at least partially derived from the third region manipulated image data;
  wherein the displayed images are discontinuous at a first seam between the first display region and the second display region such that an object present in both the first and second regions of the view of said camera is displayed as discontinuous at the first seam;
  wherein the displayed images are discontinuous at a second seam between the first display region and the third display region such that an object present in both the first and third regions of the view of said camera is displayed as discontinuous at the second seam; and
  wherein, with the vehicle moving and approaching a cross traffic situation where an approaching vehicle is approaching a path of travel of the vehicle, the second image is representative of the approaching vehicle and the second image manipulation comprises selecting a manipulation technique based on the approaching vehicle and manipulating, using the selected manipulation technique, the second portion of the provided captured image data to move an apparent viewpoint of the camera along the path of travel of the vehicle and to enlarge the second image representative of the approaching vehicle at the second display region of said display screen compared to the first image at the first display region of said display screen.
  • 2. The vehicular vision system of claim 1, wherein the second and third image manipulations comprise reshaping.
  • 3. The vehicular vision system of claim 1, wherein said vehicular vision system, responsive to detection of cross traffic at the intersection, performs the second image manipulation to move the apparent viewpoint along the path of travel of the vehicle when the vehicle is at the intersection.
  • 4. The vehicular vision system of claim 1, wherein the second image manipulation is different from the first image manipulation, and wherein the third image manipulation is different from the first image manipulation.
  • 5. The vehicular vision system of claim 4, wherein the third image manipulation is of a same type as the second image manipulation.
  • 6. The vehicular vision system of claim 1, wherein the third display region has a shape that is mirror-symmetric to a shape of the second display region.
  • 7. The vehicular vision system of claim 1, wherein the first seam comprises a first vertical seam between the first display region and the second display region, and wherein the second seam comprises a second vertical seam between the first display region and the third display region.
  • 8. The vehicular vision system of claim 7, wherein said processor graphically enhances the first vertical seam between the first display region and the second display region and graphically enhances the second vertical seam between the first display region and the third display region.
  • 9. The vehicular vision system of claim 1, wherein said processor graphically enhances the first and second seams.
  • 10. The vehicular vision system of claim 1, wherein the provided captured image data is representative of at least a frame of video captured by said camera.
  • 11. The vehicular vision system of claim 1, wherein said camera includes a wide-angle lens.
  • 12. The vehicular vision system of claim 1, wherein the manipulated image data generated by said processor is based on image data received from only said camera.
  • 13. The vehicular vision system of claim 1, wherein the second and third image manipulations provide a different viewpoint as compared to a viewpoint provided via the first image manipulation.
  • 14. The vehicular vision system of claim 13, wherein the different viewpoint provided by the second and third image manipulations comprises the apparent viewpoint that is moved along the path of travel of the vehicle relative to the viewpoint provided via the first image manipulation.
  • 15. The vehicular vision system of claim 1, wherein the generated manipulated image data is provided by said processor to said display screen.
  • 16. A vehicular vision system, said vehicular vision system comprising:
  a camera disposed at a front portion of a vehicle, wherein said camera has a forward field of view of at least 180 degrees so as to capture image data representative of a view of cross traffic when the vehicle is at an intersection;
  a display screen disposed in a cabin of the vehicle and viewable by a driver of the vehicle;
  a processor disposed at the vehicle for processing image data captured by said camera;
  wherein image data captured by said camera is representative of the view of said camera;
  wherein image data captured by said camera is provided to said processor;
  wherein said processor processes the provided captured image data;
  wherein said processor processes the provided captured image data via a first image manipulation of a first portion of the provided captured image data that corresponds to a first region of the view of said camera to generate first region manipulated image data;
  wherein said processor processes the provided captured image data via a second image manipulation of a second portion of the provided captured image data that corresponds to a second region of the view of said camera to generate second region manipulated image data, the second region being different from the first region;
  wherein said processor processes the provided captured image data via a third image manipulation of a third portion of the provided captured image data that corresponds to a third region of the view of said camera to generate third region manipulated image data, the third region being different from the first region and being different from the second region;
  wherein the second image manipulation is different from the first image manipulation;
  wherein the second and third image manipulations provide respective different viewpoints as compared to a viewpoint provided via the first image manipulation;
  wherein the first region comprises a central region of the view of said camera, the second region comprises a left region of the view of said camera, and the third region comprises a right region of the view of said camera;
  wherein said processor generates manipulated image data that comprises the first region manipulated image data, the second region manipulated image data and the third region manipulated image data;
  wherein, responsive to the generated manipulated image data, said display screen displays images based at least in part on the generated manipulated image data, and wherein the displayed images include (i) a first image at a first display region of said display screen, the first image at least partially derived from the first region manipulated image data, (ii) a second image at a second display region of said display screen, the second image at least partially derived from the second region manipulated image data and (iii) a third image at a third display region of said display screen, the third image at least partially derived from the third region manipulated image data;
  wherein, with the vehicle moving and approaching a cross traffic situation where an approaching vehicle is approaching a path of travel of the vehicle, the second image is representative of the approaching vehicle and the second image manipulation comprises selecting a manipulation technique based on the approaching vehicle and manipulating, using the selected manipulation technique, the second portion of the provided captured image data to move an apparent viewpoint of the camera along the path of travel of the vehicle and to enlarge the second image representative of the approaching vehicle at the second display region of said display screen compared to the first image at the first display region of said display screen;
  wherein, with the vehicle moving and approaching the cross traffic situation, the third image manipulation comprises manipulating the third portion of the provided captured image data to move the apparent viewpoint of the camera along the path of travel of the vehicle and to enlarge the third image at the third display region of said display screen compared to the first image at the first display region of said display screen;
  wherein the displayed images are discontinuous at a first seam between the first display region and the second display region such that an object present in both the first and second regions of the view of said camera is displayed as discontinuous at the first seam; and
  wherein the displayed images are discontinuous at a second seam between the first display region and the third display region such that an object present in both the first and third regions of the view of said camera is displayed as discontinuous at the second seam.
  • 17. The vehicular vision system of claim 16, wherein the second and third image manipulations comprise reshaping.
  • 18. The vehicular vision system of claim 16, wherein the first seam comprises a vertical seam between the first display region and the second display region, and wherein the second seam comprises a vertical seam between the first display region and the third display region.
  • 19. The vehicular vision system of claim 16, wherein said processor graphically enhances the first and second seams.
  • 20. The vehicular vision system of claim 16, wherein said camera includes a wide-angle lens.
  • 21. The vehicular vision system of claim 16, wherein the manipulated image data generated by said processor is based on image data received from only said camera.
  • 22. The vehicular vision system of claim 16, wherein the different viewpoint provided by the second and third image manipulations comprises the apparent viewpoint that is moved along the path of travel of the vehicle relative to the viewpoint provided via the first image manipulation.
  • 23. The vehicular vision system of claim 16, wherein the generated manipulated image data is provided by said processor to said display screen.
  • 24. A vehicular vision system, said vehicular vision system comprising:
  a camera disposed at a front portion of a vehicle, wherein said camera has a forward field of view of at least 180 degrees so as to capture image data representative of a view of cross traffic when the vehicle is at an intersection;
  a display screen disposed in a cabin of the vehicle and viewable by a driver of the vehicle;
  a processor disposed at the vehicle for processing image data captured by said camera;
  wherein image data captured by said camera is representative of the view of said camera;
  wherein image data captured by said camera is provided to said processor;
  wherein said processor processes the provided captured image data;
  wherein said processor processes the provided captured image data via a first image manipulation of a first portion of the provided captured image data that corresponds to a first region of the view of said camera to generate first region manipulated image data;
  wherein said processor processes the provided captured image data via a second image manipulation of a second portion of the provided captured image data that corresponds to a second region of the view of said camera to generate second region manipulated image data, the second region being different from the first region;
  wherein said processor processes the provided captured image data via a third image manipulation of a third portion of the provided captured image data that corresponds to a third region of the view of said camera to generate third region manipulated image data, the third region being different from the first region and being different from the second region;
  wherein the second and third image manipulations provide respective different viewpoints as compared to a viewpoint provided via the first image manipulation;
  wherein the different viewpoints of said camera provided by the second and third image manipulations comprise an apparent viewpoint that is moved along a path of travel of the vehicle relative to the viewpoint provided via the first image manipulation;
  wherein the first region comprises a central region of the view of said camera, the second region comprises a left region of the view of said camera, and the third region comprises a right region of the view of said camera;
  wherein said processor generates manipulated image data that comprises the first region manipulated image data, the second region manipulated image data and the third region manipulated image data;
  wherein the generated manipulated image data is provided by said processor to said display screen;
  wherein, responsive to the generated manipulated image data, said display screen displays images based at least in part on the generated manipulated image data, and wherein the displayed images include (i) a first image at a first display region of said display screen, the first image at least partially derived from the first region manipulated image data, (ii) a second image at a second display region of said display screen, the second image at least partially derived from the second region manipulated image data and (iii) a third image at a third display region of said display screen, the third image at least partially derived from the third region manipulated image data;
  wherein, with the vehicle moving and approaching a cross traffic situation where an approaching vehicle is approaching a path of travel of the vehicle, the second image is representative of the approaching vehicle and the second image manipulation comprises selecting a manipulation technique based on the approaching vehicle and manipulating, using the selected manipulation technique, the second portion of the provided captured image data to move the apparent viewpoint of the camera along the path of travel of the vehicle and to enlarge the second image representative of the approaching vehicle at the second display region of said display screen compared to the first image at the first display region of said display screen;
  wherein, with the vehicle moving and approaching the cross traffic situation, the third image manipulation comprises manipulating the third portion of the provided captured image data to move the apparent viewpoint of the camera along the path of travel of the vehicle and to enlarge the third image at the third display region of said display screen compared to the first image at the first display region of said display screen;
  wherein the displayed images are discontinuous at a first vertical seam between the first display region and the second display region such that an object present in both the first and second regions of the view of said camera is displayed as discontinuous at the first vertical seam; and
  wherein the displayed images are discontinuous at a second vertical seam between the first display region and the third display region such that an object present in both the first and third regions of the view of said camera is displayed as discontinuous at the second vertical seam.
  • 25. The vehicular vision system of claim 24, wherein the second image manipulation is different from the first image manipulation, and wherein the third image manipulation is different from the first image manipulation.
  • 26. The vehicular vision system of claim 25, wherein the third image manipulation moves the apparent viewpoint the same as the second image manipulation.
  • 27. The vehicular vision system of claim 24, wherein said processor graphically enhances the first and second vertical seams.
  • 28. The vehicular vision system of claim 24, wherein the provided captured image data is representative of at least a frame of video captured by said camera.
  • 29. The vehicular vision system of claim 24, wherein said camera includes a wide-angle lens.
  • 30. The vehicular vision system of claim 24, wherein the manipulated image data generated by said processor is based on image data received from only said camera.
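The region-based processing recited in claim 1 can be illustrated concretely. The following is a minimal sketch, not the patented implementation: it assumes an OpenCV/NumPy environment, carves a captured wide-angle frame into left, center and right regions, applies a different manipulation to each, and composes the results side by side so that an object spanning a region boundary appears discontinuous at the seam between display regions. The function name, region proportions and output sizes are all illustrative assumptions.

```python
# Minimal sketch (illustrative only) of the three-region manipulation of
# claim 1. Assumes OpenCV and NumPy; region proportions, output sizes and
# all names here are assumptions, not taken from the patent.
import cv2
import numpy as np

SIDE_W, CENTER_W, OUT_H = 480, 640, 480  # assumed display-region sizes

def compose_three_region_view(frame):
    h, w = frame.shape[:2]
    # Carve the camera view into left, center and right regions.
    left = frame[:, : w // 4]
    center = frame[:, w // 4 : 3 * w // 4]
    right = frame[:, 3 * w // 4 :]

    # First image manipulation: scale the central region for display.
    center_out = cv2.resize(center, (CENTER_W, OUT_H))

    # Second and third image manipulations: a different manipulation for
    # each side region (here a simple anisotropic stretch that enlarges
    # the side images relative to the center image).
    left_out = cv2.resize(left, (SIDE_W, OUT_H))
    right_out = cv2.resize(right, (SIDE_W, OUT_H))

    # Because each region is warped independently, the composite is
    # discontinuous at the two seams between display regions.
    return np.hstack([left_out, center_out, right_out])
```

With the sizes assumed above, each side region maps a quarter of the source width to 480 output columns while the center maps half the source width to 640, so the side images are magnified roughly 1.5 times relative to the center image, in the spirit of the enlargement the claim recites.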
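Claims 16 and 24 further recite moving the apparent viewpoint of the camera along the path of travel while enlarging the side images. One generic way to approximate such a viewpoint shift, offered purely as a hedged sketch, is a planar perspective warp; cv2.getPerspectiveTransform and cv2.warpPerspective are standard OpenCV routines standing in for whatever remapping the production system applies, and the corner offsets below are assumed values.

```python
# Sketch of an apparent-viewpoint shift for one side region via a planar
# perspective warp. The corner geometry is an assumption chosen so that
# the outer edge of the region (where cross traffic first appears) is
# magnified in the output.
import cv2
import numpy as np

def shift_viewpoint(side_region, out_w=480, out_h=480, outer="left"):
    h, w = side_region.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    if outer == "left":  # left display region: outer edge is the left edge
        dst = np.float32([[0, -0.15 * out_h], [out_w, 0],
                          [out_w, out_h], [0, 1.15 * out_h]])
    else:                # right display region: mirror of the above
        dst = np.float32([[0, 0], [out_w, -0.15 * out_h],
                          [out_w, 1.15 * out_h], [0, out_h]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(side_region, m, (out_w, out_h))
```

Stretching the outer edge taller than the output window magnifies vehicles entering from that side, which is consistent with the enlarged second and third images recited in the claims; a production system would more likely derive the warp from camera calibration than from fixed corner offsets.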
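Claims 8, 9 and 27 recite graphically enhancing the seams between display regions. A hedged sketch of one way to do this follows: after the three display regions are composed, an overlay is drawn at each vertical seam so the driver reads the discontinuities as deliberate boundaries rather than rendering errors. The seam positions, bar color and thickness are assumptions tied to the sizes used in the first sketch.

```python
# Sketch of graphical seam enhancement: draw a vertical bar at each seam
# of the composed display. Color and thickness are assumed values.
import cv2

def enhance_seams(display, seam_xs, color=(40, 40, 40), thickness=4):
    out = display.copy()
    h = out.shape[0]
    for x in seam_xs:
        cv2.line(out, (x, 0), (x, h - 1), color, thickness)
    return out

# Usage with the sizes assumed earlier: the seams fall at x = 480 and
# x = 480 + 640 = 1120.
# view = enhance_seams(compose_three_region_view(frame), [480, 1120])
```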
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 16/699,900, filed Dec. 2, 2019, now U.S. Pat. No. 10,926,702, which is a continuation of U.S. patent application Ser. No. 14/377,940, filed Aug. 11, 2014, now U.S. Pat. No. 10,493,916, which is a 371 national phase filing of PCT Application No. PCT/US2013/027342, filed Feb. 22, 2013, which claims the filing benefit of U.S. provisional application Ser. No. 61/601,669, filed Feb. 22, 2012, which is hereby incorporated herein by reference in its entirety.

US Referenced Citations (302)
Number Name Date Kind
4891559 Matsumoto et al. Jan 1990 A
4961625 Wood et al. Oct 1990 A
4966441 Conner Oct 1990 A
4967319 Seko Oct 1990 A
4970653 Kenue Nov 1990 A
5003288 Wilhelm Mar 1991 A
5096287 Kakinami et al. Mar 1992 A
5166681 Bottesch et al. Nov 1992 A
5245422 Borcherts et al. Sep 1993 A
5313072 Vachss May 1994 A
5355118 Fukuhara Oct 1994 A
5359666 Nakayama et al. Oct 1994 A
5374852 Parkes Dec 1994 A
5386285 Asayama Jan 1995 A
5394333 Kao Feb 1995 A
5406395 Wilson et al. Apr 1995 A
5408346 Trissel et al. Apr 1995 A
5410346 Saneyoshi et al. Apr 1995 A
5414257 Stanton May 1995 A
5414461 Kishi et al. May 1995 A
5416313 Larson et al. May 1995 A
5416318 Hegyi May 1995 A
5416478 Morinaga May 1995 A
5424952 Asayama Jun 1995 A
5426294 Kobayashi et al. Jun 1995 A
5430431 Nelson Jul 1995 A
5434407 Bauer et al. Jul 1995 A
5440428 Hegg et al. Aug 1995 A
5444478 Lelong et al. Aug 1995 A
5451822 Bechtel et al. Sep 1995 A
5457493 Leddy et al. Oct 1995 A
5461357 Yoshioka et al. Oct 1995 A
5461361 Moore Oct 1995 A
5469298 Suman et al. Nov 1995 A
5471515 Fossum et al. Nov 1995 A
5475494 Nishida et al. Dec 1995 A
5487116 Nakano et al. Jan 1996 A
5498866 Bendicks et al. Mar 1996 A
5500766 Stonecypher Mar 1996 A
5510983 Lino Apr 1996 A
5515448 Nishitani May 1996 A
5521633 Nakajima et al. May 1996 A
5528698 Kamei et al. Jun 1996 A
5529138 Shaw et al. Jun 1996 A
5530240 Larson et al. Jun 1996 A
5530420 Tsuchiya et al. Jun 1996 A
5535144 Kise Jul 1996 A
5535314 Alves et al. Jul 1996 A
5537003 Bechtel et al. Jul 1996 A
5539397 Asanuma et al. Jul 1996 A
5541590 Nishio Jul 1996 A
5550677 Schofield et al. Aug 1996 A
5555312 Shima et al. Sep 1996 A
5555555 Sato et al. Sep 1996 A
5559695 Daily Sep 1996 A
5568027 Teder Oct 1996 A
5574443 Hsieh Nov 1996 A
5581464 Woll et al. Dec 1996 A
5594222 Caldwell Jan 1997 A
5614788 Mullins Mar 1997 A
5619370 Guinosso Apr 1997 A
5634709 Iwama Jun 1997 A
5638116 Shimoura et al. Jun 1997 A
5642299 Hardin et al. Jun 1997 A
5648835 Uzawa Jul 1997 A
5650944 Kise Jul 1997 A
5660454 Mori et al. Aug 1997 A
5661303 Feder Aug 1997 A
5666028 Bechtel et al. Sep 1997 A
5668663 Varaprasad et al. Sep 1997 A
5670935 Schofield et al. Sep 1997 A
5675489 Pomerleau Oct 1997 A
5677851 Kingdon et al. Oct 1997 A
5699044 Van Lente et al. Dec 1997 A
5724316 Brunts Mar 1998 A
5737226 Olson et al. Apr 1998 A
5757949 Kinoshita et al. May 1998 A
5760826 Nayar Jun 1998 A
5760828 Cortes Jun 1998 A
5760931 Saburi et al. Jun 1998 A
5760962 Schofield et al. Jun 1998 A
5761094 Olson et al. Jun 1998 A
5765116 Wilson-Jones et al. Jun 1998 A
5781437 Wiemer et al. Jul 1998 A
5790403 Nakayama Aug 1998 A
5790973 Blaker et al. Aug 1998 A
5793308 Rosinski et al. Aug 1998 A
5793420 Schmidt Aug 1998 A
5796094 Schofield et al. Aug 1998 A
5837994 Stam et al. Nov 1998 A
5844505 Van Ryzin Dec 1998 A
5844682 Kiyomoto et al. Dec 1998 A
5845000 Breed et al. Dec 1998 A
5848802 Breed et al. Dec 1998 A
5850176 Kinoshita et al. Dec 1998 A
5850254 Takano et al. Dec 1998 A
5867591 Onda Feb 1999 A
5877707 Kowalick Mar 1999 A
5877897 Schofield et al. Mar 1999 A
5878370 Olson Mar 1999 A
5883684 Millikan et al. Mar 1999 A
5883739 Ashihara et al. Mar 1999 A
5884212 Lion Mar 1999 A
5890021 Onoda Mar 1999 A
5896085 Mori et al. Apr 1999 A
5899956 Chan May 1999 A
5904725 Iisaka et al. May 1999 A
5914815 Bos Jun 1999 A
5920367 Kajimoto et al. Jul 1999 A
5923027 Stam et al. Jul 1999 A
5959555 Furuta Sep 1999 A
5963247 Banitt Oct 1999 A
5964822 Alland et al. Oct 1999 A
5990469 Bechtel et al. Nov 1999 A
5990649 Nagao et al. Nov 1999 A
6009336 Harris et al. Dec 1999 A
6020704 Buschur Feb 2000 A
6049171 Stam et al. Apr 2000 A
6052124 Stein et al. Apr 2000 A
6066933 Ponziana May 2000 A
6084519 Coulling et al. Jul 2000 A
6091833 Yasui et al. Jul 2000 A
6097024 Stam et al. Aug 2000 A
6100811 Hsu et al. Aug 2000 A
6139172 Bos et al. Oct 2000 A
6144022 Fenenbaum et al. Nov 2000 A
6158655 DeVries, Jr. et al. Dec 2000 A
6175300 Kendrick Jan 2001 B1
6201642 Bos Mar 2001 B1
6226061 Tagusa May 2001 B1
6259412 Duroux Jul 2001 B1
6259423 Tokito et al. Jul 2001 B1
6266082 Yonezawa et al. Jul 2001 B1
6266442 Laumeyer et al. Jul 2001 B1
6285393 Shimoura et al. Sep 2001 B1
6285778 Nakajima et al. Sep 2001 B1
6297781 Turnbull et al. Oct 2001 B1
6310611 Caldwell Oct 2001 B1
6313454 Bos et al. Nov 2001 B1
6317057 Lee Nov 2001 B1
6320282 Caldwell Nov 2001 B1
6333759 Mazzilli Dec 2001 B1
6359392 He Mar 2002 B1
6370329 Feuchert Apr 2002 B1
6396397 Bos et al. May 2002 B1
6411328 Franke et al. Jun 2002 B1
6424273 Gulla et al. Jul 2002 B1
6430303 Naoi et al. Aug 2002 B1
6433817 Guerra Aug 2002 B1
6442465 Breed et al. Aug 2002 B2
6485155 Duroux et al. Nov 2002 B1
6497503 Dassanayake et al. Dec 2002 B1
6539306 Turnbull Mar 2003 B2
6547133 Devries, Jr. et al. Apr 2003 B1
6553130 Lemelson et al. Apr 2003 B1
6570998 Ohtsuka et al. May 2003 B1
6574033 Chui et al. Jun 2003 B1
6578017 Ebersole et al. Jun 2003 B1
6587573 Stam et al. Jul 2003 B1
6589625 Kothari et al. Jul 2003 B1
6593011 Liu et al. Jul 2003 B2
6593698 Stam et al. Jul 2003 B2
6593960 Sugimoto et al. Jul 2003 B1
6594583 Ogura et al. Jul 2003 B2
6611610 Stam et al. Aug 2003 B1
6631316 Stam et al. Oct 2003 B2
6631994 Suzuki et al. Oct 2003 B2
6636258 Strumolo Oct 2003 B2
6672731 Schnell et al. Jan 2004 B2
6678056 Downs Jan 2004 B2
6690268 Schofield et al. Feb 2004 B2
6693524 Payne Feb 2004 B1
6700605 Toyoda et al. Mar 2004 B1
6704621 Stein et al. Mar 2004 B1
6711474 Freyz et al. Mar 2004 B1
6714331 Lewis et al. Mar 2004 B2
6717610 Bos et al. Apr 2004 B1
6735506 Breed et al. May 2004 B2
6744353 Sjonell Jun 2004 B2
6757109 Bos Jun 2004 B2
6762867 Lippert et al. Jul 2004 B2
6795221 Urey Sep 2004 B1
6806452 Bos et al. Oct 2004 B2
6807287 Hermans Oct 2004 B1
6823241 Shirato et al. Nov 2004 B2
6824281 Schofield et al. Nov 2004 B2
6864930 Matsushita et al. Mar 2005 B2
6889161 Winner et al. May 2005 B2
6909753 Meehan et al. Jun 2005 B2
6975775 Rykowski et al. Dec 2005 B2
7004593 Weller et al. Feb 2006 B2
7005974 McMahon et al. Feb 2006 B2
7038577 Pawlicki et al. May 2006 B2
7062300 Kim Jun 2006 B1
7065432 Moisel et al. Jun 2006 B2
7085637 Breed et al. Aug 2006 B2
7092548 Laumeyer et al. Aug 2006 B2
7113867 Stein Sep 2006 B1
7116246 Winter et al. Oct 2006 B2
7133661 Hatae et al. Nov 2006 B2
7149613 Stam et al. Dec 2006 B2
7151996 Stein Dec 2006 B2
7161616 Okamoto et al. Jan 2007 B1
7195381 Lynam et al. Mar 2007 B2
7202776 Breed Apr 2007 B2
7227611 Hull et al. Jun 2007 B2
7375803 Bamji May 2008 B1
7423821 Bechtel et al. Sep 2008 B2
7541743 Salmeen et al. Jun 2009 B2
7565006 Stam et al. Jul 2009 B2
7566851 Stein et al. Jul 2009 B2
7605856 Imoto Oct 2009 B2
7619508 Lynam et al. Nov 2009 B2
7633383 Dunsmoir et al. Dec 2009 B2
7639149 Katoh Dec 2009 B2
7676087 Dhua et al. Mar 2010 B2
7720580 Higgins-Luthman May 2010 B2
7786898 Stein et al. Aug 2010 B2
7843451 Lafon Nov 2010 B2
7855778 Yung et al. Dec 2010 B2
7881496 Camilleri et al. Feb 2011 B2
7930160 Hosagrahara et al. Apr 2011 B1
7949486 Denny et al. May 2011 B2
8017898 Lu et al. Sep 2011 B2
8064643 Stein et al. Nov 2011 B2
8082101 Stein et al. Dec 2011 B2
8164628 Stein et al. Apr 2012 B2
8224031 Saito Jul 2012 B2
8233045 Luo et al. Jul 2012 B2
8254635 Stein et al. Aug 2012 B2
8300886 Hoffmann Oct 2012 B2
8378851 Stein et al. Feb 2013 B2
8421865 Euler et al. Apr 2013 B2
8452055 Stein et al. May 2013 B2
8553088 Stein et al. Oct 2013 B2
8736680 Cilia et al. May 2014 B1
10493916 Lu Dec 2019 B2
10926702 Lu Feb 2021 B2
20010002451 Breed May 2001 A1
20020005778 Breed et al. Jan 2002 A1
20020011611 Huang et al. Jan 2002 A1
20020113873 Williams Aug 2002 A1
20030103142 Hitomi et al. Jun 2003 A1
20030122930 Schofield et al. Jul 2003 A1
20030137586 Lewellen Jul 2003 A1
20030222982 Hamdan et al. Dec 2003 A1
20040046889 Imoto Mar 2004 A1
20040164228 Fogg et al. Aug 2004 A1
20050174429 Yanai Aug 2005 A1
20050219852 Stam et al. Oct 2005 A1
20050237385 Kosaka et al. Oct 2005 A1
20060017807 Lee Jan 2006 A1
20060018511 Stam et al. Jan 2006 A1
20060018512 Stam et al. Jan 2006 A1
20060029255 Ozaki Feb 2006 A1
20060088190 Chinomi Apr 2006 A1
20060091813 Stam et al. May 2006 A1
20060103727 Tseng May 2006 A1
20060125919 Camilleri et al. Jun 2006 A1
20060192660 Watanabe Aug 2006 A1
20060250501 Wildmann et al. Nov 2006 A1
20070024724 Stein et al. Feb 2007 A1
20070104476 Yasutomi et al. May 2007 A1
20070242339 Bradley Oct 2007 A1
20080043099 Stein et al. Feb 2008 A1
20080117287 Park et al. May 2008 A1
20080147321 Howard et al. Jun 2008 A1
20080192132 Bechtel et al. Aug 2008 A1
20080231710 Asari et al. Sep 2008 A1
20080246843 Nagata Oct 2008 A1
20080266396 Stein Oct 2008 A1
20090079553 Yanagi Mar 2009 A1
20090079585 Chinomi et al. Mar 2009 A1
20090113509 Tseng et al. Apr 2009 A1
20100045797 Schofield et al. Feb 2010 A1
20100194889 Arndt et al. Aug 2010 A1
20100295945 Plemons et al. Nov 2010 A1
20110122249 Camilleri et al. May 2011 A1
20110216201 McAndrew et al. Sep 2011 A1
20120045112 Lundblad et al. Feb 2012 A1
20120069185 Stein Mar 2012 A1
20120098968 Schofield et al. Apr 2012 A1
20120154589 Watanabe Jun 2012 A1
20120200707 Stein et al. Aug 2012 A1
20120265416 Lu et al. Oct 2012 A1
20120314071 Rosenbaum et al. Dec 2012 A1
20120320209 Vico et al. Dec 2012 A1
20130027558 Ramanath et al. Jan 2013 A1
20130141580 Stein et al. Jun 2013 A1
20130147957 Stein Jun 2013 A1
20130169812 Lu et al. Jul 2013 A1
20130222593 Byrne et al. Aug 2013 A1
20130286193 Pflug Oct 2013 A1
20140043473 Gupta et al. Feb 2014 A1
20140063254 Shi et al. Mar 2014 A1
20140098229 Lu et al. Apr 2014 A1
20140247352 Rathi et al. Sep 2014 A1
20140247354 Knudsen Sep 2014 A1
20140320658 Pliefke Oct 2014 A1
20140333729 Pflug Nov 2014 A1
20140347486 Okouneva Nov 2014 A1
20140350834 Turk Nov 2014 A1
Foreign Referenced Citations (3)
Number Date Country
102010038825 Feb 2011 DE
2011014497 Feb 2011 WO
2011030698 Mar 2011 WO
Non-Patent Literature Citations (10)
Entry
Achler et al., “Vehicle Wheel Detector using 2D Filter Banks,” IEEE Intelligent Vehicles Symposium of Jun. 2004.
Wolberg, Digital Image Warping, IEEE Computer Society Press, 1990.
Wolberg, “A Two-Pass Mesh Warping Implementation of Morphing,” Dr. Dobb's Journal, No. 202, Jul. 1993.
Pratt, "Digital Image Processing", 3rd Edition, John Wiley & Sons, US, Jan. 1, 2001, pp. 657-659, XP002529771.
Greene et al., Creating Raster Omnimax Images from Multiple Perspective Views Using the Elliptical Weighted Average Filter, IEEE Computer Graphics and Applications, vol. 6, No. 6, pp. 21-27, Jun. 1986.
Burt et al., A Multiresolution Spline with Application to Image Mosaics, ACM Transactions on Graphics, vol. 2 No. 4, pp. 217-236, Oct. 1983.
Brown, A Survey of Image Registration Techniques, vol. 24, ACM Computing Surveys, pp. 325-376, 1992.
Broggi et al., “Multi-Resolution Vehicle Detection using Artificial Vision,” IEEE Intelligent Vehicles Symposium of Jun. 2004.
Bow, Sing T., “Pattern Recognition and Image Preprocessing (Signal Processing and Communications)”, CRC Press, Jan. 15, 2002, pp. 557-559.
International Search Report and Written Opinion dated Apr. 29, 2013 for corresponding PCT application No. PCT/US2013/027342.
Related Publications (1)
Number Date Country
20210178970 A1 Jun 2021 US
Provisional Applications (1)
Number Date Country
61601669 Feb 2012 US
Continuations (2)
Number Date Country
Parent 16699900 Dec 2019 US
Child 17249121 US
Parent 14377940 US
Child 16699900 US