The present invention relates to imaging systems or vision systems for vehicles.
Use of imaging sensors in vehicle imaging systems is common and known.
Examples of such known systems are described in U.S. Pat. Nos. 7,161,616; 5,949,331; 5,670,935 and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.
The present invention provides a vision system or imaging system for a vehicle that utilizes one or more cameras to capture images exterior of the vehicle, and provides the communication/data signals, including camera data or image data, that may be displayed at a display screen that is viewable by the driver of the vehicle, such as when the driver is backing up the vehicle, and that may be processed and, responsive to such image processing, the system may detect an object at or near the vehicle and in the path of travel of the vehicle, such as when the vehicle is backing up. The system is operable to display a surround view or bird's-eye view of the environment at or around or at least partially surrounding the subject or equipped vehicle, and the displayed image includes a displayed image representation of the subject vehicle that at least partially corresponds to the actual subject vehicle, so that the driver can cognitively associate the displayed image as being representative of the driven vehicle and readily understand that the displayed vehicle represents the vehicle that he/she is driving.
According to an aspect of the present invention, a vision system for a vehicle includes multiple cameras or image sensors disposed at a vehicle and having respective fields of view exterior of the vehicle, and a processor operable to process image data captured by the cameras. The processor processes captured image data to generate images, such as three dimensional images, of the environment surrounding the equipped vehicle, and the processor is operable to generate a three dimensional vehicle representation of the equipped vehicle. A display screen is operable to display the generated images of the environment surrounding the equipped vehicle and to display the three dimensional vehicle representation of the equipped vehicle as the representation would be viewed from a virtual camera viewpoint exterior to and higher than the equipped vehicle itself. At least one of (a) a degree of transparency of at least a portion of the displayed vehicle representation is adjustable, (b) the vehicle representation comprises a vector model and (c) the vehicle representation comprises at least one of (i) a shape corresponding to that of the actual equipped vehicle, (ii) a body type corresponding to that of the actual equipped vehicle, (iii) a body style corresponding to that of the actual equipped vehicle and (iv) a color corresponding to that of the actual equipped vehicle.
The vision system thus may select or adjust the displayed vehicle image or representation that is representative of the equipped vehicle to provide an enhanced display of the surrounding environment (such as by adjusting a degree of transparency or opaqueness of the displayed vehicle representation of the subject vehicle) and/or enhanced cognitive recognition by the driver of the equipped vehicle that the displayed vehicle representation represents the equipped vehicle that is being driven by the driver (such as by matching or coordinating the vehicle type or style or color or the like of the displayed vehicle representation with the actual vehicle type or style or color or the like of the actual particular subject vehicle). For example, a portion of the generated vehicle representation (such as an avatar of the vehicle or virtual vehicle or the like) may be rendered at least partially transparent or non-solid to allow “viewing through” the displayed vehicle representation to view an object at or near the vehicle that may be “blocked” by the vehicle representation, which may be between the displayed object and the virtual camera.
The present invention may also or otherwise provide a calibration system for the vision system or imaging system, which utilizes multiple cameras to capture images exterior of the vehicle, such as rearwardly and sidewardly and forwardly of the vehicle, such as for a surround view or bird's-eye view system of a vehicle. The cameras provide communication/data signals, including camera data or image data that is displayed for viewing by the driver of the vehicle and that is processed to merge the captured images from the cameras to provide or display a continuous surround view image for viewing by the driver of the vehicle. The cameras and/or image processing is calibrated to provide the continuous image or merged images. When the cameras and/or image processing is calibrated, the captured images can be stitched or merged together to provide a substantially seamless top-down view or bird's-eye view at the vehicle via capturing images and processing images captured by the vehicle cameras.
According to an aspect of the present invention, a vision system for a vehicle includes multiple cameras or image sensors disposed at a vehicle and having respective fields of view exterior of the vehicle, and a processor operable to process data transmitted by the cameras. The processor comprises a camera calibration algorithm that is operable to compare captured image data of a portion of the equipped vehicle with stored or received data (such as uploaded data from a database or data received from a remote source or the like) that is representative of where the portion of the equipped vehicle should be in the captured image data for a calibrated vision system. The vision system adjusts the image processing and/or the camera so that the captured image data of the portion of the equipped vehicle is within a threshold level of where the portion of the equipped vehicle is in the stored or received data (such that the camera field of view and/or the captured image is within a threshold height, width, depth, tilt and/or rotation of the stored or received calibrated camera image).
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle vision system and/or driver assist system and/or object detection system and/or alert system operates to capture images exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The vision system includes a processor that is operable to receive image data from one or more cameras and may provide or generate a vehicle representation or image that is representative of the subject vehicle (such as for a top down or bird's-eye or surround view, such as discussed below), with the vehicle representation being customized to at least partially correspond to the actual subject vehicle, such that the displayed vehicle image or representation is at least one of the same or similar type of vehicle as the subject vehicle and the same or similar color as the subject vehicle.
Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes an imaging system or vision system 12 that includes multiple exterior facing imaging sensors or cameras (such as a rearward facing imaging sensor or camera 14a, and a forwardly facing camera 14b at the front (or at the windshield) of the vehicle, and a sidewardly/rearwardly facing camera 14c, 14d at respective sides of the vehicle), which capture images exterior of the vehicle, with each of the cameras having a lens for focusing images at or onto an imaging array or imaging plane of the camera (
The vision system 12 is operable to process image data captured by the cameras and may merge or stitch the images together to provide a top view or surround view or bird's-eye view image display at a display device 16 for viewing by the driver of the vehicle (such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or PCT Application No. PCT/US2011/062834, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012/075250, and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012, and published May 23, 2013 as International Publication No. WO 2013/074604, and/or PCT Application No. PCT/US2012/048993, filed Jul. 31, 2012, and published Feb. 7, 2013 as International Publication No. WO 2013/019795, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012, and published Nov. 1, 2012 as International Publication No. WO 2012/145822, and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011, now U.S. Pat. No. 9,264,672, and/or U.S. provisional application Ser. No. 61/588,833, filed Jan. 20, 2012, which are hereby incorporated herein by reference in their entireties). Optionally, the control or processor of the vision system may process captured image data to detect objects, such as objects to the rear of the subject or equipped vehicle during a reversing maneuver, or such as approaching or following vehicles or vehicles at a side lane adjacent to the subject or equipped vehicle or the like.
Vehicle vision systems are typically made for displaying the vehicle's environment, highlighting hazards, displaying helpful information and/or enhancing the visibility. The vision systems display the images (such as video images) to the driver of the vehicle, with the images captured by cameras and sensors, which may comprise optical image sensors, infrared sensors, long and short range RADAR, LIDAR, Laser, ultrasound sensors, and/or the like. The display may be integrated into an interior rearview mirror assembly disposed inside the vehicle, or an exterior rearview mirror assembly at an exterior side portion of the vehicle, or a heads up display or other projection system that projects the visual information to any surface or surfaces for viewing by the driver, or a display unit within the instrument cluster, or a flip up display on the dashboard, or a central display or an aftermarket display which receives display data from the vehicle and/or other aftermarket sensors.
Overlays to the natural or captured image, icons or soft buttons are often useful in such vision systems. The natural or captured images may be enhanced by any algorithm or system. The images may be influenced by a situation-dependent state (such as similar to the systems described in U.S. patent application Ser. No. 13/660,306, filed Oct. 25, 2012, now U.S. Pat. No. 9,146,898, which is hereby incorporated herein by reference in its entirety), or the images may be enhanced by or merged with information coming from non-vision sensors, such as an infrared sensor or sensors, a radar sensor or sensors, a laser sensor or sensors, a lidar sensor or sensors, or the like, or the images may be influenced by, enhanced by or merged with data from any other data source, such as mobile devices, such as mobile phones, infotainment devices, GPS, a car to car communication device or car to infrastructure communication device or the like.
It is known to merge or stitch images captured by and coming from more than one camera or image capturing device, optionally merging or enhancing the images responsive to non-vision sensors such as discussed above. The resulting stitched images may then be displayed in a manner that the stitched or merged images form a kind of bowl or dome in the displayed image or projection (such as by utilizing aspects of the system described in Japanese publication JP2011151446A, which is hereby incorporated herein by reference in its entirety). Typically, there is an overlay of the subject vehicle placed on top of the artificially generated top view. The displayed image view is as if the person viewing the displayed image is looking at the vision system's own vehicle (the subject or equipped vehicle) and seeing the environment in which the vehicle is located (such as from a virtual view point or virtual camera). The virtual view is from the outside of the vehicle and looking downward towards and onto the vehicle, and the vehicle is typically shown as disposed at the bottom of the bowl or dome. This means that a representation of the equipped vehicle is fully or partially visible in the displayed image. The view may be freely positionable via a virtual camera (viewpoint, angle, focus, shutter, blend). The virtual camera might have a top view point, and/or may be looking at the vehicle's rooftop from an angle (and may be provided as a predetermined virtual viewpoint or may be selected or adjusted by the operator, such as via a user input in the vehicle that allows the operator to adjust the viewing angle and/or distance from the virtual camera to the equipped vehicle).
The vehicle's overlay is typically arbitrarily chosen and is not based on sensor data, but is chosen by the given parameter set up. There may be known art to calibrate the cameras' color space by the common color captured on some of the subject vehicle's surfaces, but the vehicle in its entirety is not covered by any sensor, so the vehicle's real appearance is typically not detectable. Optionally, and such as described in U.S. provisional application Ser. No. 61/602,876, filed Feb. 24, 2012, which is hereby incorporated herein by reference in its entirety, a vision system may provide a manner of detecting the subject vehicle's total height plus baggage by using reflected images of the vehicle as reflected in a window or the like adjacent to or near the vehicle.
The displayed vehicle image (as generated by the system, and not as captured by the cameras) may be selected to be close to the shape of the equipped vehicle. Typically, the displayed vehicle image is an arbitrary or generic vehicle shape and is shown as a solid vehicle shape and drawn sharp without noise or softened borderlines.
Top view vision systems typically provide aligned images without black bars on the abutting sides. These might be stitched by any common algorithm. Typically, such systems may have a comparably small overlapping zone in which both images have a degree of transparency to smooth the stitching borderlines.
Thus, images of objects and the environment around the vehicle that are captured by the cameras and/or further sensing systems, and that might otherwise be evident to the driver, may be hidden or partially hidden by the visualized or displayed (own or equipped) vehicle image shown in the displayed images. Known vision systems are operable to display a vehicle representation that typically does not match or correspond with the real appearance of the real or actual subject or equipped vehicle, and customers may dislike the potentially substantial differences between the actual equipped vehicle and the displayed representation of the vehicle.
In nature, the surfaces of objects reflect the surrounding scene, depending on the object surfaces' reflection ratio, the scattering properties of the surfaces and the illumination angle. The surrounding scene's illumination brightness and color also become influenced by the objects' reflections. Known vehicle top view vision systems do not reflect most of these matters. For improving the visual reproduction or representation of the subject vehicle within its surrounding environment, the present invention may utilize the methods of Gouraud shading, (Blinn-)Phong shading (use of bidirectional reflectance distribution functions), fast Phong shading (using texture coordinates instead of angle calculation), bump mapping or fast bump mapping (a fast Phong shading based brightness control) for generating reflections or scattering on the vehicle's surfaces, especially the windshield, chrome parts and painted metal and plastic parts. The closest to a realistic reproduction of reflections may be the (spherical) “Environment mapping” method. A sphere-shaped image around the reflecting object is assumed. Rays from the viewpoint, reflected angle-correct (angle in=angle out) at the object's surface, are traced to a corresponding originating point on the sphere (part of the ray tracing formulas). The reflections may be calculated out of the images taken by the vehicle's cameras. There may be a simplified model to calculate the ray tracing since the object's (vehicle's) surface is composed out of several polygons. This may require calculating the ray tracing at the polygon edges and interpolating texture coordinates within the polygon. To ease the ray tracing, the non-dewarped original images of the fish eye cameras may be used directly as spherical environment images, assuming these are mostly spherical. If the sky cannot be captured sufficiently by any camera, it may be estimated according to the real reflections on the real vehicle's surface as seen by any camera. The sun's position may be calculated out of navigation system data or assumed according to the real reflections on the real vehicle's surface as seen by any camera. The reflections may be generated correctly according to the surface's (such as, for example, the hood of the vehicle) curvature or in a simplified manner, so that the reflection does not represent the real world's reflections to the full extent.
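By way of a non-limiting illustrative sketch (in Python with numpy, and not a definitive implementation of the present invention), spherical environment mapping of the type described above may be approximated by mirroring the viewing ray about the surface normal and sampling a spherical environment image composed from the vehicle cameras; the equirectangular storage format, the image size, the vectors and the coordinate conventions below are illustrative assumptions:

    import numpy as np

    def reflect(view_dir, normal):
        """Mirror the viewing ray about the surface normal (angle in = angle out)."""
        v = view_dir / np.linalg.norm(view_dir)
        n = normal / np.linalg.norm(normal)
        return v - 2.0 * np.dot(v, n) * n

    def sample_environment(env_img, ray):
        """Look up the environment color for a reflected ray, assuming the
        surrounding scene is stored as an equirectangular (spherical) image
        composed from the vehicle cameras."""
        x, y, z = ray / np.linalg.norm(ray)
        lon = np.arctan2(x, z)               # -pi .. pi around the sphere
        lat = np.arcsin(np.clip(y, -1, 1))   # -pi/2 .. pi/2 up/down
        h, w, _ = env_img.shape
        u = int((lon / (2 * np.pi) + 0.5) * (w - 1))
        v = int((0.5 - lat / np.pi) * (h - 1))
        return env_img[v, u]

    # Example: reflection at a point on the hood seen from the virtual camera.
    env = np.zeros((512, 1024, 3), dtype=np.uint8)   # placeholder spherical image
    view = np.array([0.0, -0.5, 1.0])                # from virtual camera toward the surface point
    hood_normal = np.array([0.0, 1.0, 0.1])          # mostly upward-facing surface
    color = sample_environment(env, reflect(view, hood_normal))

Consistent with the simplified ray tracing discussed above, such a lookup would in practice be evaluated only at the polygon edges or vertices, with texture coordinates interpolated across each polygon.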
The present invention may also map the movement of the correct-appearing or fake tires, the correct or a fake license plate, and switched-on or switched-off lights and blinkers. The lighting or the blinking may be rendered by positioning one or more glowing, partially transparent polygons in front of the non-engaged blinker or lights, or by animating the texture of the vehicle. The animation may be a part of at least one camera's view, and may be distorted to match correctly. The top view animation may replace the blinker indicators in the central cluster, and it may also indicate which headlight or blinker bulb is broken when there is a defect, such as by an additional icon overlay that is deployed.
The present invention may also or otherwise use three dimensional polygon models for rendering the subject vehicle projected into the scene within the top view (typically into the center of the view) in combination with bump mapping. The quality of the produced image of the vehicle improves with the number of used polygons. Higher polygon numbers (for example, at least about 10,000, more preferably at least about 20,000 and more preferably at least about 30,000 or the like) require increasingly more calculating capacity. Hence, the necessary number of used polygons shall be as minimal as acceptable. With decreasing numbers of polygons, increasingly less detail can be realized. To make the own vehicle's image look right or correct while having a low or restricted number of polygons (for example, less than about 10,000, more preferably less than about 8,000 or the like), the present invention may use “bump mapping” methods when applying the own vehicle's surface maps onto the vehicle's polygon surfaces (such as by utilizing aspects of the systems described in U.S. provisional applications, Ser. No. 61/602,878, filed Feb. 24, 2012, and Ser. No. 61/678,375, filed Aug. 1, 2012, which are hereby incorporated herein by reference in their entireties, and which suggest using ‘Parallax Occlusion mapping’ for automotive vision systems in general and especially on the surface of imposters). By that, missing contour details can be faked into the image in a manner pleasing to the viewer (usable for human vision, not for machine vision). A peculiarity of bump mapping is “Normal mapping”, also called “Dot3 bump mapping” (enhancement of the appearance of details by faking the lighting (scattering and shading) of bumps and dents generated by a normal map of a high polygon model). “Parallax mapping” or “virtual Displacement mapping” is a peculiarity of Normal mapping. Hereby, the displacement of a point of a rendered polygon may be mapped according to the viewing angle. This gives (rough) surfaces the illusion of an apparent depth. Basic Parallax mapping does not account for occlusion. “Occlusion mapping” is a peculiarity of Parallax mapping. By using a displacement map, self-occlusion and fake reflections relative to the perspective may be mapped onto a polygon's mapping surface. For the viewer it is nearly indistinguishable whether he or she sees a flat surface with Occlusion mapping or a real complex three dimensional (3D) surface on a two dimensional (2D) display screen.
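The following is a minimal sketch (in Python with numpy) of the Dot3/normal mapping idea referenced above, in which per-pixel normals from a normal map fake the lighting of bumps and dents on an otherwise flat, low-polygon surface; the texture sizes, the light direction and the normal-map encoding (unit normals packed into 0..255 channels) are assumptions for illustration only:

    import numpy as np

    def dot3_bump_shade(base_color, normal_map, light_dir):
        """Fake the lighting of bumps and dents on a flat polygon by shading with
        per-pixel normals taken from a normal map (Dot3 / normal mapping)."""
        l = light_dir / np.linalg.norm(light_dir)
        # Decode unit normals stored as 0..255 per channel.
        n = normal_map.astype(np.float32) / 127.5 - 1.0
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        # Per-pixel Lambertian term; the underlying polygon itself stays flat.
        lambert = np.clip(np.einsum('ijk,k->ij', n, l), 0.0, 1.0)
        return (base_color.astype(np.float32) * lambert[..., None]).astype(np.uint8)

    # A flat body panel texture plus a normal map can fake, e.g., a character line or vent.
    panel = np.full((64, 64, 3), 180, dtype=np.uint8)                 # silver-ish paint
    normals = np.full((64, 64, 3), (128, 128, 255), dtype=np.uint8)   # flat surface by default
    shaded = dot3_bump_shade(panel, normals, light_dir=np.array([0.3, 0.5, 1.0]))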
Since the display screen may not have the contrast level and the light intensity of possible light sources in nature, bright light reproduced on display screens typically looks quite flat. To enhance the illusion of looking at bright light in vehicle vision systems, the system of the present invention may use “Bloom shading”. By faking aperture diffraction, bright light appears to extend beyond its natural area. Real cameras do this by nature, depending on their optics' and imagers' properties, due to aperture diffraction and pixel crosstalk (see
An additional artificial effect which is suggested herein for use in the vehicle vision system is “artificial motion blurring”. This effect can be observed in video captured by real cameras when fast objects pass through a scene (see
The top view image may come with a certain noise level depending on the hardware's quality and/or the algorithm's quality. The displayed vehicle on top is a rendered overlay with a low noise level. Due to the different noise levels, the overlay appears quite artificial. This is typically done to ease the viewing. The stitching of abutting images within a top view display is often not optimally resolved. If done well, the user cannot readily discern where the stitching/overlapping area begins and ends. The transition from one image to the other becomes mostly invisible or not readily discernible.
The present invention provides a solution to make objects and areas around the vehicle captured by the cameras and/or further sensing systems visible within the display, although the displayed vehicle image may be in the line of sight from the virtual view point to the objects/areas.
The present invention may provide such a solution by displaying the displayed subject vehicle image as a fully or partially transparent vehicle image or vehicle representation. The transparency may be dependent on the distance and/or the angle of aberration from the center viewing direction of the virtual camera or viewpoint (such as can be seen with reference to
T(d, α) = T * ½ * ((1 − 1/d * k1) + (1 − α * k2));
where k1 is a scaling factor (parameter) for the distance in [1/mm], and k2 is a scaling factor (parameter) for the aberration angle.
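A minimal sketch (in Python) of evaluating the transparency equation above follows; the operator grouping (reading 1/d*k1 as (1/d)·k1), the clamping of the result to the displayable range of 0 to 1, and the example parameter values are assumptions for illustration and are not mandated by the equation itself:

    def vehicle_overlay_transparency(T, d, alpha, k1, k2):
        """Transparency of the displayed vehicle representation as a function of the
        distance d (in mm) and the aberration angle alpha from the virtual camera's
        center viewing direction, per the equation above.

        Assumptions: 1/d*k1 is read as (1/d)*k1, and the result is clamped to 0..1.
        """
        t = T * 0.5 * ((1.0 - (1.0 / d) * k1) + (1.0 - alpha * k2))
        return max(0.0, min(1.0, t))

    # Hypothetical values purely for illustration: base transparency T = 0.8,
    # a point 2000 mm away and 0.3 rad off the center viewing direction.
    print(vehicle_overlay_transparency(T=0.8, d=2000.0, alpha=0.3, k1=100.0, k2=1.0))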
Optionally, the transparency may be dependent on whether the virtual camera is within the (own) vehicle's body or outside of the subject or equipped vehicle. Optionally, the transparency may depend on a combination of the equation above and the location of the virtual camera. Optionally, the transparency may be dependent on or at least influenced by the level of backlight and/or a combination of one or more of the other factors above. Optionally, the transparency may be dependent on a location of a detected object at or near the vehicle, whereby the displayed images may not show the detected object when a portion of the three dimensional vehicle representation blocks or obscures the object, such as when the vehicle representation would be at least partially between the object and the virtual camera, as discussed below.
Optionally, the vision system may display the subject vehicle image as a vector model shape without solid surfaces. The vector model shape may be shown or displayed dependent on whether the virtual camera is within the (own) vehicle's body or outside of the subject or equipped vehicle. The options for the criteria that determine the degree of transparency of the displayed image may be selectable by the driver via the vehicle's OBD (on-board diagnostics) interface, or via a remote device or vehicle manufacturer service tool or vehicle repair or service shop (manufacturer or aftermarket) service tool.
The vision system of the present invention is also or otherwise operable to match the appearance of the vision system's shown or displayed (own) vehicle to that of the appearance of the real or actual subject or equipped vehicle as close as possible, or to customize the displayed vehicle's appearance.
The display or customization may be adjusted individually by the vision system actively making the adjustment/customization. For example, the vision system's processor or cameras or other sensors may identify the color of the equipped vehicle (such as by processing of captured images of portions of the equipped vehicle as captured by its cameras or other sensors), and further on may match or substantially match or coordinate the visualized or displayed vehicle's color accordingly. Optionally, the vision system's cameras or other sensors and algorithm may identify the “real” vehicle's surface appearance by the images taken of the vehicle directly, and further on may “map” the identified surface to the image of the vehicle displayed or shown on the display screen by the vision system. Optionally, the vision system's cameras or other sensors and algorithm may analyze, stitch, dewarp, and the like the captured images to identify the “real” vehicle's surface appearance by the vehicle's appearance seen in reflecting surfaces distant to the vehicle, and further may “map” the identified surface to the image of the vehicle displayed or shown on the display screen by the vision system.
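As one hedged, illustrative sketch (in Python with numpy) of the color-matching step described above, the paint color may be estimated from those pixels of a captured frame known (from the camera mounting geometry) to image the vehicle's own body, and the displayed vehicle representation's texture may then be tinted toward that color; the frame size, the mask region and the median-based estimate are assumptions, not requirements of the system:

    import numpy as np

    def estimate_body_color(frame, body_mask):
        """Estimate the equipped vehicle's paint color from the pixels of a camera
        frame that image the vehicle's own body (e.g. the bumper region seen at the
        edge of a fisheye image). body_mask is a boolean array marking those pixels."""
        body_pixels = frame[body_mask]
        return np.median(body_pixels, axis=0)   # median is robust against glare/reflections

    def tint_vehicle_representation(base_texture, target_color):
        """Shift the displayed vehicle representation's texture toward the estimated
        real paint color while keeping its shading/luminance."""
        tex = base_texture.astype(np.float32)
        luminance = tex.mean(axis=2, keepdims=True) / 255.0
        return np.clip(luminance * target_color[None, None, :], 0, 255).astype(np.uint8)

    # Placeholder usage; the mask of "own body" pixels would come from the known
    # camera mounting geometry of the particular vehicle.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    mask = np.zeros((480, 640), dtype=bool)
    mask[440:, :] = True                         # assumed: bottom rows see the bumper
    paint = estimate_body_color(frame, mask)
    texture = np.full((256, 256, 3), 200, dtype=np.uint8)
    tinted = tint_vehicle_representation(texture, paint)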
Optionally, the display or customization may be adjusted individually and may be a passive adjustment/customization in accordance with given parameters. For example, the parameters may be first time transmitted or updated at the line or end of line of the vision system manufacturer or of the vehicle manufacturer. Optionally, the parameters may be first time transmitted or updated by a supporting system embodied to the vehicle, such as via any kind of data channel, such as via a one-time initial system power up or at every power up of the system. Optionally, the parameters may be first time transmitted or updated by a supporting system not permanently installed to the vehicle, such as a remote/mobile device or system or device or tool at a vehicle shop (such as at a manufacturer facility or aftermarket device or facility) or such as a driver input via the vehicle's OBD.
Optionally, the display or customization of the vehicle representation or vehicle image may be adjusted individually or freely by mapping one or more images of the vehicle (such as the subject or equipped vehicle) that is to be shown in the display of the vision system. Such images or vehicle representations may be uploaded by the system manufacturer or the vehicle manufacturer, or by the customer or active software (that may be developed for this operation/function). For example, the mapped images may be digital photographs of the actual equipped vehicle, or may be previously captured images or generated images selected to correspond to the actual vehicle car line and/or body style and/or trim options and/or color. The mapped images may comprise one or more images captured or generated to represent the equipped vehicle from one or more viewpoints that may correspond with the virtual camera viewpoint in the displayed images. For example, at the vehicle assembly plant, when the vision system is installed in the vehicle (or earlier at the vision system manufacturing facility if the vision system is made for a particular car line or the like), the system may be set to correspond to the actual vehicle in which it is installed. Thus, an operator at the vehicle assembly plant may enter “Buick Enclave Silver” or other manufacturer name/code, vehicle line/code and color/code and/or the like (or the vision system may be automatically set according to the manifest or bar code for the vehicle in which the system is being installed), and images or representations of a silver Buick Enclave may be downloaded into the vision system or control for use as the displayed vehicle images when the vision system operates to display images for viewing by the driver of the equipped silver Buick Enclave.
Optionally, the mapping of the interior of the vehicle may be set by parameters or may be uploaded by the vision system manufacturer, the OEM, the vehicle buyer (owner) or contracted as a third party service. For this mapping, advanced (bump) mapping methods for faking structure of the types mentioned above (Normal mapping, Occlusion mapping or Parallax mapping) may come into use as well. The vehicle interior's structure may be rendered by a low number of polygons with a bump map mapped onto it, instead of having a high number of polygons. A low level variant may be to just map the interior onto the window surfaces. This may substantially reduce the number of required polygons for the inner structure. It is known to map the driver's face into the interior scene. It is unknown in automotive vision systems to integrate the driver's face into a bump map. The driver's face bump map may be produced offline out of two or more images from different viewing angles. The driver's mapping may be placed on top of a neutral body made with a low number of polygons.
Optionally, the interior's scene may not be static but may be calculated in real time to render the interior's polygons and its mapping. The input of the interior's scene, including the instrumentation, may be composed (stitched) out of the images from one or more interior cameras (and possibly, in part, outdoor cameras looking inside), which may be in front of the passengers. The area which is not captured by any camera may be a static non-real-time image which may be partially blended (merged) into the real time image to fill in missing parts.
Optionally, the displayed images may comprise a combination of the adjustment and/or customization of the displayed equipped vehicle image or representation and the partial or full transparency of the displayed equipped vehicle image or representation to further enhance the display of the equipped vehicle and environment surrounding the equipped vehicle for viewing by the driver of the equipped vehicle, such as during a reversing maneuver or parking maneuver or the like.
Overlays which are placed over vision system images, especially the own or subject vehicle's top view overlay or representation, can appear artificial due to mismatching noise levels, and it is heretofore unknown in automotive vision to artificially raise the noise level of the overlays to cope with this. Overlays which are there for highlighting important facts to the driver, such as control buttons or inputs or the distance to a solid obstacle in the projected rearward path of travel of the vehicle when the vehicle is backing up, should appear as clearly as possible so as to be highlighted, and such an overlay should not become acknowledged or discerned as a part of the scenery. Thus, such overlays may be superimposed at a low noise level.
However, for different overlays which are for providing an understanding of the general scenery the driver is looking at, there is not a concern that they be seen as clearly. Thus, it may help the driver's viewing and understanding of the displayed images to superimpose the top view of the vehicle when providing four stitched camera images as a top view, and, to ease the viewing, the vehicle may appear as embedded in the scenery. Thus, the noise level should be adapted to the noise level of the images. Naturally, the noise level of the camera images is higher than that of the overlay or overlays, so to adapt or match the noise levels, the noise level of the overlay or overlays has to be increased (or otherwise adapted or adjusted). In some systems, the images of the four (or more) cameras may come with different noise levels for some or each of the cameras, and thus the overlay may be adapted to an average noise level or to a noise level at about the lowest or the highest noise level value of the captured images. The noise level may be set by parameters, since the cameras' and system's characteristics do not change much over their lifetime, or the noise level may be determined by a suited algorithm during run time of the system or during an initial phase of the system.
When the noise level is selected, the noise to the overlay may be added by (i) use of a scrim diffuser, soft focus or blur effect filter/generator, (ii) use of a salt and pepper noise filter/generator, (iii) use of image analogies filters which may transfer image analogies from one or more camera images and project the analogy to the overlay or overlays, (iv) generation of overlays with semi-transparent borderlines, having increasing transparency with diminishing distance to the borderline over the full overlay or partially, (v) use of one or more (offline or initially calculated) stored overlay or overlays which have already diffuse borderlines (and optionally with a look up table for different levels of noise to become achieved), or (vi) by reducing the acutance, if present.
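A minimal sketch (in Python with numpy) of adapting an overlay's noise level to that of the camera images follows, combining a rough noise estimate with an added salt-and-pepper component (item (ii) of the list above) together with Gaussian noise as one possible, assumed way to raise the overlay noise toward the measured level; the estimator, the RGBA overlay format and the parameter values are illustrative assumptions:

    import numpy as np

    def estimate_noise_sigma(gray):
        """Rough noise estimate: standard deviation of horizontal pixel differences
        (a simple stand-in for a proper noise estimator run during an initial phase)."""
        diff = np.diff(gray.astype(np.float32), axis=1)
        return float(diff.std() / np.sqrt(2.0))

    def add_matched_noise(overlay_rgba, sigma, salt_pepper_ratio=0.001, rng=None):
        """Raise the overlay's noise level toward that of the camera images by adding
        Gaussian noise plus a light salt-and-pepper speckle component."""
        if rng is None:
            rng = np.random.default_rng()
        rgb = overlay_rgba[..., :3].astype(np.float32)
        rgb += rng.normal(0.0, sigma, rgb.shape)
        speckle = rng.random(rgb.shape[:2]) < salt_pepper_ratio
        rgb[speckle] = rng.choice([0.0, 255.0], size=(speckle.sum(), 1))
        out = overlay_rgba.copy()
        out[..., :3] = np.clip(rgb, 0, 255).astype(np.uint8)
        return out

In a multi-camera system, sigma could be taken as the average of the per-camera estimates, or at about the lowest or highest per-camera value, consistent with the options described above.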
To optimize the appearance of the stitching/overlapping area of abutting images within a top view display, the method of “Texture Splatting” or “Texture Blending” may be used. Hereby, a so-called alpha map comes into use for controlling the transparency ratio of the overlapping areas. Such alpha mapping is known and, because the underlying principle is for each texture to have its own alpha channel, large amounts of memory may be consumed by such a process. As a solution to this potential concern, multiple alpha maps may be combined into one texture, such as by using a red channel for one map, a blue channel for another map, and so on. This effectively uses a single texture to supply alpha maps for four (or more or fewer) real-color textures. The alpha textures can also use a lower resolution than the color textures, and often the color textures can be tiled. This might also find use for top view image blending.
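By way of a non-limiting sketch (in Python with numpy), blending the overlapping regions of two top-view-warped camera images with an alpha map may be expressed as follows, here with a simple linear sliding gradient standing in for the V-shaped or structure-dependent alpha maps discussed below; the sizes and image contents are placeholders:

    import numpy as np

    def linear_alpha_map(height, width, horizontal=True):
        """Alpha map sliding from 1.0 (use camera A) to 0.0 (use camera B) across the
        overlapping zone; a V-shaped or structure-dependent map would simply replace
        this array."""
        ramp = np.linspace(1.0, 0.0, width if horizontal else height)
        return np.tile(ramp, (height, 1)) if horizontal else np.tile(ramp[:, None], (1, width))

    def blend_overlap(img_a, img_b, alpha):
        """Texture-splat style blend of the two cameras' overlapping image regions."""
        a = alpha[..., None]
        out = img_a.astype(np.float32) * a + img_b.astype(np.float32) * (1.0 - a)
        return out.astype(np.uint8)

    # Overlap region of, e.g., the front and left cameras after top-view warping.
    overlap_front = np.zeros((200, 120, 3), dtype=np.uint8)
    overlap_left = np.zeros((200, 120, 3), dtype=np.uint8)
    blended = blend_overlap(overlap_front, overlap_left, linear_alpha_map(200, 120))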
It may be suggested that the alpha map (generally) possesses a sliding gradient from 100 percent to zero (0) percent for one camera's captured image and zero percent to 100 percent for the other camera's captured image (see
It may be suggested that the blending zone has a V-shape with the open side pointing into the top view's corner. The borderlines may be formed by the blades of the V (see
With reference to
It may also or otherwise be suggested that the alpha map is not static, but is influenced by the image's structure. The blending area may be formed such as discussed above and there may still be a static sliding gradient such as discussed above from one borderline to the other, but this gradient may be combined with a dynamic part. The dynamic component may be oriented on strong illuminance and/or color thresholds (this may be done in a manner similar to that described in Zitnick, Lawrence, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 4, April 2012, which is hereby incorporated by reference herein in its entirety, but not for finding the correct image overlap alignment, but rather for finding the input of an alpha map filter). At these thresholds, the alpha map gradient may be increased or decreased relative to the gradient over the areas without thresholds.
The system of the present invention may also use a “flood fill” algorithm to deploy (find) smooth alpha maps through areas which are bordered by structures within the image which end within the overlapping area of two cameras. A typical path found by flood fill would look like path 20 shown in
The source image for determining the illuminance and/or color threshold may be the image of just one camera or may be generated by a 50%/50% superposition of the two or both cameras, with a consecutive (Gaussian) blurring, artificial diffusion or resolution reduction for uniting areas which originate in the same structure but are not fully matching due to imperfect distortion correction or reconstruction (de-fisheye and view point elevation), color and white balancing and resolution differences.
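The following sketch (in Python) illustrates, under stated assumptions, one way the flood fill and illuminance/color threshold ideas above could combine: strong gradients in the 50%/50% superposition mark blocked pixels, and a 4-connected flood fill grown from one camera's side of the overlap yields a region boundary that follows image structure. The threshold value, the seeding and the hard 0/1 alpha (which in practice would be smoothed and combined with the static gradient discussed above) are illustrative choices only:

    import numpy as np
    from collections import deque

    def edge_mask(gray, threshold=30.0):
        """Strong illuminance thresholds (simple gradient magnitude) that the blending
        seam should follow rather than cut across."""
        gy, gx = np.gradient(gray.astype(np.float32))
        return np.hypot(gx, gy) > threshold

    def flood_fill_region(blocked, seeds):
        """4-connected flood fill from the seed pixels through all pixels not blocked
        by a strong edge; the filled region gets alpha = 1.0 (camera A), the rest 0.0."""
        h, w = blocked.shape
        region = np.zeros((h, w), dtype=bool)
        queue = deque()
        for y, x in seeds:
            if not blocked[y, x]:
                region[y, x] = True
                queue.append((y, x))
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] and not blocked[ny, nx]:
                    region[ny, nx] = True
                    queue.append((ny, nx))
        return region

    # Overlap luminance from a 50%/50% superposition of both cameras (placeholder).
    overlap_gray = np.zeros((200, 120), dtype=np.uint8)
    blocked = edge_mask(overlap_gray)
    alpha = flood_fill_region(blocked, seeds=[(y, 0) for y in range(200)]).astype(np.float32)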
Optionally, the gradient transition of the whole area and/or the local transitions may be set up as generally linear, such as illustrated in
Optionally, for data reduction and/or calculation effort reduction, a quantification (in an inconsistent manner) may find use to a certain extent at which these measures do not significantly worsen the subjective or objective appearance.
Optionally, and in addition to the above or as an alternative solution, the alpha map may be generated by the use of a half toning/dither pattern or error diffusion pattern algorithm or the like. Such patterns may have sliding parameters along their extension. Optionally, an embodiment may be that the areas with the most distortion reconstruction, typically the farthest edges of fisheye lens images, become influenced (filtered) the most. This is different from just weighted blending such as is known and such as described in EP publication No. EP000002081149A1, which is hereby incorporated herein by reference in its entirety.
Therefore, the present invention provides a vision system for a vehicle that includes a processor or head unit and a camera or cameras mounted at the vehicle. The processor or head unit is operable to merge captured images to provide a surround view or virtual view of the area at least partially at or surrounding the equipped vehicle, and may select or adjust or customize a generated vehicle representation and/or the displayed image so that the displayed image of a vehicle representation that is representative of the equipped vehicle is actually similar (such as in body type or style and/or car line and/or color) to the actual vehicle equipped with the vision system and display screen. Optionally, the displayed vehicle representation may be adjusted to be more or less transparent to enhance viewing of objects detected at or near the vehicle (such that, for example, the displayed vehicle representation may be more transparent or substantially transparent when an object is detected that may be blocked or partially blocked or hidden by the displayed vehicle representation, such as when the virtual camera angle and/or distance is such that the detected object is at least partially “behind” the displayed vehicle representation and/or when a detected object is very close to the equipped vehicle or partially at or possibly partially under the equipped vehicle), so as to enhance viewability of the images of the detected object by the driver of the vehicle. The noise level of the overlays may be adapted or increased to generally match the noise level in the captured images to provide enhanced display qualities and seamless display images.
Optionally, a vehicle vision system and/or driver assist system and/or object detection system and/or alert system of the present invention may operate to capture images exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The vision system includes a processor that is operable to receive image data from one or more cameras and may provide a displayed image that is representative of the subject vehicle (such as for a top down or bird's-eye or surround view, such as discussed below), with the displayed image being customized to at least partially correspond to the actual subject vehicle, such that the displayed vehicle is at least one of the same or similar type of vehicle as the subject vehicle and the same or similar color as the subject vehicle.
Optionally, the method or system or process of the present invention is operable to process image data and calibrate the cameras so the images are accurately or optimally merged together to provide the top down or surround view display, as discussed below.
Vehicle vision systems may include more than one camera or image capture device. Often, there are cameras at the front, rear and both sides of the vehicle, mostly integrated into the vehicle's structure, and mounted in a fixed position. Vision systems are made for displaying the vehicle's environment, highlighting hazards, displaying helpful information and/or enhancing the visibility to the driver of the vehicle.
It is known to merge or stitch images captured by and coming from more than one camera or image capturing device, in order to provide a 360 degree surround view. The cameras typically use a wide focal width, commonly referred to as a fish eye lens or optic, for providing a wide angle field of view, typically about 180 degrees to about 220 degrees for each camera (such as for a four camera vision system). Typically, there are overlapping regions in the fields of view of the cameras. By providing such a wide angle field of view, the fields of view of the cameras typically not only include the environment around the vehicle, but also partially include the vehicle's body, such as at the lateral regions of the captured images of each camera.
The composed wide angle or wide view image is displayed in a manner that the images form a kind of bowl or dome. The view is as if looking at the vision system's own vehicle from the outside, seeing the environment that the vehicle is located in. The vehicle is typically sitting at the bottom of the bowl or dome (such as shown, for example, in
For stitching and composing the right proportions of the above mentioned captured images from each camera into a single 360 degree view properly, it is always a task to ensure that each camera is aligned properly or within a threshold degree of alignment. There are five degrees of freedom in which a camera can be misaligned: height, width, depth, tilt and rotation. Another one is the focal length, but this one is usually fixed during manufacture of the camera or camera module and has a low variance.
It is a known method to correct camera misalignments by rotating and/or shifting the captured camera image during the image processing. Typically, this may be done by capturing a target, which has known dimensions during a calibration procedure (and such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012, and published May 23, 2013 as International Publication No. WO 2013/074604, which is hereby incorporated herein by reference in its entirety). There may also be methods to cure or correct unwanted lens distortions during such a procedure using a target. Also, more than one misaligned camera can be calibrated with a target or targets.
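A minimal sketch (in Python with numpy) of correcting a small estimated misalignment by rotating and shifting the captured image in image space is given below; the nearest-neighbor warp is only a stand-in for an image library's affine warp, and the example offsets are hypothetical:

    import numpy as np

    def correction_matrix(dx, dy, roll_deg, cx, cy):
        """2x3 affine matrix that shifts the captured image by (dx, dy) pixels and
        rotates it by roll_deg about the image center (cx, cy), compensating a small
        estimated misalignment in image space."""
        r = np.deg2rad(roll_deg)
        c, s = np.cos(r), np.sin(r)
        return np.array([[c, -s, (1 - c) * cx + s * cy + dx],
                         [s,  c, (1 - c) * cy - s * cx + dy]], dtype=np.float32)

    def warp_nearest(img, m):
        """Minimal nearest-neighbor warp (an image library's affine warp would
        normally be used instead)."""
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        inv = np.linalg.inv(np.vstack([m, [0, 0, 1]]))[:2]
        src = inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
        sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
        return img[sy, sx].reshape(img.shape)

    # Example: compensate a small estimated shift and roll of a camera (hypothetical values).
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    corrected = warp_nearest(frame, correction_matrix(dx=1.0, dy=3.0, roll_deg=-0.5, cx=320, cy=240))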
Other calibration methods are targetless. These try to eliminate the misalignment over time by capturing the natural environment, often during driving of the vehicle, using optical physics, such as the vanishing point or round shape items, and/or the like.
Calibrating cameras in a vehicle repair shop or otherwise having the need of using a target is inconvenient. It may also be error prone since it is hard for the repair shop's staff to set up the target correctly and/or to judge whether a calibration was successful. Also, such targets are inconvenient to handle and to store. Thus, it may be desired to provide a targetless calibration system for (re-)calibrating vehicle vision system cameras.
Using targetless vehicle vision system camera calibration methods which have the need for moving the vehicle to calibrate may not be fully satisfying. It takes time and is not always successful, depending on the environment and lighting and/or weather conditions. Thus, the present invention provides a targetless calibrating method without the need for doing a calibration drive, or that at least limits such a calibration drive.
The vehicle's own shape and proportions are known from the OEM's construction model (CAD). Typically, this is a file of data. These data, or an extract of them, or data close to them, can be used for generating (rendering) a virtual vehicle model as it would be visible in a vision system (such as by utilizing aspects of the vehicle vision systems described above). Such a vehicle model may also come from a scanned vehicle (surface), such as by capturing the reflections of the vehicle in one or multiple (non-vehicle) mirrors or reflective surfaces, such as window surfaces of a building or the like, and preferably, the vehicle may turn or move until all of the vehicle side and front and rear surfaces are captured. Optionally, such a vehicle model may be generated by using a scanner device, such as a laser scanner or the like.
The vehicle's model data may be overwritten entirely or in part over run time by captured image data when the camera(s) is/are not running in calibration mode, but was/were calibrated before. The corrected vehicle model may find use in the next calibration event and further on consecutively.
The vehicle's own shape and proportions from above are transformed into a single camera's fisheye view, for the camera to be calibrated, according to the camera's and/or transformation matrix parameters. The transformation may take place in one step while capturing the vehicle's proportion and shape in the method above. The vehicle's model might be stored in natural view or may be stored in the transformed view. The vehicle's model may be provided by a vehicle device, such as, for example, the vision processing or head- or display unit, and/or a communication device that collects the according data from a remote source, such as a supplier's or OEM's server or the like.
Additionally or alternatively, the system may transfer the single camera's wide angle or fisheye view into one vehicle top view as seen from a virtual view point, mapping the proportion and shape onto the according positions of the vehicle's 3D model in order to calibrate the cameras' view mapping positions, according to the camera's and/or transformation matrix parameters.
The camera is calibrated by running an alignment algorithm that compares the captured real view of the body shapes against the expected view derived from the vehicle's model, transformed as described above. The resulting misalignment is reflected, and thus subtracted or compensated for, when stitching all of the cameras' images and composing the right proportions in the virtual 360 degree vision system view.
An approximation algorithm also comes into use. The approximation algorithm may comprise a Newton's method, with the alignment done by a minimum quest (minimization). Not the whole vehicle body image, but filtered data of it, comes into use for the alignment algorithm. For example, the main borderlines and/or edges of the vehicle body come into use. The algorithm may use any suitable number of significant points to align, such as, for example, nine or more significant points.
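The following sketch (in Python with numpy) illustrates a minimum quest over significant body edge points in the spirit described above, here using a Gauss-Newton style iteration (a Newton-type least-squares minimization) to estimate a small image-space shift and rotation; the nine example points and the residual model are assumptions for illustration and not the specific algorithm of the present invention:

    import numpy as np

    def align_points_gauss_newton(model_pts, detected_pts, iters=10):
        """Estimate a small 2D shift (dx, dy) and rotation theta mapping the expected
        body-edge points from the transformed vehicle model onto the edge points
        detected in the captured image, by iteratively minimizing the sum of squared
        point distances."""
        dx = dy = theta = 0.0
        for _ in range(iters):
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s], [s, c]])
            dR = np.array([[-s, -c], [c, -s]])
            pred = model_pts @ R.T + np.array([dx, dy])
            r = (pred - detected_pts).ravel()          # residuals, 2 per point
            J = np.zeros((r.size, 3))
            J[0::2, 0] = 1.0                           # d(residual_x)/d(dx)
            J[1::2, 1] = 1.0                           # d(residual_y)/d(dy)
            J[:, 2] = (model_pts @ dR.T).ravel()       # d(residual)/d(theta)
            step = np.linalg.lstsq(J, -r, rcond=None)[0]
            dx, dy, theta = dx + step[0], dy + step[1], theta + step[2]
        return dx, dy, theta

    # Nine (or more) significant borderline/edge points of the vehicle body, in pixels.
    model = np.array([[10., 470.], [60., 460.], [120., 455.], [200., 452.], [320., 450.],
                      [440., 452.], [520., 455.], [580., 460.], [630., 470.]])
    detected = model + np.array([2.5, -1.0])           # e.g. a slightly shifted camera
    print(align_points_gauss_newton(model, detected))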
The calibration algorithm may include several stages. For example, the calibration algorithm may find a first side-minimum (local minimum) initially. The camera's image for the alignment algorithm above may be composed out of several superpositioned images, and the superpositioned images may be taken over a specific time, and/or the superpositioned images may be taken over an indefinite time (such as via a running average or the like).
The main minimum may be found in a consecutive stage when using the superposed image which was already composed at that time. The image superposition minimum search algorithm of the second stage may run at a low priority, when processing time is available. The calibration algorithm may have a routine to decide whether the found minimum is the main minimum.
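A minimal sketch (in Python with numpy) of composing the camera image for the alignment algorithm as a running-average superposition of frames follows; the blending weight and the class structure are illustrative assumptions:

    import numpy as np

    class RunningSuperposition:
        """Running-average superposition of camera frames, so that the image used by
        the alignment/minimum search is composed over time (moving objects average
        out while the static vehicle body remains)."""
        def __init__(self, weight=0.05):
            self.weight = weight        # contribution of each new frame
            self.accum = None

        def update(self, frame):
            f = frame.astype(np.float32)
            if self.accum is None:
                self.accum = f
            else:
                self.accum = (1.0 - self.weight) * self.accum + self.weight * f
            return self.accum.astype(np.uint8)

    # Fed once per captured frame, e.g. in a low-priority background task.
    superposer = RunningSuperposition(weight=0.05)
    composite = superposer.update(np.zeros((480, 640, 3), dtype=np.uint8))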
Optionally, it is envisioned that non-vehicle-embodied cameras may be calibrated by the calibration method of the present invention. For example, the cameras to be calibrated may comprise aftermarket cameras, mobile or entertainment device cameras and the like, or OEM cameras added to the vehicle by the owner (after vehicle manufacture or end-of-line (EOL)).
Optionally, cameras mounted on a trailer may be incorporated into the vision system's 360 degree view and may also be calibrated by the calibration method of the present invention. The trailer's dimensions and bending attributes are taken into account. The trailer's attributes may be uploaded by the owner, or by a communication device or the like, or may come from the trailer manufacturer's, OEM's or vision system supplier's database or the like. The trailer's attributes may be measured in a learning mode by the vision system, by circling or driving the vehicle around the de-coupled trailer. A top view onto the trailer and its rear area or region may be generated by the vision system. A specific trailer steering aid system/algorithm might come into use, having a rear top view and according overlays.
Therefore, the present invention provides a vision system for a vehicle that includes a processor or head unit and a camera or cameras mounted at the vehicle. The processor or head unit may be operable to merge captured images to provide a surround view or virtual view of the area at least partially at or surrounding the equipped vehicle. The present invention provides a calibration system for the vehicle vision system that calibrates the vehicle cameras without targets and/or specific calibration drives. The calibration system utilizes real vehicle images or data (such as image data representative of where a portion or portions of the equipped vehicle should be in the captured images for a properly calibrated camera) and determines a variation in captured images of the portion or portions of the equipped vehicle as compared to a database or data file of the vehicle, in order to determine a misalignment of the camera or cameras. The comparison and determination may be done while the vehicle is parked and/or being driven.
Conventional surround view/bird's-eye systems typically present for view by the driver of the vehicle a two dimensional view of what a virtual observer may see from a vantage point some distance above the vehicle and viewing downward, such as by utilizing aspects of the display systems described in PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or PCT Application No. PCT/US2011/062834, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012/075250, and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012, and published May 23, 2013 as International Publication No. WO 2013/074604, and/or PCT Application No. PCT/US2012/048993, filed Jul. 31, 2012, and published Feb. 7, 2013 as International Publication No. WO 2013/019795, and/or PCT Application No. PCT/US11/62755, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012-075250, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012, and published Nov. 1, 2012 as International Publication No. WO 2012/145822, and/or U.S. Pat. No. 7,161,616, and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011, now U.S. Pat. No. 9,264,672, which are hereby incorporated herein by reference in their entireties. In this regard, the region that the vehicle occupies is also shown two dimensionally, typically as a schematic representation in two dimensions as a footprint of a vehicle representation.
In accordance with the present invention, the central or footprint region of the displayed image that the vehicle occupies, and around which the real time video images are displayed, is shown by a three dimensional representation or rendering of the subject vehicle (with the three dimensional representation having an upper or top surface and one or more side surfaces of the vehicle represented in the displayed image). Thus, to take an example for illustration, an owner of a MY 2012 black colored BMW 3-series vehicle would see on the video screen the actual subject vehicle type (where the vehicle representation generated by the system would look like a three dimensional model of, for example, a MY 2012 black colored BMW 3-series vehicle or the like). Optionally, the system may generate and the viewer may see an avatar or virtual image that, preferably, closely resembles the subject vehicle.
In accordance with the present invention, the driver or occupant of the subject vehicle may effectively maneuver or pan the viewing aspect or angle or virtual vantage point to any desired side view or viewing angle of a representation of the subject vehicle in full three dimensions (showing the top and front, rear and/or sides of the vehicle representation depending on the virtual viewing location or vantage point). When looking, for example, from a bird's-eye view and at an angle, a portion of the representation of the subject vehicle (for example, the forward left region of the vehicle representation when viewed from a virtual vantage point rearward and towards the right side of the vehicle relative to a central point above the vehicle) may shadow or obscure an object (such as a child or other object of interest) standing or present on the road and close to that particular part of the vehicle. Thus, when viewing displayed images representative of that particular viewing angle, the solid presence of that particular portion of the vehicle representation would hide or obscure the presence of that object or child.
However, in accordance with the present invention, that particular region of the vehicle representation at which the child or object is present but not viewable by the driver can be rendered transparent or can be removed or partially or substantially removed or hidden, so that the object or child so present that is being imaged by a camera of the vehicle, and that is otherwise shadowed/obscured by the solid portion of the vehicle representation due to the particular viewing angle of the virtual camera, is viewable by the driver or occupant of the vehicle that is viewing the display screen. Preferably, the portion of the vehicle representation at or adjacent the detected object or child or critical to allowing the driver to view the presence of the object or child is all that is rendered transparent or removed, while the rest of the vehicle representation is solid or “normal”, so as to enable the driver to appropriately appreciate and gauge the relationship of the overall vehicle to the rest of the exterior scene that is being imaged by the multi-camera system.
The rendering of local transparency/vehicle body removal of the vehicle representation may be via user selection, where the driver may, using a cursor or other input device, select a particular portion of the displayed vehicle representation to be rendered transparent based on the driver's particular concern to ensure that there is nothing being shadowed or obscured by that portion of the vehicle representation itself. Alternatively and preferably, machine vision object detection techniques (such as by utilizing an EyeQ image processor or the like and/or external radar sensors or the like, such as by utilizing aspects of the vision systems described in U.S. Pat. Nos. 7,937,667; 8,013,780; 7,914,187; 7,038,577 and/or 7,720,580, which are hereby incorporated herein by reference in their entireties) may automatically detect the presence or possible presence of an object or person or hazardous condition and may automatically render the appropriate portion of the vehicle representation transparent, with the portion that is rendered transparent being determined based on the location of the detected object relative to the vehicle and based on the location of the virtual viewpoint or vantage point of the virtual camera.
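As a hedged, illustrative sketch (in Python with numpy) of the automatic case described above, the decision of whether to render a portion of the vehicle representation transparent may be driven by a simple line-of-sight test between the virtual camera and the detected object against an axis-aligned box approximating the rendered vehicle; the coordinate frame, box dimensions and positions below are hypothetical:

    import numpy as np

    def ray_hits_aabb(origin, target, box_min, box_max):
        """Slab test: does the line of sight from the virtual camera (origin) to the
        detected object (target) pass through the axis-aligned box approximating the
        rendered vehicle representation?"""
        d = target - origin
        t_near, t_far = 0.0, 1.0          # restrict to the segment origin..target
        for axis in range(3):
            if abs(d[axis]) < 1e-9:
                if origin[axis] < box_min[axis] or origin[axis] > box_max[axis]:
                    return False
                continue
            t1 = (box_min[axis] - origin[axis]) / d[axis]
            t2 = (box_max[axis] - origin[axis]) / d[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far

    # Virtual camera above and behind the vehicle; detected child near the front-left corner.
    virtual_cam = np.array([3.0, -4.0, 5.0])
    child = np.array([-1.0, 2.6, 0.4])
    vehicle_box = (np.array([-0.95, -2.4, 0.0]), np.array([0.95, 2.4, 1.5]))
    if ray_hits_aabb(virtual_cam, child, *vehicle_box):
        # The portion of the vehicle representation nearest the object would be
        # rendered transparent (or removed) so the object remains viewable.
        local_transparency = 1.0

In a full implementation, only the polygons of the vehicle representation near the intersected region would be rendered transparent or removed, with the remainder of the vehicle representation left solid, consistent with the preference stated above.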
The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081985, and/or PCT Application No. PCT/US2012/066570, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081984, which are hereby incorporated herein by reference in their entireties.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, an array of a plurality of photosensor elements arranged in at least 640 columns and at least 480 rows (at least a 640×480 imaging array), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. The logic and control circuit of the imaging sensor may function in any known manner, such as in the manner described in U.S. Pat. Nos. 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094 and/or 6,396,397, and/or U.S. provisional applications, Ser. No. 61/727,912, filed Nov. 19, 2012; Ser. No. 61/718,382, filed Oct. 25, 2012; Ser. No. 61/699,498, filed Sep. 11, 2012; Ser. No. 61/696,416, filed Sep. 4, 2012; Ser. No. 61/682,995, filed Aug. 14, 2012; Ser. No. 61/682,486, filed Aug. 13, 2012; Ser. No. 61/680,883, filed Aug. 8, 2012; Ser. No. 61/678,375, filed Aug. 1, 2012; Ser. No. 61/676,405, filed Jul. 27, 2012; Ser. No. 61/666,146, filed Jun. 29, 2012; Ser. No. 61/653,665, filed May 31, 2012; Ser. No. 61/653,664, filed May 31, 2012; Ser. No. 61/648,744, filed May 18, 2012; Ser. No. 61/624,507, filed Apr. 16, 2012; Ser. No. 61/616,126, filed Mar. 27, 2012; Ser. No. 61/613,651, filed Mar. 21, 2012; Ser. No. 61/607,229, filed Mar. 6, 2012; Ser. No. 61/605,409, filed Mar. 1, 2012; Ser. No. 61/602,878, filed Feb. 24, 2012; Ser. No. 61/602,876, filed Feb. 24, 2012; Ser. No. 61/600,205, filed Feb. 17, 2012; Ser. No. 61/588,833, filed Jan. 20, 2012; Ser. No. 61/583,381, filed Jan. 5, 2012, and/or PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081985, and/or PCT Application No. PCT/US2012/066570, filed Nov. 27, 2012, and published Jun. 6, 2013 as International Publication No. WO 2013/081984, and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012, and published May 23, 2013 as International Publication No. WO 2013/074604, and/or PCT Application No. PCT/US2012/062906, filed Nov. 1, 2012, and published May 10, 2013 as International Publication No. WO 2013/067083, and/or PCT Application No. PCT/US2012/063520, filed Nov. 5, 2012, and published May 16, 2013 as International Publication No. WO 2013/070539, and/or PCT Application No. PCT/US2012/057007, filed Sep. 25, 2012, and published Apr. 4, 2013 as International Publication No. WO 2013/048994, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012, and published Nov. 1, 2012 as International Publication No. WO 2012/145822, and/or PCT Application No. PCT/US2012/056014, filed Sep. 19, 2012, and published Mar. 28, 2013 as International Publication No. WO 2013/043661, and/or PCT Application No. PCT/US2012/048800, filed Jul. 30, 2012, and published Feb. 7, 2013 as International Publication No. WO 2013/019707, and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012, and published Jan. 31, 2013 as International Publication No. WO 2013/016409, and/or U.S. patent applications, Ser. No. 13/660,306, filed Oct. 25, 2012, now U.S. Pat. No. 9,146,898, and/or Ser. No. 13/534,657, filed Jun. 27, 2012, and published Jan. 3, 2013 as U.S. Patent Publication No. US-2013-0002873, which are all hereby incorporated herein by reference in their entireties.
The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in PCT Application No. PCT/US10/038477, filed Jun. 14, 2010, and/or U.S. patent application Ser. No. 13/202,005, filed Aug. 17, 2011, now U.S. Pat. No. 9,126,525, and/or U.S. provisional application Ser. No. 61/650,667, filed May 23, 2012, which are hereby incorporated herein by reference in their entireties.
The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras and vision systems described in U.S. Pat. Nos. 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,937,667; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454 and 6,824,281, and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or U.S. patent application Ser. No. 12/508,840, filed Jul. 24, 2009, and published Jan. 28, 2010 as U.S. Pat. Publication No. US 2010-0020170; and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012, and published Jan. 31, 2013 as International Publication No. WO 2013/016409, and/or U.S. patent application Ser. No. 13/534,657, filed Jun. 27, 2012, and published Jan. 3, 2013 as U.S. Patent Publication No. US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The camera or cameras may comprise any suitable cameras or imaging sensors or camera modules, and may utilize aspects of the cameras or sensors described in U.S. patent applications, Ser. No. 12/091,359, filed Apr. 24, 2008 and published Oct. 1, 2009 as U.S. Publication No. US-2009-0244361; and/or Ser. No. 13/260,400, filed Sep. 26, 2011, now U.S. Pat. No. 8,542,451, and/or U.S. Pat. Nos. 7,965,336 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties. The imaging array sensor may comprise any suitable sensor, and may utilize various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like, such as the types described in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,715,093; 5,877,897; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 6,498,620; 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 6,806,452; 6,396,397; 6,822,563; 6,946,978; 7,339,149; 7,038,577; 7,004,606; 7,720,580 and/or 7,965,336, and/or PCT Application No. PCT/US2008/076022, filed Sep. 11, 2008 and published Mar. 19, 2009 as International Publication No. WO/2009/036176, and/or PCT Application No. PCT/US2008/078700, filed Oct. 3, 2008 and published Apr. 9, 2009 as International Publication No. WO/2009/046268, which are all hereby incorporated herein by reference in their entireties.
The camera module and circuit chip or board and imaging sensor may be implemented and operated in connection with various vehicular vision-based systems, and/or may be operable utilizing the principles of such other vehicular systems, such as a vehicle headlamp control system, such as the type disclosed in U.S. Pat. Nos. 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 7,004,606; 7,339,149 and/or 7,526,103, which are all hereby incorporated herein by reference in their entireties, a rain sensor, such as the types disclosed in commonly assigned U.S. Pat. Nos. 6,353,392; 6,313,454; 6,320,176 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties, a vehicle vision system, such as a forwardly, sidewardly or rearwardly directed vehicle vision system utilizing principles disclosed in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,877,897; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978 and/or 7,859,565, which are all hereby incorporated herein by reference in their entireties, a trailer hitching aid or tow check system, such as the type disclosed in U.S. Pat. No. 7,005,974, which is hereby incorporated herein by reference in its entirety, a reverse or sideward imaging system, such as for a lane change assistance system or lane departure warning system or for a blind spot or object detection system, such as imaging or detection systems of the types disclosed in U.S. Pat. Nos. 7,881,496; 7,720,580; 7,038,577; 5,929,786 and/or 5,786,772, and/or U.S. provisional applications, Ser. No. 60/628,709, filed Nov. 17, 2004; Ser. No. 60/614,644, filed Sep. 30, 2004; Ser. No. 60/618,686, filed Oct. 14, 2004; Ser. No. 60/638,687, filed Dec. 23, 2004, which are hereby incorporated herein by reference in their entireties, a video device for internal cabin surveillance and/or video telephone function, such as disclosed in U.S. Pat. Nos. 5,760,962; 5,877,897; 6,690,268 and/or 7,370,983, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties, a traffic sign recognition system, a system for determining a distance to a leading or trailing vehicle or object, such as a system utilizing the principles disclosed in U.S. Pat. Nos. 6,396,397 and/or 7,123,168, which are hereby incorporated herein by reference in their entireties, and/or the like.
Optionally, the circuit board or chip may include circuitry for the imaging array sensor and/or other electronic accessories or features, such as by utilizing compass-on-a-chip or EC driver-on-a-chip technology and aspects such as described in U.S. Pat. No. 7,255,451 and/or U.S. Pat. No. 7,480,149; and/or U.S. patent applications, Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, and/or Ser. No. 12/578,732, filed Oct. 14, 2009, now U.S. Pat. No. 9,487,144, which are hereby incorporated herein by reference in their entireties.
Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device disposed at or in the interior rearview mirror assembly of the vehicle, such as by utilizing aspects of the video mirror display systems described in U.S. Pat. No. 6,690,268 and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011, now U.S. Pat. No. 9,264,672, which are hereby incorporated herein by reference in their entireties. The video mirror display may comprise any suitable devices and systems and optionally may utilize aspects of the compass display systems described in U.S. Pat. Nos. 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,677,851; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,508; 6,222,460; 6,513,252 and/or 6,642,851, and/or European patent application, published Oct. 11, 2000 under Publication No. EP 0 1043566, and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, which are all hereby incorporated herein by reference in their entireties. Optionally, the video mirror display screen or device may be operable to display images captured by a rearward viewing camera of the vehicle during a reversing maneuver of the vehicle (such as responsive to the vehicle gear actuator being placed in a reverse gear position or the like) to assist the driver in backing up the vehicle, and optionally may be operable to display the compass heading or directional heading character or icon when the vehicle is not undertaking a reversing maneuver, such as when the vehicle is being driven in a forward direction along a road (such as by utilizing aspects of the display system described in PCT Application No. PCT/US2011/056295, filed Oct. 14, 2011 and published Apr. 19, 2012 as International Publication No. WO 2012/051500, which is hereby incorporated herein by reference in its entirety).
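Purely by way of illustration, a minimal sketch (with hypothetical names and a simplified gear-state input, not taken from the present disclosure) of the display behavior described above, whereby the rearward camera view is shown during a reversing maneuver and the compass or directional heading otherwise:

```python
# Illustrative sketch only: select what the video mirror display shows based on
# the vehicle gear state (assumed to be reported as a simple string).

def select_mirror_display(gear, rear_camera_frame, compass_heading):
    """Show the rearward camera during a reversing maneuver; otherwise show the
    compass/directional heading character or icon while driving forward."""
    if gear == "reverse":
        return {"mode": "rear_camera", "content": rear_camera_frame}
    return {"mode": "compass", "content": compass_heading}

# Example: the gear actuator is placed in the reverse gear position, so the
# rearward camera view is displayed to assist the driver in backing up.
display = select_mirror_display("reverse", rear_camera_frame="<video frame>", compass_heading="NW")
```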
Optionally, the vision system (utilizing the forward facing camera and a rearward facing camera and/or other cameras disposed at the vehicle with exterior fields of view) may be part of or may provide a display of a top-down view or bird's-eye view system of the vehicle or a surround view at the vehicle, such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or PCT Application No. PCT/US2011/062834, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012/075250, and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012, and published May 23, 2013 as International Publication No. WO 2013/074604, and/or PCT Application No. PCT/US2012/048993, filed Jul. 31, 2012, and published Feb. 7, 2013 as International Publication No. WO 2013/019795, and/or PCT Application No. PCT/US11/62755, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012/075250, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012, and published Nov. 1, 2012 as International Publication No. WO 2012/145822, and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011, now U.S. Pat. No. 9,264,672, and/or U.S. provisional application Ser. No. 61/588,833, filed Jan. 20, 2012, which are hereby incorporated herein by reference in their entireties.
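A minimal sketch (assuming a homography-based ground-plane projection with hypothetical calibration values; the referenced surround view systems may operate differently) of warping one camera's image onto a common top-down canvas for such a bird's-eye or surround view composite:

```python
# Illustrative sketch only: map one exterior camera's view onto a common ground
# plane for a bird's-eye / surround view composite centered on the vehicle.
# Assumed ground frame: x positive toward the right side, y positive toward the
# front, units in meters; canvas center corresponds to the vehicle position.
import numpy as np
import cv2

def ground_plane_warp(frame, image_pts, ground_pts_m, px_per_m=40, canvas_size=(800, 800)):
    """Warp a camera frame to a top-down canvas.

    image_pts: four pixel coordinates in the camera image (from calibration).
    ground_pts_m: the corresponding four ground-plane points in vehicle coordinates.
    """
    cx, cy = canvas_size[0] / 2, canvas_size[1] / 2
    dst = np.float32([[cx + x * px_per_m, cy - y * px_per_m] for x, y in ground_pts_m])
    H = cv2.getPerspectiveTransform(np.float32(image_pts), dst)
    return cv2.warpPerspective(frame, H, canvas_size)

# Each camera's warped output would then be blended into its region of the
# composite, with the vehicle representation rendered at the canvas center.
```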
Optionally, the video mirror display may be disposed rearward of and behind the reflective element assembly and may comprise a display such as the types disclosed in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,370,983; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187 and/or 6,690,268, and/or in U.S. patent applications, Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008; and/or Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are all hereby incorporated herein by reference in their entireties. The display is viewable through the reflective element when the display is activated to display information. The display element may be any type of display element, such as a vacuum fluorescent (VF) display element, a light emitting diode (LED) display element, such as an organic light emitting diode (OLED) or an inorganic light emitting diode, an electroluminescent (EL) display element, a liquid crystal display (LCD) element, a video screen display element or backlit thin film transistor (TFT) display element or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status, and/or the like. The mirror assembly and/or display may utilize aspects described in U.S. Pat. Nos. 7,184,190; 7,255,451; 7,446,924 and/or 7,338,177, which are all hereby incorporated herein by reference in their entireties. The thicknesses and materials of the coatings on the substrates of the reflective element may be selected to provide a desired color or tint to the mirror reflective element, such as a blue colored reflector, such as is known in the art and such as described in U.S. Pat. Nos. 5,910,854; 6,420,036 and/or 7,274,501, which are hereby incorporated herein by reference in their entireties.
Optionally, the display or displays and any associated user inputs may be associated with various accessories or systems, such as, for example, a tire pressure monitoring system or a passenger air bag status or a garage door opening system or a telematics system or any other accessory or system of the mirror assembly or of the vehicle or of an accessory module or console of the vehicle, such as an accessory module or console of the types described in U.S. Pat. Nos. 7,289,037; 6,877,888; 6,824,281; 6,690,268; 6,672,744; 6,386,742 and 6,124,886, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties.
Changes and modifications to the specifically described embodiments may be carried out without departing from the principles of the present invention, which is intended to be limited only by the scope of the appended claims as interpreted according to the principles of patent law.
The present application is a continuation of U.S. patent application Ser. No. 14/362,636, filed Dec. 7, 2012, now U.S. Pat. No. 9,762,880, which is a 371 national phase filing of PCT Application No. PCT/US2012/068331, filed Dec. 7, 2012, which claims the filing benefit of U.S. provisional applications, Ser. No. 61/706,406, filed Sep. 27, 2012; Ser. No. 61/615,410, filed Mar. 26, 2012; Ser. No. 61/570,017, filed Dec. 13, 2011; and Ser. No. 61/568,791, filed Dec. 9, 2011, which are hereby incorporated herein by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
4961625 | Wood et al. | Oct 1990 | A |
4966441 | Conner | Oct 1990 | A |
4967319 | Seko | Oct 1990 | A |
4970653 | Kenue | Nov 1990 | A |
5003288 | Wilhelm | Mar 1991 | A |
5059877 | Teder | Oct 1991 | A |
5064274 | Alten | Nov 1991 | A |
5072154 | Chen | Dec 1991 | A |
5096287 | Kakinami et al. | Mar 1992 | A |
5177606 | Koshizawa | Jan 1993 | A |
5182502 | Slotkowski et al. | Jan 1993 | A |
5204778 | Bechtel | Apr 1993 | A |
5208701 | Maeda | May 1993 | A |
5208750 | Kurami et al. | May 1993 | A |
5214408 | Asayama | May 1993 | A |
5243524 | Ishida et al. | Sep 1993 | A |
5245422 | Borcherts et al. | Sep 1993 | A |
5276389 | Levers | Jan 1994 | A |
5289321 | Secor | Feb 1994 | A |
5305012 | Faris | Apr 1994 | A |
5307136 | Saneyoshi | Apr 1994 | A |
5351044 | Mathur et al. | Sep 1994 | A |
5355118 | Fukuhara | Oct 1994 | A |
5386285 | Asayama | Jan 1995 | A |
5406395 | Wilson et al. | Apr 1995 | A |
5408346 | Trissel et al. | Apr 1995 | A |
5414461 | Kishi et al. | May 1995 | A |
5426294 | Kobayashi et al. | Jun 1995 | A |
5430431 | Nelson | Jul 1995 | A |
5434407 | Bauer et al. | Jul 1995 | A |
5440428 | Hegg et al. | Aug 1995 | A |
5444478 | Lelong et al. | Aug 1995 | A |
5451822 | Bechtel et al. | Sep 1995 | A |
5469298 | Suman et al. | Nov 1995 | A |
5530420 | Tsuchiya et al. | Jun 1996 | A |
5535144 | Kise | Jul 1996 | A |
5535314 | Alves et al. | Jul 1996 | A |
5537003 | Bechtel et al. | Jul 1996 | A |
5539397 | Asanuma et al. | Jul 1996 | A |
5550677 | Schofield et al. | Aug 1996 | A |
5555555 | Sato et al. | Sep 1996 | A |
5568027 | Teder | Oct 1996 | A |
5574443 | Hsieh | Nov 1996 | A |
5648835 | Uzawa | Jul 1997 | A |
5661303 | Teder | Aug 1997 | A |
5670935 | Schofield et al. | Sep 1997 | A |
5699044 | Van Lente et al. | Dec 1997 | A |
5724316 | Brunts | Mar 1998 | A |
5737226 | Olson et al. | Apr 1998 | A |
5757949 | Kinoshita et al. | May 1998 | A |
5760826 | Nayer | Jun 1998 | A |
5760962 | Schofield et al. | Jun 1998 | A |
5761094 | Olson et al. | Jun 1998 | A |
5765116 | Wilson-Jones et al. | Jun 1998 | A |
5781437 | Wiemer et al. | Jul 1998 | A |
5790403 | Nakayama | Aug 1998 | A |
5790973 | Blaker et al. | Aug 1998 | A |
5796094 | Schofield et al. | Aug 1998 | A |
5837994 | Stam et al. | Nov 1998 | A |
5845000 | Breed et al. | Dec 1998 | A |
5848802 | Breed et al. | Dec 1998 | A |
5850176 | Kinoshita et al. | Dec 1998 | A |
5850254 | Takano et al. | Dec 1998 | A |
5867591 | Onda | Feb 1999 | A |
5877707 | Kowalick | Mar 1999 | A |
5877897 | Schofield et al. | Mar 1999 | A |
5878370 | Olson | Mar 1999 | A |
5896085 | Mori et al. | Apr 1999 | A |
5920367 | Kajimoto et al. | Jul 1999 | A |
5923027 | Stam et al. | Jul 1999 | A |
5929786 | Schofield et al. | Jul 1999 | A |
5956181 | Lin | Sep 1999 | A |
6049171 | Stam et al. | Apr 2000 | A |
6052124 | Stein et al. | Apr 2000 | A |
6066933 | Ponziana | May 2000 | A |
6084519 | Coulling et al. | Jul 2000 | A |
6091833 | Yasui et al. | Jul 2000 | A |
6097024 | Stam et al. | Aug 2000 | A |
6100811 | Hsu et al. | Aug 2000 | A |
6169572 | Sogawa | Jan 2001 | B1 |
6175300 | Kendrick | Jan 2001 | B1 |
6198409 | Schofield et al. | Mar 2001 | B1 |
6201642 | Bos | Mar 2001 | B1 |
6226061 | Tagusa | May 2001 | B1 |
6259423 | Tokito et al. | Jul 2001 | B1 |
6266082 | Yonezawa et al. | Jul 2001 | B1 |
6266442 | Laumeyer et al. | Jul 2001 | B1 |
6285393 | Shimoura et al. | Sep 2001 | B1 |
6285778 | Nakajima et al. | Sep 2001 | B1 |
6297781 | Turnbull et al. | Oct 2001 | B1 |
6310611 | Caldwell | Oct 2001 | B1 |
6313454 | Bos et al. | Nov 2001 | B1 |
6317057 | Lee | Nov 2001 | B1 |
6320282 | Caldwell | Nov 2001 | B1 |
6353392 | Schofield et al. | Mar 2002 | B1 |
6370329 | Teuchert | Apr 2002 | B1 |
6396397 | Bos et al. | May 2002 | B1 |
6411204 | Bloomfield et al. | Jun 2002 | B1 |
6424273 | Gutta et al. | Jul 2002 | B1 |
6553130 | Lemelson et al. | Apr 2003 | B1 |
6570998 | Ohtsuka et al. | May 2003 | B1 |
6574033 | Chui et al. | Jun 2003 | B1 |
6578017 | Ebersole et al. | Jun 2003 | B1 |
6587573 | Stam et al. | Jul 2003 | B1 |
6589625 | Kothari et al. | Jul 2003 | B1 |
6593011 | Liu et al. | Jul 2003 | B2 |
6593698 | Stam et al. | Jul 2003 | B2 |
6593960 | Sugimoto et al. | Jul 2003 | B1 |
6594583 | Ogura et al. | Jul 2003 | B2 |
6611610 | Stam et al. | Aug 2003 | B1 |
6627918 | Getz et al. | Sep 2003 | B2 |
6631316 | Stam et al. | Oct 2003 | B2 |
6631994 | Suzuki et al. | Oct 2003 | B2 |
6636258 | Strumolo | Oct 2003 | B2 |
6672731 | Schnell et al. | Jan 2004 | B2 |
6678056 | Downs | Jan 2004 | B2 |
6690268 | Schofield et al. | Feb 2004 | B2 |
6700605 | Toyoda et al. | Mar 2004 | B1 |
6703925 | Steffel | Mar 2004 | B2 |
6704621 | Stein et al. | Mar 2004 | B1 |
6711474 | Treyz et al. | Mar 2004 | B1 |
6714331 | Lewis et al. | Mar 2004 | B2 |
6735506 | Breed et al. | May 2004 | B2 |
6744353 | Sjönell | Jun 2004 | B2 |
6762867 | Lippert et al. | Jul 2004 | B2 |
6795221 | Urey | Sep 2004 | B1 |
6807287 | Hermans | Oct 2004 | B1 |
6823241 | Shirato et al. | Nov 2004 | B2 |
6824281 | Schofield et al. | Nov 2004 | B2 |
6864930 | Matsushita et al. | Mar 2005 | B2 |
6889161 | Winner et al. | May 2005 | B2 |
6909753 | Meehan et al. | Jun 2005 | B2 |
6946978 | Schofield | Sep 2005 | B2 |
6968736 | Lynam | Nov 2005 | B2 |
6975775 | Rykowski et al. | Dec 2005 | B2 |
7004606 | Schofield | Feb 2006 | B2 |
7038577 | Pawlicki et al. | May 2006 | B2 |
7062300 | Kim | Jun 2006 | B1 |
7065432 | Moisel et al. | Jun 2006 | B2 |
7085637 | Breed et al. | Aug 2006 | B2 |
7092548 | Laumeyer et al. | Aug 2006 | B2 |
7113867 | Stein | Sep 2006 | B1 |
7116246 | Winter et al. | Oct 2006 | B2 |
7123168 | Schofield | Oct 2006 | B2 |
7133661 | Hatae et al. | Nov 2006 | B2 |
7149613 | Stam et al. | Dec 2006 | B2 |
7151996 | Stein | Dec 2006 | B2 |
7161616 | Okamoto et al. | Jan 2007 | B1 |
7167796 | Taylor et al. | Jan 2007 | B2 |
7195381 | Lynam et al. | Mar 2007 | B2 |
7202776 | Breed | Apr 2007 | B2 |
7227459 | Bos et al. | Jun 2007 | B2 |
7227611 | Hull et al. | Jun 2007 | B2 |
7307655 | Okamoto et al. | Dec 2007 | B1 |
7325934 | Schofield et al. | Feb 2008 | B2 |
7325935 | Schofield et al. | Feb 2008 | B2 |
7338177 | Lynam | Mar 2008 | B2 |
7375803 | Bamji | May 2008 | B1 |
7380948 | Schofield et al. | Jun 2008 | B2 |
7388182 | Schofield et al. | Jun 2008 | B2 |
7423821 | Bechtel et al. | Sep 2008 | B2 |
7425076 | Schofield et al. | Sep 2008 | B2 |
7526103 | Schofield et al. | Apr 2009 | B2 |
7541743 | Salmeen et al. | Jun 2009 | B2 |
7565006 | Stam et al. | Jul 2009 | B2 |
7566851 | Stein et al. | Jul 2009 | B2 |
7605856 | Imoto | Oct 2009 | B2 |
7619508 | Lynam et al. | Nov 2009 | B2 |
7720580 | Higgins-Luthman | May 2010 | B2 |
7786898 | Stein et al. | Aug 2010 | B2 |
7792329 | Schofield et al. | Sep 2010 | B2 |
7843451 | Lafon | Nov 2010 | B2 |
7855778 | Yung et al. | Dec 2010 | B2 |
7881496 | Camilleri | Feb 2011 | B2 |
7930160 | Hosagrahara et al. | Apr 2011 | B1 |
7949486 | Denny et al. | May 2011 | B2 |
8017898 | Lu et al. | Sep 2011 | B2 |
8064643 | Stein et al. | Nov 2011 | B2 |
8082101 | Stein et al. | Dec 2011 | B2 |
8164628 | Stein et al. | Apr 2012 | B2 |
8224031 | Saito | Jul 2012 | B2 |
8233045 | Luo et al. | Jul 2012 | B2 |
8254635 | Stein et al. | Aug 2012 | B2 |
8300886 | Hoffmann | Oct 2012 | B2 |
8378851 | Stein et al. | Feb 2013 | B2 |
8421865 | Euler et al. | Apr 2013 | B2 |
8452055 | Stein et al. | May 2013 | B2 |
8553088 | Stein et al. | Oct 2013 | B2 |
9146898 | Ihlenburg et al. | Sep 2015 | B2 |
9762880 | Pflug | Sep 2017 | B2 |
20020002427 | Ishida | Jan 2002 | A1 |
20020005778 | Breed | Jan 2002 | A1 |
20020011611 | Huang et al. | Jan 2002 | A1 |
20020113873 | Williams | Aug 2002 | A1 |
20030103142 | Hitomi et al. | Jun 2003 | A1 |
20030137586 | Lewellen | Jul 2003 | A1 |
20030222982 | Hamdan et al. | Dec 2003 | A1 |
20040164228 | Fogg et al. | Aug 2004 | A1 |
20050225645 | Kaku | Oct 2005 | A1 |
20050237385 | Kosaka | Oct 2005 | A1 |
20060103727 | Tseng | May 2006 | A1 |
20060250501 | Wildmann et al. | Nov 2006 | A1 |
20060274147 | Chinomi | Dec 2006 | A1 |
20060287825 | Shimizu | Dec 2006 | A1 |
20070024724 | Stein et al. | Feb 2007 | A1 |
20070025596 | Ravier | Feb 2007 | A1 |
20070104476 | Yasutomi et al. | May 2007 | A1 |
20070165908 | Braeunl | Jul 2007 | A1 |
20070177011 | Lewin | Aug 2007 | A1 |
20070229310 | Sato | Oct 2007 | A1 |
20070242339 | Bradley | Oct 2007 | A1 |
20080043099 | Stein et al. | Feb 2008 | A1 |
20080147321 | Howard et al. | Jun 2008 | A1 |
20080192132 | Bechtel et al. | Aug 2008 | A1 |
20080231710 | Asari et al. | Sep 2008 | A1 |
20080266396 | Stein | Oct 2008 | A1 |
20090113509 | Tseng et al. | Apr 2009 | A1 |
20090122140 | Imamura | May 2009 | A1 |
20090160987 | Bechtel et al. | Jun 2009 | A1 |
20090190015 | Bechtel et al. | Jul 2009 | A1 |
20090256938 | Bechtel et al. | Oct 2009 | A1 |
20090290032 | Zhang et al. | Nov 2009 | A1 |
20100220189 | Yanagi | Sep 2010 | A1 |
20100328060 | Chen et al. | Dec 2010 | A1 |
20110069169 | Kadowaki | Mar 2011 | A1 |
20110175752 | Augst | Jul 2011 | A1 |
20110216201 | McAndrew et al. | Sep 2011 | A1 |
20110234802 | Yamada | Sep 2011 | A1 |
20120045112 | Lundblad et al. | Feb 2012 | A1 |
20120069185 | Stein | Mar 2012 | A1 |
20120078686 | Bashani | Mar 2012 | A1 |
20120087546 | Focke | Apr 2012 | A1 |
20120140073 | Ohta | Jun 2012 | A1 |
20120140076 | Ohta | Jun 2012 | A1 |
20120200707 | Stein et al. | Aug 2012 | A1 |
20120262482 | Miwa | Oct 2012 | A1 |
20120314071 | Rosenbaum et al. | Dec 2012 | A1 |
20120320209 | Vico | Dec 2012 | A1 |
20130093851 | Yamamoto | Apr 2013 | A1 |
20130141580 | Stein et al. | Jun 2013 | A1 |
20130147957 | Stein | Jun 2013 | A1 |
20130169812 | Lu et al. | Jul 2013 | A1 |
20130222592 | Gieseke et al. | Aug 2013 | A1 |
20130286193 | Pflug | Oct 2013 | A1 |
20140043473 | Rathi et al. | Feb 2014 | A1 |
20140063197 | Yamamoto | Mar 2014 | A1 |
20140063254 | Shi et al. | Mar 2014 | A1 |
20140098229 | Lu et al. | Apr 2014 | A1 |
20140139640 | Shimizu | May 2014 | A1 |
20140152778 | Ihlenburg | Jun 2014 | A1 |
20140160284 | Achenbach et al. | Jun 2014 | A1 |
20140247352 | Rathi et al. | Sep 2014 | A1 |
20140247354 | Knudsen | Sep 2014 | A1 |
20140320658 | Pliefke | Oct 2014 | A1 |
20140333729 | Pflug | Nov 2014 | A1 |
20140347486 | Okouneva | Nov 2014 | A1 |
20140350834 | Turk | Nov 2014 | A1 |
20150022664 | Pflug et al. | Jan 2015 | A1 |
20150036885 | Pflug et al. | Feb 2015 | A1 |
20150049193 | Gupta et al. | Feb 2015 | A1 |
Number | Date | Country |
---|---|---|
0361914 | Feb 1993 | EP |
0640903 | Mar 1995 | EP |
0697641 | Feb 1996 | EP |
1115250 | Jul 2001 | EP |
2081149 | Jul 2009 | EP |
2377094 | Oct 2011 | EP |
2667325 | Nov 2013 | EP |
2233530 | Sep 1991 | GB |
6216073 | Apr 1987 | JP |
01123587 | May 1989 | JP |
H1168538 | Jul 1989 | JP |
H236417 | Aug 1990 | JP |
03099952 | Apr 1991 | JP |
6227318 | Aug 1994 | JP |
07105496 | Apr 1995 | JP |
2630604 | Jul 1997 | JP |
200274339 | Mar 2002 | JP |
20041658 | Jan 2004 | JP |
2011151446 | Aug 2011 | JP |
WO1994019212 | Feb 1994 | WO |
WO1996038319 | Dec 1996 | WO |
WO2009126748 | Oct 2009 | WO |
WO2010099416 | Sep 2010 | WO |
WO2011028686 | Mar 2011 | WO |
WO2012075250 | Jun 2012 | WO |
WO2012139636 | Oct 2012 | WO |
WO2012139660 | Oct 2012 | WO |
WO2012143036 | Oct 2012 | WO |
Entry |
---|
Behringer et al., “Simultaneous Estimation of Pitch Angle and Lane Width from the Video Image of a Marked Road,” pp. 966-973, Sep. 12-16, 1994. |
Broggi et al., “Self-Calibration of a Stereo Vision System for Automotive Applications”, Proceedings of the 2001 IEEE International Conference on Robotics & Automation, Seoul, KR, May 21-26, 2001. |
Philomin et al., “Pedestrian Tracking from a Moving Vehicle”. |
Sahli et al., “A Kalman Filter-Based Update Scheme for Road Following,” IAPR Workshop on Machine Vision Applications, pp. 5-9, Nov. 12-14, 1996. |
Sun et al., “On-road vehicle detection using optical sensors: a review”, IEEE Conference on Intelligent Transportation Systems, 2004. |
Van Leeuwen et al., “Motion Estimation with a Mobile Camera for Traffic Applications”, IEEE, US, vol. 1, Oct. 3, 2000, pp. 58-63. |
Van Leeuwen et al., “Motion Interpretation for In-Car Vision Systems”, IEEE, US, vol. 1, Sep. 30, 2002, p. 135-140. |
Van Leeuwen et al., “Real-Time Vehicle Tracking in Image Sequences”, IEEE, US, vol. 3, May 21, 2001, pp. 2049-2054, XP010547308. |
Van Leeuwen et al., “Requirements for Motion Estimation in Image Sequences for Traffic Applications”, IEEE, US, vol. 1, May 24, 1999, pp. 145-150, XP010340272. |
International Search Report and Written Opinion dated Feb. 22, 2013, for corresponding PCT Application No. PCT/US2012/68331. |
Number | Date | Country | |
---|---|---|---|
20170374340 A1 | Dec 2017 | US |
Number | Date | Country | |
---|---|---|---|
61706406 | Sep 2012 | US | |
61615410 | Mar 2012 | US | |
61570017 | Dec 2011 | US | |
61568791 | Dec 2011 | US |
Relation | Number | Country |
---|---|---|
Parent | 14362636 | US |
Child | 15700274 | US |