1. Field
The present disclosure relates to a method and system for presenting a panoramic surround view on a display in a vehicle. More specifically, embodiments of the present disclosure relate to a method and system for presenting a panoramic surround view on a display in a vehicle such that a continuous surround display provides substantially maximum visibility with a natural and prioritized view.
2. Description of the Related Art
While driving a vehicle, it is not easy for the driver to pay attention to all possible hazards in the different directions surrounding the driver. Conventional multi-view systems provide wider and multiple views of such potential hazards by presenting views from different angles, captured by one or more cameras, to the driver. However, conventional systems typically provide non-integrated multiple views divided into pieces, with limited visibility, that are not scalable. These views are not intuitive to the driver. This is especially true when an object posing a potential hazard exists in one view but falls in a blind spot of another view, even though the two views are supposed to cover the same region, because of their different points of view. Another typical source of confusion arises when a panoramic view formed by simply aligning multiple views shows the object of the potential hazard multiple times. While a panoramic or surround view is clearly desirable for the driver, poorly stitched views may cause the driver extra stress, because the poor image quality induces extra cognitive load.
Accordingly, there is a need for a method and system for displaying a panoramic surround view that allows a driver to easily recognize surrounding objects in a natural and intuitive view without blind spots, in order to enhance the visibility of obstacles without the stress caused by the cognitive load of processing surround information. To achieve this goal, there is a need for an intelligent stitching pipeline algorithm that functions with multiple cameras in a mobile environment.
3. Summary
In one aspect, a method of presenting a view to an occupant in a vehicle is provided. The method includes capturing a plurality of frames by a plurality of cameras for a period of time; detecting and matching invariant features in image regions in consecutive frames of the plurality of frames to obtain feature associations; estimating a transform based on the matched features of the plurality of cameras; and identifying a stitching region based on the detected invariant features, the feature associations, and the estimated transform. In particular, an optical flow is estimated from the consecutive frames captured by the plurality of cameras for the period of time and translated into a depth of an image region in the consecutive frames of the plurality of cameras. A seam is estimated in the identified stitching region based on the depth information, and the plurality of frames are stitched using the estimated seam. The stitched frames are presented as the view to the occupants in the vehicle.
In another aspect, a panoramic surround view display system is provided. The system includes a plurality of cameras, a non-transitory computer readable medium that stores computer executable programmed modules and information, and at least one processor communicatively coupled with the non-transitory computer readable medium and configured to obtain the information and to execute the programmed modules stored therein. The plurality of cameras are configured to capture a plurality of frames for a period of time, and the plurality of frames are processed by the processor with the programmed modules. The programmed modules include a feature detection and matching module that detects features in image regions in consecutive frames of the plurality of frames and matches the features between the consecutive frames of the plurality of cameras to obtain feature associations; a transform estimation module that estimates at least one transform based on the matched features of the plurality of cameras; a stitch region identification module that identifies a stitching region based on the detected features, the feature associations, and the estimated transform; a seam estimation module that estimates a seam in the identified stitching region; and an image stitching module that stitches the plurality of frames using the estimated seam. Furthermore, the programmed modules include a depth analyzer that estimates an optical flow from the plurality of frames captured by the plurality of cameras for the period of time and translates the optical flow into a depth of an image region in the consecutive frames of the plurality of cameras, so that the seam estimation module is able to estimate the seam in the identified stitching region based on the depth information obtained by the depth analyzer. The programmed modules also include an output image processor that processes the stitched frames into the view presented to the occupants in the vehicle.
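Purely for orientation, the following Python sketch wires the described modules together for one set of time-synchronized frames. Each callable parameter stands in for the corresponding programmed module; the names and signatures are assumptions of this sketch, not part of the disclosure.

```python
from typing import Any, Callable, Sequence

def build_panoramic_view(
    frames: Sequence[Any],
    detect_and_match: Callable,    # feature detection and matching module
    estimate_transform: Callable,  # transform estimation module
    identify_regions: Callable,    # stitch region identification module
    analyze_depth: Callable,       # depth analyzer (optical flow -> depth)
    estimate_seam: Callable,       # seam estimation module
    stitch: Callable,              # image stitching module
):
    """Produce one panoramic surround view from synchronized frames."""
    features = detect_and_match(frames)                 # feature associations
    transform = estimate_transform(features)            # e.g. homographies
    regions = identify_regions(frames, features, transform)
    depth = analyze_depth(frames)                       # depth per image region
    seam = estimate_seam(regions, depth)                # priority-preserving seam
    return stitch(frames, transform, seam)              # panoramic surround view
```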
In one embodiment, the estimation of the optical flow can be executed densely in order to obtain fine depth information using pixel-level information. In another embodiment, the estimation of the optical flow can be executed sparsely in order to obtain feature-wise depth information using features. The features may be the invariant features detected by the feature detection and matching module.
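As a concrete illustration only, and assuming OpenCV (which the disclosure does not mandate), the dense variant can be realized with Farneback flow and the sparse variant with pyramidal Lucas-Kanade flow at the detected feature locations. Inverting the flow magnitude into a depth proxy is a rough assumption that holds only up to scale, for a moving camera viewing a static scene.

```python
import cv2
import numpy as np

def dense_depth_proxy(prev_gray, next_gray):
    """Dense Farneback flow; larger apparent motion between consecutive
    frames generally indicates a closer surface, so 1/|flow| serves as a
    coarse, scale-free per-pixel depth proxy."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return 1.0 / (mag + 1e-6)

def sparse_depth_proxy(prev_gray, next_gray, keypoints):
    """Sparse pyramidal Lucas-Kanade flow evaluated only at the detected
    feature locations, giving feature-wise depth at much lower cost."""
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    mag = np.linalg.norm((nxt - pts).reshape(-1, 2), axis=1)
    return 1.0 / (mag + 1e-6), status.ravel() == 1   # proxy + tracking mask
```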
In one embodiment, object types, the relative position of each object in the original images, and priority information are assigned to each feature based on the depth information, and the seam is computed in a manner that preserves a maximum number of priority features in the stitched view. Higher priority may be assigned to an object with a relatively larger region, to an object whose approximate depth and region size change rapidly in a manner indicative of approaching the vehicle, or to an object appearing in a first image captured by a first camera but not appearing in a second image captured by a second camera located next to the first camera.
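A minimal sketch of such a prioritization rule follows; the particular weights and thresholds are illustrative assumptions, not values from the disclosure.

```python
from dataclasses import dataclass

APPROACH_RATE = 0.2   # assumed threshold on relative depth change per frame

@dataclass
class Region:
    area: float               # pixel area of the object's region
    depth_change_rate: float  # relative change of the depth proxy per frame
    in_neighbor_view: bool    # also visible in the adjacent camera's image

def assign_priority(region: Region, image_area: float) -> int:
    """Score a region per the prioritization strategies described above."""
    score = 0
    if region.area > 0.05 * image_area:           # relatively large region
        score += 1
    if region.depth_change_rate > APPROACH_RATE:  # depth/size changing rapidly,
        score += 2                                # i.e. approaching the vehicle
    if not region.in_neighbor_view:               # seen by only one camera,
        score += 2                                # i.e. a blind-spot candidate
    return score
```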
In one embodiment, an object of interest in the view may be identified, and the distance to the object of interest together with the current velocity, acceleration, and projected trajectory of the vehicle are analyzed to determine whether the vehicle is in danger of an accident, by recognizing the object of interest as an obstacle close to the vehicle, approaching the vehicle, or located in a blind spot of the vehicle. Once it is determined that the object of interest poses a high risk of a potential accident, the object of interest can be highlighted in the view.
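One simple way to realize the risk determination, shown here only as an assumed sketch, is a time-to-collision test on the closing speed derived from the analyzed distance, velocity, and trajectory; the formulation and the 2-second threshold are assumptions.

```python
def is_high_risk(distance_m: float, closing_speed_mps: float,
                 ttc_threshold_s: float = 2.0) -> bool:
    """Flag the object if the time-to-collision along the projected
    trajectory falls below a threshold."""
    if closing_speed_mps <= 0:            # object is not approaching
        return False
    return distance_m / closing_speed_mps < ttc_threshold_s
```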
In one embodiment, the system may include a panoramic surround display between the front windshield and the dashboard for displaying the view from the output image processor. In another embodiment, the system may be coupled to a head-up display that displays the view from the output image processor.
The above and other aspects, objects and advantages may best be understood from the following detailed discussion of the embodiments.
4. Detailed Description
Various embodiments of the method and system for presenting a panoramic surround view on a display in a vehicle will be described hereinafter with reference to the accompanying drawings. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. Although the description is made mainly for the case where the method and system present a panoramic surround view on a display in a vehicle, any methods, devices, and materials similar or equivalent to those described can be used in the practice or testing of the embodiments. All publications mentioned are incorporated by reference for the purpose of describing and disclosing, for example, the designs and methodologies that are described in the publications and that might be used in connection with the presently described embodiments. The publications listed or discussed above, below, and throughout the text are provided solely for their disclosure prior to the filing date of the present disclosure. Nothing herein is to be construed as an admission that the inventors are not entitled to antedate such disclosure by virtue of prior publications.
In general, various embodiments of the present disclosure are related to a method and system for presenting a panoramic surround view on a display in a vehicle. Furthermore, the embodiments of the present disclosure are related to a method and system for presenting a panoramic surround view on a display in a vehicle such that a continuous surround display provides substantially maximum visibility with a natural and prioritized view that minimizes blind spots.
In addition, a feature detection and matching module 202 conducts feature detection for each image after the synchronization. Feature detection is a technique for identifying a kind of feature at a specific location in an image, such as an interest point or an edge. Invariant features are preferred because they are robust to the scale, translational, and rotational variations typical for vehicle cameras. Standard feature detectors include Oriented FAST and Rotated BRIEF (ORB), Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), etc.
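For illustration, and assuming OpenCV as the implementation library (the disclosure names the detectors but no library), ORB detection on one camera frame might look like the following; the file name is hypothetical.

```python
import cv2

# Load one synchronized camera frame (file name is hypothetical).
frame = cv2.imread("camera_left.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# ORB detects oriented FAST keypoints and computes rotated BRIEF
# descriptors, which are robust to rotation and, via the image pyramid,
# to scale changes.
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(gray, None)
```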
After feature detection, feature matching is executed. Any feature matching algorithm for finding approximate nearest neighbors can be employed for this process. Additionally, after feature detection, the detected features may also be provided to the depth analysis/optical flow processing module 201 in order to process the optical flow sparsely using the detected invariant features, which increases the efficiency of the optical flow calculation.
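As one possible instance of an approximate-nearest-neighbor matcher, again assuming OpenCV, FLANN with an LSH index suits ORB's binary descriptors; Lowe's ratio test, added here as a common refinement rather than a requirement of the disclosure, discards ambiguous associations.

```python
import cv2

# FLANN configured with an LSH index performs approximate nearest-neighbor
# search over ORB's binary descriptors (algorithm=6 selects FLANN_INDEX_LSH).
index_params = dict(algorithm=6, table_number=6, key_size=12,
                    multi_probe_level=1)
flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))

# descriptors_left / descriptors_right come from the detection step above.
matches = flann.knnMatch(descriptors_left, descriptors_right, k=2)

# Lowe's ratio test keeps only unambiguous feature associations.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
```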
After feature matching, the matched features can be used for the estimation of an image homography in a transform estimation process conducted by a transform estimation module 203. For example, the transform between images from the plurality of cameras, namely a homography, can be estimated. In one embodiment, random sample consensus (RANSAC) may be employed; however, any algorithm that provides a homography estimate is sufficient for this purpose.
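Continuing the OpenCV-based sketch, the matched features can feed cv2.findHomography with the RANSAC flag, which estimates the 3x3 homography while rejecting outlier matches.

```python
import cv2
import numpy as np

# keypoints_left / keypoints_right and the 'good' matches come from the
# detection and matching steps sketched above.
src = np.float32([keypoints_left[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([keypoints_right[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC discards outlier associations while fitting the homography H that
# maps left-image coordinates into the right image's coordinate frame.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```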
The results of the transform estimation process are received as input at a stitch region identification module 204. The stitch region identification module 204 determines a valid region of stitching within the original images by using the estimated transform from the transform estimation module 203 and the feature associations of detected features from the feature detection and matching module 202. Using the feature associations, or matches, from the feature detection and matching module 202, similar or substantially the same features across a plurality of images of the same and possibly neighboring timestamps are identified based on attributes of the features. Based on the depth information, object types, the relative position of each object in the original images, and priority information are assigned to each feature.
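One simple way to bound such a valid stitching region, sketched here under the assumption of roughly side-by-side cameras, is to project the corners of one image through the estimated homography and intersect the result with the neighboring image.

```python
import cv2
import numpy as np

def overlap_region(h_left, w_left, H, w_right):
    """Project the left image's corners through H and return the horizontal
    band [x0, x1) where the two views overlap in the right image's frame."""
    corners = np.float32([[0, 0], [w_left, 0], [w_left, h_left], [0, h_left]])
    projected = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H)
    projected = projected.reshape(-1, 2)
    x0 = max(0.0, projected[:, 0].min())             # left edge of overlap
    x1 = min(float(w_right), projected[:, 0].max())  # right edge of overlap
    return int(x0), int(x1)
```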
Once the stitching regions are defined and identified, a seam estimation process is executed in order to seek substantially the best points or lines inside the stitching regions at which stitching is to be performed. A seam estimation module 205 receives output from the depth analysis module 201 and output from the stitch region identification module 204. The seam estimation module 205 computes an optimal stitching line, namely the seam, that preserves a maximum number of priority features.
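A common way to compute such a line, offered here as an assumed sketch rather than the disclosure's exact optimization, is a dynamic-programming search for the minimum-cost vertical seam, where pixels covering priority features carry high cost so the seam routes around them.

```python
import numpy as np

def estimate_seam(cost):
    """Minimum-cost vertical seam through an HxW cost map.

    'cost' is assumed to be high wherever a pixel covers a priority feature
    (weighted by the depth-derived priority) and low elsewhere.
    """
    h, w = cost.shape
    acc = cost.astype(np.float64)          # accumulated cost, row by row
    for y in range(1, h):
        left = np.roll(acc[y - 1], 1)      # cost of upper-left neighbor
        left[0] = np.inf
        right = np.roll(acc[y - 1], -1)    # cost of upper-right neighbor
        right[-1] = np.inf
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):         # backtrack from bottom to top
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam                            # seam[y] = x-coordinate at row y
```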
Once the optimal stitching line is determined by the seam estimation module 205, the images output by the plurality of cameras 200 can be stitched by an image stitching module 206 using the determined optimal stitching line. The image stitching process can be embodied as the image stitching module 206, which executes a standard image stitching pipeline method of image alignment and stitching, such as blending based on the determined stitching line. As the image stitching process is conducted, a panoramic surround view 207 is generated. For example, after prioritization with the strategies described above, the synthesized image preserves the prioritized features in the panoramic surround view.
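A deliberately simplified stand-in for this stitching step, assuming the OpenCV-based sketch above and collapsing the per-row seam to a single vertical column, warps one image with the homography and feather-blends across a narrow band around the seam.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, H, seam_x, blend_px=32):
    """Warp the left image into the right image's coordinates with the
    estimated homography H, then feather-blend across a band of width
    2*blend_px around the vertical seam at column seam_x."""
    h, w = img_right.shape[:2]
    warped = cv2.warpPerspective(img_left, H, (w, h))
    x = np.arange(w, dtype=np.float32)
    # alpha ramps 0 -> 1 across the blend band: 0 keeps the warped left
    # image, 1 keeps the right image.
    alpha = np.clip((x - (seam_x - blend_px)) / (2 * blend_px), 0, 1)
    alpha = alpha[None, :, None]           # broadcast over rows and channels
    return (warped * (1 - alpha) + img_right * alpha).astype(np.uint8)
```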
In order to provide a more driver-friendly panoramic surround view, some driver-assistance functionality can be implemented over the panoramic surround view 207. In one embodiment, it is possible to identify an object of interest in the panoramic surround view and to alert the driver to the object of concern. An object detection module 208 takes the panoramic surround view 207 as input for further processing. In the object detection process, Haar-like features or histogram of oriented gradients (HOG) features can be used as the feature representation, and object classification can be performed with classifiers trained by algorithms such as AdaBoost or a support vector machine (SVM). Using the results of object detection, a warning analysis module 209 analyzes the distance to the object of interest and the current velocity, acceleration, and projected trajectory of the vehicle. Based on the analysis, the warning analysis module 209 determines whether the vehicle is in danger of an accident, for example by recognizing the object of interest as an obstacle close to the vehicle, approaching the vehicle, or located in a blind spot of the vehicle.
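As a minimal illustration of the HOG-plus-SVM combination, OpenCV ships a HOG descriptor with a pretrained pedestrian SVM; a deployed system would instead train classifiers for all relevant object types, so this is an assumed stand-in.

```python
import cv2

# HOG feature representation with OpenCV's bundled pedestrian SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# 'panorama' stands for the panoramic surround view 207.
boxes, weights = hog.detectMultiScale(panorama, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    # Draw a red box around each detection as a simple highlight.
    cv2.rectangle(panorama, (x, y), (x + w, y + h), (0, 0, 255), 2)
```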
If it is determined that the object of interest poses a high risk of a potential accident, the object may be highlighted on the panoramic surround view 207. An output image processor 210 provides post-processing of the images in order to improve their quality and to display the warning system output in a human readable format. Standard image post-processing techniques, such as blurring and smoothing as well as histogram equalization, may be employed to improve the image quality. The image improvements, the warning system output, and the highlighted object of interest can all be combined into an integrated view 211 as the system's final output to the panoramic surround display and presented to the driver.
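For instance, still assuming OpenCV, the post-processing could combine contrast-limited histogram equalization on the luma channel with a light Gaussian smoothing pass; the parameter values are illustrative.

```python
import cv2

def post_process(bgr):
    """Equalize contrast on the luma channel (CLAHE), then lightly smooth."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])   # equalize luma only
    out = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return cv2.GaussianBlur(out, (3, 3), 0)        # light smoothing pass
```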
Although this invention has been disclosed in the context of certain preferred embodiments and examples, it will be understood by those skilled in the art that the invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. In addition, other modifications within the scope of this invention will be readily apparent to those of skill in the art based on this disclosure. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the invention. It should be understood that various features and aspects of the disclosed embodiments can be combined with, or substituted for, one another in order to form varying modes of the disclosed invention. Thus, it is intended that the scope of the present invention herein disclosed should not be limited by the particular disclosed embodiments described above.