The present invention relates to an apparatus and method for displaying information. Particularly, but not exclusively, the present invention relates to a display method for use in a vehicle and a display apparatus for use in a vehicle. Aspects of the invention relate to a display method, a computer program product, a display apparatus and a vehicle.
Cameras mounted on or in a vehicle, arranged to view an area external to the vehicle, often have their view of the surroundings obscured by dust and dirt covering the lens of the camera or the window behind which the camera is located. This phenomenon is understandable, especially to a user who is driving, or has recently driven, the vehicle off-road. Nonetheless, when the driver can no longer use the camera view because dirt is obscuring the camera, they are forced either to operate the vehicle without the useful features the cameras provide, or to get out of the car to clean the lens. If this occurs frequently it will quickly become a source of irritation to the user.
One example of a vehicle-mounted camera is a reversing camera. Typically, a single camera mounted on the rear of the vehicle is used to enhance the view of the area to the rear of the vehicle for the driver whenever they perform a reversing manoeuvre. Due to the location of reversing cameras, dirt and water are commonly splashed or blown onto the lens during normal driving, necessitating regular cleaning of the lens. Another example of vehicle camera systems comes in the form of surround view camera systems. These systems take image data from multiple cameras mounted around the vehicle and stitch together the views of these cameras to provide a substantially 360 degree view around the vehicle. These surround view camera systems are particularly useful for helping a driver to park a large vehicle in a tight parking space or when driving a vehicle off-road on difficult terrain. In such situations it would be better if the driver could make the desired manoeuvre in the vehicle with the aid of the cameras rather than forcing them to stop the vehicle and clean the camera lenses before the manoeuvre can be completed.
It is known to tackle the problem of obscured cameras through the use of cleaning systems for the camera lenses or the windows behind which the cameras are located. Such cleaning systems include wiping the lens or window with a rubber blade or squirting a washing fluid or air whenever the view is deemed obstructed by dirt. However, such systems can be particularly difficult to package in a vehicle and add to the overall maintenance cost of the vehicle. An alternative known approach to tackle this problem is the use of hydrophobic coatings applied to camera lenses or windows to reduce the build-up of dirt and dust. However, such coatings may degrade over the lifetime of a vehicle and increase the maintenance cost if they are to be reapplied.
It is an object of the present invention to overcome at least some of the aforementioned problems and enhance the benefits that vehicle mounted camera systems can provide to the driver.
It is an object of embodiments of the invention to at least mitigate one or more of the problems of the prior art including the aforementioned problems. It is an object of certain embodiments of the invention to provide a method for providing temporary compensation for a partially obscured camera lens on a vehicle. According to certain embodiments of the invention, this object is achieved by detecting dirt or anything else obscuring part of a camera field of view through the use of software and then filling in the obscured portion from historic image data or image data from another camera.
Aspects and embodiments of the invention provide a display method, a computer program product, a display apparatus and a vehicle as claimed in the appended claims.
According to an aspect of the invention, there is provided a display method for use in a vehicle, the method comprising: obtaining a first image showing a region external to the vehicle from a first image capture device; detecting an obscured region of the first image for which a portion of the field of view of the first image is obscured at least in part; identifying image data in a second image corresponding to the obscured region;
generating a composite image from the first image and the identified image data; and displaying at least part of the composite image.
Detecting an obscured region may comprise comparing the first image to a further image obtained from the same image capture device, the first and further images being captured at different points in time.
Detecting an obscured region may further comprise detecting corresponding portions of the first image and the further image which are the same while other corresponding portions of the first image and the further image differ.
Detecting an obscured region may further comprise defining a boundary encompassing said corresponding portions of the first image and the further image which are the same.
The second image may be obtained from the first image capture device or a second image capture device having a different field of view relative to the first image capture device, the first and second images being captured at different points in time.
The second image may be obtained from a second image capture device at the time of capture of the first image, the fields of view of the first and second image capture devices overlapping and the region of overlap at least partially encompassing the obscured region.
The first or second image capture devices may be mounted upon or within the vehicle to capture images of the environment external to the vehicle.
The display method may further comprise: determining positions of the vehicle at the time the first and second images are captured; and storing an indication of the positions of the vehicle.
Generating a composite image may comprise matching portions of the first image and the second image.
Matching portions of the first image and the second image may comprise matching overlapping portions of the first image and the second image.
Matching portions of the first image and the second image may comprise performing pattern matching to identify features present in both the first image and the second image such that those features are correlated in the composite image.
The display method may further comprise determining a pattern recognition region within the first image including the obscured region; and determining a second image including image data for the environment within the pattern recognition region.
Determining a pattern recognition region may comprise determining coordinates for the pattern recognition region according to a current position of the vehicle.
Determining a pattern recognition region may further include receiving a signal indicating an orientation of the vehicle and adjusting the pattern recognition region coordinates according to the vehicle orientation.
The display method may further comprise: obtaining at least one image property for each of the first and second images; calculating an image correction factor as a function of the at least one image property for each of the first and second images; and adjusting the appearance of the first image or the second image according to the calculated image correction factor.
The at least one image property may be indicative of a characteristic of the image, a setting of an image capture device used to capture the image or an environmental factor at the time the image was captured.
Generating a composite image may further comprise indicating the portion of the composite image corresponding to the obscured region.
Generating a composite image may further comprise using identified image data from the second image and at least one third image within the obscured region.
The display method may further comprise storing at least a predetermined number of images obtained from the first image capture device at different points in time.
According to a further aspect of the invention, there is provided a computer program product storing computer program code which is arranged when executed to implement the above method.
According to a further aspect of the invention, there is provided a display apparatus for use with a vehicle, comprising: a first image capture device arranged to obtain a first image showing a region external to the vehicle; a display means arranged to display a composite image; and a processing means arranged to: detect an obscured region of the first image for which a portion of the field of view of the first image capture device is obscured at least in part; identify image data in a second image corresponding to the obscured region; generate a composite image from the first image and the identified image data; and cause the display means to display at least part of the composite image.
A display apparatus as described above, wherein the image capture device comprises a camera or other form of device arranged to generate and output still images or moving images. The display means may comprise a display screen, for instance an LCD display screen suitable for installation in a vehicle. Alternatively, the display may comprise a projector for forming a projected image. The processing means may comprise a controller or processor, suitably the vehicle ECU.
The processing means may be further arranged to implement the above method.
According to a further aspect of the invention, there is provided a vehicle comprising the above display apparatus.
According to a further aspect of the invention, there is provided a display method, a display apparatus or a vehicle substantially as herein described with reference to the accompanying figures.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
One or more embodiments of the invention will now be described by way of example only, with reference to the accompanying figures, in which:
It is becoming commonplace for vehicles to be provided with one or more video cameras to provide live video images (or still images) of the environment surrounding a vehicle. The vehicle may be a land-going vehicle, such as a wheeled vehicle including a display means to display captured images. The display means may comprise a head-up display means for displaying information in a head-up manner to at least the driver of the vehicle, or any other form of display means such as a display screen or a projection means. The projection means may be arranged to project an image onto an interior portion of the vehicle, such as onto a dashboard, door interior, or other interior components of the vehicle. In the following discussion, where reference is made to the view of the driver or from the driver's position, this should be considered to encompass the view of a passenger, though clearly for manually driven vehicles it is the driver's view that is of paramount importance. Such images may then be displayed for the benefit of the driver, for instance on a dashboard mounted display screen. In particular, it is well-known to provide at least one camera towards the rear of the vehicle directed generally behind the vehicle and downwards to provide live video images to assist a driver who is reversing (it being the case that the driver's natural view of the environment immediately behind the vehicle is particularly limited). It is further known to provide multiple such camera systems to provide live imagery of the environment surrounding the vehicle on multiple sides, for instance displayed on a dashboard mounted display screen. For instance, a driver may selectively display different camera views in order to ascertain the locations of objects close to each side of the vehicle.
Such cameras may be positioned externally and mounted upon the exterior of the vehicle, or internally viewing through the windscreen or other vehicle glass in order to capture images, their lenses directed outwards and downwards. Such cameras may be provided at varying heights, for instance generally at roof level, driver's eye level or some suitable lower location to avoid vehicle bodywork obscuring their view of the environment immediately adjacent to the vehicle.
As shown in
Advantageously, image data from multiple vehicle mounted cameras may be combined to form a composite image, expanding the view available to the driver. This may be used to address the problem that it can be hard for a driver to ascertain the position of the vehicle relative to objects underneath the vehicle. There will now be described a method for enabling a driver to view the terrain underneath a vehicle through the use of historic (that is, time delayed) video footage obtained from a vehicle camera system, for instance the vehicle camera system illustrated in
It is known to take video images (or still images) derived from multiple vehicle mounted cameras and form a composite image illustrating the environment surrounding the vehicle. Referring to
The composite image may be displayed to the user according to any suitable display means, for instance the Head Up Display, projection systems or dashboard mounted display systems described above. While it may be desirable to display at least a portion of the 3D composite image viewed for instance from an internal position in a selected viewing direction, optionally a 2-Dimensional (2D) representation of a portion of the 3D composite image may be displayed. Alternatively, it may be that a composite 3D image is never formed—the video images derived from the cameras being mapped only to a 2D plan view of the environment surrounding the vehicle 200. This may be a side view extending from the vehicle 200, or a plan view such as is shown in
In addition to the cameras being used to provide a composite live image of the environment surrounding the vehicle 300, historic images may be incorporated into the composite image to provide imagery representing the terrain under the vehicle—that is, the terrain within the boundary of the vehicle 300. By historic images, it is meant images that were captured previously by the vehicle camera system, for instance images of the ground in front of or behind the vehicle 300; the vehicle subsequently having driven over that portion of the ground. The historic images may be still images or video images or frames from video images. Such historic images may be used to fill the blank region 302 in
The composite image may be formed by combining the live and historic video images, and in particular by performing pattern matching to fit the historic images to the live images thereby filling the blind spot in the composite image comprising the area under the vehicle. The surround camera system comprises at least one camera and a buffer arranged to buffer images as the vehicle progresses along a path. The vehicle path may be determined by any suitable means, including but not limited to a satellite positioning system such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), wheel ticks (tracking rotation of the wheels, combined with knowledge of the wheel circumference) and image processing to determine movement according to shifting of images between frames. At locations where the blind spot from the live images overlaps with buffered images, the area of the blind spot copied from delayed video images is pattern matched through image processing to be combined with the live camera images forming the remainder of the composite image.
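By way of a non-limiting illustration, the buffering of images tagged with the vehicle position at capture time, and the subsequent retrieval of a buffered frame covering a given ground location, might be sketched as follows. All class and parameter names here are hypothetical and do not form part of the claimed subject matter; distances are assumed to be in metres on a 2D ground plane.

```python
from collections import deque


class FrameBuffer:
    """Illustrative sketch: buffers captured frames tagged with the vehicle
    position at capture time, so that historic frames covering the current
    blind spot can be retrieved as the vehicle progresses along its path."""

    def __init__(self, maxlen=100):
        # Oldest frames are evicted automatically once maxlen is reached.
        self._frames = deque(maxlen=maxlen)  # (position, frame) pairs

    def push(self, position, frame):
        self._frames.append((position, frame))

    def nearest(self, query, max_distance):
        """Return the buffered frame captured closest to `query` (x, y),
        or None if no buffered frame lies within `max_distance`."""
        best, best_d = None, max_distance
        for (x, y), frame in self._frames:
            d = ((x - query[0]) ** 2 + (y - query[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = frame, d
        return best
```

In practice the lookup would use the determined vehicle path (GPS, IMU, wheel ticks or image processing, as above) rather than a simple nearest-position search, but the principle of position-indexed buffering is the same.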
Referring now to
At step 402 the live frames are stitched together to form the composite 3D image (for instance, the image “bowl” described above in connection with
To constrain the image storage requirements, only video frames from cameras facing generally forwards (or forwards and backwards) may be stored as it is only necessary to save images of the ground in front of the vehicle (or in front and behind) that the vehicle may subsequently drive over in order to supply historic images for inserting into the live blind spot area. To further reduce the storage requirements it may be that not the whole of every image frame is stored. For a sufficiently fast stored frame rate (or slow driving speed) there may be considerable overlap between consecutive frames (or intermittent frames determined for storage if only every nth frame is to be stored) and so only an image portion differing from one frame for storage to the next may be stored, together with sufficient information to combine that portion with the preceding frame. Such an image portion may be referred to as a sliver or image sliver. It will be appreciated that other than an initially stored frame, every stored frame may require only a sliver to be stored. It may be desirable to periodically store a whole frame image to mitigate the risk of processing errors preventing image frames from being recreated from stored image slivers. This identification of areas of overlap between images may be performed by suitable known image processing techniques that may include pattern matching—that is, matching image portions common to a pair of frames to be stored. For instance, pattern matching may use known image processing algorithms for detecting edge features in images, which may therefore suitably identify the outline of objects in images, those outlines being identified in a pair of images to determine the degree of image shift between the pair due to vehicle movement.
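The sliver storage described above might be sketched as follows, under the simplifying assumption that vehicle motion shifts the camera view purely vertically by a known number of pixel rows (in practice the shift would be determined by the pattern matching described above, and would not be axis-aligned). The function names are illustrative only.

```python
import numpy as np


def sliver_for_storage(prev_frame, new_frame, row_shift):
    """Illustrative sketch: if vehicle motion shifted the view down by
    `row_shift` rows between frames, only the top `row_shift` rows of the
    new frame contain unseen ground and need storing (the "sliver")."""
    if row_shift >= new_frame.shape[0]:
        return new_frame          # no overlap: store the whole frame
    return new_frame[:row_shift]  # the strip not present in prev_frame


def rebuild(prev_frame, sliver):
    """Recreate the new frame from the preceding frame plus its sliver."""
    keep = prev_frame.shape[0] - sliver.shape[0]
    return np.vstack([sliver, prev_frame[:keep]])
```

As noted above, an occasional whole-frame keyframe would still be stored so that a corrupted sliver does not prevent all subsequent frames from being recreated.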
Each stored frame, or stored partial frame (or image sliver) is stored in combination with vehicle position information. Therefore, in parallel to the capturing of live images at step 400 and the live image stitching at step 402, at step 404 vehicle position information is received. The vehicle position information is used to determine the vehicle location at step 406. The vehicle position may be expressed as a coordinate, for instance a Cartesian coordinate giving X, Y and Z positions. The vehicle position may be absolute or may be relative to a predetermined point. The vehicle position information may be obtained from any suitable known positioning sensor, for instance GPS, IMU, knowledge of the vehicle steering position and wheel speed, wheel ticks (that is, information about wheel revolutions), vision processing or any other suitable technique. Vision processing may comprise processing images derived from the vehicle camera systems to determine the degree of overlap between captured frames, suitably processed to determine a distance moved through knowledge of the time between the capturing of each frame. This may be combined with the image processing for storing captured frames as described above, for instance pattern matching including edge detection. In some instances it may be desirable to calculate a vector indicating movement of the vehicle as well as the vehicle position, to aid in determining the historic images to be inserted into the live blind spot area, as described below.
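The wheel-tick positioning mentioned above can be illustrated by a minimal dead-reckoning sketch: the distance travelled is inferred from encoder counts and the known wheel circumference, and advanced along the current heading. All names and parameters are illustrative; a real system would fuse this estimate with GPS, IMU and vision-derived data as described above.

```python
import math


def dead_reckon(x, y, heading, ticks, ticks_per_rev,
                wheel_circumference, heading_change=0.0):
    """Hedged sketch of wheel-tick dead reckoning.

    ticks / ticks_per_rev gives wheel revolutions; multiplying by the
    wheel circumference gives distance travelled, which is applied along
    the (optionally updated) heading in radians. Returns the new pose.
    """
    distance = (ticks / ticks_per_rev) * wheel_circumference
    heading += heading_change
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)
```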
Each frame (or sliver) that is to be stored, from step 400, is stored in a frame store at step 408 along with the vehicle position obtained from step 406 at the time of image capture. That is, each frame is stored indexed by a vehicle position. The position may be an absolute position or relative to a reference datum. Furthermore, the position of an image may be given relative only to a preceding stored frame allowing the position of the vehicle in respect of each historic frame to be determined relative to a current position of the vehicle by stepping backwards through the frame store and noting the shift in vehicle position until the desired historic frame is reached. Each record in the frame store may comprise image data for that frame (or image sliver) and the vehicle position at the time the frame was captured. That is, along with the image data, metadata may be stored including the vehicle position. The viewing angle of the frame relative to the vehicle position is known from the camera position and angle upon the vehicle (which as discussed above may be fixed or moveable). Such information concerning the viewing angle, camera position etc. may also be stored in frame store 408, which is shown representing the image and coordinate information as (frame <-> co-ord). It will be appreciated that there may be significant variation in the format in which such information is stored and the techniques disclosed herein are not limited to any particular image data or metadata storage technique, nor to the particulars of the position information that is stored.
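One possible form for a frame-store record, sketched below with illustrative field names, carries the image data together with the metadata described above: vehicle position at capture time and the camera's pose on the vehicle. The (frame <-> co-ord) index is then simply a mapping from position to record.

```python
from dataclasses import dataclass


@dataclass
class FrameRecord:
    """Illustrative sketch of one frame-store entry; field names are
    hypothetical, and the stored format may vary as noted above."""
    image: bytes             # encoded frame or image sliver
    position: tuple          # vehicle (x, y, z) at capture time
    camera_id: str           # which vehicle camera captured the frame
    camera_angle_deg: float  # viewing angle relative to the vehicle
    is_sliver: bool = False  # partial frame vs whole keyframe


def index_by_position(records):
    """Build the (frame <-> co-ord) index: position key -> record."""
    return {r.position: r for r in records}
```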
At step 410 a pattern recognition area is determined. The pattern recognition area comprises the area under the vehicle that cannot be seen in the composite image formed solely from stitched live images. Referring back to
However, the above described fitting of previous stored image data into a live stitched composite image is predicated on exact knowledge of the vehicle position both currently and when the image data is stored. It may be the case that it is not possible to determine the vehicle position to a sufficiently high degree of accuracy.
As an example, with reference to
It will be appreciated that where the degree of error in the vehicle position differs between the time at which an image is stored and the time at which it is fitted into a live composite image this may cause undesirable misalignment of the live and historic images. This may cause a driver to lose confidence in the accuracy of the representation of the ground under the vehicle. Worse still, if the misalignment is significant then there may be a risk of damage to the vehicle due to a driver being misinformed about the location of objects under the vehicle.
Due to the risk of misalignment, at step 412 pattern matching is performed within the pattern recognition area to match regions of live and stored images. As noted above in connection with storing image frames, such pattern matching may include suitable edge detection algorithms. The determined pattern recognition region at step 410 is used to access stored images from the frame store 408. Specifically, historic images containing image data for ground within the pattern recognition area are retrieved. The pattern recognition area may comprise the expected vehicle blind spot and a suitable amount of overlap on at least one side to account for misalignment. Step 412 further takes as an input the live stitched composite image from step 402. The pattern recognition area may encompass portions of the live composite view adjacent to the blind spot 302. Pattern matching is performed to find overlapping portions of the live and historic images, such that close alignment between the two can be determined and used to select appropriate portions of the historic images to fill the blind spot. It will be appreciated that the amount of overlap between the live and historic images may be selected to allow for a predetermined degree of error between the determined vehicle position and its actual position. Additionally, to take account of possible changes in vehicle pitch and roll between a current position and a historic position as a vehicle traverses undulating terrain, the determination of the pattern recognition region may take account of information from sensor data indicating the vehicle pitch and roll. This may affect the degree of overlap of the pattern recognition area with the live images for one or more sides of the vehicle.
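The alignment search at step 412 can be illustrated with a deliberately simple sum-of-squared-differences matcher, standing in for the edge-based pattern matching described above: a strip from the live composite image is slid across a candidate historic image within a bounded search range, and the offset with the lowest difference indicates how the historic image should be positioned when filling the blind spot. The names and the one-dimensional (column-only) search are illustrative simplifications.

```python
import numpy as np


def best_offset(live_strip, historic, search_range):
    """Illustrative sketch: find the column offset within
    [0, search_range] at which `live_strip` best matches `historic`,
    scored by sum of squared pixel differences (lower is better)."""
    h, w = live_strip.shape
    best, best_score = 0, float("inf")
    for off in range(0, search_range + 1):
        window = historic[:h, off:off + w]
        if window.shape != live_strip.shape:
            break  # search window has run off the historic image
        score = float(np.sum((window - live_strip) ** 2))
        if score < best_score:
            best, best_score = off, score
    return best
```

The size of `search_range` corresponds to the predetermined degree of position error tolerated, as discussed above; a real implementation would search in two dimensions and typically match edge features rather than raw intensities.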
It may not always be necessary to determine a pattern recognition area, rather the pattern matching may comprise a more exhaustive search through historic images (or historic images with an approximate time delay relative to the current images) relative to the whole composite live image. However, by constraining the region within the live composite image within which pattern matching to historic images is to be performed, and constraining the volume of historic images to be matched, the computational complexity of the task and the time taken may be reduced.
At step 414 selected portions of one or more historic images or slivers are inserted into the blind spot in the composite live images to form a composite image encompassing both live and historic images.
Furthermore, in addition to displaying a representation of the ground under the vehicle, a representation of the vehicle may be added to the output composite image. For instance, a translucent vehicle image or an outline of the vehicle may be added. This may assist a driver in recognising the position of the vehicle and the portion of the image representing the ground under the vehicle.
Where the composite image is to be displayed overlying portions of the vehicle to give the impression of the vehicle being transparent or translucent (for instance using a HUD or a projection means as described above), the generation of a composite image may also require that a viewing direction of a driver of the vehicle is determined. For instance, a camera is arranged to provide image data of the driver from which the viewing direction of the driver is determined. The viewing direction may be determined from an eye position of the driver, performed in parallel to the other steps of the method. It will be appreciated that where the composite image or a portion of the composite image is to be presented on a display in the vehicle which is not intended to show the vehicle being see-through, there is no need to determine the driver's viewing direction.
The combined composite image is output at step 416. As discussed above, the composite image output may be upon any suitable image display device, such as HUD, dashboard mounted display screen or a separate display device carried by the driver. Alternatively, portions of the composite image may be projected onto portions of the interior of the vehicle to give the impression of the vehicle being transparent or translucent. The techniques disclosed herein are not limited to any particular type of display technology.
Referring now to
As described above, the formation of a composite image comprises the stitching together of live and historic images derived from a vehicle camera system. This may be to provide a composite image having a wider field of view than is achievable using live images alone, for instance including images for portions or areas of the environment underneath the vehicle or otherwise not visible within live images due to that area being obscured by the vehicle itself. As described above, the combination of live and historic images may require the accurate tracking of the position of the vehicle at the time each image is recorded (including the live images). Alternatively, matching (for instance, pattern matching) portions of live and historic images can be used to determine which parts of historic images to stitch into a composite image.
Optionally, both techniques may be used as described in
To incorporate historic images into a composite image including live images as described above may involve a certain degree of image processing to scale and stretch historic images and/or live images to smoothly fit together. However, the composite image may still appear disjointed. For an enhanced user experience, it may be desirable that the composite image has a uniform appearance such that it appears to have been captured by a single camera, or at least that jarring differences between live and historic images are minimised. Such discontinuities may include colour and exposure mismatches. If uncorrected, composite image discontinuities may cause users to lose confidence in the accuracy of the information presented in a composite image, or to assume that there is a malfunction. It may be desirable that composite images appear to have been captured from a single camera, to give the appearance of the vehicle, or a portion of the vehicle, being transparent or translucent.
Such mismatches may result from changes in ambient lighting conditions between the time at which the live and historic images are captured (particularly when a vehicle is moving slowly) and changes in captured image properties arising from different camera positions (where multiple cameras are used and live or historic images are obtained from different camera positions and stitched adjacent to one another) or apparently different camera positions following stretching and scaling of historic images. Furthermore, the problem of differences in image properties between live and historic images may be exacerbated by the wide field of view cameras used within vehicle camera systems. Multiple light sources around the vehicle, or changes in light sources between live and historic images, may cause further variation. For instance, at night time, portions of a live image may be illuminated by headlights. Historic images will also include areas illuminated by headlights at the time at which the images were captured. However, for the current position of the vehicle that portion of the environment (for instance under the vehicle) would no longer be illuminated by the vehicle's headlights. Advantageously, ensuring that image properties for historic image portions match the image properties for adjacent live images may serve to avoid the false impression that areas under the vehicle in a composite image are being directly illuminated by the headlights.
Composite image discontinuities between live and historic images may be mitigated by adjusting the image properties of the historic images, live images or both to ensure a higher degree of consistency. This may provide an apparently seamless composite image, or at least a composite image where such seams are less apparent. In some cases it may be preferable to adjust the image properties of only historic images, or adjust the image properties of historic images to a greater extent than those of live images. This may be because by their very nature live images may include or be similar to regions directly observable by the driver and it may be desirable to avoid the composite image appearing dissimilar to the directly observed environment.
When images (or image slivers) are stored within a frame buffer or frame store, as described above in connection with step 408 of
When storing image properties an image processor may buffer a small number of frames, for instance four frames, and calculate average statistics to be stored in association with each frame. This may be performed for a rolling frame buffer. Additionally, or alternatively, this technique for averaging image properties across a small group of frames may be applied to live images to determine the image properties of live images for comparison with those of historic images. Advantageously, such averaging techniques mitigate the possible negative effects of a single historic or live image including radically different image properties compared with preceding or subsequent images. Where the live images are averaged in this way each historic image or groups of historic images may be processed to match the current or rolling average image properties of live images. Specifically, taking the example of the image property under consideration being the image white balance (or more generally, colour balance), for each of a group of four live images the white balance WB may be calculated and averaged in accordance with equation (1) to give the average white balance AVGWB. Appropriate statistical techniques for the calculation of an image white balance, or other colour balance property, will be well known or available to the skilled person.
(WB1+WB2+WB3+WB4)/4=AVGWB (1)
From knowledge of the average white balance for a group of live images, the white balance of a historic image (HWB) may be compared with it to determine a difference in white balance (XWB) in accordance with equation (2). It will be appreciated that XWB may be positive or negative.
HWB−AVGWB=XWB (2)
Following the determination of the difference in white balance, the white balance of a historic image may be appropriately adjusted to conform to the live image average white balance in accordance with equation (3) to provide an adapted historic image white balance HWB′. Appropriate techniques for adjusting the colour balance of an image will be known or available to the skilled person.
HWB−XWB=HWB′ (3)
In equations (1) to (3) the respective WB or HWB property may be for a whole image or a portion of an image. In equations (2) and (3) above it will be appreciated that the historic white balance may also comprise the average white balance for a group of historic images. In both cases the group size may differ from four. The technique described above in connection with equations (1) to (3) may be equally applied to any other image property, for instance image exposure. Image property information for historic images may be stored per image (or image sliver) or per group of images. Where average image properties are stored this may be separately performed for images from each camera or averaged across two or more cameras. The stored image properties may be averaged across fixed groups of images or taken from a rolling image buffer and stored individually for each historic image.
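The averaging and adjustment of equations (1) to (3) can be sketched as follows. Representing a white balance as a single scalar per image is a simplification for illustration, and the function names are not from the source.

```python
def average_white_balance(live_wb_values):
    """Equation (1): average the white balance over a group of live images."""
    return sum(live_wb_values) / len(live_wb_values)

def white_balance_difference(historic_wb, avg_wb):
    """Equation (2): difference between the historic and average live white balance."""
    return historic_wb - avg_wb

def adjust_historic_white_balance(historic_wb, avg_wb):
    """Equation (3): adapt the historic white balance to conform to the live average."""
    x_wb = white_balance_difference(historic_wb, avg_wb)
    return historic_wb - x_wb  # HWB' = HWB - XWB

# Example: four live frames (colour temperatures) and one historic frame
avg = average_white_balance([5000.0, 5100.0, 4900.0, 5000.0])  # AVGWB
adjusted = adjust_historic_white_balance(6500.0, avg)           # HWB'
```

Note that, taken at the scalar level, equation (3) collapses the historic value onto the live average; in practice the computed difference XWB would drive an adjustment of the colour channels of the historic image rather than of the scalar property itself.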
As noted previously, in an alternative to equation (3) it may be that after calculating the difference in image properties between historic images and live images, that difference is applied to the live images such that the live images match the historic images by adding (for the example of white balance) the white balance difference XWB to the white balance WB for at least one live image. In some situations it may be disadvantageous to adjust the image properties of a live image. For instance, if a light source is within the field of view of a live image, adjusting the image properties of the live image to conform to a historic image could risk overexposure of the live image, resulting in the image appearing washed out. Additionally, as discussed above, it is desirable in some situations that live images appear as close as possible to how the environment surrounding the vehicle would appear if directly viewed by the driver.
The adjustment of image properties for historic or live images may be performed as part of step 412 illustrated in
As previously described, a vehicle camera system may include one or more cameras 710. Camera 710 provides a live image as indicated at 702 and also provides an output 704 for one or more image properties. The specific image properties identified in
The live image data 702 and the corresponding image property data are supplied to the Electronic Control Unit (ECU) 706, though it will be appreciated that alternatively a separate image processing system may be used. The techniques disclosed herein may be implemented in any combination of hardware or software. Specifically, live images may be passed through directly to an output composite image 708 (for display on a vehicle display, for instance in the instrument cluster, and not illustrated in
The output image 708 includes both at least one live image 721 and at least one historic image 722, appropriately adjusted for consistency with the live image. The process whereby the images are stitched together to form an output composite image 708 has been described above in connection with
Referring now to
A first part of the image adjustment of
At step 804 a dynamic range correction factor is calculated by dividing the dynamic range of the live image by the dynamic range of the historic image to provide a dynamic range correction factor 806. Dynamic range (DR) is defined as DR=Lsat/Lmin in ISO 15730, a camera dynamic range calculation method. This terminology is used by sensor and ISP suppliers to describe the scene dynamic range, where Lmin is taken as the noise floor of the sensor. Lmin will be constant for a given sensor, and exposure values and weightings are adjusted so that Lsat corresponds to the digital maximum of the image. A dynamic range correction can then be applied as a gain.
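A minimal sketch of the dynamic range correction of steps 800 to 806, assuming each image's dynamic range is available as the scalar DR=Lsat/Lmin described above; the function names and the clipping at a digital maximum of 255 are illustrative assumptions.

```python
def dynamic_range(l_sat, l_min):
    """DR = Lsat / Lmin, with Lmin taken as the sensor noise floor."""
    return l_sat / l_min

def dr_correction_factor(dr_live, dr_historic):
    """Step 804: divide the live dynamic range by the historic dynamic range."""
    return dr_live / dr_historic

def apply_dr_gain(luma_values, gain):
    """Apply the correction factor as a gain, clipped to a digital maximum."""
    return [min(255.0, y * gain) for y in luma_values]
```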
Similarly, the method receives as inputs a measure of the gamma for a historic image (step 808) and a measure of the gamma for a live image (step 810). Gamma (also called a tone mapping curve) is provided by ISP or sensor suppliers as a curve that is adjustable against dynamic range. As the image is adapted to a new scene with a new dynamic range, it is also necessary to compensate for gamma. Gamma is adaptive to the scene, but it is acquired as part of an adaptive setting of a camera. It is not necessary to measure this from the camera; rather, it may depend on the gamma calculation method of the ISP or sensor supplier. Gamma correction will be well known to the skilled person, for instance as described at https://en.wikipedia.org/wiki/Gamma_correction
At step 812 a gamma correction factor is calculated by subtracting the gamma of the historic image from the gamma of the live image to provide a gamma correction factor 814.
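The gamma correction of steps 808 to 814 can be sketched in the same scalar style, under the assumption of a simple power-law encoding (encoded = linear^(1/gamma)); real suppliers provide a full tone-mapping curve rather than a single exponent.

```python
def gamma_correction_factor(gamma_live, gamma_historic):
    """Step 812: subtract the historic gamma from the live gamma."""
    return gamma_live - gamma_historic

def apply_gamma_correction(encoded_luma, gamma_historic, correction):
    """Re-encode a luma value in [0, 1] from the historic to the live gamma.

    Assumes the simple power-law encoding: encoded = linear ** (1 / gamma).
    """
    gamma_live = gamma_historic + correction
    linear = encoded_luma ** gamma_historic   # decode with the historic gamma
    return linear ** (1.0 / gamma_live)       # re-encode with the live gamma
```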
Similarly, the method receives as inputs a measure of the white balance for a historic image (step 816) and a measure of the white balance for a live image (step 818). Colour temperature of the light in the scene is detected to apply white balance. The objective of white balance is to apply a correction factor to the image so that the colour temperature of the light in the scene will appear as white in the image.
At steps 820 and 822 an environmental light colour temperature is calculated separately for each of the historic image and the live image. An auto white balance algorithm requires knowledge of the colour temperature of the scene to apply corrected white balance. This may be specific for different sensor and ISP suppliers. The calculated environment light colour temperature is then used to provide inputs 824 and 826 in respect of the colour temperature of each image. At step 828 the colour temperature of each image is used to determine a YCbCr conversion matrix, as will be well understood by the skilled person.
The dynamic range correction factor 806, the gamma correction factor 814 and the YCbCr conversion matrix 828 may then be applied to the historic image (or part of the historic image) to appropriately adjust the historic image for consistency with the live image. In the method of
Step 830 comprises the application of the correction factors to a historic image after they are calculated. The colour temperature (CT) conversion matrix may comprise a 3×3 correction matrix applied to the corrected Y, Cb and Cr values for each pixel of the historic image. A look-up table may be generated by first calculating corrected outputs for all YCbCr input values in the range 0-255. This can simplify processing of the historic image by requiring only a look-up in the table instead of performing the calculation for each pixel.
At step 836, if required, the updated YCbCr data may be adjusted from a range of −128 to 128 for each pixel to a range of 0 to 255 for each pixel. In some embedded systems, Cb and Cr data is stored in the range 0-255, whereas in the YCbCr colour space the range of these components is defined as −128 to +128. The Cb and Cr data format in an embedded system is implementation specific, so step 836 may be omitted or modified for different implementations. At step 838 the updated historical image is output for further processing to be combined into a composite image, including by the pattern matching method of
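Steps 830 and 836 might be sketched as follows; the identity matrix below is an illustrative placeholder rather than a real colour temperature conversion matrix, and a production system would precompute the results in a look-up table as described above.

```python
def apply_ct_matrix(matrix, y, cb, cr):
    """Step 830: apply a 3x3 colour temperature conversion matrix to one
    pixel's (Y, Cb, Cr) values, with Cb and Cr as signed values in -128..127."""
    vec = (y, cb, cr)
    return tuple(sum(matrix[row][col] * vec[col] for col in range(3))
                 for row in range(3))

def signed_to_unsigned_chroma(cb, cr):
    """Step 836: shift Cb and Cr from the signed -128..127 range to the
    0..255 storage range used by some embedded systems."""
    return cb + 128, cr + 128

# Illustrative placeholder matrix (identity = no colour shift)
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y2, cb2, cr2 = apply_ct_matrix(identity, 120.0, -20.0, 35.0)
```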
It will be appreciated that the method of
The problem of portions of a camera field of view being obscured is described above. To summarise, if a camera lens, or a window through which a camera is directed in order to obtain its field of view, is dirty or otherwise obscured, then portions of images captured by that camera will be similarly obscured. This may make it harder for the driver to make out detail in the image of the environment surrounding the vehicle, and render the camera system less useful. To be clear, when a portion of an image or a portion of a camera field of view is described as being obscured, it is not necessary that all light has been blocked from reaching the camera in that portion. It may be that a portion of the image is missing, or it may be that a portion of an image is distorted or corrupted by a portion of the light being at least partially blocked and thus prevented from reaching the camera. Furthermore, it may be that within an obscured portion some smaller areas of the camera field of view are not obscured, but in general a sizable proportion of that portion of the camera field of view is obscured.
Referring now to
The compensation process begins with step 1000 of
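The detection step itself is not fully specified in this excerpt; one simple heuristic, sketched below as an assumption, is that image areas which remain essentially unchanged across frames captured while the vehicle is moving are likely to be dirt on the lens rather than scene content.

```python
def detect_static_pixels(frames, threshold=5):
    """Flag pixel positions whose value barely changes across a sequence of
    frames captured while the vehicle is moving; static pixels are candidate
    obscured (dirt-covered) areas. `frames` is a list of equally sized 2D grids."""
    rows, cols = len(frames[0]), len(frames[0][0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            values = [frame[r][c] for frame in frames]
            if max(values) - min(values) <= threshold:
                mask[r][c] = True
    return mask
```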
At step 1010 a boundary for at least one obscured region may be established. An obscured region may comprise a single obscured area of an image. Alternatively, an obscured region may comprise a collection of obscured areas interspersed with unobscured areas. This may be particularly relevant where dirt has splashed onto a camera lens or window, causing multiple splattered obscured areas. It may be computationally simpler to aggregate a number of small, closely spaced obscured areas into a single obscured region.
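The aggregation of several small, nearby obscured areas into a single obscured region might be sketched by greedily merging bounding boxes that lie within a small pixel gap of one another; the bounding-box representation and the gap threshold are illustrative assumptions.

```python
def boxes_close(a, b, gap):
    """True if axis-aligned boxes (x0, y0, x1, y1) lie within `gap` pixels."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return not (ax1 + gap < bx0 or bx1 + gap < ax0 or
                ay1 + gap < by0 or by1 + gap < ay0)

def merge_obscured_areas(boxes, gap=10):
    """Greedily merge nearby obscured areas into larger obscured regions."""
    regions = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                if boxes_close(regions[i], regions[j], gap):
                    a, b = regions[i], regions[j]
                    regions[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                  max(a[2], b[2]), max(a[3], b[3])]
                    del regions[j]
                    merged = True
                    break
            if merged:
                break
    return [tuple(r) for r in regions]
```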
Once the boundary of an obscured region is defined, at step 1012 a further image including corresponding image data may be identified. For a single camera system the only option is to identify an historic, buffered image in which the corresponding image data is captured in a different part of the image which is not obscured. For a multiple camera system, suitable image data may alternatively be identified in a live image from another camera which has a field of view overlapping the field of view of the first camera, where the overlapping portion at least partially encompasses the obscured region. Alternatively, an historic image captured by another camera may be identified including the corresponding image data. Clearly, where historic images are used, either derived from the first camera or another camera, this is on the basis that the position of the vehicle has changed between the time at which the historic image was captured and the time of capture of the current image, the change in position resulting in the obscured portion of the environment surrounding the vehicle having been captured in an historic image (assuming of course that the historic image is not similarly obscured in the corresponding part of the field of view). It will be appreciated that where there are multiple obscured regions in an image obtained from a first camera, different images from different sources may be identified to provide the obscured image data. Furthermore, particularly where a single obscured region is large, different parts of the same obscured region may be filled from different images.
Where historic image data is used, either from the same camera or another camera, optical flow analysis may be used to detect a movement vector, which is used to locate a historic image region that can be substituted to show the view through the obscured region (or at least a simulation of the view that would have been obtainable at the point in time at which the historic image was captured). The use of historic image data requires that camera images have already been buffered at the time at which camera soiling is detected.
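The optical flow method is left open above; as a crude stand-in, the movement vector between a historic and a live frame can be estimated by exhaustive block matching, minimising the mean absolute difference over candidate shifts. The grid representation and search range are illustrative assumptions, and a real system would use a proper optical flow algorithm.

```python
def estimate_shift(historic, live, max_shift=2):
    """Find the (dr, dc) shift of `live` relative to `historic` that best
    aligns the overlapping area, by exhaustive search (a crude stand-in for
    optical flow). Grids are equally sized lists of lists of pixel values."""
    rows, cols = len(historic), len(historic[0])
    best, best_cost = (0, 0), float("inf")
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            diffs = []
            for r in range(rows):
                for c in range(cols):
                    if 0 <= r + dr < rows and 0 <= c + dc < cols:
                        diffs.append(abs(historic[r][c] - live[r + dr][c + dc]))
            cost = sum(diffs) / len(diffs)  # mean absolute difference over overlap
            if cost < best_cost:
                best_cost, best = cost, (dr, dc)
    return best
```

The resulting vector indicates where content currently hidden behind an obscured region can be found in the historic frame.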
In a further variant, a composite image may be generated in which image data from different sources are overlaid, for instance image data from another camera and an historic image. This may be desirable in a situation in which the historic image is of a higher quality but the live image data from another camera has the benefit of revealing objects that may have moved into the field of view since the historic image data was captured.
Once a further image has been identified at step 1012 then a composite image is generated at step 1014 through a process of stitching together the image from the first camera and image data from the further image. At step 1016 at least a portion of the composite image is displayed to the driver.
It will be appreciated that the process of determining a further image able to provide image data to fill an obscured region may be implemented using the approach described above in connection with
It will be further appreciated that where an obscured image region is filled with image data from another image to form a composite image, then it is likely to be necessary to perform various image manipulations including scaling, stretching, rotating or skewing the image data from the other image so as to fit correctly into the obscured region. Furthermore, in certain embodiments of the invention it may be desirable to perform the same process of adjustment of image properties to avoid composite image discontinuities as described above in connection with
In order to exemplify the present invention, embodiments will now be described in connection firstly for a single camera system and secondly for a multiple camera system.
A single camera system may be embodied in a typical reversing camera system, where a single vehicle-mounted camera is located at or towards the rear of the vehicle, for example in a tailgate or rear bumper. Water spray and dirt cannot be avoided when driving on wet roads or off-road tracks. While the present invention does not guarantee a perfectly clear view of the area behind the vehicle, following the process described above in connection with
It will be appreciated that in one basic embodiment, the image correction system cannot start until the system determines that the vehicle is moving, as the system will require two or more frames that are known to have slightly different views. Specifically, movement of the vehicle may be separately detected, for instance using the position sensors 404 of
When the system starts to correct the image using buffered image data, the user, such as the driver of the vehicle, may be notified, for instance by way of a visible warning on the display, that the image is being corrected using historical data, and that the driver should then proceed with greater caution. Advantageously, this will also help to let the driver know when they need to clean the lens. In an example, the displayed image may be artificially colourized to highlight that image correction is in progress. Any other visual indication may be used or other types of warning, such as an audible warning.
In an extension to the simple embodiment described above, images may be buffered and then stored for a longer period of time if the vehicle is stationary or even switched off, so as to provide historical images to begin again the process of detecting obscured regions and generating a composite image when the vehicle is switched on or resumes movement. As an extension, if obscured regions are detected when the vehicle is first switched on, and if those obscured regions are significant, then this may be a suitable time to generate a notification to the driver that they need to clean the camera lens before setting off on their journey.
It is noted above that it may be that only a portion of a composite image is displayed to the driver. It is not unusual for a camera to have a wider field of view than is strictly required. The required portion of the field of view may be cropped (also referred to as the remaining portions being masked). It will be appreciated that if the camera being used has a wider field of view than is typically used, for instance for the reversing view, then the image data from historic camera images in regions that are not typically displayed may also provide the necessary image data to compensate for obscured image regions. This additional image data may be used to augment the buffered image data. In this way, the total available image data collected by the camera may effectively cover a part of the vehicle path that has yet to be displayed to the driver. As such, the time delay inherent in the replacement of obscured regions of a first image using historic image data can be reduced, as the second image from which such image data is obtained may be more recent.
While the present invention cannot make reversing cameras impervious to dirt and spray, as there will ultimately come a point where they are significantly obscured, embodiments of the invention may allow for less frequent lens cleaning.
As described previously, a vehicle may be fitted with a plurality of cameras, each with its respective view of a region exterior to the vehicle. The cameras may be positioned so as to provide views to the front, rear and both left and right sides of the vehicle. Image processing is performed on these camera views to stitch them together and display them as a substantially 360 degree view of the surroundings of the vehicle, as illustrated in
In this arrangement, the camera views typically overlap at least slightly with the neighbouring camera about the vehicle. The images captured by each camera are stretched, cropped and/or resolved so as to provide a displayed view that can be readily understood by the driver. As described previously in connection with the single camera system, the image processor may further replace regions of the 360 degree view (referred to as an “image bowl” in the description of
Using, at least in part, image data from an overlapping field of view to compensate for obscured regions when generating a composite image relies less on having buffered historical image data that is spatially separated due to vehicle movement. Rather, the overlapping parts of the available image data are recognised by a self-calibrating function of the image processor, and so known redundant parts of the image data (those parts normally cropped out) can be called upon with less burden on the image processor to augment and enhance an otherwise obscured view. That is, where the vehicle cameras are arranged to capture a wider field of view than is required for a given display mode, the image processor can use image data that is otherwise not displayed to the driver so as to correct for an obscured lens with less latency than would otherwise result from using corrections based on historical image data.
Furthermore, a multiple camera system can use buffered image data from cameras mounted on the front of the vehicle to assist the driver during a reversing manoeuvre. More generally, the use of historic image data for obscured region compensation is not restricted to neighbouring or nearby cameras. As long as at least one of the cameras has captured the portion of the external environment required to compensate for the obscured region (and the memory is sufficient to buffer enough image frames), then even if the lens of the rear-view camera is badly soiled, a composite reversing camera view can be created. In an example, the driver brings the car to a halt from driving forwards and then selects reverse. The driver wishes to move the vehicle backwards, but the rear-view camera lens is dirty. With this arrangement, suitable image data from the buffered forward-facing camera can be used to compensate for obscuration of the rear-view camera. As an extension of this, if the rear-view camera is entirely obscured, the system may provide the driver with a reverse replay of front-view camera data, time-matched and image-matched using position data and data gleaned from the views recorded from the side-view cameras, to create a view of the surroundings behind the vehicle based on historical camera views taken by the front-view camera. In this case it is desirable to notify the driver that this compensation is being applied, as it will not show any new event or obstruction that enters the area behind the vehicle during the reversing manoeuvre. As an example, such a generation of an entirely simulated rear view may be useful on very narrow, single-track lanes which rely on passing places to enable two-way traffic.
In a situation in which the vehicle detects that it is traversing such a road, for instance through GPS data, optionally corroborated by side-view cameras, buffering of camera image data may be extended, for instance for side-view cameras on the side on which the passing places are provided, and especially where the system has determined that one or more camera lenses are partially obscured.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. In particular, the method of
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
1702538.8 | Feb 2017 | GB | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/051525 | 1/23/2018 | WO | 00 |