The present invention relates to an apparatus and method for displaying information. Particularly, but not exclusively, the present invention relates to a display method for use in a vehicle and a display apparatus for use in a vehicle. Aspects of the invention relate to a display method, a computer program product, a display apparatus and a vehicle.
It is important for a driver of a vehicle to be provided with information to drive the vehicle safely and accurately. Information provided to the driver includes a view from the vehicle, in particular, ahead or forward of the vehicle, and also information concerning the vehicle such as a speed of the vehicle. In some vehicles, such as sports utility vehicles (SUVs) or 4 wheel drive vehicles, the view ahead of the vehicle is partially obscured by a bonnet or hood of the vehicle, particularly a region a short distance ahead of the vehicle. This can be exacerbated by the vehicle being on an incline, on a crest, or at a top of a descent, such as when driving off-road. Furthermore, especially when driving off-road, while obstacles and objects such as rocks may be viewed ahead of a vehicle before they are reached, once the vehicle is positioned over an object it can be difficult to ascertain where the vehicle is in relation to that object. Specifically, it can be hard to ascertain where the object is in relation to portions of the vehicle, for instance the wheels. More broadly, objects close to a vehicle, without being directly underneath the vehicle, can be hard to see from the driver's position. Correct positioning of a vehicle relative to an object can be important to avoid the risk of damage to the vehicle.
It is an object of embodiments of the invention to at least mitigate one or more of the problems of the prior art. It is an object of certain embodiments of the invention to aid a driver of a vehicle. It is an object of embodiments of the invention to improve a driver's understanding of the environment surrounding a vehicle including underneath the vehicle.
Aspects and embodiments of the invention provide a display method, a computer program product, a display apparatus and a vehicle as claimed in the appended claims.
According to an aspect of the invention, there is provided a display method for use in a vehicle, the method comprising: capturing images of a region external to the vehicle; storing at least a portion of the captured images; generating a composite image from a current image and a stored image by matching portions of the stored image and the current image; and displaying at least part of the composite image; wherein the composite image comprises a first region generated from the current image and a second region generated from the stored image, the second region not being visible within the current image.
The composite image may comprise a 3-Dimensional (3D) representation or a 2-Dimensional (2D) representation of the environment surrounding the vehicle and extending at least partially underneath the vehicle.
Displaying at least part of the composite image may comprise generating a 2D representation of at least a portion of the 3D representation and displaying the 2D representation.
The image data may comprise a series of image frames, and the method may further comprise: determining a position of the vehicle; and storing an indication of the position of the vehicle with at least a portion of an image frame.
Matching portions of the stored image and the current image may comprise matching overlapping portions of the stored image and the current image.
Matching portions of the stored image and the current image may comprise performing pattern matching to identify features present in both the stored image and the current image such that those features are correlated in the composite image.
The display method may further comprise determining a pattern recognition region including an area which is not visible within the current image; and determining a stored image including image data for the environment within the pattern recognition region.
Determining a pattern recognition region may comprise determining coordinates for the pattern recognition region according to a current position of the vehicle.
Determining a pattern recognition region may further include receiving a signal indicating an orientation of the vehicle and adjusting the pattern recognition region coordinates according to the vehicle orientation.
Storing at least a portion of the captured images may comprise: storing a first obtained image frame; determining a degree of overlap between the first obtained image frame and a second obtained image frame; and, for the second obtained image frame, storing a non-overlapping portion of the second image frame.
The composite image may include image data from multiple current images representing different views from the vehicle and at least one stored image such that the composite image comprises a contiguous view of at least a portion of the environment surrounding the vehicle and extending underneath the vehicle.
The display method may further comprise obtaining information associated with the vehicle and displaying a graphical representation of at least one component of the vehicle within the composite image.
The information associated with the vehicle may be information associated with the at least one component of the vehicle, the at least one component of the vehicle comprising at least one of: a steering system of the vehicle; one or more wheels of the vehicle; a suspension of the vehicle; an engine of the vehicle; or a further mechanical component of the vehicle.
The composite image may be displayed to overlie a portion of the vehicle to be indicative of a portion of the vehicle being at least partly transparent.
The composite image may be displayed to be translucent or to overlie an internal or external vehicle portion.
The image data may be obtained from one or more cameras associated with the vehicle and arranged to capture images of the environment surrounding the vehicle.
According to a further aspect of the invention, there is provided a computer program product storing computer program code which is arranged when executed to implement the above method.
According to a further aspect of the invention, there is provided a display apparatus for use with a vehicle, comprising: image capturing apparatus arranged to obtain images of a region external to the vehicle; a display arranged to display information; a storage means arranged to store at least a portion of the obtained images; a processing means arranged to: receive a current image from the image capturing apparatus; cause the storage means to store at least a portion of the obtained images; generate a composite image from the current image and a stored image by matching portions of the stored image and the current image; and cause the display to display at least part of the composite image; wherein the composite image comprises a first region generated from the current image and a second region generated from the stored image, the second region not being visible within the current image.
In the display apparatus described above, the image capturing apparatus may comprise a camera or other form of image capture device arranged to generate and output still images or moving images. The display may comprise a display screen, for instance an LCD display screen suitable for installation in a vehicle. Alternatively, the display may comprise a projector for forming a projected image. The processing means may comprise a controller or processor, suitably an ECU of the vehicle.
The processing means may be further arranged to implement the above method.
According to a further aspect of the invention, there is provided a vehicle comprising the above display apparatus.
According to a further aspect of the invention, there is provided a display method, a display apparatus or a vehicle substantially as herein described with reference to the accompanying figures.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying figures.
More generally, it may be considered that from the viewing position of the driver the view of the roadway 130 ahead is partially occluded both by external portions of the vehicle (especially the bonnet 105) and internal portions of the vehicle, for instance the bodywork surrounding the windscreen 160 and the dashboard. In the following description of the invention, where reference is made to the view of the driver or from the driver's position, this should be considered to encompass the view of a passenger, though clearly for manually driven vehicles it is the driver's view that is of paramount importance. In particular, it is portions of the roadway 130 close to the front of the vehicle that are occluded. It will be appreciated that the driver's view of the environment surrounding the vehicle on all sides is similarly restricted by the field of view available through each vehicle window from the driver's viewing position. In general, a driver is unable to see portions of the environment close to the vehicle due to restrictions imposed by vehicle bodywork.
Of course, a well-known partial solution to this problem is the use of mirrors, especially wing mirrors or side mirrors mounted on the exterior of the vehicle which provide an improved view of the environment close to the sides of the vehicle as well as of the blind-spots generally behind and to the side of the vehicle. However, it remains the case that for conventional vehicles the view for the driver of the environment and terrain immediately surrounding the vehicle can be very limited. Furthermore, there is no view at all of the terrain immediately underneath the vehicle. Particularly when driving a vehicle off-road it can be important to appreciate the alignment of the vehicle relative to objects, for instance rocks, both surrounding and underneath the vehicle in order to avoid vehicle damage. Conventionally, this can only be done by the driver remembering the terrain as the vehicle drives forward and visualising where the vehicle must be relative to objects as they disappear from the driver's field of view, and in particular pass under the vehicle, or by the driver exiting the vehicle and inspecting its position relative to objects that may pose a threat. Clearly this requires skill and practice on the part of the driver, and even then is inexact.
It is becoming commonplace for vehicles to be provided with one or more video cameras to provide live video images (or still images) of the environment surrounding a vehicle. Such images may then be displayed for the benefit of the driver, for instance on a dashboard mounted display screen. In particular, it is well-known to provide a camera system with at least one camera towards the rear of the vehicle directed generally behind the vehicle and downwards to provide live video images to assist a driver who is reversing (it being the case that the driver's natural view of the environment immediately behind the vehicle is particularly limited). It is known to provide multiple such camera systems to provide live imagery of the environment surrounding the vehicle on multiple sides, for instance displayed on a dashboard mounted display screen. For instance, a driver may selectively display different camera views in order to ascertain the locations of objects close to each side of the vehicle. Such cameras may be mounted externally upon the vehicle, or positioned internally and directed outwards and downwards through the vehicle glass in order to capture images.
Such cameras may be provided at varying heights relative to the ground on which the vehicle is standing, for instance generally at roof level, driver's eye level or some suitable lower location to avoid vehicle bodywork obscuring their view of the environment immediately adjacent to the vehicle. However, while being a significant improvement over a driver's natural field of view, such camera systems are of no assistance for determining the location of objects underneath the vehicle. It might be considered that the solution to this inability to view the terrain underneath a vehicle is to position one or more cameras underneath the vehicle or to the side of the vehicle and directed underneath the vehicle. However, the underneath of a vehicle is not a promising area for image capture due to it being poorly lit. Furthermore, cameras located generally underneath a vehicle are exposed to a significant risk of damage due to contact with objects such as rocks.
One solution to the above described problem of poor visualisation of the roadway ahead of a vehicle will now be described in relation to FIG. 2.
The vehicle shown in FIG. 2 is provided with a display means arranged to display information 240, 250 such that it appears to overlie the bonnet 205.
The display means may comprise a head-up display means for displaying information in a head-up manner to at least the driver of the vehicle. The head-up display may form part of, consist of or be arranged proximal to the windscreen 260 such that the information 240, 250 is displayed to overlie the bonnet 205 of the vehicle. By overlie it is meant that the displayed information 240, 250 appears upon (or in front of) the bonnet 205. Where images of other portions of the environment surrounding the vehicle are to be displayed, the head-up display may be similarly arranged relative to another window of the vehicle. An alternative is for the display means to comprise a projection means. The projection means may be arranged to project an image onto an interior portion of the vehicle, such as onto a dashboard, door interior, or other interior components of the vehicle. The projection means may comprise a laser device for projecting the image onto the vehicle interior.
A method of providing the improved view of FIG. 2 will now be described.
The image data may be for a region ahead of the vehicle. The image data may be obtained by the processing device from one or more image sensing means, such as cameras, associated with the vehicle.
The next step of the method for providing the improved view of FIG. 2 is to generate a representation based on the image data and, optionally, on the at least one component of the vehicle.
The representation, particularly of the image data although optionally also of the at least one component, may be generated so as to match, or correspond to, a perspective from a point of view of the driver. For example, an image processing operation may be performed on the image data to adjust a perspective of the image data. The perspective may be adjusted to match, or to be closer to, a perspective of a subject of the image data as viewed from the driver's position within the vehicle.
The image processing operation may comprise introducing a delay into the image data. The delay time may be based upon a speed of travel of the vehicle. The delay may allow the displayed representation based on the image data obtained from the camera to correspond to a current location of the vehicle. For example, if the image data is for a location around 20 m ahead of the vehicle, the delay may allow the location of the vehicle to approach the location of the image data such that, when the representation is displayed, the location corresponding to the image data is that which is obscured from the driver's view by the bonnet 205. In this way the displayed representation matches a current view of the driver. It will be appreciated that the delay may also be variable according to the driver's viewing position, given that the driver's viewing position affects the portion of the roadway occluded by the bonnet 205. The image processing operation may be performed by the processing device.
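By way of illustration only, the relationship between the camera's look-ahead distance, the vehicle speed and the introduced delay might be sketched in Python as follows; the function name and the 20 m / 5 m/s figures are assumptions for the sketch, not details of the embodiment:

    # Minimal sketch (assumed values): choose how long to buffer a frame so
    # that, when displayed, it depicts the ground currently hidden by the
    # bonnet. A real system would derive the look-ahead distance from the
    # camera geometry and the driver's eye position.
    def display_delay_seconds(look_ahead_m: float, speed_m_per_s: float) -> float:
        """Time for the vehicle to cover the camera's look-ahead distance."""
        if speed_m_per_s <= 0.0:
            return float("inf")  # vehicle stationary: hold the last frame
        return look_ahead_m / speed_m_per_s

    # Imagery captured around 20 m ahead, vehicle travelling at 5 m/s:
    print(display_delay_seconds(20.0, 5.0))  # 4.0 seconds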
Once generated, the representation is displayed. The representation is displayed so as to overlie a portion of the vehicle's body from the viewer's point of view, such as the driver's point of view. The method may be performed continually in a loop until a predetermined event occurs, such as a user interrupting the method, for example by activating a control within the vehicle. It will be realised that the predetermined event may originate from other sources.
The representation may be displayed, under the control of a processing device, upon a display device provided within the vehicle such that the displayed information overlies a portion of the vehicle. The processing device may be further arranged to determine information associated with the vehicle, or to receive image data for a region ahead of the vehicle, and to cause the display device to display a graphical representation of at least one component of the vehicle having one or more characteristics based on the information associated with the vehicle, or a representation of the image data. The information received by the processing device may include a steering angle of the vehicle's wheels, or image data output by one or more cameras. The information or image data may be received by the processing device from a communication bus of the vehicle, or via a dedicated communication channel such as a video feed from the one or more cameras.
The graphical representation generated by the processing device may be a representation of the vehicle's wheels, as shown in FIG. 2.
The display device may comprise a projector which is operably controlled by the processing device to project the representation by emitting light toward an optical combiner. The projection device and combiner together form a head-up display (HUD). When no light is being emitted by the projection device the combiner may be generally imperceptible to the driver of the vehicle, but when light is projected from the projection device and strikes the combiner an image is viewed thereon by the driver. The combiner is positioned such that an image viewed thereon by the driver appears to overlie a portion of the vehicle's body, such as the bonnet. That is, the image appears above the bonnet. The displayed representation allows the driver to appreciate a location and direction of the vehicle's wheels and a position and direction of the roadway on which the vehicle is travelling, which is particularly useful for off-road driving. In addition to the representation of the wheel positions shown in FIG. 2, a graphical representation of one or more other components of the vehicle may be displayed.
While the improved forwards driver view described above in connection with FIG. 2 assists the driver in respect of the region ahead of the vehicle, it does not address the driver's restricted view of the environment on the other sides of the vehicle, nor the absence of any view of the terrain immediately underneath the vehicle.
It is known to take video images (or still images) derived from multiple vehicle mounted cameras and form a composite image illustrating the environment surrounding the vehicle. For instance, the camera images may be mapped onto a 3-Dimensional (3D) surface generally surrounding the vehicle to form a composite 3D image, referred to below as an image “bowl”.
The composite image may be displayed to the user according to any suitable display means, for instance the Head-Up Display, projection systems or dashboard mounted display systems described above. While it may be desirable to display at least a portion of the 3D composite image, viewed for instance from an internal position in a selected viewing direction, optionally a 2-Dimensional (2D) representation of a portion of the 3D composite image may be displayed. According to certain other embodiments it may be that a composite 3D image is never formed, the video images derived from the cameras being mapped only to a 2D view of the environment surrounding the vehicle. This may be a side view extending from the vehicle, or a plan view such as is shown in FIG. 5.
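One conventional way of producing such a plan view, offered here only as an illustrative sketch rather than the method of the embodiment, is to warp each camera frame onto the ground plane with a homography; the point correspondences below are placeholders that would in practice come from camera calibration:

    import cv2
    import numpy as np

    # Sketch: map a (synthetic) camera frame onto a top-down plan view grid
    # using a ground-plane homography (placeholder correspondences).
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in camera frame
    src = np.float32([[420, 700], [860, 700], [760, 430], [520, 430]])  # image px
    dst = np.float32([[300, 600], [500, 600], [500, 200], [300, 200]])  # plan px

    H = cv2.getPerspectiveTransform(src, dst)
    plan = cv2.warpPerspective(frame, H, (800, 800))  # 800 x 800 px plan view
    print(plan.shape)  # -> (800, 800, 3)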
According to embodiments of the present invention, in addition to the cameras being used to provide a composite live image of the environment surrounding the vehicle, historic images may be incorporated into the composite image to provide imagery representing the terrain under the vehicle, that is, the terrain within the boundary of the vehicle. By historic images, it is meant images that were captured previously by the vehicle camera system, for instance images of the ground in front of or behind the vehicle, the vehicle subsequently having driven over that portion of the ground. The historic images may be still images, video images or frames from video images. Such historic images may be used to fill the blank region 502 in FIG. 5.
The composite image may be formed by combining the live and historic video images, and in particular by performing pattern matching to fit the historic images to the live images, thereby filling the blank region in the composite image comprising the area under the vehicle. The surround camera system comprises at least one camera and a buffer arranged to buffer images as the vehicle progresses along a path. The vehicle path may be determined by any suitable means, including but not limited to a satellite positioning system such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), wheel ticks (tracking rotation of the wheels, combined with knowledge of the wheel circumference) and image processing to determine movement according to shifting of images between frames. At locations where the blank region from the live images overlaps with buffered images, image data for the blank region is copied from the delayed (buffered) video images and pattern matched through image processing so that it can be combined with the live camera images forming the remainder of the composite image.
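As an illustrative sketch only (the embodiment does not prescribe a particular algorithm), normalised cross-correlation template matching, such as OpenCV provides, can locate a buffered historic patch within the live composite; the synthetic images and the confidence threshold below are assumptions:

    import cv2
    import numpy as np

    # Sketch: find where a buffered (historic) patch best aligns within the
    # live composite using normalised cross-correlation. Synthetic images
    # stand in for the real stitched views.
    rng = np.random.default_rng(1)
    live = rng.random((400, 600)).astype(np.float32)  # live composite (grey)
    historic = live[150:250, 200:350].copy()          # patch captured earlier

    result = cv2.matchTemplate(live, historic, cv2.TM_CCOEFF_NORMED)
    _, confidence, _, (x, y) = cv2.minMaxLoc(result)
    if confidence > 0.8:                              # threshold is an assumption
        print(f"historic patch aligns at ({x}, {y})")  # -> (200, 150)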
Advantageously, embodiments of the present invention provide a method for displaying the position of all wheels of a vehicle and the whole vehicle floor clearance, combined with live images of the environment surrounding the vehicle. This provides improved information allowing the driver to progress safely and confidently, particularly when travelling through rough terrain. The use of pattern matching provides particular improvements in the combining of live and historic images.
Referring now to FIG. 6, a method of forming a composite image incorporating both live and historic images will now be described. At step 600 live images are captured by one or more cameras of the vehicle.
At step 602 the live frames are stitched together to form the composite 3D image (for instance, the image “bowl” described above).
According to certain embodiments, to constrain the image storage requirements, only video frames from cameras facing generally forwards (or forwards and backwards) may be stored, as it is only necessary to save images of the ground in front of the vehicle (or in front and behind) that the vehicle may subsequently drive over in order to supply historic images for inserting into the live blind spot area. To further reduce the storage requirements it may be that not the whole of every image frame is stored. For a sufficiently fast stored frame rate (or slow driving speed) there may be considerable overlap between consecutive frames (or intermittent frames determined for storage if only every nth frame is to be stored), and so only an image portion differing from one frame for storage to the next may be stored, together with sufficient information to combine that portion with the preceding frame. Such an image portion may be referred to as a sliver or image sliver. It will be appreciated that other than an initially stored whole frame, each subsequent stored frame may require only a sliver to be stored. It may be desirable to periodically store a whole frame image to mitigate the risk of processing errors preventing image frames from being recreated from stored image slivers. This identification of areas of overlap between images may be performed by suitable known image processing techniques that may include pattern matching, that is, matching image portions common to a pair of frames to be stored. For instance, pattern matching may use known image processing algorithms for detecting edge features in images, which may therefore suitably identify the outline of objects in images, those outlines being identified in a pair of images to determine the degree of image shift between the pair due to vehicle movement.
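The sliver extraction described above might, purely as an assumed sketch, be implemented with a translational shift estimate between consecutive frames; phase correlation is used here as one possible estimator, and the synthetic frames stand in for camera output:

    import cv2
    import numpy as np

    # Sketch: estimate the row shift between consecutive frames and keep only
    # the rows of the newer frame that do not overlap the older one.
    rng = np.random.default_rng(0)
    ground = rng.random((140, 200)).astype(np.float32)  # textured "ground"
    prev = ground[0:100]     # frame n
    curr = ground[20:120]    # frame n+1: vehicle moved 20 rows forward

    (dx, dy), _ = cv2.phaseCorrelate(prev, curr)
    shift = int(round(abs(dy)))            # rows of genuinely new content

    sliver = curr[-shift:] if shift else None  # the non-overlapping portion
    print(shift, None if sliver is None else sliver.shape)  # -> 20 (20, 200)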
Each stored frame, or stored partial frame (or image sliver), is stored in combination with vehicle position information. Therefore, in parallel to the capturing of live images at step 600 and the live image stitching at step 602, at step 604 vehicle position information is received. The vehicle position information is used to determine the vehicle location at step 606. The vehicle position may be expressed as a coordinate, for instance a Cartesian coordinate giving X, Y and Z positions. The vehicle position may be absolute or may be relative to a predetermined point. The vehicle position information may be obtained from any suitable known positioning sensor, for instance GPS, IMU, knowledge of the vehicle steering position and wheel speed, wheel ticks (that is, information about wheel revolutions), vision processing or any other suitable technique. As will be appreciated, use of data from GPS and inertial measurement equipment will provide useful information to the system as to the direction of travel of the vehicle, the vehicle orientation/attitude relative to the prevailing terrain, and road roughness, all of which can be used to match the image being presented on the display to the driver with the current scene, compensating for any obscuration caused by the vehicle. Vision processing may comprise processing images derived from the vehicle camera systems to determine the degree of overlap between captured frames, suitably processed to determine a distance moved through knowledge of the time between the capturing of each frame. This may be combined with the image processing for storing captured frames as described above, for instance pattern matching including edge detection. In some instances it may be desirable to calculate a vector indicating movement of the vehicle as well as the vehicle position, to aid in determining the historic images to be inserted into the live blank region, as described below.
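As an illustrative sketch of one of the techniques listed above, dead reckoning from wheel ticks, the following assumes example values for tick resolution and wheel circumference; it is merely one possibility named in the passage, not the positioning method of the embodiment:

    import math

    # Sketch of dead reckoning from wheel ticks (assumed tick resolution and
    # wheel circumference). Heading would come from steering angle or an IMU.
    TICKS_PER_REV = 96
    WHEEL_CIRCUMFERENCE_M = 2.1

    def advance(x: float, y: float, heading_rad: float,
                ticks: int, new_heading_rad: float):
        """Advance the estimated position by one measurement interval."""
        distance = (ticks / TICKS_PER_REV) * WHEEL_CIRCUMFERENCE_M
        h = (heading_rad + new_heading_rad) / 2.0  # mean heading approximation
        return (x + distance * math.cos(h),
                y + distance * math.sin(h),
                new_heading_rad)

    x, y, h = advance(0.0, 0.0, 0.0, ticks=48, new_heading_rad=0.05)
    print(x, y, h)  # roughly 1.05 m travelled with a slight heading change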
Each frame that is to be stored (or sliver), from step 600, is stored in a frame store at step 608 along with the vehicle position obtained from step 606 at the time of image capture. That is, each frame is stored indexed by a vehicle position. The position may be an absolute position or relative to a reference datum. Furthermore, the position of an image may be given relative only to a preceding stored frame, allowing the position of the vehicle in respect of each historic frame to be determined relative to a current position of the vehicle by stepping backwards through the frame store and noting the shift in vehicle position until the desired historic frame is reached. Each record in the frame store may comprise image data for that frame (or image sliver) and the vehicle position at the time the frame was captured. That is, along with the image data, metadata may be stored including the vehicle position. The viewing angle of the frame relative to the vehicle position is known from the camera position and angle relative to the vehicle (which as discussed above may be fixed or moveable). Such information concerning the viewing angle, camera position etc. may also be stored in frame store 608, which is shown representing the image and coordinate information as (frame <->co-ord). It will be appreciated that there may be significant variation in the format in which such information is stored and the present invention is not limited to any particular image data or metadata storage technique, nor to the particulars of the position information that is stored.
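Since the passage above expressly leaves the storage format open, the following is only one possible sketch of a frame store record indexed by vehicle position, with a simple nearest-position query; all field names are assumptions:

    from dataclasses import dataclass, field

    # Illustrative frame store: each record holds image data (a frame or
    # sliver), the vehicle position at capture time and camera pose metadata.
    @dataclass
    class Record:
        position: tuple      # (x, y, z) vehicle position at capture time
        image: bytes         # encoded frame or sliver (stand-in type)
        camera_pose: tuple   # e.g. camera mounting position/angle metadata

    @dataclass
    class FrameStore:
        records: list = field(default_factory=list)

        def add(self, record: Record) -> None:
            self.records.append(record)

        def nearest(self, x: float, y: float) -> Record:
            """Record whose capture position is closest to (x, y)."""
            return min(self.records,
                       key=lambda r: (r.position[0] - x) ** 2
                                     + (r.position[1] - y) ** 2)

    store = FrameStore()
    store.add(Record((0.0, 0.0, 0.0), b"...", (0.0, 0.0)))
    store.add(Record((1.0, 0.5, 0.0), b"...", (0.0, 0.0)))
    print(store.nearest(0.9, 0.4).position)  # -> (1.0, 0.5, 0.0)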
At step 610 a pattern recognition area is determined. The pattern recognition area comprises the area under the vehicle that cannot be seen in the composite image formed solely from stitched live images. Referring back to FIG. 5, this corresponds to the blank region 502.
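A minimal sketch of deriving such a region from the current vehicle position is shown below; the footprint dimensions and the overlap margin are assumed values, and the region is axis-aligned for simplicity, ignoring the orientation adjustments discussed below:

    # Illustrative pattern recognition region: the expected under-vehicle
    # blank area plus a margin to absorb position error (assumed dimensions).
    def pattern_recognition_region(x: float, y: float,
                                   length_m: float = 4.8, width_m: float = 2.0,
                                   margin_m: float = 0.5):
        """Axis-aligned region (xmin, ymin, xmax, ymax) centred on the vehicle."""
        half_l = length_m / 2.0 + margin_m
        half_w = width_m / 2.0 + margin_m
        return (x - half_w, y - half_l, x + half_w, y + half_l)

    print(pattern_recognition_region(10.0, 5.0))
    # -> (8.5, 2.1, 11.5, 7.9)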
However, the above described fitting of previously stored image data into a live stitched composite image is predicated on exact knowledge of the vehicle position both currently and when the image data is stored. It may be the case that it is not possible to determine the vehicle position to a sufficiently high degree of accuracy.
It will be appreciated that where the degree of error in the vehicle position differs between the time at which an image is stored and the time at which it is fitted into a live composite image this may cause undesirable misalignment of the live and historic images. This may cause a driver to lose confidence in the accuracy of the representation of the ground under the vehicle. Worse still, if the misalignment is significant then there may be a risk of damage to the vehicle due to a driver being misinformed about the location of objects under the vehicle.
Due to the risk of misalignment, at step 612 pattern matching is performed within the pattern recognition area to match regions of live and stored images. As noted above in connection with storing image frames, such pattern matching may include suitable edge detection algorithms. The pattern recognition region determined at step 610 is used to access stored images from the frame store 608. Specifically, historic images containing image data for the ground within the pattern recognition area are retrieved. The pattern recognition area may comprise the expected vehicle blank region and a suitable amount of overlap on at least one side to account for misalignment. Step 612 further takes as an input the live stitched composite image from step 602. The pattern recognition area may encompass portions of the live composite view adjacent to the blank region 502. Pattern matching is performed to find overlapping portions of the live and historic images, such that close alignment between the two can be determined and used to select appropriate portions of the historic images to fill the blank region. It will be appreciated that the amount of overlap between the live and historic images may be selected to allow for a predetermined degree of error between the determined vehicle position and its actual position. Additionally, to take account of possible changes in vehicle pitch, roll and yaw between a current position and a historic position as a vehicle traverses undulating and/or slippery terrain, the determination of the pattern recognition region may take account of information from sensor data indicating the vehicle pitch, roll and yaw. This may affect the degree of overlap of the pattern recognition area with the live images for one or more sides of the vehicle. It will be appreciated that according to some embodiments it may not be necessary to determine a pattern recognition area; rather, the pattern matching may comprise a more exhaustive search through historic images (or historic images with an approximate time delay relative to the current images) relative to the whole composite live image. However, by constraining the region within the live composite image within which pattern matching to historic images is to be performed, and constraining the volume of historic images to be matched, the computational complexity of the task and the time taken may be reduced.
At step 614 selected portions of one or more historic images or slivers are inserted into the blank region in the composite live images to form a composite image encompassing both live and historic images.
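Purely as an illustration of this insertion step, a masked copy over the blank region might look as follows; the array sizes and region coordinates are placeholders rather than values from the embodiment:

    import numpy as np

    # Sketch of step 614: copy aligned historic pixels into the blank
    # (under-vehicle) region of the live composite via a boolean mask.
    live = np.zeros((600, 800, 3), dtype=np.uint8)          # stitched live view
    historic = np.full((600, 800, 3), 128, dtype=np.uint8)  # aligned historic view

    blank = np.zeros((600, 800), dtype=bool)
    blank[250:350, 330:470] = True            # under-vehicle blank region

    composite = live.copy()
    composite[blank] = historic[blank]        # fill blank region from history
    print(composite[300, 400], composite[0, 0])  # -> [128 128 128] [0 0 0]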
Furthermore, in addition to displaying a representation of the ground under the vehicle, according to certain embodiments of the invention a representation of the vehicle may be added to the output composite image. For instance, a translucent vehicle image or an outline of the vehicle may be added. This may assist a driver in recognising the position of the vehicle and the portion of the image representing the ground under the vehicle.
In some embodiments, where the composite image is to be displayed overlying portions of the vehicle to give the impression of the vehicle being transparent or translucent (for instance using a HUD or a projection means as described above), the generation of a composite image may also require that a viewing direction of a driver of the vehicle is determined. For instance, a camera may be arranged to provide image data of the driver from which the viewing direction of the driver is determined. The viewing direction may be determined from an eye position of the driver, this determination being performed in parallel to the other steps of the method. It will be appreciated that where the composite image or a portion of the composite image is to be presented on a display in the vehicle which is not intended to show the vehicle being see-through, there is no need to determine the driver's viewing direction.
The combined composite image is output at step 616. As discussed above, the composite image output may be upon any suitable image display device, such as HUD, dashboard mounted display screen or a separate display device carried by the driver. Alternatively, portions of the composite image may be projected onto portions of the interior of the vehicle to give the impression of the vehicle being transparent or translucent. The present invention is not limited to any particular type of display technology.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. In particular, the method of FIG. 6 may be implemented by computer program code which, when executed by a processing device, performs the method.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.
Priority application: GB 1702536.2, filed February 2017 (national).
Filing document: PCT/EP2018/052085, filed 29 January 2018 (WO).