This application is related to and claims priority from Japanese Patent Application No. 2013-215649 filed on Oct. 16, 2013, the contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to synthesized image generation devices capable of combining a plurality of acquired images transmitted from in-vehicle cameras and generating synthesized image data as bird's eye images.
2. Description of the Related Art
There has been known a conventional synthesized image generation device which receives acquired images transmitted from a plurality of in-vehicle cameras mounted on a motor vehicle. The in-vehicle cameras are arranged around the motor vehicle and acquire images. The conventional synthesized image generation device generates a bird's-eye view on the basis of the acquired images. The bird's-eye view is a top view of the motor vehicle when observed from above the motor vehicle. When parts of the acquired images transmitted from the in-vehicle cameras overlap with each other, the conventional synthesized image generation device cuts the overlapped part from the acquired images and synthesizes the remaining acquired images in order to make synthesized image data which show bird's-eye views of the motor vehicle.
The driver of the motor vehicle uses the generated bird's-eye view, i.e. the synthesized image data, in order to recognize the environment around the motor vehicle. In addition, it is possible to use the generated bird's-eye view in order to extract road markings, for example a lane line painted on the surface of a road with road marking paint, from the synthesized image data. However, when the acquired images transmitted from the in-vehicle cameras contain strong light, such as strong sunlight and/or strong beams irradiated from an illuminated advertising pillar, the acquired images contain blacked-out defects in which areas other than the strong light area become black. As a result, the effective area of the synthesized image data from which road markings such as lane lines can be correctly extracted is reduced.
Further, the same phenomenon occurs when water drops or stains adhere to the lens surfaces of the in-vehicle cameras. In these cases, it may be difficult to correctly extract lane lines and other road markings on the surface of the road around the motor vehicle from the synthesized image data.
It is therefore desired to provide a synthesized image generation device, to be mounted on a motor vehicle, capable of synthesizing acquired images transmitted from a plurality of acquiring sections such as a plurality of in-vehicle cameras arranged around the own vehicle, and generating synthesized image data from which road markings such as lane lines on the surface of the road are correctly extracted.
An exemplary embodiment provides a synthesized image generation device which is mounted on a motor vehicle. The synthesized image generation device has an image acquiring section, an acquired image selection section, and a synthesized image generation section. The image acquiring section obtains acquired images transmitted from a plurality of in-vehicle cameras. The acquired image selection section selects the acquired images transmitted from the in-vehicle cameras so that the image acquiring regions of the in-vehicle cameras do not overlap with each other. The selected acquired images are used for extracting road markings on a surface of a road. The synthesized image generation section combines the selected acquired images and generates synthesized image data.
The synthesized image generation device having the structure previously described generates synthesized image data as a combination of acquired images transmitted from the selected in-vehicle cameras whose image acquiring regions do not overlap with each other. This avoids a process of cutting a predetermined image area from the acquired images during the process of generating the synthesized image data. That is, this prevents a part of a clear acquired image from being cut and eliminated even when a clear acquired image and an unclear acquired image are selected simultaneously and combined. It is accordingly possible for the synthesized image generation device to easily detect road markings on the surface of a road and extract them from the synthesized image data with high accuracy.
It is also possible to realize the functions of the synthesized image generation device having the structure previously described as software.
A preferred, non-limiting embodiment of the present invention will be described by way of example with reference to the accompanying drawings, in which:
Hereinafter, various embodiments of the present invention will be described with reference to the accompanying drawings. In the following description of the various embodiments, like reference characters or numerals designate like or equivalent component parts throughout the several diagrams.
A description will be given of a synthesized image generation device according to an exemplary embodiment with reference to
An image processing section 10 having a central processing unit (CPU) 11 shown in
The in-vehicle cameras 21 to 24 are a combination of the front view camera 21, the rear view camera 22, the right view camera 23 and the left view camera 24. Those in-vehicle cameras 21 to 24 have image acquiring regions, respectively, designated by hatched areas shown in
In more detail, the front view camera 21 is arranged inside of a front bumper of the own vehicle and acquires an image of the road in front of the own vehicle (hereinafter, referred to as the “front view image”).
The rear view camera 22 is arranged inside of a rear bumper of the own vehicle and acquires an image of the road at a rear side of the own vehicle (hereinafter, referred to as the “rear view image”).
The right view camera 23 is arranged at the right wing mirror of the own vehicle and acquires an image of the road at a right side of the own vehicle (hereinafter, referred to as the “right side view image”).
The left view camera 24 is arranged at the left wing mirror of the own vehicle and acquires an image of the road at a left side of the own vehicle (hereinafter, referred to as the “left side view image”).
Each of the in-vehicle cameras 21 to 24, i.e. the front view camera 21, the rear view camera 22, the right view camera 23 and the left view camera 24, acquires an image every 33 ms and transmits the acquired image to the image processing section 10, for example.
The display unit 26 receives display instruction signals transmitted from the image processing section 10 and displays an image on the basis of the received display instruction signals. The indicator 27 also receives the display instruction signals transmitted from the image processing section 10 and provides visual information to the driver of the own vehicle on the basis of the received display instruction signals. For example, the visual information in the display instruction signals transmitted from the image processing section 10 indicates a degree of recognition accuracy of detected road markings such as a lane line on the road on which the own vehicle drives. For example, the indicator 27 is equipped with a plurality of emitting sections, and adjusts the number of lit emitting sections on the basis of the display instruction signals transmitted from the image processing section 10.
The recognition accuracy of a road marking, for example a lane line painted on the surface of the road with road marking paint, indicates the accuracy of the lane line extraction process (step S135, which will be explained later in detail). The image processing section 10 generates and outputs the display instruction signals corresponding to this recognition accuracy to the indicator 27.
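As a minimal sketch of how the indicator 27 could adjust its emitting sections from such a display instruction signal, the number of lit sections can grow with the recognition accuracy; the 0.0 to 1.0 accuracy scale and the section count below are illustrative assumptions, not values given in the disclosure:

```python
# Hypothetical mapping from a recognition accuracy value to the number of
# lit emitting sections of the indicator 27. Scale and count are assumed.

def lit_sections(recognition_accuracy, total_sections=5):
    """Clamp the accuracy to [0, 1] and map it to a count of lit sections."""
    accuracy = min(max(recognition_accuracy, 0.0), 1.0)
    return round(accuracy * total_sections)
```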
The environment state detection section 28 detects a current state of the image acquiring conditions of the in-vehicle cameras 21 to 24. For example, the environment state detection section 28 detects a direction of a light source such as sunlight, the presence of stains which adhere to the lenses of the in-vehicle cameras 21 to 24, and the presence of water drops or fog. The modifications of the exemplary embodiment will disclose the features and operation of the environment state detection section 28.
Because microcomputers are easily available on the commercial market, it is possible to use a microcomputer as the image processing section 10. In general, such a microcomputer has a central processing unit (CPU) 11, a read only memory (ROM), a random access memory (RAM), etc. The memory section such as the ROM and the RAM stores programs which include a synthesized image generation program. The CPU 11 executes the various programs stored in the memory section, such as a lane line recognition process which will be explained later in detail.
A description will now be given of the process of the synthesized image generation device according to the exemplary embodiment.
In the image display system 1 having the structure previously described, the image processing section 10 performs the lane line recognition process indicated by the flow chart shown in
In the lane line recognition process shown in
The n-th value indicates the combination of the acquired images to be used for generating synthesized image data.
For example, the image processing section 10 according to the exemplary embodiment selects a combination of the acquired front view image transmitted from the front view camera 21 and the acquired rear view image transmitted from the rear view camera 22, as shown in
Further, the image processing section 10 selects a combination of the acquired right side view image transmitted from the right view camera 23 and the acquired left side view image transmitted from the left view camera 24, as shown in
The image processing section 10 selects a plurality of combinations, each of which includes acquired images of opposite views as observed from the own vehicle, so that the image acquiring regions of the selected acquired images in each combination do not overlap with each other.
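The selection in step S120 can be sketched as follows; the camera identifiers and the pairing data structure are illustrative assumptions, not names from the disclosure:

```python
# Sketch of the combination selection in step S120: each combination contains
# acquired images from cameras whose image acquiring regions do not overlap
# (opposite views as observed from the own vehicle). Camera names and the
# pairing rule are hypothetical.

CAMERA_COMBINATIONS = [
    ("front", "rear"),   # n = 1: front view camera 21 + rear view camera 22
    ("right", "left"),   # n = 2: right view camera 23 + left view camera 24
]

def select_images(n, acquired_images):
    """Return the acquired images belonging to the n-th combination (1-based)."""
    cameras = CAMERA_COMBINATIONS[n - 1]
    return {cam: acquired_images[cam] for cam in cameras}
```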
The operation flow goes to step S125. In step S125, the image processing section 10 performs a process for converting the acquired images belonging to each combination to synthesized image data as a bird's-eye view. The conversion process performs a coordinate conversion of each pixel in the acquired images in the combination in order to make the bird's-eye view on the basis of a geometric transformation table. This geometric transformation table is used for converting each image in the n-th combination to a bird's-eye view observed from above the own vehicle.
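The table-driven coordinate conversion of step S125 can be sketched as below; the actual table contents depend on camera calibration and are not given in the disclosure, so the mapping here is purely illustrative:

```python
import numpy as np

# Sketch of the coordinate conversion in step S125: each pixel of the
# bird's-eye view is filled from a source pixel of the acquired image,
# looked up in a precomputed geometric transformation table. The table
# format (output pixel -> source pixel) is an assumption.

def to_birds_eye(acquired, table, out_shape):
    """table maps each output pixel (v, u) to a source pixel (y, x)."""
    bird = np.zeros(out_shape, dtype=acquired.dtype)
    for (v, u), (y, x) in table.items():
        bird[v, u] = acquired[y, x]
    return bird
```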
The operation flow goes to step S130. In step S130, the image processing section 10 performs a process of synthesizing the bird's-eye views corresponding to the acquired images selected in step S120. In this synthesizing process, the image processing section 10 generates a synthesized bird's-eye view, i.e. the synthesized image data. In more detail, the generated bird's-eye views are arranged around a predetermined bird's-eye view of the own vehicle which has been prepared in advance.
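The synthesizing process of step S130 amounts to pasting the converted views around a prepared own-vehicle image; the canvas layout and placement offsets below are illustrative assumptions:

```python
import numpy as np

# Sketch of the synthesizing process in step S130: the converted bird's-eye
# views are placed around a prepared bird's-eye image of the own vehicle on a
# common canvas. Offsets and canvas size are hypothetical.

def synthesize(canvas, views_with_offsets):
    """views_with_offsets: list of (bird_eye_view, (top, left)) placements."""
    out = canvas.copy()
    for view, (top, left) in views_with_offsets:
        h, w = view.shape[:2]
        out[top:top + h, left:left + w] = view
    return out
```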
A description will now be given of a driving scene in which the own vehicle drives on a drive lane of a road on a rainy day with reference to
When the own vehicle is running on the road on a rainy day shown in
That is, the image processing section 10 according to the exemplary embodiment obtains visual image data with which the driver of the own vehicle can recognize a lane line on the surface of the road. However, on a rainy day, there is a possible case in which water drops adhere to the lens surface of at least one of the front view camera 21 and the rear view camera 22, and the captured image becomes unclear. Even so, in view of the wind direction on such a heavy rainy day, it can be considered that at least one of the front view camera 21 and the rear view camera 22 acquires a clear image.
In addition, it can be considered that at least one of the right view camera 23 and the left view camera 24 acquires a clear image, i.e. at least one of the right side view image acquired by the right view camera 23 and the left side view image acquired by the left view camera 24 is clear. Accordingly, it is possible for the image processing section 10 to extract both a lane line at the right side of the own vehicle and a lane line at the left side of the own vehicle when extracting the lane lines from the synthesized image data of these acquired images.
A description will now be given of a driving scene in which the own vehicle drives on a drive lane of the road on a sunny day with reference to
That is,
As shown in
As previously described, even on a sunny day when a strong light source is present around the own vehicle and strong sunlight is irradiated toward the own vehicle, it is possible for at least one of the in-vehicle cameras 21 to 24 to correctly acquire an image of a lane line on the surface of the road on which the own vehicle drives, and to extract the lane lines present on the right side and the left side of the own vehicle from the synthesized image data obtained from the acquired images.
In particular, it is difficult for the conventional image processing section to correctly recognize the presence of a lane line designated by the circles shown in
The operation flow goes to step S135 shown in
The operation flow goes to step S140. In step S140, the image processing section 10 compares the variable n with the number N of the combinations of acquired images.
In the exemplary embodiment, the number N of the combinations of the acquired images becomes 2 because there are two combinations of acquired images, one is a combination of the acquired images transmitted from the front view camera 21 and the rear view camera 22, and the other is a combination of the acquired images transmitted from the right view camera 23 and the left view camera 24.
When the detection result in step S140 indicates negation (“NO” in step S140), i.e. indicates that the value n is less than the number N of the combinations of the acquired images, the operation flow goes to step S145.
In step S145, the image processing section 10 increments the variable n by 1 ((n+1)→n). The operation flow returns to step S120.
On the other hand, when the detection result in step S140 indicates affirmation (“YES” in step S140), i.e. indicates that the value n is not less than the number N of the combinations of the acquired images, the operation flow goes to step S150.
In step S150, the image processing section 10 performs a recognition result synthesizing process.
In the recognition result synthesizing process, the image processing section 10 selects, from the recognition results obtained in step S135, the lane line having the maximum recognition accuracy for each of the two sides of the own vehicle.
The image processing section 10 generates a detection signal which corresponds to a magnitude of the recognition accuracy.
The operation flow goes to step S155. In step S155, the image processing section 10 performs a display process, i.e. generates a display instruction signal on the basis of the detection signal obtained in step S150. The image processing section 10 transmits the display instruction signal to the display unit 26 and the indicator 27 in order to display information corresponding to the magnitude of the recognition accuracy obtained in step S150.
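The overall loop through steps S110 to S155 described above can be sketched as follows; all helper functions, the combination list and the accuracy field are hypothetical placeholders for the processing described in the corresponding steps, not names from the disclosure:

```python
# Sketch of the lane line recognition flow (steps S110-S155). Each helper
# stands in for the processing of the step named in the comment.

def lane_line_recognition(acquire, combinations, convert, synthesize, extract):
    results = []
    images = acquire()                      # S110: obtain acquired images
    n, N = 1, len(combinations)
    while True:
        selected = combinations[n - 1]      # S120: n-th non-overlapping set
        birds = [convert(images[c]) for c in selected]   # S125: bird's-eye views
        composite = synthesize(birds)       # S130: synthesized image data
        results.append(extract(composite))  # S135: lane line extraction
        if n >= N:                          # S140: all combinations processed?
            break
        n += 1                              # S145: next combination
    # S150: keep the recognition result with the maximum accuracy;
    # S155 then displays information based on this result.
    return max(results, key=lambda r: r["accuracy"])
```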
The image processing section 10 completes the execution of the process in the flow chart shown in
In the image display system 1 having the structure previously described, the image processing section 10 receives the acquired images transmitted from the in-vehicle cameras 21 to 24, i.e. the front view camera 21, the rear view camera 22, the right view camera 23 and the left view camera 24.
Further, the image processing section 10 selects some of the plurality of acquired images transmitted from the in-vehicle cameras 21 to 24 so that the image acquiring regions of the selected images on the road around the own vehicle do not overlap. The image processing section 10 synthesizes the selected images to make synthesized image data.
According to the image display system 1 having the structure previously described, it is possible for the image processing section 10 to make synthesized image data from selected images whose image acquiring regions on the surface of the road on which the own vehicle drives do not overlap. As a result, even if an unclear image and a clear image acquired by the in-vehicle cameras are combined, it is possible for the image processing section 10 to easily and correctly detect the presence of one or more lane lines on the road of the own vehicle. The image processing section 10 can therefore generate synthesized image data suitable for correctly extracting the lane lines on the road.
In the image display system 1 according to the exemplary embodiment having the structure previously described, the image processing section 10 selects the acquired images so that the image acquiring regions of the acquired images do not overlap with each other. The image processing section 10 generates synthesized image data corresponding to the selected acquired images.
According to the image display system 1 having the structure previously described, it is possible for the image processing section 10 to generate a plurality of synthesized image data in order to extract road markings such as lane lines painted on the surface of the road with road marking paint. It is therefore possible for the image processing section 10 to increase the detection accuracy for correctly detecting road markings such as lane lines on the surface of the road.
In the image display system 1 having the structure previously described, the image processing section 10 selects acquired images of opposite image acquiring regions as observed from the own vehicle. This selection ensures that the image acquiring regions of the selected acquired images do not overlap with each other.
In the image display system 1 having the structure previously described, the image processing section 10 generates a bird's-eye view, observed from above the own vehicle, as the synthesized image data. Because the image display system 1 can provide such a bird's-eye view to the driver of the own vehicle, it is possible to avoid a process of eliminating distortion when a road marking is extracted from the acquired image. This allows road markings such as a lane line on the surface of a road to be extracted easily and simply from the acquired images.
In the image display system 1 having the structure previously described, the image processing section 10 extracts road markings such as lane lines from the synthesized image data. Because the synthesized image data are obtained on the basis of the acquired images obtained in opposite image acquiring regions which are not overlapped with each other, it is possible for the image processing section 10 to correctly extract road markings such as a lane line with high accuracy.
The concept of the present invention is not limited by the exemplary embodiment previously described.
In the exemplary embodiment previously described, the image processing section 10 selects a combination of the acquired images transmitted from a pair of the front view camera 21 and the rear view camera 22, or from a pair of the right view camera 23 and the left view camera 24. However, the concept of the present invention is not limited by the exemplary embodiment. It is possible for the image processing section 10 in the image display system 1 to select a combination of not less than two acquired images according to the detection result transmitted from the environment state detection section 28. That is, the image processing section 10 selects the acquired images transmitted from the enabled in-vehicle cameras 21 to 24 according to the environmental state of the own vehicle and the road obtained by the environment state detection section 28.
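This environment-dependent selection can be sketched as a simple filter; the camera names and the per-camera status flags supplied by the environment state detection section 28 are illustrative assumptions:

```python
# Sketch of selecting acquired images from enabled cameras only: images from
# cameras reported as impaired (stains, water drops, facing a strong light
# source) are excluded. Names and status flags are hypothetical.

def select_enabled(acquired_images, camera_status):
    """camera_status maps camera name -> True when its image is usable."""
    return {cam: img for cam, img in acquired_images.items()
            if camera_status.get(cam, False)}
```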
According to the image display system 1 having the structure previously described, because the image processing section 10 selects acquired images which do not contain any blacked-out shadows, it is possible to increase the detection accuracy for road markings such as lane lines painted on the surface of a road with road marking paint.
The image processing section 10 according to the exemplary embodiment previously described selects acquired images whose image acquiring regions do not overlap. However, the concept of the present invention is not limited by this. It is possible for the image processing section 10 in the image display system 1 to select acquired images whose actual detection areas, i.e. the areas actually used for detecting road markings such as lane lines in the acquired images, do not overlap with each other.
The image processing section 10 in the image display system 1 according to the exemplary embodiment extracts one or more lane lines from the acquired images. However, the concept of the present invention is not limited by the exemplary embodiment. It is possible for the image processing section 10 in the image display system 1 to extract road markings other than the lane lines on the surface of a road from the synthesized image data. In this case, the image processing section 10 performs a pattern matching process, etc. in order to recognize the presence of the road markings other than the lane lines on the surface of the road.
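A pattern matching process of the kind mentioned above can be sketched as a sliding-window search; this simple sum-of-squared-differences matcher is an illustrative stand-in for whatever matching method the image processing section 10 would actually use, and the template shapes are assumptions:

```python
import numpy as np

# Sketch of pattern matching for road markings other than lane lines
# (e.g. an arrow or a stop marking): the template is slid over the
# bird's-eye image and the best-matching position is returned.

def match_template(image, template):
    """Return the (y, x) position minimizing the sum of squared differences."""
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            ssd = np.sum((image[y:y + th, x:x + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```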
In the image display system 1 having the structure previously described, the front view camera 21 is arranged inside of a front bumper of the own vehicle, the rear view camera 22 is arranged inside of a rear bumper of the own vehicle, the right view camera 23 is arranged at the right wing mirror of the own vehicle, and the left view camera 24 is arranged at the left wing mirror of the own vehicle. However, the concept of the present invention is not limited by the exemplary embodiment. It is possible for the image display system 1 to use more or fewer than four in-vehicle cameras and to arrange the in-vehicle cameras at different positions. It is also possible to adjust the direction of the lens (i.e. the direction of the central axis of the lens used to acquire an image) of each of the in-vehicle cameras 21 to 24.
As shown in
The in-vehicle camera arranged at the right front corner section of the own vehicle has an image acquiring region toward a right front direction of the own vehicle which is at a right angle to the right side surface of the own vehicle as shown in
It is possible to further have a structure in which the in-vehicle camera arranged at the right front corner section of the own vehicle has an image acquiring region toward a right front direction of the own vehicle designated by the arrow shown in
In addition to the arrangement of the in-vehicle cameras in the image display system 1 shown in
In more detail, as shown in
However, it is possible to arrange the two in-vehicle cameras so that their central axes are separated by approximately 135 degrees, so long as their image acquiring regions do not overlap with each other.
As shown in
Further, as shown in
Still further, as shown in
In each of the arrangements of the in-vehicle cameras shown in
The first modification shown in
The image processing section 10 according to the exemplary embodiment is equivalent to the synthesized image generation device used in the claims. The in-vehicle cameras 21 to 24 are equivalent to the image acquiring section used in the claims. The process in step S110 performed by the image processing section 10 is equivalent to the image acquiring section used in the claims. The process in step S120 performed by the image processing section 10 is equivalent to the acquired image selection section used in the claims.
The processes in step S125 and step S130 performed by the image processing section 10 are equivalent to the synthesized image generation section used in the claims. The processes in step S135 and step S150 performed by the image processing section 10 are equivalent to the road marking extracting section used in the claims.
While specific embodiments of the present invention have been described in detail, it will be appreciated by those skilled in the art that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the present invention, which is to be given the full breadth of the following claims and all equivalents thereof.
Number | Date | Country | Kind |
---|---|---|---|
2013-215649 | Oct 2013 | JP | national |