The present invention relates to a vehicle-mounted image processing device that recognizes the outside of a vehicle based on an image taken by a vehicle-mounted camera.
Patent Literature 1, for example, discloses a technique of taking images of the entire surroundings of a vehicle using four cameras mounted on the front, rear, left, and right of the vehicle, converting and combining these through-images to create a bird's eye view image, and recognizing a parking line based on the bird's eye view image.
Patent Literature 1: JP 2011-30140 A
Patent Literature 2: JP 2011-77772 A
Patent Literature 3: JP 2010-146478 A
Typically, apparatuses that provide a user with a bird's eye view image provide both the bird's eye view image and a through-image in the vehicle traveling direction. This is because, while a bird's eye view image is suitable for a driver to recognize the immediate surroundings of the vehicle, a through-image is more suitable for understanding a remote place or a three-dimensional object.
The same applies to an apparatus that implements an application for driver support by recognizing the outside of a vehicle through image processing, where the image provided to the user is reused for recognition so as to reduce the processing load and simplify the configuration of the apparatus. Of the images provided to a user, a bird's eye view image has a narrow viewing field (about 2 m in front of the vehicle) compared with an unprocessed through-image, and so is not suitable for recognizing a remote place. For instance, to recognize a parking line at a relatively high vehicle speed, such as at a highway service area, there is a need to recognize a parking line that lies beyond the viewing field of a bird's eye view image.
On the other hand, a through-image has a wider viewing field than a bird's eye view image and allows a remote place to be recognized, and so image processing to recognize a lane, for example, is typically performed on a through-image. Meanwhile, a bird's eye view image is easier to process than an unconverted through-image; for example, a calculation such as predicting departure from a white line is easier to perform on a bird's eye view image. In this way, recognition precision may be improved in some applications by using a bird's eye view image.
In view of these points, the present invention aims to provide a vehicle-mounted image processing device capable of recognizing the outside of a vehicle more precisely based on an image taken by a vehicle-mounted camera.
In order to solve these problems, a vehicle-mounted image processing device of the present invention includes: an image acquisition unit that acquires through-images taken by cameras disposed on the front, rear, and sides of a vehicle; a bird's eye view image generation unit that converts the acquired through-images to generate a bird's eye view image; and a parking line recognition unit that performs parking line recognition processing to recognize a parking line based on at least one of the through-images and the bird's eye view image. The parking line recognition unit performs first parking line recognition processing based on the bird's eye view image to recognize a parking line in a predetermined range of the bird's eye view image, and second parking line recognition processing based on the through-images to recognize a parking line in an area that is far away from the predetermined range of the bird's eye view image.
A vehicle-mounted image processing device of the present invention recognizes a parking line in a predetermined range based on a bird's eye view image, and recognizes a parking line in an area far away from the predetermined range of the bird's eye view image based on a through-image, and so can perform image processing based on an appropriate image depending on the situation and can recognize the parking line more precisely. Problems, configurations, and advantageous effects other than those described above will be made clear by the following description of embodiments.
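The two-stage scheme described above can be sketched as follows. This is an editorial illustration, not code from the specification: the function names, the data shape (lines carrying a distance field), and the 2 m range value (taken from the viewing-field discussion above) are all assumptions.

```python
# Sketch of two-stage parking line recognition: near range on the bird's
# eye view image, far area on the through-image. All names are illustrative.

BIRD_EYE_RANGE_M = 2.0  # approximate forward coverage of the bird's eye view


def first_recognition(bird_eye_lines):
    # Recognize parking lines within the bird's eye view's predetermined range.
    return [l for l in bird_eye_lines if l["distance_m"] <= BIRD_EYE_RANGE_M]


def second_recognition(through_lines):
    # Recognize parking lines in the area beyond that range, using the through-image.
    return [l for l in through_lines if l["distance_m"] > BIRD_EYE_RANGE_M]


def recognize_parking_lines(bird_eye_lines, through_lines):
    # Combine the near-range and far-range results.
    return first_recognition(bird_eye_lines) + second_recognition(through_lines)
```

The point of the split is that each pass operates only on the image best suited to its range, which is what allows the combined result to cover a more distant area than the bird's eye view alone.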
The following describes embodiments of the present invention, with reference to the drawings.
[Embodiment 1]
The vehicle-mounted image processing device is implemented by hardware and software in a camera device mounted on a vehicle 10. The vehicle-mounted image processing device includes an image acquisition unit 12, a bird's eye view image generation unit 13, an image synthesis unit 14, and a parking line recognition unit 15 as its internal functions.
The image acquisition unit 12 acquires through-images taken by a front camera 1, a rear camera 2, a right side camera 3, and a left side camera 4 that are attached on the front, the rear, the right side and the left side, respectively, of the vehicle 10.
The bird's eye view image generation unit 13 converts the through-images acquired by the image acquisition unit 12 to generate a bird's eye view image 25 whose point of view is shifted to a position above the vehicle 10. The bird's eye view image 25 is generated using a well-known technique. The image synthesis unit 14 synthesizes at least one of the through-images with the bird's eye view image. The parking line recognition unit 15, serving as an outside recognition unit, performs processing to recognize the outside parking line WL from the synthesized image produced by the image synthesis unit 14, based on at least one of the through-images and the bird's eye view image.
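The specification only says the conversion uses a well-known technique; one common such technique is an inverse perspective mapping, in which each through-image pixel is related to a ground-plane position by a 3×3 homography. A minimal sketch (the function names and the use of a plain nested-list matrix are illustrative assumptions):

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 homography H (row-major nested lists)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v


def warp_points(H, points):
    # Project a set of through-image points onto the bird's eye ground plane.
    return [apply_homography(H, x, y) for x, y in points]
```

In practice H would be calibrated per camera from its mounting height and angle, and the four per-camera results stitched into the single top-down view 25.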
In the example of
The parking line WL includes marking lines WL1 and WL2 that partition both sides in the vehicle width direction, and a marking line WL3 that partitions in the vehicle front-rear direction. Each of the through-images 21 to 24 shows the part of the parking line WL located around the vehicle. The bird's eye view image 25, generated by converting the through-images 21 to 24, shows the marking lines WL1 and WL2 and the front-side marking line WL3 surrounding the vehicle 10.
The image synthesis unit 14 acquires shift position information on the vehicle 10 (Step S101). The shift position information is acquired, for example, from a controller that controls a driving system of the vehicle 10 via a CAN. The shift position information indicates the shift position of the vehicle 10: the position of the shift lever in the case of a manual transmission, or the position of the selector lever in the case of an automatic transmission.
Then, a determination is made based on the shift position information whether reverse (R) is selected (Step S102). When reverse (R) is selected (YES at Step S102), the bird's eye view image 25 and a through-image 22 of the rear camera 2 are synthesized (Step S103). When a position other than reverse (R) is selected (NO at Step S102), the bird's eye view image 25 and a through-image 21 of the front camera 1 are synthesized (Step S104). Note that although one through-image and a bird's eye view image are synthesized in this example, a plurality of through-images may be synthesized with a bird's eye view image.
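The shift-position branch at Steps S102 to S104 reduces to a simple selection, sketched below (illustrative only; the string labels are assumptions, not identifiers from the specification):

```python
def select_synthesis_sources(shift_position, extra=()):
    """Choose which through-image to synthesize with the bird's eye view.

    Step S102: reverse selected -> rear camera image (S103);
    otherwise -> front camera image (S104). Additional through-images
    may also be included, per the note above.
    """
    base = "rear" if shift_position == "R" else "front"
    return ["bird_eye", base, *extra]
```

The rationale is that the synthesized view should always contain the through-image facing the direction of travel.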
The parking line recognition unit 15 firstly performs first parking line recognition processing to recognize a parking line based on a bird's eye view image 25 (Step S111), and then performs second parking line recognition processing to recognize the parking line based on a through-image (Step S112).
As illustrated in
Then, determinations are made whether the thus selected two white lines have a difference in angle of a predetermined value (Thθmax) or less (Step S128), whether the distance between the two white lines is within a predetermined range (ThWmin to ThWmax) (Step S129), and whether displacement between the lower ends of the two white lines is within a predetermined range (ThBmin to ThBmax) (Step S130). When all of the conditions at Steps S128 to S130 hold, the procedure shifts to Step S131. At Step S131, the coordinate positions of the four corner points of the parking line WL, i.e., the upper end and the lower end of the white line (marking line WL2) on the left of the vehicle and the upper end and the lower end of the white line (marking line WL1) on the right of the vehicle, are registered.
Then a determination is made whether all combinations have been checked (Step S132); when they have (YES at Step S132), the first parking line recognition processing ends. When they have not (NO at Step S132), the procedure returns to the processing to select two white lines (Step S127), and the above processing is repeated until it is determined that all of the combinations have been checked (YES at Step S132).
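The loop over white-line pairs at Steps S127 to S132 can be sketched as follows. The threshold names mirror those in the text, but their numeric values, the line data shape, and the function name are illustrative assumptions:

```python
from itertools import combinations

# Assumed threshold values; the specification names the thresholds only.
TH_THETA_MAX = 5.0               # Thθmax: max angle difference (degrees)
TH_W_MIN, TH_W_MAX = 2.0, 3.5    # ThWmin..ThWmax: distance between lines (m)
TH_B_MIN, TH_B_MAX = -0.3, 0.3   # ThBmin..ThBmax: lower-end displacement (m)


def find_parking_frames(white_lines):
    """white_lines: dicts with 'angle', 'x', 'top' and 'bottom' (x, y) points."""
    frames = []
    for a, b in combinations(white_lines, 2):                    # S127
        left, right = sorted((a, b), key=lambda l: l["x"])
        if abs(left["angle"] - right["angle"]) > TH_THETA_MAX:   # S128
            continue
        if not (TH_W_MIN <= right["x"] - left["x"] <= TH_W_MAX): # S129
            continue
        disp = left["bottom"][1] - right["bottom"][1]
        if not (TH_B_MIN <= disp <= TH_B_MAX):                   # S130
            continue
        # S131: register the four corner points of the parking frame
        frames.append((left["top"], left["bottom"],
                       right["top"], right["bottom"]))
    return frames
```

Iterating `combinations` exhausts every pair exactly once, which corresponds to the "all combinations checked" exit condition at Step S132.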
As illustrated in
The vehicle-mounted image processing device in the present embodiment recognizes a parking line using both a bird's eye view image and a through-image, and so, as compared with the case of using a bird's eye view image only, a parking line can be recognized in a more distant area.
In the parking line recognition method stated above, both the first parking line recognition processing and the second parking line recognition processing are always performed. Instead, the device may be configured to select, as needed, between performing both of the first and second parking line recognition processing and performing only one of them.
As illustrated as modification example 1 in
When it is determined that the vehicle speed vsp is higher than the predetermined threshold thvsp1 (YES at Step S163), the second parking line recognition processing is performed based on a through-image, and when the vehicle speed vsp is the threshold thvsp1 or lower (NO at Step S163), the procedure ends.
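Modification example 1 thus always runs the first processing and gates the second on vehicle speed. A sketch (the threshold value and the callable-based structure are illustrative assumptions):

```python
TH_VSP1 = 20.0  # km/h; assumed value for the threshold thvsp1


def modified_recognition(vsp_kmh, run_first, run_second):
    """Always run the first (bird's eye view) recognition; run the second
    (through-image) recognition only when vsp exceeds thvsp1 (Step S163)."""
    results = run_first()
    if vsp_kmh > TH_VSP1:      # YES at S163
        results += run_second()
    return results             # NO at S163: first-pass results only
```

This reflects the idea that far-range recognition is only needed at the higher speeds where a distant parking line matters.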
The configuration example of
[Embodiment 2]
Referring next to
The present embodiment has a feature in that a user is allowed to switch the display of a synthesized image.
As illustrated in
For instance, as illustrated in
When a parking line is recognized based on a bird's eye view image (Step S211), the parking line recognition unit 15 determines whether the in-vehicle monitor currently displays a synthesized image of the bird's eye view image and a through-image (Step S212). When it does (YES at Step S212), the parking line recognition unit also recognizes a parking line based on the through-image (Step S213). When it does not (NO at Step S212), the parking line recognition unit continues recognizing a parking line based on the bird's eye view image alone.
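The display-state branch at Step S212 can be sketched as a small selector (illustrative; the labels are assumptions):

```python
def images_for_recognition(monitor_shows_synthesized):
    """Step S212: recognize on the through-image as well only when the
    monitor already shows the synthesized bird's-eye + through-image."""
    if monitor_shows_synthesized:          # YES at S212
        return ("bird_eye", "through")     # S213: add through-image recognition
    return ("bird_eye",)                   # continue bird's-eye-only recognition
```

The design choice is that when the device's input image is the monitor image itself, recognition can only use what is actually on screen, so the recognition path must track the display state.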
In a system that allows a user to select any monitor display image from a plurality of display contents, for example, or in a device where the input image of the vehicle-mounted image processing device is the same as the monitor display screen, the present embodiment allows a parking line to be recognized in accordance with the state of the display screen.
[Embodiment 3]
Referring next to
The present embodiment has a feature in that, in addition to parking line recognition of Embodiment 1 as stated above, a pedestrian is detected based on a bird's eye view image and a through-image.
As illustrated in
The first pedestrian detection processing based on a bird's eye view image and the second pedestrian detection processing based on a through-image may be performed using well-known techniques. For instance, the first pedestrian detection processing to detect a pedestrian based on a bird's eye view image may be based on a method using an optical flow described in JP 2011-77772 A (Patent Literature 2), and the second pedestrian detection processing to detect a pedestrian based on a through-image may be based on a method using an optical flow described in JP 2010-146478 A (Patent Literature 3).
As illustrated in
When it is determined that the vehicle speed vsp exceeds the predetermined threshold thvsp2 (YES at Step S322), the second pedestrian detection processing based on a through-image is performed. When it is determined that the vehicle speed vsp is the threshold thvsp2 or less (NO at Step S322), the first pedestrian detection processing based on a bird's eye view image is performed (Step S324).
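The speed-based selection at Step S322 reduces to a single comparison, sketched below (the threshold value and the return labels are illustrative assumptions):

```python
TH_VSP2 = 10.0  # km/h; assumed value for the threshold thvsp2


def select_pedestrian_detection(vsp_kmh):
    """S322: above the threshold, use the far-reaching through-image
    detection; otherwise the near-field bird's eye view detection (S324)."""
    if vsp_kmh > TH_VSP2:
        return "second_through_image"
    return "first_bird_eye"
```

At higher speeds pedestrians must be detected farther ahead, which the through-image's wider viewing field supports; at low speeds the bird's eye view covers the relevant near field.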
The configuration example of
[Embodiment 4]
Referring next to
The present embodiment has a feature in that a scheduling unit 18 is added to the configuration of Embodiment 3.
As illustrated in
As illustrated in
The vehicle-mounted image processing device includes a CPU as hardware, having, for example, a first image processing circuit made up of a dedicated image processing chip and a second image processing circuit made up of a general-purpose chip. The scheduling unit 18 allocates a use rate to the first image processing circuit and the second image processing circuit in accordance with the vehicle-speed condition.
For instance, as illustrated in
When the vehicle speed vsp is higher than the threshold th_vlow, only the parking line recognition processing is performed at Step S414, and use allocation is made so that the first image processing circuit performs 50% of the parking line recognition processing and the second image processing circuit performs the remaining 50%.
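The scheduling unit's allocation can be sketched as a table keyed on the speed condition. Only the high-speed branch (Step S414) is fully visible in the text above; the low-speed branch here (both applications running, one circuit dedicated to each) is an assumption modeled on the simultaneous-operation behavior described below, and the threshold value is likewise assumed:

```python
TH_VLOW = 10.0  # km/h; assumed value for the threshold th_vlow


def allocate_use_rates(vsp_kmh):
    """Return {task: (first_circuit_%, second_circuit_%)} per speed condition."""
    if vsp_kmh > TH_VLOW:
        # S414: parking line recognition only, split evenly across both circuits
        return {"parking_line": (50, 50)}
    # Assumed low-speed branch: both applications, one circuit each
    return {"parking_line": (100, 0), "pedestrian_detection": (0, 100)}
```

Splitting a single task 50/50 when it runs alone is what shortens the processing cycle, while the dedicated-circuit split enables the two applications to run simultaneously.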
The present embodiment can allocate different processing to each of the chips when two applications operate simultaneously, whereby their simultaneous operation can be implemented. When only one application operates, the two chips are used together to shorten the processing cycle.
[Embodiment 5]
Referring next to
The present embodiment has a feature in that a lane departure prediction unit 19 is provided instead of the pedestrian detection unit 17 of Embodiment 4.
As illustrated in
As illustrated in
As illustrated in
The bird's eye view image 25 is easier to handle in terms of image processing than the through-image 22. That is, lane departure can be predicted more quickly and precisely based on the bird's eye view image 25 than on the through-image 22.
As illustrated in
On the other hand, when it is determined that the vehicle speed vsp is higher than the low-speed threshold th_vlow (NO at Step S522), a determination is made whether the vehicle speed vsp is the high-speed threshold th_vhigh or less (Step S524). When it is determined that the vehicle speed vsp is the high-speed threshold th_vhigh or less (YES at Step S524), both of the parking line recognition processing and the lane departure prediction processing are performed (Step S525). On the other hand, when it is determined that the vehicle speed vsp is higher than the high-speed threshold th_vhigh (NO at Step S524), only the lane departure prediction processing is performed (Step S526).
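The two-threshold selection above yields three speed bands, sketched below. The behavior of the lowest band (YES at Step S522) is not fully visible in the text; parking-line-only operation is assumed there by symmetry with the other bands, and the threshold values are assumptions:

```python
TH_VLOW, TH_VHIGH = 10.0, 60.0  # km/h; assumed values for th_vlow, th_vhigh


def select_applications(vsp_kmh):
    """Choose which applications run, per the vehicle-speed bands."""
    if vsp_kmh <= TH_VLOW:                          # YES at S522
        return ["parking_line"]                     # assumed low-speed branch
    if vsp_kmh <= TH_VHIGH:                         # YES at S524
        return ["parking_line", "lane_departure"]   # S525: both applications
    return ["lane_departure"]                       # S526: lane departure only
```

Parking happens at low speed and lane departure matters at high speed, so the middle band is the only one that needs both applications at once.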
The vehicle-mounted image processing device includes a CPU as hardware, having, for example, a first image processing circuit made up of a dedicated image processing chip and a second image processing circuit made up of a general-purpose chip. The scheduling unit 18 allocates a use rate to the first image processing circuit and the second image processing circuit in accordance with the vehicle-speed condition.
For instance, as illustrated in
When the vehicle speed vsp is higher than the low-speed threshold th_vlow and is the high-speed threshold th_vhigh or less, both of the parking line recognition processing and the lane departure prediction processing are performed at Step S525, where use allocation is performed so that the first image processing circuit performs all (100%) of the parking line recognition processing and the second image processing circuit performs all (100%) of the lane departure prediction processing.
Then when the vehicle speed vsp is higher than the high-speed threshold th_vhigh, the lane departure prediction processing only is performed at Step S526, where use allocation is performed so that the first image processing circuit performs 50% of the lane departure prediction processing and the second image processing circuit performs the remaining 50% of the lane departure prediction processing.
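The circuit-use allocation for this embodiment can likewise be sketched as a table. The mid-band (Step S525) and high-band (Step S526) allocations follow the text above; the low-band allocation (50/50 on the single task) is an assumption by symmetry with the high band, and the threshold values are assumed:

```python
TH_VLOW, TH_VHIGH = 10.0, 60.0  # km/h; assumed values for th_vlow, th_vhigh


def allocate_circuit_use(vsp_kmh):
    """Return {task: {'first': %, 'second': %}} per the vehicle-speed bands."""
    if vsp_kmh <= TH_VLOW:
        # Assumed: parking line recognition only, split evenly
        return {"parking_line": {"first": 50, "second": 50}}
    if vsp_kmh <= TH_VHIGH:
        # S525: one dedicated circuit per application
        return {"parking_line": {"first": 100, "second": 0},
                "lane_departure": {"first": 0, "second": 100}}
    # S526: lane departure prediction only, split evenly
    return {"lane_departure": {"first": 50, "second": 50}}
```

As in Embodiment 4, a lone task is split across both circuits to shorten its cycle, while concurrent tasks each get a dedicated circuit.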
The present embodiment can allocate different processing to each of the chips when two applications operate simultaneously, whereby their simultaneous operation can be implemented. When only one application operates, the two chips are used together to shorten the processing cycle.
That concludes the detailed description of the embodiments of the present invention. The present invention is not limited to the above-described embodiments and may include various modifications without departing from the spirit of the present invention recited in the claims. For instance, the detailed configurations of the embodiments described above are provided for explanatory convenience, and not all of them are always necessary for the present invention. A part of one embodiment may be replaced with the configuration of another embodiment, or the configuration of one embodiment may be added to that of another. Another configuration may be added to, deleted from, or substituted for a part of the configuration of each embodiment.
Number | Date | Country | Kind |
---|---|---|---|
2012-258984 | Nov 2012 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2013/081898 | 11/27/2013 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2014/084256 | 6/5/2014 | WO | A |
Number | Date | Country |
---|---|---|
104464375 | Mar 2015 | CN |
EP 1690777 | Aug 2006 | DE |
2 586 674 | May 2013 | EP |
2004-254219 | Sep 2004 | JP |
2007-161193 | Jun 2007 | JP |
2010-146478 | Jul 2010 | JP |
2011-30140 | Feb 2011 | JP |
2011-77772 | Apr 2011 | JP |
WO 2011162108 | Dec 2011 | WO |
Entry |
---|
International Search Report (PCT/ISA/210) dated Dec. 24, 2013 with English-language translation (four (4) pages). |
Japanese-language International Preliminary Report on Patentability (PCT/IPEA/409) dated Jan. 5, 2015 (three (3) pages). |
Number | Date | Country | |
---|---|---|---|
20150310285 A1 | Oct 2015 | US |