The present application is a national stage application under 35 U.S.C. § 371(c) of PCT Application No. PCT/JP2017/016367, filed on Apr. 25, 2017, which is based on and claims the benefit of priority from Japanese Patent Application No. 2016-088183, filed on Apr. 26, 2016, the descriptions of which are incorporated herein by reference.
The present disclosure relates to a display control apparatus that displays an image on a display apparatus that is provided inside a vehicle, the image being captured by a camera that is mounted in the vehicle.
A technology is known in which a camera that is mounted in a vehicle captures an image of a predetermined area in the periphery of the vehicle, and the captured image is displayed on a display apparatus, such as a display, that is provided inside the vehicle.
Within such technology, an obstacle display technology is also known in which information indicating an obstacle that is present in the captured image is displayed together with the captured image. For example, PTL 1 discloses a technology in which, in addition to an image of a vehicle rear portion being displayed on a display apparatus, a pattern that indicates the direction of, and distance to, an obstacle detected by a rear sonar is displayed.
[PTL 1] JP-A-2000-142284
In the technology described in PTL 1, a driver can know the approximate direction of the obstacle and the distance to the obstacle from the position and color of the pattern that is displayed together with the captured image. However, as a result of detailed examination by the inventors, the following issue has been discovered. That is, information such as the position and color of the pattern alone is insufficient for ascertaining specific properties of the obstacle, including its shape and orientation.
According to an aspect of the present disclosure, it is desired to provide a technology for displaying information that enables a driver to easily ascertain properties including the shape and tilt of an obstacle that is present in an imaging area in the periphery of a vehicle.
A display control apparatus according to an aspect of the present disclosure is configured to display an image on a display apparatus that is provided inside a vehicle, the image being generated based on an image of a predetermined area in a periphery of the vehicle captured by a camera that is mounted in the vehicle. The display control apparatus includes an image acquiring unit, an obstacle identifying unit, and a control unit. Reference numbers within the parentheses in the claims indicate corresponding relationships with specific means described according to an embodiment, described hereafter as an aspect, and do not limit the technical scope of the present disclosure.
The image acquiring unit is configured to acquire a peripheral image that is an image based on the image captured by the camera. The peripheral image is either an image that directly shows the image captured by the camera or an image that is obtained through coordinate transformation of the image captured by the camera to an image viewed from another perspective (such as a bird's-eye-view image). The obstacle identifying unit is configured to identify a shape of an obstacle that is detected in an area appearing in the peripheral image acquired by the image acquiring unit, the shape including at least a tilt, in a road-surface direction, of a section of the obstacle that faces the vehicle.
The control unit is configured to generate a superimposed image in which a mark image, generated as a pattern that indicates the obstacle identified by the obstacle identifying unit, is superimposed onto a position corresponding to the obstacle in the peripheral image, and to display the generated superimposed image on the display apparatus. Furthermore, the control unit is configured to variably change properties of the mark image based on the tilt of the obstacle identified by the obstacle identifying unit.
According to the display control apparatus configured as described above, the properties of the mark image, which is the pattern that indicates an obstacle, can be freely changed based on the shape of the obstacle, including at least its tilt. In addition, by viewing the mark image that is superimposed onto the peripheral image, the driver of the vehicle can easily ascertain the shape and tilt of the obstacle.
The above-described object, other objects, characteristics, and advantages of the present disclosure will be further clarified through the detailed description hereafter, with reference to the accompanying drawings.
An embodiment of the present disclosure will hereinafter be described with reference to the drawings. The present disclosure is not limited to the embodiment described below and may be carried out according to various modes.
[Description of an Onboard Display System Configuration]
A configuration of an onboard display system 10 according to the embodiment will be described with reference to the drawings.
The camera 11 is an imaging apparatus that is set so as to face the periphery of the vehicle 1, such as ahead of, to the side of, or to the rear of the vehicle 1. The camera 11 is configured to capture an image of a peripheral area of the vehicle 1 and output data of the captured image (also referred to, hereafter, simply as a captured image) to the image processing unit 14.
The distance measuring unit 12 is a sensor that is configured to acquire information by scanning the area imaged by the camera 11. The information indicates the distance between an obstacle (such as another vehicle, a pedestrian, or a wall or a column of a building) that is present in the scanned area and the vehicle 1, and the direction of the obstacle when viewed from the vehicle 1. For example, the distance measuring unit 12 is realized by an ultrasonic sonar, a millimeter-wave radar, a laser radar, a stereo camera, a monocular camera, a periphery monitoring camera, or the like. The position, the shape of a border, the tilt of a face, and an approximate width of the obstacle can be recognized from the measurement results obtained by the distance measuring unit 12.
The display unit 13 is a display that is configured to display the image information provided by the image processing unit 14. For example, the display unit 13 is provided in a location that is easily visible to a driver of the vehicle 1, such as in an instrument panel of the vehicle 1.
The image processing unit 14 is an information processing apparatus that is mainly configured by a central processing unit (CPU), a random access memory (RAM), a read-only memory (ROM), a semiconductor memory such as a flash memory, an input/output interface, and the like (not shown). For example, the image processing unit 14 is realized by a microcontroller in which functions of a computer system are consolidated. The functions of the image processing unit 14 are actualized by the CPU running a program that is stored in a non-transitory tangible storage medium such as the ROM or the semiconductor memory. The image processing unit 14 may be configured by a single or a plurality of microcontrollers. The method for actualizing the functions of the image processing unit 14 is not limited to software. Some or all of the functions may be actualized through use of hardware combining logic circuits, analog circuits, and the like.
The image processing unit 14 performs a distance measurement process and an obstacle display process based on the above-described program. A detailed description of these processes will be given hereafter.
[Description of the Distance Measurement Process]
The steps in the distance measurement process performed by the image processing unit 14 will be described with reference to a flowchart in the drawings.
At step S100, the image processing unit 14 measures the distance to an obstacle that is present in the periphery of the vehicle 1 using the distance measuring unit 12 and acquires positional information related to the obstacle. Specifically, the image processing unit 14 continuously scans the periphery of the vehicle 1 using detection waves of a radar, a sonar, or the like that constitutes the distance measuring unit 12, and receives reflected waves from the obstacle. The image processing unit 14 thereby acquires the positional information that indicates a distribution of the distance to an obstacle present in the scanned area. Alternatively, the positional information that indicates a distribution of the distance to an obstacle may be acquired through use of a known image recognition technology in which the distance to an object is recognized based on an image that is captured by a stereo camera, a monocular camera, a periphery monitoring camera, or the like.
At step S102, the image processing unit 14 stores the positional information acquired at step S100, that is, the information that indicates a distribution of the distance between the vehicle 1 and an obstacle in the memory within the image processing unit 14. After step S102, the image processing unit 14 returns the process to step S100.
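As an illustrative sketch only, the flow of steps S100 and S102 might be expressed as follows. The sensor interface (`SonarStub`), scan range, and angular resolution used here are assumptions introduced for illustration and are not part of the disclosed configuration.

```python
import math
from typing import Dict, Optional

class SonarStub:
    """Hypothetical ranging sensor standing in for the distance measuring unit 12."""
    def measure(self, angle_rad: float) -> Optional[float]:
        # Returns the distance [m] of the nearest reflection, or None when no echo returns.
        return 2.5 if abs(angle_rad) < math.radians(30) else None

def distance_measurement_step(sensor: SonarStub, memory: Dict[int, float]) -> None:
    # S100: scan the periphery and receive reflected waves direction by direction.
    for angle_deg in range(-90, 91, 2):  # assumed scan range and resolution
        distance_m = sensor.measure(math.radians(angle_deg))
        if distance_m is not None:
            # S102: store the distance distribution as positional information.
            memory[angle_deg] = distance_m

memory: Dict[int, float] = {}
distance_measurement_step(SonarStub(), memory)  # repeated continuously in practice
```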
[Description of the Obstacle Display Process]
The steps in the obstacle display process performed by the image processing unit 14 will be described with reference to a flowchart in the drawings.
At step S200, the image processing unit 14 acquires the latest captured image amounting to a single frame from the camera 11. At step S202, the image processing unit 14 performs a coordinate transformation on the coordinates of the pixels that make up the captured image acquired at step S200, using a known bird's-eye-view conversion technique, and thereby converts the captured image of the camera 11 to a bird's-eye-view image that simulates a view looking down from a viewpoint set above the vehicle 1.
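Such a conversion is commonly implemented as a planar homography. A minimal sketch follows, assuming a flat road surface; the point correspondences and output size are illustrative placeholders, not calibration values from the embodiment.

```python
import numpy as np
import cv2

# Pixel corners of a road-plane patch in the camera image (assumed values).
src = np.float32([[420, 480], [860, 480], [1180, 720], [100, 720]])
# Where those corners should land in the top-down view (assumed values).
dst = np.float32([[300, 0], [500, 0], [500, 400], [300, 400]])

H = cv2.getPerspectiveTransform(src, dst)  # 3x3 road-plane homography

def to_birds_eye(captured: np.ndarray) -> np.ndarray:
    # Re-samples the captured frame as if overlooking from above the vehicle.
    return cv2.warpPerspective(captured, H, (800, 400))
```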
At step S204, the image processing unit 14 reads, from the memory, the latest positional information acquired through the above-described distance measurement process (see the flowchart described above). At step S206, the image processing unit 14 identifies, based on the positional information, the shape of an obstacle that is present in the imaged area, including the tilt of the section of the obstacle that faces the vehicle 1, and generates a mark image that corresponds to the identified obstacle.
The mark image is a pattern used to indicate the obstacle that is present in the bird's-eye-view image. Specific properties of the mark image generated at this time will be described hereafter. At step S208, the image processing unit 14 generates a superimposed image in which the mark image generated at step S206 is superimposed onto a position that corresponds to the obstacle that appears in the bird's-eye-view image generated at step S202. The image processing unit 14 then displays the generated superimposed image in the display unit 13.
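A minimal sketch of steps S206 and S208 is given below, assuming the vehicle-facing border of the obstacle is approximated by two end points taken from the positional information; the mark thickness, color, and blending ratio are illustrative assumptions.

```python
import math
import numpy as np
import cv2

def superimpose_mark(birds_eye: np.ndarray,
                     p1: tuple, p2: tuple,
                     thickness_px: int = 30,
                     alpha: float = 0.5) -> np.ndarray:
    # Tilt, in the road-surface direction, of the obstacle section facing
    # the vehicle, derived from two border points (image coordinates).
    angle_deg = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    center = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    length = math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    # S206: generate the mark image as a rectangle tilted like the obstacle.
    box = cv2.boxPoints((center, (length, float(thickness_px)), angle_deg))

    # S208: superimpose the mark semi-transparently at the obstacle position.
    overlay = birds_eye.copy()
    cv2.fillPoly(overlay, [box.astype(np.int32)], color=(0, 0, 255))
    return cv2.addWeighted(overlay, alpha, birds_eye, 1.0 - alpha, 0)
```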
Here, the image processing unit 14 is configured to generate the superimposed image by changing the properties of the mark image to be superimposed onto the image of the obstacle, based on the state of the obstacle, such as its shape, tilt, position, and color, as captured by the camera 11 and identified by the distance measuring unit 12. The properties of the mark image here include, for example, the shape, size, tilt, flashing, color, density, and transparency of the pattern. Hereafter, specific application examples of the mark image to be superimposed onto the image of the obstacle will be described with reference to the drawings.
The image processing unit 14 may be configured to periodically flash the mark image that is displayed so as to overlap the obstacle image.
The image processing unit 14 may be configured to arrange a mark image only for an obstacle that corresponds to a course on which the vehicle 1 is predicted to advance, or to an area obtained by extending the vehicle width frontward and rearward along the vehicle length. In this case, the mark image may not be displayed in other areas even when an obstacle is detected. Specifically, the image processing unit 14 predicts the course of the vehicle 1 by acquiring vehicle information that indicates a steering state of the vehicle 1 and the like. The image processing unit 14 then identifies the area of the predicted course, or the area in the frontward and rearward directions of the vehicle length, in the bird's-eye-view image based on information, such as the vehicle width and the vehicle length of the vehicle 1, registered in advance.
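As a simple sketch of the extended-width check (the straight-course case only; prediction of a curved course from the steering state is omitted here), with an assumed vehicle coordinate convention of x forward and y leftward:

```python
def in_display_target_area(x_m: float, y_m: float,
                           vehicle_width_m: float = 1.8,
                           lookahead_m: float = 10.0) -> bool:
    """True when an obstacle point lies in the strip obtained by extending
    the vehicle width frontward and rearward along the vehicle length;
    a mark image is arranged only for such obstacles. The width and
    lookahead values are illustrative assumptions."""
    return abs(y_m) <= vehicle_width_m / 2.0 and abs(x_m) <= lookahead_m
```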
The image processing unit 14 may be configured to arrange a mark image for an obstacle that corresponds to an area defined with reference to the vehicle width of the vehicle 1, taking into consideration the likelihood of contact between the vehicle 1 and the obstacle.
The image processing unit 14 may be configured to arrange a mark image related to the obstacle in an area that is wider than a width of a borderline (also referred to, hereafter, as a detection line) that indicates a shape of an obstacle that is detected by the radar or the sonar of the distance measuring unit 12. Specifically, the image processing unit 14 identifies the width of the detection line of the obstacle that is indicated by the positional information acquired by the distance measuring unit 12. The image processing unit 14 then identifies the area over which the mark image is arranged with reference to the width of the detection line.
The image processing unit 14 may be configured to recognize a border of an obstacle, which has been detected by the radar or the sonar of the distance measuring unit 12, from the bird's-eye-view image using image recognition. The image processing unit 14 may then arrange a mark image related to the obstacle along the recognized border.
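One possible sketch of this refinement, assuming the detection line runs roughly along an image row of the bird's-eye-view image; the search window and edge-detection thresholds are assumptions:

```python
import numpy as np
import cv2

def refine_border_row(birds_eye_gray: np.ndarray, detected_row: int,
                      window: int = 12) -> int:
    # Edge map of the bird's-eye-view image (Canny thresholds assumed).
    edges = cv2.Canny(birds_eye_gray, 50, 150)
    lo = max(0, detected_row - window)
    hi = min(edges.shape[0], detected_row + window)
    band = edges[lo:hi, :]
    if not band.any():
        return detected_row  # fall back to the sonar/radar detection line
    # The row with the most edge pixels approximates the visible border,
    # along which the mark image is then arranged.
    return lo + int(np.argmax(band.sum(axis=1)))
```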
The image processing unit 14 may be configured to change the properties (such as the shape, size, color, and transmittance) of the mark image to be superimposed onto an image of an obstacle based on the farness/nearness of the distance between the vehicle 1 and the obstacle. As a result, the driver can accurately ascertain the distance to the obstacle.
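As an illustration, such a distance-dependent style might be expressed as a simple mapping; the distance thresholds and the (BGR) color, transparency, and flashing choices are assumptions:

```python
def mark_style(distance_m: float) -> dict:
    # Nearer obstacles get a more conspicuous (opaque, flashing, red) mark.
    if distance_m < 1.0:
        return {"color": (0, 0, 255), "alpha": 0.8, "flash": True}    # near: red
    if distance_m < 3.0:
        return {"color": (0, 255, 255), "alpha": 0.6, "flash": False}  # mid: yellow
    return {"color": (0, 255, 0), "alpha": 0.4, "flash": False}        # far: green
```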
The image processing unit 14 may be configured to draw the mark image to be superimposed onto an image of an obstacle so as to be extended to an outer edge of a display area of the superimposed image in a direction corresponding to an upper side of the obstacle.
The image processing unit 14 may be configured to extend the shape of the mark image to be superimposed onto an image of an obstacle in the bird's-eye-view image in a radiating manner, taking into consideration distortion in the image that occurs when the captured image captured by the camera 11 is converted to the bird's-eye-view image (such as the image being extended in a radiating manner as it becomes farther from the center).
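A minimal sketch of such a radial extension follows; the uniform scale factor is a simplification introduced here (the actual stretch would depend on the conversion parameters):

```python
import numpy as np

def extend_radially(mark_vertices: np.ndarray, view_center: np.ndarray,
                    scale: float = 1.3) -> np.ndarray:
    # Push each mark vertex away from the view center so the mark widens
    # with distance, following the radial stretch of the conversion.
    return view_center + (mark_vertices - view_center) * scale
```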
The image processing unit 14 may be configured to draw the mark image to be superimposed onto an image of an obstacle in a mode in which a lower end side of the obstacle is emphasized.
The image processing unit 14 may be configured to recognize the color of the obstacle from the captured image and draw the mark image using a color that corresponds to a complementary color of the recognized color of the obstacle.
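For instance, the per-channel complement of the obstacle region's average color can serve as a simple stand-in for a complementary color; treating the region as a BGR array is an assumption of this sketch:

```python
import numpy as np

def complementary_mark_color(obstacle_region_bgr: np.ndarray) -> tuple:
    # Average color of the obstacle region, then per-channel complement,
    # so the mark stands out against the obstacle it overlaps.
    mean_bgr = obstacle_region_bgr.reshape(-1, 3).mean(axis=0)
    return tuple(int(255 - c) for c in mean_bgr)
```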
When the obstacle detected by the distance measuring unit 12 is presumed to be a sloped surface, the image processing unit 14 may be configured to also arrange the mark image in an area further towards the vehicle 1 than the detection line that indicates the border of the detected obstacle.
For example, when the radar or the sonar of the distance measuring unit 12 detects a sloped surface, such as an upward slope, an undetected sloped surface is likely to be continuing towards the vehicle 1 in an area below a lower limit of the detection area of the radar or the sonar in the vertical direction. Therefore, the mark image may also be arranged in the area further towards the vehicle 1 than the detection line that indicates the border of the detected sloped surface.
The image processing unit 14 may be configured to draw the mark image to be superimposed onto an image of an obstacle in a mode in which an area that is actually detected by the distance measuring unit 12 is given more emphasis than other areas.
The image processing unit 14 may be configured to display lines (referred to, hereafter, as grid lines) in the form of squares that serve as an indicator of the distance between the vehicle 1 and the obstacle in the superimposed image, based on the farness/nearness of the distance between the vehicle 1 and the obstacle. In addition, the size of the squares formed by the grid lines may be variable, based on the farness/nearness of the distance between the vehicle 1 and the obstacle. As a result, the driver can accurately ascertain the distance to the obstacle.
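A minimal sketch of drawing such grid lines follows; the pixels-per-meter scale and the rule for varying the square size with distance are assumptions:

```python
import numpy as np
import cv2

def draw_grid(image: np.ndarray, nearest_obstacle_m: float,
              px_per_m: float = 40.0) -> np.ndarray:
    # Finer squares when the obstacle is near, coarser when it is far.
    spacing_m = 0.5 if nearest_obstacle_m < 2.0 else 1.0
    step = int(spacing_m * px_per_m)
    out = image.copy()
    for x in range(0, out.shape[1], step):
        cv2.line(out, (x, 0), (x, out.shape[0]), (255, 255, 255), 1)
    for y in range(0, out.shape[0], step):
        cv2.line(out, (0, y), (out.shape[1], y), (255, 255, 255), 1)
    return out
```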
When the obstacle detected by the distance measuring unit 12 is presumed to be a vehicle, the image processing unit 14 may be configured to display, in a superimposing manner, a mark image that is composed of an icon that represents a vehicle so as to match the orientation and size of the vehicle detected as the obstacle.
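A minimal sketch of fitting a vehicle icon to the detected orientation and size is given below; the icon itself (a loaded image array), the convention that the vehicle length runs along the image x-axis, and clipping the rotation to the original canvas are all simplifying assumptions:

```python
import numpy as np
import cv2

def fit_vehicle_icon(icon: np.ndarray, length_px: int, width_px: int,
                     yaw_deg: float) -> np.ndarray:
    # Scale the icon to the detected vehicle's size (length along x),
    # then rotate it to the detected orientation before superimposition.
    resized = cv2.resize(icon, (length_px, width_px))
    center = (length_px / 2.0, width_px / 2.0)
    rotation = cv2.getRotationMatrix2D(center, yaw_deg, 1.0)
    return cv2.warpAffine(resized, rotation, (length_px, width_px))
```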
When drawing a mark image that is composed of an icon that represents a vehicle, the image processing unit 14 may be configured to use a single representative color that is acquired from an original image of the other vehicle onto which the mark image is to be superimposed.
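As a sketch, the mean color of the other vehicle's image region might serve as the representative color (a dominant-color method would also work), and a grayscale icon can then be tinted with it; both choices are assumptions of this illustration:

```python
import numpy as np

def representative_color(vehicle_region_bgr: np.ndarray) -> tuple:
    # Mean color as a simple stand-in for a representative/dominant color.
    return tuple(int(c) for c in vehicle_region_bgr.reshape(-1, 3).mean(axis=0))

def tint_icon(icon_gray: np.ndarray, color_bgr: tuple) -> np.ndarray:
    # Use the grayscale icon as a brightness mask filled with the color.
    mask = (icon_gray.astype(np.float32) / 255.0)[..., None]
    return (mask * np.float32(color_bgr)).astype(np.uint8)
```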
[Effects]
The following effects are achieved by the onboard display system according to the embodiment.
Based on the shape, such as the tilt and size, of an obstacle that is detected in the periphery of the vehicle 1, the properties, such as the orientation and shape, of the mark image that is the pattern indicating the obstacle can be freely changed. In addition, a display mode, such as the size, color, and flashing, of the mark image can be freely changed based on the distance to the obstacle. As a result of the driver of the vehicle viewing the superimposed image in which the mark image is superimposed onto the image of the obstacle in this way, the driver can easily ascertain the state of the obstacle.
The image processing unit 14 corresponds to an example of a display control apparatus. The processes at steps S200 and S202 performed by the image processing unit 14 correspond to an example of a process as an image acquiring unit. The process at step S206 performed by the image processing unit 14 corresponds to an example of a process as an obstacle identifying unit and a control unit.
A function provided by a single constituent element according to the above-described embodiments may be divided among a plurality of constituent elements. Functions provided by a plurality of constituent elements may be provided by a single constituent element. In addition, a part of a configuration according to the above-described embodiments may be omitted. Furthermore, at least a part of a configuration according to an above-described embodiment may be added to or replace a configuration according to another of the above-described embodiments. Any mode included in the technical concept specified by the wording of the claims is an embodiment of the present disclosure.
In addition, according to the above-described embodiment, a case is described in which a captured image captured by the camera 11 is converted to a bird's-eye-view image and the mark image is superimposed onto the image of an obstacle in the converted bird's-eye-view image. However, instead of the bird's-eye-view image, the mark image may be superimposed onto the captured image captured by the camera 11 itself. Alternatively, the mark image may be superimposed onto an image obtained through conversion to an image from a perspective other than the bird's-eye view.
The present disclosure can also be actualized in various modes, such as a program for enabling a computer to function as the above-described image processing unit 14, and a non-transitory tangible recording medium such as a semiconductor memory, in which the program is recorded.
Number | Date | Country | Kind |
---|---|---|---
JP2016-088183 | Apr 2016 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2017/016367 | 4/25/2017 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---
WO2017/188247 | 11/2/2017 | WO | A |
Number | Name | Date | Kind |
---|---|---|---
20070053551 | Kubo | Mar 2007 | A1 |
20070106475 | Kondoh | May 2007 | A1 |
20080205706 | Hongo | Aug 2008 | A1 |
20100220189 | Yanagi | Sep 2010 | A1 |
20120300075 | Yamamoto et al. | Nov 2012 | A1 |
20120320211 | Mitsugi | Dec 2012 | A1 |
20140085476 | Toyofuku | Mar 2014 | A1 |
20140118551 | Ikeda | May 2014 | A1 |
20140133700 | Seki | May 2014 | A1 |
20140176350 | Niehsen | Jun 2014 | A1 |
20140247352 | Rathi | Sep 2014 | A1 |
20150203036 | Kajiwara | Jul 2015 | A1 |
20150269450 | Tasaki | Sep 2015 | A1 |
20160155242 | Bean | Jun 2016 | A1 |
20160297430 | Jones | Oct 2016 | A1 |
20170144584 | Asaoka | May 2017 | A1 |
20180032823 | Ohizumi | Feb 2018 | A1 |
20180134217 | Peterson | May 2018 | A1 |
20180237069 | Gehin | Aug 2018 | A1 |
Number | Date | Country |
---|---|---
2000-142284 | May 2000 | JP |
2007-295043 | Nov 2007 | JP |
WO2011145141 | Jul 2013 | JP |
2016-052867 | Apr 2016 | JP |
WO-2004083889 | Sep 2004 | WO |
WO-2010070920 | Jun 2010 | WO |
WO-2011058822 | May 2011 | WO |
WO-2012076789 | Jun 2012 | WO |
Number | Date | Country
---|---|---
20190132543 A1 | May 2019 | US |