The present disclosure relates to an identification apparatus, an identification method, and a non-transitory recording medium storing an identification program.
Conventionally, a system for generating a three-dimensional model using two cameras is known.
Recently, however, automated parking systems have become widespread. In such a system, it is important to accurately detect whether or not there is a space in which a vehicle can be parked. It is therefore required to accurately identify a boundary between a parked vehicle (object) and a space.
In addition, a distance measurement method which uses time of flight (referred to as the “time of flight (TOF) method” below) is known. In a case where the distance from the subject vehicle to a parked vehicle is calculated by using the TOF method in order to identify a boundary between the parked vehicle and a space, it is important to accurately estimate the distance from the subject vehicle to the parked vehicle.
A purpose of the present disclosure is to accurately identify a boundary between an object and a space.
One aspect of the present disclosure is an identification apparatus, including: circuitry that receives first distance image information obtained by imaging an object from a first point, using an imaging apparatus, and second distance image information obtained by imaging the object from a second point different from the first point, using the imaging apparatus; derives first distance information from the first point to a plurality of feature points of the object based on the first distance image information and derives second distance information from the second point to the plurality of feature points based on the second distance image information; identifies a boundary between the object and a space based on the first distance information and the second distance information; and outputs an identification result from the identifying to an external apparatus.
One aspect of the present disclosure may be any of a method and a non-transitory recording medium storing a program.
According to the present disclosure, it is possible to accurately identify a boundary between an object and a space.
Hereinafter, surrounding monitoring system 1 mounted with identification apparatus 100 according to one embodiment of the present disclosure will be described in detail with reference to the drawings. The embodiment described below is merely an example, and the present disclosure is not limited to the embodiment.
Surrounding monitoring system 1 is mounted on, for example, vehicle V. Hereinafter, surrounding monitoring system 1 will be described as a system for monitoring a side of vehicle V, but it may monitor portions other than the side of vehicle V (the front, the rear, or all circumferential directions).
As illustrated in
Imaging apparatus 200 is attached to, for example, a side surface of vehicle V in a direction orthogonal to a travel direction of vehicle V (refer to
Light source 210 is attached so as to emit invisible light (for example, infrared light or near infrared light) modulated as pulses or a sinusoidal wave toward an imaging range.
Image sensor 220 is, for example, a complementary metal oxide semiconductor (CMOS) image sensor, and is attached approximately at the same location as light source 210 such that an optical axis thereof extends in the direction orthogonal to the travel direction of vehicle V.
Identification apparatus 100 is, for example, an electronic control unit (ECU), and includes an input terminal, an output terminal, a processor, a program memory, and a main memory mounted on a control board in order to control lateral monitoring of vehicle V.
The processor executes a program stored in the program memory using the main memory, processes various signals received through the input terminal, and outputs various control signals to light source 210 and image sensor 220 through the output terminal.
Identification apparatus 100 functions as imaging controller 110, distance measurer 120, identifier 130, storage section 140, and the like by the processor executing the program, as illustrated in
Imaging controller 110 controls various conditions (specifically, a pulse width, a pulse amplitude, a pulse interval, the number of pulses, and the like) of the light emitted from light source 210 and outputs a control signal to light source 210.
In addition, imaging controller 110 controls various conditions (specifically, exposure time, exposure timing, the number of exposures, and the like) under which image sensor 220 receives the return light, and outputs a control signal to peripheral circuits included in image sensor 220.
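As a purely illustrative sketch, the emission and exposure conditions listed above can be grouped into two small configuration structures. The Python below is not part of the disclosure; every name (EmissionConfig, pulse_width_ns, and so on) is a hypothetical stand-in for the conditions the controller manages.

```python
from dataclasses import dataclass

@dataclass
class EmissionConfig:
    """Conditions of the light emitted from light source 210."""
    pulse_width_ns: float      # pulse width Wa
    pulse_amplitude: float     # pulse amplitude
    pulse_interval_ns: float   # pulse interval
    pulse_count: int           # number of pulses per frame

@dataclass
class ExposureConfig:
    """Conditions under which image sensor 220 receives return light."""
    exposure_time_ns: float           # exposure time Tx
    exposure_offsets_ns: list[float]  # exposure timings relative to emission
    exposure_count: int               # number of exposures per frame

def build_control_signals(emission: EmissionConfig, exposure: ExposureConfig) -> dict:
    # Bundle the settings that imaging controller 110 would send to
    # light source 210 and the peripheral circuits of image sensor 220.
    return {"light_source": emission, "image_sensor": exposure}
```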
Image sensor 220 outputs an infrared image signal and a distance image signal relating to the imaging range to identification apparatus 100 at a predetermined cycle (predetermined frame rate) according to the emission and exposure control described above.
Each pixel of the distance image output from image sensor 220 to identification apparatus 100 includes distance information derived by the TOF method.
Here, an example of distance measurement using the TOF method will be described. As illustrated in
Image sensor 220 is controlled by imaging controller 110 to perform exposure at timings based on the emission timings of first pulse Pa and second pulse Pb. Specifically, as exemplified in
The first exposure starts simultaneously with the rise of first pulse Pa emitted from light source 210 and ends after preset exposure time Tx. The first exposure aims to receive the return light component of first pulse Pa.
Output Oa of image sensor 220 by the first exposure includes return light component S0 with a hatched diagonal lattice shape and background component BG with dotted hatching. The amplitude of return light component S0 is smaller than the amplitude of first pulse Pa.
Here, the time difference between the rising edges of first pulse Pa and return light component S0 is denoted by Δt. Δt is the time necessary for the invisible light to travel the distance dt from imaging apparatus 200 to target T and back.
The second exposure starts simultaneously with the fall of second pulse Pb and ends after exposure time Tx, so that only the delayed part of the return light falls within the exposure window. The second exposure aims to receive part of the return light component of second pulse Pb.
Output Ob of image sensor 220 based on the second exposure includes partial return light component S1 (refer to the hatched portion of a diagonal lattice shape), rather than the entire return light component, and background component BG with dotted hatching.
Above-described component S1 is represented by the following equation (1).
S1=S0×(Δt/Wa)  (1)
The third exposure starts at timing that does not include return light components of first pulse Pa and second pulse Pb and ends after exposure time Tx. The third exposure aims to receive only background component BG which is an invisible light component irrelevant to the return light component.
Output Oc of image sensor 220 based on the third exposure includes only background component BG with dotted hatching.
Distance dt from imaging apparatus 200 to target T can be derived from the relationship between the emission light and the return light described above by the following equations (2) to (4).
S0=Oa−BG  (2)
S1=Ob−BG  (3)
dt=c×(Δt/2)={(c×Wa)/2}×(Δt/Wa)={(c×Wa)/2}×(S1/S0)  (4)
Here, c is the speed of light.
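As a minimal sketch of equations (1) to (4), and assuming per-pixel raw exposure outputs Oa, Ob, and Oc as floating-point values with the pulse width Wa given in seconds (names chosen here only for illustration), the distance dt can be computed as follows.

```python
C = 299_792_458.0  # speed of light c [m/s]

def tof_distance(oa: float, ob: float, oc: float, wa_s: float) -> float:
    """Derive distance dt [m] for one pixel from the three exposures."""
    s0 = oa - oc  # equation (2): S0 = Oa - BG, where BG = Oc
    s1 = ob - oc  # equation (3): S1 = Ob - BG
    if s0 <= 0.0:
        raise ValueError("no return light component detected")
    # equation (4): dt = {(c * Wa) / 2} * (S1 / S0)
    return (C * wa_s / 2.0) * (s1 / s0)

# Example: Wa = 30 ns and S1/S0 = 0.5 give dt of about 2.25 m
# print(tof_distance(oa=1.5, ob=1.0, oc=0.5, wa_s=30e-9))
```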
In addition, in the present embodiment, image sensor 220 generates an image signal by adding information of a plurality of pixels adjacent to each other, and performs a so-called lattice transformation. However, in the present disclosure, generating the image signal by adding the information of the plurality of pixels adjacent to each other is not essential.
Distance measurer 120 extracts feature points of a parked vehicle located on a side of vehicle V from an infrared image or a distance image received from image sensor 220, specifies pixels corresponding to the feature points in the distance image, and derives distances to the feature points based on distance information included in the distance image. The feature points are points determined in advance according to a predetermined rule; in a vehicle, for example, the feature points are a portion with a high reflectivity of a light wave, such as a headlight or a doorknob, and a portion spaced apart from that portion by a predetermined distance.
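A hedged illustration of this rule: threshold the infrared image for high-reflectivity pixels (for example, a headlight or a doorknob) and add, for each, a companion point offset by a predetermined pixel distance. The threshold, offset, and function name are assumptions, not values from the disclosure.

```python
import numpy as np

def extract_feature_points(ir_image: np.ndarray,
                           reflectivity_threshold: float = 0.8,
                           offset_px: int = 20) -> list[tuple[int, int]]:
    """Return (row, col) pixels picked by the predetermined rule."""
    norm = ir_image / max(float(ir_image.max()), 1e-9)  # avoid divide-by-zero
    rows, cols = np.nonzero(norm >= reflectivity_threshold)
    bright = list(zip(rows.tolist(), cols.tolist()))
    # companion points spaced a predetermined distance from each bright point
    width = ir_image.shape[1]
    companions = [(r, min(c + offset_px, width - 1)) for r, c in bright]
    return bright + companions
```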
Identifier 130 identifies a parkable space based on the distance to the feature point derived by distance measurer 120.
Storage section 140 stores various types of information used for distance measurement processing and identification processing.
A signal relating to the parkable space is output from surrounding monitoring system 1. The information is transmitted to, for example, an advanced driver assistance system (ADAS) ECU. The ADAS ECU performs automated parking of vehicle V using the information.
Next, the distance measurement processing and the identification processing performed by distance measurer 120 and identifier 130 of identification apparatus 100 will be described in detail with reference to a flowchart illustrated in
First, in step S1, distance measurer 120 extracts a plurality of feature points Ci (i = 1 to N, where N is a natural number) of parked vehicle P from the infrared image or the distance image received from image sensor 220 and specifies pixel P1i corresponding to feature point Ci in the distance image. At the time the processing of step S1 is performed, vehicle V is located at a first point. Unlike a stereo camera, the TOF camera has no pixel shift of a target between the infrared image and the distance image. Accordingly, whichever image feature point Ci is extracted from, the pixel locations on the screen are the same, and it is a matter of course that the effects described in the present embodiment are unchanged.
In the following step S2, distance measurer 120 derives a distance from vehicle V to feature point Ci based on the distance information of pixel P1i specified in step S1. In the following description, a “distance from vehicle V to feature point Ci” has the same meaning as a “distance from imaging apparatus 200 to feature point Ci”.
In the present embodiment, the distance from vehicle V to feature point Ci is derived by using the distance information of pixel P1i together with the distance information of a plurality of pixels existing above and below pixel P1i in the distance image.
This is due to the following reason. In the distance image obtained by imaging parked vehicle P using imaging apparatus 200 illustrated in
Accordingly, by additionally using the distance information of the plurality of pixels Pi1 to Pi6 existing above and below pixel Pi, which are considered to have approximately the same distance information as pixel Pi, the distance from vehicle V to feature point Ci is regarded as having a predetermined range. It is a matter of course that, in a case where the coordinate system of the image sensor does not coincide with the coordinate system of the target, the coordinate systems are aligned by correcting yaw, pitch, and roll; thereafter, when the pixels located above and below pixel Pi corresponding to feature point Ci are extracted, the closest pixels may be selected, or their values may be estimated from information of peripheral pixels. The aggregation of these vertical neighbors into a distance range is sketched below.
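A minimal sketch of this aggregation, assuming the distance image is a 2D array of per-pixel TOF distances and that representing the result as a (min, max) interval is an acceptable form of the “predetermined range”; the function name and neighbor count are illustrative.

```python
import numpy as np

def distance_range(distance_image: np.ndarray, row: int, col: int,
                   n_neighbors: int = 3) -> tuple[float, float]:
    """Combine pixel Pi with up to n_neighbors pixels above and below it
    (pixels Pi1 to Pi6 in the text) into a (min, max) distance range."""
    top = max(0, row - n_neighbors)
    bottom = min(distance_image.shape[0] - 1, row + n_neighbors)
    column = distance_image[top:bottom + 1, col]
    return float(column.min()), float(column.max())
```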
In step S3 subsequent to step S2, distance measurer 120 determines whether or not vehicle V has moved from the first point described above to a predetermined second point.
If it is determined in step S3 that vehicle V has not moved to the second point (step S3: NO), the processing of step S3 is repeated.
Meanwhile, if it is determined in step S3 that vehicle V has moved to the second point (step S3: YES), the processing proceeds to step S4.
In step S4, distance measurer 120 specifies pixel P2i corresponding to feature point Ci of parked vehicle P in the infrared image or the distance image received from image sensor 220.
In step S5, distance measurer 120 derives a distance from vehicle V to feature point Ci based on the distance information of pixel P2i specified in step S4. Also at this time, in the same manner as in step S2, distance measurer 120 uses the distance information of pixel P2i together with the distance information of a plurality of pixels existing above and below pixel P2i in the distance image.
In the following step S6, identifier 130 identifies a boundary between parked vehicle P and a space, based on the distance, derived by distance measurer 120, from vehicle V at the first point to feature point Ci of parked vehicle P and the distance from vehicle V at the second point to feature point Ci of parked vehicle P. A method of identifying the boundary between parked vehicle P and the space will be described in detail in the following specific examples.
In the subsequent step S7, identifier 130 determines a parkable space. A skeleton of this overall flow is sketched below.
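The following skeleton summarizes steps S1 to S7. It reuses the sketches above; the camera and vehicle objects, as well as identify_boundary and determine_parkable_space, are hypothetical placeholders, since the disclosure defines no such programming interface.

```python
def parking_search(camera, vehicle):
    # S1: extract feature points Ci and specify pixels P1i at the first point
    ir1, dist1 = camera.capture()                 # infrared + distance images
    points = extract_feature_points(ir1)
    # S2: derive distance ranges from the first point
    first = [distance_range(dist1, r, c) for r, c in points]
    # S3: wait until vehicle V has moved to the second point
    while not vehicle.reached_second_point():
        pass
    # S4 and S5: specify pixels P2i and derive distance ranges again
    ir2, dist2 = camera.capture()
    second = [distance_range(dist2, r, c) for r, c in extract_feature_points(ir2)]
    # S6: identify the boundary between the parked vehicle and the space
    boundary = identify_boundary(first, second)   # placeholder
    # S7: determine the parkable space from the boundary
    return determine_parkable_space(boundary)     # placeholder
```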
Next, a first specific example of determination of the parkable space performed by identification apparatus 100 will be described with reference to
As illustrated in
In the state illustrated in
Distance measurer 120 extracts feature points C1 to C9 of the front surface and the left side surface of parked vehicle P from the infrared image or the distance image obtained by imaging apparatus 200 and determines pixels P1iC1 to P1iC9 corresponding to feature points C1 to C9.
C3 is a feature point corresponding to a front side end of the left side surface of parked vehicle P when viewed from imaging apparatus 200, and C9 is a feature point corresponding to a back side end of the left side surface of parked vehicle P when viewed from imaging apparatus 200.
Subsequently, distance measurer 120 derives the distances from vehicle V to feature points C1 to C9 using the distance image. Specifically, distance measurer 120 derives the distances from vehicle V to feature points C1 to C9 based on the distance information of pixels P1iC1 to P1iC9 corresponding to feature points C1 to C9 and of the plurality of pixels existing in the vertical direction of those pixels in the distance image. The distances from vehicle V to feature points C1 to C9 derived in this way are values having a predetermined range.
Subsequently, distance measurer 120 estimates locations (first estimated locations) of feature points C1 to C9 based on the derived distances from vehicle V to feature points C1 to C9.
If vehicle V continues to travel and reaches the second point (
Distance measurer 120 derives the distances from vehicle V to feature points C1 to C9 using the distance image. Specifically, distance measurer 120 derives the distances from vehicle V to feature points C1 to C9 based on the distance information of pixels P2iC1 to P2iC9 corresponding to feature points C1 to C9 and of the plurality of pixels existing in the vertical direction of those pixels in the distance image. The distances from vehicle V to feature points C1 to C9 derived in this way are values having a predetermined range.
In the first specific example, the second point is a point where imaging apparatus 200, feature point C3, and feature point C9 do not exist in a straight line. In other words, the second point is within a range in which a light wave emitted from imaging apparatus 200 is reflected from the side surface of parked vehicle P. In this case, the second point is a location that the TOF camera mounted on vehicle V reaches before the location of the left side outline of parked vehicle P. This means that the outline of the parked vehicle can be accurately estimated before vehicle V passes the parked vehicle when searching for a parking space. That is, the timing for applying vehicle control during automatic parking can be set before reaching the parked vehicle, and a dramatic increase in the degree of freedom in setting the vehicle control of the automatic parking can be expected.
Subsequently, distance measurer 120 estimates locations (second estimated locations) of feature points C1 to C9 based on the derived distances from vehicle V to feature points C1 to C9.
Identifier 130 identifies a boundary between parked vehicle P and a space, based on the distances from imaging apparatus 200 to feature points C1 to C9 at the first point derived as described above and the distances from imaging apparatus 200 to feature points C1 to C9 at the second point.
For example,
Identifier 130 determines that intersection point IC1 between line segment L1C1 indicating feature point C1 in
Then, identifier 130 determines the boundary between parked vehicle P and the space by connecting the locations of feature points C1 to C9 determined in the above-described sequence. Feature point C9 is a feature point corresponding to a back side end of parked vehicle P when viewed from imaging apparatus 200. Accordingly, identifier 130 determines a rear surface of parked vehicle P based on feature point C9.
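Geometrically, an intersection point such as IC1 can be computed by intersecting two distance circles, one centered at the first point and one at the second point, with radii taken from the derived distances. A sketch under that assumption (plain 2D geometry; names are illustrative):

```python
import math

def intersect_circles(p1: tuple[float, float], r1: float,
                      p2: tuple[float, float], r2: float) -> list[tuple[float, float]]:
    """Intersection points of circles centered at p1 and p2 with radii r1, r2."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0.0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # circles coincide, are separate, or are nested: no usable point
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2.0 * d)   # distance from p1 to chord midpoint
    h = math.sqrt(max(0.0, r1 ** 2 - a ** 2))      # half-length of the chord
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]
```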
If the outline of parked vehicle P (the boundary between parked vehicle P and the space) is determined in this way, identifier 130 detects, based on the information from the TOF camera, that there is no obstacle that vehicle V cannot overcome on the left side and/or the right side of parked vehicle P. Furthermore, identifier 130 detects that there is, for example in a plan view, a rectangular space in which vehicle V can park (that is, a space that vehicle V can enter) on the left side and/or the right side of parked vehicle P. Identifier 130 finally determines a parkable space adjacent to the left side and/or the right side of parked vehicle P as described above.
Next, a second specific example of determining the parkable space performed by identification apparatus 100 will be described with reference to
As illustrated in
In the state illustrated in
Distance measurer 120 extracts feature points C1 to C9 on the front surface and the left side surface of parked vehicle P1 from the infrared image or the distance image obtained by imaging apparatus 200 and determines pixels P1iC1 to P1iC9 corresponding to feature points C1 to C9.
C3 is a feature point corresponding to a front side end of the left side surface of parked vehicle P1 when viewed from imaging apparatus 200, and C9 is a feature point corresponding to the back side end of the left side surface of parked vehicle P1 when viewed from imaging apparatus 200.
Subsequently, distance measurer 120 derives the distances from vehicle V to feature points C1 to C9 using the distance image. Specifically, distance measurer 120 derives the distances from vehicle V to feature points C1 to C9 based on the distance information of pixels P1iC1 to P1iC9 corresponding to feature points C1 to C9 and of the plurality of pixels existing in the vertical direction of those pixels in the distance image. The distances from vehicle V to feature points C1 to C9 derived in this way are values having a predetermined range.
Subsequently, distance measurer 120 estimates locations (first estimated locations) of feature points C1 to C9 based on the derived distances from vehicle V to feature points C1 to C9.
If vehicle V continues to move forward, imaging apparatus 200 enters a state of being present on a substantially extended line of the right side surface of parked vehicle P1 (
Distance measurer 120 extracts feature points C1 to C3 and C10 to C15 on the front surface and the right side surface of parked vehicle P1 from the infrared image or the distance image obtained by imaging apparatus 200 and determines pixels P2iC1 to P2iC3 and P2iC10 to P2iC15 corresponding to feature points C1 to C3 and C10 to C15. In this state, in the infrared image or the distance image obtained by imaging apparatus 200, feature points C10 to C15 overlap feature point C1 or exist very close to feature point C1.
Distance measurer 120 derives the distances from vehicle V to feature points C1 to C3 and C10 to C15 using the distance image. Specifically, distance measurer 120 derives the distances from vehicle V to feature points C1 to C3 and C10 to C15 based on the distance information of pixels P2iC1 to P2iC3 and P2iC10 to P2iC15 corresponding to feature points C1 to C3 and C10 to C15 and of the plurality of pixels existing in the vertical direction of those pixels in the distance image. The distances from vehicle V to feature points C1 to C3 and C10 to C15 derived in this way are values having a predetermined range.
Subsequently, distance measurer 120 estimates the locations (second estimated locations) of feature points C1 to C3 and C10 to C15, based on the derived distances from vehicle V to feature points C1 to C3 and C10 to C15.
In the second specific example, boundary line LR connecting feature point C3 and feature points C10 to C15 is determined by setting, as the second point, a state where feature point C3 and feature points C10 to C15 are on a substantially straight line, as illustrated in
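Because feature point C3 and feature points C10 to C15 lie on a substantially straight line at the second point, boundary line LR can be obtained, for instance, by a least-squares line fit through their estimated locations. A sketch assuming the locations are given as an N×2 array of (x, y) coordinates; a near-vertical boundary would need the axes swapped before fitting, which is omitted here.

```python
import numpy as np

def fit_boundary_line(points: np.ndarray) -> tuple[float, float]:
    """Fit y = m * x + b through the estimated feature-point locations."""
    x, y = points[:, 0], points[:, 1]
    m, b = np.polyfit(x, y, 1)  # least-squares first-degree fit
    return float(m), float(b)
```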
Identifier 130 identifies a boundary between parked vehicle P1 and the space, based on the distances from vehicle V at the first point to feature points C1 to C9 derived as described above and the distances from vehicle V at the second point to feature points C1 to C3 and C10 to C15.
For example,
In addition, line segment L1C1 indicating the first estimated location of feature point C1 in
Identifier 130 determines the boundary between parked vehicle P1 and the space by connecting the intersection point of line segments L1C2 and L1C3 with line segments L2C2 and L2C3 and the intersection point of line segment L1C1 with boundary line LR.
At this time, as illustrated in
C23 is a feature point corresponding to the front side end of the left side surface of parked vehicle P2 when viewed from imaging apparatus 200, and C29 is a feature point corresponding to the back side end of the left side surface of parked vehicle P2 when viewed from imaging apparatus 200.
Subsequently, distance measurer 120 derives the distances from vehicle V to feature points C21 to C29 using the distance image. Specifically, distance measurer 120 derives the distances from vehicle V to feature points C21 to C29 based on the distance information of pixels P1iC21 to P1iC29 corresponding to feature points C21 to C29 and of the plurality of pixels existing in the vertical direction of those pixels in the distance image. The distances from vehicle V to feature points C21 to C29 derived in this way are values having a predetermined range.
Subsequently, distance measurer 120 estimates the locations (the first estimated locations) of feature points C21 to C29, based on the derived distances from vehicle V to feature points C21 to C29.
If vehicle V further continues to move forward, imaging apparatus 200, feature point C23, and feature point C29 exist in a substantially straight line (
Distance measurer 120 determines pixels P2iC21 to P2iC29 corresponding to feature points C21 to C29 in the infrared image or the distance image obtained by imaging apparatus 200.
Distance measurer 120 derives the distances from vehicle V to feature points C21 to C29 using the distance image. Specifically, distance measurer 120 derives the distances from vehicle V to feature points C21 to C29 based on the distance information of pixels P2iC21 to P2iC29 corresponding to feature points C21 to C29 and of the plurality of pixels existing in the vertical direction of those pixels in the distance image. The distances from vehicle V to feature points C21 to C29 derived in this way are values having a predetermined range.
Subsequently, distance measurer 120 estimates the locations (second estimated locations) of feature points C21 to C29, based on the derived distances from vehicle V to feature points C21 to C29.
In the present example, boundary line LL connecting feature points C23 to C29 is determined by setting, as the second point, a state where feature point C23 and feature point C29 are on a substantially straight line, as illustrated in
Identifier 130 identifies a boundary between parked vehicle P2 and the space, based on the distances from vehicle V to feature points C21 to C29 at the first point derived as described above and the distances from vehicle V at the second point to feature points C21 to C29.
For example,
Line segments L1C23 to L1C29 indicating the first estimated locations of feature points C23 to C29 in
In addition, feature point C29 is a feature point corresponding to the back side end of parked vehicle P2 when viewed from imaging apparatus 200. Accordingly, identifier 130 determines the rear end of parked vehicle P2, based on feature point C29.
If the outlines of parked vehicles P1 and P2 are determined in this way, identifier 130 further detects, based on the information from the TOF camera, that there is no obstacle that vehicle V cannot overcome between the right side of parked vehicle P1 and the left side of parked vehicle P2. Furthermore, identifier 130 detects that there is, for example in a plan view, a rectangular space in which vehicle V can park (that is, a space that vehicle V can enter) between the right side of parked vehicle P1 and the left side of parked vehicle P2. As described above, identifier 130 finally determines a parkable space between parked vehicles P1 and P2.
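The final check in both specific examples reduces to testing whether a rectangle large enough for vehicle V fits between the identified boundaries. A sketch with illustrative dimensions and clearance (none of these values appear in the disclosure):

```python
def is_parkable(gap_width_m: float, gap_depth_m: float,
                vehicle_width_m: float = 1.8, vehicle_length_m: float = 4.5,
                clearance_m: float = 0.5) -> bool:
    """True if a rectangular space for vehicle V fits in the measured gap."""
    return (gap_width_m >= vehicle_width_m + 2.0 * clearance_m
            and gap_depth_m >= vehicle_length_m)
```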
In the example described above, regarding parked vehicle P1, the point at which imaging apparatus 200 exists on the substantially extended line of the right side surface of parked vehicle P1 is set as the second point, and the distance measurement information at the first point and the distance measurement information at the second point are used; however, the example is not limited to this.
Depending on the body color, structure, and the like of parked vehicle P1, vehicle V may not obtain sufficient light reflected from the right side surface of parked vehicle P1 at the point where imaging apparatus 200 exists on the substantially extended line of the right side surface of parked vehicle P1. Likewise, even at a point slightly forward of that point, the light reflected from the right side surface of parked vehicle P1 may still be insufficient, and it is conceivable that parked vehicle P1 is erroneously estimated not to exist in an area where parked vehicle P1 actually exists. In such a case, the parking prohibition area may be underestimated.
In contrast to this, for example, when vehicle V moves further forward, the angle between imaging apparatus 200 and parked vehicle P1 increases to some extent; a point where the light reflected from the right side surface of parked vehicle P1 is sufficiently obtained may be set as a third point relating to parked vehicle P1, and distance measurement may also be performed at the third point. For example, the second point relating to parked vehicle P2 can be used as the third point.
By doing so, it is possible to prevent erroneous estimation that parked vehicle P1 does not exist in an area where parked vehicle P1 actually exists, and thus to prevent the parking prohibition area from being underestimated.
That is, the identification accuracy can be improved by using, in addition to the distance measurement information at the first point, where imaging apparatus 200 and the side surface of the parked vehicle do not exist in a straight line, and the distance measurement information at the second point, where imaging apparatus 200 and the side surface of the parked vehicle exist in a substantially straight line, the distance measurement information at a third point, where imaging apparatus 200 and the side surface of the parked vehicle again do not exist in a straight line.
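One hedged way to exploit a third (or further) viewpoint is to intersect the distance circles of every viewpoint pair, reusing intersect_circles from the earlier sketch, and average the consistent candidates. This is a deliberately crude fusion; a real implementation would reject the spurious mirror intersections before averaging.

```python
def fuse_viewpoints(observations: list[tuple[tuple[float, float], float]]) -> tuple[float, float] | None:
    """observations: list of ((x, y), distance) pairs, one per viewpoint."""
    candidates: list[tuple[float, float]] = []
    for i in range(len(observations)):
        for j in range(i + 1, len(observations)):
            (p1, r1), (p2, r2) = observations[i], observations[j]
            candidates += intersect_circles(p1, r1, p2, r2)
    if not candidates:
        return None
    xs, ys = zip(*candidates)
    # centroid of all pairwise intersections as the fused location estimate
    return sum(xs) / len(xs), sum(ys) / len(ys)
```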
As described above, the identification apparatus according to the present disclosure includes an inputter that receives first distance image information obtained by imaging an object from a first point using an imaging apparatus and second distance image information obtained by imaging the object from a second point different from the first point using the imaging apparatus, a distance measurer that derives first distance information from the first point to a plurality of feature points of the object based on the first distance image information and derives second distance information from the second point to the plurality of feature points based on the second distance image information, an identifier that identifies a boundary between the object and a space based on the first distance information and the second distance information, and an outputter that outputs identification results of the identifier to an external apparatus.
According to an identification apparatus relating to the present disclosure, a boundary between an object and a space can be accurately identified.
In the above-described embodiment, examples of identifying a boundary between a parked vehicle and a space are described, but the embodiment is not limited to this. It is also possible to identify a boundary between a space and an obstacle such as a shopping cart left in a parking space. In the present embodiment, a specific example of accurately estimating a parkable space using images captured at a first point, a second point, and a third point is described, but the present invention is not limited thereto. It goes without saying that the parking space can be estimated with higher accuracy by using images captured at more points. However, since the amount of calculation increases as more images are used, if the boundary between the parking space and the parked vehicle can be obtained accurately, it is apparent that the fewer images used, the better.
While various embodiments have been described herein above, it is to be appreciated that various changes in form and detail may be made without departing from the spirit and scope of the invention(s) presently or hereafter claimed.
This application is entitled and claims the benefit of Japanese Patent Application No. 2018-060306, filed on Mar. 27, 2018, the disclosure of which including the specification, drawings and abstract is incorporated herein by reference in its entirety.
According to an identification apparatus, an identification method, and a non-transitory recording medium storing an identification program relating to the present disclosure, it is possible to accurately identify a boundary between an object and a space, which is suitable for on-vehicle use.