The present invention relates to a parking assist apparatus for controlling a vehicle in such a manner that the vehicle automatically moves to a parking position registered in advance and stops there.
Japanese Patent Application Laid-Open (kokai) No. 2015-074265 discloses a parking assist apparatus (hereinafter, referred to as a “conventional apparatus”) configured to perform parking assist control by setting a target parking position based on a captured image captured by an imaging apparatus placed at a door and guiding a vehicle to this target parking position. The conventional apparatus is configured to prohibit the guidance of the vehicle (i.e., prohibit the parking assist control) when it has been detected that the door is in an opening state.
When the door is opened, a position of the imaging apparatus shifts (deviates) from a normal position, and thus it is likely that the conventional apparatus sets a false position as the target parking position and guides the vehicle into this false position.
However, according to a configuration of the conventional apparatus, the guidance of the vehicle is prohibited when the door is opened while the parking assist control is being performed, and therefore it is described that the guidance of the vehicle into a false position can be prevented.
A parking assist apparatus for performing the following parking assist control has been known. The parking assist control registers feature points extracted from a captured image including a parking position in association with that parking position, and thereafter detects the feature points from another captured image including the parking position, thereby calculating the parking position registered in association with these feature points and parking a vehicle in the calculated parking position. In this type of parking assist apparatus, a parking position is registered after the vehicle is temporarily moved to this parking position and stopped there by the parking assist control. When the parking position (the position to which the vehicle is temporarily moved by the parking assist control) deviates from a desired parking position, a driver of the vehicle is allowed to correct the parking position. When the parking position is corrected by the driver, the corrected parking position is registered as a registered parking position.
When correcting a parking position, the driver may wish to do so only after confirming that the parking position which the driver is planning to register is safe (appropriate) as a registered parking position. In order to confirm that the parking position is safe, it is desirable for the driver to open a door of the vehicle so that the driver can directly observe the outside environment. However, according to the conventional apparatus, the control is prohibited when the door is opened while the parking assist control is being performed. Thus, in the above-mentioned parking assist apparatus to which the configuration of the conventional apparatus is applied, the parking assist control is prohibited at the timing when the driver opens the door for the purpose of correcting the parking position, and the correction work of the parking position is therefore discontinued. In order to prevent the correction work of the parking position from being discontinued, the driver has to correct the parking position without opening the door. In this case, the driver cannot fully confirm that the parking position which the driver is planning to register is safe as the registered parking position, and it is thus highly likely that the parking position will not be registered at the desired position.
The present invention has been made to resolve the problem described above. That is, one object of the present invention is to provide a parking assist apparatus (hereinafter, may also be referred to as a “present invention apparatus”) capable of reducing a possibility that parking assist control is performed so as to park a vehicle in a false position different from a registered parking position, while also enabling a parking position to be registered at a desired position.
The present invention apparatus comprises:
an imaging apparatus (21) configured to be capable of taking an image of a surrounding of a vehicle (SV);
a controller (10) configured to be capable of performing parking assist control including control at registration mode and control at parking assist mode; and
a door opening/closing sensor (101) for detecting whether or not a door of the vehicle (SV) is in an opening state;
wherein,
the control at registration mode includes:
the control at parking assist mode includes parking assist processing for detecting at least one of the feature points (F) from a captured image including the registered parking position (Ppark_reg) and thereby calculating the registered parking position (Ppark_reg), and performing either one of control for automatically parking the vehicle (SV) in the calculated registered parking position (Ppark_reg) or control for assisting in parking the vehicle (SV) in the calculated registered parking position (Ppark_reg); and
the controller (10) is configured to:
According to this configuration, even when the door is opened in the midst of the parking position correction processing of the control at registration mode, this parking position correction processing is continued without being discontinued. Therefore, the driver can correct a parking position after opening the door and directly confirming the outside environment (specifically, after confirming that the parking position planned to be registered is safe as the registered parking position). Accordingly, the parking position can be registered at a desired position. In addition, according to this configuration, when the door is opened in the midst of the control at parking assist mode, this control at parking assist mode is discontinued at the timing when the door is opened. Therefore, even in a case where the imaging apparatus is placed at the door, a possibility that the parking assist control is performed with respect to a false position different from the registered parking position, due to a shift of the imaging apparatus from its normal position caused by the door being opened, can be reduced. It should be noted that in the parking position correction processing, the corrected parking position is registered in association with feature points already extracted from the captured image including the registration-planned-region. That is, the feature points have been extracted prior to the parking position correction processing. Therefore, the extraction processing of the feature points is not hindered even if the door is opened in the midst of the parking position correction processing and, in a case where the imaging apparatus is placed at the door, the imaging apparatus shifts from its normal position.
In another aspect of the present invention,
when registration-mode-other-processing which is processing other than the parking position correction processing among the control at registration mode is being performed, the controller (10) is configured to discontinue the registration-mode-other-processing at a timing when it is determined that a state of the door has changed from a closing state to an opening state.
According to this configuration, when the door is opened in the midst of the registration-mode-other-processing, the registration-mode-other-processing is discontinued at the timing when the door is opened. Thus, in a case where the imaging apparatus is placed at the door, the feature points are prevented from being extracted from a bird's-eye view image captured by the imaging apparatus while the door is open. That is, a positional relationship between the registered parking position and the feature points is prevented from being falsely registered when the registered parking position is registered in association with the feature points. Therefore, during the control at parking assist mode, the registered parking position can be calculated with high accuracy based on the detected feature points.
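The door-open handling described above can be summarized by the following minimal sketch, which is not part of the claimed configuration; the enumeration and function names are illustrative only. It expresses that the parking position correction processing continues when a door opens, whereas the control at parking assist mode and the registration-mode-other-processing are discontinued at that timing.

```python
# Minimal sketch (illustrative names) of the door-open handling described above.
from enum import Enum, auto

class Processing(Enum):
    PARKING_POSITION_CORRECTION = auto()   # registration mode, correction step
    REGISTRATION_MODE_OTHER = auto()       # registration mode, other steps
    PARKING_ASSIST_MODE = auto()           # control at parking assist mode

def on_door_opened(current: Processing) -> str:
    """Action taken when the door state changes from the closing to the opening state."""
    if current is Processing.PARKING_POSITION_CORRECTION:
        return "continue"       # correction work is not interrupted
    return "discontinue"        # parking assist mode / other registration processing stops

assert on_door_opened(Processing.PARKING_POSITION_CORRECTION) == "continue"
assert on_door_opened(Processing.PARKING_ASSIST_MODE) == "discontinue"
```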
In the above description, references used in the following descriptions regarding embodiments are added with parentheses to the elements of the present invention, in order to assist in understanding the present invention.
However, those references should not be used to limit the scope of the invention.
A parking assist apparatus according to an embodiment of the present invention (hereinafter, referred to as a “present embodiment apparatus”) is applied to a vehicle SV (refer to
Each ECU includes a microcomputer. This microcomputer includes CPU, ROM, RAM, readable/writable non-volatile memory, interfaces, and the like. The CPU realizes (performs) various functions (mentioned later) by executing instructions (i.e. programs, routines) stored in the ROM. Further, these ECUs are connected to each other in such a manner that they can mutually exchange data (communicate) via a CAN (Controller Area Network). Therefore, detected values etc. of sensors (including switches) connected to a specific ECU may be transmitted to other ECUs.
Radar sensors 11a to 11e, first ultrasonic sensors 12a to 12d, second ultrasonic sensors 13a to 13h, a parking assist switch 14 and a vehicle speed sensor 15 are connected to the VCECU.
It should be noted that when there is no need to distinguish the radar sensors 11a to 11e from each other, they will be referred to as a “radar sensor 11”. Similarly, when there is no need to distinguish the first ultrasonic sensors 12a to 12d from each other, they will be referred to as a “first ultrasonic sensor 12”. When there is no need to distinguish the second ultrasonic sensors 13a to 13h from each other, they will be referred to as a “second ultrasonic sensor 13”.
The radar sensor 11 is a well-known sensor making use of radio wave in a millimeter waveband. The radar sensor 11 acquires object information identifying a distance between a vehicle SV and a three-dimensional object, a relative speed of the three-dimensional object with respect to the vehicle SV, a relative position (direction) of the three-dimensional object with respect to the vehicle SV, and the like and outputs the object information to the VCECU.
Each of the radar sensors 11a to 11e is arranged at a predetermined position of the vehicle SV as shown in
The radar sensor 11a acquires the object information of a three-dimensional object existing in a right front region of the vehicle SV.
The radar sensor 11b acquires the object information of a three-dimensional object existing in a front region of the vehicle SV.
The radar sensor 11c acquires the object information of a three-dimensional object existing in a left front region of the vehicle SV.
The radar sensor 11d acquires the object information of a three-dimensional object existing in a right rear region of the vehicle SV.
The radar sensor 11e acquires the object information of a three-dimensional object existing in a left rear region of the vehicle SV.
Each of the first ultrasonic sensor 12 and the second ultrasonic sensor 13 is a well-known sensor making use of ultrasonic wave. When there is no need to distinguish the first ultrasonic sensor 12 and the second ultrasonic sensor 13 from each other, they will be collectively referred to as an “ultrasonic sensor”.
The ultrasonic sensor transmits ultrasonic wave to a predetermined area, receives reflected wave reflected from a three-dimensional object, and detects, based on a time from a timing of transmission to a timing of reception, whether or not a three-dimensional object exists as well as a distance to the three-dimensional object. The first ultrasonic sensor 12 is used to detect a three-dimensional object positioned at a relatively farther position from the vehicle SV, compared to the second ultrasonic sensor 13. Each of the first ultrasonic sensor 12 and the second ultrasonic sensor 13 is arranged at a predetermined position of a vehicle body of the vehicle SV as shown in
The first ultrasonic sensor 12 (12a to 12d) acquires a distance between the first ultrasonic sensor 12 and a three-dimensional object existing in a predetermined region (a detection region) described below, and transmits information on the acquired distance to the VCECU.
A detection region of the first ultrasonic sensor 12a is a front right region of the vehicle SV.
A detection region of the first ultrasonic sensor 12b is a front left region of the vehicle SV.
A detection region of the first ultrasonic sensor 12c is a rear right region of the vehicle SV.
A detection region of the first ultrasonic sensor 12d is a rear left region of the vehicle SV.
The second ultrasonic sensor 13 (13a to 13h) acquires a distance between the second ultrasonic sensor 13 and a three-dimensional object existing in a predetermined region (a detection region) described below, and transmits information on the acquired distance to the VCECU.
A detection region of each of the second ultrasonic sensors 13a to 13d is a front region of the vehicle SV.
A detection region of each of the second ultrasonic sensors 13e to 13h is a rear region of the vehicle SV.
The parking assist switch 14 is a switch operated (pressed) by a driver.
The vehicle speed sensor 15 is configured to detect a vehicle speed of the vehicle SV and output a signal indicating the vehicle speed. It should be noted that the vehicle speed sensor 15 is, strictly speaking, a wheel speed sensor arranged at each of four wheels of the vehicle SV. The VCECU is configured to acquire a speed of the vehicle SV (a vehicle speed) based on a wheel speed of each wheel detected by the vehicle speed sensor 15 (the wheel speed sensor).
A front camera 21a, a rear camera 21b, a right side camera 21c, and a left side camera 21d are connected to the PVM-ECU 20. Hereinafter, when there is no need to distinguish these cameras 21a to 21d from each other, they will be collectively referred to as a “camera 21”. The camera 21 corresponds to one example of an “imaging apparatus”.
As shown in
The rear camera 21b is arranged on a wall part of a rear trunk RT positioned at a rear part of the vehicle SV. An optical axis of the rear camera 21b is oriented backward of the vehicle SV.
The right side camera 21c is arranged at a right-side door mirror DMR. An optical axis of the right side camera 21c is oriented to a right side of the vehicle SV.
The left side camera 21d is arranged at a left-side door mirror DML. An optical axis of the left side camera 21d is oriented to a left side of the vehicle SV.
The camera 21 has a wide angle of view. Therefore, the imaging range of the camera 21 includes ranges on the right side, left side, lower side, and upper side of each of the optical axes. The whole surrounding of the vehicle SV is included in the imaging ranges of the four cameras 21a to 21d.
The camera 21 takes an image of a surrounding region of the vehicle SV corresponding to the imaging range and acquires image information (image data) every time a predetermined time elapses. The camera 21 transmits the acquired image data to the PVM-ECU 20 and the VCECU.
More specifically, the front camera 21a takes an image of a “front surrounding region of the vehicle SV” corresponding to the imaging range thereof. The front camera 21a transmits to the PVM-ECU 20 the acquired image data (hereinafter, referred to as a “front image data”).
The rear camera 21b takes an image of a “rear surrounding region of the vehicle SV” corresponding to the imaging range thereof. The rear camera 21b transmits to the PVM-ECU 20 the acquired image data (hereinafter, referred to as a “rear image data”).
The right side camera 21c takes an image of a “right-side surrounding region of the vehicle SV” corresponding to the imaging range thereof. The right side camera 21c transmits to the PVM-ECU 20 the acquired image data (hereinafter, referred to as a “right-side image data”).
The left side camera 21d takes an image of a “left-side surrounding region of the vehicle SV” corresponding to the imaging range thereof. The left side camera 21d transmits to the PVM-ECU 20 the acquired image data (hereinafter, referred to as a “left-side image data”).
The PVM-ECU 20 generates surrounding image data using the front image data, the rear image data, the right-side image data, and the left-side image data every time the predetermined time elapses. An image displayed (generated) based on the surrounding image data is referred to as a surrounding image. The surrounding image is an image corresponding to at least a part of the range of the surrounding region of the vehicle SV. The surrounding image includes a camera's viewpoint image, a composite image, and the like.
The camera's viewpoint image is an image where a viewpoint is set at a position at which each lens of the camera 21 is arranged.
The composite image is, for example, an image of the surrounding of the vehicle SV seen from a virtual viewpoint set at an arbitrary position around the vehicle SV. Hereinafter, this image will be referred to as a “virtual viewpoint image”.
A method for generating this virtual viewpoint image is well-known (for example, refer to Japanese Patent Applications Laid-Open (kokai) No. 2012-217000, 2016-192772, 2018-107754 and the like). It should be noted that the PVM-ECU 20 may generate an image where a vehicle image (a polygon showing a vehicle shape, for instance), lines for supporting parking operation, and the like are further combined with (superimposed on) each of the camera's viewpoint image and the virtual viewpoint image. Such an image is also referred to as a surrounding image.
The PVM-ECU 20 generates, using the front image data, the rear image data, the right-side image data, and the left-side image data, a front bird's-eye view image data, a rear bird's-eye view image data, a right bird's-eye view image data, and a left bird's-eye view image data, respectively, every time a predetermined time elapses.
The front bird's-eye view image data is an image data acquired by converting the front image data to an image where the front image data is seen from a bird's-eye view direction (a vertically downward direction with respect to a surface on which the vehicle SV is grounded).
The rear bird's-eye view image data is an image data acquired by converting the rear image data to an image where the rear image data is seen from the bird's-eye view direction.
The right bird's-eye view image data is an image data acquired by converting the right-side image data to an image where the right-side image data is seen from the bird's-eye view direction.
The left bird's-eye view image data is an image data acquired by converting the left-side image data to an image where the left-side image data is seen from the bird's-eye view direction.
Images generated based on the front bird's-eye view image data, the rear bird's-eye view image data, the right bird's-eye view image data, and the left bird's-eye view image data will be referred to as a front bird's-eye view image, a rear bird's-eye view image, a right bird's-eye view image, and a left bird's-eye view image, respectively. Hereinafter, these bird's-eye view images may be collectively referred to as “bird's-eye view images”.
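As one illustrative sketch of how such bird's-eye view image data can be generated, a planar homography (inverse perspective mapping) is commonly used; the specification does not state the conversion method actually used by the PVM-ECU 20, and the homography H and image sizes below are assumptions.

```python
# A minimal sketch (assumption, not the PVM-ECU 20's actual method) of producing
# a bird's-eye view image by planar homography / inverse perspective mapping.
import cv2
import numpy as np

def to_birds_eye(camera_image: np.ndarray, H: np.ndarray,
                 out_size: tuple = (400, 400)) -> np.ndarray:
    """Warp a camera image onto the ground plane as seen from vertically above.

    H is a 3x3 homography mapping camera-image pixels to ground-plane pixels,
    typically obtained from camera calibration.
    """
    return cv2.warpPerspective(camera_image, H, out_size)

# Usage with a dummy image and an identity homography (calibration would replace H).
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
bev = to_birds_eye(dummy, np.eye(3))
```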
As shown in
As shown in
As shown in
The VCECU is configured to acquire the bird's-eye view images from the PVM-ECU 20 every time the predetermined time elapses and to conduct image analysis of the bird's-eye view images at a predetermined timing (mentioned later) so as to extract feature points F. When extracting the feature points F, the VCECU divides each of the capturing areas 81 to 84 of the bird's-eye view images into several divided regions, and extracts from each divided region a fixed number (mentioned later) of the feature points F, the number being set for every divided region in advance. A method for extracting the feature points F will be described below, referring to
In the present embodiment, as shown in
As shown in
As shown in
As shown in
As shown in
As shown in
It should be noted that, before executing the processing of extracting the feature points F from the bird's-eye view images, the VCECU executes processing of conducting image analysis of the bird's-eye view images and masking any three-dimensional object included in the bird's-eye view images. The VCECU is configured not to extract the feature points F from a masked part. Thereby, the feature points F are extracted only from the image of the ground 90.
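The following is a minimal sketch, under stated assumptions, of extracting a preset number of feature points per divided region while skipping masked parts. The corner detector (cv2.goodFeaturesToTrack), the 2x2 region layout, and the per-region quota are illustrative, since the specification only defines the per-region number and the masking of three-dimensional objects.

```python
# Minimal sketch of per-region feature point extraction with masking of
# three-dimensional objects. Detector, region layout, and counts are assumptions.
import cv2
import numpy as np

def extract_feature_points(bev_gray: np.ndarray, obstacle_mask: np.ndarray,
                           rows: int = 2, cols: int = 2,
                           points_per_region: int = 3) -> list:
    """Return (x, y) points, at most points_per_region per divided region."""
    h, w = bev_gray.shape
    ground = cv2.bitwise_not(obstacle_mask)       # 255 where no 3D object is masked
    points = []
    for r in range(rows):
        for c in range(cols):
            region = np.zeros_like(ground)
            region[r * h // rows:(r + 1) * h // rows,
                   c * w // cols:(c + 1) * w // cols] = 255
            mask = cv2.bitwise_and(ground, region)
            corners = cv2.goodFeaturesToTrack(bev_gray, points_per_region,
                                              qualityLevel=0.01, minDistance=10,
                                              mask=mask)
            if corners is not None:               # skip regions with no usable corners
                points.extend(tuple(p.ravel()) for p in corners)
    return points
```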
A touch panel display 22 is further connected to the PVM-ECU 20. The touch panel display 22 is a touch-panel type display which a non-illustrated navigation apparatus comprises. The PVM-ECU 20 displays the surrounding image on the touch panel display 22 in response to an instruction transmitted from the VCECU.
The VCECU is configured to be capable of performing parking assist control. The parking assist control includes the following two types of assist modes, namely, control at registration mode and control at parking assist mode. When the VCECU performs (executes) the parking assist control, the PVM-ECU 20 displays a parking assist image (an operation image) including the surrounding image on the touch panel display 22 in response to an instruction transmitted from the VCECU.
The engine ECU 30 is connected to an engine actuator 31. The engine actuator 31 includes a throttle valve actuator for changing an opening degree of the throttle valve of an engine (a spark ignition type or a fuel injection type of internal combustion engine) 32. The engine ECU 30 drives the engine actuator 31 and thereby can change torque generated by the engine 32. The torque generated by the engine 32 is transmitted to driving wheels via a non-illustrated transmission.
Therefore, the engine ECU 30 controls the engine actuator 31 and thereby can control driving force of the vehicle SV. The VCECU can transmit a driving instruction to the engine ECU 30. When having received the driving instruction, the engine ECU 30 controls the engine actuator 31 in response to this driving instruction. Thus, the VCECU can perform “driving force automatic control” (mentioned later) via the engine ECU 30. It should be noted that when the vehicle SV is a hybrid vehicle, the engine ECU 30 can control driving force of the vehicle SV generated by either one or both of “an engine and a motor” which are serving as a vehicle driving source. Further, when the vehicle SV is an electric vehicle, the engine ECU 30 can control driving force of the vehicle SV generated by a motor which is serving as a vehicle driving source.
The brake ECU 40 is connected to a brake actuator 41. The brake actuator 41 is provided in a hydraulic circuit between a non-illustrated master cylinder, which compresses operating fluid with a pedaling force applied to a brake pedal, and friction brake mechanisms 42 provided at the respective wheels. Each of the friction brake mechanisms 42 comprises a brake disc 42a fixed to the wheel and a brake caliper 42b fixed to the vehicle body.
The brake actuator 41 adjusts, in response to an instruction from the brake ECU 40, a hydraulic pressure that is supplied to a wheel cylinder which is built in the brake caliper 42b, and operates the wheel cylinder with the hydraulic pressure. Thereby, the brake actuator 41 presses a brake pad onto the brake disc 42a to generate friction braking force. Accordingly, the brake ECU 40 controls the brake actuator 41 and thereby can control the braking force of the vehicle SV. The VCECU can transmit a braking instruction to the brake ECU 40. When having received the braking instruction, the brake ECU 40 controls the brake actuator 41 in response to this braking instruction. Thus, the VCECU can perform “braking force automatic control” (mentioned later) via the brake ECU 40.
The EPS•ECU 50 is a control apparatus of a well-known electrically-driven power steering system and is connected to a motor driver 51. The motor driver 51 is connected to a steered motor 52. The steered motor 52 is incorporated into a “steering mechanism including a steering wheel SW, a steering shaft SF, a non-illustrated steering gear mechanism, and the like”. The steered motor 52 generates torque with electric power supplied from the motor driver 51 and, with the torque, can generate steering assist torque or can turn the left and right steered wheels. That is, the steered motor 52 can change a steered angle of the vehicle SV.
Further, the EPS•ECU 50 is connected to a steering angle sensor 53 and a steering torque sensor 54. The steering angle sensor 53 is configured to detect a steering angle of the steering wheel SW of the vehicle SV and output a signal indicating the detected steering angle. The steering torque sensor 54 is configured to detect steering torque generated at the steering shaft SF of the vehicle SV by the steering wheel SW being operated and to output a signal indicating the detected steering torque.
The EPS•ECU 50 detects, using the steering torque sensor 54, the steering torque which the driver inputs to the steering wheel SW, and drives the steered motor 52 based on this steering torque. The EPS•ECU 50 thereby applies steering torque (steering assist torque) to the steering mechanism, which makes it possible to assist the steering operation by the driver.
The VCECU can transmit a steering instruction to the EPS•ECU 50. When having received the steering instruction, the EPS•ECU 50 controls the steered motor 52 based on the received steering instruction. Accordingly, the VCECU can automatically change the steered angle of the steered wheels of the vehicle SV via the EPS•ECU 50 (that is, without the steering operation by the driver). Namely, the VCECU can perform “steered angle automatic control” (mentioned later) via the EPS•ECU 50.
The meter ECU 60 is connected to an indicator 61. The indicator 61 is a multi-information display provided in front of a driver's seat. The indicator 61 displays measured values such as the vehicle speed, engine rotational speed, and the like as well as various types of information.
The SBW•ECU 70 is connected to a shift position sensor 71. The shift position sensor 71 detects a position of a shift lever 72 serving as a movable part of a shift operation part. In the present embodiment, positions of the shift lever 72 include a parking position (P), a moving forward position (D), and a moving backward position (R). The SBW•ECU 70 is configured to receive a position of the shift lever 72 from the shift position sensor 71 and control, based on the position received, a non-illustrated transmission and/or a driving direction shifting mechanism of the vehicle SV (that is, perform shift control of the vehicle SV).
More specifically, when the shift lever 72 is positioned at “P”, the SBW•ECU 70 controls the transmission and/or the driving direction shifting mechanism in such a manner that no driving force is transmitted to the driving wheels and the vehicle SV is mechanically locked at a stop position. When the shift lever 72 is positioned at “D”, the SBW•ECU 70 controls the transmission and/or the driving direction shifting mechanism in such a manner that driving force for moving the vehicle SV forward is transmitted to the driving wheels. Further, when the shift lever 72 is positioned at “R”, the SBW•ECU 70 controls the transmission and/or the driving direction shifting mechanism in such a manner that driving force for moving the vehicle SV backward is transmitted to the driving wheels.
The VCECU can transmit a shifting instruction to the SBW•ECU 70. When having received the shifting instruction, the SBW•ECU 70 can, in response to this shifting instruction, control the transmission and/or the driving direction shifting mechanism without relying on the operation of the shift lever 72 by the driver and thereby can shift a position of the shift lever 72. This control of the transmission and/or the driving direction shifting mechanism based on the shifting instruction transmitted from the VCECU will be referred to as “shift position automatic control”.
Door opening/closing sensors 101a to 101d are connected to the body ECU 100. In the present embodiment, the vehicle SV has four doors. Each of the door opening/closing sensors 101a to 101d is arranged at each of these four doors. It should be noted that when there is no need to distinguish the door opening/closing sensors 101a to 101d from each other, they will be referred to as a “door opening/closing sensor 101”.
The door opening/closing sensor 101 detects whether or not the door is in an opening state. When it has been detected that the door is in the opening state, the door opening/closing sensor 101 generates an open signal indicating that the door is in the opening state, and continues to generate the open signal during a period in which the door is detected to be in the opening state. When it has been detected that the door is in a closing state, the door opening/closing sensor 101 generates a close signal indicating that the door is in the closing state, and continues to generate the close signal during a period in which the door is detected to be in the closing state. The door opening/closing sensor 101 transmits the signal to the body ECU 100. The body ECU 100 is configured to be able to identify which door is in the opening state based on which of the door opening/closing sensors 101a to 101d has transmitted the open signal or the close signal. The body ECU 100 transmits to the VCECU the signal received from the door opening/closing sensor 101. When the signal received from the body ECU 100 has changed from the close signal to the open signal, the VCECU determines that the state of the door has changed from the closing state to the opening state at this timing.
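The VCECU's determination above, namely that the door is judged to have changed from the closing state to the opening state at the instant the received signal switches from the close signal to the open signal, can be expressed by the following minimal sketch; the string signal values are illustrative.

```python
# Minimal sketch of the close -> open transition detection described above.
def door_just_opened(previous_signal: str, current_signal: str) -> bool:
    """True only on the close -> open transition, not while the door stays open."""
    return previous_signal == "close" and current_signal == "open"

assert door_just_opened("close", "open") is True
assert door_just_opened("open", "open") is False   # already open: no new transition
```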
As stated above, the parking assist control includes the two types of assist modes, that is, the control at registration mode and the control at parking assist mode. Hereinafter, the “control at registration mode” may also be simply referred to as the “registration mode” and the “control at parking assist mode” may also be simply referred to as the “parking assist mode”. The registration mode is a mode in which the driver of the vehicle SV can register, in the VCECU in advance, a “position in which the driver is planning to park the vehicle SV (i.e., a planned parking position)” as a registered parking position. On the other hand, the parking assist mode includes the following two types of assist modes, that is, a first parking mode and a second parking mode. The first parking mode is a mode in which control for automatically parking the vehicle SV in the registered parking position or control for assisting in parking the vehicle SV in the registered parking position is performed. The second parking mode is a well-known mode in which a parking position is calculated based on the image information (white lines defining a parking space, for example) acquired from the camera 21, the object information (a wall of a building and a fence, for example) acquired by the radar sensor 11, and/or the information on the distance to a three-dimensional object acquired from the ultrasonic sensor, and thereafter control for automatically parking the vehicle SV in this parking position or control for assisting in parking the vehicle SV in this parking position is performed. In the present embodiment, a description will be given of the registration mode and of the first parking mode of the parking assist mode. Hereinafter, the parking assist mode means the first parking mode unless otherwise stated.
As is obvious from the description above, in this specification, the parking assist control includes both the “control for automatically parking the vehicle in the parking position” and the “control for assisting in parking the vehicle in the parking position”. The former control is performed by the VCECU performing the following controls: the driving force automatic control, the braking force automatic control, the steered angle automatic control, and the shift position automatic control. The latter control is performed by the VCECU performing at least one of the aforementioned four types of automatic controls and having the driver perform the rest of the driving operation (for example, the operation of the shift lever 72). The present embodiment assumes a case where the VCECU performs (executes) the former control.
At the registration mode, a position where backward perpendicular parking and/or backward parallel parking is possible can be registered as a parking position. In the present embodiment, the perpendicular parking is defined as a parking type in which the front-rear direction of the vehicle SV at a start timing of the parking assist control crosses the front-rear direction of the vehicle SV at a timing when the vehicle SV has been parked in the registered parking position. The parallel parking is defined as a parking type in which the front-rear direction of the vehicle SV at the start timing of the parking assist control is substantially parallel to the front-rear direction of the vehicle SV at the timing when the vehicle SV has been parked in the registered parking position.
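A minimal sketch of these definitions, classifying the parking type from the angle between the two front-rear directions, is shown below; the 45-degree boundary is an assumption, since the text only distinguishes “crosses” from “substantially parallel”.

```python
# Minimal sketch of the perpendicular/parallel classification described above.
def parking_type(start_heading_deg: float, parked_heading_deg: float) -> str:
    diff = abs(start_heading_deg - parked_heading_deg) % 180.0
    diff = min(diff, 180.0 - diff)     # smallest angle between the two directions
    return "perpendicular" if diff > 45.0 else "parallel"

assert parking_type(0.0, 90.0) == "perpendicular"
assert parking_type(0.0, 5.0) == "parallel"
```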
When the driver operates the parking assist switch 14 under a state where the vehicle SV is stopped, a system (hereinafter, referred to as a “parking assist system”) constructed for performing the parking assist control is activated. In a case where the parking assist system is activated when a parking position has not been registered yet, the VCECU first determines whether or not the second parking mode of the parking assist mode is feasible based on the image information, the object information, the information on the distance to a three-dimensional object, and so on. When it is determined that the second parking mode is feasible, the VCECU displays on the touch panel display 22 a display image G1 illustrated in
The left side region of the display image G1 includes a composite image G1S1 and a registration start button G1a. The composite image G1S1 is a surrounding image where a polygon SP corresponding to the vehicle SV is superimposed on a virtual viewpoint image where a “region in which parking by the second parking mode is possible” is seen from a virtual viewpoint set above the vehicle SV. The registration start button G1a is a button touched by the driver for a purpose of initiating processing of registering a parking position in the VCECU.
The right side region of the display image G1 includes a composite image G1S2. The composite image G1S2 is a surrounding image where the polygon SP is superimposed on a virtual viewpoint image where a surrounding of the vehicle SV is seen from a virtual viewpoint set right above the vehicle SV. Hereinafter, a composite image where a virtual viewpoint is set right above the vehicle SV will be especially referred to as a “composite bird's-eye view image”.
When a parking start button (illustration omitted) included in the display image G1 is touched, the parking assist control at the second parking mode is started.
It should be noted that the display image G1 actually includes various types of messages, buttons, and marks for starting the second parking mode. However, illustration and description thereof will be omitted for the sake of convenience. The same applies to other images such as a display image G2 and a display image G3 described later.
On the other hand, when it is determined that the second parking mode is not feasible, the VCECU displays on the touch panel display 22 a message that the second parking mode is not feasible as well as the registration start button G1a (illustration omitted). That is, in a case where the parking assist system is activated when a parking position has not been registered yet, the registration start button G1a is displayed on the touch panel display 22 regardless of whether or not the second parking mode is feasible.
When the registration start button G1a is touched, the VCECU starts the execution of the registration mode, and determines whether or not a registration of a parking position is possible by means of the perpendicular parking and/or the parallel parking in a right side region of the vehicle SV as well as whether or not a registration of a parking position is possible by means of the perpendicular parking and/or the parallel parking in a left side region of the vehicle SV. Hereinafter, “right side/left side regions of the vehicle SV” will be also simply referred to as “right side/left side regions”.
Specifically, the VCECU determines, based on the image information, the object information, and the information on the distance to a three-dimensional object, whether or not a space where the perpendicular parking and/or the parallel parking of the vehicle SV is possible is present in the right side region and the left side region of the vehicle SV, as well as whether or not it is possible to set a target route for moving the vehicle SV to this space without interference from any obstacle. Hereinafter, this determination will be referred to as “parking determination”.
In addition, the VCECU determines whether or not the predetermined number (for example, 12) of the feature points F are extractable (can be extracted) from each of the right bird's-eye view image and the left bird's-eye view image acquired from the PVM-ECU 20. That is, at the registration mode, a parking position is registered in association with a position of each of the feature points F (described later in detail). Therefore, when the feature points F are not extractable, a parking position cannot be registered in a space even though the space where the perpendicular parking and/or the parallel parking is possible is present and a target route can be set. Hereinafter, this determination will be referred to as “feature point determination”. Besides, “the predetermined number of the feature points F are extractable” will also be simply expressed as “the feature points F are extractable”.
When a space where the perpendicular parking and/or the parallel parking is possible is present in the right side region, a target route to that space can be set, and the feature points F are extractable from the right bird's-eye view image, the VCECU determines that the registration of a parking position in the right side region is possible by means of the perpendicular parking and/or the parallel parking, respectively.
When a space where the perpendicular parking and/or the parallel parking is possible is present in the left side region, a target route to that space can be set, and the feature points F are extractable from the left bird's-eye view image, the VCECU determines that the registration of a parking position in the left side region is possible by means of the perpendicular parking and/or the parallel parking, respectively.
When the feature points F are not extractable from the right/left bird's-eye view images, the VCECU determines that the registration of a parking position is impossible regardless of a result of the parking determination.
When a space is not present in the right side/left side regions where the perpendicular parking and/or the parallel parking are/is possible, or when it is impossible to set a target route even though the space is present, the VCECU determines that the registration of a parking position is impossible regardless of a result of the feature point determination.
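Combining the parking determination and the feature point determination described above, the registration-possible decision amounts to the conjunction sketched below; the boolean interface is illustrative, not the VCECU's actual implementation.

```python
# Minimal sketch of the combined registration-possible decision described above.
def registration_possible(space_present: bool,
                          route_settable: bool,
                          feature_points_extractable: bool) -> bool:
    return space_present and route_settable and feature_points_extractable

# Example: space and route exist, but too few feature points -> registration impossible.
assert registration_possible(True, True, False) is False
```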
When it is determined, by the parking determination and the feature point determination, that the registration of a parking position is possible by means of any one of the parking methods, the VCECU displays on the touch panel display 22 a display image G2 illustrated in
When it is determined that the registration of a parking position is possible in the right side region by means of the perpendicular parking and/or the parallel parking, the VCECU displays the right perpendicular parking selection button G2a and/or the right parallel parking selection button G2b in a selectable manner, respectively. In addition, the VCECU stores in the RAM thereof the right bird's-eye view image in association with the perpendicular parking and/or the parallel parking in the right side region.
When it is determined that the registration of a parking position is possible in the left side region by means of the perpendicular parking and/or the parallel parking, the VCECU displays the left perpendicular parking selection button G2c and/or the left parallel parking selection button G2d in a selectable manner, respectively. In addition, the VCECU stores in the RAM thereof the left bird's-eye view image in association with the perpendicular parking and/or the parallel parking in the left side region.
In an example of
For example, in an example of
On the other hand, when the VCECU has determined in the parking determination and in the feature point determination that the registration of a parking position is impossible, the VCECU displays on the touch panel display 22 a message that the registration of a parking position is impossible (illustration omitted), and terminates the registration mode.
When the driver touches either one of the parking method selection buttons G2a to G2d corresponding to a desired parking method among the parking method selection buttons displayed in a selectable manner, the VCECU determines to execute the registration of a parking position by means of the selected parking method. Hereinafter, a series of processing “from a timing when the registration mode is started to a timing when the parking method image G2 (or the message that the registration is impossible) is displayed on the touch panel display 22 based on the results of the parking determination and the feature point determination” will be referred to as “parking-method-image-display processing”. The parking-method-image-display processing is terminated at a timing when any one of the parking method selection buttons G2a to G2d has been touched.
When the parking-method-image-display processing is terminated, the VCECU displays on the touch panel display 22 a display image G3 illustrated in
The display image G3 includes a composite image G3S in a left side region thereof. The composite image G3S is a composite bird's-eye view image. A parking position display frame G3a is displayed in a superimposed manner on the composite image G3S. The display image G3 includes a position operation button G3c and a setting completion button G3d in a right side region thereof. The position operation button G3c includes six arrow buttons of an upward arrow, a downward arrow, a leftward arrow, a rightward arrow, a clockwise directed arrow, and a counterclockwise directed arrow.
The parking position display frame G3a is a rectangular-shaped frame indicating a parking position where the registration is planned. The position operation button G3c is operated by the driver for a purpose of moving a position of the parking position display frame G3a in the composite image G3S.
When one of the upward arrow, the downward arrow, the leftward arrow, or the rightward arrow included in the position operation button G3c is touched once, the parking position display frame G3a moves toward a direction of the touched arrow by a predetermined distance in the composite image G3S. When one of the clockwise directed arrow or the counterclockwise directed arrow is touched once, the parking position display frame G3a rotates around a center thereof toward a rotational direction of the touched arrow by a predetermined angle in the composite image G3S. Thereby, the driver can move a position of the parking position display frame G3a to a desired position in the composite image G3S by operating the position operation button G3c. Hereinafter, this operation will be also referred to as “parking position setting operation”.
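A minimal sketch of this parking position setting operation is shown below, under the assumption that the parking position display frame G3a is represented by its center coordinates and a heading angle; the step sizes merely stand in for the “predetermined distance” and “predetermined angle”.

```python
# Minimal sketch of moving/rotating the parking position display frame per touch.
STEP = 0.05        # translation per touch (illustrative unit in the composite image)
ANGLE_STEP = 1.0   # rotation per touch, in degrees

def move_frame(frame: tuple, arrow: str) -> tuple:
    """frame = (x, y, heading_deg); arrow in {'up','down','left','right','cw','ccw'}."""
    x, y, heading = frame
    if arrow == "up":
        y -= STEP
    elif arrow == "down":
        y += STEP
    elif arrow == "left":
        x -= STEP
    elif arrow == "right":
        x += STEP
    elif arrow == "cw":
        heading += ANGLE_STEP      # rotate about the frame's own center
    elif arrow == "ccw":
        heading -= ANGLE_STEP
    return (x, y, heading)

frame = (0.0, 0.0, 0.0)
frame = move_frame(frame, "right")   # one touch of the rightward arrow
```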
The setting completion button G3d is a button used for temporarily setting (determining) a position indicated by the parking position display frame G3a as a parking position Ppark where the registration is planned, and touched for a purpose of starting the control for automatically parking the vehicle SV in the parking position Ppark (the parking assist control). That is, the parking position Ppark is temporarily set in a region (a registration-planned-region) where the driver plans to register a parking position. Hereinafter, the display image G3 will be also referred to as a “parking position setting image G3”.
As shown in
As shown in
Hereinafter, a series of processing “from a timing when the parking-method-image-display processing is finished, via the parking position setting operation by which a position of the parking position display frame G3a is moved by the driver in the parking position setting image G3, to a timing when the position indicated by the parking position display frame G3a is temporarily set as the parking position Ppark” will be referred to as “parking position setting processing”. The parking position setting processing is terminated at a timing when the setting completion button G3d has been touched, and the gray level information, the coordinate (x, z), and the angle θ of each of the entrance feature points Fe have been stored in the RAM of the VCECU.
When the parking position setting processing is terminated, the VCECU performs (executes) the control (the parking assist control) for automatically parking the vehicle SV into the parking position Ppark set temporarily. This parking assist control is performed before the parking position Ppark (a parking position where the registration is planned) is actually registered, and therefore hereinafter, this control will be also referred to as “parking assist control for registration”.
In addition, when the parking position setting processing is terminated, the VCECU displays on the touch panel display 22 a parking assist image for registration (illustration omitted). The parking assist image for registration includes in a left side region thereof a camera's viewpoint image where a region toward a moving direction is seen from a position of the vehicle SV, and includes in a right side region thereof a composite bird's-eye view image. When the camera's viewpoint image and the composite bird's-eye view image include the parking position Ppark, a parking position display frame indicating the parking position Ppark is displayed on these camera's viewpoint image and the composite bird's-eye view image in a superimposed manner.
Specific description will be made on the parking assist control for registration. The VCECU determines, as a target route Rgt, a route for moving the vehicle SV from a current position (the position P1 in an example of
It should be noted that “the identification of the positional relationship between the current position of the vehicle SV and the parking position Ppark” described above is conducted by detecting the entrance feature points Fe. That is, when the parking assist control for registration is started, the VCECU determines, by a matching processing (described later), whether or not the entrance feature point(s) Fe is included in the bird's-eye view image acquired from the PVM-ECU 20 every time the predetermined time elapses. When at least one entrance feature point Fe is included in the bird's-eye view image, the VCECU determines that the entrance feature point(s) Fe has been detected, and calculates the parking position Ppark based on the coordinate(s) (x, z) and the angle(s) θ of the entrance feature point(s) Fe.
That is, the VCECU executes, while the parking assist control for registration is being performed, “processing for setting a target route Rgt based on the parking position Ppark calculated based on the entrance feature point(s) Fe, and performing the various types of controls for moving the vehicle SV along this target route Rgt” every time the predetermined time elapses. In the example of
It should be noted that there may also arise a case where the entrance feature point(s) Fe is no longer detected from any of the bird's-eye view images as a result of the vehicle SV having moved along the target route Rgt. In this case, the VCECU uses the latest target route Rgt among the plurality of target routes Rgt set in the past as the target route Rgt at the current timing.
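A heavily simplified sketch of how the parking position Ppark could be calculated from a single detected entrance feature point Fe is shown below. It assumes that the stored coordinate (x, z) and angle θ express the pose of Fe in a coordinate system fixed to the parking position, and that the matching processing yields the pose of the same point in the current vehicle coordinate system; the specification does not give these formulas explicitly.

```python
# Heavily simplified sketch (assumed geometry) of recovering the parking position
# pose from one detected entrance feature point.
import math

def parking_position_from_feature(fp_in_vehicle: tuple, fp_in_parking: tuple) -> tuple:
    """Each pose is (x, z, theta_rad); returns the parking position pose in the
    vehicle coordinate system by composing the two 2D rigid transforms."""
    xv, zv, tv = fp_in_vehicle    # detected pose of Fe (vehicle frame)
    xp, zp, tp = fp_in_parking    # registered pose of Fe (parking-position frame)
    theta = tv - tp               # heading of the parking position in the vehicle frame
    x = xv - (math.cos(theta) * xp - math.sin(theta) * zp)
    z = zv - (math.sin(theta) * xp + math.cos(theta) * zp)
    return (x, z, theta)
```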
Now, the matching processing will be described, referring to
Specifically, the VCECU executes the processing for calculating a similarity between the gray level information of the left bird's-eye view image and the gray level information of the entrance feature point Fe while shifting a pixel one by one in the longer direction of the capturing area 84. The VCECU repeats this processing while shifting a row one by one in the shorter direction of the capturing area 84. When an image having a similarity of the gray level information more than or equal to a predetermined similarity threshold is included in the left bird's-eye view image, the VCECU determines that the entrance feature point Fe has been detected from the left bird's-eye view image. The matching processing for the other bird's-eye view images is executed by the same method. Besides, when detecting other feature points F (inside feature point(s) Fi and peripheral feature point(s) Fp mentioned later) from the bird's-eye view images, the same matching processing is executed.
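A minimal sketch of this matching processing is given below, assuming a normalized-correlation template match of the stored gray level patch against the bird's-eye view image; the actual similarity measure and threshold are not specified, so cv2.matchTemplate with TM_CCOEFF_NORMED is only one plausible choice.

```python
# Minimal sketch of sliding-window gray-level matching over a bird's-eye view image.
import cv2
import numpy as np

SIMILARITY_THRESHOLD = 0.8   # illustrative value

def detect_feature_point(bev_gray: np.ndarray, patch_gray: np.ndarray):
    """Return (found, top-left location) of the best match of a stored patch."""
    result = cv2.matchTemplate(bev_gray, patch_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val >= SIMILARITY_THRESHOLD, max_loc
```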
At the registration mode, the inside feature points Fi and the peripheral feature points Fp are extracted in addition to the entrance feature points Fe in order to improve calculation accuracy of the parking position Ppark based on the feature points F. First, the inside feature points Fi will be described.
The VCECU calculates position estimating accuracy of the vehicle SV with respect to the parking position Ppark in a process of moving the vehicle SV to the parking position Ppark along the target route Rgt. When it is determined that the position estimating accuracy has become more than or equal to a predetermined accuracy threshold, the VCECU extracts, as shown in
Most of the feature points F extracted in this way are present inside the parking position Ppark. Therefore, hereinafter, the feature point F extracted at a timing when the position estimating accuracy of the vehicle SV with respect to the parking position Ppark has become more than or equal to the accuracy threshold will be referred to as an “inside feature point Fi”. The VCECU stores in the RAM thereof the gray level information, the coordinate (x, z), and the angle θ of each of the inside feature points Fi. The inside feature points Fi are used when calculating the parking position Ppark at the parking assist mode. That is, the inside feature points Fi are not used for calculating the parking position Ppark at the registration mode.
When the vehicle SV moves backward by a predetermined distance after the inside feature points Fi are extracted, the VCECU extracts the inside feature points Fi again. This predetermined distance is set, for example, to a distance by which the rear bird's-eye view image will not overlap with the previous rear bird's-eye view image. However, when the vehicle SV is already parked in the parking position Ppark before the vehicle SV moves backward by the predetermined distance, the extraction of the inside feature points Fi is conducted only once.
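The extraction timing of the inside feature points Fi described above can be summarized by the following minimal sketch; the accuracy threshold and the re-extraction distance are illustrative placeholders.

```python
# Minimal sketch of when inside feature points Fi are (re-)extracted.
ACCURACY_THRESHOLD = 0.9       # illustrative position estimating accuracy threshold
REEXTRACTION_DISTANCE = 1.0    # illustrative backward travel between extractions

def should_extract_inside_points(accuracy: float, backward_since_last: float,
                                 parked: bool, extracted_once: bool) -> bool:
    if not extracted_once:
        return accuracy >= ACCURACY_THRESHOLD      # first extraction
    return (not parked) and backward_since_last >= REEXTRACTION_DISTANCE
```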
Next, the peripheral feature points Fp will be described. When the VCECU has moved the vehicle SV to the parking position Ppark, the VCECU performs the braking force automatic control to stop the vehicle SV, and thereafter shifts a position of the shift lever 72 to “P” by the shift position automatic control. In this way, the parking of the vehicle SV into the parking position Ppark is finished (completed). When it is determined that the parking of the vehicle SV into the parking position Ppark has been finished, the VCECU extracts, as shown in
The feature points F extracted in this way are present around the parking position Ppark. Therefore, hereinafter, the feature point F extracted at a timing when the parking of the vehicle SV into the parking position Ppark has been finished will be referred to as a “peripheral feature point Fp”. The VCECU stores in the RAM thereof the gray level information, the coordinate (x, z), and the angle θ of each of the peripheral feature points Fp. The peripheral feature points Fp are used when calculating the parking position Ppark at the parking assist mode. That is, the peripheral feature points Fp are not used for calculating the parking position Ppark at the registration mode. Hereinafter, the entrance feature points Fe, the inside feature points Fi, and the peripheral feature points Fp may be also collectively referred to as “feature points F”.
Hereinafter, a series of processing “from a timing when the parking position setting processing is finished to a timing when the parking assist control for registration is finished” will be referred to as “parking assist processing for registration”. The parking assist processing for registration is terminated at a timing when the peripheral feature points Fp have been extracted and the gray level information, the coordinate (x, z), and the angle θ of each thereof have been stored in the RAM of the VCECU after the parking of the vehicle SV into the parking position Ppark is finished. That is, the parking assist processing for registration is processing for performing the parking assist control for registration.
When the parking assist processing for registration is terminated, the VCECU displays on the touch panel display 22 a parking position correction image (illustration omitted). The parking position correction image includes a composite bird's-eye view image in a left side region thereof, and a position operation button as well as a registration button in a right side region thereof. A parking position display frame indicating the parking position Ppark is displayed on the composite bird's-eye view image in a superimposed manner. The position operation button has a same configuration and function as the position operation button G3c, and is touched by the driver for a purpose of moving a position of the parking position display frame in the composite bird's-eye view image. The registration button is a button touched for a purpose of determining a position indicated by the parking position display frame as a registered parking position Ppark_reg, and finishing the registration mode.
The driver operates the position operation button and moves the position of the parking position display frame to a desired position, and thereby as shown in
Hereinafter, a series of processing “from a timing when the parking assist processing for registration is finished, via the parking position correction operation by which a position of the parking position display frame is moved by the driver in the parking position correction image, to a timing when the position indicated by the parking position display frame is registered as the registered parking position Ppark_reg” will be referred to as “parking position correction processing”. The parking position correction processing is terminated at a timing when the registration button has been touched and the coordinate (x, z) and the angle θ, corrected in the reset coordinate system, have been stored along with the gray level information in the non-volatile memory of the VCECU. When the parking position correction processing is terminated, the registration mode is finished. That is, the registration mode is a mode in which the following four types of processing, namely, the parking-method-image-display processing, the parking position setting processing, the parking assist processing for registration, and the parking position correction processing, are executed in this order.
Next, the parking assist mode will be described. Regarding same processing as the processing at the registration mode, a description thereof may be omitted.
When the parking assist switch 14 is operated by the driver under a state where the vehicle SV is stopped, the parking assist system is activated. When the parking assist system is activated in a case where the registered parking position Ppark_reg has been registered, the VCECU first determines, just as at the registration mode, whether or not the second parking mode is feasible. When it is determined that the second parking mode is feasible, the VCECU displays on the touch panel display 22 the display image G1 (refer to
Here, when the registered parking position Ppark_reg has been registered, the VCECU executes, during a period where the vehicle SV is travelling at a vehicle speed less than or equal to a predetermined vehicle speed, the matching processing for the entrance feature points Fe in the right and left bird's-eye view images, each acquired every time the predetermined time elapses, and determines whether or not at least one entrance feature point Fe has been detected from either one of these images. When at least one entrance feature point Fe has been detected at a timing when the parking assist switch 14 has been operated (refer to
When the driver touches the mode button, the VCECU displays on the touch panel display 22 a parking assist image (illustration omitted). That is, the VCECU switches an image from the display image G1 to the parking assist image.
The parking assist image includes, in a left side region thereof, a camera viewpoint image in which a region in the moving direction is seen from the position of the vehicle SV, and includes, in a right side region thereof, a composite bird's-eye view image as well as a parking start button (illustration omitted). When the camera viewpoint image and the composite bird's-eye view image both include the registered parking position Ppark_reg, a parking position display frame indicating this registered parking position Ppark_reg is displayed on these images in a superimposed manner. It should be noted that this registered parking position Ppark_reg is a parking position calculated based on the detected entrance feature point(s) Fe. When the driver touches the parking start button, the VCECU starts the parking assist mode, which is a mode for performing the control (the parking assist control) for automatically parking the vehicle SV in the registered parking position Ppark_reg.
When the parking assist mode is started, the VCECU executes “parking assist processing based on the entrance feature points”. This processing is substantially the same as the parking assist processing for registration in the registration mode. That is, the VCECU executes “processing for setting a target route Rgt based on the registered parking position Ppark_reg calculated based on the entrance feature point(s) Fe, and performing the various types of controls for moving the vehicle SV along this target route Rgt” every time the predetermined time elapses. It should be noted that when the entrance feature point(s) Fe can no longer be detected from any of the bird's-eye view images as a result of the vehicle SV having moved along the target route Rgt, the VCECU terminates the “parking assist processing based on the entrance feature points”. In this case, the VCECU uses, as the target route Rgt at the current timing, the latest target route Rgt among the plurality of target routes Rgt set in the past.
When the “parking assist processing based on the entrance feature points” is terminated, the VCECU executes the “parking assist processing based on the peripheral/inside feature points”. In this processing, the VCECU executes the matching processing using the bird's-eye view images (especially, the right, left, and rear bird's-eye view images) acquired from the PVM-ECU 20 every time the predetermined time elapses, and determines whether or not at least one peripheral feature point Fp and/or inside feature point Fi has been detected from these bird's-eye view images. When at least one peripheral feature point Fp and/or inside feature point Fi has been detected, the VCECU calculates the registered parking position Ppark_reg based on the coordinate (x, z) and the angle θ of each of the detected peripheral feature point(s) Fp and/or inside feature point(s) Fi. The VCECU sets a target route Rgt based on the registered parking position Ppark_reg, and performs the various types of controls for moving the vehicle SV along this target route Rgt.
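As a hedged sketch of one control period common to both phases of the parking assist mode (the names and signatures below are assumptions): when feature points are detected, the registered parking position and a new target route are computed; otherwise the latest previously set route is reused.

```python
# Hypothetical sketch of one control period of the parking assist mode.
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float]   # (x, z) of a detected feature point
Route = List[Point]           # target route Rgt as a list of waypoints

def one_assist_period(
    detected_points: List[Point],
    latest_route: Optional[Route],
    calc_registered_position: Callable[[List[Point]], Point],
    plan_route: Callable[[Point], Route],
) -> Optional[Route]:
    """Return the target route Rgt to follow during this period."""
    if detected_points:
        ppark_reg = calc_registered_position(detected_points)  # from Fe, or from Fp/Fi
        return plan_route(ppark_reg)                            # newly set target route
    return latest_route                                         # reuse the latest target route
```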
The VCECU executes the above-mentioned processing every time the predetermined time elapses. When the VCECU determines that the parking of the vehicle SV into the registered parking position Ppark_reg has been finished, the VCECU terminates the “parking assist processing based on the peripheral/inside feature points”.
When the “parking assist processing based on the peripheral/inside feature points” is terminated, the parking assist mode is finished. That is, the parking assist mode is a mode where the following two types of processing, that is, the “parking assist processing based on the entrance feature points” and the “parking assist processing based on the peripheral/inside feature points”, are executed in this order.
During a period where the registration mode or the parking assist mode is being performed, the VCECU concurrently executes control continuation determination processing every time the predetermined time elapses. Specifically, when it is determined that the state of the door of the vehicle SV has changed from the closing state to the opening state while “processing (hereinafter, also referred to as “registration-mode-other-processing”) other than the parking position correction processing in the registration mode” or the “parking assist mode” is being performed, the VCECU discontinues the currently performed control (processing) at this determination timing. On the other hand, when the “parking position correction processing in the registration mode” is being executed, the VCECU continues the currently performed parking position correction processing even when it is determined that the state of the door of the vehicle SV has changed from the closing state to the opening state. It should be noted that the “registration-mode-other-processing” is any one of the parking-method-image-display processing, the parking position setting processing, and the parking assist processing for registration.
When the registration mode is started, the CPU of the VCECU executes a routine shown by a flowchart in
Therefore, when the registration mode is started, the CPU initiates processing from a step 1800 in
When proceeding to the step 1900, the CPU executes a routine (the parking-method-image-display processing) shown by a flowchart in
When the CPU makes a “Yes” determination in both the parking determination and the feature point determination (S1905: Yes), the CPU determines that the registration of a parking position by means of any one of the parking methods is possible, and proceeds to a step 1910 to display the parking method image G2 on the touch panel display 22. Thereafter, at a timing when any one of the parking method selection buttons G2a to G2d has been touched, the CPU proceeds to a step 1995 to terminate the parking-method-image-display processing, and proceeds to the step 2000 in
In contrast, when the CPU makes a “No” determination in at least one of the parking determination and the feature point determination (S1905: No), the CPU determines that the registration of a parking position is impossible, and proceeds to a step 1915 to display on the touch panel display 22 the message that the registration of a parking position is impossible. Thereafter, the CPU proceeds to the step 1995 to terminate the parking-method-image-display processing as well as the registration mode.
When proceeding to the step 2000, the CPU executes a routine (the parking position setting processing) shown by a flowchart in
When the CPU makes a “Yes” determination at the step 2010 (S2010: Yes) in the midst of repeating the above processing, the CPU proceeds to a step 2015 to move the position of the parking position display frame G3a in the composite image G3S. Thereafter, the CPU proceeds to the step 2020 to determine whether or not the setting completion button G3d has been touched. When the CPU makes a “Yes” determination at the step 2020 (S2020: Yes), the CPU proceeds to a step 2025 to set a coordinate system for the parking position Ppark, define the feature points F determined to be extractable by the feature point determination at the step 1905 as the entrance feature points Fe, and store the gray level information, the coordinate (x, z), and the angle θ of each thereof in the RAM of the VCECU. In other words, the CPU stores a positional relationship between the entrance feature points Fe and the parking position Ppark. Thereafter, the CPU proceeds to the step 2095 to terminate the parking position setting processing, and proceeds to the step 2100 in
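The positional relationship stored at the step 2025 could, purely as an illustration, be represented by one record per feature point; the field names below are hypothetical.

```python
# Hypothetical sketch: per-point record stored in RAM at the step 2025,
# expressing each entrance feature point Fe in the coordinate system set
# for the parking position Ppark.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FeaturePointRecord:
    gray_level: Tuple[float, ...]  # gray level information used for later matching
    x: float                       # coordinate (x, z) in the Ppark coordinate system
    z: float
    theta: float                   # angle of the feature point in that coordinate system

def store_entrance_points(ram_store: List[FeaturePointRecord],
                          extracted: List[FeaturePointRecord]) -> None:
    """Append the extracted points as entrance feature points Fe (volatile storage)."""
    ram_store.extend(extracted)
```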
When proceeding to the step 2100, the CPU executes a routine (the parking assist processing for registration) shown by a flowchart in
When it is determined that a value of the moving backward flag is 0 (S2110: Yes), the CPU proceeds to a step 2115 to determine whether or not at least one entrance feature point Fe has been detected. When the CPU makes a “Yes” determination at the step 2115 (S2115: Yes), the CPU calculates, at a step 2120, the parking position Ppark based on the entrance feature point(s) Fe. Thereafter, the CPU proceeds to a step 2125 to set a target route Rgt based on the parking position Ppark, and executes, at a step 2130, the various types of controls for moving the vehicle SV along the target route Rgt.
Thereafter, the CPU proceeds to a step 2135 to determine whether or not the parking of the vehicle SV into the parking position Ppark has been finished. When the CPU makes a “No” determination at the step 2135 (S2135: No), the CPU determines, at a step 2140, whether or not the position estimating accuracy of the vehicle SV with respect to the parking position Ppark is more than or equal to the accuracy threshold. As mentioned above, the position estimating accuracy becomes more than or equal to the accuracy threshold when the vehicle SV moves backward and a part thereof has entered inside the parking position Ppark (in other words, at a timing when the vehicle SV starts to move straight in the backward direction). Therefore, the CPU makes a “No” determination at the step 2140 (S2140: No) until the vehicle SV moves backward and a part thereof enters inside the parking position Ppark, and thereafter proceeds to a step 2195 to tentatively terminate the present routine.
When no entrance feature point Fe is detected any longer, even by means of the rear camera 21b, as a result of the vehicle SV having moved along the target route Rgt by repeating the above processing, the CPU makes a “No” determination at the step 2115 (S2115: No). In this case, the CPU proceeds to the step 2130 and performs the various types of controls for moving the vehicle SV along the target route Rgt calculated at the latest period.
Thereafter, the CPU proceeds to the step 2135 and when the CPU makes a “No” determination (S2135: No), the CPU makes a determination at the step 2140. When a part of the vehicle SV has not entered inside the parking position Ppark yet (in other words, when the vehicle SV has not been moving straight in the backward direction), the CPU makes a “No” determination at the step 2140 (S2140: No), and thereafter, proceeds to the step 2195 to tentatively terminate the present routine.
When the CPU makes a “Yes” determination at the step 2140 (S2140: Yes) in the midst of repeating the above processing, the CPU proceeds to a step 2145 to set a value of the moving backward flag to 1. Subsequently, the CPU proceeds to a step 2150 to determine whether or not a non-overlapping condition that “the feature points F extractable from the rear bird's-eye view image at the current period are not overlapping with the feature points F already extracted from the rear bird's-eye view image” is satisfied (in other words, whether or not there are non-overlapping feature points F in the rear bird's-eye view image at the current period). In a case where a “Yes” determination has been made for the first time at the step 2140, the processing for extracting the feature points F from the rear bird's-eye view image has not been executed yet, and therefore the CPU makes a “Yes” determination at the step 2150 (S2150: Yes). Thereafter, the CPU proceeds to a step 2155 to extract the feature points F as the inside feature points Fi from the rear bird's-eye view image acquired at the current period, and stores in the RAM of the VCECU the gray level information, the coordinate (x, z), and the angle θ of each thereof. In other words, the CPU stores a positional relationship between the inside feature points Fi and the parking position Ppark. Thereafter, the CPU proceeds to the step 2195 to tentatively terminate the present routine.
The CPU repeats the processing described above, and makes a determination at the step 2110 via the step 2105. At this point, a value of the moving backward flag has been set to 1 at the step 2145, and therefore the CPU makes a “No” determination at the step 2110 (S2110: No), and performs, at the step 2130, the various types of controls for moving the vehicle SV along the target route Rgt calculated at the latest period. Once the vehicle SV moves backward and a part thereof has entered inside the parking position Ppark, the position estimating accuracy of the vehicle SV with respect to the parking position Ppark remains more than or equal to the accuracy threshold until the parking of the vehicle SV into the parking position Ppark is finished. Therefore, after the processing at the step 2130, when the CPU makes a “No” determination at the step 2135 (S2135: No), the CPU makes a “Yes” determination at the step 2140 (S2140: Yes), and makes a determination at the step 2150 again via the step 2145. When the non-overlapping condition is not satisfied (that is, at least one of the feature points F extractable from the rear bird's-eye view image at the current period is overlapping with the feature points F already extracted from the rear bird's-eye view image) (S2150: No), the CPU proceeds to the step 2195 to tentatively terminate the present routine.
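A minimal sketch of the backward-movement branch of this routine (the steps 2140 to 2155) is given below, assuming the non-overlapping condition is evaluated by comparing point identifiers; this is a simplification, not the claimed matching criterion.

```python
# Hypothetical sketch of the steps 2140-2155: once the position estimating
# accuracy reaches the threshold, set the moving backward flag and store only
# feature points not already extracted from the rear bird's-eye view image.
from typing import Hashable, List, Set, Tuple

def backward_phase_update(
    accuracy: float,
    accuracy_threshold: float,
    moving_backward_flag: int,
    current_rear_points: List[Hashable],   # identifiers of points in the current rear image
    already_extracted: Set[Hashable],      # identifiers already stored as inside points Fi
) -> Tuple[int, List[Hashable]]:
    """Return the updated flag and the points newly stored as inside feature points Fi."""
    if accuracy < accuracy_threshold:
        return moving_backward_flag, []                 # S2140: "No"
    moving_backward_flag = 1                            # S2145
    new_points = [p for p in current_rear_points if p not in already_extracted]  # S2150
    if not new_points:
        return moving_backward_flag, []                 # non-overlapping condition not satisfied
    already_extracted.update(new_points)                # S2155: store as Fi
    return moving_backward_flag, new_points
```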
When the CPU makes a “Yes” determination at the step 2135 (S2135: Yes) in the midst of repeating the above processing, the CPU proceeds to a step 2160 to extract the feature points F as the peripheral feature points Fp from each of the right, left, and front bird's-eye view images acquired at the current period, and stores in the RAM of the VCECU the gray level information, the coordinate (x, z), and the angle θ of each thereof. In other words, the CPU stores a positional relationship between the peripheral feature points Fp and the parking position Ppark. Besides, the CPU sets (initializes) a value of the moving backward flag to 0. Thereafter, the CPU proceeds to the step 2195 to terminate the parking assist processing for registration, and proceeds to the step 2200 in
When proceeding to the step 2200, the CPU executes a routine (the parking position correction processing) shown by a flowchart in
When the CPU makes a “Yes” determination at the step 2210 (S2210: Yes) in the midst of repeating the above processing, the CPU proceeds to a step 2215 to move the position of the parking position display frame in the composite bird's-eye view image. Thereafter, the CPU proceeds to the step 2220 to determine whether or not the registration button has been touched. When the CPU makes a “Yes” determination at the step 2220 (S2220: Yes), the CPU displays, at a step 2225, the registration completion image on the touch panel display 22, and proceeds to a step 2230. At the step 2230, the CPU resets a coordinate system for the registered parking position Ppark_reg, and stores in the non-volatile memory of the VCECU the corrected coordinate (x, z) and angle θ of each of the entrance/inside/peripheral feature points Fe, Fi, Fp along with the gray level information of each thereof. In other words, the CPU stores a positional relationship between the entrance/inside/peripheral feature points Fe, Fi, Fp and the registered parking position Ppark_reg. Thereafter, the CPU proceeds to the step 2295 to terminate the parking position correction processing, and proceeds to a step 1895 in
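Assuming, for illustration only, that the correction made by the driver is a pure translation of the parking position display frame by (dx, dz), re-anchoring the stored feature points on the corrected position could look like the following sketch (the data layout is hypothetical).

```python
# Hypothetical sketch of the step 2230: re-express the stored feature points in
# the coordinate system reset on the corrected (registered) parking position,
# assuming a translation-only correction, and hand them to persistent storage.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class StoredPoint:
    x: float
    z: float
    theta: float
    gray_level: Tuple[float, ...]

def correct_and_register(points: List[StoredPoint],
                         frame_offset: Tuple[float, float]) -> List[StoredPoint]:
    """Shift every entrance/inside/peripheral feature point by the frame movement
    (dx, dz); the result would be written to non-volatile memory."""
    dx, dz = frame_offset
    return [replace(p, x=p.x - dx, z=p.z - dz) for p in points]
```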
When the parking assist mode is started, the CPU executes a routine shown by a flowchart in
Therefore, when the parking assist mode is started, the CPU initiates processing from a step 2300 in
When proceeding to the step 2400, the CPU executes a routine (the parking assist processing based on the entrance feature points) shown by a flowchart in
When no entrance feature point Fe is detected any longer, even by means of the rear camera 21b, as a result of the vehicle SV having moved along the target route Rgt by repeating the above processing, the CPU makes a “No” determination at the step 2410 (S2410: No). In this case, the CPU proceeds to the step 2425 and performs the various types of controls for moving the vehicle SV along the target route Rgt calculated at the latest period. Thereafter, the CPU proceeds to the step 2495 to terminate the parking assist processing based on the entrance feature points, and proceeds to the step 2500 in
When proceeding to the step 2500, the CPU executes a routine (the parking assist processing based on the peripheral/inside feature points) shown by a flowchart in
When the position of the shift lever 72 has been shifted to “R” in the midst of repeating the above processing, the CPU makes a “Yes” determination at the step 2510 (S2510: Yes), and proceeds to a step 2520 to determine whether or not at least one peripheral feature point Fp and/or inside feature point Fi has been detected. When the CPU makes a “No” determination (S2520: No), the CPU proceeds to the step 2515 to perform the various types of controls for moving the vehicle SV along the target route Rgt calculated at the latest period. Thereafter, the CPU proceeds to the step 2595 to tentatively terminate the present routine.
When the CPU makes a “Yes” determination at the step 2520 (S2520: Yes) in the midst of repeating the above processing, the CPU calculates, at a step 2525, the registered parking position Ppark_reg based on the peripheral feature point(s) Fp and/or the inside feature point(s) Fi. Subsequently, the CPU proceeds to a step 2530 to set a target route Rgt based on the registered parking position Ppark_reg, and performs, at a step 2535, the various types of controls for moving the vehicle SV along the target route Rgt.
Thereafter, the CPU proceeds to a step 2540 to determine whether or not the parking of the vehicle SV into the registered parking position Ppark_reg has been finished. When the CPU makes a “No” determination (S2540: No), the CPU proceeds to the step 2595 to tentatively terminate the present routine. When the CPU makes a “Yes” determination at the step 2540 (S2540: Yes) in the midst of repeating the above processing, the CPU proceeds to the step 2595 to terminate the parking assist processing based on the peripheral/inside feature points, and proceeds to a step 2395 in
During a period where the above-mentioned registration mode or the parking assist mode is being performed, the CPU executes a routine (the control continuation determination processing) shown by a flowchart in
Therefore, when either the registration mode or the parking assist mode is started, the CPU starts processing from a step 2600 in
In contrast, when the CPU makes a “Yes” determination at the step 2605 (S2605: Yes), the CPU determines that the state of the door has changed to the opening state, and proceeds to a step 2610. At the step 2610, the CPU determines whether or not the control (processing) which is being currently performed (executed) is the “parking position correction processing in the registration mode”. When the CPU makes a “No” determination (S2610: No), the CPU determines that either the “registration-mode-other-processing (the processing other than the parking position correction processing in the registration mode)” or the “parking assist mode” is being currently performed, and proceeds to a step 2615 to discontinue (stop) the control (processing) determined to be currently performed. Thereafter, the CPU proceeds to the step 2695 to tentatively terminate the present routine. At this time, a message that the control (processing) has been discontinued may be displayed on the touch panel display 22 or may be announced via a speaker (illustration omitted).
On the other hand, when the CPU makes a “Yes” determination at the step 2610 (S2610: Yes), the CPU determines that the parking position correction processing in the registration mode is being currently performed, and proceeds to the step 2695 to tentatively terminate the present routine. That is, this parking position correction processing in the registration mode continues to be performed.
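The branch structure of this routine (the steps 2605, 2610, and 2615) can be summarized by the following sketch; the string labels are illustrative stand-ins for the internal state of the VCECU, not actual identifiers.

```python
# Hypothetical sketch of the control continuation determination routine:
# the current control is discontinued when the door changes to the opening
# state, unless the parking position correction processing is running.
def control_continuation_determination(door_changed_to_open: bool,
                                       current_processing: str) -> str:
    """Return "continue" or "discontinue" for the currently performed control."""
    if not door_changed_to_open:
        return "continue"                                    # S2605: "No"
    if current_processing == "parking_position_correction":  # S2610: "Yes"
        return "continue"                                    # correction work is not interrupted
    return "discontinue"                                     # S2615: stop and notify the driver
```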
Effects of the present embodiment apparatus will be described. In the present embodiment apparatus, even when the door is opened in the midst of the parking position correction processing in the registration mode, this parking position correction processing is continued without being discontinued. Therefore, it becomes possible for the driver to correct the parking position Ppark after opening the door and actually confirming the outside environment (specifically, after confirming that the parking position which the driver is planning to register is safe as the registered parking position Ppark_reg). Accordingly, it becomes possible to register the parking position Ppark in a desired position. In addition, according to the present embodiment apparatus, when the door is opened in the midst of the parking assist mode, this parking assist mode is discontinued at the timing when the door is opened. Therefore, a possibility that the parking assist control is performed toward a false position different from the registered parking position Ppark_reg due to a shift of the camera 21 from its normal position caused by the door being opened can be reduced. It should be noted that the feature points F have been extracted prior to the parking position correction processing. Therefore, the extraction processing of the feature points F is not prevented even when the door is opened in the midst of the parking position correction processing and the camera 21 is shifted from its normal position.
In addition, in the present embodiment apparatus, when the door is opened in the midst of the registration-mode-other-processing, the processing which is being performed is discontinued at the timing when the door is opened. Thus, it can be prevented that the feature points F are extracted from bird's-eye view images generated based on captured images taken by the camera 21 while the door is open. That is, it can be prevented that a false positional relationship between the registered parking position Ppark_reg and the feature points F is registered when the registered parking position Ppark_reg is registered in association with the feature points F. Therefore, in the parking assist mode, the registered parking position Ppark_reg can be calculated with high accuracy based on the detected feature points F.
The parking assist apparatus according to the embodiment of the present invention has been described. However, the present invention is not limited thereto and may adopt various modifications within the scope of the present invention. For example, the present embodiment apparatus may comprise a non-illustrated voice recognizer, and a part or all of the touch operations may be replaced with voice operations by the driver.
Priority claim: Japanese Patent Application No. 2019-187485, filed in October 2019 (JP, national).