This application claims priority to and the benefit of Japanese Patent Application No. 2023-035849, filed Mar. 8, 2023, the entire disclosure of which is incorporated herein by reference.
The present invention relates to a moving object control system, a control method thereof, a storage medium, and a moving object.
In recent years, compact moving objects have become known, such as electric vehicles called ultra-compact mobility vehicles (also referred to as micro mobility vehicles), each having a riding capacity of about one or two persons, and mobile robots that provide various types of services to humans. Some such moving objects autonomously travel while periodically generating a traveling path to a destination.
Japanese Patent Laid-Open No. 2018-2082 proposes a moving path generation apparatus that sets a comfortable moving path in which a load applied to an occupant is reduced. Specifically, the moving path generation apparatus sets a traveling track having a smallest degree of curvature change per traveling distance based on an angle formed by an advancing direction of a vehicle at a current position and an advancing direction of the vehicle at a target position, and a curvature of a traveling track and a traveling distance according to a steering angle of the vehicle.
The micro mobility vehicles include, for example, a three-wheeled vehicle including a front wheel and a tail wheel (driven wheel) that operates following driving of the front wheel. In such a vehicle, there is a possibility that turning will occur on the spot according to a target value of a posture angle of the vehicle at the start of traveling or at the time of arrival determination. When the turning occurs on the spot, an angle of the tail wheel increases, which adversely affects a riding comfort of the occupant. In addition, a compact micro mobility vehicle needs to suppress the use of hardware resources as much as possible. Therefore, it is desirable to reduce a processing amount at the time of generating the traveling path as much as possible and to efficiently use limited hardware resources.
The present invention has been made in view of the above problems, and an object thereof is to generate a traveling path in consideration of the position and posture of a vehicle at low calculation cost.
According to one aspect of the present invention, there is provided a moving object control system comprising: a setting unit configured to set a current position and a target position of a moving object; a path generation unit configured to generate a first path from the current position to the target position so as to satisfy a predetermined boundary condition on the lw coordinates in which a straight line connecting the current position and the target position of the moving object is defined as an l-axis and a straight line orthogonal to the l-axis is defined as a w-axis; and a conversion unit configured to convert the generated first path into the xy coordinates in which an advancing direction of the moving object is defined as an x-axis and an axis orthogonal to the x-axis is defined as a y-axis.
According to another aspect of the present invention, there is provided a control method of a moving object control system, comprising: a setting step of setting a current position and a target position of a moving object; a path generation step of generating a first path from the current position to the target position so as to satisfy a predetermined boundary condition on the lw coordinates in which a straight line connecting the current position and the target position of the moving object is defined as an l-axis and a straight line orthogonal to the l-axis is defined as a w-axis; and a conversion step of converting the generated first path into the xy coordinates in which an advancing direction of the moving object is defined as an x-axis and an axis orthogonal to the x-axis is defined as a y-axis.
According to still another aspect of the present invention, there is provided a non-transitory storage medium storing a program for causing a computer to function as: a setting unit configured to set a current position and a target position of a moving object; a path generation unit configured to generate a first path from the current position to the target position so as to satisfy a predetermined boundary condition on the lw coordinates in which a straight line connecting the current position and the target position of the moving object is defined as an l-axis and a straight line orthogonal to the l-axis is defined as a w-axis; and a conversion unit configured to convert the generated first path into the xy coordinates in which an advancing direction of the moving object is defined as an x-axis and an axis orthogonal to the x-axis is defined as a y-axis.
According to yet still another aspect of the present invention, there is provided a moving object comprising: a setting unit configured to set a current position and a target position of the moving object; a path generation unit configured to generate a first path from the current position to the target position so as to satisfy a predetermined boundary condition on the lw coordinates in which a straight line connecting the current position and the target position of the moving object is defined as an l-axis and a straight line orthogonal to the l-axis is defined as a w-axis; and a conversion unit configured to convert the generated first path into the xy coordinates in which an advancing direction of the moving object is defined as an x-axis and an axis orthogonal to the x-axis is defined as a y-axis.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention, and limitation is not made to an invention that requires all combinations of the features described in the embodiments. Two or more of the multiple features described in the embodiments may be combined as appropriate. Furthermore, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
A configuration of a moving object 100 according to the present embodiment will be described with reference to
The moving object 100 is equipped with a battery 113 and is, for example, an ultra-compact mobility vehicle that moves mainly by the power of a motor. The ultra-compact mobility vehicle is an ultra-compact vehicle that is more compact than a general automobile and has a riding capacity of about one or two persons. In addition to a roadway and a sidewalk, the moving object 100 can also travel in the sites of various facilities, public open spaces, and the like. In the present embodiment, an ultra-compact mobility vehicle with three wheels will be described as an example of the moving object 100, but there is no intention to limit the present invention; for example, a four-wheeled vehicle or a straddle type vehicle may be used. Further, the moving object of the present invention is not limited to a vehicle that carries a person, and may be a vehicle that is loaded with luggage and travels alongside a walking person, or a vehicle that leads a person. Furthermore, the present invention is not limited to a four-wheeled or two-wheeled vehicle, and is also applicable to a walking robot or the like capable of autonomously moving.
The moving object 100 is an electric autonomous vehicle including a traveling unit 112 and using a battery 113 as a main power supply. The battery 113 is, for example, a secondary battery such as a lithium ion battery, and the moving object 100 autonomously travels by the traveling unit 112 by electric power supplied from the battery 113. The traveling unit 112 is a three-wheeled vehicle including a pair of left and right front wheels 120 and a tail wheel (driven wheel) 121. The traveling unit 112 may be in another form, such as a four-wheeled vehicle. The moving object 100 includes a seat 111 for one person or two persons.
The traveling unit 112 includes a steering mechanism 123. The steering mechanism 123 uses motors 122a and 122b as a drive source to change a steering angle of the pair of front wheels 120. An advancing direction of the moving object 100 can be changed by changing the steering angle of the pair of front wheels 120. The tail wheel 121 is a driven wheel that does not individually have a drive source but operates following driving of the pair of front wheels 120. Further, the tail wheel 121 is connected to a vehicle body of the moving object 100 via a turning portion. The turning portion rotates such that an orientation of the tail wheel 121 changes separately from the rotation of the tail wheel 121. In this manner, the moving object 100 according to the present embodiment adopts the form of a differential two-wheeled mobility vehicle with the tail wheel, but is not limited thereto.
The moving object 100 includes a detection unit 114 that recognizes a plane in front of the moving object 100. The detection unit 114 is an external sensor that monitors the front of the moving object 100, and is an imaging apparatus that captures an image of the front of the moving object 100 in the case of the present embodiment. In the present embodiment, a stereo camera having an optical system such as two lenses and respective image sensors will be described as an example of the detection unit 114. However, instead of or in addition to the imaging apparatus, a radar or a light detection and ranging (LiDAR) can also be used. Further, an example in which the detection unit 114 is provided only in front of the moving object 100 will be described in the present embodiment, but there is no intention to limit the present invention, and the detection unit 114 may be provided at the rear, the left, or right of the moving object 100.
The moving object 100 according to the present embodiment captures an image of a front region of the moving object 100 using the detection unit 114, and detects an obstacle or a topography (intersection) from the captured image. Furthermore, the moving object 100 can divide a peripheral region of the moving object 100 into grids, and control traveling while generating an occupancy grid map in which obstacle information is accumulated in each of the grids. Note that the occupancy grid map is generated in sidewalk traveling, traveling in a facility, or the like, and is useful for making a path plan for avoiding an obstacle. On the other hand, in roadway traveling, it is not always necessary to generate the occupancy grid map because a path plan is made by recognizing a road structure. However, even in the roadway traveling, the occupancy grid map may be generated by regarding a boundary of a lane, a parked vehicle, or the like as an obstacle, and the occupancy grid map may be used for a path plan including a lane change for avoiding the obstacle. That is, in the present invention in which a global path is generated by a polynomial curve, it is not always necessary to generate the occupancy grid map, but it is desirable to generate the occupancy grid map in a traveling scene in which there is a possibility that an obstacle exists. Details of the occupancy grid map will be described later.
The control unit 130 acquires a detection result of the detection unit 114, input information of an operation panel 131, voice information input from a voice input apparatus 133, position information from the GNSS sensor 134, and reception information via a communication unit 136, and executes corresponding processing. The control unit 130 performs control of the motors 122a and 122b (traveling control of the traveling unit 112), display control of the operation panel 131, notification to an occupant of the moving object 100 by voice of a speaker 132, and output of information.
The voice input apparatus 133 can collect a voice of the occupant of the moving object 100. The control unit 130 can recognize the input voice and execute processing corresponding to the recognized input voice. A global navigation satellite system (GNSS) sensor 134 receives a GNSS signal, and detects the current position of the moving object 100. A storage apparatus 135 is a storage device that stores a captured image by the detection unit 114, obstacle (target) information, a path generated in the past, an occupancy grid map, and the like. The storage apparatus 135 may also store programs to be executed by the processors, data to be used by the processors for processing, and the like. The storage apparatus 135 may store various parameters (for example, learned parameters of a deep neural network, hyperparameters, and the like) of a machine learning model for voice recognition or image recognition to be executed by the control unit 130.
The communication unit 136 communicates with a communication apparatus 140, which is an external apparatus, via wireless communication such as Wi-Fi or 5th generation mobile communication. The communication apparatus 140 is, for example, a smartphone, but is not limited thereto, and may be an earphone type communication terminal, a personal computer, a tablet terminal, a game machine, or the like. The communication apparatus 140 is connected to a network via wireless communication such as Wi-Fi or 5th generation mobile communication.
A user who owns the communication apparatus 140 can give an instruction to the moving object 100 via the communication apparatus 140. The instruction includes, for example, an instruction for calling the moving object 100 to a position desired by the user for joining. When receiving the instruction, the moving object 100 sets a target position based on position information included in the instruction. Note that, in addition to such an instruction, the moving object 100 can set the target position from the captured image of the detection unit 114, or can set the target position based on an instruction, received via the operation panel 131, from the user riding on the moving object 100. In a case of setting the target position from the captured image, for example, a person who raises his/her hand for the moving object 100 is detected in the captured image, and the position of the detected person is estimated and set as the target position.
Next, functional configurations of the moving object 100 according to the present embodiment will be described with reference to
A user instruction acquisition unit 301 has a function of receiving an instruction from a user, and can receive a user instruction via the operation panel 131, a user instruction from an external apparatus such as the communication apparatus 140 via the communication unit 136, and an instruction by an utterance of the user via the voice input apparatus 133. As described above, the user instruction includes an instruction to set a target position (also referred to as a destination) of the moving object 100 and an instruction related to the traveling control of the moving object 100.
An image information processing unit 302 processes the captured image acquired by the detection unit 114. Specifically, the image information processing unit 302 creates a depth image from a stereo image acquired by the detection unit 114 to obtain a three-dimensional point cloud. Image data converted into the three-dimensional point cloud is used to detect an obstacle or a target that hinders traveling of the moving object 100. In addition, the image information processing unit 302 may include a machine learning model that processes image information, and may perform processing on a learning stage and processing on an inference stage of the machine learning model. The machine learning model of the image information processing unit 302 can perform processing of recognizing a three-dimensional object and the like included in the image information by performing computation of a deep learning algorithm using a deep neural network (DNN), for example.
A grid map generation unit 303 creates a grid map of a predetermined size (for example, a region of 20 m×20 m with each cell of 10 cm×10 cm) based on the image data of the three-dimensional point cloud. This is intended to reduce the data amount, since the amount of data in the three-dimensional point cloud is large and makes real-time processing difficult. The grid map includes, for example, a grid map indicating a difference between a maximum height and a minimum height of an intra-grid point cloud (representing whether or not the cell is a step) and a grid map indicating a maximum height of the intra-grid point cloud from a reference point (representing a topography shape of the cell). Furthermore, the grid map generation unit 303 removes spike noise and white noise included in the generated grid map, detects an obstacle having a predetermined height or more, and generates an occupancy grid map indicating, for each grid, whether or not there is a three-dimensional object as the obstacle.
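As a minimal sketch of this reduction, assuming the point cloud is given as an N×3 array in vehicle-centered coordinates, the step-detecting grid map described above could be built as follows. The function name `make_occupancy_grid`, the 20 m region, and the 5 cm step threshold are illustrative assumptions, not the implementation of the embodiment:

```python
import numpy as np

def make_occupancy_grid(points, region=20.0, cell=0.1, step_thresh=0.05):
    """Reduce a 3-D point cloud to an occupancy grid.

    points: (N, 3) array of x, y, z in meters, centered on the vehicle.
    A cell is marked occupied ("1") when the difference between the
    highest and lowest point falling in it exceeds step_thresh
    (e.g. 5 cm), i.e. the cell contains a step or three-dimensional object.
    """
    n = int(region / cell)                       # e.g. 200 x 200 cells
    half = region / 2.0
    # Map x, y coordinates to grid indices; drop points outside the region.
    ix = ((points[:, 0] + half) / cell).astype(int)
    iy = ((points[:, 1] + half) / cell).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    ix, iy, z = ix[ok], iy[ok], points[ok, 2]

    zmax = np.full((n, n), -np.inf)
    zmin = np.full((n, n), np.inf)
    np.maximum.at(zmax, (ix, iy), z)             # max height per cell
    np.minimum.at(zmin, (ix, iy), z)             # min height per cell

    occ = np.zeros((n, n), dtype=np.uint8)
    seen = np.isfinite(zmax) & np.isfinite(zmin)
    occ[seen] = (zmax[seen] - zmin[seen] > step_thresh).astype(np.uint8)
    return occ
```

A grid map of maximum height from a reference point (the topography map mentioned above) could be obtained from the same `zmax` array.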
A path generation unit 304 generates a traveling path of the moving object 100 with respect to the target position set by the user instruction acquisition unit 301. Specifically, the path generation unit 304 generates the path using the occupancy grid map generated by the grid map generation unit 303 from the captured image of the detection unit 114 without requiring obstacle information of a high-precision map. Note that the detection unit 114 is the stereo camera that captures the image of the front region of the moving object 100, and thus, is not able to recognize obstacles in the other directions, the topography, or the like. Therefore, it is desirable that the moving object 100 stores detected obstacle information for a predetermined period in order to avoid a collision with an obstacle outside a viewing angle and to avoid getting stuck in a dead end. As a result, the moving object 100 can generate the path in consideration of both an obstacle detected in the past and an obstacle detected in real time.
Further, the path generation unit 304 periodically generates a global path using the occupancy grid map, and periodically generates a local path so as to follow the global path. That is, a target position of the local path is determined by the global path. In the present embodiment, as a generation cycle of each path, the generation cycle of the global path is set to 100 ms, and the generation cycle of the local path is set to 50 ms, but the present invention is not limited thereto. As an algorithm for generating a global path, various algorithms such as a rapid-exploring random tree (RRT), a probabilistic road map (PRM), and A* are known. Further, since the differential two-wheeled mobility vehicle with the tail wheel is adopted as the moving object 100, the path generation unit 304 generates the local path in consideration of the tail wheel 121 which is the driven wheel. Further, according to the present embodiment, when generating the global path, the path generation unit 304 considers the posture angle of the moving object 100 at the current position and the posture angle of the path at the target position. By considering the position and the posture angle of the moving object 100 in this manner, it is possible to avoid turning at the current position or the target position.
The traveling control unit 305 controls the traveling of the moving object 100 in accordance with the local path. Specifically, the traveling control unit 305 controls the traveling unit 112 in accordance with the local path to control a speed and an angular velocity of the moving object 100. Further, the traveling control unit 305 controls traveling in response to various operations of a driver. When a deviation occurs in a driving plan of the local path due to an operation of the driver, the traveling control unit 305 may control traveling by acquiring a new local path generated by the path generation unit 304 again, or may control the speed and angular velocity of the moving object 100 so as to eliminate the deviation from the local path in use.
The grid map generation unit 303 according to the present embodiment divides a peripheral region of the moving object 100 into grids, and generates an occupancy grid map including information indicating the presence or absence of an obstacle for each of the grids (divided regions). Note that an example in which a predetermined region is divided into grids will be described here. However, instead of being divided into grids, the predetermined region may be divided into other shapes to create an occupancy map indicating the presence or absence of an obstacle for each divided region. In addition, in the present invention, since a smooth curved path that does not depend on the divided regions is generated, it is not essential to divide the region into a plurality of regions. In the occupancy grid map 400, a region having a size of, for example, 40 m×40 m or 20 m×20 m around the moving object 100 is set as the peripheral region, divided into grids of 20 cm×20 cm or 10 cm×10 cm, and dynamically updated in accordance with movement of the moving object 100. That is, the occupancy grid map 400 is a region that is shifted such that the moving object 100 is always at the center in accordance with the movement of the moving object 100 and varies in real time. Note that any size of the region can be set based on hardware resources of the moving object 100.
Further, in the occupancy grid map 400, presence/absence information of an obstacle detected from the captured image by the detection unit 114 is defined for each grid. As the presence/absence information, for example, a travelable region is defined as “0”, and a non-travelable region (that is, presence of an obstacle) is defined as “1”. In
Reference numeral 510 denotes an obstacle detection map indicating detection information of an obstacle present in front of the moving object 100 from the captured image captured by the detection unit 114 of the moving object 100. The obstacle detection map 510 indicates real-time information, and is periodically generated based on the captured image acquired from the detection unit 114. Note that, since moving obstacles such as a person and a vehicle are also assumed, it is desirable to update the obstacle detection map 510 generated periodically within a viewing angle 511 of the detection unit 114, which is a front region of the moving object 100, instead of accumulating obstacles fixedly detected in the past. As a result, the moving obstacles can also be recognized, and generation of a path that avoids obstacles more than necessary can be prevented. On the other hand, the obstacles detected in the past are accumulated in a rear region (strictly speaking, outside the viewing angle of the detection unit 114) of the moving object 100 as illustrated in the local map 500. As a result, for example, when an obstacle is detected in the front region and a detour path is generated, it is possible to easily generate a path that avoids collisions with the passed obstacles.
Reference numeral 520 denotes an occupancy grid map generated by adding the local map 500 and the obstacle detection map 510. In this manner, the occupancy grid map 520 is generated as a grid map obtained by combining the local map and the obstacle detection information varying in real time with the obstacle information detected and accumulated in the past.
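The combination described above can be sketched as an element-wise fusion of the two grids, assuming both share the same shape and a boolean mask of the cells inside the camera's viewing angle is available (the function name `fuse_maps` and the mask argument are illustrative assumptions):

```python
import numpy as np

def fuse_maps(local_map, detection_map, in_view):
    """Fuse an accumulated local map with a real-time obstacle
    detection map (all arrays share the same grid shape).

    in_view: boolean mask of the cells inside the front camera's
    viewing angle. Inside the viewing angle the fresh detection
    overwrites the grid, so obstacles that have moved away are
    cleared; outside it, obstacles detected in the past are kept.
    """
    return np.where(in_view, detection_map, local_map).astype(np.uint8)
```

This mirrors the behavior described above: moving obstacles are tracked in real time in front of the vehicle, while passed obstacles remain accumulated behind it.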
The target position 601 is set based on various instructions. For example, an instruction from an occupant riding on the moving object 100 and an instruction from a user outside the moving object 100 are included. The instruction from the occupant is performed via the operation panel 131 or the voice input apparatus 133. The instruction via the operation panel 131 may be a method of designating a predetermined grid of a grid map displayed on the operation panel 131. In this case, a size of each grid may be set to be large, and the grid may be selectable from a wider range of the map. The instruction via the voice input apparatus 133 may be an instruction using a surrounding target as a mark. The target may include a pedestrian, a signboard, a sign, equipment installed outdoors such as a vending machine, building components such as a window and an entrance, a road, a vehicle, a two-wheeled vehicle, and the like included in the utterance information. When receiving the instruction via the voice input apparatus 133, the path generation unit 304 detects a designated target from the captured image acquired by the detection unit 114 and sets the target as the target position.
A machine learning model is used for these voice recognition and image recognition. The machine learning model performs, for example, computation of a deep learning algorithm using a deep neural network (DNN) to recognize a place name, a landmark name such as a building, a store name, a target name, and the like included in the utterance information and the image information. The DNN for the voice recognition becomes a learned state by performing the processing of the learning stage, and can perform recognition processing (processing of the inference stage) for new utterance information by inputting the new utterance information to the learned DNN. Further, the DNN for the image recognition can recognize a pedestrian, a signboard, a sign, equipment installed outdoors such as a vending machine, building components such as a window and an entrance, a road, a vehicle, a two-wheeled vehicle, and the like included in the image.
Further, regarding the instruction from the user outside the moving object 100, it is also possible to notify the moving object 100 of the instruction from the user's own communication apparatus 140 via the communication unit 136, or to call the moving object 100 by an operation such as raising a hand toward the moving object 100 as illustrated in
When the target position 601 is set, the path generation unit 304 generates the global path 602 using the generated occupancy grid map. As a method of generating the global path, first, a path (first path) is generated at low calculation cost using a polynomial (parameter) to be described later, and if the generated path does not collide with the detected obstacle, the path is adopted as the global path. In the path generation method using the polynomial, it is possible to generate a path in consideration of the postures of the moving object 100 at the current position (self-vehicle position) and the target position in addition to generating a path at low calculation cost, and it is possible to reduce the turning control of the posture correction as much as possible. On the other hand, in a case where the generated path collides with an obstacle, various search algorithms such as RRT, PRM, and A* are known as path generation for avoiding the obstacle, but any method may be used. That is, according to the present embodiment, first, a path is simply generated using a polynomial described later, and a path for avoiding an obstacle is generated in a vicinity where the obstacle exists. Note that there is no intention to limit the present invention, and for example, whether or not an obstacle is detected in the vicinity of traveling is first determined. If there is no obstacle, path generation is performed by a polynomial, and if there is an obstacle, a global path may be generated by a search algorithm that avoids the obstacle. That is, the path generation method may be switched depending on the presence or absence of an obstacle.
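The switching flow described above can be sketched as follows. The three callables (`polynomial_path`, `collides`, `search_path`) are hypothetical stand-ins for the polynomial generator, the collision test against the occupancy grid map, and a fallback search planner such as A* or RRT; they are not names from the embodiment:

```python
def generate_global_path(current, target, occupancy_grid,
                         polynomial_path, collides, search_path):
    """Try the low-calculation-cost polynomial path first; fall back
    to an obstacle-avoiding search algorithm only when the candidate
    path collides with a detected obstacle."""
    path = polynomial_path(current, target)      # cheap candidate first
    if not collides(path, occupancy_grid):
        return path                              # adopt the low-cost path
    return search_path(current, target, occupancy_grid)
```

As noted in the text, the order may also be reversed: check for nearby obstacles first, and only run the polynomial generation when none are present.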
When the global path is generated, the path generation unit 304 generates the local path 603 so as to follow the generated global path 602. As a method of local path planning, there are various methods such as a dynamic window approach (DWA), model predictive control (MPC), clothoid tentacles, and proportional-integral-differential (PID) control. Note that the global path 602 illustrated in
Next, a path generation method using a polynomial according to the present embodiment will be described with reference to
Reference numeral 700 denotes path generation using a polynomial. Reference numeral 701 denotes a path of the generated curve (polynomial curve). Reference numeral 702 denotes a current position of the moving object 100. Reference numeral 703 denotes a target position of the moving object 100. The target position 703 is different from the set final target position 601 and indicates a point closest to the moving object 100 among a plurality of intermediate points obtained by dividing the path to the target position 601. Reference numeral 705 denotes an advancing direction (posture angle) of the moving object 100 at the current position 702. Reference numeral 706 denotes a direction (posture angle) of the moving object 100 at the target position 703. Reference numeral 704 denotes an intersection point of a straight line in the advancing direction 705 at the current position and a straight line in the direction 706 at the target position.
Here, the advancing direction 705 of the moving object 100 is defined as an x-axis, and an axis orthogonal to the x-axis is defined as a y-axis (xy coordinates). Further, a straight line connecting the current position 702 and the target position 703 of the moving object 100 is defined as an l-axis, and a straight line orthogonal to the l-axis is defined as a w-axis (lw coordinates). According to the present embodiment, the path generation unit 304 converts the lw coordinates into the xy coordinates based on a predetermined boundary condition 710, and generates a path from the current position 702 to the target position 703 using the following Mathematical Formula (1). Note that R represents a rotation matrix.
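Mathematical Formula (1) itself is not reproduced in the text; a sketch of the conversion, assuming R is a standard 2-D rotation matrix through the angle of the l-axis plus a translation to the current position, could look as follows (`lw_to_xy` is an illustrative name):

```python
import math

def lw_to_xy(l, w, current, target):
    """Convert a point (l, w) on the lw coordinates into xy coordinates.

    current, target: (x, y) of the moving object's current and target
    positions. The l-axis points from current toward target; the
    w-axis is orthogonal to it.
    """
    # Angle of the l-axis measured on the xy coordinates.
    theta = math.atan2(target[1] - current[1], target[0] - current[0])
    # Rotation matrix R applied to (l, w), then translation by current.
    x = current[0] + l * math.cos(theta) - w * math.sin(theta)
    y = current[1] + l * math.sin(theta) + w * math.cos(theta)
    return x, y
```

With this convention, (l, w) = (0, 0) maps to the current position 702 and (l, w) = (distance to target, 0) maps to the target position 703.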
The boundary condition indicates a condition at the current position 702 and the target position 703 in the moving object 100. As illustrated in
Here, since the predetermined boundary condition cannot be represented by w = w(l), it is represented as follows using a parameter. When the predetermined boundary condition 710 is rewritten in parametric form as

l(t) = a0 + a1t + a2t^2 + a3t^3
w(t) = b0 + b1t + b2t^2 + b3t^3,  t: 0 → 1,

it can be represented as the following Mathematical Formula (2).
a0 to a3 and b0 to b3 are obtained from the above Mathematical Formula (2). Note that the curvature can be changed by adjusting parameters k0 and k1. Although it is also possible to obtain the optimum parameters k0 and k1, such optimization would compromise the low-calculation-cost generation of the global path aimed at in the present embodiment, and thus it is desirable to adopt a method of obtaining an approximate solution. For example, the approximate solution of the parameters k0 and k1 can be obtained analytically if a condition that the curvature becomes 0 at the current position and the target position is given.
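Since boundary condition 710 and Mathematical Formula (2) are not reproduced in the text, the following sketch assumes a standard set of endpoint conditions: the path starts at (l, w) = (0, 0), ends at (L, 0) where L is the distance to the target, and the tangent at each end points along the respective posture angle (measured from the l-axis) with magnitudes k0 and k1. Under these assumptions the cubic reduces to the Hermite form:

```python
import math

def cubic_path_coefficients(L, theta0, theta1, k0, k1):
    """Obtain a0..a3 and b0..b3 for
        l(t) = a0 + a1*t + a2*t^2 + a3*t^3
        w(t) = b0 + b1*t + b2*t^2 + b3*t^3,  t: 0 -> 1,
    from assumed endpoint positions and tangent directions."""
    # Endpoint tangent vectors on the lw coordinates; k0 and k1 scale
    # the tangents and thereby change the curvature of the path.
    dl0, dw0 = k0 * math.cos(theta0), k0 * math.sin(theta0)
    dl1, dw1 = k1 * math.cos(theta1), k1 * math.sin(theta1)

    def hermite(p0, p1, d0, d1):
        # Cubic through p0 -> p1 with derivatives d0, d1 at t = 0, 1.
        c0 = p0
        c1 = d0
        c2 = 3.0 * (p1 - p0) - 2.0 * d0 - d1
        c3 = -2.0 * (p1 - p0) + d0 + d1
        return c0, c1, c2, c3

    a = hermite(0.0, L, dl0, dl1)    # a0..a3
    b = hermite(0.0, 0.0, dw0, dw1)  # b0..b3
    return a, b
```

When both posture angles are aligned with the l-axis (theta0 = theta1 = 0), the coefficients reduce to a straight line from the current position to the target position, as expected.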
Reference numeral 800 in
In S101, the control unit 130 sets a target position of the moving object 100 based on a user instruction received by the user instruction acquisition unit 301. The user instruction can be received by various methods as described above. Subsequently, in S102, the control unit 130 captures an image of a front region of the moving object 100 by the detection unit 114, and acquires the captured image. The acquired captured image is processed by the image information processing unit 302, and a depth image is created and formed into a three-dimensional point cloud. In S103, the control unit 130 detects an obstacle that is a three-dimensional object of, for example, 5 cm or more from the image formed into the three-dimensional point cloud. In S104, the control unit 130 generates an occupancy grid map of a predetermined region around the moving object 100 based on the detected obstacle and position information of the moving object 100.
Next, in S105, the control unit 130 causes the path generation unit 304 to generate a traveling path of the moving object 100. As described above, the path generation unit 304 generates the global path using the polynomial curve, determines whether or not the generated path collides with the obstacle, and generates the global path by the search algorithm using the occupancy grid map when the generated path collides with the obstacle. Further, the path generation unit 304 generates a local path according to the generated global path. Subsequently, in S106, the control unit 130 determines a speed and an angular velocity of the moving object 100 according to the generated local path, and controls traveling. Thereafter, in S107, the control unit 130 determines whether or not the moving object 100 has reached the target position based on position information from the GNSS sensor 134, and when the moving object 100 does not reach the target position, the control unit 130 returns the processing to S102 to repeatedly perform the processing of generating a path and controlling traveling while updating the occupancy grid map. On the other hand, in a case where the moving object 100 has reached the target position, the processing of this flowchart ends.
S201 to S207 indicate repetitive processing performed at 10 Hz. In S202, the control unit 130 acquires the current position, the target position, and the posture of the moving object 100 at each position. The target position acquired here may be the target position acquired in S101, or may be the nearest of a plurality of intermediate positions obtained by subdividing the path to the target position. In addition, the control unit 130 acquires, as the current position, information regarding the current self-vehicle position and its posture in the previously generated occupancy grid map. The information regarding the posture is acquired from a sensor group such as the detection unit 114.
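The subdivision mentioned above can be illustrated, under assumptions, as picking the nearest of evenly spaced intermediate points. The function below is a hypothetical simplification (straight-line subdivision), not the system's actual method of generating intermediate positions.

```python
import math

def nearest_intermediate(current, start, goal, n):
    """Split the segment from start to goal into n intermediate points and
    return the one nearest to the current position, to be used as the next
    target (hypothetical simplification of the subdivision in the text)."""
    points = [(start[0] + (goal[0] - start[0]) * i / n,
               start[1] + (goal[1] - start[1]) * i / n)
              for i in range(1, n + 1)]
    return min(points, key=lambda p: math.hypot(p[0] - current[0],
                                                p[1] - current[1]))
```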
Subsequently, in S203, the control unit 130 generates a polynomial path using the information acquired in S202 and the above Mathematical Formula (1). Further, in S204, the control unit 130 maps the polynomial path generated in S203 on the occupancy grid map generated in S104, and determines whether or not the path collides with an obstacle. A detailed determination method will be described later with reference to
In S206, the control unit 130 outputs, as the global path, the polynomial path generated in S203 or, in a case where a path is generated by the search algorithm in S205, that path. Thereafter, the control unit 130 generates a local path based on the output global path, and performs traveling control in S106.
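Mathematical Formula (1) itself is not reproduced in this section; purely as an illustration, the sketch below constructs one polynomial path satisfying boundary conditions of the kind discussed: zero lateral offset at both endpoints on the l-axis, endpoint headings theta0 and theta1 relative to the l-axis, and zero curvature at both ends, which makes the system linear and analytically solvable.

```python
import numpy as np

def polynomial_path(length, theta0, theta1, n=50):
    """Quintic w(l) on the l-axis from the current position (l=0) to the
    target position (l=length). Illustrative boundary conditions:
    w(0)=0, w'(0)=tan(theta0), w''(0)=0, w(L)=0, w'(L)=tan(theta1), w''(L)=0."""
    L = float(length)
    A = np.array([
        [0,       0,       0,      0,    0, 1],   # w(0)   = 0
        [0,       0,       0,      0,    1, 0],   # w'(0)  = tan(theta0)
        [0,       0,       0,      2,    0, 0],   # w''(0) = 0
        [L**5,    L**4,    L**3,   L**2, L, 1],   # w(L)   = 0
        [5*L**4,  4*L**3,  3*L**2, 2*L,  1, 0],   # w'(L)  = tan(theta1)
        [20*L**3, 12*L**2, 6*L,    2,    0, 0],   # w''(L) = 0
    ], dtype=float)
    b = np.array([0.0, np.tan(theta0), 0.0, 0.0, np.tan(theta1), 0.0])
    coeffs = np.linalg.solve(A, b)          # highest-degree coefficient first
    ls = np.linspace(0.0, L, n)
    return ls, np.polyval(coeffs, ls)
```

Because the zero-curvature conditions keep the system linear, the coefficients follow from a single small linear solve, consistent with the low calculation cost attributed to the polynomial path.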
Reference numeral 1202 denotes a state in which only an obstacle map is extracted from 1201. Reference numeral 1203 denotes a Minkowski distance map (first map) in a case where a margin of a vehicle width is considered from the obstacle map 1202. The Minkowski distance map 1203 is generated using a filter such as a uniform filter or a Gaussian filter. A region 1214 is an obstacle region in consideration of the vehicle width margin.
On the other hand, reference numeral 1204 denotes a state in which only the generated polynomial path 1211 is extracted from 1201. Reference numeral 1205 denotes a state in which the polynomial path 1211 is mapped on the occupancy grid map (second map). Reference numeral 1215 denotes a region through which the polynomial path 1211 passes on the occupancy grid map.
According to the present embodiment, a cost map 1206 (third map) is generated by the Hadamard product of the Minkowski distance map 1203 indicating the obstacle region in consideration of the margin of the vehicle width and the map 1205 of the polynomial path. That is, in the cost map 1206, a position where the region 1214 indicating the obstacle in consideration of the vehicle width and the region 1215 through which the polynomial path passes overlap is acquired. Reference numeral 1216 denotes the overlapping position, and in a case where such a region exists, it is determined that the generated polynomial path collides with the obstacle. On the other hand, when the region denoted by 1216 does not exist, it is determined that the generated polynomial path does not collide with the obstacle.
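Under these definitions, the collision determination can be sketched as follows. This is an illustrative grid-level version: boolean maps are assumed, a square inflation stands in for the uniform/Gaussian filtering, and the function and parameter names are hypothetical.

```python
import numpy as np

def inflate(obstacle_map, margin):
    """First map: grow each obstacle cell by `margin` cells, a simple square
    stand-in for the vehicle-width margin obtained by filtering in the text."""
    out = np.zeros_like(obstacle_map, dtype=bool)
    for r, c in zip(*np.nonzero(obstacle_map)):
        out[max(0, r - margin):r + margin + 1,
            max(0, c - margin):c + margin + 1] = True
    return out

def path_collides(obstacle_map, path_cells, margin):
    """Map the path onto the grid (second map), take the elementwise
    (Hadamard) product with the inflated obstacle map, and report a
    collision if any cell of the resulting cost map (third map) is set."""
    inflated = inflate(np.asarray(obstacle_map, dtype=bool), margin)
    path_map = np.zeros_like(inflated)
    for r, c in path_cells:
        path_map[r, c] = True
    cost = inflated & path_map   # Hadamard product of the two binary maps
    return bool(cost.any())
```

For binary maps the Hadamard product reduces to an elementwise AND, so the check is a single vectorized pass over the grid.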
As described above, according to the present embodiment, in a sidewalk, a public open space, or the like, a polynomial path that can be generated at low calculation cost is generated first, and whether or not the polynomial path collides with an obstacle is determined. When the polynomial path collides with the obstacle, a global path is generated by a search algorithm using an occupancy grid map. On the other hand, when the polynomial path does not collide with the obstacle, the path can be generated at low calculation cost by using the polynomial path as the global path. Furthermore, at the time of generating the polynomial path, it is possible to generate a path in consideration of the posture of the moving object 100 at the current position and the target position, and to avoid traveling control, such as sudden turning, that gives discomfort to the occupant.
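The two-stage strategy just summarized (cheap polynomial path first, grid search only on collision) can be expressed compactly. Since the concrete planners are described elsewhere, the sketch below takes the three steps as injectable functions; all names are illustrative.

```python
def generate_global_path(poly_fn, collides_fn, search_fn, grid, start, goal):
    """Try the low-cost polynomial path first; fall back to a search
    algorithm (e.g., a grid search on the occupancy grid) only when the
    polynomial path collides with an obstacle."""
    path = poly_fn(start, goal)
    if collides_fn(path, grid):
        return search_fn(grid, start, goal)
    return path
```

In the collision-free case the expensive search never runs, which is the source of the calculation-cost saving described above.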
Hereinafter, a second embodiment of the present invention will be described. In the present embodiment, unlike the above embodiment, a method of using a polynomial path on a roadway will be described. In the present embodiment, path generation at an intersection will be described as an example of a case where the polynomial path is used on the roadway. Here, control on the premise that no obstacle is present at the intersection will be described. Therefore, in the present embodiment, in
Next, a path generation method using a polynomial according to the present embodiment will be described with reference to
Reference numeral 1300 denotes path generation at the intersection. Reference numeral 1301 denotes a path of the generated polynomial curve. Reference numeral 1302 denotes a current position of the moving object 100. Reference numeral 1303 denotes a target position (here, an exit of the intersection is set) of the moving object 100. The target position 1303 is different from a set final target position 601 and indicates a point closest to the moving object 100 among a plurality of intermediate points obtained by dividing a path to the target position 601. For example, at the time of entering the intersection, the exit of the intersection is set as the target position. Reference numeral 1305 denotes an advancing direction (posture angle) of the moving object 100 at the current position 1302. Reference numeral 1306 denotes a direction (posture angle) of the moving object 100 at the target position 1303. Reference numeral 1304 denotes an intersection point of a straight line in the advancing direction 1305 at the current position and a straight line in the direction 1306 at the target position.
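For reference, the intersection point 1304 of the two heading lines can be computed with elementary geometry. The sketch below assumes the headings are not parallel and is purely illustrative.

```python
import math

def heading_intersection(p0, th0, p1, th1):
    """Intersection of the line through p0 with heading th0 (the advancing
    direction 1305 at the current position 1302) and the line through p1
    with heading th1 (the direction 1306 at the target position 1303).
    Assumes the two headings are not parallel."""
    d0 = (math.cos(th0), math.sin(th0))
    d1 = (math.cos(th1), math.sin(th1))
    cross = d0[0] * d1[1] - d0[1] * d1[0]   # nonzero when not parallel
    rx, ry = p1[0] - p0[0], p1[1] - p0[1]
    t = (rx * d1[1] - ry * d1[0]) / cross
    return (p0[0] + t * d0[0], p0[1] + t * d0[1])
```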
Here, the advancing direction 1305 of the moving object 100 is defined as an x-axis, and an axis orthogonal to the x-axis is defined as a y-axis (xy coordinates). Further, a straight line connecting the current position 1302 and the target position 1303 of the moving object 100 is defined as an l-axis, and a straight line orthogonal to the l-axis is defined as a w-axis (lw coordinates). According to the present embodiment, a path generation unit 304 converts the lw coordinates into the xy coordinates based on a predetermined boundary condition 1310, and generates a path from the current position 1302 to the target position 1303 using the above Mathematical Formula (1). Details have already been described in
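One plausible form of the lw-to-xy conversion rotates a point (l, w) by the angle of the l-axis expressed in the xy frame. This is illustrative only; the actual conversion and the boundary condition 1310 depend on Mathematical Formula (1), which is not reproduced here.

```python
import math

def lw_to_xy(l, w, target_xy):
    """Convert (l, w) to (x, y), where the x-axis is the advancing direction
    at the current position (the origin of the xy frame) and the l-axis
    points from the origin toward the target expressed in xy coordinates.
    Illustrative sketch, not the system's actual conversion."""
    phi = math.atan2(target_xy[1], target_xy[0])  # angle of l-axis in xy frame
    return (l * math.cos(phi) - w * math.sin(phi),
            l * math.sin(phi) + w * math.cos(phi))
```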
Reference numeral 1400 in
S301 to S306 indicate repetitive processing performed at 10 Hz. In S302, the control unit 130 acquires the recognition information, the current position, and the posture of the moving object 100 at that position. Here, the recognition information indicates recognition information of a road structure, which is obtained by recognizing a white line or the like from an image captured by a detection unit 114 and thereby recognizing a traveling lane, an intersection, a crosswalk, or the like of a road. The recognition information is output by a machine learning model that processes image information (a captured image). The machine learning model recognizes a road shape included in the image information by performing computation of a deep learning algorithm using, for example, a deep neural network (DNN). The recognition information includes information of various lines and lanes of roads, the lane in which the self-vehicle is located (Ego lane), various intersections, road entrances (Road entrance) to various roads, and the like. Note that, since path generation at an intersection is assumed here, the acquired recognition information includes at least intersection information. Further, the control unit 130 acquires sensor information of a GNSS sensor 134 as the current position. The information regarding the posture is acquired from a sensor group such as the detection unit 114.
Subsequently, in S303, the control unit 130 determines a target lane and a target position for passing through the intersection based on the recognition information acquired in S302. For example, in a case where the recognition information indicates that there are a plurality of lanes serving as the exit of the intersection, it is determined to which of the plurality of lanes a path is to be generated. Note that the advancing direction at the intersection is determined based on a direction instruction from the occupant, that is, information regarding steering by the occupant.
When the target lane and the target position are determined, in S304, the control unit 130 generates a polynomial path using the information acquired in S302 and S303 and the above Mathematical Formula (1). Subsequently, in S305, the control unit 130 outputs the polynomial path generated in S304 as a global path. Thereafter, the control unit 130 generates a local path based on the output global path, and performs traveling control in S106.
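The choice of an exit lane from the occupant's direction instruction could, for example, be made by matching headings. The function and data layout below are hypothetical, not the system's actual lane-selection logic.

```python
import math

def choose_exit_lane(exit_lanes, turn_angle):
    """Pick, among recognized exit lanes, the one whose heading is closest to
    the occupant-instructed turn angle. Hypothetical sketch: `exit_lanes`
    maps a lane id to its heading in radians relative to the entry direction."""
    def angular_diff(h):
        # Wrap the difference into (-pi, pi] before taking its magnitude.
        return abs((h - turn_angle + math.pi) % (2 * math.pi) - math.pi)
    return min(exit_lanes, key=lambda lane: angular_diff(exit_lanes[lane]))
```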
As described above, according to the present embodiment, the polynomial path is generated in consideration of the postures at the current position and the target position in a traveling scene in which the presence or absence of an obstacle does not need to be detected. As a result, even when path generation is performed using the road recognition information, it is possible to avoid traveling control, such as sudden turning, that gives discomfort to the occupant.
1. A moving object control system (for example, 100) of the above embodiment comprises:
According to this embodiment, it is possible to suitably generate a traveling path in consideration of the position and the posture of the vehicle.
2. in the moving object control system of the above embodiment, the path generation unit generates a curved path by setting a curvature of a path at at least one of the current position and the target position to 0 as the predetermined boundary condition (
According to this embodiment, by setting the curvatures at the current position and the target position to 0, the polynomial can be solved analytically, and a lower calculation cost can be realized.
3. in the moving object control system of the above embodiment, the system further comprises:
According to this embodiment, first, a polynomial path is generated at low calculation cost, and when the path collides with an obstacle, a path for avoiding the obstacle can be generated by a search algorithm, and the obstacle can be avoided while the calculation cost is suppressed as much as possible.
4. in the moving object control system of the above embodiment, wherein
According to this embodiment, it is possible to determine, simply and more safely, whether the polynomial path generated at low calculation cost collides with the obstacle.
5. in the moving object control system of the above embodiment, the system further comprises:
According to this embodiment, it is possible to generate a path in consideration of the position and the posture of the vehicle using the polynomial path even on a roadway on which the occupancy grid map is not generated.
6. in the moving object control system of the above embodiment, wherein
According to this embodiment, it is possible to generate a path in consideration of the position and the posture of the vehicle using the polynomial path at the intersection.
7. in the moving object control system of the above embodiment, the system further comprises:
According to this embodiment, it is possible to accurately avoid an obstacle while generating a path in consideration of the position and the posture of the vehicle according to the presence or absence of the obstacle.
The invention is not limited to the foregoing embodiments, and various variations/changes are possible within the spirit of the invention.
Number | Date | Country | Kind |
---|---|---|---|
2023-035849 | Mar 2023 | JP | national |