This application claims priority to Chinese Patent Application No. 2021104627958, filed on Apr. 28, 2021 and entitled “METHOD AND APPARATUS FOR PATH PLANNING, COMPUTER DEVICE, AND STORAGE MEDIUM”, the entire content of which is incorporated herein by reference.
The present disclosure relates to the field of path planning technology, and particularly to a method and an apparatus for continuous path planning, a computer device, and a computer-readable storage medium.
With the rapid development of drone technology, drone-based image capturing technology is becoming mature and has become a focus of research.
All the existing target scene reconstruction-oriented path planning methods first calculate an optimal viewpoint set for image capturing based on the three-dimensional prior geometric information of the target scene, and then use other algorithms, such as the Travelling Salesman Problem algorithm, to connect the optimized viewpoint set into a path. Finally, a planned path is obtained and provided for the drone to complete the capturing task. The strategy of first determining the viewpoints for image capturing and then connecting the viewpoints into a path causes the planned path to ignore capturing points that may exist on the path from one viewpoint to the next viewpoint and that could help improve the quality of reconstruction, thereby resulting in a waste of resources. Further, the target of optimization in the process of path planning is the viewpoints, which makes it difficult to guarantee or adjust the smoothness of the path during the planning process. The planned path is therefore too long, which makes the energy consumption of the drone high, so that the data collection time is prolonged and the data collection efficiency is affected.
It can be seen that the existing method for path planning has the problem of low data collection efficiency.
In view of this, it is necessary to provide a method and an apparatus for continuous path planning, a computer device, and a storage medium which can improve the data collection efficiency to address the above technical problem.
A method for continuous path planning, including:
In an embodiment, the screening out the optimal path from the newest random tree includes:
In an embodiment, the acquiring the reconstruction contribution degree of each feasible path includes:
In an embodiment, the acquiring the steering consumption value of each feasible path includes:
In an embodiment, the determining the lens orientation of each node on the feasible path includes:
In an embodiment, the method further includes:
In an embodiment, the acquiring the random tree includes:
An apparatus for continuous path planning, including:
A computer device, including a processor and a memory storing a computer program, the processor, when executing the computer program, implementing the following steps:
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the following steps:
The above-mentioned method and apparatus for continuous path planning, computer device, and storage medium design the optimization functions based on the rapidly exploring random tree to plan a path, and the optimal path is screened out from the random tree according to a target function including the reconstruction completeness optimization function, the path effectiveness optimization function, and the path smoothness optimization function when the reconstruction degree of the preset sampling points is determined to meet the preset requirement. The above method abandons the traditional approach of taking the viewpoint as the optimization object and instead takes the path as the optimization object to design the path smoothness optimization function. The influence of the path smoothness degree on collection time and energy consumption is taken into account, so that the planned path is shorter and smoother, thereby saving the time and energy consumption of data collection. At the same time, the path effectiveness optimization function and the reconstruction completeness optimization function are designed, which can further improve the effectiveness and quality of the collected data. In summary, the method provided in the present disclosure can effectively improve the efficiency of data collection.
In order to make the purpose, technical solution and advantages of the present disclosure clearer, the present disclosure will be described in further detail in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used for explaining the present disclosure, rather than limiting the present disclosure.
In an embodiment, as shown in
Step 202, scene prior geometric information and a random tree are acquired;
The scene prior geometric information refers to the prior geometric information of the target scene, including the three-dimensional representation data and sampling points (i.e., the local area of the scene) of the target to be reconstructed in the scene. The random tree is marked as G=<V, E>, where V←{wstart} is a node list of the random tree, and E←ϕ represents the expansion relationship between nodes.
As shown in
In the present embodiment, the starting navigation point can be generated according to the prior geometric information of the scene or randomly, and is marked as wstart. In practical application, each navigation point records the lens orientation at the position of the navigation point. In the present embodiment, the starting navigation point and the lens orientation at the starting navigation point can be used as the root node to initialize the random tree, and then the process of expanding the random tree is entered.
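The initialization described above (a root node holding wstart and its lens orientation, with V←{wstart} and an empty expansion relation E) can be sketched as follows; the Node structure and the coordinate and angle conventions are illustrative assumptions, not part of the disclosure:

```python
class Node:
    """A navigation point in the random tree (hypothetical structure)."""
    def __init__(self, position, lens_orientation, parent=None):
        self.position = position                  # (x, y, z) waypoint coordinates
        self.lens_orientation = lens_orientation  # (horizontal, vertical) lens angles
        self.parent = parent                      # None for the root node

def init_random_tree(w_start, start_orientation):
    """Initialize G = <V, E> with the starting navigation point as root."""
    root = Node(w_start, start_orientation)
    V = [root]   # node list of the random tree: V <- {w_start}
    E = set()    # expansion relations between nodes, initially empty
    return V, E

V, E = init_random_tree((0.0, 0.0, 30.0), (0.0, 90.0))
```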
Step 204, whether a reconstruction degree of preset sampling points of the scene prior geometric information meets a preset requirement is determined;
When only rough prior geometric information of the target scene to be reconstructed is known, it is necessary to determine whether each sampling point (local area) of the target scene can be well reconstructed. For this purpose, in the present embodiment, a metric formulation is defined to predict whether a specific part, that is, a preset sampling point, can be well reconstructed by the existing images. Refer to
First, the reconstruction contribution degree (information gain) of each image capturing device such as the camera cj is defined:
Where Dij is the coverage of the camera cj to a local area, that is, the preset sampling point si. There are several cameras in a path segment. Based on this, the reconstruction contribution degree of the path segment to a local area can be defined:
Where wk represents a path segment that ends at the navigation point wk, {cm+1, cm+2, . . . , cm+n} are the cameras included in the path segment.
Based on the reconstruction contribution degree of the path segment to a local area, the reconstruction completeness optimization function Eg(T) aimed at making the scene reconstructed as much as possible is defined:
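The segment-level contribution g*(wk), which sums the per-camera contributions g(cj) over the cameras {cm+1, . . . , cm+n} contained in the segment, can be sketched as follows. The exact formula for g(cj) is given as an equation in the original disclosure and is not reproduced here; standing it in by a simple non-negative function of the coverage Dij is an assumption for illustration only:

```python
def camera_contribution(D_ij):
    """g(c_j): reconstruction contribution of camera c_j to sampling point s_i.
    Placeholder: assumed to be a non-negative function of the coverage D_ij."""
    return max(0.0, D_ij)

def segment_contribution(coverages):
    """g*(w_k): contribution of the path segment ending at w_k, summed over
    the coverages of the cameras {c_{m+1}, ..., c_{m+n}} in the segment."""
    return sum(camera_contribution(d) for d in coverages)
```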
In addition, in order to make the path obtained in the planning process as smooth as possible and the capturing efficiency along the path as high as possible, the following consumption function is defined:
Ct(wk) = 2.25 − 0.16(θk − π)^2
Cm(wk) = |wk − wk−1|
Where θk is the turning angle of the path at the navigation point wk, Ct(wk) is the driving steering consumption function, and Cm(wk) is the travel distance consumption function. Then, based on the consumption functions and the reconstruction completeness optimization function, the path effectiveness optimization function Ee(T), aimed at making the capturing path more effective, and the path smoothness optimization function Et(T), aimed at making the capturing path smoother, are defined:
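The two consumption functions given above translate directly into code; `math.dist` stands in for the Euclidean norm |wk − wk−1|:

```python
import math

def steering_cost(theta_k):
    """Driving steering consumption C_t(w_k) = 2.25 - 0.16 * (theta_k - pi)^2,
    where theta_k is the turning angle of the path at navigation point w_k."""
    return 2.25 - 0.16 * (theta_k - math.pi) ** 2

def distance_cost(w_k, w_prev):
    """Travel distance consumption C_m(w_k) = |w_k - w_{k-1}|."""
    return math.dist(w_k, w_prev)
```

A straight-through waypoint (θk = π) incurs the maximum steering value 2.25, and sharper turns reduce it.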
Finally, based on the reconstruction completeness optimization function Eg(T), the path effectiveness optimization function Ee(T), and the path smoothness optimization function Et(T), the target function E(T) to optimize the path is defined:
E(wn) = E(T) = Eg(T) + αeEe(T) − αtEt(T)
Where T={w1, w2, . . . , wn} represents a planned complete path.
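Combining the three optimization functions into the target function E(T) is then a weighted sum; treating αe and αt as tunable hyperparameters is the only assumption made here:

```python
def target_function(E_g, E_e, E_t, alpha_e=1.0, alpha_t=1.0):
    """E(T) = E_g(T) + alpha_e * E_e(T) - alpha_t * E_t(T), evaluated for a
    complete planned path T = {w_1, ..., w_n}."""
    return E_g + alpha_e * E_e - alpha_t * E_t
```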
Based on the optimization functions, the Rapidly exploring Random Tree is used as the optimizer. A node in the tree represents a navigation point wk′, an edge between nodes represents a path segment (wk′, wk), and the value of a node is E(T).
In the present embodiment, the reconstruction contribution degree of the path segment to the preset sampling point si can be calculated according to the reconstruction contribution degree of the camera defined above. If the reconstruction contribution degree of the preset sampling point si is larger than a preset reconstruction contribution degree threshold, it is determined that the reconstruction degree of the sampling point si meets a preset requirement, and step 214 is entered. Otherwise, it is determined that the reconstruction degree of the preset sampling point si does not meet the preset requirement, and step 206 is entered.
Step 206, a newly-added node is generated randomly in a preset barrier-free area if the reconstruction degree of the preset sampling points does not meet the preset requirement.
The preset barrier-free area is a safety area. If the reconstruction degree of the preset sampling points does not meet the preset requirement, random sampling is performed in the barrier-free safety area, and the newly-added node that can be reached by the random tree is randomly generated and marked as wnew.
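The random generation of wnew in the barrier-free area can be sketched as rejection sampling; the `is_free` collision check and the bounding box are hypothetical stand-ins for the preset safety area:

```python
import random

def sample_free_node(bounds, is_free, max_tries=1000):
    """Randomly generate the newly-added node w_new inside the preset
    barrier-free (safety) area. `bounds` is a list of (lo, hi) intervals
    per coordinate; `is_free` is an assumed collision-check predicate."""
    for _ in range(max_tries):
        w_new = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        if is_free(w_new):
            return w_new
    raise RuntimeError("no barrier-free sample found")
```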
Step 208, nodes located within a preset range of the newly-added node are searched for in the random tree to obtain a node set.
When the newly-added node is generated, it is necessary to connect the newly-added node to the nodes of the random tree to expand the random tree and output an optimal path. Specifically, the nodes within the preset range of the newly-added node can be searched for in the node list V of the random tree to obtain the node set N.
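The neighborhood search over the node list V can be sketched as a linear scan (positions only, for illustration):

```python
import math

def neighbors_in_range(V, w_new, radius):
    """Search the node list V for nodes within the preset range of w_new,
    returning the node set N."""
    return [w for w in V if math.dist(w, w_new) <= radius]
```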
Step 210, a target node is screened out from the node set to connect the newly-added node to the target node; the target node maximizes a preset target function corresponding to a path which ends with the newly-added node.
In order to ensure that the planned path is the optimal path, it is necessary to screen out an optimal node from the node list of the random tree to connect to the newly-added node. A node which makes the target function corresponding to the path {wstart, . . . , wnew} from the starting navigation point wstart to wnew reach the maximum value can be screened out. Specifically, for each node in the node list of the random tree, the parameters required to calculate the preset target function, such as the reconstruction contribution degree, distance, and lens steering angle, are acquired. The above parameters are substituted into the expression of the target function to calculate the value of the target function corresponding to the path from each node to the newly-added node. Then, the function values are compared to screen out the node with the largest function value, which is marked as the target node wp, and the newly-added node wnew is connected to wp, so that wnew is connected to the random tree through wp.
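The screening of the target node wp can be sketched as an argmax over the candidate set; `path_value` is a hypothetical callback that evaluates the target function E(T) of the path {wstart, . . . , w, wnew}:

```python
def choose_parent(candidates, w_new, path_value):
    """Screen out the target node w_p: the candidate node that maximizes the
    target-function value of the path ending at the newly-added node w_new."""
    return max(candidates, key=lambda w: path_value(w, w_new))
```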
As shown in
During the process of expanding the random tree, if no node within the preset range of the newly-added node is found in the node list of the random tree, that is, when the node set is empty, the node closest to the newly-added node is selected from the node list of the random tree as the target node. The newly-added node is connected to that node and step 212 is entered.
Step 212, sub-tree reconnection operation is performed on the newly-added node to update the random tree, and the step 204 is returned to.
The newly-added node is connected to the random tree. Similarly, the sub-tree of the newly-added node is reconnected. Specifically, a random tree <V, E> and a node w to be reconnected are acquired. A queue Q={w} is initialized. The first element of Q is extracted and marked as w*. i=1 is initialized, and the nodes within a preset range r of w* that are not ancestor nodes thereof are searched for in V to obtain a node set M={w_1, . . . , w_n}. The value of the target function E(T) corresponding to all paths through the node w_i (0<i<n) is calculated, and the maximum value of E(T) is marked as v_old. Then, the maximum value of E(T) obtained by all paths through the node w_i if w_i were connected to w* is calculated and marked as v_new, and w_i is reconnected to w* if v_new is greater than v_old; otherwise i=i+1 and the step of calculating the value of the target function E(T) corresponding to all paths through the node w_i (0<i<n) is returned to, until the reconnection operation is completed. The random tree is then updated and the step 204 is returned to.
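The sub-tree reconnection procedure described above can be sketched as follows. The four callbacks are hypothetical hooks onto the tree structure (not named in the disclosure), and enqueuing each reconnected node for further processing, as suggested by the queue Q, is an assumption:

```python
from collections import deque

def reconnect_subtree(w, neighbors_of, current_value, value_via, reparent):
    """Sub-tree reconnection: for the node w* taken from the queue, examine
    each node w_i within the preset range r that is not an ancestor of w*
    (as returned by `neighbors_of`). v_old is the maximum E(T) over all
    paths through w_i as currently connected; v_new is the maximum E(T) if
    w_i were connected to w*. Reconnect w_i whenever v_new > v_old."""
    Q = deque([w])
    while Q:
        w_star = Q.popleft()
        for w_i in neighbors_of(w_star):
            v_old = current_value(w_i)
            v_new = value_via(w_i, w_star)
            if v_new > v_old:
                reparent(w_i, w_star)
                Q.append(w_i)  # assumed: re-examine its neighborhood in turn
```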
Step 214, an optimal path is screened out from the newest random tree when the reconstruction degree of the preset sampling points meets the preset requirement; and the optimal path maximizes the preset target function.
After the sub-tree reconnection operation of the newly-added node is completed, at the time when the reconstruction degree meets the preset requirement and the preset sampling point si can be well covered, the random tree includes multiple paths that can cover si, and a path that maximizes the target function is selected from the multiple paths as the optimal path according to the target function E(T).
The above path planning method designs the optimization functions based on the rapidly exploring random tree to plan a path, and the optimal path is screened out from the random tree according to a target function including the reconstruction completeness optimization function, the path effectiveness optimization function, and the path smoothness optimization function when the reconstruction degree of the preset sampling points is determined to meet the preset requirement. The above method abandons the conventional approach of taking the viewpoint as the optimization object and instead takes the path as the optimization object to design the path smoothness optimization function. The influence of the path smoothness degree on collection time and energy consumption is taken into account, so that the planned path is shorter and smoother, thereby saving the time and energy consumption of data collection. At the same time, the path effectiveness optimization function and the reconstruction completeness optimization function are designed, which can further improve the effectiveness and quality of the collected data. In summary, the method provided in the present disclosure can effectively improve the efficiency of data collection.
As shown in
Taking drones as an example, the steering consumption value is the flight steering consumption value, and the travel distance consumption value is the flight distance consumption value. In a specific implementation, there are multiple cameras on a path, and the optimal path screening can include dividing each path into multiple path segments with the navigation points as dividing points; for example, wk represents a path segment that ends at the navigation point wk. The reconstruction contribution degree g(cj) of each camera point on each path segment is acquired, and the reconstruction contribution degrees of all camera points are summed up to obtain the reconstruction contribution degree g*(wk) of the path segment wk to the preset sampling point si, and then the reconstruction contribution degree Eg(T) corresponding to the entire path is obtained. Similarly, the lens steering angle at each navigation point and the distance between navigation points are acquired, and by combining the driving steering consumption function and the travel distance consumption function, the flight steering consumption value and the flight distance consumption value of each path are obtained, and then the values of the path effectiveness optimization function Ee(T) and the path smoothness optimization function Et(T) are obtained. Then, the target function value of each path is obtained according to the function expression of E(T). Finally, the path with the largest target function value is determined as the optimal path. In the present embodiment, three optimization functions are designed so that the optimal path planned is the most efficient path. Further, the three optimization functions are designed according to reconstruction completeness, effectiveness, and smoothness, and can be weighted according to different purposes to achieve the objective of the task.
In an embodiment, the acquiring the reconstruction contribution degree of each feasible path includes: acquiring a known reconstruction contribution degree set; finding in the reconstruction contribution degree set the reconstruction contribution degree corresponding to the image capturing device closest to a current image capturing device to obtain a target reconstruction contribution degree; determining the target reconstruction contribution degree as the reconstruction contribution degree of the current image capturing device; and obtaining a reconstruction contribution degree of each feasible path based on the reconstruction contribution degree of the image capturing device on each feasible path.
The reconstruction contribution degree set can also be referred to as view information (VIF). The image capturing device includes cameras and other devices used to capture image data. The camera is taken as an example in the present embodiment. In practical application, since the calculation of the target function value is complicated and time-consuming, the view information is designed to speed up the calculation and save time. Therefore, in the calculation process, the reconstruction contribution degree brought by the actual position and orientation of the node is no longer truly calculated. Instead, for each camera in the path, the reconstruction contribution degree of the camera point closest to the current camera is found in the view information, and the found reconstruction contribution degree is taken as the reconstruction contribution degree of the current camera. In this way, the reconstruction contribution degree of each camera point in the path is obtained, and then the reconstruction contribution degree of each feasible path is obtained, which improves the speed of the algorithm.
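The view-information lookup can be sketched as a nearest-neighbor query. Representing the VIF as a mapping from stored camera positions to precomputed contribution values is an assumption (and in practice a spatial index such as a KD-tree would replace the linear scan):

```python
import math

def lookup_contribution(view_info, camera_pose):
    """Instead of recomputing g(c_j) from the camera's actual position, find
    the precomputed camera point closest to the current camera in the view
    information and reuse its stored reconstruction contribution degree."""
    nearest = min(view_info, key=lambda p: math.dist(p, camera_pose))
    return view_info[nearest]
```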
As shown in
In practical application, drones are equipped with multi-lens cameras or single-lens cameras, so the path planning can be sub-divided into a multi-lens path planning algorithm and a single-lens path planning algorithm. For multi-lens cameras, since the lens itself does not move, the lens orientation thereof can be ignored in the planning process and defaults to the set lens orientation value. For the path planning of single-lens cameras, the lens orientation of the drone camera needs to be determined. Here, the lens orientation at the navigation point of the root node is pre-determined, so every time a new node (navigation point) is added, the lens orientation of the new node needs to be determined, and every time an existing node in the random tree needs to be reconnected to another node, the lens orientation also needs to be updated. Therefore, when the newly-added node is connected to the target node, the lens orientation of the newly-added node needs to be determined. Based on the driving steering consumption function, it is known that to obtain the target function value of each path, it is necessary to know the lens orientation of each node in order to plan the optimal path.
In an embodiment, the determining the lens orientation of each node on the feasible path includes: acquiring lens parameters of the image capturing device and a preset lens orientation; determining the lens orientation of a current node as the preset lens orientation when the lens parameters of the image capturing device are characterized as multi-lens; and obtaining position information of the current node as well as position information and lens orientation of the parent node of the current node when the lens parameters of the image capturing device are characterized as single-lens, executing a preset lens orientation calculation strategy based on the position information of the current node and the position information and the lens orientation of the parent node of the current node to determine the lens orientation of the current node, and then determining the lens orientation of each node on the feasible path.
Specifically, the lens parameters of the image capturing device include camera lens parameters. The calculation strategy of the lens orientation is as follows: the lens orientation between navigation points of the single-lens camera is determined by distance interpolation between the forward and backward navigation points. Specifically, the coordinates of a parent node w_a, the lens orientation horizontal angle a_1 and vertical angle a_2 of the parent node w_a, and the coordinates of a child node w_b are acquired. A maximum reconstruction contribution degree v_max=0 of this path segment, a camera angle (b_1, b_2)=(0, 90) of the corresponding child node, and a temporary camera angle (b*_1, b*_2)=(0, 90) are initialized. The temporary camera angle is increased by rotations of 45 degrees, and the path segment (w_a, w_b) is divided into a viewpoint set {w_1, . . . , w_n} according to the drone camera capturing distance. The lens angle of each viewpoint in the viewpoint set is interpolated according to the distance and the camera angles (a_1, a_2) of w_a and (b*_1, b*_2) of w_b, and the reconstruction contribution degree v=g*(w_b) of this path segment is calculated. If v is greater than the current maximum reconstruction contribution degree v_max, then v_max=v and (b_1, b_2)=(b*_1, b*_2), and the lens orientation of w_b is thus determined. Continuing the above embodiment, specifically, in determining the lens orientation of the newly-added node w_new, the newly-added node w_new can be regarded as a child node, and the target node can be regarded as a parent node. The lens orientation of the newly-added node can be determined according to the lens orientation calculation strategy. After the newly-added node is connected to the target node, the camera lens parameters and the preset lens orientation are acquired to identify whether the camera is a single-lens camera or a multi-lens camera. If it is a multi-lens camera, the lens orientation of the newly-added node remains the preset lens orientation by default.
If it is a single-lens camera, the position information of the newly-added node, as well as the position information and the lens orientation of the target node, are acquired. Based on the position information of the newly-added node, as well as the position information and the lens orientation of the target node, the preset lens orientation calculation strategy is executed to determine the lens orientation of the newly-added node. In the present embodiment, the lens orientation of the node is determined according to distance interpolation, which can simply and quickly determine the lens orientation of each node, so that the value of the target function is obtained quickly to plan the optimal path, which improves the speed of the algorithm.
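The distance interpolation of lens angles between the parent node w_a and the child node w_b can be sketched as follows; representing an orientation as a (horizontal, vertical) angle pair follows the (a_1, a_2)/(b_1, b_2) notation above, and plain linear interpolation along the segment is assumed:

```python
def interpolate_lens_angles(a_angles, b_angles, t):
    """Lens orientation at fractional distance t in [0, 1] along the segment
    from parent node w_a (angles a_angles) to child node w_b (angles b_angles)."""
    return tuple(a + t * (b - a) for a, b in zip(a_angles, b_angles))

def viewpoint_orientations(a_angles, b_angles, n_points):
    """Lens angle for each of the n_points viewpoints {w_1, ..., w_n} that
    divide the path segment (w_a, w_b) by the camera capturing distance."""
    return [interpolate_lens_angles(a_angles, b_angles, k / (n_points - 1))
            for k in range(n_points)]
```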
In order to verify the effectiveness and superiority of the path planning method provided in the present disclosure, simulation experiments and real experiments were conducted on the path planning method provided in the present disclosure.
Simulation experiment: path planning is performed for a given virtual scene using the present algorithm in a virtual environment, and commercial software is used to reconstruct the captured images. The reconstruction result is shown in
Real experiment: a real scene is more complicated than a simulation scene. A certain real area is used as a target area to be reconstructed for continuous path planning, the captured images are reconstructed with commercial software, and the reconstruction results are shown in
It can be seen from the above
It should be understood that although the steps in the flowcharts are displayed in order according to the arrows, the steps are not necessarily executed in the order indicated by the arrows. Unless clearly stated herein, the execution of these steps is not strictly limited in order, and these steps can be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be performed at different times, and the execution order of these sub-steps or stages is not necessarily sequential, but may be executed in turns or alternately with at least a part of other steps or the sub-steps or stages of other steps.
In an embodiment, as shown in
In an embodiment, the path screening module 570 is further configured to: acquire a reconstruction contribution degree, a steering consumption value, and a travel distance consumption value of each feasible path for the feasible paths of the newest random tree, the feasible path being a path where the coverage of the preset sampling points reaches a preset requirement; obtain a target function value corresponding to each feasible path based on the reconstruction contribution degree, the steering consumption value, and the travel distance consumption value, and by combining the reconstruction completeness optimization function, the path effectiveness optimization function, and the path smoothness optimization function; and determine the feasible path with the largest target function value as the optimal path.
In an embodiment, the path screening module 570 is further configured to acquire a known reconstruction contribution degree set; find in the reconstruction contribution degree set the reconstruction contribution degree corresponding to the image capturing device closest to a current image capturing device to obtain a target reconstruction contribution degree; determine the target reconstruction contribution degree as the reconstruction contribution degree of the current image capturing device; obtain a reconstruction contribution degree of each feasible path based on the reconstruction contribution degree of the image capturing device on each feasible path.
As shown in
In an embodiment, the lens orientation determination module 555 is further configured to: acquire lens parameters of the image capturing device and a preset lens orientation; determine the lens orientation of a current node as the preset lens orientation when the lens parameters of the image capturing device are characterized as multi-lens; and obtain position information of the current node as well as position information and lens orientation of the parent node of the current node when the lens parameters of the image capturing device are characterized as single-lens, execute a preset lens orientation calculation strategy based on the position information of the current node and the position information and the lens orientation of the parent node of the current node to determine the lens orientation of the current node, and then determine the lens orientation of each node on the feasible path.
In an embodiment, the node connection module 550 is further configured to screen out in the random tree a node with the closest distance to the newly-added node to connect the newly-added node to the node when the node set is empty after searching in the random tree for the nodes located within the preset range of the newly-added node to obtain the node set.
As shown in
For specific embodiments of the apparatus for continuous path planning, reference can be made to the above embodiments of the method for continuous path planning, which will not be repeated herein. Each module in the above apparatus for continuous path planning may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in the hardware of or independent of the processor in a computer device, or may be stored in a memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In an embodiment, a computer device is provided. The computer device can be a server, and internal structure diagram thereof may be as shown in
Those of ordinary skill in the art may understand that the structure shown in
In an embodiment, a computer device is provided, including a processor and a memory storing a computer program. The processor, when executing the computer program, implements the steps of the above method for continuous path planning.
In an embodiment, a computer-readable storage medium is provided, which stores a computer program. The computer program, when executed by a processor, implements the steps of the above method for continuous path planning.
Those of ordinary skill in the art may understand that all or part of the processes in the methods of the above embodiments may be completed by instructing relevant hardware through a computer program, and the computer program may be stored in a non-transitory computer-readable storage medium. When the computer program is executed, the processes of the foregoing method embodiments may be included. Any reference to the memory, storage, database, or other media used in the embodiments provided in this disclosure may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above-mentioned embodiments can be combined arbitrarily. In order to make the description concise, all possible combinations of the various technical features in the above-mentioned embodiments are not described herein. However, as long as there is no contradiction in the combination of these technical features, all should be considered as the scope of the present disclosure.
The above-mentioned embodiments are merely some exemplary embodiments of the present disclosure, and their descriptions are more specific and detailed, but they should not be understood as a limitation on the scope of the present disclosure. It should be pointed out that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the disclosure, and these all fall within the protection scope of the present disclosure. Therefore, the scope of protection of the present disclosure shall be subject to the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
202110462795.8 | Apr 2021 | CN | national |
Number | Name | Date | Kind |
---|---|---|---|
9120485 | Dolgov | Sep 2015 | B1 |
20100174435 | Lim | Jul 2010 | A1 |
20110035050 | Kim | Feb 2011 | A1 |
20110035087 | Kim | Feb 2011 | A1 |
20110106306 | Kim | May 2011 | A1 |
20140121833 | Lee | May 2014 | A1 |
20170241790 | Yoshikawa | Aug 2017 | A1 |
20200027225 | Huang | Jan 2020 | A1 |
20200348145 | Paranjpe | Nov 2020 | A1 |
20200404166 | Ono | Dec 2020 | A1 |
Entry |
---|
J. Li and C. Yang, “AUV Path Planning Based on Improved RRT and Bezier Curve Optimization,” 2020 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2020, pp. 1359-1364, doi: 10.1109/ICMA49215.2020.9233842. (Year: 2020). |
Schmid, Lukas & Pantic, Michael & Khanna, Raghav & Ott, Lionel & Siegwart, Roland & Nieto, Juan. (2020). An Efficient Sampling-Based Method for Online Informative Path Planning in Unknown Environments. IEEE Robotics and Automation Letters. 5. 1-1. 10.1109/LRA.2020.2969191. (Year: 2020). |
Hernández JD, Istenič K, Gracias N, Palomeras N, Campos R, Vidal E, García R, Carreras M. Autonomous Underwater Navigation and Optical Mapping in Unknown Natural Environments. Sensors (Basel). Jul. 26, 2016;16(8):1174. doi: 10.3390/s16081174. PMID: 27472337; PMCID: PMC5017340. (Year: 2016). |
W. Tabib, M. Corah, N. Michael and R. Whittaker, “Computationally efficient information-theoretic exploration of pits and caves,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea (South), 2016, pp. 3722-3727, doi: 10.1109/IROS.2016.7759548. (Year: 2016). |
Number | Date | Country | |
---|---|---|---|
20220350333 A1 | Nov 2022 | US |