The present disclosure relates to the technical field of robot positioning, and in particular to a robot, a method for robot positioning, and a storage medium.
Initial positioning of a robot indoors is relatively difficult. In the absence of prior positioning, the robot needs to recognize its own pose in a full map. In related technologies, positioning the robot usually requires traversing all data in a database, which reduces positioning efficiency.
The above and/or additional aspects and advantages of the present disclosure will become apparent and comprehensible from the description of the embodiments in conjunction with the following drawings:
To more clearly understand the present disclosure, some definitions of selected terms employed in the embodiments are given. The definitions include various examples and/or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting. Furthermore, the components discussed herein may be combined, omitted, or organized with other components or into different architectures.
It should be noted that, in this disclosure, “at least one” refers to one or more, and “a plurality of” refers to two or more. “And/or” describes an association relationship between associated objects and represents that three relationships may exist. For example, A and/or B may include a case where A exists separately, A and B exist simultaneously, or B exists separately, where A and B may each be singular or plural. The terms “first”, “second”, “third”, “fourth”, etc. in the description, claims, and drawings of the disclosure are used for distinguishing similar objects, rather than for describing a specific sequence or order.
Furthermore, the term “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as JAVA, C, or assembly. One or more software instructions in the modules can be embedded in firmware, such as in an EPROM. The modules described herein can be implemented as software and/or hardware modules and can be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY discs, flash memory, and hard disk drives.
The various components of the robot 100 are specifically introduced below in conjunction with
The mechanical device 101 is part of the hardware of the robot 100. As shown in
The communication device 102 may receive and send signals and may also communicate with the network and other devices. For example, after the communication device 102 receives a command sent by a remote controller or another robot 100 to move in a predetermined direction at a predetermined speed according to a predetermined gait, the communication device 102 transmits the command to the processor 110. The communication device 102 includes, for example, a Wi-Fi device, a 4G device, a 5G device, a Bluetooth device, an infrared device, and the like.
The at least one sensor 103 acquires information of a surrounding environment of the robot 100, monitors parameters of the components of the robot 100, and sends the information and the parameters to the processor 110. The at least one sensor 103 for acquiring surrounding environment information includes a laser radar (for long-range object detection, distance determination, and/or speed determination), a millimeter-wave radar (for short-range object detection, distance determination, and/or speed determination), a camera, an infrared camera, a Global Navigation Satellite System (GNSS), etc. The at least one sensor 103 for monitoring parameters of the components of the robot 100 includes an inertial measurement module (Inertial Measurement Module, IMU) (for measuring velocity, acceleration, and angular velocity values), a plantar sensor (for monitoring a position of a plantar force point, a foot posture, and a ground contact force magnitude and direction), and a temperature sensor (used to detect component temperature). Other sensors that can be configured on the robot 100, such as load sensors, touch sensors, motor angle sensors, and torque sensors, are not introduced here.
The interface device 104 receives information (e.g., data information, power, etc.) from an external device and sends the received information to one or more components of the robot 100. The interface device 104 may include a power port, a data port such as a USB port, a memory card port, a port for connecting a device with an identification device, an audio input/output (I/O) port, a video I/O port, and the like.
The storage device 105 stores software programs and various data. The storage device 105 can mainly include a program storage area and a data storage area. The program storage area can store operating system programs, motion control programs, application programs (such as text editors), etc. The data storage area can store data generated by the robot 100 when the robot 100 is working (such as various sensing data acquired by the at least one sensor 103 and log file data) and the like. In addition, the storage device 105 may include a high-speed random-access memory, and may also include a non-volatile memory, such as a magnetic disk memory, a flash memory, or other non-volatile solid-state memory.
The display device 106 displays information input by a user or information provided to the user. The display device 106 may include a display panel 1061, and the display panel 1061 may include a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
The input unit 107 can receive numeric or character information. Specifically, the input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations (for example, operations input on or near the touch panel 1071 by palms or fingers of the user, or by accessories). The touch panel 1071 may include two parts, a touch detection device 1073 and a touch controller 1074. The touch detection device 1073 detects the user's touch orientation, detects the signal generated by the touch operation, and transmits the signal to the touch controller 1074. The touch controller 1074 receives the touch information from the touch detection device 1073, converts the touch information into point coordinates, and sends the point coordinates to the processor 110. The touch controller 1074 also receives and executes commands sent by the processor 110. In addition to the touch panel 1071, the input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of remote controller handles, etc., which are not specifically limited here.
Further, the touch panel 1071 includes a display panel 1061. When the touch panel 1071 detects a touch operation on or near the touch panel 1071, the touch panel 1071 transmits the signal generated by the touch operation to the processor 110 to determine a type of the touch operation, and then the processor 110 controls the display panel 1061 to provide a corresponding visual output according to the determined type of the touch operation. In
The audio output device 108 may convert audio data received by the communication device 102 or stored in the storage device 105 into an audio signal and output as sound. The audio output device 108 may include a speaker, a buzzer, and the like.
The processor 110 is a control center of the robot 100. The processor 110 uses various interfaces and lines to connect the various parts of the robot 100 and controls the robot 100 by running or executing the software program stored in the storage device 105, and by calling the data stored in the storage device 105.
The power supply 111 supplies power to various components of the robot 100. The power supply 111 may include a battery and a power control board. The power control board is used to control functions such as battery charging, discharging, and power consumption management. In the embodiment shown in
According to the above embodiments, specifically, in some embodiments, a terminal device communicates with the robot 100. The terminal device sends instructions to the robot 100, and the robot 100 receives the instructions through the communication device 102, which transmits them to the processor 110. The processor 110 obtains a target speed according to the instructions. The terminal device includes, but is not limited to, mobile phones, tablet computers, servers, personal computers, wearable smart devices, and other electrical equipment with image capture functions.
The instructions are set by preset conditions. In one embodiment, the robot 100 includes at least one sensor 103, and the at least one sensor 103 generates instructions according to the current environment in which the robot 100 is located. The processor 110 determines whether the current speed of the robot 100 satisfies a corresponding preset condition according to the instructions. In response to the current speed satisfying the corresponding preset condition, the processor 110 controls the robot 100 to move according to the current speed and a current gait. In response to the current speed not satisfying the corresponding preset condition, the processor 110 determines the target speed and the corresponding target gait according to the corresponding preset conditions and controls the robot 100 to move according to the target speed and the corresponding target gait. The at least one sensor 103 includes temperature sensors, air pressure sensors, visual sensors, and sound sensors. The instructions include temperature information, air pressure information, image information, and sound information. A communication mode between the at least one sensor 103 and the processor 110 may be wired communication or wireless communication. Ways of wireless communication include, but are not limited to, wireless networks, mobile communication networks (3G, 4G, 5G, etc.), Bluetooth, and infrared.
The initial positioning of a robot indoors is usually a relatively difficult challenge. In the absence of prior positioning, the robot needs to recognize its precise pose in a full map from its own sensor information at the current moment. A visual bag-of-words model is one of the popular methods for solving the initial positioning of the robot. It extracts ORB feature descriptors from image information, clusters them to construct a visual dictionary, and uses the similarity of visual words to match a query image with the most similar image in a database, so as to find the full pose corresponding to the robot. This technical solution has obvious shortcomings: feature points are low-level local image descriptors that are easily affected by lighting, rotation angles, and moving objects; limited by the field of view of the camera device of the robot, it is not easy to match different viewing angles at the same position; and the efficiency is low, since all the data in the database must be traversed, and the efficiency decreases as the map data becomes larger.
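For context, a minimal sketch of the image-matching step of such a visual bag-of-words pipeline is given below, using OpenCV's ORB implementation. The clustering of descriptors into a visual dictionary is omitted, the image paths and the match-distance threshold are illustrative assumptions, and the sketch is not the method of the present disclosure.

```python
# Illustrative sketch of ORB-based image matching as used by visual
# bag-of-words relocalization (dictionary clustering omitted).
# Image paths and the distance threshold are placeholders, not from the disclosure.
import cv2

def orb_similarity(img_path_a: str, img_path_b: str, n_features: int = 500) -> float:
    """Return a crude similarity score between two images based on ORB matches."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    if img_a is None or img_b is None:
        raise FileNotFoundError("could not read one of the input images")

    orb = cv2.ORB_create(nfeatures=n_features)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0  # no features found in at least one image

    # Brute-force Hamming matching of binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # Keep only reasonably close matches; the threshold is an assumption.
    good = [m for m in matches if m.distance < 40]
    return len(good) / max(len(matches), 1)
```

Because every database image must be scored in this way (or via its visual-word histogram), the query cost grows with the size of the map database, which is the efficiency problem addressed below.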
Referring to
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
The above positioning method and the robot 100 can quickly and accurately determine associated node pairs (i.e., associated object pairs) through topology map matching. Therefore, the search branch with the highest matching degree can be quickly determined as the search branch with the largest number of associated node pairs between the current local topology map and the full topology map, and then the current pose of the robot 100 can be determined according to the search branch with the highest matching degree.
The current local topology map is established according to the at least one object in the current environment detected by the robot 100. The at least one object in the current environment detected by the robot 100 refers to the at least one object in the environment detected when the robot 100 is in the current pose.
The full topology map is pre-established according to the at least one object in the full environment in the preset area. The preset area can be any preset area; for example, the preset area can be an indoor area, specifically all areas of an entire building, one of its floors, or one of its rooms, which is not specifically limited here.
Specifically, the nodes in the current local topology map and the nodes in the full topology map are matched, and the degree of association of the at least one node pair to be associated can be determined according to a matching result. If the degree of association of the at least one node pair to be associated is greater than the threshold, it means that the two nodes of the at least one node pair to be associated are related. Therefore, the at least one node pair to be associated can be determined as an associated node pair. The threshold may be preset, and is not specifically limited here.
The current local topology map and the full topology map can form a plurality of search branches. When the number of associated node pairs in each of the plurality of search branches is determined and the search branch with the largest number of associated node pairs is identified, the current pose of the robot 100 can be determined according to the search branch with the highest matching degree. This disclosure transforms a positioning problem of the robot 100 into a matching problem between topology maps, and can quickly and accurately determine the search branch with the highest matching degree through the topology map matching, thereby quickly and accurately determining the current pose of the robot 100. The current pose of the robot 100 may refer to an initial pose at a current moment; the initial pose means that the robot 100 has no prior positioning (no positioning information earlier than the current moment), and the pose may refer to the position and a posture (orientation).
In some embodiments, the obtaining of the full topology map at block 01, includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the full topology map can be generated based on object semantics.
Referring to
Each first bounding box is taken as a node (objects that are not easy to move, such as building structures like walls and corner lines, as well as sofas, air conditioners, refrigerators, and other objects, are taken as nodes), and the connection line of two adjacent first bounding boxes is taken as an edge, thereby generating the full topology map. The edge can be an undirected edge, that is, an edge without direction or arrow, so that no bounding box points to another bounding box. If the distance between two first bounding boxes is less than the preset distance, the two first bounding boxes can be considered to be adjacent, and the connection line between the two first bounding boxes is taken as an edge; if the distance between the two first bounding boxes is greater than the preset distance, the two first bounding boxes can be considered to be not adjacent and are not connected, that is, no edge is generated. Please refer to
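A minimal sketch of this construction is given below. It assumes each detected object has been reduced to a bounding-box center and an object type label; the node layout and the preset distance value are illustrative assumptions.

```python
# Illustrative construction of a topology map: bounding boxes become nodes,
# and an undirected edge links two boxes whose centers are closer than a
# preset distance. Data layout and threshold are assumptions.
import math
from dataclasses import dataclass

@dataclass
class BoxNode:
    node_id: int
    label: str        # object type label, e.g. "wall", "sofa"
    center: tuple     # (x, y, z) center of the bounding box

def build_topology_map(nodes, preset_distance=3.0):
    """Return an adjacency set {node_id: set(neighbor_ids)} of undirected edges."""
    adjacency = {n.node_id: set() for n in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            d = math.dist(a.center, b.center)
            if d < preset_distance:          # adjacent: connect with an undirected edge
                adjacency[a.node_id].add(b.node_id)
                adjacency[b.node_id].add(a.node_id)
    return adjacency

# Example: three static objects; only the closest pair gets an edge.
nodes = [BoxNode(0, "wall", (0.0, 0.0, 1.0)),
         BoxNode(1, "sofa", (1.5, 0.5, 0.4)),
         BoxNode(2, "refrigerator", (6.0, 2.0, 0.9))]
print(build_topology_map(nodes))   # {0: {1}, 1: {0}, 2: set()}
```

The same routine applies to the current local topology map, only with the objects detected at the current pose.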
In some embodiments, the obtaining the current local topology map at block 01, includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the current local topology map can be generated based on object semantics.
Specifically, when the robot 100 is in an unknown position, the current local topology map can be generated. Referring to
In some embodiments, block 04 (the robot 100 selects, from the plurality of search branches, the search branch with the largest number of associated node pairs as the search branch with the highest matching degree) includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the search can be performed while building the search interpretation tree, and the corresponding search branch will not be constructed if the search fails to find a node that meets the requirements.
Specifically, the current local topology map and the full topology map are combined to form the search interpretation tree. The search interpretation tree includes the plurality of search branches, each of the plurality of search branches is traversed to search for search branches meeting the conditions, and the search branch with the largest number of associated node pairs is selected from the plurality of search branches as the search branch with the highest matching degree. More specifically, please refer to
Compare the unary constraint between the two objects of the second node pair, that is, judge the similarity constraint of a single node: compare whether the first object type label corresponding to the node in the full topology map is the same as the second object type label corresponding to the node in the current local topology map in the second node pair. Compare the binary constraints between the node pairs to be associated, that is, determine the similarity constraints between the two current topology nodes and the two full topology nodes of the first node pair and the second node pair in the same search branch: determine whether the two nodes in the current local topology map are connected (whether the topology map has a corresponding edge) and whether the two nodes in the full topology map are connected, and obtain the degree of association of the node pair to be associated according to the distance and the angle between the two nodes in the current local topology map and the distance and the angle between the two nodes in the full topology map. The distance and the angle between two nodes in the current local topology map or in the full topology map can be calculated according to the bounding boxes and spatial directions corresponding to the two nodes. The distance is calculated, for example, as d=|p1−p2|, where p1 and p2 represent the center points of the two bounding boxes, and the angle is calculated, for example, as ΔR=|R1⁻¹·R2|, where R1 and R2 represent the rotation matrices of the two bounding boxes relative to the same coordinate system. A rotation matrix R of a bounding box in a certain coordinate system can be obtained using the Rodrigues formula, and a specific method is as follows. Let p be the center point of the bounding box, let px be an intersection of an x-axis of the bounding box and a surface of the bounding box, let vector a=p−px, and let vector b be the unit vector (1,0,0).
Let v=a×b, s=∥v∥, c=a·b. The rotation matrix R is:

R = I + V + V²·(1−c)/s²,

where I is the 3×3 identity matrix and V is the skew-symmetric matrix of v:

V =
[  0   −v3   v2 ]
[  v3   0   −v1 ]
[ −v2   v1   0  ]
v1, v2, and v3 represent the three components of the three-dimensional vector v. The degree of association of the node pair to be associated is negatively correlated with the difference between the distance between the two nodes in the current local topology map and the distance between the two nodes in the full topology map. In one embodiment, if the difference is greater than a preset distance difference, it may be considered that the node pair to be associated is not associated. The degree of association of the node pair to be associated is also negatively correlated with the difference between the angle between the two nodes in the current local topology map and the angle between the two nodes in the full topology map.
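The distance and angle quantities above can be computed as in the following sketch. It assumes each bounding box provides its center p and the intersection px of its x-axis with its surface, normalizes the vector a before applying the Rodrigues construction, and reads ΔR as the rotation angle of R1⁻¹·R2; these are illustrative assumptions.

```python
import numpy as np

def rotation_from_box(p, px):
    """Rodrigues construction: rotation matrix aligning the (normalized)
    vector a = p - px with the unit vector b = (1, 0, 0)."""
    a = np.asarray(p, float) - np.asarray(px, float)
    a = a / np.linalg.norm(a)                 # normalization is an assumption
    b = np.array([1.0, 0.0, 0.0])
    v = np.cross(a, b)
    s = np.linalg.norm(v)
    c = float(np.dot(a, b))
    if s < 1e-9:                              # a already (anti)parallel to b
        return np.eye(3) if c > 0 else np.diag([-1.0, -1.0, 1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])       # skew-symmetric matrix V of v
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / (s * s))

def node_distance(p1, p2):
    """d = |p1 - p2| between the two bounding-box centers."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

def node_angle(R1, R2):
    """Rotation angle (radians) of the relative rotation R1^-1 · R2."""
    dR = R1.T @ R2                            # R1 is orthonormal, so R1^-1 = R1^T
    cos_theta = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_theta))
```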
If the first object type label corresponding to the node in the full topology map is different from the second object type label corresponding to the node in the current local topology map in the second node pair (for example, 2-1 in
The number of associated node pairs of each search branch is counted, and the search branch with the largest number of associated node pairs is regarded as the search branch with the highest matching degree.
In some embodiments, before block 04 (the robot 100 selects, from the plurality of search branches, the search branch with the largest number of associated node pairs as the search branch with the highest matching degree), the positioning method includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, referring to
In this way, a complete search interpretation tree can be constructed first based on the current local topology map and the full topology map, and then associated node pairs can be searched based on the complete search interpretation tree.
Specifically, the complete search interpretation tree is constructed according to the current local topology map and the full topology map. Each node in the current local topology map constitutes a second-layer node; taking the second-layer node as the parent node, the third-layer nodes are constructed by taking each node in the full topology map as a child node of the second-layer node. Taking a third-layer node as the parent node, the fourth-layer nodes are constituted by taking, as its child nodes, all the remaining nodes of the current local topology map after removing the nodes that appeared in the upstream branch of the search branch. Taking a fourth-layer node as the parent node, the fifth-layer nodes are constituted by taking, as its child nodes, all the remaining nodes of the full topology map after removing the nodes that appeared in the upstream branch of the search branch. By analogy, the other nodes of the current local topology map and the full topology map continue to be traversed to form the search interpretation tree. Each search branch is a complete set of node matching pairs, and all search branches constitute all possible situations of the node pairs to be associated. It should be noted that the terms “first” and “second” are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. For example, the second-layer node in this application may be the second layer or the initial layer, and it is not required that a first-layer node precede the second-layer node. Specifically, in an embodiment, the second-layer node is preceded by a first-layer node, the first-layer node is an empty node, and the empty node may be used as a root node. In another embodiment, there is no first-layer node in front of the second-layer node, and the second-layer node directly serves as the root node.
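A minimal sketch of this layer-by-layer construction is given below. It represents each search branch as an ordered list of (local node, full node) pairs, identifies nodes by simple ids, and enumerates the complete tree without the pruning discussed later, so it is only practical for small maps; the representation is an illustrative assumption.

```python
def enumerate_branches(local_nodes, full_nodes):
    """Enumerate every complete search branch of the interpretation tree.

    Layer 2 takes a node of the current local topology map, layer 3 pairs it
    with a node of the full topology map, layer 4 takes a remaining local
    node, layer 5 a remaining full node, and so on; nodes used upstream in a
    branch are excluded downstream.
    """
    branches = []

    def grow(branch, remaining_local, remaining_full):
        if not remaining_local or not remaining_full:
            branches.append(list(branch))          # branch is complete
            return
        for ln in remaining_local:                 # next local-map layer
            for fn in remaining_full:              # next full-map layer
                branch.append((ln, fn))
                grow(branch,
                     [x for x in remaining_local if x != ln],
                     [x for x in remaining_full if x != fn])
                branch.pop()

    grow([], list(local_nodes), list(full_nodes))
    return branches

# Example: 2 local nodes against 3 full-map nodes.
print(len(enumerate_branches(["a", "b"], [1, 2, 3])))   # 12 complete branches
```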
In some embodiments, block 02 (the robot 100 matches the nodes in the current local topology map and the nodes in the full topology map, and the matching including determining a degree of association of at least one node pair to be associated, which is constructed by the nodes in the current local topology map and the nodes in the full topology map), includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the degree of association of the node pair to be associated can be obtained according to the object type label, the distance and the angle between the nodes.
Specifically, the matching of the node pair to be associated includes comparing the unary constraint between two objects and comparing the binary constraints between the node pairs to be associated. Comparing the unary constraint between two objects means judging a similarity constraint of a single node: compare whether the second object type label of the first node of the current local topology map is the same as the first object type label of the first node of the full topology map. If the second object type label of the first node of the current local topology map is different from the first object type label of the first node of the full topology map, it means that the first node pair is not related. If the second object type label of the first node of the current local topology map is the same as the first object type label of the first node of the full topology map, compare whether the second object type label of the second node of the current local topology map is the same as the first object type label of the second node of the full topology map. The degree of association is then determined by the distance and the angle between nodes. Comparing the binary constraints between the node pairs to be associated means judging the similarity constraint between the two nodes in the current local topology map and the two nodes in the full topology map: judge whether the two nodes in the current local topology map are connected (whether the topology map has the corresponding edge) and whether the two nodes in the full topology map are connected, and obtain the degree of association of the node pair to be associated according to the distance and the angle between the two nodes in the current local topology map and the distance and the angle between the two nodes in the full topology map. The distance and the angle between two nodes in the current local topology map or in the full topology map can be calculated according to the bounding boxes and spatial directions corresponding to the two nodes. The distance is calculated, for example, as d=|p1−p2|, where p1 and p2 represent the center points of the two bounding boxes, and the angle is calculated, for example, as ΔR=|R1⁻¹·R2|, where R1 and R2 represent the rotation matrices of the two bounding boxes relative to the same coordinate system.
Through the topology map matching, the relationships between objects (object type labels, whether nodes are connected, and the distance and the angle between nodes) can be compared to obtain associated node pairs (associated objects).
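The unary and binary checks can be summarized in a sketch such as the following. The thresholds and the way the distance and angle differences are folded into a single degree of association are illustrative assumptions.

```python
def unary_match(local_label: str, full_label: str) -> bool:
    """Unary constraint: the object type labels of the two nodes must match."""
    return local_label == full_label

def binary_association(local_connected: bool, full_connected: bool,
                       local_dist: float, full_dist: float,
                       local_angle: float, full_angle: float,
                       max_dist_diff: float = 0.5,
                       max_angle_diff: float = 0.3) -> float:
    """Binary constraint between two local-map nodes and two full-map nodes.

    Distances and angles would come from the bounding boxes as in the earlier
    sketch. Returns a degree of association in [0, 1]; 0.0 means no association.
    """
    if not (local_connected and full_connected):   # both pairs need an edge
        return 0.0
    d_diff = abs(local_dist - full_dist)
    a_diff = abs(local_angle - full_angle)
    if d_diff > max_dist_diff or a_diff > max_angle_diff:
        return 0.0
    # Negatively correlated with the distance and angle differences.
    return 1.0 - 0.5 * (d_diff / max_dist_diff) - 0.5 * (a_diff / max_angle_diff)

# Example: identical layouts give the maximum degree of association of 1.0.
print(binary_association(True, True, 2.0, 2.0, 0.4, 0.4))
```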
In some implementations, the search interpretation tree includes a plurality of search branches. Block 04 (the robot 100 selects, from a plurality of search branches, a search branch with a largest number of associated node pairs as a search branch with a highest matching degree), includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the search branch with the highest matching degree can be determined by traversing the search interpretation tree.
Specifically, for example, a depth-first traversal method may be used to traverse each search branch of the search interpretation tree. If the second object type label of a single node of the current local topology map is different from the first object type label of a single node of the full topology map, the search of the corresponding search branch can be ended. If the two nodes of the current local topology map are not connected, the two nodes of the full topology map are not connected, or the degree of association of the node pair to be associated is less than the threshold, it indicates that the degree of association of the node pair to be associated is low and the node pair to be associated is not an associated node pair, and the search of the corresponding search branch can be ended. If the second object type label of the single node in the current local topology map is the same as the first object type label of the single node in the full topology map, the two nodes in the current local topology map are connected, the two nodes in the full topology map are connected, and the degree of association of the node pair to be associated is greater than the threshold, it indicates that the degree of association of the node pair to be associated is high and the node pair to be associated is an associated node pair. The corresponding search branch can be determined as a matching branch, and the search branch continues to be traversed to determine all associated node pairs. The number of associated node pairs of each matching branch is counted, and the matching branch with the largest number of associated node pairs is regarded as the search branch with the highest matching degree.
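A depth-first traversal with the pruning described above might look like the following sketch. The branch representation, the label dictionaries, and the caller-supplied binary-constraint check are illustrative assumptions.

```python
def best_branch(local_nodes, full_nodes, labels_local, labels_full, check_pair):
    """Depth-first search over the interpretation tree with pruning.

    labels_local / labels_full map node ids to object type labels.
    check_pair(prev_pair, new_pair) returns True when the binary constraint
    between the previous (local, full) pair on the branch and the new pair is
    satisfied (degree of association above the threshold).
    Returns the branch (list of associated (local, full) pairs) with the most
    associated node pairs.
    """
    best = []

    def dfs(branch, rem_local, rem_full):
        nonlocal best
        if len(branch) > len(best):
            best = list(branch)
        for ln in rem_local:
            for fn in rem_full:
                if labels_local[ln] != labels_full[fn]:
                    continue                       # unary constraint fails: prune
                if branch and not check_pair(branch[-1], (ln, fn)):
                    continue                       # binary constraint fails: prune
                branch.append((ln, fn))
                dfs(branch,
                    [x for x in rem_local if x != ln],
                    [x for x in rem_full if x != fn])
                branch.pop()

    dfs([], list(local_nodes), list(full_nodes))
    return best

# Tiny example with labels only (binary constraint always passes here).
labels_l = {"a": "sofa", "b": "tv"}
labels_f = {1: "sofa", 2: "tv", 3: "wall"}
print(best_branch(labels_l, labels_f, labels_l, labels_f, lambda p, q: True))
# -> [('a', 1), ('b', 2)]
```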
In some implementations, before block 0412 or block 061 (the robot 100 combines the current local topology map and the full topology map to form a search interpretation tree), the positioning method further includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the full topology map can be pruned according to the size of the bounding box, the rarity rate of the object, and the random walk feature to obtain the updated full topology map. Compared with the original topology map, the updated full topology map has fewer nodes, so that the updated full topology map can be used as an entry for a limited range of topological map matching, to improve the matching efficiency.
Specifically, a distance from the center point of the bounding box to any vertex of the cuboid that constitutes the bounding box can be calculated according to the six-degree-of-freedom coordinates of the object, and the distance can be used as the size of the bounding box. The number of objects of each kind is counted to calculate the rarity rate of each kind of object. For example, if there are 100 objects A in the topology map, the rarity rate of object A is 1/100; if there is only 1 object B in the topology map, the rarity rate of object B is 1. The random walk feature of the object is also calculated; the random walk feature of the object can be b groups of walk sequences of length c. Taking a current object as a starting point, randomly walking to a connected object counts as one step, and randomly walking to a next connected object until c steps are taken constitutes one walk. The sequence of objects passed is one walk sequence of length c; the walk then returns to the starting point, and this is repeated b times. For example, an object may walk 2 times, each time randomly walking 4 steps towards connected objects.
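The rarity rate and the random walk feature can be computed as in the following sketch, which assumes the topology map is stored as an adjacency dictionary and a label dictionary; the walk counts and seed are illustrative.

```python
import random
from collections import Counter

def rarity_rates(labels):
    """Rarity rate of each object: 1 / (number of objects of that type)."""
    counts = Counter(labels.values())
    return {node: 1.0 / counts[lab] for node, lab in labels.items()}

def random_walk_feature(adjacency, start, walks=2, steps=4, seed=0):
    """b walks of length c from `start` (b=walks, c=steps); each walk is the
    ordered list of visited neighbors. Returns a list of walk sequences."""
    rng = random.Random(seed)
    sequences = []
    for _ in range(walks):
        node, seq = start, []
        for _ in range(steps):
            neighbors = list(adjacency.get(node, ()))
            if not neighbors:          # dead end: stop this walk early
                break
            node = rng.choice(neighbors)
            seq.append(node)
        sequences.append(seq)          # next walk restarts from the starting point
    return sequences

# Example on a tiny map: node 0 is connected to nodes 1 and 2.
adj = {0: {1, 2}, 1: {0}, 2: {0}}
labels = {0: "wall", 1: "sofa", 2: "sofa"}
print(rarity_rates(labels))               # {0: 1.0, 1: 0.5, 2: 0.5}
print(random_walk_feature(adj, 0))        # e.g. two sequences of up to 4 nodes
```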
The similarity of the topological node sets of the current topology map and the full topology map is calculated through the size of the bounding box, the rarity rate, and the random walk feature, so that the node pairs whose similarity in the topological node set is higher than a preset similarity can be found; for example, k node pairs whose similarity is higher than the preset similarity can be found. The object type labels of the first node and the second node in the topological node set are the same. The node of the full topology map in a node pair with the same object type label and a similarity higher than the preset similarity is used as the central node, that is, the first node of the node pair is used as the central node, and the central node together with the nodes within a preset range around the central node in the full topology map is used as an updated full topology map. Compared with the original topology map, the number of nodes in the updated full topology map is smaller, so that the updated full topology map can be used as an entry for a limited range of topology map matching, which improves the matching efficiency, allows the associated node pairs to be determined quickly, and allows the current pose of the robot 100 to be determined according to the search branch with the highest matching degree. The surrounding preset range may refer to all nodes within a preset number of steps connected to the central node, where one edge (one connection) may be considered as one step, and the preset number of steps is, for example, 1, 2, 3, 4, 5, etc., which is not specifically limited here. Please refer to
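Once a central node has been selected by the similarity comparison, the updated (pruned) full topology map can be cut out as in the following sketch, which keeps the central node and every node within a preset number of steps; the breadth-first formulation is an illustrative assumption.

```python
from collections import deque

def neighborhood(adjacency, center, preset_steps=2):
    """Nodes of the full topology map within `preset_steps` edges of `center`."""
    kept = {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == preset_steps:
            continue
        for nb in adjacency.get(node, ()):
            if nb not in kept:
                kept.add(nb)
                frontier.append((nb, depth + 1))
    return kept

def pruned_map(adjacency, center, preset_steps=2):
    """Updated full topology map restricted to the neighborhood of `center`."""
    kept = neighborhood(adjacency, center, preset_steps)
    return {n: {m for m in adjacency[n] if m in kept} for n in kept}

# Example: a chain 0-1-2-3-4 pruned to one step around node 2.
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(pruned_map(chain, center=2, preset_steps=1))   # {1: {2}, 2: {1, 3}, 3: {2}}
```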
In some embodiments, the associated node pair of the search branch with the highest matching degree is the associated node pair with the highest matching degree. Block 05 (determining the current pose of the robot 100 according to the search branch with the highest matching degree) includes:
The positioning method of the embodiment of the present disclosure can be realized by the robot 100 of the embodiment of the present disclosure. Specifically, please refer to
In this way, the current pose of the robot 100 can be determined through the associated node pair with the highest matching degree.
Specifically, a low-precision preprocessing initial position can be obtained from the associated node pairs by combining the trilateration method with the least squares method (or other methods such as RANSAC); the preprocessing initial position has only position information and no attitude information. Taking the preprocessing initial position as an initial estimate, point cloud matching is performed on the associated node pair with the highest matching degree, so as to obtain an accurate current pose with six degrees of freedom. The point clouds of the associated node pairs corresponding to the nodes of the current topology map are taken as one set of point clouds, the point clouds of the associated node pairs corresponding to the nodes of the full topology map are taken as another set of point clouds, and an optimal rotation matrix R and a translation t are found, so that one set of point clouds, after being transformed, overlaps with the other set of point clouds to the maximum extent. Determining the preprocessing initial position of the robot 100 according to the associated node pair with the highest matching degree can be regarded as coarse point cloud matching; coarse point cloud matching can use SAC-IA (Sample Consensus Initial Alignment), which does not need initial values but has a relatively large matching error. Based on the preprocessing initial position, point cloud matching is performed on the associated node pair with the highest matching degree, thereby determining the current pose of the robot 100; this can be regarded as fine point cloud matching, and fine point cloud matching can use ICP (Iterative Closest Point).
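The two stages can be illustrated with the following sketch: a least-squares trilateration over the matched node positions yields a rough position, and a closed-form SVD (Kabsch-style) alignment of corresponding node centers then recovers a rotation R and translation t. This stands in for the SAC-IA/ICP point-cloud pipeline named above and operates on node centers only; the function names and inputs are assumptions.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from map-frame anchor points and measured ranges.

    Linearizes |x - a_i|^2 = d_i^2 against the first anchor and solves Ax = b.
    Needs at least 4 non-coplanar anchors for a well-conditioned 3D solution.
    """
    anchors = np.asarray(anchors, float)
    distances = np.asarray(distances, float)
    a0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def rigid_align(source, target):
    """Rotation R and translation t minimizing |R @ source_i + t - target_i|^2."""
    source, target = np.asarray(source, float), np.asarray(target, float)
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t
```

In practice, the anchors would be the full-map centers of the matched nodes with ranges measured by the robot, and source/target would be the local-map and full-map centers of the associated node pairs; a fine ICP stage over the dense point clouds would then refine this estimate as described.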
Please refer to
According to an embodiment of the present application, the robot 100 includes a storage device and a processor. The storage device stores a computer program. When the processor executes the computer program, the positioning method in any one of the above-mentioned embodiments is realized. Referring to
For example, when the computer program is executed by the processor, the blocks of the following positioning method are implemented:
The above robot 100 can quickly and accurately determine associated node pairs (i.e., associated object pairs) through topology map matching. Therefore, the search branch with the highest matching degree can be quickly determined as the search branch with the largest number of associated node pairs between the current local topology map and the full topology map, and then the current pose of the robot 100 can be determined according to the search branch with the highest matching degree.
A computer-readable storage medium according to an embodiment of the present application, on which a computer program is stored. When the program is executed by the processor, the positioning method in any one of the above-mentioned embodiments is realized.
For example, when the program is executed by the processor, the blocks of the following positioning method are implemented:
The above computer-readable storage medium can quickly and accurately determine associated node pairs (i.e., associated object pairs) through topology map matching. Therefore, the search branch with the highest matching degree can be quickly determined as the search branch with the largest number of associated node pairs between the current local topology map and the full topology map, and then the current pose of the robot 100 can be determined according to the search branch with the highest matching degree.
The computer-readable storage medium can be set on the robot 100 or on other terminal devices, and the robot 100 can communicate with other terminal devices to acquire corresponding computer programs.
It can be understood that the computer-readable storage medium may include: any entity or device capable of carrying a computer program, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, etc. The computer program includes computer program code. The computer program code may be in the form of source code, object code, an executable file, or some intermediate form, etc.
Any process or method descriptions in flowcharts or otherwise described herein may be understood to represent modules, segments or portions of code comprising one or more executable instructions for implementing specific logical functions or steps of a process, and the scope of the embodiments of the present disclosure includes additional implementations in which functions may be performed out of an order shown or discussed, including substantially concurrently or in reverse order as the functions involved are understood by those skilled in the art to which embodiments of the present disclosure pertain.
In the description of the present disclosure, descriptions with reference to the terms “one embodiment”, “some embodiments”, “exemplary embodiments”, “example”, “specific examples”, or “some examples” mean that specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present disclosure. In the present disclosure, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the described specific features, structures, materials, or characteristics may be combined in any manner in any one or more embodiments or examples.
The logic and/or blocks represented in the flowcharts or otherwise described herein may be considered as a sequenced list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in conjunction with, an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processing module, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device. For the purposes of the present disclosure, a “computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer disk case (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another medium on which the program can be printed, since the program may be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it if necessary, and then stored in the storage device 105.
The processor 110 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor 110 may be any conventional processor, or the like.
It is understood that each part of the embodiments of the present disclosure may be realized by hardware, software, firmware, or a combination thereof. In the embodiments described above, the blocks of the robot controlling method may be implemented by software or firmware stored in the storage device and executed by an instruction execution system. If implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following techniques known in the art: discrete logic circuits, ASICs with combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), etc.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above-mentioned embodiments can be completed by instructing related hardware through a program, and the program can be stored in a computer-readable storage medium 500. When the program is executed, one or a combination of the steps of the method embodiments is performed.
The storage medium mentioned above may be a read-only memory, a magnetic disk or an optical disk, and the like.
Although the embodiments of the present disclosure have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limitations on the present disclosure. The embodiments are subject to changes, modifications, substitutions and variations.
It is understood that the division of modules described above is a logical functional division, and there can be another division in actual implementation. In addition, each functional module in each embodiment of the present disclosure may be integrated in the same processing unit, or each module may physically exist separately, or two or more modules may be integrated in the same unit. The above integrated modules can be implemented either in the form of hardware or in the form of hardware plus software functional modules. The above description is only embodiments of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes can be made to the present disclosure. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and scope of the present disclosure are intended to be included within the scope of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
202211069393.2 | Sep 2022 | CN | national |