This application relates generally to robotics and more specifically to systems, methods and apparatuses, including computer programs, for determining safety and/or operating parameters for robotic devices.
A robot is generally defined as a reprogrammable and multifunctional manipulator designed to move material, parts, tools, and/or specialized devices (e.g., via variable programmed motions) for performing tasks. Robots may include manipulators that are physically anchored (e.g., industrial robotic arms), mobile devices that move throughout an environment (e.g., using legs, wheels, or traction-based mechanisms), or some combination of one or more manipulators and one or more mobile devices. Robots are currently used in a variety of industries, including, for example, manufacturing, warehouse logistics, transportation, hazardous environments, exploration, and healthcare.
During operation, mobile robots can be hazardous to entities in the environment (e.g., humans or other robots). For example, mobile manipulator robots that are large and powerful enough to move packages from one location to another at high speeds can be dangerous to operators or other workers nearby. In such settings, mobile robots should have systems that protect entities of concern in the environment, e.g., by ensuring that the robots do not operate at high speeds while dangerously close to such entities.
Some embodiments include systems, methods and/or apparatuses, including computer programs, for assigning, to each of a plurality of discrete regions of a safety field around a mobile robot, an “occupancy state” (e.g., “occupied” or “unoccupied”). Such a system may be considered a “stateful” safety system, where entities within the environment of the mobile robot are detected, tagged and/or tracked within the discrete regions of the safety field over time. Such a stateful safety system may enable matching of occluded or partially occluded entities across frames of sensor data, enabling continuous tracking of entities in the environment of the robot.
Collectively the set of discrete regions within the safety field may be considered an “occupancy grid.” The occupancy state for each region of the occupancy grid may represent whether the region is determined to be occupied and/or potentially occupied (e.g., by a human or other entity). To facilitate safe operation, the mobile robot may take the occupancy states of the regions of the occupancy grid into consideration when controlling operation of the robot. For example, a distance between the robot and one or more regions associated with an occupied state may be used to help determine one or more thresholds or ranges of permitted operating parameters of the robot at a given time (e.g., the fastest allowable safe operating speed for an arm and/or the fastest allowable safe travel speed of a base of the robot at a particular time or interval). One or more operations of the robot can then be constrained according to these thresholds or ranges of permitted operating parameters to facilitate safe operation of the robot in particular environment scenarios.
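By way of illustration only, the following Python sketch shows one possible realization of such an occupancy grid and its distance query; the class names, region indexing, and cell size are hypothetical assumptions for the example and are not drawn from any particular embodiment.

```python
# Illustrative occupancy grid for a safety field (hypothetical names/values).
from dataclasses import dataclass
from enum import Enum
import math

class OccupancyState(Enum):
    UNOCCUPIED = 0
    OCCUPIED = 1

@dataclass(frozen=True)
class Region:
    x: int  # column index within the grid
    y: int  # row index within the grid

class OccupancyGrid:
    def __init__(self, width: int, height: int, cell_size_m: float):
        self.cell_size_m = cell_size_m
        # Every region starts unoccupied; sensor data updates the states.
        self.states = {Region(x, y): OccupancyState.UNOCCUPIED
                       for x in range(width) for y in range(height)}

    def set_state(self, region: Region, state: OccupancyState) -> None:
        self.states[region] = state

    def distance_to_nearest_occupied(self, robot: Region) -> float:
        """Distance in meters from the robot to the closest occupied region,
        which may then be used to select permitted operating parameters."""
        occupied = [r for r, s in self.states.items()
                    if s is OccupancyState.OCCUPIED]
        if not occupied:
            return math.inf
        return min(math.hypot(r.x - robot.x, r.y - robot.y) * self.cell_size_m
                   for r in occupied)
```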
In some embodiments, the occupancy states of different regions of the occupancy grid may be updated over time. For instance, as more sensor data describing characteristics of the robot's environment is acquired, a region assigned an occupied state may be reassigned to an unoccupied state if sensor data indicates that no entity occupies the region. In this way, the occupancy grid may be temporally updatable, with the regions within the occupancy grid that are associated with an occupied state reflecting the current location of potential safety hazards in the environment of the mobile robot during its operation.
Using such systems and/or methods, the robot can be enabled to maximize its operating efficiency in a given situation subject to the safety constraints that the situation presents. For example, the robot can be allowed to operate at one or more full (e.g., maximum) speeds when the regions of the occupancy grid having an occupied state are sufficiently far from the robot, but may be required to operate at one or more lower speeds (e.g., one or more maximum safe speeds) when such regions are closer to the robot. By updating the occupancy states of the occupancy grid over time, the maximum speed at which the robot is allowed to operate can be modulated as entities of concern and/or the mobile robot move within the environment.
Such systems and methods can lead to lower-cost and faster setup routines than other systems in place today. In some embodiments, the system includes fewer components that may fail over time. In some embodiments, fewer physical touch points exist within the system. In some embodiments, the system has less physical equipment to move (e.g., from bay to bay), reducing the amount of labor-intensive work and/or time required to transition the robot to the next task or area. Some or all of these advantages can lead to greater productivity during operation of the robot.
In one aspect, the invention features a method. The method comprises receiving first sensor data from one or more sensors, the first sensor data being captured at a first time, identifying, based on the first sensor data, a first unobserved portion of a safety field in an environment of a mobile robot, assigning, to each region of a plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state, updating, at a second time after the first time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field, and determining, by a computing device, one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on the occupancy state of at least some regions of the plurality of contiguous regions within the first unobserved portion of the safety field at the second time.
In some embodiments, the safety field defines a plane surrounding the mobile robot, and the plurality of contiguous regions within the first unobserved portion of the safety field are two-dimensional (2D) regions arranged within the plane. In some embodiments, the safety field defines a volume surrounding the mobile robot, and the plurality of contiguous regions within the first unobserved portion of the safety field are three-dimensional (3D) regions arranged within the volume. In some embodiments, the plurality of contiguous regions are uniformly spaced within the first unobserved portion of the safety field. In some embodiments, the one or more sensors include at least one sensor coupled to the mobile robot. In some embodiments, the one or more sensors include at least one depth sensor. In some embodiments, the at least one depth sensor includes at least one depth camera. In some embodiments, the plurality of contiguous regions within the first unobserved portion include a first region and a second region, the second region being closer to the mobile robot than the first region within the first unobserved portion of the safety field, and assigning an occupancy state to each of the plurality of contiguous regions within the first unobserved portion of the safety field comprises assigning an occupied state to the first region, and assigning an unoccupied state to the second region.
In some embodiments, the method further comprises identifying, based on the first sensor data, an entity in the safety field, determining, based on information about the entity, whether the entity is a whitelisted entity, and ignoring, when it is determined that the entity is a whitelisted entity, the presence of the entity within the safety field when determining the one or more operating parameters for the mobile robot. In some embodiments, the information about the entity indicates that the entity is an object being manipulated by the mobile robot. In some embodiments, the information about the entity indicates that the entity is a portion of the mobile robot. In some embodiments, the information about the entity indicates that the entity is a portion of the environment of the mobile robot. In some embodiments, the information about the entity indicates that the entity is not an entity of concern. In some embodiments, the information about the entity indicates that the entity is another mobile robot. In some embodiments, the information about the entity indicates that the entity is an automated vehicle. In some embodiments, the information about the entity includes information identifying the entity with a particular confidence level, and the entity is determined as a whitelisted entity only when the particular confidence level is above a threshold confidence level. In some embodiments, the threshold confidence level is 99%.
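As a non-authoritative illustration, the sketch below applies such a confidence-gated whitelist; the entity kinds, data structure, and 0.99 threshold are assumptions for the example (the threshold echoes the 99% figure above).

```python
# Hypothetical whitelist filter: an entity is ignored for safety purposes
# only when it is identified as whitelisted with high enough confidence.
from dataclasses import dataclass

WHITELISTED_KINDS = {"manipulated_object", "robot_self", "environment_portion",
                     "other_mobile_robot", "automated_vehicle"}
CONFIDENCE_THRESHOLD = 0.99  # mirrors the 99% example threshold above

@dataclass
class EntityDetection:
    kind: str          # classifier label for the detected entity
    confidence: float  # identification confidence in [0, 1]

def is_whitelisted(entity: EntityDetection) -> bool:
    """Whitelist an entity only when its label clears the confidence bar."""
    return (entity.kind in WHITELISTED_KINDS
            and entity.confidence >= CONFIDENCE_THRESHOLD)

def entities_of_concern(detections: list[EntityDetection]) -> list[EntityDetection]:
    # Whitelisted entities are excluded when determining operating parameters.
    return [d for d in detections if not is_whitelisted(d)]
```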
In some embodiments, updating, at the second time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field comprises assigning an occupied state to a first region of the plurality of contiguous regions, the first region having an unoccupied state at the first time, wherein the first region is located adjacent to a second region having an occupied state at the first time. In some embodiments, assigning an occupied state to the first region at the second time is based on an elapsed time between the first time and the second time. In some embodiments, updating, at the second time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field further comprises determining, based on an entity speed for an entity associated with the second region at the first time, whether it is possible for the entity associated with the second region at the first time to have travelled into the first region at the second time, and assigning an occupied state to the first region only when it is determined that it is possible for the entity associated with the second region at the first time to have travelled into the first region at the second time. In some embodiments, updating, at the second time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field comprises assigning an occupied state to a third region of the plurality of contiguous regions, the third region having an unoccupied state at the first time, wherein the third region is located adjacent to the first region and is not located adjacent to the second region.
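The following sketch illustrates one way such time-based spreading of occupied states could be computed, assuming (hypothetically) a bounded entity speed and four-connected regions; it is a sketch under those assumptions, not a definitive implementation.

```python
# Occupied states spread to adjacent regions an unobserved entity could have
# reached in the elapsed time (assumed max speed; names are hypothetical).
MAX_ENTITY_SPEED_M_S = 1.6  # assumed worst-case walking speed

def grow_occupied(occupied: set[tuple[int, int]], elapsed_s: float,
                  cell_size_m: float) -> set[tuple[int, int]]:
    """Dilate the occupied set by the number of cells an entity could cross.
    Grid-boundary clipping is omitted for brevity."""
    reach_cells = int(MAX_ENTITY_SPEED_M_S * elapsed_s / cell_size_m)
    grown, frontier = set(occupied), set(occupied)
    for _ in range(reach_cells):
        next_frontier = set()
        for x, y in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                neighbor = (x + dx, y + dy)
                if neighbor not in grown:
                    next_frontier.add(neighbor)
        grown |= next_frontier
        frontier = next_frontier
    return grown
```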
In some embodiments, the method further comprises receiving, at or before the second time, second sensor data from the one or more sensors, and identifying, based on the second sensor data, a second unobserved portion of the safety field at the second time, wherein updating, at the second time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field is based on an overlap between the first unobserved portion and the second unobserved portion. In some embodiments, updating, at the second time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field based on an overlap between the first unobserved portion and the second unobserved portion comprises assigning an unoccupied state to a first region of the plurality of contiguous regions within the first unobserved portion of the safety field having an occupied state at the first time when the first region is not within the second unobserved portion of the safety field. In some embodiments, the plurality of contiguous regions within the first unobserved portion of the safety field include a first region and a second region, the first region having an occupied state at the first time and the second region having an unoccupied state at the first time, the second region being adjacent to the first region in the first unobserved portion of the safety field, and updating, at the second time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field based on an overlap between the first unobserved portion and the second unobserved portion comprises assigning an occupied state to the second region when the second region is included within the second unobserved portion of the safety field. In some embodiments, determining one or more operating parameters for the mobile robot comprises instructing the mobile robot to move at least a portion of the mobile robot to enable the one or more sensors to sense the presence or absence of entities in the first region at the second time.
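A minimal sketch of this overlap rule follows, assuming four-connected regions and hypothetical set-based bookkeeping: regions that become observable are cleared, while occupancy spreads to neighbors that remain unobserved.

```python
# Overlap-based update of occupied cells between two sensing times.
Cell = tuple[int, int]

def update_on_overlap(occupied: set[Cell],
                      first_unobserved: set[Cell],
                      second_unobserved: set[Cell]) -> set[Cell]:
    updated: set[Cell] = set()
    for cell in occupied:
        if cell in first_unobserved and cell not in second_unobserved:
            continue  # region became observable and is sensed empty: clear it
        updated.add(cell)  # region is still unobserved: stay conservative
        x, y = cell
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor = (x + dx, y + dy)
            if neighbor in second_unobserved:
                updated.add(neighbor)  # an entity may have moved here unseen
    return updated
```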
In some embodiments, the one or more sensors include at least one sensor coupled to the mobile robot, and the second sensor data is received from the at least one sensor coupled to the mobile robot. In some embodiments, the one or more sensors include at least one sensor not coupled to the mobile robot, and the second sensor data is received from the at least one sensor not coupled to the mobile robot. In some embodiments, the at least one sensor not coupled to the mobile robot is coupled to another robot in the environment of the mobile robot. In some embodiments, the at least one sensor not coupled to the mobile robot is fixed in the environment of the mobile robot. In some embodiments, the second unobserved portion of the safety field includes a portion of the safety field not within a field of view of any of the one or more sensors at the second time. In some embodiments, at least a portion of the second unobserved portion of the safety field is within the field of view of at least one of the one or more sensors at the first time. In some embodiments, the second unobserved portion of the safety field includes a portion of the safety field in a blind spot of the one or more sensors created by one or more objects in the safety field at the second time.
In some embodiments, determining one or more operating parameters for the mobile robot comprises determining a trajectory plan for an arm of the mobile robot. In some embodiments, determining one or more operating parameters for the mobile robot comprises instructing the mobile robot to alter a speed of motion of at least a portion of the mobile robot. In some embodiments, determining one or more operating parameters for the mobile robot comprises determining the one or more operating parameters further based, at least in part, on a distance between the mobile robot and a first region of the plurality of contiguous regions within the first unobserved portion of the safety field having an occupied state at the second time.
In some embodiments, at the second time, the plurality of contiguous regions of the first unobserved portion of the safety field includes multiple regions, including the first region, having an occupied state, and the first region is a closest region of the multiple regions to the mobile robot. In some embodiments, assigning, to each region of a plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state comprises assigning an occupied state to at least one region of the plurality of contiguous regions at a boundary of the safety field within the first unobserved portion of the safety field. In some embodiments, the first unobserved portion of the safety field includes a portion of the safety field not within a field of view of any of the one or more sensors at the first time. In some embodiments, the first unobserved portion of the safety field includes a portion of the safety field in a blind spot of the one or more sensors created by one or more objects in the safety field at the first time. In some embodiments, the one or more sensors include at least one first sensor coupled to the mobile robot and at least one second sensor not coupled to the mobile robot, and the first unobserved portion of the safety field includes a portion of the safety field not observable by the at least one first sensor or the at least one second sensor.
In some embodiments, the safety field includes a restricted zone around the robot and a monitored zone located outside of the restricted zone, and the method further comprises detecting an entity located in the monitored zone that has not yet entered the restricted zone, determining whether the entity is an entity of concern, and determining the one or more operating parameters for the mobile robot based, at least in part, on whether the entity is an entity of concern. In some embodiments, the method further comprises determining whether the entity is moving toward the restricted zone, wherein determining the one or more operating parameters for the mobile robot is further based, at least in part, on whether the entity is moving toward the restricted zone.
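For illustration, the sketch below combines these checks into a single speed-selection rule; the zone radius, speeds, and approach test are hypothetical placeholders rather than values from the disclosure.

```python
# Hypothetical two-zone rule: an entity of concern in the monitored zone
# reduces speed only when it is moving toward the restricted zone.
import math

def select_speed(robot_xy, entity_xy, entity_velocity_xy, is_entity_of_concern,
                 restricted_radius_m=2.0, full_speed=1.5, reduced_speed=0.5):
    if not is_entity_of_concern:
        return full_speed  # whitelisted or benign entities do not constrain speed
    dx, dy = entity_xy[0] - robot_xy[0], entity_xy[1] - robot_xy[1]
    if math.hypot(dx, dy) <= restricted_radius_m:
        return 0.0  # entity already inside the restricted zone: stop
    # Moving toward the robot if velocity has a component opposite (dx, dy).
    approaching = (entity_velocity_xy[0] * dx + entity_velocity_xy[1] * dy) < 0
    return reduced_speed if approaching else full_speed
```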
In one aspect, the invention features a non-transitory computer-readable medium encoded with a plurality of instructions that, when executed by at least one computer processor, perform any of the methods described herein.
In one aspect, the invention features a mobile robot. The mobile robot comprises one or more sensors configured to sense first sensor data at a first time, and at least one computer processor. The at least one computer processor is programmed to perform a method of identifying, based on the first sensor data, a first unobserved portion of a safety field in an environment of the mobile robot, assigning, to each region of a plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state, updating, at a second time after the first time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field, and determining one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on the occupancy state of at least some regions of the plurality of contiguous regions within the first unobserved portion of the safety field at the second time.
In some embodiments, the at least one computer processor is further programmed to perform any of the methods described herein. In some embodiments, the one or more sensors include at least one camera. In some embodiments, the one or more sensors include at least one LIDAR sensor. In some embodiments, the mobile robot further comprises a base having a top surface, a bottom surface and a plurality of sides arranged between the top surface and the bottom surface, and a manipulator arm coupled to the top surface of the base, wherein the one or more sensors include at least one camera coupled to each side of the plurality of sides of the base.
In one aspect, the invention features a safety system for a mobile robot. The safety system comprises one or more onboard sensors coupled to the mobile robot, one or more off-robot sensors not coupled to the mobile robot, and at least one computer processor. The at least one computer processor is programmed to perform a method of identifying, based on first sensor data sensed at a first time by the one or more onboard sensors and/or the one or more off-robot sensors, a first unobserved portion of a safety field in an environment of the mobile robot, assigning, to each region of a plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state, updating, at a second time after the first time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field, and determining one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on the occupancy state of at least some regions of the plurality of contiguous regions within the first unobserved portion of the safety field at the second time.
In some embodiments, the at least one computer processor is further programmed to perform any of the methods described herein. In some embodiments, the at least one computer processor is coupled to the mobile robot. In some embodiments, the one or more off-robot sensors are coupled to another robot in the environment of the mobile robot. In some embodiments, the one or more off-robot sensors are fixed in the environment of the mobile robot.
In one aspect, the invention features a method. The method comprises identifying, based on first sensor data sensed by one or more sensors at a first time, a first unobserved portion of a safety field in an environment of a mobile robot, assigning, to each region of an occupancy grid that includes a first plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state, identifying, based on second sensor data sensed by the one or more sensors at a second time following the first time, a second unobserved portion of the safety field, updating the occupancy state of at least one region of the occupancy grid based, at least in part, on the first unobserved portion and the second unobserved portion to provide an updated occupancy grid, and determining, by a computing device, one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on the updated occupancy grid.
In some embodiments, updating the occupancy state of at least one region of the occupancy grid based, at least in part, on the first unobserved portion and the second unobserved portion comprises changing an occupancy state of a first region of the occupancy grid within the first unobserved portion to an unoccupied state when the first region is not included within the second unobserved portion. In some embodiments, updating the occupancy state of at least one region of the occupancy grid based, at least in part, on the first unobserved portion and the second unobserved portion comprises changing an occupancy state of a first region of the occupancy grid within the first unobserved portion to an occupied state when the first region is included within the second unobserved portion, wherein the first region is adjacent to a second region of the occupancy grid within the first unobserved portion, the second region having been assigned an occupied state.
In one aspect, the invention features a method. The method comprises receiving, by a computing device, an occupancy grid for a safety field of a mobile robot, the occupancy grid including at least one uncertainty region within which an entity of concern may be located, muting one or more whitelisted entities in the safety field, determining one or more unobserved regions of the occupancy grid, wherein the one or more unobserved regions are formed by the one or more muted entities and/or correspond to regions outside the field of view of one or more sensors configured to sense objects within the safety field, updating the at least one uncertainty region based on received sensor data, and determining, by the computing device, one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on a distance between the mobile robot and the updated at least one uncertainty region.
In one aspect, the invention features a method. The method comprises receiving, by a computing device, a state of a region of an environment of a mobile robot, determining a largest distance away from the mobile robot that is clear along all approach corridors within the region, muting one or more onboard sensors of the mobile robot and one or more whitelisted entities at or before the time the one or more whitelisted entities create occlusions in the region, determining a safe operating time limit and one or more operating parameters of the mobile robot based on an approach speed of an entity of concern outside of the region and the largest distance, and unmuting the one or more onboard sensors of the mobile robot when the safe operating time limit is reached or when the one or more whitelisted entities clear the region.
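The safe operating time limit implied here reduces to a simple worst-case calculation, sketched below with hypothetical names and example values.

```python
# With all approach corridors clear out to clear_distance_m, an entity
# approaching at approach_speed_m_s cannot reach the robot before this limit.
def safe_operating_time_s(clear_distance_m: float,
                          approach_speed_m_s: float) -> float:
    return clear_distance_m / approach_speed_m_s

# Example (hypothetical values): 8 m of clear corridor at an assumed 1.6 m/s
# approach speed yields a 5 s window during which sensors may remain muted.
assert safe_operating_time_s(8.0, 1.6) == 5.0
```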
The advantages of the invention, together with further advantages, may be better understood by referring to the following description taken in conjunction with the accompanying drawings. The drawings are not necessarily to scale, and emphasis is instead generally placed upon illustrating the principles of the invention.
In some conventional robot systems, a safety field for the robot is created using information from sensor data captured at a single point in time. Based on the captured sensor data, it may be determined whether an object is located within the safety field, and if so, the operation of the robot may be changed (e.g., slowed or stopped altogether) to avoid collision of the robot with the detected object. The inventors have recognized and appreciated that slowing or halting operation of a robot whenever an object is detected within a certain distance from the robot may not strictly be necessary to ensure safe operation of the robot within its environment. Rather, such an overly conservative approach may result in the robot performing tasks slower or not at all, even though safe operation of the robot may be achievable under a particular scenario. To this end, some embodiments of the present technology improve upon existing techniques for ensuring safe operation of a mobile robot in its environment by assigning, to each of a plurality of distinct contiguous regions of a safety field around the robot, an occupancy state, which indicates whether an entity is possibly present within the region. The collection of regions forms an “occupancy grid” that covers the safety field. The occupancy states of the regions in the occupancy grid can be updated over time, and one or more operations and/or operating parameters of the robot can be modified accordingly to facilitate safe operation of the robot with high confidence.
Robots can be configured to perform a number of tasks in an environment in which they are placed. Exemplary tasks may include interacting with objects and/or elements of the environment. Notably, robots are becoming popular in warehouse and logistics operations. Before robots were introduced to such spaces, many operations were performed manually. For example, a person might manually unload boxes from a truck onto one end of a conveyor belt, and a second person at the opposite end of the conveyor belt might organize those boxes onto a pallet. The pallet might then be picked up by a forklift operated by a third person, who might drive to a storage area of the warehouse and drop the pallet for a fourth person to remove the individual boxes from the pallet and place them on shelves in a storage area. Some robotic solutions have been developed to automate many of these functions. Such robots may either be specialist robots (i.e., designed to perform a single task or a small number of related tasks) or generalist robots (i.e., designed to perform a wide variety of tasks). To date, both specialist and generalist warehouse robots have been associated with significant limitations.
For example, a specialist robot may be designed to perform a single task (e.g., unloading boxes from a truck onto a conveyor belt); while such specialized robots may be efficient at performing their designated task, they may be unable to perform other related tasks. As a result, either a person or a separate robot (e.g., another specialist robot designed for a different task) may be needed to perform the next task(s) in the sequence. As such, a warehouse may need to invest in multiple specialized robots to perform a sequence of tasks, or may need to rely on a hybrid operation in which there are frequent robot-to-human or human-to-robot handoffs of objects.
In contrast, while a generalist robot may be designed to perform a wide variety of tasks (e.g., unloading, palletizing, transporting, depalletizing, and/or storing), such generalist robots may be unable to perform individual tasks with high enough efficiency or accuracy to warrant introduction into a highly streamlined warehouse operation. For example, while mounting an off-the-shelf robotic manipulator onto an off-the-shelf mobile robot might yield a system that could, in theory, accomplish many warehouse tasks, such a loosely integrated system may be incapable of performing complex or dynamic motions that require coordination between the manipulator and the mobile base, resulting in a combined system that is inefficient and inflexible.
Typical operation of such a system within a warehouse environment may include the mobile base and the manipulator operating sequentially and (partially or entirely) independently of each other. For example, the mobile base may first drive toward a stack of boxes with the manipulator powered down. Upon reaching the stack of boxes, the mobile base may come to a stop, and the manipulator may power up and begin manipulating the boxes as the base remains stationary. After the manipulation task is completed, the manipulator may again power down, and the mobile base may drive to another destination to perform the next task.
In such systems, the mobile base and the manipulator may be regarded as effectively two separate robots that have been joined together. Accordingly, a controller associated with the manipulator may not be configured to share information with, pass commands to, or receive commands from a separate controller associated with the mobile base. As such, such a poorly integrated mobile manipulator robot may be forced to operate both its manipulator and its base at suboptimal speeds or through suboptimal trajectories, as the two separate controllers struggle to work together. Additionally, while certain limitations arise from an engineering perspective, additional limitations must be imposed to comply with safety regulations. For example, if a safety regulation requires that a mobile manipulator must be able to be completely shut down within a certain period of time when a human enters a region within a certain distance of the robot, a loosely integrated mobile manipulator robot may not be able to act sufficiently quickly to ensure that both the manipulator and the mobile base (individually and in aggregate) do not threaten the human. To ensure that such loosely integrated systems operate within required safety constraints, such systems are forced to operate at even slower speeds or to execute even more conservative trajectories than the already-limited speeds and trajectories imposed by the engineering constraints. As such, the speed and efficiency of generalist robots performing tasks in warehouse environments to date have been limited.
In view of the above, a highly integrated mobile manipulator robot with system-level mechanical design and holistic control strategies between the manipulator and the mobile base may provide certain benefits in warehouse and/or logistics operations. Such an integrated mobile manipulator robot may be able to perform complex and/or dynamic motions that are unable to be achieved by conventional, loosely integrated mobile manipulator systems. As a result, this type of robot may be well suited to perform a variety of different tasks (e.g., within a warehouse environment) with speed, agility, and efficiency.
In this section, an overview of some components of one embodiment of a highly integrated mobile manipulator robot configured to perform a variety of tasks is provided to explain the interactions and interdependencies of various subsystems of the robot. Each of the various subsystems, as well as control strategies for operating the subsystems, are described in further detail in the following sections.
During operation, the perception mast of robot 20a (analogous to the perception mast 140 of robot 100) may be used to capture sensor data of the surrounding environment.
To pick some boxes within a constrained environment, the robot may need to carefully adjust the orientation of its arm to avoid contacting other boxes or the surrounding shelving. For example, in a typical “keyhole problem”, the robot may only be able to access a target box by navigating its arm through a small space or confined area (akin to a keyhole) defined by other boxes or the surrounding shelving. In such scenarios, coordination between the mobile base and the arm of the robot may be beneficial. For instance, being able to translate the base in any direction allows the robot to position itself as close as possible to the shelving, effectively extending the length of its arm (compared to conventional robots without omnidirectional drive which may be unable to navigate arbitrarily close to the shelving). Additionally, being able to translate the base backwards allows the robot to withdraw its arm from the shelving after picking the box without having to adjust joint angles (or minimizing the degree to which joint angles are adjusted), thereby enabling a simple solution to many keyhole problems.
Starting at the turntable 420, the robotic arm 430 includes a turntable offset 422, which is fixed relative to the turntable 420. A distal portion of the turntable offset 422 is rotatably coupled to a proximal portion of a first link 433 at a first joint 432. A distal portion of the first link 433 is rotatably coupled to a proximal portion of a second link 435 at a second joint 434. A distal portion of the second link 435 is rotatably coupled to a proximal portion of a third link 437 at a third joint 436. The first, second, and third joints 432, 434, and 436 are associated with first, second, and third axes 432a, 434a, and 436a, respectively.
The first, second, and third joints 432, 434, and 436 are additionally associated with first, second, and third actuators (not labeled) which are configured to rotate a link about an axis. Generally, the nth actuator is configured to rotate the nth link about the nth axis associated with the nth joint. Specifically, the first actuator is configured to rotate the first link 433 about the first axis 432a associated with the first joint 432, the second actuator is configured to rotate the second link 435 about the second axis 434a associated with the second joint 434, and the third actuator is configured to rotate the third link 437 about the third axis 436a associated with the third joint 436.
In some embodiments, a robotic arm of a highly integrated mobile manipulator robot may include a different number of degrees of freedom than the robotic arms discussed above. Additionally, a robotic arm need not be limited to a robotic arm with three pitch joints and a 3-DOF wrist. A robotic arm of a highly integrated mobile manipulator robot may include any suitable number of joints of any suitable type, whether revolute or prismatic. Revolute joints need not be oriented as pitch joints, but rather may be pitch, roll, yaw, or any other suitable type of joint.
In some embodiments, an end effector may be associated with one or more sensors. For example, a force/torque sensor may measure forces and/or torques (e.g., wrenches) applied to the end effector. Alternatively or additionally, a sensor may measure wrenches applied to a wrist of the robotic arm by the end effector (and, for example, an object grasped by the end effector) as the object is manipulated. Signals from these (or other) sensors may be used during mass estimation and/or path planning operations. In some embodiments, sensors associated with an end effector may include an integrated force/torque sensor, such as a 6-axis force/torque sensor. In some embodiments, separate sensors (e.g., separate force and torque sensors) may be employed. Some embodiments may include only force sensors (e.g., uniaxial force sensors, or multi-axis force sensors), and some embodiments may include only torque sensors. In some embodiments, an end effector may be associated with a custom sensing arrangement. For example, one or more sensors (e.g., one or more uniaxial sensors) may be arranged to enable sensing of forces and/or torques along multiple axes. An end effector (or another portion of the robotic arm) may additionally include any appropriate number or configuration of cameras, distance sensors, pressure sensors, light sensors, or any other suitable sensors, whether related to sensing characteristics of the payload or otherwise, as the disclosure is not limited in this regard.
The environment of robot 460 includes a safety field 466 within which the onboard sensors 464 may be configured to monitor for the presence of entities (e.g., humans, other robots or vehicles, environment features (e.g., walls)). As shown, safety field 466 includes a 2D plane segmented using an occupancy grid 468 having a plurality of contiguous regions 470. The regions 470 of the occupancy grid 468 may be defined to have a sufficiently small size that the safety-critical behavior of the mobile robot 460 can rely on the occupancy grid. For example, in some embodiments, the maximum size of a region of the occupancy grid may correspond to the average size of a human head, the average width of a human leg, or another measurement of a human or other entity of concern.
Each of the regions 470 may be associated with an occupancy state representing the presence or absence of a possible entity within the region, and the occupancy state may be updated over time, to enable the robot 460 to operate safely within the safety field 466 with high confidence. Combining entity detections in an occupancy grid 468 with temporal models as described herein enables one or more operations of the robot 460 and/or other automated entities within the safety field 466 (e.g., automated guided vehicles, other robots) to be modified to improve safe operation of the robot 460.
It should be appreciated that although only a portion of occupancy grid 468 is shown, the occupancy grid may include contiguous regions 470 covering the entire safety field 466.
The coordinates of the regions 470 in occupancy grid 468 may be defined with respect to any suitable reference frame.
Although the occupancy grid 468 is shown as being defined in two dimensions within a plane, in other embodiments an occupancy grid may be defined as a plurality of three-dimensional (3D) regions arranged within a volume surrounding the mobile robot.
As described herein, modification of one or more operating parameters (e.g., velocity, configuration, direction of motion, time-to-stop) of a mobile robot may be determined based, at least in part, on a location of an uncertainty region within an occupancy grid for a safety field of the mobile robot. Accordingly, to enable the mobile robot to have maximum operating flexibility, it may be desirable to increase the maximum operating space of the mobile robot within the safety field by decreasing the size of the uncertainty region during operation of the mobile robot. In some embodiments, increasing the operating flexibility of the robot 460 by reducing the size of the uncertainty region may be achieved by removing or “clearing” regions 470 from the uncertainty region 480 as additional information about entities within the safety field 466 is received.
As described above, the location of some entities (e.g., automated guided vehicle (AGV) 710) within the environment of a mobile robot may be tracked using the techniques described herein, but the entities themselves may not cause safety concerns for operation of the mobile robot. For instance, warehouses may include multiple robots, AGVs, or other vehicles configured to work individually or collaboratively to facilitate operations within the warehouse. Because such entities do not include human drivers located within the vehicle, it may remain safe for the robot 700 to operate at high speeds (e.g., maximum safe speeds) even when such entities are within the safety field of the robot 700. In such instances, known entities may be added to a “whitelist” of entities that, when detected within safety field 720, do not result in modification of one or more operations of the robot 700, as described herein.
A safety field of any suitable shape, and a corresponding occupancy grid covering the safety field, may be used in accordance with embodiments of the present technology.
Process 900 then proceeds to act 904, where a first unobserved portion of a safety field in an environment of a mobile robot is identified based on the first sensor data. For example, the first unobserved portion of the safety field may include a “blind spot” not observable by the one or more sensors caused by an object located within the safety field. Additionally or alternatively, the first unobserved portion may include a portion of the safety field that is not within the field of view of the one or more sensors at the first time. In some embodiments, the one or more sensors may include multiple sensors arranged at different locations in the environment and the first unobserved portion may be determined based on sensor data obtained from each of the multiple sensors. For instance, a first sensor of the multiple sensors may be arranged in the environment to sense entities within at least a portion of a blind spot for a second sensor of the multiple sensors at the first time.
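One way such blind spots could be computed is by marching a ray from the sensor toward each grid cell and checking for intervening obstacles; the coarse integer-grid sketch below is illustrative only, with hypothetical names, and is not the method required by any embodiment.

```python
# Cells whose line of sight from the sensor crosses an obstacle are unobserved.
Cell = tuple[int, int]

def unobserved_cells(sensor: Cell, cells: set[Cell],
                     obstacles: set[Cell]) -> set[Cell]:
    blind: set[Cell] = set()
    for cell in cells:
        dx, dy = cell[0] - sensor[0], cell[1] - sensor[1]
        steps = max(abs(dx), abs(dy))
        for i in range(1, steps):  # sample strictly between sensor and cell
            probe = (round(sensor[0] + dx * i / steps),
                     round(sensor[1] + dy * i / steps))
            if probe in obstacles:
                blind.add(cell)  # an obstacle shadows this cell
                break
    return blind
```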
Process 900 then proceeds to act 906, where each of the plurality of contiguous regions within the first unobserved portion of the safety field (e.g., each of the regions of an occupancy grid that includes the first unobserved portion) is assigned an occupancy state. For instance, as described herein, regions located outside of a blind spot region caused by an object that obstructs a portion of a field of view of the one or more sensors may be assigned an unoccupied state, at least some regions located within a blind spot region may be assigned an occupied state, and other regions located within a blind spot region may be assigned an unoccupied state. In some embodiments, each region of the occupancy grid may be assigned one of two states (e.g., occupied or unoccupied). In other embodiments, more than two states may be used. For instance, some regions that are occupied by a whitelisted entity (e.g., another robot, an AGV), a portion of the robot (e.g., the robot's manipulator arm), or an object that the robot is manipulating, may be associated with a “muted” state, which indicates the region is occupied, but should be ignored for safety calculations. Additionally, in some embodiments, the regions that fall within a blind spot region of the occupancy grid may be assigned a separate state such as “recently unoccupied,” which indicates that although the occupied status of the region has not been verified by sensor data, it is unlikely that an entity of concern is located within the region.
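As an illustration of the multi-state variant just described, the following hypothetical enumeration distinguishes muted and recently unoccupied regions from ordinary occupied and unoccupied ones; the state names and the safety rule are assumptions for the sketch.

```python
# Hypothetical richer region states (names are illustrative).
from enum import Enum

class RegionState(Enum):
    UNOCCUPIED = "unoccupied"                    # verified empty by sensor data
    OCCUPIED = "occupied"                        # may contain an entity of concern
    MUTED = "muted"                              # whitelisted occupant; ignored
    RECENTLY_UNOCCUPIED = "recently_unoccupied"  # in a blind spot, unlikely occupied

def constrains_robot(state: RegionState) -> bool:
    """In this sketch, only OCCUPIED regions limit operating parameters."""
    return state is RegionState.OCCUPIED
```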
Process 900 then proceeds to act 908, where, at a second time after the first time, the occupancy state of one or more regions of the plurality of contiguous regions within the first unobserved portion of the safety field is updated. In some embodiments, the occupancy state may be updated based, at least in part, on an elapsed time from the first time to the second time.
Process 900 then proceeds to act 910, where one or more operating parameters for the mobile robot are determined based, at least in part, on the occupancy state of at least some regions of the plurality of contiguous regions within the first unobserved portion of the safety field at the second time. For instance, as described herein, a plurality of contiguous regions in an occupancy grid assigned an occupied state may be considered as an uncertainty region within which entities of concern (e.g., humans) may be located. A distance between the uncertainty region and the mobile robot may be determined, and one or more operating parameters of the mobile robot may be modified based on the distance to facilitate safe operation of the mobile robot within its local environment.
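A minimal sketch of such a distance-to-speed mapping follows; the breakpoints and speed values are hypothetical placeholders, not values from the disclosure.

```python
# Illustrative mapping from distance-to-uncertainty-region to a speed cap.
def max_allowed_speed(distance_to_uncertainty_m: float) -> float:
    if distance_to_uncertainty_m > 5.0:
        return 2.0   # full speed: all possibly-occupied regions are far away
    if distance_to_uncertainty_m > 2.0:
        return 1.0   # reduced speed while an entity may be nearby
    if distance_to_uncertainty_m > 0.5:
        return 0.25  # creep speed close to the uncertainty region
    return 0.0       # stop: possible entity within the protective distance
```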
As described herein, regions located within an uncertainty region at a first time may be removed or “cleared” from the uncertainty region at a second time if it can be verified based on additional sensor data at the second time that no entities of concern are located in those regions. Accordingly, in some embodiments, the one or more operating parameters of the robot may be determined in act 910 to facilitate clearing of regions from the uncertainty region. For instance, when it is determined at the second time that the distance between the uncertainty region and the mobile robot is less than a threshold distance, the mobile robot may be instructed to operate differently (e.g., by moving its manipulator arm, by moving the object it is manipulating, by driving in a particular direction, etc.) to attempt to clear the portions of the uncertainty region closest to the mobile robot, thereby expanding the safe operating region of the mobile robot within its safety field. In one example, the mobile robot may be instructed to operate faster to, for example, move an object it is manipulating through a field of view of its onboard sensors quickly, thereby reducing the size of the blind spot region caused by the object within the field of view of the sensors. Such an example of speeding up operation of a robot to facilitate safety can be contrary to the operation of conventional safety systems, which may instruct a robot to slow (or shut down) its operation whenever a possible safety risk is detected within a particular distance from the robot. In some embodiments, planning algorithms (e.g., for arm trajectory planning) may use constraints about occluded portions of the safety field and/or uncertainty regions within the occluded portions to plan manipulation trajectories which avoid or reduce occluded portions of the safety field from developing, especially for long durations.
Process 1000 then proceeds to act 1008, where the one or more uncertainty regions (e.g., including regions of the occupancy grid assigned an occupied state) of the occupancy grid are updated based on received sensor data. The uncertainty region(s) may be updated in several ways. As the robot and/or the whitelisted entity moves in its environment, one or more new blind boundaries at the edge of the safety field of the robot may fall within the unobserved region. In such instances, the uncertainty region is expanded to include regions of the occupancy grid along the new blind boundary or boundaries. Additionally, portions of the uncertainty region that previously fell within an unobserved region may become observable due to, for example, movement of the robot or the whitelisted entity, or the presence of another robot having onboard sensors within the environment of the robot. In such instances, the observable portions may be removed from the uncertainty region, thereby increasing the safe operating portion of the safety field of the robot. Additionally, portions of the uncertainty region that remain within an obstructed region may be expanded toward the robot or in other possible directions within the obstructed region to account for the possibility that entities within the uncertainty region are moving.
Process 1000 then proceeds to act 1010, where one or more operating parameters of the mobile robot are set based on a distance between the mobile robot and the updated uncertainty region. Non-limiting examples of updating operating parameter(s) of a mobile robot based on distance from the mobile robot to an uncertainty region are provided herein, for example, in the discussion of act 910 of process 900.
Some embodiments of the technology described herein assign occupancy states to regions within an occupancy grid based on sensor data captured from one or more sensors located on a mobile robot. In some embodiments, sensor data may additionally or alternatively be captured from one or more sensors (e.g., a set of cameras) fixed in a world reference frame of an environment (e.g., a warehouse) to create an occupancy grid for the environment or to create a set of configurable smaller grids in areas of the environment of interest (e.g., different loading bays of a warehouse). Such off-robot sensors may be configured as an “eye in the sky” system configured to track entities of interest (e.g., human workers, robots, vehicles) in the environment and to send information about the tracked entities to mobile robots operating in the environment to ensure safe separation between the mobile robots and other tracked entities. For instance, an off-robot set of cameras may be configured to track people (or other entities) in a large warehouse that does not have full coverage of cameras. When people or other entities are identified by one or more of the cameras, the uncertainty of their location may be reset. When tracking of a particular entity is lost (e.g., because the entity is not currently within the field of view of any of the cameras), uncertainty about its location in the environment grows, and one or more mobile robots operating in the environment may be instructed to operate more conservatively (e.g., by implementing more conservative on-robot safety fields) until the tracked entities that were lost are re-acquired. Such a scenario may occur frequently in a large warehouse with many obstacles such as racking and aisles that cause occlusions within the field of view of fixed off-robot sensors in the warehouse. As described herein, such obstacles may create blind spots within an occupancy grid that covers the warehouse, making it a challenge to obtain a fully populated occupancy grid (e.g., an occupancy grid where blind spots are eliminated). In such instances, fixed sensors arranged at major junctions or travel routes within the environment may be used to identify when there is the potential for an entity of concern (e.g., a human) to be near a particular mobile robot. Additionally, a combination of on-robot and off-robot sensors may be used to more fully populate an occupancy grid using the techniques described herein.
In some embodiments, sensor data from a first mobile robot may be transmitted to a second mobile robot, and the second mobile robot may use the sensor data from the first mobile robot to, at least in part, assign occupancy states to regions of an occupancy grid associated with a safety field of the second mobile robot. To achieve this sensor data fusion, the sensor data from the first mobile robot may be transformed from a coordinate system associated with the first mobile robot (e.g., a first robot reference frame) to the coordinate system of the occupancy grid associated with the safety field of the second mobile robot (e.g., a second robot reference frame). Uncertainty in the assignment process may be introduced based on inaccuracies in the coordinate transformation. In some embodiments, such uncertainty is modeled within the occupancy grid to ensure that spatial variations due to the coordinate transformation do not result in assignment of unoccupied states to regions of the occupancy grid that could possibly include entities of concern.
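The sketch below illustrates this kind of frame-to-frame fusion under stated assumptions: a rigid 2D transform between robot frames and a known bound on the transform error, which is used to inflate the occupied footprint so that no possibly-occupied region is marked clear. All names and the error bound are hypothetical.

```python
# Fuse a detection from a first robot's frame into a second robot's grid,
# inflating occupancy by the estimated coordinate-transform error.
import math

def transform_point(point_xy, translation_xy, rotation_rad):
    """Rigid 2D transform from the first robot's frame to the second's."""
    c, s = math.cos(rotation_rad), math.sin(rotation_rad)
    x, y = point_xy
    return (c * x - s * y + translation_xy[0],
            s * x + c * y + translation_xy[1])

def occupied_cells_with_margin(point_xy, cell_size_m, transform_error_m):
    """Mark every grid cell within the transform-error radius of the point."""
    margin = math.ceil(transform_error_m / cell_size_m)
    cx = math.floor(point_xy[0] / cell_size_m)
    cy = math.floor(point_xy[1] / cell_size_m)
    return {(cx + dx, cy + dy)
            for dx in range(-margin, margin + 1)
            for dy in range(-margin, margin + 1)}
```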
In other embodiments, each of a first mobile robot and a second mobile robot operating in an environment may be associated with its own occupancy grid, and information from the multiple occupancy grids can be combined to cover larger areas and/or to access perspectives that eliminate or reduce blind spot regions within the environment. For example, first and second mobile robots working back-to-back in an aisle of a warehouse may have a respective first occupancy grid and a second occupancy grid. By combining the first and second occupancy grids, the range of the occupancy grid may effectively be doubled while also providing multiple perspectives of the environment at each point in time. As an example, when the first mobile robot occludes a region within its safety field with an object held in its end effector (a self-occlusion), the second mobile robot may be able to sense data within the “shadow” caused by the occluding object as reflected within its occupancy grid, and provide information to fill in the occupancy grid of the first mobile robot. The multi-robot concept using onboard sensors can be extended to blending a combination of on-robot and off-robot sensor data that can, for example, populate and communicate information into local-frame (e.g., robot specific, multiple-robot fused) or global-frame (e.g., for all or a portion of a warehouse) occupancy grids.
In multi-robot scenarios, where sensor data is combined from multiple mobile robots into a single occupancy grid, the robots working near each other may recognize each other with sufficient confidence to enable the robots to be treated as whitelisted entities that should not be considered for safety calculations by muting the location of such entities within the occupancy grid.
Safe localization and 3D muting functions may also be employed in such multi-robot scenarios.
An orientation may herein refer to an angular position of an object. In some instances, an orientation may refer to an amount of rotation (e.g., in degrees or radians) about three axes. In some cases, an orientation of a robotic device may refer to the orientation of the robotic device with respect to a particular reference frame, such as the ground or a surface on which it stands. An orientation may describe the angular position using Euler angles, Tait-Bryan angles (also known as yaw, pitch, and roll angles), and/or quaternions. In some instances, such as on a computer-readable medium, the orientation may be represented by an orientation matrix and/or an orientation quaternion, among other representations.
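For concreteness, the following sketch converts Tait-Bryan (yaw, pitch, roll) angles to a unit quaternion using the common Z-Y-X rotation order; this is a standard conversion shown only as an illustration of the representations mentioned above, not a required implementation.

```python
# Tait-Bryan (yaw, pitch, roll) angles to a unit quaternion (w, x, y, z),
# assuming the Z-Y-X rotation order and angles in radians.
import math

def euler_to_quaternion(yaw: float, pitch: float, roll: float):
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (cr * cp * cy + sr * sp * sy,   # w
            sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy)   # z
```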
In some scenarios, measurements from sensors on the base of the robotic device may indicate that the robotic device is oriented in such a way and/or has a linear and/or angular velocity that requires control of one or more of the articulated appendages in order to maintain balance of the robotic device. In these scenarios, however, it may be the case that the limbs of the robotic device are oriented and/or moving such that balance control is not required. For example, the body of the robotic device may be tilted to the left, and sensors measuring the body's orientation may thus indicate a need to move limbs to balance the robotic device; however, one or more limbs of the robotic device may be extended to the right, causing the robotic device to be balanced despite the sensors on the base of the robotic device indicating otherwise. The limbs of a robotic device may apply a torque on the body of the robotic device and may also affect the robotic device's center of mass. Thus, orientation and angular velocity measurements of one portion of the robotic device may be an inaccurate representation of the orientation and angular velocity of the combination of the robotic device's body and limbs (which may be referred to herein as the “aggregate” orientation and angular velocity).
In some implementations, the processing system may be configured to estimate the aggregate orientation and/or angular velocity of the entire robotic device based on the sensed orientation of the base of the robotic device and the measured joint angles. The processing system has stored thereon a relationship between the joint angles of the robotic device and the extent to which the joint angles of the robotic device affect the orientation and/or angular velocity of the base of the robotic device. The relationship between the joint angles of the robotic device and the motion of the base of the robotic device may be determined based on the kinematics and mass properties of the limbs of the robotic devices. In other words, the relationship may specify the effects that the joint angles have on the aggregate orientation and/or angular velocity of the robotic device. Additionally, the processing system may be configured to determine components of the orientation and/or angular velocity of the robotic device caused by internal motion and components of the orientation and/or angular velocity of the robotic device caused by external motion. Further, the processing system may differentiate components of the aggregate orientation in order to determine the robotic device's aggregate yaw rate, pitch rate, and roll rate (which may be collectively referred to as the “aggregate angular velocity”).
In some implementations, the robotic device may also include a control system that is configured to control the robotic device on the basis of a simplified model of the robotic device. The control system may be configured to receive the estimated aggregate orientation and/or angular velocity of the robotic device, and subsequently control one or more jointed limbs of the robotic device to behave in a certain manner (e.g., maintain the balance of the robotic device).
In some implementations, the robotic device may include force sensors that measure or estimate the external forces (e.g., the force applied by a limb of the robotic device against the ground) along with kinematic sensors to measure the orientation of the limbs of the robotic device. The processing system may be configured to determine the robotic device's angular momentum based on information measured by the sensors. The control system may be configured with a feedback-based state observer that receives the measured angular momentum and the aggregate angular velocity, and provides a reduced-noise estimate of the angular momentum of the robotic device. The state observer may also receive measurements and/or estimates of torques or forces acting on the robotic device and use them, among other information, as a basis to determine the reduced-noise estimate of the angular momentum of the robotic device.
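The following sketch illustrates one possible feedback-based observer of this kind, under the simplifying assumption that the rate of change of angular momentum equals the net measured external torque; the gain value and loop period are illustrative assumptions.

```python
# Illustrative sketch of a feedback-based state observer: integrate the
# measured external torques to predict angular momentum, then correct the
# prediction with the noisy measured angular momentum.
import numpy as np

class AngularMomentumObserver:
    def __init__(self, gain=0.2, dt=0.002):
        self.h_est = np.zeros(3)  # reduced-noise angular momentum estimate
        self.gain = gain          # observer feedback gain (0 < gain <= 1)
        self.dt = dt              # control-loop period in seconds

    def update(self, h_measured, external_torque):
        # Predict: integrate the torques/forces acting on the robotic device.
        self.h_est += self.dt * np.asarray(external_torque)
        # Correct: feed back the measured angular momentum to reduce noise.
        self.h_est += self.gain * (np.asarray(h_measured) - self.h_est)
        return self.h_est
```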
In some implementations, multiple relationships between the joint angles and their effect on the orientation and/or angular velocity of the base of the robotic device may be stored on the processing system. The processing system may select a particular relationship with which to determine the aggregate orientation and/or angular velocity based on the joint angles. For example, one relationship may be associated with a particular joint being between 0 and 90 degrees, and another relationship may be associated with the particular joint being between 91 and 180 degrees. The selected relationship may more accurately estimate the aggregate orientation of the robotic device than the other relationships.
In some implementations, the processing system may have stored thereon more than one relationship between the joint angles of the robotic device and the extent to which the joint angles of the robotic device affect the orientation and/or angular velocity of the base of the robotic device. Each relationship may correspond to one or more ranges of joint angle values (e.g., operating ranges). In some implementations, the robotic device may operate in one or more modes. A mode of operation may correspond to one or more of the joint angles being within a corresponding set of operating ranges. In these implementations, each mode of operation may correspond to a certain relationship.
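One hypothetical way to organize such relationships is sketched below, where each stored relationship corresponds to an operating range of a single joint angle and the first matching range is selected; the "knee" joint name and the stub relationships are illustrative assumptions.

```python
# Hypothetical sketch: select among stored relationships by joint-angle
# operating range, per the 0-90 / 91-180 degree example above.
def relationship_low(joint_angles):
    return "relationship for knee in [0, 90] degrees"   # stubbed model

def relationship_high(joint_angles):
    return "relationship for knee in (90, 180] degrees"  # stubbed model

OPERATING_RANGES = [
    ((0.0, 90.0), relationship_low),
    ((90.0, 180.0), relationship_high),
]

def select_relationship(knee_angle_deg):
    # Pick the relationship whose operating range contains the measured
    # joint angle; such a range may also define a mode of operation.
    for (low, high), relationship in OPERATING_RANGES:
        if low <= knee_angle_deg <= high:
            return relationship
    raise ValueError(f"joint angle {knee_angle_deg} is outside all operating ranges")
```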
The orientation of the robotic device may have multiple components (e.g., rotational angles) describing its rotation within multiple planes, and the angular velocity of the robotic device may have corresponding components describing the rates of those rotations. From the perspective of the robotic device, a rotational angle of the robotic device turned to the left or the right may be referred to herein as "yaw." A rotational angle of the robotic device tilted upwards or downwards may be referred to herein as "pitch." A rotational angle of the robotic device tilted to the left or the right may be referred to herein as "roll." Additionally, the rate of change of the yaw, pitch, and roll may be referred to herein as the "yaw rate," the "pitch rate," and the "roll rate," respectively.
As shown in FIG. 16, the robotic device 1600 may include processor(s) 1602, data storage 1604 containing program instructions 1606, controller 1608, sensor(s) 1610, power source(s) 1612, mechanical components 1614, electrical components 1616, and communication link(s) 1618.
Processor(s) 1602 may operate as one or more general-purpose processors and/or special-purpose processors (e.g., digital signal processors, application-specific integrated circuits, etc.). The processor(s) 1602 can be configured to execute computer-readable program instructions 1606 that are stored in the data storage 1604 and are executable to provide the operations of the robotic device 1600 described herein. For instance, the program instructions 1606 may be executable to provide operations of controller 1608, where the controller 1608 may be configured to cause activation and/or deactivation of the mechanical components 1614 and the electrical components 1616. The processor(s) 1602 may thereby enable the robotic device 1600 to perform various functions, including the functions described herein.
The data storage 1604 may exist as various types of storage media, such as a memory. For example, the data storage 1604 may include or take the form of one or more computer-readable storage media that can be read or accessed by processor(s) 1602. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with processor(s) 1602. In some implementations, the data storage 1604 can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other implementations, the data storage 1604 can be implemented using two or more physical devices, which may communicate electronically (e.g., via wired or wireless communication). Further, in addition to the computer-readable program instructions 1606, the data storage 1604 may include additional data such as diagnostic data, among other possibilities.
The robotic device 1600 may include at least one controller 1608, which may interface with the robotic device 1600. The controller 1608 may serve as a link between portions of the robotic device 1600, such as a link between mechanical components 1614 and/or electrical components 1616. In some instances, the controller 1608 may serve as an interface between the robotic device 1600 and another computing device. Furthermore, the controller 1608 may serve as an interface between the robotic device 1600 and a user. The controller 1608 may include various components for communicating with the robotic device 1600, including one or more joysticks or buttons, among other features. The controller 1608 may perform other operations for the robotic device 1600 as well. Other examples of controllers may exist as well.
Additionally, the robotic device 1600 includes one or more sensor(s) 1610, such as force sensors, proximity sensors, motion sensors, load sensors, position sensors, touch sensors, depth sensors, ultrasonic range sensors, and/or infrared sensors, among other possibilities. The sensor(s) 1610 may provide sensor data to the processor(s) 1602 to allow for appropriate interaction of the robotic device 1600 with the environment as well as monitoring of operation of the systems of the robotic device 1600. The sensor data may be used in evaluation of various factors for activation and deactivation of mechanical components 1614 and electrical components 1616 by controller 1608 and/or a computing system of the robotic device 1600.
The sensor(s) 1610 may provide information indicative of the environment of the robotic device for the controller 1608 and/or computing system to use to determine operations for the robotic device 1600. For example, the sensor(s) 1610 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation, etc. In an example configuration, the robotic device 1600 may include a sensor system that may include a camera, RADAR, LIDAR, time-of-flight camera, global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment of the robotic device 1600. The sensor(s) 1610 may monitor the environment in real-time and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other parameters of the environment for the robotic device 1600.
Further, the robotic device 1600 may include other sensor(s) 1610 configured to receive information indicative of the state of the robotic device 1600, including sensor(s) 1610 that may monitor the state of the various components of the robotic device 1600. The sensor(s) 1610 may measure activity of systems of the robotic device 1600 and receive information based on the operation of the various features of the robotic device 1600, such as the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic device 1600. The sensor data provided by the sensors may enable the computing system of the robotic device 1600 to determine errors in operation as well as monitor overall functioning of components of the robotic device 1600.
For example, the computing system may use sensor data to determine the stability of the robotic device 1600 during operations, as well as measurements related to power levels, communication activities, and components that require repair, among other information. As an example configuration, the robotic device 1600 may include gyroscope(s), accelerometer(s), and/or other possible sensors to provide sensor data relating to the state of operation of the robotic device. Further, the sensor(s) 1610 may monitor the current state of a function that the robotic device 1600 may currently be performing. Additionally, the sensor(s) 1610 may measure a distance between a given robotic limb of the robotic device and the center of mass of the robotic device. Other example uses for the sensor(s) 1610 may exist as well.
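As a non-limiting sketch, state monitoring of this kind might compare the latest readings against per-component thresholds; all names and threshold values below are assumptions for illustration, not values from the application.

```python
# Illustrative sketch: flag components that may require repair or attention
# based on the latest state sensor readings.
HEALTH_THRESHOLDS = {
    "battery_level": 0.15,    # minimum fraction of full charge (assumed)
    "motor_temp_c": 85.0,     # maximum motor temperature, Celsius (assumed)
    "hydraulic_psi": 2000.0,  # minimum working hydraulic pressure (assumed)
}

def check_health(sensor_data):
    # Return a list of warnings derived from the latest sensor readings.
    warnings = []
    if sensor_data["battery_level"] < HEALTH_THRESHOLDS["battery_level"]:
        warnings.append("battery low")
    if sensor_data["motor_temp_c"] > HEALTH_THRESHOLDS["motor_temp_c"]:
        warnings.append("motor overheating")
    if sensor_data["hydraulic_psi"] < HEALTH_THRESHOLDS["hydraulic_psi"]:
        warnings.append("hydraulic pressure low")
    return warnings
```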
Additionally, the robotic device 1600 may include one or more power source(s) 1612 configured to supply power to various components of the robotic device 1600. Among possible power systems, the robotic device 1600 may include a hydraulic system, an electrical system, batteries, and/or other types of power systems. As an example illustration, the robotic device 1600 may include one or more batteries configured to provide power to components via a wired and/or wireless connection. Within examples, components of the mechanical components 1614 and electrical components 1616 may each connect to a different power source or may be powered by the same power source. Components of the robotic device 1600 may connect to multiple power sources as well.
Within example configurations, any type of power source may be used to power the robotic device 1600, such as a gasoline and/or electric engine. Further, the power source(s) 1612 may charge using various types of charging, such as wired connections to an outside power source, wireless charging, combustion, or other examples. Other configurations may also be possible. Additionally, the robotic device 1600 may include a hydraulic system configured to provide power to the mechanical components 1614 using fluid power. Components of the robotic device 1600 may operate based on hydraulic fluid being transmitted throughout the hydraulic system to various hydraulic motors and hydraulic cylinders, for example. The hydraulic system of the robotic device 1600 may transfer a large amount of power through small tubes, flexible hoses, or other links between components of the robotic device 1600. Other power sources may be included within the robotic device 1600.
Mechanical components 1614 can represent hardware of the robotic device 1600 that may enable the robotic device 1600 to operate and perform physical functions. As a few examples, the robotic device 1600 may include actuator(s), extendable leg(s), arm(s), wheel(s), one or multiple structured bodies for housing the computing system or other components, and/or other mechanical components. The mechanical components 1614 may depend on the design of the robotic device 1600 and may also be based on the functions and/or tasks the robotic device 1600 may be configured to perform. As such, depending on the operation and functions of the robotic device 1600, different mechanical components 1614 may be available for the robotic device 1600 to utilize. In some examples, the robotic device 1600 may be configured to add and/or remove mechanical components 1614, which may involve assistance from a user and/or other robotic device.
The electrical components 1616 may include various components capable of processing, transferring, and/or providing electrical charge or electric signals, for example. Among possible examples, the electrical components 1616 may include electrical wires, circuitry, and/or wireless communication transmitters and receivers to enable operations of the robotic device 1600. The electrical components 1616 may interwork with the mechanical components 1614 to enable the robotic device 1600 to perform various operations. The electrical components 1616 may be configured to provide power from the power source(s) 1612 to the various mechanical components 1614, for example. Further, the robotic device 1600 may include electric motors. Other examples of electrical components 1616 may exist as well.
In some implementations, the robotic device 1600 may also include communication link(s) 1618 configured to send and/or receive information. The communication link(s) 1618 may transmit data indicating the state of the various components of the robotic device 1600. For example, information read in by sensor(s) 1610 may be transmitted via the communication link(s) 1618 to a separate device. Other diagnostic information indicating the integrity or health of the power source(s) 1612, mechanical components 1614, electrical components 1616, processor(s) 1602, data storage 1604, and/or controller 1608 may be transmitted via the communication link(s) 1618 to an external communication device.
In some implementations, the robotic device 1600 may receive information at the communication link(s) 1618 that is processed by the processor(s) 1602. The received information may indicate data that is accessible by the processor(s) 1602 during execution of the program instructions 1606, for example. Further, the received information may change aspects of the controller 1608 that may affect the behavior of the mechanical components 1614 or the electrical components 1616. In some cases, the received information indicates a query requesting a particular piece of information (e.g., the operational state of one or more of the components of the robotic device 1600), and the processor(s) 1602 may subsequently transmit that particular piece of information back out the communication link(s) 1618.
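A minimal sketch of this query/response pattern is shown below; the JSON message format, component names, and state store are illustrative stubs rather than the application's actual protocol.

```python
# Illustrative sketch: a received query names a component, and the
# processor replies with that component's operational state.
import json

COMPONENT_STATE = {
    "power_source": {"level": 0.82, "charging": False},
    "controller": {"status": "active"},
}

def handle_query(raw_message: bytes) -> bytes:
    # Parse a query received on the communication link and return the
    # requested piece of information, serialized for transmission back out.
    query = json.loads(raw_message)
    component = query.get("component")
    state = COMPONENT_STATE.get(component, {"error": "unknown component"})
    return json.dumps({"component": component, "state": state}).encode()
```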
In some cases, the communication link(s) 1618 include a wired connection. The robotic device 1600 may include one or more ports to interface the communication link(s) 1618 to an external device. The communication link(s) 1618 may include, in addition to or alternatively to the wired connection, a wireless connection. Some example wireless connections may utilize a cellular connection, such as CDMA, EVDO, or GSM/GPRS, or a 4G telecommunication standard, such as WiMAX or LTE. Alternatively or in addition, the wireless connection may utilize a Wi-Fi connection to transmit data to a wireless local area network (WLAN). In some implementations, the wireless connection may also communicate over an infrared link, radio, Bluetooth, or a near-field communication (NFC) device.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Ser. No. 63/408,893, filed Sep. 22, 2022, and entitled “SYSTEMS AND METHODS FOR SAFE OPERATION OF ROBOTS,” the entire contents of which is incorporated herein by reference.