Objects underfoot represent not only a nuisance but a safety hazard. Thousands of people are injured each year in falls at home. A floor cluttered with loose objects poses a danger, but many people have limited time in which to address the clutter in their homes. Automated cleaning robots may offer an effective solution.
However, some objects present a variety of challenges in how they may be effectively captured and contained for transport to an appropriate repository or deposition location. Objects that are proportionally large in comparison with the containment area may not be simply swept up and moved. A set of small, lightweight objects may scatter or roll with initial contact, and capturing them one at a time may present a drain on both time and robot energy. Highly deformable objects may simply slide out of or bunch away from rigid capturing mechanisms. And some objects may stack neatly with care but present an unpleasantly dispersed and disorganized pile if simply dropped and left as they land.
There is, therefore, a need for a capture, containment, transport, and deposition algorithm that accounts for the geometry and capabilities of the robot's components and potential difficulties associated with certain types of objects.
In one aspect, a method includes receiving a starting location and attributes of a target object to be lifted by a robot. The robot includes a robotic control system, a shovel, grabber pad arms with grabber pads, and at least one wheel or one track for mobility of the robot. The method also includes determining an object isolation strategy, including at least one of a reinforcement learning based strategy including rewards and penalties, a rules based strategy, and relying upon observations, current object state, and sensor data. The method also includes executing the object isolation strategy to separate the target object from an other object. The method also includes determining a pickup strategy, including an approach path for the robot to the target object, a grabbing height for initial contact with the target object, a grabbing pattern for movement of the grabber pads while capturing the target object, and a carrying position of the grabber pads and the shovel that secures the target object in a containment area on the robot for transport, the containment area including at least two of the grabber pad arms, the grabber pads, and the shovel. The method also includes executing the pickup strategy, including extending the grabber pads out and forward with respect to the grabber pad arms and raising the grabber pads to the grabbing height, approaching the target object via the approach path, coming to a stop when the target object is positioned between the grabber pads, executing the grabbing pattern to allow capture of the target object within the containment area, and confirming the target object is within the containment area. On condition that the target object is within the containment area, the method includes exerting pressure on the target object with the grabber pads to hold the target object stationary in the containment area, and raising the shovel and the grabber pads, holding the target object, to the carrying position.
On condition that the target object is not within the containment area, the method also includes altering the pickup strategy with at least one of a different reinforcement learning based strategy, a different rules based strategy, and relying upon different observations, current object state, and sensor data, and executing the altered pickup strategy.
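The pickup sequence and its containment check described above can be sketched as follows. All class, method, and field names here are illustrative assumptions, not the disclosed interface.

```python
from dataclasses import dataclass

@dataclass
class PickupStrategy:
    approach_path: list        # waypoints toward the target object
    grabbing_height: float     # initial pad height for first contact
    grabbing_pattern: str      # e.g., "sweep", "press", "batting"
    carrying_position: float   # pad/shovel pose used for transport

def execute_pickup(robot, strategy: PickupStrategy) -> bool:
    """Run one pickup attempt; return True if the object was contained."""
    robot.extend_pads(height=strategy.grabbing_height)
    robot.follow_path(strategy.approach_path)
    robot.stop()                                   # target now between pads
    robot.run_grabbing_pattern(strategy.grabbing_pattern)
    if robot.object_in_containment_area():
        robot.apply_pad_pressure()                 # hold object stationary
        robot.raise_to(strategy.carrying_position)
        return True
    return False                                   # caller alters the strategy
```

A `False` return corresponds to the failure branch above, in which an altered pickup strategy is determined and executed.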
In one aspect, a robotic system includes a robot including a shovel, grabber pad arms with grabber pads, at least one wheel or one track for mobility of the robot, a processor, and a memory storing instructions that, when executed by the processor, allow operation and control of the robot. The robotic system also includes a base station. The robotic system also includes a plurality of bins storing objects. The robotic system also includes a robotic control system. The robotic system also includes logic that allows the robot and robotic system to perform the disclosed actions.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
Embodiments of a robotic system are disclosed that operate a robot to navigate an environment using cameras to map the type, size and location of toys, clothing, obstacles and other objects. The robot comprises a neural network to determine the type, size and location of objects based on input from a sensing system, such as images from a forward camera, a rear camera, forward and rear left/right stereo cameras, or other camera configurations, as well as data from inertial measurement unit (IMU), lidar, odometry, and actuator force feedback sensors. The robot chooses a specific object to pick up, performs path planning, and navigates to a point adjacent and facing the target object. Actuated grabber pad arms move other objects out of the way and maneuver grabber pads to move the target object onto a shovel to be carried. The shovel tilts up slightly and, if needed, grabber pads may close in front to keep objects in place, while the robot navigates to the next location in the planned path, such as the deposition destination.
In some embodiments the system may include a robotic arm to reach and grasp elevated objects and move them down to the shovel. A companion “portable elevator” robot may also be utilized in some embodiments to lift the main robot up onto countertops, tables, or other elevated surfaces, and then lower it back down onto the floor. Some embodiments may utilize an up/down vertical lift (e.g., a scissor lift) to change the height of the shovel when dropping items into a container, shelf, or other tall or elevated location.
Some embodiments may also utilize one or more of the following components:
The robotic system may be utilized for automatic organization of surfaces where items left on the surface are binned automatically into containers on a regular schedule. In one specific embodiment, the system may be utilized to automatically neaten a children's play area (e.g., in a home, school, or business) where toys and/or other items are automatically returned to containers specific to different types of objects, after the children are done playing. In other specific embodiments, the system may be utilized to automatically pick clothing up off the floor and organize the clothing into laundry basket(s) for washing, or to automatically pick up garbage off the floor and place it into a garbage bin or recycling bin(s), e.g., by type (plastic, cardboard, glass). Generally the system may be deployed to efficiently pick up a wide variety of different objects from surfaces and may learn to pick up new types of objects.
Some objects have attributes making them difficult to maneuver and carry using the grabber pads, grabber pad arms, and shovel. These difficulties may be overcome by following an algorithm that specifically accounts for the attributes from which the difficulties arise. For example, objects too large to fit completely within the shovel may be secured partially within the shovel by positioning the grabber pads above the object's center of gravity and lowering the grabber pad arms slightly, causing the grabber pads to exert a slight downward pressure and hold the object securely within the shovel, or even against the shovel bottom or edges. Small, light, and easily scattered objects such as plastic construction blocks or marbles may be dispersed if swept too quickly with the grabber pads. Alternatively, the pads may contact such objects at a height where direct, constant pressure presses the objects firmly against the floor rather than sweeping them along it. In such cases, a reduced force may be applied initially and then increased as the objects begin to move, or the grabber pads may employ a series of gentle batting motions that impart a horizontal force to move the objects while avoiding the downward force that would increase their friction with the floor and prevent their motion. While many objects may simply be dropped from the shovel at their destination, such as into an assigned bin, a class of flat, stackable objects such as books, CDs, DVDs, and narrow boxes may be easier and tidier to place by being raised above previously stacked objects and maneuvered out of the shovel by the grabber pads. An algorithm for handling objects such as these is disclosed herein.
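As a sketch, the attribute-to-tactic mapping described above might look like the following. The attribute flags and tactic names are hypothetical, not part of the disclosed algorithm.

```python
def choose_handling_tactic(obj: dict) -> str:
    """Map object attributes to a handling tactic (illustrative)."""
    if obj.get("oversized"):              # too large to fit in the shovel
        return "pads_above_center_of_gravity"
    if obj.get("small_and_scatterable"):  # e.g., blocks, marbles
        return "gentle_batting"
    if obj.get("deformable"):             # e.g., clothing
        return "slow_sweep"
    if obj.get("stackable"):              # e.g., books, CDs, narrow boxes
        return "raise_and_place"
    return "default_sweep"
```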
The chassis 102 may support and contain the other components of the robot 100. The mobility system 104 may comprise wheels as indicated, as well as caterpillar tracks, conveyor belts, etc., as is well understood in the art. The mobility system 104 may further comprise motors, servos, or other sources of rotational or kinetic energy to impel the robot 100 along its desired paths. Mobility system 104 components may be mounted on the chassis 102 for the purpose of moving the entire robot without impeding or inhibiting the range of motion needed by the capture and containment system 108. Elements of a sensing system 106, such as cameras, lidar sensors, or other components, may be mounted on the chassis 102 in positions giving the robot 100 clear lines of sight around its environment in at least some configurations of the chassis 102, shovel 110, grabber pad 116, and grabber pad arm 118 with respect to each other.
The chassis 102 may house and protect all or portions of the robotic control system 1900, (portions of which may also be accessed via connection to a cloud server) comprising in some embodiments a processor, memory, and connections to the mobility system 104, sensing system 106, and capture and containment system 108. The chassis 102 may contain other electronic components such as batteries, wireless communication devices, etc., as is well understood in the art of robotics. The robotic control system 1900 may function as described in greater detail with respect to
The capture and containment system 108 may comprise a shovel 110, a shovel arm 112, a shovel arm pivot point 114, a grabber pad 116, a grabber pad arm 118, a pad pivot point 120, and a pad arm pivot point 122. In some embodiments, the capture and containment system 108 may include two grabber pad arms 118, grabber pads 116, and their pivot points. In other embodiments, grabber pads 116 may attach directly to the shovel 110, without grabber pad arms 118. Such embodiments are illustrated later in this disclosure.
The geometry of the shovel 110 and the disposition of the grabber pads 116 and grabber pad arms 118 with respect to the shovel 110 may describe a containment area, illustrated more clearly in
The point of connection shown between the shovel arms and grabber pad arms is an exemplary position and not intended to limit the physical location of such points of connection. Such connections may be made in various locations as appropriate to the construction of the chassis and arms, and the applications of intended use.
In some embodiments, gripping surfaces may be configured on the sides of the grabber pads 116 facing in toward objects to be lifted. These gripping surfaces may provide cushion, grit, elasticity, or some other feature that increases friction between the grabber pads 116 and objects to be captured and contained. In some embodiments, the grabber pad 116 may include suction cups in order to better grasp objects having smooth, flat surfaces. In some embodiments, the grabber pads 116 may be configured with sweeping bristles. These sweeping bristles may assist in moving small objects from the floor up onto the shovel 110. In some embodiments, the sweeping bristles may angle down and inward from the grabber pads 116, such that, when the grabber pads 116 sweep objects toward the shovel 110, the sweeping bristles form a ramp, allowing the foremost bristles to slide beneath the object, and direct the object upward toward the grabber pads 116, facilitating capture of the object within the shovel and reducing a tendency of the object to be pressed against the floor, increasing its friction and making it more difficult to move.
In one embodiment, the mobility system 104 may comprise a right front wheel 136, a left front wheel 138, a right rear wheel 140, and a left rear wheel 142. The robot 100 may have front wheel drive, where right front wheel 136 and left front wheel 138 are actively driven by one or more actuators or motors, while the right rear wheel 140 and left rear wheel 142 spin on an axle passively while supporting the rear portion of the chassis 102. In another embodiment, the robot 100 may have rear wheel drive, where the right rear wheel 140 and left rear wheel 142 are actuated and the front wheels turn passively. In another embodiment, each wheel may be actively actuated by separate motors or actuators.
The sensing system 106 may further comprise cameras 124 such as the front cameras 126 and rear cameras 128, light detecting and ranging (LIDAR) sensors such as lidar sensors 130, and inertial measurement unit (IMU) sensors, such as IMU sensors 132. In some embodiments, front camera 126 may include the front right camera 144 and front left camera 146. In some embodiments, rear camera 128 may include the rear left camera 148 and rear right camera 150.
Additional embodiments of the robot that may be used to perform the disclosed algorithms are illustrated in
Pad arm pivot points 122, pad pivot points 120, shovel arm pivot points 114 and shovel pivot points 502 (as shown in
The carrying position, as illustrated in
The point of connection shown between the shovel arms 112/grabber pad arms 118 and the chassis 102 is an exemplary position and not intended to limit the physical location of this point of connection. Such connection may be made in various locations as appropriate to the construction of the chassis 102 and arms, and the applications of intended use.
The different points of connection 402 between shovel arm and chassis and grabber pad arms and chassis shown are exemplary positions and not intended to limit the physical locations of these points of connection. Such connections may be made in various locations as appropriate to the construction of the chassis and arms, and the applications of intended use.
The robot 100 may be configured with a shovel pivot point 502 where the shovel 110 connects to the shovel arm 112. The shovel pivot point 502 may allow the shovel 110 to be tilted forward and down while the shovel arm 112 is raised, allowing objects in the containment area 210 to slide out and be deposited in an area to the front 202 of the robot 100.
The elevated back shovel arm pivot point 602 may connect to a split shovel arm 604 in order to raise and lower the shovel 110. This configuration may allow the front camera 126 to capture images without obstruction from the shovel arm. Grooves or slots within the chassis 102 of the robot 600 may allow the portions of the split shovel arm 604 to move unimpeded by the dimensions of the chassis 102.
The grabber pad arms may comprise a right telescoping grabber pad arm 606 and a left telescoping grabber pad arm 608. In this manner, the grabber pad arms may extend (increase in length) and retract (decrease in length). This motion may be generated by linear actuators 610 configured as part of the grabber pad arms. Wrist actuators 612 may be positioned at the pad pivot points 120, allowing the grabber pads to pivot and push objects into the shovel 110.
The mobility system in one embodiment may comprise a right front drive wheel 614, a left front drive wheel 616, and a rear caster 618. The front drive wheels may provide the motive force that allows the robot 600 to navigate its environment, while the rear caster 618 may provide support to the rear portion of the robot 600 without limiting its range of motion. The right front drive wheel 614 and left front drive wheel 616 may be independently actuated, allowing the robot 600 to turn in place as well as while traversing a floor.
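The independently actuated drive wheels imply standard differential-drive kinematics. A minimal sketch, assuming a simple rigid-body model and an illustrative `wheel_base` parameter:

```python
def wheel_speeds(v: float, omega: float, wheel_base: float):
    """Convert a body linear velocity v (m/s) and turn rate omega (rad/s)
    into (left, right) wheel speeds; omega > 0 turns the robot left."""
    v_right = v + omega * wheel_base / 2.0
    v_left = v - omega * wheel_base / 2.0
    return v_left, v_right
```

Setting `v` to zero with a nonzero `omega` yields equal and opposite wheel speeds, which is the turn-in-place behavior described above.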
The single grabber pad 702 may be raised and lowered by the grabber pad arms, and in addition may be extended and retracted through the action of a single telescoping grabber pad arm 704 impelled by a linear actuator 706. Bearings 708 at an opposite pad arm pivot point 710 and at a sliding joint 712 in the opposite grabber pad arm 714 may allow the force of the linear actuator 706, transferred through the single grabber pad 702, to allow symmetry of motion in both grabber pad arms with one arm being actively moved. In another embodiment, the single grabber pad 702 may be positioned by synchronized actuation of a right telescoping grabber pad arm 606 and a left telescoping grabber pad arm 608 as illustrated in
The elevated back shovel arm pivot point 802 may connect to a single shovel arm 804 in order to raise and lower the shovel 110. This configuration may allow the front camera 126 to capture images without obstruction from the shovel arm. A groove or slot within the chassis 102 of the robot 800 may allow the single shovel arm 804 to move unimpeded by the dimensions of the chassis 102.
The single telescoping shovel arm 902 may be able to move the shovel 110 away from and toward the chassis 102 by extension and retraction powered by a linear actuator 904.
Rather than connecting to the chassis 102 as seen in other embodiments disclosed herein, the shovel-mounted grabber pad arms 1002 may connect to shovel-mounted pad arm pivot points 1004 positioned on the shovel 110. Actuators at the shovel-mounted pad arm pivot points 1004 may allow the shovel-mounted grabber pad arms 1002 to raise and lower with respect to the shovel 110, in addition to being raised and lowered along with the shovel 110.
Rather than connecting to grabber pad arms, the shovel-mounted grabber pads 1102 may connect to the shovel 110 at shovel-mounted pad pivot points 1104. Wrist actuators at the shovel-mounted pad pivot points 1104 may allow the shovel-mounted grabber pads 1102 to pivot into and out of the shovel 110 in order to move objects into the shovel 110.
The robot 1200 may have a split shovel arm 1202 that connects to the chassis 102 at two elevated back shovel arm pivot points 1204. Actuators at each elevated back shovel arm pivot point 1204 may be actuated in synchronization to raise and lower the shovel 110.
The shovel 110 may connect to the split shovel arm 1202 at a shovel pivot point 1206. A shovel pivot actuator 1208 at the shovel pivot point 1206 may allow the shovel 110 to be raised by the split shovel arm 1202 and tilted forward and down into a front drop position 500 such as was illustrated in
These tracks 1302 may improve the mobility and stability of the robot 1300 on some surfaces. A left and right track 1302 may each be separately actuated to allow the robot 1300 to turn while traversing or while remaining in place.
The single grabber pad arm 1402 may connect to the chassis 102 at a single pad arm pivot point 1404, allowing the single grabber pad arm 1402 to move with respect to the robot 1400. The single grabber pad arm 1402 may have a single grabber pad 1406 connected to the single grabber pad arm 1402 at a single pad pivot point 1408, allowing the single grabber pad 1406 to move with respect to the single grabber pad arm 1402. Servos, DC motors, or other actuators at the single pad arm pivot point 1404 and single pad pivot point 1408 may impel the action of the single grabber pad arm 1402 and single grabber pad 1406 to maneuver objects into the shovel 110.
The features of a robot illustrated with respect to
Input devices 1904 (e.g., of a robot or companion device such as a mobile phone or personal computer) comprise transducers that convert physical phenomena into machine-internal signals, typically electrical, optical, or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range, but also potentially in the infrared or optical range. Examples of input devices 1904 are contact sensors, which respond to touch or physical pressure from an object or proximity of an object to a surface; mice, which respond to motion through space or across a plane; microphones, which convert vibrations in the medium (typically air) into device signals; and scanners, which convert optical patterns on two- or three-dimensional objects into device signals. The signals from the input devices 1904 are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to the memory 1906.
The memory 1906 is typically what is known as a first or second level memory device, providing for storage (via configuration of matter or states of matter) of signals received from the input devices 1904, instructions and information for controlling operation of the central processing unit or CPU 1902, and signals from the storage devices 1910. The memory 1906 and/or the storage devices 1910 may store computer-executable instructions, thus forming logic 1914 that, when applied to and executed by the CPU 1902, implements embodiments of the processes disclosed herein. Logic 1914 may include portions of a computer program, along with configuration data, that are run by the CPU 1902 or another processor. Logic 1914 may include one or more machine learning models 1916 used to perform the disclosed actions. In one embodiment, portions of the logic 1914 may also reside on a mobile or desktop computing device accessible by a user to facilitate direct user control of the robot.
Information stored in the memory 1906 is typically directly accessible to the CPU 1902 of the device. Signals input to the device cause the reconfiguration of the internal material/energy state of the memory 1906, creating in essence a new machine configuration, influencing the behavior of the robotic control system 1900 by configuring the CPU 1902 with control signals (instructions) and data provided in conjunction with the control signals.
Second or third level storage devices 1910 may provide a slower but higher capacity machine memory capability. Examples of storage devices 1910 are hard disks, optical disks, large capacity flash memories or other non-volatile memory technologies, and magnetic memories.
In one embodiment, memory 1906 may include virtual storage accessible through connection with a cloud server using the network interface 1912, as described below. In such embodiments, some or all of the logic 1914 may be stored and processed remotely.
The CPU 1902 may cause the configuration of the memory 1906 to be altered by signals from the storage devices 1910. In other words, the CPU 1902 may cause data and instructions to be read from the storage devices 1910 into the memory 1906, from which they may then influence the operations of the CPU 1902 as instructions and data signals, and from which they may also be provided to the output devices 1908. The CPU 1902 may alter the content of the memory 1906 by signaling to a machine interface of the memory 1906 to alter its internal configuration, and may then convey signals to the storage devices 1910 to alter their material internal configuration. In other words, data and instructions may be backed up from the memory 1906, which is often volatile, to the storage devices 1910, which are often non-volatile.
Output devices 1908 are transducers that convert signals received from the memory 1906 into physical phenomena such as vibrations in the air, patterns of light on a machine display, mechanical vibrations (i.e., haptic devices), or patterns of ink or other materials (i.e., printers and 3-D printers).
The network interface 1912 receives signals from the memory 1906 and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface 1912 also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory 1906. The network interface 1912 may allow a robot to communicate with a cloud server, a mobile device, other robots, and other network-enabled devices.
The robot 100 as previously described includes a sensing system 106. This sensing system 106 may include at least one of cameras 124, IMU sensors 132, lidar sensor 130, odometry 2004, and actuator force feedback sensor 2006. These sensors may capture data describing the environment 2002 around the robot 100.
Image data 2008 from the cameras 124 may be used for object detection and classification 2010. Object detection and classification 2010 may be performed by algorithms and models configured within the robotic control system 1900 of the robot 100. In this manner, the characteristics and types of objects in the environment 2002 may be determined.
Image data 2008, object detection and classification 2010 data, and other sensor data 2012 may be used for a global/local map update 2014. The global and/or local map may be stored by the robot 100 and may represent its knowledge of the dimensions and objects within its decluttering environment 2002. This map may be used in navigation and strategy determination associated with decluttering tasks.
The robot may use a combination of the cameras 124, the lidar sensor 130, and the other sensors to maintain a global or local area map of the environment and to localize itself within that map. Additionally, the robot may perform object detection and object classification and may generate visual re-identification fingerprints for each object. The robot may utilize stereo cameras along with a machine learning/neural network software architecture (e.g., a semi-supervised or supervised convolutional neural network) to efficiently classify the type, size, and location of different objects on a map of the environment.
The robot may determine the relative distance and angle to each object. The distance and angle may then be used to localize objects on the global or local area map. The robot may utilize both forward and backward facing cameras to scan both to the front and to the rear of the robot.
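Localizing an object on the global or local area map from its relative distance and angle reduces to a polar-to-Cartesian transform in the robot's pose frame. A hedged sketch; the pose representation and sign conventions are assumptions:

```python
import math

def object_to_map(robot_x: float, robot_y: float, robot_heading: float,
                  distance: float, angle: float):
    """Project a relative (distance, angle) observation into global map
    coordinates; angle is measured from the robot's heading, positive left."""
    theta = robot_heading + angle
    return (robot_x + distance * math.cos(theta),
            robot_y + distance * math.sin(theta))
```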
Image data 2008, object detection and classification 2010 data, other sensor data 2012, and global/local map update 2014 data may be stored as observations, current robot state, current object state, and sensor data 2016. The observations, current robot state, current object state, and sensor data 2016 may be used by the robotic control system 1900 of the robot 100 in determining navigation paths and task strategies.
According to some examples, the method includes determining an object isolation strategy at block 2104. For example, the robotic control system 1900 illustrated in
In some cases, a valid isolation strategy may not exist. For example, the robotic control system 1900 illustrated in
If there is a valid isolation strategy determined at decision block 2106, the robot 100 such as that introduced with respect to
Rules based strategies may use conditional logic to determine the next action based on observations, current robot state, current object state, and sensor data 2016 such as are developed in
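A minimal sketch of such conditional logic, with hypothetical state fields and action names:

```python
def next_isolation_action(state: dict) -> str:
    """Select the next isolation action from simple conditional rules."""
    if state["target_near_wall"]:
        return "drag_target_away_from_wall"     # free the target first
    if state["neighbors_within_pad_width"] > 0:
        return "push_neighbor_aside"            # clear adjacent objects
    return "isolation_complete"                 # target is now isolated
```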
According to some examples, the method includes determining whether or not the isolation succeeded at decision block 2110. For example, the robotic control system 1900 illustrated in
If the target object(s) were successfully isolated, the method then includes determining a pickup strategy at block 2112. For example, the robotic control system 1900 illustrated in
In some cases, a valid pickup strategy may not exist. For example, the robotic control system 1900 illustrated in
If there is a valid pickup strategy determined at decision block 2114, the robot 100 such as that introduced with respect to
According to some examples, the method includes determining whether or not the target object(s) were picked up at decision block 2118. For example, the robotic control system 1900 illustrated in
If the pickup strategy fails, the target object(s) may be marked as failed to pick up at block 2120, as previously described. If the target object(s) were successfully picked up, the method includes navigating to a drop location at block 2122. For example, the robot 100 such as that introduced with respect to
According to some examples, the method includes determining a drop strategy at block 2124. For example, the robotic control system 1900 illustrated in
Object drop strategies may involve navigating with a rear camera if attempting a back drop, or with the front camera if attempting a forward drop.
According to some examples, the method includes executing the drop strategy at block 2126. For example, the robot 100 such as that introduced with respect to
Strategies such as the isolation strategy, pickup strategy, and drop strategy referenced above may be simple strategies, or may incorporate rewards and collision avoidance elements. These strategies may follow general approaches such as the strategy steps for isolation strategy, pickup strategy, and drop strategy 2200 illustrated in
In some embodiments, object isolation strategies may include:
In some embodiments, pickup strategies may include:
In some embodiments, drop strategies may include:
In one embodiment, strategies may incorporate a reward or penalty 2212 in determining action(s) from a policy at block 2202. These rewards or penalties 2212 may primarily be used for training the reinforcement learning model and, in some embodiments, may not apply to ongoing operation of the robot. Training the reinforcement learning model may be performed using simulations or by recording the model input/output/rewards/penalties during robot operation. Recorded data may be used to train reinforcement learning models to choose actions that maximize rewards and minimize penalties. In some embodiments, rewards or penalties 2212 for object pickup using reinforcement learning may include:
In some embodiments, rewards or penalties 2212 for object isolation (e.g., moving target object(s) away from a wall to the right) using reinforcement learning may include:
In some embodiments, rewards or penalties 2212 for object dropping using reinforcement learning may include:
In at least one embodiment, techniques described herein may use a reinforcement learning approach where the problem is modeled as a Markov decision process (MDP) represented as a tuple (S, O, A, P, r, γ), where S is the set of states in the environment, O is the set of observations, A is the set of actions, P: S×A×S→[0,1] is the state transition probability function, r: S×A→ℝ is the reward function, and γ is a discount factor.
In at least one embodiment, the goal of training may be to learn a deterministic policy π: O→A such that taking action at=π(ot) at time t maximizes the sum of discounted future rewards from state st:
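The sum of discounted future rewards referenced above is conventionally written as the return (a standard formulation, supplied here for completeness):

```latex
R_t = \sum_{i=0}^{\infty} \gamma^i \, r(s_{t+i}, a_{t+i})
```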
In at least one embodiment, after taking action at, the environment transitions from state st to state st+1 by sampling from P. In at least one embodiment, the quality of taking action at in state st is measured by Q(st, at)=𝔼[Rt|st, at], known as the Q-function.
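The Q-function above satisfies the standard one-step Bellman relation, in which the value of an action is its immediate reward plus the discounted value of the next state under the policy. A minimal sketch of the corresponding bootstrapped target (the function name is illustrative):

```python
def q_target(reward: float, gamma: float, q_next: float) -> float:
    """One-step bootstrapped estimate of Q(s_t, a_t):
    reward r(s_t, a_t) plus the discounted Q-value of the next state
    under the current policy."""
    return reward + gamma * q_next
```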
In one embodiment, data from a movement collision avoidance system 2214 may be used in determining action(s) from a policy at block 2202. Each strategy may have an associated list of available actions which it may consider. A strategy may use the movement collision avoidance system to determine the range of motion for each action involved in executing the strategy. For example, the movement collision avoidance system may be used to see if the shovel may be lowered to the ground without hitting the grabber pad arms or grabber pads (if they are closed under the shovel), an obstacle such as a nearby wall, or an object (like a ball) that may have rolled under the shovel.
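One way such a range-of-motion query might be realized, with the geometry reduced to blocking angles for brevity (this representation is an assumption, not the disclosed implementation):

```python
def max_lower_angle(desired: float, blocking_angles: list) -> float:
    """Clamp a desired shovel-lowering angle to the largest value that
    stays clear of every obstacle's blocking angle (e.g., closed grabber
    pads under the shovel, a nearby wall, or an object under the shovel)."""
    limit = min(blocking_angles, default=desired)
    return min(desired, limit)
```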
According to some examples, the method includes executing action(s) at block 2204. For example, the robot 100 such as that introduced with respect to
According to some examples, the method includes checking progress toward a goal at block 2206. For example, the robotic control system 1900 illustrated in
Examples of pre-defined composite actions may include:
At block 2308, the process for determining an action from a policy 2300 may take the list of available actions 2306 determined at block 2304, and may determine a range of motion 2312 for each action. The range of motion 2312 may be determined based on the observations, current robot state, current object state, and sensor data 2016 available to the robot control system. Action types 2310 may also be indicated to the movement collision avoidance system 2214, and the movement collision avoidance system 2214 may determine the range of motion 2312.
Block 2308 of process for determining an action from a policy 2300 may determine an observations list 2314 based on the ranges of motion 2312 determined. An example observations list 2314 may include:
At block 2316, a reinforcement learning model may be run based on the observations list 2314. The reinforcement learning model may return action(s) 2318 appropriate for the strategy the robot 100 is attempting to complete based on the policy involved.
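The querying of a trained model with an observations list may be sketched as a thin selection step; here the policy is any callable mapping observations to per-action scores, and the names are hypothetical:

```python
def select_action(policy, observations):
    """Run a trained policy (a callable returning a dict of action
    scores for the given observations) and pick the best available action."""
    scores = policy(observations)
    return max(scores, key=scores.get)
```

For example, a toy policy scoring "lower_shovel" at 0.9 and "open_pads" at 0.2 would yield the action "lower_shovel".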
In block 2402, the robot may detect the destination where an object carried by the robot is intended to be deposited. In block 2404, the robot may determine a destination approach path to the destination. This path may be determined so as to avoid obstacles in the vicinity of the destination. In some embodiments, the robot may perform additional navigation steps to push objects out of and away from the destination approach path. The robot may also determine an object deposition pattern, wherein the object deposition pattern is one of at least a placing pattern and a dropping pattern. Some neatly stackable objects such as books, other media, narrow boxes, etc., may be most neatly decluttered by stacking them carefully. Other objects may not be neatly stackable, but may be easy to deposit by dropping into a bin. Based on object attributes, the robot may determine which object deposition pattern is most appropriate to the object.
In block 2406, the robot may approach the destination via the destination approach path. How the robot navigates the destination approach path may be determined based on the object deposition pattern. If the object being carried is to be dropped over the back of the robot's chassis, the robot may traverse the destination approach path in reverse, coming to a stop with the back of the chassis nearest the destination. Alternatively, for objects to be stacked or placed in front of the shovel, i.e., at the area of the shovel that is opposite the chassis, the robot may travel forward along the destination approach path so as to bring the shovel nearest the destination.
At decision block 2408, the robot may proceed in one of at least two ways, depending on whether the object is to be placed or dropped. If the object deposition pattern is intended to be a placing pattern, the robot may proceed to block 2410. If the object deposition pattern is intended to be a dropping pattern, the robot may proceed to block 2416.
For objects to be placed via the placing pattern, the robot may come to a stop with the destination in front of the shovel and the grabber pads at block 2410. In block 2412, the robot may lower the shovel and the grabber pads to a deposition height. For example, if depositing a book on an existing stack of books, the deposition height may be slightly above the top of the highest book in the stack, such that the book may be placed without disrupting the stack or being dropped from a height at which it might have enough momentum to slide off the stack or destabilize the stack. Finally, at block 2414, the robot may use its grabber pads to push the object out of the containment area and onto the destination. In one embodiment, the shovel may be tilted forward to drop objects, with or without the assistance of the grabber pads pushing the objects out from the shovel.
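The deposition-height calculation for stacking may be sketched as a simple rule; the clearance value and function name are assumptions chosen for illustration:

```python
def deposition_height(stack_heights, clearance=0.02):
    """Target height for placing onto an existing stack: slightly above
    the tallest item, so the object settles without toppling the stack
    or gaining enough momentum to slide off."""
    top = max(stack_heights, default=0.0)
    return top + clearance
```

For an empty destination (no existing stack), the object is placed just above the floor.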
If in decision block 2408 the robot determines that it will proceed with an object deposition pattern that is a dropping pattern, the robot may continue to block 2416. At block 2416, the robot may come to a stop with the destination behind the shovel and the grabber pads, and by virtue of this, behind the chassis for a robot such as the one illustrated beginning in
The disclosed algorithm may comprise a capture process 2500 as illustrated in
The capture process 2500 may begin in block 2502 where the robot detects a starting location and attributes of an object to be lifted. Starting location may be determined relative to a learned map of landmarks within a room the robot is programmed to declutter. Such a map may be stored in memory within the electrical systems of the robot. These systems are described in greater detail with regard to
In block 2504, the robot may determine an approach path to the starting location. The approach path may take into account the geometry of the surrounding space, obstacles detected around the object, and how components of the robot may be configured as the robot approaches the object. The robot may further determine a grabbing height for initial contact with the object. This grabbing height may take into account an estimated center of gravity for the object in order for the grabber pads to move the object with the lowest chance of slipping off, under, or around the object, or deflecting the object in some direction other than into the shovel. The robot may determine a grabbing pattern for movement of the grabber pads during object capture, such that objects may be contacted from a direction and with a force applied in intervals optimized to direct and impel the object into the shovel. Finally, the robot may determine a carrying position of the grabber pads and a shovel that secures the object in a containment area for transport after the object is captured. This position may take into account attributes such as the dimensions of the object, its weight, and its center of gravity.
In block 2506, the robot may extend its grabber pads out and forward with respect to the grabber pad arms and raise the grabber pads to the grabbing height. This may allow the robot to approach the object as nearly as possible without having to leave room for this extension after the approach. Alternately, the robot may perform some portion of the approach with arms folded in close to the chassis and shovel to prevent impacting obstacles along the approach path. In some embodiments, the robot may first navigate the approach path and deploy arms and shovel to clear objects out of and away from the approach path. In block 2508, the robot may finally approach the object via the approach path, coming to a stop when the object is positioned between the grabber pads.
In block 2510, the robot may execute the grabbing pattern determined in block 2504 to capture the object within the containment area. The containment area may be an area roughly described by the dimensions of the shovel and the disposition of the grabber pad arms with respect to the shovel. It may be understood to be an area in which the objects to be transported may reside during transit with minimal chances of shifting or being dislodged or dropped from the shovel and grabber pad arms. In decision block 2512, the robot may confirm that the object is within the containment area. If the object is within the containment area, the robot may proceed to block 2514.
In block 2514, the robot may exert a light pressure on the object with the grabber pads to hold the object stationary in the containment area. This pressure may be downward in some embodiments to hold an object extending above the top of the shovel down against the sides and surface of the shovel. In other embodiments this pressure may be horizontally exerted to hold an object within the shovel against the back of the shovel. In some embodiments, pressure may be against the bottom of the shovel in order to prevent a gap from forming that may allow objects to slide out of the front of the shovel.
In block 2516, the robot may raise the shovel and the grabber pads to the carrying position determined in block 2504. The robot may then at block 2518 carry the object to a destination. The robot may follow a transitional path between the starting location and a destination where the object will be deposited. To deposit the object at the destination, the robot may follow the deposition process 2400 illustrated in
If at decision block 2512 the object is not detected within the containment area, or is determined to be partially or precariously situated within the containment area, the robot may at block 2520 extend the grabber pads out of the shovel and forward with respect to the grabber pad arms and return the grabber pads to the grabbing height. The robot may then return to block 2510. In some embodiments, the robot may at block 2522 back away from the object if simply releasing and reattempting to capture the object is not feasible. This may occur if the object has been repositioned or moved by the initial attempt to capture it. In block 2524, the robot may re-determine the approach path to the object. The robot may then return to block 2508.
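The capture loop of blocks 2508 through 2524 may be sketched as follows. The robot interface and its method names are hypothetical placeholders standing in for the behaviors described above, and the retry limit is an assumption:

```python
def capture(robot, max_attempts=3):
    """Sketch of the capture loop: grab, confirm containment, and on
    failure either re-grab in place or back off and re-approach."""
    robot.approach()                       # block 2508
    for _ in range(max_attempts):
        robot.execute_grab_pattern()       # block 2510
        if robot.object_in_containment():  # decision block 2512
            robot.apply_holding_pressure() # block 2514
            robot.raise_to_carry()         # block 2516
            return True
        robot.release_and_reset_pads()     # block 2520
        if robot.object_moved():           # re-approach if displaced
            robot.redetermine_approach()   # block 2524
            robot.approach()               # back to block 2508
    return False
```

A caller would pass any object implementing these methods; the loop makes incremental progress across attempts rather than failing outright on the first miss.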
In step 2604, the robot may adjust the approach angle and move obstacles. The best angle of approach may be determined for the particular object, such as approaching a book from the direction of its spine. The grabber pad arms may be used to push obstacles out of the way, and the robot may drive to adjust its angle of approach.
In step 2606, the robot may adjust its arm height based on the type of object or object group. The strategy for picking up the target object or object group includes the arm height and may differ by object to be picked up. For example, a basketball may be pushed from its top so it will roll. A stuffed animal may be pushed from its middle so it will slide and not fall sideways and become harder to push, or flop over the grabber pad arms. A book may be pushed from its sides, very near the floor. Legos may be pushed with the arms against the floor.
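The object-dependent arm-height rule may be sketched as a simple lookup, following the examples above. The category labels and height fractions are assumptions for illustration, not disclosed values:

```python
def grabbing_height(object_type, object_height):
    """Arm height for initial contact, chosen per object type:
    balls are pushed near the top so they roll, plush items at the
    middle so they slide, and books or small parts at the floor."""
    if object_type == "ball":
        return object_height * 0.9   # push from the top so it rolls
    if object_type == "plush":
        return object_height * 0.5   # push from the middle so it slides
    if object_type in ("book", "small_parts"):
        return 0.0                   # push from the sides, at the floor
    return object_height * 0.5       # default: mid-height contact
```

For example, a 0.24 m ball would be contacted near 0.22 m, while a book is contacted at floor level.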
At step 2608, the robot may drive such that arms are aligned past the object or object group. The object or object group may be in contact with the shovel or scoop, and may lie within the area inside the two grabber pad arms.
At step 2610, the robot may use its arms to push the object or object group onto the shovel or scoop. The arms may be used intelligently, per a grabbing pattern as described previously. In some instances, both arms may push gradually together, but adjustments may be made as objects tumble and move.
At step 2612, the robot may determine if the object is picked up or not. The camera sensor or other sensor may be utilized to see or detect if the object or object group has been successfully pushed onto the shovel or scoop. If the object is not picked up, then in step 2614 the robot may release the arms up and open. The arms may first be released slightly, such that the object is not being squeezed, then moved up and over the object or object group so that the object is not pushed farther out of or farther away from the shovel or scoop. This allows the robot to make incremental progress with picking up if the initial actions are not sufficient. From here the robot may return to step 2604. If the object is detected to be within the shovel at step 2612, the robot may proceed to step 2616.
In step 2616, the robot may apply light pressure on top of the object or against the shovel or scoop. Based on the object type, the robot may thereby apply pressure to the object in order to hold the object or object group within the shovel or scoop. For example, the robot may hold the top of a basketball firmly, squeeze a stuffed animal, push down on a book, or push against the shovel or scoop to retain a group of small objects such as marbles or plastic construction blocks.
At step 2618, the robot may lift the object or object group while continuing to hold with the grabber pad arms. The shovel or scoop and the arms may be lifted together to an intended angle, such as forty-five degrees, in order to carry the object or the object group without them rolling out of or being dislodged from the shovel or scoop. With the shovel and arms raised to the desired angle, the arms may continue to apply pressure to keep the object secure in the shovel.
At step 2620, the robot may drive to the destination location to place the object. The robot may use a local or a global map to navigate to the destination location in order to place the object or object group. For example, this may be a container intended to hold objects, a stack of books, or a designated part of the floor where the object or object group may be out of the way.
At step 2622, the robot may move the shovel or scoop holding the object or object group up and over the destination. The shovel height or position may be adjusted to align with the destination location. For example, the shovel or scoop may be lifted over a container, or aligned with the area above the top of an existing pile of books.
At step 2624, the robot may use its grabber pad arms to place the object or object group at the destination location. The arms may be opened to drop the object or object group into the container, or may be used to push objects forward out of the shovel or scoop. For example, a basketball may be dropped into a container, over the back of the robot, and a book may be carefully pushed forward onto an existing stack of books. Finally, the process ends at step 2626, with the object successfully dropped or placed at the destination.
As illustrated in
As shown in
This process for a stackable object 2700 may be performed by any of the robots disclosed herein, such as those illustrated in
As illustrated in
As shown in
The process for a large, slightly deformable object 2800 may be performed by any of the robots disclosed herein, such as those illustrated in
As illustrated in
As shown in
The process for a large, highly deformable object 2900 may be performed by any of the robots disclosed herein, such as those illustrated in
As illustrated in
As shown in
The process for small, easily scattered objects 3000 may be performed by any of the robots disclosed herein, such as those illustrated in
The cameras may be disposed in a front-facing stereo arrangement, and may include a rear-facing camera or cameras as well. Alternatively, a single front-facing camera may be utilized, or a single front-facing along with a single rear-facing camera. Other camera arrangements (e.g., one or more side or oblique-facing cameras) may also be utilized in some cases.
One or more of the localization logic 3106, mapping logic 3108, and perception logic 3110 may be located and/or executed on a mobile robot, or may be executed in a computing device that communicates wirelessly with the robot, such as a cell phone, laptop computer, tablet computer, or desktop computer. In some embodiments, one or more of the localization logic 3106, mapping logic 3108, and perception logic 3110 may be located and/or executed in the “cloud”, i.e., on computer systems coupled to the robot via the Internet or other network.
The perception logic 3110 is engaged by an image segmentation activation 3144 signal, and utilizes any one or more of well-known image segmentation and object recognition algorithms to detect objects in the field of view of the camera 3104. The perception logic 3110 may also provide calibration and objects 3120 signals for mapping purposes. The localization logic 3106 uses any one or more of well-known algorithms to localize the mobile robot in its environment. The localization logic 3106 outputs a local to global transform 3122 reference frame transformation and the mapping logic 3108 combines this with the calibration and objects 3120 signals to generate an environment map 3124 for the pick-up planner 3114, and object tracking 3126 signals for the path planner 3112.
In addition to the object tracking 3126 signals from the mapping logic 3108, the path planner 3112 also utilizes a current state 3128 of the system from the system state settings 3130, synchronization signals 3132 from the pick-up planner 3114, and movement feedback 3134 from the motion controller 3116. The path planner 3112 transforms these inputs into navigation waypoints 3136 that drive the motion controller 3116. The pick-up planner 3114 transforms local perception with image segmentation 3138 inputs from the perception logic 3110, the environment map 3124 from the mapping logic 3108, and synchronization signals 3132 from the path planner 3112 into manipulation actions 3140 (e.g., of robotic graspers, shovels) to the motion controller 3116. Embodiments of algorithms utilized by the path planner 3112 and pick-up planner 3114 are described in more detail below.
In one embodiment simultaneous localization and mapping (SLAM) algorithms may be utilized to generate the global map and localize the robot on the map simultaneously. A number of SLAM algorithms are known in the art and commercially available.
The motion controller 3116 transforms the navigation waypoints 3136, manipulation actions 3140, and local perception with image segmentation 3138 signals to target movement 3142 signals to the motor and servo controller 3118.
In a less sophisticated operating mode, the robot may opportunistically pick up objects in its field of view and drop them into containers, without first creating a global map of the environment. For example, the robot may simply explore until it finds an object to pick up and then explore again until it finds the matching container. This approach may work effectively in single-room environments where there is a limited area to explore.
The sequence begins with the robot sleeping (sleep state 3402) and charging at the base station (block 3302). The robot is activated, e.g., on a schedule, and enters an exploration mode (environment exploration state 3404, activation action 3406, and schedule start time 3408). In the environment exploration state 3404, the robot scans the environment using cameras (and other sensors) to update its environmental map and localize its own position on the map (block 3304, explore for configured interval 3410). The robot may transition from the environment exploration state 3404 back to the sleep state 3402 on condition that there are no more objects to pick up 3412, or the battery is low 3414.
From the environment exploration state 3404, the robot may transition to the object organization state 3416, in which it operates to move the items on the floor to organize them by category 3418. This transition may be triggered by the robot determining that objects are too close together on the floor 3420, or determining that the path to one or more objects is obstructed 3422. If none of these triggering conditions is satisfied, the robot may transition from the environment exploration state 3404 directly to the object pick-up state 3424 on condition that the environment map comprises at least one drop-off container for a category of objects 3426, and there are unobstructed items for pickup in the category of the container 3428. Likewise the robot may transition from the object organization state 3416 to the object pick-up state 3424 under these latter conditions. The robot may transition back to the environment exploration state 3404 from the object organization state 3416 on condition that no objects are ready for pick-up 3430.
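The state transitions described above may be sketched as a simple transition table; the state and condition names paraphrase the text and are illustrative rather than an exact implementation:

```python
# Transition table: (current state, triggering condition) -> next state.
TRANSITIONS = {
    ("explore", "objects_too_close"): "organize",            # 3420
    ("explore", "path_obstructed"): "organize",              # 3422
    ("explore", "container_known_and_items_clear"): "pickup",# 3426, 3428
    ("organize", "container_known_and_items_clear"): "pickup",
    ("organize", "no_objects_ready"): "explore",             # 3430
    ("explore", "no_objects_left"): "sleep",                 # 3412
    ("explore", "battery_low"): "sleep",                     # 3414
}

def next_state(state, condition):
    """Return the next state, or remain in place if no transition applies."""
    return TRANSITIONS.get((state, condition), state)
```

For example, a low battery observed while exploring sends the robot to the sleep state, while an obstructed path sends it to the organization state.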
In the environment exploration state 3404 and/or the object organization state 3416, image data from cameras is processed to identify different objects (block 3306). The robot selects a specific object type/category to pick up, determines a next waypoint to navigate to, and determines a target object and location of type to pick up based on the map of environment (block 3308, block 3310, and block 3312).
In the object pick-up state 3424, the robot selects a goal location that is adjacent to the target object(s) (block 3314). It uses a path planning algorithm to navigate itself to that new location while avoiding obstacles. The robot actuates left and right pusher arms to create an opening large enough that the target object may fit through, but not so large that other unwanted objects are collected when the robot drives forwards (block 3316). The robot drives forwards so that the target object is between the left and right pusher arms, and the left and right pusher arms work together to push the target object onto the collection shovel (block 3318).
The robot may continue in the object pick-up state 3424 to identify other target objects of the selected type to pick up based on the map of environment. If other such objects are detected, the robot selects a new goal location that is adjacent to the target object. It uses a path planning algorithm to navigate itself to that new location while avoiding obstacles, while carrying the target object(s) that were previously collected. The robot actuates left and right pusher arms to create an opening large enough that the target object may fit through, but not so large that other unwanted objects are collected when the robot drives forwards. The robot drives forwards so that the next target object(s) are between the left and right pusher arms. Again, the left and right pusher arms work together to push the target object onto the collection shovel.
On condition that all identified objects in the category are picked up 3432, or if the shovel is at capacity 3434, the robot transitions to the object drop-off state 3436 and uses the map of the environment to select a goal location that is adjacent to the bin for the type of objects collected and uses a path planning algorithm to navigate itself to that new location while avoiding obstacles (block 3320). The robot backs up towards the bin into a docking position where the back of the robot is aligned with the back of the bin (block 3322). The robot lifts the shovel up and backwards, rotating over a rigid arm at the back of the robot (block 3324). This lifts the target objects up above the top of the bin and dumps them into the bin.
From the object drop-off state 3436, the robot may transition back to the environment exploration state 3404 on condition that there are more items to pick up 3438, or it has an incomplete map of the environment 3440. The robot resumes exploring and the process may be repeated (block 3326) for each other type of object in the environment having an associated collection bin.
The robot may alternatively transition from the object drop-off state 3436 to the sleep state 3402 on condition that there are no more objects to pick up 3412 or the battery is low 3414. Once the battery recharges sufficiently, or at the next activation or scheduled pick-up interval, the robot resumes exploring and the process may be repeated (block 3326) for each other type of object in the environment having an associated collection bin.
A path is formed to the starting goal location, the path comprising zero or more waypoints (block 3506). Movement feedback is provided back to the path planning algorithm. The waypoints may be selected to avoid static and/or dynamic (moving) obstacles (objects not in the target group and/or category). The robot's movement controller is engaged to follow the waypoints to the target group (block 3508). The target group is evaluated upon achieving the goal location, including additional qualifications to determine if it may be safely organized (block 3510).
The robot's perception system is engaged (block 3512) to provide image segmentation for determination of a sequence of activations generated for the robot's manipulators (e.g., arms) and positioning system (e.g., wheels) to organize the group (block 3514). The sequencing of activations is repeated until the target group is organized, or fails to organize (failure causing regression to block 3510). Engagement of the perception system may be triggered by proximity to the target group. Once the target group is organized, and on condition that there is sufficient battery life left for the robot and there are more groups in the category or categories to organize, these actions are repeated (block 3516).
In response to low battery life the robot navigates back to the docking station to charge (block 3518). However, if there is adequate battery life, and on condition that the category or categories are organized, the robot enters object pick-up mode (block 3520), and picks up one of the organized groups for return to the drop-off container. Entering pickup mode may also be conditioned on the environment map comprising at least one drop-off container for the target objects, and the existence of unobstructed objects in the target group for pick-up. On condition that no group of objects is ready for pick up, the robot continues to explore the environment (block 3522).
Once the adjacent location is reached, an assessment of the target object is made to determine if it may be safely manipulated (item 3610). On condition that the target object may be safely manipulated, the robot is operated to lift the object using the robot's manipulator arm, e.g., shovel (item 3612). The robot's perception module may be utilized at this time to analyze the target object and nearby objects to better control the manipulation (item 3614).
The target object, once on the shovel or other manipulator arm, is secured (item 3616). On condition that the robot does not have capacity for more objects, or it is the last object of the selected category(ies), object drop-off mode is initiated (item 3618). Otherwise the robot may begin the process again (3602).
“Scale invariant keypoint” or “visual keypoint” in this disclosure refers to a distinctive visual feature that may be maintained across different perspectives, such as photos taken from different areas. This may be an aspect within an image captured of a robot's working space that may be used to identify a feature of the area or an object within the area when this feature or object is captured in other images taken from different angles, at different scales, or using different resolutions from the original capture.
Scale invariant keypoints may be detected by a robot or an augmented reality robotic interface installed on a mobile device based on images taken by the robot's cameras or the mobile device's cameras. Scale invariant keypoints may help a robot or an augmented reality robotic interface on a mobile device to determine a geometric transform between camera frames displaying matching content. This may aid in confirming or fine-tuning an estimate of the robot's or mobile device's location within the robot's working space.
Scale invariant keypoints may be detected, transformed, and matched for use through algorithms well understood in the art, such as (but not limited to) Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented Robust Binary features (ORB), and SuperPoint.
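Matching of binary keypoint descriptors, such as those produced by ORB, may be sketched as brute-force nearest-neighbour search under Hamming distance. The descriptors are represented here as plain integers and the distance threshold is an assumption for illustration:

```python
def hamming(a, b):
    """Bitwise (Hamming) distance between two binary descriptors held as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, max_distance=16):
    """Brute-force nearest-neighbour matching of ORB-style binary
    descriptors; returns (query_index, train_index) pairs for each
    query descriptor whose best match is within max_distance bits."""
    matches = []
    for qi, qd in enumerate(query):
        ti, dist = min(
            ((i, hamming(qd, td)) for i, td in enumerate(train)),
            key=lambda p: p[1],
        )
        if dist <= max_distance:
            matches.append((qi, ti))
    return matches
```

Real systems typically add a ratio or cross-check test to reject ambiguous matches before estimating a geometric transform from the surviving pairs.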
Objects located in the robot's working space may be detected at block 3704 based on the input from the left camera and the right camera, thereby defining starting locations for the objects and classifying the objects into categories. At block 3706, re-identification fingerprints may be generated for the objects, wherein the re-identification fingerprints are used to determine visual similarity of objects detected in the future with the objects. The objects detected in the future may be the same objects, redetected as part of an update or transformation of the global area map, or may be similar objects located similarly at a future time, wherein the re-identification fingerprints may be used to assist in more rapidly classifying the objects.
At block 3708, the robot may be localized within the robot's working space. Input from at least one of the left camera, the right camera, light detecting and ranging (LIDAR) sensors, and inertial measurement unit (IMU) sensors may be used to determine a robot location. The robot's working space may be mapped to create a global area map that includes the scale invariant keypoints, the objects, and the starting locations of the objects. The objects within the robot's working space may be re-identified at block 3710 based on at least one of the starting locations, the categories, and the re-identification fingerprints. Each object may be assigned a persistent unique identifier at block 3712.
At block 3714, the robot may receive a camera frame from an augmented reality robotic interface installed as an application on a mobile device operated by a user, and may update the global area map with the starting locations and scale invariant keypoints using a camera frame to global area map transform based on the camera frame. In the camera frame to global area map transform, the global area map may be searched to find a set of scale invariant keypoints that match those detected in the mobile camera frame by using a specific geometric transform. This transform may maximize the number of matching keypoints and minimize the number of non-matching keypoints while maintaining geometric consistency.
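The search for a transform that maximizes matching keypoints while maintaining geometric consistency may be sketched in RANSAC style. For simplicity this sketch restricts the hypothesis to a pure translation proposed from a single correspondence; a full system would fit a richer transform (e.g., similarity or homography), and all names and tolerances here are assumptions:

```python
import random

def count_inliers(transform, pairs, tol=2.0):
    """Count keypoint correspondences consistent with a candidate transform."""
    inliers = 0
    for (x, y), (u, v) in pairs:
        px, py = transform(x, y)
        if (px - u) ** 2 + (py - v) ** 2 <= tol ** 2:
            inliers += 1
    return inliers

def best_translation(pairs, trials=50, tol=2.0, seed=0):
    """RANSAC-style search: each trial proposes a translation from one
    randomly chosen correspondence and keeps the hypothesis with the
    most geometrically consistent matches (inliers)."""
    rng = random.Random(seed)
    best, best_score = None, -1
    for _ in range(trials):
        (x, y), (u, v) = rng.choice(pairs)
        dx, dy = u - x, v - y
        t = lambda px, py, dx=dx, dy=dy: (px + dx, py + dy)
        score = count_inliers(t, pairs, tol)
        if score > best_score:
            best, best_score = (dx, dy), score
    return best, best_score
```

Given mostly consistent correspondences plus an outlier, the consistent translation wins because it explains the most pairs, which mirrors the maximize-matches, minimize-non-matches criterion described above.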
At block 3716, user indicators may be generated for objects, wherein user indicators may include next target, target order, dangerous, too big, breakable, messy, and blocking travel path. The global area map and object details may be transmitted to the mobile device at block 3718, wherein object details may include at least one of visual snapshots, the categories, the starting locations, the persistent unique identifiers, and the user indicators of the objects. This information may be transmitted using wireless signaling such as Bluetooth or Wi-Fi, as supported by the communications 134 module introduced in
The updated global area map, the objects, the starting locations, the scale invariant keypoints, and the object details, may be displayed on the mobile device using the augmented reality robotic interface. The augmented reality robotic interface may accept user inputs to the augmented reality robotic interface, wherein the user inputs indicate object property overrides including change object type, put away next, don't put away, and modify user indicator, at block 3720. The object property overrides may be transmitted from the mobile device to the robot, and may be used at block 3722 to update the global area map, the user indicators, and the object details. Returning to block 3718, the robot may re-transmit its updated global area map to the mobile device to resynchronize this information.
The robot 100 may use its sensors and cameras illustrated in
In one embodiment, the robotic system 3800 may also include a mobile device 3822 with an augmented reality robotic interface application 3824 installed and the ability to provide a camera frame 3826. The robotic system 3800 may include a user in possession of a mobile device 3822 such as a tablet or a smart phone. The mobile device 3822 may have an augmented reality robotic interface application 3824 installed that functions in accordance with the present disclosure. The augmented reality robotic interface application 3824 may provide a camera frame 3826 using a camera configured as part of the mobile device 3822. The camera frame 3826 may include a ground plane 3828 that may be identified and used to localize the mobile device 3822 within the robotic system 3800 such that information regarding the robot's working space 3816 detected by the robot 100 may be transformed according to the camera frame to global area map transform 3830 to allow the robot 100 and the mobile device 3822 to stay synchronized with regard to the objects 3806 in the robot's working space 3816 and user indicators and object property overrides that may be attached to those objects 3806.
The global area map 3820 may be a top-down two-dimensional representation of the robot's working space 3816 in one embodiment. The global area map 3820 may undergo a camera frame to global area map transform 3830 such that the information detected by the robot 100 may be represented in the augmented reality robotic interface application 3824 from a user's point of view. The global area map 3820 may be updated to include the mobile device location 3832, the robot location 3834, object starting locations 3836, and object drop locations 3838. In one embodiment, the global area map 3820 may identify furniture or other objects 3806 as obstacles 3840. Objects 3806 other than the target object currently under consideration by the robot 100 may be considered obstacles 3840 during that phase of pickup. In one embodiment, the augmented reality robotic interface application 3824 may also show the mobile device location 3832 and robot location 3834, though these are not indicated in the present illustration.
Various functional operations described herein may be implemented in logic that is referred to using a noun or noun phrase reflecting said operation or function. For example, an association operation may be carried out by an “associator” or “correlator”. Likewise, switching may be carried out by a “switch”, selection by a “selector”, and so on. “Logic” refers to machine memory circuits and non-transitory machine readable media comprising machine-executable instructions (software and firmware), and/or circuitry (hardware) which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however, it does not exclude machine memories comprising software and thereby forming configurations of matter).
Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations. This formulation—[entity] configured to [perform one or more tasks]—is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure may be said to be “configured to” perform some task even if the structure is not currently being operated. A “credit distribution circuit configured to distribute credits to a plurality of processor cores” is intended to cover, for example, an integrated circuit that has circuitry that performs this function during operation, even if the integrated circuit in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible.
The term “configured to” is not intended to mean “configurable to.” An unprogrammed field programmable gate array (FPGA), for example, would not be considered to be “configured to” perform some specific function, although it may be “configurable to” perform that function after programming.
Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Accordingly, claims in this application that do not otherwise include the “means for” [performing a function] construct should not be interpreted under 35 U.S.C. § 112(f).
As used herein, the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect the determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase “determine A based on B.” This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase “based on” is synonymous with the phrase “based at least in part on.”
As used herein, the phrase “in response to” describes one or more factors that trigger an effect. This phrase does not foreclose the possibility that additional factors may affect or otherwise trigger the effect. That is, an effect may be solely in response to those factors, or may be in response to the specified factors as well as other, unspecified factors. Consider the phrase “perform A in response to B.” This phrase specifies that B is a factor that triggers the performance of A. This phrase does not foreclose that performing A may also be in response to some other factor, such as C. This phrase is also intended to cover an embodiment in which A is performed solely in response to B.
As used herein, the terms “first,” “second,” etc. are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.), unless stated otherwise. For example, in a register file having eight registers, the terms “first register” and “second register” may be used to refer to any two of the eight registers, and not, for example, just logical registers 0 and 1.
When used in the claims, the term “or” is used as an inclusive or and not as an exclusive or. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof.
As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B.
The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
Having thus described illustrative embodiments in detail, it will be apparent that modifications and variations are possible without departing from the scope of the disclosure as claimed. The scope of disclosed subject matter is not limited to the depicted embodiments but is rather set forth in the following Claims.
This application claims the benefit of U.S. provisional patent application Ser. No. 63/253,812, filed on Oct. 8, 2021, and U.S. provisional patent application Ser. No. 63/253,867, filed on Oct. 8, 2021, the contents of each of which are incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
63253812 | Oct 2021 | US
63253867 | Oct 2021 | US