The invention generally relates to object processing systems, and relates in particular to object processing systems such as automated storage and retrieval systems, distribution center systems, and sortation systems that are used for processing a variety of objects.
Current object processing systems generally involve the processing of a large number of objects, where the objects are received in either organized or disorganized batches, and must be routed to desired destinations in accordance with a manifest or specific addresses on the objects (e.g., in a mailing system).
Automated storage and retrieval systems (AS/RS), for example, generally include computer-controlled systems for automatically storing (placing) and retrieving objects from defined storage locations. Traditional AS/RS typically employ totes (or bins), which are the smallest unit of load for the system. In these systems, the totes are brought to people who pick individual objects out of the totes. When a person has picked the required number of objects out of the tote, the tote is then re-inducted back into the AS/RS.
Current distribution center sorting systems, for example, generally assume an inflexible sequence of operations whereby a disorganized stream of input objects is first singulated into a single stream of isolated objects presented one at a time to a scanner that identifies the object. An induction element (e.g., a conveyor, a tilt tray, or manually movable bins) then transports the objects to the desired destination or further processing station, which may be a bin, an inclined shelf, a chute, a bag, or a conveyor, etc.
In typical parcel sortation systems, human workers or automated systems typically retrieve parcels in an arrival order, and sort each parcel or object into a collection bin based on a set of given heuristics. For instance, all objects of like type might go to a collection bin, or all objects in a single customer order, or all objects destined for the same shipping destination, etc. The human workers or automated systems are required to receive objects and to move each to its assigned collection bin. If the number of different types of input (received) objects is large, a large number of collection bins is required.
Automated processing systems may employ programmable motion devices such as robotic systems that grasp and move objects from one location to another (e.g., from a tote to a destination container). During such grasping and movement, however, there is a potential for errors, such as, for example, more than one object being picked, an object being picked that is below other objects (which may then be ejected from a tote), and an object or objects being dropped or knocked from the end-effector of the robotic system. Any of these events could potentially cause errors in the automated processing systems.
Adding to these challenges is the fact that some objects may have information entered incorrectly into the manifest or onto a shipping label. For example, if a manifest in a distribution center includes a size or weight for an object that is not correct (e.g., because it was entered manually in error), or if a shipping sender enters an incorrect size or weight on a shipping label, the processing system may reject the object as being unknown. Additionally, and with regard to incorrect information on a shipping label, the sender may have been undercharged due to the erroneous information, for example, if the size or weight was entered incorrectly by the sender.
There remains a need, therefore, for more efficient and more cost-effective object processing systems that process objects of a variety of sizes and weights into appropriate collection bins or boxes while remaining efficient in handling objects of such varying sizes and weights.
In accordance with an aspect, the invention provides an object processing system including an input area for receiving a plurality of objects to be processed, an output area including a plurality of destination containers for receiving any of the plurality of objects, a programmable motion device proximate the input area and the output area, the programmable motion device including an end-effector for grasping a selected object of the plurality of objects, and a perception system for detecting the unexpected appearance of any of the plurality of objects that is not associated with the end-effector of the programmable motion device.
In accordance with another aspect, the invention provides an object processing system including an input area for receiving a plurality of objects to be processed, the input area including a weight sensing conveyor section and the plurality of objects being provided within at least one input container, an output area including a plurality of destination containers for receiving any of the plurality of objects, a programmable motion device proximate the input area and the output area, the programmable motion device including an end-effector for grasping any of the plurality of objects, and a perception system for detecting whether any of the plurality of objects on the weight sensing conveyor section are not within the at least one input container on the weight sensing conveyor section.
In accordance with a further aspect, the invention provides an object processing system including an input area for receiving a plurality of objects to be processed, an output area including a plurality of destination containers for receiving any of the plurality of objects, a programmable motion device proximate the input area and the output area, the programmable motion device including an end-effector for grasping any of the plurality of objects, and a perception system including at least one camera system and a plurality of scanning systems for detecting any identifying indicia on any of the plurality of objects that fall toward a portion of a floor of the object processing system as well as for detecting a number of any of the plurality of objects that fall toward the portion of the floor of the object processing system.
In accordance with yet a further aspect, the invention provides a method of processing objects including providing a plurality of objects in a container on a weight sensing conveyor section, grasping a selected object of the plurality of objects for movement to a destination container using a programmable motion device, and monitoring whether any of the plurality of objects other than the selected object become dropped or displaced using a perception system and weight sensing conveyor sections.
The following description may be further understood with reference to the accompanying drawings in which:
The drawings are shown for illustrative purposes only.
The invention provides an efficient and economical object processing system that may be used, for example, to provide any of shipping orders from a wide variety of objects, groupings of objects for shipping purposes to a variety of locations, and locally specific groupings of objects for collection and shipment to a large location with locally specific areas such as product aisles in a retail store. Each of the systems may be designed to meet Key Performance Indicators (KPIs), while satisfying industrial and system safety standards.
In accordance with an aspect, the system provides an object processing system that maintains knowledge of objects as they are processed, the knowledge including a number of objects picked, whether any objects are dropped or displaced, which objects are dropped or displaced, and how the objects became dropped or displaced.
The system 10 also includes a plurality of upper perception units 20, 22, 24, 26 as well as a floor-based catch bin 28. Input objects arrive in input containers 30 on an input conveyor 13 of the input conveyance system 12, and are provided by the programmable motion device 18 to either destination containers 32 on an output conveyor 15 of the output conveyance system 14 or to destination containers 34 on an output conveyor 17 of the output conveyance system 16. Operation of the system, including the conveyance systems 12, 14, 16, all perception systems (including perception units 20, 22, 24, 26 and weight sensing belted conveyor sections 40, 42, 44) and the programmable motion device, is provided by one or more computer processing systems 100. In accordance with various aspects, any of roller conveyors, belted conveyors and other conveyance systems (e.g., moving plates) may all include weight sensing capabilities by being mounted on load cells or force torque sensors in accordance with aspects of the present invention.
A goal of the system is to accurately and reliably move objects from an input container 30 to any of destination containers 32, 34 using, for example, the end-effector 46 with a vacuum cup 48. As discussed in more detail herein, the system employs a robust set of perception processes that use weight sensing, imaging and scanning to maintain knowledge of locations of all objects at all times. With reference to
In accordance with an aspect of the invention, the system may take an initial weight measurement immediately prior to an event (either a pick or a placement) and then wait a buffer period of time prior to taking a post-event weight measurement. The buffer period of time may be, for example, 1, 1.5, 2, 2.5, 3 or 5 seconds, so that any forces applied to the bin by the end-effector during the pick or placement do not alter the post-event weight measurement.
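By way of non-limiting illustration only, the buffered measurement sequence described above may be sketched as follows (in Python); the read_weight and perform_event interfaces and the default buffer value are hypothetical placeholders and do not form part of the disclosed system:

```python
import time

def buffered_weight_delta(read_weight, perform_event, buffer_s=2.0):
    """Take a weight reading immediately prior to an event (a pick or a
    placement), perform the event, wait a buffer period so that forces
    applied to the bin by the end-effector settle out, and then take the
    post-event reading.  Returns the signed change in measured weight."""
    pre_event = read_weight()   # reading taken immediately prior to the event
    perform_event()             # e.g., command the pick or the placement
    time.sleep(buffer_s)        # buffer period, e.g., 1 to 5 seconds
    post_event = read_weight()  # settled post-event reading
    return post_event - pre_event
```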
While a reading of the stable state mass on the pick conveyor is typically taken at some time X before the robot's anticipated impact with the pick container, it is possible, due to imperfect approximations of motion, or because the objects in the tote may still be dynamically moving from a previous action, that the system is not truly in a steady state at this expected time X. As such, techniques may be employed whereby, after time X, a new reading at time X+T may be considered if it is more accurate for the steady state system mass prior to impact. Specifically, one such approach considers all readings between time X and the time at which the robot is sensed to have made impact with the object it is picking, and the minimum reading among this time span is utilized. This is most useful in the example where a rapid retry takes place. A rapid retry is where the robot attempts to pick an object A, fails to acquire a grasp on it, and then rapidly retries to pick an object B. The time span between attempts A and B is generally very short, so the weighing scale may not have come to rest and may exhibit a positive spike in mass at time X before attempt B as a result of the robot's interference with the container from attempt A. As such, taking the minimum of the readings from the timed callback before pick B, until pick B occurs, resolves this issue for finding the stable reading before pick B.
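By way of further non-limiting illustration, the minimum-reading approach described above may be sketched as follows; the sample format and helper name are illustrative assumptions rather than part of the disclosed system:

```python
def stable_mass_before_pick(readings, t_expected, t_impact):
    """Return an estimate of the steady-state pick-scale mass prior to
    impact, taken as the minimum reading between the scheduled reading
    time X and the sensed impact time.

    readings   -- iterable of (timestamp, mass) pairs from the pick scale
    t_expected -- time X at which the steady-state reading was scheduled
    t_impact   -- time at which the robot is sensed to contact the object

    A spike caused by a previous attempt (e.g., during a rapid retry) only
    raises the measured mass, so the minimum reading over the window is
    taken as the stable reading before the pick."""
    window = [mass for (t, mass) in readings if t_expected <= t <= t_impact]
    if not window:
        raise ValueError("no pick scale readings in the pre-impact window")
    return min(window)
```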
In accordance with further aspects, systems of the invention may employ continuous-movement object detection. In order to minimize the time it takes to detect that an undesirable number of objects has been picked, continuous detection may take place after the robot has picked an object, and while the robot is moving, to regularly check whether an undesirable number of objects has been picked. The benefit of doing so is that an undesirable number of objects being picked can be detected as quickly as possible. By detecting this sooner, the robot can be told to stop sooner, which improves system speed. Additionally, as seen in
In order to detect an undesirable pick while the robot is moving (and while objects within the pick container may be shuffling as a result of a pick), careful algorithmic techniques are derived which balance detecting an undesirable pick as soon as possible while not introducing false positives. A false positive occurs when the system believes an undesirable pick has taken place whereas, in reality, a valid pick occurred. The risk of a false positive exists due to the dynamics of a real-world system in which objects may shuffle and move in a non-uniform manner while the robot picks up one or more objects.
This continuous detection while the system is in a dynamic state can take many forms, but a non-limiting list of examples is provided herein. A clearance time may be used, such that while detecting for an invalid mass difference, such mass difference must remain above the provided threshold for the specified clearance time. This technique helps to mitigate false beliefs about the quantity of objects picked as a result of spikes (sudden changes) in mass caused by objects toppling over, hitting walls of the container, etc., while the robot performs a pick and a retract. Additionally, an affirming approach may take place such that if the mass difference registered is considered invalid, and the difference remains within a stability limit for a specified amount of stability time, then it can be believed that the system has reached steady state and a determination can be made immediately at that time.
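The clearance-time and affirming (stability) checks described above may, by way of non-limiting illustration, be sketched as follows; the sample format, tolerance and timing values shown are illustrative assumptions rather than values taken from the present disclosure:

```python
def continuous_pick_check(samples, expected_g, tol_g,
                          clearance_s=0.3, stability_band_g=5.0, stability_s=0.2):
    """Continuously evaluate pick-scale samples while the robot is moving.

    samples     -- time-ordered (timestamp, mass_removed) pairs, where
                   mass_removed is the steady pre-pick mass minus the
                   current pick-scale reading
    expected_g  -- mass of the single object intended to be picked
    tol_g       -- allowed deviation from the expected mass

    Returns True as soon as an undesirable pick can be declared, applying
    both the clearance-time rule and the affirming (stability) rule."""
    invalid_since = None   # start of the current run of out-of-tolerance samples
    stable_since = None    # start of the current run of nearly-constant samples
    previous = None
    for t, removed in samples:
        if abs(removed - expected_g) > tol_g:
            invalid_since = t if invalid_since is None else invalid_since
            # clearance time: the invalid difference must persist long enough
            if t - invalid_since >= clearance_s:
                return True
            # affirming approach: an invalid difference that has stopped
            # changing (within the stability band) is declared once it has
            # held for the stability time
            if previous is not None and abs(removed - previous) <= stability_band_g:
                stable_since = t if stable_since is None else stable_since
                if t - stable_since >= stability_s:
                    return True
            else:
                stable_since = None
        else:
            invalid_since = None
            stable_since = None
        previous = removed
    return False
```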
In particular,
If the weight pick delta for grasping and lifting the selected object is within tolerance, then the system determines whether the camera system has detected any new object(s) that are not associated with the end-effector (step 1010). In particular, the upper camera system may run continuous background subtraction to identify consecutive pairs of images that are different. The consecutive images may be taken, for example, 0.5, 1, 3 or 5 seconds apart. The images may be captured continuously or at discrete times, with background subtraction performed on each consecutive pair of images and any changes associated with movement of the end-effector discounted. The object detection analysis is discussed in more detail below with reference to
If the system has not detected any new object(s) that are not associated with the end-effector, the process moves to
The system then confirms (using the upper camera system and/or sensors within the end-effector) that an object is still being grasped by the gripper (step 1016). If not, the process moves to
If the weight placement delta for placing the selected object is within tolerance, then the system determines whether the camera system has detected any new object(s) that are not associated with the end-effector (step 1028). Again, the upper camera system may run continuous background subtraction to identify consecutive pairs of images that are different. The consecutive images may be taken, for example, 0.5, 1, 3 or 5 seconds apart, and the images may be captured continuously or at discrete times, with background subtraction performed on each consecutive pair of images and any changes associated with movement of the end-effector discounted. The upper camera system includes the plurality of cameras at the upper perception units 20, 22, 24, 26. If the camera system does detect new object(s) that are not associated with the end-effector, the process moves to
If the system has not detected any new object(s) that are not associated with the end-effector (step 1028), then the system reviews all recent scans by the lower scanning units 60 in the floor-based catch bin 28 (step 1030). Any identifying indicia on a dropped object may be detected by the scanning units 60, thereby identifying each object that falls into the floor-based catch bin 28. The system then reviews images from each of plural lower camera units 62 that are directed to the floor-based catch bin 28 (step 1032). The camera units 62 are directed toward the inside of the floor-based catch bin 28, thereby identifying or confirming the identity of each object that lies in the floor-based catch bin 28.
With reference to
With reference to
If the system determines that a weight placement delta is not within tolerance (step 1026), then the system determines whether any object is on the gripper (step 1050), and if so, either returns the object to bin A or drops the object into the floor-based catch bin (step 1052). Again, if the object is returned to bin A, further attempts to grasp and move the object may be made, and if more than a limited number of prior attempts have been made (e.g., 3 or 4), then the system may drop the object into the floor-based catch bin 28. The system may then retrieve the last object from bin B (step 1054) and then either return the last object to bin A or drop the object into the floor-based catch bin (step 1056) as discussed above. The system then returns to step 1030 in
If the camera system has detected motion not associated with the motion of the end-effector (steps 1010 or 1028), then the system uses the upper camera system (and/or end-effector sensors) to determine whether any object is being held by the gripper (step 1060) as discussed above, and if so, the system either returns the object to bin A or drops the object into the floor-based catch bin (step 1062). The system then determines whether any objects are detected as being on conveyor section A but not in bin A (step 1064). If so, the system then returns the object or objects on the conveyor section A to bin A or drops the object(s) into the floor-based catch bin (step 1066). Regardless of whether any object(s) were detected as being on conveyor section A, the system then determines whether any objects are detected as being on conveyor section B but not in bin B (step 1068). If so, the system then returns the object or objects on the conveyor section B to bin B or drops the object(s) into the floor-based catch bin (step 1070).
In accordance with a run-time example, therefore, the system may pick one object of mass 100 g. The system may then measure the pick scale 0.4 seconds before impact and receive a reading of one kg. The system may then successfully pick the object and move it to a pose-in-hand location. As this motion is occurring, the system periodically receives pick scale readings and fits a model to determine if more than one object has been picked. If the continuous check does not register a double pick, the system will reach the pose-in-hand node and receive a pick scale reading of 0.895 kg. The system will confirm that 105 g has been removed from the pick tote, which is within the threshold for believing that one 100 g object has been picked. The system will continue to place the object and do so successfully. The system will take a place scale reading before the object is placed (say it reads 200 g); the object is placed, and the system will then take another place scale reading, which may read, say, 295 g. The system has therefore verified that it has added 95 g to the place box, which is within tolerance of one 100 g object.
In accordance with another run-time example involving a double pick, the system may seek to pick one object of mass 100 g. The system may measure the pick scale 0.4 seconds before impact and receive a reading of one kg. The system may successfully pick the object and move it toward the pose-in-hand location. As this motion is occurring, the system will periodically receive pick scale readings and fit a model to determine if more than one object has been picked. The system determines that, during the retract of the pick, the pick scale registers 810 g. This is an indication that the system has picked two objects. The system may interrupt the pick and return the objects as discussed above.
In accordance with a further run-time example, the system may use scale verification to verify that an object is displaced. The system may seek to pick one object of mass 100 g. The system will measure the pick scale 0.4 seconds before impact and receive a reading of one kg. The system will successfully pick the object and move it to the pose-in-hand location. As this motion is occurring, the system will periodically receive pick scale readings and fit a model to determine if more than one object has been picked. Assuming the continuous check does not register a double pick, the system will reach the pose-in-hand node and receive a pick scale reading of 0.895 kg. The system confirms that it has removed 105 g from the pick tote, which is within the threshold for believing that the system has picked one 100 g object. The system will then take a place scale reading, which reads 200 g. The system will continue to place the object, but in this example, for some reason, the object falls off the gripper. The system will take a pick scale reading and see that it reads 895 g, indicating that the object did not end up back in the pick tote. The system will take a place scale reading and see that it still reads 200 g, indicating that the object did not end up in the placement bin. The object is therefore displaced and should be discovered by any of the perception units discussed above.
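The scale arithmetic of the three run-time examples above may be summarized, by way of non-limiting illustration, in the following sketch; the classify_pick helper and the 10 g tolerance are illustrative assumptions rather than part of the disclosed system:

```python
def classify_pick(expected_g, pick_removed_g, place_added_g, tol_g=10.0):
    """Classify a pick/place cycle from the pick-scale and place-scale deltas."""
    if pick_removed_g > expected_g + tol_g:
        return "double pick"   # e.g., 190 g removed when one 100 g object was expected
    if abs(pick_removed_g - expected_g) > tol_g:
        return "failed pick"   # nothing (or too little) left the pick tote
    if abs(place_added_g - expected_g) <= tol_g:
        return "placed"        # e.g., 105 g removed and 95 g added
    return "displaced"         # removed from the tote but never reached the bin

# The three run-time examples above, in order (all masses in grams):
print(classify_pick(100, 1000 - 895, 295 - 200))  # -> placed
print(classify_pick(100, 1000 - 810, 0))          # -> double pick
print(classify_pick(100, 1000 - 895, 200 - 200))  # -> displaced
```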
The perception units 20, 22, 24 may use RGB cameras and computer vision to detect whether a new object is on a conveyor section 40, 42, 44 (e.g., as shown in
delta = abs(R_before − R_after) + abs(G_before − G_after) + abs(B_before − B_after)
A pixel is considered changed if the delta exceeds a threshold. The computed difference between the images is cleared of noise using dilation, and the cleaned difference image is searched for blobs inside the region of interest (e.g., the conveyor sections 40, 42, 44). Blobs are limited in area, circularity, convexity and inertia to protect against noise detection. If one or more eligible blobs are detected, it is considered that one or more objects were dropped into the region of interest between the before and after events. In accordance with further aspects, the belted conveyor sections 40, 42, 44 may be formed, on their outer surfaces, of a color or reflective material that facilitates detecting and isolating any blobs that may represent one or more objects.
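By way of non-limiting illustration, the per-pixel difference, dilation and blob-filtering steps described above may be sketched as follows using the OpenCV library; the region of interest, the change threshold and the blob limits shown are illustrative assumptions rather than values taken from the present disclosure:

```python
import cv2
import numpy as np

def detect_dropped_objects(before, after, roi, delta_threshold=60):
    """Compare before/after color images of a conveyor section and report
    whether one or more object-sized blobs appear in the difference."""
    x, y, w, h = roi                                  # region of interest
    b = before[y:y + h, x:x + w].astype(np.int16)
    a = after[y:y + h, x:x + w].astype(np.int16)

    # delta = abs(R_before - R_after) + abs(G_before - G_after) + abs(B_before - B_after)
    delta = np.abs(b - a).sum(axis=2)
    changed = (delta > delta_threshold).astype(np.uint8) * 255

    # clear noise from the difference image using dilation
    changed = cv2.dilate(changed, np.ones((5, 5), np.uint8))

    # limit blobs by area, circularity, convexity and inertia
    params = cv2.SimpleBlobDetector_Params()
    params.blobColor = 255               # detect bright (changed) regions
    params.filterByArea = True
    params.minArea = 500
    params.filterByCircularity = True
    params.minCircularity = 0.1
    params.filterByConvexity = True
    params.minConvexity = 0.5
    params.filterByInertia = True
    params.minInertiaRatio = 0.05
    detector = cv2.SimpleBlobDetector_create(params)

    blobs = detector.detect(changed)
    return len(blobs) > 0                # True if an object appears dropped
```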
Dropped objects may also fall onto a weight sensing belted conveyor section at a destination location (e.g., 42, 44) and be detected by any of the upper cameras 56 or scanners 58 as discussed above.
When objects are dropped into the floor-based catch bin 28, the system obtains the identity and quantity of the objects received by the floor-based catch bin 28. In particular, the system includes scanners 78 mounted on a robot support structure 76 as well as scanners 80 on the inner walls of the floor-based catch bin 28. These scanners detect each object falling (e.g., object 72 as shown in
In accordance with various aspects, therefore, the invention provides object processing systems that include a perception system for detecting movement of any of a plurality of objects that is not associated with movement of the end-effector of the programmable motion device, may provide a perception system for detecting whether any of the plurality of objects on the weight sensing conveyor section are not within the at least one input container on the weight sensing conveyor section, or may provide a perception system including at least one camera system and a plurality of scanning systems for detecting any identifying indicia on any of the plurality of objects that fall toward a portion of a floor of the object processing system as well as for detecting a number of any of the plurality of objects that fall toward the portion of the floor of the object processing system. These perception systems are provided by the scanners 56, 78, 80 and cameras 58, 82 discussed above, in combination with the one or more computer processing systems 100 that are in communication with the programmable motion device 18, conveyors 13, 15, 17, and conveyor sections 40, 42, 44.
Those skilled in the art will appreciate that numerous modifications and variations may be made to the above disclosed embodiments without departing from the spirit and scope of the present invention.
The present application claims priority to U.S. Provisional Patent Application No. 63/358,302 filed Jul. 5, 2022, the disclosure of which is hereby incorporated by reference in its entirety.