Systems and methods for processing objects including space efficient distribution stations and automated output processing

Abstract
A space efficient automated processing system for processing objects is disclosed. The processing system includes an input conveyance system for moving objects from an input area in at least an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component, a perception system for receiving objects from the input conveyance system and for providing perception data regarding an object, a primary transport system for receiving the object from the perception system and for providing transport of the object along at least a primary transport vector including a primary transport horizontal component and a primary transport vertical component that is generally opposite the input conveyance horizontal direction component, and at least two secondary transport systems, each of which receives the object from the primary transport system and moves the object in either of reciprocal directions.
Description
BACKGROUND

The invention generally relates to automated, robotic and other processing systems, and relates in particular to automated and robotic systems intended for use in environments requiring, for example, that a variety of objects (e.g., articles, parcels or packages) be processed, e.g., sorted and/or otherwise distributed to several output destinations.


Many object distribution systems receive objects in a disorganized stream that may be provided as individual objects or objects aggregated in groups such as in bags, arriving on any of several different conveyances, commonly a conveyor, a truck, a pallet, a Gaylord, or a bin. Each object must then be distributed to the correct destination container, as determined by identification information associated with the object, which is commonly determined by a label printed on the object. The destination container may take many forms, such as a bag or a bin.


The processing of such objects has traditionally been done, at least in part, by human workers that scan the objects, e.g., with a hand-held barcode scanner, and then place the objects at assigned locations. For example, many order fulfillment operations achieve high efficiency by employing a process called wave picking. In wave picking, orders are picked from warehouse shelves and placed at locations (e.g., into bins) containing multiple orders that are sorted downstream. At the processing stage, individual objects are identified, and multi-object orders are consolidated, for example into a single bin or shelf location, so that they may be packed and then shipped to customers. The processing (e.g., sorting) of these objects has traditionally been done by hand. A human sorter picks an object from an incoming bin, finds a barcode on the object, scans the barcode with a handheld barcode scanner, determines from the scanned barcode the appropriate bin or shelf location for the article, and then places the article in the so-determined bin or shelf location where all objects for that order have been defined to belong. Automated systems for order fulfillment have also been proposed. See, for example, U.S. Patent Application Publication No. 2014/0244026, which discloses the use of a robotic arm together with an arcuate structure that is movable to within reach of the robotic arm.


Other ways of identifying objects by code scanning either require manual processing, or require that the code location be controlled or constrained so that a fixed or robot-held code scanner (e.g., barcode scanner) can reliably detect it. Manually operated barcode scanners are generally either fixed or handheld systems. With fixed systems, such as those used at point-of-sale systems, the operator holds the object and places it in front of the scanner so that the barcode faces the scanning device's sensors, and the scanner, which scans continuously, decodes any barcodes that it can detect. If the object is not immediately detected, the person holding the object typically needs to vary the position or rotation of the object in front of the fixed scanner, so as to make the barcode more visible to the scanner. For handheld systems, the person operating the scanner looks for the barcode on the object, and then holds the scanner so that the object's barcode is visible to the scanner, and then presses a button on the handheld scanner to initiate a scan of the barcode.


Further, many current distribution center sorting systems generally assume an inflexible sequence of operations whereby a disorganized stream of input objects is first singulated into a single stream of isolated objects presented one at a time to a scanner that identifies the object. A conveyance element or elements (e.g., a conveyor, a tilt tray, or manually movable bins) transport the objects to the desired destination or further processing station, which may be a bin, a chute, a bag, a conveyor, etc.


In conventional parcel sortation systems, human workers or automated systems typically retrieve objects in an arrival order, and sort each object into a collection bin based on a set of given heuristics. For instance, all objects of like type might go to a collection bin, or all objects in a single customer order, or all objects destined for the same shipping destination, etc. The human workers or automated systems are required to receive objects and to move each to their assigned collection bin. If the number of different types of input (received) objects is large, a large number of collection bins is required.


Such a system has inherent inefficiencies as well as inflexibilities since the desired goal is to match incoming objects to assigned collection bins. Such systems may require a large number of collection bins (and therefore a large amount of physical space, large capital costs, and large operating costs) in part, because sorting all objects to all destinations at once is not always most efficient.


Current state-of-the-art sortation systems rely on human labor to some extent. Most solutions rely on a worker that is performing sortation, by scanning an object from an induction area (chute, table, etc.) and placing the object in a staging location, conveyor, or collection bin. When a bin is full, another worker empties the bin into a bag, box, or other container, and sends that container on to the next processing step. Such a system has limits on throughput (i.e., how fast human workers can sort to or empty bins in this fashion) and on the number of diverts (i.e., for a given bin size, only so many bins may be arranged to be within efficient reach of human workers).


Other partially automated sortation systems involve the use of recirculating conveyors and tilt trays, where the tilt trays receive objects by human sortation (human induction), and each tilt tray moves past a scanner. Each object is then scanned and moved to a pre-defined location assigned to the object. The tray then tilts to drop the object into the location. Further, partially automated systems, such as the bomb-bay style recirculating conveyor, involve having trays open doors on the bottom of each tray at the time that the tray is positioned over a predefined chute, and the object is then dropped from the tray into the chute. Again, the objects are scanned while in the tray, which assumes that any identifying code is visible to the scanner.


Such partially automated systems are lacking in key areas. As noted, these conveyors have discrete trays that can be loaded with an object; they then pass through scan tunnels that scan the object and associate it with the tray in which it is riding. When the tray passes the correct bin, a trigger mechanism causes the tray to dump the object into the bin. A drawback with such systems, however, is that every divert requires an actuator, which increases the mechanical complexity, and the cost per divert can be very high.


An alternative is to use human labor to increase the number of diverts, or collection bins, available in the system. This decreases system installation costs, but increases the operating costs. Multiple cells may then work in parallel, effectively multiplying throughput linearly while keeping the number of expensive automated diverts at a minimum. Such diverts do not identify an object and cannot divert it to a particular spot; rather, they work with beam breaks or other sensors to seek to ensure that indiscriminate bunches of objects get appropriately diverted. The lower cost of such diverts, coupled with the low number of diverts, keeps the overall system divert cost low.


Unfortunately, these systems do not address the limitation on the total number of system bins. The system is simply diverting an equal share of the total objects to each parallel manual cell. Thus each parallel sortation cell must have all the same collection bin designations; otherwise an object might be delivered to a cell that does not have a bin to which that object is mapped. There remains a need for a more efficient and more cost effective object sortation system that sorts objects of a variety of sizes and weights into appropriate collection bins or trays of fixed sizes, yet is efficient in handling objects of such varying sizes and weights.


SUMMARY

In accordance with an embodiment, the invention provides a space efficient automated processing system for processing objects. The processing system includes an input conveyance system for moving objects from an input area in at least an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component, a perception system for receiving objects from the input conveyance system and for providing perception data regarding an object, a primary transport system for receiving the object from the perception system and for providing transport of the object along at least a primary transport vector including a primary transport horizontal component and a primary transport vertical component that is generally opposite the input conveyance horizontal direction component, and at least two secondary transport systems, each of which receives the object from the primary transport system and moves the object in either of reciprocal directions that are each generally parallel with the input conveyance horizontal direction component and the primary transport horizontal direction component.


In accordance with another embodiment, the invention provides a method for providing space efficient automated processing of objects. The method includes the steps of conveying objects on an input conveyance system from an input area in at least an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component, receiving objects from the input conveyance system and providing perception data regarding an object responsive to the object falling in a perception system vertical direction that is generally opposite in direction to the input conveyance vertical direction component, transporting objects received from the perception system, using a primary transport system, along at least a primary transport vector including a primary transport horizontal direction component and a primary transport vertical component that is generally opposite the input conveyance horizontal direction component, and receiving the object from the primary transport system and moving the object in a direction that is generally parallel with the input conveyance horizontal direction component and the primary transport horizontal direction component.


In accordance with yet another embodiment, the invention provides an automated processing system for processing objects. The automated processing system includes an input conveyance system for moving objects from an input area toward a perception system, the perception system for receiving objects from the input conveyance system and for providing perception data regarding an object, a primary transport system for receiving the object from the perception system and for providing transport of the object along at least a primary transport vector, and a diverter system for providing the object to one of a plurality of processing locations, each processing location including a processing bin or box, wherein each of the processing bins or boxes is provided on at least one input bin conveyor system that is biased to urge the processing bins or boxes on the input conveyor system to one side of the input conveyor system.


In accordance with a further embodiment, the invention provides a method of processing objects. The method includes the steps of moving objects from an input area using an input conveyance system toward a perception system, receiving the objects from the input conveyance system and providing perception data regarding an object using a primary perception system, receiving the object from the primary perception system and providing transport of the object using a primary transport system along at least a primary transport vector, and diverting the object to one of a plurality of processing locations, each processing location including a processing bin or box, wherein each of the processing bins or boxes is provided on at least one input bin conveyor system that is biased to urge the processing bins or boxes toward one end of the input conveyor system.





BRIEF DESCRIPTION OF THE DRAWINGS

The following description may be further understood with reference to the accompanying drawings in which:



FIG. 1 shows an illustrative diagrammatic front view of an object processing system in accordance with an embodiment of the present invention;



FIG. 2 shows an illustrative diagrammatic processing side view of the system of FIG. 1;



FIG. 3 shows another illustrative diagrammatic rear view of the system of FIG. 1;



FIG. 4 shows an illustrative diagrammatic view of a programmable motion device processing station in the system of FIG. 1;



FIG. 5 shows an illustrative diagrammatic view of the perception system of FIGS. 2-4;



FIG. 6 shows an illustrative diagrammatic view from the perception system of FIGS. 2-4, showing a view of objects to be processed;



FIGS. 7A and 7B show illustrative diagrammatic views of a grasp selection process in an object processing system of an embodiment of the present invention;



FIGS. 8A and 8B show illustrative diagrammatic views of a grasp planning process in an object processing system of an embodiment of the present invention;



FIGS. 9A and 9B show illustrative diagrammatic views of a grasp execution process in an object processing system of an embodiment of the present invention;



FIG. 10 shows an illustrative diagrammatic front view of a drop perception system of FIG. 1;



FIG. 11 shows an illustrative diagrammatic rear view of a drop perception system of FIG. 1;



FIGS. 12A-12C show illustrative diagrammatic views of an object diverting system of FIG. 1;



FIG. 13 shows an illustrative diagrammatic view of a processing section in an object processing system in accordance with an embodiment of the invention wherein an object is placed in a carriage;



FIG. 14 shows an illustrative diagrammatic view of the processing section of FIG. 13 with the carriage having been moved along its track;



FIG. 15 shows an illustrative diagrammatic view of the processing section of FIG. 13 with the carriage having transferred its load to a destination bin;



FIGS. 16A and 16B show illustrative diagrammatic views of a bin removal mechanism for use in an object processing system in accordance with an embodiment of the invention;



FIG. 17 shows an illustrative diagrammatic view of the processing section of FIG. 13 with the carriage having returned to its base, and a removed destination bin being urged from its location;



FIG. 18 shows an illustrative diagrammatic view of the processing section of FIG. 13 with the removed destination bin being moved along an output conveyor;



FIG. 19 shows an illustrative diagrammatic exploded view of a box assembly for use as a storage bin or destination bin in accordance with various embodiments of the present invention;



FIG. 20 shows an illustrative diagrammatic view of the assembled box tray assembly of FIG. 19;



FIGS. 21A-21D show illustrative diagrammatic views of a further embodiment of a bin displacement system for use in further embodiments of the invention;



FIG. 22 shows an illustrative diagrammatic view of a flowchart showing selected processing steps in a system in accordance with an embodiment of the present invention; and



FIG. 23 shows an illustrative diagrammatic view of a flowchart showing bin assignment and management steps in a system in accordance with an embodiment of the present invention.





The drawings are shown for illustrative purposes only.


DETAILED DESCRIPTION

In accordance with an embodiment, the invention provides a space efficient automated processing system for processing objects. The system includes an input conveyance system, a perception system, a primary transport system, and at least two secondary transport systems. The input conveyance system is for moving objects from an input area in at least an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component. The perception system is for receiving objects from the input conveyance system and for providing perception data regarding an object. The primary transport system is for receiving the object from the perception system and for providing transport of the object along at least a primary transport vector including a primary transport horizontal component and a primary transport vertical component that is generally opposite the input conveyance horizontal direction component. Each of the at least two secondary transport systems receives the object from the primary transport system and moves the object in either of reciprocal directions that are each generally parallel with the input conveyance horizontal direction component and the primary transport horizontal direction component.


The described systems reliably automate the identification and conveyance of such objects, employing, in certain embodiments, a set of conveyors and sensors and a robot arm. In short, applicants have discovered that when automating sortation of objects, there are a few main things to consider: 1) the overall system throughput (objects sorted per hour), 2) the number of diverts (i.e., the number of discrete locations to which an object can be routed), 3) the total area of the sortation system (square feet), and 4) the annual costs to run the system (man-hours, electrical costs, cost of disposable components).


Processing objects in a distribution center (e.g., sorting) is one application for automatically identifying and moving objects. In a shipping distribution center, for example, objects commonly arrive in trucks, are conveyed to sortation stations where they are processed (e.g., sorted) according to desired destinations, aggregated in bags, and then loaded in trucks for transport to the desired destinations. Another application may be in the shipping department of a retail store or order fulfillment center, which may require that objects be processed for transport to different shippers, or to different distribution centers of a particular shipper. In a shipping or distribution center the objects may take the form of plastic bags, boxes, tubes, envelopes, or any other suitable container, and in some cases may also include objects not in a container. In a shipping or distribution center the desired destination is commonly obtained by reading identifying information printed on the object or on an attached label. In this scenario the destination corresponding to the identifying information is commonly obtained by querying the customer's information system. In other scenarios the destination may be written directly on the object, or may be known through other means.


In accordance with various embodiments, therefore, the invention provides a method of taking individual objects from a disorganized stream of objects, providing a generally singulated stream of objects, identifying individual objects, and processing them to desired destinations. The invention further provides methods for loading objects into the system, for conveying objects from one point to the next, for determining grasp locations and grasping objects, for excluding inappropriate or unidentifiable objects, for transferring objects from one conveyor to another, for aggregating objects and transferring to output conveyors, for digital communication within the system and with outside information systems, human operators and maintenance staff, and for maintaining a safe environment.


Important components of an automated object identification and processing system, in accordance with an embodiment of the present invention, include an input conveyance system, a perception system, a primary transport system, and secondary transport systems. FIG. 1 for example, shows a system 10 that includes an infeed area 12 into which objects may be dumped, e.g., by a dumper or transferred from a Gaylord. An infeed conveyor 14 conveys objects from the infeed area 12 to an intermediate conveyor 16 at a processing station 18. The infeed conveyor 14 may include cleats for assisting in lifting the objects from the input area 12 onto the intermediate conveyor 16.


The processing station 18 also includes a grasp perception system 20 that views the objects on the intermediate conveyor 16, and identifies grasp locations on the objects. The processing station 18 also includes a programmable motion device 22, such as an articulated arm, and a primary perception system 24 such as a drop perception unit. The grasp perception system 20 surveys the objects to identify objects when possible, and to determine good grasp points. The object is then grasped by the device 22, and dropped into the drop perception system 24 to ensure that the object is accurately identified. The object then falls through the primary perception system 24 onto a primary transport system 26, e.g., a conveyor. The primary transport system 26 carries the objects past one or more diverters 30, 32 that may be engaged to divert an object off of the primary transport system 26 into any of the carriages 34, 36, 38 (when the respective carriage is aligned with the diverter) or the input area 12. Each of the carriages 34, 36, 38 is reciprocally movable along a track that runs between rows of destination stations 130 of shuttle sections 132 (as discussed below in more detail).


The flow of objects is diagrammatically shown in FIG. 2, which shows that objects move from the infeed area 12 to the intermediate conveyor 16. The programmable motion device 22 drops the objects into the drop perception unit 24, and the objects then land on the primary transport system 26. The objects are then conveyed by the primary transport system 26 to diverters that selectively divert objects to carriages (e.g., 36, 38). Each carriage brings its object to one of a plurality of destination stations 130 (e.g., a processing box or a processing bin) and drops the object into the appropriate destination station. When a destination station is full or otherwise complete, the destination station is moved to an output conveyor.
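By way of illustration only, the destination-station bookkeeping described above may be sketched as a simple loop; the per-bin capacity value and the object/destination pairs below are hypothetical stand-ins for the actual control logic, not the disclosed implementation.

```python
def process(objects, capacity):
    """Track destination-bin fill levels as objects are diverted; when a
    bin reaches capacity it is marked complete (sent to the output
    conveyor) and an empty bin takes its place."""
    fill = {}
    completed = []
    for obj, dest in objects:
        fill[dest] = fill.get(dest, 0) + 1
        if fill[dest] >= capacity:
            completed.append(dest)   # full bin moves to the output conveyor
            fill[dest] = 0           # an empty bin replaces it
    return fill, completed
```

For example, with a hypothetical capacity of two objects per bin, diverting two objects to destination 1 marks that bin complete, while a single object at destination 2 leaves its bin in place.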



FIG. 3 shows a rear view of the system of FIG. 1 that more clearly shows the programmable motion device 22 and the drop perception system 24. The primary transport system 26 may be a cleated conveyor, and the objects may be dropped onto the cleated conveyor such that one object is provided per cleated section. The speeds of the conveyors 14 and 26 may also be controlled to assist in providing a singulated stream of objects to the diverters 30, 32. With reference again to FIG. 1, the destination stations 130 (again, e.g., bins or boxes) are provided on destination input conveyors 160, 162, which may be gravity fed such that bins or boxes thereon are biased to move toward the processing station 18 (as generally shown by corresponding arrows). The destination output conveyors 150, 152, 154 may also be gravity fed to permit finished bins or boxes to be provided away from the processing station 18 (again, as generally shown by corresponding arrows). In further embodiments, the conveyors 150, 152, 154, 160, 162 may be gravity biased in any direction, or may be actively power controlled. The system may operate using a computer processing control system 170 that communicates with the conveyor control systems, the perception units, the programmable motion device, the diverters, the box or bin removal systems (as discussed below), and any and all sensors that may be provided in the system.


With reference to FIG. 4, the processing station 18 of an embodiment includes a grasp perception system 20 that is mounted above the intermediate conveyor 16, which provides objects to be processed. The grasp perception system 20, for example and with reference to FIG. 5, may include (on the underside thereof), a camera 40, a depth sensor 42 and lights 44. A combination of 2D and 3D (depth) data is acquired. The depth sensor 42 may provide depth information that may be used together with the camera image data to determine depth information regarding the various objects in view. The lights 44 may be used to remove shadows and to facilitate the identification of edges of objects, and may be all on during use, or may be illuminated in accordance with a desired sequence to assist in object identification. The system uses this imagery and a variety of algorithms to generate a set of candidate grasp locations for the objects in the bin as discussed in more detail below.
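By way of illustration only, a much-simplified version of candidate grasp generation from depth data may be sketched as below. The window size and flatness threshold are hypothetical values, and the depth-variance test is a crude stand-in for the richer 2D/3D analysis performed by the grasp perception system 20.

```python
import numpy as np

def candidate_grasps(depth, window=5, flatness_thresh=0.002, max_candidates=10):
    """Propose grasp candidates as centers of locally flat patches in a
    depth image (meters). Flatness is approximated here by the depth
    variance within a small window."""
    h, w = depth.shape
    r = window // 2
    scored = []
    for y in range(r, h - r, window):
        for x in range(r, w - r, window):
            patch = depth[y - r:y + r + 1, x - r:x + r + 1]
            var = float(patch.var())
            if var < flatness_thresh:
                scored.append((var, (x, y)))
    scored.sort(key=lambda s: s[0])          # flattest patches first
    return [pt for _, pt in scored[:max_candidates]]
```

In this sketch, a region of the depth image with large local depth variation (e.g., a crumpled or tubular surface) produces no candidates, while flat regions do.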


The programmable motion device 22 may include a robotic arm equipped with sensors and computing that, when combined, is assumed herein to exhibit the following capabilities: (a) it is able to pick objects up from a singulated stream of objects using, for example, an end effector; (b) it is able to move the object to arbitrary places within its workspace; and (c) it is able to generate a map of objects that it is able to pick, represented as a candidate set of grasp points in the workcell, and as a list of polytopes enclosing the object in space. The allowable objects are determined by the capabilities of the robotic system. Their size, weight and geometry are assumed to be such that the robotic system is able to pick, move and place them. These may be any kind of ordered goods, packages, parcels, or other articles that benefit from automated processing.



FIG. 6 shows a representation of an image detected by the grasp perception system 20 as it views objects 50, 52, 54 on the intermediate conveyor 16. Superimposed on the objects 50, 52, 54 (for illustrative purposes) are anticipated grasp locations 60, 62, 64 of the objects. Note that while candidate grasp locations 60, 62, 64 appear to be good grasp locations, other grasp locations may not be good grasp locations if the location is too near an edge of an object, or if the grasp location is on a very irregular surface of the object or if the object is partially obscured by another object. Candidate grasp locations may be indicated using a 3D model of the robot end effector placed in the location where the actual end effector would go to use as a grasp location as shown in FIG. 6. Grasp locations may be considered good, for example, if they are close to the center of mass of the object to provide greater stability during grasp and transport, and/or if they avoid places on an object such as caps, seams etc. where a good vacuum seal might not be available.
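The grasp-quality criteria above (nearness to the center of mass, distance from edges) can be illustrated with a toy scoring function over a binary object mask. The edge-distance approximation and the two-pixel margin below are assumptions for the sketch only, not the disclosed method.

```python
import numpy as np

def score_grasp(candidate, mask):
    """Score a candidate (x, y) grasp point on a binary object mask:
    points nearer the object's centroid score higher, and points on the
    background or too close to the object boundary are rejected."""
    ys, xs = np.nonzero(mask)
    cx, cy = candidate
    if not mask[cy, cx]:
        return 0.0                            # not on the object at all
    # distance to the object's bounding box approximates edge proximity
    edge = min(abs(cx - xs.min()), abs(xs.max() - cx),
               abs(cy - ys.min()), abs(ys.max() - cy))
    if edge < 2:
        return 0.0                            # too near an edge to seal
    centroid = np.array([xs.mean(), ys.mean()])
    d = np.linalg.norm(np.array([cx, cy]) - centroid)
    return 1.0 / (1.0 + d)                    # higher near the center of mass
```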


If an object cannot be fully perceived by the detection system, the perception system considers the object to be two different objects, and may propose more than one candidate grasp for such two different objects. If the system executes a grasp at either of these bad grasp locations, it will either fail to acquire the object due to a bad grasp point where a vacuum seal will not occur (e.g., on the right), or will acquire the object at a grasp location that is very far from the center of mass of the object (e.g., on the left) and thereby induce a great deal of instability during any attempted transport. Each of these results is undesirable.


If a bad grasp location is experienced, the system may remember that location for the associated object. By identifying good and bad grasp locations, a correlation is established between features in the 2D/3D images and the idea of good or bad grasp locations. Using this data and these correlations as input to machine learning algorithms, the system may eventually learn, for each image presented to it, where to best grasp an object, and where to avoid grasping an object.


As shown in FIGS. 7A and 7B, the perception system may also identify portions of an object that are the most flat in the generation of good grasp location information. In particular, if an object includes a tubular end and a flat end such as object 50, the system would identify the more flat end as shown at 58 in FIG. 7B. Additionally, the system may select the area of an object where a UPC code appears, as such codes are often printed on a relatively flat portion of the object to facilitate scanning of the barcode.



FIGS. 8A and 8B show that for each object 80, 82, the grasp selection system may determine a direction that is normal to the selected flat portion of the object 80, 82. As shown in FIGS. 9A and 9B, the robotic system will then direct the end effector 84 to approach each object 80, 82 from the direction that is normal to the surface in order to better facilitate the generation of a good grasp on each object. By approaching each object from a direction that is substantially normal to a surface of the object, the robotic system significantly improves the likelihood of obtaining a good grasp of the object, particularly when a vacuum end effector is employed.
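The surface-normal determination may be illustrated by fitting a plane to a small depth patch and taking the plane's normal as the approach direction. The sketch below assumes a simplified orthographic/unit-focal-length model and ignores lens distortion; it is an illustration, not the disclosed implementation.

```python
import numpy as np

def surface_normal(depth, x, y, r=2):
    """Estimate the surface normal at pixel (x, y) by least-squares
    fitting a plane z = a*x + b*y + c to a small depth patch; the end
    effector would approach the object along the negated normal."""
    patch = depth[y - r:y + r + 1, x - r:x + r + 1]
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(patch.size)])
    (a, b, _), *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    n = np.array([-a, -b, 1.0])     # normal of the plane z = a*x + b*y + c
    n /= np.linalg.norm(n)
    return n
```

On a flat, level patch this yields a straight-down normal; on a surface tilted along x, the normal tilts correspondingly, so the approach direction is no longer a vertical "gantry" pick.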


In certain embodiments, therefore, the invention provides that grasp optimization may be based on determination of a surface normal, i.e., moving the end effector to be normal to the perceived surface of the object (as opposed to vertical or “gantry” picks), and that such grasp points may be chosen using fiducial features as grasp points, such as picking on a barcode, given that barcodes are almost always applied to a flat spot on the object. The invention also provides operator assist, where an object that the system has repeatedly failed to grasp has a correct grasp point identified by a human, as well as operator assist where the operator identifies bad grasp plans, thus removing them and saving the time of the system attempting to execute them.


In accordance with various embodiments therefore, the invention further provides a sortation system that may learn object grasp locations from experience and human guidance. Systems designed to work in the same environments as human workers will face an enormous variety of objects, poses, etc. This enormous variety almost ensures that the robotic system will encounter some configuration of object(s) that it cannot handle optimally; at such times, it is desirable to enable a human operator to assist the system and have the system learn from non-optimal grasps.


The system optimizes grasp points based on a wide range of features, either extracted offline or online, tailored to the gripper's characteristics. The properties of the suction cup influence its adaptability to the underlying surface, hence an optimal grasp is more likely to be achieved when picking on the estimated surface normal of an object rather than performing vertical gantry picks common to current industrial applications.


In addition to geometric information, the system uses appearance-based features, since depth sensors may not always be accurate enough to provide sufficient information about graspability. For example, the system can learn the locations of fiducials such as barcodes on the object, which can be used as an indicator of a surface patch that is flat and impermeable, and hence suitable for a suction cup. One such example is shipping boxes and bags, which tend to have the shipping label at the object's center of mass and provide an impermeable surface, as opposed to the raw bag material, which might be slightly porous and hence not present a good grasp.


By identifying bad or good grasp points on the image, a correlation is established between features in the 2D/3D imagery and the idea of good or bad grasp points; using this data and these correlations as input to machine learning algorithms, the system can eventually learn, for each image presented to it, where to grasp and where to avoid.
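One simple instance of the machine learning step described above is a logistic-regression scorer over hand-crafted grasp-point features. The particular learner and the example features (surface flatness, proximity to a fiducial) are illustrative assumptions; the disclosure leaves the choice of algorithm open.

```python
import numpy as np

def train_grasp_scorer(features, outcomes, lr=0.1, epochs=500):
    """Learn weights correlating grasp-point features (e.g. surface
    flatness, on-fiducial flag) with pick success (1) or failure (0),
    via plain gradient descent on logistic loss."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted success probability
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def grasp_score(w, b, feature_vec):
    """Estimated probability that a candidate grasp point will succeed."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(feature_vec, dtype=float) @ w + b)))
```

In use, candidate grasp points on a new image would be ranked by `grasp_score`, and every pick outcome appended to the training set, so the scorer improves with experience.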


This information is added to the experience-based data the system collects with every pick attempt, successful or not. Over time the robot learns to avoid features that result in unsuccessful grasps, either specific to an object type or to a surface/material type. For example, the robot may prefer to avoid picks on shrink wrap, no matter which object it is applied to, yet prefer to place the grasp near fiducials only on certain object types, such as shipping bags.


This learning can be accelerated by off-line generation of human-corrected images. For instance, a human could be presented with thousands of images from previous system operation and manually annotate good and bad grasp points on each one. This would generate a large amount of data that could also be input into the machine learning algorithms to enhance the speed and efficacy of the system learning.


In addition to experience-based or human-expert-based training data, a large set of labeled training data can be generated based on a detailed object model in a physics simulation, making use of known gripper and object characteristics. This allows fast and dense generation of graspability data over a large set of objects, as the process is not limited by the speed of the physical robotic system or of human input.


The correct processing destination is determined from the symbol (e.g., barcode) on the object. It is assumed that the objects are marked in one or more places on their exterior with a visually distinctive mark, such as a barcode, or with a radio-frequency identification (RFID) tag, so that they may be identified with a scanner. The type of marking depends on the type of scanning system used, but may include 1D or 2D barcode symbologies. Multiple symbologies or labeling approaches may be employed. The types of scanners employed are assumed to be compatible with the marking approach. The marking, whether by barcode, RFID tag, or other means, encodes a symbol string, which is typically a string of letters and numbers that identifies the object.


Once grasped, the object may be moved by the programmable motion device 22 to a primary perception system 24 (such as a drop scanner). The object may even be dropped into the perception system 24. In further embodiments, if a sufficiently singulated stream of objects is provided on the intermediate conveyor 16, the programmable motion device may be provided as a diverter (e.g., a push or pull bar) that diverts objects off of the intermediate conveyor into the drop scanner. Additionally, the movement speed and direction of the intermediate conveyor 16 (as well as those of the infeed conveyor 14) may be controlled to further facilitate providing a singulated stream of objects on the intermediate conveyor 16 adjacent the drop scanner.


As further shown in FIGS. 10 and 11, the primary perception system 24 may include a structure 102 having a top opening 104 and a bottom opening 106, and may be covered by an enclosing material 108. The structure 102 includes a plurality of sources (e.g., illumination sources such as LEDs) 110 as well as a plurality of image perception units (e.g., cameras) 112. The sources 110 may be provided in a variety of arrangements, and each may be directed toward the center of the opening. The perception units 112 are also generally directed toward the opening, although some cameras are directed horizontally, while others are directed upward, and some are directed downward. The system 24 also includes an entry source (e.g., infrared source) 114 as well as an entry detector (e.g., infrared detector) 116 for detecting when an object has entered the perception system 24. The LEDs and cameras therefore encircle the inside of the structure 102, and the cameras are positioned to view the interior via windows that may include a glass or plastic covering (e.g., 118).


An aspect of certain embodiments of the present invention is the ability to identify objects via barcodes or other visual markings by employing a perception system into which objects may be dropped. Automated scanning systems would otherwise be unable to see barcodes on objects that are presented in such a way that their barcodes are not exposed or visible. The system 24 is therefore designed to view an object from a large number of different views very quickly, reducing or eliminating the possibility that the system 24 will be unable to view identifying indicia on an object.


A key feature of the perception system is its specific design, which maximizes the probability of a successful scan while simultaneously minimizing the average scan time. The probability of a successful scan and the average scan time are the system's key performance characteristics, and they are determined by the configuration and properties of the perception system, as well as by the object set and how the objects are marked.


The two key performance characteristics may be optimized for a given item set and method of labeling. Parameters of the optimization include how many scanners to use, where and in what orientation to place them, and what sensor resolutions and fields of view the scanners should have. Optimization can be done through trial and error, or by simulation with models of the objects.


Optimization through simulation employs a scanner performance model. A scanner performance model specifies the range of positions, orientations and barcode element sizes over which an identifying symbol can be detected and decoded by the scanner, where the barcode element size is the size of the smallest feature on the symbol. Scanners are typically rated with a minimum and maximum range, a maximum skew angle, a maximum pitch angle, and a minimum and maximum tilt angle.


Typical performance for a camera-based scanner is that it is able to detect symbols within some range of distances as long as both the pitch and the skew of the plane of the symbol are within plus or minus 45 degrees, while the tilt of the symbol can be arbitrary (between 0 and 360 degrees). The scanner performance model predicts whether a given symbol in a given position and orientation will be detected.
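The scanner performance model just described reduces to a simple predicate over the symbol's pose relative to a camera. A minimal sketch follows; the numeric working-range limits (in millimeters) are illustrative assumptions, since the disclosure only states the angular limits.

```python
def symbol_detectable(distance, pitch, skew, tilt,
                      min_range=100.0, max_range=600.0, max_angle=45.0):
    """Scanner performance model sketch: a symbol is detected when it lies
    within the scanner's working range and both pitch and skew are within
    +/- max_angle degrees. Tilt (0-360 degrees) is accepted for completeness
    but unconstrained, as for typical camera-based scanners."""
    return (min_range <= distance <= max_range
            and abs(pitch) <= max_angle
            and abs(skew) <= max_angle)
```

A simulation would evaluate this predicate for every candidate symbol pose drawn from the symbol pose model, for each scanner in the proposed configuration.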


The scanner performance model is coupled with a model of where symbols are expected to be positioned and oriented. A symbol pose model is the range of all positions and orientations (in other words, poses) in which a symbol is expected to be found. For a scanner that views objects held by the robotic system, the symbol pose model is a combination of an article gripping model, which predicts how objects will be held by the robotic system, and a symbol-item appearance model, which describes the possible placements of the symbol on the object. For the drop scanner, the symbol pose model is a combination of the symbol-item appearance model and an inbound-object pose model, which models the distribution of poses in which inbound articles are presented to the scanner. These models may be constructed empirically, modeled analytically, or approximated using simple sphere models for objects and uniform distributions over the sphere as a symbol-item appearance model.
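The approximate model mentioned last (a sphere with the symbol uniformly distributed over it) lends itself to a Monte Carlo estimate of scan success probability for a candidate camera layout. This sketch assumes each camera detects a symbol whose outward normal lies within a 45-degree cone of the camera's viewing axis; the function names and cone model are assumptions for illustration.

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on the sphere: the simple symbol-item appearance
    model (symbol equally likely anywhere on a spherical object)."""
    z = rng.uniform(-1.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def scan_success_probability(camera_dirs, max_angle_deg=45.0,
                             trials=20000, seed=0):
    """Monte Carlo estimate of the fraction of random symbol orientations
    seen within +/- max_angle_deg of at least one camera's viewing axis.
    camera_dirs are unit vectors from the object toward each camera."""
    rng = random.Random(seed)
    cos_lim = math.cos(math.radians(max_angle_deg))
    hits = 0
    for _ in range(trials):
        n = random_unit_vector(rng)  # the symbol's outward normal
        if any(n[0] * c[0] + n[1] * c[1] + n[2] * c[2] >= cos_lim
               for c in camera_dirs):
            hits += 1
    return hits / trials
```

Under this model a single camera sees a uniformly oriented symbol with probability (1 - cos 45°)/2, about 0.15, while six cameras along the +/- x, y and z axes (roughly the encircling arrangement of system 24) raise the estimate to about 0.88, which is why many views are needed.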


Following detection by the perception unit 24, the now positively identified object drops onto the primary transport system 26 (e.g., a conveyor). With reference again to FIGS. 1 and 3, the primary transport system 26 moves the identified object toward diverters 30, 32 that are selectively engageable to divert the object off of the conveyor into any of carriages 34, 36, 38; if the object could not be identified, it may be either returned to the input area 12 or dropped off of the end of the conveyor 26 into a manual processing bin. Each carriage 34, 36, 38 is reciprocally movable among destination bins 130 of one of a plurality of destination sections 132. Efficiencies in space may be provided in accordance with certain embodiments by having objects first move from the input area 12 along the infeed conveyor 14 in a direction that includes a horizontal component and a vertical component. The object then drops through the drop scanner 24 (vertically) and lands on the primary transport conveyor 26, which also moves the object in a direction that has a horizontal component (opposite in direction to that of the infeed conveyor 14) and a vertical component. The object is then moved horizontally by a carriage 36, 38, and dropped (vertically) above a target destination station 130, such as a destination bin.


With reference to FIGS. 12A-12B, a diverter unit (e.g., 32) may be actuated to urge an object (e.g., 35) off of the conveyor 26 into a selected carriage (e.g., 38) that runs along a rail 39 between destination locations. The diverter unit may include a pair of paddles 31 that are suspended by a frame 33 that permits the paddles to be actuated linearly to move an object off of the conveyor in either direction transverse to the conveyor. Again, with reference to FIG. 1, one direction of diversion for diverter 30, is to return an object to the infeed area 12.


Systems of various embodiments provide numerous advantages because of the inherent dynamic flexibility. The flexible correspondence between sorter outputs and destinations provides that there may be fewer sorter outputs than destinations, so the entire system may require less space. The flexible correspondence between sorter outputs and destinations also provides that the system may choose the most efficient order in which to handle objects, in a way that varies with the particular mix of objects and downstream demand. The system is also easily scalable, by adding sorters, and more robust since the failure of a single sorter might be handled dynamically without even stopping the system. It should be possible for sorters to exercise discretion in the order of objects, favoring objects that need to be handled quickly, or favoring objects for which the given sorter may have a specialized gripper.



FIG. 13 shows the destination section 244 (such as any of the sections 132) that includes a movable carriage 242 that may receive an object 254 from the end effector of the programmable motion device. The movable carriage 242 is reciprocally movable between two rows of the destination bins 246 along a guide rail 245. As shown in FIG. 13, each destination bin 246 includes a guide chute 247 that guides an object dropped therein into the underlying destination bin 246. The carriage 242 moves along the rail 245 (as further shown in FIG. 14), and may be actuated to drop an object 254 into a desired destination bin 246 via a guide chute 247 (as shown in FIG. 15).


The movable carriage 242 is therefore reciprocally movable between the destination bins; each carriage moves along a track and may be actuated to drop an object into a desired destination bin 246. The destination bins may be provided on a conveyor (e.g., rollers or a belt), and may be biased (for example by gravity) to urge all destination bins toward one end (for example, the distal end). When a destination bin is selected for removal (e.g., because the bin is full or otherwise ready for further processing), the system will urge the completed bin onto an output conveyor to be brought to a further processing or shipment station. The conveyor may be biased (e.g., by gravity) or powered to cause any bin on the conveyor to be brought to an output location.



FIGS. 16A and 16B show a bin 251 being urged from the plurality of destination bins 246, onto the output conveyor 248 by the use of a displacement mechanism 255. In accordance with further embodiments, the destination bins may be provided as boxes or containers or any other type of device that may receive and hold an item, including box tray assemblies as discussed below.


Following displacement of the bin 251 onto the conveyor 248 (as shown in FIG. 17), each of the remaining destination bins may be urged together (as shown in FIG. 18) and the system will record the change in position of any of the bins that moved. This way, a new empty bin may be added to the end, and the system will record the correct location and identified processing particulars of each of the destination bins.


As noted above, the bins 246 may be provided as boxes, totes, containers or any other type of device that may receive and hold an item. In further embodiments, the bins may be provided in uniform trays (to provide consistency of spacing and processing) and may further include open covers that may maintain the bin in an open position, and may further provide consistency in processing through any of spacing, alignment, or labeling.


For example, FIG. 19 shows an exploded view of a box tray assembly 330. As shown, the box 332 (e.g., a standard shipping sized cardboard box) may include bottom 331 and side edges 333 that are received by a top surface 335 and inner sides 337 of a box tray 334. The box tray 334 may include a recessed (protected) area in which a label or other identifying indicia 346 may be provided, as well as a wide and smooth contact surface 351 that may be engaged by an urging or removal mechanism as discussed below.


As also shown in FIG. 19, the box 332 may include top flaps 338 that, when opened as shown, are held open by inner surfaces 340 of the box cover 336. The box cover 336 may also include a recessed (protected) area in which a label or other identifying indicia 345 may be provided. The box cover 336 also provides a defined rim opening 342, as well as corner elements 344 that may assist in providing structural integrity of the assembly, and may assist in stacking un-used covers on one another. Un-used box trays may also be stacked on each other.


The box 332 is thus maintained securely within the box tray 334, and the box cover 336 provides that the flaps 338 remain down along the outside of the box, permitting the interior of the box to be accessible through the opening 342 in the box cover 336. FIG. 20 shows a width-side view of the box tray assembly 330 with the box 332 securely seated within the box tray 334, and the box cover 336 holding open the flaps 338 of the box 332. The box tray assemblies may be used as either or both of the storage bins and destination bins in various embodiments of the present invention. In various embodiments, the bins or boxes may further include a collection bag placed in the bin or box prior to receiving objects.


With reference to FIGS. 21A-21D, a box kicker 384 in accordance with an embodiment of the present invention may be suspended by and travel along a track 386, and may include a rotatable arm 388 with a roller wheel 390 at the end of the arm 388. With reference to FIGS. 21B-21D, when the roller wheel 390 contacts the kicker plate 351 (shown in FIG. 19) of a box tray assembly 381, the arm 388 continues to rotate, urging the box tray assembly 381 from a first conveyor 382 to a second conveyor 380. Such a system may be used to provide that boxes that are empty or finished being unloaded may be removed (e.g., from conveyor 382), or that boxes that are full or finished being loaded may be removed (e.g., from conveyor 382). The conveyors 380, 382 may also be coplanar, and the system may further include a transition roller 383 to facilitate movement of the box tray assembly 381, e.g., by being activated to pull the box tray over to the conveyor 380.


Systems of the invention are highly scalable in terms of sorts-per-hour as well as the number of storage bins and destination bins that may be available. The system provides in a specific embodiment an input system that interfaces to the customer's conveyors and containers, stores objects for feeding into the system, and feeds those objects into the system at a moderate and controllable rate. In one embodiment, the interface to the customer's process takes the form of a dumper from a Gaylord, but many other embodiments are possible. In one embodiment, feeding into the system is by an inclined cleated conveyor with overhead flow restrictors, e.g., baffles. In accordance with certain embodiments, the system feeds objects in at a modest controlled rate. Many options are available, including variations in the conveyor slope and speed, the presence, size and structure of cleats and baffles, and the use of sensors to monitor and control the feed rate.


The system includes in a specific embodiment a primary perception system that monitors the stream of objects on the primary conveyor. Where possible the primary perception system may identify the object to speed or simplify subsequent operations. For example, knowledge of the objects on the primary conveyor may enable the system to make better choices regarding which objects to move to provide a singulated stream of objects.


With reference to FIG. 22, a sortation process of the invention at a sorting station may begin (step 400) by providing a singulated stream of objects that are dropped, one at a time, into the drop scanner (step 402). The system then identifies the new object (step 404) and determines whether the object has yet been assigned to any collection bin (step 406). If not, the system determines whether a next bin is available (step 408). If no next bin is available, the robotic system returns the object to the input buffer (step 410) and returns to step 402. Alternatively, the system can pick one of the collection bins that is in process and decide that it can be emptied to be reused for the object in hand, at which point the control system can empty the collection bin or signal a human worker to do it. If a next bin is available (and the system may permit any number of bins per station), the system assigns the object to a next bin (step 412) and places the object into the assigned bin (step 414). The system then returns to step 402 until finished. Again, in certain embodiments, the secondary conveyor may be an indexed conveyor that moves in increments each time an object is dropped onto it. The system may then register the identity of the object, access a warehouse manifest, and determine an assigned bin location or assign a new bin location.
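The bin-assignment branch of the FIG. 22 process can be sketched as a small lookup routine. The data structures (a manifest mapping object to destination, a destination-to-bin assignment map, and a free-bin list) are illustrative assumptions about how such state might be held.

```python
def assign_bin(object_id, manifest, bin_assignment, free_bins):
    """Sketch of the FIG. 22 bin-assignment logic: look up the object's
    destination in the manifest, reuse an existing bin for that destination
    if one is assigned, otherwise claim the next free bin. Returns None
    when no bin is available (the object would be returned to the
    input buffer, per step 410)."""
    destination = manifest[object_id]       # step 404: identify the object
    if destination in bin_assignment:       # step 406: already assigned?
        return bin_assignment[destination]
    if not free_bins:                       # step 408: is a next bin free?
        return None                         # step 410: return to input buffer
    bin_id = free_bins.pop(0)               # step 412: assign the next bin
    bin_assignment[destination] = bin_id
    return bin_id
```

In use, a second object bound for an already-assigned destination lands in the same bin, and the first object for a new destination claims a fresh bin until the station's bins are exhausted.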


A process of the overall control system is shown, for example, in FIG. 23. The overall control system may begin (step 500) by permitting a new collection bin at each station to be assigned to a group of objects based on overall system parameters (step 502), as discussed in more detail below. The system then identifies the assigned bins correlated with objects at each station (step 504), and updates the number of objects in each bin at each station (step 506). When the system determines that a bin is either full or that the associated sorting station is unlikely to see another object associated with the bin, the associated sorting station's robotic system places the completed bin onto an output conveyor, or signals a human worker to come and empty the bin (step 508); the process then returns to step 502.
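The bin-completion test of step 508 can be sketched as a check over per-bin counts. The inputs (current counts, capacities, and a count of objects still expected upstream for each bin) are assumed bookkeeping structures, not part of the disclosure.

```python
def completed_bins(bin_counts, capacities, expected_remaining):
    """Sketch of the FIG. 23 completion check (step 508): a bin is complete
    when it is full, or when no more objects destined for it are expected
    at the associated sorting station."""
    done = []
    for bin_id, count in bin_counts.items():
        full = count >= capacities[bin_id]
        no_more_expected = expected_remaining.get(bin_id, 0) == 0
        if full or no_more_expected:
            done.append(bin_id)
    return done
```

The control loop would call this after each count update (step 506) and route every returned bin to the output conveyor or to a human worker.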


The operations of the systems described herein are coordinated by the central control system 170, as shown in FIGS. 1 and 3. The central control system comprises one or more workstations or central processing units (CPUs). The correspondence between barcodes, for example, and outbound destinations is maintained by the central control system in a database called a manifest; the central control system maintains the manifest by communicating with a warehouse management system (WMS). If the perception system successfully recognizes a marking on the object, the object is identified and forwarded to an assigned destination station 130. Again, if the object is not identified, the robotic system may divert the object to a human sortation bin 76 to be reviewed by a human.


Those skilled in the art will appreciate that numerous modifications and variations may be made to the above disclosed embodiments without departing from the spirit and scope of the present invention.

Claims
  • 1. A method of processing objects, said method comprising: receiving a plurality of objects at an in-feed area at which the plurality of objects may be indiscriminately received at the in-feed area;lifting a subset of the plurality of objects from the in-feed area using an input conveyance system toward a perception system in an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component, said input conveyance system including engagement features for engaging the subset of the plurality of objects;providing a singulated stream of the subset of the plurality of objects;receiving the singulated stream of the subset of the plurality of objects at a perception system, said perception system being adapted to provide perception data regarding each of the subset of the plurality of objects;assigning for each of the subset of the plurality of objects a selected destination location of a plurality of destination locations based on the perception data for each of the subset of the plurality of objects;moving each of the subset of the plurality of objects along a processing conveyance vector that includes a processing horizontal direction component that is generally opposite the input conveyance horizontal direction component; anddiverting each of the subset of the plurality of objects to each selected destination location of the plurality of processing locations in a diverter direction that includes a diverter horizontal direction component that is generally orthogonal to the input conveyance horizontal direction component.
  • 2. The method of claim 1, wherein the in-feed area includes an in-feed conveyor section with side walls to facilitate containing the plurality of objects on the in-feed conveyor section.
  • 3. The method of claim 2, wherein the in-feed conveyor section is a cleated conveyor.
  • 4. The method of claim 1, wherein the input conveyance system includes a cleated conveyor.
  • 5. The method of claim 1, wherein the providing the singulated stream of the subset of the plurality of objects includes using a programmable motion device to individually select each of the subset of the plurality of objects from the in-feed conveyance system.
  • 6. The method of claim 5, wherein the programmable motion device includes an associated in-feed perception system for providing in-feed perception data regarding the subset of the plurality of objects.
  • 7. The method of claim 1, wherein the input conveyance system operates at a speed that is controlled responsive to the in-feed perception data.
  • 8. The method of claim 1, wherein the diverting each of the subset of the plurality of objects to each selected destination location of the plurality of processing locations includes using an orthogonal diverter that diverts each respective object from a destination processing conveyor of the processing conveyance vector along a diverting direction that is generally orthogonal to the input conveyance horizontal direction component.
  • 9. The method of claim 8, wherein a return selected destination of the plurality of destination locations is the input area such that a return object of the subset of the plurality of objects may be returned from the destination processing conveyor to the input area.
  • 10. The method of claim 8, wherein each of the plurality of destination locations may be dynamically assigned to each of the subset of the plurality of objects.
  • 11. A method of processing objects, said method comprising: receiving a plurality of objects at an in-feed area at which the plurality of objects may be indiscriminately received at the in-feed area;lifting a subset of the plurality of objects from the in-feed area using an input conveyance system toward a perception system in an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component, said input conveyance system including engagement features for engaging the subset of the plurality of objects;receiving the subset of the plurality of objects at a perception system, said perception system being adapted to provide perception data regarding each of the subset of the plurality of objects;assigning for each of the subset of the plurality of objects a selected destination location of a plurality of destination locations based on the perception data for each of the subset of the plurality of objects;moving each of the subset of the plurality of objects in a singulated stream of the subset of the plurality of objects along a processing conveyance vector that includes a processing horizontal direction component that is generally opposite the input conveyance horizontal direction component;diverting each of the subset of the plurality of objects to each selected destination location of the plurality of processing locations in a diverter direction that includes a diverter horizontal direction component that is generally orthogonal to the input conveyance horizontal direction component; andreceiving each of the subset of the plurality of objects at each respective destination location.
  • 12. The method of claim 11, wherein the in-feed area includes an in-feed conveyor section with side walls to facilitate containing the plurality of objects on the in-feed conveyor section.
  • 13. The method of claim 12, wherein the in-feed conveyor section is a cleated conveyor.
  • 14. The method of claim 11, wherein the input conveyance system includes a cleated conveyor.
  • 15. The method of claim 11, wherein the receiving the subset of the plurality of objects from the perception system includes using a programmable motion device to individually select each of the subset of the plurality of objects from the input conveyance system and provide them to the perception system.
  • 16. The method of claim 15, wherein the programmable motion device includes an associated in-feed perception system for providing in-feed perception data regarding the subset of the plurality of objects.
  • 17. The method of claim 11, wherein the input conveyance system operates at a speed that is controlled responsive to the in-feed perception data.
  • 18. The method of claim 11, wherein the diverting each of the subset of the plurality of objects to each selected destination location of the plurality of processing locations includes using an orthogonal diverter that diverts each respective object from a destination processing conveyor of the processing conveyance vector along a diverting direction that is generally orthogonal to the input conveyance horizontal direction component.
  • 19. The method of claim 18, wherein a return selected destination of the plurality of destination locations is the input area such that a return object of the subset of the plurality of objects may be returned from the destination processing conveyor to the input area.
  • 20. The method of claim 18, wherein each of the plurality of destination locations may be dynamically assigned to each of the subset of the plurality of objects.
  • 21. An object processing system comprising: an in-feed area at which a plurality of objects may be indiscriminately received, said in-feed area including at least one wall for facilitating retaining the plurality of objects at the in-feed area;an input conveyance system for lifting a subset of the plurality of objects from the in-feed area toward a perception system in an input conveyance vector that includes an input conveyance horizontal direction component and an input conveyance vertical direction component, said input conveyance system including engagement features for engaging the subset of the plurality of objects;a perception system for receiving the subset of the plurality of objects and for providing perception data regarding each of the subset of the plurality of objects;a control system for assigning each of the subset of the plurality of objects a selected destination location of a plurality of destination locations based on the perception data for each of the subset of the plurality of objects;a processing conveyor for moving each of the subset of the plurality of objects along a processing conveyance vector that includes a processing horizontal direction component that is generally opposite the input conveyance horizontal direction component; anda plurality of diverters for diverting the each of the subset of the plurality of objects to each selected destination location of the plurality of processing locations in a diverter direction that includes a diverter horizontal direction component that is generally orthogonal to the input conveyance horizontal direction component.
  • 22. The object processing system of claim 21, wherein the at least one wall at the in-feed area is one of two walls on either side of an in-feed conveyor section.
  • 23. The object processing system of claim 22, wherein the in-feed conveyor section is a cleated conveyor.
  • 24. The object processing system of claim 21, wherein the input conveyance system includes a cleated conveyor.
  • 25. The object processing system of claim 21, wherein the object processing system further includes a programmable motion device for individually selecting each of the subset of the plurality of objects from the input conveyance system and providing them to the perception system.
  • 26. The object processing system of claim 25, wherein the programmable motion device includes an associated in-feed perception system for providing in-feed perception data regarding the subset of the plurality of objects.
  • 27. The object processing system of claim 21, wherein the input conveyance system operates at a speed that is controlled responsive to the in-feed perception data.
  • 28. The object processing system of claim 21, wherein each of the plurality of diverters divert each of the subset of the plurality of objects to each selected destination location of the plurality of processing locations is an orthogonal diverter that diverts each respective object from a destination processing conveyor of the processing conveyance vector along a diverting direction that is generally orthogonal to the input conveyance horizontal direction component.
  • 29. The object processing system of claim 28, wherein a return selected destination of the plurality of destination locations is the in-feed area such that a return object of the subset of the plurality of objects may be returned from the destination processing conveyor to the in-feed area.
  • 30. The object processing system of claim 28, wherein each of the plurality of destination locations may be dynamically assigned to each of the subset of the plurality of objects.
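The control flow recited in claims 21 and 28-30 — perceive an object, dynamically assign it a destination location, and return objects that cannot be placed to the in-feed area — can be sketched in code. The following is a minimal illustrative Python sketch, not the claimed implementation; the class, method, and sort-group names are hypothetical, and the claimed system operates on physical conveyors and diverters rather than in-memory state.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ControlSystem:
    """Illustrative sketch of the claimed routing logic: each perceived
    object is assigned a destination location, destinations are assigned
    dynamically as new sort groups appear (claim 30), and objects that
    cannot be assigned are returned to the in-feed area (claim 29)."""
    num_destinations: int
    # destination location id -> sort group currently assigned to it
    assignments: dict = field(default_factory=dict)
    IN_FEED = "in-feed"  # return destination for unroutable objects

    def route(self, perception_data: Optional[str]):
        # No usable perception data: return the object to the in-feed area.
        if perception_data is None:
            return self.IN_FEED
        # A destination already assigned to this sort group is reused.
        for dest, group in self.assignments.items():
            if group == perception_data:
                return dest
        # Otherwise dynamically assign the next free destination location.
        for dest in range(self.num_destinations):
            if dest not in self.assignments:
                self.assignments[dest] = perception_data
                return dest
        # All destinations occupied: recirculate via the in-feed area.
        return self.IN_FEED

ctrl = ControlSystem(num_destinations=2)
print(ctrl.route("route-A"))  # 0: first group claims destination 0
print(ctrl.route("route-B"))  # 1: second group claims destination 1
print(ctrl.route("route-A"))  # 0: existing assignment is reused
print(ctrl.route(None))       # 'in-feed': unidentified object returned
print(ctrl.route("route-C"))  # 'in-feed': no free destination remains
```

In a physical system the returned value would select one of the orthogonal diverters along the processing conveyor; the sketch only captures the assignment bookkeeping.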
PRIORITY

The present application is a continuation of U.S. patent application Ser. No. 17/395,180, filed Aug. 5, 2021; which is a continuation of U.S. patent application Ser. No. 16/867,127, filed May 5, 2020, now U.S. Pat. No. 11,126,807, issued Sep. 21, 2021; which is a continuation of U.S. patent application Ser. No. 16/543,105, filed Aug. 16, 2019, now U.S. Pat. No. 10,796,116, issued Oct. 6, 2020; which is a continuation of U.S. patent application Ser. No. 15/956,442, filed Apr. 18, 2018, now U.S. Pat. No. 10,438,034, issued Oct. 8, 2019, which claims priority to U.S. Provisional Patent Application Ser. No. 62/486,783, filed Apr. 18, 2017, the disclosures of which are hereby incorporated by reference in their entireties.

US Referenced Citations (181)
Number Name Date Kind
3592326 Zimmerle et al. Jul 1971 A
3595407 Muller-Kuhn et al. Jul 1971 A
3734286 Simjian May 1973 A
3983988 Maxted et al. Oct 1976 A
4136780 Hunter et al. Jan 1979 A
4186836 Wassmer et al. Feb 1980 A
4360098 Nordstrom Nov 1982 A
4560060 Lenhart Dec 1985 A
4622875 Emery et al. Nov 1986 A
4722653 Williams et al. Feb 1988 A
4759439 Hartlepp Jul 1988 A
4819784 Sticht Apr 1989 A
4846335 Hartlepp Jul 1989 A
4895242 Michel Jan 1990 A
5119306 Metelits et al. Jun 1992 A
5190162 Hartlepp Mar 1993 A
5326219 Pippin et al. Jul 1994 A
5419457 Ross et al. May 1995 A
5460271 Kenny et al. Oct 1995 A
5585917 Woite et al. Dec 1996 A
5672039 Perry et al. Sep 1997 A
5713473 Satake et al. Feb 1998 A
5794788 Massen Aug 1998 A
5794789 Payson et al. Aug 1998 A
5806661 Martin et al. Sep 1998 A
5839566 Bonnet Nov 1998 A
5875434 Matsuoka et al. Feb 1999 A
5990437 Coutant et al. Nov 1999 A
6060677 Ulrichsen et al. May 2000 A
6124560 Roos et al. Sep 2000 A
6246023 Kugle Jun 2001 B1
6311892 O'Callaghan et al. Nov 2001 B1
6323452 Bonnet Nov 2001 B1
6401936 Isaacs et al. Jun 2002 B1
6688459 Bonham et al. Feb 2004 B1
6762382 Danelski Jul 2004 B1
6897395 Shiibashi et al. May 2005 B2
7306086 Boelaars Dec 2007 B2
8560406 Antony Oct 2013 B1
8731711 Joplin et al. May 2014 B1
8776694 Rosenwinkel et al. Jul 2014 B2
8997438 Fallas Apr 2015 B1
9020632 Naylor Apr 2015 B2
9102336 Rosenwinkel Aug 2015 B2
9174758 Rowley et al. Nov 2015 B1
9364865 Kim Jun 2016 B2
9650214 Hoganson May 2017 B2
9751693 Battles et al. Sep 2017 B1
9878349 Crest et al. Jan 2018 B2
9926138 Brazeau et al. Mar 2018 B1
9931673 Nice et al. Apr 2018 B2
9962743 Bombaugh et al. May 2018 B2
9975148 Zhu et al. May 2018 B2
10029865 McCalib, Jr. et al. Jul 2018 B1
10198710 Hahn et al. Feb 2019 B1
10206519 Gyori et al. Feb 2019 B1
10438034 Wagner et al. Oct 2019 B2
10538394 Wagner et al. Jan 2020 B2
10576621 Wagner et al. Mar 2020 B2
10577180 Mehta et al. Mar 2020 B1
10611021 Wagner et al. Apr 2020 B2
10809122 Danenberg et al. Oct 2020 B1
10810715 Chamberlin Oct 2020 B2
10853757 Hill et al. Dec 2020 B1
11055504 Wagner et al. Jul 2021 B2
11080496 Wagner et al. Aug 2021 B2
11126807 Wagner et al. Sep 2021 B2
11200390 Wagner et al. Dec 2021 B2
11205059 Wagner et al. Dec 2021 B2
11416695 Wagner et al. Aug 2022 B2
11481566 Wagner et al. Oct 2022 B2
11537807 Wagner et al. Dec 2022 B2
11681884 Wagner et al. Jun 2023 B2
11734526 Wagner et al. Aug 2023 B2
11842248 Wagner Dec 2023 B2
11847513 Wagner et al. Dec 2023 B2
20020092801 Dominguez Jul 2002 A1
20020134056 Dimario et al. Sep 2002 A1
20020157919 Sherwin Oct 2002 A1
20020170850 Bonham et al. Nov 2002 A1
20020179502 Cerutti et al. Dec 2002 A1
20030034281 Kumar Feb 2003 A1
20030038065 Pippin et al. Feb 2003 A1
20030075051 Watanabe et al. Apr 2003 A1
20040065597 Hanson Apr 2004 A1
20040118907 Rosenbaum et al. Jun 2004 A1
20040194428 Close et al. Oct 2004 A1
20040195320 Ramsager Oct 2004 A1
20040215480 Kadaba Oct 2004 A1
20040261366 Gillet et al. Dec 2004 A1
20050002772 Stone Jan 2005 A1
20050149226 Stevens et al. Jul 2005 A1
20050220600 Baker et al. Oct 2005 A1
20060021858 Sherwood Feb 2006 A1
20060070929 Fry et al. Apr 2006 A1
20070209976 Worth et al. Sep 2007 A1
20080046116 Khan et al. Feb 2008 A1
20080181753 Bastian et al. Jul 2008 A1
20080193272 Beller Aug 2008 A1
20090026017 Freudelsperger Jan 2009 A1
20100122942 Harres et al. May 2010 A1
20100318216 Faivre et al. Dec 2010 A1
20110084003 Benjamins Apr 2011 A1
20110130868 Baumann Jun 2011 A1
20110144798 Freudelsperger Jun 2011 A1
20110238207 Bastian, II et al. Sep 2011 A1
20110243707 Dumas et al. Oct 2011 A1
20110320036 Freudelsperger Dec 2011 A1
20120096818 Pippin Apr 2012 A1
20120118699 Buchman et al. May 2012 A1
20120125735 Schuitema et al. May 2012 A1
20120293623 Nygaard Nov 2012 A1
20130001139 Tanner Jan 2013 A1
20130051696 Garrett et al. Feb 2013 A1
20130104664 Chevalier, Jr. et al. May 2013 A1
20130110280 Folk May 2013 A1
20130202195 Perez Cortes et al. Aug 2013 A1
20140244026 Neiser Aug 2014 A1
20140249666 Radwallner et al. Sep 2014 A1
20140277693 Naylor Sep 2014 A1
20140291112 Lyon et al. Oct 2014 A1
20150068866 Fourney Mar 2015 A1
20150098775 Razumov Apr 2015 A1
20150114799 Hansl et al. Apr 2015 A1
20160042320 Dearing et al. Feb 2016 A1
20160083196 Dugat Mar 2016 A1
20160221762 Schroader Aug 2016 A1
20160221766 Schroader et al. Aug 2016 A1
20160228921 Doublet et al. Aug 2016 A1
20170057756 Dugat et al. Mar 2017 A1
20170108577 Loverich et al. Apr 2017 A1
20170121113 Wagner et al. May 2017 A1
20170157649 Wagner et al. Jun 2017 A1
20170197233 Bombaugh et al. Jul 2017 A1
20170225330 Wagner et al. Aug 2017 A1
20170243158 Gupta et al. Aug 2017 A1
20170312789 Schroader Nov 2017 A1
20170330135 Taylor et al. Nov 2017 A1
20170349385 Moroni et al. Dec 2017 A1
20170369244 Battles et al. Dec 2017 A1
20180001353 Stockard et al. Jan 2018 A1
20180044120 Mäder Feb 2018 A1
20180065156 Winkle et al. Mar 2018 A1
20180068266 Kirmani et al. Mar 2018 A1
20180085788 Engel et al. Mar 2018 A1
20180105363 Lisso et al. Apr 2018 A1
20180127219 Wagner et al. May 2018 A1
20180186572 Issing Jul 2018 A1
20180224837 Enssle Aug 2018 A1
20180265291 Wagner et al. Sep 2018 A1
20180265298 Wagner et al. Sep 2018 A1
20180265311 Wagner et al. Sep 2018 A1
20180273295 Wagner et al. Sep 2018 A1
20180273296 Wagner et al. Sep 2018 A1
20180273297 Wagner et al. Sep 2018 A1
20180273298 Wagner et al. Sep 2018 A1
20180282065 Wagner et al. Oct 2018 A1
20180282066 Wagner et al. Oct 2018 A1
20180312336 Wagner et al. Nov 2018 A1
20180327198 Wagner et al. Nov 2018 A1
20180330134 Wagner et al. Nov 2018 A1
20190022702 Vegh et al. Jan 2019 A1
20190030712 Sciog et al. Jan 2019 A1
20190091730 Torang Mar 2019 A1
20190337723 Wagner et al. Nov 2019 A1
20200023410 Tamura et al. Jan 2020 A1
20200143127 Wagner et al. May 2020 A1
20200319627 Edwards et al. Oct 2020 A1
20200363259 Bergstra et al. Nov 2020 A1
20210214163 Deacon et al. Jul 2021 A1
20220198164 Wagner et al. Jun 2022 A1
20220261738 Kumar et al. Aug 2022 A1
20220276088 Bergstra et al. Sep 2022 A1
20220314440 Mizoguchi et al. Oct 2022 A1
20230077893 Gebhardt et al. Mar 2023 A1
20230219767 Demir et al. Jul 2023 A1
20230334275 Wagner et al. Oct 2023 A1
20230342573 Wagner et al. Oct 2023 A1
20230401398 Wagner et al. Dec 2023 A1
20240054302 Wagner et al. Feb 2024 A1
20240054303 Wagner et al. Feb 2024 A1
Foreign Referenced Citations (76)
Number Date Country
2006204622 Mar 2007 AU
1033604 Jul 1989 CN
1643731 Jul 2005 CN
1671489 Sep 2005 CN
1783112 Jun 2006 CN
1809428 Jul 2006 CN
102884539 Jan 2013 CN
103129783 Jun 2013 CN
103442998 Dec 2013 CN
103842270 Jun 2014 CN
104355032 Feb 2015 CN
104507814 Apr 2015 CN
104858150 Aug 2015 CN
204837530 Dec 2015 CN
105314417 Feb 2016 CN
105383906 Mar 2016 CN
105668255 Jun 2016 CN
105761195 Jul 2016 CN
105800323 Jul 2016 CN
105855189 Aug 2016 CN
105873838 Aug 2016 CN
205500186 Aug 2016 CN
106111551 Nov 2016 CN
106169168 Nov 2016 CN
106734076 May 2017 CN
107430719 Dec 2017 CN
107472815 Dec 2017 CN
108136596 Jun 2018 CN
108137232 Jun 2018 CN
108290297 Jul 2018 CN
108290685 Jul 2018 CN
108351637 Jul 2018 CN
108602630 Sep 2018 CN
108604091 Sep 2018 CN
207981651 Oct 2018 CN
108778636 Nov 2018 CN
108921241 Nov 2018 CN
109181473 Jan 2019 CN
208304180 Jan 2019 CN
19510392 Sep 1996 DE
102004001181 Aug 2005 DE
102007023909 Nov 2008 DE
102007038834 Feb 2009 DE
102008039764 May 2010 DE
0235488 Sep 1987 EP
0613841 Sep 1994 EP
0648695 Apr 1995 EP
1695927 Aug 2006 EP
1995192 Nov 2008 EP
2233400 Sep 2010 EP
2477914 Apr 2013 EP
2995567 Mar 2016 EP
3112295 Jan 2017 EP
2832654 May 2003 FR
2084531 Apr 1982 GB
H0985181 Mar 1997 JP
200228577 Jan 2002 JP
2007182286 Jul 2007 JP
2008037567 Feb 2008 JP
4150106 Sep 2008 JP
2010202291 Sep 2010 JP
9731843 Sep 1997 WO
03095339 Nov 2003 WO
2005118436 Dec 2005 WO
2007009136 Jan 2007 WO
2008091733 Jul 2008 WO
2010017872 Feb 2010 WO
2011038442 Apr 2011 WO
2014130937 Aug 2014 WO
2015118171 Aug 2015 WO
2016012742 Jan 2016 WO
2017036780 Mar 2017 WO
2017044747 Mar 2017 WO
2017192783 Nov 2017 WO
2018175466 Sep 2018 WO
2018176033 Sep 2018 WO
Non-Patent Literature Citations (72)
Entry
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 18/382,452 on May 2, 2024, 12 pages.
Chao et al., Design and test of vacuum suction device for egg embryo activity sorting robot, Transactions of the Chinese Society of Agricultural Engineering, vol. 16, pp. 276-283, Aug. 23, 2017.
Examiner's Report issued by Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,277 on Feb. 1, 2024, 3 pages.
Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 17/670,324 on Dec. 26, 2023, 8 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 18/202,697 on Jan. 3, 2024, 11 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/739,738 on Feb. 12, 2024, 12 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 18/221,671 on Mar. 12, 2024, 11 pages.
Notice on First Office Action issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 202080008300.8 on Nov. 17, 2023, 20 pages.
Notice on First Office Action issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 202080008348.9 on Nov. 14, 2023, 18 pages.
Notice on First Office Action issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 202080008352.5 on Nov. 15, 2023, 23 pages.
Notice on First Office Action, along with its English translation, issued by the China National Intellectual Property Administration issued in related Chinese Patent Application No. 201980070400.0 on Nov. 9, 2023, 22 pages.
Notice on First Office Action, along with its English translation, issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 202080008322.4 on Nov. 13, 2023, 21 pages.
Notice on the First Office Action, along with its English translation, issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 202080008347.4 on Nov. 21, 2023, 19 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 18723144.4 on Nov. 26, 2019, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Application No. 19805436.3 on Jun. 1, 2021, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 20704961.0 on Aug. 17, 2021, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 20703621.1 on Aug. 17, 2021, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 20704119.5 on Aug. 17, 2021, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 20704962.8 on Aug. 17, 2021, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 20704645.9 on Aug. 17, 2021, 3 pages.
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Patent Application No. 20703866.2 on Aug. 17, 2021, 3 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada in related Canadian Patent Application No. 3,060,257 on Dec. 9, 2020, 3 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,060,257 on Oct. 28, 2021, 6 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,766 on Apr. 13, 2022, 5 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,277 on Sep. 12, 2022, 4 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,160 on Sep. 21, 2022, 4 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,161 on Sep. 21, 2022, 4 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,258 on Sep. 12, 2022, 5 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,276 on Sep. 13, 2022, 5 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,138 on Oct. 26, 2022, 7 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Application No. 3,152,708 on May 3, 2023, 7 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Application No. 3,126,161 on Jul. 28, 2023, 3 pages.
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,126,160 on Aug. 3, 2023, 4 pages.
Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,213 on Jul. 23, 2021, 10 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO on Oct. 22, 2019, in related International Application No. PCT/US2018/028164, 11 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2019/057710 on Apr. 27, 2021, 8 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2020/012744 on Jun. 16, 2021, 8 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2020/012720 on Jun. 16, 2021, 8 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2020/012695 on Jun. 16, 2021, 8 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2020/012713 on Jun. 16, 2021, 8 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2020/012754 on Jun. 16, 2021, 8 pages.
International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2020/012704 on Jun. 16, 2021, 9 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Aug. 9, 2018, in related International Application No. PCT/US2018/028164, 15 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Mar. 25, 2020, in related International Application No. PCT/US2020/012695, 14 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Mar. 25, 2020, in related International Application No. PCT/US2020/012704, 15 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Mar. 25, 2020, in related International Application No. PCT/US2020/012713, 14 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Mar. 25, 2020, in related International Application No. PCT/US2020/012720, 14 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Mar. 25, 2020, in related International Application No. PCT/US2020/012744, 14 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Mar. 25, 2020, in related International Application No. PCT/US2020/012754, 14 pages.
International Search Report and Written Opinion issued by the International Searching Authority on Feb. 6, 2020 in related International Application No. PCT/US2019/057710, 12 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 15/956,442 on Mar. 15, 2019, 8 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,213 on Jun. 4, 2020, 6 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,211 on Nov. 23, 2020, 11 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,213 on Feb. 24, 2021, 8 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,218 on Mar. 26, 2021, 13 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,202 on Apr. 13, 2021, 9 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,218 on Oct. 26, 2021, 7 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/661,820 on Oct. 27, 2021, 18 pages.
Non-Final Office Action issued by the U.S. Patent and Trademark Office in related U.S. Appl. No. 16/737,215 on Mar. 26, 2021, 13 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/324,588 on Feb. 7, 2022, 10 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/349,064 on Mar. 8, 2022, 14 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/516,862 on Jul. 19, 2022, 10 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/395,180 on Nov. 30, 2022, 12 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/747,515 on Dec. 21, 2022, 10 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/508,217 on Jan. 4, 2023, 12 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/899,294 on Feb. 23, 2023, 10 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 17/982,287 on Mar. 22, 2023, 14 pages.
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 18/142,071 on Nov. 21, 2023, 11 pages.
Notice of First Office Action, along with its English translation, issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 202080008346.X on Nov. 8, 2023, 15 pages.
Notice on the First Office Action and First Office Action, along with its English Translation, issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 201880038892.0 on Sep. 2, 2020, 23 pages.
Notice on the First Office Action, and its English translation, issued in related Chinese Patent Application No. 202111245956.4 on Dec. 14, 2022, 16 pages.
Notice on the Second Office Action and Second Office Action, along with its English Translation, issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 201880038892.0 on Apr. 14, 2021, 8 pages.
Related Publications (1)
Number Date Country
20240037353 A1 Feb 2024 US
Provisional Applications (1)
Number Date Country
62486783 Apr 2017 US
Continuations (4)
Number Date Country
Parent 17395180 Aug 2021 US
Child 18376939 US
Parent 16867127 May 2020 US
Child 17395180 US
Parent 16543105 Aug 2019 US
Child 16867127 US
Parent 15956442 Apr 2018 US
Child 16543105 US