Robots have been provided to handle products in boxes and other containers and/or packaging, in a variety of contexts, including without limitation sortation/singulation, palletization/de-palletization, truck or container loading/unloading, etc. In some contexts, there may be a requirement to ensure that a label, optical code, logo, sticker, symbol, primary or otherwise preferred box face, etc. be oriented in a certain prescribed way, such as to ensure that a bar code or other optical code can be read by a downstream scanner, or to ensure that product logos are facing forward for display in a retail setting, etc. For example, in palletization and de-palletization processes, there may be certain requirements that pertain to orienting an item in a prescribed orientation. Examples include, without limitation:
In typical prior approaches, palletization or de-palletization, or other operations requiring specific item orientation at placement, would need to be done manually to ensure proper orientation, or, if a customer were to use robotics, the customer could be constrained to using robotics for single-SKU palletization or de-palletization, where the orientation of the boxes is homogeneous and known.
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
A robotic system is disclosed that detects, automatically, a target feature of an item and places the item in an orientation such that the detected feature faces a desired direction. The feature may comprise one or more of a label, an optical or other scannable code, text, one or more images, one or more colors, a visual pattern, a texture, etc. The feature may be detected using computer vision. For example, one or more images may be processed to determine if the feature is included in an image. If the feature is detected in an image, the location and orientation of the feature are determined and tracked. The item is placed to cause the feature to be exposed (or not) as prescribed. If the feature is not, at first, detected, the item may be moved to another position and/or orientation, and/or continuously and/or iteratively to successive positions and/or orientations, until the feature is detected.
In various embodiments, computer perception and vision algorithms are used to recognize arbitrary features (barcodes, logos, symbols, etc.) on a box or other item that a robotic arm has been used to grasp (or is about to grasp). The robotic arm is used to place the box or other item in an orientation such that the detected feature faces a desired direction. Examples include, without limitation, ensuring that for each box or other item a shipping (or other) label, a barcode or other optical code, a logo, etc. faces in a prescribed direction. For example, for items placed on an output conveyor, each item may be placed such that its bar code faces in a direction that facilitates scanning by a downstream scanner. For example, if a fixed or handheld scanner is used to scan packages from the right side of the conveyor, each box/item may be placed on the conveyor such that its barcode faces right.
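As one illustrative, non-limiting sketch of how such detection might be implemented in software, the following Python fragment uses the open source pyzbar library to locate a barcode in a camera image and report its location and approximate in-plane orientation; the camera interface and coordinate conventions shown are assumptions for purposes of illustration only and are not required by the disclosed system.

```python
# Illustrative sketch only; assumes OpenCV and pyzbar are installed and that
# `frame` is a BGR image captured from a workspace camera (e.g., camera 110).
import cv2
from pyzbar import pyzbar


def detect_barcode(frame):
    """Return (decoded_data, bounding_rect, corner_polygon) for the first
    barcode found in the image, or None if no barcode is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = pyzbar.decode(gray)
    if not detections:
        return None  # target feature not visible in this view
    d = detections[0]
    # d.rect gives the axis-aligned location of the code in the image;
    # d.polygon gives corner points from which an in-plane orientation
    # of the code can be estimated.
    return d.data.decode("utf-8"), d.rect, d.polygon
```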
In various embodiments, the detected feature (not limited to but including barcodes, logos, symbols) can be displayed directly on a user interface, in addition to the robot physically orienting the box accordingly. Display in the user interface may enable an operator to manually scan the bar code or other feature from the user interface.
In some embodiments, the system may be configured and/or trained to detect the target feature. For example, images of a logo, label, barcode, etc. may be uploaded and the system may process image data to detect matching or partially matching images. In some embodiments, human supervised machine learning may be used to train the system to detect a target feature and/or to improve accuracy of detection.
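A minimal sketch of one way such matching could be performed, assuming an uploaded reference image of the logo or label and using OpenCV template matching, is shown below; the match threshold is an illustrative assumption, not a requirement of the system.

```python
import cv2


def find_target_feature(frame_gray, template_gray, threshold=0.8):
    """Illustrative sketch: `template_gray` is an uploaded grayscale reference
    image of the logo/label; `frame_gray` is a grayscale camera image of the
    box face being inspected. Returns (bounding_box, score) or None."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # no sufficiently strong (even partial) match in this view
    h, w = template_gray.shape[:2]
    # Bounding box of the best match, in image coordinates.
    return (max_loc[0], max_loc[1], w, h), max_val
```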
In various embodiments, techniques disclosed herein enable robots to meet certain requirements for use cases including but not limited to:
In various embodiments, a robotic system as disclosed herein retrieves boxes or other items serially. Each item is moved first to or through a field of view of a set of cameras and/or other sensors. Sensor data (e.g., image data) is used to detect the prescribed feature. The system determines how to orient/reorient the box/item for placement such that the detected feature faces in the prescribed direction, rotates the item to the prescribed orientation (if needed) and moves the item to its destination and places it in the prescribed orientation.
In various embodiments, images generated to detect a target feature, such as a barcode, so that the item can be placed with the detected feature oriented in a prescribed way, may also be used to capture and display an image of the feature, e.g., a barcode, in a user interface. A human or other operator can then manually scan the barcode from the user interface, eliminating the need in some contexts to update data systems to handle or display information for a new item, such as a new product or SKU.
Control computer 108 sends commands to robotic arm 102 and/or end effector 104 to perform a set of tasks associated with a higher-level operation or objective. In the example shown, the higher-level task is to remove boxes from pallet 112 and place them singly on conveyor 114 at a prescribed orientation. For example, the prescribed orientation may be an orientation that results in an optically scannable code or other visible feature facing in a certain direction for scanning/image capture.
In the state shown in
In some embodiments, if the target image is not found by rotating a box about the vertical axis, robotic arm 102 may be used to tilt the box (e.g., 116) so as to expose the side opposite the end effector 104 to the camera 110. In some embodiments, e.g., for a box that is too heavy to be tilted to expose the surface opposite the end effector 104 to camera 110, the box may be set down, e.g., in a buffer or work zone, and robotic arm 102 and/or end effector 104 may be used to carefully tip the box onto a different side, enabling the box to be re-grasped from the top in a different orientation than before and allowing other surfaces to be checked for the target image.
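One way such a search could be structured, shown purely as an illustrative sketch, is to enumerate candidate reorientations in a fixed order: yaw rotations about the vertical axis first, then the tilt or set-down-and-regrasp maneuvers described above. The robot-motion primitives named here are hypothetical stand-ins for whatever motion planner a given embodiment uses.

```python
# Illustrative sketch of a search order over candidate reorientations.
# rotate_about_vertical, tilt_to_expose_bottom_face, and regrasp_after_tip are
# hypothetical robot-motion primitives, named here for illustration only.
CANDIDATE_MOVES = [
    lambda robot: robot.rotate_about_vertical(90),     # expose next side face
    lambda robot: robot.rotate_about_vertical(90),
    lambda robot: robot.rotate_about_vertical(90),
    lambda robot: robot.tilt_to_expose_bottom_face(),  # face opposite the end effector
    lambda robot: robot.regrasp_after_tip(),           # heavy box: set down, tip, re-grasp
]


def search_for_feature(robot, camera, detector):
    """Try the current view, then each candidate reorientation, until the
    target feature is detected. Returns False if it is never found, in which
    case the system may escalate to a human worker as described above."""
    if detector(camera.capture()):
        return True
    for move in CANDIDATE_MOVES:
        move(robot)
        if detector(camera.capture()):
            return True
    return False
```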
In some embodiments, an image of the top surface of a box may be checked prior to grasping, and if the target image is on the top the end effector may be used to grasp the box from the side, so the box can be placed with the image facing in the prescribed direction.
In some embodiments, if the target image cannot be found, the control computer 108 prompts a human worker to intervene. The box may be placed in a buffer or work zone accessible by the human worker, who may change the orientation of the box to better expose the target image to the view of robotic system 100, e.g., within a field of view of camera 110.
Once the target image is detected, the orientation of the box at the time the image was generated and the location (and orientation) of the target image are used to determine a side on which the target image is located. Control computer 108 uses the determined information to move and place the box at a placement location on the conveyor in the prescribed orientation. For example, as shown in
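A simplified sketch of the placement-orientation computation described above is shown below, assuming the feature has been localized to one of the four vertical faces of the box and that the desired facing direction is expressed as a yaw angle in the workspace frame; the face names and angle conventions are assumptions made for illustration only.

```python
# Illustrative sketch: compute the rotation about the vertical axis needed so
# that the face bearing the target feature points in the prescribed direction
# when the box is placed. Angles are in degrees in the workspace frame.
FACE_YAW_OFFSET = {"front": 0.0, "left": 90.0, "back": 180.0, "right": 270.0}


def placement_yaw(current_box_yaw_deg, feature_face, desired_facing_deg):
    # Direction the feature-bearing face currently points toward.
    feature_yaw = (current_box_yaw_deg + FACE_YAW_OFFSET[feature_face]) % 360.0
    # Additional rotation about the vertical axis to make it face the desired way.
    return (desired_facing_deg - feature_yaw) % 360.0


# Example: feature on the "left" face, box currently at 0 degrees, downstream
# scanner to the right of the conveyor (270 degrees):
# placement_yaw(0.0, "left", 270.0) -> 180.0 degrees of rotation before placement.
```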
While in
While the examples shown in
Referring further to
Once it has been grasped, each box is moved to a destination location and placed there at a prescribed orientation. For example, box 216 is shown being rotated about vertical axis 232 to place bar code 218 at a desired orientation when box 216 is placed on pallet 214. In this example, the boxes on pallet 214 are oriented such that each box's bar code faces out, as may be desired to enable a human or other robotic worker to scan the bar codes.
In some embodiments, a robotic system as disclosed herein is used to create an “aisle ready” pallet such as the one shown in
While in the example shown in
In various embodiments, a user interface and handheld or other scanner, as shown in
While in the example discussed above the bar code images are captured in connection with detecting a target image and placing an item in an orientation determined based at least in part on the determined location of the target image on the item, in some embodiments a user interface and interaction as shown in
In various embodiments, a system as disclosed herein is configured to apply a label to a box or other item, e.g., in a prescribed location, at a prescribed orientation, and/or in a manner that does not obscure or otherwise interfere with the reading of any other label or printed information. For example, some palletizing and depalletizing applications require placing a label on the cases. Some customers require that these labels be placed in specific regions of interest on the cases, and these requirements vary from customer to customer.
In various embodiments, a robotic system as disclosed herein detects existing features and/or labels on the cases (e.g., SKU barcodes, QR codes, features on boxes) and avoids placing the new label on top of these labels or, alternatively, a customer may want the robot to place the new label on top of existing labels so that the box has only one exposed barcode. In various embodiments, techniques disclosed herein, e.g., a combination of barcode detection, machine learning, and instance segmentation, may be used to detect existing barcodes, labels, and other features on the box when dynamically determining where to place the label on the box or other item, e.g., either to avoid covering or intentionally to cover the preexisting bar code or other label.
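A minimal sketch of the overlap check implied above is shown below, assuming detected barcodes/labels and candidate label placements are represented as axis-aligned rectangles in image or box-face coordinates; the representation and the candidate-selection policy are illustrative assumptions only.

```python
# Illustrative sketch: decide whether a candidate label placement overlaps any
# detected, preexisting barcode or label region. Rectangles are (x, y, w, h).
def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


def choose_label_placement(candidates, existing_regions, cover_existing=False):
    """Pick the first candidate placement that either avoids all existing
    label/barcode regions (cover_existing=False) or intentionally covers one
    (cover_existing=True), per the customer's requirement."""
    for rect in candidates:
        overlaps = any(rects_overlap(rect, r) for r in existing_regions)
        if overlaps == cover_existing:
            return rect
    return None  # no candidate placement satisfies the requirement
```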
In the example shown, at 502, an item is grasped and moved into a field of view of a scanner (e.g., camera). If the target feature is not detected in the image(s) generated by the scanner/camera, at 504, then at 506 the orientation of the item is changed unless/until the target feature is detected. Once the target feature is detected (504), then at 508 a placement location and orientation is determined for the item, based at least in part on where the target feature is determined to be located on the item, and at 510 a plan to move and place the item at the determined placement location and orientation is determined and implemented. Successive iterations of the process 500 are performed until it is determined at 512 that no further items remain to be processed.
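The following Python fragment sketches the control flow of process 500 at a high level; every function and object name is a hypothetical stand-in for the robot, camera, detector, and planner components described above, not a prescribed interface.

```python
# Illustrative sketch of process 500 (steps 502-512); names are hypothetical.
def run_process(robot, camera, detect_feature, plan_placement, items_remaining):
    while items_remaining():                              # 512: more items to process?
        robot.grasp_next_item()
        robot.move_into_field_of_view(camera)             # 502: present item to scanner
        detection = detect_feature(camera.capture())
        while detection is None:                          # 504: target feature detected?
            robot.change_item_orientation()               # 506: rotate/tilt and look again
            detection = detect_feature(camera.capture())
        location, orientation = plan_placement(detection) # 508: placement based on feature
        robot.place_item(location, orientation)           # 510: move and place as planned
```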
In some embodiments, a robotic system as disclosed herein is configured to use machine learning to determine how and/or how best to find a target feature on an item of a given type and/or having given attributes. For example, a system as disclosed herein may learn to recognize an image that has been observed to appear consistently on a side adjacent to a side having the target feature. In such a case, the system may learn that the feature can be found most efficiently by rotating the box or other item in a particular direction, e.g., so that the adjacent side having the target feature is exposed to the camera/scanner next, without having to rotate through sides that do not have the target feature.
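As a simple, illustrative sketch of how such learning could be represented (the statistics and data structures shown are assumptions, not a required implementation), the system might tally, for each frequently observed auxiliary image, the relative rotation at which the target feature was ultimately found, and bias the search direction accordingly.

```python
from collections import Counter, defaultdict

# Illustrative sketch: learn, per observed auxiliary image (e.g., a recurring
# logo), the relative offset (in 90-degree rotations) at which the target
# feature was found, and prefer that rotation on future items.
observed_offsets = defaultdict(Counter)


def record_observation(auxiliary_image_id, rotations_to_feature):
    observed_offsets[auxiliary_image_id][rotations_to_feature] += 1


def preferred_rotation(auxiliary_image_id, default=1):
    counts = observed_offsets[auxiliary_image_id]
    if not counts:
        return default  # no history yet; fall back to a default search direction
    # Rotate directly toward the side where the feature has most often been found.
    return counts.most_common(1)[0][0]
```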
While embodiments that include cameras/scanners mounted in a workspace and/or on fixed structures in a workspace have been described, in various embodiments one or more movable cameras/scanners may be used in place of and/or in addition to fixed cameras/scanners. For example, a camera mounted on a robotic arm and/or end effector may be used, such as to look down onto a top surface of any item prior to grasping the item.
In various embodiments, techniques disclosed herein may be used in a robotic system to automatically detect, in each item handled, a target feature, and to place the item in an orientation such that the target feature is oriented in a prescribed manner, such as to ensure the barcode on each item faces a downstream scanner or faces outward (or upward) to facilitate scanning by fixed or handheld scanners.
While in some examples the detected target feature is a barcode or other optical code, in other embodiments other features may be detected and used to orient the item, such as a warning label, arbitrary text, images, detected damage, etc.
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
This application claims priority to U.S. Provisional Patent Application No. 63/463,368 entitled FEATURE RECOGNITION AND PROPER ORIENTATION IN ITEM PLACEMENT BY A ROBOT filed May 2, 2023, which is incorporated herein by reference for all purposes.