The present technology relates generally to robotic systems with labeling systems, and more specifically to labeling systems with automated positioning and placement mechanisms.
With their ever-increasing performance and decreasing cost, robots (e.g., machines configured to automatically/autonomously execute physical actions) are now used extensively in many fields. Robots, for example, can be used to execute various tasks (e.g., manipulate, label, or transfer an object through space) in manufacturing and/or assembly, packing and/or packaging, transport and/or shipping, etc. In executing the tasks, the robots can replicate human actions, thereby replacing or reducing the human involvement that is otherwise required to perform dangerous or repetitive tasks.
However, despite the technological advancements, robots often lack the sophistication necessary to duplicate human interactions required for executing larger and/or more complex tasks. Accordingly, there remains a need for improved techniques and systems for managing operations of and/or interactions between robots and objects.
The drawings have not necessarily been drawn to scale. Similarly, some components and/or operations can be separated into different blocks or combined into a single block for the purpose of discussion of some of the embodiments of the present technology. Moreover, while the technology is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the technology to the particular embodiments described.
For ease of reference, the multi-purpose labeling system and the components thereof are sometimes described herein with reference to top and bottom, upper and lower, upwards and downwards, a longitudinal plane, a horizontal plane, an x-y plane, a vertical plane, and/or a z-plane relative to the spatial orientation of the embodiments shown in the figures. It is to be understood, however, that the end effector and the components thereof can be moved to, and used in, different spatial orientations without changing the structure and/or function of the disclosed embodiments of the present technology.
Multi-purpose labeling systems and methods are disclosed herein. Such multi-purpose labeling systems can visually inspect objects in or interfacing with the robotic system to determine physical and identifying information about the objects. Based on the physical and identifying information, the labeling system can determine a target labeling location for placing a label on the object. The labeling system can also print and prepare a label for adhering to the object based on the physical and identifying information. The multi-purpose labeling systems can then automatically align a labeling module with the target labeling location and, using the labeling module, can place the label on the object at the target labeling location. By automatically identifying information about an object, generating a label for the object, and placing the label on the object, the labeling system can improve the ability of robotic systems to complete complex tasks without human interaction. Additionally, aspects of the multi-purpose labeling systems can provide further benefits including, for example: (i) reducing human involvement in object handling and management, (ii) increasing robotic system handling speeds, and/or (iii) eliminating the need to remove objects from the robotic system to place labels thereon, among other benefits.
In various embodiments of the multi-purpose labeling system, the labeling system can include a conveyor, a visual analysis module, and a labeling assembly. The conveyor can move an object in a first direction. The visual analysis module can include an optical sensor directed toward the conveyor, or a related location, to generate image data depicting the object. The labeling assembly can be spaced from the conveyor in a second direction and include a printer, a labeling module, and an alignment assembly. The printer can print a label based on the image data, and the labeling module can have a labeling plate for receiving the label. The alignment assembly can include a lateral-motion module, a vertical-motion module, and a rotary module for moving the labeling module along or about the first, the second, and a third direction, and can place the labeling plate adjacent to an object surface. In some embodiments, the labeling system can include one or more controllers having a computer-readable medium carrying instructions to operate the visual analysis module, the printer, the labeling module, and the alignment assembly.
Embodiments of the labeling system can place the label on the object by optically scanning the object on the conveyor for visual features and physical features. The visual features can include available labeling space and an object identifier reading. The physical features can include dimensions of the object. From the available labeling space, the labeling system can identify a target labeling location. From the object identifier reading, the labeling system can prepare the label on the labeling module carried by the alignment assembly. The labeling system can then align the labeling module with the target labeling location using the conveyor and the alignment assembly, based on the physical features, and can apply the label to the object using the alignment assembly.
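The sequence described above can be illustrated with a short orchestration sketch. The following Python snippet is a hypothetical, simplified illustration only (the names scan_next_object, derive_target_location, print_label, and the conveyor/aligner/labeler interfaces are placeholders introduced here, not elements of the disclosed system), showing one way a controller might chain the scanning, label-preparation, alignment, and placement steps.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    """Hypothetical container for the visual and physical features of one object."""
    identifier: str        # object identifier reading (e.g., a decoded barcode)
    dimensions_mm: tuple   # (length, width, height) from the 3D camera
    free_regions: list     # candidate labeling spaces with no preexisting items

def derive_target_location(free_regions):
    # Simplest possible rule: pick the largest unobstructed region.
    return max(free_regions, key=lambda region: region["area"])

def label_one_object(conveyor, vision, printer, aligner, labeler):
    """One hypothetical pass through the multi-purpose labeling workflow."""
    scan = vision.scan_next_object()                     # optically scan the object
    target = derive_target_location(scan.free_regions)   # pick a target labeling location
    label = printer.print_label(scan.identifier)         # print a label for this object
    labeler.receive(label)                               # transfer label to the labeling plate
    aligner.align(target, scan.dimensions_mm, conveyor)  # conveyor + lateral/rotary moves
    aligner.place(labeler, target)                       # vertical press at the target location
    aligner.retract(labeler)                             # ready for the next object
```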
Several details describing structures or processes that are well-known and often associated with robotic systems and subsystems, but that can unnecessarily obscure some significant aspects of the disclosed techniques, are not set forth in the following description for purposes of clarity. Moreover, although the following disclosure sets forth several embodiments of different aspects of the present technology, several other embodiments can have different configurations or different components than those described in this section. Accordingly, the disclosed techniques can have other embodiments with additional elements or without several of the elements described below.
Many embodiments or aspects of the present disclosure described below can take the form of computer-executable or controller-executable instructions, including routines executed by a programmable computer or controller. Those skilled in the relevant art will appreciate that the disclosed techniques can be practiced on computer or controller systems other than those shown and described below. The techniques described herein can be embodied in a special-purpose computer or data processor that is specifically programmed, configured, or constructed to execute one or more of the computer-executable instructions described below. Accordingly, the terms “computer” and “controller” as generally used herein refer to any data processor and can include internet appliances and/or handheld devices, including palm-top computers, wearable computers, cellular or mobile phones, multi-processor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like. Information handled by these computers and controllers can be presented on any suitable display medium, including a liquid crystal display (LCD). Instructions for executing computer- or controller-executable tasks can be stored in or on any suitable computer-readable medium, including hardware, firmware, or a combination of hardware and firmware. Instructions can be contained in any suitable memory device, including, for example, a flash drive, USB device, and/or other suitable medium.
The terms “coupled” and “connected,” along with their derivatives, can be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” can be used to indicate that two or more elements are in direct contact with each other. Unless otherwise made apparent in the context, the term “coupled” can be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship, such as for signal transmission/reception or for function calls).
In the example illustrated in
In some embodiments, the task can include interaction with a target object 112, such as manipulation, moving, reorienting, labeling, or a combination thereof, of the object. The target object 112 is the object that will be handled by the robotic system 100. More specifically, the target object 112 can be the specific object among many objects that is the target of an operation or task by the robotic system 100. For example, the target object 112 can be the object that the robotic system 100 has selected for handling or that is currently being handled, manipulated, moved, reoriented, labeled, or a combination thereof. The target object 112, as examples, can include boxes, cases, tubes, packages, bundles, an assortment of individual items, or any other object that can be handled by the robotic system 100.
As an example, the task can include transferring the target object 112 from an object source 114 to a task location 116. The object source 114 is a receptacle for storage of objects. The object source 114 can include numerous configurations and forms. For example, the object source 114 can be a platform, with or without walls, on which objects can be placed or stacked, such as a pallet, a shelf, or a conveyor belt. As another example, the object source 114 can be a partially or fully enclosed receptacle with walls or a lid in which objects can be placed, such as a bin, cage, or basket. In some embodiments, the walls of a partially or fully enclosed object source 114 can be transparent or can include openings or gaps of various sizes such that portions of the objects contained therein can be visible or partially visible through the walls.
For illustrative purposes, the robotic system 100 is described in the context of a shipping center; however, it is understood that the robotic system 100 can be configured to execute tasks in other environments or for other purposes, such as for manufacturing, assembly, packaging, healthcare, or other types of automation. It is also understood that the robotic system 100 can include other units, such as manipulators, service robots, modular robots, that are not shown in
The robotic system 100 can include a controller 120 configured to interface with and/or control one or more of the robotic units. For example, the controller 120 can include circuits (e.g., one or more processors, memory, etc.) configured to derive motion plans and/or corresponding commands, settings, and the like used to operate the corresponding robotic unit. The controller 120 can communicate the motion plans, the commands, settings, etc. to the robotic unit, and the robotic unit can execute the communicated plan to accomplish a corresponding task, such as to transfer the target object 112 from the object source 114 to the task location 116.
The control unit 202 can be implemented in a number of different ways. For example, the control unit 202 can be a processor, an application specific integrated circuit (“ASIC”), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (“FSM”), a digital signal processor (“DSP”), or a combination thereof. The control unit 202 can execute software 210 and/or instructions to provide the intelligence of the robotic system 100.
The control unit 202 can be operably coupled to the I/O device 208 to provide a user with control over the control unit 202. The I/O device 208 can be used for communication between the user and the control unit 202 and other functional units in the robotic system 100. The I/O device 208 can also be used for communication that is external to the robotic system 100. The I/O device 208 can receive information from the other functional units or from external sources, and/or can transmit information to the other functional units or to external destinations. The external sources and the external destinations refer to sources and destinations external to the robotic system 100.
The I/O device 208 can be implemented in different ways and can include different implementations depending on which functional units or external units are being interfaced with the I/O device 208. For example, the I/O device 208 can be implemented with a pressure sensor, an inertial sensor, a microelectromechanical system (“MEMS”), optical circuitry, waveguides, wireless circuitry, wireline circuitry, application programming interface, or a combination thereof.
The storage unit 204 can store the software instructions 210, master data 246, tracking data, or a combination thereof. For illustrative purposes, the storage unit 204 is shown as a single element, although it is understood that the storage unit 204 can be a distribution of storage elements. Also for illustrative purposes, the robotic system 100 is shown with the storage unit 204 as a single-hierarchy storage system, although it is understood that the robotic system 100 can have the storage unit 204 in a different configuration. For example, the storage unit 204 can be formed with different storage technologies forming a hierarchical memory system including different levels of caching, main memory, rotating media, and/or off-line storage.
The storage unit 204 can be a volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof. For example, the storage unit 204 can be a nonvolatile storage such as non-volatile random access memory (“NVRAM”), Flash memory, or disk storage, and/or a volatile storage such as static random access memory (“SRAM”). As a further example, the storage unit 204 can be a non-transitory computer-readable medium including the non-volatile memory, such as a hard disk drive, NVRAM, a solid-state storage device (“SSD”), a compact disk (“CD”), a digital video disk (“DVD”), and/or a universal serial bus (“USB”) flash memory device. The software 210 can be stored on the non-transitory computer-readable medium to be executed by the control unit 202.
In some embodiments, the storage unit 204 is used to further store and/or provide access to processing results, predetermined data, thresholds, or a combination thereof. For example, the storage unit 204 can store master data 246 that includes descriptions of the one or more target objects 112 (e.g., boxes, box types, cases, case types, products, or a combination thereof). In one embodiment, the master data 246 includes dimensions, predetermined shapes, templates for potential poses and/or computer-generated models for recognizing different poses, a color scheme, an image, identification information (e.g., bar codes, quick response (QR) codes, logos, and the like), expected locations, an expected weight, or a combination thereof, for the one or more target objects 112 expected to be manipulated by the robotic system 100.
In some embodiments, the master data 246 includes manipulation-related information regarding the one or more objects that can be encountered or handled by the robotic system 100. For example, the manipulation-related information for the objects can include a center-of-mass location on each of the objects and/or expected sensor measurements (e.g., force, torque, pressure, and/or contact measurements) corresponding to one or more actions, maneuvers, or a combination thereof.
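As a concrete, purely illustrative example of the kind of per-object record the master data 246 might contain, the following sketch uses a Python dataclass; all field names and values are hypothetical and chosen only to mirror the categories listed above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MasterDataRecord:
    """Hypothetical per-object entry mirroring the master data 246 categories."""
    object_id: str                                   # e.g., decoded barcode or QR payload
    dimensions_mm: tuple[float, float, float]        # length, width, height
    expected_weight_kg: float
    center_of_mass_mm: tuple[float, float, float]    # manipulation-related information
    color_scheme: Optional[str] = None
    identification_marks: list[str] = field(default_factory=list)   # logos, bar codes, etc.
    expected_sensor_readings: dict = field(default_factory=dict)    # e.g., {"grip_force_N": 12.0}

# Illustrative master data keyed by identifier for quick lookup during identification.
MASTER_DATA = {
    "0123456789012": MasterDataRecord(
        object_id="0123456789012",
        dimensions_mm=(400.0, 300.0, 250.0),
        expected_weight_kg=4.2,
        center_of_mass_mm=(200.0, 150.0, 120.0),
        color_scheme="brown cardboard",
        identification_marks=["bar code", "logo"],
    ),
}
```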
The communication unit 206 can enable external communication to and from the robotic system 100. For example, the communication unit 206 can enable the robotic system 100 to communicate with other robotic systems and/or units, external devices, such as an external computer, an external database, an external machine, an external peripheral device, or a combination thereof, through a communication path 218, such as a wired or wireless network.
The communication path 218 can span and represent a variety of networks and/or network topologies. For example, the communication path 218 can include wireless communication, wired communication, optical communication, ultrasonic communication, or the combination thereof. For example, satellite communication, cellular communication, Bluetooth, Infrared Data Association standard (“IrDA”), wireless fidelity (“WiFi”), and/or worldwide interoperability for microwave access (“WiMAX”) are examples of wireless communication that can be included in the communication path 218. Cable, Ethernet, digital subscriber line (“DSL”), fiber optic lines, fiber to the home (“FTTH”), and/or plain old telephone service (“POTS”) are examples of wired communication that can be included in the communication path 218. Further, the communication path 218 can traverse a number of network topologies and distances. For example, the communication path 218 can include direct connection, personal area network (“PAN”), local area network (“LAN”), metropolitan area network (“MAN”), wide area network (“WAN”), or a combination thereof. The robotic system 100 can transmit information between the various units through the communication path 218. For example, the information can be transmitted between the control unit 202, the storage unit 204, the communication unit 206, the I/O device 208, the actuation devices 212, the transport motors 214, the sensor units 216, or a combination thereof.
The communication unit 206 can also function as a communication hub allowing the robotic system 100 to function as part of the communication path 218, rather than being limited to an end point or terminal unit of the communication path 218. The communication unit 206 can include active and/or passive components, such as microelectronics or an antenna, for interaction with the communication path 218.
The communication unit 206 can include a communication interface 248. The communication interface 248 can be used for communication between the communication unit 206 and other functional units in the robotic system 100. The communication interface 248 can receive information from the other functional units and/or from external sources, and/or can transmit information to the other functional units and/or to external destinations. The external sources and the external destinations refer to sources and destinations external to the robotic system 100.
The communication interface 248 can include different implementations depending on which functional units are being interfaced with the communication unit 206. The communication interface 248 can be implemented with technologies and techniques similar to the implementation of the control interface 240.
The I/O device 208 can include one or more input sub-devices and/or one or more output sub-devices. Examples of the input devices of the I/O device 208 can include a keypad, a touchpad, soft-keys, a keyboard, a microphone, sensors for receiving remote signals, a camera for receiving motion commands, or a combination thereof, to provide data and/or communication inputs. Examples of the output device can include a display interface. The display interface can be any graphical user interface such as a display, a projector, a video screen, and/or a combination thereof.
The control unit 202 can operate the I/O device 208 to present or receive information generated by the robotic system 100. The control unit 202 can also execute the software 210 and/or instructions for the other functions of the robotic system 100. The control unit 202 can further execute the software 210 and/or instructions for interaction with the communication path 218 via the communication unit 206.
The robotic system 100 can include physical and/or structural members, such as robotic manipulator arms, that are connected at joints for motion, such as rotational displacements, translational displacements, or a combination thereof. The structural members and the joints can form a kinematic chain configured to manipulate an end-effector, such as a gripping element, to execute one or more tasks, such as gripping, spinning, welding, and/or labeling, depending on the use or operation of the robotic system 100. The robotic system 100 can include the actuation devices 212, such as motors, actuators, wires, artificial muscles, electroactive polymers, or a combination thereof, configured to drive, manipulate, displace, reorient, label, or a combination thereof, the structural members about or at a corresponding joint. In some embodiments, the robotic system 100 can include the transport motors 214 configured to transport the corresponding units from place to place.
The robotic system 100 can include the sensor units 216 configured to obtain information used to execute tasks and operations, such as for manipulating the structural members or for transporting the robotic units. The sensor units 216 can include devices configured to detect and/or measure one or more physical properties of the robotic system 100, such as a state, a condition, a location of one or more structural members or joints, information about objects and/or surrounding environment, or a combination thereof. As an example, the sensor units 216 can include imaging devices, system sensors, contact sensors, and/or a combination thereof.
In some embodiments, the sensor units 216 include one or more imaging devices 222. The imaging devices 222 can be configured to detect and image the surrounding environment. For example, the imaging devices 222 can include two-dimensional (“2D”) cameras and three-dimensional (“3D”) cameras, both of which can include a combination of visual and infrared capabilities, lidars, radars, other distance-measuring devices, and/or other imaging devices. The imaging devices 222 can generate a representation of the detected environment, such as a digital image and/or a point cloud, used for implementing machine/computer vision for automatic inspection, object measurement, robot guidance, and/or other robotic applications. As described in further detail below, the robotic system 100 can process the digital image, the point cloud, or a combination thereof via the control unit 202 to identify the target object 112 of
In some embodiments, the sensor units 216 can include system sensors 224. The system sensors 224 can monitor the robotic units within the robotic system 100. For example, the system sensors 224 can include units and/or devices to detect and/or monitor positions of structural members, such as the robotic arms, the end-effectors, corresponding joints of robotic units, or a combination thereof. As a further example, the robotic system 100 can use the system sensors 224 to track locations, orientations, or a combination thereof, of the structural members and/or the joints during execution of the task. Examples of the system sensors 224 can include accelerometers, gyroscopes, position encoders, and/or other similar sensors.
In some embodiments, the sensor units 216 can include the contact sensors 226, such as pressure sensors, force sensors, strain gauges, piezoresistive/piezoelectric sensors, capacitive sensors, elastoresistive sensors, torque sensors, linear force sensors, other tactile sensors, and/or any other suitable sensors configured to measure a characteristic associated with a direct contact between multiple physical structures and/or surfaces. For example, the contact sensors 226 can measure the characteristic that corresponds to a grip of the end-effector on the target object 112 or measure the weight of the target object 112. Accordingly, the contact sensors 226 can output a contact measure that represents a quantified measure, such as a measured force or torque, corresponding to a degree of contact and/or attachment between the gripping element and the target object 112. For example, the contact measure can include one or more force or torque readings associated with forces applied to the target object 112 by the end-effector.
For ease of reference,
As illustrated in
The conveyor assembly 330 can include a conveyor 332 carried by a conveyor support 334 (e.g., housing, struts). The conveyor 332 can move objects from a first end of the conveyor assembly 330 to a second end of the conveyor assembly 330 (e.g., along a first direction), as well as hold (e.g., stop, move slowly) objects along the length of the conveyor assembly 330 (e.g., under portions of the labeling assembly 310). The conveyor 332 can include one or more linear and/or non-linear motorized belts, rollers, multi-direction rollers, wheels, and/or any suitable mechanisms that can operate to selectably move and/or hold the objects thereon. As illustrated, the conveyor assembly 330 includes a single conveyor 332. In some embodiments, the conveyor assembly 330 can include one or more additional conveyors 332 in sequence for independent movement and/or holding of objects thereon. Further, in some embodiments, the labeling system 300 can include one or more conveyor assemblies 330 with one or more conveyors 332.
The labeling assembly 310 can include: (i) a visual analysis module 312 for visually inspecting the objects, (ii) a printing module 314 for printing labels, (iii) the labeling module 316 for receiving printed labels and for adhering labels to the objects, and (iv) a labeling alignment assembly for aligning the labeling module 316 with the target labeling location of each object. In some embodiments, the labeling assembly 310 can further include a label flipping module 318 for preparing (by, e.g., folding, flipping, and/or peeling) printed labels for the labeling module 316. The labeling alignment assembly can include, for example, a lateral-motion module 320 operable along the y-axis, a vertical-motion module 322 operable along the z-axis, and/or a rotary module 324 operable about the z-axis, each configured to move the labeling module 316 along and/or about the respective identified axes. As illustrated in
Objects can first interface with the labeling assembly 310 at the visual analysis module 312. The visual analysis module 312 can collect object information (e.g., collected and/or derived from one or more of an object reading, image data, etc.) for the labeling system 300 to identify the object and/or a target labeling location thereon. The visual analysis module 312 can also collect information for aligning the labeling module 316 with the target labeling location. The target labeling location can be a portion of one or more surfaces of the object that satisfies one or more predetermined conditions for adhering a label. For example, the target labeling location can be separate from (e.g., non-overlapping with) one or more existing labels, images, logos, object surface damage, and/or other similar items to be left uncovered in placing a label. Additionally or alternatively, the target labeling location can be associated with a known and/or preferred location. For example, the known location can be based on an industry standard, future handling of the object, a customer specification, and/or other similar circumstances where certain labeling locations facilitate more efficient object label reading and/or object handling, such as for packing and/or gripping. Further, in some embodiments, the target labeling location can be a set location for certain objects, regardless of items on a surface of the object.
The visual analysis module 312 can be coupled to the assembly frame 304 and positioned above the conveyor assembly 330 to analyze the object before reaching the labeling assembly 310. The visual analysis module 312 can include one or more imaging and/or optical sensor devices (e.g., the imaging devices 222 of
Object information collected by the 2D and 3D cameras can include physical characteristics of the object. For example, the 2D and 3D cameras can both collect the size of a surface (e.g., top, one or more sides) of the object, a rotational orientation (e.g., about the z-axis), and/or a location of the object (e.g., along the y-axis) (individually or collectively, an object pose) relative to the conveyor assembly 330 and/or the labeling assembly 310. The 3D cameras can further collect a height, a width, and/or a length of the object, in addition to other exterior dimensions thereof when the object is non-rectangular or non-square. The 2D cameras can further collect images identifying a texture (e.g., the visual characteristics) of one or more surfaces of the object. For example, the 2D camera can identify images and/or labels and the contents thereof (e.g., image codes, wording, symbols), damage, and/or blank spaces on the top surface using image recognition, optical character recognition (“OCR”), color-based comparison, object-based comparison, text-based comparison, and/or other similar image analysis methods.
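The object pose described above can be estimated from image data in many ways. The sketch below is a hypothetical, minimal example (the corner-point representation and the function name are assumptions for illustration, not part of the disclosure) of deriving a center location and a rotational orientation about the z-axis from the four corners of an object's top surface expressed in conveyor coordinates.

```python
import math

def object_pose_from_corners(corners_xy):
    """Hypothetical pose estimate from four top-surface corners.

    corners_xy: list of (x, y) points in conveyor coordinates (mm), with the
    first two points assumed to lie on the same edge. Returns (center_x,
    center_y, yaw_deg), where yaw_deg is the rotation about the z-axis.
    """
    xs = [p[0] for p in corners_xy]
    ys = [p[1] for p in corners_xy]
    center_x, center_y = sum(xs) / 4.0, sum(ys) / 4.0

    # Yaw from the direction of the edge between the first two corners.
    (x0, y0), (x1, y1) = corners_xy[0], corners_xy[1]
    yaw_deg = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return center_x, center_y, yaw_deg

# Example: a roughly 400 mm x 300 mm box rotated about 5 degrees on the conveyor.
print(object_pose_from_corners([(0, 0), (398, 35), (372, 334), (-26, 299)]))
```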
Object information collected by the scanners can include identifying information (e.g., an object identifier reading), such as an object and/or object contents identifier (e.g., shipping number, object identifier, contents identifier, part number, etc.). In some embodiments, identifying information can be derived from physical characteristics. For example, the labeling system 300 can use the visual analysis module 312, the controller in the controls cabinet 302, and/or one or more devices external to the labeling assembly 310 to analyze the object information/image data for identifying the target labeling location. In analyzing the object information, the labeling system 300 can derive or detect identifiable information, such as the physical dimensions, object identifiers, visual/textural patterns, or the like depicted in the image data. The labeling system 300 can compare the identifiable information to the master data 246 of
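Building on the hypothetical MasterDataRecord sketch above, the comparison step might look like the following; the matching rule (exact identifier lookup with a dimension-tolerance fallback) is an assumption used purely for illustration.

```python
def identify_object(identifier_reading, measured_dims_mm, master_data, tol_mm=10.0):
    """Hypothetical identification step: match scanner/camera output to master data.

    identifier_reading: decoded object identifier, or None if no code was read.
    measured_dims_mm: (length, width, height) reported by the 3D cameras.
    master_data: mapping of identifier -> MasterDataRecord (see sketch above).
    """
    # Preferred path: a decoded identifier keys directly into the master data.
    if identifier_reading is not None and identifier_reading in master_data:
        return master_data[identifier_reading]

    # Fallback: compare measured dimensions against each known record.
    for record in master_data.values():
        if all(abs(measured - expected) <= tol_mm
               for measured, expected in zip(measured_dims_mm, record.dimensions_mm)):
            return record

    return None  # unrecognized object; could trigger a re-scan or an operator alert
```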
The printing module 314 can use the object information to print a label for adhering to the analyzed object. The printing module 314 can include a housing coupled to the assembly frame 304 with a printer therein. As illustrated in
For example, the printing module 314 can print rectangular and/or square labels as small as, or smaller than, 1.0 in×1.0 in (2.5 cm×2.5 cm) or as large as, or greater than, 4.0 in×6.0 in (10.2 cm×15.2 cm). Further, the printed labels can have, for example, white backing and black lettering; black backing, white lettering, and red symbols; red backing and a yellow image; or any other combination of backing and printing colors and contents. In some embodiments, the printing module 314 can print non-rectangular and/or non-square labels, such as triangles, circles, ovals, and/or any other shape. Further, the printing module 314 can print labels having an adhesive on one or more portions thereof. For example, labels requiring flipping, folding, and/or peeling (e.g., a protective covering over the adhesive) before adhesion to the object can include an adhesive covering a first side (e.g., a side facing the conveyor assembly 330), and an adhesive covering at least a portion of a second side (e.g., a side facing away from the conveyor assembly 330).
When the object is visually analyzed, the labeling assembly 310 can print and transfer the label to the labeling module 316. Then, the labeling alignment assembly and the conveyor 332 (together, the “alignment elements”) can engage to align the labeling module 316 with the target labeling location. For example, (i) the conveyor 332 can advance the object to align the labeling module 316 with the target labeling location along the x-axis (e.g., along the first direction), (ii) the lateral-motion module 320 can move the labeling module 316 to align with the target labeling location along the y-axis (e.g., along a second direction), and (iii) the rotary module 324 can rotate the labeling module 316 to align with the target labeling location about the z-axis. Once aligned along the x-axis and the y-axis, and aligned about the z-axis, the vertical-motion module 322 can move the labeling module 316 along the z-axis (e.g., along a third direction) to place the labeling module 316 against the top surface of the object, adhering the label thereto.
In some embodiments, one or more of the alignment elements and/or the printing module 314 can operate in unison and/or in sequence to align the target labeling location with the labeling module 316. For example, while and/or after an object is visually analyzed and the target labeling location is identified, the printing module 314 can print the label, the conveyor 332 can engage to advance the object along the x-axis, and/or the lateral-motion module 320 can engage to move the labeling module 316 along the y-axis. The vertical-motion module 322 and the rotary module 324 can then engage to move the labeling module 316 along and about the z-axis, respectively, and place the label on the object. In some embodiments, the vertical-motion module 322 and/or the rotary module 324 can engage before, at the same time as, or after the conveyor 332 and the lateral-motion module 320. Further, the vertical-motion module 322 and/or the rotary module 324 can engage as or just after (e.g., 0.5 sec, 1 sec, 5 sec, etc.) the labeling module 316 is aligned with the target labeling location along the x, y, and/or z-axes, and/or about the z-axis. Once the label is placed on the object, the labeling module 316 can be retracted by the alignment assembly and prepared to place a label on a subsequent object (e.g., 02). For example, while the labeling module 316 is aligned with the target labeling location of the object (e.g., 01) and/or while the label is placed on the object (e.g., 01), the visual analysis module 312 can visually analyze the subsequent object.
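One way the alignment elements could be driven to close the remaining offsets is sketched below; the interface methods (advance, move, rotate, retract) and the stand-off value are assumptions introduced for illustration rather than the actual control interface.

```python
def align_and_place(offsets, conveyor, lateral, rotary, vertical, standoff_mm=25.0):
    """Hypothetical engagement of the alignment elements.

    offsets: remaining distances between the labeling module and the target
    labeling location along/about each operating axis, e.g.
    {"x": mm, "y": mm, "yaw": deg, "z": mm}, with z being the downward distance
    from the labeling plate to the object's top surface.
    """
    # The x, y, and yaw corrections act on independent axes, so the conveyor,
    # lateral-motion module, and rotary module can engage in unison or in sequence.
    conveyor.advance(offsets["x"])     # along the first direction (conveyor travel)
    lateral.move(offsets["y"])         # along the second direction
    rotary.rotate(offsets["yaw"])      # about the z-axis

    # The vertical-motion module keeps a stand-off until the others finish,
    # then closes the remaining gap to press the label against the top surface.
    vertical.move(offsets["z"] - standoff_mm)
    vertical.move(standoff_mm)         # final press along the third direction
    vertical.retract()
```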
The three zones of the labeling system 400 of
Like the conveyor assembly 330 of
The visual analysis unit 416 can be carried by a visual analysis unit frame 404. The visual analysis unit frame 404 can be coupled to or resting on the ground surface. In some embodiments, the visual analysis unit frame 404 can be coupled to the conveyor assembly 430 and moveable therewith. The visual analysis unit 416 can collect object information for the labeling system 400 to identify the object and/or the target labeling location thereon, as well as collect information for aligning the labeling module 316 with the target labeling location. The visual analysis unit 416 can include one or more imaging devices and/or sensors (e.g., the imaging devices 222 of
The one or more 3D cameras, one or more 2D cameras, and one or more scanners can be coupled to any portion of the visual analysis unit frame 404 and positioned to analyze any one or more surfaces of the object. For example, a 3D camera 418 can be positioned on a top, front, or back portion of the visual analysis unit frame 404 (e.g., the top of the frame 404 toward or away from the labeling assembly 310, respectively) facing a front of the object to collect a height, width, and length of the object within a vision field (e.g., VF) for labeling alignment and placement. One or more 2D cameras 420 can be coupled to the top and/or one or more sides of the visual analysis unit frame 404 to collect images of the top and/or one or more sides of the object to identify the target labeling location. Scanners 422 can be coupled to the top and/or one or more sides of the visual analysis unit frame 404 to collect identifying information from the object. Similarly, one or more sensors 424 can be coupled to the top and/or one or more sides of the visual analysis unit frame 404 for tracking information regarding the conveyor assembly 430 and/or objects thereon. For example, the sensors 424 can include one or more encoders, switches, force sensors, level sensors, proximity sensors, IR beam sensors, light curtains, and/or any similar sensor for tracking operation of the conveyor 432, identifying information regarding the object thereon, and/or a location and/or pose of an object thereon.
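As a small illustration of how the sensor 424 information might be used, the following sketch converts encoder counts from a conveyor roller into the distance an object has traveled along the conveyor; the encoder resolution and roller size are made-up example values.

```python
import math

def conveyor_travel_mm(encoder_ticks, ticks_per_rev, roller_diameter_mm, start_mm=0.0):
    """Hypothetical conveyor tracking: convert encoder ticks counted since the
    object entered the vision field into its position along the conveyor."""
    travel_per_tick = math.pi * roller_diameter_mm / ticks_per_rev
    return start_mm + encoder_ticks * travel_per_tick

# Example: 2048-tick encoder on a 50 mm roller, 10,000 ticks elapsed -> ~767 mm.
print(conveyor_travel_mm(10_000, 2048, 50.0))
```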
The robotic system and/or the labeling system can derive a target labeling location (e.g., TLL) for placing a label (e.g., by the labeling system) on the object 500 and/or print the label for placing on the object 500 based on the surface texture, the identity information, and/or other information regarding the object 500, one or more object surfaces, and/or items on the object surfaces. Further, the robotic system and/or the labeling system can align a labeling module (e.g., the labeling module 316 of
In some embodiments, the labeling system can derive the target labeling location separate from (e.g., non-overlapping) and/or relative to the preexisting items. The labeling system can operate according to one or more predetermined rules for deriving the target labeling location. For example, the labeling system can derive the target labeling location based on rules that prefer one or more regions (e.g., halves, quadrants, corner regions, etc.), use the preexisting label 502 and/or the preexisting image 504 as a reference, or the like. As illustrated in
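A simple version of such a rule is sketched below: scan candidate positions over the top surface and return the first label footprint that does not overlap any preexisting item. The rectangle representation, the scan order, and the step size are assumptions for illustration; the disclosed system may use different or additional rules (e.g., preferred quadrants or reference-based placement).

```python
def overlaps(a, b):
    """Axis-aligned overlap test for rectangles given as (x, y, w, h)."""
    return not (a[0] + a[2] <= b[0] or b[0] + b[2] <= a[0] or
                a[1] + a[3] <= b[1] or b[1] + b[3] <= a[1])

def derive_target_labeling_location(surface_wh, label_wh, keep_out, step_mm=10.0):
    """Hypothetical rule: return the first label footprint on the top surface
    that avoids every keep-out rectangle (preexisting labels, images, damage)."""
    surface_w, surface_h = surface_wh
    label_w, label_h = label_wh
    y = 0.0
    while y + label_h <= surface_h:
        x = 0.0
        while x + label_w <= surface_w:
            candidate = (x, y, label_w, label_h)
            if not any(overlaps(candidate, k) for k in keep_out):
                return candidate
            x += step_mm
        y += step_mm
    return None  # no free region large enough for this label

# Example: 400 x 300 mm top surface with a preexisting label near one corner.
print(derive_target_labeling_location((400, 300), (102, 152), [(0, 0, 150, 100)]))
```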
The lateral frame 602 can translate along the one or more tracks 606 using one or more motors controlled by the robotic system (e.g., the robotic system 100 of
The vertical shaft 702 can be coupled to the labeling assembly 310 using any suitable mechanism allowing the labeling module 316 to translate along the z-axis. For example, the vertical shaft 702 can be carried by a vertical support assembly 703 stationary along (relative to the labeling assembly 310), and rotatable about, the z-axis. The vertical support assembly 703 can include an upper vertical support bracket 704 and a lower vertical support bracket 706 coupled to one or more structures extending from the lateral frame 602. Further, opposing side brackets 708 (or a single side bracket 708) can extend between the upper bracket 704 and the lower bracket 706. In some embodiments, the vertical support assembly 703 can exclude either the upper bracket 704 or the lower bracket 706. The vertical shaft 702 can extend through the upper bracket 704 and/or the lower bracket 706, and between the side brackets 708.
The vertical shaft 702 can translate along the z-axis using one or more motors controlled by the robotic system and/or the labeling system. For example, one or more vertical rack gears 714 can be coupled to the vertical shaft 702, and one or more vertical servos 710 can be coupled to the vertical support assembly 703. Each vertical servo 710 can include one or more vertical pinion gears 712 interfacing with the one or more vertical rack gears 714, and can selectively drive the vertical pinion gears to translate the vertical shaft 702. Additionally, the vertical support assembly 703 can include one or more vertical support gears 716 and/or vertical support cams 718 (e.g., cam rollers, camming surfaces) to maintain alignment of the vertical shaft 702 along the z-axis and allow smooth motion of the vertical shaft 702 along the z-axis. The vertical support gears 716 can interface with the one or more vertical rack gears 714. The vertical support cams 718 can interface with surfaces of the vertical shaft 702. As illustrated in
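The rack-and-pinion arrangement implies a simple relation between pinion rotation and vertical travel. The sketch below assumes an example pitch diameter and is intended only to illustrate that relation, not actual servo commands.

```python
import math

def pinion_rotation_deg(travel_mm, pinion_pitch_diameter_mm):
    """Hypothetical rack-and-pinion relation for the vertical-motion module 322:
    degrees the vertical servo 710 would turn its pinion so the rack (and the
    vertical shaft 702 carrying the labeling module) translates travel_mm along z."""
    rack_travel_per_rev = math.pi * pinion_pitch_diameter_mm
    return 360.0 * travel_mm / rack_travel_per_rev

# Example: lower the labeling module 80 mm using a 40 mm pitch-diameter pinion.
print(pinion_rotation_deg(80.0, 40.0))   # about 229 degrees of pinion rotation
```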
The labeling module 316 can be coupled to a bottom end (e.g., an end closest to the conveyor 332, 432 of
In some embodiments, the vertical-motion module 322 can alternatively align the labeling module 316 with the target labeling location along the z-axis by vertically translating the labeling module 316 and one or more other components of the labeling assembly (e.g., one or more elements of the labeling assembly except the vertical-motion module 322). For example, the vertical-motion module 322 can be moveably coupled to the assembly frame 304, the lateral-motion module 320 of
In some embodiments, the vertical-motion module 322 can alternatively include a mechanism the same as or similar to the mechanism that allows the lateral-motion module 320 of
The rotary module 324 can include a rotating portion interfacing with the vertical-motion module 322, and can be rotated by a stationary portion coupled to the printing module 314, the label flipping module 318, the lateral-motion module 320, and/or any other structure of the labeling assembly 310. The rotating portion can include one or more alignment gears 802 configured to rotate the vertical shaft 702 about the z-axis. The alignment gear 802 can be rotatably coupled to the upper and/or lower brackets 704, 706, and can interface with the vertical support brackets 708 and/or vertical shaft 702 to rotate the vertical shaft 702. For example, the alignment gear 802 can rigidly couple to and rotate the upper and/or lower brackets 704, 706. As a further example, the vertical shaft 702 can extend through an opening of the alignment gear 802, and an inner surface of the opening can press against and rotate the vertical shaft 702.
The stationary portion can rotate the rotating portion using one or more motors controlled by the robotic system and/or the labeling system 300. For example, one or more rotary servos 804 can each selectively drive a rotary pinion gear 806 interfacing with the alignment gear 802 to rotate the vertical-motion module 322. As illustrated in
The array of passthrough slots of the first label adapter 1102 can correspond with the shape and/or size of labels that can cover a majority of the bottom surface of the transfer plate 902 of
Regarding
Before and/or while the label 1300 extends from (e.g., is printed by, expelled from) the printing module 314, the flipping suction assembly 908 of
In some embodiments, a label for the identified object can additionally or alternatively require flipping after printing. For example, the printing module 314 can print the label extending over the bottom of the label flipping module 318 with an adhesive for adhering the label to the object facing the label flipping module 318. In these embodiments, the label can include an adhesive on a top surface (e.g., facing the label flipping module 318), and can include information printed on and/or exclude adhesive on a bottom surface. Before, while, and/or after the label extends from the printing module 314, the flipping suction assembly 908 can engage to hold and temporarily adhere the label to the label flipping module 318. Once the label is printed and partially adhered to the label flipping module 318: (i) the label flipping module 318 can activate, (ii) the flipping suction assembly 908 can disengage, (iii) the labeling suction assembly 1020 can engage to hold the label against the labeling module 316, and/or (iv) the label flipping module 318 can deactivate and the label can separate therefrom. The labeling suction assembly 1020 can hold the prepared (e.g., flipped) label with the adhesive (previously on the top surface) facing the object and the target labeling location.
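The (i)-(iv) hand-off above can be summarized as a short sequence; the method names below (suction_on, flip, and so on) are placeholders for whatever actuation interface the flipping and labeling suction assemblies expose, and are not part of the disclosure.

```python
def transfer_flipped_label(printer, flipper, labeler):
    """Hypothetical hand-off when the printed label must be flipped so that its
    adhesive (printed facing the label flipping module) ends up facing the object."""
    printer.print_label()     # label extends over the bottom of the flipping module
    flipper.suction_on()      # hold/temporarily adhere the label to the flipping module
    flipper.flip()            # (i) the label flipping module activates
    flipper.suction_off()     # (ii) the flipping suction assembly disengages
    labeler.suction_on()      # (iii) the labeling suction assembly holds the label
    flipper.unflip()          # (iv) the flipping module deactivates; label separates
    # The label is now on the labeling plate with its adhesive facing the object.
```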
In some embodiments, a label for the identified object can require neither folding nor flipping. For example, as illustrated in
Optically scanning an object on an object conveyor for visual features and physical features (process portion 1602) can include moving and/or holding the object, or a portion thereof, within a vision field of a visual analysis module and/or unit, and/or collecting information regarding the object with one or more imaging devices of the visual analysis module and/or unit. For example, the one or more imaging devices can collect information regarding visual features, such as one or more available labeling spaces and/or one or more object identifier readings. The available labeling space can include surface areas of the object having minimum required dimensions and/or uniform texture, and/or excluding any recognizable patterns (e.g., barcode, QR code, letters or design markers, etc.). The one or more imaging devices can also collect information regarding physical features, such as a height, a width, and/or a length of the object, and/or additional exterior dimensions, as well as physical features regarding the pose of the object relative to the labeling system and/or the robotic system. For example, regarding the object pose, the collected information can identify (or be used to identify) a distance and/or rotation of the object, and/or one or more object surfaces, relative to the labeling system or a portion thereof.
Identifying (e.g., deriving) a target labeling location from the visual features (process portion 1604) can include the labeling system and/or the robotic system analyzing the available labeling space to identify a location that satisfies one or more predetermined conditions for placing the label. For example, the location can correspond with a location within the available labeling space, a location dictated by industry standard, a location improving future handling of the object, and/or another location facilitating more efficient object label reading, such as distancing the label from other surface contents, rotating the label along a certain orientation, etc.
Preparing, based on the visual features, an object label on a labeling module carried by an alignment assembly (process portion 1606) can include the labeling system and/or the robotic system instructing the labeling assembly to print and prepare the label on the labeling module. A printing module can print a label with information thereon based on the available labeling space and/or the one or more object identifier readings. For example, the printing module (or the labeling system and/or the robotic system) can select a type (e.g., shape, size, color, etc.) of label to print, and/or barcodes, QR codes, letters, and/or designs to print on the label. A label flipping module can then fold, flip, peel, and/or transfer the printed label to the labeling module. The labeling module can hold the printed label, with an adhesive facing the object, by engaging a suction assembly.
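One illustrative selection rule for the label type is shown below: choose the largest standard label that still fits within the available labeling space. The size set and units are example assumptions; the disclosed printing module may support other shapes and sizes.

```python
STANDARD_LABEL_SIZES_IN = [(1.0, 1.0), (2.0, 3.0), (4.0, 6.0)]   # example sizes only

def choose_label_size(available_space_in):
    """Hypothetical rule: pick the largest standard label (width, height in inches)
    that fits inside the available labeling space."""
    space_w, space_h = available_space_in
    fitting = [(w, h) for (w, h) in STANDARD_LABEL_SIZES_IN
               if w <= space_w and h <= space_h]
    return max(fitting, key=lambda wh: wh[0] * wh[1]) if fitting else None

print(choose_label_size((5.0, 7.0)))   # -> (4.0, 6.0)
```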
Aligning, based on the physical features, the labeling module with the target labeling location using the object conveyor and the alignment assembly (process portion 1608) can include engaging the object conveyor, a lateral-motion module, a vertical-motion module, and/or a rotary module to move the object, or a portion thereof, under the labeling assembly. Further, aligning can include deriving an object placement pose where the labeling module is aligned with the target labeling location. For example, based on at least the height, the width, the length, and/or the pose of the object at the visual analysis module and/or unit, the labeling system and/or the robotic system can derive the object placement pose where the object can be located under the labeling assembly and the labeling module can be aligned with the target labeling location (e.g., a location of the object where the target labeling location is within a region of possible orientations of the labeling module by the alignment assembly). The labeling system and/or the robotic system can also derive a motion plan to align the labeling module with the target labeling location while the object is at the placement pose. The motion plan can include offset distances between the target labeling location and the labeling module as the object moves from its pose at the visual analysis module and/or unit to the placement pose. The offset distances can include distances along and/or about the operating axes of the object conveyor and the elements of the alignment assembly. The object conveyor, the lateral-motion module, the vertical-motion module, and/or the rotary module can then selectively, simultaneously, and/or sequentially engage to reduce and/or eliminate the respective offset distances. In some embodiments, the vertical-motion module can maintain the offset distance between the target labeling location and the labeling module along the operating axis thereof above a certain threshold distance. For example, the offset along the z-axis can be maintained as at least, greater than, or less than 1 in, 2 in, or 3 in (2.5 cm, 5.1 cm, or 7.6 cm).
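The offset derivation described for process portion 1608 might, in a highly simplified form, look like the following; the pose representation (x/y/z in millimeters, yaw in degrees) and the minimum z-gap are assumptions used only to make the idea concrete.

```python
def derive_offsets(scan_pose, placement_pose, module_pose, min_z_gap_mm=25.0):
    """Hypothetical offset derivation for aligning the labeling module.

    scan_pose: object pose measured at the visual analysis module/unit.
    placement_pose: derived object placement pose under the labeling assembly.
    module_pose: current labeling-module pose.
    All poses are dicts with "x", "y", "z" (mm) and "yaw" (deg).
    """
    return {
        # Conveyor travel needed to bring the object to the placement pose.
        "x": placement_pose["x"] - scan_pose["x"],
        # Remaining lateral and rotational offsets for the alignment assembly.
        "y": placement_pose["y"] - module_pose["y"],
        "yaw": placement_pose["yaw"] - module_pose["yaw"],
        # Allowed descent while still keeping at least min_z_gap_mm of clearance
        # above the target labeling location until the final press.
        "z": max(module_pose["z"] - placement_pose["z"] - min_z_gap_mm, 0.0),
    }
```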
Applying, based on the physical features, the object label to the object using the alignment assembly (process portion 1610) can include pressing the label adhesive against the object at the target labeling location. For example, the vertical-motion module can engage to eliminate the offset distance between the target labeling location and the labeling module along the operating axis thereof. The vertical-motion module can further press the labeling module against the surface of the object (e.g., exert a force against the object via the labeling module), ensuring adhesion of the label to the object. The suction assembly can be disengaged and the labeling module retracted by one or more elements of the labeling assembly, and the object conveyor can move the object from under the labeling assembly and/or to a subsequent portion of the labeling system and/or the robotic system.
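The press itself could be guarded by the contact sensors described earlier; the sketch below lowers the labeling module in small steps until a target press force is reported, then releases the suction and retracts. The force threshold, step size, and interface names are illustrative assumptions.

```python
def press_and_release(vertical, labeler, contact_sensor,
                      target_force_n=15.0, step_mm=1.0, max_travel_mm=40.0):
    """Hypothetical force-guarded press for applying the label (process portion 1610)."""
    traveled = 0.0
    # Descend along the z-axis until the press force is reached or travel runs out.
    while contact_sensor.force_n() < target_force_n and traveled < max_travel_mm:
        vertical.move(-step_mm)
        traveled += step_mm
    labeler.suction_off()      # release the label onto the object at the target location
    vertical.move(traveled)    # retract the labeling module to its starting height
```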
Aspects of one or more of the robotic and/or labeling systems described herein can efficiently and/or automatically prepare and adhere labels to objects within the robotic system. Labels can be adhered so as to avoid preexisting labels, images, and/or other items on the objects as they progress through the robotic system. By providing automatic labeling, the robotic and/or labeling system can improve object tracking and/or management without requiring human involvement, without slowing operation of the robotic system, and/or without removing the objects from the robotic system.
The present technology is illustrated, for example, according to various aspects described below. Various examples of aspects of the present technology are described as numbered examples (1, 2, 3, etc.) for convenience. These are provided as examples and do not limit the present technology. It is noted that any of the dependent examples can be combined in any suitable manner, and placed into a respective independent example. The other examples can be presented in a similar manner.
1. A multi-purpose labeling system, comprising:
2. The multi-purpose labeling system of example 1 further comprising a label flipping module between the printer and the labeling module, the label flipping module configured to transfer the label from the printer to the labeling plate.
3. The multi-purpose labeling system of example 2, wherein the label flipping module includes:
4. The multi-purpose labeling system of example 1 further comprising an assembly frame carrying the labeling assembly over the conveyor and spacing the labeling assembly from the conveyor along the second direction.
5. The multi-purpose labeling system of example 4, wherein the lateral-motion module is moveably coupled to the assembly frame and carries the printer, the labeling module, the vertical-motion module, and the rotary module.
6. The multi-purpose labeling system of example 5, wherein the lateral-motion module is moveably coupled to the assembly frame using a carriage and track.
7. The multi-purpose labeling system of example 4, wherein the printer is rigidly coupled to the frame, and the lateral-motion module is moveably coupled to the assembly frame and carries the labeling module, the vertical-motion module, and the rotary module.
8. The multi-purpose labeling system of example 1, wherein the at least one memory component carries instructions that, when executed by the at least one processor, perform operations further including:
9. The multi-purpose labeling system of example 8, wherein computing the placement location includes identifying one or more labels, images, logos, or surface damages on the object, and computing the placement location as nonoverlapping with the one or more labels, images, logos, or surface damages on the object.
10. The multi-purpose labeling system of example 1 further comprising a visual analysis module frame independent of and spaced along the first direction from the labeling assembly, wherein the visual analysis module frame carries the visual analysis module over the conveyor and spaces the visual analysis module from the conveyor along the second direction.
11. The multi-purpose labeling system of example 1, wherein the labeling module includes a compliance assembly configured to align the labeling plate with the surface of the object when the labeling plate is adjacent thereto.
12. The multi-purpose labeling system of example 1, wherein the image data generated by the visual analysis module includes 2D image data and/or 3D image data.
13. A multi-purpose labeling system, comprising:
14. The multi-purpose labeling system of example 13, wherein the operations further include positioning the labeling plate, using the alignment assembly, adjacent to a surface of the object to place the label thereon.
15. The multi-purpose labeling system of example 13, wherein aligning the labeling module with the object based on the reading by the visual analysis module further includes:
16. The multi-purpose labeling system of example 13, wherein aligning the labeling module with the object based on the reading by the visual analysis module further includes identifying a target labeling location for placing the label on a surface of the object.
17. A method for placing a label on an object using a multi-purpose labeling system, comprising:
18. The method of example 17, wherein the alignment assembly includes a lateral-motion module, and wherein aligning further includes:
19. The method of example 17, wherein the alignment assembly includes a rotary module, and wherein aligning further includes:
20. The method of example 17, wherein the alignment assembly includes a vertical-motion module, and wherein applying further includes engaging the vertical-motion module to vertically advance the labeling module to adhere the object label to the object.
From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the technology. To the extent any material incorporated herein by reference conflicts with the present disclosure, the present disclosure controls. Where the context permits, singular or plural terms may also include the plural or singular term, respectively. Moreover, unless the word “or” is expressly limited to mean a single item exclusive from the other items in reference to a list of two or more items, then the use of “or” in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list. Furthermore, as used herein, the phrase “and/or” as in “A and/or B” refers to A alone, B alone, and both A and B. Additionally, the terms “comprising,” “including,” “having,” and “with” are used throughout to mean including at least the recited feature(s) such that any greater number of the same features and/or additional types of other features are not precluded.
From the foregoing, it will also be appreciated that various modifications may be made without deviating from the disclosure or the technology. For example, one of ordinary skill in the art will understand that various components of the technology can be further divided into subcomponents, or that various components and functions of the technology may be combined and integrated. In addition, certain aspects of the technology described in the context of particular embodiments may also be combined or eliminated in other embodiments. Furthermore, although advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
The present application claims the benefit of U.S. Provisional Patent Application No. 63/232,665, filed Aug. 13, 2021, the entirety of which is incorporated herein by reference.