The present technology is directed generally to robotic systems and, more specifically, to systems, processes, and techniques for operation and deployment thereof.
With their ever-increasing performance and decreasing cost, many robots (e.g., machines configured to automatically/autonomously execute physical actions) are now extensively used in many different fields. Robots, for example, can be used to execute various tasks (e.g., manipulate or transfer an object through space) in manufacturing and/or assembly, packing and/or packaging, transport and/or shipping, etc. In executing the tasks, the robots can replicate human actions, thereby replacing or reducing the human involvement that is otherwise required to perform dangerous or repetitive tasks.
However, despite the technological advancements, robots often lack the sophistication necessary to duplicate human interactions required for executing larger and/or more complex tasks. Furthermore, robots often lack the capability of rapid transport and deployment to multiple locations within work environments. Accordingly, there remains a need for improved techniques and systems for managing operations and/or interactions between robots, and there further remains a need for improved mobility of robotic systems to allow for rapid relocation within work environments.
In embodiments, a transfer unit cell for deployment of a robotic system is provided. The transfer unit cell may be configured for the transfer of objects, the transfer unit cell being in communication with a control system and translatable between a deployed configuration configured to receive a pallet within the transfer unit cell, and a retracted configuration wherein the transfer unit cell is retracted into itself, the transfer unit cell further including: a cell base plate; a robotic arm mount on the cell base plate for attachment of a robotic arm; a conveyor system, adjacent the robotic arm mount, for receiving a target object; a sensor mount attached to the cell base plate for a sensor system including a sensor array; and a unit enclosure mounted to the cell base plate of the transfer unit cell to facilitate transport of the transfer unit cell, and translation of the transfer unit cell between the deployed configuration and the retracted configuration.
In embodiments, a transfer unit cell for deployment of a robotic system is provided. The transfer unit cell may be configured for the transfer of objects and translatable between a deployed configuration configured to receive and secure a pallet, and a retracted configuration wherein the transfer unit cell is retracted into itself, and may include: a cell base plate; a robotic arm mount for receiving a robotic arm; a conveyor system for receiving a target object; a sensor mount for receiving a sensor system including a sensor array; and a unit enclosure mounted to the cell base plate to facilitate transport of the transfer unit cell, and translation of the transfer unit cell between the deployed configuration and the retracted configuration.
In embodiments, a method for rapid deployment and integration of a robotic system is provided. The method may include: locating and deploying a transfer unit cell having a cell base plate into a deployed configuration configured to receive and secure a pallet containing a plurality of objects; securing the transfer unit cell in the deployed configuration such that the transfer unit cell is stabilized and supported; sensing the plurality of objects contained on the pallet via a sensor system including a sensor array attached to a sensor mount of the transfer unit cell; controlling a robotic arm of the transfer unit cell to interact with a target object sensed from among the plurality of objects contained on the pallet; and transferring the target object from the pallet to a conveyor system of the transfer unit cell.
FIG. 2D is a block diagram that illustrates yet another embodiment of a computing system configured to perform or facilitate the detection, identification, and retrieval of objects, consistent with embodiments hereof.
Systems and methods for a robotic system with a coordinated transfer mechanism are described herein. The robotic system (e.g., an integrated system of devices that each execute one or more designated tasks) configured in accordance with some embodiments autonomously executes integrated tasks by coordinating operations of multiple units (e.g., robots). A depalletization robot is configured for integration with a transportable box or frame structure that can provide automation to a warehouse or other work environment without requiring adjustments to the warehouse/work environment infrastructure. The transportable frame structure can be delivered, located, deployed, and made operational within a day to provide work environment automation without necessitating excessive cost outlay or time to deployment. More particularly, the transportable frame structure and integrated robot may be sized to fit within standard shipping containers to achieve these rapid delivery, location, deployment, and operation aspects as further detailed herein.
In the following, numerous specific details are set forth to provide a thorough understanding of the presently disclosed technology. In other embodiments, the techniques introduced here can be practiced without these specific details. In other instances, well-known features, such as specific functions or routines, are not described in detail in order to avoid unnecessarily obscuring the present disclosure. References in this description to “an embodiment,” “one embodiment,” or the like mean that a particular feature, structure, material, or characteristic being described is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in this specification do not necessarily all refer to the same embodiment. On the other hand, such references are not necessarily mutually exclusive either. Furthermore, the particular features, structures, materials, or characteristics can be combined in any suitable manner in one or more embodiments. It is to be understood that the various embodiments shown in the figures are merely illustrative representations and are not necessarily drawn to scale.
The present application refers to systems and robotic systems. Robotic systems, as discussed herein, may include robotic actuator components (e.g., robotic arms, robotic grippers, etc.), various sensors (e.g., cameras, etc.), and various computing or control systems. As discussed herein, computing systems or control systems may be referred to as “controlling” various robotic components, such as robotic arms, robotic grippers, cameras, etc. Such “control” may refer to direct control of and interaction with the various actuators, sensors, and other functional aspects of the robotic components. For example, a computing system may control a robotic arm by issuing or providing all of the required signals to cause the various motors, actuators, and sensors to cause robotic movement. Such “control” may also refer to the issuance of abstract or indirect commands to a further robotic control system that then translates such commands into the necessary signals for causing robotic movement. For example, a computing system may control a robotic arm by issuing a command describing a trajectory or destination location to which the robotic arm should move, and a further robotic control system associated with the robotic arm may receive and interpret such a command and then provide the necessary direct signals to the various actuators and sensors of the robotic arm to cause the required movement.
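By way of a non-limiting illustration, the following Python sketch (with hypothetical names such as MoveCommand, ArmController, and ComputingSystem that are not part of the present disclosure) shows the distinction between issuing an abstract, indirect command and a lower-level controller that translates that command into direct actuator signals:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class MoveCommand:
    """High-level, indirect command: a destination pose, not motor signals."""
    x: float
    y: float
    z: float


class ArmController:
    """Hypothetical lower-level controller that owns the actuators."""

    def execute(self, command: MoveCommand) -> List[str]:
        # Translate the abstract command into per-joint signals.
        # A real controller would run inverse kinematics and drive motors;
        # here we only return placeholder signal descriptions.
        return [
            f"joint_{i}: drive toward ({command.x}, {command.y}, {command.z})"
            for i in range(6)
        ]


class ComputingSystem:
    """Hypothetical computing system that 'controls' the arm indirectly."""

    def __init__(self, controller: ArmController):
        self.controller = controller

    def move_arm_to(self, x: float, y: float, z: float) -> None:
        # The computing system issues only the abstract command; the
        # controller produces the direct actuator signals.
        signals = self.controller.execute(MoveCommand(x, y, z))
        for s in signals:
            print(s)


if __name__ == "__main__":
    ComputingSystem(ArmController()).move_arm_to(0.4, 0.1, 0.3)
```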
Several details describing structures or processes that are well-known and often associated with robotic systems and subsystems, but that can unnecessarily obscure some significant aspects of the disclosed techniques, are not set forth in the following description for purposes of clarity. Moreover, although the following disclosure sets forth several embodiments of different aspects of the present technology, several other embodiments may have different configurations or different components than those described in this section. Accordingly, the disclosed techniques may have other embodiments with additional elements or without several of the elements described below.
Many embodiments or aspects of the present disclosure described below may take the form of computer- or controller-executable instructions, including routines executed by a programmable computer or controller. Those skilled in the relevant art will appreciate that the disclosed techniques can be practiced on or with computer or controller systems other than those shown and described below. The techniques described herein can be embodied in a special-purpose computer or data processor that is specifically programmed, configured, or constructed to execute one or more of the computer-executable instructions described below. Accordingly, the terms “computer” and “controller” as generally used herein refer to any data processor and can include Internet appliances and handheld devices (including palm-top computers, wearable computers, cellular or mobile phones, multi-processor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like). Information handled by these computers and controllers can be presented on any suitable display medium, including a liquid crystal display (LCD). Instructions for executing computer- or controller-executable tasks can be stored in or on any suitable computer-readable medium, including hardware, firmware, or a combination of hardware and firmware. Instructions can be contained in any suitable memory device, including, for example, a flash drive, USB device, and/or other suitable medium.
The terms “coupled” and “connected,” along with their derivatives, can be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” can be used to indicate that two or more elements are in direct contact with each other. Unless otherwise made apparent in the context, the term “coupled” can be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) contact with each other, or that the two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship, such as for signal transmission/reception or for function calls), or both.
Any reference herein to image analysis by a computing system may be performed according to or using spatial structure information that may include depth information which describes respective depth values of various locations relative to a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. Spatial structure information is merely one form of possible image analysis, and other forms known by one skilled in the art may be used in accordance with the methods described herein.
In embodiments, the camera 1200 (which may also be referred to as an image sensing device) may be a 2D camera and/or a 3D camera.
In embodiments, the system 1000 may be a robot operation system for facilitating robot interaction between a robot and various objects in the environment of the camera 1200.
In embodiments, the computing system 1100 of
In embodiments, the computing system 1100 may form or be part of a vision system. The vision system may be a system which generates, e.g., vision information describing an environment in which the robot 1300/306 is located or, alternatively or additionally, an environment in which the camera 1200 is located. The vision information may include the 3D image information and/or the 2D image information discussed above, or some other image information. In some scenarios, if the computing system 1100 forms a vision system, the vision system may be part of the robot control system discussed above or may be separate from the robot control system. If the vision system is separate from the robot control system, the vision system may be configured to output information describing the environment in which the robot 1300/306 is located. The information may be outputted to the robot control system, which may receive such information from the vision system and perform motion planning and/or generate robot interaction movement commands based on the information. Further information regarding the vision system is detailed below.
In embodiments, the computing system 1100 may communicate with the camera 1200 and/or with the robot 1300/306 via a direct connection, such as a connection provided via a dedicated wired communication interface, such as a RS-232 interface, a universal serial bus (USB) interface, and/or via a local computer bus, such as a peripheral component interconnect (PCI) bus. In embodiments, the computing system 1100 may communicate with the camera 1200 and/or with the robot 1300/306 via a network. The network may be any type and/or form of network, such as a personal area network (PAN), a local-area network (LAN), e.g., Intranet, a metropolitan area network (MAN), a wide area network (WAN), or the Internet. The network may utilize different techniques and layers or stacks of protocols, including, e.g., the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, or the SDH (Synchronous Digital Hierarchy) protocol.
In embodiments, the computing system 1100 may communicate information directly with the camera 1200 and/or with the robot 1300/306, or may communicate via an intermediate storage device, or more generally an intermediate non-transitory computer-readable medium.
As stated above, the camera 1200 may be a 3D camera and/or a 2D camera. The 2D camera may be configured to generate a 2D image, such as a color image or a grayscale image. The 3D camera may be, e.g., a depth-sensing camera, such as a time-of-flight (TOF) camera or a structured light camera, or any other type of 3D camera. In some cases, the 2D camera and/or 3D camera may include an image sensor, such as a charge coupled device (CCD) sensor and/or a complementary metal oxide semiconductor (CMOS) sensor. In embodiments, the 3D camera may include lasers, a LIDAR device, an infrared device, a light/dark sensor, a motion sensor, a microwave detector, an ultrasonic detector, a RADAR detector, or any other device configured to capture depth information or other spatial structure information.
As stated above, the image information may be processed by the computing system 1100. In embodiments, the computing system 1100 may include or be configured as a server (e.g., having one or more server blades, processors, etc.), a personal computer (e.g., a desktop computer, a laptop computer, etc.), a smartphone, a tablet computing device, and/or any other computing system. In embodiments, any or all of the functionality of the computing system 1100 may be performed as part of a cloud computing platform. The computing system 1100 may be a single computing device (e.g., a desktop computer), or may include multiple computing devices.
In embodiments, the non-transitory computer-readable medium 1120, which is part of the computing system 1100, may be an alternative or addition to the intermediate non-transitory computer-readable medium 1400 discussed above. The non-transitory computer-readable medium 1120 may be a storage device, such as an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof, for example, such as a computer diskette, a hard disk drive (HDD), a solid state drive (SSD), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, any combination thereof, or any other storage device. In some instances, the non-transitory computer-readable medium 1120 may include multiple storage devices. In certain implementations, the non-transitory computer-readable medium 1120 is configured to store image information generated by the camera 1200 and received by the computing system 1100. In some instances, the non-transitory computer-readable medium 1120 may store one or more object recognition templates used for performing methods and operations discussed herein. The non-transitory computer-readable medium 1120 may alternatively or additionally store computer readable program instructions that, when executed by the processing circuit 1110, cause the processing circuit 1110 to perform one or more methodologies described herein.
In an embodiment of the computing system 1100B, as depicted in
In an embodiment, the processing circuit 1110 may be programmed by one or more computer-readable program instructions stored on the non-transitory computer-readable medium 1120. For example, FIG. 2D illustrates a computing system 1100C, which is an embodiment of the computing system 1100/1100A/1100B, in which the processing circuit 1110 is programmed by one or more modules, including an object recognition module 1121, a motion planning and control module 1129, and an object manipulation planning and control module 1126. Each of the above modules may represent computer-readable program instructions configured to carry out certain tasks when instantiated on one or more of the processors, processing circuits, computing systems, etc., described herein. Each of the above modules may operate in concert with one another to achieve the functionality described herein. Various aspects of the functionality described herein may be carried out by one or more of the software modules described above and the software modules and their descriptions are not to be understood as limiting the computational structure of systems disclosed herein. For example, although a specific task or functionality may be described with respect to a specific module, that task or functionality may also be performed by a different module as required. Further, the system functionality described herein may be performed by a different set of software modules configured with a different breakdown or allotment of functionality.
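Purely for illustration, the following sketch (hypothetical Python classes, not the actual module implementations) shows how an object recognition module, a motion planning and control module, and an object manipulation module might cooperate on one processing circuit, with the understanding that functionality may be allotted differently across modules:

```python
class ObjectRecognitionModule:
    """Hypothetical stand-in for the object recognition module (1121)."""
    def analyze(self, image_information):
        # Obtain/analyze image information; return detected objects.
        return [{"object_id": "target", "pose": (0.0, 0.0, 0.0)}]


class MotionPlanningAndControlModule:
    """Hypothetical stand-in for the motion planning and control module (1129)."""
    def plan(self, target_pose):
        # Plan a trajectory toward the target pose.
        return ["approach", "grasp_height", "retreat"]


class ObjectManipulationModule:
    """Hypothetical stand-in for the object manipulation planning and control module (1126)."""
    def grasp(self, target):
        return f"grasping {target['object_id']}"


class ProcessingCircuit:
    """Modules operating in concert; the breakdown of functionality shown
    here is illustrative only and not limiting."""
    def __init__(self):
        self.recognition = ObjectRecognitionModule()
        self.motion = MotionPlanningAndControlModule()
        self.manipulation = ObjectManipulationModule()

    def run_cycle(self, image_information):
        objects = self.recognition.analyze(image_information)
        target = objects[0]
        trajectory = self.motion.plan(target["pose"])
        return trajectory, self.manipulation.grasp(target)


if __name__ == "__main__":
    print(ProcessingCircuit().run_cycle(image_information=None))
```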
In an embodiment, the object recognition module 1121 may be configured to obtain and analyze image information as discussed throughout the disclosure. Methods, systems, and techniques discussed herein with respect to image information may use the object recognition module 1121. The object recognition module may further be configured for object recognition tasks related to object identification, as discussed herein.
The motion planning and control module 1129 may be configured to plan and execute the movement of a robot. For example, the motion planning and control module 1129 may interact with other modules described herein to plan motion of a robot 3300 for object retrieval operations and for camera placement operations. Methods, systems, and techniques discussed herein with respect to robotic arm movements and trajectories may be performed by the motion planning and control module 1129.
In embodiments, the motion planning and control module 1129 may be configured to plan robotic motion and robotic trajectories to account for the carriage of soft objects. As discussed herein, soft objects may have a tendency to droop, sag, flex, bend, etc. during movement. Such tendencies may be addressed by the motion planning and control module 1129. For example, during lifting operations, it may be expected that a soft object will sag or flex, causing forces on the robotic arm (and associated gripping devices, as described below) to vary, alter, or change in unpredictable ways. Accordingly, the motion planning and control module 1129 may be configured to include control parameters that provide a greater degree of reactivity, permitting the robotic system to adjust to alterations in load more quickly. In another example, soft objects may be expected to swing or flex (e.g., predicted flex behavior) during movement due to internal momentum. Such movements may be adjusted for by the motion planning and control module 1129 by calculating the predicted flex behavior of an object. In yet another example, the motion planning and control module 1129 may be configured to predict or otherwise account for a deformed or altered shape of a transported soft object when the object is deposited at a destination. The flexing or deformation of a soft object (e.g., flex behavior) may result in an object with a different shape, footprint, etc. than that same object had when it was initially lifted. Thus, the motion planning and control module 1129 may be configured to predict or otherwise account for such changes when placing the object down.
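As a rough, non-limiting sketch of the kinds of adjustments described above (the formulas and scales below are placeholders, not the disclosed control laws), soft-object handling may involve estimating sag, raising controller reactivity, and enlarging the expected placement footprint:

```python
def predicted_sag(length_m: float, stiffness: float, mass_kg: float,
                  gravity: float = 9.81) -> float:
    """Rough illustrative estimate of how far a soft object droops when
    lifted; placeholder formula, not the disclosed prediction method."""
    return (mass_kg * gravity * length_m) / max(stiffness, 1e-6)


def adjust_controller_gain(base_gain: float, softness: float) -> float:
    """Raise reactivity for softer objects (softness on a hypothetical
    0-to-1 scale) so unpredictable load changes are tracked more quickly."""
    return base_gain * (1.0 + softness)


def placement_footprint(rigid_footprint_m: float, sag_m: float) -> float:
    """Account for a deformed shape at the destination: a sagging object
    may occupy a larger footprint than it had when initially lifted."""
    return rigid_footprint_m + 2.0 * sag_m


if __name__ == "__main__":
    sag = predicted_sag(length_m=0.6, stiffness=400.0, mass_kg=2.0)
    print(f"predicted sag: {sag:.3f} m")
    print(f"adjusted gain: {adjust_controller_gain(1.0, softness=0.7):.2f}")
    print(f"placement footprint: {placement_footprint(0.6, sag):.3f} m")
```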
The object manipulation planning and control module 1126 may be configured to plan and execute the object manipulation activities of a robotic arm or end effector apparatus, e.g., grasping and releasing objects and executing robotic arm commands to aid and facilitate such grasping and releasing. As discussed below, dual grippers and adjustable multi-point gripping devices may require a series of integrated and coordinated operations to grasp, lift, and transport objects. Such operations may be coordinated by the object manipulation planning and control module 1126 to ensure smooth operation of the dual grippers and adjustable multi-point gripping devices.
With reference to
In embodiments, the computing system 1100 may obtain image information representing an object in a camera field of view (e.g., field of view 3200) of a camera 1200. In some instances, the object may be at least one target object 112 from a plurality of objects in a start/source location 114 in a field of view of a camera 1200, as described below. The steps and techniques described below for obtaining image information may be referred to below as an image information capture operation 5002. In some instances, the object may be one object from a plurality of objects in the field of view 3200 of a camera 1200. The image information 2600, 2700 may be generated by the camera (e.g., camera 1200) when the objects are (or have been) in the camera field of view 3200 and may describe one or more of the individual objects in the field of view 3200 of a camera 1200. The object appearance describes the appearance of an object from the viewpoint of the camera 1200. If there are multiple objects in the camera field of view, the camera may generate image information that represents the multiple objects or a single object (such image information related to a single object may be referred to as object image information), as necessary. The image information may be generated by the camera (e.g., camera 1200) when the group of objects is (or has been) in the camera field of view, and may include, e.g., 2D image information and/or 3D image information.
As an example,
As stated above, the image information may in some embodiments be all or a portion of an image, such as the 2D image information 2600. In examples, the computing system 1100 may be configured to extract an image portion 2000A from the 2D image information 2600 to obtain only the image information associated with a corresponding object 3410A. Where an image portion (such as image portion 2000A) is directed towards a single object it may be referred to as object image information. Object image information is not required to contain information only about an object to which it is directed. For example, the object to which it is directed may be close to, under, over, or otherwise situated in the vicinity of one or more other objects. In such cases, the object image information may include information about the object to which it is directed as well as about one or more neighboring objects. The computing system 1100 may extract the image portion 2000A by performing an image segmentation or other analysis or processing operation based on the 2D image information 2600 and/or the 3D image information 2700.
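For illustration only, a minimal sketch of extracting object image information from a larger image, assuming a prior segmentation step has produced a bounding box (the function name and box format are hypothetical):

```python
import numpy as np


def extract_object_image_portion(image: np.ndarray,
                                 bbox: tuple[int, int, int, int]) -> np.ndarray:
    """Crop the image portion for one detected object.

    bbox is (row_min, col_min, row_max, col_max), assumed to come from a
    prior segmentation step; neighboring objects that overlap the box will
    also appear in the returned portion.
    """
    r0, c0, r1, c1 = bbox
    return image[r0:r1, c0:c1].copy()


if __name__ == "__main__":
    scene = np.zeros((480, 640, 3), dtype=np.uint8)
    scene[100:200, 150:300] = 255          # a bright "object" in the scene
    portion = extract_object_image_portion(scene, (90, 140, 210, 310))
    print(portion.shape)                   # (120, 170, 3)
```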
The respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to some other reference point. In some embodiments, the 3D image information 2700 may include a point cloud which includes respective coordinates for various locations on structures of objects in the camera field of view (e.g., field of view 3200).
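As one hedged example of how depth information can yield a point cloud, the following sketch back-projects a depth image through a pinhole camera model; the intrinsic parameters are illustrative, and a 3D camera such as the camera 1200 may instead supply the point cloud directly:

```python
import numpy as np


def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image into an N x 3 point cloud in the camera
    frame using a pinhole model (fx, fy, cx, cy are hypothetical
    intrinsics)."""
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]        # keep locations with valid depth


if __name__ == "__main__":
    depth = np.full((480, 640), 1.2, dtype=np.float32)   # flat surface 1.2 m away
    cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(cloud.shape)                                   # (307200, 3)
```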
In an embodiment, an image normalization operation may be performed by the computing system 1100 as part of obtaining the image information. The image normalization operation may involve transforming an image or an image portion generated by the camera 1200, so as to generate a transformed image or transformed image portion. For example, the obtained image information, which may include the 2D image information 2600, the 3D image information 2700, or a combination of the two, may undergo an image normalization operation that attempts to alter the viewpoint, object position, and/or lighting condition of the image information to match those associated with the visual description information. Such normalizations may be performed to facilitate a more accurate comparison between the image information and model (e.g., template) information. The viewpoint may refer to a pose of an object relative to the camera 1200, and/or an angle at which the camera 1200 is viewing the object when the camera 1200 generates an image representing the object. As used herein, “pose” may refer to an object location and/or orientation.
For example, the image information may be generated during an object recognition operation in which a target object is in the camera field of view 3200. The camera 1200 may generate image information that represents the target object when the target object has a specific pose relative to the camera. For instance, the target object may have a pose which causes its top surface to be perpendicular to an optical axis of the camera 1200. In such an example, the image information generated by the camera 1200 may represent a specific viewpoint, such as a top view of the target object. In some instances, when the camera 1200 is generating the image information during the object recognition operation, the image information may be generated with a particular lighting condition, such as a lighting intensity. In such instances, the image information may represent a particular lighting intensity, lighting color, or other lighting condition.
In an embodiment, the image normalization operation may involve adjusting an image or an image portion of a scene generated by the camera, so as to cause the image or image portion to better match a viewpoint and/or lighting condition associated with information of an object recognition template. The adjustment may involve transforming the image or image portion to generate a transformed image which matches at least one of an object pose or a lighting condition associated with the visual description information of the object recognition template.
The viewpoint adjustment may involve processing, warping, and/or shifting of the image of the scene so that the image represents the same viewpoint as visual description information that may be included within an object recognition template. Processing, for example, may include altering the color, contrast, or lighting of the image; warping of the scene may include changing the size, dimensions, or proportions of the image; and shifting of the image may include changing the position, orientation, or rotation of the image. In an example embodiment, processing, warping, and/or shifting may be used to alter an object in the image of the scene to have an orientation and/or a size which matches or better corresponds to the visual description information of the object recognition template. If the object recognition template describes a head-on view (e.g., top view) of some object, the image of the scene may be warped so as to also represent a head-on view of an object in the scene.
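By way of illustration, the following sketch applies a rotation/scale warp to a scene image so that a depicted object better matches the orientation and size of a template view; the angle and scale values stand in for an estimated object pose, and OpenCV is used here only as one possible tool, not as the disclosed implementation:

```python
import cv2
import numpy as np


def normalize_viewpoint(scene_image: np.ndarray, angle_deg: float,
                        scale: float) -> np.ndarray:
    """Warp (rotate/scale) a scene image so the depicted object better
    matches the orientation and size of an object recognition template.
    The angle and scale would come from an estimated object pose; here
    they are illustrative inputs."""
    h, w = scene_image.shape[:2]
    matrix = cv2.getRotationMatrix2D(center=(w / 2, h / 2),
                                     angle=angle_deg, scale=scale)
    return cv2.warpAffine(scene_image, matrix, (w, h))


if __name__ == "__main__":
    scene = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.rectangle(scene, (200, 150), (400, 300), (255, 255, 255), -1)
    normalized = normalize_viewpoint(scene, angle_deg=15.0, scale=1.1)
    print(normalized.shape)
```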
Further aspects of the object recognition and image normalization methods performed herein are described in greater detail in U.S. application Ser. No. 16/991,510, filed Aug. 12, 2020, and U.S. application Ser. No. 16/991,466, filed Aug. 12, 2020, each of which is incorporated herein by reference.
In various embodiments, the terms “computer-readable instructions” and “computer-readable program instructions” are used to describe software instructions or computer code configured to carry out various tasks and operations. In various embodiments, the term “module” refers broadly to a collection of software instructions or code configured to cause the processing circuit 1110 to perform one or more functional tasks. The modules and computer-readable instructions may be described as performing various operations or tasks when a processing circuit or other hardware component is executing the modules or computer-readable instructions.
In an embodiment, the system 3000 of
In an embodiment, the system 3000 may include a camera 1200 or multiple cameras 1200, including a 2D camera that is configured to generate 2D image information and a 3D camera that is configured to generate 3D image information. The camera 1200 or cameras 1200 may be mounted or affixed to the robot 3300, may be stationary within the environment, and/or may be affixed to a dedicated robotic system separate from the robot 3300 used for object manipulation, such as a robotic arm, gantry, or other automated system configured for camera movement.
In the example of
The robot 3300 may further include additional sensors configured to obtain information used to implement the tasks, such as for manipulating the structural members and/or for transporting the robotic units. The sensors can include devices configured to detect or measure one or more physical properties of the robot 3300 (e.g., a state, a condition, and/or a location of one or more structural members/joints thereof) and/or of a surrounding environment. Some examples of the sensors can include accelerometers, gyroscopes, force sensors, strain gauges, tactile sensors, torque sensors, position encoders, etc.
For the example illustrated in
In some embodiments, the task can include manipulation (e.g., moving and/or reorienting) of the at least one target object 112 (e.g., one of the packages, boxes, cases, cages, pallets, or individual objects, etc. corresponding to the executing task). For example, the task can be palletizing or depalletizing the at least one target object 112 from the start/source location 114 to the task/destination location 116. In an example of the task of palletizing, an unloading unit (not shown; e.g., a devanning robot) can be configured to transfer the at least one target object 112 from a location in a carrier (e.g., a truck) to a location on a conveyor 110, which may be integral with the transfer unit cell 104 as further described herein. Further, the transfer unit cell 104 can be configured to transfer the at least one target object 112 from one location (e.g., the conveyor, a pallet, or a bin) to another location (e.g., a pallet, a bin, etc.). The transfer unit cell 104 can be configured to transfer the at least one target object 112 from the start/source location 114 to the task/destination location 116.
In some embodiments, the task can be depalletizing the pallet containing the at least one target object 112. For example, the transport unit or vehicle 106 can transport a pallet loaded with the at least one target object 112 to the transfer unit cell 104 which can be configured to transfer the at least one target object 112 from the pallet to another location (e.g., another pallet, a bin, a conveyor, etc.). In further embodiments, the task may include any type of robotic picking or placing task.
For illustrative purposes, the robotic system 100 is described in the context of a packaging and/or shipping center; however, it is understood that the robotic system 100 can be configured to execute tasks in other environments/for other purposes, such as for manufacturing, assembly, storage/stocking, healthcare, and/or other types of automation. It is also understood that the robotic system 100 can include other units, such as manipulators, service robots, modular robots, etc., not shown.
The control units or processors 202 can include data processors (e.g., central processing units (CPUs), special-purpose computers, and/or onboard servers) configured to execute instructions (e.g., software instructions) stored on the storage units 204 (e.g., computer memory). The control units or processors 202 may include a control interface 240 for interaction with an end user. In some embodiments, the control units 202 can be included in a separate/stand-alone controller that is operably coupled to the other electronic/electrical devices described herein.
The storage units 204 can include non-transitory computer-readable mediums having stored thereon program instructions (e.g., software). Some examples of the storage units 204 can include volatile memory (e.g., cache and/or random-access memory (RAM)) and/or non-volatile memory (e.g., flash memory and/or magnetic disk drives). Other examples of the storage units 204 can include portable memory and/or cloud storage devices. The storage units 204 may be implemented by any of the computer-readable media discussed herein.
In some embodiments, the storage units 204 can be used to further store and provide access to processing results and/or predetermined data/thresholds. For example, the storage units 204 can store master data 246 that includes descriptions of objects (e.g., boxes, cases, and/or products) that may be manipulated by the robotic system 100. In one or more embodiments, the master data 246 can include a dimension, a shape (e.g., templates for potential poses and/or computer-generated models for recognizing the object in different poses), a color scheme, an image, identification information (e.g., bar codes, quick response (QR) codes, logos, etc., and/or expected locations thereof), an expected weight, other physical/visual characteristics, or a combination thereof for the objects expected to be manipulated by the robotic system 100. In some embodiments, the master data 246 can include manipulation-related information regarding the objects, such as a center-of-mass (CoM) location on each of the objects, expected sensor measurements (e.g., for force, torque, pressure, and/or contact measurements) corresponding to one or more actions/maneuvers, or a combination thereof.
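For illustration, a minimal sketch of how a master data entry of the kind described above might be represented; the field names and values are hypothetical and are not the actual schema of the master data 246:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class MasterDataRecord:
    """Illustrative master-data entry for one object type; fields track
    the kinds of information described above."""
    sku: str
    dimensions_mm: Tuple[float, float, float]        # length, width, height
    expected_weight_kg: float
    color_scheme: str = "unknown"
    barcode: Optional[str] = None
    center_of_mass_mm: Optional[Tuple[float, float, float]] = None
    expected_grip_force_n: Optional[float] = None


MASTER_DATA = {
    "BOX-A": MasterDataRecord(
        sku="BOX-A",
        dimensions_mm=(400.0, 300.0, 250.0),
        expected_weight_kg=4.5,
        barcode="0123456789012",
        center_of_mass_mm=(200.0, 150.0, 110.0),
    ),
}

if __name__ == "__main__":
    record = MASTER_DATA["BOX-A"]
    print(record.dimensions_mm, record.expected_weight_kg)
```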
The communication units 206 can include circuits configured to communicate with external or remote devices via a network. For example, the communication units 206 can include receivers, transmitters, modulators/demodulators (modems), signal detectors, signal encoders/decoders, connector ports, network cards, etc. The communication units 206 can be configured to send, receive, and/or process electrical signals according to one or more communication protocols (e.g., the Internet Protocol (IP), wireless communication protocols, etc.). The communication units 206 may further include a communication interface 248 for interaction with an end user for said sending, receiving, and/or processing of electrical signals according to said one or more communication protocols. In some embodiments, the robotic system 100 can use the communication units 206 to exchange information between units of the robotic system 100 and/or exchange information (e.g., for reporting, data gathering, analyzing, and/or troubleshooting purposes) with systems or devices external to the robotic system 100.
The system interfaces 208 can include user interface devices such as a display interface 250 configured to communicate information to and/or receive information from human operators. For example, the system interfaces 208 can include a display 210 and/or other output devices (e.g., a speaker, a haptics circuit, or a tactile feedback device, etc.) for communicating information to the human operator. Further, the system interfaces 208 can include control or receiving devices, such as a keyboard, a mouse, a touchscreen, a microphone, a user interface (UI) sensor (e.g., a camera for receiving motion commands), a wearable input device, etc. In some embodiments, the robotic system 100 can use the system interfaces 208 to interact with the human operators in executing an action, a task, an operation, or a combination thereof.
The robot or robotic arm 306 (which may be an example of the robot 3300) of the robotic system 100 may include physical or structural members (e.g., robotic manipulator arms) that are connected at joints for motion (e.g., rotational and/or translational displacements). The structural members and the joints can form a kinetic chain configured to manipulate an end-effector (e.g., the gripper) configured to execute one or more tasks (e.g., gripping, spinning, welding, etc.) depending on the use/operation of the robotic system 100. The robot or robotic arm 306 may include a distal end 306a with an end of arm tool or end effector apparatus 544 disposed thereon. The end effector apparatus 544 may be configured for interacting with the at least one target object 112. The robotic system 100 can include the actuation unit 212 (e.g., motors, actuators, wires, artificial muscles, electroactive polymers, etc.) configured to drive or manipulate (e.g., displace and/or reorient) the structural members about or at a corresponding joint. In some embodiments, the robotic system 100 can include the transport motors 214 configured to transport the corresponding units/chassis from place to place.
The robotic system 100 can include the sensor units 216 configured to obtain information used to implement the tasks, such as for manipulating the structural members and/or for transporting the robotic units. The sensor units 216 can include devices configured to detect or measure one or more physical properties of the robotic system 100 (e.g., a state, a condition, and/or a location of one or more structural members/joints thereof) and/or of a surrounding environment. Some examples of the sensor units 216 can include accelerometers, gyroscopes, force sensors, strain gauges, tactile sensors, torque sensors, position encoders, etc.
In some embodiments, for example, the sensor units 216 can include one or more imaging devices 222 (e.g., visual and/or infrared cameras, 2D and/or 3D imaging cameras, distance measuring devices such as lidars or radars, etc.) configured to detect the surrounding environment. The imaging devices 222 can generate representations of the detected environment, such as digital images and/or point clouds, that may be processed via machine/computer vision (e.g., for automatic inspection, robot guidance, or other robotic applications). As described in further detail above, the robotic system 100 (via, e.g., the control units 202) can process the digital image and/or the point cloud to identify the at least one target object 112.
For manipulating the at least one target object 112, the robotic system 100 (via, e.g., the various circuits/devices described above) can capture and analyze image data of a designated area (e.g., a pickup location, such as inside the truck or on the conveyor belt) to identify the at least one target object 112 and the start/source location 114 thereof. Similarly, the robotic system 100 can capture and analyze image data of another designated area (e.g., a drop location for placing objects on the conveyor, a location for placing objects inside the container, or a location on the pallet for stacking purposes) to identify the task/destination location 116. For example, the imaging devices 222 can include one or more cameras configured to generate image data of the pickup area and/or one or more cameras configured to generate image data of the task area (e.g., drop area). Based on the image data, as described below, the robotic system 100 can determine the start/source location 114, the task/destination location 116, the associated poses, a packing/placement location, and/or other processing results.
In some embodiments, for example, the sensor units 216 can include position sensors 224 (e.g., position encoders, potentiometers, etc.) configured to detect positions of structural members (e.g., the robotic arms and/or the end-effectors) and/or corresponding joints of the robotic system 100. The robotic system 100 can use the position sensors 224 to track locations and/or orientations of the structural members and/or the joints during execution of the task. The robotic system 100 can include the transfer unit cell 104.
The cell base plate 302 may be a substantially level (i.e., within a five degree angle of the horizontal axis, or top planar surface, of the cell base plate 302) structure or platform having a flat surface composed of metal (e.g., steel, aluminum, etc.) or any other material (e.g., carbon fiber) or combination of materials sufficient to support the robot 306, conveyor system 310, sensor mount 540, control system 308, unit enclosure 320, and any other features, and to maintain its structural integrity during translation of the transfer unit cell 104 between deployed, retracted, and transport configurations 410, 412, and 414, respectively, and during robotic system 100 operations. The cell base plate 302 may be formed in any parallelepiped shape where the top surface 302c includes a planar surface having an area sufficient to contain or mount thereon the robotic arm mount 304, the robot or robotic arm 306, the control system 308, the conveyor system 310, the sensor mount 540, and/or the unit enclosure 320.
Vertically oriented sides or edges 302a of the cell base plate 302 can include openings/pockets 303 configured for receiving the tines of a fork lift or other transport unit 106 to enable lifting and transport of the transfer unit cell 104. The openings/pockets 303 may be positioned around the center of gravity of the transfer unit cell 104 to maximize stability when transporting/moving the transfer unit cell 104. The openings/pockets 303 may be slots disposed on the edges 302a of the cell base plate 302 formed of any material sufficient to maintain integrity while the tines of the fork lift insert and lift the transfer unit cell 104. Alternatively, the fork lift may lift and transport the transfer unit cell 104 by sliding its tines underneath the cell base plate 302.
As illustrated in
The base extensions 432 are formed or configured to provide stability and/or balance to support the transfer unit cell 104 while the transfer unit cell 104 is in the deployed configuration 410.
The cell base plate 302 may further include a payload guide 319 defined by at least one rail 318.
In embodiments, the cell base plate 302 and/or the base extensions 432 include anchorless support features 316 which may include one or more friction-increasing components extending from a bottom surface 302b of the cell base plate 302 and/or the base extensions 432. More particularly, the anchorless support features 316 may include rubber pads/feet, suction cups, magnets, adhesive strips, or any other material comprising a rough surface. By using the anchorless support features 316, the transfer unit cell 104 does not require securing to the facility floor via bolts or anchoring mechanisms. The anchorless support features 316 of the transfer unit cell 104 can therefore enable immediate deployment of the transfer unit cell 104.
The unit enclosure 320, aspects of which are illustrated in
As illustrated in
The conveyor system 310 may be configured to translate or move the at least one target object 112 received thereon to a location or position outside of the unit enclosure 320 of the transfer unit cell 104. The conveyor 110 or dynamic platform of the conveyor system 310 is configured for the movement of the at least one target object 112 received from the robot or robotic arm 306 along its length, e.g., in the direction of movement facilitated by the two or more pulleys. The conveyor system 310 may further serve as the task/destination location 116 configured for placement of the at least one target object 112 by the robot 306 while employing the methods or operations further described herein. In embodiments, the conveyor system 310 may instead be configured to translate or move the at least one target object 112 received thereon to a location or position inside of the unit enclosure 320 of the transfer unit cell 104. The conveyor 110 or dynamic platform of the conveyor system 310 is configured for the movement of the at least one target object 112 received from an outside source along its length, e.g., in the direction of movement facilitated by the two or more pulleys. The conveyor system may further serve as the start/source location 114 configured for providing the at least one target object 112 to the robotic system 100 for interaction with via the robot 306 while employing the methods or operations further described herein.
The robotic arm mount 304, illustrated, e.g., in
The robotic arm 306 may include an end effector apparatus 544 having appendages configured for grabbing, grasping, picking, or otherwise interacting with the at least one target object 112, the end effector apparatus 544 being disposed at a distal end of the robot or robotic arm 306. The end effector apparatus 544 may be a tool configured for manipulating objects. For example, the end effector apparatus 544 may be any form of gripper, such as a hand- or claw-based gripper or a vacuum- or suction-based gripper.
The transfer unit cell 104 further includes a sensor system 312.
In some embodiments, the control system 308 may include the systems and elements described in
In embodiments, while in the retracted configuration 412, the transfer unit cell 104 may further be interacted with via the transport unit 106 to configure the transfer unit cell 104 into a transport configuration 414. The transport configuration 414 allows for the portability and rapid deployment and integration of the transfer unit cell 104 around the environment, such that the transfer unit cell 104 may be moved to another location where it may be re-configured into the deployed configuration 410, and/or moved into a storage space as previously described herein.
In embodiments, the transfer unit cell 104 may be in a partially retracted configuration 412A.
In the deployment operation 2010, the method 2000 may first include a finding/locating step 2011 for locating the transfer unit cell 104 within the environment so as to initiate a loading step 2012, and a deploying/securing step 2014 of the transfer unit cell 104 having the cell base plate 302 into the deployed configuration 410, where, while in the deployed configuration 410, the transfer unit cell 104 and/or robotic system 100 is configured to receive and secure a pallet containing a plurality of objects.
Locating or finding the transfer unit cell 104 may include determining the location of the transfer unit cell 104 within the environment, such as a warehouse. The transfer unit cell 104 may be located by remote tracking or identification procedures (such as GPS), communicated to the robotic system 100 via the communication units 206, for example. Locating the transfer unit cell 104 may further include automatically controlling, or manually driving, the transport unit 106 to the known location of the transfer unit cell 104 after the location of the transfer unit cell 104 within the environment is identified. The loading step 2012 may include loading the transfer unit cell 104 onto the transport vehicle or the transport unit 106 for transport in the transport configuration 414 to a desired location (i.e., the start/source location 114).
In embodiments, loading the transfer unit cell 104 onto the transport unit 106 into the transport configuration 414 may include receiving or lifting the cell base plate 302 via tines of a fork lift received in the openings/pockets of the cell base plate 302, as previously described herein. Alternatively, loading the transfer unit cell 104 onto the transport unit 106 may include receiving the transfer unit cell 104 on a conveyor, an automated guided vehicle (AGV), an autonomous mobile robot (AMR), or any other type of dynamic structure capable of moving the transfer unit cell 104 around the environment to the start/source location 114.
The deploying/securing step 2014 of the deployment operation 2010 may include positioning, lowering, and/or securing the transfer unit cell 104 into a desired position in the deployed configuration 410 such that the transfer unit cell 104 is stabilized and supported during robotic system 100 operations. In embodiments, securing the transfer unit cell 104 in the deployed configuration 410 may include securing or stabilizing the transfer unit cell 104 using the anchorless support features 316 as previously described herein. As discussed above, the anchorless support features 316 may have friction-inducing properties that prevent sliding, shifting, or general displacement of the transfer unit cell 104 during operation.
In the sensing operation 2020, the method 2000 may include sensing or detecting the at least one target object 112 among the plurality of objects contained on the pallet or start/source location 114 via the sensor system 312 as previously described herein with respect to FIGS. 2D-3C. In embodiments, the sensing operation 2020 may include a target identification step 2022 that may include identifying the at least one target object 112 within the start/source location 114 (such as a pallet, a conveyor, a specified area on the warehouse floor, etc.). The target identification step 2022 may utilize any embodiment of the obtaining image information procedures previously described herein with respect to the systems 1000/1500A/1500B/1500C/1100/1100A/1100B/3000/100. In embodiments of the sensing operation 2020, sensing the at least one target object 112 within the start/source location 114 may include the sensor system 312 having the sensor array 542 attached to the sensor mount 540 of the transfer unit cell 104 as previously described herein. The sensor mount 540 may have any combination of sensors and/or peripheral devices for detection of the at least one target object 112, such as two dimensional cameras, three dimensional cameras, scanners, lighting arrays, or the like mounted thereon. The sensor mount 540 may further be adjusted along its vertical axis to a position for optimal sensing of the at least one target object 112 within the start/source location 114, as previously described herein.
In the trajectory generation operation 2030, the method 2000 may include calculating a planned trajectory of the robot arm or robot 306 and/or the end effector apparatus 544. Calculating the planned trajectory may include determining a trajectory path of the robot arm or robot 306 and/or the end effector apparatus 544 toward the start/source location 114 and/or the at least one target object 112. For example, the robotic system 100 may identify the start/source location 114 as the container placement area 314, which may include a pallet containing the at least one target object 112, or a stack or pile of at least one target object 112. In embodiments, the start/source location 114 may be identified by the robotic system 100 as the conveyor system 310. For example, the robotic system 100 may identify the start/source location 114 as the conveyor 110 of the conveyor system 310, which could present the at least one target object 112 in a queue while the conveyor moves the at least one target object 112 along its length in a direction toward or within the transfer unit cell 104. In calculating the planned trajectory, the robotic system 100 may further calculate a trajectory of the end effector apparatus 544 toward the at least one target object 112 once the robot arm or robot 306 and/or the end effector apparatus 544 are within the vicinity of the start/source location 114. The robotic system 100 may further calculate an approach trajectory of the end effector apparatus 544 toward the at least one target object 112 identified by the robotic system 100 for interaction. Calculating the approach trajectory may further include calculating a grip of the end effector apparatus 544 for picking, grasping, or otherwise interacting with the at least one target object 112.
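As a simplified, non-limiting sketch of the trajectory generation operation 2030, the following example produces Cartesian waypoints from an approach pose above a sensed object down to a grasp pose; a full planner would additionally perform inverse kinematics, grip calculation, and collision checking, and the clearance and step values here are illustrative:

```python
import numpy as np


def plan_pick_trajectory(target_position: np.ndarray,
                         approach_clearance_m: float = 0.15,
                         steps: int = 5) -> list[np.ndarray]:
    """Hypothetical planned trajectory toward a target object: waypoints
    descend vertically from an approach pose above the object to the
    grasp pose."""
    approach = target_position + np.array([0.0, 0.0, approach_clearance_m])
    return [approach + (target_position - approach) * (i / steps)
            for i in range(steps + 1)]


if __name__ == "__main__":
    target = np.array([0.55, -0.20, 0.30])          # sensed object position (m)
    for waypoint in plan_pick_trajectory(target):
        print(np.round(waypoint, 3))
```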
In embodiments, the trajectory generation operation 2030 may include calculating a return trajectory of the robot arm or robot 306 and/or the end effector apparatus 544 from the start/source location 114 to the task/destination location 116 once the at least one target object 112 is picked, grasped, or otherwise interacted with via the end effector apparatus 544. For example, calculating the return trajectory of the robot arm or robot 306 and/or the end effector apparatus 544 may include determining a trajectory toward the conveyor system 310 serving as the task/destination location 116, from the container placement area 430 serving as the start/source location 114. The calculated return trajectory may include a trajectory path ending adjacent to the conveyor 110 of the conveyor system 310. Calculating the return trajectory may further include determining a trajectory of the robot arm or robot 306 that avoids collision with the other components of the transfer unit cell 104, such as the unit enclosure 320, the sensor system 312, the conveyor system 310, the cell base plate 302, and/or any other related components as described herein. In other words, calculating the return trajectory may include determining a trajectory within the operational area defined by the inside volume of the transfer unit cell 104 (i.e., a product of the length of the cell base plate 302 plus the base extensions 432, the width of the cell base plate 302 along base plate edge 302a, and the height of the unit enclosure 320). The calculated return trajectory may further include releasing the at least one target object 112 via the end effector apparatus 544 once the robot arm or robot 306 and/or the end effector apparatus 544 are adjacent to the conveyor 110. In embodiments, calculating the return trajectory of the robot arm or robot 306 and/or the end effector apparatus 544 may include determining a trajectory toward the container placement area 430 serving as the task/destination location 116, from the conveyor system 310 serving as the start/source location 114. The calculated return trajectory may include a trajectory path ending adjacent to the container placement area 430 or a pallet disposed within the container placement area 430. The calculated return trajectory may further include releasing the at least one target object 112 via the end effector apparatus 544 once the robot arm or robot 306 and/or the end effector apparatus 544 are adjacent to the container placement area 430. In still other embodiments, the trajectory generation operation 2030 may include calculating a planned trajectory of the robot arm or robot 306 and/or the end effector apparatus 544 toward the start/source location 114, calculating an approach trajectory of the end effector apparatus 544 toward the at least one target object 112 once the robot arm or robot 306 and/or the end effector apparatus 544 are in the vicinity of the start/source location 114, calculating a return trajectory of the robot arm or robot 306 and/or the end effector apparatus 544 toward the task/destination location 116, and calculating release of the at least one target object 112 once the end effector apparatus 544 holding the at least one target object 112 is within the vicinity of or adjacent to the task/destination location 116.
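Purely as an illustration of constraining the return trajectory to the operational area described above (the cell dimensions below are hypothetical), a waypoint check might look like the following sketch:

```python
def within_operational_volume(point_m, length_m, width_m, height_m):
    """Check whether a trajectory waypoint stays inside the operational
    volume of the transfer unit cell (length x width x height, measured
    from one corner of the cell base plate; dimensions are illustrative)."""
    x, y, z = point_m
    return 0.0 <= x <= length_m and 0.0 <= y <= width_m and 0.0 <= z <= height_m


def return_trajectory_is_safe(waypoints, length_m=3.0, width_m=2.0, height_m=2.5):
    """Reject a candidate return trajectory if any waypoint leaves the
    cell's inside volume; collision checks against the enclosure, sensor
    mount, and conveyor would be layered on top of this in practice."""
    return all(within_operational_volume(p, length_m, width_m, height_m)
               for p in waypoints)


if __name__ == "__main__":
    candidate = [(0.5, 0.5, 1.0), (1.5, 1.0, 1.2), (2.8, 1.8, 0.9)]
    print(return_trajectory_is_safe(candidate))           # True
    print(return_trajectory_is_safe([(3.5, 0.5, 1.0)]))   # False: outside length
```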
In the trajectory execution operation 2040, the method 2000 may include controlling the robot arm or robot 306 and/or the end effector apparatus 544 of the transfer unit cell 104 toward the start/source location 114 to interact with the at least one target object 112 sensed from among the plurality of objects contained on the start/source location 114, as determined during the trajectory generation operation 2030. The method 2000 may include controlling the robot arm or robot 306 and/or the end effector apparatus 544 within the transfer unit cell 104 toward the container placement area 314 serving as the start/source location 114, which may include the pallet containing the at least one target object 112. In embodiments, the method may include controlling the robot arm or robot 306 and/or the end effector apparatus 544 within the transfer unit cell 104 toward the conveyor system 310 serving as the start/source location 114 containing the at least one target object 112. The trajectory execution operation 2040 may further include controlling the end effector apparatus 544 within the start/source location 114 to pick, grasp, or otherwise interact with the at least one target object 112 identified by the robotic system 100 for transfer to the task/destination location 116. The trajectory execution operation 2040 may further include a transferring step 2042 for transferring the at least one target object 112 via the robot arm or robot 306 and/or the end effector apparatus 544 from the start/source location 114 to the task/destination location 116, as determined by the trajectory generation operation 2030. For example, the robot arm or robot 306 and/or the end effector apparatus 544 may transfer the at least one target object 112 from the container placement area 314, or from a pallet containing the at least one target object 112 within the container placement area 314, to the conveyor system 310 of the transfer unit cell 104.
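For illustration only, the following sketch strings the trajectory execution operation 2040 together as an ordered sequence of moves, a grasp, and a release; the robot interface shown is a hypothetical stub, not the disclosed control system:

```python
from dataclasses import dataclass


@dataclass
class Target:
    name: str
    position: tuple


class StubRobot:
    """Hypothetical robot/end-effector interface used only to illustrate
    the order of operations in the trajectory execution step."""
    def move_to(self, position):
        print(f"moving to {position}")

    def grasp(self, target):
        print(f"grasping {target.name}")

    def release(self):
        print("releasing object")


def execute_transfer(robot, target: Target, source_pos, destination_pos):
    # Approach the source, pick the sensed target, carry it along the
    # return trajectory, and release at the destination (e.g., conveyor).
    robot.move_to(source_pos)
    robot.move_to(target.position)
    robot.grasp(target)
    robot.move_to(destination_pos)
    robot.release()


if __name__ == "__main__":
    execute_transfer(StubRobot(), Target("box", (0.55, -0.20, 0.30)),
                     source_pos=(0.5, -0.2, 0.6), destination_pos=(-0.4, 0.3, 0.5))
```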
In the retraction operation 2050, the method 2000 may include retracting the transfer unit cell 104 into the retracted configuration 412 wherein the transfer unit cell 104 is retracted into itself, as previously described herein. Retracting the transfer unit cell 104 into the retracted configuration 412 may include retracting the conveyor system 310, the cell base plate 302, and/or the sensor mount 540 into the transfer unit cell 104, as previously described herein. While in the retracted configuration 412, the transfer unit cell 104 may further be configurable into the transport configuration 414, which allows for the portability and rapid deployment and integration of the transfer unit cell 104. In the retracted configuration 412, the conveyor system 310 may be retracted into and covered by the unit enclosure 320, the sensor system 312 may be retracted to a position below the top of the unit enclosure 320 (i.e., below its peak vertical axis as previously described herein), and/or the base extensions 432 may be folded into the edge 302a of the cell base plate 302 or removed from the cell base plate 302 entirely. The retraction operation 2050 may further include a loading step 2052, which includes translating the transfer unit cell 104 from the retracted configuration 412 into the transport configuration 414, where the retracted transfer unit cell 104 is engaged by or loaded onto the transport unit 106 as previously described herein to move the transfer unit cell 104 within the environment. More particularly, the transport unit 106 may move the transfer unit cell 104 to another container placement area 314 for further execution of the method 2000 described herein. Alternatively, the transport unit 106 may move the transfer unit cell 104 to a storage area or storage position once all operations of the method 2000 described herein are complete, or once there are no further objects within the environment for interaction via the robotic system 100.
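For illustration only, the retraction operation can be viewed as a small set of configuration transitions, as in the following sketch. The step ordering, function names, and printed actions are assumptions intended to mirror the sequence described above, and do not limit the mechanical implementations of the transfer unit cell 104.

```python
# Illustrative sketch only; the transitions mirror the retraction and loading
# steps described above, with assumed names and ordering.
from enum import Enum, auto


class CellConfiguration(Enum):
    DEPLOYED = auto()    # deployed configuration
    RETRACTED = auto()   # retracted configuration 412
    TRANSPORT = auto()   # transport configuration 414


def retract_cell(current: CellConfiguration) -> CellConfiguration:
    """Retract the transfer unit cell into itself (retraction operation)."""
    assert current is CellConfiguration.DEPLOYED
    print("retract conveyor system into the unit enclosure")
    print("lower sensor system below the top of the unit enclosure")
    print("fold or detach base extensions at the cell base plate edge")
    return CellConfiguration.RETRACTED


def prepare_transport(current: CellConfiguration) -> CellConfiguration:
    """Loading step: translate from the retracted into the transport configuration."""
    assert current is CellConfiguration.RETRACTED
    print("engage transport unit (e.g., forklift) with the retracted cell")
    return CellConfiguration.TRANSPORT


state = retract_cell(CellConfiguration.DEPLOYED)
state = prepare_transport(state)
```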
In general, the method 2000 described herein may be used for the rapid deployment and integration of a robotic system for the manipulation (e.g., moving and/or reorienting) of a target object (e.g., one of the packages, boxes, cases, cages, pallets, etc. corresponding to the executing task) from a start/source location to a task/destination location. For example, a transport unit (e.g., a forklift) may be configured to transport a transfer unit cell comprising the robotic system from one location to the start/source location (e.g., in a warehouse). The transfer unit cell can then be extended into a deployed configuration configured for robotic system interaction with the target objects at the start/source location. The robotic system may be configured to transfer the target object from the start/source location (e.g., a conveyor, a pallet, a container placement area, or a bin) to a task/destination location (e.g., a conveyor, a pallet, a container placement area, a bin, etc.). Upon completing the operation, the transfer unit cell may be retracted into a retracted or compacted position ready for further transport via the transport unit to another start/source location, or to a storage location. Details regarding the task and the associated actions are described above.
The above Detailed Description of examples of the disclosed technology is not intended to be exhaustive or to limit the disclosed technology to the precise form disclosed above. While specific examples for the disclosed technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosed technology, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or sub-combinations. Each of these processes or blocks may be implemented in a variety of different ways. Further, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.
These and other changes may be made to the disclosed technology in light of the above Detailed Description. While the Detailed Description describes certain examples of the disclosed technology as well as the best mode contemplated, the disclosed technology may be practiced in many ways, no matter how detailed the above description appears in text. Details of the system may vary considerably in their specific implementation, while still being encompassed by the technology disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the disclosed technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the disclosed technology with which that terminology is associated. Accordingly, the invention is not limited, except as by the appended claims. In general, the terms used in the following claims should not be construed to limit the disclosed technology to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms.
It will be apparent to one of ordinary skill in the relevant arts that other suitable modifications and adaptations to the methods and applications described herein may be made without departing from the scope of any of the embodiments. The embodiments described above are illustrative examples and it should not be construed that the present disclosure is limited to these particular embodiments. It should be understood that various embodiments disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the methods or processes). In addition, while certain features of embodiments hereof are described as being performed by a single component, module, or unit for purposes of clarity, it should be understood that the features and functions described herein may be performed by any combination of components, units, or modules. Thus, various changes and modifications may be effected by one skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
Further embodiments include:
Embodiment 1. A robotic system, comprising: a control system; a transfer unit cell for the transfer of objects, the transfer unit cell being in communication with the control system and translatable between a deployed configuration configured to receive a pallet within the transfer unit cell, and a retracted configuration wherein the transfer unit cell is retracted into itself, the transfer unit cell further including: a cell base plate; a robotic arm mount on the cell base plate for attachment of a robotic arm; a conveyor system, adjacent the robotic arm mount, for receiving a target object; a sensor mount attached to the cell base plate for a sensor system including a sensor array; and a unit enclosure mounted to the cell base plate of the transfer unit cell to facilitate transport of the transfer unit cell, and translation of the transfer unit cell between the deployed configuration and the retracted configuration.
Embodiment 2. The robotic system of embodiment 1 wherein the retracted configuration of the transfer unit cell includes the conveyor system, the cell base plate, and the sensor mount being retracted into the transfer unit cell.
Embodiment 3. The robotic system of embodiment 1, wherein the cell base plate includes base extensions extending from an edge of the cell base plate and forming a container placement area between the base extensions, the base extensions formed to provide stability and/or balance to support the transfer unit cell while in the deployed configuration.
Embodiment 4. The robotic system of embodiment 3, wherein the base extensions are detachable from the edge of the cell base plate to reduce a footprint of the transfer unit cell while in the retracted configuration.
Embodiment 5. The robotic system of embodiment 3, wherein the base extensions are hingedly connected to the edge of the cell base plate to permit folding of the base extensions toward the cell base plate to reduce a footprint of the transfer unit cell while in the retracted configuration.
Embodiment 6. The robotic system of embodiment 1, wherein the cell base plate includes anchorless support features including one or more friction-increasing components extending from a bottom surface of the cell base plate.
Embodiment 7. The robotic system of embodiment 1, wherein the robotic arm further includes a distal end with an end effector apparatus disposed thereon, the end effector apparatus configured for interacting with the target object.
Embodiment 8. The robotic system of embodiment 1, wherein the conveyor system is mounted to the cell base plate and is extendable beyond an edge of the cell base plate, the conveyor system further including a dynamic platform for movement of the target object received from the robotic arm.
Embodiment 9. The robotic system of embodiment 1, wherein the unit enclosure further includes: a frame surrounding the transfer unit cell, the frame including vertical posts extending substantially perpendicularly from the cell base plate, and a fence attached to and between each of the vertical posts, such that the fence includes separable portions moveable to expose or cover portions of the transfer unit cell.
Embodiment 10. A transfer unit cell for deployment of a robotic system, the transfer unit cell comprising: a cell base plate for the transfer of objects, the transfer unit cell being translatable between a deployed configuration configured to receive and secure a pallet, and a retracted configuration wherein the transfer unit cell is retracted into itself; a robotic arm mount for receiving a robotic arm; a conveyor system for receiving a target object; a sensor mount for receiving a sensor system including a sensor array; and a unit enclosure mounted to the cell base plate to facilitate transport of the transfer unit cell, and translation of the transfer unit cell between the deployed configuration and the retracted configuration.
Embodiment 11. The transfer unit cell of embodiment 10, wherein the cell base plate further includes base extensions extending from an edge of the cell base plate and forming a container placement area between the base extensions, the base extensions configured to provide stability and/or balance to support the transfer unit cell during operation and motion of the robotic arm while in the deployed configuration.
Embodiment 12. The transfer unit cell of embodiment 11, wherein the base extensions are detachable from the edge of the cell base plate to reduce a footprint of the transfer unit cell while in the retracted configuration.
Embodiment 13. The transfer unit cell of embodiment 11, wherein the base extensions are hingedly connected to the edge of the cell base plate to permit the base extensions to fold toward the cell base plate to reduce a footprint of the transfer unit cell while in the retracted configuration.
Embodiment 14. The transfer unit cell of embodiment 10, wherein the cell base plate further provides anchorless support features including one or more friction-increasing components extending from a bottom surface of the cell base plate.
Embodiment 15. The transfer unit cell of embodiment 10, wherein the conveyor system is mounted to the cell base plate and is extendable beyond an edge of the cell base plate, the conveyor system further including a dynamic platform for movement of the target object received from the robotic arm.
Embodiment 16. The transfer unit cell of embodiment 10, wherein the sensor array includes any combination of two-dimensional cameras, three-dimensional cameras, scanners, and/or lighting arrays.
Embodiment 17. The transfer unit cell of embodiment 10, wherein the unit enclosure includes a frame surrounding the transfer unit cell, the frame including vertical posts extending perpendicularly from the cell base plate, and the unit enclosure further includes a fence attached to and between each of the vertical posts, such that the fence includes separable portions moveable to expose or cover portions of the transfer unit cell.
Embodiment 18. A method for rapid deployment and integration of a robotic system, comprising: locating and deploying a transfer unit cell having a cell base plate into a deployed configuration configured to receive and secure a pallet, containing a plurality of objects; securing the transfer unit cell in the deployed configuration such that the transfer unit cell is stabilized to support the transfer unit cell; sensing the plurality of objects contained on the pallet via a sensor system including a sensor array attached to a sensor mount of the transfer unit cell; controlling a robotic arm of the transfer unit cell to interact with a target object sensed from among the plurality of objects contained on the pallet; and transferring the target object from the pallet to a conveyor system of the transfer unit cell.
Embodiment 19. The method of embodiment 18 further comprising: loading the transfer unit cell having a cell base plate onto a transport vehicle in a transport configuration; and deploying the transfer unit cell into the deployed configuration.
Embodiment 20. The method of embodiment 18, further comprising: retracting the transfer unit cell into a retracted configuration wherein the transfer unit cell is retracted into itself; translating the transfer unit cell from the retracted configuration into a transport configuration; and moving the transfer unit cell via a transport vehicle.
Embodiment 21. The method of embodiment 18 further comprising controlling the robotic arm of the transfer unit cell, and communicating with systems external to the transfer unit cell.
The present application claims the benefit of U.S. Provisional Appl. No. 63/426,010, entitled “ROBOTIC SYSTEM AND METHOD OF OPERATION THEREOF” and filed Nov. 16, 2022, and U.S. Provisional Appl. No. 63/589,735, entitled “ROBOTIC SYSTEM AND METHOD OF OPERATION THEREOF” and filed Oct. 12, 2023, the entire contents of which are incorporated by reference herein.
Number | Date | Country
---|---|---
63/426,010 | Nov. 16, 2022 | US
63/589,735 | Oct. 12, 2023 | US