The disclosed embodiment generally relates to material handling systems, and more particularly, to transports for automated logistics systems.
Generally, automated logistics systems, such as automated storage and retrieval systems, employ autonomous vehicles that transport goods within the automated storage and retrieval system. These autonomous vehicles are guided throughout the automated storage and retrieval system by location beacons, capacitive or inductive proximity sensors, line following sensors, reflective beam sensors and other narrowly focused beam type sensors. These sensors may provide limited information for effecting navigation of the autonomous vehicles through the storage and retrieval system or provide limited information with respect to identification and discrimination of hazards that may be present throughout the automated storage and retrieval system.
The autonomous vehicles may also be guided throughout the automated storage and retrieval system by vision systems that employ stereo or binocular cameras. However, the binocular cameras of these binocular vision systems are placed, relative to each other, at distances that are unsuitable for warehousing logistics case storage and retrieval. In a logistics environment, the stereo or binocular cameras may be impaired or not always available due to, e.g., blockage or view obstruction (by, for example, payload carried by the autonomous vehicle, storage structure, etc.) and/or view obscurity of one camera in the pair of stereo cameras; or image processing may be degraded from processing of duplicate image data or images that are otherwise unsuitable (e.g., blurred, etc.) for guiding and localizing the autonomous vehicle within the automated storage and retrieval system.
The foregoing aspects and other features of the disclosed embodiment are explained in the following description, taken in connection with the accompanying drawings, wherein:
The aspects of the disclosed embodiment provide for a logistics autonomous guided vehicle 110 (referred to herein as an autonomous guided vehicle) having intelligent autonomy and collaborative operation. For example, the autonomous guided vehicle 110 includes a vision system 400 (see
As will be described in greater detail herein, the autonomous guided vehicle 110 includes a controller 122 that is programmed to access data from the vision system 400 to effect robust case/object detection and localization of cases/objects within a super-constrained system or operating environment with at least one pair of inexpensive two-dimensional rolling shutter, unsynchronized cameras (although in other aspects the camera pairs may include comparatively more expensive two-dimensional global shutter cameras that may or may not be synchronized with one another) and with the autonomous guided vehicle 110 moving relative to the cases/objects. The super-constrained system includes, but is not limited to, at least the following constraints: spacing between dynamically positioned adjacent cases is a densely packed spacing (also referred to herein as closely packed juxtaposition with respect to each other), the autonomous guided vehicle is configured to underpick (lift from beneath) cases, different sized cases are distributed within the storage array SA in a Gaussian distribution, cases may exhibit deformities, and cases may be placed on a support surface in an irregular manner, all of which impact the transfer of case units CU between the storage shelf 555 (or other case holding location) and the autonomous guided vehicle 110.
The cases CU stored in the storage and retrieval system have a Gaussian distribution (see
In addition, as can be seen in, e.g.,
It is also noted that the height HGT of the hats 444 is about 2 inches, where a space envelope ENV between the hats 444, into which a tine 210AT of the transfer arm 210A of the autonomous guided vehicle 110 is inserted underneath a case unit CU for picking/placing cases to and from the storage shelf 555, is about 1.7 inches in width and about 1.2 inches in height (see, e.g.,
Another constraint of the super-constrained system is the transfer time for an autonomous guided vehicle 110 to transfer a case unit(s) between a payload bed 210B of the autonomous guided vehicle 110 and a case holding location (e.g., storage space, buffer, transfer station, or other case holding location described herein). Here, the transfer time for case transfer is about 10 seconds or less. As such, the vision system 400 discriminates case location and pose (or holding station location and pose) in less than about two seconds or in less than about half a second.
The super-constrained system described above requires robustness of the vision system and may be considered to define the robustness of the vision system 400: the vision system 400 is configured to accommodate the above-noted constraints and may provide pose and localization information for cases CU and/or the autonomous guided vehicle 110 that effects an autonomous guided vehicle pick failure rate of about one pick failure for about every one million picks.
In accordance with the aspects of the disclosed embodiment, the autonomous guided vehicle 110 includes a controller (e.g., controller 122 or vision system controller 122VC that is communicably coupled to or otherwise forms a part of controller 122) that registers image data (e.g., video stream) from the cameras in one or more pairs of cameras (e.g., the pairs of cameras being formed by respective ones of the cameras 410A, 410B, 420A, 420B, 430A, 430B, 460A, 460B, 477A, 477B). The controller is configured to parse the registered (video) image data into individual registered (still) image frames to form a set of still (as opposed to the motion video from which the images are parsed) stereo vision image frames (e.g., see image frames 600A, 600B in
As will be described herein, the controller generates a dense depth map of objects within the fields of view of the cameras, in the pair of cameras, from the stereo vision frames so as to discriminate location and pose of imaged objects from the dense depth map. The controller also generates binocular keypoint data for the stereo vision frames, the keypoint data being separate and distinct from the dense depth map, where the keypoint data effects (e.g., binocular, three-dimensional) discrimination of location and pose of the objects within the fields of view of the cameras. It is noted that while the term “keypoint” is used herein, the keypoints described herein are also referred to in the art as “feature point(s),” “invariant feature(s),” “invariant point(s),” or a “characteristic” (such as a corner or facet joint or object surface). The controller combines the dense depth map with the keypoint data, with a weighted emphasis on the keypoint data, to determine or otherwise identify the pose and location of the imaged objects (e.g., in the logistics space and/or relative to the autonomous guided vehicle 110) with an accuracy that is greater than a pose and location determination accuracy of the dense depth map alone and greater than a pose and location determination accuracy of the keypoint data alone.
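The weighted combination described above is not specified numerically in this description; the following is a minimal Python sketch of one way a pose (plane) estimate derived from the dense depth map and one derived from the matched keypoints might be blended with a weighted emphasis on the keypoint-derived estimate. The weighting value and the plane parameterization are illustrative assumptions, not values taken from the source.

```python
import numpy as np

def fuse_plane_estimates(plane_from_depth_map, plane_from_keypoints, w_keypoints=0.7):
    """Blend two independent plane estimates of an imaged object face.

    Each plane is given as (n, d), where n is a unit normal and d is the offset
    in the relation n . p + d = 0. The keypoint-derived estimate is weighted
    more heavily (w_keypoints is an assumed value, not from the source).
    """
    (n_map, d_map), (n_kp, d_kp) = plane_from_depth_map, plane_from_keypoints
    w = float(np.clip(w_keypoints, 0.0, 1.0))
    n = (1.0 - w) * np.asarray(n_map, dtype=float) + w * np.asarray(n_kp, dtype=float)
    n /= np.linalg.norm(n)                 # re-normalize the blended normal
    d = (1.0 - w) * d_map + w * d_kp       # blend the plane offsets
    return n, d
```

In this form, choosing w_keypoints greater than 0.5 reflects the weighted emphasis on the keypoint data described above.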
In accordance with the aspects of the disclosed embodiment, the automated storage and retrieval system 100 in
The automated storage and retrieval system 100 may be generally described as a storage and retrieval engine 190 coupled to a palletizer 162. In greater detail now, and with reference still to
The picking aisles 130A and transfer decks 130B also allow the autonomous guided vehicles 110 to place case units CU into picking stock and to retrieve ordered case units CU (and define the different positions where the bot performs autonomous tasks, though any number of locations in the storage structure (e.g., decks, aisles, storage racks, etc.) can be one or more of the different positions). In alternate aspects, each level may also include respective transfer stations 140 that provide for an indirect case transfer between the autonomous guided vehicles 110 and the lift modules 150A, 150B. The autonomous guided vehicles 110 may be configured to place case units, such as the above described retail merchandise, into picking stock in the one or more storage structure levels 130L of the storage structure 130 and then selectively retrieve ordered case units for shipping the ordered case units to, for example, a store or other suitable location. The in-feed transfer stations 170 and out-feed transfer stations 160 may operate together with their respective lift module(s) 150A, 150B for bi-directionally transferring case units CU to and from one or more storage structure levels 130L of the storage structure 130. It is noted that while the lift modules 150A, 150B may be described as being dedicated inbound lift modules 150A and outbound lift modules 150B, in alternate aspects each of the lift modules 150A, 150B may be used for both inbound and outbound transfer of case units from the storage and retrieval system 100.
As may be realized, the storage and retrieval system 100 may include multiple in-feed and out-feed lift modules 150A, 150B that are accessible (e.g., indirectly through transfer stations 140 or through transfer of cases directly between the lift module 150A, 150B and the autonomous guided vehicle 110) by, for example, autonomous guided vehicles 110 of the storage and retrieval system 100 so that one or more case unit(s), uncontained (e.g., case unit(s) are not held in trays), or contained (within a tray or tote) can be transferred from a lift module 150A, 150B to each storage space on a respective level and from each storage space to any one of the lift modules 150A, 150B on a respective level. The autonomous guided vehicles 110 may be configured to transfer the cases CU (also referred to herein as case units) between the storage spaces 130S (e.g., located in the picking aisles 130A or other suitable storage space/case unit buffer disposed along the transfer deck 130B) and the lift modules 150A, 150B. Generally, the lift modules 150A, 150B include at least one movable payload support that may move the case unit(s) between the in-feed and out-feed transfer stations 160, 170 and the respective level of the storage space where the case unit(s) is stored and retrieved. The lift module(s) may have any suitable configuration, such as for example reciprocating lift, or any other suitable configuration. The lift module(s) 150A, 150B include any suitable controller (such as control server 120 or other suitable controller coupled to control server 120, warehouse management system 2500, and/or palletizer controller 164, 164′) and may form a sequencer or sorter in a manner similar to that described in U.S. patent application Ser. No. 16/444,592 filed on Jun. 18, 2019 and titled “Vertical Sequencer for Product Order Fulfillment” (the disclosure of which is incorporated herein by reference in its entirety).
The automated storage and retrieval system 100 may include a control system, comprising for example one or more control servers 120 that are communicably connected to the in-feed and out-feed conveyors and transfer stations 170, 160, the lift modules 150A, 150B, and the autonomous guided vehicles 110 via a suitable communication and control network 180. The communication and control network 180 may have any suitable architecture which, for example, may incorporate various programmable logic controllers (PLC) such as for commanding the operations of the in-feed and out-feed conveyors and transfer stations 170, 160, the lift modules 150A, 150B, and other suitable system automation. The control server 120 may include high level programming that effects a case management system (CMS) managing the case flow system. The network 180 may further include suitable communication for effecting a bi-directional interface with the autonomous guided vehicles 110. For example, the autonomous guided vehicles 110 may include an on-board processor/controller 122. The network 180 may include a suitable bi-directional communication suite enabling the autonomous guided vehicle controller 122 to request or receive commands from the control server 120 for effecting desired transport (e.g. placing into storage locations or retrieving from storage locations) of case units and to send desired autonomous guided vehicle 110 information and data including autonomous guided vehicle 110 ephemeris, status and other desired data, to the control server 120. As seen in
Referring now to
The frame 200 includes one or more idler wheels or casters 250 disposed adjacent the front end 200E1. Suitable examples of casters can be found in U.S. patent application Ser. No. 17/664,948 titled "Autonomous Transport Vehicle with Synergistic Vehicle Dynamic Response" (having attorney docket number 1127P015753-US (PAR)) filed on May 25, 2022 and U.S. patent application Ser. No. 17/664,838 titled "Autonomous Transport Vehicle with Steering" (having attorney docket number 1127P015753-US (PAR)) filed on May 26, 2021, the disclosures of which are incorporated herein by reference in their entireties. The frame 200 also includes one or more drive wheels 260 disposed adjacent the back end 200E2. In other aspects, the position of the casters 250 and drive wheels 260 may be reversed (e.g., the drive wheels 260 are disposed at the front end 200E1 and the casters 250 are disposed at the back end 200E2). It is noted that in some aspects, the autonomous guided vehicle 110 is configured to travel with the front end 200E1 leading the direction of travel or with the back end 200E2 leading the direction of travel. In one aspect, casters 250A, 250B (which are substantially similar to caster 250 described herein) are located at respective front corners of the frame 200 at the front end 200E1 and drive wheels 260A, 260B (which are substantially similar to drive wheel 260 described herein) are located at respective back corners of the frame 200 at the back end 200E2 (e.g., a support wheel is located at each of the four corners of the frame 200) so that the autonomous guided vehicle 110 stably traverses the transfer deck(s) 130B and picking aisles 130A of the storage structure 130.
The autonomous guided vehicle 110 includes a drive section 261D, connected to the frame 200, with drive wheels 260 supporting the autonomous guided vehicle 110 on a traverse/rolling surface 284, where the drive wheels 260 effect vehicle traverse on the traverse surface 284 moving the autonomous guided vehicle 110 over the traverse surface 284 in a facility (e.g., such as a warehouse, store, etc.). The drive section 261D has at least a pair of traction drive wheels 260 (also referred to as drive wheels 260—see drive wheels 260A, 260B) astride the drive section 261D. The drive wheels 260 have a fully independent suspension 280 coupling each drive wheel 260A, 260B of the at least a pair of drive wheels 260 to the frame 200 and configured to maintain a substantially steady state traction contact patch between the at least one drive wheel 260A, 260B and rolling/travel surface 284 (also referred to as autonomous vehicle travel surface 284) over rolling surface transients (e.g., bumps, surface transitions, etc.). Suitable examples of the fully independent suspension 280 can be found in U.S. patent application Ser. No. 17/664,948 titled "Autonomous Transport Vehicle with Synergistic Vehicle Dynamic Response" (having attorney docket number 1127P015753-US (PAR)) filed on May 25, 2022, the disclosure of which was previously incorporated herein by reference in its entirety.
The autonomous guided vehicle 110 includes a physical characteristic sensor system 270 (also referred to as an autonomous navigation operation sensor system) connected to the frame 200. The physical characteristic sensor system 270 has electro-magnetic sensors. Each of the electro-magnetic sensors is responsive to interaction or interface of a sensor emitted or generated electro-magnetic beam or field with a physical characteristic (e.g., of the storage structure or a transient object such as a case unit CU, debris, etc.), where the electro-magnetic beam or field is disturbed by interaction or interface with the physical characteristic. The disturbance in the electro-magnetic beam is detected by and effects sensing by the electro-magnetic sensor of the physical characteristic, wherein the physical characteristic sensor system 270 is configured to generate sensor data embodying at least one of a vehicle navigation pose or location (relative to the storage and retrieval system or facility in which the autonomous guided vehicle 110 operates) information and payload pose or location (relative to a storage location 130S or the payload bed 210B) information.
The physical characteristic sensor system 270 includes, for exemplary purposes only, one or more of laser sensor(s) 271, ultrasonic sensor(s) 272, bar code scanner(s) 273, position sensor(s) 274, line sensor(s) 275, case sensors 276 (e.g., for sensing case units within the payload bed 210B onboard the vehicle 110 or on a storage shelf off-board the vehicle 110), arm proximity sensor(s) 277, vehicle proximity sensor(s) 278 or any other suitable sensors for sensing a position of the vehicle 110 or a payload (e.g., case unit CU). In some aspects, supplemental navigation sensor system 288 may form a portion of the physical characteristic sensor system 270. Suitable examples of sensors that may be included in the physical characteristic sensor system 270 are described in U.S. Pat. No. 8,425,173 titled "Autonomous Transport for Storage and Retrieval Systems" issued on Apr. 23, 2013, U.S. Pat. No. 9,008,884 titled "Bot Position Sensing" issued on Apr. 14, 2015, and U.S. Pat. No. 9,946,265 titled "Bot Having High Speed Stability" issued on Apr. 17, 2018, the disclosures of which are incorporated herein by reference in their entireties.
The sensors of the physical characteristic sensor system 270 may be configured to provide the autonomous guided vehicle 110 with, for example, awareness of its environment and external objects, as well as monitoring and control of internal subsystems. For example, the sensors may provide guidance information, payload information or any other suitable information for use in operation of the autonomous guided vehicle 110.
The bar code scanner(s) 273 may be mounted on the autonomous guided vehicle 110 in any suitable location. The bar code scanner(s) 273 may be configured to provide an absolute location of the autonomous guided vehicle 110 within the storage structure 130. The bar code scanner(s) 273 may be configured to verify aisle references and locations on the transfer decks by, for example, reading bar codes located on, for example, the transfer decks, picking aisles and transfer station floors to verify a location of the autonomous guided vehicle 110. The bar code scanner(s) 273 may also be configured to read bar codes located on items stored in the shelves 555.
The position sensors 274 may be mounted to the autonomous guided vehicle 110 at any suitable location. The position sensors 274 may be configured to detect reference datum features (or count the slats 520L of the storage shelves 555) (e.g. see
The line sensors 275 may be any suitable sensors mounted to the autonomous guided vehicle 110 in any suitable location, such as for exemplary purposes only, on the frame 200 disposed adjacent the drive (rear) and driven (front) ends 200E2, 200E1 of the autonomous guided vehicle 110. For exemplary purposes only, the line sensors 275 may be diffuse infrared sensors. The line sensors 275 may be configured to detect guidance lines 199 (see
The case sensors 276 may include case overhang sensors and/or other suitable sensors configured to detect the location/pose of a case unit CU within the payload bed 210B. The case sensors 276 may be any suitable sensors that are positioned on the vehicle so that the sensor(s) field of view(s) span the payload bed 210B adjacent the top surface of the support tines 210AT (see
The arm proximity sensors 277 may be mounted to the autonomous guided vehicle 110 in any suitable location, such as for example, on the transfer arm 210A. The arm proximity sensors 277 may be configured to sense objects around the transfer arm 210A and/or support tines 210AT of the transfer arm 210A as the transfer arm 210A is raised/lowered and/or as the support tines 210AT are extended/retracted.
The laser sensors 271 and ultrasonic sensors 272 may be configured to allow the autonomous guided vehicle 110 to locate itself relative to each case unit forming the load carried by the autonomous guided vehicle 110 before the case units are picked from, for example, the storage shelves 555 and/or lift 150 (or any other location suitable for retrieving payload). The laser sensors 271 and ultrasonic sensors 272 may also allow the vehicle to locate itself relative to empty storage locations 130S for placing case units in those empty storage locations 130S. The laser sensors 271 and ultrasonic sensors 272 may also allow the autonomous guided vehicle 110 to confirm that a storage space (or other load depositing location) is empty before the payload carried by the autonomous guided vehicle 110 is deposited in, for example, the storage space 130S. In one example, the laser sensor 271 may be mounted to the autonomous guided vehicle 110 at a suitable location for detecting edges of items to be transferred to (or from) the autonomous guided vehicle 110. The laser sensor 271 may work in conjunction with, for example, retro-reflective tape (or other suitable reflective surface, coating or material) located at, for example, the back of the shelves 555 to enable the sensor to "see" all the way to the back of the storage shelves 555. The reflective tape located at the back of the storage shelves allows the laser sensor 271 to be substantially unaffected by the color, reflectiveness, roundness, or other suitable characteristics of the items located on the shelves 555. The ultrasonic sensor 272 may be configured to measure a distance from the autonomous guided vehicle 110 to the first item in a predetermined storage area of the shelves 555 to allow the autonomous guided vehicle 110 to determine the picking depth (e.g. the distance the support tines 210AT travel into the shelves 555 for picking the item(s) off of the shelves 555). One or more of the laser sensors 271 and ultrasonic sensors 272 may allow for detection of case orientation (e.g. skewing of cases within the storage shelves 555) by, for example, measuring the distance between the autonomous guided vehicle 110 and a front surface of the case units to be picked as the autonomous guided vehicle 110 comes to a stop adjacent the case units to be picked. The case sensors may allow verification of placement of a case unit on, for example, a storage shelf 555 by, for example, scanning the case unit after it is placed on the shelf.
Vehicle proximity sensors 278 may also be disposed on the frame 200 for determining the location of the autonomous guided vehicle 110 in the picking aisle 130A and/or relative to lifts 150. The vehicle proximity sensors 278 are located on the autonomous guided vehicle 110 so as to sense targets or position determining features disposed on rails 130AR on which the vehicle 110 travels through the picking aisles 130A (and/or on walls of transfer areas 195 and/or lift 150 access location). The targets on the rails 130AR are disposed at known locations so as to form incremental or absolute encoders along the rails 130AR. The vehicle proximity sensors 278 sense the targets and provide sensor data to the controller 122 so that the controller 122 determines the position of the autonomous guided vehicle 110 along the picking aisle 130A based on the sensed targets.
The sensors of the physical characteristic sensing system 270 are communicably coupled to the controller 122 of the autonomous guided vehicle 110. As described herein, the controller 122 is operably connected to the drive section 261D and/or the transfer arm 210A. The controller 122 is configured to determine from the information of the physical characteristic sensor system 270 vehicle pose and location (e.g., in up to six degrees of freedom, X, Y, Z, Rx, Ry, Rz) effecting independent guidance of the autonomous guided vehicle 110 traversing the storage and retrieval facility/system 100. The controller 122 is also configured to determine from the information of the physical characteristic sensor system 270 payload (e.g., case unit CU) pose and location (onboard or off-board the autonomous guided vehicle 110) effecting independent underpick (e.g., lifting of the case unit CU from underneath the case unit CU) and place of the payload CU to and from a storage location 130S and independent underpick and place of the payload CU in the payload bed 210B.
Referring to
Referring to
The forward navigation cameras 420A, 420B may be paired to form a stereo camera system and the rearward navigation cameras 430A, 430B may be paired to form another stereo camera system. Referring to
The forward navigation cameras 420A, 420B and the rear navigation cameras 430A, 430B may also provide for convoys of vehicles 110 along the picking aisles 130A or transfer deck 130B, where one vehicle 110 follows another vehicle 110A at predetermined fixed distances. As an example,
As another example, the controller 122 may obtain images from one or more of the three-dimensional imaging system 440A, 440B, the case edge detection sensors 450A, 450B, and the case unit monitoring cameras 410A, 410B (the case unit monitoring cameras 410A, 410B forming stereo vision or binocular image cameras) to effect case handling by the vehicle 110. Still referring
Images from the out of plane localization cameras 477A, 477B (which may also form respective stereo image cameras) may be obtained by the controller 122 to effect navigation of the autonomous guided vehicle 110 and/or to provide data (e.g., image data) supplemental to localization/navigation data from the one or more of the forward and rearward navigation cameras 420A, 420B, 430A, 430B. Images from the one or more traffic monitoring camera 460A, 460B may be obtained by the controller 122 to effect travel transitions of the autonomous guided vehicle 110 from a picking aisle 130A to the transfer deck 130B (e.g., entry to the transfer deck 130B and merging of the autonomous guided vehicle 110 with other autonomous guided vehicles travelling along the transfer deck 130B).
The one or more out of plane (e.g., upward or downward facing) localization cameras 477A, 477B (which may also form respective stereo image cameras) are disposed on the frame 200 of the autonomous transport vehicle 110 so as to sense/detect location fiducials (e.g., location marks (such as barcodes, etc.), lines 199 (see
The one or more traffic monitoring cameras 460A, 460B (which may also form respective stereo image cameras) are disposed on the frame 200 so that a respective field of view 460AF, 460BF faces laterally in lateral direction LAT1. While the one or more traffic monitoring cameras 460A, 460B are illustrated as being adjacent a transfer opening 1199 of the payload bed 210B (e.g., on the pick side from which the arm 210A of the autonomous transport vehicle 110 extends), in other aspects there may be traffic monitoring cameras disposed on the non-pick side of the frame 200 so that a field of view of the traffic monitoring cameras faces laterally in direction LAT2. The traffic monitoring cameras 460A, 460B provide for an autonomous merging of autonomous transport vehicles 110 exiting, for example, a picking aisle 130A or lift transfer area 195 onto the transfer deck 130B (see
The case unit monitoring cameras 410A, 410B are any suitable two-dimensional rolling shutter high resolution or low resolution video cameras (where video images that include more than about 480 vertical scan lines and are captured at more than about 50 frames/second are considered high resolution) such as those described herein. The case unit monitoring cameras 410A, 410B are arranged relative to each other to form a stereo vision camera system that is configured to monitor case unit CU ingress to and egress from the payload bed 210B. The case unit monitoring cameras 410A, 410B are coupled to the frame 200 in any suitable manner and are focused at least on the payload bed 210B. As can be seen in
The robustness of the vision system 400 accounts for determination or otherwise identification of object location and pose given the above-noted disparity between the stereo image cameras 410A, 410B. In one or more aspects, the case unit monitoring (stereo image) cameras 410A, 410B are coupled to the transfer arm 210A so as to move in direction LAT with the transfer arm 210A (such as when picking and placing case units CU) and are positioned so as to be focused on the payload bed 210B and support tines 210AT of the transfer arm 210A. In one or more aspects, closely spaced (e.g., less than about 255 pixel disparity) off-the-shelf camera pairs may be employed.
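As a worked example of the roughly 255-pixel disparity budget noted above, the peak disparity a camera pair produces can be estimated from the standard pinhole-stereo relation d = f·B/Z. The focal length, baseline, and working-distance values in the sketch below are hypothetical and are not taken from the source.

```python
def peak_disparity_px(focal_px: float, baseline_m: float, min_range_m: float) -> float:
    """Peak disparity (in pixels) for the closest object of interest,
    from the pinhole-stereo relation d = f * B / Z."""
    return focal_px * baseline_m / min_range_m

# Hypothetical values (for illustration only):
focal_px = 900.0      # focal length expressed in pixels
baseline_m = 0.10     # spacing between the paired cameras, meters
min_range_m = 0.45    # closest case face the pair must resolve, meters

d = peak_disparity_px(focal_px, baseline_m, min_range_m)
print(f"peak disparity ~ {d:.0f} px; within a 255 px budget: {d < 255}")
```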
Referring also to
The case unit monitoring cameras 410A, 410B are also configured to effect, with the vision system controller 122VC, a determination of a front face case center point FFCP (e.g., in the X, Y, and Z directions with respect to, e.g., the autonomous guided vehicle 110 reference frame BREF (see
The determination of the front face case center point FFCP also effects a comparison of the "real world" environment in which the autonomous guided vehicle 110 is operating with a virtual model 400VM of that operating environment so that the controller 122 of the autonomous guided vehicle 110 compares what it "sees" with the vision system 400 substantially directly with what the autonomous guided vehicle 110 expects to "see" based on the simulation of the storage and retrieval system structure in a manner similar to that described in U.S. patent application Ser. No. 17/804,026 filed on May 25, 2022 and titled "Autonomous Transport Vehicle with Vision System" (having attorney docket number 1127P016037-US (PAR)), the disclosure of which is incorporated herein by reference in its entirety. Moreover, in one aspect, illustrated in
As an example of the above-noted enhanced resolution, if one case unit disposed on a shelf that is imaged by the vision system 400 is turned compared to juxtaposed case units on the same shelf (also imaged by the vision system) and to the virtual model 400VM the vision system 400 may determine the one case is skewed (see
The case unit monitoring cameras 410A, 410B may also provide feedback with respect to the positions of the case unit justification features and case transfer features of the autonomous guided vehicle 110 prior to and/or after picking/placing a case unit from, for example, a storage shelf or other holding locations (e.g., for verifying the locations/positions of the justification features and the case transfer features so as to effect pick/place of the case unit with the transfer arm 210A without transfer arm obstruction). For example, as noted above, the case unit monitoring cameras 410A, 410B have a field of view that encompasses the payload bed 210B. The vision system controller 122VC is configured to receive sensor data from the case unit monitoring cameras 410A, 410B and determine, with any suitable image recognition algorithms stored in a memory of or accessible by the vision system controller 122VC, positions of the pushers 470, justification blades 471, pullers 472, tines 210AT, and/or any other features of the payload bed 210B that engage a case unit held on the payload bed 210B. The positions of the pushers 470, justification blades 471, pullers 472, tines 210AT, and/or any other features of the payload bed 210B may be employed by the controller 122 to verify a respective position of the pushers 470, justification blades 471, pullers 472, tines 210AT, and/or any other features of the payload bed 210B as determined by motor encoders or other respective position sensors; while in some aspects the positions determined by the vision system controller 122VC may be employed as a redundancy in the event of encoder/position sensor malfunction.
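A minimal sketch of the kind of cross-check described above, in which the vision-derived position of a pusher, justification blade, puller, or tine is compared with the position reported by its motor encoder; the tolerance value is an illustrative assumption, not a value from the source.

```python
def positions_agree(encoder_mm: float, vision_mm: float, tolerance_mm: float = 3.0) -> bool:
    """Return True when the encoder-reported and vision-estimated positions of a
    payload-bed feature (pusher, justification blade, puller, tine) agree within
    an assumed tolerance; a False result may flag an encoder or sensor fault."""
    return abs(encoder_mm - vision_mm) <= tolerance_mm
```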
The justification position of the case unit CU within the payload bed 210B may also be verified by the case unit monitoring cameras 410A, 410B. For example, referring also to
Referring to
As illustrated in
It is noted that data from the one or more three-dimensional imaging system 440A, 440B may be supplemental to the object determination and localization described herein with respect to the stereo pairs of cameras. For example, the three-dimensional imaging system 440A, 440B may be employed for pose and location verification that is supplemental to the pose and location determination made with the stereo pairs of cameras, such as during stereo image cameras calibration or an autonomous guided vehicle pick and place operation. The three-dimensional imaging system 440A, 440B may also provide a reference frame transformation so that object pose and location determined in the autonomous guided vehicle reference frame BREF can be transformed into a pose and location within the global reference frame GREF, and vice versa. In other aspects, the autonomous guided vehicle may be provided without the three-dimensional imaging system.
The vision system 400 may also effect operational control of the autonomous transport vehicle 110 in collaboration with an operator. The vision system 400 provides data (images) that is registered by the vision system controller 122VC, which either (a) determines information characteristics that are in turn provided to the controller 122, or (b) passes the information to the controller 122 without characterization (of objects against predetermined criteria) so that characterization is done by the controller 122. In either (a) or (b) it is the controller 122 that determines selection to switch to the collaborative state. After switching, the collaborative operation is effected by a user accessing the vision system 400 via the vision system controller 122VC and/or the controller 122 through a user interface UI. In its simplest form, however, the vision system 400 may be considered as providing a collaborative mode of operation of the autonomous transport vehicle 110. Here, the vision system 400 supplements the autonomous navigation/operation sensor system 270 to effect collaborative discriminating and mitigation of objects/hazards 299 (see
In one aspect, the operator may select or switch control of the autonomous guided vehicle (e.g., through the user interface UI) from automatic operation to collaborative operation (e.g., the operator remotely controls operation of the autonomous transport vehicle 110 through the user interface UI). For example, the user interface UI may include a capacitive touch pad/screen, joystick, haptic screen, or other input device that conveys kinematic directional commands (e.g., turn, acceleration, deceleration, etc.) from the user interface UI to the autonomous transport vehicle 110 to effect operator control inputs in the collaborative operational mode of the autonomous transport vehicle 110. For example, the vision system 400 provides a “dashboard camera” (or dash-camera) that transmits video and/or still images from the autonomous transport vehicle 110 to an operator (through user interface UI) to allow remote operation or monitoring of the area relative to the autonomous transport vehicle 110 in a manner similar to that described in U.S. patent application Ser. No. 17/804,026 filed on May 25, 2022 and titled “Autonomous Transport Vehicle with Vision System” (having attorney docket number 1127P016037-US (PAR)), the disclosure of which was previously incorporated herein by reference in its entirety.
Referring to
Referring also to
Referring to
The dense depth map 620 is "dense" (e.g., has a depth of resolution for every, or near every, pixel in an image) compared to a sparse depth map (e.g., stereo matched keypoints) and has a definition commensurate with discrimination of objects, within the field of view of the cameras, that effects resolution of pick and place actions of the autonomous guided vehicle 110. Here, the density of the dense depth map 620 may depend on (or be defined by) the processing power and processing time available for object discrimination. As an example, and as noted above, transfer of objects (such as case units CU) to and from the payload bed 210B of the autonomous guided vehicle 110, from bot traverse stopping to bot traverse starting, is performed in about 10 seconds or less. For transfer of the objects, the transfer arm 210A motion is initiated prior to stopping traverse of the autonomous guided vehicle 110 so that the autonomous guided vehicle is positioned adjacent the pick/place location where the object (e.g., the holding station location and pose, the object/case unit location and pose, etc.) is to be transferred and the transfer arm 210A is extended substantially coincident with the autonomous guided vehicle stopping. Here, at least some of the images captured by the vision system 400 (e.g., for discriminating an object to be picked, a case holding location, or other object of the storage and retrieval system 100) are captured with the autonomous guided vehicle traversing a traverse surface (i.e., with the autonomous guided vehicle 110 in motion along a transfer deck 130B or picking aisle 130A and moving past the objects). The discrimination of the object occurs substantially simultaneously with stopping (e.g., occurs at least partly with the autonomous guided vehicle 110 in motion and decelerating from a traverse speed to a stop) of the autonomous guided vehicle such that generation of the dense depth map is resolved (e.g., in less than about two seconds, or less than about half a second), for discrimination of the object, substantially coincident with the autonomous guided vehicle stopping traverse and the transfer arm 210A motion initiation. The resolution of the dense depth map 620 renders (informs) the vision system controller 122VC (and controller 122) of anomalies of the object, such as from the object face (see the open case flap and tape on the case illustrated in
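The source does not name a particular stereo-matching algorithm; one conventional way to produce a per-pixel ("dense") depth map from a rectified pair of still frames is semi-global block matching, sketched below with OpenCV. The matcher parameters, and the focal length and baseline supplied by the caller, are illustrative assumptions.

```python
import cv2
import numpy as np

def dense_depth_map(left_gray, right_gray, focal_px, baseline_m):
    """Compute a per-pixel depth map (in meters) from a rectified stereo pair.

    left_gray / right_gray: rectified 8-bit grayscale frames parsed from the
    two cameras' video streams; focal_px and baseline_m come from the stereo
    calibration (values here are placeholders).
    """
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=16 * 8,   # multiple of 16; bounds the disparity search
        blockSize=5,
        P1=8 * 5 * 5,            # smoothness penalties (illustrative)
        P2=32 * 5 * 5,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
    return depth
```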
As noted above, and referring to
As can be seen in
Referring to
Referring to
The final (best fit) estimate of the points in the case face (Block 1030) may be verified (e.g., in a weighted verification that is weighted towards the matched stereo keypoints 920F, see also keypoints KP1-KP12, which are exemplary of the matched stereo keypoints 920F). For example, the object extractor 1000 is configured to identify location and pose (e.g., with respect to a predetermined reference frame such as the global reference frame GREF and/or the autonomous guided vehicle reference frame BREF) of each imaged object based on superpose of the matching stereo (sets of) keypoints (and the depth resolution thereon) and the depth map 620. Here, the matched or matching stereo keypoints KP1-KP12 are superposed with the final estimate of the points in the case face (Block 1030) (e.g., the point cloud forming the final estimate of the points in the case face is projected into the plane formed by the matching stereo keypoints KP1-KP12) and resolved for comparison with the points in the case face so as to determine whether the final estimate of the points in the case face is within a predetermined threshold distance from the matching stereo keypoints KP1-KP12 (and the case face formed thereby). Where the final estimate of the points in the face is within the predetermined threshold distance, the final estimate of the points in the face (that defines the determined case face CF) is verified and forms a planar estimate of the matching stereo keypoints (
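The plane-fitting and keypoint-verification steps above are described functionally rather than algorithmically; the sketch below shows one plausible realization, using a RANSAC plane fit over the depth-map points of a candidate case face followed by a distance check against the matched stereo keypoints. The RANSAC parameters and threshold distances are assumptions, not values from the source.

```python
import numpy as np

def fit_case_face_plane(points, iters=200, tol=0.005, seed=0):
    """RANSAC fit of a plane (unit normal n, offset d, with n . p + d = 0) to an
    (N, 3) cloud of depth-map points belonging to a candidate case face."""
    rng = np.random.default_rng(seed)
    best = (None, None, 0)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -np.dot(n, p0)
        inliers = int(np.sum(np.abs(points @ n + d) < tol))
        if inliers > best[2]:
            best = (n, d, inliers)
    return best[0], best[1]

def verify_with_keypoints(plane_n, plane_d, keypoints_3d, threshold=0.01):
    """Accept the fitted case-face plane only if every matched stereo keypoint
    lies within an assumed threshold distance of it."""
    distances = np.abs(keypoints_3d @ plane_n + plane_d)
    return bool(np.all(distances < threshold))
```

The centroid of the verified inlier points could then serve as one candidate for the front face case center point FFCP described herein.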
Referring again to
The vision system controller 122VC may also determine, from the planar estimation of the matching stereo keypoints (Block 1040), the front face case center point FFCP and other dimensions/features (e.g., space envelope ENV between the hats 444, case support plane, distance DIST between cases, case skewing, case deformities/anomalies, etc.), as described herein, that effect case transfer between the storage shelf 555 and the autonomous guided vehicle 110. For example, the vision system controller 122VC is configured to characterize a planar surface PS of the front face (of the extracted object), and orientation of the planar surface PS relative to a predetermined reference frame (such as the autonomous guided vehicle reference frame BREF and/or global reference frame GREF). Again, referring to case CU2 as an example, the vision system controller 122VC characterizes, from the planar estimation of the matching stereo keypoints (Block 1040), the planar surface PS of the case face CF of case CU2 and determines the orientation (e.g., skew or yaw YW—see also
As described above, the determination of the planar estimation of the matching stereo keypoints (Block 1040) includes points that are disposed a predetermined distance in front of the plane/surface formed by the matched stereo keypoints KP1-KP12. Here, the vision system controller 122VC is configured to resolve, from the planar estimation of the matching stereo keypoints, presence and characteristics of an anomaly (e.g., such as tape on the case face CF (see
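A short sketch, with assumed threshold values, of how points lying a predetermined distance in front of the fitted case-face plane (e.g., tape or an open flap) might be isolated; the plane normal is assumed to point toward the cameras so that a positive signed distance means "in front of" the face.

```python
import numpy as np

def face_anomaly_points(points, plane_n, plane_d,
                        min_protrusion=0.005, max_protrusion=0.05):
    """Return the subset of case-face points that protrude in front of the fitted
    plane by more than min_protrusion but less than max_protrusion (meters).
    Thresholds are illustrative assumptions, not values from the source."""
    signed = points @ plane_n + plane_d          # signed distance to the plane
    mask = (signed > min_protrusion) & (signed < max_protrusion)
    return points[mask]
```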
The vision system controller 122VC is configured to generate at least one of an execute command and a stop command of an actuator (e.g., transfer arm 210A actuator, drive wheel 260 actuator, or any other suitable actuator of the autonomous guided vehicle 110) of the autonomous guided vehicle 110 based on the identified location and pose of a case CU to be picked. For example, where the case pose and location identify that the case CU to be picked is hanging off a shelf 555, such that the case cannot be picked substantially without interference or obstruction (e.g., substantially without error), the vision system controller 122VC may generate a stop command that prevents extension of the transfer arm 210A. As another example, where the case pose and location identify that the case CU to be picked is skewed and not aligned with the transfer arm 210A, the vision system controller 122VC may generate an execute command that effects traverse of the autonomous guided vehicle along a traverse surface to position the transfer arm 210A relative to the case CU to be picked so that the skewed case is aligned with the transfer arm 210A and can be picked without error.
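The execute/stop gating described above might be reduced to a check of the resolved pose against pick-feasibility limits, as in the sketch below; the overhang and skew thresholds, and the command names, are hypothetical rather than taken from the source.

```python
def arm_command(overhang_m: float, yaw_deg: float,
                max_overhang_m: float = 0.02, max_yaw_deg: float = 5.0) -> str:
    """Map the resolved case pose to a transfer-arm command.

    overhang_m: how far the case face hangs past the shelf edge; yaw_deg: skew
    of the case face relative to the transfer arm. Threshold values are
    illustrative assumptions.
    """
    if overhang_m > max_overhang_m:
        return "stop"               # case cannot be picked without obstruction; do not extend the arm
    if abs(yaw_deg) > max_yaw_deg:
        return "execute-traverse"   # reposition the vehicle so the skewed case aligns with the arm
    return "execute-pick"           # pose is within limits; extend the transfer arm
```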
It is noted that, converse or corollary to the robust resolution of the case CU pose and location to either or both of the autonomous guided vehicle reference frame BREF and the global reference frame GREF, the reference frame BREF of the autonomous guided vehicle 110 (e.g., its pose and location) can likewise be resolved to the global reference frame GREF with the three-dimensional imaging system 440A, 440B (see
Referring to
Each of the staging areas 130B1-130Bn includes a respective calibration station 1110 that is disposed so that autonomous guided vehicles 110 may repeatedly calibrate the stereo pairs of cameras 410A and 410B, 420A and 420B, 430A and 430B, 460A and 460B, 477A and 477B. The calibration of the stereo pairs of cameras may be automatic upon autonomous guided vehicle registration (via the autonomous guided vehicle ingress or egress location 1190 in a manner substantially similar to that described in U.S. Pat. No. 9,656,803, previously incorporated by reference) into the storage structure 130. In other aspects, the calibration of the stereo pairs of cameras may be manual (such as where the calibration station is located on the lift 1192) and be performed prior to insertion of the autonomous guided vehicle 110 into the storage structure 130 in a manner similar to that described herein with respect to calibration station 1110.
To calibrate the stereo pairs of cameras, the autonomous guided vehicle is positioned (either manually or automatically) at a predetermined location of the calibration station 1110 (
One or more surfaces 1200 of each calibration station 1110 includes any suitable number of known objects 1210-1218. The one or more surfaces 1200 may be any surface that is viewable by the stereo pairs of cameras including, but not limited to, a side wall 1111 of the calibration station 1110, a ceiling 1112 of the calibration station 1110, a floor/traverse surface 1115 of the calibration station 1110, and a barrier 1120 of the calibration station 1110. The objects 1210-1218 (also referred to as vision datums or calibration objects) included with a respective surface 1200 may be raised structures, apertures, appliques (e.g., paint, stickers, etc.) that each have known physical characteristics such as shape, size, etc.
Calibration of case unit monitoring (stereo image) cameras 410A, 410B using the calibration station 1110 will be described for exemplary purposes and it should be understood that the other stereo image cameras may be calibrated in a substantially similar manner. With an autonomous guided vehicle 110 remaining persistently stationary at the predetermined location of the calibration station 1110 (at a location in which the objects 1210-1218 are within the fields of view of the cameras 410A, 410B) throughout the calibration process, each camera 410A, 410B of the stereo image cameras images the objects 1210-1218 (
Further, the binocular vision reference frame may be transformed or otherwise resolved to a predetermined reference frame (
Where the calibration of the stereo vision of the autonomous guided vehicle 110 is manually effected, the autonomous transport vehicle 110 is manually positioned at the calibration station. For example, the autonomous guided vehicle 110 is manually positioned on the lift 1192, which includes surface(s) 1111 (one of which is shown, while others may be disposed at ends of the lift platform or disposed above the lift platform in orientations similar to the surfaces of the calibration stations 1110, e.g., the lift platform is configured as a calibration station). The surface(s) include the known objects 1210-1218 and/or global datum target GDT such that calibration of the stereo vision occurs in a manner substantially similar to that described above.
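The calibration procedure is described in terms of imaging objects of known shape and size; one conventional way to recover each camera's intrinsics and the rotation/translation between the paired cameras from such views is OpenCV's calibrateCamera/stereoCalibrate pipeline, sketched below. The input formats and termination criteria are assumptions; the source does not prescribe a specific calibration algorithm.

```python
import cv2

def calibrate_stereo_pair(object_points, img_pts_left, img_pts_right, image_size):
    """Estimate intrinsics (K, distortion) for each camera of a stereo pair and
    the rotation R and translation T between them from views of calibration
    objects whose 3-D geometry is known.

    object_points: per-view float32 arrays of known 3-D datum coordinates at the
    calibration station; img_pts_left / img_pts_right: matching pixel detections
    in each camera; image_size: (width, height) in pixels.
    """
    # Per-camera intrinsic calibration from the known calibration objects.
    _, K1, D1, _, _ = cv2.calibrateCamera(object_points, img_pts_left, image_size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(object_points, img_pts_right, image_size, None, None)
    # Hold the intrinsics fixed and solve for the pose of one camera relative to the other.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
    _, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        object_points, img_pts_left, img_pts_right,
        K1, D1, K2, D2, image_size,
        criteria=criteria, flags=cv2.CALIB_FIX_INTRINSIC)
    # R and T, together with K1/K2, allow the frames to be rectified for stereo matching.
    return K1, D1, K2, D2, R, T
```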
Referring to
In accordance with one or more aspects of the disclosed embodiment, an autonomous guided vehicle is provided. The autonomous guided vehicle includes a frame with a payload hold; a drive section coupled to the frame with drive wheels supporting the autonomous guided vehicle on a traverse surface, the drive wheels effect vehicle traverse on the traverse surface moving the autonomous guided vehicle over the traverse surface in a facility; a payload handler coupled to the frame configured to transfer a payload, with a flat undeterministic seating surface seated in the payload hold, to and from the payload hold of the autonomous guided vehicle and a storage location, of the payload, in a storage array; a vision system mounted to the frame, having more than one camera disposed to generate binocular images of a field of a logistic space including rack structure shelving on which more than one objects are stored; and a controller, communicably connected to the vision system so as to register the binocular images, and configured to effect stereo matching, from the binocular images, resolving a dense depth map of imaged objects in the field, and the controller is configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; wherein the controller has an object extractor configured to determine location and pose of each imaged object from both the dense depth map resolved from the binocular images and the depth resolution from the stereo sets of keypoints.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are rolling shutter cameras.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera generate a video stream and the registered images are parsed from the video stream.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are unsynchronized with each other.
In accordance with one or more aspects of the disclosed embodiment, the binocular images are generated with the vehicle in motion past the objects.
In accordance with one or more aspects of the disclosed embodiment, the more than one objects on the rack structure are dynamically positioned in closely packed juxtaposition with respect to each other.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to determine a front face, of at least one extracted object, and dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to characterize a planar surface of the front face, and orientation of the planar surface relative to a predetermined reference frame.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to characterize a pick surface, of the extracted object based on characteristics of the planar surface, that interfaces the payload handler.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to resolve presence and characteristics of an anomaly to the planar surface.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to determine a logistic identity of the extracted object based on dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to generate at least one of an execute command and a stop command of a bot actuator based on the determined location and pose.
In accordance with one or more aspects of the disclosed embodiment, an autonomous guided vehicle is provided. The autonomous guided vehicle includes a frame with a payload hold; a drive section coupled to the frame with drive wheels supporting the autonomous guided vehicle on a traverse surface, the drive wheels effect vehicle traverse on the traverse surface moving the autonomous guided vehicle over the traverse surface in a facility; a payload handler coupled to the frame configured to transfer a payload, with a flat undeterministic seating surface seated in the payload hold, to and from the payload hold of the autonomous guided vehicle and a storage location, of the payload, in a storage array; a vision system mounted to the frame, having binocular imaging cameras generating binocular images of a field of a logistic space including rack structure shelving on which more than one objects are stored; and a controller, communicably connected to the vision system so as to register the binocular images, and configured to effect stereo matching, from the binocular images, resolving a dense depth map of imaged objects in the field, and the controller is configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set of keypoints, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; wherein the controller has an object extractor configured to identify location and pose of each imaged object based on superpose of stereo sets of keypoints depth resolution and depth map.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are rolling shutter cameras.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera generate a video stream and the registered images are parsed from the video stream.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are unsynchronized with each other.
In accordance with one or more aspects of the disclosed embodiment, the binocular images are generated with the vehicle in motion past the objects.
In accordance with one or more aspects of the disclosed embodiment, the more than one objects on the rack structure are dynamically positioned in closely packed juxtaposition with respect to each other.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to determine a front face, of at least one extracted object, and dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to characterize a planar surface of the front face, and orientation of the planar surface relative to a predetermined reference frame.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to characterize a pick surface, of the extracted object based on characteristics of the planar surface, that interfaces the payload handler.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to resolve presence and characteristics of an anomaly to the planar surface.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to determine a logistic identity of the extracted object based on dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the controller is configured to generate at least one of an execute command and a stop command of a bot actuator based on the identified location and pose.
In accordance with one or more aspects of the disclosed embodiment, a method is provided. The method includes providing an autonomous guided vehicle including: a frame with a payload hold, a drive section coupled to the frame with drive wheels supporting the autonomous guided vehicle on a traverse surface, the drive wheels effect vehicle traverse on the traverse surface moving the autonomous guided vehicle over the traverse surface in a facility, and a payload handler coupled to the frame configured to transfer a payload, with a flat undeterministic seating surface seated in the payload hold, to and from the payload hold of the autonomous guided vehicle and a storage location, of the payload, in a storage array; generating, with a vision system mounted to the frame and having more than one camera, binocular images of a field of a logistic space including rack structure shelving on which more than one objects are stored; registering, with a controller that is communicably connected to the vision system, the binocular images, and effecting stereo matching, from the binocular images, resolving a dense depth map of imaged objects in the field; detecting from the binocular images, with the controller, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; and determining, with an object extractor of the controller, location and pose of each imaged object from both the dense depth map resolved from the binocular images and the depth resolution from the stereo sets of keypoints.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are rolling shutter cameras.
In accordance with one or more aspects of the disclosed embodiment, the method further includes parsing the registered images from a video stream generated by the more than one camera.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are unsynchronized with each other.
In accordance with one or more aspects of the disclosed embodiment, the method further includes generating the binocular images with the vehicle in motion past the objects.
In accordance with one or more aspects of the disclosed embodiment, the more than one objects on the rack structure are dynamically positioned in closely packed juxtaposition with respect to each other.
In accordance with one or more aspects of the disclosed embodiment, the method further includes determining, with the controller, a front face of at least one extracted object, and dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the method further includes characterizing, with the controller, a planar surface of the front face, and orientation of the planar surface relative to a predetermined reference frame.
In accordance with one or more aspects of the disclosed embodiment, the method further includes, characterizing, with the controller, a pick surface, of the extracted object based on characteristics of the planar surface, that interfaces the payload handler.
In accordance with one or more aspects of the disclosed embodiment, the method further includes resolving, with the controller, presence and characteristics of an anomaly to the planar surface.
In accordance with one or more aspects of the disclosed embodiment, the method further includes determining, with the controller, a logistic identity of the extracted object based on dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the method further includes generating, with the controller, at least one of an execute command and a stop command of a bot actuator based on the determined location and pose.
In accordance with one or more aspects of the disclosed embodiment, a method is provided. The method includes providing an autonomous guided vehicle including: a frame with a payload hold, a drive section coupled to the frame with drive wheels supporting the autonomous guided vehicle on a traverse surface, the drive wheels effect vehicle traverse on the traverse surface moving the autonomous guided vehicle over the traverse surface in a facility, and a payload handler coupled to the frame configured to transfer a payload, with a flat undeterministic seating surface seated in the payload hold, to and from the payload hold of the autonomous guided vehicle and a storage location, of the payload, in a storage array; generating, with a vision system having binocular imaging cameras, binocular images of a field of a logistic space including rack structure shelving on which more than one objects are stored; registering, with a controller communicably connected to the vision system, the binocular images, and effecting, with the controller, stereo matching, from the binocular images, resolving a dense depth map of imaged objects in the field; detecting from the binocular images, with the controller, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; and identifying, with an object extractor of the controller, location and pose of each imaged object based on superpose of stereo sets of keypoints depth resolution and depth map.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are rolling shutter cameras.
In accordance with one or more aspects of the disclosed embodiment, the method further includes parsing the registered images from a video stream generated by the more than one camera.
In accordance with one or more aspects of the disclosed embodiment, the more than one camera are unsynchronized with each other.
In accordance with one or more aspects of the disclosed embodiment, the method further includes generating the binocular images with the vehicle in motion past the objects.
In accordance with one or more aspects of the disclosed embodiment, the more than one objects on the rack structure are dynamically positioned in closely packed juxtaposition with respect to each other.
In accordance with one or more aspects of the disclosed embodiment, the method further includes determining, with the controller, a front face of at least one extracted object, and dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the method further includes characterizing, with the controller, a planar surface of the front face, and orientation of the planar surface relative to a predetermined reference frame.
In accordance with one or more aspects of the disclosed embodiment, the method further includes characterizing, with the controller, a pick surface, of the extracted object based on characteristics of the planar surface, that interfaces the payload handler.
In accordance with one or more aspects of the disclosed embodiment, the method further including resolving, with the controller, presence and characteristics of an anomaly to the planar surface.
In accordance with one or more aspects of the disclosed embodiment, the method further including determining, with the controller, a logistic identity of the extracted object based on dimensions of the front face.
In accordance with one or more aspects of the disclosed embodiment, the method further including generating, with the controller, at least one of an execute command and a stop command of a bot actuator based on the identified location and pose.
It should be understood that the foregoing description is only illustrative of the aspects of the disclosed embodiment. Various alternatives and modifications can be devised by those skilled in the art without departing from the aspects of the disclosed embodiment. Accordingly, the aspects of the disclosed embodiment are intended to embrace all such alternatives, modifications and variances that fall within the scope of any claims appended hereto. Further, the mere fact that different features are recited in mutually different dependent or independent claims does not indicate that a combination of these features cannot be advantageously used, such a combination remaining within the scope of the aspects of the disclosed embodiment.
This application is a non-provisional of and claims the benefit of U.S. provisional patent application No. 63/383,597 filed on Nov. 14, 2022, the disclosure of which is incorporated herein by reference in its entirety.