The present invention is generally related to the fields of robotics and data acquisition. More particularly, the present invention is related to systems and methods providing robotic motion control and positional accuracy to a mobile platform acquiring data within an environment having pathways defined by fixed structures such as curbs, walls, or aisles.
In retail robotics applications, autonomous robots can traverse store flooring while performing one or more operations that involve analysis of store shelf contents. One such operation can be reading the barcodes present on shelf edges. Another operation can be identifying empty store shelves for restocking. Such operations can further include capturing high-resolution images of the shelves for reading barcodes, capturing low-resolution images for product identification by image analysis, or using depth sensors such as LIDAR or Kinect to identify “gaps” in the product presentation (missing products).
In any of these missions it is imperative that the location and orientation of the robot be well known when data is captured so the analytics can accurately identify the location of items on shelving along store aisles. In the case of barcode reading, a robotic data acquisition system built and tested by the present inventors took high-resolution images approximately every 12 inches. For certain optics and resolutions of interest, this spacing allowed a horizontal overlap between successive images of about 6 inches when the navigation system led the robot to the expected location at the expected orientation. In many cases a single barcode will be visible in two successive images (on the right of the first image and on the left of the second image, or vice versa, depending on the travel direction of the robot). If the robot's orientation is off by just one degree from what is expected, the evaluated position of the barcode can be off by 0.5 inch. If the location of the robot down the aisle is off by an inch, the detected barcode location will be off by an inch. If the distance to the shelf is off by 2 inches, the barcode location can be off by another 0.5 inch. Combining these errors can easily yield an error in the evaluated barcode position of +/−2 inches or more. Barcodes are typically about 1 inch wide. If the same barcode is visible in two successive frames and the errors are significant, the system will not be able to recognize that the barcode is the same and may consider it two separate barcodes of the same kind (e.g., the same UPC). This is called a product facing error (the system sees more product barcodes than it should) and causes errors in the data analytics to be performed on the captured data, such as compliance testing. In prototype systems built by the present inventors, this has been a frequent problem: orientation errors have been up to 4 degrees and positional errors up to 3 inches in system tests.
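The following minimal Python sketch works through the error arithmetic above. It assumes a nominal camera-to-shelf standoff of about 30 inches (a hypothetical value consistent with the 0.5-inch-per-degree figure cited); all numbers are illustrative, not measured values.

    import math

    # Hypothetical geometry consistent with the figures above: a barcode
    # viewed at a nominal standoff of about 30 inches from the camera.
    STANDOFF_IN = 30.0

    # Error contributions described in the text.
    orientation_err_deg = 1.0   # robot yaw error
    along_aisle_err_in = 1.0    # error in position down the aisle
    standoff_err_in = 2.0       # error in distance to the shelf

    # A 1-degree yaw error shifts the apparent barcode position laterally
    # by roughly standoff * tan(error): about 0.5 inch at 30 inches.
    yaw_contrib = STANDOFF_IN * math.tan(math.radians(orientation_err_deg))

    # Position error down the aisle maps one-to-one onto barcode position.
    pos_contrib = along_aisle_err_in

    # A standoff error changes the image scale; per the text, 2 inches of
    # standoff error is assumed to contribute roughly another 0.5 inch.
    scale_contrib = 0.5 * (standoff_err_in / 2.0)

    total = yaw_contrib + pos_contrib + scale_contrib
    print(f"worst-case barcode position error: +/- {total:.1f} in")
    # -> roughly +/- 2 inches, larger than a typical 1-inch-wide barcode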
Some autonomous robots deployed in retail settings can use an algorithm based on the SLAM (Simultaneous Localization and Mapping) architecture to simultaneously estimate the location of the robot and update a “store map”. This allows a device to constantly update its view of the environment and enables it to handle changes in the environment. However, such an algorithm relies heavily on statistical outcomes applied to noisy sensor data and does not meet the high positional accuracies required by certain retail robotics missions. SLAM can be used in combination with an appropriate path planning algorithm to move the robot to a specified point on the store map, but there are still limits to how accurately the robot can reach the desired location. When used to read store shelf barcodes, an autonomous robot based on the SLAM architecture generally cannot report its location and orientation to the high accuracy required for reliable analysis of the captured data. Routinely, errors in orientation can be up to 4 degrees and errors in position up to 3 inches. These errors have prevented systems from knowing the location of the barcodes accurately enough for the data analytics to perform the required analysis. The use of higher-quality sensors in the robot may reduce these errors, but at a prohibitively higher cost.
Therefore, there is a need for improved systems and methods for maintaining direction and speed of robotic systems engaged in acquiring data in mapped or route-planned environments having pathways (e.g., aisles) defined by fixed objects (e.g., shelving).
The present invention is described in the context of a solution for accurately acquiring data from shelving in a retail setting using a robotic data acquisition system; however, any reference to a retail environment, shelving, or product-related data is for exemplary purposes only and refers to a particular embodiment. It should be appreciated that the robotic data acquisition system described herein can also be used to acquire data in diverse environments containing fixed structures that define pathways for robot movement.
The present inventors have determined that an autonomous robot cannot reliably report its location and orientation to the high accuracy needed for reliable analysis of the data captured during a data gathering operation using SLAM alone. A second motion control mode is needed that can provide more accurate location information while the robot is performing data capture. Accordingly, it is a feature of the present embodiments to provide an autonomous robot control system that can maintain a desired distance and orientation to a fixed structure (e.g., a retail store shelf) at a specified speed and travel distance using: 1) range sensing, which can be provided by a Light Detection and Ranging (LIDAR) sensor; 2) a Proportional Integral Derivative (PID) controller to maintain a constant distance to the fixed structure; and 3) high-precision wheel encoders to accurately measure the distance traveled along a pathway that may be defined by the fixed structure.
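By way of illustration only, the following Python sketch shows one possible form of such a PID controller; the class, gain values, and variable names are hypothetical and are not tuned values from any embodiment.

    class PID:
        # Minimal PID controller; the gains used below are illustrative only.
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            # Accumulate the integral term and estimate the derivative
            # from successive error samples.
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical use: hold a 24-inch standoff from the shelf by steering
    # on the range error reported by the LIDAR each control cycle:
    #   steering_rate = pid.update(target_standoff - measured_standoff, dt)
    pid = PID(kp=0.8, ki=0.05, kd=0.2)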
The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
The embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments are shown. The embodiments disclosed herein can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosed embodiments. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which disclosed embodiments belong. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by one skilled in the art, the present invention can be embodied as a method, system, and/or a processor-readable medium. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the embodiments may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer-readable medium or processor-readable medium may be utilized including, for example, but not limited to, hard disks, USB Flash Drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
Computer program code for carrying out operations of the disclosed embodiments may be written in an object oriented programming language (e.g., Java, C++, etc.). The computer program code, however, for carrying out operations of the disclosed embodiments may also be written in conventional procedural programming languages such as the “C” programming language, HTML, XML, etc., or in a visually oriented programming environment such as, for example, Visual Basic.
The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN), a wide area network (WAN), or a wireless data network (e.g., WiFi, WiMax, 802.xx, or a cellular network), or the connection may be made to an external computer via most third-party supported networks (for example, through the Internet using an Internet Service Provider).
The disclosed embodiments are described in part below with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products, and data structures according to embodiments of the invention. It will be understood that each block of the illustrations, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
Note that the instructions described herein such as, for example, the operations/instructions and steps discussed herein, and any other processes described herein can be implemented in the context of hardware and/or software. In the context of software, such operations/instructions of the methods described herein can be implemented as, for example, computer-executable instructions such as program modules being executed by a single computer or a group of computers or other processors and processing devices. In most instances, a “module” constitutes a software application.
Generally, program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and instructions. Moreover, those skilled in the art will appreciate that the disclosed method and system may be practiced with other computer system configurations such as, for example, hand-held devices, multi-processor systems, data networks, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, tablet computers (e.g., iPad and other “Pad” computing device), remote control devices, wireless hand held devices, Smartphones, mainframe computers, servers, and the like.
Note that the term module as utilized herein may refer to a collection of routines and data structures that perform a particular task or implement a particular abstract data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module. The term module may also simply refer to an application such as a computer program designed to assist in the performance of a specific task such as word processing, accounting, inventory management, etc. Additionally, the term “module” can also refer in some instances to a hardware component such as a computer chip or other hardware.
It will be understood that the circuits and other means supported by each block and combinations of blocks can be implemented by special purpose hardware, software, or firmware operating on special or general-purpose data processors, or combinations thereof. It should also be noted that, in some alternative implementations, the operations noted in the blocks might occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order. Moreover, the varying embodiments described herein can be combined with one another, or portions of such embodiments can be combined with portions of other embodiments in another embodiment.
Due to the prevalence of surveillance cameras and the increasing interest in data-driven decision-making for operational excellence, several technical initiatives are currently focused on developing methods of collecting/extracting image-based and/or video-based analytics. In particular, but without limiting the applicable scope of the present invention, there is a desire by industry to bring new image-based and video-based technologies into retail business settings. Examples include store shelf-product imaging and identification, spatial product layout characterization, barcode and SKU recognition, auxiliary product information extraction, and panoramic imaging of retail environments.
Without unnecessarily limiting the scope of the present invention to retail uses, there are, for example, a large number of retail chains worldwide and across various market segments, including pharmacy, grocery, home improvement, and others. Functions that many such chains have in common are sale advertising and merchandising. An element within these processes is the printing and posting of sale item signage within each store, which very often occurs at a weekly cadence. It would be advantageous to each store if this signage were printed and packed in the order in which a person encounters sale products while walking down each aisle. Doing so eliminates the non-value-added step of manually pre-sorting the signage into the specific order appropriate for a given store. Unfortunately, with few current exceptions, retail chains cannot control or predict the product locations across each of their stores. This may be due to a number of factors: store manager discretion, local product merchandising campaigns, different store layouts, etc. Thus it would be advantageous for a retail chain to be able to collect product location data (which can also be referred to as a store profile) automatically across its stores, since each store could then receive signage in an appropriate order, avoiding a pre-sorting step.
There is growing interest by retail enterprises in having systems that use image acquisition for accelerating the process of determining the spatial layout of products in a store using printed tag information recognition. Although “barcodes” will be described as the tag information for purposes of the rest of this disclosure, it should be appreciated that imaging could equally apply to other patterns (e.g., QR codes) and serial numbers (e.g., UPC codes). Furthermore, the solutions disclosed herein can apply to several environments, including retail, warehouse, and manufacturing applications, where identifying barcoded item locations is desired. The invention described herein addresses a critical failure mode of such a system. In particular, the present invention is generally described, without suggesting any limitation of its applicability, with an embodiment aimed at eliminating or reducing the errors in determining the location of detected barcodes along the length of the aisle to improve the accuracy of the store profile and any analysis performed on the barcode data.
Referring to
As stated before, the data acquisition section 211 can include systems such as cameras or sensors required to acquire the targeted data. In
Referring to
In the prototype as tested, the robotic data acquisition system 200 moved to the beginning of the aisle 311 using an API called MoveTo(x,y,orientation) that utilizes standard motion commands (based on SLAM) to safely navigate around store obstacles to reach the desired point. However, once the robot arrives at the beginning of the aisle 311, a different API called TravelPath( . . . ) can be invoked. This method implements the control paradigm described herein. Accurate positional understanding is an enabler for the data analytics applied to barcode data as well as for gap identification from LIDAR measurements. The robot simply has to know where it is for any collected data to make sense.
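The following Python sketch illustrates how these two motion modes might alternate. MoveTo and TravelPath are the APIs named above; the robot and aisle objects and their fields are hypothetical placeholders, not an actual interface.

    # Hypothetical sketch of alternating the two motion modes described
    # above; the robot/aisle interfaces are illustrative placeholders.
    def scan_aisle(robot, aisle):
        # SLAM-based navigation (MoveTo) safely avoids store obstacles
        # on the way to the start of the aisle.
        robot.MoveTo(aisle.start_x, aisle.start_y, aisle.orientation)
        # The shelf-following mode (TravelPath) then provides the high
        # positional accuracy needed during data capture.
        robot.TravelPath(aisle.length, aisle.standoff, aisle.speed)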
Referring to
The method continues with the steps of: measuring the distance and angle to the shelf using the LIDAR; applying the PID controller to minimize the distance and angle errors while traveling at the specified speed; monitoring the high-precision wheel encoders to track the distance traveled along the aisle; and capturing images at the specified intervals until the commanded travel distance is reached, as sketched below.
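By way of non-limiting illustration, the following Python sketch shows one possible form of this control loop, assuming the PID class sketched earlier and a hypothetical robot interface exposing the LIDAR, wheel encoders, drive system, and cameras; all method names are illustrative placeholders.

    CAPTURE_INTERVAL_IN = 12.0  # image spacing cited earlier in this disclosure

    def travel_path(length_in, standoff_in, speed, pid, robot):
        # Follow the shelf for length_in inches at the given standoff.
        # `robot` is a hypothetical interface, not an actual API.
        next_capture = 0.0
        while robot.encoder_distance() < length_in:
            # Distance and angle to the shelf from the current LIDAR scan.
            dist, angle = robot.shelf_range_and_angle()
            # The PID output steers the robot to hold the standoff,
            # driving the distance and angle errors toward zero.
            steer = pid.update(standoff_in - dist, dt=0.05)
            robot.drive(speed, steer)
            # Wheel encoders give the position along the aisle, so each
            # image is tagged with an accurate capture location.
            traveled = robot.encoder_distance()
            if traveled >= next_capture:
                robot.capture_images(position=traveled, dist=dist, angle=angle)
                next_capture += CAPTURE_INTERVAL_IN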
Once the robot has come to the end of the aisle, SLAM-based navigation is used to safely move to the beginning of the next aisle, where the above control loop is repeated. This continues until the store is completely scanned.
Since this embodiment monitors wheel motion along the aisle while the PID controller minimizes angle and distance errors, the location of the robot along the aisle is known to the accuracy of the wheel encoders when the camera images are taken. Therefore, if barcodes are detected within the images, the location of the barcodes along the aisle can be determined to the accuracy of the wheel encoders and the measured angle-to-the-shelf and distance-to-shelf.
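For illustration only, the following Python sketch shows one way such a mapping from an image detection to an aisle position could be computed. The function and parameter names are hypothetical, and the geometry is the simplified model used in the error discussion above, not a calibrated camera model.

    import math

    def barcode_position_in_aisle(encoder_in, pixel_offset_px, px_per_in,
                                  shelf_angle_deg, standoff_in):
        # Offset of the barcode from the image center, converted to
        # inches at the shelf plane using the known image scale.
        offset_in = pixel_offset_px / px_per_in
        # Correct for the measured residual angle to the shelf: a yaw of
        # theta shifts the apparent position by roughly standoff * tan(theta).
        offset_in -= standoff_in * math.tan(math.radians(shelf_angle_deg))
        # The wheel-encoder reading locates the camera along the aisle.
        return encoder_in + offset_in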
It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.