The present disclosure relates generally to a method and system for monitoring store product or inventory data provided by robotic imaging systems. Data visualization techniques are used to provide time-critical data to managers.
Retail stores or warehouses can have thousands of distinct products that are often sold, removed, added, or repositioned. Even with frequent restocking schedules, products assumed to be in stock may actually be out of stock, decreasing both sales and customer satisfaction. Point-of-sale data can be used to roughly estimate product availability, but it lacks accuracy and does not help with identifying misplaced, stolen, or damaged products, all of which can reduce product availability. However, manually monitoring product inventory and tracking product position is expensive, time consuming, and prone to errors.
One use of machine vision systems is shelf space compliance in retail stores or warehouses. For example, large numbers of fixed position cameras can be used throughout a store to monitor aisles. Alternatively, a smaller number of movable cameras can be used to scan a store aisle. Even with such systems, human intervention is often required when resolution is not adequate to determine product identification number or product count.
A system for inventory and shelf compliance includes image capture units to provide images and depth data of shelving fixtures and on-shelf inventory and a database to receive inventory images and track inventory state. Inventory state can include, but is not limited to, product type, number, and placement, fixture dimensions, shelf label placement, and pricing, or any other feature or aspect of items. A visual tracking application is connected to receive data from the database (which can be supported, for example, by a local server, or cloud based data service). The application has a user interface that supports product management in a first mode specific to a single store, and in a second mode specific to a plurality of stores. The first mode provides both a summary chart of product gaps and an image covering a product gap area.
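The inventory state described above can be pictured as a simple record structure. The following is a minimal sketch, assuming illustrative field names not taken from the disclosure:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "inventory state" tracked by the database;
# field names are illustrative, not part of the disclosed design.
@dataclass
class InventoryState:
    product_type: str
    count: int
    placement: tuple          # e.g. (aisle, shelf, slot) on the fixture
    shelf_label: str = ""
    price: float = 0.0

@dataclass
class FixtureRecord:
    fixture_id: str
    width_cm: float
    height_cm: float
    items: list = field(default_factory=list)

    def add_item(self, item: InventoryState) -> None:
        self.items.append(item)

    def total_units(self) -> int:
        # Sum on-shelf counts across all tracked items on this fixture.
        return sum(i.count for i in self.items)
```

A store server or cloud data service could persist such records and serve them to the visual tracking application in either mode.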
In one embodiment, the movable base can be a manually pushed or guidable cart. Alternatively, the movable base can be a tele-operated robot, or in preferred embodiments, an autonomous robot capable of guiding itself through a store or warehouse. Depending on the size of the store or warehouse, multiple autonomous robots can be used. Aisles can be regularly inspected to create image-based, real-time product planograms (i.e. realograms), with aisles having high product movement being inspected more often.
In another embodiment, an inventory monitoring method includes the steps of providing image capture units mounted on autonomous robots to provide images of inventory and create a realogram. The realogram can be used by a product database to support determination of item or inventory state. A user is provided with a visual tracking application connected to receive data from the product database, the application having a user interface that supports product management in a first mode specific to a specified store, and a second mode specific to a plurality of stores. The first mode can provide both a summary chart of product gaps and an image covering a product gap area.
Advantageously, the realogram can be used in conjunction with shelf labels, bar codes, and product identification databases to identify products, localize product or label placement, estimate product count, count the number of product facings, or even identify missing products or locate misplaced products. Information can be communicated to a remote server and suitable user interface (e.g. a portable tablet) for use by store managers, stocking employees, or customer assistant representatives. Additionally, the realogram and other information received from other robots, from updated product databases, or from other stores, can be used to update or assist in creation of subsequent realograms. This permits maintenance of a realogram history.
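The missing- and misplaced-product identification described above amounts to comparing the observed realogram against the expected planogram. Below is an illustrative sketch; the dictionary layout is an assumption for the example, not the disclosed data format:

```python
# Compare a realogram (observed) against a planogram (expected) to flag
# missing and misplaced products. Each map is {shelf_position: product_id};
# None marks an observed empty slot. This layout is hypothetical.
def find_discrepancies(planogram: dict, realogram: dict):
    missing, misplaced = [], []
    for position, expected in planogram.items():
        observed = realogram.get(position)
        if observed is None:
            missing.append((position, expected))       # product gap
        elif observed != expected:
            misplaced.append((position, expected, observed))
    return missing, misplaced
```

Results of such a comparison could be pushed to the remote server and surfaced on the portable tablet interface for stocking employees.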
A visual tracking application (labelled StoreStats in the accompanying figures) provides the store specific first mode.
In addition to the store specific first mode, a second mode specific to a plurality of stores is also available. The second mode further provides an aggregated summary of product gaps in the plurality of stores. In some embodiments, information relating to warehouse or supplier inventory may also be used to facilitate orders for product replenishment.
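The second mode's aggregated summary can be sketched as a roll-up of per-store gap counts into chain-wide totals. This is a minimal illustration, assuming a nested-dict report format:

```python
from collections import Counter

# Aggregate per-store product-gap counts into a chain-wide summary,
# as in the second (multi-store) mode. Report shape is an assumption:
# {store_id: {product_id: gap_count}}.
def aggregate_gaps(store_reports: dict) -> Counter:
    totals = Counter()
    for gaps in store_reports.values():
        totals.update(gaps)   # Counter adds counts key-by-key
    return totals
```

Replenishment orders could then be prioritized from the highest totals, optionally cross-checked against warehouse or supplier inventory.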
Other modes can support product or item identification, localization of product or label placement, product count estimation, presentation of the number of product facings, identification of missing products, location of misplaced products, or productivity tracking. Modes can also allow for determining how long a change will take, and for determining suitable times for restocking products (e.g. the application notes that restock is available between 3 PM and 7 PM). Product outages can be totaled across the company, or can be compared across departments, stores, or other districts in the area. Averages across time can be calculated, permitting improved quality control and identification of superior managers and employees.
The aisle number is also interactive. Clicking/tapping on an aisle number (G) will take the user to another screen, with the first section's outs displayed in the table alongside a matching image. By default, aisles are arranged chronologically. Tapping “Gaps” (H) or “Time” (I) will sort the data by that attribute. When “Gaps” is selected, the data is sorted from the highest number of gaps to the lowest. When “Time” is selected, the data is sorted from the newest scan to the oldest.
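The “Gaps”/“Time” sort toggles described above can be sketched as follows. The record fields (`gaps`, `scanned_at`) are illustrative assumptions:

```python
# Sort aisle scan records per the UI toggles: "gaps" sorts from highest
# gap count to lowest; "time" sorts newest scan first. Field names are
# hypothetical for this sketch.
def sort_aisles(aisles: list, key: str) -> list:
    if key == "gaps":
        return sorted(aisles, key=lambda a: a["gaps"], reverse=True)
    if key == "time":
        return sorted(aisles, key=lambda a: a["scanned_at"], reverse=True)
    return list(aisles)   # default: leave chronological order unchanged
```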
Sections of an aisle can be chosen by tapping/clicking (G). Section buttons work like bookmarks, with the data displayed in the body of the user interface jumping the user to the selected section. The section navigation bar and table are linked together. Scrolling in the table affects the selection state of the section navigation bar. The image is not linked with the table or section navigation bar.
The object sensing suite includes forward (433), side (434 and 435), top (432) and rear (not shown) image sensors to aid in object detection, localization, and navigation. Additional sensors such as laser ranging units 436 and 438 (and respective laser scanning beams 437 and 439) also form a part of the sensor suite that is useful for accurate distance determination. In certain embodiments, image sensors can be depth sensors that project an infrared mesh overlay that allows estimation of object distance in an image, or that infer depth from the time of flight of light reflecting off the target. In other embodiments, simple cameras and various image processing algorithms for identifying object position and location can be used. For selected applications, ultrasonic sensors, radar systems, magnetometers or the like can be used to aid in navigation. In still other embodiments, sensors capable of detecting electromagnetic, light, or other location beacons can be useful for precise positioning of the autonomous robot.
The inventory monitoring camera system 400 is connected to an onboard processing module that is able to determine item or inventory state. This can include but is not limited to constructing from the camera derived images an updateable inventory map with product name, product count, or product placement. Because it can be updated in real or near real time, this map is known as a “realogram” to distinguish it from conventional “planograms” that take the form of 3D models, cartoons, diagrams or lists that show how and where specific retail products and signage should be placed on shelves or displays. Realograms can be locally stored with a data storage module connected to the processing module. A communication module can be connected to the processing module to transfer realogram data to remote locations, including store servers or other supported camera systems, and additionally receive inventory information including planograms to aid in realogram construction. Inventory data can include but is not limited to an inventory database capable of storing data on a plurality of products, each product associated with a product type, product dimensions, a product 3D model, a product image, and a current product shelf inventory count and number of facings. Realograms captured and created at different times can be stored, and data analysis used to improve estimates of product availability. In certain embodiments, the frequency of realogram creation can be increased or reduced, and changes to robot navigation can be determined.
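An updateable realogram of the kind described above can be modeled as a map from shelf location to the latest observation, with archived snapshots supporting analysis over time. This is a hedged sketch; the class and method names are illustrative:

```python
import time

# Minimal sketch of an updateable realogram: shelf location -> latest
# observed product name, count, and observation timestamp. Snapshots
# preserve a realogram history for availability-trend analysis.
class Realogram:
    def __init__(self):
        self.slots = {}     # location -> observation dict
        self.history = []   # archived snapshots from earlier scans

    def observe(self, location, product, count, ts=None):
        self.slots[location] = {
            "product": product,
            "count": count,
            "ts": ts if ts is not None else time.time(),
        }

    def snapshot(self):
        """Archive the current state before the next scan overwrites it."""
        self.history.append(dict(self.slots))
```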
In addition to realogram mapping, this system can be used to detect out of stock products, estimate depleted products, estimate the amount of products, including in stacked piles, estimate product heights, lengths, and widths, build 3D models of products, and determine product positions and orientations. It can determine whether one or more products are in a disorganized on-shelf presentation that requires corrective action such as facing or zoning operations, estimate freshness of products such as produce, and estimate quality of products, including packaging integrity. It can locate products, including at home locations, secondary locations, top stock, bottom stock, and in the backroom, detect a misplaced product event (also known as a plug), identify misplaced products, estimate or count the number of product facings, and compare the number of product facings to the planogram. It can estimate label locations, detect label type, read label content, including product name, barcode, UPC code, and pricing, detect missing labels, and compare label locations and product locations to the planogram. It can measure shelf height, shelf depth, shelf width, and section width, recognize signage, detect promotional material, including displays, signage, and features, and measure their bring-up and take-down times, and detect and recognize seasonal and promotional products and displays such as product islands and features. It can capture images of individual products and groups of products and fixtures such as entire aisles, shelf sections, specific products on an aisle, and product displays and islands, capture 360-degree and spherical views of the environment to be visualized in a virtual tour application allowing for virtual walk-throughs, and capture 3D images of the environment to be viewed in augmented or virtual reality. Finally, it can capture environmental conditions, including ambient light levels, capture information about the environment, including determining if light bulbs are off, provide a real-time video feed of the space to remote monitors, provide on-demand images and videos of specific locations, including in live or scheduled settings, and build a library of product images.
In addition to product and inventory related items, the disclosed system can be used for security monitoring. Items can be identified and tracked in a range of buildings or environments. For example, presence or absence of flyers, informational papers, memos, other documentation made available for public distribution can be monitored. Alternatively, position and presence of items in an office building, including computers, printers, laptops, or the like can be monitored.
Because of the available high precision laser measurement system, the disclosed system can be used to facilitate tracking of properties related to distances between items or furniture, as well as to measure architectural elements such as doorways, hallways, or room sizes. This allows verification of distances (e.g. aisle width) required for applicable fire, safety, or Americans with Disabilities Act (ADA) regulations. For example, if a temporary shelving display blocks a large enough portion of an aisle to prevent passage of wheelchairs, the disclosed system can provide a warning to a store manager. Alternatively, high precision measurements of door sizes, width or slope of wheelchair access pathways, or other architectural features can be made.
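The aisle-width warning described above reduces to comparing a measured clearance against a regulatory minimum. The sketch below uses the commonly cited 36-inch (about 91.4 cm) ADA accessible-route clear width as an assumed threshold; the applicable code should be verified for any real deployment:

```python
# Illustrative compliance check using the robot's distance measurements.
# The 91.4 cm threshold is the commonly cited ADA 36-inch minimum clear
# width for an accessible route; treat it as an assumption, not legal advice.
MIN_CLEAR_WIDTH_CM = 91.4

def check_aisle_clearance(measured_width_cm: float) -> str:
    if measured_width_cm < MIN_CLEAR_WIDTH_CM:
        return ("WARNING: aisle clearance %.1f cm is below the %.1f cm minimum"
                % (measured_width_cm, MIN_CLEAR_WIDTH_CM))
    return "OK"
```

A warning string of this kind could be forwarded to the store manager through the visual tracking application.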
As previously noted, a realogram can use camera derived images to produce an updateable map of product or inventory position. Typically, one or more shelf units (e.g. target 402) would be imaged by a diverse set of camera types, including downwardly (442 and 444) or upwardly (443 and 448) fixed focal length cameras that cover a defined field less than the whole of a target shelf unit; a wide field camera 445 to provide greater photographic coverage than the fixed focal length cameras; and a narrow field, zoomable telephoto 446 to capture bar codes, product identification numbers, and shelf labels. Alternatively, a high resolution, tilt controllable camera can be used to identify shelf labels. Images derived from cameras 440 can be stitched together, with products in the images identified and their positions determined.
To simplify image processing and provide accurate results, the multiple cameras are typically positioned a set distance from the targeted shelves during the inspection process. The shelves can be illuminated with LED or other directable lights 450 positioned on or near the cameras. The multiple cameras can be linearly mounted in vertical, horizontal, or other suitable orientation on a camera support. In some embodiments, to reduce costs, multiple cameras are fixedly mounted on a camera support. Such cameras can be arranged to point upward, downward, or level with respect to the camera support and the shelves. This advantageously permits a reduction in glare from products having highly reflective surfaces, since multiple cameras pointed in slightly different directions can result in at least one image with little or no glare.
Electronic control unit 420 contains an autonomous robot sensing and navigation control module 424 that manages robot responses. Robot position localization may utilize external markers and fiducials, or rely solely on localization information provided by robot-mounted sensors. Sensors for position determination include previously noted imaging, optical, ultrasonic sonar, radar, Lidar, Time of Flight, structured light, or other means of measuring distance between the robot and the environment, or incremental distance traveled by the mobile base, using techniques that include but are not limited to triangulation, visual flow, visual odometry and wheel odometry.
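Among the incremental-distance techniques listed above, wheel odometry for a differential-drive base can be sketched with a few lines of dead-reckoning math. This is a minimal illustration, assuming encoder distances per update and a known axle track; it is not the disclosed navigation algorithm:

```python
import math

# Dead-reckoning pose update from wheel-encoder distances for a
# differential-drive base. d_left / d_right are distances traveled by each
# wheel since the last update; wheel_base is the distance between wheels.
def update_pose(x, y, theta, d_left, d_right, wheel_base):
    d = (d_left + d_right) / 2.0              # distance of the base center
    dtheta = (d_right - d_left) / wheel_base  # change in heading (radians)
    # Midpoint-heading approximation for the arc traveled this step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

In practice such estimates drift and would be fused with the imaging, Lidar, or beacon measurements noted above.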
Electronic control unit 420 also provides image processing using a camera control and data processing module 422. Autonomous robot sensing and navigation control module 424 manages robot responses, and communication module 426 manages data input and output. The camera control and data processing module 422 can include a separate data storage module 423 (e.g. solid state hard drives) connected to a processing module 425. The communication module 426 is connected to the processing module 425 to transfer realogram data to remote server locations, including store servers or other supported camera systems, and additionally receive inventory information to aid in realogram construction. In certain embodiments, realogram data is primarily stored and images are processed within the autonomous robot. Advantageously, this reduces data transfer requirements, and permits operation even when local or cloud servers are not available.
In some embodiments, the robots 462 and 463 support at least one range finding sensor to measure distance between the multiple cameras and the shelves and products on shelves, with an accuracy between about 5 cm and 4 mm. This can be used to improve illumination estimates, as well as for robot navigation. Using absolute location sensors, relative distance measurements to the shelves, triangulation to a known landmark, conventional simultaneous localization and mapping (SLAM) methodologies, or relying on beacons positioned at known locations in a blueprint or a previously built map, the robots 462 and 463 can move along a path generally parallel to shelves 467. As the robots move, vertically positioned cameras are synchronized to simultaneously capture images of the shelves 467.
In certain embodiments, a depth map of the shelves and products is created by measuring distances from the shelf cameras to the shelves and products over the length of the shelving unit using a laser ranging system, an infrared depth sensor, or similar system capable of distinguishing depth at a centimeter or less scale. Consecutive depth maps as well as images are simultaneously taken to span an entire aisle or shelving unit. The images can be first stitched vertically among all the cameras, and then horizontally and incrementally stitched with each new consecutive set of vertical images as the robots 462 and 463 move along an aisle. These images, along with any depth information, are stitched together. Once a stitched image has been created, a realogram based on or derived from the composite depth map and stitched image and suitable for product mapping can be created or updated.
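The vertical-then-horizontal stitch ordering described above can be sketched as follows. Frames are modeled here as 2D lists of pixel rows for simplicity; a real implementation would align and blend overlapping regions rather than simply concatenating:

```python
# Sketch of the stitch ordering: each synchronized set of vertical camera
# frames is joined into one column, and columns are appended horizontally
# as the robot advances along the aisle. Pure concatenation stands in for
# real feature-based alignment and blending.
def stitch_vertical(frames):
    """Stack frames from the vertical camera array into one column image."""
    column = []
    for frame in frames:          # frame: list of pixel rows
        column.extend(frame)
    return column

def stitch_horizontal(panorama, column):
    """Append a new column to the growing aisle panorama, row by row."""
    if not panorama:
        return [list(row) for row in column]   # copy the first column
    for row, new_pixels in zip(panorama, column):
        row.extend(new_pixels)
    return panorama
```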
The communication system can include connections to both a wired or wireless connect subsystem for interaction with devices such as servers, desktop computers, laptops, tablets, or smart phones. Data and control signals can be received, generated, or transported between varieties of external data sources, including wireless networks, personal area networks, cellular networks, the Internet, or cloud mediated data sources. In addition, sources of local data (e.g. a hard drive, solid state drive, flash memory, or any other suitable memory, including dynamic memory such as SRAM or DRAM) can allow for local storage of user-specified preferences or protocols. In one particular embodiment, multiple communication systems can be provided. For example, a direct Wi-Fi connection (802.11b/g/n) can be used as well as a separate 4G cellular connection.
Remote servers can include, but are not limited to, servers, desktop computers, laptops, tablets, or smart phones. Remote server embodiments may also be implemented in cloud computing environments. Cloud computing may be defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”)), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
Realogram updating can begin when a robot moves to an identified position and proceeds along an aisle path at a predetermined distance. If the path is blocked by people or objects, the robot can wait until the path is unobstructed, begin movement and slow down or wait as it nears the obstruction, move along the path until required to divert around the object before reacquiring the path, or simply select an alternative aisle.
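The blocked-path behaviors listed above can be sketched as a simple decision function. The distance thresholds and policy names below are illustrative assumptions, not disclosed parameters:

```python
# Decision sketch for the blocked-path behaviors: keep scanning when the
# obstruction is far, slow when approaching, switch to an alternative
# aisle if one is free, otherwise wait or divert. Thresholds are
# hypothetical tuning values.
def blocked_path_action(distance_to_obstruction_m, alternate_aisle_free):
    if distance_to_obstruction_m > 5.0:
        return "continue"        # far away: keep scanning at normal speed
    if distance_to_obstruction_m > 2.0:
        return "slow"            # nearing the obstruction: reduce speed
    if alternate_aisle_free:
        return "switch_aisle"    # simply select an alternative aisle
    return "wait_or_divert"      # wait until clear, or divert and reacquire
```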
Many modifications and other embodiments of the invention will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the invention is not to be limited to the specific embodiments disclosed, and that modifications and embodiments are intended to be included within the scope of the appended claims. It is also understood that other embodiments of this invention may be practiced in the absence of an element/step not specifically disclosed herein.
This application claims the benefit of U.S. Provisional Application Ser. No. 62/407,375, filed Oct. 12, 2016, which is hereby incorporated herein by reference in its entirety for all purposes.