During transactions in retail facilities, items being purchased by customers are scanned via a scanning device at a point-of-sale (POS), such as a staffed checkout or a self-checkout (SCO). As each item is scanned, a universal product code (UPC) for each item is added to a list of scanned items which are ultimately included in a purchase receipt when the transaction is complete. However, in some transactions, one or more items may inadvertently fail to be scanned, resulting in shrink. To address this issue, an exit greeter can be employed to scan one or more random items in each customer's cart as the customers exit the store. However, this is a time consuming and inefficient process, as only a few items are typically checked while the majority of the items in the cart remain unchecked, resulting in sub-optimal verification of cart contents that is limited in scope. Moreover, this procedure results in increased friction at store exit for customers, as the exit process can create a bottleneck at store exit that can cause unpredictable exit wait times and inconvenience leading to a potentially negative customer experience.
Some embodiments provide a system and method for zero-friction unpaid item identification via computer vision. An image of a selected cart is selected from a plurality of images of the selected cart using a set of anchor points associated with a field of view of an image capture device. A plurality of items associated with the selected cart is identified by item detection and item recognition models using the selected image. An item identifier (ID) associated with each item in the plurality of items is predicted, yielding a set of identified items comprising a plurality of item IDs associated with the plurality of items. An e-receipt associated with the selected cart is selected from a plurality of active e-receipts using a fuzzy matching of a set of paid items included in the selected e-receipt and the set of identified items generated using the selected image in real time. The set of paid items includes a scanned item ID associated with each item scanned at a POS device during a transaction associated with the selected e-receipt. Each scanned item ID in the set of paid items is mapped to an identified item ID in the set of identified items. An unmapped item in the set of identified items is a predicted unpaid item. Upon receiving a verification request signal associated with the selected e-receipt from a scan device indicating a user is ready to exit, a notification including a verification result is generated, wherein the verification result includes a set of unpaid items. Each predicted unpaid item in the set of unpaid items is associated with an item having an item ID in the set of identified items that fails to map to a corresponding item ID in the set of paid items. The notification is sent to a user interface device associated with the scan device via a network.
The notification includes a list of items in the set of paid items and a list of items in the set of unpaid items, enabling near zero-friction item verification at exit via real-time computer vision object detection and recognition.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Corresponding reference characters indicate corresponding parts throughout the drawings.
A more detailed understanding can be obtained from the following description, presented by way of example, in conjunction with the accompanying drawings. The entities, connections, arrangements, and the like that are depicted in, and in connection with the various figures, are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure depicts, what a particular element or entity in a particular figure is or has, and any and all similar statements, that can in isolation and out of context be read as absolute and therefore limiting, can only properly be read as being constructively preceded by a clause such as “In at least some examples, . . . ” For brevity and clarity of presentation, this implied leading clause is not repeated ad nauseum.
Shrink is an issue troubling retailers worldwide, resulting in over $61B in lost profit every year. At some stores, missed item scans at exit alone can result in millions of dollars in losses attributed to shrink annually. Some stores attempt to reduce shrink by stopping customers as they exit the store and scanning a few random items in their cart. However, this approach has several drawbacks that impede its effectiveness. For example, the process of stopping customers for random scans introduces additional wait times and creates an additional step for customers to perform before leaving the store. This added inconvenience can lead to dissatisfaction among customers. The random checks of every customer's shopping cart can cause embarrassment and discomfort for customers, as it can imply a sense of distrust towards every customer. Such practices can negatively impact the overall shopping experience and customer loyalty. It also limits the amount of shrink captured because only a small sample size of items, typically three random items from each cart, are scanned and verified. This method may miss unpaid items that are not among the randomly selected items, leaving potential losses unaddressed. This is an inefficient and unreliable method of reducing shrink due to unscanned items.
Referring to the figures, examples of the disclosure enable exit computer vision unpaid item identification for a reduced friction exit experience. In some embodiments, an unpaid item manager selects an image of a selected cart from a plurality of images of the selected cart using object tracking, a depth model, and the tracking trajectory of the cart relative to a set of anchor points associated with a field of view of an image capture device. In this manner, the system is able to select optimal images having a full view of the selected cart, which enable generation of the highest quality detection and recognition model results. This reduces false positives and error rates while improving accuracy of item recognition results.
In some embodiments, when a user completes a purchase transaction, the system maps each scanned item identified in an e-receipt to predicted item IDs for items identified using computer vision (CV) object detection and recognition models. If any item identified by the CV models fails to map to a scanned item included in the list of paid items in the e-receipt, a notification is generated and presented to a user via a user interface device. This enables the user to identify and/or scan missed items quickly and easily with minimal friction prior to exiting the store while reducing shrink resulting from unscanned items remaining at customer exit from the store.
Other aspects provide fuzzy basket matching for identifying an electronic receipt (e-receipt) associated with the identified items in a selected cart. This enables more accurate matching of paid item information obtained from the e-receipt with item recognition data associated with a selected cart while reducing error rates associated with false positives.
Some embodiments provide a mapping component that generates a list of unpaid items by mapping the predicted item identifiers (IDs) for items identified using item detection and recognition models with paid item IDs obtained from a selected e-receipt. This enables fast and efficient identification of unpaid items with zero-friction for customers attempting to checkout and exit a retail facility.
Other embodiments generate a notification including a list of unpaid items and paid items associated with a receipt ID for a basket of items purchased by a customer in real-time as a customer is exiting a retail facility. This enables frictionless exit while providing quick and reliable access to accurate unpaid item lists and paid item lists for each customer basket.
The computing device operates in an unconventional manner by automatically identifying unpaid items for customer baskets using e-receipt data and item recognition results. The results are provided to a user seamlessly in response to a verification request generated upon scanning a receipt associated with the customer basket. In this manner, the computing device is used in an unconventional way and allows accurate and reliable identification of unpaid items without physically scanning items in the customer's basket for prevention of shrink while improving customer exit experience and reducing time spent verifying item payments. The system further reduces system memory usage consumed in storing item scan data and matching scanned items to receipt data during manual verifications of customer baskets at exit.
In other embodiments, the computing device operates in an unconventional manner by presenting a list of unpaid items to a user requesting basket payment verification in real-time after completion of a transaction but before the customer exits the store, thereby enabling the user to quickly identify missed items. This reduces system resources consumed by scanning random items during a manual verification process and reduces the number of missed unpaid items, further improving overall efficiency of human users by eliminating the need to scan random items in a customer cart at exit. The system further reduces human time and effort consumed scanning receipts and matching randomly scanned items to the receipts at exit, reducing queue lines of customers waiting to exit the store.
In some embodiments, computer vision technology is used to capture images of customer carts and identify items within the cart. The system compares recognized items to the items scanned at the POS and/or recorded on the receipt. The system identifies potential shrink and notifies a user or cashier after the transaction is completed but before the customer exits, sending the missed item alert to a tablet of a user and/or another user interface (UI) at the checkout terminal. This enables the user to identify missed items quickly and efficiently at exit.
The system further outputs the verification results to a user via a user interface (UI). The results include a list of unpaid items and/or images of the unpaid items, enabling a user to quickly locate and scan the unpaid items if desired. This improves user efficiency via UI interaction with increased user interaction performance, thereby improving the functioning of the underlying computing device.
Referring again to
In some embodiments, the computing device 102 has at least one processor 106 and a memory 108. The computing device 102, in other embodiments includes a user interface device 110.
The processor 106 includes any quantity of processing units and is programmed to execute the computer-executable instructions 104. The computer-executable instructions 104 are performed by the processor 106, performed by multiple processors within the computing device 102 or performed by a processor external to the computing device 102. In some embodiments, the processor 106 is programmed to execute instructions such as those illustrated in the figures (e.g.,
The computing device 102 further has one or more computer-readable media such as the memory 108. The memory 108 includes any quantity of media associated with or accessible by the computing device 102. The memory 108 in these embodiments is internal to the computing device 102 (as shown in
The memory 108 stores data, such as one or more applications. The applications, when executed by the processor 106, operate to perform functionality on the computing device 102. The applications can communicate with counterpart applications or services such as web services accessible via a network 112. In an example, the applications represent downloaded client-side applications that correspond to server-side services executing in a cloud.
In other embodiments, the user interface device 110 includes a graphics card for displaying data to the user and receiving data from the user. The user interface device 110 can also include computer-executable instructions (e.g., a driver) for operating the graphics card. Further, the user interface device 110 can include a display (e.g., a touch screen display or natural user interface) and/or computer-executable instructions (e.g., a driver) for operating the display. The user interface device 110 can also include one or more of the following to provide data to the user or receive data from the user: speakers, a sound card, a camera, a microphone, a vibration motor, one or more accelerometers, a BLUETOOTH® brand communication module, wireless broadband communication (LTE) module, global positioning system (GPS) hardware, and a photoreceptive light sensor. In a non-limiting example, the user inputs commands or manipulates data by moving the computing device 102 in one or more ways.
The network 112 is implemented by one or more physical network components, such as, but without limitation, routers, switches, network interface cards (NICs), and other network devices. The network 112 is any type of network for enabling communications with remote computing devices, such as, but not limited to, a local area network (LAN), a subnet, a wide area network (WAN), a wireless (Wi-Fi) network, or any other type of network. In this example, the network 112 is a WAN, such as the Internet. However, in other embodiments, the network 112 is a local or private LAN.
In some embodiments, the system 100 optionally includes a communications interface device 114. The communications interface device 114 includes a network interface card and/or computer-executable instructions (e.g., a driver) for operating the network interface card. Communication between the computing device 102 and other devices, such as but not limited to a user device 116 and/or a cloud server 118, can occur using any protocol or mechanism over any wired or wireless connection. In some embodiments, the communications interface device 114 is operable with short range communication technologies such as by using near-field communication (NFC) tags.
The user device 116 represents any device executing computer-executable instructions. The user device 116 can be implemented as a mobile computing device, such as, but not limited to, a wearable computing device, a mobile telephone, laptop, tablet, computing pad, netbook, gaming device, and/or any other portable device. The user device 116 includes at least one processor and a memory. The user device 116 can also include a user interface (UI) 120.
The cloud server 118 is a logical server providing services to the computing device 102 or other clients, such as, but not limited to, the user device 116. The cloud server 118 is hosted and/or delivered via the network 112. In some non-limiting embodiments, the cloud server 118 is associated with one or more physical servers in one or more data centers. In other embodiments, the cloud server 118 is associated with a distributed network of servers.
The system 100 can optionally include a data storage device 124 for storing data, such as, but not limited to one or more item detection model(s) 126, one or more item recognition model(s) 128, and/or image data 130. The image data optionally includes one or more indicator(s) 132 associated with one or more identified items 134. The indicator(s) 132 in some embodiments include color-coded bounding boxes placed around the images of objects, such as a shopping cart and/or items in the shopping cart.
The item detection model(s) 126 are deep learning CV object detection models that are trained to analyze one or more image(s) of a checkout area within a retail environment and identify shopping carts and items in shopping carts shown in the image(s). The detected items are enclosed within bounding boxes in the image(s). The images are cropped to isolate the selected shopping cart identified in the image. The image(s) of the shopping cart are cropped to isolate the one or more item(s) in the shopping cart.
The item recognition model(s) 128 include one or more trained CV deep learning item recognition models that recognize the items detected by the item detection model(s) 126. The item recognition model(s) 128 predicts or infers an item ID for each recognized item. The item ID, in some embodiments, is a universal product code (UPC) associated with the identified item. For example, if the item detection model(s) isolate an item in an image that is recognized as a brand “A” 24-pack of soft drinks, the item recognition model(s) 128 infers an item ID for the brand “A” 24-pack of soft drinks and associates that item ID with the image of the brand “A” 24-pack of soft drinks. The item recognition model(s) are trained using labeled image data including images of thousands of items in a catalog of items stocked and/or offered for sale in the retail store.
The data storage device 124 can include one or more different types of data storage devices, such as, for example, one or more rotating disk drives, one or more solid state drives (SSDs), and/or any other type of data storage device. The data storage device 124, in some non-limiting embodiments, includes a redundant array of independent disks (RAID) array. In some non-limiting embodiments, the data storage device(s) provide a shared data store accessible by two or more hosts in a cluster. For example, the data storage device may include a hard disk, a redundant array of independent disks (RAID), a flash memory drive, a storage area network (SAN), or other data storage device. In other embodiments, the data storage device 124 includes a database, such as, but not limited to, the database 232 in
The data storage device 124, in this example, is included within the computing device 102, attached to the computing device, plugged into the computing device, or otherwise associated with the computing device 102. In other embodiments, the data storage device 124 includes a remote data storage accessed by the computing device via the network 112, such as a remote data storage device, a data storage in a remote data center, or a cloud storage.
The memory 108 in some embodiments stores one or more computer-executable components, such as, but not limited to, an unpaid item manager 140, that, when executed by the processor 106 of the computing device 102, selects one or more image(s) of a selected cart from a plurality of images of the selected cart using a set of anchor points associated with a field of view of an image capture device. The unpaid item manager 140 identifies a plurality of items associated with the selected cart using the selected image(s). The unpaid item manager 140 predicts an item identifier (ID) associated with each item in the plurality of items associated with the selected cart. The unpaid item manager 140 generates a list of identified items 134 and/or a list of paid items 136. The paid items 136 are identified using an e-receipt selected from a plurality of e-receipts.
In some embodiments, the unpaid item manager 140 selects an e-receipt associated with the selected cart from a plurality of active e-receipts 152 using a fuzzy matching of the list of paid items 136 included in the selected e-receipt 154 and the list of identified items 134 generated using the selected image in real time. The paid items 136 include a paid item ID associated with each item scanned at a POS device during a transaction associated with the selected e-receipt. The unpaid item manager 140 maps each paid item ID in the list of paid items 136 to an identified item ID in the list (set) of identified items 134.
An unmapped item in the identified items 134 is a predicted unpaid item. When the unpaid item manager 140 receives a verification request 138 signal associated with the selected e-receipt from a scan device, indicating a user is ready to exit and a verification of the basket contents payment is required, the unpaid item manager 140 generates a notification 142 including a verification result 144. The verification result 144 includes a list of unpaid items 146. Each predicted unpaid item is associated with an item ID in the list of identified items 134 that fails to map to a corresponding item ID in the list of paid items 136. The unpaid item manager 140 sends the notification 142 to a user interface device, such as, but not limited to, the user interface device 110 and/or the UI 120.
The notification includes the list of unpaid item(s) 146 and/or the list of paid items 136. The notification optionally also includes one or more image(s) 148 of the selected cart with an overlay of indicator(s) 132 highlighting the unpaid items. The notification 142 optionally also includes one or more instruction(s) 150, such as an instruction to scan one or more of the unpaid item(s) 146.
In this example, the unpaid item manager 140 is implemented on the computing device 102. However, in other embodiments, the unpaid item manager is implemented on a remote computing device or a cloud server, such as, but not limited to, the cloud server 118, as shown in
In the example shown in
The checkout area 202 is an area associated with a POS device 204 for completing a transaction to purchase one or more items associated with one or more cart(s) 206. The checkout area 202 includes staffed checkout lanes and/or unstaffed, self-checkout (SCO) lanes. An SCO is an unstaffed checkout lane. In this example, the POS device 204 includes a scan device 208 for scanning item identifier codes associated with items, such as, but not limited to, a universal product code (UPC), a radio frequency identifier (RFID) tag, matrix barcode, or any other type of identifier. The scan device 208 generates scan data 210 associated with a plurality of items 212 associated with a selected cart 214.
The POS device 204 generates a receipt 216 associated with the plurality of items scanned by the scan device 208. The receipt 216 includes a physical receipt printed by a printer device 218 and/or an electronic receipt. The receipt 216 includes a list of scanned items and the item IDs 222 associated with the scanned items. In some embodiments, the receipt 216 includes a unique receipt ID 224. The receipt ID in some embodiments is a barcode or other identifier, such as, but not limited to, a UPC, a matrix barcode, or any other type of unique ID. The receipt ID is scanned by a scan device to obtain the receipt ID. The receipt ID is used to retrieve receipt data, including a list of purchased (paid) items on the receipt.
As each receipt is generated, the POS device transmits an e-receipt copy of the receipt to the unpaid item manager 140 via a network, such as, but not limited to, the network 112 in
In this example, the exit area includes one or more image capture device(s) 226 generating a plurality of images 228 of the plurality of cart(s) 206 and cart contents, such as, but not limited to, the plurality of items 212 associated with the contents of the selected cart 214. The image capture device(s) generates images of the cart(s) as the cart(s) move away from the POS device and toward an exit. As the cart(s) move, they pass one or more anchor point(s) 230. In this example, there are three anchor point(s) 230 used to identify selected images of the selected carts. In other words, the system selects images of the selected cart when the cart is positioned near (in proximity to) an anchor point. The anchor point identifies a location in which a cart is fully within the field of view (FOV) 306 of at least one image capture device, such as a camera mounted to a ceiling of the retail facility.
The unpaid item manager 140 analyzes the image(s) using a depth model to identify a selected cart nearest to the anchor point(s), such as but not limited to, the depth model 310 in
The unpaid item manager 140 detects and identifies one or more items in the selected cart 214 via one or more detection and recognition models, such as, but not limited to, the item detection model(s) 126 and/or the item recognition model(s) 128 in
In other embodiments, the POS device includes a processor and a memory for generating messages transmitted to the unpaid item manager 140, such as, but not limited to, the messages including the e-receipts and/or a verification request message.
The item detection model(s) 126 applies an item detection algorithm to crop images of items present in the selected cart. This process enables precise localization of items within the captured images. The cropped image(s) 314 are isolated or highlighted by bounding boxes 316 enclosing the detected items within an overlay of the image data, in some embodiments.
The item recognition model(s) 128 apply two item recognition algorithms to infer the Universal Product Code (UPC) from the cropped item images. These algorithms facilitate accurate identification and recognition of items in the transaction. The item recognition model(s) generate predicted item IDs 318 for the identified item(s) 320. The identified items include items detected in the images for which the item recognition model(s) recognize and infer an item ID. The item IDs, in this example, include one or more item UPC(s) 322.
The depth model 310 includes one or more models trained to generate depth values 312 associated with one or more objects (items) in an image. In this example, the depth model determines a depth value for each shopping cart in an image. The depth values are used to identify a shopping cart which is closest in proximity to the POS device and/or an anchor point in an image. Carts which are located too far away from the anchor points (threshold depth/distance) are discarded or disregarded.
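The depth-based cart filtering described above can be sketched as follows. This is an illustrative sketch only; the function name, the use of a single fixed distance threshold, and the example depth values are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch of depth-based cart filtering: given per-cart depth
# values produced by a depth model, keep only the cart nearest the anchor
# point and discard carts beyond a threshold distance.

def select_nearest_cart(cart_depths, max_depth=3.0):
    """cart_depths: dict mapping cart_id -> estimated depth (e.g., meters)
    from the anchor point. max_depth is an assumed threshold. Returns the
    nearest eligible cart_id, or None if every cart is too far away."""
    eligible = {cid: d for cid, d in cart_depths.items() if d <= max_depth}
    if not eligible:
        return None  # all carts beyond the threshold are disregarded
    return min(eligible, key=eligible.get)
```

In this sketch, carts whose depth value exceeds the threshold are simply dropped from consideration, mirroring the disclosure's discarding of carts located too far from the anchor points.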
One or more images of the selected cart are selected from a plurality of images of the selected cart by a cart image selection 308. The cart image selection 308 identifies images in which the selected cart is located within a predetermined proximity or range from one or more anchor point(s) 324. The selected image(s) 326 include images in which the cart is fully visible in the FOV of the image capture device without any obstructions visible.
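The anchor-point-based image selection can be illustrated with a simple proximity test. This is a hedged sketch under assumed details: the disclosure describes a similarity measure between the bounding box and the anchor points, and the center-distance radius used here is one illustrative choice of such a measure.

```python
# Illustrative sketch (assumed details): select frames in which the cart's
# bounding-box center lies within a fixed radius of any anchor point,
# approximating "cart fully within the field of view".

def select_cart_images(frames, anchor_points, radius=50.0):
    """frames: list of (image_id, bbox) where bbox = (x1, y1, x2, y2).
    anchor_points: list of (x, y) locations. Returns the image_ids whose
    bounding-box center falls within `radius` of any anchor point."""
    selected = []
    for image_id, (x1, y1, x2, y2) in frames:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if any((cx - ax) ** 2 + (cy - ay) ** 2 <= radius ** 2
               for ax, ay in anchor_points):
            selected.append(image_id)
    return selected
```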
An item list generator 325 generates a set of identified items 320 including the inferred or predicted item ID(s) 328 for each item recognized by the item recognition model(s) 128. A list of identified items 330 is generated by the item list generator 325. The list of identified items 330 includes the predicted item IDs 328 for the set of identified items 320. The item list generator also generates a list of paid items 332 including the item IDs 336 for a set of paid items 334 obtained from a selected e-receipt 338. The selected e-receipt 338 is an electronic receipt selected from a plurality of e-receipts using a basket fuzzy matching 342 algorithm. Each e-receipt is optionally associated with a unique receipt ID 340 used to request verification when the customer receipt is scanned.
By leveraging the computer vision item detection model(s) and the item recognition model(s), in this example, the unpaid item manager generates a list of unpaid items 344 by inferring item ID(s) 346 from the image(s). A mapping component 343 maps the list of identified item IDs to the list of paid item IDs from the selected e-receipt 338 to generate the list of unpaid item(s) 344. The list of unpaid item(s) includes the item ID(s) for identified items in the set of identified items which fail to map to at least one paid item ID in the set of paid items. The list of unpaid item(s) 344 is included in verification result(s) 348 which are stored in a database with the receipt ID for the e-receipt associated with the selected cart.
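The mapping step above can be sketched as a multiset difference: each paid item ID consumes one matching identified item ID, and identified items left unmatched are the predicted unpaid items. The function name and multiset semantics are illustrative assumptions; the disclosure does not specify this particular implementation.

```python
# Hedged sketch of the mapping component: identified item IDs that fail to
# map to a paid item ID become predicted unpaid items. Multiset semantics
# handle duplicates (e.g., two units of the same UPC, only one paid).
from collections import Counter

def predict_unpaid_items(identified_ids, paid_ids):
    """Both arguments are lists of item IDs (e.g., UPC strings).
    Returns the sorted list of identified IDs with no matching paid ID."""
    remaining = Counter(identified_ids) - Counter(paid_ids)
    return sorted(remaining.elements())
```

For example, if the CV models identify two units of item "b" but only one was scanned, one "b" remains in the unpaid list.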
When the unpaid item manager 140 receives a verification request from a user device, a notification component 350 generates one or more notifications 353 including the list of unpaid item(s) 344. The notification(s) optionally also include one or more instruction(s) 352, such as a scan item instruction, a “green to go” instruction, and/or any other type of instruction. The notification(s) 353 optionally also include one or more image(s) 354 of the unpaid items. The notification component 350 sends the notification(s) to a UI device for display to a user.
In some embodiments, the notification component sends a single notification identifying all the unpaid items, or the notification component can send a series of notifications in which each notification identifies a single unpaid item. After each unpaid item is scanned, the new scan data for the scanned item is received, triggering the unpaid item manager to send the next notification identifying the next unpaid item that requires scanning. In this manner, the notifications enable the system to walk the user through the process of identifying each unpaid item one at a time and scanning it (adding it to the basket of scanned items).
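The one-item-at-a-time notification flow can be sketched as a generator that yields one notification per unpaid item and is advanced only after the corresponding scan data arrives. The dictionary shape and the final "green to go" step are illustrative assumptions drawn from the instruction types mentioned above.

```python
# Illustrative sketch (assumed structure): walk the user through scanning
# unpaid items one at a time; the caller resumes the generator when the
# scan event for the previous item is received.

def notification_walkthrough(unpaid_items):
    """unpaid_items: list of unpaid item IDs. Yields one notification per
    item, then a final all-clear notification."""
    for item_id in unpaid_items:
        yield {"instruction": "scan item", "item_id": item_id}
    yield {"instruction": "green to go"}
```

A caller would invoke `next()` on the generator each time new scan data confirms the previous unpaid item was scanned.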
In some embodiments, these notifications display red bounding boxes, one at a time, on the cart image, effectively highlighting the unpaid items for further attention and resolution. In other embodiments, the notifications include cropped images of each unpaid item. In still other embodiments, the notifications include anchor images of the unpaid item. An anchor image is a stock image or image of an item from a catalog of items.
The process begins by selecting an image of a cart at 402. The cart is a selected cart, such as the selected cart 214 in
While the operations illustrated in
Referring now to
The process begins by obtaining a plurality of images of a selected cart at 502. The plurality of images includes images generated by an image capture device, such as, but not limited to, the image capture device(s) 226 in
While the operations illustrated in
The process begins by receiving scan data including a receipt ID associated with a verification request at 602. The scan data is generated by a scan device, such as, but not limited to, the scan device 208 in
While the operations illustrated in
In other embodiments, the images generated by the cameras are used for basket matching. During the basket matching process, the exit CV system generates a CV item list and matches it with active candidate receipts, linking cart images generated by the cameras to the respective SNG receipts generated during cart checkout.
For example, if the exit CV system uses images of a customer cart to identify a list of items in the cart including a package of Brand “X” cookies, a package of Brand “Y” chips, a bottle of Brand “Z” soda, and a pineapple, the system attempts to match the list of identified items with a list of scanned items in a customer e-receipt. In this example, a first receipt includes a package of Brand “X” cookies, a package of Brand “Y” chips, a bottle of Brand “Z” soda, and a package of nuts. A second receipt includes paper towels, a package of Brand “X” cookies, a package of Brand “A” chips, and a bottle of water. A third receipt contains hotdog buns, beef ribs, tomatoes, and strawberries. In this example, the system matches the list of identified items captured in the image(s) with the first list of items associated with the first receipt because the items are the closest match.
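The worked example above can be expressed as a small scoring sketch. Jaccard similarity is an assumed, illustrative overlap metric here; the disclosure's fuzzy matching is not specified to this level of detail.

```python
# Hedged illustration of basket fuzzy matching: score each candidate
# e-receipt by its overlap with the CV-generated item list and select the
# closest match.

def match_receipt(identified, receipts):
    """identified: set of item names from the CV item list.
    receipts: dict mapping receipt_id -> set of scanned item names.
    Returns the receipt_id with the highest Jaccard overlap score."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(receipts, key=lambda rid: jaccard(identified, receipts[rid]))
```

Applied to the example, the first receipt shares three of four items with the CV list (cookies, chips, soda), a higher overlap score than the other candidates, so it is selected as the closest match.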
In some embodiments, the system provides an innovative exit computer vision (CV) solution designed to address the challenges faced by self-checkout (SCO) customers during the checkout process. To streamline and enhance the exit experience, the system includes an archway between the checkout terminal and the exit door of the store. The archway is a structure including one or more cameras for capturing images of the cart from multiple different angles as the cart passes through the archway. In some embodiments, the archway includes a top camera which captures a bird's eye view (top view) of the cart, as well as a camera on the right side of the arch and a camera on the left side of the arch to capture images of both sides of the cart.
The system includes a valid cart verification in which a customer walks their cart towards the exit and the system performs a quick check to ensure that the cart falls within the SCO camera view. The SCO may also be referred to as a scan and go (SNG). The system determines whether the cart is a valid cart eligible for the exit CV process. Item recognition with advanced CV algorithms is employed to recognize and generate a comprehensive list of items present in the cart. Fuzzy matching is used between the active SCO e-receipts and the CV generated list of items. The fuzzy matching links the cart images to the corresponding SCO e-receipts, ensuring accurate verification. The system then compares the results and displays, on a gallery page, a list of paid items associated with a selected e-receipt and a list of unpaid items from the CV generated list of items that fail to match an item on the selected e-receipt. Simultaneously, this information is transmitted to the receipt check team for further verification. When the exit greeter scans the receipt barcode provided by the customer exiting the store, the receipt check system fetches and presents the results, including the list of paid items and the list of unpaid items. The results are displayed to the exit greeter via a user interface associated with a user device. If all the items in the CV generated list of items have been paid for, there is no need for random scanning of one or more items in the customer's cart. If any items are included in the list of unpaid items, the exit greeter scans those identified items to verify whether those items were paid for, rather than scanning random items. This focused approach is more accurate and time-efficient for both the associates and the customers. Moreover, it significantly reduces friction for customers and streamlines the exit process for a more customer-friendly experience.
In other embodiments, the system provides an exit CV solution that streamlines the cart verification process as customers approach an exit door in a retail facility, such as a brick-and-mortar retail store. The system employs a cart detection model and an object tracking algorithm to analyze images generated by one or more exit cameras to determine whether a customer cart is passing through an exit camera view. The system verifies the cart's trajectory (direction of travel) using a series of images of the cart in sequence, ensuring the cart enters from the left side of the view and exits on the right side. Representative cart image selection uses a similarity measure between the cart bounding box and three anchor points, with the system selecting representative cart images along the cart trajectory such that the chosen bounding box is fully visible in each selected image. Computer vision item list generation leverages computer vision models, such as cart detection, item detection, depth, classification, and verification models. The system processes the selected cart images to generate a comprehensive computer vision item list. Basket fuzzy matching utilizes the CV generated item list and associates the cart image with the corresponding SCO receipt from the active SCO receipt list. Finally, the system compares the detected items, displaying a comprehensive list of paid items and unpaid items. The results are forwarded to the receipt check team for verification.
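The representative-image selection step above can be illustrated as a nearest-anchor search over the tracked cart bounding boxes. The three anchor coordinates and the Euclidean distance measure are assumptions for illustration; the disclosure specifies only that a similarity measure between the bounding box and a set of anchor points is used.

```python
# Sketch of representative cart image selection: for each anchor point in
# the camera view, choose the tracked frame whose cart bounding-box center
# lies closest to that anchor. Anchor coordinates are hypothetical.
import math

# Three assumed anchor points across a 1920x1080 view, tracing the expected
# left-to-right cart trajectory described in the text.
ANCHORS = [(320, 540), (960, 540), (1600, 540)]

def box_center(box):
    """Center of a bounding box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def select_representative(frames):
    """frames: list of (image_id, bounding_box) along the cart trajectory.
    Returns one image ID per anchor point (nearest bounding-box center)."""
    selected = []
    for anchor in ANCHORS:
        best = min(frames, key=lambda f: math.dist(box_center(f[1]), anchor))
        selected.append(best[0])
    return selected

# Usage: three tracked frames near the left, center, and right anchors.
frames = [("f1", (200, 400, 440, 680)),
          ("f2", (840, 400, 1080, 680)),
          ("f3", (1480, 400, 1720, 680))]
```

A fully visible bounding box near each anchor yields images of the cart at distinct trajectory positions, which is what the item detection models downstream would consume.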
In other embodiments, the system provides an efficient exit experience for both customers and store associates. Upon scanning the receipt barcode, the exit greeter receives the system's results. If the cart is determined to be “green to go,” this indicates all items have been paid for and there is no need for random scanning of three items in the cart, significantly reducing friction for customers. Through this optimized exit CV process, a seamless and hassle-free exit experience is provided for customers, enhancing customer satisfaction and increasing loyalty to the retail store.
Still other embodiments provide a system to recognize and identify items that are likely to have been missed during a checkout process at an exit of a store. The system includes multiple computer vision image capture models for identifying and/or validating the cart contents against a receipt. The system validates carts to ensure they fall within the field of view of one or more cameras. The system identifies items in the cart and performs fuzzy matching between the identified items and a set of possible electronic receipts (e-receipts) associated with recent transactions. The system presents information associated with paid items and possible unpaid items in the cart based on the comparison. The system sends notifications to an exit greeter to confirm the accuracy of the e-receipt more efficiently with respect to items in the cart. The notification to the exit greeter is sent with visual cues (indicators) outlining the potentially unpaid items with bounding boxes. The system reduces the occurrence of shrink associated with unpaid items and improves the efficiency of the exit greeters that validate the contents of a cart as it exits the store, as the exit greeters need not randomly scan items in the cart or take other such steps to prevent or control potential shrink.
In some embodiments, the exit CV system enables a cart verification process and enhances the overall customer experience. For example, the system provides frictionless exit verification. Unlike traditional methods that involve stopping customers and conducting random checks, the exit CV system efficiently verifies the presence of unpaid items as customers walk their carts to the exit door without any interruption. This eliminates the need for additional random checks by the exit greeter, reducing friction and ensuring a smooth and effortless exit experience, particularly for ‘green to go’ transactions.
In some embodiments, the system enables intelligent unpaid item notification. When the system detects unpaid items in a cart, it notifies the exit greeter with visual cues, outlining the potentially unpaid items with bounding boxes. This intelligent notification mechanism is superior to the traditional receipt check, which relies on random selection. The system employs smart sampling to accurately identify potentially unpaid items, fostering greater trust and confidence in customers.
In other embodiments, the system enables comprehensive item verification. The exit CV system's advanced capabilities allow it to detect and recognize every visible item in the shopping cart. As a result, the system can verify more paid items and effectively catch a higher number of unpaid items. This comprehensive approach to item verification ensures that losses due to shrink are minimized, leading to substantial savings and a reduction in shrink. The exit CV system provides the ability to seamlessly verify carts for unpaid items without disrupting the exit process, an intelligent notification system, and the capacity to accurately verify a comprehensive range of items. These features combine to create a frictionless, trustworthy, and efficient exit experience for customers.
In one example, the system performs cart detection, item detection, item classification and verification using computer vision image analysis. The system captures images of carts as the carts are exiting the retail facility. The images are received from one or more camera devices. The system uses computer vision to detect and track the customer carts. Sample images of the carts are selected, compressed, and sent to the unpaid item manager for analysis.
In another example, the system listens for a cart-detected trigger indicating a cart is exiting. The system performs item detection to obtain cropped images of the cart contents. The system calls classification and verification models on the cropped images. Basket matching is performed to match the cart to a receipt or an e-receipt. The system saves the basket matching results and images of the cart to a cloud storage or other data storage device. The system sends the results to an ML application, which is used to recommend the number of item scans to be performed by a user at receipt check.
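The trigger-driven flow in this example can be sketched end to end with stubbed model calls. Every function and data shape here is a hypothetical placeholder standing in for the real detection, classification, and matching components; only the ordering of steps follows the text.

```python
# Hypothetical sketch of the cart-detected processing flow: detect item
# crops, classify each crop to an item ID, fuzzy-match to an active
# receipt, and report predicted unpaid items. All names are illustrative.

def detect_items(image):
    # Stub for the item detection model: here each "image" carries its
    # pre-annotated crops so the sketch stays self-contained.
    return image["crops"]

def classify_item(crop):
    # Stub for the classification/verification models.
    return crop["item_id"]

def fuzzy_match(item_ids, active_receipts):
    # Pick the active receipt sharing the most item IDs with the CV list.
    ids = set(item_ids)
    return max(active_receipts,
               key=lambda rid: len(ids & set(active_receipts[rid])))

def process_cart_event(cart_images, active_receipts):
    """Runs on a cart-detected trigger: detect, classify, match, report."""
    crops = [c for img in cart_images for c in detect_items(img)]
    item_ids = [classify_item(c) for c in crops]
    receipt_id = fuzzy_match(item_ids, active_receipts)
    unpaid = sorted(set(item_ids) - set(active_receipts[receipt_id]))
    return {"receipt": receipt_id, "unpaid": unpaid}
```

In a deployment, the returned result would be persisted to cloud storage and forwarded to the ML application that recommends the number of item scans at receipt check, as described above.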
In still another example, the system fetches real-time shrink results from the cloud storage. The tablets or other UI devices display real-time CV results to a user for review.
Alternatively, or in addition to the other embodiments described herein, embodiments include any combination of the following:
At least a portion of the functionality of the various elements in
In some embodiments, the operations illustrated in
In other embodiments, a computer readable medium has instructions recorded thereon which, when executed by a computer device, cause the computer device to cooperate in performing a method of identifying unpaid items via exit CV, the method comprising selecting an image of a selected cart from a plurality of images of the selected cart using a set of anchor points associated with a field of view of an image capture device; identifying a plurality of items associated with the selected cart using the selected image; predicting an item identifier (ID) associated with each item in the plurality of items associated with the selected cart, a set of identified items comprising a plurality of item IDs associated with the plurality of items; selecting an e-receipt associated with the selected cart from a plurality of active e-receipts using a fuzzy matching of a set of paid items included in the selected e-receipt and the set of identified items generated using the selected image in real time, the set of paid items comprising a receipt item ID associated with each item scanned at a POS device during a transaction associated with the selected e-receipt; mapping each receipt item ID in the set of paid items to a predicted item ID in the set of identified items, wherein an unmapped item in the set of identified items is a predicted unpaid item; upon receiving a verification request signal associated with the selected e-receipt from a scan device indicating a user is ready to exit, generating a notification including a verification result, wherein the verification result includes a set of unpaid items, wherein each predicted unpaid item in the set of unpaid items is associated with a predicted item ID in the set of identified items that fails to map to a corresponding receipt item ID in the set of paid items; and sending the notification to a user interface device associated with the scan device, the notification comprising the set of paid items and the set of unpaid items.
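The mapping step in the method above, where each receipt item ID maps to a predicted item ID and any unmapped prediction becomes a predicted unpaid item, can be sketched with a quantity-aware counter. The quantity handling (two identified colas require two paid colas) is an assumption for illustration.

```python
# Sketch of the paid-to-identified mapping: each receipt item ID maps to at
# most one predicted item ID; any prediction left unmapped is reported as a
# predicted unpaid item. Quantity handling is an illustrative assumption.
from collections import Counter

def find_unpaid(identified_ids, paid_ids):
    """Return predicted item IDs with no corresponding receipt item ID."""
    remaining = Counter(paid_ids)
    unpaid = []
    for item_id in identified_ids:
        if remaining[item_id] > 0:
            remaining[item_id] -= 1   # map this prediction to a paid item
        else:
            unpaid.append(item_id)    # unmapped prediction: predicted unpaid
    return unpaid
```

For example, `find_unpaid(["cola", "cola", "chips"], ["cola", "chips"])` maps one cola and the chips, leaving the second cola as a predicted unpaid item.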
While the aspects of the disclosure have been described in terms of various examples with their associated operations, a person skilled in the art would appreciate that a combination of operations from any number of different examples is also within scope of the aspects of the disclosure.
The term “Wi-Fi” as used herein refers, in some embodiments, to a wireless local area network using high frequency radio signals for the transmission of data. The term “BLUETOOTH®” as used herein refers, in some embodiments, to a wireless technology standard for exchanging data over short distances using short wavelength radio transmission. The term “NFC” as used herein refers, in some embodiments, to a short-range high frequency wireless communication technology for the exchange of data over short distances.
Exemplary computer-readable media include flash memory drives, digital versatile discs (DVDs), compact discs (CDs), floppy disks, and tape cassettes. By way of example and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules and the like. Computer storage media are tangible and mutually exclusive to communication media. Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se. Exemplary computer storage media include hard disks, flash drives, and other solid-state memory. In contrast, communication media typically embody computer-readable instructions, data structures, program modules, or the like, in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media.
Although described in connection with an exemplary computing system environment, examples of the disclosure are capable of implementation with numerous other special purpose computing system environments, configurations, or devices.
Examples of well-known computing systems, environments, and/or configurations that can be suitable for use with aspects of the disclosure include, but are not limited to, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, mobile computing and/or communication devices in wearable or accessory form factors (e.g., watches, glasses, headsets, or earphones), network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. Such systems or devices can accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, proximity input (such as by hovering), and/or via voice input.
Examples of the disclosure can be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices in software, firmware, hardware, or a combination thereof. The computer-executable instructions can be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform tasks or implement abstract data types. Aspects of the disclosure can be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions, or the specific components or modules illustrated in the figures and described herein. Other examples of the disclosure can include different computer-executable instructions or components having more functionality or less functionality than illustrated and described herein.
In examples involving a general-purpose computer, aspects of the disclosure transform the general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.
The examples illustrated and described herein as well as examples not specifically described herein but within the scope of aspects of the disclosure constitute exemplary means for identifying unpaid items. For example, the elements illustrated in
Other non-limiting embodiments provide one or more computer storage devices having first computer-executable instructions stored thereon for providing identification of unpaid items. When executed by a computer, the computer performs operations including selecting an image of a selected cart from a plurality of images of the selected cart using a set of anchor points associated with a field of view of an image capture device; identifying a plurality of items associated with the selected cart using the selected image; predicting an item identifier (ID) associated with each item in the plurality of items associated with the selected cart, a set of identified items comprising a plurality of item IDs associated with the plurality of items; selecting an e-receipt associated with the selected cart from a plurality of active e-receipts using a fuzzy matching of a set of paid items included in the selected e-receipt and the set of identified items generated using the selected image in real time, the set of paid items comprising a receipt item ID associated with each item scanned at a POS device during a transaction associated with the selected e-receipt; mapping each receipt item ID in the set of paid items to a predicted item ID in the set of identified items, wherein an unmapped item in the set of identified items is a predicted unpaid item; generating a notification including a verification result, wherein the verification result includes a set of unpaid items, wherein each predicted unpaid item in the set of unpaid items is associated with a predicted item ID in the set of identified items that fails to map to a corresponding receipt item ID in the set of paid items; and sending the notification to a user interface device associated with a scan device, the notification comprising the set of paid items and the set of unpaid items.
The order of execution or performance of the operations in examples of the disclosure illustrated and described herein is not essential, unless otherwise specified. That is, the operations can be performed in any order, unless otherwise specified, and examples of the disclosure can include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing an operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure.
The indefinite articles “a” and “an,” as used in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or” as used in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to “A” only (optionally including elements other than “B”); in another embodiment, to B only (optionally including elements other than “A”); in yet another embodiment, to both “A” and “B” (optionally including other elements); etc.
As used in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used shall only be interpreted as indicating exclusive alternatives (i.e., “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of ‘A’ and ‘B’” (or, equivalently, “at least one of ‘A’ or ‘B’,” or, equivalently “at least one of ‘A’ and/or ‘B’”) can refer, in one embodiment, to at least one, optionally including more than one, “A”, with no “B” present (and optionally including elements other than “B”); in another embodiment, to at least one, optionally including more than one, “B”, with no “A” present (and optionally including elements other than “A”); in yet another embodiment, to at least one, optionally including more than one, “A”, and at least one, optionally including more than one, “B” (and optionally including other elements); etc.
The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items.
Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Ordinal terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term), to distinguish the claim elements.
Having described aspects of the disclosure in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the disclosure as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
| Number | Date | Country |
|---|---|---|
| 63616417 | Dec 2023 | US |