NATURAL-LANGUAGE BASED ORDER PROCESSING

Information

  • Patent Application
  • Publication Number
    20230114834
  • Date Filed
    October 13, 2021
  • Date Published
    April 13, 2023
  • Inventors
    • Chadwick; Emma Marie (Atlanta, GA, US)
    • Bhandary; Trisha (Atlanta, GA, US)
    • Bilagi; Vishal (Atlanta, GA, US)
    • Jones; T'Avvion (Mableton, GA, US)
    • Singh; Harnoor (Atlanta, GA, US)
Abstract
A customer is detected at a drive-thru and a natural language voice dialogue session is established with the customer. The customer provides voice inquiries and order details via speech during the session, the speech is translated to text sentences, and commands are issued to a transaction system through an Application Programming Interface (API) based on the text of the sentences. The transaction system updates a display associated with the drive-thru based on the commands processed for the session and places an order for the customer with a Point-Of-Sale (POS) terminal associated with the drive-thru based on the order details.
Description
BACKGROUND

During 2020, the average wait time for drive-thrus increased to 4 minutes and 50 seconds, a 27% increase compared to 2019. In addition, the fast-food worker attrition rate increased to 5.6% from 2019 to 2020, resulting in an industry crisis.


The COVID-19 pandemic is likely to show that the attrition rate grew exponentially from 2021 to 2022. In fact, during much of 2021, venturing out to a fast-food restaurant for carryout was a hit-or-miss proposition, as many businesses unexpectedly closed or reduced hours because of a lack of available staff. Staff that was available during 2020 was stretched far too thin, resulting in exponential increases in average wait times and in further worker burnout and attrition.


Businesses are struggling to achieve the staffing levels necessary to satisfy customer demand as the public comes out of government-mandated lockdowns and business closures. Most fast-food businesses significantly increased staff salaries and substantially expanded worker benefits in 2021 to decrease worker attrition rates and to reach acceptable staffing levels; yet nearly every fast-food business is still hiring and cannot backfill for workers that leave, let alone fill the new positions necessary to meet present customer demand.


Customers have become frustrated with the level of service that businesses are providing, and there are indications that historical demand levels are now waning. No customer wants to wait in a drive-thru line for half an hour or more only to receive an incorrect order, which, unfortunately, is far too common in the industry right now. Businesses fear that many of these loyal customers will not return. Even before the pandemic, businesses were struggling with customer service and staff training/competency.


SUMMARY

In various embodiments, a system and methods for natural-language based order processing are provided.


According to an embodiment, a method for natural-language based order processing is presented. A presence of a customer is detected at a device and a natural-language voice dialogue is initiated with the customer during a session based on the presence. Voice statements of the customer are translated during the session to order details of an order and the order is placed with a transaction system associated with the device based on the order details.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of a system for natural-language based order processing, according to an example embodiment.



FIG. 2 is a diagram of a method for natural-language based order processing, according to an example embodiment.



FIG. 3 is a diagram of another method for natural-language based order processing, according to an example embodiment.





DETAILED DESCRIPTION


FIG. 1 is a diagram of a system/platform 100 for natural-language based order processing, according to an example embodiment. It is to be noted that the components are shown schematically in greatly simplified form, with only those components relevant to understanding of the embodiments being illustrated.


Furthermore, the various components (that are identified in system/platform 100) are illustrated, and the arrangement of the components is presented for purposes of illustration only. It is to be noted that other arrangements with more or fewer components are possible without departing from the teachings of natural-language based order processing, presented herein and below.


System/platform 100 (hereinafter just “system 100”) provides a processing environment by which natural-language orders are processed through digital signs/Internet-of-Things (IoTs) devices from voice conversations with customers. A communication session with the user is established via speech (audio) captured by a microphone of the digital sign/IoTs device. The speech is translated into a natural-language text dialogue that comprises commands associated with an order workflow of a retail store where the customer is ordering. The commands for the workflow are sent to a transaction system of the retailer for placing and processing the order via Application Programming Interface (API) calls. The transaction system interacts with fulfillment terminals at the store to communicate the order and order details for fulfillment and delivery to the customer. In an embodiment, the dialogue can be translated to text via a third-party API for a voice-to-text and text-to-voice service, such as Microsoft Azure Precept®. In an embodiment, in addition to taking and placing the order on behalf of the customer, system 100 can facilitate or initiate payment of the order with the transaction system through a customer-provided Quick-Response (QR) code presented for scanning by a camera of the digital sign/IoTs device and further interaction with the retailer's transaction system via the API calls.
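The flow described above (speech captured at the device, translated to text, mapped to commands issued against the transaction system via an API) can be sketched as follows. This is a minimal illustration only: the stubbed translator, the `TransactionAPI` class, and the "add ..." command format are assumptions made for the example and are not part of the disclosure.

```python
# Hypothetical sketch of the order-processing pipeline: speech -> text ->
# command -> transaction-system API call. Both external services are stubbed.

def speech_to_text(audio_bytes: bytes) -> str:
    """Stand-in for a third-party voice-to-text service call."""
    return "add one cheeseburger"          # canned result for the sketch

class TransactionAPI:
    """Stand-in for the retailer's transaction-system API."""
    def __init__(self) -> None:
        self.order_items: list[str] = []

    def add_item(self, item: str) -> None:
        self.order_items.append(item)

def process_utterance(audio: bytes, api: TransactionAPI) -> str:
    sentence = speech_to_text(audio)       # 1. translate speech to text
    if sentence.startswith("add "):        # 2. map text to an API command
        item = sentence.removeprefix("add ")
        api.add_item(item)                 # 3. issue the command via the API
    return sentence

api = TransactionAPI()
process_utterance(b"...", api)
print(api.order_items)                     # ['one cheeseburger']
```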


As used herein, the terms “customer,” “consumer,” and/or “user” are used interchangeably and synonymously herein and below. Each refers to an individual that is placing an order with a retail store via a digital sign/IoTs device.


System 100 comprises a cloud/server 110, a plurality of user-operated devices 120, one or more retail servers 130, and a plurality of digital signs/fulfillment terminals/IoTs devices 140.


Cloud/Server 110 comprises at least one processor 111 and a non-transitory computer-readable storage medium 112. Medium 112 comprises executable instructions for a session manager 113, a dialogue-API translator 114, and API 115. The executable instructions when provided to and executed by processor 111 from medium 112 cause processor 111 to perform the processing discussed herein and below for session manager 113, dialogue-API translator 114, and API 115.


Each user-operated device 120 (hereinafter just “device 120”) comprises at least one processor 121 and a non-transitory computer-readable storage medium 122. Medium 122 comprises executable instructions for a mobile application (app) 123. The executable instructions when provided to and executed by processor 121 from medium 122 cause processor 121 to perform the processing discussed herein and below for app 123.


Each retail server 130 comprises at least one processor 131 and a non-transitory computer-readable storage medium 132. Medium 132 comprises executable instructions for a transaction system 133. The executable instructions when provided to and executed by processor 131 from medium 132 cause processor 131 to perform the processing discussed herein and below for transaction system 133.


Each digital sign/fulfillment terminal/IoTs device 140 comprises at least one processor 141, a non-transitory computer-readable storage medium 142, and a variety of peripheral devices 144. Medium 142 comprises executable instructions for a session/order agent 143. The executable instructions when provided to and executed by processor 141 from medium 142 cause processor 141 to perform the processing discussed herein and below for session/order agent 143.


Conventional drive-thru ordering relays a voice order of a customer from a drive-thru terminal to an attendant that operates a Point-Of-Sale (POS) terminal within a store associated with the drive-thru. Typically, the attendant wears a headset with a microphone so that the attendant can move around and assist in filling orders while taking drive-thru orders. The attendant then enters the voice order into the POS terminal, which initiates a workflow that queues the customer's order for fulfillment by kitchen staff of the store. Most POS terminals associated with drive-thru orders are placed within or adjacent to the kitchen at fast-food stores. The quarters are already tight for the staff, and workers frequently bump into one another. Some staff are dedicated to preparing customer orders, some to drive-thru orders, and some to taking in-store orders. As stated above, retailers are struggling with rising employee costs, inflation, and staffing shortages. System 100 remedies the staffing shortages by processing drive-thru orders remotely and automatically using natural-language dialogues with the customers, such that retail stores can focus on order preparation and eliminate order taking and order entry at their stores.


Device 140 can be a plurality of devices present at a retail store, such as a drive-thru display device equipped with at least a microphone and a speaker, and optionally a camera. Device 140 may also be a POS terminal situated within the store and interfaced to transaction system 133. Device 140 may also be a smart digital sign that displays order details for customer orders taken by transaction system 133 and that permits a touch or keyboard interface for staff to indicate when orders were completed for delivery to the customers. Device 140 may also be an IoTs device interfaced to a display, a microphone, a speaker, and, optionally, a camera and/or a motion sensor.


Session/order agent 143 detects when a customer is present in front of a display associated with a drive-thru of a retail store (e.g., a fast-food store). The customer's presence can be detected in a variety of manners. For example, a motion sensor can be triggered, an image captured by a camera of device 140 can be evaluated to identify a car, or a customer may speak into a microphone associated with device 140 and the audio can be detected. In another case, a camera not directly interfaced to device 140 may capture an image that shows a car in front of a display of device 140. The images captured by such a camera are available to session/order agent 143 and/or session manager 113 for purposes of determining that a customer is present at device 140.
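Combining the presence signals just mentioned (motion sensor, camera-based car detection, microphone audio) might look like the following sketch. The field names and the audio threshold are illustrative assumptions, not part of the disclosure.

```python
# Illustrative presence check over the signals described above. Any single
# positive signal is treated as a detected customer in this sketch.

def customer_present(sensor_data: dict) -> bool:
    """Return True when any configured signal indicates a customer."""
    if sensor_data.get("motion_triggered"):        # motion sensor fired
        return True
    if sensor_data.get("vehicle_in_image"):        # e.g., a car detector on camera frames
        return True
    if sensor_data.get("audio_level_db", 0) > 40:  # speech picked up at the microphone
        return True
    return False

print(customer_present({"motion_triggered": False, "audio_level_db": 55}))  # True
print(customer_present({}))                                                 # False
```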


Session/order agent 143 may be configured to send an event notification to session manager 113 upon detection of a customer at device 140. Alternatively, session manager 113, utilizing network-accessible audio and video captured at or in a vicinity of device 140, independently determines that a customer is at device 140 based on evaluation of the audio data and video data.


Once session manager 113 determines a customer is present at device 140, manager 113 establishes a natural-language voice session or dialogue with the customer through the microphone and speakers associated with device 140. Audio received during the dialogue is provided to dialogue-API translator 114, which converts the audio speech of the customer into structured text sentences or text commands. Again, dialogue-API translator 114 may use API 115 and a third-party Artificial Intelligence (AI) speech-to-text translator service, such as Microsoft Precept®.


When session manager 113 establishes a voice session with a customer, session manager 113 uses API 115 to instruct transaction system 133 to begin an order for a customer at a retail store associated with device 140. This causes a menu of items associated with the store to be rendered and presented on a display associated with device 140 for customer viewing (assuming the menu was not already displayed before the customer was detected at device 140).


Session manager 113 may then play a natural-language greeting to the customer over a speaker associated with device 140 that welcomes the customer and asks what the customer would like to order today. The customer responds in speech picked up through a microphone associated with device 140 and relayed to session manager 113, which provides it to dialogue-API translator 114 and receives back text sentences or commands provided by the customer. The customer may issue questions rather than provide order details, such as what kinds of drinks are available to order or whether there are any combo deals today. Session manager 113 uses the text sentences provided by translator 114 to distinguish commands from queries of the customer. Any queries are made to transaction system 133 by manager 113 using API 115, along with the order number (assigned by transaction system 133 when the session was established) and the search terms of the query; results of the query, such as a listing of drinks available or a listing of today's combo meals, are sent to order agent 143 and presented on the display of device 140. A text version of the returned listing may also be provided by transaction system 133 back to manager 113 using API 115. Manager 113 may then read the options to the customer over the speaker of device 140 in addition to the listing being presented on the display of device 140.
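Separating customer queries ("what combo deals are there?") from order commands ("add a medium shake") could be sketched with a simple keyword heuristic like the one below. The keyword list and the two-way classification are assumptions for illustration; a production translator would use a richer natural-language model.

```python
# Minimal sketch of tagging each translated sentence as a customer query
# (to be answered from the menu) or an order command (to modify the order).

QUERY_WORDS = {"what", "which", "do", "are", "is", "any"}

def classify(sentence: str) -> str:
    """Tag a translated sentence as a 'query' or an order 'command'."""
    first = sentence.lower().split()[0]
    return "query" if first in QUERY_WORDS else "command"

print(classify("What combo deals are there today"))  # query
print(classify("Add a medium vanilla shake"))        # command
```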


As the customer uses speech during the dialogue with manager 113, items are ordered. Manager 113 uses API 115 to interact with transaction system 133 to cause each ordered item and a pending total of the order to be presented on the display of device 140 to the customer.


During the dialogue, the customer may remove ordered items and/or add additional order items. Again, interaction between manager 113 and transaction system 133 causes transaction system 133 to interact with order agent 143 and keep information about the order up-to-date on the display of device 140 for customer review and any further customer changes desired by the customer.
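The running order state kept in sync with the display, with items added, removed, and a pending total recomputed, can be sketched as below. The `Order` class, item prices, and field layout are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative running-order state: items added or removed during the
# dialogue, with a pending total for display to the customer.

class Order:
    def __init__(self) -> None:
        self.items: dict[str, float] = {}   # item name -> price

    def add(self, name: str, price: float) -> None:
        self.items[name] = price

    def remove(self, name: str) -> None:
        self.items.pop(name, None)          # no-op if the item is absent

    def total(self) -> float:
        return round(sum(self.items.values()), 2)

order = Order()
order.add("cheeseburger", 4.99)
order.add("fries", 2.49)
order.remove("fries")                       # customer changes their mind
print(order.total())                        # 4.99
```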


Once order details for the order are confirmed by the customer through the dialogue with manager 113, manager 113 uses API 115 to instruct transaction system 133 to place the order with a POS terminal 140 associated with the store where device 140 is located. Transaction system 133 sends a confirmation back to session manager 113. Session manager 113 plays speech over a speaker of device 140 that instructs the customer to pull forward or to pull into a designated parking spot to await delivery of the order.


Payment for the order can occur in a conventional manner by the customer pulling forward to pay an attendant at a drive-thru window with a customer-preferred method of payment.


In an embodiment, device 140 comprises a card reader (contact-based or contactless) that the customer can use with the card reader to make payment for the order.


In an embodiment, the customer may present a QR code on a screen of the customer's mobile device 120 for capture by a camera of device 140. The QR code represents encoded loyalty information of the customer with cloud/server 110 and/or with the retailer associated with the store and server 130. Mobile application (app) 123 may provide access to the QR code for presentation on the display of device 120. The loyalty information comprises a registered payment card linked to the customer. Manager 113 receives an image of the QR code, decodes it to obtain the loyalty account of the customer, and obtains the registered payment card details. Alternatively, manager 113 provides the QR code to transaction system 133 using API 115; transaction system 133 decodes it to obtain the loyalty account of the customer and obtains the registered payment card details. When the loyalty information is associated with cloud/server 110, manager 113 provides the registered payment card details to transaction system 133 using API 115. Transaction system 133 processes the card details to obtain a confirmation of payment from a payment service linked to the card details.
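The decode-and-look-up step above, from QR payload to loyalty account to registered payment card, might look like the following sketch. The JSON payload format, the in-memory loyalty table, and the card token are all assumptions made for illustration.

```python
# Hypothetical QR-to-payment-card lookup: decode the QR payload, resolve
# the loyalty account, and return the registered payment card (as a token).

import json
from typing import Optional

LOYALTY_ACCOUNTS = {
    "cust-001": {"payment_card": "tok_4242"},  # card stored as a token (assumption)
}

def payment_card_from_qr(qr_payload: str) -> Optional[str]:
    """Decode a QR payload and look up the registered payment card."""
    data = json.loads(qr_payload)              # QR assumed to carry JSON
    account = LOYALTY_ACCOUNTS.get(data.get("loyalty_id"))
    return account["payment_card"] if account else None

qr = json.dumps({"loyalty_id": "cust-001"})
print(payment_card_from_qr(qr))                # tok_4242
```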


In an embodiment, depending on the geographic location of any given store, dialogue-API translator 114 may be trained on specific dialects and accents known to be used by people residing in that geographic location. Session manager 113 provides a location identifier as a parameter to translator 114. Translator 114 uses the location identifier to determine a specific location dialect or accent. This can also be used to change the spoken language, such that when the store is located in France, translator 114 uses French to translate the audio of the customer into text.
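Resolving the location identifier to a language and dialect configuration for the translator could be as simple as the lookup below. The store identifiers, locale codes, and field names are illustrative assumptions.

```python
# Illustrative mapping from a store's location identifier to the
# voice-to-text settings (language and regional accent) described above.

LOCALE_BY_STORE = {
    "store-atlanta": {"language": "en-US", "accent": "southern-us"},
    "store-paris":   {"language": "fr-FR", "accent": None},
}

def translator_config(location_id: str) -> dict:
    """Resolve the voice-to-text settings for a given store location."""
    return LOCALE_BY_STORE.get(location_id, {"language": "en-US", "accent": None})

print(translator_config("store-paris")["language"])   # fr-FR
```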


In an embodiment, device 140 comprises an integrated deposit peripheral that permits customer payment by cash and dispenses change to the customer in cash and coins.


In an embodiment, session manager 113, translator 114, and APIs 115 are subsumed into and processed on a specific retailer's server 130.


The above-referenced embodiments and other embodiments are now discussed within FIGS. 2-3.



FIG. 2 is a diagram of a method 200 for natural-language based order processing, according to an example embodiment. The software module(s) that implements the method 200 is referred to as a “voice order session manager.” The voice order session manager is implemented as executable instructions programmed and residing within memory and/or a non-transitory computer-readable (processor-readable) storage medium and executed by one or more processors of one or more devices. The processor(s) of the device that executes the voice order session manager is specifically configured and programmed to process the voice order session manager. The voice order session manager may have access to one or more network connections during its processing. The network connections can be wired, wireless, or a combination of wired and wireless.


In an embodiment, the device that executes the voice order session manager is cloud 110. Cloud 110 comprises a plurality of servers logically cooperating and accessible as a single server 110 (cloud 110).


In an embodiment, the device that executes the voice order session manager is a server 110 that is separate from any given retail server 130.


In an embodiment, the device that executes the voice order session manager is retail server 130.


In an embodiment, the voice order session manager is all or some combination of 113, 114, and/or 115.


At 210, the voice order session manager detects a presence of a customer at a device 140.


In an embodiment, at 211, the voice order session manager receives a notification from an agent 143 of device 140 indicating the presence of the customer at the device 140.


In an embodiment, at 212, the voice order session manager evaluates sensor data captured by or captured in proximity to the device 140 and determines the presence of the customer based on the sensor data.


In an embodiment of 212 and at 213, the voice order session manager identifies the sensor data as one or more of motion data captured by a motion sensor, image data captured by a camera, and audio data captured by a microphone.


At 220, the voice order session manager initiates a natural-language dialogue with the customer during a session based on the presence of the customer at the device 140.


In an embodiment, at 221, the voice order session manager plays an auto-generated voice greeting over a speaker associated with the device 140 to initiate the natural-language dialogue for the session.


In an embodiment of 221 and at 222, the voice order session manager obtains an order number for the order from a transaction system 133 using an API 115.


At 230, the voice order session manager translates voice statements of the customer during the session to order details of an order.


In an embodiment of 222 and 230, at 231, the voice order session manager passes each voice statement to a voice-to-text translation service and receives a text sentence for the corresponding voice statement back from the voice-to-text translation service.


In an embodiment of 231 and at 232, the voice order session manager maps select text in each of the text sentences to a command recognized by the transaction system 133 and instructs the transaction system 133 to process the corresponding command using the API 115.


In an embodiment of 232 and at 233, the voice order session manager receives text results from the transaction system 133 based on the transaction system 133 processing the corresponding command using the API 115.


In an embodiment of 233 and at 234, the voice order session manager generates speech data for the text results and plays the speech data over the speaker of device 140.


At 240, the voice order session manager places the order with the transaction system 133 associated with the device 140 based on the order details.


In an embodiment, at 250, the voice order session manager receives an image of a QR code captured by a camera associated with device 140 off a display of a customer-operated device 120. The voice order session manager decodes the QR code to obtain a registered identifier for the customer and the voice order session manager uses the registered identifier to obtain a registered payment card of the customer. The voice order session manager provides card details for the registered payment card to the transaction system 133 using the API 115 for the transaction system 133 to process and obtain a payment from the customer for the order.


In an embodiment, at 260, the voice order session manager receives an image of a QR code captured by a camera associated with device 140 off a display of a customer-operated device 120. The voice order session manager provides the image to the transaction system 133 for the transaction system 133 to decode the QR code, link decoded information of the QR code to a loyalty account of the customer, obtain a registered payment card of the customer from the loyalty account, and obtain a payment for the order using the payment card details of the registered payment card from a payment service.



FIG. 3 is a diagram of another method 300 for natural-language based order processing, according to an example embodiment. The software module(s) that implements the method 300 is referred to as a “remote voice order manager.” The remote voice order manager is implemented as executable instructions programmed and residing within memory and/or a non-transitory computer-readable (processor-readable) storage medium and executed by one or more processors of a device. The processors that execute the remote voice order manager are specifically configured and programmed for processing the remote voice order manager. The remote voice order manager may have access to one or more network connections during its processing. The network connections can be wired, wireless, or a combination of wired and wireless.


In an embodiment, the device that executes the remote voice order manager is cloud 110. In an embodiment, the device that executes the remote voice order manager is server 110.


In an embodiment, the device that executes the remote voice order manager is retail server 130.


In an embodiment, the remote voice order manager is all of or some combination of 113, 114, 115, and/or method 200 of FIG. 2.


The remote voice order manager presents another and, in some ways, enhanced processing perspective from that which was discussed above for cloud 110 and method 200.


At 310, the remote voice order manager receives voice statements as speech communicated by a customer through a microphone of a device 140 associated with a store to take an order of the customer with the store during a session.


At 320, the remote voice order manager translates the voice statements into text during the session.


In an embodiment, at 321, the remote voice order manager provides a location identifier associated with a geographical location of the device 140 and the voice statements to a voice-to-text service and receives the text as output from the voice-to-text service. The location identifier configures the voice-to-text service for a dialect, or an accent used in the geographic location when translating the voice statements to text.


At 330, the remote voice order manager maps select text to commands associated with a transaction system 133 of the store.


In an embodiment, at 331, the remote voice order manager identifies first commands of the commands as inquiries posited by the customer during the session and second commands of the commands as items to order or instructions to customize a given item ordered.


At 340, the remote voice order manager sends the commands through an API 115 for processing by the transaction system 133.


In an embodiment of 331 and 340, at 341, the remote voice order manager causes the transaction system 133 to update a display associated with the device 140 with results based on sending the first commands to the transaction system 133 using the API 115.


In an embodiment, at 342, the remote voice order manager receives text feedback from the transaction system 133 processing the commands. The remote voice order manager translates the text feedback to speech feedback and plays the speech feedback over a speaker associated with the device 140 during the session.


At 350, the remote voice order manager assembles order details for the order from the session.


In an embodiment, at 351, the remote voice order manager confirms the order details with the customer through speech during the session.


At 360, the remote voice order manager places the order with the transaction system 133 using the API 115.


In an embodiment, at 361, the remote voice order manager updates a loyalty account of the customer based on the order details and the order placed with the transaction system 133.


In an embodiment, at 370, the remote voice order manager captures a code presented on a display of a customer-operated device 120. The remote voice order manager links the code to a registered payment method of the customer and provides the registered payment method to the transaction system 133 using the API 115 for the transaction system 133 to process a payment for the order of the customer.


It should be appreciated that where software is described in a particular form (such as a component or module) this is merely to aid understanding and is not intended to limit how software that implements those functions may be architected or structured. For example, modules are illustrated as separate modules, but may be implemented as homogenous code, as individual components, some, but not all of these modules may be combined, or the functions may be implemented in software structured in any other convenient manner.


Furthermore, although the software modules are illustrated as executing on one piece of hardware, the software may be distributed over multiple processors or in any other convenient manner.


The above description is illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of embodiments should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.


In the foregoing description of the embodiments, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Description of the Embodiments, with each claim standing on its own as a separate exemplary embodiment.

Claims
  • 1. A method, comprising: detecting a presence of a customer at a device;initiating a natural-language voice dialogue with the customer during a session based on the presence;translating voice statements of the customer during the session to order details of an order; andplacing the order with a transaction system associated with the device based on the order details.
  • 2. The method of claim 1, wherein detecting further includes receiving a notification from an agent of the device indicating the presence of the customer at the device.
  • 3. The method of claim 1, wherein detecting further includes evaluating sensor data captured by or in proximity to the device and determining the presence based on the sensor data.
  • 4. The method of claim 1, wherein initiating further includes playing an auto-generated voice greeting over a speaker associated with the device to initiate the natural-language dialogue for the session.
  • 5. The method of claim 4, wherein playing further includes obtaining an order number for the order from the transaction system using an Application Programming Interface (API).
  • 6. The method of claim 5, wherein translating further includes passing each voice statement to a voice-to-text translation service and receiving a text sentence for the corresponding voice statement.
  • 7. The method of claim 6, wherein passing further includes mapping select text in each of the text sentences to a command recognized by the transaction system and instructing the transaction system to process the corresponding command using the API.
  • 8. The method of claim 7, wherein mapping further includes receiving text results from the transaction system based on the transaction system processing the corresponding command.
  • 9. The method of claim 8, wherein receiving further includes generating speech data for the text results and playing the speech data over the speaker associated with the device.
  • 10. The method of claim 1 further comprising: receiving an image of a Quick Response (QR) code captured by a camera associated with the device off a display of a customer-operated device;decoding the QR code to obtain a registered identifier for the customer;using the registered identifier to obtain a registered payment card of the customer; andproviding card details for the registered payment card to the transaction system using the API for obtaining a payment from the customer for the order.
  • 11. The method of claim 1 further comprising: receiving an image of a Quick Response (QR) code captured by a camera associated with the device off a display of a customer-operated device; andproviding the image to the transaction system for the transaction system to decode the QR code, link decoded information of the QR code to a loyalty account of the customer, obtain a registered payment card of the customer from the loyalty account, and obtain a payment for the order using payment card details of the registered payment card.
  • 12. A method, comprising: receiving voice statements audibly communicated by a customer through a microphone of a device associated with a store to take an order of the customer with the store during a session;translating the voice statements into text during the session;mapping select text to commands associated with a transaction system of the store;sending the commands through an Application Programming Interface (API) for processing by the transaction system;assembling order details from the session; andplacing the order with the transaction system using the API.
  • 13. The method of claim 12 further comprising: capturing a code presented on a display of a customer-operated device;linking the code to a registered payment method of the customer; andproviding the registered payment method to the transaction system using the API for the transaction system to process a payment for the order of the customer.
  • 14. The method of claim 13, wherein translating further includes providing a location identifier associated with a geographical location of the device and the voice statements to a voice-to-text service and receiving the text as output from the voice-to-text service, wherein the location identifier configures the voice-to-text service for a dialect, or an accent used in the geographical location when translating the voice statements into the text.
  • 15. The method of claim 12, wherein mapping further includes identifying first commands of the commands as inquiries posited by the customer during the session and second commands of the commands as items to order or instructions to customize a given item ordered.
  • 16. The method of claim 12, wherein sending further includes receiving text feedback from the transaction system responsive to the transaction system processing the commands, translating the text feedback to speech feedback, and playing the speech feedback over a speaker associated with the device.
  • 17. The method of claim 12, wherein assembling further includes confirming the order details for the order with the customer through speech during the session.
  • 18. The method of claim 12, wherein placing further includes updating a loyalty account of the customer based on the order details and the order placed with the transaction system.
  • 19. A system, comprising: a cloud processing environment comprising at least one server;the at least one server comprising a processor and a non-transitory computer-readable storage medium;the non-transitory computer-readable storage medium comprises executable instructions; andthe executable instructions when executed on the processor from the non-transitory computer-readable storage medium cause the processor to perform operations comprising: engaging a customer in a natural-language dialogue at a drive-thru device of a store to take an order of the customer with the store;translating voice statements of the customer into one or more of an inquiry, an ordered item, and a customization of a given ordered item;providing commands to a transaction system associated with one or more of the inquiry, the ordered item, and the customization of the given ordered item using an Application Programming Interface (API);translating feedback from the transaction system into automated speech played to the customer over a speaker of the drive-thru device during the session;confirming order details for the order during the session; andplacing the order with the transaction system with the order details using the API.
  • 20. The system of claim 19, wherein the executable instructions further include additional executable instructions that further cause the processor to perform additional operations comprising: capturing a Quick Response (QR) code presented on a display of a customer-operated device at the drive-thru device;associating the QR code with a payment method registered to the customer; andproviding the payment method to the transaction system using the API as a payment made by the customer for the order.