Augmented reality (AR) may refer to a live view of a physical, real-world environment that is modified by a computing device to enhance an individual's current perception of reality. In augmented reality, elements of the real-world environment are “augmented” by computer-generated or extracted input, such as sound, video, graphics, haptics, and/or global positioning system (GPS) data, among other examples. Augmented reality may be used to enhance and/or enrich the individual's experience with the real-world environment.
Some implementations described herein relate to a system for providing real time visual feedback for augmented reality (AR) map routing and item selection. The system may include one or more memories and one or more processors coupled to the one or more memories. The one or more processors may be configured to receive an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location. The one or more processors may be configured to determine a route through the entity location to an item location for each item included in the one or more items. The one or more processors may be configured to transmit, to a device, routing AR information associated with the route to cause an AR view of the route to be displayed by the device. The one or more processors may be configured to receive, from the device, an image captured by the device that is associated with a first item of the one or more items. The one or more processors may be configured to analyze, using a computer vision technique or another technique, the image to determine at least one of: whether an item depicted in the image is the first item, or one or more recommended items, associated with the first item, depicted in the image. The one or more processors may be configured to transmit, to the device, item AR information associated with the image to cause AR feedback information to be displayed by the device in connection with the image, wherein the AR feedback information identifies at least one of whether the item depicted in the image is the first item, or the one or more recommended items.
Some implementations described herein relate to a method for providing real time visual feedback for AR map routing and item selection. The method may include receiving, by a device, an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location, and wherein the one or more items are associated with a user device. The method may include determining, by the device, a route through the entity location to an item location for at least one item included in the one or more items. The method may include providing, by the device and to a client device, routing AR information associated with the route to cause an AR view of the route to be displayed by the client device. The method may include receiving, by the device and from the client device, visual media captured by the device that is associated with a first item of the one or more items. The method may include analyzing, by the device, the visual media to determine at least one of: whether an item depicted by the visual media is the first item, or one or more recommended items, associated with the first item, depicted by the visual media. The method may include providing, by the device and to the client device, presentation information to cause AR feedback information to be displayed by the client device in connection with the visual media, wherein the AR feedback information identifies at least one of whether the item depicted by the visual media is the first item, or the one or more recommended items in the visual media.
Some implementations described herein relate to a non-transitory computer-readable medium that stores a set of instructions for a client device. The set of instructions, when executed by one or more processors of the client device, may cause the client device to receive an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location. The set of instructions, when executed by one or more processors of the client device, may cause the client device to obtain routing AR information associated with a route through the entity location to an item location for each item included in the one or more items. The set of instructions, when executed by one or more processors of the client device, may cause the client device to provide, based on the routing AR information, an AR view of the route for display by the client device. The set of instructions, when executed by one or more processors of the client device, may cause the client device to obtain item AR information associated with visual media captured by the client device that identifies at least one of whether an item depicted by the visual media is included in the one or more items, or one or more recommended items depicted by the visual media. The set of instructions, when executed by one or more processors of the client device, may cause the client device to provide, based on the item AR information, AR feedback information for display in connection with the visual media.
The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
In some cases, a user (e.g., an employee) may perform a task that includes obtaining one or more items from various locations at which the one or more items are stored. For example, the task may be associated with a list of requested items. The user may search an entity location to attempt to retrieve the items included in the list. The user may use a device (e.g., a client device) to assist in performing the task. For example, the client device may display the list of items to be retrieved.
However, it may be difficult to determine an efficient route through the entity location to retrieve all of the items included in the list. For example, different entity locations may be associated with different item locations, different checkout locations, different entry locations, different aisle layouts or configurations, and/or different exit locations, among other examples. As a result, the user may follow a route through the entity location that consumes significant time associated with retrieving the items included in the list.
Additionally, certain items may be associated with characteristics or attributes that are difficult to define and/or identify. For example, an item may be associated with a characteristic or attribute that is subjective, and an interpretation of the characteristic or attribute may vary from person to person. As a specific example, a fruit may be associated with a level of ripeness. Some users may prefer the fruit at a first level of ripeness, whereas other users may prefer the same fruit at a different level of ripeness. Additionally, two users may consider the same fruit to have different levels of ripeness. Therefore, it may be difficult to provide accurate instructions to the user performing the task to select an item having the correct characteristic or attribute when the characteristic or attribute is subjective. As a result, the user may select an item from the list having a different characteristic or attribute than desired or intended. This may result in the task being re-requested (e.g., via one or more devices) and/or re-performed, consuming time and/or resources (e.g., computing resources, network resources, and/or power resources) associated with a client device used by the user to perform the task.
Further, in some cases, an item included in the list may be unavailable or out of stock. Therefore, the user may be required to select a replacement item. However, it may be difficult to identify and/or select suitable replacement items for a given item because different users may prefer different replacement items for the same item. As a result, the user may select a replacement item that is not acceptable. This may result in the task being re-requested (e.g., via one or more devices) and/or re-performed, consuming time and/or resources (e.g., computing resources, network resources, and/or power resources) associated with a client device used by the user to perform the task.
Some implementations and techniques described herein enable real time visual feedback for augmented reality (AR) map routing and item selection. For example, a server device may receive an indication of one or more items associated with a task, wherein the one or more items are associated with an entity location. The server device may determine a route through the entity location to an item location for each item included in the one or more items. The server device may transmit, to a client device, routing AR information associated with the route to cause an AR view of the route to be displayed by the client device.
In some implementations, the server device may receive, from the client device, visual media (e.g., one or more images, a video, or video streaming data) captured by the client device that is associated with a first item of the one or more items. The server device may analyze (e.g., using a computer vision technique and/or another technique) the visual media to determine whether an item depicted by the visual media is the first item, and/or one or more recommended items, associated with the first item, depicted by the visual media, among other examples. The server device may provide, to the client device, presentation information to cause AR feedback information to be displayed by the client device in connection with the visual media. In some implementations, the AR feedback information may identify whether the item depicted by the visual media is the first item, and/or the one or more recommended items in the visual media, among other examples.
In this way, the user performing the task may be enabled to quickly and easily be routed through an entity location (e.g., via the AR view of the route) to the one or more items. Additionally, the server device (e.g., a computer vision model or another machine learning model) may be trained to recognize and/or identify attributes or characteristics of items that may otherwise be subjective based on the judgment of a human. The AR feedback information may enable the user to identify a correct item having a certain desired attribute or characteristic, such as a certain level of ripeness of a fruit, among other examples. As a result, the server device may conserve time and/or resources (e.g., computing resources, network resources, and/or power resources) that would otherwise be consumed as a result of the task being re-requested (e.g., via one or more devices) and/or re-performed, by providing AR feedback that enables a user to quickly and easily identify the correct item and/or a suitable replacement item, among other examples. Additionally, the server device may conserve time associated with performing the task by efficiently routing the user through an entity location to locate the one or more items associated with the task.
Although some examples may be described herein in connection with AR, extended reality (XR), mixed reality (MR), and/or virtual reality (VR) techniques may be used in a similar manner as described herein. For example, AR generally refers to interactive technologies in which objects in a real-world environment are augmented using computer-generated virtual content that may be overlaid on the real-world environment. MR, sometimes referred to as “hybrid reality,” similarly merges real and virtual worlds to produce a visual environment in which real physical objects and virtual digital objects can co-exist. However, in addition to overlaying virtual objects on the real-world environment, mixed reality applications often anchor the virtual objects to the real world and allow users to interact with the virtual objects. VR refers to fully immersive computer-generated experiences that take place in a simulated environment, often incorporating auditory, visual, haptic, and/or other feedback mechanisms. Although some examples may be described only in connection with AR techniques, XR, MR, VR, and/or a combination of the techniques may be used in connection with operations described herein.
As shown in
As shown by reference number 105, the user device may transmit, and the server device may receive, an indication of one or more items associated with a task. The one or more items may be items to be purchased for, or by, the first user. For example, the first user may select and/or purchase the one or more items via the user device. For example, the first user may use an application executing on the user device or a web page, among other examples, to select and/or purchase the one or more items. The task may be associated with acquiring the one or more items. In other words, the task may be associated with completing an order for the one or more items. For example, the server device may be associated with a third-party service that acquires and/or delivers the one or more items to the first user. In some implementations, the user device may be associated with an account and/or the first user. For example, the account may be associated with the third-party service that acquires and/or delivers the one or more items to the first user. The first user may initiate the task by signing into the account (e.g., via the user device), selecting the one or more items, and purchasing or requesting the one or more items.
In some implementations, the one or more items may be associated with an entity. For example, the first user may purchase the one or more items from an entity (e.g., a store, a vendor, and/or a merchant). Additionally, or alternatively, the one or more items may be associated with multiple entities (e.g., the one or more items may be purchased from multiple entities). For example, the one or more items may be associated with an entity location (e.g., a given location associated with an entity), such as a physical store (e.g., a brick-and-mortar building), a marketplace, or other location.
Based on receiving the indication of the one or more items, the server device may configure the task to be performed by a second user (or the first user in some cases). In some implementations, the server device may determine one or more replacement items. For example, for a first item from the one or more items, the server device may determine one or more replacement items. A replacement item may refer to an item that may be acquired or purchased instead of another item. For example, the server device may determine one or more replacement items that would be acceptable to the first user. A replacement item may also be referred to as a recommended item or an alternative item herein (e.g., the one or more recommended items may be replacement items or alternative items for the first item). In some implementations, a recommended item may not be a replacement for an item, but rather may include a recommended attribute or characteristic associated with a given item.
In some implementations, the server device may determine, for the first item, the one or more recommended items based on user information associated with an account that is associated with the task, exchange history (e.g., transaction history) information associated with the account, and/or item information associated with the first item, among other examples. For example, the user information associated with the account may include information associated with the first user, such as an age, a gender, residence information (e.g., a town, a county, a city, state, and/or a country in which the first user lives), race, and/or similar information associated with the first user. For example, the information associated with the first user may provide insight as to which replacement items and/or which attributes of a given item may be acceptable for the first user. For example, a female user may typically prefer a first attribute of an item or a first replacement item for the item, whereas a male user may typically prefer a second attribute of the item or a second replacement item for the item. As another example, a younger user (e.g., with an age under 30 years old) may typically prefer a first attribute of an item or a first replacement item for the item, whereas an older user (e.g., with an age over 30 years old) may typically prefer a second attribute of the item or a second replacement item for the item.
Additionally, or alternatively, the user information may include economic information associated with the first user, such as a credit history, an employment history, and/or an income history, among other examples. The economic information may be used by the server device to determine the one or more recommended items and/or one or more attributes of the recommended item. For example, a user with a higher income may prefer a first brand associated with an item, whereas a user with a lower income may prefer a second brand associated with the item. Additionally, or alternatively, the user information may include a user profile. The user profile may include information input by the first user. For example, the user may provide answers to a questionnaire. The answers to the questionnaire may provide an insight as to which replacement items and/or which attributes of a given item may be acceptable for the first user.
The exchange history (e.g., transaction history) information associated with the account may indicate previous transactions completed by the first user (e.g., via the service associated with the server device). For example, the exchange history may indicate previous transactions associated with the first user that include a given item. The server device may determine one or more attributes associated with the given item that are preferred by the first user (e.g., based on attributes of previously purchased items by the first user). As another example, the server device may determine one or more replacement items that may be acceptable for the first user based on similar items that have been previously purchased by the first user. Additionally, or alternatively, the exchange history (e.g., transaction history) information associated with the account may indicate previous transactions completed by the first user at other entities and/or via other services. For example, the first user may provide approval for a financial institution to provide the exchange history to the server device. The exchange history may indicate entities, merchants, vendors, and/or other locations at which the first user typically or frequently shops. The server device may determine that the first user may prefer a well-known brand (e.g., a famous brand, a popular brand or a “name brand”) or a more expensive brand associated with a given item (e.g., for the item or as a replacement item) if the exchange history indicates that the first user shops at other locations associated with the brand or shops at locations associated with more expensive items (e.g., “high end” locations). 
As another example, the server device may determine that the first user may prefer a less expensive brand (e.g., a discount brand or a “store brand”) associated with a given item (e.g., for the item or as a replacement item) if the exchange history indicates that the first user shops at other locations associated with the brand or shops at locations associated with less expensive items (e.g., “discount” locations).
Additionally, or alternatively, the server device may determine, for the first item, the one or more recommended items based on information associated with the first item, such as a category, a type, a cost, a quantity, and/or a brand, among other examples. For example, the server device may determine a replacement item for the first item that is associated with a similar, or the same, category, type, cost, quantity, and/or brand, among other examples, as the first item.
In some implementations, the server device may determine, for a recommended item, a quantity of the recommended item, one or more attributes of the recommended item, and/or a brand associated with the recommended item. For example, the one or more recommended attributes may include a size of an item (e.g., two pounds of beef, and/or 16 ounces of water, among other examples), a quantity of pieces associated with an item (e.g., two cloves of garlic, and/or four bananas, among other examples), a ripeness level associated with an item (e.g., unripe or ripened), a texture associated with an item, and/or a color associated with an item, among other examples. For example, the one or more recommended attributes may include objective attributes, such as size and/or quantity of pieces, among other examples, and subjective attributes, such as ripeness level and/or texture, among other examples.
For example, the server device may determine a recommended item associated with the first item. For example, a first item may be associated with a size of 24 ounces. The server device may determine a recommended replacement item for the first item (e.g., based on one or more considerations described in more detail elsewhere herein). The recommended replacement item may be associated with a size of 8 ounces. Therefore, the server device may determine that three recommended replacement items (e.g., three pieces) should be recommended to replace the first item (e.g., to ensure that 24 ounces total are selected to replace the first item).
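The quantity determination described above can be sketched as follows, using the example sizes from the preceding paragraph (the function name and unit handling are illustrative assumptions, not a definitive implementation):

```python
import math

def replacement_quantity(original_size_oz: float, replacement_size_oz: float) -> int:
    """Number of replacement pieces needed to cover at least the original size."""
    return math.ceil(original_size_oz / replacement_size_oz)

# A 24-ounce first item replaced by an 8-ounce item requires three pieces.
print(replacement_quantity(24, 8))  # 3
```

Rounding up (rather than down) reflects the goal stated above: ensuring that at least the original total size is selected to replace the first item.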
In some implementations, the server device may determine one or more recommended attributes associated with a given item based on the user information associated with an account that is associated with the task, the exchange history information associated with the account, and/or the item information associated with the given item, among other examples (e.g., in a similar manner as described above). Additionally, or alternatively, the server device may determine the one or more recommended attributes based on an input received from the user device. For example, the first user may request an attribute (e.g., a size, quantity, ripeness level, texture, and/or color, among other examples) when selecting a given item. The indication of the one or more items received by the server device may include an indication of one or more recommended (e.g., requested) attributes associated with at least one item of the one or more items.
In some implementations, the server device may use a machine learning model to predict a likelihood that a given attribute or a given replacement item, for an item requested by the first user, will be acceptable to the first user. For example, the server device may train the machine learning model using the user information associated with an account that is associated with the task, the exchange history information associated with the account, and/or the item information associated with the given item, among other examples. For example, an input to the machine learning model may include information associated with a given attribute or a given replacement item, the user information associated with an account that is associated with the task, the exchange history information associated with the account, and/or the item information associated with the given item, among other examples. An output of the machine learning model may include a likelihood that the given attribute or the given replacement item will be acceptable to the first user. For example, the output may indicate "yes" (e.g., indicating that the given attribute or the given replacement item will be acceptable to the first user) or "no" (e.g., indicating that the given attribute or the given replacement item will not be acceptable to the first user). As another example, the output may be a score (e.g., from 0 to 100, where a score closer to 100 indicates a higher likelihood that the given attribute or the given replacement item will be acceptable to the first user) or a probability value (e.g., a percentage value, where a percentage closer to 100% indicates a higher likelihood that the given attribute or the given replacement item will be acceptable to the first user), among other examples. A probability value may indicate a likelihood that a user (e.g., the first user) would accept a recommended item as a replacement for a given item.
The server device may determine the one or more recommended items and/or the one or more recommended attributes based on the output of the machine learning model.
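One way the server device might apply the model output described above can be sketched as follows, assuming the model's predictions are available as per-candidate probability values (the item names, scores, threshold, and function names are hypothetical):

```python
def select_recommended_items(candidates, predict_acceptance, threshold=0.7):
    """Keep candidate replacement items whose predicted acceptance probability
    meets the threshold, ordered from most to least likely to be accepted."""
    scored = [(item, predict_acceptance(item)) for item in candidates]
    accepted = [(item, p) for item, p in scored if p >= threshold]
    return [item for item, _ in sorted(accepted, key=lambda x: x[1], reverse=True)]

# Hypothetical model output keyed by candidate replacement item.
scores = {"brand_a_soda": 0.92, "brand_b_soda": 0.41, "store_brand_soda": 0.78}
print(select_recommended_items(scores, scores.get))
# ['brand_a_soda', 'store_brand_soda']
```

Thresholding and ranking the candidates is one possible selection strategy; the server device could instead recommend only the single highest-scoring candidate, among other examples.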
Additionally, the server device may determine a route through an entity location to an item location for each item included in the one or more items (e.g., that are associated with the entity). For example, as shown by reference number 110, the server device may obtain layout information associated with the entity location. The layout information may indicate a layout associated with the entity location. For example, the layout information may indicate locations of aisles, displays, entrances, exits, checkout locations, and/or department locations, among other examples. An example entity layout is depicted in
As shown by reference number 115, the server device may identify and/or determine item locations of the one or more items based on the layout information. In some implementations, the layout information may indicate item locations of various items associated with (e.g., offered for sale in) the entity location. For example, the layout information may indicate, within the layout associated with the entity location, a location of various items. Additionally, or alternatively, the server device may determine item locations associated with the one or more items associated with the entity location. For example, the server device may identify a category associated with an item included in the one or more items (e.g., an apple may be associated with a category of fruit or produce, water may be associated with a category of beverages, among other examples). The server device may identify an aisle, department, and/or display, among other examples, associated with the category based on the layout information. The server device may determine that the item is located in the aisle, the department, and/or the display.
In some implementations, the server device may determine a precise location of an item (e.g., a shelf location or a bay location) based on an identifier associated with the item. For example, the item may be associated with a stock keeping unit (SKU) or another identifier. The server device may store information indicating locations associated with various SKUs. For example, a given SKU may be associated with an aisle and a bay number (e.g., indicating a precise location of the item within the aisle). A bay may refer to a set of shelves or a length of shelving within an aisle. The server device may perform a lookup operation to identify item locations (e.g., an aisle and a bay) associated with each of the one or more items.
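The lookup operation described above can be sketched as follows (the SKU identifiers, aisle numbers, and bay numbers shown are hypothetical placeholders for the location information the server device may store):

```python
# Hypothetical SKU-to-location table stored by the server device.
sku_locations = {
    "SKU-1001": {"aisle": 4, "bay": 2},
    "SKU-2002": {"aisle": 7, "bay": 5},
}

def lookup_item_locations(skus):
    """Resolve each SKU to its aisle and bay, skipping unknown SKUs."""
    return {sku: sku_locations[sku] for sku in skus if sku in sku_locations}

print(lookup_item_locations(["SKU-1001", "SKU-9999"]))
# {'SKU-1001': {'aisle': 4, 'bay': 2}}
```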
As shown by reference number 120, the server device may determine the route associated with obtaining the one or more items based on determining the item locations for the one or more items. For example, the server device may determine one or more waypoints, associated with the route, corresponding to the item location for each item included in the one or more items. A waypoint may refer to a stopping point along the route. For example, the server device may create a waypoint for each item location associated with the one or more items.
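The waypoint creation and ordering described above can be sketched as follows, using a nearest-neighbor heuristic as one possible ordering strategy (the grid coordinates, Manhattan-distance metric, and waypoint names are illustrative assumptions):

```python
def order_waypoints(entrance, waypoints):
    """Order waypoints with a nearest-neighbor heuristic: repeatedly visit
    the closest remaining waypoint, starting from the entrance."""
    remaining = dict(waypoints)  # name -> (x, y) grid position in the layout
    route, current = [], entrance
    while remaining:
        name = min(remaining, key=lambda n: abs(remaining[n][0] - current[0])
                                            + abs(remaining[n][1] - current[1]))
        current = remaining.pop(name)
        route.append(name)
    return route

stops = {"produce": (1, 5), "dairy": (8, 2), "bakery": (2, 6)}
print(order_waypoints((0, 0), stops))  # ['produce', 'bakery', 'dairy']
```

A nearest-neighbor ordering is a simple greedy approach; the server device could instead use an exhaustive or other optimization technique to order the waypoints.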
The server device may determine an order of the one or more waypoints based on the layout information. For example, the server device may determine the route by ordering the waypoints associated with the one or more items in an efficient manner. For example, as shown in
In this way, the server device may efficiently route a shopper or user through the store to obtain the one or more items requested by the first user. For example, a shopper (e.g., the first user or a second user) may conserve time associated with locating the one or more items. Additionally, the shopper may conserve time that would have otherwise been used following a suboptimal route through the entity location. Further, the shopper may conserve time and resources (e.g., network resources, processing resources, and/or power resources) that would have otherwise been used searching for one or more item locations (e.g., using a device, such as the user device or the client device (not shown in
As shown in
As shown by reference number 130, the server device may transmit, and the client device may receive, routing AR information associated with the route to cause the AR view of the route to be displayed by the client device. For example, the client device may obtain routing AR information associated with a route through the entity location to an item location for each item included in the one or more items. In some implementations, the client device may obtain the routing AR information by receiving the routing AR information (e.g., from the server device). In some other implementations, the client device may obtain the routing AR information by generating the routing AR information (e.g., in a similar manner as described elsewhere herein).
As shown by reference number 135, the client device may determine that the client device is located proximate to (e.g., inside) the entity location. The client device may determine that the AR view of the route is to be displayed by the client device based on determining that the client device is located proximate to (e.g., inside) the entity location. The client device may obtain one or more images or a video associated with the entity location. The client device may process or analyze the one or more images or the video associated with the entity location using a computer vision technique, such as an object detection technique, to identify reference points within the image(s) or video. The client device may insert or overlay AR content in the image(s) or video based on the identified reference points and/or the routing AR information. The AR content may identify the route to be taken by a user (e.g., the first user or a second user) that is associated with the client device. Alternatively, the client device may transmit, to the server device, the one or more images or the video associated with the entity location. The server device may process or analyze the one or more images or the video associated with the entity location using a computer vision technique, such as an object detection technique, to identify reference points within the image(s) or video. The server device may insert or overlay the AR content into the image(s) or video and may transmit, to the client device, the image(s) or video with the AR content inserted or overlaid.
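The reference-point-based placement described above can be sketched as follows, assuming a simplified per-axis fit between two detected reference points (a real implementation would typically estimate a full camera pose or homography; all coordinates and function names here are hypothetical):

```python
def fit_axis(w1, p1, w2, p2):
    """Return a function mapping a world coordinate to a pixel coordinate,
    fitted from two (world, pixel) pairs along one axis."""
    scale = (p2 - p1) / (w2 - w1)
    return lambda w: p1 + scale * (w - w1)

def project_route(ref_a, ref_b, route):
    """ref_a/ref_b: ((world_x, world_y), (pixel_x, pixel_y)) reference points.
    Map each route point from layout coordinates into image coordinates."""
    fx = fit_axis(ref_a[0][0], ref_a[1][0], ref_b[0][0], ref_b[1][0])
    fy = fit_axis(ref_a[0][1], ref_a[1][1], ref_b[0][1], ref_b[1][1])
    return [(fx(x), fy(y)) for x, y in route]

# Two hypothetical reference points detected in the captured image.
ref_a = ((0.0, 0.0), (100.0, 400.0))
ref_b = ((10.0, 10.0), (600.0, 100.0))
print(project_route(ref_a, ref_b, [(5.0, 5.0)]))  # [(350.0, 250.0)]
```

The projected pixel positions indicate where AR content (e.g., route arrows) could be overlaid in the image(s) or video.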
As shown by reference number 140, the client device may provide, based on the routing AR information, an AR view of the route for display by the client device. For example, the client device may display the AR view via a user interface of the client device. As shown in
As shown in
For example, as shown by reference number 145, the client device may capture visual media (e.g., using a camera or another device associated with the client device). The visual media may include one or more images, a video, and/or live stream media (e.g., a continual video stream from the client device), among other examples. In some implementations, as shown by reference number 150, the client device may transmit, and the server device may receive, the visual media captured by the client device. In some implementations, the visual media may be associated with a first item of the one or more items that are associated with the task. For example, the client device may indicate that the first item is associated with the visual media. Alternatively, the server device may determine that the first item is associated with the visual media based on a location of the client device in connection with the route (e.g., if the client device is located near a waypoint of the route that is associated with a given item when the client device transmits the visual media, then the server device may determine that the visual media is associated with the given item). In other words, the visual media may be associated with an expected location associated with the first item.
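The proximity-based association described above can be sketched as follows (the waypoint coordinates, distance threshold, and function name are illustrative assumptions):

```python
import math

def associate_media_with_item(device_pos, item_waypoints, max_distance=3.0):
    """Attribute captured visual media to the item whose waypoint is nearest
    the device, provided the device is within max_distance of that waypoint."""
    nearest, best = None, float("inf")
    for item, pos in item_waypoints.items():
        d = math.dist(device_pos, pos)
        if d < best:
            nearest, best = item, d
    return nearest if best <= max_distance else None

waypoints = {"bananas": (2.0, 5.0), "milk": (9.0, 1.0)}
print(associate_media_with_item((2.5, 4.0), waypoints))  # bananas
```

Returning `None` when no waypoint is sufficiently close allows the server device to fall back to an explicit indication from the client device.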
As shown by reference number 155, the server device may analyze and/or process the visual media. In some other implementations, the client device may analyze and/or process the visual media in a similar manner as described herein (e.g., rather than the server device). The server device may analyze the visual media using a computer vision technique or another technique, such as an object detection technique. The server device may analyze the visual media to determine whether an item depicted in the visual media is the first item (e.g., is the item associated with the visual media). Additionally, or alternatively, the server device may analyze the visual media to determine one or more recommended items, associated with the first item, that are depicted in the visual media.
For example, the server device may process and/or analyze the visual media to identify one or more features of an item depicted in the visual media. The server device may compare the features of the item depicted in the visual media to expected features associated with the first item to determine whether the item depicted in the visual media is the first item. Additionally, or alternatively, the server device may compare the features of the item depicted in the visual media to one or more recommended attributes (e.g., as determined by the server device and/or the client device as described in more detail elsewhere herein) associated with the first item to determine whether the item depicted in the visual media is the first item. For example, the visual media may include multiple items that may be the first item (e.g., the first item may be a banana and the visual media may include multiple bananas). The server device may compare the features of the items depicted in the visual media to one or more recommended attributes to identify one or more suitable or acceptable items from the multiple items depicted in the visual media. For example, the first item may be a banana, the one or more recommended attributes may include a ripeness of the banana, and the visual media may include multiple bananas. The server device may analyze the images of the multiple bananas to identify one or more bananas having a ripeness level as indicated by the one or more recommended attributes (e.g., by analyzing a color of the bananas or other features of the bananas). For example, an unripe banana may have an approximately green color, a ripe banana may have an approximately yellow color, and an overripe banana may have an approximately brown color. The server device may identify one or more suitable bananas depicted in the visual media based on analyzing the colors of the bananas.
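The color-based ripeness analysis described above can be sketched as a rough heuristic over an item's average RGB color. The specific thresholds below are illustrative assumptions, not calibrated values, and a real implementation would more likely use a trained computer vision model.

```python
def classify_ripeness(rgb):
    """Very rough ripeness heuristic from an item's average color.

    Mirrors the description: green -> unripe, yellow -> ripe,
    brown -> overripe. Thresholds are hypothetical examples.
    """
    r, g, b = rgb
    if g > r * 1.2:                        # distinctly green -> unripe
        return "unripe"
    if r > 120 and g > 120 and b < 100:    # yellow-ish -> ripe
        return "ripe"
    return "overripe"                      # dull/brown tones

def pick_suitable(items, desired="ripe"):
    # Return the ids of depicted items whose color matches the
    # recommended ripeness attribute.
    return [item_id for item_id, rgb in items if classify_ripeness(rgb) == desired]
```

Given several detected bananas and their average colors, only those classified as matching the recommended ripeness would be flagged as suitable.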
The server device may identify suitable or acceptable items for other items based on the one or more recommended attributes of the items in a similar manner (e.g., by analyzing visual features of the items to identify particular items having the one or more recommended attributes). For example, the server device may determine whether the item depicted in the image is the first item based on whether the one or more features of the item match the one or more recommended attributes.
In some implementations, the server device may analyze the visual media to identify suitable replacement items for the first item (e.g., for the item associated with the visual media). For example, the server device may receive, from the client device, an indication that the first item is unavailable. The server device may identify one or more recommended items depicted in the visual media. For example, the server device may determine the one or more recommended items that may be suitable replacements for the first item for the first user (e.g., as described in more detail elsewhere herein). The server device may analyze the visual media to identify one or more recommended items (e.g., that are suitable replacements for the first item) depicted in the visual media. In some implementations, the server device may determine probability values or scores associated with each item of the one or more recommended items identified in the visual media (e.g., using a machine learning model and/or as described in more detail elsewhere herein).
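The probability values or scores for recommended replacement items mentioned above would, per the description, come from a machine learning model. As a stand-in, the sketch below scores each candidate by the fraction of the user's preferred attributes it matches; the function and attribute names are hypothetical.

```python
def replacement_scores(candidate_attrs, preferred_attrs):
    """Score each candidate replacement item in [0, 1] by the fraction
    of the first user's preferred attributes it matches.

    A stand-in for the machine learning model referenced in the text;
    candidate_attrs maps item_id -> list of attribute strings.
    """
    preferred = set(preferred_attrs)
    scores = {}
    for item_id, attrs in candidate_attrs.items():
        matched = len(set(attrs) & preferred)
        scores[item_id] = matched / len(preferred) if preferred else 0.0
    return scores
```

A candidate matching every preferred attribute would score 1.0, while a partial match scores proportionally lower, giving the server device an ordering over depicted replacements.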
As shown by reference number 160, the server device may generate item AR information (e.g., AR feedback information) associated with the visual media. In some other implementations, the client device may generate the item AR information (e.g., AR feedback information) associated with the visual media (e.g., such as when the client device analyzes the visual media) in a similar manner as described herein. The item AR information may include presentation information to cause AR content to be displayed in connection with the visual media. The item AR information may be associated with AR content that identifies the first item, one or more recommended first items (e.g., based on the suitable first items having one or more recommended attributes), and/or one or more replacement items, among other examples, as depicted in the visual media. For example, the item AR information may include instructions that cause the AR content to be inserted and/or overlayed in the visual media.
In some implementations, the server device may generate AR feedback information to include visual representations of respective probability values located proximate to the one or more recommended items in the visual media. The visual representations may include numbers or words (e.g., “45% probability to be accepted”), colors (e.g., a green box around items with a probability score satisfying a first threshold, a yellow box around items with a probability score satisfying a second threshold but not the first threshold, and/or a red box around items with a probability score that does not satisfy the second threshold, among other examples), and/or other indicators to represent the respective probability values. The server device may generate the AR feedback information to cause AR content, including the visual representations of respective probability values, to be inserted or overlayed in the visual media proximate to the one or more recommended items in the visual media (e.g., such that a visual representation of the probability value associated with a given item is included near the given item as depicted in the visual media).
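The two-threshold color scheme described above can be sketched directly. The threshold values (0.7 and 0.4) and the label format are illustrative examples only.

```python
def feedback_color(score, first_threshold=0.7, second_threshold=0.4):
    """Map a probability score to an overlay box color: green when the
    first threshold is satisfied, yellow when only the second is, and
    red otherwise. Threshold values are hypothetical examples."""
    if score >= first_threshold:
        return "green"
    if score >= second_threshold:
        return "yellow"
    return "red"

def annotate(recommendations):
    # Build (item_id, label, color) tuples to overlay near each
    # recommended item depicted in the visual media.
    return [(item_id, f"{round(score * 100)}% probability to be accepted",
             feedback_color(score))
            for item_id, score in recommendations]
```

Each tuple pairs an item with the textual and color indicators that the AR content would place near that item in the visual media.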
In some implementations, as shown by reference number 165, the server device may transmit, and the client device may receive, the item AR information associated with the visual media to cause AR feedback information to be displayed by the client device in connection with the visual media. For example, the AR feedback information may identify or indicate whether the item depicted in the visual media is the first item, and/or the one or more recommended items depicted in the visual media, among other examples. For example, the client device may obtain the item AR information associated with visual media captured by the client device (e.g., by receiving the item AR information from the server device or by generating the item AR information).
As shown by reference number 170, the client device may provide, based on the item AR information, AR feedback information for display in connection with the visual media. For example, in some cases, the client device may provide the visual media for display with AR content inserted or overlayed in the visual media. As shown in
Additionally, or alternatively, the AR content may identify one or more recommended items and/or replacement items depicted in the visual media. For example, the AR content may identify a location (e.g., on a shelf or other display) of an item that is a suitable replacement (e.g., suitable for the first user and/or the task as described in more detail elsewhere herein) for the first item that is to be acquired in a location associated with the visual media. In some implementations, the client device may provide for display visual representations of respective probability values located proximate to the one or more recommended items in the visual media (e.g., where the respective probability values indicate a likelihood that the first user associated with the task would accept the one or more recommended items as a replacement). In this way, a user associated with the client device may quickly and easily identify the item to be acquired and/or suitable replacement items at a given location. Additionally, by enabling the server device and/or the client device to analyze the visual media to identify items having one or more recommended attributes (e.g., that may be subjective), a likelihood that a suitable item for the first user is selected may be improved.
In some implementations, the client device (and/or the server device) may transmit, to the user device associated with the task, an indication of a recommended item, from the one or more recommended items, that was selected (e.g., based on providing the AR feedback information for display). For example, a user of the client device may identify a recommended item based on the AR feedback information displayed by the client device. The user of the client device may provide an indication to the client device of a recommended item that was selected. The client device may transmit, to the user device, a request to approve the selected recommended item. The client device may receive, from the user device, an indication of whether the recommended item is approved as a replacement for an item from the one or more items. In other words, user feedback may be incorporated to improve the likelihood that a suitable item for the first user is acquired.
As shown in
As shown by reference number 180, the server device may update the routing AR information associated with the route to indicate that the first item has been obtained and to indicate a next waypoint associated with the route. For example, the next waypoint may correspond to a second item of the one or more items. The next waypoint may be identified based on an order of the waypoints associated with the route (e.g., determined by the server device as described in more detail elsewhere herein). For example, the updated routing AR information may cause AR content to be displayed indicating that a suitable item has been successfully obtained. The AR content may indicate a path and/or route to be followed by a user of the client device to reach the next waypoint associated with the route.
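The next-waypoint selection described above can be sketched as a walk over the ordered waypoint list, skipping items already obtained. The route/waypoint data shape is a hypothetical assumption.

```python
def next_waypoint(route, obtained):
    """Return the first waypoint, in route order, whose item has not yet
    been obtained, or None when every item on the route is accounted for.

    route is an ordered list of waypoint dicts; obtained is a set of
    item identifiers (both shapes are illustrative assumptions).
    """
    for waypoint in route:
        if waypoint["item"] not in obtained:
            return waypoint
    return None
```

After the first item (or an approved replacement) is marked as obtained, the updated routing AR information would direct the user toward the waypoint this function returns.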
As shown by reference number 185, the server device may transmit, and the client device may receive, updated routing AR information to cause the client device to display an indication of the next waypoint associated with the route (e.g., based on the visual media indicating that a first item, from the one or more items, or a suitable replacement item or recommended item was obtained). As shown by reference number 190, the client device may provide the updated routing AR information for display indicating the next waypoint associated with the route (e.g., in a similar manner as described in more detail elsewhere herein, such as in connection with
In this way, the user performing the task may be enabled to quickly and easily be routed through an entity location (e.g., via the AR view of the route) to the one or more items. Additionally, the server device (e.g., a computer vision model or another machine learning model) may be trained to recognize and/or identify attributes or characteristics of items that may otherwise be subjective based on the judgment of a human. The AR feedback information may enable the user to identify a correct item having a certain desired attribute or characteristic, such as a certain level of ripeness of a fruit, among other examples. As a result, by providing AR feedback to enable a user to quickly and easily identify the correct item and/or a suitable replacement item, among other examples, the server device may conserve time and/or resources (e.g., computing resources, network resources, and/or power resources) that would otherwise be consumed by the task being re-requested (e.g., via one or more devices) and/or re-performed. Additionally, the server device may conserve time associated with performing the task by efficiently routing the user through an entity location to locate the one or more items associated with the task.
As indicated above,
The server device 205 includes one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with real time visual feedback for AR map routing and item selection, as described elsewhere herein. The server device 205 may include a communication device and/or a computing device. For example, the server device 205 may include a server, such as an application server, a client server, a web server, a database server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system, among other examples. In some implementations, the server device 205 includes computing hardware used in a cloud computing environment.
The client device 210 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with real time visual feedback for AR map routing and item selection, as described elsewhere herein. The client device 210 may include a communication device and/or a computing device. For example, the client device 210 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device. In some implementations, the client device 210 and the server device 205 may be co-located. For example, the server device 205 may be included in the client device 210.
The user device 215 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with real time visual feedback for AR map routing and item selection, as described elsewhere herein. The user device 215 may include a communication device and/or a computing device. For example, the user device 215 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device.
The network 220 includes one or more wired and/or wireless networks. For example, the network 220 may include a wireless wide area network (e.g., a cellular network or a public land mobile network), a local area network (e.g., a wired local area network or a wireless local area network (WLAN), such as a Wi-Fi network), a personal area network (e.g., a Bluetooth network), a near-field communication network, a telephone network, a private network, the Internet, and/or a combination of these or other types of networks. The network 220 enables communication among the devices of environment 200.
The quantity and arrangement of devices and networks shown in
Bus 310 includes one or more components that enable wired and/or wireless communication among the components of device 300. Bus 310 may couple together two or more components of
Memory 330 includes volatile and/or nonvolatile memory. For example, memory 330 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). Memory 330 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). Memory 330 may be a non-transitory computer-readable medium. Memory 330 stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of device 300. In some implementations, memory 330 includes one or more memories that are coupled to one or more processors (e.g., processor 320), such as via bus 310.
Input component 340 enables device 300 to receive input, such as user input and/or sensed input. For example, input component 340 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator. Output component 350 enables device 300 to provide output, such as via a display, a speaker, and/or a light-emitting diode. Communication component 360 enables device 300 to communicate with other devices via a wired connection and/or a wireless connection. For example, communication component 360 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.
Device 300 may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 330) may store a set of instructions (e.g., one or more instructions or code) for execution by processor 320. Processor 320 may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors 320, causes the one or more processors 320 and/or the device 300 to perform one or more operations or processes described herein. In some implementations, hardwired circuitry is used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, processor 320 may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in
As shown in
As further shown in
As further shown in
As further shown in
As further shown in
As further shown in
Although
As shown in
As further shown in
As further shown in
As further shown in
As further shown in
Although
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations.
As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination and permutation of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).