Users conduct messaging conversations, e.g., chat, instant messaging, etc., using messaging services. Messaging conversations may be conducted using any user device, e.g., a computer, a mobile device, a wearable device, etc. As users conduct more conversations and perform more tasks using messaging applications, automated assistance with messaging conversations or tasks (e.g., via a bot or other automated assistant application) may be useful to improve efficiency. While automation may help make messaging communications more efficient for users, there may be a need to manage permissions relating to when and how a messaging bot accesses user information and what user information the messaging bot is permitted to access.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Some implementations can include a computer-implemented method comprising providing a messaging application, on a first computing device associated with a first user, to enable communication between the first user and at least one other user, and detecting, at the messaging application, a user request. The method can also include programmatically determining that an action in response to the user request requires access to data associated with the first user, and causing a permission interface to be rendered in the messaging application on the first computing device, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user. The method can further include, upon receiving user input from the first user indicating approval of the access to the data associated with the first user, accessing the data associated with the first user and performing the action in response to the user request.
The method can also include, upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the action is not performed. In some implementations, the first user can include a human user and the at least one other user can include an assistive agent.
In some implementations, the first user is a human user and the at least one other user includes a second human user, different from the first user, associated with a second computing device. The permission interface can be rendered in the messaging application on the first computing device associated with the first user and the permission interface is not displayed on the second computing device associated with the second human user.
The method can further include, upon receiving user input from the first user prohibiting access to the data associated with the first user, providing a first indication for rendering on the first computing device associated with the first user. The method can also include providing a second indication for rendering on a second computing device associated with the at least one other user, the first and second indications indicating failure to serve the user request, wherein the first and second indications are different.
In some implementations, the first and second indications can include one or more of: different textual content, different style, and different format. In some implementations, the first user includes a human user and the at least one other user includes a second human user, different from the first user, and an assistive agent. The user request can be received from the first computing device associated with the first user. The method can also include initiating, in response to the user request, a separate conversation in the messaging application. The separate conversation can include the first user and the assistive agent, and may not include the second human user.
In some implementations, detecting the user request comprises analyzing one or more messages received in the messaging application from one or more of the first user and the at least one other user. The one or more messages can include one or more of a text message, a multimedia message, and a command to an assistive agent. Performing the action in response to the user request can include providing one or more suggestions in the messaging application.
The method can also include causing the one or more suggestions to be rendered in the messaging application. The one or more suggestions can be rendered as suggestion elements that, when selected by the first user, cause details about the suggestion to be displayed.
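The permission-gated flow summarized above can be illustrated with a short sketch. The following Python code is a hypothetical illustration only; the function and field names (e.g., render_permission_interface, requires_user_data) are placeholders introduced here, not part of any described implementation.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str                   # e.g., "book a car"
        requires_user_data: bool    # whether the action needs access to user data
        data_description: str = ""  # e.g., "location"

    def render_permission_interface(user: str, description: str) -> bool:
        # Placeholder: a real messaging application would render an
        # approve/prohibit element to this user only and return the selection.
        print(f"[shown to {user} only] {description} (ALLOW / NOT NOW)")
        return True  # assume approval for this sketch

    def handle_user_request(user: str, action: Action) -> None:
        if action.requires_user_data:
            approved = render_permission_interface(
                user, f"Allow access to your {action.data_description}?")
            if not approved:
                # Indicate in the messaging application that the action is not performed.
                print(f"[to {user}] Sorry, I can't complete that request.")
                return
        # Data is accessed and the action performed only after approval.
        print(f"Performing '{action.name}' for {user}.")

    handle_user_request("user_a", Action("book a car", True, "location"))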
Some implementations can include a computer-implemented method. The method can include detecting, at a messaging application, a user request, and programmatically determining that an action in response to the user request requires access to data associated with a first user. The method can also include causing a permission interface to be rendered in the messaging application on a first computing device associated with the first user, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user. The method can further include, upon receiving approval from the first user at the permission interface, accessing the data associated with the first user and performing the action in response to the user request.
The method can also include, upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the action is not performed. The method can further include, upon receiving user input from the first user prohibiting access to the data associated with the first user, providing a first indication for rendering in the messaging application. The method can also include providing a second indication for rendering in a second messaging application associated with at least one other user, the first and second indications indicating failure to serve the user request, wherein the first and second indications are different.
Some implementations can include a system comprising one or more processors coupled to a non-transitory computer readable medium having stored thereon instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations can include providing a messaging application, on a first computing device associated with a first user, to enable communication between the first user and at least one other user, and detecting, at the messaging application, a user request. The operations can also include programmatically determining that an action in response to the user request requires access to data associated with the first user, and causing a permission interface to be rendered in the messaging application on the first computing device, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user. The operations can further include, upon receiving user input from the first user indicating approval of the access to the data associated with the first user, accessing the data associated with the first user and performing the action in response to the user request.
The operations can also include, upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the action is not performed. In some implementations, the first user can include a human user and the at least one other user can include an assistive agent. In some implementations, the first user can include a human user and the at least one other user can include a second human user, different from the first user, associated with a second computing device. The permission interface can be rendered in the messaging application on the first computing device associated with the first user, and the permission interface is not displayed on the second computing device associated with the second human user.
One or more implementations described herein relate to permission control and management for messaging application bots.
In the illustrated implementation, messaging server 101, client devices 115, and server 135 are communicatively coupled via a network 140. In various implementations, network 140 may be a conventional type, wired or wireless, and may have numerous different configurations including a star configuration, token ring configuration or other configurations. Furthermore, network 140 may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or other interconnected data paths across which multiple devices may communicate. In some implementations, network 140 may be a peer-to-peer network. Network 140 may also be coupled to or include portions of a telecommunications network for sending data in a variety of different communication protocols. In some implementations, network 140 includes Bluetooth® communication networks, Wi-Fi®, or a cellular communications network for sending and receiving data including via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, email, etc. Although FIG. 1 illustrates one network 140 coupled to messaging server 101, client devices 115, and server 135, in practice one or more networks 140 may be coupled to these entities.
Messaging server 101 may include a processor, a memory, and network communication capabilities. In some implementations, messaging server 101 is a hardware server. In some implementations, messaging server 101 may be implemented in a virtualized environment, e.g., messaging server 101 may be a virtual machine that is executed on a hardware server that may include one or more other virtual machines. Messaging server 101 is communicatively coupled to the network 140 via signal line 102. Signal line 102 may be a wired connection, such as Ethernet, coaxial cable, fiber-optic cable, etc., or a wireless connection, such as Wi-Fi, Bluetooth, or other wireless technology. In some implementations, messaging server 101 sends and receives data to and from one or more of client devices 115a-115n, server 135, and bot 113 via network 140. In some implementations, messaging server 101 may include messaging application 103a that provides client functionality to enable a user (e.g., any of users 125) to exchange messages with other users and/or with a bot. Messaging application 103a may be a server application, a server module of a client-server application, or a distributed application (e.g., with a corresponding client messaging application 103b on one or more client devices 115).
Messaging server 101 may also include database 199 which may store messages exchanged via messaging server 101, data and/or configuration of one or more bots, and user data associated with one or more users 125, all upon explicit permission from a respective user to store such data. In some embodiments, messaging server 101 may include one or more assistive agents, e.g., bots 107a and 111. In other embodiments, the assistive agents may be implemented on the client devices 115a-n and not on the messaging server 101.
Messaging application 103a may be code and routines operable by the processor to enable exchange of messages among users 125 and one or more bots 105, 107a, 107b, 109a, 109b, 111, and 113. In some implementations, messaging application 103a may be implemented using hardware including a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In some implementations, messaging application 103a may be implemented using a combination of hardware and software.
In various implementations, when respective users associated with client devices 115 provide consent for storage of messages, database 199 may store messages exchanged between one or more client devices 115. In some implementations, when respective users associated with client devices 115 provide consent for storage of messages, database 199 may store messages exchanged between one or more client devices 115 and one or more bots implemented on a different device, e.g., another client device, messaging server 101, and server 135, etc. In the implementations where one or more users do not provide consent, messages received and sent by those users are not stored.
In some implementations, messages may be encrypted, e.g., such that only a sender and recipient of a message can view the encrypted messages. In some implementations, messages are stored. In some implementations, database 199 may further store data and/or configuration of one or more bots, e.g., bot 107a, bot 111, etc. In some implementations when a user 125 provides consent for storage of user data (such as social network data, contact information, images, etc.) database 199 may also store user data associated with the respective user 125 that provided such consent.
In some implementations, messaging application 103a/103b may provide a user interface that enables a user 125 to create new bots. In these implementations, messaging application 103a/103b may include functionality that enables user-created bots to be included in conversations between users of messaging application 103a/103b.
Client device 115 may be a computing device that includes a memory and a hardware processor, for example, a camera, a laptop computer, a tablet computer, a mobile telephone, a wearable device, a mobile email device, a portable game player, a portable music player, a reader device, a head-mounted display, or other electronic device capable of wirelessly accessing network 140.
In the illustrated implementation, client device 115a is coupled to the network 140 via signal line 108 and client device 115n is coupled to the network 140 via signal line 110. Signal lines 108 and 110 may be wired connections, e.g., Ethernet, or wireless connections, such as Wi-Fi, Bluetooth, or other wireless technology. Client devices 115a, 115n are accessed by users 125a, 125n, respectively. The client devices 115a, 115n in FIG. 1 are used by way of example.
In some implementations, client device 115 may be a wearable device worn by a user 125. For example, client device 115 may be included as part of a clip (e.g., a wristband), part of jewelry, or part of a pair of glasses. In another example, client device 115 can be a smartwatch. In various implementations, user 125 may view messages from the messaging application 103a/103b on a display of the device, may access the messages via a speaker or other output device of the device, etc. For example, user 125 may view the messages on a display of a smartwatch or a smart wristband. In another example, user 125 may access the messages via headphones (not shown) coupled to or part of client device 115, a speaker of client device 115, a haptic feedback element of client device 115, etc.
In some implementations, messaging application 103b is stored on a client device 115a. In some implementations, messaging application 103b (e.g., a thin-client application, a client module, etc.) may be a client application stored on client device 115a with a corresponding messaging application 103a (e.g., a server application, a server module, etc.) that is stored on messaging server 101. For example, messaging application 103b may transmit messages created by user 125a on client device 115a to messaging application 103a stored on messaging server 101.
In some implementations, messaging application 103a may be a standalone application stored on messaging server 101. A user 125a may access the messaging application 103a via a web page using a browser or other software on client device 115a. In some implementations, messaging application 103b that is implemented on the client device 115a may include the same or similar modules as those included on messaging server 101. In some implementations, messaging application 103b may be implemented as a standalone client application, e.g., in a peer-to-peer or other configuration where one or more client devices 115 include functionality to enable exchange of messages with other client devices 115. In these implementations, messaging server 101 may include limited or no messaging functionality (e.g., client authentication, backup, etc.). In some implementations, messaging server 101 may implement one or more bots, e.g., bot 107a and bot 111.
Server 135 may include a processor, a memory and network communication capabilities. In some implementations, server 135 is a hardware server. Server 135 is communicatively coupled to the network 140 via signal line 128. Signal line 128 may be a wired connection, such as Ethernet, coaxial cable, fiber-optic cable, etc., or a wireless connection, such as Wi-Fi, Bluetooth, or other wireless technology. In some implementations, server 135 sends and receives data to and from one or more of messaging server 101 and client devices 115 via network 140. Although server 135 is illustrated as being one server, various implementations may include one or more servers 135. Server 135 may implement one or more bots as server applications or server modules, e.g., bot 109a and bot 113.
In various implementations, server 135 may be part of the same entity that manages messaging server 101, e.g., a provider of messaging services. In some implementations, server 135 may be a third party server, e.g., controlled by an entity different than the entity that provides messaging application 103a/103b. In some implementations, server 135 provides or hosts bots.
A bot is an automated service, implemented on one or more computers, that users interact with primarily through text, e.g., via messaging application 103a/103b. A bot may be implemented by a bot provider such that the bot can interact with users of various messaging applications. In some implementations, a provider of messaging application 103a/103b may also provide one or more bots. In some implementations, bots provided by the provider of messaging application 103a/103b may be configured such that the bots can be included in other messaging applications, e.g., provided by other providers. A bot may provide several advantages over other modes of interaction. For example, a bot may permit a user to try a new service (e.g., a taxi booking service, a restaurant reservation service, etc.) without having to install an application on a client device or access a website. Further, a user may interact with a bot via text, which requires minimal or no learning compared with the learning required to use a website, a software application, a telephone call (e.g., to an interactive voice response (IVR) service), or other manners of interacting with a service. Incorporating a bot within a messaging service or application may also permit users to collaborate with other users to accomplish various tasks such as travel planning, shopping, scheduling events, obtaining information, etc. within the messaging service, and eliminate cumbersome operations such as switching between various applications (e.g., a taxi booking application, a restaurant reservation application, a calendar application, etc.) or websites to accomplish the tasks.
A bot may be implemented as a computer program or application (e.g., a software application) that is configured to interact with one or more users (e.g., any of the users 125a-n) via messaging application 103a/103b to provide information or to perform specific actions within the messaging application 103. As one example, an information retrieval bot may search for information on the Internet and present the most relevant search result within the messaging app. As another example, a travel bot may have the ability to make travel arrangements via messaging application 103, e.g., by enabling purchase of travel and hotel tickets within the messaging app, making hotel reservations within the messaging app, making rental car reservations within the messaging app, and the like. As another example, a taxi bot may have the ability to call a taxi, e.g., to the user's location (obtained by the taxi bot from client device 115, when a user 125 permits access to location information) without having to invoke or call a separate taxi reservation app. As another example, a coach/tutor bot may instruct a user in some subject matter within a messaging app, e.g., by asking questions that are likely to appear on an examination and providing feedback on whether the user's responses were correct or incorrect. As another example, a game bot may play a game on the opposite side or the same side as a user within a messaging app. As another example, a commercial bot may provide services from a specific merchant, e.g., by retrieving product information from the merchant's catalog and enabling purchase through a messaging app. As another example, an interface bot may interface with a remote device or vehicle so that a user of a messaging app can chat with, retrieve information from, and/or provide instructions to the remote device or vehicle.
A bot's capabilities may include understanding a user's intent and executing on it. The user's intent may be understood by analyzing and understanding the user's conversation and its context. A bot may also understand the changing context of a conversation or the changing sentiments and/or intentions of the users based on a conversation evolving over time. For example, if user A suggests meeting for coffee but user B states that he does not like coffee, then a bot may assign a negative sentiment score for coffee to user B and may not suggest a coffee shop for the meeting.
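As a rough illustration of the sentiment-score bookkeeping described above, the following hypothetical Python sketch lowers a per-user, per-topic score when a dislike is stated; the data structure and function are illustrative assumptions, not a described implementation.

    # Hypothetical sentiment tracking: a stated dislike lowers a per-user
    # score for a topic, which later suggestions can consult.
    scores: dict = {}

    def note_statement(user: str, topic: str, liked: bool) -> None:
        key = (user, topic)
        scores[key] = scores.get(key, 0) + (1 if liked else -1)

    note_statement("user_b", "coffee", liked=False)
    # A negative score suggests not proposing a coffee shop for the meeting.
    print(scores[("user_b", "coffee")])  # -1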
Implementing bots that can communicate with users of messaging application 103a/103b may provide many advantages. Conventionally, a user may utilize a software application or a website to perform activities such as paying bills, ordering food, booking tickets, etc. A problem with such implementations is that a user is required to install or use multiple software applications and websites in order to perform the multiple activities. For example, a user may have to install different software applications to pay a utility bill (e.g., from the utility company), to buy movie tickets (e.g., a ticket reservation application from a ticketing service provider), to make restaurant reservations (e.g., from respective restaurants), or may need to visit a respective website for each activity. Another problem with such implementations is that the user may need to learn a complex user interface, e.g., a user interface implemented using multiple user interface elements, such as windows, buttons, checkboxes, dialog boxes, etc.
Consequently, an advantage of one or more described implementations is that a single application enables a user to perform activities that involve interaction with any number of parties, without being required to access a separate website or install and run software applications, which has a technical effect of reducing consumption of memory, storage, and processing resources on a client device. An advantage of the described implementations is that the conversational interface makes it easier and faster for the user to complete such activities, e.g., without having to learn a complex user interface, which has a technical effect of reducing consumption of computational resources. Another advantage of the described implementations is that implementing bots may enable various participating entities to provide user interaction at a lower cost, which has a technical effect of reducing the need for computational resources that are deployed to enable user interaction, such as a toll-free number implemented using one or more of a communications server, a website that is hosted on one or more web servers, a customer support email hosted on an email server, etc. Another technical effect of described features is a reduction in the consumption of system processing and transmission resources required to complete user tasks across communication networks.
While certain examples herein describe interaction between a bot and one or more users, various types of interactions are possible, such as one-to-one interaction between a bot and a user 125, one-to-many interactions between a bot and two or more users (e.g., in a group messaging conversation), many-to-one interactions between multiple bots and a user, and many-to-many interactions between multiple bots and multiple users. Further, in some implementations, a bot may also be configured to interact with another bot (e.g., bots 107a/107b, 109a/109b, 111, 113, etc.) via messaging application 103, via direct communication between bots, or a combination. For example, a restaurant reservation bot may interact with a bot for a particular restaurant in order to reserve a table.
In certain embodiments, a bot may use a conversational interface to interact with a user using natural language. In certain embodiments, a bot may use a template-based format to create sentences with which to interact with a user, e.g., responding to a request for a restaurant address using a template such as “the location of restaurant R is L.” In certain cases, a user may be enabled to select a bot interaction format, e.g., whether the bot is to use natural language to interact with the user, whether the bot is to use template-based interactions, etc.
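A template-based interaction of the kind described could be as simple as string substitution, as in the following illustrative Python sketch (the template constant and function name are assumptions for illustration):

    # Illustrative template-based response generation.
    TEMPLATE = "the location of restaurant {restaurant} is {location}"

    def template_response(restaurant: str, location: str) -> str:
        return TEMPLATE.format(restaurant=restaurant, location=location)

    print(template_response("R", "L"))  # "the location of restaurant R is L"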
In cases in which a bot interacts conversationally using natural language, the content and/or style of the bot's interactions may dynamically vary based on one or more of: the content of the conversation determined using natural language processing, the identities of the users in the conversation, one or more conversational contexts (e.g., historical information on the user's interactions, connections between the users in the conversation based on a social graph), external conditions (e.g., weather, traffic), the users' schedules, related context associated with the users, and the like. In these cases, the content and style of the bot's interactions are varied based only on such factors for which users participating in the conversation have provided consent.
As one example, if the users of a conversation are determined to be using formal language (e.g., no or minimal slang terms or emojis), then a bot may also interact within that conversation using formal language, and vice versa. As another example, if a user in a conversation is determined (based on the present and/or past conversations) to be a heavy user of emojis, then a bot may also interact with that user using one or more emojis. As another example, if it is determined that two users in a conversation are remotely connected in a social graph (e.g., having two or more intermediate nodes between them, denoting, e.g., that they are friends of friends of friends), then a bot may use more formal language in that conversation. In cases where users participating in a conversation have not provided consent for the bot to utilize factors such as the users' social graph, schedules, location, or other context associated with the users, the content and style of interaction of the bot may be a default style, e.g., a neutral style that does not require utilization of such factors.
Further, in some implementations, one or more bots may include functionality to engage in a back-and-forth conversation with a user. For example, if the user requests information about movies, e.g., by entering “@moviebot Can you recommend a movie?”, the bot “moviebot” may respond with “Are you in the mood for a comedy?” The user may then respond, e.g., “nope”, to which the bot may respond with “OK. The sci-fi movie entitled Space and Stars has got great reviews. Should I book you a ticket?” The user may then indicate “Yeah, I can go after 6 pm. Please check if Steve can join”. Upon the user's consent to the bot accessing information about their contacts, and upon the friend Steve's consent to receiving messages from the bot, the bot may send a message to the user's friend Steve and perform further actions to book movie tickets at a suitable time.
In certain embodiments, a user participating in a conversation may be enabled to invoke a specific bot or a bot performing a specific task, e.g., by typing a bot name or bot handle (e.g., taxi, @taxibot, @movies, etc.), by using a voice command (e.g., “invoke bankbot”, etc.), by activation of a user interface element (e.g., a button or other element labeled with the bot name or handle), etc. Once a bot is invoked, a user 125 may send a message to the bot via messaging application 103a/103b in a manner similar to sending messages to other users 125. For example, to order a taxi, a user may type “@taxibot get me a cab”; to make a restaurant reservation, a user may type “@restaurantbot book a table for 4 at a Chinese restaurant near me.”
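Handle-based invocation such as “@taxibot get me a cab” could be detected by splitting the bot handle from the remainder of the message, as in the following hypothetical Python sketch (the pattern and function are illustrative assumptions, not a described implementation):

    import re

    # Split "@handle message..." into (handle, message); return (None, message)
    # when no bot is invoked.
    BOT_PATTERN = re.compile(r"^@(\w+)\s+(.*)$")

    def parse_invocation(message: str):
        match = BOT_PATTERN.match(message.strip())
        if match:
            return match.group(1), match.group(2)
        return None, message

    print(parse_invocation("@taxibot get me a cab"))  # ('taxibot', 'get me a cab')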
In certain embodiments, a bot may automatically suggest information or actions within a messaging conversation without being specifically invoked; that is, the users may not need to specifically invoke the bot. In these embodiments, the bot may depend on analysis and understanding of the conversation, on a continual basis or at discrete points in time. The analysis of the conversation may be used to understand specific user needs and to identify when assistance should be suggested by a bot. As one example, a bot may search for some information and suggest the answer if it is determined that a user needs information (e.g., based on the user asking a question to another user, or based on multiple users indicating they don't have some information). As another example, if it is determined that multiple users have expressed interest in eating Chinese food, a bot may automatically suggest a set of Chinese restaurants in proximity to the users, including optional information such as locations, ratings, and links to the websites of the restaurants.
In certain embodiments, rather than automatically invoking a bot or waiting for a user to explicitly invoke a bot, an automatic suggestion may be made to one or more users in a messaging conversation to invoke one or more bots. In these embodiments, the conversation may be analyzed on a continual basis or at discrete points of time, and the analysis of the conversation may be used to understand specific user needs and to identify when a bot should be suggested within the conversation.
In the embodiments in which a bot may automatically suggest information or actions within a messaging conversation without being specifically invoked, such functionality is disabled, e.g., if one or more users participating in the messaging conversation do not provide consent to a bot performing analysis of the users' conversation. Further, such functionality may also be disabled temporarily based on user input. For example, when the users indicate that a conversation is private or sensitive, analysis of conversational context is suspended until users provide input for the bot to be activated. Further, indications that analysis functionality is disabled may be provided to participants in the conversation, e.g., with a user interface element.
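The consent gating described above might be reduced to a simple predicate, as in this hypothetical Python sketch (the per-user consent flags and the “private” toggle are illustrative assumptions):

    # Conversation analysis runs only if every participant has consented and
    # the conversation has not been marked private/sensitive.
    def analysis_enabled(conversation: dict) -> bool:
        if conversation.get("marked_private"):
            return False  # temporarily disabled based on user input
        return all(conversation["consent"].values())

    conv = {"consent": {"user_a": True, "user_b": False}, "marked_private": False}
    print(analysis_enabled(conv))  # False: one participant has not consented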
In various implementations, a bot may be implemented in a variety of configurations. For example, bot 105 may be implemented on client device 115a, e.g., as a client-only application, such that the bot functionality is provided locally on the client device.
In another example, bot 107a (server module) is implemented on messaging server 101 and bot 107b (client module) is implemented on client devices 115. In this example, the bot functionality is provided by modules implemented on client devices 115 and messaging server 101, e.g., as a client-server application.
In another example, bot 109a (server module) is implemented on server 135 and bot 109b (client module) is implemented on client devices 115. In this example, the bot functionality is provided by modules implemented on client devices 115 and server 135, which is distinct from messaging server 101. In some implementations, a bot may be implemented as a distributed application, e.g., with modules distributed across multiple client devices and servers (e.g., client devices 115, server 135, messaging server 101, etc.). In some implementations, a bot may be implemented as a server application, e.g., bot 111 that is implemented on messaging server 101 and bot 113 that is implemented on server 135.
Different implementations such as client-only, server-only, client-server, distributed, etc. may provide different advantages. For example, client-only implementations permit bot functionality to be provided locally, e.g., without network access, which may be advantageous in certain contexts, e.g., when a user is outside of a network coverage area or in an area with low or limited network bandwidth. Implementations that include one or more servers, such as server-only, client-server, or distributed configurations, may permit certain functionality, e.g., financial transactions, ticket reservations, etc., that may not be possible to provide locally on a client device.
While the bots are illustrated as distinct from messaging application 103a/103b, in some implementations, one or more bots may be implemented as part of messaging application 103a/103b.
In some implementations, third parties distinct from a provider of messaging application 103a/103b and users 125 may provide bots that can communicate with users 125 via messaging application 103a/103b for specific purposes. For example, a taxi service provider may provide a taxi bot, a ticketing service may provide a bot that can book event tickets, a bank may provide a bot that can conduct financial transactions, etc.
In implementing bots via messaging application 103, bots are permitted to communicate with users only upon specific user authorization. For example, if a user invokes a bot, the bot can reply, e.g., based on the user's action of invoking the bot. In another example, a user may indicate particular bots or types of bots that may contact the user. For example, a user may permit travel bots to communicate with her, but not provide authorization for shopping bots. In this example, messaging application 103a/103b may permit travel bots to exchange messages with the user, but filter or deny messages from shopping bots.
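Such per-user authorization of bot categories could be implemented as a simple lookup before delivering a bot's message, as in this illustrative Python sketch (the category labels and data structure are assumptions for illustration):

    # A user permits categories of bots; messages from other categories are
    # filtered or denied by the messaging application.
    user_permissions = {"user_a": {"travel"}}

    def may_deliver(bot_category: str, recipient: str) -> bool:
        return bot_category in user_permissions.get(recipient, set())

    print(may_deliver("travel", "user_a"))    # True: travel bots permitted
    print(may_deliver("shopping", "user_a"))  # False: message filtered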
Further, in order to provide some functionality (e.g., ordering a taxi, making a flight reservation, contacting a friend, etc.), bots may request that the user permit the bot to access user data, such as location, payment information, contact list, etc. In such instances, a user is presented with options to permit or deny access to the bot. If the user denies access, the bot may respond via a message, e.g., “Sorry, I am not able to book a taxi for you.” Further, the user may provide access to information on a limited basis, e.g., the user may permit the taxi bot to access a current location only upon specific invocation of the bot, but not otherwise. In different implementations, the user can control the type, quantity, and granularity of information that a bot can access, and is provided with the ability (e.g., via a user interface) to change such permissions at any time. In some implementations, user data may be processed, e.g., to remove personally identifiable information, to limit information to specific data elements, etc., before a bot can access such data. Further, users can control usage of user data by messaging application 103a/103b and one or more bots. For example, a user can specify that a bot that offers the capability to make financial transactions must obtain user authorization before a transaction is completed, e.g., the bot may send a message “Tickets for the movie Space and Stars are $12 each. Shall I go ahead and book?” or “The best price for this shirt is $125, including shipping. Shall I charge your credit card ending 1234?” etc.
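Processing user data before a bot can access it, e.g., limiting it to permitted data elements, might look like the following hypothetical Python sketch (the field names and function are illustrative assumptions):

    # Share only the fields the user has permitted; all other fields,
    # including identifying information, are withheld from the bot.
    def prepare_for_bot(user_record: dict, permitted_fields: set) -> dict:
        return {k: v for k, v in user_record.items() if k in permitted_fields}

    record = {"name": "A. User", "location": "city center", "card": "1234"}
    print(prepare_for_bot(record, {"location"}))  # {'location': 'city center'}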
In some implementations, messaging application 103a/103b may also provide one or more suggestions, e.g., suggested responses, to users 125 via a user interface, e.g., as a button or other user interface element. Suggested responses may enable faster interaction, e.g., by reducing or eliminating the need for a user to type a response. Suggested responses may enable users to respond to a message quickly and easily, e.g., when a client device lacks text input functionality (e.g., a smartwatch that does not include a keyboard or microphone). Suggested responses may also enable users to respond quickly to messages, e.g., when the user selects a suggested response (e.g., by selecting a corresponding user interface element on a touchscreen). Suggested responses may be generated using predictive models, e.g., machine learning models, that are trained to generate responses.
For example, messaging application 103a/103b may implement machine learning, e.g., a deep learning model, that can enhance user interaction with messaging application 103. Machine-learning models may be trained using synthetic data, e.g., data that is automatically generated by a computer, with no use of user information. In some implementations, machine-learning models may be trained, e.g., based on sample data, for which permissions to utilize user data for training have been obtained expressly from users. For example, sample data may include received messages and responses that were sent to the received messages. Based on the sample data, the machine-learning model can predict responses to received messages, which may then be provided as suggested responses. User interaction is enhanced, e.g., by reducing burden on the user to compose a response to a received message, by providing a choice of responses that are customized based on the received message and the user's context. For example, when users provide consent, suggested responses may be customized based on the user's prior activity, e.g., earlier messages in a conversation, messages in different conversations, etc. For example, such activity may be used to determine an appropriate suggested response for the user, e.g., a playful response, a formal response, etc. based on the user's interaction style. In another example, when the user specifies one or more preferred languages and/or locales, messaging application 103a/103b may generate suggested responses in the user's preferred language. In various examples, suggested responses may be text responses, images, multimedia, etc.
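Serving model-generated suggestions only to consenting users could be structured as in the following Python sketch; the stand-in model and its predict() method are hypothetical placeholders for a trained predictive model, not a described implementation.

    class TrivialModel:
        # Stand-in for a trained predictive model.
        def predict(self, message: str):
            return ["Sounds good!", "Terrific!", "Can't make it, sorry."]

    def suggest_responses(message: str, model, user_consented: bool, k: int = 3):
        if not user_consented:
            return []  # no suggestions without consent
        return model.predict(message)[:k]

    print(suggest_responses("Lunch at noon?", TrivialModel(), user_consented=True))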
In some implementations, machine learning may be implemented on messaging server 101, on client devices 115, or on both messaging server 101 and client devices 115. In some implementations, a simple machine learning model may be implemented on client device 115 (e.g., to permit operation of the model within memory, storage, and processing constraints of client devices) and a complex machine learning model may be implemented on messaging server 101. If a user does not provide consent for use of machine learning techniques, such techniques are not implemented. In some implementations, a user may selectively provide consent for machine learning to be implemented only on a client device 115. In these implementations, machine learning may be implemented on client device 115, such that updates to a machine learning model or user information used by the machine learning model are stored or used locally, and are not shared to other devices such as messaging server 101, server 135, or other client devices 115.
For the users that provide consent to receiving suggestions, e.g., based on machine-learning techniques, suggestions may be provided by messaging application 103. For example, suggestions may include suggestions of content (e.g., movies, books, etc.), schedules (e.g., available time on a user's calendar), events/venues (e.g., restaurants, concerts, etc.), and so on. In some implementations, if users participating in a conversation provide consent to use of conversation data, suggestions may include suggested responses to incoming messages that are based on conversation content. For example, if a first user of two users that have consented to suggestions based on conversation content sends a message “do you want to grab a bite? How about Italian?”, a response may be suggested to the second user, e.g., “@assistant lunch, Italian, table for 2”. In this example, the suggested response includes a bot (identified by the symbol @ and bot handle assistant). If the second user selects this response, the assistant bot is added to the conversation and the message is sent to the bot. A response from the bot may then be displayed in the conversation, and either of the two users may send further messages to the bot. In this example, the assistant bot is not provided access to the content of the conversation, and suggested responses are generated by the messaging application 103.
In certain implementations, the content of a suggested response may be customized based on whether a bot is already present in a conversation or is able to be incorporated into the conversation. For example, if it is determined that a travel bot could be incorporated into the conversation, a suggested response to a question about the cost of plane tickets to France could be “Let's ask travel bot!”
In different implementations, suggestions, e.g., suggested responses, may include one or more of: text (e.g., “Terrific!”), emoji (e.g., a smiley face, a sleepy face, etc.), images (e.g., photos from a user's photo library), text generated based on templates with user data inserted in a field of the template (e.g., “her number is <Phone Number>” where the field “Phone Number” is filled in based on user data, if the user provides access to user data), links (e.g., Uniform Resource Locators), etc. In some implementations, suggested responses may be formatted and/or styled, e.g., using colors, fonts, layout, etc. For example, a suggested response that includes a movie recommendation may include descriptive text about the movie, an image from the movie, and a link to buy tickets. In different implementations, suggested responses may be presented as different types of user interface elements, e.g., text boxes, information cards, etc.
In different implementations, users are offered control over whether they receive suggestions, what types of suggestions they receive, a frequency of the suggestions, etc. For example, users may decline to receive suggestions altogether, may choose specific types of suggestions, or may choose to receive suggestions only during certain times of day. In another example, users may choose to receive personalized suggestions. In this example, machine learning may be used to provide suggestions, based on the user's preferences relating to use of their data and use of machine learning techniques.
At 504, a permission user interface element is caused to be displayed to the user associated with the request. An example of a permission request user interface element is shown in FIG. 6. Processing continues to 506.

At 506, an indication is received of whether the user grants the bot permission to access or obtain the user data. The indication can be received in the form of a user interface element selection (e.g., touching, tapping, or selecting an on-screen user interface button, via typing, audio input, gesture input, etc.) that indicates whether the user grants permission or not. Processing continues to 508.
At 508, the bot permission system determines whether permission was granted or not. Determining whether permission was granted can be accomplished by evaluating the indication received in step 506. If permission was granted, processing continues to 510. If permission was not granted, processing continues to 514.
At 510, an indication of user data being shared with the bot optionally can be provided. For example, the indication “Sharing location data” can be displayed to the user. Processing continues to 512.

At 512, the bot can perform further processing to complete the task associated with the granted permission, e.g., determining which cars are available near the user's shared location.
At 514, the bot can cause an indication of declining the task to be displayed to the user. For example, the bot could provide an indication such as “Sorry I didn't get your location—I'm unable to schedule a car” or the like. The indication could be displayed on a graphical user interface and/or provided in the form of an audio cue or other output indication.
The bot can cause a permission allow/disallow interface element 606 to be displayed. The permission element 606 can include a description 608 of what type of permission is needed, and input elements for disallowing or allowing the bot permission to access (or receive) user data (610 and 612, respectively).
At 804, a progress indication is optionally displayed by the bot and may be visible in the group conversation to the group or to the individual user making the request. For example, a car service bot may display a message such as “I'm working on it” in the group conversation. Processing continues to 806.
At 806, a permission user interface element is caused to be displayed to the user associated with the request. An example of a permission request user interface element is shown in FIG. 6. Processing continues to 808.
At 808, an indication is received of whether one or more users grant the bot permission to access or obtain respective user data. The indication can be received in the form of a user interface element selection (e.g., touching, tapping, or selecting an on-screen user interface button, via typing, audio input, gesture input, etc.) that indicates whether the user grants permission or not. For example, the user could select one of “NOT NOW” or “ALLOW” shown in the permission user interface element of FIG. 6. Processing continues to 810.
At 810, the bot permission system determines whether permission was granted or not. Determining whether permission was granted can be accomplished by evaluating the indication received in step 808. If permission was granted, processing continues to 812. If permission was not granted, processing continues to 816.
At 812, the bot can start a one-to-one chat with the user. The one-to-one chat and the messages exchanged in the one-to-one chat are not visible to the group of users in the group messaging conversation. Processing continues to 814.
At 814, the bot may perform further processing to complete the task associated with the permissions that were granted within the one-to-one user messaging conversation. For example, a car service bot could continue to determine which cars may be in a location to provide car service to the user. In another example, a lodging bot could use shared user location to determine nearby accommodations that are vacant and available for rental.
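Spinning off the one-to-one chat of steps 812-814 could be modeled as creating a conversation whose only participants are the bot and the requesting user; the following Python sketch is a hypothetical illustration, and the conversation data structure is an assumption, not a described implementation.

    # Create a conversation visible only to the bot and the requesting user;
    # the other members of the group conversation are not included.
    def start_private_chat(bot: str, user: str, conversations: dict) -> str:
        chat_id = f"{bot}:{user}"
        conversations[chat_id] = {"participants": {bot, user}, "messages": []}
        return chat_id

    convs = {}
    print(start_private_chat("carbot", "user_a", convs))  # 'carbot:user_a'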
At 816, the bot can cause a “graceful” indication of declining the task to be displayed to the user within the group messaging conversation. For example, the bot could provide an indication such as “I wasn't able to get your location—I'm unable to schedule a car” or the like. The indication could be displayed on a graphical user interface or provided in the form of an audio cue or other output indication. The graceful aspect of the decline is that the message does not explicitly indicate that a user did not grant the bot permission to use the user's data. In different implementations, the indication may include different textual content, e.g., based on the request or other factors. For example, an indication in response to a user prohibiting access to location in the context of ordering a car may include textual content such as “Sorry, unable to get location,” “I'm unable to find cars near you,” “Car service not available,” etc. In some implementations, different indications may be sent to different participants in a group conversation. In some implementations, indications may use different formats, e.g., text box, graphical indication, animated indication, etc. In some implementations, the indications may use different styles, e.g., boldface text, italicized text, fonts, colors, etc.
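The per-participant “graceful” indications described above might be chosen as in this sketch; the message strings come from the examples above, while the function and routing are illustrative assumptions.

    # The requester and the rest of the group can receive different decline
    # indications; neither reveals that permission was denied.
    def decline_indications(requester: str, participants: list) -> dict:
        notes = {}
        for person in participants:
            if person == requester:
                notes[person] = "Sorry, unable to get location."
            else:
                notes[person] = "Car service not available."
        return notes

    print(decline_indications("user_a", ["user_a", "user_b"]))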
One or more methods described herein can be run in a standalone program executed on any type of computing device, a program run on a web browser, or a mobile application (“app”) run on a mobile computing device (e.g., cell phone, smart phone, tablet computer, wearable device (wristwatch, armband, jewelry, headwear, virtual reality goggles or glasses, augmented reality goggles or glasses, etc.), laptop computer, etc.). In one example, a client/server architecture can be used, e.g., a mobile computing device (as a user device) sends user input data to a server device and receives from the server the final output data for output (e.g., for display). In another example, all computations can be performed within the mobile app (and/or other apps) on the mobile computing device. In another example, computations can be split between the mobile computing device and one or more server devices.
In some implementations, computing device 900 includes a processor 902, a memory 904, and input/output (I/O) interface 906. Processor 902 can be one or more processors and/or processing circuits to execute program code and control basic operations of the computing device 900. A “processor” includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit (CPU), multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a particular geographic location, or have temporal limitations. For example, a processor may perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems. A computer may be any processor in communication with a memory.
Memory 904 is typically provided in computing device 900 for access by the processor 902, and may be any suitable processor-readable storage medium, such as random access memory (RAM), read-only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory, etc., suitable for storing instructions for execution by the processor, and located separate from processor 902 and/or integrated therewith. Memory 904 can store software executed on computing device 900 by processor 902, including an operating system 908 and one or more applications 910, such as a messaging application, a bot application, etc. In some implementations, the applications 910 can include instructions that enable processor 902 to perform functions described herein, e.g., one or more of the methods of FIGS. 5 and 8.
Any of the software in memory 904 can alternatively be stored on any other suitable storage location or computer-readable medium. In addition, memory 904 (and/or other connected storage device(s)) can store messages, permission settings, user preferences and related data structures, parameters, audio data, and/or other instructions and data used in the features described herein in a database 912. Memory 904 and any other type of storage (magnetic disk, optical disk, magnetic tape, or other tangible media) can be considered “storage” or “storage devices.”
The I/O interface 906 can provide functions to enable interfacing the computing device 900 with other systems and devices. Interfaced devices can be included as part of the computing device 900 or can be separate and communicate with the computing device 900. For example, network communication devices, wireless communication devices, storage devices, and input/output devices can communicate via the I/O interface 906. In some implementations, the I/O interface 906 can connect to interface devices such as input devices (keyboard, pointing device, touch screen, microphone, camera, scanner, sensors, etc.) and/or output devices (display device, speaker devices, printer, motor, etc.).
Some examples of interfaced devices that can connect to I/O interface 906 include a display device 914 that can be used to display content, e.g., images, video, and/or a user interface of an output application as described herein. Display device 914 can be connected to computing device 900 via local connections (e.g., display bus) and/or via networked connections, and can be any suitable display device, such as a liquid crystal display (LCD), light emitting diode (LED), or plasma display screen, cathode ray tube (CRT), television, monitor, touch screen, 3-D display screen, or other visual display device. For example, display device 914 can be a flat display screen provided on a mobile device, multiple display screens provided in a goggles device, or a monitor screen for a computer device.
The I/O interface 906 can interface to other input and output devices. Some examples include one or more cameras, which can capture image frames. Orientation sensors, e.g., gyroscopes and/or accelerometers, can provide sensor data indicating device orientation (which can correspond to view orientation in some implementations) and/or camera orientation. Some implementations can provide a microphone for capturing sound (e.g., voice commands, etc.), audio speaker devices for outputting sound, or other input and output devices.
For ease of illustration, one block is shown for each of processor 902, memory 904, and I/O interface 906. These blocks may represent one or more processors or processing circuitries, memories, and I/O interfaces. In other implementations, computing device 900 may not have all of the components shown and/or may have other elements, including other types of elements instead of, or in addition to, those shown herein.
Methods described herein can be implemented by computer program instructions or code, which can be executed on a computer. For example, the code can be implemented by one or more digital processors (e.g., microprocessors or other processing circuitry) and can be stored on a computer program product including a non-transitory computer readable medium (e.g., storage medium), such as a magnetic, optical, electromagnetic, or semiconductor storage medium, including semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), flash memory, a rigid magnetic disk, an optical disk, a solid-state memory drive, etc. The program instructions can also be contained in, and provided as, an electronic signal, for example in the form of software as a service (SaaS) delivered from a server (e.g., a distributed system and/or a cloud computing system). Alternatively, one or more methods can be implemented in hardware (logic gates, etc.), or in a combination of hardware and software. Example hardware can be programmable processors (e.g., Field-Programmable Gate Array (FPGA), Complex Programmable Logic Device (CPLD), etc.), general purpose processors, graphics processors, Application Specific Integrated Circuits (ASICs), and the like. One or more methods can be performed as part of, or as a component of, an application running on the system, or as an application or software running in conjunction with other applications and an operating system.
Although the description has been described with respect to particular implementations thereof, these particular implementations are merely illustrative, and not restrictive. Concepts illustrated in the examples may be applied to other examples and implementations.
In situations in which certain implementations discussed herein may collect or use personal information about users (e.g., user's phone number or partial phone number, user data, information about a user's social network, user's location and time, user's biometric information, user's activities and demographic information), users are provided with one or more opportunities to control whether the personal information is collected, whether the personal information is stored, whether the personal information is used, and how the information is collected about the user, stored and used. That is, the systems and methods discussed herein collect, store and/or use user personal information specifically upon receiving explicit authorization from the relevant users to do so. In addition, certain data may be treated in one or more ways before it is stored or used so that personally identifiable information is removed. As one example, a user's identity may be treated so that no personally identifiable information can be determined. As another example, a user's geographic location may be generalized to a larger region so that the user's particular location cannot be determined.
Note that the functional blocks, operations, features, methods, devices, and systems described in the present disclosure may be integrated or divided into different combinations of systems, devices, and functional blocks as would be known to those skilled in the art. Any suitable programming language and programming techniques may be used to implement the routines of particular implementations. Different programming techniques may be employed such as procedural or object-oriented. The routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, the order may be changed in different particular implementations. In some implementations, multiple steps or operations shown as sequential in this specification may be performed at the same time. Further implementations are disclosed below.
providing a messaging application, on a first computing device associated with a first user, to enable communication between the first user and at least one other user;
detecting, at the messaging application, a user request;
programmatically determining that an action in response to the user request requires access to data associated with the first user;
causing a permission interface to be rendered in the messaging application on the first computing device, the permission interface enabling the first user to approve or prohibit the access to the data associated with the first user; and
upon receiving user input from the first user indicating approval of the access to the data associated with the first user, accessing the data associated with the first user and performing the action in response to the user request.
The method can further include, upon receiving user input from the first user prohibiting the access to the data associated with the first user, providing an indication in the messaging application that the task is not performed.
The method can also include, upon receiving user input from the first user prohibiting access to the data associated with the first user:
providing a first indication for rendering on the first computing device associated with the first user; and
providing a second indication for rendering on a second computing device associated with the at least one other user, the first and second indications indicating failure to serve the user request, wherein the first and second indications are different.
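For illustration only, the following is a minimal Python sketch of the permission flow recited above; all names and signatures (Request, ask_permission, read_user_data, perform_action, notify_first_user, notify_other_user) are hypothetical stand-ins for the example, not the claimed implementation.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Request:
    text: str
    needs_user_data: bool  # set by programmatically analyzing the user request

def handle_user_request(
    request: Request,
    ask_permission: Callable[[str], bool],      # renders the permission interface on the first user's device
    read_user_data: Callable[[], dict],         # accesses data associated with the first user
    perform_action: Callable[[Optional[dict]], str],
    notify_first_user: Callable[[str], None],   # indication rendered on the first computing device
    notify_other_user: Callable[[str], None],   # different indication rendered on the second computing device
) -> Optional[str]:
    if not request.needs_user_data:
        return perform_action(None)             # no data access required; perform the action directly
    if ask_permission("Allow access to your data to complete: " + request.text + "?"):
        return perform_action(read_user_data()) # approval: access the data and perform the action
    # Prohibition: the two participants receive different indications of failure.
    notify_first_user("The task was not performed because data access was declined.")
    notify_other_user("This request could not be completed.")
    return None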
This application claims the benefit of U.S. Application No. 62/397,047, entitled “BOT PERMISSIONS”, and filed on Sep. 20, 2016, which is incorporated herein by reference in its entirety.