The present disclosure generally relates to special-purpose machines that manage data processing and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines for generating virtual agents.
Network site users can create content for viewing and interaction by other network site users (e.g., booking, registering, subscribing, viewing of listings). The posted content can be updated, created, or deleted, and it can be computationally challenging for a network site to return valid search results to network site users searching for content (e.g., listings for reservations) with specified parameters (e.g., dates, categories, prices, quantity). For example, if there are a large number of users posting and updating content and also a large number of users submitting complex searches for the posted content, any delay in computation due to query complexity may cause inaccurate results to be returned and cause large computational resource consumption (e.g., processing, memory, network overhead).
Various ones of the appended drawings merely illustrate examples of the present disclosure and should not be considered as limiting its scope.
The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative examples of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various examples of the inventive subject matter. It will be evident, however, to those skilled in the art, that examples of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.
As discussed above, it can be difficult to return up-to-date results for complex queries for content posted on a network site. In addition, navigating the vast array of content available on the network site can be very complex and time consuming. Such navigation can entail browsing multiple pages of information to find a suitable result or action. Although in the following discussion the example posted content are accommodations listings (e.g., listings for reservations) posted on a network site for searching and interacting with other end users, other types of network site content posted by end users and searched for by other end users can likewise be implemented in the user interface processes and methods, such as transportation, experiences, and/or events.
Generally, a listing platform can be searched for result listings that are available for a specified date range, price range, and/or other attributes (amenities, cancelation policy, etc.), which can be specified in a given query (e.g., text field, drop-down menu, checkbox filters). To search and browse the listing platform, users can access the listing platform on a particular user interface channel, such as by phone or through an application associated with the listing platform. Sometimes, users encounter issues or difficulties finding a resolution to an issue or finding the appropriate resources to perform an action. For example, users can encounter a problem with a reservation that needs to be modified or canceled. Users typically contact a live human agent to address these issues. However, waiting for a live human agent to address issues or help users is very time consuming and wastes a great deal of resources as users need to spend time on the phone waiting for an agent to respond. This wastes bandwidth and battery of the device.
Certain systems allow users to chat with virtual agents or bots to find resolutions to the issues. However, these agents are not sophisticated enough to provide resolutions to most issues, as they generally are programmed to provide static responses to queries that include certain keywords. As such, even accessing such virtual agents can waste time and frustrate users, and the users may still have to contact a live human agent to resolve issues. Such repetitive and manual processes are incredibly time-consuming and can be very frustrating to end users. This can result in missed opportunities and wasted computational resources. In addition, training these virtual agents to respond in a meaningful way to customer support tickets (e.g., customer issues) relies on a vast amount of training conversations and labeling. Generating such training conversations manually is also incredibly time-consuming and still results in training conversations that lack diversity. This causes the virtual agents to be trained to provide unrealistic responses and also reduces the array of issues that the virtual agents are programmed to handle.
To address these technical problems, the disclosed techniques provide a network site that allows a first machine learning model to interact in a communication session with an agent (e.g., a virtual agent or human agent) on the listing network site to generate a simulated conversation. The first machine learning model can simulate a real-world customer based on prior customer interactions in a vast array of settings and personalities. Multiple such simulated conversations between the first machine learning model and the agent are stored and then analyzed to train or update parameters of the virtual agent. This improves the quality of responses and broadens the array of solutions that the virtual agent is capable of generating. In some cases, the simulated conversations are scored manually by a human and/or automatically by a second machine learning model (e.g., a virtual judge). These scores are used to guide the updating of the parameters to improve the overall functioning of the virtual agent and the device.
Namely, the network site establishes a communication session with a virtual agent of a listing network platform and generates, by a first machine learning model, conversation data representing a customer support issue associated with the listing network platform. The network site transmits, by the first machine learning model, at least a portion of the conversation data to the virtual agent via the communication session. The network site receives one or more responses to the at least the portion of the conversation data from the virtual agent in the communication session and stores a simulated conversation comprising the at least the portion of the conversation data generated by the first machine learning model and the one or more responses received from the virtual agent. This reduces the overall amount of resources needed to generate training data and simulated conversations for training a virtual agent, which improves the overall efficiency of the device.
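By way of illustration only, the simulated conversation described above can be captured in a simple data structure; the following Python sketch shows one possible representation, in which the class names, field names, and speaker labels are illustrative assumptions rather than elements of the disclosure.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Turn:
    """One utterance in a simulated conversation."""
    speaker: str  # "simulated_customer" (first machine learning model) or "virtual_agent"
    text: str


@dataclass
class SimulatedConversation:
    """Conversation data generated by the first machine learning model plus the virtual agent's responses."""
    issue: str    # customer support issue being simulated
    persona: str  # e.g., "frustrated guest" or "first-time host"
    turns: List[Turn] = field(default_factory=list)

    def add_customer_turn(self, text: str) -> None:
        self.turns.append(Turn("simulated_customer", text))

    def add_agent_turn(self, text: str) -> None:
        self.turns.append(Turn("virtual_agent", text))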
With reference to FIG. 1, an example of a client-server-based network architecture 100 is shown.
In various implementations, the client device 110 can include a computing device that includes at least a display and communication capabilities that provide access to the networked system 102 via the network 104. The client device 110 comprises, but is not limited to, a remote device, workstation, computer, general-purpose computer, Internet appliance, hand-held device, wireless device, portable device, wearable computer, cellular or mobile phone, Personal Digital Assistant (PDA), smart phone, tablet, ultrabook, netbook, laptop, desktop, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box (STB), network personal computer (PC), mini-computer, and so forth. In an example, the client device 110 comprises one or more of a touch screen, accelerometer, gyroscope, biometric sensor, camera, microphone, Global Positioning System (GPS) device, and the like.
The client device 110 can implement a first user interaction channel (e.g., an interactive voice response (IVR) system or telephone communication channel) and also a second user interaction channel (e.g., a graphical user interface (GUI) of a client application 114) that communicates over a network, such as the Internet, with a remote server. While the disclosed techniques generally refer to telephone or voice-only based communication channels as the “first user interaction channel” and client application 114 GUI-based communications through a network as the “second user interaction channel,” in some cases the second user interaction channel can perform the functions and take the place of the first user interaction channel, and the first user interaction channel can perform the functions and take the place of the second user interaction channel. The first user interaction channel can correspond to an IVR service of the communication session system 150.
The client device 110 communicates with the network 104 via a wired or wireless connection. For example, one or more portions of the network 104 comprise an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a wireless network, a Wireless Fidelity (WI-FI®) network, a Worldwide Interoperability for Microwave Access (WiMax) network, another type of network, or any suitable combination thereof. In communicating with the network 104 through the first user interaction channel, the client device 110 may only send audio or voice data to the network 104. In communicating with the network 104 through the second user interaction channel, the client device 110 may send data representing selections on a GUI, image content, and/or audio or voice data to the network 104.
In some examples, the client device 110 includes one or more of the applications (also referred to as “apps”) such as, but not limited to, web browsers, book reader apps (operable to read e-books), media apps (operable to present various media forms including audio and video), fitness apps, biometric monitoring apps, messaging apps, electronic mail (email) apps, e-commerce site apps (also referred to as “marketplace apps”), and reservation applications for temporary stays or experiences at hotels, motels, or residences managed by other end users (e.g., a posting end user who owns a home and rents out the entire home or private room). In some implementations, the client application(s) 114 include various components operable to present information to the user and communicate with the networked system 102. In some examples, if the e-commerce site application is included in the client device 110, then this application is configured to locally provide the user interface and at least some of the functionalities with the application configured to communicate with the networked system 102, on an as-needed basis, for data or processing capabilities not locally available (e.g., access to a database of items available for sale, to authenticate a user, to verify a method of payment). Conversely, if the e-commerce site application is not included in the client device 110, the client device 110 can use its web browser to access the e-commerce site (or a variant thereof) hosted on the networked system 102.
The web client 112 accesses the various systems of the networked system 102 via the web interface supported by a web server 122. Similarly, the programmatic client 116 and client application(s) 114 access the various services and functions provided by the networked system 102 via the programmatic interface provided by an Application Program Interface (API) server 120.
Users (e.g., the user 106) can include a person, a machine, a machine learning model (e.g., a first machine learning model that is trained to simulate a customer) or other means of interacting with the client device 110. In some examples, the user 106 is not part of the network architecture 100, but interacts with the network architecture 100 via the client device 110 or another means. For instance, the user 106 provides input (e.g., touch screen input or alphanumeric input) to the client device 110 and the input is communicated to the networked system 102 via the network 104 by way of the second user interaction channel. In this instance, the networked system 102, in response to receiving the input from the user 106, communicates information to the client device 110 via the network 104 to be presented to the user 106. In this way, the user 106 can interact with the networked system 102 using the client device 110. As another example, the user 106 provides input (e.g., speech input) to the client device 110 and the input is communicated to the networked system 102 via the network 104 in the form of audio packets or audio data by way of the first user interaction channel. As another example, the user 106 provides input to the client device 110 via text entry in a GUI and the input is communicated to the networked system 102 via the network 104 in the form of data packets (that include the text received from the user 106) by way of the second user interaction channel.
The API server 120 and the web server 122 are coupled to and provide programmatic and web interfaces, respectively, to one or more application server(s) 140. The application server(s) 140 may host a listing network platform 142 and a communication session system 150, each of which includes one or more modules or applications, each of which can be embodied as hardware, software, firmware, or any combination thereof. The application server(s) 140 are, in turn, shown to be coupled to one or more database server(s) 124 that facilitate access to one or more information storage repositories or database(s) 126. In an example, the database(s) 126 are storage devices that store information to be posted (e.g., inventory, image data, catalog data) to the listing network platform 142. The database(s) 126 also store digital goods information in accordance with some examples.
The listing network platform 142 provides a number of publication functions and listing services to the users who access the networked system 102. While the listing network platform 142 is shown in FIG. 1 as forming part of the networked system 102, in alternative examples, the listing network platform 142 may form part of a service that is separate and distinct from the networked system 102.
While the client-server-based network architecture 100 shown in FIG. 1 employs a client-server architecture, the present inventive subject matter is, of course, not limited to such an architecture and could equally well find application in, for example, a distributed or peer-to-peer architecture system.
The listing network platform 142 can be hosted on dedicated or shared server machines that are communicatively coupled to enable communications between server machines. The components themselves are communicatively coupled (e.g., via appropriate interfaces) to each other and to various data sources, so as to allow information to be passed between the applications or so as to allow the applications to share and access common data. Furthermore, the components access one or more database(s) 126 via the database server(s) 124. The listing network platform 142 provides a number of publishing and listing mechanisms whereby a seller (also referred to as a “first user,” posting user, host) may list (or publish information concerning) goods or services for sale or barter, a buyer (also referred to as a “second user,” searching user, guest) can express interest in or indicate a desire to purchase or barter such goods or services, and a transaction (such as a trade) may be completed pertaining to the goods or services.
In some cases, the user 210 may initiate contact with the listing network platform 142 through any number of user interaction channels, such as the historical user interactions 220 and/or a second user interaction channel. For example, the user 210 can establish the historical user interactions 220 with the listing network platform 142 by placing a telephone call using the client device 110 to a service phone number associated with the listing network platform 142. In response to receiving the telephone call, the listing network platform 142 can search a database that associates a telephone number of the client device 110 with an account of the user 210 on the listing network platform 142. Specifically, the listing network platform 142 can receive the phone call and route the phone number of the client device 110 to a session management component. The session management component can search the database to locate an account associated with the phone number on the listing network platform 142.
Once the session management component locates the account, the session management component can retrieve a profile for the user 210 on the listing network platform 142 and provide access to the profile to the virtual agent component 250. Then, the virtual agent component 250 can generate a response on the client device 110 via the historical user interactions 220. The response includes information that has been determined to be relevant from the user profile and may provide instructions on how to perform certain actions to address predicted issues from the user profile.
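A minimal sketch of this phone-number-to-profile lookup is shown below, assuming a relational store keyed by phone number; the table name, column names, and returned fields are hypothetical and shown only to illustrate the routing step.

import sqlite3
from typing import Optional


def find_profile_by_phone(db: sqlite3.Connection, caller_number: str) -> Optional[dict]:
    """Map an incoming caller's phone number to a user profile, if one exists."""
    row = db.execute(
        "SELECT user_id, display_name FROM profiles WHERE phone = ?",
        (caller_number,),
    ).fetchone()
    if row is None:
        return None  # unknown caller: fall back to a generic greeting
    user_id, display_name = row
    return {"user_id": user_id, "display_name": display_name}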
The historical user interactions 220 can receive verbal or audible input from the user 210 in response to presenting the audible prompt to the user 210. The virtual agent component 250 can convert the verbal input to text and then generate an additional message that responds to the verbal or audible input received from the user 210.
In some examples, the communication session is initially started with the programmatic client 116 and the messages provided by the virtual agent component 250 are presented through the historical user interactions 220 on the programmatic client 116. For example, FIG. 3 shows a user interface 300 that the programmatic client 116 can present for searching and browsing listings on the listing network platform 142.
Upon submitting the query (e.g., via selection of the search button 320, or automatically upon selecting the combined listings element 313 (split stays option) or the dates drop-down element 317), a communication is sent from the programmatic client 116. The communication session system 150 generates an output that includes a results display of the listings matching the query and transmits the output to the programmatic client 116. The results are then displayed in the listings results area 305. The user can then select the listings or navigate to additional pages via page navigational elements 325. In some examples, the user interface 300 includes a set of combined listings 323 together with individual listings displayed in the results area 305. The combined listings 323 can be positioned within the display in a dedicated area, on top of the individual listings, between two individual listings, and/or underneath the individual listings. In some examples, the combined listings 323 are provided in response to receiving input that selects the combined listings element 313. In some examples, the combined listings 323 are presented automatically without receiving input that selects the combined listings element 313.
In some examples, the combined listings 323 are displayed in different slots or portions of the display relative to other individual listings on the basis of the type of client device being used to access the system. For example, on a mobile device, the combined listings 323 can be placed in slots 3, 6, 9, and 12 on the first page, and on a desktop computer, the same combined listings 323 may be presented in slots 5, 9, 14, and 20 for better visual balance. As referred to herein, the term “slots” means an area of a display in which a category is presented. In some cases, the combined listings 323 are excluded from being presented for last-minute stays, such as if the travel dates are within 48 hours of check-in or the start of the trip. In some examples, the combined listings 323 include individual listings of destinations or stays that are at least two hours driving distance apart but no more than 10 hours driving distance apart. In some examples, the combined listings 323 exclude repeating pairs of the same individual listings. In some examples, the combined listings 323 relate to pairs of individual listings from different neighborhoods and locations. In such cases, neighborhoods and listings can be repeated across pairs of combined listings 323.
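The placement and eligibility rules described in this paragraph can be sketched as follows; the slot positions mirror the example above, while the function names, data shapes, and exact comparison operators are assumptions made for illustration.

from datetime import datetime, timedelta
from typing import List, Tuple

MOBILE_SLOTS = (3, 6, 9, 12)    # example placement on a mobile device
DESKTOP_SLOTS = (5, 9, 14, 20)  # example placement on a desktop computer


def slots_for(device_type: str) -> Tuple[int, ...]:
    """Pick result-page slots for combined listings based on the client device type."""
    return MOBILE_SLOTS if device_type == "mobile" else DESKTOP_SLOTS


def eligible_pairs(candidates: List[Tuple[str, str, float]],
                   check_in: datetime, now: datetime) -> List[Tuple[str, str]]:
    """Filter candidate (listing_a, listing_b, drive_hours) pairs per the rules above."""
    if check_in - now < timedelta(hours=48):
        return []  # last-minute stay: suppress combined listings entirely
    seen = set()
    pairs = []
    for listing_a, listing_b, drive_hours in candidates:
        key = frozenset((listing_a, listing_b))
        if key in seen:
            continue  # exclude repeating pairs of the same individual listings
        if 2.0 <= drive_hours <= 10.0:  # at least two but no more than ten hours apart
            seen.add(key)
            pairs.append((listing_a, listing_b))
    return pairs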
The user interface 300 includes an option 390 to initiate a communication session for assistance. In response to receiving selection of the option 390, the programmatic client 116 transmits data to the communication session system 150 requesting that a virtual intelligent agent respond to requests from the user. The communication session system 150, in response to receiving the indication that the option 390 was selected, identifies a profile associated with the account for which the user interface 300 is presented. The communication session system 150 presents a greeting message that is unique and tailored to the user. Namely, the greeting message presented as a first message in response to receiving input that selects the option 390 can be generated by the virtual agent component 250.
The virtual agent component 250 continues conversing with the user 210 to attempt to resolve the customer support ticket raised by the user 210. The content of the conversation between the virtual agent component 250 and the user 210 is then stored in the historical user interactions 220. Multiple conversations between different users and the virtual agent component 250 can be recorded and stored as part of the historical user interactions 220. These conversations represent different types of customer support tickets representing different issues raised by the customers. These customer support tickets also represent different types of customer personas (e.g., angry customers, happy customers, and so forth). After a specified quantity of historical user interactions 220 are stored, the communication session system 150 proceeds to use and/or train the first ML model component 240 to generate simulated conversations between the first ML model component 240 and the virtual agent component 250.
The first ML model component 240 can implement a first LLM or any other suitable artificial neural network or convolutional neural network. The first LLM can receive a prompt that includes instructions for generating a conversation with the virtual agent component 250 that corresponds to one or more issues associated with a user based on the historical user interactions 220. For example, as shown in FIG. 4, the first ML model component 240 can receive a prompt 410 that instructs the first LLM to generate conversation segments representing a customer support issue, grounded in the historical user interactions 220.
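The prompt just described can be assembled along the following lines; the call_llm callable is a hypothetical stand-in for whatever inference interface the first LLM exposes, and the prompt wording is illustrative rather than the prompt 410 itself.

from typing import Callable, List


def build_customer_prompt(historical_interactions: List[str], persona: str, issue: str) -> str:
    """Assemble a prompt instructing the first LLM to act as a simulated customer."""
    examples = "\n---\n".join(historical_interactions[:5])  # a few grounding transcripts
    return (
        "You are simulating a customer of a listing platform.\n"
        f"Persona: {persona}\n"
        f"Issue to raise: {issue}\n"
        "Ground your wording in these past customer conversations:\n"
        f"{examples}\n"
        "Write the customer's next message only."
    )


def next_customer_message(call_llm: Callable[[str], str],
                          historical_interactions: List[str],
                          persona: str, issue: str) -> str:
    return call_llm(build_customer_prompt(historical_interactions, persona, issue))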
Referring back to FIG. 2, training of the first LLM that implements the first ML model component 240 is now described.
During training, the first LLM is shown text excerpts (e.g., from profiles of various users, such as historical user interactions 220), asked to predict the next word, and then corrected on its guess. Over many iterations across the training dataset, prediction errors are progressively reduced as the first LLM adjusts its internal weights. Once trained, the first LLM can generate text by being given a prompt (e.g., prompt 410) and predicting the most statistically likely next words. The training process allows the first LLM to develop a substantial understanding of language structure and semantics. Other LLMs, discussed above and below, can be implemented in the same manner but over a different collection of text excerpts.
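As a non-limiting illustration of this next-word-prediction objective, one training step can look like the following PyTorch-style sketch; it assumes the model maps token identifiers to vocabulary logits, and the tokenization of the text excerpts is handled elsewhere.

import torch
import torch.nn.functional as F


def training_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
                  token_ids: torch.Tensor) -> float:
    """One next-token-prediction step over a batch of tokenized text excerpts (batch, seq_len)."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]      # predict each next token
    logits = model(inputs)                                      # (batch, seq_len - 1, vocab)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                             # correct the model on its guess
    optimizer.step()                                            # adjust internal weights
    return loss.item()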
The first ML model component 240 provides the output (conversation segments) generated based on the historical user interactions 220 and the prompt 410 to the virtual agent component 250. The virtual agent component 250 provides the conversation segment to a generative machine learning model (e.g., an LLM) to generate a customized, personalized, and intelligent response message with a solution to the issue raised in the conversation segments. The first ML model component 240 processes the response message and generates an updated conversation segment to provide to the virtual agent component 250. This back-and-forth interaction between the first ML model component 240 and the virtual agent component 250 can be stored as one or more simulated conversations between virtual customers (implemented by the first ML model component 240) and the virtual agent component 250. In some cases, additional prompts can be provided to the first ML model component 240 with other portions of the historical user interactions 220 to generate additional simulated conversations. After a sufficient number (e.g., a threshold number) of simulated conversations (or simulated dialogues) are stored, the communication session system 150 proceeds to score the simulated conversations based on various criteria. The scores can then be used to update parameters of the virtual agent component 250 to improve performance and solutions generated by the virtual agent component 250.
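The back-and-forth interaction described above can be summarized with the following loop; both callables are hypothetical wrappers around the first ML model component 240 and the virtual agent component 250, and the stopping heuristic and turn limit are assumptions.

from typing import Callable, List, Tuple

Transcript = List[Tuple[str, str]]  # (speaker, message) pairs


def simulate_conversation(customer_llm: Callable[[Transcript], str],
                          agent_llm: Callable[[Transcript], str],
                          max_turns: int = 10) -> Transcript:
    """Alternate between the simulated customer and the virtual agent, recording each turn."""
    transcript: Transcript = []
    for _ in range(max_turns):
        customer_msg = customer_llm(transcript)        # first ML model plays the customer
        transcript.append(("simulated_customer", customer_msg))
        agent_msg = agent_llm(transcript)              # virtual agent responds with a solution
        transcript.append(("virtual_agent", agent_msg))
        if "issue resolved" in agent_msg.lower():      # naive stopping heuristic
            break
    return transcript


def collect_simulations(run_one: Callable[[], Transcript], threshold: int) -> List[Transcript]:
    """Accumulate simulated dialogues until the threshold number is reached."""
    return [run_one() for _ in range(threshold)]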
For example, as shown in the example 500 of FIG. 5, the first ML model component 240, acting as a simulated customer, exchanges messages with the virtual agent 530 to generate simulated conversations that are stored for subsequent scoring.
As shown in the example 600 of FIG. 6, a portion 610 of the stored simulated conversations is provided to a virtual judge 620, which generates a score 630 for each simulated conversation based on one or more criteria.
In some cases, the virtual judge 620 is implemented by a second machine learning model. The second machine learning model can also be an LLM that receives a prompt that includes one of the simulated conversations from the portion 610, the scoring guidelines (based on the above criteria), and an instruction to act as a human judge that scores the quality of the simulated conversation. In some cases, the prompt also includes similar simulated or real-world conversations and the corresponding scores associated with those conversations to use as a guide or model in generating the new score for the simulated conversation.
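One way to assemble such a judging prompt is sketched below; the guideline text, the few-shot examples, and the call_llm callable are assumptions, and the sketch further assumes the judge replies with a bare numeric score.

from typing import Callable, List, Tuple


def build_judge_prompt(conversation: str, guidelines: str,
                       scored_examples: List[Tuple[str, float]]) -> str:
    """Prompt a second LLM to act as a human judge of conversation quality."""
    shots = "\n".join(f"Conversation:\n{text}\nScore: {score}" for text, score in scored_examples)
    return (
        "Act as a human judge scoring the quality of a customer support conversation.\n"
        f"Scoring guidelines:\n{guidelines}\n"
        f"Reference conversations with their scores:\n{shots}\n"
        f"Conversation to score:\n{conversation}\n"
        "Reply with a single numeric score."
    )


def judge_score(call_llm: Callable[[str], str], conversation: str, guidelines: str,
                scored_examples: List[Tuple[str, float]]) -> float:
    return float(call_llm(build_judge_prompt(conversation, guidelines, scored_examples)))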
In some cases, the virtual judge 620 includes a convolutional neural network (CNN) that is trained based on labeled training data. In such cases, the virtual judge 620 can receive or access training data including a plurality of training conversations between the virtual agent and one or more users and corresponding ground truth training scores. The virtual judge 620 processes the training data by the CNN to predict a training score for an individual training conversation of the plurality of training conversations and computes a deviation between the training score and the ground truth training score corresponding to the individual training conversation. The virtual judge 620 updates one or more parameters of the CNN based on the computed deviation and repeats this training process for another set of the training data until a stopping criterion is reached or all of the training data has been processed.
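A minimal sketch of this training procedure is shown below, assuming each training conversation has already been encoded as a fixed-size tensor of embedded turns; the network shape, loss choice, and stopping rule (a fixed number of passes) are illustrative assumptions.

import torch
import torch.nn as nn


class ConversationScorer(nn.Module):
    """Toy CNN that maps an embedded conversation to a single quality score."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, embed_dim, seq_len) embedded conversation turns
        h = torch.relu(self.conv(x)).mean(dim=-1)  # pool over the conversation
        return self.head(h).squeeze(-1)            # predicted score per conversation


def train_judge(model: ConversationScorer, batches, epochs: int = 5, lr: float = 1e-3):
    """batches yields (conversations, ground_truth_scores) tensor pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                          # deviation between prediction and ground truth
    for _ in range(epochs):                         # simple stopping criterion: fixed passes
        for conversations, ground_truth_scores in batches:
            predicted = model(conversations)
            deviation = loss_fn(predicted, ground_truth_scores)
            optimizer.zero_grad()
            deviation.backward()
            optimizer.step()                        # update one or more parameters of the CNN
    return model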
In some cases, the virtual judge 620 determines whether the score 630 of a particular simulated conversation transgresses a score threshold. If so, the virtual judge 620 presents the simulated conversation and the score 630 to a human 640. The human 640 can review the score 630 and supplement or modify the score to generate an updated score 650. If the updated score 650 transgresses a threshold, the updated score 650 is used to control whether the corresponding simulated conversation is used to modify or update parameters of the virtual agent 530.
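A possible gating of this review step is sketched below; it treats “transgresses” as “exceeds,” and the thresholds, callables, and return shapes are illustrative assumptions.

from typing import Callable, List, Tuple


def review_scores(scored: List[Tuple[str, float]], score_threshold: float,
                  human_review: Callable[[str, float], float]) -> List[Tuple[str, float]]:
    """Escalate conversations whose automatic score exceeds the threshold to a human reviewer."""
    finalized = []
    for conversation, score in scored:
        if score > score_threshold:
            score = human_review(conversation, score)  # human may supplement or modify the score
        finalized.append((conversation, score))
    return finalized


def select_for_training(finalized: List[Tuple[str, float]], update_threshold: float) -> List[str]:
    """Keep only conversations whose (possibly updated) score clears the update threshold."""
    return [conversation for conversation, score in finalized if score > update_threshold]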
In some examples, the virtual judge 620 aggregates (combines or averages) multiple scores 630 of multiple simulated conversations. The simulated conversations whose respective scores 630 correspond to the aggregated scores can then be used to modify or update parameters of the virtual agent 530.
In some cases, a difficulty level can be assigned or determined for a particular simulated conversation. The virtual judge 620 determines if the difficulty level transgresses a difficulty threshold. In response to determining that the difficulty level transgresses the difficulty threshold, the virtual judge 620 uses the CNN to generate the score for the simulated conversation. In some cases, in response to determining that the difficulty level fails to transgress the difficulty threshold, the virtual judge 620 uses an LLM to generate the score for the simulated conversation. In some cases, the virtual judge 620 uses the CNN to generate a first score associated with a first criterion and uses the LLM to generate a second score associated with a second criterion for an individual simulated conversation. In this way, the score 630 generated for the simulated conversation can be made up of scores generated by different system components (e.g., a CNN, an LLM, and/or a human).
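The routing between the two judge models, and the combination of per-criterion scores, can be sketched as follows; this again treats “transgresses” as “exceeds,” and the simple averaging of criterion scores is an assumption.

from typing import Callable, Dict


def score_conversation(conversation: str, difficulty: float, difficulty_threshold: float,
                       cnn_judge: Callable[[str], float],
                       llm_judge: Callable[[str], float]) -> float:
    """Route difficult conversations to the CNN judge and easier ones to the LLM judge."""
    if difficulty > difficulty_threshold:
        return cnn_judge(conversation)
    return llm_judge(conversation)


def combined_score(conversation: str,
                   judges_by_criterion: Dict[str, Callable[[str], float]]) -> float:
    """Combine scores from different components (e.g., a CNN for one criterion, an LLM for another)."""
    scores = [judge(conversation) for judge in judges_by_criterion.values()]
    return sum(scores) / len(scores)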
At operation 705, the communication session system 150 establishes a communication session with a virtual agent of a listing network platform, as discussed above.
At operation 710, the communication session system 150 generates, by a first machine learning model, conversation data representing a customer support issue associated with the listing network platform, as discussed above.
At operation 715, the communication session system 150 transmits, by the first machine learning model, at least a portion of the conversation data to the virtual agent via the communication session, as discussed above.
At operation 720, the communication session system 150 receives one or more responses to the at least the portion of the conversation data from the virtual agent in the communication session, as discussed above.
At operation 725, the communication session system 150 stores a simulated conversation comprising the at least the portion of the conversation data generated by the first machine learning model and the one or more responses received from the virtual agent, as discussed above.
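Tying operations 705 through 725 together, the overall flow can be expressed with the following sketch; the session object and each callable are hypothetical placeholders rather than interfaces defined by this disclosure.

from typing import Callable, List


def run_method(open_session: Callable[[], object],
               generate_conversation_data: Callable[[], str],
               send_to_agent: Callable[[object, str], List[str]],
               store_simulation: Callable[[str, List[str]], None]) -> None:
    session = open_session()                               # 705: establish the communication session
    conversation_data = generate_conversation_data()       # 710: first ML model generates the issue
    responses = send_to_agent(session, conversation_data)  # 715/720: transmit data and receive responses
    store_simulation(conversation_data, responses)         # 725: persist the simulated conversation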
Example 1. A method comprising: establishing a communication session with a virtual agent of a listing network platform; generating, by a first machine learning model, conversation data representing a customer support issue associated with the listing network platform; transmitting, by the first machine learning model, at least a portion of the conversation data to the virtual agent via the communication session; receiving one or more responses to the at least the portion of the conversation data from the virtual agent in the communication session; and storing a simulated conversation comprising the at least the portion of the conversation data generated by the first machine learning model and the one or more responses received from the virtual agent.
Example 2. The method of Example 1, wherein the first machine learning model comprises a first large language model (LLM).
Example 3. The method of any one of Examples 1-2, wherein the communication session comprises an interactive voice response (IVR) communication session.
Example 4. The method of any one of Examples 1-3, wherein the communication session comprises an online chat communication session.
Example 5. The method of any one of Examples 1-4, wherein the first machine learning model is trained to simulate a virtual customer.
Example 6. The method of any one of Examples 1-5, wherein the conversation data corresponds to an individual customer personality of a plurality of customer personalities that the first machine learning model is trained to represent.
Example 7. The method of any one of Examples 1-6, wherein the first machine learning model comprises a large language model (LLM), further comprising: accessing a plurality of historical customer support tickets comprising a plurality of historical conversations between a plurality of users and the virtual agent; and training the LLM based on the plurality of historical customer support tickets to generate the conversation data representing the customer support issue associated with the listing network platform.
Example 8. The method of Example 7, further comprising: generating a prompt comprising an instruction to the LLM to leverage the plurality of historical customer support tickets to generate the conversation data representing the customer support issue associated with the listing network platform, the customer support issue representing at least one customer support issue specified in at least one of the plurality of historical conversations.
Example 9. The method of Example 8, wherein the prompt comprises a type of personality of a plurality of personalities to use in order to control a tone associated with the conversation data.
Example 10. The method of any one of Examples 1-9, further comprising: scoring the simulated conversation based on one or more criteria.
Example 11. The method of Example 10, further comprising: updating one or more parameters of the virtual agent in response to scoring the simulated conversation based on the one or more criteria.
Example 12. The method of any one of Examples 10-11, further comprising: analyzing the simulated conversation by a second machine learning model to generate the score based on the one or more criteria, the second machine learning model comprising a virtual judge.
Example 13. The method of Example 12, wherein the second machine learning model comprises a large language model (LLM), further comprising: accessing the one or more criteria, the one or more criteria being associated with instructions for assigning a score to the one or more criteria; accessing one or more training scores generated, using the one or more criteria, for one or more training conversations between the virtual agent and one or more users; and generating a prompt with an instruction for the LLM to generate the score based on the one or more criteria, the one or more training conversations, and the one or more training scores.
Example 14. The method of any one of Examples 12-13, wherein the second machine learning model comprises a convolutional neural network (CNN).
Example 15. The method of Example 14, further comprising training the CNN by performing training operations comprising: accessing training data comprising a plurality of training conversations between the virtual agent and one or more users and corresponding ground truth training scores; processing the training data by the CNN to predict a training score for an individual training conversation of the plurality of training conversations; computing a deviation between the training score and the ground truth training score corresponding to the individual training conversation; and updating one or more parameters of the CNN based on the computed deviation.
Example 16. The method of any one of Examples 10-15, further comprising: aggregating scores associated with multiple simulated conversations between the virtual agent and the first machine learning model; and updating one or more parameters of the virtual agent based on the aggregated scores.
Example 17. The method of any one of Examples 10-16, further comprising: presenting the scored simulated conversation in a graphical user interface; and receiving input from a user that updates one or more scores of the scored simulated conversation.
Example 18. A system comprising: one or more processors of a machine; and a memory storing instructions that, when executed by the one or more processors, cause the machine to perform operations comprising: establishing a communication session with a virtual agent of a listing network platform; generating, by a first machine learning model, conversation data representing a customer support issue associated with the listing network platform; transmitting, by the first machine learning model, at least a portion of the conversation data to the virtual agent via the communication session; receiving one or more responses to the at least the portion of the conversation data from the virtual agent in the communication session; and storing a simulated conversation comprising the at least the portion of the conversation data generated by the first machine learning model and the one or more responses received from the virtual agent.
Example 19. The system of Example 18, wherein the first machine learning model comprises a first large language model (LLM).
Example 20. A machine-readable storage device embodying instructions that, when executed by a machine, cause the machine to perform operations comprising: establishing a communication session with a virtual agent of a listing network platform; generating, by a first machine learning model, conversation data representing a customer support issue associated with the listing network platform; transmitting, by the first machine learning model, at least a portion of the conversation data to the virtual agent via the communication session; receiving one or more responses to the at least the portion of the conversation data from the virtual agent in the communication session; and storing a simulated conversation comprising the at least the portion of the conversation data generated by the first machine learning model and the one or more responses received from the virtual agent.
In various implementations, the operating system 804 manages hardware resources and provides common services. The operating system 804 includes, for example, a kernel 820, services 822, and drivers 824. The kernel 820 acts as an abstraction layer between the hardware and the other software layers, consistent with some examples. For example, the kernel 820 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 822 can provide other common services for the other software layers. The drivers 824 are responsible for controlling or interfacing with the underlying hardware, according to some examples. For instance, the drivers 824 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth.
In some examples, the libraries 806 provide a low-level common infrastructure utilized by the applications 810. The libraries 806 can include system libraries 830 (e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries 806 can include API libraries 832 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 806 can also include a wide variety of other libraries 834 to provide many other APIs to the applications 810.
The frameworks 808 provide a high-level common infrastructure that can be utilized by the applications 810, according to some examples. For example, the frameworks 808 provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks 808 can provide a broad spectrum of other APIs that can be utilized by the applications 810, some of which may be specific to a particular operating system or platform.
In an example, the applications 810 include a home application 850, a contacts application 852, a browser application 854, a book reader application 856, a location application 858, a media application 860, a messaging application 862, a game application 864, and a broad assortment of other applications such as a third-party application 866. According to some examples, the applications 810 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 810, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, Kotlin, Ruby, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 866 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 866 can invoke the API calls 812 provided by the operating system 804 to facilitate the functionality described herein.
The machine 900 may include processors 910, memory 930, and I/O components 950, which may be configured to communicate with each other such as via a bus 902. In an example, the processors 910 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 912 and a processor 914 that may execute the instructions 916. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although FIG. 9 shows multiple processors 910, the machine 900 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
The memory 930 may include a main memory 932, a static memory 934, and a storage unit 936, all accessible to the processors 910 such as via the bus 902. The main memory 932, the static memory 934, and storage unit 936 store the instructions 916 embodying any one or more of the methodologies or functions described herein. The instructions 916 may also reside, completely or partially, within the main memory 932, within the static memory 934, within the storage unit 936, within at least one of the processors 910 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 900.
The I/O components 950 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 950 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 950 may include many other components that are not shown in FIG. 9.
In further examples, the I/O components 950 may include biometric components 956, motion components 958, environmental components 960, or position components 962, among a wide array of other components. For example, the biometric components 956 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 958 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 960 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 962 may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication may be implemented using a wide variety of technologies. The I/O components 950 may include communication components 964 operable to couple the machine 900 to a network 980 or devices 970 via a coupling 982 and a coupling 972, respectively. For example, the communication components 964 may include a network interface component or another suitable device to interface with the network 980. In further examples, the communication components 964 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 970 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
Moreover, the communication components 964 may detect identifiers or include components operable to detect identifiers. For example, the communication components 964 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 964, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
The various memories (e.g., 930, 932, 934, and/or memory of the processor(s) 910) and/or storage unit 936 may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 916), when executed by processor(s) 910, cause various operations to implement the disclosed examples.
As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below.
In various examples, one or more portions of the network 980 may be an ad hoc network, an intranet, an extranet, a VPN, an LAN, a WLAN, a WAN, a WWAN, an MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 980 or a portion of the network 980 may include a wireless or cellular network, and the coupling 982 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 982 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth-generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
The instructions 916 may be transmitted or received over the network 980 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 964) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 916 may be transmitted or received using a transmission medium via the coupling 972 (e.g., a peer-to-peer coupling) to the devices 970. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 916 for execution by the machine 900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals.
Any biometric data collected by the biometric components is captured and stored only with user approval and deleted on user request. Further, such biometric data may be used for very limited purposes, such as identification verification. To ensure limited and authorized use of biometric information and other personally identifiable information (PII), access to this data is restricted to authorized personnel only, if at all. Any use of biometric data may strictly be limited to identification verification purposes, and the data is not shared or sold to any third party without the explicit consent of the user. In addition, appropriate technical and organizational measures are implemented to ensure the security and confidentiality of this sensitive information.