The present application generally relates to a machine learning technique that involves the use of a type of neural network referred to as a Transformer for encoding sequences of text to generate encoded representations of the texts that are then used as inputs to a deep neural network configured to generate a ranking score for an online job posting.
Many search systems utilize algorithms that involve two steps. First, a user-specified search query is received and processed to identify a set of candidate search results. Next, various attributes of the search results, information relating to the end-user who provided the search query, and the query itself are used as inputs to a ranking system to rank the various search results so that those deemed most relevant can be presented most prominently. Accordingly, the text of the search query and text associated with the search results play an important role in ensuring that relevant search results are identified and appropriately ranked for presentation to the end-user. For example, consider an online job hosting service that provides employers with the ability to post online job postings, while offering jobseekers a search capability that allows for specifying a text-based query to search for relevant job postings. The text that is entered by a jobseeker for use as a search query reveals the jobseeker's intent, while the text of the job title and the text of the company name provide important signals about the relevance of any individual job posting. Many conventional search engines rely exclusively on simple text-based matching algorithms to process jobseeker search queries and to rank online job postings. However, these text-based matching systems frequently fail to efficiently and accurately capture the true intent of the jobseeker. By way of example, if a jobseeker specifies a search query consisting of the text, “software engineer,” a text-based matching algorithm may fail to accurately identify and rank relevant job postings that use alternative language, such as “computer programmer” or “application developer,” despite the fact that such job postings may be relevant and of interest to the jobseeker. With many types of ranking systems, the use of input features that are derived from text can significantly contribute to the success of a machine learned ranking model.
Embodiments of the present invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:
Described herein are methods and systems for using a type of neural network referred to as a Transformer encoder to encode sequences of text for use as input features to a deep neural network that has been configured to output a ranking score for an online job posting. Specifically, the present disclosure describes techniques for generating an encoded representation of a sequence of words—such as a user-specified search query. The encoded representation of the words is then used as an input feature to a deep neural network which, based on a variety of input features in addition to the encoded representation of the search query, outputs a ranking score for an online job posting. In the following description, for purposes of explanation, numerous specific details and features are set forth in order to provide a thorough understanding of the various aspects of different embodiments of the present invention. It will be evident, however, to one skilled in the art, that the present invention may be practiced and/or implemented with varying combinations of the many details and features presented herein.
A variety of natural language processing techniques use deep learning models to learn the meaning of text inputs. Generally, such techniques involve vector space models that represent words using low-dimensional vectors called embeddings. To apply vector space models to sequences of words, it is necessary to first select an appropriate composition function, which is a mathematical process for combining multiple words into a single vector. Composition functions fall into two classes. The first class may be referred to as unordered functions, because the input text is treated as a bag of word embeddings without consideration of word order. The second class may be referred to as syntactic or semantic composition functions, which take into account word order and sentence structure. Sequence modeling techniques are examples of syntactic composition functions. While both classes have proven effective, syntactic composition functions have been shown to outperform unordered composition functions in a variety of tasks.
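By way of a purely illustrative example, an unordered composition function may be as simple as averaging the word embeddings of a sequence, discarding word order entirely. The following Python sketch assumes the word embeddings are available as NumPy vectors; it is not part of the disclosed system.

```python
import numpy as np

def unordered_composition(word_embeddings):
    """Bag-of-embeddings composition: the mean of the word vectors, ignoring word order."""
    return np.mean(word_embeddings, axis=0)

# Because order is discarded, "engineer software" and "software engineer" map to the same
# vector -- precisely the information a syntactic composition function preserves.
```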
However, syntactic composition functions tend to be more complex than their unordered counterparts, requiring significantly more training time. Furthermore, syntactic composition functions are prohibitively expensive in the case of huge datasets, in situations in which computing resources are limited, and in online serving, where the latency associated with inference time is a driving factor. Many of the best performing syntactic composition functions are based on complex recurrent neural networks or convolutional neural networks. Due to the sequential manner in which these neural networks process their inputs, they do not allow for parallelization during training, and thus require significant computational resources to operate effectively.
A relatively new type of deep learning model referred to as a Transformer has been designed to handle sequential data (e.g., sequences of text), while allowing for parallel processing of the sequential data. For instance, if the input data is a sequence of words, the Transformer does not need to process the beginning of the sequence prior to processing the end of the sequence, as is the case with other types of neural networks. As a result, with Transformers, the parallelization of computational operations results in reduced training times and significant efficiency improvements with larger datasets. Like other neural networks, Transformers are frequently implemented with multiple layers, such that the computational complexity per layer is a function of the length of the input sequence and the dimension of the vector representation of the input tokens (e.g., the word embeddings). The table immediately below provides a per-layer comparison of some key metrics for layers of different model types, including a Transformer, a Recurrent Neural Network, and a Convolutional Neural Network. These key metrics are expressed in terms of the length of the input sequence (“n”), the dimension of the vector representation of each input token (“d”), and the size of the kernel matrix used in convolutional operations at each layer of a convolutional network (“k”). The first metric is the total computational complexity per layer. The second metric relates to the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required by each layer type. Finally, the third metric is the maximum path length between long-range dependencies in the network. One of the key factors impacting the ability to learn such dependencies is the length of the paths that forward and backward signals have to traverse in the network. The shorter the path, the easier it is to learn long-range dependencies. Thus, the maximum path length represents the maximum length of the path between any input position and any output position in networks composed of the different layer types.
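The comparison described above corresponds to the following per-layer values, reproduced here in the form commonly reported in the Transformer literature; the original table may state these values differently.

Layer type (one layer)          Complexity per layer    Sequential operations    Maximum path length
Self-attention (Transformer)    O(n^2 * d)              O(1)                     O(1)
Recurrent                       O(n * d^2)              O(n)                     O(n)
Convolutional                   O(k * n * d^2)          O(1)                     O(log_k(n))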
As is evident from the table, Transformers provide advantages over other network types inasmuch as the number of sequential operations and the maximum path length per layer are each constant at one per layer. This is a result of the Transformer's ability, at each layer, to receive and process a sequence of text in parallel. Nonetheless, the computational complexity per layer is a function of the length of the input sequence (“n”) and the dimension (“d”) of the vector representation. Consequently, implementing a Transformer in an online context where the latency associated with inference time is a primary concern remains a challenge. In particular, using Transformers in an online service context where latency is a driving factor, such as ranking search results, is problematic, particularly when the sequences of input text are lengthy.
Consistent with embodiments of the present invention, Transformer encoders are used to encode text as part of a machine learned model where a learning to rank approach is taken for ranking online job postings. The Transformers encode certain sequences of input text, and the encoded representations are provided to a deep neural network as input features, which, in combination with a variety of other input features, are used by the deep neural network to generate a ranking score for an online job posting. Consistent with some embodiments, the sequences of text that are encoded by the Transformers include the user-specified search query, the job title of a job posting, and the company name associated with a job posting. In other embodiments, other types of text might also be encoded with Transformers.
Consistent with embodiments of the present invention, an end-user specified search query is first received at a search engine of an online service. The search query is processed to identify an initial set of candidate job postings that satisfy the search query. For example, a search-based matching algorithm may use terms of the user-specified search query, individually and/or in combination, to identify job postings that include the same or similar terms. The result of processing the search query is a set of candidate job postings. Then, a machine learned model is used to generate a ranking score for each job posting in the set of candidate job postings. During the ranking stage, when the model is generating a ranking score for a particular job posting, Transformer encoders are used to encode sequences of text that correspond with the end-user specified search query, the job title of the particular job posting, and the company name of the company associated with the particular job posting. Finally, after a ranking score has been derived for all of the candidate job postings, a subset of the highest-ranking candidate job postings is selected for presentation to the end-user in a search results user interface.
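The two-stage flow described above can be summarized with the following Python sketch; the function and attribute names (retrieve_candidates, encode_with_transformer, ranking_model, posting.title, posting.company_name) are hypothetical placeholders used only to illustrate the sequence of operations.

```python
# Hypothetical sketch of the retrieve-then-rank flow described above; all names are placeholders.
def search_job_postings(query, user, retrieve_candidates, encode_with_transformer,
                        ranking_model, top_k=25):
    # Stage 1: a text-based matching algorithm identifies an initial candidate set.
    candidates = retrieve_candidates(query)

    # Stage 2: a machine-learned model derives a ranking score for each candidate posting.
    query_vec = encode_with_transformer(query)
    scored = []
    for posting in candidates:
        title_vec = encode_with_transformer(posting.title)
        company_vec = encode_with_transformer(posting.company_name)
        score = ranking_model(query_vec, title_vec, company_vec, user, posting)
        scored.append((score, posting))

    # Select the highest-ranking subset for the search results user interface.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [posting for _, posting in scored[:top_k]]
```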
Consistent with some embodiments, to reduce the overall complexity in implementing the Transformer encoders and to ensure latency requirements are satisfied with respect to the inference time, a maximum sequence length of each type of input—for example, search query, job title, and company name—is first determined by performing offline analysis. For example, with respect to search queries, a distribution of the frequency of the length of historical search queries processed by the online service may be analyzed to establish a maximum text input sequence length (e.g., an input length threshold) that will ensure coverage for some high percentage of search queries. For instance, the input length threshold for search queries may be selected to ensure that some high percentage, or range of percentages (e.g., 95%, or 90-98%) of all historical search queries processed over some prior duration of time would fall within the limit—that is, would have sequence lengths that do not exceed the input length threshold. This same analysis is done for all text input types, including search queries, job titles, and company names. Accordingly, the input length threshold for each text input type may vary by input type. Consistent with some embodiments, based on the offline analyses, the input length threshold for the length of a search query may be set to a value (e.g., eight words) that falls within a range of values, such as six to ten words. Similarly, the input length threshold for a job title may be selected to fall within the range of twelve to eighteen words, while the input length threshold for a company name may be selected to fall within the range of eight to twelve words.
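One plausible way to implement the offline analysis described above is to take a high percentile of the historical length distribution for each text input type. The NumPy sketch below is illustrative only and assumes the historical texts are available as lists of strings.

```python
import numpy as np

def input_length_threshold(historical_texts, coverage=0.95):
    """Pick a maximum sequence length that covers `coverage` of historical inputs."""
    lengths = [len(text.split()) for text in historical_texts]
    # e.g., the 95th percentile of input lengths, rounded up to a whole token count
    return int(np.ceil(np.percentile(lengths, coverage * 100)))

# Hypothetical usage: a separate threshold per text input type.
# query_threshold   = input_length_threshold(historical_queries)        # e.g., ~8 words
# title_threshold   = input_length_threshold(historical_job_titles)     # e.g., 12-18 words
# company_threshold = input_length_threshold(historical_company_names)  # e.g., 8-12 words
```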
In addition to reducing the computational complexity, and thus latency, of the Transformers by establishing a maximum text input length for each type of text input to be encoded by a Transformer, various hyperparameters of the Transformer are selected to ensure optimal performance of the overall ranking system. For example, with some embodiments, each Transformer used for encoding each text input type is configured to operate with a single layer having a fixed number of attention heads (e.g., ten) and feed forward mechanisms. By using the Transformers to encode the search query, job title, and company name, the overall ranking of the job postings is advantageously improved, providing an overall better experience for the job-seeking end-user. Various other advantages of embodiments of the invention will be readily apparent from the description of the figures that follows.
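For illustration only, a single-layer Transformer encoder of the kind described, with ten attention heads operating over fixed-size token embeddings, might be instantiated in PyTorch roughly as follows; the 100-dimensional embedding size and the feed-forward width are assumptions consistent with the examples in this disclosure rather than a definitive configuration.

```python
import torch
import torch.nn as nn

d_model = 100   # dimension of each pre-trained word embedding (example value)
n_heads = 10    # fixed number of attention heads, per the example above

encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model,
    nhead=n_heads,
    dim_feedforward=4 * d_model,  # feed-forward width; an assumed value
    batch_first=True,
)
# A single encoder layer keeps per-query inference cost low.
encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)

# tokens: (batch, seq_len, d_model); padding_mask: True at positions that should be ignored.
tokens = torch.randn(2, 8, d_model)
padding_mask = torch.zeros(2, 8, dtype=torch.bool)
encoded = encoder(tokens, src_key_padding_mask=padding_mask)  # shape: (2, 8, 100)
```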
As illustrated in
An application logic layer may include one or more application server modules 206, which, in conjunction with the user interface module(s) 202, generate various user interfaces (e.g., web pages) with data retrieved from various data sources in a data layer. Consistent with some embodiments, individual application server modules 206 implement the functionality associated with various applications and/or services provided by the online system/service 200. For instance, the application logic layer may include a variety of applications and services, including an online job hosting service 208, via which end-users provide information about available jobs (e.g., job postings), which are stored as job postings in job postings database 214. Additionally, the application logic layer may include a search engine 210, via which end-users perform searches for online job postings. Other applications may include a job recommendation application, an online course recommendation application, and an end-user profile update service. These applications and services are provided as examples and are not meant to be an exhaustive listing of all applications and services that may be integrated with and provided as part of an online service. For example, although not shown in
As shown in
Once registered, an end-user may invite other end-users, or be invited by other end-users, to connect via the online service/system 200. A “connection” may constitute a bilateral agreement by the end-users, such that both end-users acknowledge the establishment of the connection. Similarly, with some embodiments, an end-user may elect to “follow” another end-user. In contrast to establishing a connection, the concept of “following” another end-user typically is a unilateral operation and, at least with some embodiments, does not require acknowledgement or approval by the end-user that is being followed. When one end-user follows another, the end-user may receive status updates relating to the other end-user, or other content items published or shared by the other end-user who is being followed. Similarly, when an end-user follows an organization, the end-user becomes eligible to receive status updates relating to the organization as well as content items published by, or on behalf of, the organization. For instance, content items published on behalf of an organization that an end-user is following may appear in the end-user's personalized feed, sometimes referred to as a news feed. In any case, the various associations and relationships that the end-users establish with other end-users, or with other entities (e.g., companies, schools, organizations) and objects (e.g., metadata hashtags (“#topic”) used to tag content items), are stored and maintained within the profile and social graph in a social graph database 216. As shown in
As end-users interact with the various content items that are presented via the applications and services of the online social networking system 200, the end-users' interactions and behaviors (e.g., content viewed, links or buttons selected, messages responded to, job postings viewed, job applications submitted, etc.) are tracked by the end-user interaction detection module 204, and information concerning the end-users' activities and behaviors may be logged or stored, for example, as indicated in
Consistent with some embodiments, data stored in the various databases of the data layer may be accessed by one or more software agents or applications executing as part of a distributed data processing service 224, which may process the data to generate derived data. The distributed data processing service 224 may be implemented using Apache Hadoop® or some other software framework for the processing of extremely large data sets. Accordingly, an end-user's profile data and any other data from the data layer may be processed (e.g., in the background or offline) by the distributed data processing service 224 to generate various derived profile data. As an example, if an end-user has provided information about various job titles that the end-user has held with the same organization or different organizations, and for how long, this profile information can be used to infer or derive an end-user profile attribute indicating the end-user's overall seniority level or seniority level within a particular organization. This derived data may be stored as part of the end-user's profile or may be written to another database.
In addition to generating derived attributes for end-users' profiles, one or more software agents or applications executing as part of the distributed data processing service 224 may ingest and process data from the data layer for the purpose of generating training data for use in training various machine-learned models, and for use in generating features for use as input to the trained models. For instance, profile data, social graph data, and end-user activity and behavior data, as stored in the databases of the data layer, may be ingested by the distributed data processing service 224 and processed to generate data properly formatted for use as training data for training any one of the machine-learned models described herein. Similarly, the data may be processed for the purpose of generating features for use as input to the machine-learned models when ranking job postings. Once the derived data and features are generated, they are stored in a database 212, where such data can easily be accessed via calls to a distributed database service. As end-users perform searches of the online job postings, and then interact with the search results, for example, by selecting various individual job postings from the search results user interface, the selections are logged by the end-user interaction detection module. Accordingly, the end-user selections, which may be referred to as click-data, can be used in training the machine learned model used in ranking job postings.
Consistent with some embodiments of the invention, the search engine 210 may be implemented to include a query processing component 210-A, a broker 210-B and one or more search agents 210-C. When the search engine 210 receives a request to process a search query for online job postings on behalf of an end-user, the query processing component 210-A processes the received search query prior to performing the actual search for online job postings. By way of example, the query processing component 210-A may enhance the search query by adding information, including additional search terms, to the query. For instance, such information may relate to one or more user profile attributes of the end-user, and/or may relate to prior activity undertaken by the end-user (e.g., past searches, previously viewed job postings, previously viewed company profiles, and so forth). Moreover, the query processing component 210-A may expand the user-provided search query by adding search terms that are synonymous with a term provided with the initial search query by the end-user. In some instances, one or more search terms provided by the end-user may be analyzed to determine whether it matches a term or phrase in a taxonomy accessible to the search engine 210. For example, in some instances, a search term may match a particular skill included in a taxonomy of skills. The query may be expanded to include similar skills, as identified by referencing the skill taxonomy, which may group or categorize skills by similarity.
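A simplified sketch of the kind of query expansion performed by the query processing component 210-A might look as follows; the synonym map and skill taxonomy lookups are hypothetical stand-ins for whatever taxonomy the search engine 210 actually references.

```python
def expand_query(raw_query, synonyms, skill_taxonomy):
    """Add synonymous terms and similar skills from a taxonomy to the user's query."""
    terms = raw_query.lower().split()
    expanded = list(terms)
    for term in terms:
        # Synonym expansion, e.g. "developer" -> "programmer".
        expanded.extend(synonyms.get(term, []))
        # Taxonomy expansion: if a term matches a known skill, add similar skills.
        expanded.extend(skill_taxonomy.get(term, []))
    # De-duplicate while preserving the original term order.
    return list(dict.fromkeys(expanded))

# Hypothetical usage:
# expand_query("java developer",
#              synonyms={"developer": ["programmer", "engineer"]},
#              skill_taxonomy={"java": ["kotlin", "scala"]})
```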
Once the query processing component 210-A has concluded its operation and a final enhanced query has been generated based on the text of the initial user-provided search query, a broker 210-B will distribute the final query to one or more searchers or search agents 210-C. For example, with some embodiments, the final query is processed in parallel by a group of distributed search agents 210-C, with each search agent 210-C accessing a separate database of online job postings. Each search agent 210-C will execute the final query against its respective database of job postings to identify job postings that include text matching one or more of the search terms of the final query. Additionally, for each job posting identified by a search agent 210-C, the search agent 210-C will obtain a set of features for use as input to a deep neural network that has been trained to derive a ranking score for the respective job posting. Accordingly, each search agent 210-C will return a set of ranked online job postings to the broker 210-B, which will then merge the ranked job postings received from each search agent 210-C, and re-order the job postings by their respective ranking to create a final list of ranked job postings. Consistent with some embodiments, each search agent 210-C may return a set of ranked job postings, with the set including a number of job postings that falls within a predetermined range, such as between two-hundred and two-hundred fifty-five ranked results. Furthermore, a subset of ranked online job postings may be selected from the final list of ranked job postings for presentation to the end-user in a search results user interface. The results may be paginated, such that a predetermined number of search results are shown on each page, where the user interface provides navigation controls enabling the end-user to sequentially navigate and view more than one page of search results. Alternatively, the search results may be presented in a continuously scrolling user interface.
When the broker 210-B makes a call to the search agent(s) 210-C, each search agent 210-C has a set amount of time (e.g., two seconds) to return a set of ranked search results (e.g., job postings). The time limit set by the broker 210-B represents a default timeout value, after which the broker 210-B will continue processing the end-user request if it has not received a response from a search agent 210-C. Accordingly, if a search agent 210-C fails to respond within the time specified as the default timeout limit of the broker, the search results from that search agent 210-C will not be included in the response generated for the end-user request. The timeout value ensures that the end-user is not needlessly waiting in the rare circumstance that an error has occurred, or if no matching search results are available. As described in greater detail below, the ranking algorithm used to rank the search results is subject to certain latency requirements, such as the timeout value of the broker 210-B, and therefore is implemented to rank online job postings in a sufficiently fast manner.
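The broker's scatter/gather behavior, including the default timeout, might be sketched as follows using Python's asyncio; the agent interface (agent.rank) and the two-second default are illustrative assumptions based on the example above.

```python
import asyncio

async def broker_search(final_query, search_agents, timeout_seconds=2.0):
    """Fan the final query out to all search agents and merge whatever returns in time."""
    tasks = [asyncio.create_task(agent.rank(final_query)) for agent in search_agents]
    done, pending = await asyncio.wait(tasks, timeout=timeout_seconds)

    # Agents that miss the default timeout are simply excluded from the response.
    for task in pending:
        task.cancel()

    merged = []
    for task in done:
        if task.exception() is None:
            merged.extend(task.result())  # each result: a list of (score, posting) pairs

    # Re-order the combined results by ranking score to create the final list.
    merged.sort(key=lambda pair: pair[0], reverse=True)
    return merged
```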
Consistent with some embodiments, the supervised learning technique used to train the model 300 uses a listwise approach. Accordingly, the training data 304 used in training the model 300 is presented as an ordered set of search results derived for a particular end-user's previous search query. For example, an instance of training data will generally include, among other items of information, the text of a search query provided by an end-user and used to generate an ordered set of search results (e.g., online job postings), information about the end-user (e.g., user profile data), and information relating to each of several different online job postings that were presented in a search results interface. As described in greater detail below, with some embodiments, the first layer of the neural network model 300 includes one or more Transformer encoders, which receive sequences of text as input features. The input features provided to these Transformer encoders will include at least the text of the search query provided by the end-user and from which the search results were generated, the text of the job title associated with the online job posting for which a ranking score is derived, and the text of the company name of the company associated with the job posting for which the ranking score is derived. Of course, other examples of text inputs may also be used in various embodiments.
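Purely for illustration, a single listwise training instance of the kind described might be structured along the following lines; the field names and example values are hypothetical.

```python
# Hypothetical structure of one listwise training instance: the query, information about
# the searcher, and the ordered list of postings shown for that query with tracked actions.
training_instance = {
    "search_query": "machine learning engineer",
    "user_profile": {"seniority": "senior", "industry": "software"},
    "results": [
        {"job_title": "Machine Learning Engineer", "company_name": "Acme Corp",
         "action": "apply"},
        {"job_title": "Data Scientist", "company_name": "Example Inc",
         "action": "click_view"},
        {"job_title": "Software Engineer", "company_name": "Sample LLC",
         "action": "impression_no_action"},
    ],
}
```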
To ensure that the Transformer encoders can encode the text input sequences fast enough to satisfy the latency requirements of the ranking operation, the length of the text input sequence provided to each Transformer encoder is limited by design. As described below in connection with
During both training and at inference time, when the sequence of text provided as input to a Transformer encoder is less than the input length threshold for the Transformer encoder, mask padding is applied to those token positions for which there is no input text. For instance, each word or term in a sequence of terms for a search query is presumed to have a position within the sequence. By way of example, the search query, “Machine Learning Engineer” has three terms, with the term “Machine” being in the first position, the term “Learning” being in the second position, and so forth. When the length of the text input sequence is less than the input length threshold for the Transformer encoder, those input positions for which there is no corresponding text input (e.g., search term) receive a mask padding. This ensures that the Transformer encoder does not perform the self-attention operation on those input positions that do not include an actual text input. Similarly, when the length of the sequence of input text exceeds the input length threshold for a Transformer encoder, any tokens in excess of the threshold are simply discarded or ignored.
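A minimal sketch of the truncation and mask-padding behavior described above is shown below; the padding token and the mask convention (True meaning a masked, non-text position) are assumptions made for illustration.

```python
def prepare_tokens(words, input_length_threshold, pad_token="[PAD]"):
    """Truncate to the threshold, pad short inputs, and build a padding mask."""
    # Tokens beyond the input length threshold are simply discarded.
    words = words[:input_length_threshold]
    n_real = len(words)
    # Positions with no actual text receive a padding token...
    padded = words + [pad_token] * (input_length_threshold - n_real)
    # ...and a mask so self-attention is not performed on those positions.
    mask = [False] * n_real + [True] * (input_length_threshold - n_real)
    return padded, mask

# Example: the three-term query "Machine Learning Engineer" with a threshold of 8.
tokens, mask = prepare_tokens(["Machine", "Learning", "Engineer"], 8)
# tokens -> ['Machine', 'Learning', 'Engineer', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']
# mask   -> [False, False, False, True, True, True, True, True]
```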
The labels for the training data are derived based on actions taken by the end-user with respect to the several job postings presented in a set of search results. For example, if an end-user selects (e.g., clicks) a particular online job posting from the search results, in order to view the job posting, this end-user activity is tracked so the end-user's action can be used to generate the labeled training data for training the ranking model 300. Consistent with some embodiments, the labeled training data may have different weights to reflect different actions taken by the end-user. For example, a selection or viewing of a job posting may be deemed a positive action, but given less weight than other end-user actions, such as saving a selected job posting for subsequent retrieval and viewing, and/or submitting an application for a job posting. Similarly, negative labels may be generated based on an end-user being presented with a job posting in the search results, but the end-user taking no action with respect to the job posting. Each end-user action may be provided a weighting factor commensurate with its perceived importance as a signal for use in ranking job postings.
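By way of example only, the action-dependent weighting described above might be captured with a simple lookup from tracked action to label weight; the specific weight values below are assumptions and are not taken from this disclosure.

```python
# Hypothetical per-action label weights (positive actions weighted by perceived importance).
ACTION_WEIGHTS = {
    "impression_no_action": 0.0,  # presented but ignored -> negative label
    "click_view": 0.5,            # viewed the posting -> weak positive
    "save": 0.8,                  # saved for later viewing -> stronger positive
    "apply": 1.0,                 # submitted an application -> strongest positive
}

def training_label(action):
    """Map a tracked end-user action to a weighted training label."""
    return ACTION_WEIGHTS.get(action, 0.0)
```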
Referring again to
Referring again to
When the text input length of a text input sequence exceeds the input length threshold for the Transformer encoder, the words or tokens in those positions that exceed the threshold are dropped or ignored. Similarly, when the text input length of a text input sequence is less than the input length threshold, a mask padding is applied to those input positions where there is no text input. As indicated by reference number 404, after the word embedding is completed, the vector representation of the text input is embedded with a positional encoding to indicate the position of each individual token or word in the sequence of text. This is required because the text input sequence is processed by the Transformer encoder in parallel rather than sequentially, where the order of processing would itself indicate the position of each text input.
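The positional encoding step can be illustrated with the standard sinusoidal formulation from the Transformer literature; the disclosure does not mandate a particular encoding scheme, so this sketch is an assumption.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encodings, added to the word embeddings."""
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(d_model)[None, :]             # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates               # (seq_len, d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])    # sine at even dimensions
    encoding[:, 1::2] = np.cos(angles[:, 1::2])    # cosine at odd dimensions
    return encoding

# embeddings: (seq_len, d_model) word embeddings for one text input sequence.
# encoded_input = embeddings + sinusoidal_positional_encoding(*embeddings.shape)
```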
The Transformer encoder 400 of
Next, at method operation 706, for each identified job posting, a ranking score is generated using a deep neural network—a type of machine learned model. The deep neural network is provided with a variety of input features that include at least an encoded representation of the text sequence of the search query, an encoded representation of the text sequence of the job title of the job posting that is being ranked, and an encoded representation of the text sequence of a company name for a company associated with the job posting that is being ranked. Each of the various encoded representations is derived using a Transformer encoder. The length of the text input sequence (e.g., search query, job title, company name) is first compared with an input length threshold—that is, a maximum text sequence length for each Transformer encoder—to ensure that the length of the input text sequence does not exceed the input length threshold for the particular Transformer encoder. If a particular input text sequence does exceed the input length threshold, the words in positions that exceed the maximum are simply ignored. Next, each input token (word) of the sequence of input text is mapped to a pre-trained word embedding of a fixed size (e.g., 100). Mask paddings are applied to those positions where there is no text, for example, when the length of the text input is less than the input length threshold. Positional information is then encoded with each embedding before the word embeddings are provided to a single-layer Transformer encoder; the final encoded representation of the input text sequence is generated by applying an average pooling operation to the output of the Transformer encoder. The outputs of the Transformer encoders (e.g., the encoded representation of the search query, the encoded representation of the job title, and the encoded representation of the company name) are then applied as input features to the deep neural network for purposes of deriving a ranking score for a particular job posting.
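The masked average-pooling step at the end of this operation might be implemented roughly as follows in PyTorch; excluding padded positions from the average is an assumption about how the pooling would typically be done, and `encoder` refers to a single-layer Transformer encoder like the illustrative one shown earlier.

```python
import torch

def pooled_text_encoding(encoder, embeddings, padding_mask):
    """Encode one text input sequence and average-pool it into a single feature vector.

    embeddings:   (1, seq_len, d_model) word embeddings with positional encodings added.
    padding_mask: (1, seq_len) bool tensor, True at padded (no-text) positions.
    """
    hidden = encoder(embeddings, src_key_padding_mask=padding_mask)  # (1, seq_len, d_model)
    keep = (~padding_mask).unsqueeze(-1).float()                     # zero out padded slots
    # Average only over the real token positions.
    return (hidden * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)

# The pooled vectors for the search query, job title, and company name are then combined
# with the other input features and provided to the ranking neural network.
```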
Once a ranking score has been generated for each of the job postings identified by the query processing module, at method operation 708, a subset of the highest-ranking job postings is selected for presentation, and then presented, to the end-user in a search results user interface. More specifically, a server computer of the online service causes the search results user interface to be displayed at a client computing device, by generating the information that represents the user interface at the server computer and communicating the information to the client computing device.
In various implementations, the operating system 804 manages hardware resources and provides common services. The operating system 804 includes, for example, a kernel 820, services 822, and drivers 824. The kernel 820 acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel 820 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 822 can provide other common services for the other software layers. The drivers 824 are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers 824 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth.
In some embodiments, the libraries 806 provide a low-level common infrastructure utilized by the applications 810. The libraries 806 can include system libraries 830 (e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries 806 can include API libraries 832 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic context on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 806 can also include a wide variety of other libraries 834 to provide many other APIs to the applications 810.
The frameworks 808 provide a high-level common infrastructure that can be utilized by the applications 810, according to some embodiments. For example, the frameworks 808 provide various GUI functions, high-level resource management, high-level location services, and so forth. The frameworks 808 can provide a broad spectrum of other APIs that can be utilized by the applications 810, some of which may be specific to a particular operating system 804 or platform.
In an example embodiment, the applications 810 include a home application 850, a contacts application 852, a browser application 854, a book reader application 856, a location application 858, a media application 860, a messaging application 862, a game application 864, and a broad assortment of other applications, such as a third-party application 866. According to some embodiments, the applications 810 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 810, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 866 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 866 can invoke the API calls 812 provided by the operating system 804 to facilitate functionality described herein.
The machine 900 may include processors 910, memory 930, and I/O components 950, which may be configured to communicate with each other such as via a bus 902. In an example embodiment, the processors 910 (e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 912 and a processor 914 that may execute the instructions 916. The term “processor” is intended to include multi-core processors 910 that may comprise two or more independent processors 912 (sometimes referred to as “cores”) that may execute instructions 916 contemporaneously. Although
The memory 930 may include a main memory 932, a static memory 934, and a storage unit 936, all accessible to the processors 910 such as via the bus 902. The main memory 932, the static memory 934, and the storage unit 936 store the instructions 916 embodying any one or more of the methodologies or functions described herein. The instructions 916 may also reside, completely or partially, within the main memory 932, within the static memory 934, within the storage unit 936, within at least one of the processors 910 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 900.
The I/O components 950 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 950 that are included in a particular machine 900 will depend on the type of machine 900. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 950 may include many other components that are not shown in
Communication may be implemented using a wide variety of technologies. The I/O components 950 may include communication components 964 operable to couple the machine 900 to a network 980 or devices 970 via a coupling 982 and a coupling 972, respectively. For example, the communication components 964 may include a network interface component or another suitable device to interface with the network 980. In further examples, the communication components 964 may include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 970 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
Moreover, the communication components 964 may detect identifiers or include components operable to detect identifiers. For example, the communication components 964 may include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 964, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
The various memories (i.e., 930, 932, 934, and/or memory of the processor(s) 910) and/or the storage unit 936 may store one or more sets of instructions 916 and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 916), when executed by the processor(s) 910, cause various operations to implement the disclosed embodiments.
As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions 916 and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to the processors 910. Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory including, by way of example, semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate array (FPGA), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below.
In various example embodiments, one or more portions of the network 980 may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a WAN, a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 980 or a portion of the network 980 may include a wireless or cellular network, and the coupling 982 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 982 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long-Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data-transfer technology.
The instructions 916 may be transmitted or received over the network 980 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 964) and utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Similarly, the instructions 916 may be transmitted or received using a transmission medium via the coupling 972 (e.g., a peer-to-peer coupling) to the devices 970. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 916 for execution by the machine 900, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals.