Data clustering techniques can be utilized to organize and group electronic data such that data instances in the same group are more similar to each other than data instances in other groups with respect to their properties and attributes. Data clustering can be immensely useful in a variety of electronic data tasks. For instance, search and retrieval of data is a complex problem that can be facilitated by data clustering. This is especially the case for large datasets and/or when dealing with high-dimensional data. High-dimensional data includes information that can be represented by a large number of features or attributes, such as images, video, and audio data. Data clustering can organize the data to facilitate more efficient retrieval. However, clustering data can be a time- and computationally-expensive process, which can be a disadvantage when new data is regularly generated, such as, for instance, when new images are added to an image set. Therefore, complex and non-trivial issues associated with data organization remain due to the limitations of existing techniques.
Embodiments of the present technology relate to, among other things, a clustering system that provides for incremental clustering of new data (i.e., input data instances) with existing data that has already been clustered (i.e., existing data instances in existing data clusters) in a bounded manner to control compute time and resource consumption. In accordance with aspects of the technology described herein, input data instances for clustering with existing data clusters are received. Clustering is performed on the input data instances to form input data clusters. Each input data cluster is then processed to cluster the input data instances with the existing data instances. For a given input data cluster, a subset of the existing data clusters is selected based on similarity to the input data cluster. Additionally, existing data instances are sampled from the selected existing data clusters. Clustering is performed on the input data instances from the input data cluster and the sampled existing data instances to form intermediate clusters. The intermediate clusters are mapped to existing data clusters, where appropriate, based on similarity between the intermediate clusters and the existing data clusters. In some instances, an intermediate cluster is not sufficiently similar to any existing data cluster. In those instances, new clusters are added based on the input data instances of the intermediate clusters.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The present technology is described in detail below with reference to the attached drawing figures, wherein:
Data clustering techniques have a wide range of applications for processing digital data. Among other things, data clustering techniques can facilitate search and retrieval of digital data from storage. For instance, in the field of digital imaging, data clustering can be used to categorize images according to objects in those images. For example, within a group of images of people, the images can be categorized according to the faces of the people so that the images can be quickly referenced and located. Several existing techniques can be used for clustering and categorizing digital images and data in general.
Conventional data clustering techniques can be time- and computationally-expensive to perform every time new data instances are added to an existing dataset. For instance, a brute-force solution to adding new data instances to an existing dataset is to re-cluster all existing data instances with the new data instances to get a new set of clusters. However, brute-force clustering is often not practical for use in a production environment having large datasets where new data instances are regularly added, because the time and computational power needed to re-cluster all the existing data instances with the new data instances grow with every addition. One way to reduce the computational load of data clustering is to incrementally cluster newly added data instances into existing clusters. Even so, it remains difficult to incrementally cluster new data instances into a large number of existing clusters efficiently using existing techniques.
Incremental hierarchical agglomerative clustering (HAC) is one attempt to address these shortcomings of existing clustering techniques. However, incremental HAC still presents drawbacks. For instance, clustering using incremental HAC is not bounded. When the existing dataset is large, the number of samples taken from the dataset to perform clustering is likewise large. This results in unbounded compute time and resource consumption, making incremental HAC inadequate for clustering large datasets.
Embodiments of the present technology solve these problems by providing a clustering system that enables bounded incremental clustering of data instances with existing clusters. Aspects of the technology provide for input data instances to be incrementally clustered with existing clusters in a manner that provides for bounded compute time and resource consumption.
In accordance with some aspects of the technology described herein, input data instances are received for clustering with existing data instances that have already been clustered in existing data clusters of a cluster dataset. Clustering of the input data instances is performed to form input data clusters. Each input data cluster is then processed to cluster the input data instances with the existing data clusters.
For a given input data cluster, a subset of the existing data clusters that are most similar to the input data cluster is selected. Similarity can be based on, for instance, a distance measure between a representation of the input data cluster and a representation of each of the existing data clusters. Additionally, a subset of existing data instances is sampled from each of the selected existing data clusters. Clustering is performed on the input data instances of the given input data cluster and the sampled existing data instances to form intermediate clusters. The intermediate clusters are then mapped to existing data clusters of the cluster dataset, where appropriate. An intermediate cluster is mapped to an existing data cluster based on similarity, which can be based on, for instance, a distance measure between a representation of the intermediate cluster and a representation of the existing data cluster, or based on a number or proportion of existing data instances in the intermediate cluster that belong to the existing data cluster.
In some instances, an intermediate cluster is not mapped to an existing data cluster. This can occur, for instance, when the intermediate cluster is not sufficiently similar to any existing data cluster. In some configurations, the intermediate cluster is added to the cluster dataset as a new cluster. In other configurations, clustering is performed on input data instances from unmapped intermediate clusters to form new clusters, and the new clusters are added to the cluster dataset.
The technology described herein provides a number of advantages over existing approaches. For instance, aspects of the technology described herein allow for incremental addition of new data to existing clusters. In this manner, it is not necessary to re-cluster all existing data when new data is generated, and new data can be added to existing clusters more quickly and at lower computational expense as compared to existing clustering techniques. Additionally, aspects provide for bounded clustering, as each clustering run is limited to a finite number of existing data clusters selected from a cluster dataset and a finite number of existing data instances sampled from those existing data clusters. For example, if 20 existing data clusters are selected and 20 existing data instances are sampled from each, a run clusters at most 400 existing data instances plus the new data instances, regardless of the overall size of the dataset. The results from the technology described herein are functionally equivalent to those of conventional clustering and improve with each run as new data is added. Thus, this technology is well suited for flowing data, where the objective is to cluster the inflowing data on a continuous basis in a time- and resource-bounded manner. Moreover, whereas conventional clustering techniques are unbounded, the technology described herein provides for bounded clustering runs. As a result, the technology significantly reduces the clustering time, compute, and memory required in comparison to conventional clustering techniques, including incremental agglomerative clustering. For instance, if the number of data instances to be clustered is represented by n and the number of clusters is represented by k, then the time complexity of incremental agglomerative clustering is O(k·n²) and its space complexity is O(n²). For incremental agglomerative clustering, n is dependent on the existing clustered data, and can be unbounded for large amounts of data. In contrast, while the technology described herein can include multiple clustering runs, for each run n is finite and bounded, amounting to at most the new data to cluster plus a bounded sample of existing data instances. Hence, the technology described herein provides significant gains in compute time and memory resources.
With reference now to the drawings,
The system 100 is an example of a suitable architecture for implementing certain aspects of the present disclosure. Among other components not shown, the system 100 includes a user device 102 and clustering system 104. Each of the user device 102 and clustering system 104 shown in
At a high level, the clustering system 104 operates on input data instances 110 and a cluster dataset 112 of existing data clusters to add the input data instances 110 to the cluster dataset 112. The existing data clusters of the cluster dataset 112 comprise clusters formed from existing data instances. Any of a variety of different clustering algorithms can be employed to form the existing data clusters, such as, for instance, HAC or mean shift clustering. Each data instance (including each input data instance from the input data instances 110 and each existing data instance from the existing data clusters of the cluster dataset 112) comprises any type of data object or collection of data. For instance, each data instance can comprise image data, audio data, video data, document data, or any other type of data that can be clustered. For purposes of explanation, digital image processing is used as an example application of the disclosed techniques. However, the disclosed techniques are not limited to image processing, and can be used in any suitable application context on any suitable type of data set (e.g., audio data, video data, seismic data, statistical data, etc.). In some cases, each data instance can be a representation of underlying data. For instance, each data instance can comprise a vector, a fingerprint, a hash, a neural network embedding, or other representation formed from underlying data, such as an image.
As shown in
The input data clustering module 114 clusters the input data instances 110 into a number of input data clusters. The input data clustering module 114 can use any of a variety of different clustering algorithms. By way of example only and not limitation, the input data clustering module 114 can use HAC or mean shift clustering. The clustering algorithm can form clusters based on similarity determined between input data instances. The similarity can be determined, for instance, using a distance function that indicates a distance between two data instances in a given space (e.g., a vector space). By way of example only and not limitation, similarity can be determined using Euclidean distance, cosine distance, Hamming distance, or another distance measure. Data instances that are relatively close to each other (in terms of the distance between the data instances) are more similar than data instances that are relatively far away from each other.
The clustering algorithm of the input data clustering module 114 can employ a similarity threshold when forming clusters. A similarity threshold used for clustering represents a lower limit of similarity between data instances for the data instances to be clustered together. In some configurations, the clustering algorithm used by the input data clustering module 114 employs the same similarity threshold as that employed by the clustering algorithm used to cluster the existing data instances in the existing data clusters of the cluster dataset 112. In other configurations, the clustering algorithm used by the input data clustering module 114 employs a higher similarity threshold than the similarity threshold employed by the clustering algorithm used to cluster the existing data instances in the existing data clusters of the cluster dataset 112. Using a higher similarity threshold causes the input data instances to be more tightly clustered—i.e., input data instances must have a higher level of similarity to one another to be included in the same input data cluster.
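By way of illustration only, the following sketch shows one way such threshold-based clustering of data instances might be implemented; it is a minimal example using SciPy's agglomerative clustering routines, and the function name, the cosine metric, and the threshold value are assumptions for illustration rather than the claimed implementation. Note that a smaller distance threshold corresponds to a higher similarity threshold and therefore tighter clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_instances(embeddings: np.ndarray, distance_threshold: float) -> np.ndarray:
    """Cluster row-vector data instances with average-linkage HAC.

    Instances whose pairwise cosine distance falls below the threshold
    may be merged into the same cluster; returns one label per row.
    """
    condensed = pdist(embeddings, metric="cosine")      # pairwise cosine distances
    merge_tree = linkage(condensed, method="average")   # agglomerative merge tree
    return fcluster(merge_tree, t=distance_threshold, criterion="distance")

# Example: cluster 100 random 128-dimensional instance embeddings.
labels = cluster_instances(np.random.rand(100, 128), distance_threshold=0.3)
```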
For each of the input data clusters generated by the input data clustering module 114, the sampling module 116 samples existing data clusters from the cluster dataset 112 to facilitate the process of clustering the input data instances 110 with existing data instances in the cluster dataset 112. For a given input data cluster, the sampling module 116 selects a finite number of existing data clusters that are closest to the given input data cluster. Limiting the selected existing data clusters to a finite number ensures that the clustering is bounded. For instance, the sampling module 116 can select 20 existing data clusters that are closest to the input data cluster. It should be understood that the number of existing data clusters selected by the sampling module 116 for a given input data cluster can be configurable and can be selected based on, for instance, the overall number of the existing data instances, existing data clusters, input data instances, and/or input data clusters.
The sampling module 116 can use any of a variety of different techniques for determining the distance between a given input data cluster and existing data clusters when selecting the existing data clusters for that input data cluster. In some configurations, the sampling module 116 can determine the distance between the given input data cluster and each existing data cluster based on average representations of the data instances from each cluster. This can include computing an average representational value from input data instances for the input data cluster, as well as an average representational value from existing data instances for each existing data cluster. The existing data clusters having an average representational value closest (e.g., using a simple similarity match) to the average representational value for the input data cluster are selected. In some configurations, the average representational value for a cluster is based on all data instances in the cluster, while in other configurations, the average representational value is based on a portion of the data instances sampled from the cluster (as it may be impractical to compute an average over a very large number of data instances). For example, if a cluster has 100 data instances, 10 of those 100 data instances can be sampled from the cluster, and the average of those 10 data instances is used to represent the cluster.
It should be noted that using average representational values is provided by way of example only and not limitation. Other approaches for determining the distance between a given input data cluster and existing data clusters can be employed within the scope of the technology described herein. For instance, the sampling module 116 can determine the distance between clusters based on the distance between the closest pair of data instances from the two clusters (i.e., the minimum pairwise distance), or based on the distance between the furthest pair of data instances from the two clusters (i.e., the maximum pairwise distance).
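For illustration only, the following sketch shows how a bounded selection of the nearest existing data clusters might be implemented using centroid (average representational value) comparison; the function name, the data layout (a mapping from cluster IDs to arrays of instance embeddings), and the use of Euclidean centroid distance are assumptions, not the claimed implementation.

```python
import numpy as np

def select_nearest_clusters(input_instances, existing_clusters, n_select=20):
    """Return the IDs of up to `n_select` existing clusters whose centroids
    are nearest to the centroid of the input data cluster, keeping each
    clustering run bounded regardless of the size of the cluster dataset."""
    centroid = np.asarray(input_instances).mean(axis=0)
    distances = {
        cluster_id: float(np.linalg.norm(np.asarray(instances).mean(axis=0) - centroid))
        for cluster_id, instances in existing_clusters.items()
    }
    return sorted(distances, key=distances.get)[:n_select]
```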
After selecting a finite number of existing data clusters for a given input data cluster, the sampling module 116 samples existing data instances from the selected existing data clusters. In some aspects, the sampling module 116 selects a finite number of existing data instances from each selected existing data cluster. Limiting the selected existing data instances from each selected existing data cluster to a finite number ensures that the clustering is bounded. For instance, the sampling module 116 can select 20 existing data instances from each of the selected existing data clusters. As such, if the number of selected existing data clusters is limited to 20, and the number of existing data instances from each selected existing data cluster is limited to 20, the total number of existing data instances used for clustering input data instances from an input data cluster is limited to 400. It should be understood that the number of existing data instances selected by the sampling module 116 for each selected existing data cluster can be configurable and can be determined based on, for instance, the overall number of the existing data instances, existing data clusters, input data instances, and/or input data clusters. The sampling module 116 can arbitrarily sample existing data instances from each selected existing data cluster or can sample the existing data instances using some configurable criteria. In some instances, an equal number of data instances is sampled from each selected existing data cluster.
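A minimal sketch of such bounded sampling follows, assuming the same hypothetical data layout as above (a mapping from cluster IDs to arrays of instance embeddings); each sampled instance is tagged with its source cluster identifier to support the mapping step described below.

```python
import numpy as np

def sample_existing_instances(existing_clusters, selected_ids,
                              per_cluster=20, seed=None):
    """Sample at most `per_cluster` instances from each selected existing
    cluster. With 20 selected clusters and 20 samples each, a run sees at
    most 400 existing instances. Returns the sampled rows plus a parallel
    list of source cluster IDs, one per sampled row."""
    rng = np.random.default_rng(seed)
    samples, source_ids = [], []
    for cluster_id in selected_ids:
        instances = np.asarray(existing_clusters[cluster_id])
        k = min(per_cluster, len(instances))
        chosen = rng.choice(len(instances), size=k, replace=False)
        samples.append(instances[chosen])
        source_ids.extend([cluster_id] * k)
    return np.vstack(samples), source_ids
```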
For each input data cluster, the re-clustering module 118 forms intermediate clusters from the input data instances from a given input data cluster and the existing data instances sampled for the given input data cluster by the sampling module 116. The re-clustering module 118 can use any of a variety of different clustering algorithms. By way of example only and not limitation, the re-clustering module 118 can use HAC or mean shift clustering. The clustering algorithm can form intermediate clusters based on similarity determined between data instances. The similarity can be determined, for instance, using a distance function that indicates a distance between two data instances in a given space (e.g., a vector space). By way of example only and not limitation, similarity can be determined using Euclidean distance, cosine distance, Hamming distance, or another distance measure. Data instances that are relatively close to each other (in terms of the distance between the data instances) are more similar than data instances that are relatively far away from each other.
The clustering algorithm of the re-clustering module 118 can employ a similarity threshold when forming intermediate clusters. As noted previously, a similarity threshold used for clustering represents a lower limit of similarity between data instances for the data instances to be clustered together. In some configurations, the clustering algorithm used by the re-clustering module 118 employs the same similarity threshold as that employed by the clustering algorithm used to cluster the existing data instances in the existing data clusters of the cluster dataset 112.
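For illustration only, the re-clustering step might be sketched as follows using scikit-learn's AgglomerativeClustering (the `metric` parameter assumes scikit-learn 1.2 or later); the function name, metric, and threshold value are assumptions rather than the claimed implementation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def form_intermediate_clusters(input_instances, sampled_instances,
                               distance_threshold=0.3):
    """Re-cluster the input instances of one input data cluster together
    with the bounded sample of existing instances; returns one label per
    row of the combined matrix (input rows first, sampled rows after)."""
    combined = np.vstack([input_instances, sampled_instances])
    hac = AgglomerativeClustering(
        n_clusters=None,                        # let the threshold decide
        distance_threshold=distance_threshold,  # same threshold as the cluster dataset
        metric="cosine",
        linkage="average",
    )
    return hac.fit_predict(combined)
```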
The mapping module 120 maps intermediate clusters formed by the re-clustering module 118 to existing data clusters from the cluster dataset 112, where appropriate. For instance, if an intermediate cluster formed by the re-clustering module 118 is sufficiently similar to an existing data cluster, the mapping module 120 maps the intermediate cluster to the existing data cluster.
In some configurations, the mapping module 120 determines similarity between an intermediate cluster and an existing data cluster based on a distance between the two clusters and/or data instances in the two clusters. A similarity threshold based on distance can be used by the mapping module 120 to determine whether to map an intermediate cluster to an existing data cluster. In some instances, the similarity threshold used to determine whether to map an intermediate cluster to an existing data cluster is the same similarity threshold (i.e., same distance) used to cluster existing data instances into the existing data clusters in the cluster dataset 112.
In some configurations, the mapping module 120 maps an intermediate cluster to an existing data cluster based on the number or percentage of existing data instances in the intermediate cluster coming from a given existing cluster. For instance, each existing data instance can be labeled with a cluster identifier that identifies the existing data cluster from which the existing data instance was sampled. If the number or percentage of existing data instances in an intermediate cluster having a particular cluster identifier satisfies a threshold, the intermediate cluster is mapped to the existing data cluster with that particular cluster identifier. For instance, the mapping module 120 can map an intermediate cluster to an existing data cluster if a majority (e.g., over 50 percent) of the existing data instances in the intermediate cluster have the cluster identifier for that existing data cluster. Mapping an intermediate cluster to an existing data cluster can comprise, for instance, labeling each input data instance in the intermediate cluster with the cluster identifier of the existing data cluster.
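A minimal sketch of such majority-based mapping follows, assuming each sampled existing instance carries the cluster identifier of its source cluster; the function name and the 50 percent threshold are illustrative assumptions.

```python
from collections import Counter

def map_to_existing_cluster(sampled_source_ids, majority=0.5):
    """Given the source-cluster IDs of the existing instances that landed
    in one intermediate cluster, return the ID of the existing cluster
    contributing a majority of them, or None if no existing cluster clears
    the threshold (the intermediate cluster is then treated as new)."""
    if not sampled_source_ids:
        return None  # no existing instances present; nothing to map to
    top_id, count = Counter(sampled_source_ids).most_common(1)[0]
    return top_id if count / len(sampled_source_ids) > majority else None
```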
The cluster addition module 122 adds new clusters to the cluster dataset 112 based on any intermediate clusters that the mapping module 120 does not map to an existing data cluster. This includes, for instance, any intermediate cluster that is not sufficiently similar to an existing data cluster. In some configurations, each intermediate cluster that is not mapped to an existing data cluster is added to the cluster dataset 112 as a new cluster. In some configurations, the cluster addition module 122 takes the input data instances from each intermediate cluster not mapped to an existing data cluster and clusters those input data instances to form new clusters, which are added to the cluster dataset 112. In such instances, the cluster addition module 122 can use any of a variety of different clustering algorithms. By way of example only and not limitation, the cluster addition module 122 can use HAC or mean shift clustering. The clustering algorithm can form new clusters based on similarity determined between data instances. The similarity can be determined, for instance, using a distance function that indicates a distance between two data instances in a given space (e.g., a vector space). By way of example only and not limitation, similarity can be determined using Euclidean distance, cosine distance, Hamming distance, or another distance measure. Data instances that are relatively close to each other (in terms of the distance between the data instances) are more similar than data instances that are relatively far away from each other.
The clustering algorithm of the cluster addition module 122 can employ a similarity threshold when forming new clusters. As noted previously, a similarity threshold used for clustering represents a lower limit of similarity between data instances for the data instances to be clustered together. In some configurations, the clustering algorithm used by the cluster addition module 122 employs the same similarity threshold as that employed by the clustering algorithm used to cluster the existing data instances in the existing data clusters of the cluster dataset 112.
The clustering system 104 can also include a user interface (UI) module 124 that provides one or more user interfaces for interacting with the clustering system 104. For instance, the UI module 124 can provide user interfaces to a user device, such as the user device 102. The user device 102 can be any type of computing device, such as, for instance, a personal computer (PC), tablet computer, desktop computer, mobile device, or any other suitable device having one or more processors. As shown in
Each data instance of the input data instances A-G and the existing data instances 1-40 can comprise any type of classifiable data. By way of example for illustration purposes, each of the data instances could be an image of a person's face (or a representation derived from an image of a person's face). In this example, the existing data clusters 204A-204D each correspond with images of a person's face. For instance, existing data cluster 204A includes facial images of person A (i.e., existing data instances 1-10), existing data cluster 204B includes facial images of person B (i.e., existing data instances 11-20), existing data cluster 204C includes facial images of person C (i.e., existing data instances 21-30), and existing data cluster 204D includes facial images of person D (i.e., existing data instances 31-40). Input data instances A-G include new images of different people's faces that are to be clustered with the cluster dataset 204.
With reference now to
As shown at block 702, input data instances are received for clustering with a cluster dataset of existing data clusters with existing data instances. Each data instance (including each input data instance from the input data instances and each existing data instance from the existing data clusters of the cluster dataset) comprises any type of data object or collection of data. For instance, each data instance can comprise image data, audio data, video data, document data, or any other type of data that can be clustered. In some cases, each data instance can be a representation of underlying data. For instance, each data instance can comprise a vector, a fingerprint, a hash, a neural network embedding, or other representation formed from underlying data, such as an image.
The input data instances are clustered to produce input data clusters, as shown at block 704. Any of a variety of clustering algorithms, such as HAC or mean shift clustering, can be employed. In some configurations, the clustering algorithm used to cluster the input data instances uses a similarity threshold that can be based on a distance between input data instances. In some cases, the input data instance clustering uses a higher similarity threshold than that used to cluster the existing data instances in the existing data clusters.
As shown at block 706, an input data cluster is selected for further processing by blocks 708-714. It should be understood that the process of selecting and processing an input data cluster at blocks 706-714 can be performed for each input data cluster formed at block 704. The processing of the input data clusters can be performed serially or in parallel.
A subset of existing data clusters is selected from the cluster dataset for the input data cluster, as shown at block 708. In other words, a finite number of existing data clusters is selected that is less than all existing data clusters in the cluster dataset. The existing data clusters can be selected for the input data cluster in a number of different manners. Generally, the existing data clusters that are most similar to the input data cluster are selected. In some aspects, existing data clusters are selected based on a distance function that measures a distance between the input data cluster and an existing data cluster (e.g., based on an average representation of the data instances from each cluster, the minimum pairwise distance between the clusters, the maximum pairwise distance between the clusters, etc.).
As shown at block 710, a subset of existing data instances is selected from the subset of existing data clusters selected at block 708. The subset of existing data instances can be selected by sampling (randomly or based on some function) a finite number of existing data instances from each of the selected existing data clusters that is less than all existing data instances in each existing data cluster.
The input data instances and the subset of existing data instances selected at block 710 are clustered to produce intermediate clusters, as shown at block 712. Any of a variety of clustering algorithms, such as HAC or mean shift clustering, can be employed. In some configurations, the clustering algorithm used to form the intermediate clusters uses a similarity threshold that can be based on a distance between data instances. In some cases, the clustering algorithm used to form the intermediate clusters uses the same similarity threshold as that used to cluster the existing data instances in the existing data clusters.
The intermediate clusters are mapped to existing data clusters, where appropriate, as shown at block 714. In some instances, an intermediate cluster is mapped to an existing data cluster based on a similarity between the intermediate cluster and the existing data cluster satisfying a threshold. In some aspects, the similarity is based on a distance function that measures a distance between an intermediate cluster and an existing data cluster (e.g., based on an average representation of the data instances from each cluster, the minimum pairwise distance between the clusters, the maximum pairwise distance between the clusters, etc.). In some aspects, the similarity between an intermediate cluster and an existing data cluster is based on the presence of existing data instances from the existing data cluster in the intermediate cluster. For instance, each existing data instance can be labeled with a cluster identifier identifying the existing data cluster to which it belongs. If the number or percentage of existing data instances in an intermediate cluster having a particular cluster identifier satisfies a threshold (e.g., a majority), the intermediate cluster is mapped to the existing data cluster with that cluster identifier. Mapping an intermediate cluster to an existing data cluster can comprise storing data that correlates the intermediate cluster with the existing data cluster. In some instances, mapping an intermediate cluster to an existing data cluster can comprise labeling each input data instance from the intermediate cluster with the cluster identifier of the existing data cluster.
For any intermediate cluster that is not mapped to an existing data cluster, one or more new clusters are added to the cluster dataset, as shown at block 716. For instance, an intermediate cluster can be insufficiently similar to any existing data cluster. In some instances, the intermediate cluster is added as a new cluster to the cluster dataset. In other instances, new clusters are formed by clustering input data instances from multiple intermediate clusters not mapped to an existing data cluster, and those new clusters are added to the cluster dataset. Any of a variety of clustering algorithms, such as HAC or mean shift clustering, can be employed. In some configurations, the clustering algorithm used to form the new clusters uses a similarity threshold that can be based on a distance between data instances. In some cases, the clustering algorithm used to form the new clusters uses the same similarity threshold as that used to cluster the existing data instances in the existing data clusters.
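For illustration only, the following sketch composes the hypothetical helpers from the earlier sketches (cluster_instances, select_nearest_clusters, sample_existing_instances, and map_to_existing_cluster) into one bounded incremental clustering run corresponding to blocks 702-716; the names, thresholds, and data layout are assumptions, and edge cases (e.g., singleton input clusters or an empty cluster dataset) are omitted.

```python
import numpy as np

def incremental_cluster(input_instances, existing_clusters, threshold=0.3):
    """One bounded incremental clustering run. `existing_clusters` maps
    cluster IDs to arrays of instance embeddings and is updated in place."""
    input_instances = np.asarray(input_instances)
    input_labels = cluster_instances(input_instances, threshold)       # block 704
    for label in np.unique(input_labels):                              # block 706
        members = input_instances[input_labels == label]
        nearest = select_nearest_clusters(members, existing_clusters)  # block 708
        sampled, source_ids = sample_existing_instances(
            existing_clusters, nearest)                                # block 710
        combined = np.vstack([members, sampled])
        inter_labels = cluster_instances(combined, threshold)          # block 712
        n_new = len(members)
        for inter_id in np.unique(inter_labels):
            idx = np.flatnonzero(inter_labels == inter_id)
            # Source IDs of the sampled existing instances in this cluster.
            existing_ids = [source_ids[i - n_new] for i in idx if i >= n_new]
            target = map_to_existing_cluster(existing_ids)             # block 714
            new_rows = combined[idx[idx < n_new]]  # input instances only
            if len(new_rows) == 0:
                continue  # cluster holds only existing instances; nothing to add
            if target is not None:
                existing_clusters[target] = np.vstack(
                    [existing_clusters[target], new_rows])
            else:                                                      # block 716
                existing_clusters[f"new_{label}_{inter_id}"] = new_rows
```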
Having described implementations of the present disclosure, an exemplary operating environment in which embodiments of the present technology can be implemented is described below in order to provide a general context for various aspects of the present disclosure. Referring initially to
The technology can be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The technology can be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialized computing devices, etc. The technology can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
With reference to
Computing device 800 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 800 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 800. Computer storage media does not comprise signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 812 includes computer storage media in the form of volatile and/or nonvolatile memory. The memory can be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 800 includes one or more processors that read data from various entities such as memory 812 or I/O components 820. Presentation component(s) 816 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
I/O ports 818 allow computing device 800 to be logically coupled to other devices including I/O components 820, some of which can be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The I/O components 820 can provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs can be transmitted to an appropriate network element for further processing. A NUI can implement any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 800. The computing device 800 can be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these for gesture detection and recognition. Additionally, the computing device 800 can be equipped with accelerometers or gyroscopes that enable detection of motion.
The present technology has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present technology pertains without departing from its scope.
Having identified various components utilized herein, it should be understood that any number of components and arrangements can be employed to achieve the desired functionality within the scope of the present disclosure. For example, the components in the embodiments depicted in the figures are shown with lines for the sake of conceptual clarity. Other arrangements of these and other components can also be implemented. For example, although some components are depicted as single components, many of the elements described herein can be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Some elements can be omitted altogether. Moreover, various functions described herein as being performed by one or more entities can be carried out by hardware, firmware, and/or software, as described below. For instance, various functions can be carried out by a processor executing instructions stored in memory. As such, other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions) can be used in addition to or instead of those shown.
Embodiments described herein can be combined with one or more of the specifically described alternatives. In particular, an embodiment that is claimed can contain a reference, in the alternative, to more than one other embodiment. The embodiment that is claimed can specify a further limitation of the subject matter claimed.
The subject matter of embodiments of the technology is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” can be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
For purposes of this disclosure, the word “including” has the same broad meaning as the word “comprising,” and the word “accessing” comprises “receiving,” “referencing,” or “retrieving.” Further, the word “communicating” has the same broad meaning as the word “receiving,” or “transmitting” facilitated by software or hardware-based buses, receivers, or transmitters using communication media described herein. In addition, words such as “a” and “an,” unless otherwise indicated to the contrary, include the plural as well as the singular. Thus, for example, the constraint of “a feature” is satisfied where one or more features are present. Also, the term “or” includes the conjunctive, the disjunctive, and both (a or b thus includes either a or b, as well as a and b).
For purposes of a detailed discussion above, embodiments of the present technology are described with reference to a distributed computing environment; however, the distributed computing environment depicted herein is merely exemplary. Components can be configured for performing novel embodiments, where the term "configured for" can refer to "programmed to" perform particular tasks or implement particular abstract data types using code. Further, while embodiments of the present technology can generally refer to the technical solution environment and the schematics described herein, it is understood that the techniques described can be extended to other implementation contexts.
From the foregoing, it will be seen that this technology is one well adapted to attain all the ends and objects set forth above, together with other advantages which are obvious and inherent to the system and method. It will be understood that certain features and subcombinations are of utility and can be employed without reference to other features and subcombinations. This is contemplated by and is within the scope of the claims.