Granular support vector machine with random granularity

Information

  • Patent Grant
  • Patent Number
    8,160,975
  • Date Filed
    Friday, January 25, 2008
  • Date Issued
    Tuesday, April 17, 2012
Abstract
Methods and systems for granular support vector machines. Granular support vector machines can randomly select samples of datapoints and project the samples into randomly selected subspaces to derive granules. A support vector machine can then be used to identify hyperplane classifiers respectively associated with the granules. The hyperplane classifiers can be applied to an unknown datapoint to provide a plurality of predictions, which can be aggregated to provide a final prediction associated with the datapoint.
Description
BACKGROUND AND FIELD

This disclosure relates generally to data mining using support vector machines.


Support vector machines are useful for identifying trends in existing data and for classifying new sets of data for analysis. Generally, a support vector machine can be visualized by plotting data into an n-dimensional space, n being the number of attributes associated with the item to be classified. However, given a large number of attributes and a large volume of training data, support vector machines can be processor-intensive.


Recently, analysts have developed an algorithm known as “Random Forests,” which uses decision trees to classify data. Decision trees modeled on large amounts of data can be difficult to parse, and hence classification accuracy is limited. Thus, “Random Forests” utilizes a bootstrap aggregating (bagging) algorithm to randomly generate multiple bootstrapping datasets from a training dataset, and a decision tree is modeled on each bootstrapping dataset. For each decision tree modeling, at each node a small fraction of attributes is randomly selected to determine the split. Because all attributes need to be available for random selection, the whole bootstrapping dataset is needed in memory. Moreover, “Random Forests” has difficulty working with sparse data (e.g., data which contains many zeroes). For example, a dataset formatted as a matrix with rows as samples and columns as attributes has to be loaded into memory in its entirety even when most cells are zero. Thus, “Random Forests” is space-consuming, and when modeling the entire data matrix of a large, sparse dataset, it is also time-consuming. The modeling also cannot readily be parallelized on a distributed system such as a computer cluster, because it is time-consuming to transfer a whole bootstrapping dataset between different computer nodes.


SUMMARY

Systems, methods, apparatuses and computer program products for granular support vector machines are provided. In one aspect, methods are disclosed, which include: receiving a training dataset comprising a plurality of tuples and a plurality of attributes for each of the tuples; deriving a plurality of granules from the training dataset, each granule comprising a plurality of sample tuples and a plurality of sample attributes; processing the granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; predicting a classification of a new tuple using each of the hyperplane classifiers to produce a plurality of predictions; and aggregating the predictions to derive a decision on a final classification of the new tuple.


Systems can include a granule selection module, multiple granule processing modules, one or more prediction modules and an aggregation module. The granule selection module can select a plurality of granules from a training dataset. Each of the granules can include multiple tuples and attributes. The granule processing modules can process granules using support vector machine processes identifying a hyperplane classifier associated with each of the granules. The one or more prediction modules can predict a classification associated with an unknown tuple based upon the hyperplane classifiers to produce multiple granule predictions. The aggregation module can aggregate the granule predictions to derive a decision on a final classification associated with the unknown tuple.


The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.





DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram of a network environment including an example classification system.



FIG. 2 is a block diagram of an example classification system.



FIG. 3 is a block diagram of a messaging filter using a classification system and illustrating example policies.



FIG. 4 is a block diagram of an example distributed classification system.



FIG. 5 is a block diagram of another example distributed classification system.



FIG. 6 is a flowchart illustrating an example method used to derive granules and classification planes.



FIG. 7 is a flowchart illustrating an example method used to derive classification associated with a new set of attributes for classification.



FIG. 8 is a flowchart illustrating an example method used to derive granules and distribute granules to processing modules.





DETAILED DESCRIPTION

Granular support vector machines with random granularity can help to provide efficient and accurate classification of many types of data. For example, granular support vector machines can be used in the context of spam classification. Moreover, in some implementations, the granules, which are typically much smaller than the bootstrapping datasets from which they are projected, can be distributed across many processors, such that the granules can be processed in parallel. In other implementations, the granules can be distributed based upon spare processing capability at distributed processing modules. The nature of the granules can facilitate distributed processing, and the reduction in size relative to the full training dataset can facilitate faster processing of each of the granules. In comparison to “Random Forests”, a granular support vector machine with random granularity works well on large, sparsely populated datasets (e.g., data which contains many zeroes or null values), because the zeroes or null values do not need to be held in memory. In some implementations, the classification system can be used to classify spam. In other implementations, the classification system can be used to classify biological data. Other classifications can be derived from any type of dataset using granular support vector machines with random granularity.



FIG. 1 is a block diagram of a network environment including an example classification system 100. The classification system 100 can receive classification queries from an enterprise messaging filter 110. The enterprise messaging filter 110 can protect enterprise messaging entities 120 from external messaging entities 130 attempting to communicate with the enterprise messaging entity 120 through a network 140.


In some implementations, the classification system 100 can receive a training dataset 150. The training dataset 150 can be provided by an administrator, for example. The classification system 100 can use granular support vector machine classification 160 to process the training dataset and derive hyperplane classifiers respectively associated with randomly selected granules, thereby producing a number of granular support vector machines (e.g., GVSM 1 170, GVSM 2 180 . . . GVSM n 190).


In some implementations, the classification system 100 can derive a number of granules from the training dataset. The granules can be derived, for example, using a bootstrapping process whereby a tuple (e.g., a record in the dataset) is randomly selected for inclusion in the in-bag data. Additional tuples can be selected from among the entire training dataset (e.g., sampled with replacement). Thus, the selection of each tuple is independent of the selection of other tuples, and the same tuple can be selected more than once. For example, if a training dataset includes 100 tuples and 100 bootstrapping samples are selected from among the 100 tuples, on average about 63.2 of the tuples would be selected and about 36.8 of the tuples would not be selected. The selected data can be identified as in-bag data, while the non-selected data can be identified as out-of-bag data. In some examples, the sample size can be set at 10% of the total number of tuples in the training dataset. Thus, if there were 100 tuples, the classification system 100 would select 10 samples with replacement.
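For illustration only, the bootstrapping selection described above can be sketched in Python; the function name, the default 10% sample fraction, and the representation of tuples as list entries are assumptions made for this example rather than details from the disclosure.

    import random

    def bootstrap_sample(tuples, sample_fraction=0.10):
        """Randomly select tuples with replacement (in-bag) and return the
        remainder as out-of-bag data."""
        n_selections = max(1, int(len(tuples) * sample_fraction))
        # Sampling with replacement: the same tuple index may be drawn twice.
        selected = {random.randrange(len(tuples)) for _ in range(n_selections)}
        in_bag = [tuples[i] for i in sorted(selected)]
        out_of_bag = [t for i, t in enumerate(tuples) if i not in selected]
        return in_bag, out_of_bag

    # With 100 tuples and 100 selections, roughly 63.2% of the tuples are
    # expected to land in the in-bag data.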


The classification system 100 can then project the data into a random subspace. The random subspace projection can be a random selection of tuple attributes (e.g., features). In some implementations, the random selection of tuple attributes can be performed without replacement (e.g., no duplicates can be selected). In other implementations, the random selection of tuple attributes can be performed with replacement (e.g., duplicate selections are possible, but discarded). The random selection of tuples, combined with the projection of the tuple attributes into a random subspace, generates a granule. The granule can be visualized as a matrix having a number of rows of records (e.g., equal to the number of unique tuples selected from the training dataset) and a number of columns defining the attributes associated with the granule.
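A minimal sketch of the random subspace projection follows, assuming each tuple is an indexable sequence of attribute values; the function and parameter names are illustrative only.

    import random

    def project_random_subspace(in_bag, n_attributes, subspace_size):
        """Project in-bag tuples onto attributes chosen without replacement,
        forming a granule (rows = tuples, columns = selected attributes)."""
        selected_attrs = random.sample(range(n_attributes), subspace_size)
        granule_rows = [[tup[a] for a in selected_attrs] for tup in in_bag]
        return granule_rows, selected_attrs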


The classification system 100 can then execute a support vector machine process operable to receive the data and to plot the data into an n-dimensional space (e.g., n being the number of sample attributes associated with the granule). The support vector machine process can identify a hyperplane classifier (e.g., a linear classifier) to find the plane which best separates the data into two or more classifications. In some implementations, adjustments to the support vector machine process can be made to avoid overfitting the hyperplane classifier to the datapoints. In various examples, there can be more than one potential hyperplane classifier which provides separation between the data. In such instances, the hyperplane classifier which achieves maximum separation (e.g., a maximum margin classifier) can be identified and selected by the support vector machine process. In some implementations, the support vector machine can warp the random subspace to provide a better fit of the hyperplane classifier to the datapoints included in the granule.
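One way the maximum-margin hyperplane classifier could be identified for a granule is sketched below using scikit-learn's linear-kernel SVM; the disclosure does not name a particular support vector machine implementation, so the library choice and the soft-margin parameter C are assumptions.

    from sklearn.svm import SVC

    def train_granule_classifier(granule_rows, granule_labels):
        """Fit a linear (maximum-margin) support vector machine to the
        datapoints of a single granule."""
        # A soft margin (C) helps avoid overfitting the hyperplane to
        # individual datapoints in the granule.
        clf = SVC(kernel="linear", C=1.0)
        clf.fit(granule_rows, granule_labels)
        return clf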


The hyperplane classifiers (e.g., GVSM 1 170, GVSM 2 180 . . . GVSM n 190) can then be used to analyze new data. In some implementations, a new tuple (e.g., set of attributes) with an unknown classification can be received. In other implementations, the classification system can receive an unparsed document and can parse the document to extract the attributes used for classification by the various granules.


In some implementations, the hyperplane classifiers can be stored locally to the classification system and can be used to derive a number of predictions for the classification of the new tuple. In other implementations, the hyperplane classifiers are stored by the respective processing modules that processed the granule and the new tuple can be distributed to each of the respective processing modules. The processing modules can then each respond with a predicted granule classification, resulting in a number of granule predictions equal to the number of derived granules.


The predicted classifications can be aggregated to derive a final classification prediction associated with the new tuple. In some implementations, the predicted classifications can be aggregated by majority voting. For example, each prediction can be counted as a “vote.” The “votes” can then be tallied and compared to determine which classification received the most “votes.” This classification can be adopted by the classification system as the final classification prediction.
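A short sketch of the majority-voting aggregation described above; the function name and the example labels are assumptions for illustration.

    from collections import Counter

    def majority_vote(granule_predictions):
        """Tally each granule prediction as a vote and return the
        classification with the most votes."""
        votes = Counter(granule_predictions)
        final_classification, _ = votes.most_common(1)[0]
        return final_classification

    # Example: majority_vote(["spam", "spam", "not spam"]) returns "spam".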


In other implementations, the granule predictions can include a distance metric describing the distance of a datapoint associated with the new tuple from the hyperplane classifier. The distance can be used to weight the aggregation of the predicted classifications. For example, if the classification system were determining whether a set of data indicates a man versus a woman, and one hyperplane classifier predicts that the datapoint is associated with a woman while another hyperplane classifier predicts that the same datapoint is associated with a man, the distance of the datapoint from each hyperplane classifier can be used to determine which classification to adopt as the final classification prediction. In another example, suppose 5 hyperplane classifiers predict that the datapoint is associated with a man while 10 hyperplane classifiers predict that the datapoint is associated with a woman. In those implementations where distance is used to weight the predictions, if the 5 classifiers predicting that the datapoint is male have a greater aggregate distance from their respective hyperplane classifiers than the 10 classifiers predicting that the datapoint is female, then the final classification prediction can be male.
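The distance-weighted aggregation can be sketched as follows, assuming each granule prediction is a (classification, distance) pair; the names are illustrative.

    from collections import defaultdict

    def distance_weighted_vote(granule_predictions):
        """Sum each classification's distances from its hyperplane
        classifiers and return the classification with the largest total."""
        weights = defaultdict(float)
        for classification, distance in granule_predictions:
            weights[classification] += distance
        return max(weights, key=weights.get)

    # Five "man" predictions far from their hyperplanes can outweigh ten
    # "woman" predictions that lie close to theirs.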


In still further implementations, each of the hyperplane classifiers can have an effectiveness metric associated with the classifier. In such implementations, the effectiveness metric can be derived by validating the hyperplane classifier against the out-of-bag data not chosen for inclusion in the granule associated with the hyperplane classifier. Thus, for example, using a 10% bootstrapping process on a training sample of 100 records, there are expected to be about 90 out-of-bag tuples (e.g., datapoints). Those datapoints can be used to determine the effectiveness of the hyperplane classifier derived with respect to the granule. If the hyperplane classifier, for example, is measured to be 90% effective on the out-of-bag data, the prediction can be weighted at 90%. If another hyperplane classifier is measured, for example, to be 70% effective on the out-of-bag data, the prediction can be weighted at 70%. In some implementations, if a hyperplane classifier is measured to be less than a threshold level of effectiveness on the out-of-bag data, the hyperplane classifier can be discarded. For example, if a hyperplane classifier is less than 50% effective on the out-of-bag data, it is more likely than not that the classification is incorrect (at least as far as the out-of-bag data is concerned). In such instances, the hyperplane classifier could be based on datapoints which are outliers that do not accurately represent the sample. In some implementations, if a threshold number of hyperplane classifiers are discarded because they do not predict with a threshold effectiveness, the classification system can request a new training dataset, or possibly different and/or additional attributes associated with the current training dataset. In other implementations, the classification system can continue to run the support vector machine processing until a threshold number of hyperplane classifiers are identified.
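A sketch of the out-of-bag validation, assuming a classifier object with a predict method like the one trained in the earlier sketch; the 50% discard threshold follows the example above, and the helper name is illustrative.

    def validate_classifier(clf, oob_rows, oob_labels, discard_threshold=0.5):
        """Measure a hyperplane classifier's effectiveness on its out-of-bag
        data; return None if it should be discarded."""
        correct = sum(1 for row, label in zip(oob_rows, oob_labels)
                      if clf.predict([row])[0] == label)
        effectiveness = correct / len(oob_labels)
        if effectiveness < discard_threshold:
            return None  # worse than chance on the out-of-bag data
        return effectiveness  # can later weight this classifier's predictions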



FIG. 2 is a block diagram of an example classification system 100. In some implementations, the classification system 100 can include a granule selection module 210, a processing module 220, a prediction module 230, and an aggregation module 240. The granule selection module 210 can receive a training dataset 250 and can randomly select (with replacement) tuples from the training dataset 250. In some implementations, the random selection of the tuples can be based upon a bootstrapping process, whereby a selection of a tuple is made, the tuple is replaced and then another tuple is selected. In various examples, this process can continue until a threshold number of selections are made. As a specific example, 10% bootstrapping on a 100-tuple training dataset can mean that 10 selections are made (including duplicates). Thus, there are expected to be less than 10 unique sample tuples in the in-bag data, on average.


The in-bag tuples are then projected onto a random subspace. For example, the in-bag tuples can be visualized as datapoints plotted onto an n-dimensional space, where n equals the number of attributes associated with each tuple. If a dimension is removed, the datapoints can be said to be projected into the subspace comprising the remaining attributes. In various implementations, the random subspace can be selected by randomly selecting attributes to remove from the subspace or by randomly selecting the attributes that are included in the subspace. In some implementations, the random subspace is chosen by randomly selecting the attributes for inclusion in the granule without replacement (e.g., no duplicates can be selected, because once an attribute is selected, it is removed from the sample). Thus, an original matrix associated with the training dataset can be reduced into a granule. Granules can continue to be selected until a threshold number of granules have been generated. The random selection of the granules and a smaller sample size can facilitate diversity among the granules. For example, one granule is unlikely to be similar to any of the other granules.


The processing module 220 can be operable to process the granules using a support vector machine process. The processing module 220 can use the support vector machine process to plot the tuples associated with a respective granule into an n-dimensional space, where n equals the number of attributes associated with the granule. The processing module 220 can identify a hyperplane classifier (e.g., linear classifier) which best separates the data based upon the selected category for classification. In some instances, multiple hyperplane classifiers might provide differentiation between the data. In some implementations, the processing module 220 can select the hyperplane classifier that provides maximum separation between the datapoints (e.g., a maximum margin classifier).


In various implementations, the granular nature of the process can facilitate distributed processing of the granules. For example, if there are 10 granules to process on five processors, each of the processors could be assigned to handle two granules. Some implementations can include a distribution module operable to distribute the granules among multiple processing modules 220 (e.g., processors running support vector machine processes on the granules). In such implementations, the distribution module can, for example, determine the available (e.g., spare) processing capacity and/or specialty processing available on each of a number of processors and assign the granules to the processors accordingly. Other factors for determining distribution of the granules can be used.
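As one possible, purely illustrative realization of distributing granules across processors, the sketch below uses Python's multiprocessing pool; the disclosure describes the distribution generically and does not prescribe this mechanism.

    from multiprocessing import Pool

    def train_one_granule(granule):
        rows, labels = granule
        return train_granule_classifier(rows, labels)  # from the earlier sketch

    def train_granules_in_parallel(granules, n_workers=5):
        """Spread granule processing over n_workers processes; e.g., ten
        granules on five processors average two granules per processor."""
        with Pool(processes=n_workers) as pool:
            return pool.map(train_one_granule, granules)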


The prediction module 230 can receive features 260 (e.g., from an unclassified tuple) for classification. In some implementations, the features 260 can be received from a messaging filter 280. In such implementations, the features 260 can be derived from a received message 270 by a messaging filter 280. The messaging filter 280, for example, can extract the features 260. In some implementations, the messaging filter 280 can be a part of the classification system 100. In other implementations, the messaging filter 280 can query the classification system 100 by sending the attributes associated with the tuple to be classified to the classification system 100.


The prediction module 230 can compare datapoints associated with the features against each of the hyperplane classifiers derived from the granules to derive granule predictions associated with the respective hyperplane classifiers. For example, the prediction module 230 could plot the unclassified new tuple onto the random subspace associated with a first granule and its associated hyperplane classifier and determine whether the unclassified new tuple shows characteristics associated with a first classification (e.g., men) or characteristics associated with a second classification (e.g., women). The prediction module 230 could continue this process until each of the hyperplane classifiers has been compared to a datapoint associated with the unclassified new tuple.


In some implementations, the prediction module 230 can include distributed processing elements (e.g., processors). In such implementations, the prediction module 230 can distribute classification jobs to processors, for example with available processing capability. In other implementations, the prediction module 230 can distribute classification jobs based upon which processors previously derived the hyperplane classifier associated with a granule. In such implementations, for example, a processor used to derive a first hyperplane classifier for a first granule can also be used to plot an unclassified new tuple into the random subspace associated with the first granule and can compare the datapoint associated with the new tuple to the first hyperplane classifier associated with the first granule.


The granule predictions can be communicated to the aggregation module 240. In some implementations, the aggregation module 240 can use a simple voting process to aggregate the granule predictions. For example, each prediction can be tallied as a “vote” for the classification predicted by the granule prediction. The classification that compiles the most votes can be identified as the final classification decision.


In another implementation, each granule prediction can include a distance metric identifying the distance of the datapoint associated with the unclassified new tuple from the respective hyperplane classifier. The distance metric can be used to weight the respective granule predictions. For example, if there are three predictions, one for classification A located a distance of 10 units from its hyperplane classifier, and two for classification B located distances of 2 and 5 units from their respective hyperplane classifiers, then classification A is weighted at 10 units and classification B is weighted at 7 units. Thus, in this example, classification A can be selected as the final classification prediction.


In other implementations, each of the predictions can be weighted by a Bayesian confidence level associated with the respective hyperplane classifiers. In some such implementations, the Bayesian confidence level can be based upon a validation performed on the hyperplane classifier using the out-of-bag data associated with each respective hyperplane classifier. For example, if a first hyperplane classifier is measured to be 85% effective at classifying the out-of-bag data, the predictions associated with the hyperplane classifier can be weighted by the effectiveness metric. The weighted predictions can be summed and compared to each other to determine the final classification prediction.



FIG. 3 is a block diagram of a messaging filter 300 using a classification system 310 and illustrating example policies 320-360. In various implementations, the policies can include an information security policy 320, a virus policy 330, a spam policy 340, a phishing policy 350, a spyware policy 360, or combinations thereof. The messaging filter 300 can filter communications received from messaging entities 380 destined for other messaging entities 380.


In some implementations, the messaging filter 300 can query a classification system 310 to identify a classification associated with a message. The classification system 310 can use a granular support vector machine process to identify hyperplane classifiers associated with a number of granules derived from a training dataset 390. The training dataset can include, for example, documents that have previously been classified. In some examples, the documents can be a library of spam messages identified by users and/or provided by third parties. In other examples, the documents can be a library of viruses identified by administrators, users, and/or other systems or devices. The hyperplane classifiers can then be compared to the attributes of new messages to determine to which classification the new message belongs.


In other implementations, the messaging filter 300 can also query a reputation system. Reputation systems are described in U.S. patent application Ser. No. 11/142,943, entitled “Systems and Methods for Classification of Messaging Entities,” filed on Jun. 2, 2005, which is hereby incorporated by reference.


In those implementations that include an information security policy, incoming and/or outgoing messages can be classified and compared to the information security policy to determine whether to forward the message for delivery. For example, the classification system might determine that a document is a technical specification document. In such an example, the information security policy might specify that technical specification documents should not be forwarded outside of an enterprise network, or should be sent only to specific individuals. In other examples, the information security policy could specify that technical documents require encryption of a specified type so as to ensure the security of the technical documents being transmitted. Other information security policies can be used.


In those implementations that include a virus policy, the virus policy can specify a level of risk that is acceptable for communications. For example, the virus policy can indicate a low tolerance for viruses. Using such a policy, the messaging filter can block communications that are determined to present even a low risk of including viruses. In other examples, the virus policy can indicate a high tolerance for virus activity. In such examples, the messaging filter might only block those messages which are strongly correlated with virus activity. For example, in such implementations, a confidence metric can be associated with the classification. If the confidence metric exceeds a threshold level set by the virus policy, the message can be blocked. Other virus policies can be used.


In those implementations that include a spam policy, the spam policy can specify a risk level associated with communications that is acceptable to the enterprise network. For example, a system administrator can specify a high tolerance for spam messages. In such an example, the messaging filter 300 can filter only messages that are highly correlated with spam activity.


In those implementations that include a phishing policy, the phishing policy can specify a risk level associated with communications that are acceptable to the enterprise network. For example, a system administrator can specify a low tolerance for phishing activity. In such an example, the messaging filter 300 can filter even communications which show a slight correlation to phishing activity.


In those implementations that include a spyware policy, the spyware policy can specify a network tolerance for communications that might include spyware. For example, an administrator can set a low tolerance for spyware activity on the network. In such an example, the messaging filter 300 can filter communications that show even a slight correlation to spyware activity.



FIG. 4 is a block diagram of an example classification system 100 using distributed processing modules 400a-e. In some implementations, the classification system 100 can include a granule selection module 410, a distribution module 420, a prediction module 430 and an aggregation module 440. The classification system can operate to receive a training dataset 450, to derive a number of hyperplane classifiers from the training dataset, and then to predict the classification of incoming unclassified messages 460.


In some implementations, the granule selection module can receive the training dataset 450. The training dataset 450 can be provided, for example by a system administrator or a third party device. In some implementations, the training dataset 450 can include a plurality of records (e.g., tuples) which have previously been classified. In other implementations, the training dataset 450 can include a corpus of documents that have not been parsed. The granule selection module 410, in such implementations, can include a parser operable to extract attributes from the document corpus. In some implementations, the granule selection module 410 can randomly select granules by using a bootstrapping process on the tuples, and then projecting the tuples into a random subspace.


The distribution module 420 can operate to distribute the granules to a plurality of processing modules 400a-e for processing. In some implementations, the distribution module 420 can distribute the granules to processing modules 400a-e having the highest available processing capacity. In other implementations, the distribution module 420 can distribute the granules to processing modules 400a-e based upon the type of content being classified. In still further implementations, the distribution module 420 can distribute the granules to processing modules 400a-e based upon other characteristics of the processing modules 400a-e (e.g., availability of special purpose processing power (e.g., digital signal processing, etc.)).


In some implementations, the distributed processing modules 400a-e can return a hyperplane classifier to the distribution module 420. The hyperplane classifiers can be provided to the prediction module 430. The prediction module 430 can also receive unclassified messages 460 and can use the hyperplane classifiers to provide granule classification predictions associated with each of the hyperplane classifiers.


The granule classification predictions can be provided to an aggregation module 440. The aggregation module 440 can operate to aggregate the granule classification predictions. In some implementations, the aggregation module 440 can aggregate the granule classification predictions to derive a final classification prediction based upon a simple voting process. In other implementations, the aggregation module 440 can use a distance metric associated with each of the granule classification predictions to weight the respective granule predictions. In still further implementations, the aggregation module 440 can use a Bayesian confidence score to weight each of the granule classification predictions. The Bayesian confidence score can be derived, for example, by validating each respective hyperplane classifier associated with a granule against out-of-bag data not selected for inclusion in the granule. The resulting final classification prediction can be provided as output of the classification system 100.



FIG. 5 is a block diagram of another example classification system 100 having distributed processing and prediction modules 500a-e. In some implementations, the classification system 100 can include a granule selection module 510, a distribution module 520 and an aggregation module 530. The classification system 100 can operate to distribute the processing associated with both the granule processing to derive the hyperplane classifiers associated with the granules and the prediction processing to provide granule predictions based upon the derived hyperplane classifiers.


In some implementations, the granule selection module 510 can receive the training dataset 540. The training dataset 540 can be provided, for example by a system administrator or a third party device. In some implementations, the training dataset 540 can include a plurality of records (e.g., tuples) which have previously been classified. In other implementations, the training dataset 540 can include a corpus of documents that have not been parsed. The granule selection module 510, in such implementations, can include a parser operable to extract attributes from the document corpus. In some implementations, the granule selection module 510 can randomly select granules by using a bootstrapping process on the tuples, and then projecting the tuples into a random subspace.


In some implementations, the distribution module 520 can receive the granules from the granule selection module 510. The distribution module 520 can distribute the granules to one or more distributed processing and prediction modules 500a-e. The distribution module 520 can also distribute an unclassified message to the distributed processing and prediction modules 500a-e.


Each distributed processing and prediction module 500a-e can operate to execute a support vector machine process on the received granule(s). The support vector machine process can operate to derive the hyperplane classifier(s) associated with the granule(s). Each distributed processing and prediction module 500a-e can then use the derived hyperplane classifier(s) to generate a granule classification prediction (or predictions) associated with an unclassified message 550.


The granule classification predictions can be provided to an aggregation module 530. The aggregation module 530 can operate to aggregate the granule classification predictions. In some implementations, the aggregation module 530 can aggregate the granule classification predictions to derive a final classification prediction based upon a simple voting process. In other implementations, the aggregation module 530 can use a distance metric associated with each of the granule classification predictions to weight the respective granule predictions. In still further implementations, the aggregation module 530 can use a Bayesian confidence score to weight each of the granule classification predictions. The Bayesian confidence score can be derived, for example, by validating each respective hyperplane classifier associated with a granule against out-of-bag data not selected for inclusion in the granule. The resulting final classification prediction can be provided as output of the classification system 100.



FIG. 6 is a flowchart illustrating an example method used to derive granules and classification planes. At stage 610, a training dataset is received. The training dataset can be received, for example, by a granule selection module (e.g., classification system 100 of FIG. 2). The training dataset, in various examples, can include parsed or unparsed data describing attributes of an item for classification. In some examples, the item can include documents, deoxyribonucleic acid (DNA) sequences, chemicals, or any other item that has definite and/or quantifiable attributes that can be compiled and analyzed. In other examples, the training dataset can include a document corpus operable to be parsed to identify attributes of each document in the document corpus.


At stage 620, a plurality of granules are derived. The plurality of granules can be derived, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). In various implementations, the granule selection module can use a bootstrapping process to identify a random sampling of a received training dataset. The granule selection module can then project the random sampling into a random subspace, thereby producing a granule. In various implementations, the granule is much smaller than the original dataset, which can facilitate more efficient processing of the granule than can be achieved using the entire training dataset. In some implementations, the granule further supports distributed processing, thereby facilitating the parallel processing of the derived granules.


At stage 630, the granules are processed using a support vector machine process. The granules can be processed, for example, by a processing module (e.g., processing module 220 of FIG. 2). The support vector machine process can operate to derive a hyperplane classifier associated with each granule. The hyperplane classifiers can be used to provide demarcations between given classifications of the data (e.g., spam or non-spam, virus or non-virus, spyware or non-spyware, etc.).



FIG. 7 is a flowchart illustrating an example method used to derive classification associated with a new set of attributes for classification. At stage 710 a new tuple and associated attributes can be received. The new tuple and associated attributes can be received, for example, by a prediction module (e.g., classification system 100 of FIG. 2).


At stage 720 a prediction can be generated based upon each hyperplane classifier. The prediction can be generated, for example, by a prediction module (e.g., prediction module 230 of FIG. 2). In various implementations, the prediction module can use the derived hyperplane classifiers to generate a granule classification prediction associated with each hyperplane classifier.


At stage 730, the granule classification predictions from each of the hyperplane classifiers can be aggregated. The predictions can be aggregated, for example, by an aggregation module (e.g., aggregation module 240 of FIG. 2). In various implementations, the granule classification predictions can be aggregated using a simple voting process, by using the distance between the datapoint and each hyperplane classifier to weight the final classification, or by using a Bayesian confidence to weight the predictions based upon the confidence associated with the respective hyperplane classifiers.



FIG. 8 is a flowchart illustrating an example method used to derive granules and distribute granules to processing modules. The method is initialized at stage 800. At stage 805, a training dataset is received. The training dataset can be received, for example, by a granule selection module (e.g., classification system 100 of FIG. 2). The training dataset, in various examples, can include parsed or unparsed data describing attributes of an item for classification. In some examples, the item can include documents, deoxyribonucleic acid (DNA) sequences, chemicals, or any other item that has definite and/or quantifiable attributes that can be compiled and analyzed. In other examples, the training dataset can include a document corpus operable to be parsed to identify attributes of each document in the document corpus.


At stage 810, a counter can be initialized. The counter can be initialized, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). In various implementations, the counter can be used to identify when enough granules have been generated based on the training dataset. For example, the number of granules for a given dataset can be a percentage (e.g., 50%) of the number of tuples in the training dataset.


At stage 815, a bootstrap aggregating process is used to randomly select tuples from among the training dataset. The bootstrap aggregating process can be performed, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). In various implementations, the bootstrap aggregating process randomly selects a tuple from the training dataset, and replaces the tuple before selecting another tuple, until a predefined number of selections have been made. In such implementations, duplicates can be selected. Thus, the number of unique tuples that will be selected is not known before the bootstrap aggregating process runs, though the process does ensure that the number of unique samples will be no greater than the number of selections made. In some examples, the predefined number of selections can be based upon a percentage (e.g., 10%) of the size of the training dataset.


At stage 820, the random sample of tuples is projected into a random subspace. The projection into a random subspace can be performed, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). The random subspace can be selected, in some implementations, by randomly selecting the features to be used within the granule, without replacement. For example, when a first feature is selected, the feature is not replaced into the group, but removed so as not to be selected a second time. Such random selection guarantees that each granule will include a predefined number of features.


At stage 825, the generated granule is labeled as the nth granule, where n is the current counter value. The granule can be labeled, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2).


At stage 830, the counter is incremented (n=n+1). The counter can be incremented, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). At stage 835, the counter can be compared to a threshold to determine whether a predefined number of granules have been generated. If a predefined number of granules have not been generated, the process returns to stage 815 and generates additional granules until the specified number of granules have been generated.
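Tying stages 810-835 together, the loop below repeats the bootstrap and projection steps until the predefined number of granules has been generated; it reuses the hypothetical helpers from the earlier sketches, and the default parameters are assumptions.

    def generate_granules(training_tuples, n_attributes, n_granules,
                          sample_fraction=0.10, subspace_size=5):
        """Generate and label granules until the counter reaches the
        predefined threshold (stages 810-835)."""
        granules = []
        counter = 0
        while counter < n_granules:
            in_bag, out_of_bag = bootstrap_sample(training_tuples, sample_fraction)
            rows, attrs = project_random_subspace(in_bag, n_attributes, subspace_size)
            granules.append({"label": counter, "rows": rows,
                             "attributes": attrs, "out_of_bag": out_of_bag})
            counter += 1  # stage 830: increment, then re-check the threshold
        return granules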


However, if the counter has reached the threshold at stage 835, the process can continue to stage 840 where the granules can be distributed. The granules can be distributed, for example, by a distribution module (e.g., distribution module 420, 520 of FIGS. 4 and 5, respectively). In various implementations, the granules can be distributed based upon the characteristics of a plurality of processing modules or the characteristics of the granules themselves.


At stage 845, the granules can be processed. The granules can be processed, for example, by distributed processing modules (e.g., distributed processing modules 400a-e, 500a-e of FIGS. 4 and 5, respectively). In some implementations, the distributed processing modules can be executed by multiple processors. In additional implementations, the distributed processing modules can execute a support vector machine process on each generated granule to derive a hyperplane classifier associated with each generated granule. The hyperplane classifier can be compared to unclassified data to derive a classification prediction associated with the unclassified data.


At optional stage 850, the hyperplane classifiers can be validated. The hyperplane classifiers can be validated, for example, by distributed processing modules (e.g., distributed processing modules 400a-e, 500a-e of FIGS. 4 and 5, respectively). In some implementations, each hyperplane classifier can be validated using respective out-of-bag data associated with the granule used to generate the hyperplane classifier. Thus, each hyperplane classifier can be tested to determine the effectiveness of the derived hyperplane classifier.


At optional stage 855, a determination is made as to which hyperplane classifiers to use in prediction modules based upon the validation. The determination of which hyperplane classifiers to use can be performed, for example, by a distributed processing module (e.g., distributed processing modules 400a-e, 500a-e of FIGS. 4 and 5, respectively). In some implementations, a threshold effectiveness level can be identified whereby, if a hyperplane classifier's validation does not meet the threshold, that classifier is not used for predicting classifications for unclassified data. For example, if a hyperplane classifier is validated as being correct less than 50% of the time, the classification associated with the hyperplane classifier is incorrect more often than it is correct. In some implementations, such a hyperplane classifier can be discarded as misleading with respect to the final classification prediction.


The method ends at stage 860. The method can be used to efficiently derive a plurality of hyperplane classifiers associated with a training dataset by distributing the granules for parallel and/or independent processing. Moreover, inaccurate hyperplane classifiers can be discarded in some implementations.


In various implementations of the above description, message filters can forward, drop, quarantine, delay delivery, or specify messages for more detailed testing. In some implementations, the messages can be delayed to facilitate collection of additional information related to the message.


The systems and methods disclosed herein may use data signals conveyed using networks (e.g., local area network, wide area network, internet, etc.), fiber optic medium, carrier waves, wireless networks (e.g., wireless local area networks, wireless metropolitan area networks, cellular networks, etc.), etc. for communication with one or more data processing devices (e.g., mobile devices). The data signals can carry any or all of the data disclosed herein that is provided to or from a device.


The methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by one or more processors. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform methods described herein.


The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.


The computer components, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that software instructions or a module can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code or firmware. The software components and/or functionality may be located on a single device or distributed across multiple devices depending upon the situation at hand.


This written description sets forth the best mode of the invention and provides examples to describe the invention and to enable a person of ordinary skill in the art to make and use the invention. This written description does not limit the invention to the precise terms set forth. Thus, while the invention has been described in detail with reference to the examples set forth above, those of ordinary skill in the art may effect alterations, modifications and variations to the examples without departing from the scope of the invention.


As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of “and” and “or” include both the conjunctive and disjunctive and may be used interchangeably unless the context clearly dictates otherwise.


Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.


These and other implementations are within the scope of the following claims.

Claims
  • 1. A method comprising: receiving a training dataset comprising a plurality of tuples and a plurality of attributes for each of the tuples; deriving a plurality of granules from the training dataset, each granule comprising a plurality of sample tuples and a plurality of sample attributes, wherein for each of the plurality of granules: the plurality of sample tuples is randomly selected from among the plurality of tuples with replacement; and the plurality of sample attributes is randomly selected from among the plurality of attributes without replacement; processing the granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; predicting a classification of a new tuple using each of the hyperplane classifiers to produce a plurality of predictions; aggregating the predictions to derive a decision on a final classification of the new tuple; validating a first hyperplane classifier associated with a granule by classifying a plurality of tuples from the training dataset which were not included in the granule; generating a hyperplane classifier effectiveness level based upon the validation of the first hyperplane classifier against tuples from the training dataset which were not included in the granule; determining whether the hyperplane classifier effectiveness level exceeds a threshold effectiveness level; and in response to determining that the hyperplane classifier effectiveness level does not exceed the threshold effectiveness level: removing the first hyperplane classifier; and requesting a second plurality of sample attributes, each sample attribute in the second plurality of sample attributes different from each sample attribute in the plurality of sample attributes, for use in identifying new hyperplane classifiers.
  • 2. The method of claim 1, wherein the selection of granule tuples and granule attributes for each granule is independent of the selection of granule tuples and granule attributes for other granules.
  • 3. The method of claim 1, wherein the training dataset comprises a relational database of tuples with associated attributes, each of the tuples having a known classification.
  • 4. The method of claim 1, wherein aggregating the predicted classifications comprises weighting the predictions based upon hyperplane classifier effectiveness levels associated with the granules, respectively, and aggregating the weighted predictions.
  • 5. The method of claim 1, further comprising: weighting the predictions based upon a distance of the new tuple from the hyperplane classifiers, respectively; andaggregating the weighted predictions.
  • 6. The method of claim 1, wherein each of the predictions comprises a vote, and aggregating the predictions comprises adding the votes together and determining which classification is most common.
  • 7. The method of claim 1, wherein the tuples are gene sequences and the attributes are features of the gene sequences, whereby the method is operable to determine whether a gene sequence is likely to share known characteristics of other gene sequences.
  • 8. The method of claim 1, wherein the tuples are documents and the attributes are features of the documents, whereby the method is operable to determine whether a document is likely to share known characteristics of other documents.
  • 9. The method of claim 8, wherein the known characteristics comprise one or more of spam characteristics, virus characteristics, spyware characteristics, or phishing characteristics, and the method is operable to determine whether the new tuple should be classified as including one or more of the known characteristics.
  • 10. The method of claim 1, wherein processing the granules comprises processing the granules on a plurality of processors in parallel, the processors being operable to operate support vector machines to identify hyperplane classifiers associated with the granules.
  • 11. The method of claim 10, further comprising selecting the plurality of processors based upon the processing power available on the respective processors.
  • 12. A system for performing the disclosed methods, comprising: one or more computers; one or more memory devices in data communication with the one or more computers and storing instructions defining: a granule selection module operable to select a plurality of granules from a training dataset, each of the granules comprising a plurality of tuples and a plurality of attributes, wherein, for each of the plurality of granules, the plurality of tuples are randomly selected with replacement and the plurality of attributes are randomly selected without replacement; a plurality of granule processing modules operable to process granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; one or more prediction modules operable to predict a classification associated with an unknown tuple based upon the hyperplane classifiers to produce a plurality of granule predictions; and an aggregation module operable to aggregate the granule predictions to derive a decision on a final classification associated with the unknown tuple; and a validation module operable to: validate a first hyperplane classifier associated with a granule by attempting to classify a plurality of tuples from the training dataset which were not included in the granule; generate a hyperplane classifier effectiveness level based upon the validation of the first hyperplane classifier against tuples from the training dataset which were not included in the granule; determine whether the hyperplane classifier effectiveness level exceeds a threshold effectiveness level; and in response to determining that the hyperplane classifier effectiveness level does not exceed the threshold effectiveness level: remove the first hyperplane classifier; and request a second plurality of attributes, each attribute in the second plurality of sample attributes different from each attribute in the plurality of attributes, for use in identifying new hyperplane classifiers.
  • 13. The system of claim 12, wherein the unknown tuple comprises features of an unclassified document, and the system further comprises a parsing module operable to parse the unclassified document to derive a plurality of unclassified attributes associated with the unknown tuple.
  • 14. The system of claim 13, wherein the one or more prediction modules are operable to extract a portion of the unclassified attributes based upon the granule and the hyperplane classifier, and is operable to compare the unclassified attributes to the hyperplane classifier to derive the prediction associated with the granule.
  • 15. The system of claim 14, wherein the one or more prediction modules are operable to generate a prediction for each of the hyperplane classifiers to produce the plurality of granule predictions.
  • 16. The system of claim 15, wherein the aggregation module is operable to count each of the predictions as a vote, and to derive a final prediction based upon which of the classifications accumulates the most votes.
  • 17. The system of claim 12, wherein each prediction includes a distance metric from the hyperplane classifier, and the aggregation module is operable to weight each prediction based upon the associated distance metric.
  • 18. The system of claim 12, wherein the plurality of granule modules are operable to be processed independently.
  • 19. The system of claim 12, wherein the plurality of granule processing modules are operable to be processed in parallel.
  • 20. The system of claim 12, wherein the plurality of granule processing modules are operable to be executed by separate processors.
  • 21. The system of claim 12, wherein the system is operable to classify a tuple as one or more of a spam risk, a phishing risk, a virus risk, or a spyware risk.
  • 22. A method comprising: receiving a training dataset comprising a plurality of tuples and a plurality of attributes for each of the tuples; deriving a plurality of granules from the training dataset, each granule comprising a plurality of sample tuples and a plurality of sample attributes, wherein for each of the plurality of granules: the plurality of sample tuples is randomly selected from among the plurality of tuples with replacement; and the plurality of sample attributes is randomly selected from among the plurality of attributes without replacement; processing the granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; predicting a classification of a new tuple using each of the hyperplane classifiers to produce a plurality of predictions; aggregating the predictions to derive a decision on a final classification of the new tuple; validating a first hyperplane classifier associated with a granule by attempting to classify a plurality of tuples from the training dataset which were not included in the granule; generating a hyperplane classifier effectiveness level based upon the validation of the first hyperplane classifier against tuples from the training dataset which were not included in the granule; determining whether the hyperplane classifier effectiveness level exceeds a threshold effectiveness level; and in response to determining that the hyperplane classifier effectiveness level does not exceed the threshold effectiveness level: removing the first hyperplane classifier; and requesting a new training dataset for use in identifying new hyperplane classifiers, wherein the new training dataset does not include data from the training dataset.
  • 23. A computer program product, encoded on a computer-readable medium, operable to cause one or more processors to perform operations comprising: receiving a training dataset comprising a plurality of tuples and a plurality of attributes for each of the tuples; deriving a plurality of granules from the training dataset, each granule comprising a plurality of sample tuples and a plurality of sample attributes, wherein for each of the plurality of granules: the plurality of sample tuples is randomly selected from among the plurality of tuples with replacement; and the plurality of sample attributes is randomly selected from among the plurality of attributes without replacement; processing the granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; predicting a classification of a new tuple using each of the hyperplane classifiers to produce a plurality of predictions; aggregating the predictions to derive a decision on a final classification of the new tuple; validating a first hyperplane classifier associated with a granule by attempting to classify a plurality of tuples from the training dataset which were not included in the granule; generating a hyperplane classifier effectiveness level based upon the validation of the first hyperplane classifier against tuples from the training dataset which were not included in the granule; determining whether the hyperplane classifier effectiveness level exceeds a threshold effectiveness level; and, in response to determining that the hyperplane classifier effectiveness level does not exceed the threshold effectiveness level: removing the first hyperplane classifier; and requesting a new training dataset for use in identifying new hyperplane classifiers, wherein the new training dataset does not include data from the training dataset.
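The granular workflow recited in claims 12, 16, 22, and 23 (random tuple sampling with replacement, random attribute sampling without replacement, one support vector machine per granule, and vote-based aggregation of the granule predictions) can be summarized in a short sketch. The code below is a minimal illustration rather than the claimed implementation; it assumes NumPy arrays for the training matrix, scikit-learn's SVC with a linear kernel as the per-granule support vector machine, and arbitrary granule sizes, none of which are prescribed by the claims.

```python
import numpy as np
from sklearn.svm import SVC

def derive_granules(X, n_granules=10, tuple_frac=0.6, attr_frac=0.3, seed=0):
    """Derive granules: tuple indices sampled with replacement, attribute
    indices sampled without replacement (claims 12 and 22)."""
    rng = np.random.default_rng(seed)
    n_tuples, n_attrs = X.shape
    granules = []
    for _ in range(n_granules):
        rows = rng.choice(n_tuples, size=max(1, int(tuple_frac * n_tuples)), replace=True)
        cols = rng.choice(n_attrs, size=max(1, int(attr_frac * n_attrs)), replace=False)
        granules.append((rows, cols))
    return granules

def train_granule_classifiers(X, y, granules):
    """Fit one hyperplane classifier (here a linear SVM) per granule."""
    classifiers = []
    for rows, cols in granules:
        clf = SVC(kernel="linear")
        clf.fit(X[np.ix_(rows, cols)], y[rows])
        classifiers.append((clf, cols))
    return classifiers

def predict_by_vote(classifiers, x_new):
    """Each granule classifier casts a vote; the most-voted class wins (claim 16)."""
    votes = {}
    for clf, cols in classifiers:
        label = clf.predict(x_new[cols].reshape(1, -1))[0]
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

On a training matrix X (rows as tuples, columns as attributes) with labels y, calling predict_by_vote(train_granule_classifiers(X, y, derive_granules(X)), x_new) yields the aggregated classification of a new tuple x_new. The sketch presumes every bootstrap sample contains at least two classes, which the claims do not address.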
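Claims 13 and 14 add a parsing module that turns an unclassified document into attribute values, and a prediction step that restricts attention to each granule's attributes. The sketch below continues the names from the previous block; the bag-of-words attribute space, the tokenizer, and the fixed vocabulary are illustrative assumptions, since the claims do not specify how documents are parsed.

```python
import re
import numpy as np

def parse_document(text, vocabulary):
    """Parse an unclassified document into an attribute vector (claim 13).
    Attributes are assumed to be token counts over a fixed vocabulary."""
    counts = {}
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        counts[token] = counts.get(token, 0) + 1
    return np.array([counts.get(term, 0) for term in vocabulary], dtype=float)

def predict_for_granule(clf, cols, x_unclassified):
    """Extract only the granule's portion of the unclassified attributes and
    compare it to the granule's hyperplane classifier (claim 14)."""
    return clf.predict(x_unclassified[cols].reshape(1, -1))[0]
```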
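Claim 17 weights each granule prediction by a distance metric from its hyperplane classifier. A sketch for a binary classification follows, continuing the earlier blocks; it uses the signed value returned by scikit-learn's decision_function as a stand-in for that distance, which is an assumption rather than anything fixed by the claim, and it presumes both class labels appear in every granule.

```python
def predict_weighted(classifiers, x_new):
    """Aggregate binary granule predictions, weighting each one by its distance
    from the separating hyperplane (claim 17). The SVM decision value is used
    as a proxy for that distance, an illustrative assumption."""
    score = 0.0
    for clf, cols in classifiers:
        # decision_function returns a signed value: its sign is the vote,
        # its magnitude serves as the weight of that vote.
        score += clf.decision_function(x_new[cols].reshape(1, -1))[0]
    reference = classifiers[0][0]
    return reference.classes_[1] if score > 0 else reference.classes_[0]
```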
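Because each granule carries only its own sampled rows and columns, the per-granule training recited in claims 18 through 20 can run independently, in parallel, or on separate processors. The sketch below dispatches the fits to a process pool using Python's standard concurrent.futures module; the pool size and the helper function are assumptions for illustration only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.svm import SVC

def _fit_one(args):
    """Fit a single granule's hyperplane classifier; runs in a worker process."""
    X_granule, y_granule, cols = args
    clf = SVC(kernel="linear")
    clf.fit(X_granule, y_granule)
    return clf, cols

def train_in_parallel(X, y, granules, max_workers=4):
    """Train the granule classifiers independently on separate processors."""
    jobs = [(X[np.ix_(rows, cols)], y[rows], cols) for rows, cols in granules]
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(_fit_one, jobs))
```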
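The validation recited in claims 12, 22, and 23 scores each hyperplane classifier on training tuples that were not included in its granule and removes classifiers whose effectiveness level does not exceed a threshold. In the sketch below, which continues the earlier blocks, classification accuracy stands in for the effectiveness level and 0.7 for the threshold; both are assumptions, and the subsequent request for new attributes or a new training dataset is left to the caller.

```python
import numpy as np

def validate_classifiers(classifiers, granules, X, y, threshold=0.7):
    """Keep only hyperplane classifiers whose effectiveness level, measured on
    training tuples not included in their granule, exceeds the threshold.
    Accuracy as the effectiveness level and 0.7 as the threshold are assumptions."""
    kept = []
    all_rows = np.arange(X.shape[0])
    for (clf, cols), (rows, _) in zip(classifiers, granules):
        held_out = np.setdiff1d(all_rows, rows)   # tuples not in the granule
        if held_out.size == 0:
            kept.append((clf, cols))
            continue
        effectiveness = clf.score(X[np.ix_(held_out, cols)], y[held_out])
        if effectiveness > threshold:
            kept.append((clf, cols))
        # A removed classifier would then trigger a request for new attributes
        # (claim 12) or a new training dataset (claims 22 and 23) to retrain.
    return kept
```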
Related Publications (1)
Number Date Country
20090192955 A1 Jul 2009 US