Multi-level learning for predicting and classifying traffic flows from first packet data

Information

  • Patent Grant
  • Patent Number
    11,044,202
  • Date Filed
    Monday, April 29, 2019
  • Date Issued
    Tuesday, June 22, 2021
Abstract
Disclosed herein are systems and methods for multi-level classification of data traffic flows based in part on information in a first data packet for a data traffic flow. In exemplary embodiments of the present disclosure, a key can be generated to track data traffic flows by application names and data packet information or properties. Based in part on these keys, patterns can be discerned to infer data traffic information based on only the information in a first data packet. The determined patterns can be used to predict classifications of future traffic flows with similar key information. In this way, data traffic flows can be classified and steered in a network based on limited information available in a first data packet.
Description
TECHNICAL FIELD

This disclosure relates generally to the classification of a network traffic flow and prediction of an associated application name and/or associated application characteristics based in part on the classification.


BACKGROUND

The approaches described in this section could be pursued, but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.


Typically, data is sent between computing devices across a communications network in packets. The packets may be generated according to a variety of protocols such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or the like. A network appliance in a network can be connected to many other computing devices via many different network paths. Furthermore, the network paths may traverse multiple communication networks.


When selecting a network path for a particular data traffic flow, a network appliance may first need to classify the flow to determine which network path is appropriate or optimal for the flow. The network path selection needs to be made on a first packet for a flow. However, oftentimes a first packet for a flow is merely a packet for establishing a connection and may only have limited information, such as only header information. Thus, mechanisms are needed for classifying a traffic flow based on the limited information available in a first packet for a flow.


SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


In various embodiments of the present disclosure, a method of selecting a network path for transmitting data across a network is disclosed. The method may comprise: receiving at a first network appliance, a first data packet of a first flow to be transmitted across a network; extracting information from a header of the first data packet; predicting an associated application name for the first flow based in part on the extracted information from the header of the first data packet; determining a confidence level for the predicted application name for the first flow based in part on the extracted information from the first data packet; selecting by the first network appliance a network tunnel based in part on the application prediction; transmitting the first packet of the first flow by the first network appliance over the selected network tunnel with a packet header with supplementary header information, to a second network appliance; receiving a second packet of the first flow at the first network appliance from the second network appliance, via the selected network tunnel, the second packet of the first flow comprising a packet header with supplementary header information with a predicted application name and confidence level for the first flow; and updating the predicted application name for the first flow at the first appliance, if the confidence level received from the second appliance is greater than the confidence level generated at the first network appliance.


In other embodiments, a method for predicting and classifying traffic flows at a second network appliance is disclosed. The method comprises: receiving at a second network appliance, a first data packet of a first flow transmitted across a network tunnel by a first network appliance, the first data packet comprising a packet header with supplementary header information with a predicted application name and confidence level for the first flow; generating at the second network appliance, a predicted application name and confidence level for the first flow; and updating the predicted application name for the first flow at the second network appliance, if the confidence level received from the first appliance is greater than the confidence level generated at the second network appliance.


Other features, examples, and embodiments are described below.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are illustrated by way of example, and not by limitation in the figures of the accompanying drawings, in which like references indicate similar elements.



FIG. 1A depicts an exemplary environment within which the present disclosure may be implemented.



FIG. 1B depicts an exemplary data packet.



FIG. 2 depicts another exemplary environment within which the present disclosure may be implemented.



FIG. 3A depicts an exemplary data structure that is constructed by an appliance.



FIG. 3B depicts another exemplary data structure that is constructed by an appliance.



FIG. 4 depicts an exemplary table for tracking an exemplary string of data.



FIG. 5 depicts another exemplary table for tracking an exemplary string of data.



FIG. 6 illustrates a block diagram of an exemplary appliance.



FIG. 7 illustrates an exemplary environment for network appliances.



FIG. 8 depicts an exemplary method undertaken by the network appliance in steering traffic.



FIG. 9 depicts an exemplary system for aggregating information across multiple appliances.



FIG. 10 depicts an exemplary analysis that is conducted on packet information to classify a flow.



FIG. 11 depicts an exemplary method undertaken by a network appliance in computing a key from packet (header) data.



FIG. 12 depicts an exemplary method undertaken by a network appliance when a first packet of a flow arrives.



FIG. 13 depicts an exemplary method undertaken by a network appliance when the final application name of a flow is determined.



FIG. 14 illustrates an exemplary system within which the present disclosure can be implemented.



FIG. 15A illustrates an exemplary message sequence chart for two appliances of a network tunnel.



FIG. 15B illustrates exemplary supplementary packet header information.





DETAILED DESCRIPTION

The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations, in accordance with exemplary embodiments. These exemplary embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.


The embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system containing one or more computers, or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium, such as a disk drive, or computer-readable medium.


The embodiments described herein relate to mechanisms for classifying flows via a first packet of the flow.


I. Steering Network Traffic


In some circumstances, the determination of which communication network to use to transfer packets of a particular flow must be made on the first packet of the flow. Because there can be multiple network paths (including different communication networks and layers of overlay tunnels) for transmitting data, traffic needs to be steered in a Wide Area Network (WAN). In many cases, once a flow transmission begins over a particular network path, all packets of the flow need to be transmitted over the same path. In addition, different types of data may be transmitted over differing network paths depending on whether the network is trusted or not.


Further, in many cases, internet traffic from a particular location is routed to one firewall that is located in a branch center or in a data center. However, in an environment where multiple firewalls are used for different kinds of traffic, routing data over some networks is more expensive than routing data over other types of networks. Additionally, better and more direct paths can be found from a source to a destination.


In an exemplary environment of FIG. 1A, an appliance in a network receives data packets for transmission. The appliance needs to determine in which direction to steer the data packets, depending on whether the data is associated with a trusted business application, a recreational application, or an untrusted/suspicious application. The determination of the application generating the data flow needs to be made on the first packet of the flow so that the appliance can send the data over the correct path. Further, while trusted business application data may be transmitted over the general Internet, recreational application data may be sent to a cloud firewall. Untrusted or suspicious application data, such as traffic to prohibited or suspicious websites, may be sent to a data center. At the data center, this traffic may be logged, inspected for viruses/malware, or otherwise treated more carefully by the appliance. Thus, it is important to know which application the data packets are associated with before transmission can begin by the appliance.


In an exemplary environment of FIG. 2, one or more user computing devices are connected to a network appliance 220, also sometimes referred to herein as appliance 220. In the exemplary environment, the appliance 220 is connected to an MPLS network and an Internet network. A user computing device 210 may initiate a connection to an application 235 that is hosted by server 230. Server 230 is also sometimes referred to herein as application server 230. Typically, the application 235 can be any application that is accessible from the public Internet, such as any website, but the present disclosure is not limited to that embodiment. Application 235 can comprise an entire application, or simply a part of an application. That is, application 235 can be hosted by a single server, or by a combination of servers. Each server may be physical or virtual, and each server may be in different geographic locations. For example, in one embodiment, application 235 may provide a web-based email service hosted by a single server. In another embodiment, application 235 may provide a news aggregation service, with news articles provided by multiple servers located in different geographic locations.


Based on the IP address of server 230 that is hosting application 235, and/or the location of server 230, embodiments of the present disclosure provide for an inference to be made as to the name of the application 235 hosted by server 230. For example, by learning which destination server IP addresses are associated with which application names, the name of application 235 can be inferred in the future from the destination server IP address in a data packet transmitted by user computing device 210 to initiate a connection with application 235.


While the exemplary environment of FIG. 2 depicts just one server 230 for the application 235, there can actually be many physical or virtual servers at a geographic location hosting the application 235. Furthermore, while not depicted here, there can be any number of additional network components present, such as load balancers, routers, switches, firewalls, etc. There may also be layers of address translation inside a data center hosting application 235, such that the apparent server IP address for server 230 appears different publicly than internally inside the data center. For simplicity, a single server 230 is described here with a single public IP address. However, a person of ordinary skill in the art will understand that the single server scenario depicted herein can be generalized to more complicated scenarios involving multiple servers.


The user request to access the application 235 hosted at the location may be routed by appliance 220 directly through the Internet, or through an MPLS network to private data center 260 first, and then over the Internet. There may additionally be one or more firewalls along either or both paths.


The traffic originating from user computing device 210 may have a private source IP address such as a.b.c.d, and a destination IP address for server 230 of m.n.o.p., as shown in table 215 of FIG. 2. However, the appliance 220 and/or the firewall 225 may perform network address translation to alter the source IP to a different address such as e.f.g.h. While firewall 225 is depicted as being external to appliance 220, it may actually be internal to appliance 220 in some embodiments. If the data traffic is routed over path 240 to application server 230, then the flow between user computing device 210 and application server 230 will appear to the application server 230 as having an apparent source IP address of e.f.g.h and a destination IP address of m.n.o.p., as depicted in table 245 of FIG. 2.


In another embodiment, the data traffic from user computing device 210 to application server 230 is routed through the MPLS network first to a private data center 260. A firewall 265 in the private data center 260 may perform network address translation to a different source IP address, such as i.j.k.l. This network address translation could be performed by a firewall appliance, a server, a router or other device. Thus, the data traffic routed over path 250 to application server 230 will have an apparent source IP address of i.j.k.l at the application server 230 and a destination IP address of m.n.o.p., as shown in table 255 of FIG. 2. In this way, even though the user computing device 210 originating the flow is the same, the application server 230 views incoming traffic from path 240 as being different from incoming traffic from path 250 since the source IP address for traffic arriving on path 240 is different from the source IP address for traffic arriving on path 250.


Because of the network address translation, if a first packet of a flow is transmitted by appliance 220 to application server 230 over path 240, but a second packet of the same flow is transmitted by appliance 220 to application server 230 over path 250, the server will not recognize the two packets as belonging to the same flow. This can become problematic if, for example, a TCP handshake is conducted over path 240 and data traffic is transmitted over path 250. Thus, appliance 220 needs to select an appropriate network path for transmitting data from user computing device 210 to application server 230, such that the same network path is used for all packets of a given flow.


When steering traffic by appliance 220, a determination of which network path to take needs to be made on the first packet for each flow, as once traffic has started in one direction, the appliance 220 generally cannot change directions for the traffic flow. The selection of network path can be based on traffic type, name of application 235, destination IP address of the server 230, or any other such criteria. However, often a first packet is used to establish a connection between the two devices (such as a TCP SYN packet), and does not have much (if any) other information besides simply header information, as depicted in FIG. 1B. There may be no explicit information about traffic type or application name in a first packet. As a result, these characteristics need to be inferred from the limited information available in the first packet for the flow. While embodiments of the present disclosure refer to information in a TCP packet, a person of ordinary skill in the art would understand that this is equally applicable to packets of other types of protocols.


In exemplary embodiments of the present disclosure, a neural network or other such learning algorithm may be used by an appliance 220 to infer an application name and/or one or more application characteristics or “tags” from the limited information in a first packet of a flow. As used herein, an application characteristic may be any characteristic or property related to an application or traffic type. The characteristic may have multiple possible key values. For example, an application characteristic can be “safety”, which represents the safety of the network traffic. This can have multiple key values, such as “very safe”, “safe”, “unsafe”, “dangerous”, etc. Furthermore, a “tag” as used herein may comprise a specific string, such as “safe” or “unsafe”. In this way, a “tag” may represent a value of a “characteristic”, or be independent from a characteristic.


While the application name is discussed herein as the tracked parameter that is inferred, there can actually be an inference made for any other parameter. For example, the inference made by the appliance may be regarding a tag (safe/unsafe), or any other parameter.


II. Key Strings


Once an inference is made by the appliance, the appliance begins steering a particular data flow over a particular network path. In a later packet of the flow, the name of the application that the flow is associated with may be apparent from payload information in the data packet. In exemplary embodiments of the present disclosure, the appliance can track information regarding the application and corresponding key value and build/update one or more data structures to influence the learning algorithm for future inferences.



FIG. 3A depicts an exemplary table 300 that is constructed from selected information in a first packet of a flow. A string of information is built in a hierarchical manner in the depicted table. While the general term table is used here, a person of ordinary skill in the art would understand that the data can actually be stored in any type of data structure, including table(s), database(s), nodes, etc.


A network administrator can determine one or more strings of information to track. For example, a network administrator may determine a source IP address should be collected, along with the name of the corresponding application that the flow is associated with. In the exemplary table 300, a network appliance collects information regarding a source IP address, the name of the associated application (regardless of inference), and a counter for how many times that combination has been viewed. The counter indicates a confidence level of the inference. In exemplary table 300, the tracked string of information is shown on a row in a concatenated manner. However, as would be understood by persons of ordinary skill in the art, the information can be collected and stored in any manner.


Rows 320 and 330 of table 300 depict that data traffic from source IP address a.b.c.d was associated with the application “Skype” three hundred times and data traffic from source IP address a.b.c.d was associated with the application “Amazon” one time. Row 310 shows the global counter for source IP address a.b.c.d, which indicates that the particular source IP address was encountered by the appliance three hundred one times. From the counter, confidence information can be gleaned as to the accuracy of the predicted application name, as discussed herein.


Row 340 of exemplary table 300 shows that the network appliance also steered traffic from a source IP address of e.f.g.h for a total of three times. Rows 350-370 show that one time data traffic from source IP address e.f.g.h was associated with an FTP (file transfer protocol) server, one time it was associated with the Google application, and one time it was associated with the Facebook application.


From table 300, a determination can be made as to how well a source IP address can predict the associated application. For example, with source IP address a.b.c.d, predicting that the data traffic is associated with the “Skype” application is overwhelmingly accurate (>99%), and thus using this source IP address to infer an application name is likely to yield a good inference. However, source IP address e.f.g.h is associated with FTP 33% of the time, with Google 33% of the time, and with Facebook 33% of the time. Thus, simply knowing that a source IP address is e.f.g.h does not allow the appliance to make a good prediction as to which application the data traffic is associated with.
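To make this counter structure concrete, below is a minimal Python sketch, not taken from the patent itself, of a per-key table in the spirit of tables 300 and 375; the class and field names are illustrative assumptions. The same structure generalizes to any key, such as a destination IP address or a source/destination address combination.

# Minimal sketch (not from the patent text) of the per-key counters behind
# tables such as 300/375: each key (here a source IP address) tracks a global
# count and per-application counts, from which predictiveness can be judged.
from collections import Counter, defaultdict

class KeyStats:
    def __init__(self):
        self.total = 0              # global counter for the key (e.g. row 310)
        self.by_app = Counter()     # per-application counters (e.g. rows 320/330)

    def record(self, app_name):
        self.total += 1
        self.by_app[app_name] += 1

    def best_prediction(self):
        """Return (application name, confidence) for the most frequent application."""
        if not self.total:
            return None, 0.0
        app, count = self.by_app.most_common(1)[0]
        return app, count / self.total

table = defaultdict(KeyStats)

# Replay the counts described for table 300 (illustrative values only).
for _ in range(300):
    table["a.b.c.d"].record("Skype")
table["a.b.c.d"].record("Amazon")
for app in ("FTP", "Google", "Facebook"):
    table["e.f.g.h"].record(app)

print(table["a.b.c.d"].best_prediction())   # ('Skype', ~0.997) -> good predictor
print(table["e.f.g.h"].best_prediction())   # ('FTP', ~0.333)   -> poor predictor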


While table 300 tracks a source IP address, a person of ordinary skill in the art would understand that table 300 can actually track any singular field, such as destination IP address, IP source port, IP destination port, etc.



FIG. 3B depicts another exemplary table 375 that can be constructed from information regarding a source IP address, along with the name of the associated application for the flow. In the exemplary table, a network appliance collects information regarding a source IP address, the name of the application that the traffic from that source IP address is associated with (regardless of inference), a counter for how many times that combination has been viewed, and a counter for how many flows have represented that combination, to yield confidence information regarding the prediction. In the exemplary table 375, this information is shown on a row in a concatenated manner. However, as would be understood by persons of ordinary skill in the art, the information can be collected and stored in any manner. Table 375 of FIG. 3B encompasses similar information as table 300 of FIG. 3A, but requires less storage space at the appliance while still providing relevant information needed by the appliance to make an inference regarding application name.


Row 380 depicts that data traffic from source IP address a.b.c.d was associated with the application “Skype” three hundred times out of a total of three hundred one flows processed by the appliance within the tracked time period. Row 390 shows that the network appliance also steered traffic from a source IP address of e.f.g.h for a total of three times. One time, data traffic from source IP address e.f.g.h was associated with an FTP (file transfer protocol) server. While row 390 depicts this information with the exemplary notation “⅓”, a person of ordinary skill in the art would understand that any notation can be used to depict one out of three flows, including punctuation, spacing, etc.


From a table such as table 375, a determination can be made as to how well a source IP address can predict the application that data traffic is associated with. For example, with source IP address a.b.c.d, predicting that the data traffic is associated with the “Skype” application is overwhelmingly accurate (>99%), and thus using this source IP address to infer an application name is likely to yield a good inference. However, source IP address e.f.g.h is associated with FTP only 33% of the time. Thus, simply knowing that a source IP address is e.f.g.h does not allow the appliance to make a good prediction as to which application the data traffic is associated with.


Again, while the table 375 of FIG. 3B tracks a source IP address, a person of ordinary skill in the art would understand that the table can actually track any singular field, such as destination IP address, IP source port, IP destination port, etc. Further, any combination of fields can be tracked in a manner similar to table 375.


In various embodiments, the exemplary tables 300 and 375 may store information regarding all flows observed by the network appliance within a particular time period, or any other limited window. After the expiration of the time period, the table(s) can be purged as discussed herein to accommodate the gathering of information for future flows. In other embodiments, the exemplary tables 300 and 375 may be dynamic, such that the appliance only tracks one possibility for each key, for instance the application from the most recent flow observed (for example, only the information in row 390 rather than the three rows 350, 360, and 370). In this way, the table does not have to store information about every flow observed by the appliance, and the appliance can still infer application names without storing ever increasing amounts of data. More information regarding how data is accumulated in these data structures of the appliance is discussed below with respect to the pseudocode.



FIG. 4 depicts an exemplary table 400 for tracking another exemplary string, that of the combination of source IP address and destination IP address. Row 410 shows that data traffic from source IP address of a.b.c.d was destined for a destination IP address of e.f.g.h one time, and that traffic was for an FTP application. Row 420 shows that data traffic from source IP address of a.b.c.d was destined for a destination IP address of i.j.k.l a total of 25 times, and that traffic was associated with the Google application. Row 430 shows that data traffic from source IP address of a.b.c.d was destined for a destination IP address of m.n.o.p a total of 10 times, and that traffic was associated with the Amazon application.


By collecting this information, an appliance can infer how well a particular source IP address and destination IP address combination can predict the application name associated with the flow. If the combination is a good predictor, then that information can be used by the learning algorithm of the appliance to infer a classification of future data flows from the particular IP address combination.



FIG. 5 depicts an exemplary table 500 that is constructed from a destination IP address and a minimum of a source port and destination port. Typically, when a person visits a website, the destination port is port number 80 for the http protocol or port number 443 for the https protocol, while the source port can be a random value. Also, the destination port is typically the smaller port number. By storing the minimum of the two ports, an inference can be made on the type of traffic based on the common port numbers.


In an exemplary embodiment, an appliance may have processed four different data flows: (1) a data flow processed one time for a destination IP address of a.b.c.d, destination port number 80, and source port number 30002, for an Oracle application; (2) a data flow processed one time for the same IP address, destination port number 80, and source port number 38955 for an Oracle application. This information can be combined and stored as row 510 in exemplary table 500 of FIG. 5. Only the minimum port number, 80, is stored in the table and the counter reflects that this information was processed two times by the appliance, within the tracked time period.


The appliance may further have processed data flow (3) for a destination IP address of e.f.g.h., destination port number 443, source port number 40172 for application name “Google”, and (4) one data flow for the same destination IP address, destination port number 443, source port number 39255, for the application name “Google”. This information can be combined and stored as row 520 in exemplary table 500. Only the minimum port number, 443, is stored in the table and the counter reflects that this information was processed two times by the appliance, within the tracked time period.


Since the source port will typically be a random number, tracking each port number combination would generate many rows, a significant portion of which are unlikely to be good predictors of future flows due to the randomness of the port assignment. However, by storing only the minimum port number in the table, information regarding multiple data flows can be combined in each row (such as row 510) to show that data traffic for destination IP address of a.b.c.d and a minimum port number of 80 is associated with Oracle traffic. In this way, only information that is likely to be useful in a future prediction with a high level of confidence is tracked by the appliance.
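As a brief illustration of this normalization, the hypothetical snippet below shows two flows that differ only in their random source ports collapsing onto a single table key; the flow_key helper is an assumption introduced for the example.

# Hypothetical sketch of the min-port normalization described above: flows that
# differ only in their random source port collapse onto the same table key.
from collections import Counter

def flow_key(dst_ip, src_port, dst_port):
    return (dst_ip, min(src_port, dst_port))   # keep only the smaller port

flows = [
    ("a.b.c.d", 30002, 80, "Oracle"),
    ("a.b.c.d", 38955, 80, "Oracle"),
    ("e.f.g.h", 40172, 443, "Google"),
    ("e.f.g.h", 39255, 443, "Google"),
]

counts = Counter(flow_key(ip, sp, dp) for ip, sp, dp, _ in flows)
print(counts)   # {('a.b.c.d', 80): 2, ('e.f.g.h', 443): 2} -- rows 510 and 520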


Further, as discussed herein, table 500 may actually store only one row for each key (e.g. IP address, or IP address and port combination) and the most likely application associated with the key, rather than multiple rows for every application associated with the key. In addition, while not depicted in FIG. 5, table 500 may store a counter for how many times the particular application association was processed out of the total number of flows with the same key, to track the accuracy and/or confidence level of the prediction.


In this way, similar tables can be constructed for any field or combination of fields—not only the IP address and port combinations discussed herein. Similarly, tables can be constructed for various packet properties, such as packet length, optimization system, encryption status, etc. Similar tables can also be constructed for application characteristics and/or application tags. Further, while tables are discussed herein, a person of ordinary skill in the art would understand that any type of data structure can be utilized.


III. Building Key Strings



FIG. 11 depicts an exemplary method undertaken by a network appliance (such as appliance 220 of FIG. 2) in computing a key from packet data. In step 1110, appliance 220 receives a first packet of a new flow. The appliance 220 then extracts information from the first packet in step 1120 (using a feature extraction engine). As discussed herein, the first packet may contain only header information if it is, for example, a TCP SYN packet. In other embodiments, the first packet may have more than just header information. In any case, the extraction engine of appliance 220 extracts the information available from the first packet for the flow. In step 1130, a transformation may optionally be applied to the extracted data. The transformation may include determining the minimum port number, as discussed above with reference to FIG. 5, or any other transformation. In step 1140, the extracted and optionally transformed data are combined into a key. The key is optionally transformed in step 1150, such as by hashing.


Exemplary pseudocode that may be utilized to accomplish this method is shown below.


How to compute a key from packet [header] data:






    • 1. Receive [first] packet of a flow

    • 2. Extract one or more fields from packet [header], e.g. a combination of source/destination IP address, source/destination port, protocol

    • 3. Optionally apply a transformation to the extracted data e.g. minport=min(destination port, source port)

    • 4. Combine the extracted and optionally transformed data into a key e.g. 8 bytes of source IP+destination IP

    • 5. Optionally transform the key (e.g. compute key=hash(key))
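For illustration, a minimal Python sketch of these five steps is shown below, assuming the header fields have already been parsed into a dictionary; the field names and the choice of SHA-1 for the optional hash step are assumptions rather than requirements of the disclosure.

# A minimal sketch of the key-computation steps above, assuming the packet
# header has already been parsed into a dict; names are illustrative only.
import hashlib
import socket
import struct

def compute_key(header, hash_key=True):
    # Step 2: extract fields from the packet header.
    src_ip = header["src_ip"]
    dst_ip = header["dst_ip"]
    src_port = header["src_port"]
    dst_port = header["dst_port"]

    # Step 3: optional transformation, e.g. keep only the minimum port.
    minport = min(src_port, dst_port)

    # Step 4: combine into a key, e.g. 8 bytes of source IP + destination IP,
    # plus 2 bytes here for the normalized port.
    key = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
           + struct.pack("!H", minport))

    # Step 5: optionally transform (hash) the key.
    return hashlib.sha1(key).hexdigest() if hash_key else key

header = {"src_ip": "10.0.0.1", "dst_ip": "93.184.216.34",
          "src_port": 40172, "dst_port": 443}
print(compute_key(header))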






FIG. 12 depicts an exemplary method undertaken by a network appliance such as appliance 220 of FIG. 2, when a first packet of a flow arrives. In step 1210, the appliance builds a key using the first packet information. The key and its associated flow information can optionally be stored at the appliance in step 1220. Information regarding when new information is stored in a table and when it is not stored is discussed herein.


In step 1230, a determination is made as to whether the key is present in one or more data structures at the appliance. If not, then no prediction or inference is made by the appliance. If yes, then a determination is made in step 1240 if the application prediction meets a confidence level threshold. If the prediction does meet a predetermined confidence level threshold, then a prediction is returned. If not, then no prediction is returned. If a prediction is returned, then the appliance may determine a next hop along a network path, for transmission of the data flow. If no prediction is returned, then the data flow may be dropped by the appliance, or a next hop along a default network path is chosen by the appliance for transmission of the data flow.


In various embodiments, the requisite confidence threshold for returning a prediction can be any value determined by a network administrator, and can be adjusted as needed. Further, the confidence threshold can be variable depending on any parameter, such as source IP address, destination IP address, source port, destination port, protocol, application name, etc. That is, different parameters may have different confidence thresholds for returning a prediction and utilizing the prediction by the appliance in determining how to process a data packet.


Furthermore, the confidence level for a particular prediction may be gleaned from one table (such as exemplary tables 300, 375, 400, and 500), or from a combination of different tables. That is, a key may be present in multiple data structures at the appliance. Each data structure may have the same confidence level for the key, or different confidence levels for the key. A mathematical operation may be used to combine the information in multiple tables and determine an aggregate confidence level for the key. The aggregate confidence level may be determined using any mathematical operation, neural network, or through any other mechanism. Furthermore, each data structure may have its own confidence level, separate and apart from a confidence level for a particular key in the data structure. In various embodiments, a confidence level for a particular key may be combined with a confidence level for the data structure it appears in, to determine an aggregate confidence level. This aggregate confidence level may be compared to the confidence threshold to determine whether the appliance should rely on the prediction or not.


For example, if a particular source IP address is present in a source IP address table such as table 375, and also present in a table such as table 400 that tracks source IP address/destination IP address combinations, then the confidence level of the source IP address from each table may be combined to yield an aggregated confidence level for the source IP address.
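As one hedged illustration of such an aggregation, the sketch below weights each key confidence by the overall confidence of the table it came from; this particular weighting scheme is an assumption chosen for clarity, not a formula prescribed by the disclosure.

# Hedged sketch of one way per-table confidence levels might be combined into
# an aggregate value; the weighting scheme is assumed for illustration.
def aggregate_confidence(observations):
    """observations: list of (key_confidence, table_confidence) pairs,
    one per data structure in which the key was found."""
    if not observations:
        return 0.0
    # Weight each key confidence by how reliable its table has been overall.
    weighted = sum(k * t for k, t in observations)
    total_weight = sum(t for _, t in observations)
    return weighted / total_weight if total_weight else 0.0

# Key found in a source-IP table and in a source/destination-IP table.
obs = [(0.95, 0.80), (0.99, 0.90)]
agg = aggregate_confidence(obs)
print(f"aggregate confidence: {agg:.3f}")
print("use prediction" if agg > 0.9 else "make no prediction")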


Exemplary pseudocode that may be utilized to accomplish this method is shown below.

  • What to do when first packet of a flow arrives (can do this for multiple key types, with a separate data structure for each):
    • 1. Build a key (using this first packet)
    • 2. Optionally save the key with its associated flow
    • 3. Look up key in data structure (could be a hash table, a sorted list of keys+nodes etc.)
    • 4. If key is not found
      • a. Do nothing yet
      • b. Make no prediction
    • 5. If key is found
      • a. Examine node data for this key
      • b. Is there an [application name] prediction which meets our confidence threshold?
        • i. YES—return prediction (and confidence)
        • ii. NO—make no prediction


In an example implementation, each node of a data structure may have at least three pieces of information: total count, success count, and name (a string name of the predicted application). A confidence level is computed as Success Count/Total Count. If the confidence level is >99%, return name. Otherwise, no prediction is returned. As discussed above, the confidence level required to return an application name prediction can be variable. Further, the confidence level threshold required to return an application name prediction can be either the confidence level associated with a particular key in one data structure, a confidence level associated with a key in multiple data structures, a confidence level associated with a key and a data structure, or any combination of the above.
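A minimal sketch of this lookup, using the node fields just described (total count, success count, and name) and a 99% confidence threshold, might look as follows; the Node and predict names are illustrative.

# Minimal sketch, under assumed names, of the first-packet lookup above: each
# node keeps a total count, a success count, and the predicted application
# name, and a prediction is only returned when the confidence threshold is met.
from dataclasses import dataclass

@dataclass
class Node:
    total_count: int = 0
    success_count: int = 0
    name: str = ""

def predict(table, key, threshold=0.99):
    node = table.get(key)          # step 3: look up key in the data structure
    if node is None or node.total_count == 0:
        return None                # step 4: key not found -> no prediction
    confidence = node.success_count / node.total_count
    if confidence > threshold:     # step 5: prediction meets the threshold
        return node.name, confidence
    return None                    # otherwise make no prediction

table = {"key-a": Node(total_count=301, success_count=300, name="Skype"),
         "key-b": Node(total_count=3, success_count=1, name="FTP")}
print(predict(table, "key-a"))     # ('Skype', 0.996...) -- prediction returned
print(predict(table, "key-b"))     # None -- confidence too low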



FIG. 13 depicts an exemplary method undertaken by a network appliance when the final application name of a flow is determined. In step 1310, the appliance builds or restores a key using the first packet information. The key is restored if it was optionally saved in a prior step. In step 1320, a determination is made as to whether the key is present in one or more data structures at the appliance. If not, then a node is initialized in the data structure for the key. If yes, then a determination is made in step 1330 as to whether the application prediction was correct. The node information, and optionally the table confidence information, is updated accordingly.


Exemplary pseudocode that may be utilized to accomplish this method is shown below.

  • What to do when the final application name of a flow is determined:
    • 1. Either
      • a. Build a key (using either the latest packet or the saved first packet—either works for header information; if payload is included in the key, preferably use the first packet), or
      • b. Restore the key saved for this flow (see optional step 2 above)
    • 2. Look up key in data structure
    • 3. If key is not found
      • a. Initialize a node for this key in the data structure
      • Example implementation
        • Total Count=1
        • Success Count=1
        • Name=final application name
    • 4. If key is found
      • a. If the prediction was correct (predicted name=final application name)
        • i. Optionally update the table confidence tracking information based on a successful prediction
        • ii. Update the node information based on a successful prediction
          • Example Implementation:
          •  Total Count+=1
          •  Success Count+=1
          •  If (Total Count>Max Count)//optional scaling
          •  Total Count=Total Count/2
          •  Success Count=Success Count/2
      • b. If the prediction was wrong (predicted name does not equal final application name)
        • i. Optionally update the table confidence tracking information based on an incorrect prediction
        • ii. Update the node information based on an incorrect prediction


In an example implementation for immediate replacement of a key string in a data structure, Total Count=1, Success Count=1, Name=final application name. While the application name did have 100% accuracy, there was only one instance of it. Thus, this is determined not to be a good predictor of application name due to the low sample size, and the field can be replaced with updated information when a new data flow is processed by the appliance.


In an example implementation for conditional replacement of a key string in a data structure, exemplary pseudocode that may be utilized to accomplish this is presented below:


If (Total Count > 5 && Success Count / Total Count > 80%)
   // don't replace just yet
   Total Count += 1
   // optionally penalize further
   Success Count = Success Count * 0.9
Else
   // restart prediction with new name as hypothesis
   Total Count = 1
   Success Count = 1
   Name = final application name
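Putting the update steps together, the following is a hedged Python sketch of the learning step, including the optional count scaling and the conditional-replacement rule above; the Node layout and the max_count value of 1000 are assumed details, while the 5-sample/80% thresholds follow the pseudocode.

# Sketch of the update performed when the final application name is known,
# combining node initialization, the optional count scaling, and the
# conditional-replacement rule above. Names and defaults are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    total_count: int = 1
    success_count: float = 1
    name: str = ""

def update_node(table, key, final_name, max_count=1000):
    node = table.get(key)
    if node is None:                                   # key not found: initialize a node
        table[key] = Node(1, 1, final_name)
        return
    if node.name == final_name:                        # prediction was correct
        node.total_count += 1
        node.success_count += 1
        if node.total_count > max_count:               # optional scaling
            node.total_count //= 2
            node.success_count /= 2
    else:                                              # prediction was wrong
        if node.total_count > 5 and node.success_count / node.total_count > 0.80:
            node.total_count += 1                      # don't replace just yet
            node.success_count *= 0.9                  # optionally penalize further
        else:                                          # restart with new hypothesis
            node.total_count = 1
            node.success_count = 1
            node.name = final_name

table = {"key-b": Node(3, 1, "FTP")}
update_node(table, "key-b", "Google")                  # weak FTP entry is replaced
update_node(table, "key-b", "Google")                  # correct prediction reinforces it
print(table["key-b"])                                  # Node(total_count=2, success_count=2, name='Google')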










Furthermore, as discussed above, the information that is tracked, such as in exemplary tables 300, 375, 400 and 500, can be periodically purged to remove items that are outdated or that are not good predictors, and/or to save storage space in the memory of an appliance. In some embodiments, items that are not useful predictors might also be purged, for instance if the predicted application association is already definitely known from data in the first packet. Exemplary pseudocode that may be utilized for background maintenance on the tables in the appliance is shown below.

Background maintenance (periodic or triggered when data structure is nearly full):

    • 1. Delete all nodes that have a total count<X
    • 2. Delete all nodes that have not been accessed since time X (need to have an access time stored in each node)


A data structure may be considered to be nearly full or heavily utilized when a predetermined percentage of the available space has been utilized. Furthermore, either one or both of the criteria from the pseudocode may be required to be satisfied before nodes are deleted, and other, more complicated deletion criteria could be used. In other embodiments, the data structure can be purged periodically, based on elapsed time, even when it is not full.
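A minimal sketch of such a purge pass, applying both criteria from the pseudocode above, is shown below; the thresholds and the per-node dictionary layout are assumptions for illustration.

# Hedged sketch of the background maintenance step: nodes with too few samples
# or no recent accesses are purged. Field names and thresholds are assumed.
import time

def purge(table, min_count=3, max_idle_seconds=3600, now=None):
    now = now or time.time()
    stale = [key for key, node in table.items()
             if node["total_count"] < min_count                 # criterion 1
             or now - node["last_access"] > max_idle_seconds]   # criterion 2
    for key in stale:
        del table[key]
    return len(stale)

table = {
    "key-a": {"total_count": 301, "last_access": time.time()},
    "key-b": {"total_count": 1,   "last_access": time.time() - 7200},
}
removed = purge(table)
print(removed, list(table))    # 1 ['key-a']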


While the above embodiments are discussed in terms of predicted application names, the present disclosure can also be used to predict one or more application tags instead of, or in addition to, application names. Data structures such as those described in reference to FIGS. 3-5 can be used to track application tags instead of, or in addition to application names. Thus, a network appliance may be able to predict one or more tags to classify a flow and aid in steering the flow over the proper network path.


In one example, an appliance may have a table with a particular source IP address and destination IP address combination. The application name associated with that IP address combination may not meet a predetermined confidence level threshold; however, a particular application tag may meet a predetermined confidence level threshold and thus be used in the prediction. For example, the tag may denote that the data is likely “safe” or “unsafe”, which can determine whether the flow is processed as a trusted business application or as potential malware for which further inspection is prudent. Further, the tag may denote a type of traffic, such as data, video, voice, etc., enabling the network appliance to implement a particular policy for handling the traffic type, despite not knowing the name of the specific application with a high level of confidence.
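The sketch below illustrates one way such a fallback might be expressed, returning an application name only when its confidence clears a high threshold and otherwise falling back to the best-supported tag; the thresholds, field names, and counts are assumptions.

# A minimal sketch, under assumed names, of falling back from an application-name
# prediction to an application-tag prediction when only the tag meets its threshold.
def classify(node, name_threshold=0.99, tag_threshold=0.90):
    name_conf = node["name_success"] / node["total"]
    if name_conf > name_threshold:
        return ("application", node["name"], name_conf)
    # Name prediction is too weak; fall back to the best tag for this key.
    tag, tag_success = max(node["tags"].items(), key=lambda kv: kv[1])
    tag_conf = tag_success / node["total"]
    if tag_conf > tag_threshold:
        return ("tag", tag, tag_conf)
    return ("unknown", None, 0.0)

node = {"total": 40, "name": "Google", "name_success": 22,
        "tags": {"safe": 39, "video": 12}}
print(classify(node))   # ('tag', 'safe', 0.975) -- steer the flow using the tag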


IV. Predicting Classification of Data Flows


As discussed above, the tables stored in the appliance may be periodically culled to only keep the data that is a good predictor, and discard data that does not yield a good prediction. “Good” prediction may be determined by evaluating subsequent packet data, as discussed above.


Furthermore, a network administrator may determine a threshold for a success count necessary before a prediction can be made, a threshold for a success count for keeping information in the tables, and/or a threshold for when rows are culled from a table, such as tables 300, 375, 400, and 500. In addition, to prevent the tables from continuously becoming larger, the tables may be stored as a hash, instead of as direct data.



FIG. 8 depicts an exemplary method undertaken by a network appliance such as appliance 220 of FIG. 2, when steering traffic. In step 805, appliance 220 receives a first packet of a new flow. The appliance 220 then extracts information from the first packet in step 810 using a feature extraction engine. As discussed herein, the first packet may contain only header information if it is, for example, a TCP SYN packet. In other embodiments, the first packet may have more than just header information. In any case, the extraction engine of appliance 220 extracts the information available from the first packet for the flow. A simple inspection engine (such as inspection engine 910 of FIG. 9) is used to analyze the extracted information in step 815. A determination is made whether this information is indicative of known application names and/or one or more tags. An inference engine (such as inference engine 920 of FIG. 9) is then used to infer an application name and one or more application tags in step 820.


If the extracted information is indicative of known application names and/or tags, then the inference engine 920 uses the known mapping to classify the flow as belonging to the known application name and/or application tags or characteristics. If the extracted information is partially indicative of known application names and/or tags, or is not indicative of any known application names and/or tags at all, then an inference is made as to the application name associated with the flow and/or one or more application tags or characteristics. In some embodiments, the inference engine 920 is unable to make any inference as to application name and/or tag(s) and returns a value of “unknown”. In various embodiments, a confidence percentage can be used by appliance 220 for the inspection engine 910 and/or the inference engine 920. For example, the engines may need to determine an application name and/or tag with a predetermined level of confidence before selecting that application name and/or tag as corresponding to the data in the packet being analyzed. The predetermined confidence level can be preset or be variable for different appliances, application names, tags/characteristics, enterprises, or based on time.


Once the appliance determines the application name and/or tag(s) via inference engine 920, the appliance determines a network path over which to transmit the flow in step 825. The selection of a path can be based on any number of factors. For example, the appliance may have a policy that all voice over IP traffic should be routed over an MPLS network while data traffic is routed over the public Internet. A determination from the inference engine 920 aids the appliance in determining which path to use for the flow. In some embodiments, if the inference engine 920 is unable to make an inference, then a default path may be selected.


When the appliance receives a second packet of the same flow in step 830, the second packet may continue to be routed over the chosen path for the first packet. However, the appliance may still analyze and extract information from the second packet to improve the learning and inference of the inference engine 920. Thus information can be extracted from the subsequent packet in step 835. Typically the subsequent packet may contain more information than was present in the first packet of the flow, and thus more information can be gleaned from this packet. Furthermore, information can be gleaned from a combination of data packets, and not simply a singular packet. That is, there may be data, such as an embedded domain name, that spans across multiple packet boundaries. For example, one packet may have “www.go” embedded within it, while a subsequent packet has “ogle.com” embedded within it. The domain name can be gleaned from a combination of the information in the two packets. While only two packets are discussed here, information can be gleaned from a combination of any number of packets.
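The following is a small, hypothetical sketch of how payload fragments might be buffered per flow so that a domain name split across packet boundaries can still be recovered; the HTTP Host-header pattern and the buffer limit are assumptions, not details from the disclosure.

# Illustrative sketch of recovering a domain name that spans packet boundaries:
# payload fragments are buffered per flow and the concatenation is scanned for
# a complete Host header (terminated by CRLF).
import re

HOST_RE = re.compile(rb"Host:\s*([^\r\n]+)\r\n")

class FlowBuffer:
    """Accumulates a bounded window of payload bytes for one flow."""
    def __init__(self, limit=4096):
        self.data = b""
        self.limit = limit

    def add_payload(self, payload):
        self.data = (self.data + payload)[-self.limit:]
        match = HOST_RE.search(self.data)
        return match.group(1).decode() if match else None

flow = FlowBuffer()
print(flow.add_payload(b"GET / HTTP/1.1\r\nHost: www.go"))  # None: name is split
print(flow.add_payload(b"ogle.com\r\n"))                    # www.google.com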


Deep packet inspection, using any of the known methods, can be performed on the extracted information from the subsequent packet in step 840. The deep packet inspection will typically yield additional information about the associated application. This additional information can be useful for other future flows, such as FTP (File Transfer Protocol) control channels or DNS (Domain Name System) queries. This additional information might not change the direction of routing for the current flow, but rather inform how future flows are handled by the appliance. In some embodiments, the deep packet inspection may find that the inferred application name and/or one or more inferred application tags or characteristics originally determined by the inference engine 920 for the first packet in step 820 was incorrect. The information is passed on to the inference engine 920 in step 845.


In other embodiments, the deep packet inspection may find that the inferred application and/or inferred tags originally determined for the first packet in step 820 was correct, but additional application characteristics or tags are gleaned from the deep packet inspection. This augmented information is passed on to the inference engine 920 in step 845 while traffic continues to be routed over the selected path for the flow. In step 850, the augmented application characteristics can be used to determine flow settings, such as quality of service or flow prioritization.


In step 855, a determination is made by the appliance whether the augmented information gleaned from a subsequent packet contradicts the original inference. Additionally, a confidence level for the contradiction may be determined, such that the augmented information can contradict the original inference on a sliding scale from strong to weak. If there is no contradiction, then the subsequent packet continues to be routed 860 over the path determined in step 825. If there is a contradiction with a low level of confidence, then the subsequent packet continues to be routed over the path determined in step 825. If there is a contradiction with a high level of confidence, then the appliance 650 may drop the packet in step 865 and optionally reset the connection (e.g., with a RST packet). In alternate embodiments, if there is a contradiction with a high level of confidence in step 865, the appliance may decide to route further packets on a new path associated with the augmented information, thus changing direction mid-flow. The destination server may not recognize the packets from the different path and reset the connection automatically.


It will be understood that where the term second packet is used herein, the process applies to any subsequent packet in the flow, regardless of whether it is actually chronologically the second, third, tenth, or any later packet. Further, the deep packet inspection may be performed for only one subsequent packet of a flow, or for multiple subsequent packets of a flow. In this way, a learning algorithm at the inference engine 920 is continually updated such that the inference made on the first packet can continue to be refined and the optimal path can be chosen for a given flow based only on limited information in the first packet of the flow.


In the exemplary environment of FIG. 2, appliance 220 receives traffic destined for application server 230. Based on information in the first packet (source IP a.b.c.d, destination IP m.n.o.p and TCP protocol), and observations of past history of flows with similar information, the appliance 220 may infer that this flow is for a particular application 235 hosted at server 230 and has a tag of “data” for file transfer traffic. Consequently, the appliance 220 may choose to transmit data through the Internet via path 240.


A subsequent packet of the same flow may contain information to determine that the flow is actually streaming video and thus the tag should have been “video” and not “data”. Thus, the traffic type classification inferred by appliance 220 from the first packet was incorrect, and updates are made by the learning algorithm such that a subsequent flow with similar extracted information from the packet is classified as being streaming video traffic and not data traffic. In some embodiments, an incorrect classification may be detected a certain number of times before the learning algorithm alters the inferred application name, application characteristic(s), and/or one or more inferred application tags based on information in the first packet.


In other embodiments, information such as a timestamp may be used in conjunction with extracted information to infer an application name, application characteristic(s) and/or tags. For example, appliance 220 may determine that every Tuesday at 10 am, user computing device 210 initiates a Voice over IP (VoIP) call. Thus traffic from a.b.c.d at that time is for VoIP, whereas at other times it is data. Upon observing traffic flows in this way, a distributed deep learning algorithm can determine patterns for traffic flowing through appliance 220 and use these patterns to better classify and infer data traffic flows from only information present in a first packet for each flow.


In various embodiments, the inference engine at an appliance can be in communication with other databases to help refine the inference made on the first packet. As depicted in FIG. 9, the inference engine 920 at every appliance 650 in the overlay network can be in communication with the orchestrator 710, which manages all of the appliances at a given enterprise. For example, if an enterprise has multiple network appliances deployed in various locations of its WAN, information from all of the inference engines at each appliance can be aggregated over the enterprise and be maintained by one or more data structures (such as a database) at the orchestrator 710 to provide more data points for the distributed deep learning algorithm and perform more accurate classification on the first packet. Furthermore, machine learning can be used at the orchestrator 710 to combine information received from the network appliances in the network.


In addition, a user such as a network administrator can customize the inference for a particular set of packet information such that flows are classified in a particular manner. In this way, the learning algorithm in a particular network appliance can be informed by data inspected through that one appliance and also by data inspected at other appliances throughout the enterprise.


Further, information from multiple enterprise orchestrators can be aggregated in a cloud-based system, along with information from third party databases, to better inform the distributed deep learning algorithm of the neural network and allow each network appliance to perform more accurate classification and inference on the first packet for various flows.


Similarly, information from the cloud intelligence can be communicated to an orchestrator 710, which in turn can be relayed to an appliance 650 at a location. In this way, an inference engine 920 at an appliance at one location can have the benefit of data points from multiple appliances, orchestrators, and third party databases, to aid in its inference. The cloud-based system can also use machine learning techniques applied to the data it receives from different sources. The cloud-based system can determine and evaluate trends across multiple orchestrators (and hence enterprises) and distribute classification and inference information back to each orchestrator 710 and appliance 650, as depicted in FIG. 9.



FIG. 10 depicts an exemplary analysis that is conducted on packet information to classify a flow. Information from a packet is extracted by a feature extraction engine. The feature extraction engine may extract information such as IP protocol, TCP/UDP port, domain name, subnet/IP, any result from deep packet inspection methods, and an artificial intelligence inference. While these specific features are shown in FIG. 10, a person of ordinary skill in the art would understand that there can be a different set of features or fewer or additional features extracted for any given packet.


A first packet for a flow may only have a few features available, such as IP protocol, TCP/UDP port, and subnet/IP. A subsequent packet for the flow, or combination of subsequent packets, may have one or more additional features that can be extracted, such as an embedded destination domain name. As discussed above, the domain name or other information may span across multiple packets.


From the extracted features, mapping tables are used to map each feature to an application name, priority, and/or one or more tags for the flow. For example, a mapping table may determine that an IP protocol of 6 is for TCP data with a priority of 2. A mapping table may further determine that port number 443 is for https traffic with a priority of 50. A further mapping table may determine that googlevideo.com is for the application name YouTube, which has a priority of 70 and tags of “video”, “streaming”, “recreational”, and “safe”.


From these mapped values, a prioritization and concatenation engine may select the highest-priority mapped value to represent the flow. In the exemplary embodiment of FIG. 10, the highest priority is 70 and it is indicative of the application "YouTube" with tags of "video", "streaming", "recreational" and "safe". Further, the concatenation engine may also determine that the traffic uses https, and so an application name of "YouTube-https" is determined for the flow. In various embodiments, a characteristic can comprise a key-value pair, for example "traffic type: video", "business relevance: high", or "business relevance: personal".
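To make the mapping and prioritization step concrete, the following is a minimal sketch of how the extracted features of FIG. 10 might be mapped and the highest-priority result selected. The table contents mirror the example above (protocol 6, port 443, googlevideo.com); the table and function names are hypothetical and not part of the disclosed implementation.

```python
# Hypothetical mapping tables keyed by extracted feature value.
# Each entry maps a feature to (application/label, priority, tags).
IP_PROTO_MAP = {6: ("TCP", 2, [])}
PORT_MAP = {443: ("https", 50, [])}
DOMAIN_MAP = {
    "googlevideo.com": ("YouTube", 70, ["video", "streaming", "recreational", "safe"]),
}

def classify(features):
    """Map each extracted feature, then keep the highest-priority result."""
    candidates = []
    if "ip_proto" in features:
        candidates.append(IP_PROTO_MAP.get(features["ip_proto"]))
    if "port" in features:
        candidates.append(PORT_MAP.get(features["port"]))
    if "domain" in features:
        candidates.append(DOMAIN_MAP.get(features["domain"]))
    candidates = [c for c in candidates if c is not None]
    if not candidates:
        return None
    name, priority, tags = max(candidates, key=lambda c: c[1])
    # Concatenate a transport qualifier when https was also matched,
    # yielding e.g. "YouTube-https" as in the example of FIG. 10.
    if any(c[0] == "https" for c in candidates) and name != "https":
        name = f"{name}-https"
    return {"application": name, "priority": priority, "tags": tags}

print(classify({"ip_proto": 6, "port": 443, "domain": "googlevideo.com"}))
# {'application': 'YouTube-https', 'priority': 70, 'tags': ['video', 'streaming', 'recreational', 'safe']}
```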


In various embodiments, the feature extraction process may be performed on a first packet for a flow and/or on one or more subsequent packets for the same flow.


In various embodiments, a domain name and/or subnet can be inferred from an IP address. A DNS table may be consulted with information regarding corresponding domain names and IP addresses. However, since there are many IP addresses across different addressing systems, maintaining a local DNS table for every possible IP address is cumbersome. In some embodiments, caching or other similar methods can be used to maintain a subset of DNS information in a location accessible by a network appliance.


In another embodiment, a map can be maintained and distributed from a portal in the orchestrator to all appliances. The map may contain information such as a range of IP addresses or a subnet, the organization/owner of that range, and a geolocation for that range. For example, IP addresses from 0 to X1-1 may correspond to Company A located in San Francisco, Calif. IP addresses from X1 to X2-1 may correspond to Company B located in Chicago, Ill. IP addresses from X2 to X3-1 may correspond to Company C located in Miami, Fla. In this way, a subnet/IP can be inferred from a single IP address.
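Purely as an illustration (the concrete address ranges, owners, and lookup routine below are hypothetical stand-ins for the ranges 0 to X1-1, X1 to X2-1, and X2 to X3-1 described above), such a distributed map and its lookup might be sketched as follows:

```python
import bisect
from ipaddress import ip_address

# Hypothetical orchestrator-distributed map: (range_start, range_end, owner, geolocation).
RANGES = [
    (int(ip_address("8.0.0.0")),  int(ip_address("8.255.255.255")),  "Company A", "San Francisco, CA"),
    (int(ip_address("12.0.0.0")), int(ip_address("12.255.255.255")), "Company B", "Chicago, IL"),
    (int(ip_address("17.0.0.0")), int(ip_address("17.255.255.255")), "Company C", "Miami, FL"),
]
STARTS = [r[0] for r in RANGES]  # sorted range starts for binary search

def lookup(ip: str):
    """Return (owner, geolocation) for the range containing ip, if any."""
    addr = int(ip_address(ip))
    i = bisect.bisect_right(STARTS, addr) - 1
    if i >= 0 and RANGES[i][0] <= addr <= RANGES[i][1]:
        return RANGES[i][2], RANGES[i][3]
    return None

print(lookup("8.8.8.8"))  # ('Company A', 'San Francisco, CA')
```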


In a third embodiment, DNS snooping can be used to determine a mapping from a domain name to an IP address. A DNS server may be located in the private data center, at the application 235, or at any other location in the network. When a user computer, such as the user computing device 210 of FIG. 2, sends a request to the DNS server for the IP address associated with a domain name or website, the DNS server responds with the IP address and domain name. The appliance, such as appliance 220 of FIG. 2, can intercept the DNS response to user computing device 210 and create a cached table such that the information is available for future requests to that domain name. Further, this information can be aggregated across all appliances in the enterprise network and maintained in a central location such as in the orchestrator.
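A minimal sketch of the DNS snooping idea is shown below, assuming only that the appliance can observe intercepted DNS responses as (domain name, address) pairs; the class and method names are illustrative, not the appliance's actual implementation.

```python
# Illustrative DNS-snooping cache: the appliance observes DNS responses
# passing through it and records the IP-to-domain association so the
# domain is available when a first packet later targets that IP.
class DnsSnoopCache:
    def __init__(self):
        self._ip_to_domain = {}

    def observe_response(self, domain: str, addresses):
        """Called when a DNS response from the server is intercepted."""
        for addr in addresses:
            self._ip_to_domain[addr] = domain

    def domain_for(self, ip: str):
        """Infer a domain name from a destination IP seen in a first packet."""
        return self._ip_to_domain.get(ip)

cache = DnsSnoopCache()
cache.observe_response("googlevideo.com", ["172.217.0.10"])
print(cache.domain_for("172.217.0.10"))  # googlevideo.com
```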


In a fourth embodiment, deep packet inspection methods can be used to determine the domain name. For example, a first packet for a flow may have only header information. However, a fourth packet may have information about the destination domain name in the payload of the packet. Thus, deep packet inspection methods can yield the domain name associated with the destination IP address in the header. This information can be aggregated across all appliances and maintained in a central location such as in the orchestrator.
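Purely to illustrate the idea (and not the actual inspection engine), a simplified sketch might scan a packet payload for a hostname-like string; real deep packet inspection would parse the specific protocol, for example a TLS ClientHello SNI field or an HTTP Host header.

```python
import re

# Simplified illustration: look for a hostname-like token in the payload.
HOSTNAME_RE = re.compile(rb"(?:[a-z0-9-]+\.)+[a-z]{2,}", re.IGNORECASE)

def extract_domain(payload: bytes):
    """Return the first hostname-like string found in the payload, if any."""
    match = HOSTNAME_RE.search(payload)
    return match.group(0).decode("ascii", "ignore") if match else None

# e.g. a later packet carrying an HTTP Host header
print(extract_domain(b"GET / HTTP/1.1\r\nHost: www.googlevideo.com\r\n"))
# www.googlevideo.com
```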


V. System Setup



FIG. 14 illustrates an exemplary system 1400, within which the present disclosure can be implemented. The exemplary system 1400 includes a first location 110, a second location 120, and communication networks 130A-130D. While four communication networks are depicted in exemplary system 1400, there can be any number of communication networks, including just one. Additionally, system 1400 can include many locations, though only two are depicted in the exemplary figure for simplicity.


In the exemplary embodiment depicted in FIG. 14, the first location 110 includes computers 140 and a first appliance 150. In the first location 110, the computers 140 are linked to the first appliance 150. While only one appliance is depicted in first location 110, there can be multiple appliances, physical and/or virtual, at first location 110. In some embodiments, the first location is a branch location of an enterprise. While not depicted here, first location 110 can also comprise additional elements such as routers, switches, or any other physical or virtual computing equipment.


Computers 140 may be any type of computing device capable of accessing a communication network, such as a desktop computer, laptop computer, server, mobile phone, tablet, or any other “smart” device.


The first appliance 150 comprises hardware and/or software elements configured to receive data and optionally perform any type of processing before transmitting across a communication network.


As illustrated, the first appliance 150 is configured in-line (or serially) between the computers 140 and the router 160. The first appliance 150 intercepts network traffic between the computers 140 and the servers 170, in either direction.


In other embodiments, the first appliance 150 can be configured as an additional router, gateway, bridge, or be transparent on some or all interfaces. As a router, for example, the first appliance 150 appears to the computers 140 as an extra hop before the router 160. In some embodiments, the first appliance 150 provides redundant routing or peer routing with the router 160. Additionally, the first appliance 150 may provide failure mechanisms, such as, fail-to-open (e.g., no data access) or fail-to-wire (e.g., a direct connection to the router 160). If an appliance has multiple interfaces, it can be transparent on some interfaces, or act like a router, or act like a bridge on others. Alternatively, the appliance can be transparent on all interfaces, or appear as a router or bridge on all interfaces.


In FIG. 14, the first appliance 150 is linked to a router 160, which is coupled to communication networks 130A and 130B. While only one router 160 is depicted in exemplary system 1400, there can be multiple routers, switches, or other equipment (physical or virtual) present in system 1400, either within the first location 110 or outside of the first location 110. Typically, router 160 would be located within first location 110. In various embodiments, first appliance 150 may be in communication with communication networks 130C and 130D directly (on separate interfaces), instead of through router 160. While router 160 is depicted as being connected to two communication networks and first appliance 150 is also depicted as being connected to two communication networks, a person of ordinary skill in the art would understand that there can be any number of communication networks (including just one communication network) connected to the first location 110, either via router 160, via first appliance 150, or via another computing device. To illustrate that each of the access links is possible but not required in every embodiment, the access links 125 are shown as dashed lines in FIG. 14.


The second location 120 in exemplary system 1400 includes servers 170. While the term “server” is used herein, any type of computing device may be used in second location 120, as understood by a person of ordinary skill in the art. The server may also be a virtual machine. While not depicted in FIG. 14, second location 120 can optionally include at least one second appliance in addition to, or instead of, servers 170. Second location 120 can also include other components not depicted in FIG. 14, such as routers, switches, load-balancers or any other physical or virtual computing equipment. In some embodiments, the second location 120 is a central location or data center for an enterprise. In other embodiments, the second location 120 is a data center hosting a public web service or application.


The servers 170 are depicted in FIG. 14 as being linked to the communication networks 130A-130D via destination access links 145. In some embodiments, servers 170 may actually be in communication with one or more of the communication networks through a router, switch, second appliance, or other physical or virtual equipment. Further, while four destination access links 145 are depicted in FIG. 14, for four communication networks (130A-130D), there may actually be fewer (such as just one) or more communication networks connected to second location 120. To illustrate that each of the destination access links 145 is possible but not required in every embodiment, the destination access links 145 are shown as dashed lines in FIG. 14.


The communication networks 130A-130D comprise hardware and/or software elements that enable the exchange of information (e.g., voice, video and data) between the first location 110 and the second location 120. Some examples of the communication networks 130A-130D are a private wide-area network (WAN), the public Internet, a Multiprotocol Label Switching (MPLS) network, and a wireless LTE network. Typically, connections from the first location 110 to the communication networks 130A-130D (e.g., from router 160 and first appliance 150) are T1 lines (1.544 Mbps) or broadband connections such as digital subscriber lines (DSL) and cable modems. Other examples are MPLS lines, T3 lines (44.736 Mbps), OC3 (155 Mbps), OC48 (2.5 Gbps), fiber optic cables, or LTE wireless access connections. In various embodiments, each of the communication networks 130A-130D may be connected to at least one other communication network via at least one inter-ISP link 155. For example, communication network 130A may be connected to communication network 130B, 130C, and/or 130D via one or more inter-ISP links. Data may traverse more than one communication network along a path from first location 110 to second location 120. For example, traffic may flow from the first location 110 to communication network 130A, over inter-ISP link 155 to communication network 130B, and then to the second location 120.


The router 160 and first appliance 150 are optionally connected to the communication networks 130A-130D via access links 125, sometimes also referred to herein as network access links. The communication networks 130A-130D consist of routers, switches, and other internal components that make up provider links 135. The provider links 135 are managed by the network service providers such as an Internet Service Provider (ISP). The second location 120 can be connected to communication networks 130A-130D via destination access links 145. Access links 125, provider links 135, and destination access links 145 can be combined to make various network paths along which data travels between the first location 110 and the second location 120. The exemplary embodiment of FIG. 14 depicts two paths along various provider links 135 through each communication network. However, as understood by persons of ordinary skill in the art, there can be any number of network paths across one or more communication networks.


In addition, communication networks may be in communication with one another via inter-ISP link(s) 155. For example, data traveling through communication network 130A may also travel through communication network 130C before reaching second location 120. In various embodiments, data can travel through any one or more of the communication networks 130A-130D from first location 110 to second location 120, and vice versa. Generally, an inter-ISP link connects communication networks of different internet service providers, such as a link connecting Verizon LTE wireless network with Comcast broadband network. In some embodiments, an inter-ISP link can connect communication networks from the same internet service provider, such as a link connecting Verizon LTE wireless network with the Verizon Fire network.


The first appliance 150, along with any other appliances in system 1400, can be physical or virtual. In the exemplary embodiment of a virtual appliance, it can be in a virtual private cloud (VPC) managed by a cloud service provider, such as Amazon Web Services or others. An appliance in a customer data center can be physical or virtual. Similarly, the second location 120 may be a cloud service such as Amazon Web Services, Salesforce, or others.


As discussed herein, the communication networks 130A-130D can comprise multiple provider links, made up of routers and switches, connecting networked devices in different locations. These provider links, which together form various paths, are part of one or more core networks, sometimes referred to as an underlay network. In addition to these paths, there can also be tunnels connecting two networked devices. A virtual network, sometimes called an overlay network, can be used to transmit data across an underlay network, regardless of which Service Provider manages the routes or provider links. Data from connected devices can travel over this overlay network, which can consist of any number of tunnels or paths between each location.


In an exemplary embodiment, data from computers 140 at first location 110 may include voice, video, and data. This information can be transmitted by first appliance 150 over one or more communication networks 130A-130D to second location 120. In some embodiments, voice, video, and data may be received and transmitted on separate LAN or vLAN interfaces, and first appliance 150 can distinguish the traffic based on the LAN/vLAN interface at which the data was received.


In some embodiments, the system 1400 includes one or more secure tunnels between the first appliance 150 and servers 170, or optionally a second appliance at the second location. The secure tunnel may be utilized with encryption (e.g., IPsec), access control lists (ACLs), compression (such as header and payload compression), fragmentation/coalescing optimizations, and/or error detection and correction provided by an appliance.


In various embodiments, first location 110 and/or second location 120 can be a branch location, central location, private cloud network, data center, or any other type of location. In addition, multiple locations can be in communication with each other. As understood by persons of ordinary skill in the art, any type of network topology may be used.


The principles discussed herein are equally applicable to multiple first locations (not shown) and to multiple second locations (not shown). For example, the system 1400 may include multiple branch locations and/or multiple central locations coupled to one or more communication networks. System 1400 may also include many sites (first locations) in communication with many different public web services (second locations). Branch location/branch location communication, central location/central location communication, central location/cloud appliance communication, as well as multi-appliance and/or multi-node communication and bi-directional communication are further within the scope of the disclosure. However, for the sake of simplicity, FIG. 14 illustrates the system 1400 having a single first location 110 and a single second location 120.



FIG. 6 illustrates a block diagram of an appliance 650 (also referred to herein as network appliance), in an exemplary implementation of the invention. Appliance 650 may be similar to appliance 220 of FIG. 2 and first appliance 150 of FIG. 14, as discussed herein. The appliance 650 includes a processor 610, a memory 620, a WAN communication interface 630, a LAN communication interface 640, and database(s) 690. A system bus 680 links the processor 610, the memory 620, the WAN communication interface 630, the LAN communication interface 640, and the database(s) 690. When deployed in a branch location, line 660 links the WAN communication interface 630 to the router 160 (in FIG. 14), and line 670 links the LAN communication interface 640 to the computers 140 in FIG. 14.


The database(s) 690 comprises hardware and/or software elements configured to store data in an organized format to allow the processor 610 to create, modify, and retrieve the data. The hardware and/or software elements of the database(s) 690 may include storage devices, such as RAM, hard drives, optical drives, flash memory, and magnetic tape.


In some embodiments, some appliances comprise identical hardware and/or software elements. Alternatively, in other embodiments, some appliances, such as a second appliance, may include hardware and/or software elements providing additional processing, communication, and storage capacity.


Embodiments of the present invention also allow for centrally assigned policies to be implemented throughout an organization's entire network, to secure and control all WAN traffic for the organization. Software defined WAN (SD-WAN) overlay networks can be created independently from the physical network, and from each other, and in multiple layers. Topology, security, and forwarding rules can be specified independently for each overlay. This design allows for high-scale and secure application segmentation. Each overlay scales automatically as endpoints are added to the SD-WAN fabric, and configuration integrity is maintained as each site maps a local profile into a global overlay.


All of the overlay networks, labels, and corresponding ports, subnets and vLANs can be maintained in one or more databases in communication with an orchestrator device, as depicted in FIG. 7. The orchestrator 710 can be hardware and/or software, and be in communication with each of the networked devices, such as the network appliances, as well as in communication with the database(s) 720.


In exemplary embodiments, the orchestrator 710 may maintain information regarding the configuration of each appliance at each location (physical or virtual). In this way, the orchestrator 710 can create, manage and implement policies for network traffic throughout the network of connected appliances. For example, if a higher priority is designated for voice traffic, the orchestrator 710 can automatically configure the corresponding network appliances at all relevant locations accordingly.


By having knowledge of the configuration of each appliance in the network, the orchestrator 710 can also create and manage tunnels in the enterprise network, including tunnels to carry a particular type of network traffic between each source-destination appliance pair. The orchestrator 710 can automatically configure the enterprise network by determining which tunnels need to be set up, and automatically creating them based on the network nodes and overlays. The orchestrator 710 can also configure policies based on the application classification techniques described herein to preferentially steer certain types of applications over one path rather than over another path.
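As a hypothetical illustration of such application-aware steering (the policy table format, overlay names, and function below are assumptions, not the orchestrator's configuration syntax), a pushed policy might be evaluated on an appliance roughly as follows:

```python
# Hypothetical steering policy pushed from the orchestrator to appliances:
# match on predicted application name or tag, prefer a tunnel/overlay.
POLICIES = [
    {"match": {"tag": "voice"}, "prefer": "MPLS-overlay", "fallback": "Internet-overlay"},
    {"match": {"application": "YouTube-https"}, "prefer": "Internet-overlay", "fallback": "MPLS-overlay"},
]

def select_tunnel(classification, tunnels_up):
    """Pick a tunnel for a flow given its classification and the tunnels currently up."""
    for rule in POLICIES:
        m = rule["match"]
        if ("application" in m and m["application"] == classification.get("application")) or \
           ("tag" in m and m["tag"] in classification.get("tags", [])):
            for choice in (rule["prefer"], rule["fallback"]):
                if choice in tunnels_up:
                    return choice
    return "default-overlay"

print(select_tunnel({"application": "YouTube-https", "tags": ["video"]},
                    {"Internet-overlay", "MPLS-overlay"}))
# Internet-overlay
```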


In exemplary embodiments, network interfaces of a network appliance 650 can be designated on the WAN side and LAN side as processing a specific type of traffic, or traffic from specific applications. For example, a first WAN interface may connect to the public Internet, while a second WAN interface connects to an MPLS service. Both WAN interfaces can support encryption and the Internet uplink can be configured for Network Address Translation (NAT).



FIG. 15A depicts an exemplary message sequence chart between two network appliances at terminal ends of a tunnel, according to embodiments of the present disclosure. In the exemplary message sequence chart, appliance 1502 determines an application prediction and confidence level in step 1506 for a first data packet of a flow, utilizing methods discussed herein. Optionally, appliance 1504 may also determine an application prediction and confidence level for the flow in step 1508.


Appliance 1502 may then transmit a TCP syn packet 1510 to appliance 1504 to initiate a TCP handshake via the tunnel. The TCP syn packet 1510 may comprise the contents of the TCP syn, along with supplementary header information. Exemplary supplementary header information of a packet header is depicted in FIG. 15B and may comprise the predicted application name, as well as a confidence level for that prediction. While FIG. 15B depicts only these two components of supplementary header information, a packet header may comprise additional information as well, as would be understood by persons of ordinary skill in the art.
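The supplementary header information of FIG. 15B could be represented, purely as a sketch, along the following lines; the field layout and encoding are assumptions for illustration and not the on-wire format used by the appliances.

```python
import struct

def pack_supplementary_header(app_name: str, confidence: int) -> bytes:
    """Encode a predicted application name and a confidence level.
    Layout (assumed for illustration only): 1-byte confidence, 1-byte name length, name bytes."""
    name = app_name.encode("utf-8")[:255]
    return struct.pack("!BB", confidence, len(name)) + name

def unpack_supplementary_header(data: bytes):
    """Decode the (application name, confidence) pair from the assumed layout."""
    confidence, length = struct.unpack("!BB", data[:2])
    return data[2:2 + length].decode("utf-8"), confidence

hdr = pack_supplementary_header("YouTube-https", 80)
print(unpack_supplementary_header(hdr))  # ('YouTube-https', 80)
```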


Returning to FIG. 15A, appliance 1504 may compare the application prediction and confidence from the supplementary header information of TCP syn packet 1510 with its own determined application prediction and confidence in step 1508. Generally, receiving appliance 1504 is not privy to communications that appliance 1502 may have with other devices (e.g., a DNS server, user device, etc.) so receiving appliance 1504 may not have enough information to make a prediction with a high level of confidence. As such, application prediction 1508 made by the receiving appliance 1504 may be different than application prediction 1506 made by the transmitting appliance 1502.


In step 1512, receiving appliance 1504 may compare its own application prediction and confidence with the application prediction and confidence received from transmitting appliance 1502, and keep the higher confidence prediction.


Appliance 1504 may then transmit a TCP syn/ack packet 1514 to appliance 1502. The packet header of TCP syn/ack packet 1514 may likewise include supplementary header information with an application prediction and confidence level for the data stream. Upon receipt of this information, appliance 1502 may compare its own application prediction and confidence from step 1506 with the received information in TCP syn/ack packet 1514, and keep the higher confidence prediction. In this way, the two network appliances at the ends of a tunnel work together to find the best confidence match across the two network appliances (as opposed to each network appliance operating independently). This in turn improves accuracy and the sharing of the most accurate data among network appliances connected in the communication network.
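The "keep the higher confidence prediction" step applied at each endpoint reduces to a simple comparison; a minimal sketch, assuming confidence is expressed as a comparable numeric value:

```python
def merge_prediction(local_app, local_conf, peer_app, peer_conf):
    """Keep whichever prediction carries the higher confidence.
    Applied symmetrically at both tunnel endpoints (step 1512 and upon receipt of syn/ack 1514)."""
    if peer_conf > local_conf:
        return peer_app, peer_conf
    return local_app, local_conf

# Receiving appliance 1504: its own weaker guess vs. the prediction in the syn packet header.
print(merge_prediction("unknown-https", 30, "YouTube-https", 80))  # ('YouTube-https', 80)
```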


As would be understood by persons of ordinary skill in the art, although FIG. 15A depicts an exchange of a TCP syn packet 1510 and a TCP syn/ack packet 1514 between appliance 1502 and appliance 1504, the same process can be applicable to any first and second packets of a flow between the two appliances. That is, the first and second packets of a flow may be mid-stream TCP traffic (and thus not specifically a syn packet and syn/ack packet exchange). In other embodiments, the first and second packets of a flow between the two appliances may consist of non-TCP traffic, such as UDP or any other protocol.


Further, while FIG. 15A depicts appliance 1502 as initiating a data transmission and appliance 1504 as receiving the data transmission, the two appliances are terminal ends of a network tunnel and data traffic can flow in both directions between them. Additionally, one or more of appliance 1502 and appliance 1504 may be located at a midpoint along a network tunnel and not specifically at a terminal end.


Thus, methods and systems for multi-level learning for predicting and classifying traffic flows from first packet data are disclosed. Although embodiments have been described with reference to specific examples, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Therefore, these and other variations upon the exemplary embodiments are intended to be covered by the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A method for predicting and classifying traffic flows at a first network appliance from first packet data, the method comprising: receiving at a first network appliance, a first data packet of a first flow to be transmitted across a network; extracting information from a header of the first data packet; predicting an associated application name for the first flow based in part on the extracted information from the header of the first data packet; determining a confidence level for the predicted application name for the first flow based in part on the extracted information from the first data packet; selecting by the first network appliance a network tunnel based in part on the application prediction; transmitting the first packet of the first flow by the first network appliance over the selected network tunnel with a packet header with supplementary header information, to a second network appliance; receiving a second packet of the first flow at the first network appliance from the second network appliance, via the selected network tunnel, the second packet of the first flow comprising a packet header with supplementary header information with a predicted application name and confidence level for the first flow; and updating the predicted application name for the first flow at the first appliance, if the confidence level received from the second appliance is greater than the confidence level generated at the first network appliance.
  • 2. The method of claim 1, further comprising: verifying that the application prediction at the first network appliance meets a confidence threshold prior to transmitting the first packet of the first flow to the second network appliance.
  • 3. The method of claim 1, further comprising: receiving a third data packet of the first flow; determining an application name from payload information of the third data packet of the first flow; verifying that the application prediction based in part on the first packet was correct; and updating confidence information in a data structure at the first network appliance for the first flow.
  • 4. The method of claim 1, further comprising: receiving a third data packet of the first flow; determining an application name from payload information of the third data packet of the first flow; determining that the application prediction based in part on the first packet was incorrect; and updating confidence information in a data structure at the first network appliance for the first flow.
  • 5. The method of claim 1, further comprising: performing network address translation based in part on the selected network tunnel to change at least one of a source network address, destination network address, destination port, and a source port in packets of the first flow.
  • 6. The method of claim 1, wherein the first packet is a TCP syn packet.
  • 7. The method of claim 1, wherein the second packet is a TCP syn/ack packet.
  • 8. A method for predicting and classifying traffic flows at a second network appliance, the method comprising: receiving at a second network appliance, a first data packet of a first flow transmitted across a network tunnel by a first network appliance, the first data packet comprising a packet header with supplementary header information with a predicted application name and confidence level for the first flow; generating at the second network appliance, a predicted application name and confidence level for the first flow; updating the predicted application name for the first flow at the second network appliance, if the confidence level received from the first appliance is greater than the confidence level generated at the second network appliance.
  • 9. The method of claim 8, further comprising: transmitting a second data packet of the first flow by the second network appliance to the first network appliance via the selected network tunnel, the second data packet comprising a packet header with supplementary header information with the updated predicted application name and confidence level for the first flow.
  • 10. The method of claim 9, further comprising: verifying that the application prediction at the second network appliance meets a confidence threshold prior to transmitting the second packet of the first flow to the first network appliance.
  • 11. The method of claim 1, wherein the first packet is a TCP syn packet.
  • 12. The method of claim 1, wherein the second packet is a TCP syn/ack packet.
  • 13. The method of claim 8, further comprising: receiving a third data packet of the first flow; determining an application name from payload information of the third data packet of the first flow; verifying that the predicted application name based in part on the first packet was correct; and updating confidence information in a data structure at the second network appliance for the first flow.
  • 14. The method of claim 8, further comprising: receiving a third data packet of the first flow; determining an application name from payload information of the third data packet of the first flow; determining that the predicted application name based in part on the first packet was incorrect; and updating confidence information in a data structure at the second network appliance for the first flow.
  • 15. The method of claim 8, wherein the extracted information from the header of the first data packet comprises at least one of source IP address, destination IP address, source port, destination port, and protocol.
  • 16. The method of claim 8, further comprising: performing network address translation based in part on the selected network tunnel to change at least one of a source network address, destination network address, destination port, and a source port in packets of the first flow.
  • 17. The method of claim 8, wherein the second network appliance uses a learning algorithm of a neural network to infer the application name for the first packet.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation-in-part of, and claims the priority benefit of, U.S. patent application Ser. No. 15/632,008 filed on Jun. 23, 2017, which in turn is a continuation-in-part of, and claims the priority benefit of, U.S. patent application Ser. No. 15/425,798 filed on Feb. 6, 2017 and entitled "Multi-level Learning for Classifying Traffic Flows". The disclosures of the above-referenced applications are incorporated herein in their entirety for all purposes.

Corrected Notice of Allowability, dated Mar. 14, 2016, U.S. Appl. No. 14/677,841, filed Apr. 2, 2015.
Corrected Notice of Allowability, dated Mar. 7, 2016, U.S. Appl. No. 14/543,781, filed Nov. 17, 2014.
Decision on Appeal, dated Jun. 28, 2012, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Decision on Appeal, dated Nov. 14, 2012, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Decision on Appeal, dated Sep. 17, 2012, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Examiner's Answer to Appeal Brief, dated Oct. 14, 2009, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Examiner's Answer to Appeal Brief, dated Oct. 27, 2009, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Examiner's Answer to Appeal Brief, dated Oct. 27, 2009, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Final Office Action, dated Apr. 1, 2014, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Final Office Action, dated Apr. 15, 2013, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Apr. 18, 2012, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated Apr. 23, 2012, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Aug. 12, 2011, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Aug. 17, 2011, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Aug. 7, 2009, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Dec. 18, 2014, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Dec. 21, 2015, U.S. Appl. No. 14/679,965, filed Apr. 6, 2015.
Final Office Action, dated Dec. 31, 2008, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Final Office Action, dated Dec. 8, 2010, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Final Office Action, dated Feb. 1, 2013, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Final Office Action, dated Feb. 10, 2012, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Final Office Action, dated Feb. 17, 2017, U.S. Appl. No. 15/148,933, filed May 6, 2016.
Final Office Action, dated Feb. 22, 2008, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Final Office Action, dated Feb. 22, 2012, U.S. Appl. No. 13/112,936, filed May 20, 2011.
Final Office Action, dated Feb. 3, 2011, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated Feb. 4, 2013, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Jan. 11, 2013, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Final Office Action, dated Jan. 11, 2016, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Jan. 12, 2009, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Final Office Action, dated Jan. 12, 2015, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Final Office Action, dated Jan. 14, 2014, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Final Office Action, dated Jan. 22, 2008, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Final Office Action, dated Jan. 5, 2009, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Final Office Action, dated Jan. 9, 2008, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Final Office Action, dated Jul. 13, 2010, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Jul. 14, 2015, U.S. Appl. No. 13/621,534, filed Sep. 17, 2012.
Final Office Action, dated Jul. 17, 2013, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Final Office Action, dated Jul. 19, 2016, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Final Office Action, dated Jul. 22, 2010, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Jul. 26, 2016, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Final Office Action, dated Mar. 16, 2012, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Mar. 25, 2014, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Mar. 30, 2011, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Final Office Action, dated May 11, 2009, U.S. Appl. No. 11/263,755, filed Oct. 31, 2005.
Final Office Action, dated May 14, 2010, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated May 25, 2012, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Final Office Action, dated May 3, 2017, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Final Office Action, dated May 5, 2010, U.S. Appl. No. 11/903,416, filed Sep. 20, 2007.
Final Office Action, dated Nov. 2, 2012, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Final Office Action, dated Nov. 20, 2012, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated Nov. 21, 2016, U.S. Appl. No. 14/447,505, filed Jul. 30, 2014.
Final Office Action, dated Nov. 4, 2010, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Final Office Action, dated Nov. 9, 2010, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Oct. 12, 2010, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Final Office Action, dated Oct. 12, 2011, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Final Office Action, dated Oct. 13, 2011, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Oct. 4, 2016, U.S. Appl. No. 15/091,533, filed Apr. 5, 2016.
Final Office Action, dated Oct. 5, 2017, U.S. Appl. No. 15/148,933, filed May 6, 2016.
Final Office Action, dated Sep. 1, 2009, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Sep. 18, 2015, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Final Office Action, dated Sep. 23, 2010, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Final Office Action, dated Sep. 26, 2013, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Sep. 30, 2011, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Final Written Decision, dated Jun. 9, 2015, Inter Partes Review Case No. IPR2014-00245, pp. 1-40.
Non-Final Office Action, dated Apr. 2, 2013, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Non-Final Office Action, dated Apr. 27, 2017, U.S. Appl. No. 14/447,505, filed Jul. 30, 2014.
Non-Final Office Action, dated Aug. 10, 2016, U.S. Appl. No. 15/148,933, filed May 6, 2016.
Non-Final Office Action, dated Aug. 11, 2015, U.S. Appl. No. 14/677,841, filed Apr. 2, 2015.
Non-Final Office Action, dated Aug. 12, 2010, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Non-Final Office Action, dated Aug. 14, 2013, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Aug. 14, 2013, U.S. Appl. No. 13/917,517, filed Jun. 13, 2013.
Non-Final Office Action, dated Aug. 18, 2015, U.S. Appl. No. 14/543,781, filed Nov. 17, 2014.
Non-Final Office Action, dated Aug. 24, 2007, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Non-Final Office Action, dated Aug. 24, 2007, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Non-Final Office Action, dated Aug. 24, 2007, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Non-Final Office Action, dated Aug. 26, 2016, U.S. Appl. No. 13/621,534, filed Sep. 17, 2012.
Non-Final Office Action, dated Aug. 28, 2012, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Non-Final Office Action, dated Dec. 1, 2011, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Non-Final Office Action, dated Dec. 15, 2015, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Non-Final Office Action, dated Dec. 16, 2015, U.S. Appl. No. 14/859,179, filed Sep. 18, 2015.
Non-Final Office Action, dated Dec. 20, 2011, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Dec. 30, 2011, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Non-Final Office Action, dated May 4, 2017, U.S. Appl. No. 14/811,482, filed Jul. 28, 2015.
Non-Final Office Action, dated May 6, 2015, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Non-Final Office Action, dated May 6, 2016, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Non-Final Office Action, dated Nov. 17, 2008, U.S. Appl. No. 11/263,755, filed Oct. 31, 2005.
Non-Final Office Action, dated Nov. 2, 2017, U.S. Appl. No. 15/403,116, filed Jan. 10, 2017.
Non-Final Office Action, dated Nov. 26, 2014, U.S. Appl. No. 14/333,486, filed Jul. 16, 2014.
Non-Final Office Action, dated Nov. 6, 2012, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Non-Final Office Action, dated Oct. 1, 2014, U.S. Appl. No. 14/190,940, filed Feb. 26, 2014.
Non-Final Office Action, dated Oct. 13, 2010, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Oct. 20, 2009, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Oct. 22, 2013, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Non-Final Office Action, dated Oct. 4, 2011, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Non-Final Office Action, dated Oct. 6, 2016, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Non-Final Office Action, dated Oct. 7, 2010, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Oct. 9, 2013, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Sep. 10, 2013, U.S. Appl. No. 13/757,548, filed Feb. 1, 2013.
Non-Final Office Action, dated Sep. 11, 2017, U.S. Appl. No. 15/148,671, filed May 6, 2016.
Non-Final Office Action, dated Sep. 13, 2012, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Sep. 17, 2008, U.S. Appl. No. 11/357,657, filed Feb. 16, 2006.
Non-Final Office Action, dated Sep. 20, 2012, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Non-Final Office Action, dated Sep. 22, 2011, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Sep. 25, 2012, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Sep. 26, 2008, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Sep. 27, 2011, U.S. Appl. No. 13/112,936, filed May 20, 2011.
Notice of Allowance, dated Apr. 14, 2014, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Notice of Allowance, dated Apr. 21, 2011, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Notice of Allowance, dated Apr. 28, 2009, U.S. Appl. No. 11/357,657, filed Feb. 16, 2006.
Notice of Allowance, dated Aug. 24, 2016, U.S. Appl. No. 14/679,965, filed Apr. 6, 2015.
Notice of Allowance, dated Aug. 30, 2012, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Notice of Allowance, dated Aug. 31, 2009, U.S. Appl. No. 11/724,800, filed Mar. 15, 2007.
Notice of Allowance, dated Dec. 22, 2014, U.S. Appl. No. 14/333,486, filed Jul. 16, 2014.
Notice of Allowance, dated Dec. 26, 2012, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Notice of Allowance, dated Dec. 3, 2009, U.S. Appl. No. 11/796,239, filed Apr. 27, 2007.
Notice of Allowance, dated Dec. 5, 2016, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Notice of Allowance, dated Dec. 9, 2010, U.S. Appl. No. 12/622,324, filed Nov. 19, 2009.
Notice of Allowance, dated Feb. 11, 2011, U.S. Appl. No. 11/903,416, filed Sep. 20, 2007.
Notice of Allowance, dated Feb. 14, 2014, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Notice of Allowance, dated Feb. 16, 2016, U.S. Appl. No. 14/248,167, filed Apr. 8, 2014.
Notice of Allowance, dated Feb. 19, 2013, U.S. Appl. No. 13/482,321, filed May 29, 2012.
Notice of Allowance, dated Feb. 29, 2012, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Notice of Allowance, dated Feb. 8, 2016, U.S. Appl. No. 14/543,781, filed Nov. 17, 2014.
Notice of Allowance, dated Jan. 16, 2014, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Notice of Allowance, dated Jan. 2, 2014, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Notice of Allowance, dated Jan. 20, 2011, U.S. Appl. No. 12/622,324, filed Nov. 19, 2009.
Notice of Allowance, dated Jan. 23, 2015, U.S. Appl. No. 14/248,188, filed Apr. 8, 2014.
Notice of Allowance, dated Jan. 3, 2014, U.S. Appl. No. 13/757,548, filed Feb. 1, 2013.
Notice of Allowance, dated Jul. 27, 2015, U.S. Appl. No. 14/549,425, filed Nov. 20, 2014.
Notice of Allowance, dated Jun. 10, 2014, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Notice of Allowance, dated Jun. 3, 2015, U.S. Appl. No. 14/548,195, filed Nov. 19, 2014.
Notice of Allowance, dated Jun. 3, 2016, U.S. Appl. No. 14/859,179, filed Sep. 18, 2015.
Notice of Allowance, dated Mar. 16, 2012, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Notice of Allowance, dated Mar. 16, 2015, U.S. Appl. No. 14/190,940, filed Feb. 26, 2014.
Notice of Allowance, dated Mar. 2, 2016, U.S. Appl. No. 14/677,841, filed Apr. 2, 2015.
Notice of Allowance, dated Mar. 21, 2013, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Notice of Allowance, dated Mar. 22, 2017, U.S. Appl. No. 13/621,534, filed Sep. 17, 2012.
Notice of Allowance, dated Mar. 23, 2017, U.S. Appl. No. 15/091,533, filed Apr. 5, 2016.
Notice of Allowance, dated Mar. 26, 2012, U.S. Appl. No. 13/112,936, filed May 20, 2011.
Notice of Allowance, dated May 14, 2013, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Notice of Allowance, dated May 21, 2015, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Notice of Allowance, dated Nov. 12, 2011, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Notice of Allowance, dated Nov. 16, 2016, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Notice of Allowance, dated Nov. 23, 2016, U.S. Appl. No. 14/067,619, filed Oct. 30, 2013.
Notice of Allowance, dated Nov. 25, 2013, U.S. Appl. No. 13/917,517, filed Jun. 13, 2013.
Notice of Allowance, dated Oct. 23, 2012, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Notice of Allowance, dated Oct. 25, 2017, U.S. Appl. No. 14/447,505, filed Jul. 30, 2014.
Notice of Allowance, dated Oct. 5, 2015, U.S. Appl. No. 14/734,949, filed Jun. 9, 2015.
Notice of Allowance, dated Oct. 6, 2014, U.S. Appl. No. 14/270,101, filed May 5, 2014.
Notice of Allowance, dated Sep. 12, 2014, U.S. Appl. No. 13/657,733, filed Oct. 22, 2012.
Notice of Allowance, dated Sep. 26, 2013, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Notice of Allowance, dated Sep. 5, 2014, U.S. Appl. No. 14/248,229, filed Apr. 8, 2014.
Notice of Allowance, dated Sep. 5, 2017, U.S. Appl. No. 14/811,482, filed Jul. 28, 2015.
Notice of Allowance, dated Sep. 8, 2009, U.S. Appl. No. 11/263,755, filed Oct. 31, 2005.
Notice of Allowance, dated Sep. 8, 2017, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Request for Trial Granted, dated Jan. 2, 2014, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Request for Trial Granted, dated Jan. 2, 2014, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Request for Trial Granted, dated Jun. 11, 2014, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Supplemental Notice of Allowability, dated Oct. 9, 2014, U.S. Appl. No. 13/657,733, filed Oct. 22, 2012.
Related Publications (1)
Number            Date        Country
20190260683 A1    Aug 2019    US
Continuation in Parts (2)
Number            Date        Country
Parent 15632008   Jun 2017    US
Child 16398255                US
Parent 15425798   Feb 2017    US
Child 15632008                US