Aggregating select network traffic statistics

Information

  • Patent Grant
  • Patent Number
    10,432,484
  • Date Filed
    Monday, June 13, 2016
  • Date Issued
    Tuesday, October 1, 2019
Abstract
Disclosed herein are systems and methods for the collection, aggregation, and processing of network traffic statistics for a plurality of network appliances in a wide area network. Select network traffic statistics can be collected and associated with a hierarchical string, and aggregated over time. In this way, only information that is likely to be relevant is gathered and maintained, allowing for the maintenance of select network traffic statistics for large-scale operations.
Description
TECHNICAL FIELD

This disclosure relates generally to the collection, aggregation, and processing of network traffic statistics for a plurality of network appliances.


BACKGROUND

The approaches described in this section could be pursued, but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.


An increasing number of network appliances, physical and virtual, are deployed in communication networks such as wide area networks (WAN). For each network appliance, it may be desirable to monitor attributes and statistics of the data traffic handled by the device. For example, information can be collected regarding source IP addresses, destination IP addresses, traffic type, port numbers, etc. for the traffic that passes through the network appliance. Typically this information is collected for each data flow using industry standards such as NetFlow and IPFIX. The collected data is transported across the network to a collection engine, stored in a database, and can be utilized for running queries and generating reports regarding the network.


Since there can be any number of data flows processed by a network appliance each minute (hundreds, thousands, or even millions), this results in a large volume of data that is collected each minute, for each network appliance. As the number of network appliances in a communication network increases, the amount of data generated can quickly become unmanageable. Moreover, transporting all of this data across the network from each network appliance to the collection engine can be a significant burden, as can storing and maintaining a database with all of the data. Further, it may take longer to run a query and generate a report since the amount of data to be processed and analyzed is so large.


Thus, there is a need for a more efficient mechanism for collecting and storing network traffic statistics for a large number of network appliances in a communication network.


SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


In various exemplary embodiments of the present disclosure, a system for aggregating select network traffic statistics is disclosed. The system comprises a plurality of network appliances in a communication network, each configured to collect a plurality of flow attributes for network traffic through the network appliance, build a plurality of hierarchical strings of network traffic flow attributes with extracted attribute values of those flow attributes, extract at least one network metric for at least one network characteristic associated with each of the plurality of hierarchical strings, aggregate the at least one network metric for the at least one network characteristic over a plurality of flows, and transmit the aggregated information to a network information collector in communication with each network appliance; and the network information collector configured to receive the information from each network appliance and provide the information to a user on a graphical user display in response to the user running a query on the received information.


In other embodiments, a method for aggregating select network traffic statistics for each of a plurality of network appliances connected in a communication network is disclosed. The method comprises: for each flow from a network appliance, extracting an attribute value of a first flow attribute; for each flow from the network appliance, extracting an attribute value of a second flow attribute; building at least one hierarchical string with the extracted attribute values; extracting at least one network metric for at least one network characteristic associated with the at least one hierarchical string; aggregating the at least one network metric for the at least one network characteristic over a plurality of flows; and transmitting the at least one hierarchical string and associated aggregated network metrics to a network information collector in communication with the network appliance.


Other features, examples, and embodiments are described below.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are illustrated by way of example and not by limitation in the figures of the accompanying drawings, in which like references indicate similar elements.



FIG. 1A depicts an exemplary system of the prior art.



FIG. 1B depicts an exemplary system within which the present disclosure can be implemented.



FIG. 2 illustrates a block diagram of a network appliance, in an exemplary implementation of the disclosure.



FIG. 3 depicts an exemplary flow table at a network appliance.



FIG. 4A depicts an exemplary accumulating map at a network appliance.



FIG. 4B depicts exemplary information from a row of an accumulating map.



FIG. 5A depicts an exemplary sorting via bins for an accumulating map.



FIG. 5B depicts an exemplary eviction policy for an accumulating map.



FIG. 6 depicts an exemplary method for building a hierarchical string.





DETAILED DESCRIPTION

The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations, in accordance with exemplary embodiments. These exemplary embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.


The embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system containing one or more computers, or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium, such as a disk drive, or computer-readable medium.


The embodiments described herein relate to the collection, aggregation, and processing of network traffic statistics for a plurality of network appliances.



FIG. 1A depicts an exemplary system 100 within which embodiments of the prior art are implemented. The system comprises a plurality of network appliances 110 in communication with a flow information collector 120 over one or more wired or wireless communication network(s) 160. The flow information collector 120 is further in communication with one or more flow database(s) 125, which in turn is in communication with a reporting engine 140 that is accessible by a user 150.


Network appliance 110 collects information about network flows that are processed through the appliance and maintains flow records 112. These flow records are transmitted to the flow information collector 120 and maintained in flow database 125. User 150 can access information from these flow records 112 via reporting engine 140.



FIG. 1B depicts an exemplary system 170 within which the present disclosure can be implemented. The system comprises a plurality of network appliances 110 in communication with a network information collector 180 over one or more wired or wireless communication network(s) 160. The network information collector 180 is further in communication with one or more database(s) 130, which in turn is in communication with a reporting engine 140 that is accessible by a user 150. While network information collector 180, database(s) 130, and reporting engine 140 are depicted in the figure as separate, one or more of these engines can be part of the same computing machine or distributed across many computers.


In a wide area network, there can be multiple network appliances deployed in one or more geographic locations. Each network appliance 110 comprises hardware and/or software elements configured to receive data and optionally apply any type of processing to the data, including but not limited to WAN optimization techniques, before transmitting it to another appliance. In various embodiments, the network appliance 110 can be configured as an additional router or gateway. If a network appliance has multiple interfaces, it can be transparent on some interfaces and act like a router/bridge on others. Alternatively, the network appliance can be transparent on all interfaces, or appear as a router/bridge on all interfaces. In some embodiments, network traffic can be intercepted by another device and mirrored (copied) onto network appliance 110. The network appliance 110 may further be either physical or virtual. A virtual network appliance can be in a virtual private cloud (not shown), managed by a cloud service provider, such as Amazon Web Services, or others.


Network appliance 110 collects information about network flows that are processed through the appliance in flow records 112. From these flow records 112, network appliance 110 further generates an accumulating map 114 containing select information from many flow records 112 aggregated over a certain time period. The flow records 112 and accumulating map 114 generated at network appliance 110 are discussed in further detail below with respect to FIGS. 3 and 4.


At certain time intervals, network appliance 110 transmits information from the accumulating map 114 (and not flow records 112) to network information collector 180 and maintains this information in one or more database(s) 130. User 150 can access information from these accumulating maps via reporting engine 140, or in some instances user 150 can access information from these accumulating maps directly from a network appliance 110.



FIG. 2 illustrates a block diagram of a network appliance 110, in an exemplary implementation of the disclosure. The network appliance 110 includes a processor 210, a memory 220, a WAN communication interface 230, a LAN communication interface 240, and a database 250. A system bus 280 links the processor 210, the memory 220, the WAN communication interface 230, the LAN communication interface 240, and the database 250. Line 260 links the WAN communication interface 230 to another device, such as another appliance, router, or gateway, and line 270 links the LAN communication interface 240 to a user computing device, or other networking device. While network appliance 110 is depicted in FIG. 2 as having these exemplary components, the appliance may have additional or fewer components.


The database 250 comprises hardware and/or software elements configured to store data in an organized format to allow the processor 210 to create, modify, and retrieve the data. The hardware and/or software elements of the database 250 may include storage devices, such as RAM, hard drives, optical drives, flash memory, and magnetic tape.


In some embodiments, some network appliances comprise identical hardware and/or software elements. Alternatively, in other embodiments, some network appliances may include hardware and/or software elements providing additional processing, communication, and storage capacity.


Each network appliance 110 can be in communication with at least one other network appliance 110, whether in the same geographic location, different geographic location, private cloud network, customer datacenter, or any other location. As understood by persons of ordinary skill in the art, any type of network topology may be used. There can be one or more secure tunnels between one or more network appliances. The secure tunnel may be utilized with encryption (e.g., IPsec), access control lists (ACLs), compression (such as header and payload compression), fragmentation/coalescing optimizations and/or error detection and correction provided by an appliance.


A network appliance 110 can further have a software program operating in the background that tracks its activity and performance. For example, information about data flows that are processed by the network appliance 110 can be collected. Any type of information about a flow can be collected, such as header information (source port, destination port, source address, destination address, protocol, etc.), packet count, byte count, timestamp, traffic type, or any other flow attribute. This information can be stored in a flow table 300 at the network appliance 110. Flow tables will be discussed in further detail below, with respect to FIG. 3.


In exemplary embodiments, select information from flow table 300 is aggregated and populated into an accumulating map, which is discussed in further detail below with respect to FIG. 4. Information from the accumulating map is transmitted by network appliance 110 across communication networks(s) 160 to network information collector 180. In this way, the information regarding flows processed by network appliance 110 is not transmitted directly to network information collector 180, but rather a condensed and aggregated version of selected flow information is transmitted across the network, creating less network traffic.


After a flow table 300 is used to populate an accumulating map, or on a certain periodic basis or activation of a condition, flow table 300 may be discarded by network appliance 110 and a new flow table is started. Similarly, after an accumulating map 400 is received by network information collector 180, or on a certain periodic basis or activation of a condition, accumulating map 400 may be discarded by network appliance 110 and a new accumulating map is started.


Returning to FIG. 1B, network information collector 180 comprises hardware and/or software elements, including at least one processor, for receiving data from network appliance 110 and processing it. Network information collector 180 may process data received from network appliance 110 and store the data in database(s) 130. In various embodiments, database(s) 130 is a relational database that stores the information from accumulating map 400. The information can be stored directly into database(s) 130 or separated into columns and then stored in database(s) 130.


Database(s) 130 is further in communication with reporting engine 140. Reporting engine 140 comprises hardware and/or software elements, including at least one processor, for querying data in database(s) 130, processing it, and presenting it to user 150 via a graphical user interface. In this way, user 150 can run any type of query on the stored data. For example, a user can run a query requesting information on the most visited websites, or a “top talkers” report, as discussed in further detail below.



FIG. 3 depicts an exemplary flow table 300 at network appliance 110 for flows 1 through N, with N representing any number. The flow table contains one or more rows of information for each flow that is processed through network appliance 110. Data packets transmitted and received between a single user and a single website that the user is browsing can be parsed into multiple flows. Thus, one browsing session for a user on a website may comprise many flows. Typically a TCP flow begins with a SYN packet and ends with a FIN packet. Other methods can be used for determining the start and end of non-TCP flows. The attributes of each of these flows, while they may be identical or substantially similar, are by convention stored in different rows of flow table 300 since they are technically different flows.


In exemplary embodiments, flow table 300 may collect certain information about the flow, such as header information 310, network information 320, and other information 330. As would be understood by a person of ordinary skill in the art, flow table 300 can comprise fewer or additional fields than depicted in FIG. 3. Moreover, even though header information 310 is depicted as having three entries in exemplary flow table 300, there can be fewer or additional entries for header information. Similarly, there can be fewer or additional entries for network information 320 and for other information 330 than the number of entries depicted in exemplary flow table 300.


Header information 310 can comprise any type of information found in a packet header, for example, source port, destination port, source address (such as IP address), destination address, or protocol. Network information 320 can comprise any type of information regarding the network, such as a number of bytes received or a number of bytes transmitted during that flow. Further, network information 320 can contain information regarding other characteristics such as loss, latency, jitter, re-ordering, etc. Flow table 300 may store a sum of the number of packets or bytes for each characteristic, or a value computed with a mathematical operator other than the sum, such as the maximum, minimum, mean, or median. Other information 330 can comprise any other type of information regarding the flow, such as traffic type or domain name (instead of address).


In an example embodiment, entry 340 of flow N is the source port for the flow, entry 345 is the destination port for the flow, and entry 350 is the destination IP address for the flow. Entry 355 is the domain name for the website that flow N originates from or is directed to, entry 360 denotes that the flow is for a voice traffic type, and entry 365 is an application name (for example from deep packet inspection (DPI)). Entry 370 contains the number of packets in the flow and entry 375 contains a number of bytes in the flow.
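
As a concrete illustration of the kind of record a flow table may hold, the sketch below models one row with a small Python data class. The field names (src_ip, domain, traffic_type, and so on) are illustrative assumptions, not the required schema of flow table 300.

```python
from dataclasses import dataclass, field
import time

@dataclass
class FlowRecord:
    """One row of an exemplary flow table (field names are illustrative)."""
    # Header information 310
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    # Other information 330
    domain: str = ""
    traffic_type: str = ""
    application: str = ""
    # Network information 320 (accumulated per flow)
    packets: int = 0
    bytes: int = 0
    first_seen: float = field(default_factory=time.time)

# A hypothetical flow similar in spirit to entries 340-375 of FIG. 3.
flow_n = FlowRecord(
    src_ip="10.0.0.5", dst_ip="93.184.216.34",
    src_port=52344, dst_port=443, protocol="TCP",
    domain="sampledomain1", traffic_type="voice",
    application="voip-app", packets=13, bytes=5400,
)
```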


The flow information regarding every flow is collected by the network appliance 110 at all times, in the background. A network appliance 110 could have one million flows every minute, in which case a flow table for one minute of data for network appliance 110 would have one million rows. Over time, this amount of data becomes cumbersome to process, synthesize, and manipulate. Conventional systems may transport a flow table directly to a flow information collector, or to reduce the amount of data, retain only a fraction of the records from the flow table as a sample. In contrast, embodiments of the present disclosure reduce the amount of data to be processed regarding flows, with minimal information loss, by synthesizing selected information from flow table 300 into an accumulating map. This synthesis can occur on a periodic basis (such as every minute, every 5 minutes, every hour, etc.), or upon the meeting of a condition, such as number of flows recorded in the flow table 300, network status, or any other condition.



FIG. 4A depicts exemplary accumulating maps that are constructed from information from flow table 300. A string of information is built in a hierarchical manner from information in flow table 300. A network administrator can determine one or more strings of information to be gathered. For example, a network administrator may determine that information should be collected regarding a domain name, user computing device, and user computer's port number that is accessing that domain. A user computing device can identify different computing devices utilized by the same user (such as a laptop, smartphone, desktop, tablet, smartwatch, etc.). The user computing device can be identified in any manner, such as by host name, MAC address, user ID, etc.


Exemplary table 400 has rows 1 through F, with F being any number, for the hierarchical string “/domain name/computer/port” that is built from this information. Since the accumulating map 400 is an aggregation of flow information, F will be a much smaller value than N, the total number of flows from flow table 300.


Exemplary table 450 shows data being collected for a string of source IP address and destination IP address combinations. Thus, information regarding which IP addresses are communicating with each other is accumulated. Network appliance 110 can populate an accumulating map for any number of strings of information from flow table 300. In an exemplary embodiment, network appliance 110 populates multiple accumulating maps, each for a different string hierarchy of information from flow table 300. While FIG. 4A depicts only two string hierarchies, there can be fewer or additional strings of information collected in accumulating maps.


Row 410 in exemplary accumulating map 400 shows that during the time interval represented, sampledomain1 was accessed by computer1 from port 1. All of the flows where sampledomain1 was accessed by computer1 from port1 in flow table 300 are aggregated into a single row, row 410, in accumulating map 400. The network information 320 may be aggregated for the flows to depict a total number of bytes received and a total number of packets received from sampledomain1 accessed by computer1 via port1 during the time interval of flow table 300. In this way, a large number of flows may be condensed into a single row in accumulating map 400.
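
One way to picture how many flows collapse into a single row such as row 410 is a map keyed by the hierarchical string, with per-flow metrics summed into the existing entry. This is a minimal sketch assuming slash-delimited "/domain/computer/port" keys and a simple sum; other aggregation operators are equally possible.

```python
from collections import defaultdict

# Accumulating map: hierarchical string -> aggregated metrics (sums in this sketch).
acc_map = defaultdict(lambda: {"bytes": 0, "packets": 0, "flows": 0})

def accumulate(flow, acc_map):
    """Fold one flow record into the "/domain/computer/port" accumulating map."""
    key = "/{domain}/{computer}/{port}".format(**flow)  # hierarchical string
    entry = acc_map[key]
    entry["bytes"] += flow["bytes"]
    entry["packets"] += flow["packets"]
    entry["flows"] += 1  # optional count of flows folded into the row

# Two hypothetical flows with the same domain, computer, and port collapse into one row.
flows = [
    {"domain": "sampledomain1", "computer": "computer1", "port": 1, "bytes": 5400, "packets": 13},
    {"domain": "sampledomain1", "computer": "computer1", "port": 1, "bytes": 1200, "packets": 4},
]
for f in flows:
    accumulate(f, acc_map)
# acc_map["/sampledomain1/computer1/1"] -> {"bytes": 6600, "packets": 17, "flows": 2}
```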


As would be understood by a person of ordinary skill in the art, while accumulating map 400 depicts a total number of bytes received and a total number of packets received (also referred to herein as a network characteristic), any attribute can be collected and aggregated into accumulating map 400. For example, instead of a sum of bytes received, accumulating map 400 can track a maximum value, minimum value, median, percentile, or other numeric attribute for a string. Additionally, the network characteristic can be other characteristics besides number of packets or number of bytes. Loss, latency, re-ordering, and other characteristics, such as the number of flows that are aggregated into the row, can be tracked for a string in addition to, or instead of, packets and bytes. For example, packet loss and packet jitter can be measured using time stamps and serial numbers from the flow table. Additional information on measurement of network characteristics can be found in commonly owned U.S. Pat. No. 9,143,455, issued on Sep. 22, 2015 and entitled “Quality of Service Using Multiple Flows”, which is hereby incorporated by reference herein in its entirety.


Row 430 shows that the same computer (computer1) accessed the same domain name (sampledomain1), but from a different port (port2). Thus, all of the flows in flow table 300 from port2 of computer1 to sampledomain1 are aggregated into row 430. Similarly, accumulating map 400 can be populated with information from flow table 300 for any number of domains accessed by any number of computers from any number of ports, as shown in row 440.


Flow table 300 may comprise data for one time interval while accumulating map 400 can comprise data for a different time interval. For example, flow table 300 can comprise data for all flows through network appliance 110 over the course of a minute, while data from 60 minutes can all be aggregated into one accumulating map. Thus, if a user returns to the same website from the same computer from the same port within the same hour, even though this network traffic is on a different flow, the data can be combined with the previous flow information for the same parameters into the accumulating map. This significantly reduces the number of records that are maintained. All activity between a computer and a domain from a certain port is aggregated together as one record in the accumulating map, instead of multiple records per flow. This provides information in a compact manner for further processing, while also foregoing the maintenance of all details about more specific activities.


Exemplary accumulating map 450 depicts flow information for another string: source IP address and destination IP address combinations. In IPv4 addressing alone, there are four billion possibilities for source IP addresses and four billion possibilities for destination IP addresses. Maintaining a table of all possible combinations of these addresses would be unwieldy. Further, most combinations for a particular network appliance 110 would be zero. Thus, to maintain large volumes of data in a scalable way, the accumulating map 450 only collects information regarding IP addresses actually used as a source or destination, instead of every possible combination of IP addresses.


The accumulating map 450 can be indexed in different indexing structures, as would be understood by a person of ordinary skill in the art. For example, a hash table can be used where the key is the string and a hash of the string is computed to find a hash bin. In that bin is a list of strings and their associated values. Furthermore, there can be additional indexing to make operations (like finding smallest value) fast, as discussed herein. An accumulating map may comprise the contents of the table, such as that depicted in 400 and 450, and additionally one or more indexing structures and additional information related to the table. In some embodiments, only the table itself from the accumulating map may be transmitted to network information collector 180. In other embodiments, some or all of the additional information, such as indexing information, may be transmitted with the table.
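
A plain hash map already provides the bucket-by-hash lookup described above; the additional indexing for fast smallest-value lookups could, under one assumption, be a lazily maintained min-heap keyed by byte count. The sketch below illustrates that idea and is not the specific indexing structure of the disclosure.

```python
import heapq

class IndexedAccumulatingMap:
    """Accumulating map with a secondary min-index on byte count (illustrative)."""

    def __init__(self):
        self.table = {}   # hierarchical string -> {"bytes": ..., "packets": ...}
        self._heap = []   # (bytes, string) pairs; may hold stale entries

    def add(self, key, byte_count, packet_count):
        entry = self.table.setdefault(key, {"bytes": 0, "packets": 0})
        entry["bytes"] += byte_count
        entry["packets"] += packet_count
        heapq.heappush(self._heap, (entry["bytes"], key))  # lazy index update

    def smallest(self):
        """Return the live entry with the fewest bytes, skipping stale heap items."""
        while self._heap:
            byte_count, key = self._heap[0]
            if key in self.table and self.table[key]["bytes"] == byte_count:
                return key, self.table[key]
            heapq.heappop(self._heap)  # stale value; drop it and keep looking
        return None
```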


The information from an accumulating map can be collected from the network appliances and then stored in database(s) 130, which may be a relational database. The schema can use raw aggregated strings and corresponding values in columns of the database(s) 130, or separate columns can be used for each flow attribute of the string and its corresponding values. For example, port, computer, and domain name can all be separate columns in a relational database, rather than stored as one column for the string.


The reporting engine 140 allows a user 150 or network administrator to run a query and generate a report from information in accumulating maps that was stored in database(s) 130. For example, a user 150 can query which websites were visited by searching “/domain/*”. A user 150 can query the top traffic types by searching “/*/traffic type”. Multi-dimensional searches can also be run on data in database(s) 130. For example: who are the top talkers and which websites are they visiting? For the top destinations, who is going there? For the top websites, what are the top traffic types? A network administrator can configure the system to aggregate selected flow information based specifically on the most common types of queries that are run on network data. Further, multi-dimensional queries can be run on this aggregated information, even though the data is not stored in a multi-dimensional format (such as a cube).
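
The wildcard-style queries mentioned above can be approximated with simple pattern matching over the stored strings. The sketch below assumes slash-delimited strings held in a flat dictionary and uses byte counts to rank a hypothetical "top talkers" style report.

```python
import fnmatch

stored = {
    "/sampledomain1/computer1/1": {"bytes": 6600},
    "/sampledomain1/computer1/2": {"bytes": 900},
    "/sampledomain2/computer7/5": {"bytes": 48000},
}

def query(pattern, data):
    """Return entries whose hierarchical string matches a "/a/*"-style pattern."""
    return {k: v for k, v in data.items() if fnmatch.fnmatch(k, pattern)}

# Which strings involve sampledomain1?
print(query("/sampledomain1/*", stored))

# A simple "top talkers" style ranking: rows ordered by bytes, descending.
top = sorted(stored.items(), key=lambda kv: kv[1]["bytes"], reverse=True)[:2]
print(top)
```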


Further, by collecting flow information for a certain time interval in flow table 300 (e.g., once a minute), and aggregating selected flow information into one or more accumulating maps for a set time interval (e.g., once an hour) at the network appliance 110, only relevant flow information is gathered by network information collector 180 and maintained in database(s) 130. This allows for efficient scalability of a large number of network appliances in a WAN, since the amount of information collected and stored is significantly reduced, compared to simply collecting and storing all information about all flows through every network appliance for all time. Through an accumulating map, information can be aggregated by time, appliance, traffic type, IP address, website/domain, or any other attribute associated with a flow.


While the strings of an accumulating map are depicted herein with slashes, the information can be stored in an accumulating map in any format, such as other symbols or even no symbol at all. A string can be composed of binary records joined together to make a string, or normal ASCII text, Unicode text, or concatenations thereof. For example, row 410 can be represented as “sampledomain1, computer1, port1” or in any number of ways. Further, instead of delimiting a string by characters, it can be delimited by links and values. Information can also be sorted lexicographically.



FIG. 4B depicts exemplary information from a row of an accumulating map. A string is composed of an attribute value 412 (such as 1.2.3.4) of a first attribute 411 (such as source IP address), and an attribute value 414 (such as 5.6.7.8) of a second attribute 413 (such as destination IP address). For each string of information, there is an associated network characteristic 415 (such as number of bytes received) and its corresponding network metric 416 (such as 54) and there can optionally be a second network characteristic 417 (number of packets received) and its corresponding network metric 418 (such as 13). While two network characteristics are depicted here, there can be only one network characteristic or three or more network characteristics. Similarly, there can be fewer or additional attributes in a string. This information can also be stored as a binary key string 419 as depicted in the figure.
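
As a rough illustration of a binary key string such as 419 (and not the actual encoding used), two IPv4 attribute values could be packed into an 8-byte key, with the network metrics packed alongside:

```python
import socket
import struct

def make_binary_key(src_ip: str, dst_ip: str) -> bytes:
    """Pack source and destination IPv4 addresses into an 8-byte key (illustrative)."""
    return socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)

key = make_binary_key("1.2.3.4", "5.6.7.8")  # b'\x01\x02\x03\x04\x05\x06\x07\x08'
metrics = struct.pack("!QQ", 54, 13)         # bytes received, packets received
row = {key: metrics}                         # one accumulating-map row, keyed in binary
```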


Furthermore, while data is discussed herein as being applicable to a particular flow, a similar mechanism can be utilized to gather data for a tunnel, instead of just a flow. For example, a string of information comprising “/tunnelname/application/website” can be gathered in an accumulating map. In this way, information regarding which tunnel a flow goes into and which application is using that tunnel can be collected and stored. Data packets can be encapsulated into tunnel packets, and a single string may collect information regarding each of these packets as a way of tracking tunnel performance.


In various embodiments, an accumulating map, such as map 400, can have a maximum or target number of rows or records that can be maintained. Since one purpose of the accumulating map is to reduce the amount of flow information that is collected, transmitted, and stored, it can be advantageous to limit the size of the accumulating map. Once a defined number of records is reached, then an eviction policy can be applied to determine how new entries are processed. The eviction policy can be triggered upon reaching a maximum number of records, or upon reaching a lower target number of records.


In one eviction policy, any new strings of flow information that are not already in the accumulating map will simply be discarded for that time interval, until a new accumulating map is started for the next time interval.


In a second eviction policy, the strings of information that constitute overflow are summarized into a log file, called an eviction log. The eviction log can be post-processed and transmitted to the network information collector 180 at substantially the same time as information from the accumulating map. Alternatively, the eviction log may be consulted only at a later time when further detail is required.


In a third eviction policy, when a new string needs to be added to an accumulating map, then an existing record can be moved from the accumulating map into an eviction log to make space for the new string which is then added to the accumulating map. The determination of which existing record to purge from the accumulating map can be based on a metric. For example, the existing entry with the least number of bytes received can be evicted. In various embodiments there can also be a time parameter for this so that new strings have a chance to aggregate and build up before automatically being evicted for having the lowest number of bytes. That is, to avoid a situation where the newest entry is constantly evicted, a time parameter can be imposed to allow for any desired aggregation of flows for the string.
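
The third eviction policy might be sketched as follows: when the map is full and a new string arrives, evict the lowest-byte record to an eviction log, but only if that record has had some minimum time to accumulate. The grace-period handling below is an assumption about one reasonable way to apply the time parameter described above.

```python
import time

MAX_RECORDS = 4        # small for illustration; real maps hold thousands of records
MIN_AGE_SECONDS = 30   # assumed grace period before a record may be evicted

acc_map = {}           # hierarchical string -> {"bytes": int, "created": float}
eviction_log = []

def insert(key, byte_count, now=None):
    """Aggregate into an existing row, or make room for a new one per the policy."""
    now = time.time() if now is None else now
    if key in acc_map:
        acc_map[key]["bytes"] += byte_count
        return
    if len(acc_map) >= MAX_RECORDS:
        # Only records old enough to have had a chance to aggregate are candidates.
        old_enough = [k for k, v in acc_map.items() if now - v["created"] >= MIN_AGE_SECONDS]
        if not old_enough:
            eviction_log.append((key, byte_count))  # nothing evictable; log the newcomer
            return
        victim = min(old_enough, key=lambda k: acc_map[k]["bytes"])
        eviction_log.append((victim, acc_map.pop(victim)["bytes"]))
    acc_map[key] = {"bytes": byte_count, "created": now}
```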


In some embodiments, to find the existing entry with the least number of bytes to be evicted, the whole accumulating map can be scanned. In other embodiments, the accumulating map is already indexed (such as via a hash table) so it is already sorted and the lowest value can be easily found.


In further embodiments, information from an accumulating map can be stored in bins such as those depicted in FIG. 5A. In the exemplary embodiment of FIG. 5A, aggregated network metric values of a network characteristic are displayed, and bins are labeled with various numeric ranges, such as 0-10, 11-40 and 41-100. Each network metric is associated with the bin of its numeric range. Thus strings and their corresponding aggregated values can be placed in an indexing structure for the accumulating map in accordance with the metric value of their corresponding network characteristic. As a network metric increases (for example from new flows being aggregated into the string), or as a network metric decreases (for example from some strings being evicted), then the entry can be moved to a different bin in accordance with its new numeric range. In an exemplary embodiment, the table of an accumulating map is a first data structure, a bin is a second data structure, and sorting operations can be conducted in a third data structure.


Placing data from accumulating map 400 in bins allows for eviction to occur from the lowest value bin with data. Any record can be evicted from the lowest value bin with data, or the lowest value bin can be scanned to find the entry with the lowest network metric for eviction.


The bins can also be arranged in powers of two to cover bigger ranges of values. For example, bins can have ranges of 0-1, 2-3, 4-7, 8-15, 16-31, 32-63, 64-127 and so on. In this way, the information from the accumulating map does not need to be kept perfectly sorted by network metric, which can require a significant amount of indexing.
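
Power-of-two bins can be derived directly from the bit length of the metric value, which keeps entries approximately ordered without full sorting. A minimal sketch of that binning, using the bin boundaries listed above:

```python
from collections import defaultdict

def bin_index(metric: int) -> int:
    """Bin 0 holds 0-1, bin 1 holds 2-3, bin 2 holds 4-7, bin 3 holds 8-15, and so on."""
    return 0 if metric <= 1 else metric.bit_length() - 1

bins = defaultdict(set)  # bin index -> set of hierarchical strings

def place(key, metric, old_metric=None):
    """Move an accumulating-map entry to the bin matching its new metric value."""
    if old_metric is not None:
        bins[bin_index(old_metric)].discard(key)
    bins[bin_index(metric)].add(key)

place("/sampledomain1/computer1/1", 54)       # lands in bin 5 (range 32-63)
place("/sampledomain1/computer1/1", 70, 54)   # metric grows; moves to bin 6 (64-127)
```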


In another exemplary embodiment, space can be freed up in an accumulating map by combining multiple records that have common attributes. For example, in the accumulating map of FIG. 5B, there are two entries with the same domain and computer, but different port numbers. The data from these entries can be combined by keeping the domain and computer in the string, but removing the port numbers. In this way, two or more records in the accumulating map with common flow attributes can be aggregated into one record by removing the uncommon attributes from the record. The bytes received and packets received for the new condensed record are an aggregation of the previous separate records. In this way, some information may be lost from the accumulating map (through loss of some granularity), but the information lost is of least importance as defined by the hierarchy of attributes in the string (a lower level of the string is removed while a higher level of information is kept). Alternatively, of the two entries with the same domain and computer but different port numbers, the record with the lowest number of bytes may simply be evicted from the accumulating map and added to the eviction log. There can also be a time interval allotted to the record before it is evicted, to allow flow data to be aggregated for that string before eviction.
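
The combining step of FIG. 5B could be pictured as stripping the lowest-level attribute from the string and merging the metrics of any rows that then share a key. A minimal sketch, assuming slash-delimited strings whose last component is the port:

```python
from collections import defaultdict

def collapse_lowest_level(acc_map):
    """Merge rows that differ only in their last string component (e.g., the port)."""
    merged = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for key, metrics in acc_map.items():
        parent = key.rsplit("/", 1)[0]  # drop "/port2" etc., keep "/domain/computer"
        merged[parent]["bytes"] += metrics["bytes"]
        merged[parent]["packets"] += metrics["packets"]
    return dict(merged)

before = {
    "/sampledomain1/computer1/port1": {"bytes": 6600, "packets": 17},
    "/sampledomain1/computer1/port2": {"bytes": 900,  "packets": 3},
}
after = collapse_lowest_level(before)
# {"/sampledomain1/computer1": {"bytes": 7500, "packets": 20}}
```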


In a fourth eviction policy, a batch eviction can be conducted on the accumulating map to free up space. For example, a determination may be made of which records are the least useful, and then those are evicted from the accumulating map and logged in the eviction log. In an exemplary embodiment, an accumulating map may be capable of having 10,000 records. A batch eviction may remove 1,000 records at a time. However, any number of records can be moved in a batch eviction process, and an accumulating map size can be set to any number of records. A batch eviction can also remove one or more bins of information.
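
Batch eviction could be pictured as ranking the live rows by a usefulness metric and moving the bottom slice to the eviction log in one pass. In the sketch below, byte count stands in for "least useful," which is an assumption rather than a stated requirement.

```python
def batch_evict(acc_map, eviction_log, batch_size=1000):
    """Move the batch_size lowest-byte rows from the accumulating map to the eviction log."""
    victims = sorted(acc_map, key=lambda k: acc_map[k]["bytes"])[:batch_size]
    for key in victims:
        eviction_log.append((key, acc_map.pop(key)))
    return len(victims)
```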



FIG. 6 depicts an exemplary method for building a hierarchical string and aggregating the associated values, as discussed herein. In step 610, information about network traffic flows is collected at a network appliance. In step 620, an attribute value of a first flow attribute is extracted when the flow ends, or on a periodic basis. For example, if a flow attribute is source IP address, then the attribute value of the source IP address (such as 1.2.3.4) is extracted. An attribute value of a second flow attribute can also be extracted. There can be any number of flow attributes extracted from flow information. In step 630, at least one hierarchical string is built with the extracted attribute values. For example, the source IP address may be part of only one hierarchical string, or of multiple different hierarchical strings. Network metric(s) for the associated network characteristic(s) of the hierarchical string(s) are extracted in step 640, and the network metrics are aggregated for the different flows into an accumulating map record for each hierarchical string in step 650. For example, a string of “/source IP/destination IP” can be built from the various source and destination IP address combinations, with the aggregated network metrics of the network characteristic being the number of bytes exchanged between each source IP and destination IP combination.
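
Putting the steps of FIG. 6 together, a single pass over finished flows might look like the sketch below. The "/source IP/destination IP" string and the summed byte metric follow the example above; the dictionary-based flow records are an illustrative assumption.

```python
from collections import defaultdict

def aggregate_flows(flows):
    """Steps 620-650: extract attribute values, build strings, aggregate metrics."""
    acc_map = defaultdict(lambda: {"bytes": 0})
    for flow in flows:
        src = flow["src_ip"]                    # step 620: first flow attribute value
        dst = flow["dst_ip"]                    # step 620: second flow attribute value
        key = f"/{src}/{dst}"                   # step 630: hierarchical string
        acc_map[key]["bytes"] += flow["bytes"]  # steps 640-650: extract and aggregate metric
    return dict(acc_map)

flows = [
    {"src_ip": "1.2.3.4", "dst_ip": "5.6.7.8", "bytes": 54},
    {"src_ip": "1.2.3.4", "dst_ip": "5.6.7.8", "bytes": 30},
    {"src_ip": "9.9.9.9", "dst_ip": "5.6.7.8", "bytes": 12},
]
print(aggregate_flows(flows))
# {"/1.2.3.4/5.6.7.8": {"bytes": 84}, "/9.9.9.9/5.6.7.8": {"bytes": 12}}
```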


The aggregated information may be sent from each network appliance to the network information collector 180 as discussed herein. The information can be transmitted as raw data, or may be subjected to processing such as encryption, compression, or another type of processing. The network information collector 180 may initiate a request for the data from each network appliance, or the network appliance may send it automatically, such as on a periodic basis after the passage of a certain amount of time (for example, every minute, every 5 minutes, every hour, etc.).


While the method has been described in these discrete steps, various steps may occur in a different order, or concurrently. Further, this method may be practiced for each incoming flow or outgoing flow of a network appliance.


Thus, methods and systems for aggregating select network traffic statistics are disclosed. Although embodiments have been described with reference to specific examples, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Therefore, these and other variations upon the exemplary embodiments are intended to be covered by the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A method for aggregating select network traffic statistics for each of a plurality of network appliances connected in a communication network, the method comprising: for each flow from a first network appliance, extracting a first attribute value of a first flow attribute; for each flow from the first network appliance, extracting a second attribute value of a second flow attribute; building at least one hierarchical string with the extracted first attribute value and the extracted second attribute value, wherein the hierarchical string represents a subset of network traffic statistics collected for the first network appliance, and the second attribute of the hierarchical string is more specific than the first attribute of the hierarchical string; extracting at least one network metric for at least one network characteristic associated with the at least one hierarchical string; aggregating the at least one network metric for the at least one network characteristic over a plurality of flows to and from the first network appliance in the communication network; generating an accumulating map that is updated in substantially real time, the accumulating map comprising the at least one hierarchical string and associated aggregated network metrics for the first flow attribute and the second flow attribute of the hierarchical string, wherein the accumulating map has a target number of entries for a specified time period and an eviction policy determines how information is aggregated once the accumulating map reaches its target number of entries for the specified time period, the eviction policy determining that a record is aggregated into a higher level record of the accumulating map and is evicted from the accumulating map; and transmitting the accumulating map to a network information collector in communication with the plurality of network appliances.
  • 2. The method of claim 1, wherein information regarding each flow to or from a given network appliance is collected in a flow table.
  • 3. The method of claim 1, wherein the first and the second flow attributes are extracted at a first time interval.
  • 4. The method of claim 1, wherein the accumulating map is transmitted to the network information collector at a second time interval, the second time interval being a different amount of time than a first time interval.
  • 5. The method of claim 1, wherein a new accumulating map is started at the first network appliance after the aggregated information is transmitted to the network information collector.
  • 6. The method of claim 1, wherein the accumulating map comprises an eviction log for collected information in excess of the target number of entries for the specified time period, the eviction log comprising a summary of strings of information in excess of the target number of entries for the specified time period.
  • 7. The method of claim 1, wherein the eviction policy determines that once the target number of entries is reached for the specified time period, any new information collected will be discarded, and not aggregated during that time period.
  • 8. The method of claim 1, wherein the eviction policy further determines that an evicted record is moved to an eviction log when aggregated into a higher level record of the accumulating map.
  • 9. The method of claim 1, wherein the eviction policy determines that a portion of at least one hierarchical string of information is removed from the accumulating map to reduce the number of entries below a maximum number of entries for the specified time period.
  • 10. The method of claim 1, wherein the eviction policy removes a predetermined number of records from the accumulating map and moves them to an eviction log, when a maximum number of entries for the specified time period is reached.
  • 11. The method of claim 1, further comprising: in response to a query regarding network traffic from a user, displaying a portion of the information collected from each network appliance on a graphical user interface to the user.
  • 12. The method of claim 6, wherein the eviction log is post-processed to minimize information loss.
  • 13. The method of claim 1, wherein the aggregated information is stored in bins.
  • 14. The method of claim 1, further comprising: for each flow from the first network appliance, extracting a second network metric of the first flow attribute and its corresponding value.
  • 15. A system for aggregating select network traffic statistics, comprising: a plurality of network appliances in a communication network, each of the plurality of network appliances configured to: collect a plurality of flow attributes for network traffic through each network appliance; build at least one hierarchical string of network traffic flow attributes with an extracted first attribute value and an extracted second attribute value of the collected flow attributes, wherein the hierarchical string represents a subset of the collected flow attributes for the network appliance, and the second attribute of the hierarchical string is more specific than the first attribute of the hierarchical string; extract at least one network metric for at least one network characteristic associated with each of the at least one hierarchical string; aggregate the at least one network metric for the at least one network characteristic over a plurality of flows to or from the network appliance; generate an accumulating map that is updated in substantially real time, the accumulating map comprising the at least one hierarchical string and associated aggregated network metrics for a first flow attribute and a second flow attribute of the hierarchical string, wherein the accumulating map has a target number of entries for a specified time period and an eviction policy determines that a record is aggregated into a higher level record of the accumulating map and is evicted from the accumulating map when the accumulating map reaches the target number of entries for the specified time period; and transmit the accumulating map to a network information collector in communication with each network appliance; and the network information collector configured to receive information from each network appliance, and provide the information to a user on a graphical user display.
  • 16. The system of claim 15 wherein the network information collector is further configured to store the information in one or more databases.
  • 17. The system of claim 15 wherein each of the plurality of network appliances further generates at least one indexing data structure for the accumulating map.
  • 18. The system of claim 15, wherein the extracted first attribute value and the extracted second attribute value are extracted at a first time interval.
  • 19. The system of claim 15, wherein the accumulating map is transmitted to the network information collector at a second time interval, the second time interval being a different amount of time than a first time interval at which the extracted first attribute value and the extracted second attribute value are extracted.
  • 20. The system of claim 15, wherein the network appliance is further configured to: generate a new accumulating map, after a previous accumulating map is transmitted to the network information collector.
US Referenced Citations (515)
Number Name Date Kind
4494108 Langdon, Jr. et al. Jan 1985 A
4558302 Welch Dec 1985 A
4612532 Bacon et al. Sep 1986 A
5023611 Chamzas et al. Jun 1991 A
5159452 Kinoshita et al. Oct 1992 A
5243341 Seroussi et al. Sep 1993 A
5307413 Denzer Apr 1994 A
5357250 Healey et al. Oct 1994 A
5359720 Tamura et al. Oct 1994 A
5373290 Lempel et al. Dec 1994 A
5483556 Pillan et al. Jan 1996 A
5532693 Winters et al. Jul 1996 A
5592613 Miyazawa et al. Jan 1997 A
5602831 Gaskill Feb 1997 A
5608540 Ogawa Mar 1997 A
5611049 Pitts Mar 1997 A
5627533 Clark May 1997 A
5635932 Shinagawa et al. Jun 1997 A
5652581 Furlan et al. Jul 1997 A
5659737 Matsuda Aug 1997 A
5675587 Okuyama et al. Oct 1997 A
5710562 Gormish et al. Jan 1998 A
5748122 Shinagawa et al. May 1998 A
5754774 Bittinger et al. May 1998 A
5802106 Packer Sep 1998 A
5805822 Long et al. Sep 1998 A
5883891 Williams et al. Mar 1999 A
5903230 Masenas May 1999 A
5955976 Heath Sep 1999 A
6000053 Levine et al. Dec 1999 A
6003087 Housel, III et al. Dec 1999 A
6054943 Lawrence Apr 2000 A
6081883 Popelka et al. Jun 2000 A
6084855 Soirinsuo et al. Jul 2000 A
6175944 Urbanke et al. Jan 2001 B1
6191710 Waletzki Feb 2001 B1
6240463 Benmohamed et al. May 2001 B1
6295541 Bodnar et al. Sep 2001 B1
6308148 Bruins et al. Oct 2001 B1
6311260 Stone et al. Oct 2001 B1
6339616 Kovalev Jan 2002 B1
6374266 Shnelvar Apr 2002 B1
6434191 Agrawal et al. Aug 2002 B1
6434641 Haupt et al. Aug 2002 B1
6434662 Greene et al. Aug 2002 B1
6438664 McGrath et al. Aug 2002 B1
6452915 Jorgensen Sep 2002 B1
6463001 Williams Oct 2002 B1
6489902 Heath Dec 2002 B2
6493698 Beylin Dec 2002 B1
6570511 Cooper May 2003 B1
6587985 Fukushima et al. Jul 2003 B1
6614368 Cooper Sep 2003 B1
6618397 Huang Sep 2003 B1
6633953 Stark Oct 2003 B2
6643259 Borella et al. Nov 2003 B1
6650644 Colley et al. Nov 2003 B1
6653954 Rijavec Nov 2003 B2
6667700 McCanne et al. Dec 2003 B1
6674769 Viswanath Jan 2004 B1
6718361 Basani et al. Apr 2004 B1
6728840 Shatil et al. Apr 2004 B1
6738379 Balazinski et al. May 2004 B1
6754181 Elliott et al. Jun 2004 B1
6769048 Goldberg et al. Jul 2004 B2
6791945 Levenson et al. Sep 2004 B1
6842424 Key et al. Jan 2005 B1
6856651 Singh Feb 2005 B2
6859842 Nakamichi et al. Feb 2005 B1
6862602 Guha Mar 2005 B2
6910106 Sechrest et al. Jun 2005 B2
6963980 Mattsson Nov 2005 B1
6968374 Lemieux Nov 2005 B2
6978384 Milliken Dec 2005 B1
7007044 Rafert et al. Feb 2006 B1
7020750 Thiyagaranjan et al. Mar 2006 B2
7035214 Seddigh et al. Apr 2006 B1
7047281 Kausik May 2006 B1
7069268 Burns et al. Jun 2006 B1
7069342 Biederman Jun 2006 B1
7110407 Khanna Sep 2006 B1
7111005 Wessman Sep 2006 B1
7113962 Kee et al. Sep 2006 B1
7120666 McCanne et al. Oct 2006 B2
7145889 Zhang et al. Dec 2006 B1
7149953 Cameron et al. Dec 2006 B2
7177295 Sholander et al. Feb 2007 B1
7197597 Scheid et al. Mar 2007 B1
7200847 Straube et al. Apr 2007 B2
7215667 Davis May 2007 B1
7216283 Shen et al. May 2007 B2
7242681 Van Bokkelen et al. Jul 2007 B1
7243094 Tabellion et al. Jul 2007 B2
7266645 Garg et al. Sep 2007 B2
7278016 Detrick et al. Oct 2007 B1
7318100 Demmer et al. Jan 2008 B2
7366829 Luttrell et al. Apr 2008 B1
7380006 Srinivas et al. May 2008 B2
7383329 Erickson Jun 2008 B2
7383348 Seki et al. Jun 2008 B2
7388844 Brown et al. Jun 2008 B1
7389357 Duffie, III et al. Jun 2008 B2
7389393 Karr et al. Jun 2008 B1
7417570 Srinivasan et al. Aug 2008 B2
7417991 Crawford et al. Aug 2008 B1
7420992 Fang Sep 2008 B1
7428573 McCanne et al. Sep 2008 B2
7451237 Takekawa et al. Nov 2008 B2
7453379 Plamondon Nov 2008 B2
7454443 Ram et al. Nov 2008 B2
7457315 Smith Nov 2008 B1
7460473 Kodama et al. Dec 2008 B1
7471629 Melpignano Dec 2008 B2
7496659 Coverdill et al. Feb 2009 B1
7532134 Samuels et al. May 2009 B2
7555484 Kulkarni et al. Jun 2009 B2
7571343 Xiang et al. Aug 2009 B1
7571344 Hughes et al. Aug 2009 B2
7587401 Yeo et al. Sep 2009 B2
7596802 Border et al. Sep 2009 B2
7617436 Wenger et al. Nov 2009 B2
7619545 Samuels et al. Nov 2009 B2
7620870 Srinivasan et al. Nov 2009 B2
7624333 Langner Nov 2009 B2
7624446 Wilhelm Nov 2009 B1
7630295 Hughes et al. Dec 2009 B2
7633942 Bearden et al. Dec 2009 B2
7639700 Nabhan et al. Dec 2009 B1
7643426 Lee et al. Jan 2010 B1
7644230 Hughes et al. Jan 2010 B1
7676554 Malmskog et al. Mar 2010 B1
7698431 Hughes Apr 2010 B1
7702843 Chen et al. Apr 2010 B1
7714747 Fallon May 2010 B2
7746781 Xiang Jun 2010 B1
7764606 Ferguson et al. Jul 2010 B1
7810155 Ravi Oct 2010 B1
7826798 Stephens et al. Nov 2010 B2
7827237 Plamondon Nov 2010 B2
7849134 McCanne et al. Dec 2010 B2
7853699 Wu et al. Dec 2010 B2
7873786 Singh et al. Jan 2011 B1
7917599 Gopalan et al. Mar 2011 B1
7925711 Gopalan et al. Apr 2011 B1
7941606 Pullela et al. May 2011 B1
7945736 Hughes et al. May 2011 B2
7948921 Hughes et al. May 2011 B1
7953869 Demmer et al. May 2011 B2
7957307 Qiu et al. Jun 2011 B2
7970898 Clubb et al. Jun 2011 B2
7975018 Unrau et al. Jul 2011 B2
8069225 McCanne et al. Nov 2011 B2
8072985 Golan et al. Dec 2011 B2
8090027 Schneider Jan 2012 B2
8095774 Hughes et al. Jan 2012 B1
8140757 Singh et al. Mar 2012 B1
8171238 Hughes et al. May 2012 B1
8209334 Doerner Jun 2012 B1
8225072 Hughes et al. Jul 2012 B2
8271325 Silverman et al. Sep 2012 B2
8271847 Langner Sep 2012 B2
8307115 Hughes Nov 2012 B1
8312226 Hughes Nov 2012 B2
8352608 Keagy et al. Jan 2013 B1
8370583 Hughes Feb 2013 B2
8386797 Danilak Feb 2013 B1
8392684 Hughes Mar 2013 B2
8442052 Hughes May 2013 B1
8447740 Huang et al. May 2013 B1
8473714 Hughes et al. Jun 2013 B2
8489562 Hughes et al. Jul 2013 B1
8516158 Wu et al. Aug 2013 B1
8553757 Florencio et al. Oct 2013 B2
8565118 Shukla et al. Oct 2013 B2
8576816 Lamy-Bergot et al. Nov 2013 B2
8595314 Hughes Nov 2013 B1
8613071 Day et al. Dec 2013 B2
8681614 McCanne et al. Mar 2014 B1
8699490 Zheng et al. Apr 2014 B2
8700771 Ramankutty et al. Apr 2014 B1
8706947 Vincent Apr 2014 B1
8725988 Hughes et al. May 2014 B2
8732423 Hughes May 2014 B1
8738865 Hughes et al. May 2014 B1
8743683 Hughes Jun 2014 B1
8755381 Hughes et al. Jun 2014 B2
8775413 Brown et al. Jul 2014 B2
8811431 Hughes Aug 2014 B2
8843627 Baldi Sep 2014 B1
8850324 Clemm et al. Sep 2014 B2
8885632 Hughes et al. Nov 2014 B2
8891554 Biehler Nov 2014 B2
8929380 Hughes et al. Jan 2015 B1
8929402 Hughes Jan 2015 B1
8930650 Hughes et al. Jan 2015 B1
9003541 Patidar Apr 2015 B1
9036662 Hughes May 2015 B1
9054876 Yagnik Jun 2015 B1
9092342 Hughes et al. Jul 2015 B2
9106530 Wang Aug 2015 B1
9130991 Hughes Sep 2015 B2
9131510 Wang Sep 2015 B2
9143455 Hughes Sep 2015 B1
9152574 Hughes et al. Oct 2015 B2
9171251 Camp et al. Oct 2015 B2
9191342 Hughes et al. Nov 2015 B2
9202304 Baenziger et al. Dec 2015 B1
9253277 Hughes et al. Feb 2016 B2
9306818 Aumann et al. Apr 2016 B2
9307442 Bachmann et al. Apr 2016 B2
9363248 Hughes Jun 2016 B1
9363309 Hughes Jun 2016 B2
9380094 Florencio et al. Jun 2016 B2
9397951 Hughes Jul 2016 B1
9438538 Hughes et al. Sep 2016 B2
9549048 Hughes Jan 2017 B1
9584403 Hughes et al. Feb 2017 B2
9584414 Sung et al. Feb 2017 B2
9613071 Hughes Apr 2017 B1
9626224 Hughes et al. Apr 2017 B2
9647949 Varki et al. May 2017 B2
9712463 Hughes et al. Jul 2017 B1
9716644 Wei et al. Jul 2017 B2
9717021 Hughes et al. Jul 2017 B2
9875344 Hughes et al. Jan 2018 B1
9906630 Hughes Feb 2018 B2
9948496 Hughes et al. Apr 2018 B1
9961010 Hughes et al. May 2018 B2
9967056 Hughes May 2018 B1
10091172 Hughes Oct 2018 B1
10164861 Hughes et al. Dec 2018 B2
20010026231 Satoh Oct 2001 A1
20010054084 Kosmynin Dec 2001 A1
20020007413 Garcia-Luna-Aceves et al. Jan 2002 A1
20020009079 Jungck et al. Jan 2002 A1
20020010702 Ajtai et al. Jan 2002 A1
20020010765 Border Jan 2002 A1
20020040475 Yap et al. Apr 2002 A1
20020061027 Abiru et al. May 2002 A1
20020065998 Buckland May 2002 A1
20020071436 Border et al. Jun 2002 A1
20020078242 Viswanath Jun 2002 A1
20020101822 Ayyagari et al. Aug 2002 A1
20020107988 Jordan Aug 2002 A1
20020116424 Radermacher et al. Aug 2002 A1
20020129158 Zhang et al. Sep 2002 A1
20020129260 Benfield et al. Sep 2002 A1
20020131434 Vukovic et al. Sep 2002 A1
20020150041 Reinshmidt et al. Oct 2002 A1
20020159454 Delmas Oct 2002 A1
20020163911 Wee et al. Nov 2002 A1
20020169818 Stewart et al. Nov 2002 A1
20020181494 Rhee Dec 2002 A1
20020188871 Noehring et al. Dec 2002 A1
20020194324 Guha Dec 2002 A1
20030002664 Anand Jan 2003 A1
20030009558 Ben-Yehezkel Jan 2003 A1
20030012400 McAuliffe et al. Jan 2003 A1
20030033307 Davis et al. Feb 2003 A1
20030046572 Newman et al. Mar 2003 A1
20030048750 Kobayashi Mar 2003 A1
20030067940 Edholm Apr 2003 A1
20030123481 Neale et al. Jul 2003 A1
20030123671 He et al. Jul 2003 A1
20030131079 Neale et al. Jul 2003 A1
20030133568 Stein et al. Jul 2003 A1
20030142658 Ofuji et al. Jul 2003 A1
20030149661 Mitchell et al. Aug 2003 A1
20030149869 Gleichauf Aug 2003 A1
20030204619 Bays Oct 2003 A1
20030214502 Park et al. Nov 2003 A1
20030214954 Oldak et al. Nov 2003 A1
20030233431 Reddy et al. Dec 2003 A1
20040008711 Lahti et al. Jan 2004 A1
20040047308 Kavanagh et al. Mar 2004 A1
20040083299 Dietz et al. Apr 2004 A1
20040086114 Rarick May 2004 A1
20040088376 McCanne et al. May 2004 A1
20040114569 Naden et al. Jun 2004 A1
20040117571 Chang et al. Jun 2004 A1
20040123139 Aiello et al. Jun 2004 A1
20040158644 Albuquerque et al. Aug 2004 A1
20040179542 Murakami et al. Sep 2004 A1
20040181679 Dettinger et al. Sep 2004 A1
20040199771 Morten et al. Oct 2004 A1
20040202110 Kim Oct 2004 A1
20040203820 Billhartz Oct 2004 A1
20040205332 Bouchard et al. Oct 2004 A1
20040243571 Judd Dec 2004 A1
20040250027 Heflinger Dec 2004 A1
20040255048 Lev Ran et al. Dec 2004 A1
20050010653 McCanne Jan 2005 A1
20050044270 Grove et al. Feb 2005 A1
20050053094 Cain et al. Mar 2005 A1
20050055372 Springer, Jr. et al. Mar 2005 A1
20050055399 Savchuk Mar 2005 A1
20050071453 Ellis et al. Mar 2005 A1
20050091234 Hsu et al. Apr 2005 A1
20050111460 Sahita May 2005 A1
20050131939 Douglis et al. Jun 2005 A1
20050132252 Fifer et al. Jun 2005 A1
20050141425 Foulds Jun 2005 A1
20050171937 Hughes et al. Aug 2005 A1
20050177603 Shavit Aug 2005 A1
20050182849 Chandrayana et al. Aug 2005 A1
20050190694 Ben-Nun et al. Sep 2005 A1
20050207443 Kawamura et al. Sep 2005 A1
20050210151 Abdo et al. Sep 2005 A1
20050220019 Melpignano Oct 2005 A1
20050220097 Swami et al. Oct 2005 A1
20050235119 Sechrest et al. Oct 2005 A1
20050240380 Jones Oct 2005 A1
20050243743 Kimura Nov 2005 A1
20050243835 Sharma et al. Nov 2005 A1
20050256972 Cochran et al. Nov 2005 A1
20050278459 Boucher et al. Dec 2005 A1
20050283355 Itani et al. Dec 2005 A1
20050286526 Sood et al. Dec 2005 A1
20060013210 Bordogna et al. Jan 2006 A1
20060026425 Douceur et al. Feb 2006 A1
20060031936 Nelson et al. Feb 2006 A1
20060036901 Yang et al. Feb 2006 A1
20060039354 Rao et al. Feb 2006 A1
20060045096 Farmer et al. Mar 2006 A1
20060059171 Borthakur et al. Mar 2006 A1
20060059173 Hirsch et al. Mar 2006 A1
20060117385 Mester et al. Jun 2006 A1
20060136913 Sameske Jun 2006 A1
20060143497 Zohar et al. Jun 2006 A1
20060193247 Naseh et al. Aug 2006 A1
20060195547 Sundarrajan et al. Aug 2006 A1
20060195840 Sundarrajan et al. Aug 2006 A1
20060212426 Shakara et al. Sep 2006 A1
20060218390 Loughran et al. Sep 2006 A1
20060227717 van den Berg et al. Oct 2006 A1
20060250965 Irwin Nov 2006 A1
20060268932 Singh et al. Nov 2006 A1
20060280205 Cho Dec 2006 A1
20070002804 Xiong et al. Jan 2007 A1
20070008884 Tang Jan 2007 A1
20070011424 Sharma et al. Jan 2007 A1
20070038815 Hughes Feb 2007 A1
20070038816 Hughes et al. Feb 2007 A1
20070038858 Hughes Feb 2007 A1
20070050475 Hughes Mar 2007 A1
20070076693 Krishnaswamy Apr 2007 A1
20070081513 Torsner Apr 2007 A1
20070097874 Hughes et al. May 2007 A1
20070110046 Farrell et al. May 2007 A1
20070115812 Hughes May 2007 A1
20070127372 Khan et al. Jun 2007 A1
20070130114 Li et al. Jun 2007 A1
20070140129 Bauer et al. Jun 2007 A1
20070150497 De La Cruz et al. Jun 2007 A1
20070174428 Lev Ran et al. Jul 2007 A1
20070179900 Daase et al. Aug 2007 A1
20070192863 Kapoor et al. Aug 2007 A1
20070195702 Yuen et al. Aug 2007 A1
20070195789 Yao Aug 2007 A1
20070198523 Hayim Aug 2007 A1
20070226320 Hager et al. Sep 2007 A1
20070237104 Alon et al. Oct 2007 A1
20070244987 Pedersen et al. Oct 2007 A1
20070245079 Bhattacharjee et al. Oct 2007 A1
20070248084 Whitehead Oct 2007 A1
20070258468 Bennett Nov 2007 A1
20070263554 Finn Nov 2007 A1
20070276983 Zohar et al. Nov 2007 A1
20070280245 Rosberg Dec 2007 A1
20080005156 Edwards et al. Jan 2008 A1
20080013532 Garner et al. Jan 2008 A1
20080016301 Chen Jan 2008 A1
20080028467 Kommareddy Jan 2008 A1
20080031149 Hughes et al. Feb 2008 A1
20080031240 Hughes et al. Feb 2008 A1
20080071818 Apanowicz et al. Mar 2008 A1
20080095060 Yao Apr 2008 A1
20080133536 Bjorner et al. Jun 2008 A1
20080133561 Dubnicki et al. Jun 2008 A1
20080184081 Hama et al. Jul 2008 A1
20080205445 Kumar et al. Aug 2008 A1
20080222044 Gottlieb et al. Sep 2008 A1
20080229137 Samuels et al. Sep 2008 A1
20080243992 Jardetzky et al. Oct 2008 A1
20080267217 Colville et al. Oct 2008 A1
20080300887 Chen et al. Dec 2008 A1
20080313318 Vermeulen et al. Dec 2008 A1
20080320151 McCanne et al. Dec 2008 A1
20090006801 Shultz et al. Jan 2009 A1
20090024763 Stepin et al. Jan 2009 A1
20090037448 Thomas Feb 2009 A1
20090060198 Little Mar 2009 A1
20090063696 Wang et al. Mar 2009 A1
20090080460 Kronewitter et al. Mar 2009 A1
20090089048 Pouzin Apr 2009 A1
20090092137 Haigh et al. Apr 2009 A1
20090100483 McDowell Apr 2009 A1
20090158417 Khanna et al. Jun 2009 A1
20090168786 Sarkar Jul 2009 A1
20090175172 Prytz et al. Jul 2009 A1
20090182864 Khan et al. Jul 2009 A1
20090204961 DeHaan et al. Aug 2009 A1
20090234966 Samuels et al. Sep 2009 A1
20090245114 Vijayaraghavan Oct 2009 A1
20090265707 Goodman et al. Oct 2009 A1
20090274294 Itani Nov 2009 A1
20090279550 Romrell et al. Nov 2009 A1
20090281984 Black Nov 2009 A1
20100005222 Brant et al. Jan 2010 A1
20100011125 Yang et al. Jan 2010 A1
20100020693 Thakur Jan 2010 A1
20100054142 Moiso et al. Mar 2010 A1
20100070605 Hughes et al. Mar 2010 A1
20100077251 Liu et al. Mar 2010 A1
20100082545 Bhattacharjee et al. Apr 2010 A1
20100085964 Weir et al. Apr 2010 A1
20100115137 Kim et al. May 2010 A1
20100121957 Roy et al. May 2010 A1
20100124239 Hughes May 2010 A1
20100131957 Kami May 2010 A1
20100150158 Cathey et al. Jun 2010 A1
20100169467 Shukla et al. Jul 2010 A1
20100177663 Johansson et al. Jul 2010 A1
20100225658 Coleman Sep 2010 A1
20100232443 Pandey Sep 2010 A1
20100242106 Harris et al. Sep 2010 A1
20100246584 Ferguson et al. Sep 2010 A1
20100290364 Black Nov 2010 A1
20100318892 Teevan et al. Dec 2010 A1
20100333212 Carpenter et al. Dec 2010 A1
20110002346 Wu Jan 2011 A1
20110022812 van der Linden et al. Jan 2011 A1
20110113472 Fung et al. May 2011 A1
20110154169 Gopal et al. Jun 2011 A1
20110154329 Arcese et al. Jun 2011 A1
20110181448 Koratagere Jul 2011 A1
20110219181 Hughes et al. Sep 2011 A1
20110225322 Demidov et al. Sep 2011 A1
20110258049 Ramer et al. Oct 2011 A1
20110261828 Smith Oct 2011 A1
20110276963 Wu et al. Nov 2011 A1
20110299537 Saraiya et al. Dec 2011 A1
20120005549 Ichiki et al. Jan 2012 A1
20120036325 Mashtizadeh et al. Feb 2012 A1
20120069131 Abelow Mar 2012 A1
20120147894 Mulligan et al. Jun 2012 A1
20120173759 Agarwal et al. Jul 2012 A1
20120218130 Boettcher et al. Aug 2012 A1
20120221611 Watanabe et al. Aug 2012 A1
20120230345 Ovsiannikov Sep 2012 A1
20120239872 Hughes et al. Sep 2012 A1
20120290636 Kadous Nov 2012 A1
20130018722 Libby Jan 2013 A1
20130018765 Fork et al. Jan 2013 A1
20130031642 Dwivedi et al. Jan 2013 A1
20130044751 Casado et al. Feb 2013 A1
20130058354 Casado et al. Mar 2013 A1
20130080619 Assuncao et al. Mar 2013 A1
20130083806 Suarez Fuentes et al. Apr 2013 A1
20130086236 Baucke et al. Apr 2013 A1
20130094501 Hughes Apr 2013 A1
20130103655 Fanghaenel et al. Apr 2013 A1
20130117494 Hughes et al. May 2013 A1
20130121209 Padmanabhan et al. May 2013 A1
20130141259 Hazarika et al. Jun 2013 A1
20130142050 Luna Jun 2013 A1
20130163594 Sharma et al. Jun 2013 A1
20130250951 Koganti Sep 2013 A1
20130263125 Shamsee et al. Oct 2013 A1
20130282970 Hughes et al. Oct 2013 A1
20130325986 Brady et al. Dec 2013 A1
20130343191 Kim et al. Dec 2013 A1
20140052864 Van Der Linden et al. Feb 2014 A1
20140075554 Cooley Mar 2014 A1
20140086069 Frey Mar 2014 A1
20140101426 Senthurpandi Apr 2014 A1
20140108360 Kunath et al. Apr 2014 A1
20140114742 Lamontagne et al. Apr 2014 A1
20140123213 Vank et al. May 2014 A1
20140181381 Hughes et al. Jun 2014 A1
20140269705 DeCusatis et al. Sep 2014 A1
20140279078 Nukala et al. Sep 2014 A1
20140321290 Jin et al. Oct 2014 A1
20140379937 Hughes et al. Dec 2014 A1
20150058488 Backholm Feb 2015 A1
20150074291 Hughes Mar 2015 A1
20150074361 Hughes et al. Mar 2015 A1
20150078397 Hughes et al. Mar 2015 A1
20150110113 Levy et al. Apr 2015 A1
20150120663 Le Scouarnec et al. Apr 2015 A1
20150143505 Border et al. May 2015 A1
20150170221 Shah Jun 2015 A1
20150281099 Banavalikar Oct 2015 A1
20150281391 Hughes et al. Oct 2015 A1
20150312054 Barabash Oct 2015 A1
20150334210 Hughes Nov 2015 A1
20150365293 Madrigal et al. Dec 2015 A1
20160014051 Hughes et al. Jan 2016 A1
20160034305 Shear et al. Feb 2016 A1
20160093193 Silvers et al. Mar 2016 A1
20160218947 Hughes et al. Jul 2016 A1
20160255000 Gattani et al. Sep 2016 A1
20160255542 Hughes et al. Sep 2016 A1
20160380886 Blair et al. Dec 2016 A1
20170026467 Barsness Jan 2017 A1
20170111692 An et al. Apr 2017 A1
20170149679 Hughes et al. May 2017 A1
20170187581 Hughes et al. Jun 2017 A1
20180089994 Dhondse et al. Mar 2018 A1
20180121634 Hughes et al. May 2018 A1
20180123861 Hughes et al. May 2018 A1
20180131711 Chen et al. May 2018 A1
20180205494 Hughes Jul 2018 A1
20180227216 Hughes Aug 2018 A1
20180227223 Hughes Aug 2018 A1
Foreign Referenced Citations (3)
Number Date Country
1507353 Feb 2005 EP
H05061964 Feb 2005 JP
WO0135226 May 2001 WO
Non-Patent Literature Citations (240)
Entry
Request for Trial Granted, dated Jan. 2, 2014, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Notice of Allowance, dated Oct. 23, 2012, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Decision on Appeal, dated Sep. 17, 2012, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Examiner's Answer to Appeal Brief, dated Oct. 27, 2009, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Final Office Action, dated Jan. 12, 2009, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Non-Final Office Action, dated Jul. 17, 2008, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Final Office Action, dated Feb. 22, 2008, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Non-Final Office Action, dated Aug. 24, 2007, U.S. Appl. No. 11/202,697, filed Aug. 12, 2005.
Request for Trial Granted, dated Jan. 2, 2014, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Notice of Allowance, dated Aug. 30, 2012, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Decision on Appeal, dated Jun. 28, 2012, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Examiner's Answer to Appeal Brief, dated Oct. 27, 2009, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Final Office Action, dated Jan. 5, 2009, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Non-Final Office Action, dated Jul. 10, 2008, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Final Office Action, dated Jan. 22, 2008, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Non-Final Office Action, dated Aug. 24, 2007, U.S. Appl. No. 11/240,110, filed Sep. 29, 2005.
Notice of Allowance, dated Apr. 28, 2009, U.S. Appl. No. 11/357,657, filed Feb. 16, 2006.
Non-Final Office Action, dated Sep. 17, 2008, U.S. Appl. No. 11/357,657, filed Feb. 16, 2006.
Notice of Allowance, dated Sep. 8, 2009, U.S. Appl. No. 11/263,755, filed Oct. 31, 2005.
Final Office Action, dated May 11, 2009, U.S. Appl. No. 11/263,755, filed Oct. 31, 2005.
Non-Final Office Action, dated Nov. 17, 2008, U.S. Appl. No. 11/263,755, filed Oct. 31, 2005.
Non-Final Office Action, dated Jul. 18, 2011, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Final Office Action, dated Mar. 30, 2011, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Oct. 13, 2010, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Mar. 22, 2010, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Oct. 20, 2009, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Mar. 24, 2009, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Non-Final Office Action, dated Sep. 26, 2008, U.S. Appl. No. 11/285,816, filed Nov. 22, 2005.
Notice of Allowance, dated Feb. 14, 2014, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Jul. 10, 2013, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Feb. 4, 2013, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Sep. 13, 2012, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Mar. 16, 2012, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Dec. 20, 2011, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Aug. 12, 2011, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Dec. 6, 2010, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Advisory Action, dated Oct. 2, 2009, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Final Office Action, dated Aug. 7, 2009, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Non-Final Office Action, dated Jan. 22, 2009, U.S. Appl. No. 11/498,473, filed Aug. 2, 2006.
Notice of Allowance, dated Jun. 10, 2014, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Mar. 25, 2014, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Oct. 9, 2013, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Advisory Action, dated Jul. 16, 2013, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Apr. 15, 2013, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Sep. 25, 2012, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Advisory Action, dated Nov. 25, 2011, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Aug. 17, 2011, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Jan. 4, 2011, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Jul. 13, 2010, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Feb. 2, 2010, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Final Office Action, dated Sep. 1, 2009, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Non-Final Office Action, dated Jan. 26, 2009, U.S. Appl. No. 11/498,491, filed Aug. 2, 2006.
Notice of Allowance, dated Aug. 31, 2009, U.S. Appl. No. 11/724,800, filed Mar. 15, 2007.
Request for Trial Granted, dated Jun. 11, 2014, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Notice of Allowance, dated Dec. 26, 2012, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Decision on Appeal, dated Nov. 14, 2012, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Examiner's Answer to Appeal Brief, dated Oct. 14, 2009, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Final Office Action, dated Dec. 31, 2008, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Non-Final Office Action, dated Jul. 8, 2008, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Final Office Action, dated Jan. 9, 2008, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Non-Final Office Action, dated Aug. 24, 2007, U.S. Appl. No. 11/497,026, filed Jul. 31, 2006.
Notice of Allowance, dated Dec. 3, 2009, U.S. Appl. No. 11/796,239, filed Apr. 27, 2007.
Non-Final Office Action, dated Jun. 22, 2009, U.S. Appl. No. 11/796,239, filed Apr. 27, 2007.
Notice of Allowance, dated Jan. 16, 2014, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Aug. 14, 2013, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Advisory Action, dated Jan. 29, 2013, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated Nov. 20, 2012, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Jul. 18, 2012, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Advisory Action, dated Jul. 2, 2012, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated Apr. 18, 2012, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Sep. 22, 2011, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated Feb. 3, 2011, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Oct. 7, 2010, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Final Office Action, dated May 14, 2010, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Non-Final Office Action, dated Jan. 6, 2010, U.S. Appl. No. 12/217,440, filed Jul. 3, 2008.
Notice of Allowance, dated Feb. 29, 2012, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Non-Final Office Action, dated Dec. 30, 2011, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Final Office Action, dated Sep. 30, 2011, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Non-Final Office Action, dated May 13, 2011, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Final Office Action, dated Oct. 12, 2010, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Non-Final Office Action, dated May 24, 2010, U.S. Appl. No. 11/825,440, filed Jul. 5, 2007.
Notice of Allowance, dated Nov. 12, 2011, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Notice of Allowance, dated Apr. 21, 2011, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Final Office Action, dated Nov. 4, 2010, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Non-Final Office Action, dated Jun. 18, 2010, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Non-Final Office Action, dated Dec. 9, 2009, U.S. Appl. No. 11/825,497, filed Jul. 5, 2007.
Notice of Allowance, dated Feb. 11, 2011, U.S. Appl. No. 11/903,416, filed Sep. 20, 2007.
Final Office Action, dated May 5, 2010, U.S. Appl. No. 11/903,416, filed Sep. 20, 2007.
Non-Final Office Action, dated Jan. 26, 2010, U.S. Appl. No. 11/903,416, filed Sep. 20, 2007.
Notice of Allowance, dated May 14, 2013, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Non-Final Office Action, dated Nov. 6, 2012, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Apr. 23, 2012, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Non-Final Office Action, dated Dec. 1, 2011, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Oct. 13, 2011, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Advisory Action, dated May 23, 2011, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Nov. 9, 2010, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Final Office Action, dated Jul. 22, 2010, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Non-Final Office Action, dated Feb. 3, 2010, U.S. Appl. No. 11/998,726, filed Nov. 30, 2007.
Notice of Allowance, dated Mar. 21, 2013, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Final Office Action, dated Feb. 1, 2013, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Non-Final Office Action, dated Aug. 28, 2012, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Final Office Action, dated Feb. 10, 2012, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Non-Final Office Action, dated Jul. 7, 2011, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Final Office Action, dated Dec. 8, 2010, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Non-Final Office Action, dated Jul. 21, 2010, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Non-Final Office Action, dated Feb. 4, 2010, U.S. Appl. No. 12/070,796, filed Feb. 20, 2008.
Notice of Allowance, dated Mar. 16, 2012, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Final Office Action, dated Oct. 12, 2011, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Non-Final Office Action, dated Feb. 3, 2011, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Final Office Action, dated Sep. 23, 2010, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Non-Final Office Action, dated Jun. 14, 2010, U.S. Appl. No. 12/151,839, filed May 8, 2008.
Notice of Allowance, dated Apr. 14, 2014, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Final Office Action, dated Jan. 14, 2014, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Non-Final Office Action, dated Jul. 1, 2013, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Advisory Action, dated Aug. 20, 2012, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Final Office Action, dated May 25, 2012, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Non-Final Office Action, dated Oct. 4, 2011, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Non-Final Office Action, dated Mar. 8, 2011, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Non-Final Office Action, dated Aug. 12, 2010, U.S. Appl. No. 12/313,618, filed Nov. 20, 2008.
Notice of Allowance, dated Jan. 20, 2011, U.S. Appl. No. 12/622,324, filed Nov. 19, 2009.
Notice of Allowance, dated Dec. 9, 2010, U.S. Appl. No. 12/622,324, filed Nov. 19, 2009.
Non-Final Office Action, dated Jun. 17, 2010, U.S. Appl. No. 12/622,324, filed Nov. 19, 2009.
Notice of Allowance, dated Mar. 26, 2012, U.S. Appl. No. 13/112,936, filed May 20, 2011.
Final Office Action, dated Feb. 22, 2012, U.S. Appl. No. 13/112,936, filed May 20, 2011.
Non-Final Office Action, dated Sep. 27, 2011, U.S. Appl. No. 13/112,936, filed May 20, 2011.
Advisory Action, dated Dec. 3, 2013, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Sep. 26, 2013, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Non-Final Office Action, dated May 20, 2013, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Non-Final Office Action, dated Jun. 13, 2014, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Dec. 18, 2014, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Advisory Action, dated Mar. 5, 2015, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Non-Final Office Action, dated Jun. 2, 2015, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Jan. 11, 2016, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Final Office Action, dated Apr. 1, 2014, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Non-Final Office Action, dated Oct. 22, 2013, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Advisory Action, dated Jun. 27, 2014, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Non-Final Office Action, dated Jul. 30, 2014, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Final Office Action, dated Jan. 12, 2015, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Advisory Action, dated Mar. 25, 2015, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Notice of Allowance, dated May 21, 2015, U.S. Appl. No. 13/274,162, filed Oct. 14, 2011.
Notice of Allowance, dated Jan. 2, 2014, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Advisory Action, dated Sep. 27, 2013, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Final Office Action, dated Jul. 17, 2013, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Non-Final Office Action, dated Apr. 2, 2013, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Advisory Action, dated Jan. 24, 2013, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Final Office Action, dated Nov. 2, 2012, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Non-Final Office Action, dated Jul. 5, 2012, U.S. Appl. No. 13/427,422, filed Mar. 22, 2012.
Notice of Allowance, dated Feb. 19, 2013, U.S. Appl. No. 13/482,321, filed May 29, 2012.
Non-Final Office Action, dated Jan. 3, 2013, U.S. Appl. No. 13/482,321, filed May 29, 2012.
Notice of Allowance, dated Sep. 26, 2013, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Advisory Action, dated Apr. 4, 2013, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Final Office Action, dated Jan. 11, 2013, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Non-Final Office Action, dated Sep. 20, 2012, U.S. Appl. No. 13/517,575, filed Jun. 13, 2012.
Notice of Allowance, dated Sep. 12, 2014, U.S. Appl. No. 13/657,733, filed Oct. 22, 2012.
Supplemental Notice of Allowability, dated Oct. 9, 2014, U.S. Appl. No. 13/657,733, filed Oct. 22, 2012.
Notice of Allowance, dated Jan. 3, 2014, U.S. Appl. No. 13/757,548, filed Feb. 1, 2013.
Non-Final Office Action, dated Sep. 10, 2013, U.S. Appl. No. 13/757,548, filed Feb. 1, 2013.
Notice of Allowance, dated Nov. 25, 2013, U.S. Appl. No. 13/917,517, filed Jun. 13, 2013.
Non-Final Office Action, dated Aug. 14, 2013, U.S. Appl. No. 13/917,517, filed Jun. 13, 2013.
Non-Final Office Action, dated Jun. 6, 2014, U.S. Appl. No. 14/190,940, filed Feb. 26, 2014.
Non-Final Office Action, dated Oct. 1, 2014, U.S. Appl. No. 14/190,940, filed Feb. 26, 2014.
Notice of Allowance, dated Mar. 16, 2015, U.S. Appl. No. 14/190,940, filed Feb. 26, 2014.
Notice of Allowance, dated Sep. 5, 2014, U.S. Appl. No. 14/248,229, filed Apr. 8, 2014.
Non-Final Office Action, dated Jun. 8, 2015, U.S. Appl. No. 14/248,167, filed Apr. 8, 2014.
Non-Final Office Action, dated Jul. 11, 2014, U.S. Appl. No. 14/248,188, filed Apr. 8, 2014.
Notice of Allowance, dated Jan. 23, 2015, U.S. Appl. No. 14/248,188, filed Apr. 8, 2014.
Corrected Notice of Allowability, dated Aug. 5, 2015, U.S. Appl. No. 14/248,188, filed Apr. 8, 2014.
Notice of Allowance, dated Oct. 6, 2014, U.S. Appl. No. 14/270,101, filed May 5, 2014.
Non-Final Office Action, dated Nov. 26, 2014, U.S. Appl. No. 14/333,486, filed Jul. 16, 2014.
Notice of Allowance, dated Dec. 22, 2014, U.S. Appl. No. 14/333,486, filed Jul. 16, 2014.
Non-Final Office Action, dated Dec. 31, 2014, U.S. Appl. No. 13/621,534, filed Sep. 17, 2012.
Non-Final Office Action, dated Jan. 23, 2015, U.S. Appl. No. 14/548,195, filed Nov. 19, 2014.
Notice of Allowance, dated Jun. 3, 2015, U.S. Appl. No. 14/548,195, filed Nov. 19, 2014.
Non-Final Office Action, dated Mar. 11, 2015, U.S. Appl. No. 14/549,425, filed Nov. 20, 2014.
Notice of Allowance, dated Jul. 27, 2015, U.S. Appl. No. 14/549,425, filed Nov. 20, 2014.
Non-Final Office Action, dated May 6, 2015, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Final Office Action, dated Sep. 18, 2015, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Non-Final Office Action, dated May 18, 2015, U.S. Appl. No. 14/679,965, filed Apr. 6, 2015.
Final Office Action, dated Dec. 21, 2015, U.S. Appl. No. 14/679,965, filed Apr. 6, 2015.
Final Office Action, dated Jul. 14, 2015, U.S. Appl. No. 13/482,321, filed May 29, 2012.
Non-Final Office Action, dated Jul. 15, 2015, U.S. Appl. No. 14/734,949, filed Jun. 9, 2015.
Non-Final Office Action, dated Aug. 11, 2015, U.S. Appl. No. 14/677,841, filed Apr. 2, 2015.
Non-Final Office Action, dated Aug. 18, 2015, U.S. Appl. No. 14/543,781, filed Nov. 17, 2014.
Notice of Allowance, dated Oct. 5, 2015, U.S. Appl. No. 14/734,949, filed Jun. 9, 2015.
Advisory Action, dated Nov. 25, 2015, U.S. Appl. No. 13/482,321, filed May 29, 2012.
Non-Final Office Action, dated Dec. 15, 2015, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Non-Final Office Action, dated Dec. 16, 2015, U.S. Appl. No. 14/859,179, filed Sep. 18, 2015.
Non-Final Office Action, dated Jan. 12, 2016, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
Notice of Allowance, dated Feb. 8, 2016, U.S. Appl. No. 14/543,781, filed Nov. 17, 2014.
Corrected Notice of Allowability, dated Mar. 7, 2016, U.S. Appl. No. 14/543,781, filed Nov. 17, 2014.
Notice of Allowance, dated Feb. 16, 2016, U.S. Appl. No. 14/248,167, filed Apr. 8, 2014.
Notice of Allowance, dated Mar. 2, 2016, U.S. Appl. No. 14/677,841, filed Apr. 2, 2015.
Corrected Notice of Allowability, dated Mar. 14, 2016, U.S. Appl. No. 14/677,841, filed Apr. 2, 2015.
Advisory Action, dated Mar. 21, 2016, U.S. Appl. No. 14/679,965, filed Apr. 6, 2015.
Non-Final Office Action, dated May 3, 2016, U.S. Appl. No. 14/679,965, filed Apr. 6, 2015.
Non-Final Office Action, dated May 6, 2016, U.S. Appl. No. 13/288,691, filed Nov. 3, 2011.
Notice of Allowance, dated Jun. 3, 2016, U.S. Appl. No. 14/859,179, filed Sep. 18, 2015.
Non-Final Office Action, dated Jun. 15, 2016, U.S. Appl. No. 15/091,533, filed Apr. 5, 2016.
Non-Final Office Action, dated Jun. 22, 2016, U.S. Appl. No. 14/447,505, filed Jul. 30, 2014.
Non-Final Office Action, dated Jul. 25, 2016, U.S. Appl. No. 14/067,619, filed Oct. 30, 2013.
Final Office Action, dated Jul. 26, 2016, U.S. Appl. No. 14/477,804, filed Sep. 4, 2014.
“IPsec Anti-Replay Window: Expanding and Disabling,” Cisco IOS Security Configuration Guide. 2005-2006 Cisco Systems, Inc. Last updated: Sep. 12, 2006, 14 pages.
Singh et al.; "Future of Internet Security—IPSEC"; 2005; pp. 1-8.
Muthitacharoen, Athicha et al., “A Low-bandwidth Network File System,” 2001, in Proc. of the 18th ACM Symposium on Operating Systems Principles, Banff, Canada, pp. 174-187.
“Shared LAN Cache Datasheet”, 1996, <http://www.lancache.com/slcdata.htm>, 8 pages.
Spring et al., “A protocol-independent technique for eliminating redundant network traffic”, ACM SIGCOMM Computer Communication Review, vol. 30, Issue 4 (Oct. 2000) pp. 87-95, Year of Publication: 2000.
Hong, B. et al., "Duplicate data elimination in a SAN file system", In Proceedings of the 21st Symposium on Mass Storage Systems (MSS '04), Goddard, MD, Apr. 2004, IEEE, pp. 101-114.
You, L. L. and Karamanolis, C. 2004. “Evaluation of efficient archival storage techniques”, In Proceedings of the 21st IEEE Symposium on Mass Storage Systems and Technologies (MSST), pp. 1-6.
Douglis, F. et al., "Application-specific Delta-encoding via Resemblance Detection", Published in the 2003 USENIX Annual Technical Conference, pp. 1-14.
You, L. L. et al., "Deep Store: An Archival Storage System Architecture", Data Engineering, 2005, ICDE 2005, Proceedings of the 21st Intl. Conf. on Data Eng., Tokyo, Japan, Apr. 5-8, 2005, 12 pages.
Manber, Udi, "Finding Similar Files in a Large File System", TR 93-33 Oct. 1994, Department of Computer Science, University of Arizona. <http://webglimpse.net/pubs/TR93-33.pdf>. Also appears in the 1994 Winter USENIX Technical Conference.
Knutsson, Bjorn et al., “Transparent Proxy Signalling”, Journal of Communications and Networks, vol. 3, No. 2, Jun. 2001, pp. 164-174.
Definition memory (n), Webster's Third New International Dictionary, Unabridged (1993), available at <http://lionreference.chadwyck.com> (Dictionaries/Webster's Dictionary). Copy not provided in IPR2013-00402 proceedings.
Definition appliance, 2c, Webster's Third New International Dictionary, Unabridged (1993), available at <http://lionreference.chadwyck.com> (Dictionaries/Webster's Dictionary). Copy not provided in IPR2013-00402 proceedings.
Newton, “Newton's Telecom Dictionary”, 17th Ed., 2001, pp. 38, 201, and 714.
Silver Peak Systems, “The Benefits of Byte-level WAN Deduplication” (2008), pp. 1-5.
Business Wire, “Silver Peak Systems Delivers Family of Appliances for Enterprise-Wide Centralization of Branch Office Infrastructure; Innovative Local Instance Networking Approach Overcomes Traditional Application Acceleration Pitfalls” (available at http://www.businesswire.com/news/home/20050919005450/en/Silver-Peak-Systems-Delivers-Family-Appliances-Enterprise-Wide#.UVzkPk7u-1 (last visited Aug. 8, 2014)), pp. 1-4.
Riverbed, “Riverbed Introduces Market-Leading WDS Solutions for Disaster Recovery and Business Application Acceleration” (available at http://www.riverbed.com/about/news-articles/pressreleases/riverbed-introduces-market-leading-wds-solutions-fordisaster-recovery-and-business-application-acceleration.html (last visited Aug. 8, 2014)), 4 pages.
Tseng, Josh, “When accelerating secure traffic is not secure” (available at http://www.riverbed.com/blogs/whenaccelerati.html?&isSearch=true&pageSize=3&page=2 (last visited Aug. 8, 2014)), 3 pages.
Riverbed, “The Riverbed Optimization System (RiOS) v4.0: A Technical Overview” (explaining “Data Security” through segmentation) (available at http://mediacms.riverbed.com/documents/TechOverview-Riverbed-RiOS_4_0.pdf (last visited Aug. 8, 2014)), pp. 1-18.
Riverbed, “Riverbed Awarded Patent on Core WDS Technology” (available at: http://www.riverbed.com/about/news-articles/pressreleases/riverbed-awarded-patent-on-core-wds-technology.html (last visited Aug. 8, 2014)), 2 pages.
Final Written Decision, dated Dec. 30, 2014, Inter Partes Review Case No. IPR2013-00403, pp. 1-38.
Final Written Decision, dated Dec. 30, 2014, Inter Partes Review Case No. IPR2013-00402, pp. 1-37.
Final Written Decision, dated Jun. 9, 2015, Inter Partes Review Case No. IPR2014-00245, pp. 1-40.
Notice of Allowance, dated Mar. 22, 2017, U.S. Appl. No. 13/621,534, filed Sep. 17, 2012.
Non-Final Office Action, dated Apr. 27, 2017, U.S. Appl. No. 14/447,505, filed Jul. 30, 2014.
Notice of Allowance, dated Oct. 25, 2017, U.S. Appl. No. 14/447,505, filed Jul. 30, 2014.
Final Office Action, dated May 3, 2017, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Notice of Allowance, dated Sep. 8, 2017, U.S. Appl. No. 14/479,131, filed Sep. 5, 2014.
Non-Final Office Action, dated May 4, 2017, U.S. Appl. No. 14/811,482, filed Jul. 28, 2015.
Notice of Allowance, dated Sep. 5, 2017, U.S. Appl. No. 14/811,482, filed Jul. 28, 2015.
Non-Final Office Action, dated Jul. 27, 2017, U.S. Appl. No. 14/981,814, filed Dec. 28, 2015.
Notice of Allowance, dated Mar. 23, 2017, U.S. Appl. No. 15/091,533, filed Apr. 5, 2016.
Final Office Action, dated Feb. 17, 2017, U.S. Appl. No. 15/148,933, filed May 6, 2016.
Non-Final Office Action, dated Jun. 20, 2017, U.S. Appl. No. 15/148,933, filed May 6, 2016.
Final Office Action, dated Oct. 5, 2017, U.S. Appl. No. 15/148,933, filed May 6, 2016.
Non-Final Office Action, dated Nov. 2, 2017, U.S. Appl. No. 15/403,116, filed Jan. 10, 2017.
Non-Final Office Action, dated Sep. 11, 2017, U.S. Appl. No. 15/148,671, filed May 6, 2016.
“Notice of Entry of Judgement Accompanied by Opinion”, United States Court of Appeals for the Federal Circuit, Case: 15-2072, Oct. 24, 2017, 6 pages.
“Decision Granting Motion to Terminate”, Inter Partes Review Case No. IPR2014-00245, Feb. 7, 2018, 4 pages.
Related Publications (1)
Number Date Country
20170359238 A1 Dec 2017 US