Communications scheduler

Information

  • Patent Grant
  • Patent Number
    9,961,010
  • Date Filed
    Tuesday, January 10, 2017
  • Date Issued
    Tuesday, May 1, 2018
Abstract
A system for providing communications over a communications network includes a communications interface and a processor. The communications interface communicates over the communications network. The processor directs a communications scheduler to determine at least one metric for a path within the communications network. The processor also selects a data flow for the path and determines whether to transmit a packet in the selected data flow based on the at least one metric. The processor then directs a communications protocol handler to generate the packet for the selected data flow.
Description
BACKGROUND

1. Technical Field


The present invention relates generally to data flow control over a network and more particularly to a communications scheduler controlling data flow over a network.


2. Description of Related Art


As transmission of network data over a communications network has become commonplace, users increasingly rely upon the speed and accuracy of network data transfer. One of the most common protocols for transmitting network data is the transmission control protocol/internet protocol (TCP/IP). The TCP/IP protocol, like many protocols, organizes network data into packets and controls the transmission of packets over the communications network.


Slow-start is a part of the congestion control strategy of the TCP/IP protocol. Slow-start is used to avoid sending more packets than the communications network is capable of handling. Slow-start increases a TCP congestion window size until acknowledgements are not received for some packets. The TCP congestion window is the number of packets that can be sent without receiving an acknowledgment from the packet receiver. Initially, packets are slowly sent over the communications network. Transmission of the packets is increased until the communications network is congested. When acknowledgements are not received, the TCP congestion window is reduced. Subsequently, the number of packets sent without an acknowledgement is reduced and the process repeats.
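
By way of illustration, the slow-start behavior described above can be sketched in a few lines of Python; the path capacity, growth rule, and loss reaction below are simplified assumptions rather than the exact protocol behavior.

    # Minimal model of TCP slow-start: the congestion window (in packets)
    # doubles each round trip until a loss occurs, then is cut back.
    def slow_start(capacity_packets, max_rounds=20):
        cwnd = 1
        history = []
        for _ in range(max_rounds):
            history.append(cwnd)
            if cwnd > capacity_packets:       # loss: the path cannot absorb cwnd packets
                cwnd = max(1, cwnd // 2)      # reduce the window and repeat the cycle
            else:
                cwnd *= 2                     # exponential growth while acknowledgements arrive
        return history

    print(slow_start(capacity_packets=32))    # e.g. [1, 2, 4, 8, 16, 32, 64, 32, 64, ...]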


The TCP/IP protocol allocates bandwidth roughly in inverse proportion to the round trip time (RTT). Although many users generally expect bandwidth to be shared equally, each connection's share is often governed by the ratio of the RTTs. In one example, two different users may be transmitting data. The first user may desire to transmit data to a local digital device with a 1 ms round trip time while the other user may desire to transmit data to another state with a 100 ms round trip time. The standard TCP/IP protocol will, on average, deliver roughly 100× more bandwidth to the local device connection than to the out-of-state connection. The TCP/IP protocol does not attempt to enforce any kind of explicit fairness policy. As a result, users that transmit data locally may receive better service at the unfair expense of those wishing to transmit data over longer distances.
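
The inverse relation can be seen with a small worked example, assuming a fixed window of data in flight (the window size below is hypothetical):

    # Steady-state TCP throughput is roughly window_size / RTT, so for a fixed
    # window the 1 ms connection receives about 100x the bandwidth of the 100 ms one.
    window_bytes = 64 * 1024                   # hypothetical 64 KB window in flight
    for rtt_s in (0.001, 0.100):               # 1 ms local RTT versus 100 ms long-haul RTT
        throughput_mbps = window_bytes * 8 / rtt_s / 1e6
        print(f"RTT {rtt_s * 1000:.0f} ms -> ~{throughput_mbps:.1f} Mbit/s")
    # RTT 1 ms -> ~524.3 Mbit/s, RTT 100 ms -> ~5.2 Mbit/s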



FIG. 1 is a block diagram of a computer 100 configured to transmit network data in the prior art. The computer 100 depicts hardware and software elements related to transmission of network data. Other hardware and software of the computer 100 are not depicted for the sake of simplicity. The computer 100 comprises an application 110, a TCP/IP stack 120, a network device driver 130, and a network interface card 140. The network interface card 140 is coupled to a communications network over a link 150. The computer 100 can be any digital device configured to transmit network data.


The TCP/IP stack 120 receives the network data from the application 110 and proceeds to organize the network data into packets. Depending on the type of network, a packet can be termed a frame, block, cell, or segment. The TCP/IP stack 120 buffers the network data prior to organizing the network data into packets and subsequently buffers the packets.


The network device driver 130 enables an operating system of the computer 100 to communicate to the network interface card 140. The network interface card 140 is any device configured to send or receive packets over the communications network. The network device driver 130 configures the network interface card 140 to receive the packets and subsequently transmit the packets over the link 150 to the communications network.


In one example, the TCP/IP stack 120 of the sending computer 100 will not send another packet across the communications network until an acknowledgement from the destination is received. The number of packets between acknowledgments increases until a packet is lost and an acknowledgment is not received, at which point the TCP/IP stack 120 slows down the transmission of packets and, again, slowly increases the speed of transmission until another packet is lost. As a result, the transmission of network data by the TCP/IP stack 120 can be graphed as a sawtooth: the transmission rate increases until packets are lost and then drops to a slower speed before the process repeats. Under the TCP/IP approach, packets are often transmitted at speeds below the network's capacity; when the packets are not being sent slowly, however, the communications network quickly becomes congested and the process repeats.
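
The sawtooth and its effect on average throughput can be modeled with a short sketch; the capacity, ramp step, and halving rule here are illustrative assumptions:

    # Rough model of the sawtooth: the send rate climbs toward the path capacity,
    # drops by half on each loss, and therefore averages below the capacity.
    def sawtooth_average(capacity=100.0, step=5.0, rounds=1000):
        rate, total = capacity / 2, 0.0
        for _ in range(rounds):
            total += rate
            rate = rate / 2 if rate >= capacity else rate + step  # loss -> halve the rate
        return total / rounds

    print(sawtooth_average())   # roughly 75 versus a capacity of 100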


While the TCP/IP stack 120 waits to transmit the packets, the packets are buffered. If the TCP/IP stack 120 transmits too slowly, the buffers may overrun and packets may be lost. Further, the process of buffering and retrieving the buffered packets slows packet transmission and increases the costs of hardware.


The TCP/IP stack 120 delivers different performance depending on the distance that packets are to travel. The TCP/IP stack 120 generates packets based on received network data. The destination of the packets dictates the order in which the packets are transmitted. Packets to be transmitted over longer distances may be transmitted more slowly than packets to be transmitted over shorter distances. As a result, this procedure may not be fair to users wishing to transmit mission-critical data over long distances.


Performance enhancing proxies have been used to improve performance of local networks by overriding specific behaviors of the TCP/IP stack 120. In one example, individual digital devices on a local area network are configured to transmit packets based on a proxy protocol. The proxy protocol overrides certain behaviors of the TCP/IP stack 120 to improve the speed of transmission of network data. However, the performance enhancing proxy does not find bottlenecks on networks, control the transmission of network data based on those bottlenecks, or improve fairness in packet transmission.


SUMMARY OF THE INVENTION

The invention addresses the above problems by providing a communications scheduler that controls data flow over a network. The communications scheduler controls the transmission of packets over a path at a rate based on at least one metric. A system for providing communications over a communications network includes a communications interface and a processor. The communications interface communicates over the communications network. The processor directs a communications scheduler to determine at least one metric for a path within the communications network. The processor also selects a data flow for the path and determines whether to transmit a packet in the selected data flow based on the at least one metric. The processor then directs a communications protocol handler to generate the packet for the selected data flow.


The communications interface may transmit the packet in the selected data flow. The communications protocol handler may comprise a transmission control protocol/internet protocol (TCP/IP) stack. The at least one metric may be a bandwidth or a bandwidth estimate.


The processor may direct the communications protocol handler to receive information from an application to be included in the packet. The processor may also direct the communications scheduler to determine if the data flow has information to send prior to selecting the data flow for the path. To select the data flow, the processor may direct the communications scheduler to determine a priority of data flows and to select the data flow for which the packet is generated based on that priority determination and the metric. The priority of data flows may be based on a fairness policy or other metrics.


A system for providing communications over a communications network includes a communications scheduler module and a communications protocol handler module. The communications scheduler module determines at least one metric for a path within the communications network and selects a data flow for the path. The communications scheduler module determines whether to transmit a packet in the selected data flow based on the at least one metric. Further, the communications scheduler module directs the communications protocol handler module to generate the packet in the selected data flow. The communications protocol handler module receives the direction from the communications scheduler module to generate the packet in the selected data flow and generates the packet based on the direction.


A method for providing communications over a communications network includes determining at least one metric for a path within the communications network, selecting a data flow for the path, determining whether to transmit a packet in the selected data flow based on the at least one metric, and directing a communications protocol handler to generate the packet for the selected data flow.
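
A minimal sketch of how these roles might be expressed as module interfaces is shown below; the class and method names are hypothetical and only mirror the steps recited above.

    # Hypothetical interfaces mirroring the recited method: determine a metric for a
    # path, select a data flow, decide whether to transmit, and direct the handler
    # to generate the packet.
    class CommunicationsProtocolHandler:
        def generate_packet(self, flow_id):
            ...   # organize buffered application data for flow_id into one packet

    class CommunicationsScheduler:
        def determine_metric(self, path):
            ...   # e.g. return a bandwidth estimate for the path
        def select_data_flow(self, path):
            ...   # pick a data flow with data to send, honoring the fairness policy
        def should_transmit(self, path, metric):
            ...   # compare recent transmissions on the path against the metric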


A software product for providing communications over a communications network includes communications scheduler software and a storage medium to store the communications scheduler software. The communications scheduler software directs a processor to determine at least one metric for a path within the communications network and select a data flow for the path. The communications scheduler software can also direct the processor to determine whether to transmit a packet in the selected data flow based on the at least one metric and to direct a communications protocol handler to generate the packet for the selected data flow.


The system advantageously transmits packets at a rate based on a path's capacity to carry packets. By determining a metric for a selected path, packets of network data can be transmitted to maximize throughput without waiting for lost packets or acknowledgments to prove network congestion. As a result, the overall speed of packet transmission can be improved without sacrificing reliability. Network congestion, which can affect other traffic within the path, can be avoided by not transmitting packets at a rate above what the path can carry. Further, packets may be generated at the speed of packet transmission, advantageously reducing or eliminating the need for packet buffering. The reduction or elimination of buffers reduces hardware expense and may increase the speed of packet transmission.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a computer configured to transmit network data in the prior art.



FIG. 2 is an illustration of a network environment in an exemplary implementation of the invention.



FIG. 3 is an illustration of a network device in an exemplary implementation of the invention.



FIG. 4 is a flow chart for the transmission of network data in an exemplary implementation of the invention.



FIG. 5 is a flow chart for the determination of the bandwidth estimate in an exemplary implementation of the invention.



FIG. 6 is a block diagram of the network device in an exemplary implementation of the invention.



FIG. 7 is an illustration of an exemplary implementation of the operation of a network device in the prior art.



FIG. 8 is an illustration of an exemplary implementation of the operation of a network device of the invention.





DETAILED DESCRIPTION OF THE INVENTION

The embodiments discussed herein are illustrative of one example of the present invention. As these embodiments of the present invention are described with reference to illustrations, various modifications or adaptations of the methods and/or specific structures described may become apparent to those skilled in the art. All such modifications, adaptations, or variations that rely upon the teachings of the present invention, and through which these teachings have advanced the art, are considered to be within the scope of the present invention. Hence, these descriptions and drawings should not be considered in a limiting sense, as it is understood that the present invention is in no way limited to only the embodiments illustrated.


A system for providing communications over a communications network includes a communications interface and a processor. The communications interface communicates over the communications network. The processor directs a communications scheduler to determine at least one metric for a path within the communications network. The processor also selects a data flow for the path and determines whether to transmit a packet in the selected data flow based on the at least one metric. The processor then directs a communications protocol handler to generate the packet for the selected data flow.


The system can advantageously provide reliable transmission of packets through a communications network more efficiently and more fairly than that available in the prior art. Instead of increasing speeds to the point of network congestion before slowing transmission, packets may be transmitted through the communications network at speeds that more closely approximate the capacity of the packet path. As a result, network congestion is reduced and high data transmission rates can be maintained.


Further, the system can advantageously provide fair transmission of packets through the communications network. Instead of transmitting packets based on the conservation of bandwidth over long distances, packets can be transmitted based on an equitable policy. As a result, packets from each user may be transmitted over the communication network at a rate equal to that of other users regardless of the RTT of the transmission. Further, packets may be transmitted at a faster rate based on the relative importance of the packets, the identity of the user providing network data, or the type of digital device generating the packets. Other fairness policies are also possible.



FIG. 2 is an illustration of a network environment 200 in an exemplary implementation of the invention. The network environment 200 includes a source 210, an optional source network 220, a source network device 230, a communications network 240, a destination 250, an optional destination network 260, and a destination network device 270. The source 210 is coupled to the source network device 230 over the source network 220. The source network device 230 is coupled to the communications network 240. The communications network 240 is coupled to the destination network device 270, which is coupled to the destination 250 over the destination network 260.


Source 210 can be any digital device configured to transmit network data. Similarly, the destination 250 is any digital device configured to receive network data. Examples of digital devices include, but are not limited to, computers, personal digital assistants, and cellular telephones. The source network 220 and the destination network 260 are any networks that couple a digital device (e.g., source 210) to a network device (e.g., source network device 230). The source network 220 and the destination network 260 can be a wired network, a wireless network, or any combination.


The communications network 240 can be any network configured to carry network data and/or packets between the source network device 230 and the destination network device 270. In one example, the communications network 240 is the Internet.


The embodiments in FIGS. 2-5 depict an example of packets being transmitted from the source 210 to the destination 250 through the source network device 230, the communications network 240, and the destination network device 270. Other embodiments may include packets being transmitted directly from the destination 250 to the source 210. In an example, the source 210 and the destination 250 may comprise the source network device 230 and the destination network device 270, respectively. While there are numerous variations in where packets are generated and transmitted, the figures below describe one example of packet transmissions from the source 210 to the destination 250 for the sake of simplicity.


The source network device 230 and the destination network device 270 are any device or system configured to process and exchange packets over the communications network 240. A path is any route the packet may take from the source network device 230 to the destination network device 270. The configuration of the source network device 230 and the destination network device 270 are described in further detail below in FIG. 6. One example of the source network device 230 and the destination network device 270 is an appliance in a network memory architecture, which is described in U.S. patent application Ser. No. 11/202,697 entitled “Network Memory Architecture for Providing Data Based on Local Accessibility” filed on Aug. 12, 2005, which is hereby incorporated by reference.



FIG. 3 is an illustration of a network device 300 in an exemplary implementation of the invention. The network device 300 may be the source network device 230 (FIG. 2) or the destination network device 270 (FIG. 2). The network device 300 includes a communications protocol handler 310, a communications scheduler 320, and a communications interface 330. The communications protocol handler 310 is coupled to the communications scheduler 320 over scheduler link 340. The communications scheduler 320 is further coupled to the communications interface 330 over communications link 350. The communications interface 330 is coupled to the communications protocol handler 310 over handler link 360, the communications network 240 (FIG. 2) over network link 370, and the source network 220 over the source link 380. The communications protocol handler 310, the communications scheduler 320, and the communications interface 330 may be software modules. Software modules comprise executable code that may be processed by a processor (not depicted).


The communications protocol handler 310 is any combination of protocols configured to organize network data into packets. In one example, the communications protocol handler 310 is a TCP/IP stack. In other examples, the communications protocol handler 310 is a User Datagram Protocol (UDP) stack or a Real-time Transport Protocol (RTP) stack. The communications protocol handler 310 may receive network data from an application (not depicted). The communications protocol handler 310 organizes the network data from the application into packets which are to be transmitted over the communications network 240. The communications protocol handler 310 may receive the network data from the application directly. Alternately, the application can reside on the source 210 outside of the network device 300. In an example, the communications interface 330 of the network device 300 receives the network data from the application over the source link 380. The communications interface 330 then forwards the application data over the handler link 360 to the communications protocol handler 310.


The communications scheduler 320 is configured to control the transmission of the packets from the communications protocol handler 310. The communications scheduler 320 can determine at least one metric for a path (discussed in FIG. 2, herein) on the communications network 240 and then control the flow of packets on that path based on the one or more metrics. The metric is any measured value related to a quality, operation, or performance of the path. In one example, the metric is a bandwidth estimate for the path. The bandwidth estimate is a value that estimates the number of packets that may be sent over a path during a predetermined time (e.g., the capacity of the path to transmit packets without congestion). If the bandwidth estimate of the path is high, the path may be capable of carrying a large number of packets. If the bandwidth estimate is low, the path may be capable of carrying only a smaller number of packets.


The communications scheduler 320 can determine the bandwidth estimate of any number of paths on the communications network 240. In one example, the communications scheduler 320 transmits probe packets through the communications interface 330 over the communications network 240 to another network device 300. The communications scheduler 320 of the other network device 300 receives the probe packet and transmits a monitor packet back to the originating network device 300. The communications scheduler 320 of the originating network device 300 receives the monitor packet and determines a bandwidth estimate for the path. The determination of the metric is further discussed in FIG. 5.


The communications scheduler 320 can control the transmission of the packets from the communications protocol handler 310 based on the metric of the path. In one example, the communications scheduler 320 limits the number of packets transmitted to the capacity of the path based on the metric. This process is further discussed in FIG. 4. Although the communications protocol handler 310 may comprise a protocol that controls the transmission of network data to avoid congestion (e.g., TCP/IP stack methodology), the communications scheduler 320 may override this function.


By determining the capacity of the path and controlling the flow of packets over the communications network 240, the communications scheduler 320 can increase or optimize the speed at which network data flows across the communications network 240. The prior art protocols typically begin at slow-start and increase speed until congestion appears. Subsequently, the prior art protocols slow down the rate of transmission and slowly increase it again. The communications scheduler 320 can maintain the speeds that the path will allow. The ultimate throughput the communications scheduler 320 achieves may therefore be higher than the average speed of the prior art protocols.


In some embodiments, the communications scheduler 320 pulls packets from the communications protocol handler 310, obviating the need for buffers. The communications protocol handler 310 can generate a packet at the command of the communications scheduler 320. In one example, the speed at which packets are generated and transmitted is equivalent to the bandwidth estimate. Since the communications scheduler 320 is pulling packets from the communications protocol handler 310 rather than determining transmission rates after packet generation, the packets need not be buffered before transmission. As a result, buffering may be reduced or eliminated, which can increase the speed of transmission and/or reduce hardware costs.
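
One way to picture this pull model is the following sketch, in which the scheduler requests one packet at a time and paces transmission at the estimated rate; the handler and interface objects and their methods are assumed for illustration only.

    import time

    # Sketch of the pull model: the scheduler asks the handler for one packet at a
    # time, paced by the bandwidth estimate, so no transmit buffer accumulates.
    def pull_and_send(handler, interface, flow_id, bandwidth_pps, count):
        interval = 1.0 / bandwidth_pps                   # seconds between packets
        for _ in range(count):
            packet = handler.generate_packet(flow_id)    # packet created on demand
            interface.transmit(packet)                   # handed straight to the wire
            time.sleep(interval)                         # pace at the estimated rate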


The communications interface 330 is coupled to the network device 300, the source 210, the source network 220, and/or the communications network 240. The communications interface 330 can transmit the packets received over the communications link 350 from the communications scheduler 320 to the communications network 240. The communications interface 330 also provides packets received from the communications network 240 to the communications protocol handler 310 over the handler link 360. In some embodiments, the communications interface 330 sends any monitor packets received from another network device 300 to the communications scheduler 320. Further, the communications interface 330 may send any network data received from an application over the source link 380 to the communications protocol handler 310 where the network data will be subsequently organized into packets to prepare for further transmission over the communications network 240. In some embodiments, the communications interface 330 is linked to both the source network 220 and the communications network 240 over a single link (e.g., the network link 370 or the source link 380).



FIG. 4 is a flow chart for the transmission of network data in an exemplary implementation of the invention. FIG. 4 begins in step 400. In step 410, the communications scheduler 320 (FIG. 3) selects a path from eligible paths. An eligible path can be any path with a bandwidth estimate, any path with data to send, or any path that meets predetermined quality of service criteria. Alternatively, an eligible path can be any path with a bandwidth estimate equal to or greater than a predetermined estimate. In another example, the eligible paths can be a predetermined percentage of paths with higher bandwidth estimates than the others.


The communications scheduler 320 can select a path from the eligible paths based on the bandwidth estimate. In one example, the communications scheduler 320 selects the path with the highest bandwidth estimate. In some embodiments, the paths may be prioritized. Specific paths may be weighted based on the properties of one or more networks within the communications network 240. In one example, the path may extend through a virtual private network (VPN) or a network with a bandwidth guarantee.
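
For example, path selection over the eligible paths might look like the following sketch, where the per-path weights standing in for VPN or bandwidth-guarantee preferences are hypothetical:

    # Pick the eligible path with the highest bandwidth estimate, applying an
    # optional per-path weight (e.g. to prefer a VPN or guaranteed-bandwidth link).
    def select_path(bandwidth_estimates, weights=None, minimum=0):
        eligible = {p: bw for p, bw in bandwidth_estimates.items() if bw >= minimum}
        if not eligible:
            return None
        weights = weights or {}
        return max(eligible, key=lambda p: eligible[p] * weights.get(p, 1.0))

    print(select_path({"path_a": 80, "path_b": 120}, weights={"path_a": 2.0}))  # path_a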


In step 420, the communications scheduler 320 retrieves a bandwidth estimate associated with one or more paths. The communications scheduler 320 can continue to retrieve or receive bandwidth estimates during any step of FIG. 4. The process of determining a bandwidth estimate is further discussed in FIG. 5.


In step 430, the communications scheduler 320 determines if the number of packets generated over the selected path for a predetermined time is less than the bandwidth estimate of the selected path. In one example, the communications scheduler 320 tracks the number of packets that have been transmitted over each path as well as when the packets were transmitted. If the number of packets transmitted over the selected path during a predetermined period of time is greater than the bandwidth estimate, the communications scheduler 320 retrieves the bandwidth estimate associated with other paths in step 420. In some embodiments, the communications scheduler 320 subsequently selects a new path from the eligible paths before returning to step 430. In other embodiments, if the number of packets transmitted over the selected path during a predetermined period of time is greater than the bandwidth estimate, FIG. 4 ends. If the number of packets generated for the selected path is less than the bandwidth estimate, then the communications scheduler 320 can prioritize the data flows for the selected path in step 440.
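
A sketch of the step 430 check, assuming the scheduler records a timestamp for each transmitted packet and expresses the bandwidth estimate in packets per second:

    import time

    # The scheduler keeps a timestamp per transmitted packet and only generates a
    # new packet if fewer packets than the bandwidth estimate were sent in the
    # last measurement window (names and units here are illustrative).
    def may_send(sent_timestamps, bandwidth_estimate_pps, window_s=1.0):
        now = time.monotonic()
        recent = [t for t in sent_timestamps if now - t <= window_s]
        sent_timestamps[:] = recent                   # drop packets outside the window
        return len(recent) < bandwidth_estimate_pps * window_s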


In one example, the communications scheduler 320 queries the communications protocol handler 310 (FIG. 3) for available data flows. A data flow comprises related network data or packets. In one example, packets belonging to the same data flow comprise the same source IP address, destination IP address, IP protocol, source port, and destination port. There may be a separate data flow for separate applications, sessions, or processes. Each data flow may also be prioritized based on the purpose of the network data within the data flow or the source of the data flow (e.g., the digital device that generated the data flow or a user of the digital device that generated the data flow). In some embodiments, the communications protocol handler 310 notifies the communications scheduler 320 of all data flows without being queried.
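
A common way to key such a data flow is the five-tuple named above; a minimal sketch:

    # A data flow is identified by the usual five-tuple; packets that share this
    # key belong to the same flow and can be scheduled and prioritized together.
    from collections import namedtuple

    FlowKey = namedtuple("FlowKey", "src_ip dst_ip protocol src_port dst_port")

    key = FlowKey("10.0.0.1", "10.0.0.2", "TCP", 51000, 443)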


In exemplary embodiments, the data flows are weighted depending upon the application that originated the data flow, the user of the application, the number of data flows already sent from the application (or the user), and the number of packets already sent from that data flow. In one example, the data flows are all given equal weight and a packet is sent from each eligible data flow in turn (e.g., a round robin approach). In another example, certain applications or users are given priority over other applications or other users (e.g., by weighting certain applications higher than others). Packets generated by a particular source IP address or transmitted to a particular destination IP address may also be given priority. There may be many different methodologies for weighting the data flows.


In step 450, the communications scheduler 320 selects the data flows for the selected path. In one example, the data flows are selected based on an assigned weight or priority. In some embodiments, the data flows are re-weighted (i.e., re-prioritized) after a packet is transmitted.
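
One hypothetical way to realize the weighting and re-prioritization is a credit-based selection, sketched below; the flow names and weights are illustrative only.

    # One way to realize the weighting: a credit counter per flow; the flow with
    # the largest accumulated credit sends next and its credit is then reduced.
    def pick_flow(credits, weights):
        for flow in credits:                        # every flow earns credit by weight
            credits[flow] += weights.get(flow, 1.0)
        chosen = max(credits, key=credits.get)      # flow with the most credit sends
        credits[chosen] -= sum(weights.values())    # re-weight after transmission
        return chosen

    credits = {"voip": 0.0, "bulk": 0.0}
    print([pick_flow(credits, {"voip": 3.0, "bulk": 1.0}) for _ in range(4)])
    # ['voip', 'voip', 'bulk', 'voip'], roughly the 3:1 weighting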


Instead of transmitting packets based on the round trip time of packets (e.g., the distance that packets are transmitted), packets can be transmitted based on a configurable fairness policy. A fairness policy is any policy that allows for equitable transmission of packets over the communications network. In one example, the fairness policy dictates that every data flow be given equal weight. In another example, the fairness policy dictates that certain users or data flows are more important (e.g., time sensitive) than others and therefore are given greater weight. The fairness policy can base fair transmission of packets on the saliency of users and/or data rather than the preservation of bandwidth over long distances within the communications network 240 (FIG. 2).


In step 460, the communications scheduler 320 performs a function call to the communications protocol handler 310 to generate a packet from the selected data flow. In one example, the communications protocol handler 310 receives network data from an application (not depicted). The network data is organized into data flows. The communications scheduler 320 prioritizes the data flows and selects a data flow. Subsequently, the communications scheduler 320 provides a function call to command the communications protocol handler 310 to organize the network data into packets.


In some embodiments, the communications protocol handler 310 does not generate packets without a function call from the communications scheduler 320. In one example, the packets are not buffered prior to transmission over the network link 370 (FIG. 3). As a result of the communications scheduler 320 pulling packets from the communications protocol handler 310, buffering the packets prior to transmission may be reduced or eliminated.


In step 470, the communications scheduler 320 transmits the packet of the selected data flow. In one example, the communications scheduler 320 commands the communications interface 330 to transmit the packet over the network link 370. The communications scheduler 320 then retrieves a new bandwidth estimate associated with one or more paths in step 420 and the process can continue. In other embodiments, FIG. 4 ends after step 470.


In some embodiments, the communications scheduler 320 overrides the behavior of the communications protocol handler 310 to transmit packets at a rate based on the bandwidth estimate. In one example, the communications scheduler 320 overrides the cwnd behavior to control the size of the congestion window of the TCP/IP stack (i.e., the communications protocol handler 310). As a result, the communications scheduler 320 can replace or modify the cwnd behavior (or any behavior that influences the congestion window) to cause the communications protocol handler 310 to transmit packets at a rate based on the bandwidth estimate.
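
The specification does not give a formula for the overriding window; one plausible choice, sketched here purely as an assumption, is the bandwidth-delay product rounded up to whole packets:

    import math

    # One plausible way to derive an overriding congestion window from the
    # bandwidth estimate: the bandwidth-delay product in packets.
    def override_cwnd(bandwidth_bytes_per_s, rtt_s, mss_bytes=1460):
        bdp_bytes = bandwidth_bytes_per_s * rtt_s
        return max(1, math.ceil(bdp_bytes / mss_bytes))

    print(override_cwnd(10_000_000, 0.05))   # ~343 packets for 10 MB/s at 50 ms RTT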



FIG. 5 is a flow chart for the determination of the bandwidth estimate in an exemplary implementation of the invention. FIG. 5 begins in step 500. In step 510, the source network device 230 (FIG. 2) generates and transmits one or more probe packets. A probe packet is a packet sent by the source network device 230 to a destination network device 270 (FIG. 2) to determine a metric for a particular path. The metric may be a bandwidth estimate. In an example, the communications scheduler 320 (FIG. 3) of the source network device 230 generates and subsequently transmits one or more probe packets over the communications network 240 (FIG. 2). In some embodiments, the probe packets are stamped with a transmission timestamp based on the time of transmission. Further, the probe packets may be stamped with a selected path over which the probe packet is to be sent.


In step 520, the destination network device 270 receives the probe packets from the source network device 230. In an example, the communications scheduler 320 of the destination network device 270 receives the probe packets. The destination network device 270 marks the arrival of the one or more probe packets with a timestamp in step 530. In one example, the destination network device 270 may collect probe information associated with the one or more probe packets including, but not limited to, the source network device 230 that sent the one or more probe packets, the path over which the probe packet(s) was sent, and/or the transmission timestamp of the probe packet(s).


In step 540, the destination network device 270 determines the bandwidth estimate of the selected path based on the timestamp(s) of the one or more probe packets. In some embodiments, the destination network device 270 determines the bandwidth estimate by determining the number of eligible probe packets received over a predetermined time. Eligible probe packets can be probe packets with timestamps within the predetermined time. In some embodiments, the destination network device 270 determines the bandwidth estimate based on the inter-arrival time between probe packets (e.g., the time between receipt of successive probe packets).
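
A minimal sketch of such an arrival-time calculation, with hypothetical timestamps and probe size:

    # Sketch of the receiver-side estimate: the probe bytes received divided by
    # the span of their arrival timestamps (the inter-arrival method).
    def bandwidth_estimate(arrival_times_s, probe_size_bytes):
        if len(arrival_times_s) < 2:
            return None
        span = max(arrival_times_s) - min(arrival_times_s)
        if span <= 0:
            return None
        # bytes carried by all probes after the first one, over the arrival span
        return (len(arrival_times_s) - 1) * probe_size_bytes / span

    print(bandwidth_estimate([0.000, 0.010, 0.021, 0.030], 1500))  # ~150,000 B/s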


In step 550, the destination network device 270 generates and transmits a monitoring packet with the bandwidth estimate to the source network device 230. In one example, the communications scheduler 320 of the destination network device 270 generates and transmits the monitoring packet to the communications scheduler 320 of the source network device 230.


In step 560, the source network device 230 receives the monitoring packet from the destination network device 270. In some embodiments, the destination network device 270 transmits the monitoring packet over the same selected path as the one or more probe packets. The source network device 230 can confirm the bandwidth estimate contained within the monitoring packet or modify the bandwidth estimate based on the time when the monitoring packet was received. In one example, the destination network device 270 transmits the monitoring packet with a timestamp to allow the source network device 230 to re-calculate the bandwidth estimate for the selected path. In other embodiments, the destination network device 270 transmits the monitoring packets with the timestamp over a different path to allow the source network device 230 to receive the bandwidth estimate for the selected path and calculate the bandwidth estimate for the different path.


In step 570, the source network device 230 determines the bottleneck based on the bandwidth estimate. In one example, the communications scheduler 320 pulls packets from the communications protocol handler 310 based on the bandwidth estimate. The pulled packets are subsequently transmitted over the communications network 240 by the communications interface 330.


In other embodiments, the source network device 230 transmits probe packets without timestamps to the destination network device 270 over a selected path. The destination network device 270 receives the probe packets and transmits monitoring packets with timestamps to the source network device 230 over the same path. The source network device 230 receives the monitoring packets and then determines the bandwidth estimate of the path based on the timestamps of the monitoring packets.


Many different probe packets may be sent over many paths from a source network device 230 to many different destination network devices 270 during a given time. By continuously discovering new paths and modifying the bandwidth estimates of existing paths, the source network device 230 can increase the throughput of packets to the destination 250 (FIG. 2) without continuously decreasing and increasing the speed of packet transmission when congestion occurs. The exemplary implementation flow chart for the determination of the bandwidth estimate depicted in FIG. 5 ends at step 580.



FIG. 6 is a block diagram of a network device 600 in an exemplary implementation of the invention. The network device 600 may have a similar configuration as the source network device 230 (FIG. 2) and/or the destination network device 270 (FIG. 2). The network device 600 includes a processor 610, a memory 620, a network interface 630, and an optional storage 640 which are all coupled to a system bus 650. The processor 610 is configured to execute executable instructions.


The memory 620 is any memory configured to store data. Some examples of the memory 620 are storage devices, such as RAM or ROM.


The network interface 630 is coupled to the communications network 240 (FIG. 2) and the source 210 (FIG. 2) via the link 660. The network interface 630 is configured to exchange communications between the source 210, the communications network 240, and the other elements in the network device 600. In some embodiments, the network interface 630 may comprise a Local Area Network interface for the source 210 and a Wide Area Network interface for the communications network 240.


The optional storage 640 is any storage configured to retrieve and store data. Some examples of the storage 640 are hard drives, optical drives, and magnetic tape. The optional storage 640 can comprise a database or other data structure configured to hold and organize data. In some embodiments, the network device 600 includes memory 620 in the form of RAM and storage 640 in the form of a hard drive.


The above-described functions can be comprised of executable instructions that are stored on storage media. The executable instructions can be retrieved and executed by the processor 610. Some examples of executable instructions are software, program code, and firmware. Some examples of storage media are memory devices, tape, disks, integrated circuits, and servers. The executable instructions are operational when executed by the processor to direct the processor to operate in accord with the invention. Those skilled in the art are familiar with executable instructions, processor(s), and storage media.



FIG. 7 is an illustration of an exemplary embodiment of the operation of a network device 300 in the prior art. The communications protocol handler 310 may receive application data over source link 380. In the exemplary embodiment depicted, communications protocol handler 310 may be a TCP/IP stack. The communications protocol handler 310 may receive Packet A 380A of a specified size, and forward that packet to the communications scheduler 320, which may then place the data packet in its queue for transmission over the network. Communications protocol handler 310 may then receive Packet B 380B and also forward that packet to the communications scheduler 320, which in turn may add Packet B 380B to its queue for transmission over the network. Communications protocol handler 310 may then receive Packet C 380C and forward that packet to the communications scheduler 320, which in turn may add Packet C 380C to its queue for transmission over the network. Thus, communications scheduler 320 may have three separate data packets in its queue for transmission over the network.



FIG. 8 is an illustration of an exemplary embodiment of the operation of a network device 300 according to the present invention. The communications protocol handler 310 may receive application data over source link 380. In the exemplary embodiment depicted, communications protocol handler 310 may be a TCP/IP stack. The TCP/IP stack may receive Packet A 380A of a specified size, which is then kept by the TCP/IP stack. Communications protocol handler 310 may then receive Packet B 380B, which is also added to the data held at the TCP/IP stack. Communications protocol handler 310 may then receive Packet C 380C, which is subsequently added to the data held at the TCP/IP stack. The communications scheduler 320 may then be informed that the TCP/IP stack has data to be transmitted. As discussed herein, the communications scheduler 320 may select a suitable path and prioritize data flows for the selected path. The communications scheduler 320 may then direct the TCP/IP stack to generate one or more data packets for the data flow to be transmitted over the network. In an exemplary embodiment, the communications protocol handler 310 may then generate Packet D 380D, containing the data from Packets A, B, and C, and send that packet to the communications scheduler 320, which may in turn direct it to the communications interface 330 for transmission over the network. Packet D 380D may be a single data packet or multiple data packets.
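
The contrast with FIG. 7 can be sketched as follows: the stack holds the application data per flow and only materializes a (possibly coalesced) packet when the scheduler asks. The class and the data used here are hypothetical.

    # Sketch of the FIG. 8 behavior: the stack holds application data per flow and
    # only builds a (possibly coalesced) packet when the scheduler requests one.
    class PullStack:
        def __init__(self):
            self.held = {}                          # flow_id -> list of byte chunks
        def receive(self, flow_id, data):
            self.held.setdefault(flow_id, []).append(data)
        def generate_packet(self, flow_id):
            payload = b"".join(self.held.pop(flow_id, []))  # Packets A+B+C -> Packet D
            return payload or None

    stack = PullStack()
    for chunk in (b"A", b"B", b"C"):
        stack.receive("flow1", chunk)
    print(stack.generate_packet("flow1"))           # b'ABC'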


The above description is illustrative and not restrictive. Many variations of the invention will become apparent to those of skill in the art upon review of this disclosure. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.

Claims
  • 1. A system for providing network communications, the system comprising: a TCP/IP stack module stored in memory and executed by a processor to: receive a plurality of original application data packets from a data flow, inform a communications scheduler module that the data flow has data to be transmitted over a network, and generate at least one data packet for the data flow from the plurality of original application data packets to be transmitted over the network when directed by the communications scheduler module;the communications scheduler module stored in memory and executed by the processor to: determine a capacity of a network path, select a suitable network path, select a data flow for the selected network path, and direct the TCP/IP stack module to generate the at least one data packet for the selected data flow from the application data in the selected data flow for immediate transmission of the selected data flow over the network, overriding standard TCP flow control; anda communications interface module stored in memory and executed by the processor to transmit the at least one data packet for the selected data flow via the selected network path at an optimal transmission rate, regardless of TCP flow control status.
  • 2. The system of claim 1, wherein the execution of the communications scheduler module to determine the capacity of the network path includes determining a bandwidth estimate.
  • 3. The system of claim 1, wherein the communications interface module transmits the at least one data packet for the selected data flow via the selected network path at a rate based on a bandwidth estimate for the capacity of the selected network path.
  • 4. The system of claim 1, wherein the communications scheduler module is executable to determine whether a data flow has information to send prior to selecting the data flow for the selected network path.
  • 5. The system of claim 1, wherein the communications scheduler module is executable to determine respective priorities of data flows and select the data flow based on the priorities and the capacity of the selected network path.
  • 6. The system of claim 1, wherein the data flow is selected based on a fairness policy.
  • 7. The system of claim 1, wherein the data flow is selected based on at least one of an application that originated the data flow, a user of the application, a number of data flows previously sent from the application or the user, and a number of packets previously sent from that data flow.
  • 8. A method for providing network communications, the method comprising: receiving at a TCP/IP stack module application data in a plurality of original application data packets from a data flow;informing a communications scheduler module that the data flow has data to be transmitted over a network;receiving an instruction at the TCP/IP stack module from the communications scheduler module to send the data to the communications scheduler module;generating by the TCP/IP stack module at least one data packet for the data flow from the plurality of original application data packets to be transmitted over the network;sending the at least one data packet to the communications scheduler module for transmission over the network, wherein the communications scheduler is configured to: determine a capacity of a network path, select a suitable network path, select a data flow for the selected network path, and direct the TCP/IP stack module to generate the at least one data packet for the selected data flow from the application data in the selected data flow for immediate transmission of the selected data flow over the network, overriding standard TCP flow control; andexecuting by a processor a communications interface module stored in memory to transmit the at least one data packet for the selected data flow via the selected network path at an optimal transmission rate, regardless of TCP flow control status.
  • 9. The method of claim 8, wherein determining the capacity of the network path by the communications scheduler module includes determining a bandwidth estimate for the network path.
  • 10. The method of claim 8, wherein the at least one data packet for the data flow is transmitted by the communications interface module via the selected network path at a rate based on a bandwidth estimate for the capacity of the selected network path.
  • 11. The method of claim 8, further comprising determining by the communications scheduler module whether a data flow has information to send prior to selecting the data flow for the selected network path.
  • 12. The method of claim 8, further comprising determining respective priorities of data flows by the communications scheduler module, wherein the data flow is selected based on the priorities and the capacity of the selected network path.
  • 13. The method of claim 8, wherein the data flow is selected based on a fairness policy.
  • 14. The method of claim 8, wherein the data flow is selected based on at least one of an application that originated the data flow, a user of the application, a number of data flows previously sent from the application or the user, and a number of packets previously sent from that data flow.
  • 15. A non-transitory computer readable storage medium having a program embodied thereon, the program executable by a processor to perform a method for providing network communications, the method comprising: executing a TCP/IP stack module stored in memory to: receive a plurality of original application data packets from a data flow, inform a communications scheduler module that the data flow has data to be transmitted over a network, and generate at least one data packet for the data flow from the plurality of original application data packets to be transmitted over the network when directed by the communications scheduler module;executing the communications scheduler module stored in memory to: determine a capacity of a network path, select a suitable network path, select a data flow for the selected network path, and direct the TCP/IP stack module to generate the at least one data packet for the selected data flow from the application data in the selected data flow for immediate transmission of the selected data flow over the network, overriding standard TCP flow control; andexecuting a communications interface module stored in memory to transmit the at least one data packet for the selected data flow via the selected network path at an optimal transmission rate, regardless of TCP flow control status.
  • 16. The computer readable storage medium of claim 15, wherein executing the communications scheduler module to determine the capacity of the network path includes determining a bandwidth estimate.
  • 17. The computer readable storage medium of claim 15, wherein executing the communications interface module transmits the at least one data packet for the data flow via the selected network path at a rate based on a bandwidth estimate for the capacity of the selected network path.
  • 18. The computer readable storage medium of claim 15, wherein the method further comprises executing the communications scheduler module to determine whether a data flow has information to send prior to the selection of the data flow for the selected network path.
  • 19. The computer readable storage medium of claim 15, wherein the method further comprises executing the communications scheduler module to determine respective priorities of data flows, wherein the data flow is selected based on the priorities and the capacity of the selected network path.
  • 20. The computer readable storage medium of claim 15, wherein the data flow is selected based on a fairness policy.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 14/477,804 filed on Sep. 4, 2014, now issued as U.S. Pat. No. 9,584,403 issued on Feb. 28, 2017, which in turn is a continuation of U.S. patent application Ser. No. 11/498,491 filed Aug. 2, 2006, now issued as U.S. Pat. No. 8,885,632 issued on Nov. 11, 2014. Each of the above disclosures is incorporated by reference in its entirety.

US Referenced Citations (470)
Number Name Date Kind
4494108 Langdon, Jr. et al. Jan 1985 A
4558302 Welch Dec 1985 A
4612532 Bacon et al. Sep 1986 A
5023611 Chamzas et al. Jun 1991 A
5159452 Kinoshita et al. Oct 1992 A
5243341 Seroussi et al. Sep 1993 A
5271847 Chen et al. Dec 1993 A
5307413 Denzer Apr 1994 A
5357250 Healey et al. Oct 1994 A
5359720 Tamura et al. Oct 1994 A
5373290 Lempel et al. Dec 1994 A
5483556 Pillan et al. Jan 1996 A
5532693 Winters et al. Jul 1996 A
5592613 Miyazawa et al. Jan 1997 A
5602831 Gaskill Feb 1997 A
5608540 Ogawa Mar 1997 A
5611049 Pitts Mar 1997 A
5627533 Clark May 1997 A
5635932 Shinagawa et al. Jun 1997 A
5652581 Furlan et al. Jul 1997 A
5659737 Matsuda Aug 1997 A
5675587 Okuyama et al. Oct 1997 A
5710562 Gormish et al. Jan 1998 A
5748122 Shinagawa et al. May 1998 A
5754774 Bittinger et al. May 1998 A
5802106 Packer Sep 1998 A
5805822 Long et al. Sep 1998 A
5883891 Williams et al. Mar 1999 A
5903230 Masenas May 1999 A
5955976 Heath Sep 1999 A
6000053 Levine et al. Dec 1999 A
6003087 Housel, III et al. Dec 1999 A
6054943 Lawrence Apr 2000 A
6081883 Popelka et al. Jun 2000 A
6084855 Soirinsuo et al. Jul 2000 A
6175944 Urbanke et al. Jan 2001 B1
6191710 Waletzki Feb 2001 B1
6240463 Benmohamed May 2001 B1
6295541 Bodnar et al. Sep 2001 B1
6308148 Bruins et al. Oct 2001 B1
6311260 Stone et al. Oct 2001 B1
6339616 Kovalev Jan 2002 B1
6374266 Shnelvar Apr 2002 B1
6434191 Agrawal et al. Aug 2002 B1
6434641 Haupt et al. Aug 2002 B1
6434662 Greene et al. Aug 2002 B1
6438664 McGrath et al. Aug 2002 B1
6452915 Jorgensen Sep 2002 B1
6463001 Williams Oct 2002 B1
6489902 Heath Dec 2002 B2
6493698 Beylin Dec 2002 B1
6570511 Cooper May 2003 B1
6587985 Fukushima et al. Jul 2003 B1
6614368 Cooper Sep 2003 B1
6618397 Huang Sep 2003 B1
6633953 Stark Oct 2003 B2
6643259 Borella et al. Nov 2003 B1
6650644 Colley et al. Nov 2003 B1
6653954 Rijavec Nov 2003 B2
6667700 McCanne et al. Dec 2003 B1
6674769 Viswanath Jan 2004 B1
6718361 Basani et al. Apr 2004 B1
6728840 Shatil et al. Apr 2004 B1
6738379 Balazinski et al. May 2004 B1
6754181 Elliott et al. Jun 2004 B1
6769048 Goldberg et al. Jul 2004 B2
6791945 Levenson et al. Sep 2004 B1
6842424 Key Jan 2005 B1
6856651 Singh Feb 2005 B2
6859842 Nakamichi et al. Feb 2005 B1
6862602 Guha Mar 2005 B2
6910106 Sechrest et al. Jun 2005 B2
6963980 Mattsson Nov 2005 B1
6968374 Lemieux et al. Nov 2005 B2
6978384 Milliken Dec 2005 B1
7007044 Rafert et al. Feb 2006 B1
7020750 Thiyagaranjan et al. Mar 2006 B2
7035214 Seddigh et al. Apr 2006 B1
7047281 Kausik May 2006 B1
7069268 Burns et al. Jun 2006 B1
7069342 Biederman Jun 2006 B1
7110407 Khanna Sep 2006 B1
7111005 Wessman Sep 2006 B1
7113962 Kee et al. Sep 2006 B1
7120666 McCanne et al. Oct 2006 B2
7145889 Zhang et al. Dec 2006 B1
7177295 Sholander et al. Feb 2007 B1
7197597 Scheid et al. Mar 2007 B1
7200847 Straube et al. Apr 2007 B2
7215667 Davis May 2007 B1
7242681 Van Bokkelen et al. Jul 2007 B1
7243094 Tabellion et al. Jul 2007 B2
7266645 Garg et al. Sep 2007 B2
7278016 Detrick et al. Oct 2007 B1
7318100 Demmer et al. Jan 2008 B2
7366829 Luttrell et al. Apr 2008 B1
7380006 Srinivas et al. May 2008 B2
7383329 Erickson Jun 2008 B2
7383348 Seki et al. Jun 2008 B2
7388844 Brown et al. Jun 2008 B1
7389357 Duffie, III et al. Jun 2008 B2
7389393 Karr et al. Jun 2008 B1
7417570 Srinivasan et al. Aug 2008 B2
7417991 Crawford et al. Aug 2008 B1
7420992 Fang et al. Sep 2008 B1
7428573 McCanne et al. Sep 2008 B2
7451237 Takekawa et al. Nov 2008 B2
7453379 Plamondon Nov 2008 B2
7454443 Ram et al. Nov 2008 B2
7457315 Smith Nov 2008 B1
7460473 Kodama et al. Dec 2008 B1
7471629 Melpignano Dec 2008 B2
7532134 Samuels et al. May 2009 B2
7555484 Kulkarni et al. Jun 2009 B2
7571343 Xiang et al. Aug 2009 B1
7571344 Hughes et al. Aug 2009 B2
7587401 Yeo et al. Sep 2009 B2
7596802 Border et al. Sep 2009 B2
7619545 Samuels et al. Nov 2009 B2
7620870 Srinivasan et al. Nov 2009 B2
7624333 Langner Nov 2009 B2
7624446 Wilhelm Nov 2009 B1
7630295 Hughes et al. Dec 2009 B2
7633942 Bearden et al. Dec 2009 B2
7639700 Nabhan et al. Dec 2009 B1
7643426 Lee et al. Jan 2010 B1
7644230 Hughes et al. Jan 2010 B1
7676554 Malmskog et al. Mar 2010 B1
7698431 Hughes Apr 2010 B1
7702843 Chen et al. Apr 2010 B1
7714747 Fallon May 2010 B2
7746781 Xiang Jun 2010 B1
7764606 Ferguson et al. Jul 2010 B1
7810155 Ravi Oct 2010 B1
7827237 Plamondon Nov 2010 B2
7849134 McCanne et al. Dec 2010 B2
7853699 Wu et al. Dec 2010 B2
7873786 Singh et al. Jan 2011 B1
7917599 Gopalan et al. Mar 2011 B1
7925711 Gopalan et al. Apr 2011 B1
7941606 Pullela et al. May 2011 B1
7945736 Hughes et al. May 2011 B2
7948921 Hughes et al. May 2011 B1
7953869 Demmer et al. May 2011 B2
7957307 Qiu et al. Jun 2011 B2
7970898 Clubb et al. Jun 2011 B2
7975018 Unrau et al. Jul 2011 B2
8069225 McCanne et al. Nov 2011 B2
8072985 Golan et al. Dec 2011 B2
8090027 Schneider Jan 2012 B2
8095774 Hughes et al. Jan 2012 B1
8140757 Singh et al. Mar 2012 B1
8171238 Hughes et al. May 2012 B1
8209334 Doerner Jun 2012 B1
8225072 Hughes et al. Jul 2012 B2
8271325 Silverman et al. Sep 2012 B2
8307115 Hughes Nov 2012 B1
8312226 Hughes Nov 2012 B2
8352608 Keagy et al. Jan 2013 B1
8370583 Hughes Feb 2013 B2
8386797 Danilak Feb 2013 B1
8392684 Hughes Mar 2013 B2
8442052 Hughes May 2013 B1
8447740 Huang et al. May 2013 B1
8473714 Hughes et al. Jun 2013 B2
8489562 Hughes et al. Jul 2013 B1
8516158 Wu et al. Aug 2013 B1
8565118 Shukla et al. Oct 2013 B2
8576816 Lamy-Bergot et al. Nov 2013 B2
8595314 Hughes Nov 2013 B1
8613071 Day et al. Dec 2013 B2
8681614 McCanne et al. Mar 2014 B1
8699490 Zheng et al. Apr 2014 B2
8700771 Ramankutty et al. Apr 2014 B1
8706947 Vincent Apr 2014 B1
8725988 Hughes et al. May 2014 B2
8732423 Hughes May 2014 B1
8738865 Hughes et al. May 2014 B1
8743683 Hughes Jun 2014 B1
8755381 Hughes et al. Jun 2014 B2
8775413 Brown et al. Jul 2014 B2
8811431 Hughes Aug 2014 B2
8850324 Clemm et al. Sep 2014 B2
8885632 Hughes et al. Nov 2014 B2
8891554 Biehler Nov 2014 B2
8929380 Hughes et al. Jan 2015 B1
8929402 Hughes Jan 2015 B1
8930650 Hughes et al. Jan 2015 B1
9003541 Patidar Apr 2015 B1
9036662 Hughes May 2015 B1
9054876 Yagnik Jun 2015 B1
9092342 Hughes et al. Jul 2015 B2
9130991 Hughes Sep 2015 B2
9131510 Wang Sep 2015 B2
9143455 Hughes Sep 2015 B1
9152574 Hughes et al. Oct 2015 B2
9171251 Camp et al. Oct 2015 B2
9191342 Hughes et al. Nov 2015 B2
9202304 Baenziger et al. Dec 2015 B1
9253277 Hughes et al. Feb 2016 B2
9306818 Aumann et al. Apr 2016 B2
9307442 Bachmann et al. Apr 2016 B2
9363309 Hughes Jun 2016 B2
9397951 Hughes Jul 2016 B1
9438538 Hughes et al. Sep 2016 B2
9549048 Hughes Jan 2017 B1
9584403 Hughes et al. Feb 2017 B2
9584414 Sung et al. Feb 2017 B2
9613071 Hughes Apr 2017 B1
9626224 Hughes et al. Apr 2017 B2
9647949 Varki et al. May 2017 B2
9712463 Hughes et al. Jul 2017 B1
9717021 Hughes et al. Jul 2017 B2
9875344 Hughes et al. Jan 2018 B1
9906630 Hughes Feb 2018 B2
20010026231 Satoh Oct 2001 A1
20010054084 Kosmynin Dec 2001 A1
20020007413 Garcia-Luna-Aceves et al. Jan 2002 A1
20020010702 Ajtai et al. Jan 2002 A1
20020010765 Border Jan 2002 A1
20020040475 Yap et al. Apr 2002 A1
20020061027 Abiru et al. May 2002 A1
20020065998 Buckland May 2002 A1
20020071436 Border et al. Jun 2002 A1
20020078242 Viswanath Jun 2002 A1
20020101822 Ayyagari et al. Aug 2002 A1
20020107988 Jordan Aug 2002 A1
20020116424 Radermacher et al. Aug 2002 A1
20020129158 Zhang et al. Sep 2002 A1
20020129260 Benfield et al. Sep 2002 A1
20020131434 Vukovic et al. Sep 2002 A1
20020150041 Reinshmidt et al. Oct 2002 A1
20020163911 Wee et al. Nov 2002 A1
20020169818 Stewart et al. Nov 2002 A1
20020181494 Rhee Dec 2002 A1
20020188871 Noehring et al. Dec 2002 A1
20020194324 Guha Dec 2002 A1
20030002664 Anand Jan 2003 A1
20030009558 Ben-Yehezkel Jan 2003 A1
20030012400 McAuliffe et al. Jan 2003 A1
20030046572 Newman et al. Mar 2003 A1
20030048750 Kobayashi Mar 2003 A1
20030123481 Neale et al. Jul 2003 A1
20030123671 He et al. Jul 2003 A1
20030131079 Neale et al. Jul 2003 A1
20030133568 Stein et al. Jul 2003 A1
20030142658 Ofuji et al. Jul 2003 A1
20030149661 Mitchell et al. Aug 2003 A1
20030149869 Gleichauf Aug 2003 A1
20030204619 Bays Oct 2003 A1
20030214502 Park et al. Nov 2003 A1
20030214954 Oldak et al. Nov 2003 A1
20030233431 Reddy et al. Dec 2003 A1
20040008711 Lahti et al. Jan 2004 A1
20040047308 Kavanagh et al. Mar 2004 A1
20040083299 Dietz et al. Apr 2004 A1
20040086114 Rarick May 2004 A1
20040088376 McCanne et al. May 2004 A1
20040114569 Naden et al. Jun 2004 A1
20040117571 Chang et al. Jun 2004 A1
20040123139 Aiello et al. Jun 2004 A1
20040158644 Albuquerque et al. Aug 2004 A1
20040179542 Murakami et al. Sep 2004 A1
20040181679 Dettinger et al. Sep 2004 A1
20040199771 Morten et al. Oct 2004 A1
20040202110 Kim Oct 2004 A1
20040203820 Billhartz Oct 2004 A1
20040205332 Bouchard et al. Oct 2004 A1
20040243571 Judd Dec 2004 A1
20040250027 Heflinger Dec 2004 A1
20040255048 Lev Ran et al. Dec 2004 A1
20050010653 McCanne Jan 2005 A1
20050044270 Grove et al. Feb 2005 A1
20050053094 Cain et al. Mar 2005 A1
20050055372 Springer, Jr. et al. Mar 2005 A1
20050055399 Savchuk Mar 2005 A1
20050071453 Ellis et al. Mar 2005 A1
20050091234 Hsu et al. Apr 2005 A1
20050111460 Sahita May 2005 A1
20050131939 Douglis et al. Jun 2005 A1
20050132252 Fifer et al. Jun 2005 A1
20050141425 Foulds Jun 2005 A1
20050171937 Hughes et al. Aug 2005 A1
20050177603 Shavit Aug 2005 A1
20050182849 Chandrayana Aug 2005 A1
20050190694 Ben-Nun et al. Sep 2005 A1
20050207443 Kawamura et al. Sep 2005 A1
20050210151 Abdo et al. Sep 2005 A1
20050220019 Melpignano Oct 2005 A1
20050220097 Swami et al. Oct 2005 A1
20050235119 Sechrest et al. Oct 2005 A1
20050240380 Jones Oct 2005 A1
20050243743 Kimura Nov 2005 A1
20050243835 Sharma et al. Nov 2005 A1
20050256972 Cochran et al. Nov 2005 A1
20050278459 Boucher et al. Dec 2005 A1
20050283355 Itani et al. Dec 2005 A1
20050286526 Sood et al. Dec 2005 A1
20060013210 Bordogna et al. Jan 2006 A1
20060026425 Douceur et al. Feb 2006 A1
20060031936 Nelson et al. Feb 2006 A1
20060036901 Yang et al. Feb 2006 A1
20060039354 Rao et al. Feb 2006 A1
20060045096 Farmer et al. Mar 2006 A1
20060059171 Borthakur et al. Mar 2006 A1
20060059173 Hirsch et al. Mar 2006 A1
20060117385 Mester et al. Jun 2006 A1
20060136913 Sameske Jun 2006 A1
20060143497 Zohar et al. Jun 2006 A1
20060195547 Sundarrajan et al. Aug 2006 A1
20060195840 Sundarrajan et al. Aug 2006 A1
20060212426 Shakara et al. Sep 2006 A1
20060218390 Loughran et al. Sep 2006 A1
20060227717 van den Berg et al. Oct 2006 A1
20060250965 Irwin Nov 2006 A1
20060268932 Singh et al. Nov 2006 A1
20060280205 Cho Dec 2006 A1
20070002804 Xiang et al. Jan 2007 A1
20070008884 Tang Jan 2007 A1
20070011424 Sharma et al. Jan 2007 A1
20070038815 Hughes Feb 2007 A1
20070038816 Hughes et al. Feb 2007 A1
20070038858 Hughes Feb 2007 A1
20070050475 Hughes Mar 2007 A1
20070076693 Krishnaswamy Apr 2007 A1
20070081513 Torsner Apr 2007 A1
20070097874 Hughes et al. May 2007 A1
20070110046 Farrell et al. May 2007 A1
20070115812 Hughes May 2007 A1
20070127372 Khan et al. Jun 2007 A1
20070130114 Li et al. Jun 2007 A1
20070140129 Bauer et al. Jun 2007 A1
20070150497 De La Cruz et al. Jun 2007 A1
20070174428 Lev Ran et al. Jul 2007 A1
20070179900 Daase et al. Aug 2007 A1
20070195702 Yuen et al. Aug 2007 A1
20070195789 Yao Aug 2007 A1
20070198523 Hayim Aug 2007 A1
20070226320 Hager et al. Sep 2007 A1
20070237104 Alon et al. Oct 2007 A1
20070244987 Pedersen et al. Oct 2007 A1
20070245079 Bhattacharjee et al. Oct 2007 A1
20070248084 Whitehead Oct 2007 A1
20070258468 Bennett Nov 2007 A1
20070263554 Finn Nov 2007 A1
20070276983 Zohar et al. Nov 2007 A1
20070280245 Rosberg Dec 2007 A1
20080005156 Edwards et al. Jan 2008 A1
20080013532 Garner et al. Jan 2008 A1
20080016301 Chen Jan 2008 A1
20080028467 Kommareddy et al. Jan 2008 A1
20080031149 Hughes et al. Feb 2008 A1
20080031240 Hughes et al. Feb 2008 A1
20080071818 Apanowicz et al. Mar 2008 A1
20080095060 Yao Apr 2008 A1
20080133536 Bjorner et al. Jun 2008 A1
20080133561 Dubnicki et al. Jun 2008 A1
20080184081 Hama et al. Jul 2008 A1
20080205445 Kumar et al. Aug 2008 A1
20080222044 Gottlieb et al. Sep 2008 A1
20080229137 Samuels et al. Sep 2008 A1
20080243992 Jardetzky et al. Oct 2008 A1
20080267217 Colville et al. Oct 2008 A1
20080300887 Chen et al. Dec 2008 A1
20080313318 Vermeulen et al. Dec 2008 A1
20080320151 McCanne et al. Dec 2008 A1
20090006801 Shultz et al. Jan 2009 A1
20090024763 Stepin et al. Jan 2009 A1
20090037448 Thomas Feb 2009 A1
20090060198 Little Mar 2009 A1
20090063696 Wang et al. Mar 2009 A1
20090080460 Kronewitter et al. Mar 2009 A1
20090089048 Pouzin Apr 2009 A1
20090092137 Haigh et al. Apr 2009 A1
20090100483 McDowell Apr 2009 A1
20090158417 Khanna et al. Jun 2009 A1
20090175172 Prytz et al. Jul 2009 A1
20090204961 DeHaan et al. Aug 2009 A1
20090234966 Samuels et al. Sep 2009 A1
20090245114 Vijayaraghavan Oct 2009 A1
20090265707 Goodman et al. Oct 2009 A1
20090274294 Itani Nov 2009 A1
20090279550 Romrell et al. Nov 2009 A1
20090281984 Black Nov 2009 A1
20100005222 Brant et al. Jan 2010 A1
20100011125 Yang et al. Jan 2010 A1
20100020693 Thakur Jan 2010 A1
20100054142 Moiso et al. Mar 2010 A1
20100070605 Hughes et al. Mar 2010 A1
20100077251 Liu et al. Mar 2010 A1
20100082545 Bhattacharjee et al. Apr 2010 A1
20100085964 Weir et al. Apr 2010 A1
20100115137 Kim et al. May 2010 A1
20100121957 Roy et al. May 2010 A1
20100124239 Hughes May 2010 A1
20100131957 Kami May 2010 A1
20100169467 Shukla et al. Jul 2010 A1
20100177663 Johansson et al. Jul 2010 A1
20100225658 Coleman Sep 2010 A1
20100232443 Pandey Sep 2010 A1
20100242106 Harris et al. Sep 2010 A1
20100246584 Ferguson et al. Sep 2010 A1
20100290364 Black Nov 2010 A1
20100318892 Teevan et al. Dec 2010 A1
20100333212 Carpenter et al. Dec 2010 A1
20110002346 Wu Jan 2011 A1
20110022812 van der Linden et al. Jan 2011 A1
20110113472 Fung et al. May 2011 A1
20110154169 Gopal et al. Jun 2011 A1
20110154329 Arcese et al. Jun 2011 A1
20110181448 Koratagere Jul 2011 A1
20110219181 Hughes et al. Sep 2011 A1
20110225322 Demidov et al. Sep 2011 A1
20110258049 Ramer et al. Oct 2011 A1
20110261828 Smith Oct 2011 A1
20110276963 Wu et al. Nov 2011 A1
20110299537 Saraiya et al. Dec 2011 A1
20120036325 Mashtizadeh et al. Feb 2012 A1
20120069131 Abelow Mar 2012 A1
20120147894 Mulligan et al. Jun 2012 A1
20120173759 Agarwal et al. Jul 2012 A1
20120218130 Boettcher et al. Aug 2012 A1
20120221611 Watanabe et al. Aug 2012 A1
20120230345 Ovsiannikov Sep 2012 A1
20120239872 Hughes et al. Sep 2012 A1
20130018722 Libby Jan 2013 A1
20130018765 Fork et al. Jan 2013 A1
20130031642 Dwivedi et al. Jan 2013 A1
20130044751 Casado et al. Feb 2013 A1
20130058354 Casado et al. Mar 2013 A1
20130080619 Assuncao et al. Mar 2013 A1
20130086236 Baucke et al. Apr 2013 A1
20130094501 Hughes Apr 2013 A1
20130103655 Fanghaenel et al. Apr 2013 A1
20130117494 Hughes et al. May 2013 A1
20130121209 Padmanabhan et al. May 2013 A1
20130141259 Hazarika et al. Jun 2013 A1
20130163594 Sharma et al. Jun 2013 A1
20130250951 Koganti Sep 2013 A1
20130263125 Shamsee et al. Oct 2013 A1
20130282970 Hughes et al. Oct 2013 A1
20130343191 Kim et al. Dec 2013 A1
20140052864 Van Der Linden et al. Feb 2014 A1
20140075554 Cooley Mar 2014 A1
20140101426 Senthurpandi Apr 2014 A1
20140108360 Kunath et al. Apr 2014 A1
20140114742 Lamontagne et al. Apr 2014 A1
20140123213 Vank et al. May 2014 A1
20140181381 Hughes et al. Jun 2014 A1
20140269705 DeCusatis et al. Sep 2014 A1
20140279078 Nukala et al. Sep 2014 A1
20140379937 Hughes et al. Dec 2014 A1
20150074291 Hughes Mar 2015 A1
20150074361 Hughes et al. Mar 2015 A1
20150078397 Hughes et al. Mar 2015 A1
20150120663 Le Scouarnec et al. Apr 2015 A1
20150143505 Border May 2015 A1
20150170221 Shah Jun 2015 A1
20150281099 Banavalikar Oct 2015 A1
20150281391 Hughes et al. Oct 2015 A1
20150334210 Hughes Nov 2015 A1
20160014051 Hughes et al. Jan 2016 A1
20160034305 Shear et al. Feb 2016 A1
20160093193 Silvers et al. Mar 2016 A1
20160218947 Hughes et al. Jul 2016 A1
20160255542 Hughes et al. Sep 2016 A1
20160380886 Blair et al. Dec 2016 A1
20170111692 An et al. Apr 2017 A1
20170187581 Hughes et al. Jun 2017 A1
20170359238 Hughes et al. Dec 2017 A1
Foreign Referenced Citations (3)
Number Date Country
1507353 Feb 2005 EP
H05-061964 Mar 1993 JP
WO0135226 May 2001 WO
Non-Patent Literature Citations (23)
Entry
“IPsec Anti-Replay Window: Expanding and Disabling,” Cisco IOS Security Configuration Guide. 2005-2006 Cisco Systems, Inc. Last updated: Sep. 12, 2006, 14 pages.
Singh et al., "Future of Internet Security—IPSEC", 2005, pp. 1-8.
Muthitacharoen, Athicha et al., “A Low-bandwidth Network File System,” 2001, in Proc. of the 18th ACM Symposium on Operating Systems Principles, Banff, Canada, pp. 174-187.
“Shared LAN Cache Datasheet”, 1996, <http://www.lancache.com/slcdata.htm>, 8 pages.
Spring et al., “A protocol-independent technique for eliminating redundant network traffic”, ACM SIGCOMM Computer Communication Review, vol. 30, Issue 4 (Oct. 2000) pp. 87-95, Year of Publication: 2000.
Hong, B. et al., "Duplicate data elimination in a SAN file system", In Proceedings of the 21st Symposium on Mass Storage Systems (MSS '04), Goddard, MD, Apr. 2004. IEEE, pp. 101-114.
You, L. L. and Karamanolis, C. 2004. “Evaluation of efficient archival storage techniques”, In Proceedings of the 21st IEEE Symposium on Mass Storage Systems and Technologies (MSST), pp. 1-6.
Douglis, F. et al., "Application-specific Delta-encoding via Resemblance Detection", Published in the 2003 USENIX Annual Technical Conference, pp. 1-14.
You, L. L. et al., "Deep Store: An Archival Storage System Architecture", Data Engineering, 2005. ICDE 2005. Proceedings of the 21st Intl. Conf. on Data Eng., Tokyo, Japan, Apr. 5-8, 2005, 12 pages.
Manber, Udi, "Finding Similar Files in a Large File System", TR 93-33, Oct. 1994, Department of Computer Science, University of Arizona. <http://webglimpse.net/pubs/TR93-33.pdf>. Also appears in the 1994 Winter USENIX Technical Conference.
Knutsson, Bjorn et al., “Transparent Proxy Signalling”, Journal of Communications and Networks, vol. 3, No. 2, Jun. 2001, pp. 164-174.
Definition memory (n), Webster's Third New International Dictionary, Unabridged (1993), available at <http://lionreference.chadwyck.com> (Dictionaries/Webster's Dictionary). Copy not provided in IPR2013-00402 proceedings.
Definition appliance, 2c, Webster's Third New International Dictionary, Unabridged (1993), available at <http://lionreference.chadwyck.com> (Dictionaries/Webster's Dictionary). Copy not provided in IPR2013-00402 proceedings.
Newton, “Newton's Telecom Dictionary”, 17th Ed., 2001, pp. 38, 201, and 714.
Silver Peak Systems, “The Benefits of Byte-level WAN Deduplication” (2008), pp. 1-5.
Business Wire, "Silver Peak Systems Delivers Family of Appliances for Enterprise-Wide Centralization of Branch Office Infrastructure; Innovative Local Instance Networking Approach Overcomes Traditional Application Acceleration Pitfalls" (available at http://www.businesswire.com/news/home/20050919005450/en/Silver-Peak-Systems-Delivers-Family-Appliances-Enterprise-Wide#.UVzkPk7u-1 (last visited Aug. 8, 2014)).
Riverbed, “Riverbed Introduces Market-Leading WDS Solutions for Disaster Recovery and Business Application Acceleration” (available at http://www.riverbed.com/about/news-articles/pressreleases/riverbed-introduces-market-leading-wds-solutions-fordisaster-recovery-and-business-application-acceleration.html (last visited Aug. 8, 2014)), 4 pages.
Tseng, Josh, “When accelerating secure traffic is not secure” (available at http://www.riverbed.com/blogs/whenaccelerati.html?&isSearch=true&pageSize=3&page=2 (last visited Aug. 8, 2014)), 3 pages.
Riverbed, “The Riverbed Optimization System (RiOS) v4.0: A Technical Overview” (explaining “Data Security” through segmentation) (available at http://mediacms.riverbed.com/documents/TechOverview-Riverbed-RiOS_4_0.pdf (last visited Aug. 8, 2014)), pp. 1-18.
Riverbed, “Riverbed Awarded Patent on Core WDS Technology” (available at: http://www.riverbed.com/about/news-articles/pressreleases/riverbed-awarded-patent-on-core-wds-technology.html (last visited Aug. 8, 2014)), 2 pages.
Final Written Decision, Dec. 30, 2014, Inter Partes Review Case No. IPR2013-00403, pp. 1-38.
Final Written Decision, Dec. 30, 2014, Inter Partes Review Case No. IPR2013-00402, pp. 1-37.
Final Written Decision, Jun. 9, 2015, Inter Partes Review Case No. IPR2014-00245, pp. 1-40.
Related Publications (1)
Number Date Country
20170149679 A1 May 2017 US
Continuations (2)
Number Date Country
Parent 14477804 Sep 2014 US
Child 15403116 US
Parent 11498491 Aug 2006 US
Child 14477804 US