Allocating resources in multi-core computing environments

Information

  • Patent Grant
  • 10020979
  • Patent Number
    10,020,979
  • Date Filed
    Tuesday, March 25, 2014
  • Date Issued
    Tuesday, July 10, 2018
Abstract
Provided are methods and systems for allocating resources in a multi-core computing environment. The method comprises selecting, by one or more processors, at least one dedicated core for execution of a resource allocation algorithm. After selection of the dedicated core, the dedicated core allocates, based on the resource allocation algorithm, a network resource to a client. Furthermore, the dedicated core assigns the network resource to network packets associated with the client for processing by data cores. After the assigning of the network resource, the data cores process the network packets according to the allocated network resource.
Description
TECHNICAL FIELD

This disclosure relates generally to data processing and, more particularly, to allocation of resources in multi-core computing environments.


BACKGROUND

The approaches described in this section could be pursued but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.


Guaranteeing quality of service (QoS) in computer networking depends on the ability to assign different priorities to different data flows, users, or applications, or, in other words, to guarantee a certain level of performance to a data flow. Generally, QoS depends on bandwidth, delay, jitter, packet dropping probability, and/or bit error rate. QoS guarantees are important in networks where capacity is a limited resource and are especially important for real-time or near-real-time applications, since these applications often require fixed bandwidth and are delay-sensitive.


When multiple users and applications share the same uplink or downlink to transmit network packets, QoS mechanisms are needed to guarantee the priority of user and application traffic, shape the traffic as configured, and share the bandwidth efficiently. In a multi-core system, user and application packets can be processed and transmitted by different processing cores. Typically, QoS decisions are also made and coordinated by different cores. QoS algorithms can be very complex as, for example, is the case with hierarchical QoS algorithms. To guarantee the consistency of the traffic information and the QoS algorithm, only one core can access and execute the QoS algorithm at a time. Typically, locks are used to prevent different cores from executing the same logic at the same time.


SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


The present disclosure is related to approaches for allocating resources in a multi-core computing environment. Specifically, a method for allocating resources in a multi-core computing environment comprises selecting, by one or more processors, at least one dedicated core for execution of a resource allocation algorithm. After selection of the dedicated core, the dedicated core allocates, based on the resource allocation algorithm, a network resource to a client. Furthermore, the dedicated core assigns the network resource to network packets associated with the client for processing by data cores. After the network resource is assigned, the data cores process the network packets according to the allocated network resource.


According to another approach of the present disclosure, there is provided a system for allocating resources in a multi-core computing environment. The system comprises a processor. The processor of the system is operable to select at least one dedicated core for execution of a resource allocation algorithm. The system further comprises a dedicated core. The dedicated core is operable to allocate, based on the resource allocation algorithm, a network resource to a client. Furthermore, the dedicated core is operable to assign the network resource to network packets associated with the client for processing by one or more data cores. The system further comprises data cores operable to process the network packets according to the allocated network resource.


In further example embodiments of the present disclosure, the method steps are stored on a machine-readable medium comprising instructions, which, when implemented by one or more processors, perform the recited steps. In yet further example embodiments, hardware systems or devices can be adapted to perform the recited steps. Other features, examples, and embodiments are described below.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are illustrated by way of example, and not by limitation, in the figures of the accompanying drawings, in which like references indicate similar elements.



FIG. 1 shows an environment within which methods and systems for allocating resources in a multi-core computing environment are implemented.



FIG. 2 is a process flow diagram showing a method for allocating resources in a multi-core computing environment.



FIG. 3 is a block diagram showing various modules of a system for allocating resources in a multi-core computing environment.



FIGS. 4A, 4B, and 4C are block diagrams showing allocation of network resources.



FIG. 5 is a flow chart illustrating allocation of a network resource.



FIG. 6 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.





DETAILED DESCRIPTION

The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.


The techniques disclosed herein may be implemented using a variety of technologies. For example, the methods described herein can be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein can be implemented by a series of computer-executable instructions residing on a storage medium such as a disk drive, or computer-readable medium. It should be noted that methods disclosed herein can be implemented by a computer (e.g., a desktop computer, tablet computer, laptop computer), game console, handheld gaming device, cellular phone, smart phone, smart television system, network devices, such as gateways, routers, switches, and so forth.


As outlined in the summary, the embodiments of the present disclosure refer to allocating resources in a multi-core computing environment. Typically, a resource allocation algorithm can be executed by any of the CPU cores. According to the present disclosure, QoS, which can also be referred to as traffic shaping, packet shaping, or bandwidth management, is the manipulation and prioritization of network traffic to reduce the impact of heavy use by some users on other users. Network resource throttling or rate limiting is performed to guarantee QoS via efficient use of a network resource. Instead of having every CPU core execute the QoS algorithm, one or more dedicated CPU cores are used to run the QoS algorithm. According to the QoS algorithm, the dedicated cores allocate a network resource for a client or application. The remaining CPU cores are referred to as data cores. The dedicated cores assign a quantum of the network resource to each data core. The network resource can include a bandwidth, a connection, a data packet, an interface (physical, virtual, or logical), or a combination of network resources. The data cores process and transmit network packets according to the assigned network resource.
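
To make this division of labor concrete, the sketch below models it in Go: goroutines stand in for the dedicated core and the data cores, and channels stand in for inter-core communication. It is a minimal illustration under assumed details, not the implementation described here; the names dataCore, quanta, and report, the shared packet queue, and the fixed 1,000-byte quantum are all assumptions.

```go
// Minimal sketch: a dedicated core hands each data core a quantum (a byte
// budget) once per round; data cores spend the budget on queued packets and
// report the total bytes they have sent. All numbers are illustrative.
package main

import "fmt"

type packet struct{ size int }

// dataCore drains the packet queue while it has budget left, then reports
// its running total of transmitted bytes.
func dataCore(quanta <-chan int, packets <-chan packet, report chan<- int) {
	budget, sent := 0, 0
	for q := range quanta {
		budget += q
	drain:
		for budget > 0 {
			select {
			case p := <-packets:
				budget -= p.size
				sent += p.size
			default:
				break drain // nothing queued right now
			}
		}
		report <- sent
	}
}

func main() {
	const cores = 2
	packets := make(chan packet, 64)
	report := make(chan int, cores)
	quanta := make([]chan int, cores)

	// Enqueue some client packets; a real system would queue them per core.
	for i := 0; i < 40; i++ {
		packets <- packet{size: 100}
	}

	for i := range quanta {
		quanta[i] = make(chan int, 1)
		go dataCore(quanta[i], packets, report)
	}

	// Dedicated core: each round, run the allocation step and hand out the
	// quanta; here the "algorithm" is simply a fixed 1000 bytes per core.
	for round := 0; round < 2; round++ {
		for i := range quanta {
			quanta[i] <- 1000
		}
		for i := 0; i < cores; i++ {
			fmt.Printf("round %d: a data core has sent %d bytes so far\n", round, <-report)
		}
	}
}
```

The point of the structure is that only the single dedicated goroutine touches the allocation state, so no lock around the allocation logic is needed.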


According to the allocation algorithm, a quantum of resource is assigned to the data cores in advance. As referred to herein, the quantum of resource is a portion of an available network resource. The quantum of resource is divided between the data CPU cores based on the total resource available for the client and/or application, the resource used by the client and/or application, the resource waiting to be used, and so forth. In other words, the quantum of resource is based on the packets processed and queued on a core. Using dedicated cores to allocate the network resource and data cores to process packets makes complex coordination between CPU cores unnecessary, thereby improving performance and reducing packet delays. The described method can be used for any kind of resource allocation, as well as for sharing network resources and limiting excessive use.
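
As an illustration only, one way to divide a quantum among data cores in proportion to their queued packets is sketched below in Go; the function name splitQuantum and the strictly proportional policy are assumptions, not the algorithm of this disclosure.

```go
package main

import "fmt"

// splitQuantum divides a total quantum (e.g., bytes per round) among data
// cores in proportion to how many packets each core has queued. If every
// queue is empty, the whole quantum is held back for a later round.
func splitQuantum(total int, queued []int) []int {
	sum := 0
	for _, q := range queued {
		sum += q
	}
	shares := make([]int, len(queued))
	if sum == 0 {
		return shares
	}
	for i, q := range queued {
		shares[i] = total * q / sum
	}
	return shares
}

func main() {
	// Three data cores with 10, 30, and 0 packets waiting.
	fmt.Println(splitQuantum(8000, []int{10, 30, 0})) // prints [2000 6000 0]
}
```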


Referring now to the drawings, FIG. 1 illustrates an environment 100 within which methods and systems for allocating resources in a multi-core computing environment are implemented. The environment 100 includes a network 110, a client 120, a processor 130, cores 135, a dedicated core 140, data cores 150, network packets 160, a quantum of resource 170, a destination server 180, and processed data packets 190. The cores 135 include the data cores 150 and the dedicated core 140.


The network 110 can include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, GPS (Global Positioning System), CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The network 110 can further include or interface with any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking. The network 110 may include a network of data processing nodes that are interconnected for the purpose of data communication.


Communications between the data cores 150 and the dedicated core 140 can be facilitated by any inter-core communications methods, such as, for example, message passing or shared memory. In an example embodiment, the dedicated core 140 and the data cores 150 are located within the same integrated circuit shown in FIG. 1 as cores 135.


The client 120 includes a person or an entity that transmits and/or receives traffic via the network 110. The client 120 can also include one or more of an application, a group of applications, a user, a group of users, a host, a group of hosts, a network, a group of networks, and so forth. The processor 130 is responsible for selection of the dedicated core 140; the processor 130 used for this selection is optional. The dedicated core 140 is one of the cores 135 and is responsible for execution of a resource allocation algorithm.


The client 120 sends network packets 160 to a destination server 180 via the network 110. During transmission from the client 120 to the destination server 180, the network packets 160 are received and processed by the cores 135, specifically by the data cores 150. The dedicated core 140, selected by the optional processor 130, allocates a network resource to the client 120. Furthermore, the dedicated core 140 assigns the network resource to the network packets 160 associated with the client 120 by adding a quantum of resource 170 to the data cores 150. The data cores 150 are responsible for processing the network packets. The processing includes sending, receiving, forwarding, consuming, modifying, holding, queuing, delaying, and so forth. The processed data packets 190 are sent from the data cores 150 to the destination server 180.



FIG. 2 is a process flow diagram showing a method 200 for allocating resources in a multi-core computing environment. The method 200 is performed by processing logic that can comprise hardware (e.g., decision making logic, dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both.


The method 200 commences with selecting at least one dedicated core for execution of a resource allocation algorithm in operation 202. The selection is performed by one or more processors. After selection of the dedicated core, the dedicated core allocates a network resource to a client in operation 204. The allocation is performed based on the resource allocation algorithm. The resource allocation algorithm can include a quality of service algorithm guaranteeing a level of service to the client. In another example embodiment, the resource allocation algorithm is configured to limit resource consumption by the client. Furthermore, the resource allocation algorithm is operable to implement sharing of the network resources between one or more clients.


In various embodiments, the client includes an application, a group of applications, a user, a group of users, a host, a group of hosts, a network, or a group of networks. In further embodiments, the client is associated with at least one type of traffic. The type of traffic is a classification or categorization of the traffic based on certain characteristics of the traffic or content of the traffic. The content of the traffic includes, but is not limited to, a source IP address and a destination IP address, User Datagram Protocol (UDP)/Transmission Control Protocol (TCP) ports, a virtual local area network (VLAN) ID, an application, and so forth, as indicated in protocol headers. The characteristics of the traffic can relate to statistical aspects, such as size, frequency, latency, primary flow direction, connections, and flow geometry, e.g., star (one-to-many).
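
For illustration, the sketch below classifies a flow into a client class using the header fields named above; the flowKey fields, the lookup table, and the client names are hypothetical and stand in for whatever classification a deployment actually uses.

```go
package main

import "fmt"

// flowKey holds header fields usable for classification: source and
// destination IP addresses, UDP/TCP ports, and a VLAN ID.
type flowKey struct {
	srcIP, dstIP     string
	srcPort, dstPort uint16
	vlanID           uint16
}

// classify maps a flow to a client identifier; flows not present in the
// table fall into a default class.
func classify(k flowKey, table map[flowKey]string) string {
	if client, ok := table[k]; ok {
		return client
	}
	return "default"
}

func main() {
	table := map[flowKey]string{
		{srcIP: "10.0.0.5", dstIP: "192.0.2.10", srcPort: 40000, dstPort: 443, vlanID: 100}: "client-a",
	}
	k := flowKey{srcIP: "10.0.0.5", dstIP: "192.0.2.10", srcPort: 40000, dstPort: 443, vlanID: 100}
	fmt.Println(classify(k, table)) // prints client-a
}
```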


After the network resource is allocated to the client, the dedicated core can assign, in operation 206, at least part of the allocated network resource to the network packets associated with the client for processing by one or more data cores. The assigning of the network resource for processing the network packets includes adding a quantum of resource to the one or more data cores. The quantum of resource is a portion of an available network resource.


In an example embodiment, the quantum of resource is divided on a per-data-core basis between the one or more data cores based on a number of the network packets queued on a data core. The quantum of resource is periodically computed and readjusted by the dedicated core. Furthermore, the quantum of resource is allocated based on one or more of a total available amount of a network resource, a consumed amount of the network resource, an amount of an allocated network resource, an amount of an available remaining network resource, a total amount of a network resource waiting to be allocated (i.e., the packets waiting to be processed), an amount of a network resource waiting to be processed per core, and so forth.
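
A sketch of how those quantities could enter the periodic computation is given below; the helper nextQuanta, the rule that a core is never handed more than it has waiting, and the example numbers are assumptions that merely extend the proportional split sketched earlier.

```go
package main

import "fmt"

// nextQuanta computes the quantum handed to each data core for the next
// round from a total resource available to the client, the amount already
// allocated but not yet consumed, and the per-core backlog.
func nextQuanta(totalAvailable, outstanding int, waitingPerCore []int) []int {
	free := totalAvailable - outstanding // resource still free to hand out
	if free < 0 {
		free = 0
	}
	sumWaiting := 0
	for _, w := range waitingPerCore {
		sumWaiting += w
	}
	quanta := make([]int, len(waitingPerCore))
	if sumWaiting == 0 || free == 0 {
		return quanta
	}
	for i, w := range waitingPerCore {
		q := free * w / sumWaiting
		if q > w {
			q = w // never hand a core more than it has waiting
		}
		quanta[i] = q
	}
	return quanta
}

func main() {
	// 10 KB per round for the client, 2 KB already allocated and unconsumed,
	// and three data cores with 6 KB, 2 KB, and 0 bytes queued.
	fmt.Println(nextQuanta(10000, 2000, []int{6000, 2000, 0})) // prints [6000 2000 0]
}
```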


In operation 208, after the network resource is assigned, at least in part, to the network packets associated with the client, the data cores process the network packets according to the allocated network resource and a processing time limit. More specifically, each data core has both an allocated network resource and a time limit for processing the network packets. Processing the network packets includes one or more of sending, receiving, forwarding, consuming, holding, queuing, delaying, modifying the network packets, and so forth.
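
A data core's round can therefore be pictured as a loop bounded by both the byte budget and the time limit. The sketch below shows one such loop; processRound, the packet sizes, and the 5 ms limit are illustrative assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// processRound walks a packet queue until either the byte budget (the
// quantum) or the round's time limit is exhausted, and reports what was
// transmitted and what stayed in the queue.
func processRound(queue []int, budget int, limit time.Duration) (sent, leftover int) {
	deadline := time.Now().Add(limit)
	for _, size := range queue {
		if size > budget || time.Now().After(deadline) {
			leftover += size
			continue
		}
		budget -= size
		sent += size // stand-in for actually transmitting the packet
	}
	return sent, leftover
}

func main() {
	queue := []int{1500, 1500, 1500, 1500}
	sent, leftover := processRound(queue, 4000, 5*time.Millisecond)
	fmt.Println("bytes transmitted:", sent, "bytes in queue:", leftover)
	// prints: bytes transmitted: 3000 bytes in queue: 3000
}
```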



FIG. 3 is a block diagram showing various modules of a system 300 for allocating resources in a multi-core computing environment. Specifically, the system 300 optionally comprises a processor 302. The optional processor 302 may be operable to select at least one dedicated core for execution of a resource allocation algorithm.


Furthermore, the system 300 comprises at least one dedicated core 304. The dedicated core 304 is operable to allocate a network resource to a client. The allocation is performed based on the resource allocation algorithm. The dedicated core 304 allocates the network resource for processing the network packets by adding a quantum of resource to the one or more data cores. The quantum of resource is a portion of an available network resource. In an example embodiment, the quantum of resource is divided on a per-data-core basis between the one or more data cores. The quantum of resource is allocated based on one or more of the total available amount of the network resource, a consumed amount of the network resource, an amount of an allocated network resource, an amount of an available remaining network resource, a total amount of a network resource waiting to be allocated (i.e., the packets waiting to be processed), an amount of a network resource waiting to be processed per core, and so forth.


The dedicated core 304 is also operable to assign, at least in part, the network resource to network packets associated with the client for processing by one or more data cores. Assigning of the network resource for processing of the network packets includes adding a quantum of resource to the one or more data cores.


In an example embodiment, the dedicated core 304 periodically readjusts the quantum of resource and allocates the quantum of resource. The allocation is performed based on the allocated network resource, an amount of an available remaining network resource, an amount of a consumed network resource, an amount of a total network resource waiting to be processed, an amount of a network resource waiting to be processed per core, and so forth.


The system 300 also comprises one or more data cores 306. The data cores 306 are operable to process the network packets. The processing is performed according to the allocated network resource.



FIG. 4A shows a block diagram 401 for allocating a network resource, illustrating an amount of bytes sent within a certain period of time. The network resource is allocated based on the total network resource usage of all data cores. The network resource can include a bandwidth, a connection, a data packet, an interface (physical, virtual, or logical), or a combination of network resources. According to FIG. 4A, during round n, the amount of ‘bytes can be transmitted’ 404 and a part 408 of the quantum 406 of resource were used for transmitting the amount of ‘bytes transmitted’ 402. For the next allocation of the network resource, shown as round n+1, the amount of ‘bytes can be transmitted’ 412 is equal to the amount of ‘bytes transmitted’ 402 plus a new quantum 414 of resource. The new quantum 414 of resource is the sum of the quantum 406 of resource of round n plus the part 410 of the quantum 406 of resource not used in round n.



FIG. 4B shows another block diagram 421 for allocating a network resource, illustrating an amount of bytes sent within a certain period of time. During round n, an amount of ‘bytes transmitted’ 420 was transmitted and a certain amount of bytes, shown as ‘bytes in queue’ 422, was left in the queue, i.e., was not transmitted. The amount of ‘bytes can be transmitted’ 424 and a part 428 of the quantum 426 of resource were used for transmission of the amount of ‘bytes transmitted’ 420 during round n. For the next allocation of the network resource, shown as round n+1, the amount of ‘bytes can be transmitted’ 432 is equal to the amount of ‘bytes transmitted’ 420 plus a new quantum 434 of resource. The new quantum 434 of resource is the sum of the quantum 426 of resource of round n plus the part 430 of the quantum 426 of resource not used in round n.



FIG. 4C shows another block diagram 441 for allocating a network resource, illustrating an amount of bytes sent within a certain period of time. During round n, an amount of ‘bytes transmitted’ 440 was transmitted and a certain amount of bytes, shown as ‘bytes in queue’ 442, was left in the queue, i.e., was not transmitted because round n ran out of time. The amount of ‘bytes can be transmitted’ 444 and a part 448 of the quantum 446 of resource were used for transmission of the amount of ‘bytes transmitted’ 440 during round n. For the next allocation of the network resource, shown as round n+1, the amount of ‘bytes can be transmitted’ 452 is equal to the amount of ‘bytes transmitted’ 450 plus a new quantum 454 of resource. The new quantum 454 of resource is the sum of the quantum 446 of resource of round n plus the amount of ‘bytes in queue’ 442 not used in round n. The amount of ‘bytes in queue’ 442 can be added because the quantum 446 of resource was allocated but not used during round n.
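
The accounting in FIGS. 4A-4C reduces to one carry-over rule: whatever part of round n's budget was allocated but not spent, including bytes that stayed in the queue because the round ran out of time, is folded into the quantum for round n+1. The short worked example below applies that rule with assumed numbers; nextBudget is an illustrative helper, not terminology from the figures.

```go
package main

import "fmt"

// nextBudget applies the per-round accounting of FIGS. 4A-4C: the bytes that
// can be transmitted in round n+1 equal the bytes transmitted in round n plus
// a new quantum, where the new quantum is the per-round quantum plus whatever
// part of round n's budget went unused.
func nextBudget(transmitted, budget, quantum int) (newBudget, newQuantum int) {
	unused := budget - transmitted // allocated in round n but not spent
	if unused < 0 {
		unused = 0
	}
	newQuantum = quantum + unused
	newBudget = transmitted + newQuantum
	return newBudget, newQuantum
}

func main() {
	// Round n: a 10 KB budget and an 8 KB quantum, but only 7 KB transmitted
	// (for example, the rest stayed queued when the round ran out of time).
	newBudget, newQuantum := nextBudget(7000, 10000, 8000)
	fmt.Println("round n+1 quantum:", newQuantum)                 // 11000
	fmt.Println("round n+1 bytes can be transmitted:", newBudget) // 18000
}
```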



FIG. 5 shows a flow chart illustrating allocation of a network resource. The dedicated core 510 runs a resource allocation algorithm, such as a QoS algorithm. The dedicated core 510 allocates the network resource, such as bandwidth, and adds quantums 520 of resource to the data cores 530. Addition of the quantums 520 is performed based on the resource allocation algorithm. The dedicated core 510 collects packet processing statistics 540, such as bytes or packets transmitted and bytes or packets waiting or queued, and uses the collected statistics in the next allocation of the network resource to assign a quantum of resource to each data core.
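
One way to picture the statistics feedback of FIG. 5 is sketched below: each data core reports its transmitted and queued bytes at the end of a round, and the dedicated core folds the reports into the inputs of its next allocation. The packetStats struct and the channel-based reporting are assumptions made for illustration.

```go
package main

import "fmt"

// packetStats is what a data core reports to the dedicated core at the end
// of a round: bytes transmitted and bytes still waiting in its queue.
type packetStats struct {
	core        int
	transmitted int
	queued      int
}

func main() {
	reports := make(chan packetStats, 3)

	// Stand-in for three data cores reporting at the end of a round.
	reports <- packetStats{core: 0, transmitted: 4000, queued: 0}
	reports <- packetStats{core: 1, transmitted: 2500, queued: 1500}
	reports <- packetStats{core: 2, transmitted: 0, queued: 3000}
	close(reports)

	// The dedicated core aggregates the statistics that drive the next
	// allocation: total bytes transmitted and the per-core backlog.
	total := 0
	backlog := map[int]int{}
	for s := range reports {
		total += s.transmitted
		backlog[s.core] = s.queued
	}
	fmt.Println("bytes transmitted this round:", total)
	fmt.Println("per-core backlog for the next allocation:", backlog)
}
```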



FIG. 6 shows a diagrammatic representation of a machine in the example electronic form of a computer system 600, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a PC, a tablet PC, a set-top box (STB), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.


The example computer system 600 includes a processor or multiple processors 602 (e.g., a CPU, a graphics processing unit (GPU), or both), a main memory 604 and a static memory 606, which communicate with each other via a bus 608. Each of the multiple processors 602 includes a multi-core processor. The computer system 600 may further include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 600 may also include an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), a disk drive unit 616, a signal generation device 618 (e.g., a speaker), and a network interface device 620.


The disk drive unit 616 includes a non-transitory computer-readable medium 622, on which is stored one or more sets of instructions and data structures (e.g., instructions 624) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 624 may also reside, completely or at least partially, within the main memory 604 and/or within the processors 602 during execution thereof by the computer system 600. The main memory 604 and the processors 602 may also constitute machine-readable media.


The instructions 624 may further be transmitted or received over a network 626 via the network interface device 620 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)).


In some embodiments, the computer system 600 may be implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computer system 600 may itself include a cloud-based computing environment, where the functionalities of the computer system 600 are executed in a distributed fashion. Thus, the computer system 600, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.


In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners, or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.


The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.


It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. The terms “computer-readable storage medium” and “computer-readable storage media” as used herein refer to any medium or media that participate in providing instructions to a CPU for execution. Such media can take many forms, including, but not limited to, non-volatile media, volatile media and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as a fixed disk. Volatile media include dynamic memory, such as system RAM. Transmission media include coaxial cables, copper wire, and fiber optics, among others, including the wires that comprise one embodiment of a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a PROM, an EPROM, an EEPROM, a FLASHEPROM, any other memory chip or data exchange adapter, a carrier wave, or any other medium from which a computer can read.


Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution. A bus carries the data to system RAM, from which a CPU retrieves and executes the instructions. The instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by a CPU.


Computer program code for carrying out operations for aspects of the present technology may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


The corresponding structures, materials, acts, and equivalents of all means or steps plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.


Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


Thus, methods and systems for allocating resources in a multi-core computing environment have been disclosed. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A method by a first processor core of a plurality of processor cores for allocating resources in a multi-core computing environment, the method comprising: assigning a level of service to each client; receiving a request from a client; determining, by a dedicated core of the plurality of processor cores, the level of service associated with the client; executing, by the dedicated core, a resource allocation algorithm, wherein the executing the resource allocation algorithm includes: collecting statistics associated with processing of a network traffic associated with the client, the statistics including one or more of a volume of a transmitted network traffic associated with the client and a volume of a queued network traffic associated with the client; and based on the statistics, selecting a quantum of a network resource for processing of the network traffic associated with the client; allocating, by the dedicated core, using the resource allocation algorithm, the quantum of the network resource to the network traffic associated with the client, the client including at least one client processor, wherein the network resource is allocated to the network traffic associated with the client based on the level of service; and assigning, by the dedicated core, using the allocating, other processor cores of the plurality of processor cores to processing network packets of the network traffic associated with the client, the other processor cores processing the network packets according to the allocated quantum of the network resource.
  • 2. The method of claim 1, wherein the processing of the network packets includes one or more of sending, receiving, forwarding, consuming, holding, queuing, delaying, and modifying the network packets.
  • 3. The method of claim 1, wherein the assigning of the other processor cores to processing the network packets includes adding a quantum of the network resource to each of the other processor cores.
  • 4. The method of claim 3, wherein the quantum of the network resource is a portion of an available network resource.
  • 5. The method of claim 3, wherein the quantum of the network resource is allocated to each of the other processor cores using a number of network packets queued on the respective other processor core.
  • 6. The method of claim 5, further comprising: readjusting periodically the quantum of the network resource; and allocating further the quantum of the network resource, using at least one of the allocated network resource, an amount of an available remaining network resource, an amount of a consumed network resource, a total amount of a network resource waiting to be processed, and an amount of a network resource waiting to be processed by each of the other processor cores.
  • 7. The method of claim 1, wherein the client includes an application or a group of applications.
  • 8. The method of claim 1, wherein the client includes a user or a group of users.
  • 9. The method of claim 1, wherein the client includes a host or a group of hosts.
  • 10. The method of claim 1, wherein the client includes a network or a group of networks, and at least one type of traffic.
  • 11. The method of claim 1, wherein the plurality of processor cores communicate with each other using at least one of message passing and a shared memory.
  • 12. The method of claim 1, wherein the resource allocation algorithm includes a quality of service algorithm guaranteeing the level of service to the client.
  • 13. The method of claim 1, wherein the resource allocation algorithm is configured to limit resource consumption by the client.
  • 14. The method of claim 1, wherein the resource allocation algorithm is operable to implement sharing of the network resources between one or more clients.
  • 15. A system for allocating resources in a multi-core computing environment, the system comprising: at least one hardware processor configured for: assigning a level of service to each client; and selecting a first processor core of a plurality of processor cores as a dedicated core; the dedicated core configured for: receiving a request from a client; determining the level of service associated with the client; executing a resource allocation algorithm, wherein the executing the resource allocation algorithm includes: collecting statistics associated with processing of a network traffic associated with the client, the statistics including one or more of a volume of a transmitted network traffic associated with the client and a volume of a queued network traffic associated with the client; and based on the statistics, selecting a quantum of a network resource for processing of the network traffic associated with the client; allocating, using the resource allocation algorithm, the quantum of the network resource to the network traffic associated with the client, the client including at least one client processor, wherein the network resource is allocated to the network traffic associated with the client based on the level of service; and assigning, using the allocating, other processor cores of the plurality of processor cores to processing network packets of the network traffic associated with the client; and the other processor cores processing the network packets according to the allocated quantum of the network resource.
  • 16. The system of claim 15, wherein the dedicated core allocates the network resource for processing the network packets by adding a quantum of the network resource to each of the other processing cores.
  • 17. The system of claim 15, wherein the quantum of the network resource is a portion of an available network resource.
  • 18. The system of claim 15, wherein the quantum of the network resource is allocated to each of the other processor cores using a number of network packets queued on the respective other processor core.
  • 19. The system of claim 18, wherein the dedicated core periodically readjusts the quantum of the network resource and allocates the quantum of the network resource based on the allocated network resource, an amount of an available remaining network resource, an amount of a consumed network resource, an amount of a total network resource waiting to be processed, and an amount of a network resource waiting to be processed by each of the processor cores.
  • 20. A non-transitory computer-readable storage medium having embodied thereon a program, the program being executable by a first processor core of a plurality of processor cores to perform a method for allocating resources in a multi-core computing environment, the method comprising: assigning a level of service to each client; receiving a request from a client; determining, by a dedicated core, the level of service associated with the client; executing, by the dedicated core, a resource allocation algorithm, the first processor core being selected by a processor as the dedicated core for executing the resource allocation algorithm, wherein the executing the resource allocation algorithm includes: collecting statistics associated with processing of a network traffic associated with the client, the statistics including one or more of a volume of a transmitted network traffic associated with the client and a volume of a queued network traffic associated with the client; and based on the statistics, selecting a quantum of a network resource for processing of the network traffic associated with the client; allocating, by the dedicated core, using the resource allocation algorithm, the quantum of the network resource to the network traffic associated with the client, the client including at least one client processor, wherein the network resource is allocated to the network traffic associated with the client based on the level of service; assigning, by the dedicated core, using the allocating, other processor cores of the plurality of processor cores to processing network packets of the network traffic associated with the client, the other processor cores processing the network packets according to the allocated quantum of the network resource.
US Referenced Citations (338)
Number Name Date Kind
4720850 Oberlander et al. Jan 1988 A
4864492 Blakely-Fogel et al. Sep 1989 A
4882699 Evensen Nov 1989 A
5218676 Ben-Ayed et al. Jun 1993 A
5293488 Riley et al. Mar 1994 A
5432908 Heddes et al. Jul 1995 A
5774660 Brendel et al. Jun 1998 A
5781550 Templin et al. Jul 1998 A
5862339 Bonnaure Jan 1999 A
5875185 Wang et al. Feb 1999 A
5931914 Chiu Aug 1999 A
5958053 Denker Sep 1999 A
6003069 Cavill Dec 1999 A
6047268 Bartoli et al. Apr 2000 A
6075783 Voit Jun 2000 A
6131163 Wiegel Oct 2000 A
6141749 Coss et al. Oct 2000 A
6167428 Ellis Dec 2000 A
6321338 Porras et al. Nov 2001 B1
6324286 Lai et al. Nov 2001 B1
6360265 Falck et al. Mar 2002 B1
6363075 Huang et al. Mar 2002 B1
6374300 Masters Apr 2002 B2
6389462 Cohen et al. May 2002 B1
6415329 Gelman et al. Jul 2002 B1
6456617 Oda et al. Sep 2002 B1
6483600 Schuster et al. Nov 2002 B1
6519243 Nonaka et al. Feb 2003 B1
6535516 Leu et al. Mar 2003 B1
6578066 Logan et al. Jun 2003 B1
6587866 Modi et al. Jul 2003 B1
6600738 Alperovich et al. Jul 2003 B1
6658114 Farn et al. Dec 2003 B1
6772205 Lavian et al. Aug 2004 B1
6772334 Glawitsch Aug 2004 B1
6779033 Watson et al. Aug 2004 B1
6804224 Schuster et al. Oct 2004 B1
6832322 Boden et al. Dec 2004 B1
7010605 Dharmarajan Mar 2006 B1
7013338 Nag et al. Mar 2006 B1
7058718 Fontes et al. Jun 2006 B2
7058789 Henderson et al. Jun 2006 B2
7058973 Sultan Jun 2006 B1
7069438 Balabine et al. Jun 2006 B2
7086086 Ellis Aug 2006 B2
7111162 Bagepalli et al. Sep 2006 B1
7143087 Fairweather Nov 2006 B2
7167927 Philbrick et al. Jan 2007 B2
7181524 Lele Feb 2007 B1
7228359 Monteiro Jun 2007 B1
7254133 Govindarajan et al. Aug 2007 B2
7266604 Nathan et al. Sep 2007 B1
7269850 Govindarajan et al. Sep 2007 B2
7284272 Howard et al. Oct 2007 B2
7290050 Smith et al. Oct 2007 B1
7301899 Goldstone Nov 2007 B2
7308710 Yarborough Dec 2007 B2
7310686 Uysal Dec 2007 B2
7328267 Bashyam et al. Feb 2008 B1
7337241 Boucher et al. Feb 2008 B2
7343399 Hayball et al. Mar 2008 B2
7370100 Gunturu May 2008 B1
7370353 Yang May 2008 B2
7373500 Ramelson et al. May 2008 B2
7391725 Huitema et al. Jun 2008 B2
7398317 Chen et al. Jul 2008 B2
7406709 Maher, III et al. Jul 2008 B2
7423977 Joshi Sep 2008 B1
7430755 Hughes et al. Sep 2008 B1
7441270 Edwards et al. Oct 2008 B1
7451312 Medvinsky et al. Nov 2008 B2
7467202 Savchuk Dec 2008 B2
7506360 Wilkinson et al. Mar 2009 B1
7512980 Copeland et al. Mar 2009 B2
7516485 Lee et al. Apr 2009 B1
7529242 Lyle May 2009 B1
7552323 Shay Jun 2009 B2
7568041 Turner et al. Jul 2009 B1
7583668 Mayes et al. Sep 2009 B1
7584262 Wang et al. Sep 2009 B1
7590736 Hydrie et al. Sep 2009 B2
7591001 Shay Sep 2009 B2
7603454 Piper Oct 2009 B2
7610622 Touitou et al. Oct 2009 B2
7613193 Swami et al. Nov 2009 B2
7613822 Joy et al. Nov 2009 B2
7673072 Boucher et al. Mar 2010 B2
7675854 Chen et al. Mar 2010 B2
7711790 Barrett et al. May 2010 B1
7716369 Le Pennec et al. May 2010 B2
7733866 Mishra et al. Jun 2010 B2
7747748 Allen Jun 2010 B2
7779130 Toutonghi Aug 2010 B1
7826487 Mukerji et al. Nov 2010 B1
7908651 Maher Mar 2011 B2
7948952 Hurtta et al. May 2011 B2
7965727 Sakata et al. Jun 2011 B2
7979694 Touitou et al. Jul 2011 B2
7990847 Leroy et al. Aug 2011 B1
7992201 Aldridge et al. Aug 2011 B2
8079077 Chen et al. Dec 2011 B2
8081640 Ozawa et al. Dec 2011 B2
8090866 Bashyam et al. Jan 2012 B1
8099492 Dahlin et al. Jan 2012 B2
8116312 Riddoch et al. Feb 2012 B2
8122116 Matsunaga et al. Feb 2012 B2
8151019 Le Apr 2012 B1
8185651 Moran et al. May 2012 B2
8244876 Sollee Aug 2012 B2
8255644 Sonnier et al. Aug 2012 B2
8261339 Aldridge et al. Sep 2012 B2
8291487 Chen et al. Oct 2012 B1
8327128 Prince et al. Dec 2012 B1
8332925 Chen et al. Dec 2012 B2
8347392 Chess et al. Jan 2013 B2
8379515 Mukerji Feb 2013 B1
8387128 Chen et al. Feb 2013 B1
8464333 Chen et al. Jun 2013 B1
8520615 Mehta et al. Aug 2013 B2
8559437 Mishra et al. Oct 2013 B2
8560693 Wang et al. Oct 2013 B1
8595383 Chen et al. Nov 2013 B2
8595819 Chen et al. Nov 2013 B1
RE44701 Chen et al. Jan 2014 E
8675488 Sidebottom et al. Mar 2014 B1
8681610 Mukerji Mar 2014 B1
8782221 Han Jul 2014 B2
8904512 Chen et al. Dec 2014 B1
8914871 Chen et al. Dec 2014 B1
8918857 Chen et al. Dec 2014 B1
RE45347 Chun et al. Jan 2015 E
8943577 Chen et al. Jan 2015 B1
8949471 Hall Feb 2015 B2
8977749 Han Mar 2015 B1
9032502 Chen et al. May 2015 B1
9094364 Jalan et al. Jul 2015 B2
9106561 Jalan et al. Aug 2015 B2
9118618 Davis Aug 2015 B2
9118620 Davis Aug 2015 B1
9124550 Chen et al. Sep 2015 B1
9137301 Dunlap et al. Sep 2015 B1
9154584 Han Oct 2015 B1
9258332 Chen et al. Feb 2016 B2
9386088 Zheng et al. Jul 2016 B2
9531846 Han et al. Dec 2016 B2
20010015812 Sugaya Aug 2001 A1
20010023442 Masters Sep 2001 A1
20010042200 Lamberton et al. Nov 2001 A1
20020026515 Michielsens et al. Feb 2002 A1
20020026531 Keane et al. Feb 2002 A1
20020032799 Wiedeman et al. Mar 2002 A1
20020046348 Brustoloni Apr 2002 A1
20020053031 Bendinelli et al. May 2002 A1
20020078164 Reinschmidt Jun 2002 A1
20020091844 Craft et al. Jul 2002 A1
20020103916 Chen et al. Aug 2002 A1
20020138618 Szabo Sep 2002 A1
20020141386 Minert et al. Oct 2002 A1
20020141448 Matsunaga Oct 2002 A1
20020143955 Shimada et al. Oct 2002 A1
20020143991 Chow et al. Oct 2002 A1
20020188678 Edecker et al. Dec 2002 A1
20030009591 Hayball et al. Jan 2003 A1
20030035409 Wang et al. Feb 2003 A1
20030061506 Cooper et al. Mar 2003 A1
20030065950 Yarborough Apr 2003 A1
20030081624 Aggarwal et al. May 2003 A1
20030088788 Yang May 2003 A1
20030135625 Fontes et al. Jul 2003 A1
20030135653 Marovich Jul 2003 A1
20030152078 Henderson et al. Aug 2003 A1
20030167340 Jonsson Sep 2003 A1
20030229809 Wexler et al. Dec 2003 A1
20030236887 Kesselman Dec 2003 A1
20040010545 Pandya Jan 2004 A1
20040054920 Wilson et al. Mar 2004 A1
20040062246 Boucher et al. Apr 2004 A1
20040073703 Boucher et al. Apr 2004 A1
20040078419 Ferrari et al. Apr 2004 A1
20040078480 Boucher et al. Apr 2004 A1
20040103315 Cooper et al. May 2004 A1
20040107360 Herrmann et al. Jun 2004 A1
20040184442 Jones et al. Sep 2004 A1
20040243718 Fujiyoshi Dec 2004 A1
20040250059 Ramelson et al. Dec 2004 A1
20050005207 Herneque Jan 2005 A1
20050027947 Landin Feb 2005 A1
20050033985 Xu et al. Feb 2005 A1
20050036511 Baratakke et al. Feb 2005 A1
20050038898 Mittig et al. Feb 2005 A1
20050039033 Meyers et al. Feb 2005 A1
20050050364 Feng Mar 2005 A1
20050074001 Mattes et al. Apr 2005 A1
20050080890 Yang et al. Apr 2005 A1
20050114492 Arberg et al. May 2005 A1
20050135422 Yeh Jun 2005 A1
20050144468 Northcutt et al. Jun 2005 A1
20050163073 Heller et al. Jul 2005 A1
20050169285 Wills et al. Aug 2005 A1
20050198335 Brown et al. Sep 2005 A1
20050213586 Cyganski et al. Sep 2005 A1
20050240989 Kim et al. Oct 2005 A1
20050251856 Araujo et al. Nov 2005 A1
20050281190 McGee et al. Dec 2005 A1
20060023721 Miyake et al. Feb 2006 A1
20060031506 Redgate Feb 2006 A1
20060036610 Wang Feb 2006 A1
20060041745 Parnes Feb 2006 A1
20060062142 Appanna et al. Mar 2006 A1
20060063517 Oh et al. Mar 2006 A1
20060064440 Perry Mar 2006 A1
20060069804 Miyake et al. Mar 2006 A1
20060080446 Bahl Apr 2006 A1
20060126625 Schollmeier et al. Jun 2006 A1
20060136570 Pandya Jun 2006 A1
20060164978 Werner Jul 2006 A1
20060168319 Trossen Jul 2006 A1
20060195698 Pinkerton et al. Aug 2006 A1
20060227771 Raghunath et al. Oct 2006 A1
20060230129 Swami et al. Oct 2006 A1
20060233100 Luft et al. Oct 2006 A1
20060280121 Matoba Dec 2006 A1
20070002857 Maher Jan 2007 A1
20070011419 Conti Jan 2007 A1
20070019543 Wei et al. Jan 2007 A1
20070022479 Sikdar et al. Jan 2007 A1
20070076653 Park Apr 2007 A1
20070124487 Yoshimoto et al. May 2007 A1
20070124502 Li May 2007 A1
20070165622 O'Rourke et al. Jul 2007 A1
20070177506 Singer et al. Aug 2007 A1
20070180119 Khivesara Aug 2007 A1
20070180226 Schory et al. Aug 2007 A1
20070180513 Raz et al. Aug 2007 A1
20070185998 Touitou et al. Aug 2007 A1
20070195792 Chen et al. Aug 2007 A1
20070230337 Igarashi et al. Oct 2007 A1
20070242738 Park Oct 2007 A1
20070243879 Park Oct 2007 A1
20070245090 King et al. Oct 2007 A1
20070248009 Petersen Oct 2007 A1
20070294694 Jeter et al. Dec 2007 A1
20080016161 Tsirtsis Jan 2008 A1
20080031263 Ervin et al. Feb 2008 A1
20080034111 Kamath et al. Feb 2008 A1
20080034419 Mullick et al. Feb 2008 A1
20080040789 Chen et al. Feb 2008 A1
20080076432 Senarath Mar 2008 A1
20080120129 Seubert May 2008 A1
20080216177 Yokosato et al. Sep 2008 A1
20080225722 Khemani et al. Sep 2008 A1
20080253390 Das Oct 2008 A1
20080289044 Choi Nov 2008 A1
20080291911 Lee et al. Nov 2008 A1
20080298303 Tsirtsis Dec 2008 A1
20090024722 Sethuraman et al. Jan 2009 A1
20090031415 Aldridge et al. Jan 2009 A1
20090049537 Chen et al. Feb 2009 A1
20090077651 Poeluev Mar 2009 A1
20090092124 Singhal et al. Apr 2009 A1
20090113536 Zhang et al. Apr 2009 A1
20090138606 Moran et al. May 2009 A1
20090138945 Savchuk May 2009 A1
20090164614 Christian et al. Jun 2009 A1
20090210698 Candelore Aug 2009 A1
20090285196 Lee Nov 2009 A1
20100042869 Szabo et al. Feb 2010 A1
20100054139 Chun et al. Mar 2010 A1
20100061319 Aso et al. Mar 2010 A1
20100064008 Yan et al. Mar 2010 A1
20100095018 Khemani et al. Apr 2010 A1
20100106854 Kim et al. Apr 2010 A1
20100205310 Altshuler et al. Aug 2010 A1
20100228819 Wei Sep 2010 A1
20100235522 Chen et al. Sep 2010 A1
20100238828 Russell Sep 2010 A1
20100257278 Gunturu Oct 2010 A1
20100262819 Yang Oct 2010 A1
20100265824 Chao et al. Oct 2010 A1
20100268814 Cross et al. Oct 2010 A1
20100318631 Shukla Dec 2010 A1
20100322252 Suganthi et al. Dec 2010 A1
20100333101 Pope et al. Dec 2010 A1
20100333209 Alve Dec 2010 A1
20110007652 Bai Jan 2011 A1
20110032941 Quach et al. Feb 2011 A1
20110060831 Ishii et al. Mar 2011 A1
20110083174 Aldridge et al. Apr 2011 A1
20110093522 Chen et al. Apr 2011 A1
20110099623 Garrard et al. Apr 2011 A1
20110149879 Noriega Jun 2011 A1
20110209157 Sumida Aug 2011 A1
20110276982 Nakayama et al. Nov 2011 A1
20110302256 Sureshehandra et al. Dec 2011 A1
20110307606 Cobb Dec 2011 A1
20120008495 Shen et al. Jan 2012 A1
20120026897 Guichard et al. Feb 2012 A1
20120117382 Larson et al. May 2012 A1
20120155495 Clee et al. Jun 2012 A1
20120173759 Agarwal et al. Jul 2012 A1
20120215910 Wada Aug 2012 A1
20120290727 Tivig Nov 2012 A1
20130089099 Pollock et al. Apr 2013 A1
20130135996 Torres May 2013 A1
20130136139 Zheng et al. May 2013 A1
20130166762 Jalan et al. Jun 2013 A1
20130176854 Chisu et al. Jul 2013 A1
20130191486 Someya et al. Jul 2013 A1
20130191548 Boddukuri et al. Jul 2013 A1
20130212242 Mendiratta et al. Aug 2013 A1
20130227165 Liu Aug 2013 A1
20130250765 Ehsan et al. Sep 2013 A1
20130258846 Damola Oct 2013 A1
20130262702 Davis Oct 2013 A1
20130311686 Fetterman et al. Nov 2013 A1
20130315241 Kamat et al. Nov 2013 A1
20140012972 Han Jan 2014 A1
20140169168 Jalan et al. Jun 2014 A1
20140207845 Han et al. Jul 2014 A1
20140258536 Chiong Sep 2014 A1
20140286313 Fu et al. Sep 2014 A1
20140359052 Joachimpillai et al. Dec 2014 A1
20150026794 Zuk et al. Jan 2015 A1
20150047012 Chen et al. Feb 2015 A1
20150156223 Xu et al. Jun 2015 A1
20150207708 Raleigh Jul 2015 A1
20150237173 Virkki et al. Aug 2015 A1
20150244566 Puimedon Aug 2015 A1
20150296058 Jalan et al. Oct 2015 A1
20150312092 Golshan et al. Oct 2015 A1
20150350048 Sampat et al. Dec 2015 A1
20150350379 Jalan et al. Dec 2015 A1
20150350383 Davis Dec 2015 A1
20160014052 Han Jan 2016 A1
20160014126 Jalan et al. Jan 2016 A1
20160065619 Chen et al. Mar 2016 A1
20170048107 Dosovitsky et al. Feb 2017 A1
20170048356 Thompson et al. Feb 2017 A1
Foreign Referenced Citations (83)
Number Date Country
1372662 Oct 2002 CN
1473300 Feb 2004 CN
1529460 Sep 2004 CN
1575582 Feb 2005 CN
1910869 Feb 2007 CN
1921457 Feb 2007 CN
1937591 Mar 2007 CN
101189598 May 2008 CN
101442425 May 2009 CN
101495993 Jul 2009 CN
101682532 Mar 2010 CN
101878663 Nov 2010 CN
ZL 200780001807.5 Feb 2011 CN
102123156 Jul 2011 CN
102577252 Jul 2012 CN
103365654 Oct 2013 CN
103428261 Dec 2013 CN
103533018 Jan 2014 CN
101878663 Jun 2014 CN
103944954 Jul 2014 CN
104040990 Sep 2014 CN
104137491 Nov 2014 CN
104796396 Jul 2015 CN
102577252 Mar 2016 CN
1209876 May 2002 EP
1482685 Dec 2004 EP
1720287 Nov 2006 EP
2575328 Oct 2008 EP
2057552 May 2009 EP
2215863 Aug 2010 EP
2296313 Mar 2011 EP
2667571 Nov 2013 EP
2760170 Jul 2014 EP
2575328 Nov 2014 EP
2760170 Dec 2015 EP
1182547 Nov 2013 HK
1188498 May 2014 HK
1189438 Jun 2014 HK
1190539 Jul 2014 HK
1182547 Apr 2015 HK
1199153 Jun 2015 HK
1199779 Jul 2015 HK
1200617 Aug 2015 HK
261CHE2014 Jul 2016 IN
2000307634 Nov 2000 JP
2004350188 Dec 2004 JP
2005-518595 Jun 2005 JP
2006180295 Jul 2006 JP
2006333245 Dec 2006 JP
2007048052 Feb 2007 JP
2011505752 Feb 2011 JP
5480959 Feb 2013 JP
2013059122 Mar 2013 JP
2013070423 Apr 2013 JP
2013078134 Apr 2013 JP
5364101 Sep 2013 JP
5579820 Jul 2014 JP
5579821 Jul 2014 JP
2014143686 Aug 2014 JP
5906263 Apr 2016 JP
1020130096624 Aug 2013 KR
101576585 Dec 2015 KR
NI086309 Feb 1996 TW
NI109955 Dec 1999 TW
NI130506 Mar 2001 TW
NI137392 Jul 2001 TW
WO2001013228 Feb 2001 WO
WO2001014990 Mar 2001 WO
2003073216 Sep 2003 WO
2003103233 Dec 2003 WO
WO2003103237 Dec 2003 WO
2006065691 Jun 2006 WO
2007076883 Jul 2007 WO
WO2008053954 May 2008 WO
2008021620 Jun 2009 WO
2009073295 Jun 2009 WO
WO2011049770 Apr 2011 WO
WO2011079381 Jul 2011 WO
WO2013081952 Jun 2013 WO
WO2013096019 Jun 2013 WO
WO2014031046 Feb 2014 WO
WO2014093829 Jun 2014 WO
WO2015164026 Oct 2015 WO
Non-Patent Literature Citations (15)
Entry
Chiussi et al., “A Network Architecture for MPLS-Based Micro-Mobility”, IEEE WCNC 02, Orlando, Mar. 2002.
Smith, M. et al; “Network Security Using NAT and NAPT”, 10th IEEE International Conference on Aug. 27-30, 2002, Piscataway, NJ, USA, 2012; Aug. 27, 2002; pp. 355-360.
Cardellini et al., “Dynamic Load Balancing on Web-server Systems”, IEEE Internet Computing, vol. 3, No. 3, pp. 28-39, May-Jun. 1999.
Wang et al., “Shield: Vulnerability Driven Network Filters for Preventing Known Vulnerability Exploits”, SIGCOMM'04, Aug. 30-Sep. 3, 2004, Portland, Oregon, USA.
Goldszmidt et al., “NetDispatcher: A TCP Connection Router,” IBM Research Report RC 20853, May 19, 1997, pp. 1-31.
Koike et al., “Transport Middleware for Network-Based Control,” IEICE Technical Report, Jun. 22, 2000, vol. 100, No. 53, pp. 13-18.
Yamamoto et al., “Performance Evaluation of Window Size in Proxy-based TCP for Multi-hop Wireless Networks,” IPSJ SIG Technical Reports, May 15, 2008, vol. 2008, No. 44, pp. 109-114.
Abe et al., “Adaptive Split Connection Schemes in Advanced Relay Nodes,” IEICE Technical Report, Feb. 22, 2010, vol. 109, No. 438, pp. 25-30.
Gite, Vivek, “Linux Tune Network Stack (Buffers Size) to Increase Networking Performance,” nixCraft [online], Jul. 8, 2009 [retrieved on Apr. 13, 2016], Retrieved from the Internet: <URL:http://www.cyberciti.biz/faq/linux-tcp-tuning/>, 24 pages.
FreeBSD, “tcp—TCP Protocol,” Linux Programmer's Manual [online], Nov. 25, 2007 [retrieved on Apr. 13, 2016], Retrieved from the Internet: <URL:https://www.freebsd.org/cgi/man.cgi?query=tcp&apropos=0&sektion=7&manpath=SuSE+Linux%2Fi386+11.0&format=asci>, 11 pages.
“Enhanced Interior Gateway Routing Protocol”, Cisco, Document ID 16406, Sep. 9, 2005 update, 43 pages.
Crotti, Manuel et al., “Detecting HTTP Tunnels with Statistical Mechanisms”, IEEE International Conference on Communications, Jun. 24-28, 2007, pp. 6162-6168.
Haruyama, Takahiro et al., “Dial-to-Connect VPN System for Remote DLNA Communication”, IEEE Consumer Communications and Networking Conference, CCNC 2008. 5th IEEE, Jan. 10-12, 2008, pp. 1224-1225.
Chen, Jianhua et al., “SSL/TLS-based Secure Tunnel Gateway System Design and Implementation”, IEEE International Workshop on Anti-counterfeiting, Security, Identification, Apr. 16-18, 2007, pp. 258-261.
“EIGRP MPLS VPN PE-CE Site of Origin (SoO)”, Cisco Systems, Feb. 28, 2006, 14 pages.