In modern networks, information (e.g., voice, video, or data) is transferred as packets of data. This has led to the creation of application-specific integrated circuits (“ASICs”) known as network processors. Such processors may be customized to receive and route packets of data from a source node to a destination node of a network. Network processors have evolved into ASICs that contain a significant number of processing engines and other resources to manage different aspects of data routing.
As noted above, network processors may contain a significant number of processing engines and other types of resources, such as memory used for queuing packets. As data centers are moved into virtualized, cloud-based environments, customers of network computing services are provided with a variety of networking options. For example, customers may select from 30 gigabytes to many terabytes of storage. However, the allocation of resources in a network processor is controlled by the internal algorithms of the ASIC itself. These internal algorithms, which may be known as quality of service algorithms, determine how to prioritize the ingress and egress of packets. As such, a certain level of performance may not be guaranteed to a customer. For example, a customer paying a premium for high performance may actually receive poor performance when the network processor experiences high packet volume. The load balancing algorithms inside a network processor may not prioritize packets in accordance with the premiums paid by a customer.
In view of the foregoing, disclosed herein are a system, non-transitory computer readable medium, and method to dedicate resources of a network processor. In one example, an interface to dedicate resources of a network processor may be displayed. In a further example, decisions of the network processor may be preempted by the selections made via the interface. The system, non-transitory computer readable medium, and method disclosed herein permit cloud network providers to offer price structures that reflect the resources of the network processor dedicated to the customer. Furthermore, the techniques disclosed herein permit cloud service providers to maintain a certain level of performance for customers who purchase such a service. The aspects, features and advantages of the present disclosure will be appreciated when considered with reference to the following description of examples and accompanying figures. The following description does not limit the application; rather, the scope of the disclosure is defined by the appended claims and equivalents.
The computer apparatus 104 may be a client computer used by a customer of a network computing or cloud computing service. The computer apparatus 105 is shown in more detail and may contain a processor 110, which may be any of a number of well-known processors, such as processors from Intel® Corporation. Network processor 116 may be an ASIC for handling the receipt and delivery of data packets from a source node to a destination node in network 118 or other network. While only two processors are shown in
Non-transitory computer readable medium (“CRM”) 112 may store instructions that may be retrieved and executed by processor 110. The instructions may include an interface layer 113 and an abstraction layer 114. In one example, non-transitory CRM 112 may be used by or in connection with an instruction execution system, such as computer apparatus 105, or other system that can fetch or obtain the logic from non-transitory CRM 112 and execute the instructions contained therein. “Non-transitory computer-readable media” may be any media that can contain, store, or maintain programs and data for use by or in connection with a computer apparatus or instruction execution system. Non-transitory computer readable media may comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable non-transitory computer-readable media include, but are not limited to, portable magnetic computer diskettes such as floppy diskettes or hard drives, a read-only memory (“ROM”), an erasable programmable read-only memory, a portable compact disc, or other storage devices that may be coupled to computer apparatus 105 directly or indirectly. Alternatively, non-transitory CRM 112 may be a random access memory (“RAM”) device or may be divided into multiple memory segments organized as dual in-line memory modules (“DIMMs”). The non-transitory CRM 112 may also include any combination of one or more of the foregoing and/or other devices as well.
Network 118 and any intervening nodes thereof may comprise various configurations and use various protocols, including the Internet, World Wide Web, intranets, virtual private networks, local Ethernet networks, private networks using communication protocols proprietary to one or more companies, cellular and wireless networks (e.g., WiFi), instant messaging, HTTP and SMTP, and various combinations of the foregoing. Computer apparatus 105 may also comprise a plurality of computers, such as a load-balancing network, that exchange information with different nodes of a network for the purpose of receiving, processing, and transmitting data to multiple remote computers. In this instance, the computers comprising computer apparatus 105 may typically still be at different nodes of the network. While only one node of network 118 is shown, it is understood that a network may include many more interconnected computers.
The instructions residing in non-transitory CRM 112 may comprise any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by processor 110. In this regard, the terms “instructions,” “scripts,” and “programs” may be used interchangeably herein. The computer executable instructions may be stored in any computer language or format, such as in object code or source code. Furthermore, it is understood that the instructions may be implemented in the form of hardware, software, or a combination of hardware and software and that the examples herein are merely illustrative.
The instructions in interface layer 113 may cause processor 110 to display a graphical user interface (“GUI”). As will be discussed in more detail further below, such a GUI may allow a user to dedicate select resources of a network processor to a customer of a cloud networking service. Abstraction layer 114 may abstract the resources of a network processor from the user of interface layer 113, and may contain instructions therein that cause a network processor to distribute resources in accordance with the selections made at the interface layer.
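By way of a hedged illustration only, the following C sketch shows one way the two layers might cooperate: the interface layer collects a user's selection and hands it to the abstraction layer, which carries it out against the network processor. The structure and function names (iface_selection, abstraction_apply) are assumptions introduced for this sketch and are not part of the disclosure.

```c
/* Hypothetical sketch of the interface-layer / abstraction-layer split.
 * All names and fields are illustrative assumptions; the disclosure does
 * not prescribe an implementation. */
#include <stdio.h>

/* A selection made by a user at the interface layer (e.g., via the GUI). */
struct iface_selection {
    int customer_id;     /* customer of the cloud networking service */
    int resource_type;   /* e.g., processing engine or packet queue */
    int resource_count;  /* number of units to dedicate */
};

/* The abstraction layer hides network-processor details and carries out the
 * selection, preempting the processor's internal allocation decisions. */
static int abstraction_apply(const struct iface_selection *sel)
{
    /* A real implementation would program the ASIC; this stub only logs. */
    printf("Dedicating %d unit(s) of resource type %d to customer %d\n",
           sel->resource_count, sel->resource_type, sel->customer_id);
    return 0;  /* 0 = success */
}

int main(void)
{
    /* The interface layer would populate this from the GUI selections. */
    struct iface_selection sel = { .customer_id = 7,
                                   .resource_type = 1,
                                   .resource_count = 4 };
    return abstraction_apply(&sel);
}
```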
One working example of the system, method, and non-transitory computer-readable medium is shown in
As shown in block 202 of
Referring back to
Abstraction layer 404 may encapsulate the messaging between an interface and a network processor to implement the techniques disclosed herein. Abstraction layer 404 may allocate a data structure or object to each network processor resource dedicated to a customer. In one example, there may be an API function called ResourceMapper() that associates a customer with the resources of the network processor dedicated thereto. The parameters of ResourceMapper() may include a customer identifier, a resource type, and the number of resources to associate with the customer. The function may determine whether the requested resources are available. If so, the resources may be dedicated to the customer. If the resources are not available, the API function may return an error code. In another example, the API may include a function called Balancer() that balances the load among the dedicated resources. The parameters of the example Balancer() API function may be the data structures or objects associated with each dedicated resource and a customer identifier. In yet a further example, the Balancer() function may return a value indicating whether the packets were properly delivered to their destination. In another aspect, the Balancer() function may return a route within network processor 410 that is least congested. Therefore, the packets associated with the customer may travel along this route. While only two example API functions are described herein, it should be understood that the aforementioned functions are not exhaustive; other functions related to managing network resources in accordance with the techniques presented herein may be added to the suite of API functions.
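The sketch below is one hedged reading of how ResourceMapper() and Balancer() might be declared and implemented in C. The resource pool structure, the congestion field, and the return conventions are assumptions introduced for illustration; the disclosure only names the functions and describes their general behavior.

```c
/* Illustrative C sketch of the ResourceMapper() and Balancer() API functions
 * described above. Types, fields, and return codes are assumptions. */
#include <limits.h>

#define RES_OK          0   /* resources were dedicated to the customer */
#define RES_UNAVAILABLE 1   /* error code: requested resources not available */

/* One data structure per network-processor resource that can be dedicated. */
struct np_resource {
    int resource_id;
    int resource_type;   /* e.g., processing engine or packet queue */
    int customer_id;     /* owning customer, or -1 if unassigned */
    int congestion;      /* hypothetical congestion metric; lower is better */
};

/* Associates a customer with `count` free resources of the given type.
 * Returns RES_OK on success or RES_UNAVAILABLE if too few are free. */
int ResourceMapper(int customer_id, int resource_type, int count,
                   struct np_resource *pool, int pool_size)
{
    int free_count = 0;
    for (int i = 0; i < pool_size; i++)
        if (pool[i].customer_id == -1 && pool[i].resource_type == resource_type)
            free_count++;
    if (free_count < count)
        return RES_UNAVAILABLE;

    for (int i = 0; i < pool_size && count > 0; i++) {
        if (pool[i].customer_id == -1 && pool[i].resource_type == resource_type) {
            pool[i].customer_id = customer_id;   /* dedicate to the customer */
            count--;
        }
    }
    return RES_OK;
}

/* Balances load among a customer's dedicated resources by returning the id
 * of the least-congested one, or -1 if the customer owns no resources. */
int Balancer(const struct np_resource *dedicated, int n, int customer_id)
{
    int best_route = -1;
    int best_congestion = INT_MAX;
    for (int i = 0; i < n; i++) {
        if (dedicated[i].customer_id == customer_id &&
            dedicated[i].congestion < best_congestion) {
            best_congestion = dedicated[i].congestion;
            best_route = dedicated[i].resource_id;
        }
    }
    return best_route;
}
```

A production abstraction layer would, of course, translate this bookkeeping into whatever configuration mechanism the particular network processor exposes.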
Advantageously, the foregoing system, method, and non-transitory computer readable medium allow cloud service providers to sustain a certain level of performance in accordance with the expectations of a customer. Instead of exposing a customer to the decisions of a network processor, users may take control of network resources to ensure a certain level of performance.
Although the disclosure herein has been described with reference to particular examples, it is to be understood that these examples are merely illustrative of the principles of the disclosure. It is therefore to be understood that numerous modifications may be made to the examples and that other arrangements may be devised without departing from the spirit and scope of the disclosure as defined by the appended claims. Furthermore, while particular processes are shown in a specific order in the appended drawings, such processes are not limited to any particular order unless such order is expressly set forth herein; rather, processes may be performed in a different order or concurrently and steps may be added or omitted.