This disclosure relates generally to techniques for improving the communication speeds of multiple clients on a network with a central host. More specifically, the disclosure describes techniques for allocating bandwidth in a networked environment.
As devices and components become more complex and undertake heavier workloads, performance has become an increasing concern. Part of a computer's performance rests on available data transfer speeds. Transfer speeds can vary for a variety of reasons, from the hardware used to whether a networked system of computers is configured to send data to a central source at or near the same time. Networked computer systems include a number of components and elements. Often the components, and even separate computers, are coupled via a bus or some other form of interconnect.
In a network where multiple clients need to send data to a central host and those clients share one or more network links, it can become difficult to ensure that each client's data flow receives a fair portion of the available bandwidth on the network. For example, Universal Serial Bus (USB) 3.0 “SuperSpeed USB” limited the number of simultaneously active clients on the network so that only one client sent data to the host at a time. In addition, allowing only one client to send data at a time leads to wasted time on the network while the host determines the next client to activate.
The subject matter described herein relates to techniques for initiating data flows at one or more hubs in a networked environment. A data flow, as referred to herein, is a stream of data packets. The hub assigns weights to data flows based on the speed of each data flow. The hub allocates upstream bandwidth based on the weight of each data flow received. The hub may combine upstream data flows and assign combined data flows weights based on the combined weight of each data flow in the combined data flow. In this manner, the hub may transmit multiple data flows simultaneously to upstream components, such as another hub, a host computer, and the like.
In aspects, the techniques described herein may be implemented in Universal Serial Bus (USB) protocol hubs. In USB 3.1, released Jul. 26, 2013, data flows may be propagated at a “SuperSpeed” of about 5 gigabits per second (Gbps) or at a “SuperSpeedPlus” of about 10 Gbps. The speed may be indicated in a field of a packet within each data flow. As stated above, the hub assigns a weight to a data flow based on the speed indication.
The hub includes an allocation engine 106 configured to allocate upstream bandwidth indicated at 108. The allocation engine 106 may include a receiving module 110, a weight assignment module 112, and an allocation module 114. The receiving module 110 may be configured to receive a data flow comprising packets at the hub 100 from one or more of the client devices 102. A data flow comprises packets from any one of the client devices 102. Each data flow may be received at a buffer 116. In embodiments, one buffer is assigned to each client device 102. The receiving module 110 may identify a speed at which the packets were received in each data flow from the client devices 102. The weight assignment module 112 assigns a weight to the packets based on the speed of the data flow. The allocation module 114 allocates bandwidth of the upstream link 108 based on the weight assigned to the packets.
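The pipeline formed by the receiving, weight assignment, and allocation modules can be sketched as follows. This is an illustrative sketch only: the packet layout (a per-packet speed field), the speed-to-weight mapping, and all names are assumptions for illustration, not part of the USB specification.

```python
from dataclasses import dataclass

# Illustrative sketch of the receive/weigh/allocate pipeline formed by
# modules 110, 112, and 114. The packet layout and the speed-to-weight
# mapping are assumptions for illustration only.

@dataclass
class Packet:
    client_id: int
    speed_gbps: int  # speed indication carried in a field of the packet

def assign_weight(speed_gbps):
    # Assumed mapping: a higher link speed earns a higher weight.
    return 2 if speed_gbps >= 10 else 1

def allocate(packets):
    """Return each client's fraction of the upstream link bandwidth."""
    weights = {}
    for p in packets:                                       # receiving module
        weights[p.client_id] = assign_weight(p.speed_gbps)  # weight assignment
    total = sum(weights.values())
    return {cid: w / total for cid, w in weights.items()}   # allocation module

shares = allocate([Packet(1, 5), Packet(2, 10)])
```

With one 5 Gbps client and one 10 Gbps client, the faster client receives twice the upstream share of the slower one.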
In some cases, the allocation engine 106 may be implemented as logic, at least partially including hardware logic, such as electrical circuits configured to determine the speed of incoming packets. The allocation engine 106 may, in some cases, be implemented as instructions stored on a storage device 118, and executable by a processor 120. In yet other examples, the allocation engine 106 may be a combination of hardware, software, and firmware.
The modules 110, 112, 114 may be configured to operate independently, in parallel, in a distributed manner, or as part of a broader process. In any case, the modules 110, 112, 114 are configured to carry out operations for bandwidth allocation as described above, and as described further below.
The processor 120 may be a main processor adapted to execute the stored instructions. The processor 120 may be a single-core processor, a multi-core processor, a computing cluster, or any number of other configurations. The processor 120 may be implemented as a Complex Instruction Set Computer (CISC) processor, a Reduced Instruction Set Computer (RISC) processor, an x86 instruction set compatible processor, a multi-core processor, or any other microprocessor or central processing unit (CPU).
The buffers 116 can include random access memory (RAM) (e.g., static random access memory (SRAM), dynamic random access memory (DRAM), zero capacitor RAM, Silicon-Oxide-Nitride-Oxide-Silicon (SONOS) memory, embedded DRAM, extended data out RAM, double data rate (DDR) RAM, resistive random access memory (RRAM), parameter random access memory (PRAM), etc.), read only memory (ROM) (e.g., Mask ROM, programmable read only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), flash memory, or any other suitable memory systems. The processor 120 may be connected through a system bus 122 (e.g., Peripheral Component Interconnect (PCI), Industry Standard Architecture (ISA), PCI-Express, HyperTransport®, NuBus, etc.) to components including the buffers 116, the storage device 118, and the allocation engine 106.
The block diagram of
The client devices 206, 208, 210, and 212 may each have a data flow provided to the respective hubs 202 and 204, as indicated at 216, 218, 220, and 222, respectively. For example, the hub 202 may receive a first data flow 216 from the client 206, and a second data flow 218 from client 208. The first hub 202 assigns weights to packets of each of the data flows 216 and 218 based on a speed associated with the data flows 216 and 218. The speed may be indicated in a field of a packet received at the hub 202 as each data flow 216 and 218 is received. Based on the weights assigned to each data flow 216 and 218, the hub 202 assigns a weight to a combined data flow 224 provided upstream to the second hub 204.
For example, the data flow 216 may have a speed of 5 Gbps, and, therefore, may be assigned a weight of “1.” The data flow 218 may have a speed of 10 Gbps, and may be assigned a weight of “2.” The combined data flow 224 may have a weight of “3” as the weight of the combined data flow 224 is the sum of the weights assigned to the data flows 216 and 218, respectively. The bandwidth allocation at the upstream data flow 224 is based on the individual weights of each data flow 216 and 218. In this example, one third of the bandwidth will be allocated to data flow 216 and two thirds of the bandwidth will be allocated to data flow 218.
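The arithmetic in the example above can be checked with a short sketch. The flow names and the speed-to-weight mapping follow the example; the function itself is illustrative rather than a specified interface.

```python
# Checking the example above: weights derive from flow speeds, the combined
# flow's weight is the sum, and bandwidth splits in proportion to weight.
# The speed-to-weight mapping is an assumption drawn from the example text.

def weight_for_speed(speed_gbps):
    return {5: 1, 10: 2}[speed_gbps]

flows = {"flow_216": 5, "flow_218": 10}   # speeds in Gbps
weights = {name: weight_for_speed(s) for name, s in flows.items()}
combined_weight = sum(weights.values())   # weight of combined flow 224

# Each flow's share of the upstream bandwidth is its weight over the total.
shares = {name: w / combined_weight for name, w in weights.items()}
```

The combined flow carries weight 3, with data flow 216 receiving one third and data flow 218 two thirds of the upstream bandwidth.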
As illustrated in
In the networked environment 200, a hub may be associated with a tier level. In techniques described above, bandwidth may be shared for simultaneous transmission of packets from different client devices, regardless of tier level. For example, no matter how many hubs are between a client device and the host 214, a given data flow may preserve a fair share of bandwidth as it progresses through the networked environment 200 towards the host 214. In other words, independent data flows from client devices connected at different tiers in the networked environment 200 will be provided the same bandwidth service on shared upstream links as if each of the client devices were connected at the same tier level sharing the same upstream link to the host 214.
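The tier-independence property described above can be illustrated with a short sketch: because weights add as flows combine, a client's end-to-end share depends only on relative weights, not on how many hub tiers sit between the client and the host. The hub arrangement and weights here are hypothetical.

```python
# Sketch: weights compose additively across tiers, so a client's share of a
# shared upstream link is the same as if all clients sat behind one hub.
# The topology and weights below are hypothetical.

def combine(flows):
    """Combine downstream flows into one upstream flow whose weight is the sum."""
    return {"weight": sum(f["weight"] for f in flows)}

# Two clients behind a tier-2 hub, plus one client attached at tier 1.
tier2 = combine([{"weight": 1}, {"weight": 2}])   # combined weight 3
tier1 = combine([tier2, {"weight": 1}])           # combined weight 4

# The tier-1 client's share on the link to the host: 1/4.
share_tier1_client = 1 / tier1["weight"]

# A tier-2 client's end-to-end share: its fraction of the combined flow
# times the combined flow's fraction of the upstream link — also 1/4.
share_tier2_client = (1 / tier2["weight"]) * (tier2["weight"] / tier1["weight"])
```

Both the weight-1 client at tier 2 and the client at tier 1 end up with the same quarter share, matching the "same bandwidth service regardless of tier" behavior described above.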
Note that the networked environment 200 of
As discussed above, data flows are assigned a weight by the hub 100 based on the speed of each data flow 302, 304, and 306. As discussed above in reference to
In embodiments, the weighting process described above is a weighted round-robin algorithm used only on asynchronous data, or data that is not time sensitive. In some scenarios, a credit system is implemented by the hub 100, to limit the incoming data flows 302, 304, 306. Credits represent available buffer space for incoming packets. Credits are spent when the client transmits a packet. Credits are returned by the hub 100 when a packet is forwarded upstream and buffer space becomes available. Packets are only forwarded to the hub 100 if the client has enough credits.
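A weighted round-robin pass with a simple credit check, as described above, can be sketched as follows. The flow names, weights, and credit accounting are illustrative assumptions, not the specified USB hub behavior.

```python
from collections import deque

# Minimal sketch of a weighted round-robin scheduler with a credit check.
# Each flow may forward up to `weight` packets per round, and only while it
# has credits (i.e., free upstream buffer space). Illustrative only.

def weighted_round_robin(queues, weights, credits):
    forwarded = []
    while any(queues.values()):
        progressed = False
        for flow, queue in queues.items():
            for _ in range(weights[flow]):
                if queue and credits[flow] > 0:
                    forwarded.append(queue.popleft())  # spend one credit per packet
                    credits[flow] -= 1
                    progressed = True
        if not progressed:   # remaining flows are out of credits
            break
    return forwarded

queues = {"A": deque(["a1", "a2", "a3"]), "B": deque(["b1", "b2", "b3"])}
order = weighted_round_robin(queues, {"A": 1, "B": 2}, {"A": 10, "B": 10})
```

With weights 1 and 2, flow B forwards two packets for every one of flow A's in each round, so B's packets drain roughly twice as fast while A still makes steady progress.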
It is to be understood that the schematic of
In the example illustrated in
For example, the weight may be assigned, at block 504, to data flows of asynchronous data. The weight assignment may enable data flows from multiple client devices to be forwarded simultaneously through multiple tiers of a networked environment. In some scenarios, the data flow is a first data flow, and the method 500 may include receiving any number of additional data flows at the hub. Weight may be assigned to all of the data flows based on their respective speed. The first data flow and all other data flows may be combined into an upstream data flow. The weight of the upstream data flow is equal to the sum of the weight of all of the combined data flows.
As another example, the hub may be a first hub, and the method 500 includes receiving the combined data flow at a second hub from the first hub. The second hub may be configured to receive a third data flow including packets. A weight may be assigned to the packets of the third data flow based on a speed of the third data flow. As discussed above in regard to
The weight may indicate how much of the bandwidth is allocated to a given data flow. For example, a higher weight results in a higher bandwidth allocation for a given data flow in comparison to a lower weight. It may be important to note that, in some scenarios, the weights are assigned to asynchronous data flows in a USB networked environment. Further, the method 500 may include redistribution of bandwidth as one or more devices are connected, or begin transmitting data flows to the networked environment. In addition, redistribution may occur when a device is disconnected from the networked environment. Further, redistribution may be delayed until all of the packets received by the hub and associated with a disconnected device are forwarded upstream by the hub.
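The connect/disconnect redistribution behavior described above can be sketched as follows. The class and its bookkeeping are illustrative assumptions: shares are recomputed immediately on connect, while on disconnect redistribution is deferred until the disconnected device's buffered packets have been forwarded upstream.

```python
# Sketch of deferred redistribution on disconnect. The Allocator class and
# its fields are hypothetical, chosen only to illustrate the behavior above.

class Allocator:
    def __init__(self):
        self.weights = {}   # flow -> weight (active, or still draining)
        self.pending = {}   # disconnected flow -> buffered packets remaining

    def connect(self, flow, weight):
        self.weights[flow] = weight            # redistribute immediately

    def disconnect(self, flow, buffered_packets):
        if buffered_packets > 0:
            self.pending[flow] = buffered_packets  # defer redistribution
        else:
            del self.weights[flow]

    def packet_forwarded(self, flow):
        if flow in self.pending:
            self.pending[flow] -= 1
            if self.pending[flow] == 0:        # buffers drained: redistribute
                del self.pending[flow]
                del self.weights[flow]

    def shares(self):
        total = sum(self.weights.values())
        return {f: w / total for f, w in self.weights.items()}

alloc = Allocator()
alloc.connect("A", 1)
alloc.connect("B", 2)
alloc.disconnect("B", buffered_packets=1)  # B keeps its share while draining
draining = alloc.shares()
alloc.packet_forwarded("B")                # last buffered packet sent upstream
after = alloc.shares()
```

While flow B is draining it retains its two-thirds share; once its last buffered packet is forwarded, the full bandwidth is redistributed to flow A.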
The various software components discussed herein may be stored on the tangible, non-transitory, computer-readable medium 600, as indicated in
Example 1 is an apparatus for bandwidth allocation in a networked environment. The apparatus includes a receiver to receive data flow comprising packets, a weight assignor to assign a weight to the packets based on the speed of the data flow, and an allocator to allocate bandwidth of an upstream link based on the weight assigned to the packets.
Example 2 incorporates the subject matter of Example 1. In this example, the data flow is a first data flow, the receiver is configured to receive a second data flow, and the weight assignor is to assign a weight to packets of the second data flow based on a speed of the second data flow. A combinor is further configured to combine the first data flow and the second data flow into a combined upstream data flow, wherein the weight of the upstream data flow is equal to the sum of the weight of the first data flow and the weight of the second data flow. A combinor, in some scenarios, may be a module, logic, or any other programmable mechanism.
Example 3 incorporates the subject matter of any combination of Examples 1-2. In this example, the receiver is to receive the combined data flow from a first hub, and receive a third data flow including packets at a second hub. The weight assignor is further configured to assign a weight to packets of the third data flow based on a speed of the third data flow. The combinor is also configured to combine the combined data flow and the third data flow into a combined upstream data flow, wherein the weight of the upstream data flow from the second hub is equal to the sum of the combined weight of the data flow from the first hub and the weight of the third data flow.
Example 4 incorporates the subject matter of any combination of Examples 1-3. In this example, the data flow comprises asynchronous data in a Universal Serial Bus networked environment.
Example 5 incorporates the subject matter of any combination of Examples 1-4. In this example, a higher weight assignment indicates a greater amount of upstream bandwidth allocated in comparison to a lower weight assignment.
Example 6 incorporates the subject matter of any combination of Examples 1-5. In this example, the allocator is to redistribute bandwidth as one or more devices having data flows are connected to the apparatus.
Example 7 incorporates the subject matter of any combination of Examples 1-6. In this example, the allocator is to delay redistribution when one or more devices having data flows are disconnected from the apparatus until all packets received by the apparatus and associated with the disconnected device are forwarded upstream by the apparatus.
Example 8 incorporates the subject matter of any combination of Examples 1-7. In this example, the apparatus is a downstream hub, and the allocation of bandwidth to the upstream link is provided from the downstream hub to an upstream hub.
Example 9 incorporates the subject matter of any combination of Examples 1-8. In this example, the apparatus is an upstream hub, and the logic is further configured to receive weighted packets from a downstream hub.
Example 10 incorporates the subject matter of any combination of Examples 1-9. In this example, the weight of the packets is preserved on the upstream link.
Example 11 is a method for bandwidth allocation in a networked environment. The method includes receiving a data flow comprising packets at the hub. The method includes assigning a weight to the packets based on the speed of the data flow, and allocating bandwidth of an upstream link based on the weight assigned to the packets.
Example 12 incorporates the subject matter of Example 11. In this example, the method includes receiving a second data flow at the hub, and assigning a weight to packets of the second data flow based on a speed of the second data flow. The method further includes combining the first data flow and the second data flow into a combined upstream data flow, wherein the weight of the upstream data flow is equal to the sum of the weight of the first data flow and the weight of the second data flow.
Example 13 incorporates the subject matter of any combination of Examples 11-12. In this example, the hub is a first hub, and the method includes receiving the combined data flow from the first hub, and receiving a third data flow including packets at a second hub. The method further includes assigning a weight to packets of the third data flow based on a speed of the third data flow. The method also includes combining the combined data flow and the third data flow into a combined upstream data flow, wherein the weight of the upstream data flow from the second hub is equal to the sum of the combined weight of the data flow from the first hub and the weight of the third data flow.
Example 14 incorporates the subject matter of any combination of Examples 11-13. In this example, the data flow comprises asynchronous data in a Universal Serial Bus networked environment.
Example 15 incorporates the subject matter of any combination of Examples 11-14. In this example, a higher weight assignment indicates a greater amount of upstream bandwidth allocated in comparison to a lower weight assignment.
Example 16 incorporates the subject matter of any combination of Examples 11-15. In this example, the method further includes redistributing bandwidth as one or more devices having data flows are connected to the hub.
Example 17 incorporates the subject matter of any combination of Examples 11-16. In this example, the method includes delaying redistribution when one or more devices having data flows are disconnected from the hub until all packets received by the hub and associated with the disconnected device are forwarded upstream by the hub.
Example 18 incorporates the subject matter of any combination of Examples 11-17. In this example, the hub is a downstream hub, and the allocation of bandwidth to the upstream link is provided from the downstream hub to an upstream hub.
Example 19 incorporates the subject matter of any combination of Examples 11-18. In this example, the hub is an upstream hub, and the method further includes receiving weighted packets from a downstream hub.
Example 20 incorporates the subject matter of any combination of Examples 11-19. In this example, the weight of the packets is preserved on the upstream link.
Example 21 is a computer-readable medium. The computer-readable medium may be a non-transitory computer-readable medium. The computer-readable medium may include code that, when executed, causes a processing device to perform the method of any combination of Examples 11-20.
Example 22 is a system for bandwidth allocation in a networked environment. The system includes a downstream hub having logic, at least partially including hardware logic. The logic of the downstream hub is configured to receive a data flow comprising packets at the downstream hub, and assign a weight to the packets based on the speed of the data flow. The logic of the downstream hub is further configured to allocate bandwidth of an upstream link based on the weight assigned to the packets. The system further includes an upstream hub. The upstream hub includes logic, at least partially including hardware logic. The logic of the upstream hub is configured to receive the data flow from the downstream hub, and identify the assigned weight of the data flow from the downstream hub. The logic of the upstream hub is further configured to combine the data flow from the downstream hub with any additional data flow received at the upstream hub.
Example 23 incorporates the subject matter of Example 22. In this example, the data flow at the downstream hub is a first data flow, and the logic of the downstream hub is to receive a second data flow at the downstream hub. The logic of the downstream hub is further configured to assign a weight to packets of the second data flow at the downstream hub based on a speed of the second data flow, and combine the first data flow and the second data flow into a combined upstream data flow, wherein the weight of the upstream data flow is equal to the sum of the weight of the first data flow and the weight of the second data flow.
Example 24 incorporates the subject matter of any combination of Examples 22-23. In this example, the logic of the upstream hub is to receive a third data flow comprising packets at the upstream hub, and assign a weight to packets of the third data flow based on a speed of the third data flow. The logic of the upstream hub is further configured to combine the data flow combined at the downstream hub and the third data flow into a new combined upstream data flow at the upstream hub, wherein the weight of the new upstream data flow from the upstream hub is equal to the sum of the combined weight of the data flow combined at the downstream hub and the weight of the third data flow.
Example 25 incorporates the subject matter of any combination of Examples 22-24. In this example, the data flow comprises asynchronous data in a Universal Serial Bus networked environment.
Example 26 incorporates the subject matter of any combination of Example 22-25. In this example, a higher weight assignment indicates a greater amount of upstream bandwidth allocated in comparison to a lower weight assignment.
Example 27 incorporates the subject matter of any combination of Example 22-26. In this example, the logic of the upstream hub is to redistribute bandwidth as one or more devices having data flows are connected to the upstream hub.
Example 28 incorporates the subject matter of any combination of Example 22-27. In this example, the logic of the upstream hub is to delay redistribution when one or more devices having data flows are disconnected from the upstream hub until all packets received by the upstream hub and associated with the disconnected device are forwarded upstream by the upstream hub.
Example 29 incorporates the subject matter of any combination of Example 22-28. In this example, the logic of the downstream hub is to redistribute bandwidth as one or more devices having data flows are connected to the downstream hub.
Example 30 incorporates the subject matter of any combination of Example 22-29. In this example, the logic of the downstream hub is to delay redistribution when one or more devices having data flows are disconnected from the downstream hub until all packets received by the downstream hub and associated with the disconnected device are forwarded upstream by the downstream hub.
Example 31 is an apparatus for bandwidth allocation in a networked environment. The apparatus includes a means to receive a data flow comprising packets at the hub, and a means to assign a weight to the packets based on the speed of the data flow. The apparatus further includes a means to allocate bandwidth of an upstream link based on the weight assigned to the packets.
Example 32 incorporates the subject matter of Example 31. In this example, the data flow is a first data flow. The apparatus further includes a means to receive a second data flow at the hub, assign a weight to packets of the second data flow based on a speed of the second data flow, and combine the first data flow and the second data flow into a combined upstream data flow, wherein the weight of the upstream data flow is equal to the sum of the weight of the first data flow and the weight of the second data flow.
Example 33 incorporates the subject matter of any combination of Examples 31-32. In this example, the hub is a first hub. The apparatus further includes a means to receive the combined data flow from the first hub, receive a third data flow comprising packets at a second hub, and assign a weight to packets of the third data flow based on a speed of the third data flow. The means may be further configured to combine the combined data flow and the third data flow into a combined upstream data flow, wherein the weight of the upstream data flow from the second hub is equal to the sum of the combined weight of the data flow from the first hub and the weight of the third data flow.
Example 34 incorporates the subject matter of any combination of Examples 31-33. In this example, the data flow comprises asynchronous data in a Universal Serial Bus networked environment.
Example 35 incorporates the subject matter of any combination of Examples 31-34. In this example a higher weight assignment indicates a greater amount of upstream bandwidth allocated in comparison to a lower weight assignment.
Example 36 incorporates the subject matter of any combination of Examples 31-35. In this example, the apparatus further includes a means to redistribute bandwidth as one or more devices having data flows are connected to the hub.
Example 37 incorporates the subject matter of any combination of Examples 31-36. In this example, the apparatus further includes a means to delay redistribution when one or more devices having data flows are disconnected from the hub until all packets received by the hub and associated with the disconnected device are forwarded upstream by the hub.
Example 38 incorporates the subject matter of any combination of Examples 31-37. In this example, the hub is a downstream hub, and wherein the allocation of bandwidth to the upstream link is provided from the downstream hub to an upstream hub.
Example 39 incorporates the subject matter of any combination of Examples 31-38. In this example, the hub is an upstream hub, and the apparatus further includes a means to receive weighted packets from a downstream hub.
Example 40 incorporates the subject matter of any combination of Examples 31-39. In this example, the weight of the packets is preserved on the upstream link.
Example 41 is a computer-readable medium. The computer-readable medium may be a non-transitory computer-readable medium. The computer-readable medium may include code that, when executed, causes a processing device to receive a data flow comprising packets at the hub, assign a weight to the packets based on the speed of the data flow, and allocate bandwidth of an upstream link based on the weight assigned to the packets.
Example 42 incorporates the subject matter of Example 41. In this example, the data flow is a first data flow, and the code, when executed, is to cause the processing device to receive a second data flow at the hub, assign a weight to packets of the second data flow based on a speed of the second data flow, and combine the first data flow and the second data flow into a combined upstream data flow, wherein the weight of the upstream data flow is equal to the sum of the weight of the first data flow and the weight of the second data flow.
Example 43 incorporates the subject matter of any combination of Examples 41-42. In this example, the hub is a first hub, and the code, when executed, is to cause the processing device to receive the combined data flow from the first hub, receive a third data flow comprising packets at a second hub, and assign a weight to packets of the third data flow based on a speed of the third data flow. The code, when executed, is to further cause the processing device to combine the combined data flow and the third data flow into a combined upstream data flow, wherein the weight of the upstream data flow from the second hub is equal to the sum of the combined weight of the data flow from the first hub and the weight of the third data flow.
Example 44 incorporates the subject matter of any combination of Examples 41-43. In this example, the data flow comprises asynchronous data.
Example 45 incorporates the subject matter of any combination of Examples 41-44. In this example, the code, when executed, is to cause the processing device to redistribute bandwidth as one or more devices having data flows are connected to the hub.
Example 46 incorporates the subject matter of any combination of Examples 41-45. In this example, the code, when executed, is to cause the processing device to delay redistribution when one or more devices having data flows are disconnected from the hub until all packets received by the hub and associated with the disconnected device are forwarded upstream by the hub.
Example 47 incorporates the subject matter of any combination of Examples 41-46. In this example, the hub is a downstream hub, and wherein the allocation of bandwidth to the upstream link is provided from the downstream hub to an upstream hub.
Example 48 incorporates the subject matter of any combination of Examples 41-47. In this example, the code, when executed, is to cause the processing device to receive weighted packets from a downstream hub.
Example 49 incorporates the subject matter of any combination of Examples 41-48. In this example, the weight of the packets is preserved on the upstream link.
Example 50 incorporates the subject matter of any combination of Examples 41-49. In this example, the data flow comprises asynchronous data in a Universal Serial Bus networked environment.
As used herein, an embodiment is an implementation or example. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” “various embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the present techniques. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments.
Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.
It is to be noted that, although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments.
In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
It is to be understood that specifics in the aforementioned examples may be used anywhere in one or more embodiments. For instance, all optional features of the computing device described above may also be implemented with respect to either of the methods or the computer-readable medium described herein. Furthermore, although flow diagrams and/or state diagrams may have been used herein to describe embodiments, the techniques are not limited to those diagrams or to corresponding descriptions herein. For example, flow need not move through each illustrated box or state or in exactly the same order as illustrated and described herein.
The present techniques are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present techniques. Accordingly, it is the following claims including any amendments thereto that define the scope of the present techniques.
The present application claims priority to U.S. Provisional Patent Application No. 61/907,688, filed Nov. 22, 2013, which is incorporated herein by reference.