METHODS AND APPARATUS FOR PERFORMANCE SCALING WITH PARALLEL PROCESSING OF SLIDING WINDOW MANAGEMENT ON MULTI-CORE ARCHITECTURE

Information

  • Patent Application
  • Publication Number
    20230300075
  • Date Filed
    July 20, 2021
  • Date Published
    September 21, 2023
Abstract
Methods, apparatus, and articles of manufacture are disclosed for performance scaling with parallel processing of sliding window management on multi-core architecture. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to at least one of execute or instantiate the instructions to partition a packet flow into two or more sub flows based on a packet flow distribution configuration, the two or more sub flows associated respectively with two or more sliding windows that are able to slide in parallel, provide the two or more sub flows to a buffer to schedule distribution of the two or more sub flows, dequeue the two or more sub flows from the buffer to one or more hardware cores, and transmit the two or more sub flows to a destination device.
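The pipeline the abstract describes (partition a flow into sub flows, buffer them, dequeue to cores, transmit) can be sketched in a few lines of Python. Everything here is illustrative: the function names, the round-robin choice, and the modeling of hardware cores as plain lists are assumptions for the sketch, not the patented implementation.

```python
from collections import deque

def partition_flow(packets, num_sub_flows):
    """Round-robin a packet flow into sub flows, each of which would own
    its own sliding window able to advance independently of the others.
    (Round robin is one distribution the claims name; random is the other.)"""
    sub_flows = [[] for _ in range(num_sub_flows)]
    for i, pkt in enumerate(packets):
        sub_flows[i % num_sub_flows].append(pkt)
    return sub_flows

def schedule_and_dequeue(sub_flows, cores):
    """Enqueue sub flows into a buffer, then dequeue each one to a core.
    Cores are modeled as dictionary keys; a real scheduler would consult
    per-core utilization before dequeuing (see claims 5, 14, and 41)."""
    buffer = deque(sub_flows)
    assignments = {core: [] for core in cores}
    turn = 0
    while buffer:
        core = cores[turn % len(cores)]
        assignments[core].append(buffer.popleft())
        turn += 1
    return assignments
```

With six packets split two ways, `partition_flow(list(range(6)), 2)` yields `[[0, 2, 4], [1, 3, 5]]`, and each sub flow is then dequeued to a different core.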
Claims
  • 1. An apparatus for sliding window management of network packets, the apparatus comprising: at least one memory; instructions in the apparatus; and processor circuitry to at least one of execute or instantiate the instructions to: partition a packet flow into two or more sub flows based on a packet flow distribution configuration, the two or more sub flows associated respectively with two or more sliding windows that are able to slide in parallel; provide the two or more sub flows to a buffer to schedule distribution of the two or more sub flows; dequeue the two or more sub flows from the buffer to one or more hardware cores; and transmit the two or more sub flows to a destination device.
  • 2. The apparatus of claim 1, wherein the processor circuitry is to: update the two or more sliding windows with data included in the two or more sub flows; and slide the two or more sliding windows in response to a window threshold being satisfied.
  • 3. The apparatus of claim 2, wherein the two or more sliding windows include a first sliding window and a second sliding window, and the processor circuitry is to: determine a first slide rate associated with the first sliding window; determine a second slide rate associated with the second sliding window; and identify a network attack in response to a determination that the first slide rate is greater than the second slide rate.
  • 4. The apparatus of claim 1, wherein the processor circuitry is to: determine that the packet flow distribution configuration is indicative of a round robin distribution or a random distribution; and partition the packet flow based on the round robin distribution or the random distribution.
  • 5. The apparatus of claim 1, wherein the one or more hardware cores include a first hardware core and a second hardware core, and the processor circuitry is to: identify the first hardware core as available based on a first utilization of the first hardware core; identify the second hardware core as unavailable based on a second utilization of the second hardware core, the second utilization greater than the first utilization; and dequeue the two or more sub flows from the buffer to the first hardware core in response to the identification of the first hardware core as available.
  • 6. The apparatus of claim 5, wherein the processor circuitry is to: dequeue the two or more sub flows from the buffer to an assigned sequence number space of the first hardware core; and cause the first hardware core to provide the two or more sub flows to a transmit sequence number space of a transmitter, the transmitter to transmit the two or more sub flows from the transmit sequence number space to the destination device.
  • 7. The apparatus of claim 1, wherein the packet flow is a second packet flow to be processed after a first packet flow, and the processor circuitry is to: determine a quantity of the two or more sub flows; determine a first flow identifier of the first packet flow; determine a second flow identifier of the second packet flow; determine a third flow identifier of a first one of the two or more sub flows based on a first sum of (1) a multiplication of the quantity of the two or more sub flows and the first flow identifier and (2) a modulo of the quantity of the two or more sub flows and a second sum of the second flow identifier and a constant value; and partition the second packet flow based on the third flow identifier.
  • 8. The apparatus of claim 1, wherein the processor circuitry is to partition a primary window into the two or more sliding windows based on a modulo of a sequence number of the packet flow and a quantity of the two or more sliding windows.
  • 9. The apparatus of claim 1, wherein at least one of the processor circuitry is included in a first accelerated network device, the one or more hardware cores are included in a second accelerated network device, or the destination device is a third accelerated network device.
  • 10. An apparatus for sliding window management of network packets, the apparatus comprising: means for partitioning a packet flow into two or more sub flows based on a packet flow distribution configuration; means for providing to: provide the two or more sub flows to a buffer to schedule distribution of the two or more sub flows, the two or more sub flows associated respectively with two or more sliding windows that are able to slide in parallel; and dequeue the two or more sub flows from the buffer to one or more hardware cores; and means for transmitting the two or more sub flows to a destination device.
  • 11. The apparatus of claim 10, further including: means for updating the two or more sliding windows with data included in the two or more sub flows; and means for sliding the two or more sliding windows in response to a window threshold being satisfied.
  • 12. The apparatus of claim 11, wherein the two or more sliding windows include a first sliding window and a second sliding window, and further including: the means for providing to: determine a first slide rate associated with the first sliding window; and determine a second slide rate associated with the second sliding window; and means for identifying a network attack in response to a determination that the first slide rate is greater than the second slide rate.
  • 13. The apparatus of claim 10, further including: means for determining that the packet flow distribution configuration is indicative of a round robin distribution or a random distribution; and the means for partitioning to partition the packet flow based on the round robin distribution or the random distribution.
  • 14. The apparatus of claim 10, wherein the one or more hardware cores include a first hardware core and a second hardware core, and the means for providing is to: identify the first hardware core as available based on a first utilization of the first hardware core; and identify the second hardware core as unavailable based on a second utilization of the second hardware core, the second utilization greater than the first utilization; and dequeue the two or more sub flows from the buffer to the first hardware core in response to the identification of the first hardware core as available.
  • 15. The apparatus of claim 14, wherein the means for providing is to: dequeue the two or more sub flows from the buffer to an assigned sequence number space of the first hardware core; and cause the first hardware core to provide the two or more sub flows to a transmit sequence number space of the means for transmitting, the means for transmitting to transmit the two or more sub flows from the transmit sequence number space to the destination device.
  • 16-18. (canceled)
  • 19. At least one computer readable medium comprising instructions that, when executed, cause processor circuitry to at least: partition a packet flow into two or more sub flows based on a packet flow distribution configuration; provide the two or more sub flows to a buffer to schedule distribution of the two or more sub flows, the two or more sub flows associated respectively with two or more sliding windows that are able to slide in parallel; dequeue the two or more sub flows from the buffer to one or more hardware cores; and transmit the two or more sub flows to a destination device.
  • 20-24. (canceled)
  • 25. The at least one computer readable medium of claim 19, wherein the packet flow is a second packet flow to be processed after a first packet flow, and the instructions, when executed, cause the processor circuitry to: determine a quantity of the two or more sub flows; determine a first flow identifier of the first packet flow; determine a second flow identifier of the second packet flow; determine a third flow identifier of a first one of the two or more sub flows based on a first sum of (1) a multiplication of the quantity of the two or more sub flows and the first flow identifier and (2) a modulo of the quantity of the two or more sub flows and a second sum of the second flow identifier and a constant value; and partition the second packet flow based on the third flow identifier.
  • 26. The at least one computer readable medium of claim 19, wherein the instructions, when executed, cause the processor circuitry to partition a primary window into the two or more sliding windows based on a modulo of a sequence number of the packet flow and a quantity of the two or more sliding windows.
  • 27. The at least one computer readable medium of claim 19, wherein at least one of the processor circuitry is included in a first accelerated network device, the one or more hardware cores are included in a second accelerated network device, or the destination device is a third accelerated network device.
  • 28-36. (canceled)
  • 37. A method for sliding window management of network packets, the method comprising: partitioning a packet flow into two or more sub flows based on a packet flow distribution configuration; providing the two or more sub flows to a buffer to schedule distribution of the two or more sub flows, the two or more sub flows associated respectively with two or more sliding windows that are able to slide in parallel; dequeuing the two or more sub flows from the buffer to one or more hardware cores; and transmitting the two or more sub flows to a destination device.
  • 38. The method of claim 37, further including: updating the two or more sliding windows with data included in the two or more sub flows; and sliding the two or more sliding windows in response to a window threshold being satisfied.
  • 39. The method of claim 38, wherein the two or more sliding windows include a first sliding window and a second sliding window, and further including: determining a first slide rate associated with the first sliding window; determining a second slide rate associated with the second sliding window; and identifying a network attack in response to a determination that the first slide rate is greater than the second slide rate.
  • 40. (canceled)
  • 41. The method of claim 37, wherein the one or more hardware cores include a first hardware core and a second hardware core, and further including: identifying the first hardware core as available based on a first utilization of the first hardware core; identifying the second hardware core as unavailable based on a second utilization of the second hardware core, the second utilization greater than the first utilization; and dequeuing the two or more sub flows from the buffer to the first hardware core in response to the identification of the first hardware core as available.
  • 42. The method of claim 41, further including: dequeuing the two or more sub flows from the buffer to an assigned sequence number space of the first hardware core; and causing the first hardware core to provide the two or more sub flows to a transmit sequence number space of a transmitter, the transmitter to transmit the two or more sub flows from the transmit sequence number space to the destination device.
  • 43. The method of claim 37, wherein the packet flow is a second packet flow to be processed after a first packet flow, and further including: determining a quantity of the two or more sub flows; determining a first flow identifier of the first packet flow; determining a second flow identifier of the second packet flow; determining a third flow identifier of a first one of the two or more sub flows based on a first sum of (1) a multiplication of the quantity of the two or more sub flows and the first flow identifier and (2) a modulo of the quantity of the two or more sub flows and a second sum of the second flow identifier and a constant value; and partitioning the second packet flow based on the third flow identifier.
  • 44. (canceled)
  • 45. (canceled)
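Claims 3, 12, and 39 recite attack detection by comparing the slide rates of two parallel windows. A hedged sketch of that comparison; the rate computation over a fixed observation interval is an assumption, and the function names are invented for illustration:

```python
def slide_rate(slide_count, interval_s):
    """Slides per second observed for one sliding window."""
    return slide_count / interval_s

def detect_attack(first_slides, second_slides, interval_s):
    """Per claim 3: flag a possible network attack when the first window's
    slide rate exceeds the second's. A production detector would likely
    require the gap to exceed a margin or persist over time; the bare
    comparison here mirrors the claim language."""
    return slide_rate(first_slides, interval_s) > slide_rate(second_slides, interval_s)
```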
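Claims 5, 14, and 41 route sub flows to whichever core is identified as available based on relative utilization. A minimal sketch of that selection, with the dictionary shape and function name assumed (the claims compare two cores; picking the least-utilized core generalizes that comparison):

```python
def pick_available_core(utilizations):
    """Given per-core utilization (0.0-1.0), return the core to dequeue to.
    Mirrors claims 5/14/41: the core whose utilization is lower than its
    busier peer is the 'available' one."""
    return min(utilizations, key=utilizations.get)
```

For example, with `{"core0": 0.25, "core1": 0.90}` the scheduler would dequeue to `core0`.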
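Claims 7, 25, and 43 recite an arithmetic rule for deriving a sub-flow identifier. One plausible reading as Python follows; note the claim text does not fix the operand order of the modulo or the value of the constant, so both are interpretations here, and the function name is invented:

```python
def derive_sub_flow_id(num_sub_flows, first_flow_id, second_flow_id, constant=1):
    # First term: quantity of sub flows multiplied by the first flow identifier.
    # Second term: (second flow identifier + constant) modulo the sub-flow quantity.
    # Their sum is the third (sub-flow) identifier recited in the claim.
    return num_sub_flows * first_flow_id + (second_flow_id + constant) % num_sub_flows
```

Under this reading, with 4 sub flows, a first flow identifier of 2, and a second flow identifier of 5, the derived identifier is 4*2 + (5+1) % 4 = 10.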
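Claims 8 and 26 partition a primary window into the parallel sliding windows via a modulo of the packet sequence number and the window count. A minimal sketch (function name assumed):

```python
def assign_window(sequence_number, num_windows):
    """Map a packet's sequence number to one of the parallel sliding
    windows carved out of the primary window (the modulo rule of claim 8).
    Consecutive sequence numbers fan out evenly, so each window slides
    over a strided subset of the sequence number space."""
    return sequence_number % num_windows
```

With 4 windows, sequence numbers 0-7 map to windows 0, 1, 2, 3, 0, 1, 2, 3, letting all four windows update and slide in parallel.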
PCT Information
Filing Document: PCT/US2021/042346
Filing Date: 7/20/2021
Country/Kind: WO
Provisional Applications (1)
Number: 63054106
Date: Jul 2020
Country: US