Methods, systems, and computer readable media for implementing bandwidth limitations on specific application traffic at a proxy element

Information

  • Patent Grant
  • Patent Number
    11,716,313
  • Date Filed
    Thursday, December 3, 2020
  • Date Issued
    Tuesday, August 1, 2023
Abstract
Methods, systems, and computer readable media for implementing bandwidth limitations on specific application traffic at a proxy element are disclosed. One exemplary method includes receiving, at a proxy element, a packet flow from at least one source client, identifying encrypted packets associated with a specific application traffic type from among the packet flow, and directing the identified encrypted packets to a bandwidth limiter in the proxy element. The method further includes applying a bandwidth limitation operation to the identified encrypted packets and decrypting the identified encrypted packets if an accumulated amount of payload bytes of the identified encrypted packets complies with the parameters of the bandwidth limitation operation.
Description
TECHNICAL FIELD

The subject matter described herein relates to the monitoring and controlling of transmission control protocol (TCP) packet traffic. More particularly, the subject matter described herein relates to methods, systems, and computer readable media for implementing bandwidth limitations on specific application traffic at a proxy element.


BACKGROUND

Network elements, such as transport layer security (TLS) proxy elements, intercept application traffic traversing TCP connections and may forward the traffic after some level of processing. For such network elements, one measurement of performance is the rate at which the TLS proxy can monitor and decrypt bytes of data communicated between endpoints (e.g., a client device and a server device) serviced by the TLS proxy.


While such a TLS proxy-based monitoring operation is generally feasible, the associated processing requirements can be very demanding. Notably, the proxy element must be able to first terminate and decrypt packet flows for monitoring purposes and then subsequently re-encrypt and transmit the packet flows to a destination server in real time or near real time. Additional problems pertaining to TLS proxy-based monitoring arise when the volume of traffic traversing the monitoring proxy element exceeds the decrypt and/or re-encrypt processing capabilities of the proxy element. In some monitoring proxy solutions utilized by network operators, the proxy element may be configured to drop received packets and/or packet flows in the event the volume of traffic traversing the proxy element exceeds the proxy element's processing capacity. This solution is highly undesirable in situations where the monitoring proxy element is being used to implement network security policies. In such instances, packet monitoring systems are incapable of sufficiently responding to temporary surges or transients in monitored traffic volumes received as ingress packet traffic by the proxy element.


Accordingly, in light of these difficulties associated with conventional solutions, there exists a need for methods, systems, and computer readable media for implementing bandwidth limitations on specific application traffic at a proxy element.


SUMMARY

According to one aspect, the subject matter described herein includes a method for implementing bandwidth limitations on specific application traffic at a proxy element. The method includes receiving, at a proxy element, a packet flow from at least one source client, identifying encrypted packets associated with a specific application traffic type from among the packet flow, and directing the identified encrypted packets to a bandwidth limiter in the proxy element. The method further includes applying a bandwidth limitation operation to the identified encrypted packets and decrypting the identified encrypted packets if an accumulated amount of payload bytes of the identified encrypted packets complies with the parameters of the bandwidth limitation operation.


According to another aspect, the subject matter described herein includes a system for implementing bandwidth limitations on specific application traffic at a proxy element. The system includes a monitoring engine in the proxy element for receiving a packet flow from at least one source client and identifying encrypted packets associated with a specific application traffic type from among the packet flow. The system further includes a bandwidth limiter in the proxy element for receiving the identified encrypted packets, applying a bandwidth limitation operation to the identified encrypted packets, and decrypting the identified encrypted packets if an accumulated amount of payload bytes of the identified encrypted packets complies with the parameters of the bandwidth limitation operation.


As used herein, the terms “session” and “connection” are used interchangeably and refer to communication between two entities via a telecommunication network, which may be a packet network, including networks that use the TCP protocol. For example, a session may be communication between an application on a first node (e.g., a client device) and an application on a second node (e.g., a destination application server), where the first application is identified by a first IP address and TCP port number combination and the second application is identified by a second IP address and TCP port number combination. A request to initiate a session may be referred to as a request to make a connection, or “a connection request.” It will be understood by persons of ordinary skill in the art that two hardware nodes may communicate multiple sessions across the same physical layer connection, e.g., a network cable may support multiple sessions simultaneously. As used herein, the term “active session” refers to a session that is connecting, transmitting and/or receiving data, or disconnecting.


The subject matter described herein can be implemented in software in combination with hardware and/or firmware. For example, the subject matter described herein can be implemented in software executed by a processor. In one exemplary implementation, the subject matter described herein can be implemented using a non-transitory computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory computer-readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.





BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the subject matter described herein will now be explained with reference to the accompanying drawings, wherein like reference numerals represent like parts, of which:



FIG. 1 is a block diagram illustrating an exemplary system for implementing bandwidth limitations on specific application traffic at a proxy element according to an embodiment of the subject matter described herein;



FIG. 2 is a block diagram illustrating an exemplary monitoring engine for implementing bandwidth limitations on specific application traffic at a proxy element according to an embodiment of the subject matter described herein;



FIG. 3 is a signaling diagram illustrating exemplary signaling messages communicated by a TLS proxy element to control window sizing for implementing packet traffic throttling according to an embodiment of the subject matter described herein; and



FIG. 4 is a flow chart illustrating an exemplary process for implementing bandwidth limitations on specific application traffic at a proxy element according to an embodiment of the subject matter described herein.





DETAILED DESCRIPTION

In accordance with the subject matter disclosed herein, systems, methods, and computer readable media for implementing bandwidth limitations on specific application traffic at a proxy element are provided. In some embodiments, the proxy element is a TLS proxy hardware device that is provisioned with a software stack that is adapted to throttle bandwidth throughput associated with TLS decryption operations for a customer. Notably, the software stack may be configured to execute the decryption or encryption operations at various bandwidth throughput levels. Further, the TLS proxy element is configured to throttle connections that are subjected to the decryption and encryption operations without discarding packets or passing uninspected packets. In some embodiments, the TLS proxy includes a monitoring engine that is configured to independently manipulate the TCP stacks that facilitate communications with a source client and a destination server served by the TLS proxy in the manner described below. Although the following description describes the proxy element as a TLS proxy device, it is understood that the proxy element can comprise a secure socket layer (SSL) proxy device or any other device that is configured to function as a proxy element for any other specific application traffic without departing from the scope of the disclosed subject matter.


Reference will now be made in detail to exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.



FIG. 1 includes a proxy element 104 (e.g., a TLS proxy device) that is communicatively connected to a source client 102 and a destination server 106. Proxy element 104 may include one or more processors 105, such as a central processing unit (e.g., a single core or multiple processing cores), a microprocessor, a microcontroller, a network processor, an application-specific integrated circuit (ASIC), or the like. Proxy element 104 may also include memory 107. Memory 107 may comprise random access memory (RAM), flash memory, a magnetic disk storage drive, and the like. In some embodiments, memory 107 may be configured to store a monitoring engine 108 and a packet copier 112. Monitoring engine 108 comprises a number of components including a bandwidth limiter 116, throttle manager 118, decryption manager 120, encryption manager 122, and agent manager 110. Notably, each of bandwidth limiter 116, throttle manager 118, decryption manager 120, encryption manager 122, and agent manager 110 can perform various monitoring, management, communication, and/or throttling functionalities for proxy element 104 when executed by processor(s) 105.


In some embodiments, monitoring engine 108 comprises a software stack that performs the throttling functionalities for proxy element 104. As described in greater detail below, monitoring engine 108 utilizes bandwidth limiter 116 to monitor ingress TCP packet traffic and to determine whether the packet traffic is to be subjected to decryption operations or encryption operations. For example, bandwidth limiter 116 in monitoring engine 108 can be configured to temporarily suspend a current decryption operation (or encryption operation) if the count of the total number of payload bytes received during the current time interval period exceeds the maximum payload byte count threshold. In such instances, bandwidth limiter 116 suspends further decryption operations on the received TCP payload bytes and stores the ingress packet traffic in a buffer and/or receiving agent 130. In particular, bandwidth limiter 116 only throttles a specific application traffic type received by proxy element 104. As such, the overall amount of packet traffic received by proxy element 104 is unaffected by the bandwidth limiter. Additional explanation of the bandwidth limitation operations is described below and in FIG. 2.


Throttle manager 118 is a component of monitoring engine 108 that is responsible for measuring the throughput of ingress TCP packet traffic and determining whether the volume of ingress flow received by proxy element 104 should be reduced or increased. Decryption manager 120 is a component that is responsible for executing a decryption operation on ingress encrypted traffic received by proxy element 104. Similarly, encryption manager 122 is a component that is responsible for executing an encryption operation on outgoing packet traffic that had been previously received by proxy element 104 and decrypted by decryption manager 120.


Agent manager 110 in monitoring engine 108 includes a receiving agent 130 and a transmitting agent 132. Specifically, agent manager 110 is responsible for the creation and management of receiving agent 130 and transmitting agent 132. In some embodiments, each of receiving agent 130 and transmitting agent 132 comprises a virtual client agent (or buffer element) that is responsible for either receiving packet traffic at or sending packet traffic from proxy element 104. Further, each of receiving agent 130 and transmitting agent 132 may be supported by an underlying hardware component, such as a network interface controller (NIC).


Packet copier 112 is a proxy element component that is responsible for capturing and copying packet traffic that has been decrypted by decryption manager 120. Notably, packet copier 112 may be configured to generate a copy of the decrypted packet traffic and subsequently forward the copy to a local or out-of-band network tool 114, which can be adapted to conduct various packet analysis operations. After the packet traffic has been copied, monitoring engine 108 may then utilize encryption manager 122 to re-encrypt the previously decrypted TLS packet traffic. Such encrypted TLS packet traffic is then provided to transmitting agent 132, which then directs the encrypted TLS packet traffic to destination server 106 in a transparent manner.



FIG. 1 further illustrates proxy element 104 being positioned in between a source client 102 and destination server 106. Source client 102 may include any user equipment (UE) device, such as a mobile smartphone, a personal computer, a laptop computer, or the like. Destination server 106 may include an application server that hosts a service application (e.g., a web server, an email server, etc.) accessible to source client 102 via a network connection that is serviced by proxy element 104. In particular, source client 102 may communicate TCP packet traffic flows to destination server 106. These packet traffic flows may include both clear text packet traffic and encrypted packet traffic.


In some monitoring scenarios, packet flows associated with a communication session conducted by source client 102 and destination server 106 may begin using unencrypted communication protocols and continue in this manner for a period of time. Moreover, the communication session may traverse proxy element 104, which is positioned between source client 102 and destination server 106. After some time, the packet communications may be secured using, for example, SSL or TLS (e.g., via HTTPS). As such, the packet communications may transition from an unsecured/non-encrypted packet flow to a secured/encrypted packet flow during the course of the established communication session as described in greater detail below.


In some embodiments, monitoring engine 108 is configured to inspect the communication packet traffic flows for SYN and SYN-ACK messages that are communicated between source client 102 and destination server 106. Consequently, monitoring engine 108 is able to detect the beginning of every TCP session that traverses proxy element 104. For example, after the TCP sessions are established, monitoring engine 108 is configured to inspect the initial ingress packets in an attempt to detect a ‘client hello’ message that indicates that an SSL session or TLS session is to be initiated. After determining that an SSL session or TLS session is to be established, monitoring engine 108 may initiate a process that triggers proxy element 104 to proxy the session connection between source client 102 and destination server 106.
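The SYN and SYN-ACK inspection described above reduces to testing two flag bits in the TCP header. The following is a minimal illustrative sketch in Python; the function name and the raw-header input format are assumptions for illustration and are not part of the disclosed subject matter:

```python
import struct

# TCP flag bit masks (RFC 793): SYN alone marks a connection attempt;
# SYN together with ACK marks the server's reply in the three-way handshake.
TCP_SYN = 0x02
TCP_ACK = 0x10

def classify_handshake(tcp_header: bytes) -> str:
    """Classify a raw TCP header as 'SYN', 'SYN-ACK', or 'other'.

    The flag bits live in the low bits of the 16-bit field at byte
    offset 12 (data offset / reserved / flags).
    """
    offset_flags = struct.unpack("!H", tcp_header[12:14])[0]
    flags = offset_flags & 0x01FF  # keep the 9 flag bits
    if flags & TCP_SYN:
        return "SYN-ACK" if flags & TCP_ACK else "SYN"
    return "other"
```

A monitoring engine observing a SYN followed by a SYN-ACK on the same four-tuple would treat that as the start of a new TCP session to track.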


For example, after identifying and/or detecting the initiation of an SSL or TLS session, monitoring engine 108 is configured to establish a separate TLS session with each of source client 102 and destination server 106. For example, proxy element 104 may receive a ‘client hello’ message sent by source client 102 that is directed to destination server 106. In response, proxy element 104 processes the received ‘client hello’ message, transparently functions as destination server 106, and completes a TLS handshake process to establish a TLS session between source client 102 and proxy element 104. In addition, proxy element 104 initiates its own TLS handshake procedure by sending a ‘client hello’ message to destination server 106. Proxy element 104 and destination server 106 complete the TLS handshake procedure and establish a second TLS session. By establishing a separate and transparent TLS session (or SSL session) with each of source client 102 and destination server 106, proxy element 104 functions as a man-in-the-middle entity on a network trunk line. After the two separate TLS sessions are established by proxy element 104, monitoring engine 108 in proxy element 104 begins to receive a number of encrypted TCP payload bytes over the SSL or TLS session.


Prior to deployment, bandwidth limiter 116 in proxy element 104 is configured with a maximum throughput rate (e.g., a maximum bit rate) from which the bandwidth limiter computes a plurality of bandwidth limitation parameters. For example, after having a maximum throughput rate designated, proxy element 104 can be configured by bandwidth limiter 116 with a time value that represents a ‘monitoring time slot interval’. For example, the monitoring time slot interval can represent a designated period of time over which monitoring engine 108 and/or bandwidth limiter 116 will count the number of bytes (e.g., TCP payload bytes) that are received from a source client. Notably, the monitoring time slot interval can be set or designated to be equal to any period of time or duration (e.g., 200 milliseconds). Likewise, proxy element 104 can also be provisioned with a predefined ‘maximum payload byte count threshold’. The maximum payload byte count threshold represents the maximum number of bytes that can be processed during a monitoring time slot interval. The maximum payload byte count threshold can be designated in the proxy element to be equal to any amount or size (e.g., 100 MB, 200 MB, etc.). In some embodiments, a customer user may configure bandwidth limiter 116 with a maximum bit rate. Based on this configured bit rate, bandwidth limiter 116 can derive the maximum payload byte count threshold (i.e., a maximum byte count) using a predefined monitoring time slot interval that is fixed at 200 milliseconds.


Configured with both the monitoring time slot interval and the maximum payload byte count threshold, bandwidth limiter 116 is able to establish a throughput rate (e.g., a licensed throughput rate) by which the proxy element 104 conducts system operations (e.g., decryption operations and encryption operations). For example, by establishing the duration of the monitoring time slot interval to be equal to 200 milliseconds and the maximum payload byte count threshold to be equal to 200 megabytes, proxy element 104 can establish or set a bandwidth throughput rate of 8 gigabits (Gb) per second (equal to 1 gigabyte per second, i.e., 1 GB/s) for bandwidth limiter 116. If another bandwidth throughput rate (e.g., 2 Gb/s, 4 Gb/s, etc.) needs to be established for a particular TLS proxy device, the bandwidth limiter can appropriately adjust one or both of the monitoring time slot interval and the maximum payload byte count threshold to configure the desired bandwidth throughput rate. After the monitoring time slot interval and the maximum payload byte count threshold are established, proxy element 104 can be deployed in an operator's network.
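The arithmetic relating the configured maximum bit rate, the monitoring time slot interval, and the derived maximum payload byte count threshold can be sketched as follows. The helper name is illustrative; the patent text does not prescribe an implementation:

```python
def derive_max_byte_count(max_bit_rate_bps: float,
                          interval_ms: float = 200.0) -> int:
    """Derive the maximum payload byte count threshold for one
    monitoring time slot interval from a configured maximum bit rate.

    bytes per interval = (bits per second / 8) * (interval in seconds)
    """
    return int(max_bit_rate_bps / 8 * (interval_ms / 1000.0))
```

Using the example from the text, an 8 Gb/s (1 GB/s) rate with a 200 millisecond interval yields a 200 megabyte per-interval threshold; halving either the rate or the threshold halves the effective throughput.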



FIG. 2 is a block diagram illustrating an exemplary monitoring engine (e.g., monitoring engine 108) for implementing bandwidth limitations on specific application traffic at a proxy element. In some embodiments, receiving agent 130 of monitoring engine 108 can be configured to inspect the TCP packet traffic communicated between source client 102 and destination server 106 in order to classify the received TCP network traffic into one or more specific application traffic types (e.g., TLS packet traffic, SSL packet traffic, non-TLS packet traffic, non-SSL packet traffic, etc.) existing at one or more network layers (e.g., transport layer, session layer, etc.). In some embodiments, monitoring engine 108 and/or receiving agent 130 is configured to inspect the first data packet in the TCP connection and attempt to find a TLS Client hello header after the TCP header. If the client hello header is present, monitoring engine 108 and/or receiving agent 130 is configured to identify/designate the current connection as a TLS connection.
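The ‘client hello’ check described above amounts to a few byte comparisons against the TLS record layer of the first data segment. This is a heuristic sketch; real traffic may split the record across multiple TCP segments, which this illustration ignores:

```python
def looks_like_tls_client_hello(payload: bytes) -> bool:
    """Heuristically detect a TLS ClientHello at the start of a TCP
    payload: record type 0x16 (handshake), record-layer major version 3
    (SSL 3.0 / TLS 1.x), and handshake message type 0x01 (ClientHello)."""
    return (
        len(payload) >= 6
        and payload[0] == 0x16   # TLS record content type: handshake
        and payload[1] == 0x03   # protocol major version 3
        and payload[5] == 0x01   # handshake message type: ClientHello
    )
```

A connection whose first data segment passes this check would be designated a TLS connection; anything else (e.g., clear text HTTP) would be classified as non-TLS traffic.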


Depending on the manner by which the packet traffic is classified (e.g., TLS traffic vs. non-TLS traffic), monitoring engine 108 can be configured to process the specific types of TCP packet traffic differently. Specifically, monitoring engine 108 and/or receiving agent 130 may be configured to identify encrypted packets associated with a specific application traffic type from among the entire packet flow, which may include both clear text packets and encrypted TLS packets (in this example embodiment). For example, monitoring engine 108 may use receiving agent 130 to forward non-TLS traffic 206 (e.g., clear text TCP traffic) directly to a transmitting agent 132. In contrast, monitoring engine 108 may forward encrypted TLS traffic 208 to bandwidth limiter 116.
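The classification-and-routing step can be sketched as a simple dispatch. The callback names are hypothetical placeholders for the bandwidth limiter and transmitting agent paths:

```python
from typing import Callable

def route_segment(payload: bytes,
                  is_tls: Callable[[bytes], bool],
                  to_limiter: Callable[[bytes], None],
                  to_transmit: Callable[[bytes], None]) -> None:
    """Route one TCP segment: identified TLS traffic is directed to the
    bandwidth limiter, while other traffic (e.g., clear text) bypasses
    the limiter and goes straight to the transmitting agent."""
    if is_tls(payload):
        to_limiter(payload)
    else:
        to_transmit(payload)
```

This mirrors the split in FIG. 2: only the identified application traffic type is subjected to the bandwidth limitation operation, so the limiter never affects the clear text path.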


Afterwards, monitoring engine 108 is configured to direct the identified encrypted packets (which are represented as TLS traffic 208 in FIG. 2) to the bandwidth limiter 116. Specifically, monitoring engine 108 is configured to use bandwidth limiter 116 to apply a bandwidth limitation operation to the received encrypted packets of TLS traffic 208. In some embodiments, bandwidth limiter 116 may be adapted to count the total TCP payload bytes (and not the TLS payload bytes) in order to account for the portion of the data associated with the TLS handshake operation because the handshake operation constitutes a significant amount of computing resource usage/consumption (as compared to the actual decryption of the TLS payload).


After receiving and identifying the encrypted TLS traffic 208, bandwidth limiter 116 processes the encrypted TLS traffic 208 in order to determine whether the packet traffic should be decrypted and subsequently processed prior to the re-encryption. For each decryption operation requiring execution, bandwidth limiter 116 is configured to determine whether a total payload byte count (e.g., TCP payload) volume exceeds the predefined maximum payload byte count threshold (e.g., total_byte_count>maximum_byte_count_threshold). For example, bandwidth limiter 116 sums the number of TCP payload bytes received over the current monitoring time interval (e.g., 200 milliseconds). In the event bandwidth limiter 116 determines that the maximum payload byte count threshold (e.g., 200 megabytes) is exceeded by the total payload byte count, then bandwidth limiter 116 is configured to initiate a function that suspends or postpones the current operation (e.g., a decryption operation or encryption operation) until the start of the next time interval period (i.e., when the current monitoring time slot interval expires). In such instances, the encrypted TLS traffic 208 is stored in a buffer and/or receiving agent 130 until the current monitoring time interval expires and processing resumes.
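The per-interval accounting described in this paragraph can be sketched as follows. This is a minimal illustration, assuming a monotonic clock, and it leaves out the re-admission of buffered segments at the start of the next interval:

```python
import time

class BandwidthLimiter:
    """Sketch of the per-interval byte budget: TCP payload bytes are
    summed over a fixed monitoring time slot interval, and once the
    maximum payload byte count threshold would be exceeded, further
    segments are buffered instead of being passed on for decryption."""

    def __init__(self, max_byte_count: int, interval_s: float = 0.2,
                 clock=time.monotonic):
        self.max_byte_count = max_byte_count
        self.interval_s = interval_s
        self.clock = clock
        self.total_byte_count = 0
        self.interval_end = clock() + interval_s
        self.buffered = []  # segments awaiting the next interval

    def admit(self, segment: bytes) -> bool:
        """Return True if the segment may be decrypted now; False if it
        was buffered until the current interval expires."""
        self._maybe_roll_interval()
        if self.total_byte_count + len(segment) > self.max_byte_count:
            self.buffered.append(segment)
            return False
        self.total_byte_count += len(segment)
        return True

    def _maybe_roll_interval(self) -> None:
        now = self.clock()
        if now >= self.interval_end:
            # Start the next time slot and reset the running byte count.
            self.interval_end = now + self.interval_s
            self.total_byte_count = 0
```

With a 200 millisecond interval and a 200 MB threshold, this enforces the 8 Gb/s effective rate from the earlier example while never dropping a segment.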


Alternatively, if the bandwidth limiter 116 determines that the maximum payload byte count threshold has not been exceeded by the sum of the amount of received ingress payload bytes and the previously received ingress payload byte amount for the current time interval, then bandwidth limiter 116 is configured to update the time interval's “total payload byte count value” sum to further include the current operation's byte count (e.g., a ‘current bandwidth limitation’). At this stage, bandwidth limiter 116 then forwards the TLS traffic 208 to a decryption manager 120 for decryption processing.


After bandwidth limiter 116 forwards the encrypted TLS packet traffic to decryption manager 120 for executing the decryption operation, bandwidth limiter 116 then determines whether the current execution time has exceeded the current monitoring time slot interval. In some embodiments, upon determining that the current time exceeds the duration of the monitoring time slot interval, bandwidth limiter 116 computes an end time for a next time interval and resets the total payload byte count. Bandwidth limiter 116 can also alternatively set the maximum payload byte count threshold to a current bandwidth limitation at this time. For example, if a first time interval period (having a duration of 200 milliseconds) was initiated at t=0 milliseconds, then at t=200 milliseconds the first time interval period would expire and a subsequent second time interval period would be initiated. Similarly, a third time interval period would be initiated at t=400 milliseconds. Bandwidth limiter 116 is further configured to reset the total payload byte count (e.g., set total_byte_count equal to zero) at the expiration of each time interval period. Thus, bandwidth limiter 116 continues to process payload bytes (up to a maximum of 200 megabytes) every 200 milliseconds until proxy element 104 ceases to receive TCP packets and/or a network operator suspends the operation of monitoring engine 108.


If the encrypted TLS traffic 208 complies with the bandwidth limitation parameters as described above, the encrypted TLS traffic 208 is provided to decryption manager 120, which performs a decryption operation on the encrypted TLS traffic 208. For example, monitoring engine 108 is configured to execute a decryption operation on identified encrypted packets if an accumulated amount of payload bytes of the identified encrypted packets complies with parameters of the bandwidth limitation operation. After producing decrypted TLS traffic, proxy element 104 may utilize a packet copier (not shown in FIG. 2) to copy the decrypted TLS traffic, which can be forwarded to a network tool for analysis and monitoring. At this stage, the decrypted TLS traffic is forwarded to encryption manager 122, which re-encrypts the TLS traffic. Encryption manager 122 subsequently forwards the encrypted TLS traffic to transmitting agent 132, which in turn sends the encrypted TLS traffic to a destination server (e.g., destination server 106 in FIG. 1).


In some embodiments, bandwidth limiter 116 is configured to disregard (i.e., not count) any packets that require retransmission. Notably, bandwidth limiter 116 does not simply count the total number of bytes or the total number of encrypted bytes that traverse proxy element 104. Rather, bandwidth limiter 116 accounts only for “effective throughput traffic.” As used herein, the term “effective throughput traffic” can include packet traffic that is i) received by proxy element 104 and ii) processed by proxy element 104 only once. Notably, effective throughput traffic does not include packets that are retransmitted in the event that packets are lost or misplaced after departing proxy element 104 and prior to being received by destination server 106.
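One way to count only “effective throughput traffic” is to track, per flow, the highest TCP sequence number already accounted for, so retransmitted bytes are never counted twice. This is a simplified sketch; sequence-number wraparound and out-of-order gaps are ignored:

```python
class EffectiveThroughputCounter:
    """Counts bytes only the first time they traverse the proxy, so TCP
    retransmissions do not inflate the per-interval byte total."""

    def __init__(self):
        self.next_expected = {}   # flow id -> first sequence number not yet counted
        self.effective_bytes = 0

    def count(self, flow, seq: int, payload_len: int) -> int:
        """Return the number of newly counted (non-retransmitted) bytes
        for this segment, and add them to the running total."""
        expected = self.next_expected.get(flow, seq)
        end = seq + payload_len
        # Bytes at or beyond 'expected' are new; everything before it
        # was already counted on an earlier pass.
        new_bytes = max(0, end - max(seq, expected))
        if end > expected:
            self.next_expected[flow] = end
        self.effective_bytes += new_bytes
        return new_bytes
```

An exact retransmission contributes zero bytes, while a partially overlapping segment contributes only its previously unseen tail.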



FIG. 3 is a signaling diagram illustrating exemplary signaling messages communicated by a TLS proxy element to control window sizing for implementing packet traffic throttling. In particular, FIG. 3 depicts a proxy element 104 that is interposed and/or positioned in between a source client 102 and a destination server 106 that are communicating using an encryption protocol, such as SSL or TLS (e.g., via HTTPS). As mentioned above, proxy element 104 includes a throttle manager 118 that is adapted to monitor the volume or rate (e.g., gigabits per second) of network traffic associated with the packet flows and/or sessions being handled by proxy element 104. In the event the network traffic received by proxy element 104 reaches a predefined bandwidth threshold (e.g., 1 GB/sec), throttle manager 118 in proxy element 104 is configured to manipulate the TCP window size parameter associated with some or all of the packet flows. As a result, source clients (and/or associated servers) are forced to temporarily throttle or reduce their effective transmission rates.


As used herein, the window size indicates the maximum amount of received data (e.g., payload bytes) that can be buffered at one time by proxy element 104. As such, a sending entity (e.g., source client 102) may only send that amount of data before waiting for an acknowledgment and/or a window update from throttle manager 118 in proxy element 104. The window size utilized by throttle manager 118 may be predefined in the throttle manager by a network operator. Alternatively, the window size may be dynamically determined by throttle manager 118. By using a window size adjustment in conjunction with the aforementioned bandwidth limiter functionality, proxy element 104 is able to continue processing (e.g., decryption and encryption operations) all sessions, albeit at a lower bandwidth rate. In addition, proxy element 104 is configured to avoid dropping packet flows and sessions during transient packet flow surges, spikes, and/or bursts.
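A possible window-reduction policy is sketched below. The proportional scaling and the floor value are assumptions for illustration; the text does not specify how throttle manager 118 chooses the updated window value:

```python
def throttled_window(current_window: int, rate_bps: float,
                     threshold_bps: float, min_window: int = 1024) -> int:
    """Sketch of one way a throttle manager could shrink the advertised
    TCP window when the measured rate exceeds the overflow threshold:
    scale the window down in proportion to the overshoot, but never
    advertise less than a small floor so the flow keeps moving."""
    if rate_bps <= threshold_bps:
        return current_window  # no overflow condition: leave it alone
    scaled = int(current_window * threshold_bps / rate_bps)
    return max(min_window, scaled)
```

A sender that receives the reduced window in an acknowledgment can have at most that many unacknowledged bytes in flight, which caps its effective transmission rate without any packets being dropped.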


Referring to the example illustrated in FIG. 3, proxy element 104 receives an encrypted packet flow 302 from source client 102. The encrypted packet flow is sent by proxy element 104 to destination server 106 as an encrypted packet flow 304. In block 306, proxy element 104 detects an overflow condition. An overflow condition occurs when encrypted packet flow 302 exceeds a predefined bandwidth overflow threshold (e.g., 1 GB/s). Notably, the predefined bandwidth overflow threshold may be any rate value that is designated by a network operator.


In some embodiments, a detected overflow condition (e.g., block 306) triggers proxy element 104 to instruct source client 102 and destination server 106 to reduce the window size. Specifically, proxy element 104 can send a reduce window size message 308 to source client 102 and a reduce window size message 310 to destination server 106. Each of reduce window size message 308 and reduce window size message 310 includes an updated window size value provided by throttle manager 118. In response to receiving the reduce window size message 308, source client 102 is compelled to temporarily send a reduced amount of encrypted packet flow 312 to proxy element 104. Specifically, source client 102 will only send an amount of data specified by the window size to proxy element 104 until an acknowledgement message (or an updated window size) is received from throttle manager 118. Likewise, destination server 106 similarly temporarily sends a reduced amount of encrypted packet flow 314 to proxy element 104 in response to reduce window size message 310. As such, destination server 106 also only sends the amount of data indicated by the window size to proxy element 104 until an acknowledgement message (or an updated window size) is received from throttle manager 118.


In some embodiments, after a predefined time, throttle manager 118 is configured to periodically increase the TCP window size back toward the value that supported the bandwidth rate existing prior to the detected overflow condition. Alternatively, throttle manager 118 may be configured to increase the TCP window size back to the predefined maximum TCP window size setting value established by the network operator once the volume of packet traffic received by proxy element 104 falls to more manageable levels.
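The recovery behavior can be sketched with a one-line policy. The additive step-up (rather than, say, multiplicative increase) is an assumption for illustration; the description above states only that the window is periodically increased back to its prior or operator-configured maximum value.

```python
def restore_window(current: int, maximum: int, step: int) -> int:
    """Increase the window by one step per interval, clamped at the maximum."""
    return min(current + step, maximum)
```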



FIG. 4 is a flow chart illustrating an exemplary method 400 for implementing bandwidth limitations on specific application traffic at a proxy element according to an embodiment of the subject matter described herein. In some embodiments, blocks 402-410 of method 400 may represent an algorithm performed by a monitoring engine and/or a bandwidth limiter that is stored in memory and executed by one or more processors.


In block 402, method 400 includes receiving, at a proxy element, a packet flow from at least one source client. In some embodiments, a proxy element receives a TCP packet flow from a source client that is directed to a destination server. Notably, the TCP packet flow includes both clear text packets and encrypted packets (e.g., encrypted SSL or encrypted TLS packets).


In block 404, method 400 includes identifying encrypted packets associated with a specific application traffic type from the packet flow. In some embodiments, a monitoring engine in the proxy element is configured to inspect the headers of the received packets in order to identify a specific application traffic type that is being monitored (e.g., encrypted TLS packets).
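One hypothetical way to perform the inspection in block 404 is to check whether a packet's payload begins with a plausible TLS record header (a known content type byte followed by a 0x03xx version). This heuristic is an assumption for illustration; a deployed monitoring engine would typically combine it with port and flow-state information.

```python
TLS_CONTENT_TYPES = {0x14, 0x15, 0x16, 0x17}  # CCS, alert, handshake, app data

def looks_like_tls(payload: bytes) -> bool:
    """Heuristic check for a TLS record header at the start of the payload."""
    if len(payload) < 5:                # a TLS record header is 5 bytes
        return False
    content_type, major, minor = payload[0], payload[1], payload[2]
    return content_type in TLS_CONTENT_TYPES and major == 0x03 and minor <= 0x04
```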


In block 406, method 400 includes directing the identified encrypted packets to a bandwidth limiter in the proxy element. In some embodiments, the monitoring engine is configured to forward the identified encrypted packets to a bandwidth limiter included in the proxy element. In contrast, the monitoring engine is further configured to forward packets that are not associated with the specific application traffic type directly to a transmitting agent and/or network interface card for transmission of the non-monitored packet traffic to the destination server.


In block 408, method 400 includes applying a bandwidth limitation operation to the identified encrypted packets. In some embodiments, the bandwidth limiter receives and begins forwarding the identified encrypted packets to a decryption manager that is responsible for conducting the decryption operation. Notably, the bandwidth limiter is characterized by a number of predefined and configurable parameters, such as a monitoring time slot interval and a maximum payload byte count threshold. By adjusting these parameters, the proxy element may be configured to provide monitoring functionality at a number of different bandwidth throughput levels or rates (e.g., various licensed rates).


In block 410, method 400 includes decrypting the identified encrypted packets if the payload size of the identified encrypted packets complies with the parameters of the bandwidth limitation operation. In some embodiments, the bandwidth limiter is configured to forward the encrypted packets to a decryption manager if the payload size of received encrypted packets does not exceed the maximum payload byte count threshold within a particular monitoring time slot interval.
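Blocks 408 and 410 can be illustrated together with a minimal sketch: the limiter counts payload bytes per monitoring time slot, admits packets to the decryption stage only while the accumulated count stays within the maximum payload byte count threshold, and holds excess packets until the slot expires. The class and method names are assumptions for exposition.

```python
class BandwidthLimiter:
    def __init__(self, max_bytes_per_slot: int):
        self.max_bytes = max_bytes_per_slot
        self.used = 0
        self.held: list = []

    def admit(self, payload: bytes) -> bool:
        """Return True if the packet may proceed to decryption now."""
        if self.used + len(payload) <= self.max_bytes:
            self.used += len(payload)
            return True
        self.held.append(payload)       # store until the time slot expires
        return False

    def next_slot(self) -> list:
        """Start a new slot and release held packets for re-admission."""
        self.used = 0
        released, self.held = self.held, []
        return released
```

Because held packets are released rather than discarded when a new slot begins, the limiter throttles throughput without dropping packet flows during overflow conditions, consistent with the behavior described above.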


It should be noted that each of the proxy element, monitoring engine, bandwidth limiter, throttle manager and/or functionality described herein may constitute a special purpose computing device. Further, the proxy element, monitoring engine, bandwidth limiter, throttle manager, and/or functionality described herein can improve the technological field of computer network communications and security. More specifically, the disclosed system and/or proxy element can be configured to accommodate a number of different throughput rates for a specific application traffic type (e.g., encrypted SSL or TLS traffic) such that proper security measures and/or packet analysis can be performed. As such, the disclosed proxy element affords the technical advantage of effectively managing and throttling a specific application traffic type in order to ensure that proper packet traffic monitoring processes are conducted without dropping or disregarding packets in overflow conditions.


It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.

Claims
  • 1. A method for implementing bandwidth limitations on specific application traffic at a proxy element, the method comprising: receiving, by a proxy element, an encrypted packet flow from at least one source client;utilizing a bandwidth limiter in the proxy element to apply a bandwidth limitation operation to the encrypted packet flow from the at least one source client, wherein the bandwidth limiter detects an amount of the encrypted packet flow by determining a total payload byte count of the encrypted packet flow received during a time interval comprising a predetermined length of time;executing a window size manipulation operation if the detected amount of encrypted packet flow exceeds a predefined bandwidth overflow threshold of the bandwidth limitation operation, wherein the window size manipulation operation includes sending, by a throttle manager in the proxy element, a reduce window size message that specifies an updated window size value to the at least one source client; andstoring at least a portion of the received encrypted packet flow until the time interval expires when the detected amount of encrypted packet flow exceeds the predefined bandwidth overflow threshold of the bandwidth limitation operation.
  • 2. The method of claim 1 wherein the at least one source client is compelled to send, to the proxy element, a reduced amount of encrypted packet flow based on the updated window size value in response to receiving the reduce window size message.
  • 3. The method of claim 1 wherein the updated window size value is dynamically determined by the throttle manager or predefined by a network operator.
  • 4. The method of claim 1 wherein the updated window size value is a transmission control protocol (TCP) window size value.
  • 5. The method of claim 1 wherein the window size manipulation operation includes sending, by a throttle manager in the proxy element, a reduce window size message that specifies an updated window size value to a destination server associated with the encrypted packet flow.
  • 6. The method of claim 5 wherein the destination server is compelled to send, to the proxy element, a reduced amount of encrypted packet flow based on the updated window size value in response to receiving the reduce window size message.
  • 7. The method of claim 1 wherein the encrypted packet flow includes either encrypted secure session layer (SSL) packets or encrypted transport layer security (TLS) packets.
  • 8. A system for implementing bandwidth limitations on specific application traffic at a proxy element, the system comprising: a processor;a memory included in the proxy element;a monitoring engine that is stored in the memory of the proxy element and when executed by the processor is configured for receiving an encrypted packet flow from at least one source client;a bandwidth limiter that is stored in the memory of the proxy element and when executed by the processor is configured for: applying a bandwidth limitation operation to the encrypted packet flow from the at least one source client; anddetecting an amount of the encrypted packet flow by determining a total payload byte count of the encrypted packet flow received during a time interval comprising a predetermined length of time; anda throttle manager that is stored in the memory of the proxy element and when executed by the processor is configured for executing a window size manipulation operation if the detected amount of encrypted packet flow exceeds a predefined bandwidth overflow threshold of the bandwidth limitation operation, wherein the window size manipulation operation includes sending a reduce window size message that specifies an updated window size value to the at least one source client;wherein the memory is configured to store at least a portion of the received encrypted packet flow until the time interval expires when the detected amount of encrypted packet flow exceeds the predefined bandwidth overflow threshold of the bandwidth limitation operation.
  • 9. The system of claim 8 wherein the at least one source client is compelled to send, to the proxy element, a reduced amount of encrypted packet flow based on the updated window size value in response to receiving the reduce window size message.
  • 10. The system of claim 9 wherein the updated window size value is dynamically determined by the throttle manager or predefined by a network operator.
  • 11. The system of claim 9 wherein the updated window size value is a transmission control protocol (TCP) window size value.
  • 12. The system of claim 8 wherein the window size manipulation operation includes sending, by the throttle manager in the proxy element, a reduce window size message that specifies an updated window size value to a destination server associated with the encrypted packet flow.
  • 13. The system of claim 12 wherein the destination server is compelled to send, to the proxy element, a reduced amount of encrypted packet flow based on the updated window size value in response to receiving the reduce window size message.
  • 14. The system of claim 8 wherein the encrypted packet flow includes either encrypted secure session layer (SSL) packets or encrypted transport layer security (TLS) packets.
  • 15. A non-transitory computer readable medium having stored thereon executable instructions that when executed by a processor of a computer control the computer to perform steps comprising: receiving, by a proxy element, an encrypted packet flow from at least one source client;utilizing a bandwidth limiter to apply a bandwidth limitation operation to the encrypted packet flow from the at least one source client, wherein the bandwidth limiter detects an amount of the encrypted packet flow by determining a total payload byte count of the encrypted packet flow received during a time interval comprising a predetermined length of time;executing a window size manipulation operation if a detected amount of encrypted packet flow exceeds a predefined bandwidth overflow threshold of the bandwidth limitation operation, wherein the window size manipulation operation includes sending, by a throttle manager in the proxy element, a reduce window size message that specifies an updated window size value to the at least one source client; andstoring at least a portion of the received encrypted packet flow until the time interval expires when the detected amount of encrypted packet flow exceeds the predefined bandwidth overflow threshold of the bandwidth limitation operation.
  • 16. The non-transitory computer readable medium of claim 15 wherein the at least one source client is compelled to send, to the proxy element, a reduced amount of encrypted packet flow based on the updated window size value in response to receiving the reduce window size message.
  • 17. The non-transitory computer readable medium of claim 15 wherein the updated window size value is dynamically determined by the throttle manager or predefined by a network operator.
Priority Claims (1)
Number Date Country Kind
a 2018 00581 Aug 2018 RO national
PRIORITY CLAIM

This application is a continuation of U.S. application Ser. No. 16/103,598, filed Aug. 14, 2018, which claims the priority benefit of Romanian Patent Application Serial No. a 2018 00581, filed Aug. 10, 2018, the disclosures of which are incorporated herein by reference in their entireties.

US Referenced Citations (203)
Number Name Date Kind
5161189 Bray Nov 1992 A
5557678 Ganesan Sep 1996 A
6240416 Immon et al. May 2001 B1
6330671 Aziz Dec 2001 B1
6480488 Huang Nov 2002 B1
6684331 Srivastava Jan 2004 B1
7340744 Chandwadkar et al. Mar 2008 B2
7363353 Ganesan et al. Apr 2008 B2
7373412 Colas et al. May 2008 B2
7421506 Ni et al. Sep 2008 B2
7562213 Timms Jul 2009 B1
7634650 Shah et al. Dec 2009 B1
7684414 Durst Mar 2010 B2
7701853 Malhotra Apr 2010 B2
7778194 Yung Aug 2010 B1
7971240 Guo et al. Jun 2011 B2
8270942 Zabawskyj et al. Sep 2012 B2
8457126 Breslin et al. Jun 2013 B2
8514756 Ramachandra et al. Aug 2013 B1
8566247 Nagel et al. Oct 2013 B1
8595835 Kolton et al. Nov 2013 B2
8601152 Chou Dec 2013 B1
8654974 Anderson et al. Feb 2014 B2
8788805 Herne et al. Jul 2014 B2
8881282 Aziz et al. Nov 2014 B1
8929356 Pandey et al. Jan 2015 B2
8938611 Zhu et al. Jan 2015 B1
8953439 Lin et al. Feb 2015 B1
8964537 Brolin Feb 2015 B2
9065642 Zaverucha et al. Jun 2015 B2
9298560 Janakiraman et al. Mar 2016 B2
9380002 Johansson et al. Jun 2016 B2
9392010 Friedman et al. Jul 2016 B2
9407643 Bavington Aug 2016 B1
9565202 Kindlund et al. Feb 2017 B1
9660913 Newton May 2017 B2
9673984 Jiang et al. Jun 2017 B2
9680869 Buruganahalli et al. Jun 2017 B2
9800560 Guo et al. Oct 2017 B1
9807121 Levy et al. Oct 2017 B1
9860154 Balabine et al. Jan 2018 B2
9882929 Ettema et al. Jan 2018 B1
9893883 Chaubey et al. Feb 2018 B1
9906401 Rao Feb 2018 B1
9998955 MacCarthaigh Jun 2018 B1
10063591 Jiang et al. Aug 2018 B1
10079810 Moore et al. Sep 2018 B1
10079843 Friedman et al. Sep 2018 B2
10116553 Penno et al. Oct 2018 B1
10291651 Chaubey May 2019 B1
10326741 Rothstein et al. Jun 2019 B2
10404597 Bakshi Sep 2019 B2
10419965 Kadosh et al. Sep 2019 B1
10423774 Zelenov et al. Sep 2019 B1
10482239 Liu et al. Nov 2019 B1
10516532 Taub et al. Dec 2019 B2
10749808 MacCarthaigh Aug 2020 B1
10903985 Bergeron Jan 2021 B2
10931797 Ahn et al. Feb 2021 B2
10951660 Rogers et al. Mar 2021 B2
10992652 Putatunda et al. Apr 2021 B2
11075886 Paul et al. Jul 2021 B2
11165831 Higgins et al. Nov 2021 B2
11489666 Bergeron Nov 2022 B2
20010028631 Iwamura Oct 2001 A1
20020116485 Black et al. Aug 2002 A1
20030004688 Gupta et al. Jan 2003 A1
20030161335 Fransdonk Aug 2003 A1
20030163684 Fransdonk Aug 2003 A1
20030165241 Fransdonk Sep 2003 A1
20040083362 Park et al. Apr 2004 A1
20040168050 Desrochers et al. Aug 2004 A1
20040268148 Karjala et al. Dec 2004 A1
20050050362 Peles Mar 2005 A1
20050111437 Maturi May 2005 A1
20050160269 Akimoto Jul 2005 A1
20060056300 Tamura Mar 2006 A1
20060085862 Witt et al. Apr 2006 A1
20060259579 Beverly Nov 2006 A1
20070022284 Vishwanathan Jan 2007 A1
20070033408 Morten Feb 2007 A1
20070043940 Gustave et al. Feb 2007 A1
20070078929 Beverly Apr 2007 A1
20070156726 Levy Jul 2007 A1
20070169190 Kolton et al. Jul 2007 A1
20070179995 Prahlad et al. Aug 2007 A1
20080005782 Aziz Jan 2008 A1
20080031141 Lean et al. Feb 2008 A1
20080320297 Sabo et al. Dec 2008 A1
20090150521 Tripathi Jun 2009 A1
20090150527 Tripathi et al. Jun 2009 A1
20090150883 Tripathi et al. Jun 2009 A1
20090220080 Herne et al. Sep 2009 A1
20090222567 Tripathi et al. Sep 2009 A1
20090254990 McGee Oct 2009 A1
20100250769 Barreto et al. Sep 2010 A1
20110231659 Sinha Sep 2011 A1
20110286461 Ichino et al. Nov 2011 A1
20110289311 Roy-Chowdhury et al. Nov 2011 A1
20110314269 Stavrou Dec 2011 A1
20120082073 Andreasen et al. Apr 2012 A1
20120137289 Nolterieke et al. May 2012 A1
20120210318 Sanghvi et al. Aug 2012 A1
20120236823 Kompella et al. Sep 2012 A1
20120304244 Xie et al. Nov 2012 A1
20130054761 Kempf et al. Feb 2013 A1
20130070777 Hutchison et al. Mar 2013 A1
20130117847 Friedman et al. May 2013 A1
20130204849 Chacko Aug 2013 A1
20130239119 Garg et al. Sep 2013 A1
20130265883 Henry et al. Oct 2013 A1
20130272136 Ali et al. Oct 2013 A1
20130301830 Bar-El et al. Nov 2013 A1
20130343191 Kim Dec 2013 A1
20140010083 Hamdi et al. Jan 2014 A1
20140059200 Nguyen et al. Feb 2014 A1
20140082348 Chandrasekaran et al. Mar 2014 A1
20140115702 Li et al. Apr 2014 A1
20140189093 Du Toit et al. Jul 2014 A1
20140189861 Gupta et al. Jul 2014 A1
20140189961 He et al. Jul 2014 A1
20140226820 Chopra et al. Aug 2014 A1
20140351573 Martini Nov 2014 A1
20150026313 Chawla et al. Jan 2015 A1
20150039889 Andoni Feb 2015 A1
20150052345 Martini Feb 2015 A1
20150113132 Srinivas et al. Apr 2015 A1
20150113264 Wang et al. Apr 2015 A1
20150124622 Kovvali et al. May 2015 A1
20150172219 Johansson et al. Jun 2015 A1
20150264083 Prenger et al. Sep 2015 A1
20150281954 Warren Oct 2015 A1
20150288679 Ben-Nun et al. Oct 2015 A1
20150295780 Hsiao et al. Oct 2015 A1
20150319030 Nachum Nov 2015 A1
20150341212 Hsiao et al. Nov 2015 A1
20150379278 Thota et al. Dec 2015 A1
20160014016 Guichard et al. Jan 2016 A1
20160019232 Lambright Jan 2016 A1
20160080502 Yadav et al. Mar 2016 A1
20160105469 Galloway et al. Apr 2016 A1
20160105814 Hurst et al. Apr 2016 A1
20160119374 Williams et al. Apr 2016 A1
20160127517 Shcherbakov et al. May 2016 A1
20160142440 Qian et al. May 2016 A1
20160248685 Pignataro et al. Aug 2016 A1
20160277321 Johansson et al. Sep 2016 A1
20160277971 Hamdi et al. Sep 2016 A1
20160294784 Hopkins et al. Oct 2016 A1
20160344754 Rayapeta et al. Nov 2016 A1
20160373185 Wentzloff et al. Dec 2016 A1
20170048328 Korotaev et al. Feb 2017 A1
20170070531 Huston, III et al. Mar 2017 A1
20170237640 Stocker Aug 2017 A1
20170237719 Schwartz et al. Aug 2017 A1
20170289104 Shankar et al. Oct 2017 A1
20170302554 Chandrasekaran et al. Oct 2017 A1
20170339022 Hegde et al. Nov 2017 A1
20170364794 Mahkonen et al. Dec 2017 A1
20180006923 Gao et al. Jan 2018 A1
20180091236 Damus Mar 2018 A1
20180091421 Ma et al. Mar 2018 A1
20180091427 Kumar et al. Mar 2018 A1
20180097787 Murthy et al. Apr 2018 A1
20180097788 Murthy Apr 2018 A1
20180097840 Murthy Apr 2018 A1
20180124025 Lam et al. May 2018 A1
20180176036 Butcher et al. Jun 2018 A1
20180176192 Davis et al. Jun 2018 A1
20180198838 Murgia et al. Jul 2018 A1
20180234322 Cohn et al. Aug 2018 A1
20180241699 Raney Aug 2018 A1
20180254990 Ramaiah Sep 2018 A1
20180278419 Higgins et al. Sep 2018 A1
20180331912 Edmison et al. Nov 2018 A1
20180332078 Kumar et al. Nov 2018 A1
20180351970 Majumder et al. Dec 2018 A1
20180367422 Raney et al. Dec 2018 A1
20180375644 Karagiannis et al. Dec 2018 A1
20190028376 Ganapathy et al. Jan 2019 A1
20190058714 Joshi et al. Feb 2019 A1
20190068561 Caragea Feb 2019 A1
20190068564 Putatunda et al. Feb 2019 A1
20190104437 Bartfai-Walcott et al. Apr 2019 A1
20190116111 Izard et al. Apr 2019 A1
20190166049 Bakshi May 2019 A1
20190205151 Suzuki Jul 2019 A1
20190205244 Smith Jul 2019 A1
20190260794 Woodford et al. Aug 2019 A1
20190303385 Ching et al. Oct 2019 A1
20190349403 Anderson Nov 2019 A1
20190373052 Pollitt et al. Dec 2019 A1
20200036610 Indiresan et al. Jan 2020 A1
20200053064 Oprisan et al. Feb 2020 A1
20200067700 Bergeron Feb 2020 A1
20200076773 Monat et al. Mar 2020 A1
20200104052 Vijayan et al. Apr 2020 A1
20200137021 Janakiraman Apr 2020 A1
20200137115 Janakiraman et al. Apr 2020 A1
20210083857 Bergeron Mar 2021 A1
20210111975 Raney Apr 2021 A1
20210160275 Anderson et al. May 2021 A1
20210194779 Punj et al. Jun 2021 A1
Foreign Referenced Citations (3)
Number Date Country
2777226 Aug 2019 EP
3528430 Aug 2019 EP
2016176070 Nov 2016 WO
Non-Patent Literature Citations (63)
Entry
Non-Final Office Action for U.S. Appl. No. 17/105,411 (dated Mar. 7, 2022).
“Network Monitoring Step 2: The Next-Generation of Packet Brokers,” Mantisnet, pp. 1-5 (2021).
Notice of Allowance and Examiner Initiated Interview Summary for U.S. Appl. No. 16/781,542 (dated Aug. 2, 2021).
Non-Final Office Action for U.S. Appl. No. 16/781,542 (dated Feb. 24, 2021).
Hardegen et al., “Flow-based Throughput Prediction using Deep Learning and Real-World Network Traffic,” 15th International Conference on Network and Service Management (CNSM), pp. 1-9 (2019).
“The ABCs of Network Visibility,” Ixia, pp. 1-57 (2017).
Eisen et al., “goProbe: A Scalable Distributed Network Monitoring Solution,” IEEE Xplore, pp. 1-10 (2015).
Lee et al., “Public Review for Towards Scalable Internet Traffic Measurement and Analysis with Hadoop,” ACM SIGCOMM Computer Communication Review, vol. 43, No. 1, pp. 1-9 (Jan. 2013).
Zou et al., “An Enhanced Netflow Data Collection System,” 2012 Second International Conference on Instrumentation & Measurement, Computer, Communication and Control, pp. 508-511 (2012).
He et al., “Data Deduplication Techniques,” 2010 International Conference on Future Information Technology and Management Engineering, pp. 1-4 (2010).
Applicant-Initiated Interview Summary for U.S. Appl. No. 16/103,598 (dated Oct. 22, 2020).
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 16/103,598 (dated Oct. 16, 2020).
Notice of Allowance and Fee(s) Due and Examiner-Initiated Interview Summary for U.S. Appl. No. 16/113,360 (dated Oct. 15, 2020).
Non-Final Office Action for U.S. Appl. No. 16/781,542 (dated Sep. 25, 2020).
Non-Final Office Action for U.S. Appl. No. 15/980,699 (dated Sep. 22, 2020).
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 15/608,369 (dated Aug. 19, 2020).
Advisory Action and AFCP 2.0 Decision for U.S. Appl. No. 15/608,369 (dated Jul. 1, 2020).
Advisory Action and AFCP 2.0 Decision for U.S. Appl. No. 15/980,699 (dated Jun. 30, 2020).
Non-Final Office Action for U.S. Appl. No. 16/113,360 (dated May 19, 2020).
Non-Final Office Action for U.S. Appl. No. 16/103,598 (dated May 11, 2020).
Final Office Action for U.S. Appl. No. 15/608,369 (dated Apr. 22, 2020).
Final Office Action for U.S. Appl. No. 15/980,699 (dated Apr. 20, 2020).
Commonly-assigned, co-pending U.S. Appl. No. 16/781,542 for “Methods, Systems, and Computer Readable Media for Processing Network Flow Metadata at a Network Packet Broker,” (Unpublished, filed Feb. 4, 2020).
Stankovic, “How to solve duplicated NetFlow caused by multiple exporters,” https://www.netvizura.com/blog/how-to-solve-duplicated-netflow-caused-by-multiple-exporters, pp. 1-4 (Accessed Jan. 15, 2020).
“Jumbo Frame,” Wikipedia, https://en.wikipedia.org/wiki/Jumbo_frame, pp. 1-4 (Jan. 15, 2020).
“How is the MTU is 65535 in UDP but ethernet does not allow frame size more that 1500 bytes,” ServerFault, TCPIP, pp. 1-9 (Accessed Jan. 15, 2020).
“Network Monitoring Step 2: The Next-Generation of Packet Brokers,” MantisNet, pp. 1-6 (2020).
“CPacket cVu 2440NG/3240NG,” https://www.cpacket.com/resources/cvu-3240-2440-datasheet/, pp. 1-4 (Accessed Jan. 15, 2020).
“What are Microservices,” An Introduction to Microservices, https://opensource.com/resources/what-are-microservices, pp. 1-8 (Accessed Jan. 15, 2020).
“IPv6,” Wikipedia, https://en.wikipedia.org/wiki/IPv6, pp. 1-15 (Jan. 8, 2020).
Paul, Santanu, “Network Visibility Component with Netflow Jumbo Frame Support,” The IP.com Journal, pp. 1-8 (Aug. 2019).
Paul, Santanu, “Methods and Systems for Session-Aware Collection of Netflow Statistics,” The IP.com Journal, pp. 1-5 (Jul. 2019).
Pandey, Shardendu; Johansson, Stefan Jan, “Network Packet Broker with Flow Segmentation Capability,” The IP.com Journal, pp. 1-6 (Jul. 2019).
Paul, Santanu,“Network Packet Broker with Flow Segmentation Capability,” The IP.com Journal, pp. 1-6 (Aug. 2019).
Paul, Santanu, “Custom Key Performance Indicator (KPI) Network Visibility System,” The IP.com Journal, pp. 1-4 (Jul. 2019).
Paul, Santanu, “Self-Healing Network Visibility System,” The IP.com Journal, pp. 1-5 (Jun. 2019).
“About NetFlow,” Watchguard Technologies, Inc., pp. 1-3 (2019).
Non-Final Office Action for U.S. Appl. No. 15/980,699 (dated Dec. 9, 2019).
“Multiprotocol Label Switching,” Wikipedia, https://en.wikipedia.org/wiki/multiprotocol_label_switching, pp. 1-7 (Dec. 6, 2019).
“Netflow,” Wikipedia, https://en.wikipedia.org/wiki/NetFlow, pp. 1-9 (Dec. 3, 2019).
Non-Final Office Action for U.S. Appl. No. 15/608,369 (dated Oct. 31, 2019).
“NetFlow Collector,” Kentipedia, Kentik, pp. 1-4 (Sep. 17, 2019).
Advisory Action for U.S. Appl. No. 15/608,369 (dated Sep. 13, 2019).
Nubeva, “Nubeva TLS Decrypt: Out-of-Band Decrypted Visibility for the Cloud,” www.nubeva.com/decryption, pp. 1-8 (Sep. 2019).
Nubeva, “What is Symmetric Key Intercep Architecture?” https://www.nubeva.com/blog/what-is-symmetric-key-intercept-architecture, pp. 1-4 (Aug. 8, 2019).
Final Office Action for U.S. Appl. No. 15/608,369 (dated Jun. 27, 2019).
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 15/826,787 (dated Apr. 25, 2019).
Petryschuk, “NetFlow Basics: An Introduction to Monitoring Network Traffic,” Auvik, https://www.auvik.com/, pp. 1-8 (Mar. 19, 2019).
Non-Final Office Action for U.S. Appl. No. 15/608,369 (dated Mar. 7, 2019).
“Automatic versus Manual NetFlow Deduplication,” Noction, https://www.noction.com/blog/automatic-manual-netflow-deduplication, pp. 1-7 (Feb. 1, 2019).
Paul, Santanu, “Network Visibility System with Integrated Netflow Over Syslog Reporting Capability” The IP.com Journal, pp. 1-7 (Jan. 28, 2019).
Non-Final Office Action for U.S. Appl. No. 15/826,787 (dated Jan. 3, 2019).
Leskiw, “Understanding Syslog: Servers, Messages & Security,” https://www.networkmanagementsoftware.com/what-is-syslog/, pp. 1-7 (Oct. 2018).
Commonly-assigned, co-pending U.S. Appl. No. 16/113,360 for “Monitoring Encrypted Network Traffic Flows in a Virtual Environment Using Dynamic Session Key Acquisition Techniques,” (Unpublished, filed Aug. 27, 2018).
McGillicuddy, “Next-Generation Network Packet Brokers: Defining the Future of Network Visibility Fabrics,” Enterprise Management Associates (EMA) Research, Niagara Networks, pp. 1-27 (Aug. 2018).
Commonly-assigned, co-pending U.S. Appl. No. 16/103,598 for “Methods, Systems, and Computer Readable Media for Implementing Bandwidth Limitations on Specific Application Traffic at a Proxy Element,” (Unpublished, filed Aug. 14, 2018).
Evans, David, “Network Packet Broker with Dynamic Filter Rules,” The IP.com Journal, pp. 1-8 (Jun. 2018).
Schulist et al., “Linux Socket Filtering aka Berkeley Packet Filter (BPF),” Wayback Machine, https://www.kernel.org/doc/Documentation/networking/filter.txt, pp. 1-25 (Jun. 8, 2018).
Commonly-assigned, co-pending U.S. Appl. No. 15/980,699 for “Methods, Systems, and Computer Readable Media for Monitoring Encrypted Network Traffic Flows,” (Unpublished, filed May 15, 2018).
“Principles of Chaos Engineering,” https://principlesofchaos.org/?lang=ENcontent, pp. 1-3 (May 2018).
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 15/980,699 (dated Feb. 8, 2021).
Notice of Allowance and Examiner Interview Summary for U.S. Appl. No. 17/105,411 (dated Aug. 24, 2022).
Notice of Allowability for U.S. Appl. No. 17/105,411 (dated Sep. 14, 2022).
Related Publications (1)
Number Date Country
20210119982 A1 Apr 2021 US
Continuations (1)
Number Date Country
Parent 16103598 Aug 2018 US
Child 17111445 US