Today, a datacenter may process different types of flows, including elephant flows and mouse flows. An elephant flow is a long-lived or continuous traffic flow that is typically associated with a high-volume connection. Different from an elephant flow, a mouse flow represents a short-lived flow. Mice are often associated with bursty, latency-sensitive applications, whereas elephants tend to be associated with large data transfers in which throughput is far more important than latency.
A problem with elephant flows is that they tend to fill network buffers end-to-end, and this introduces non-trivial queuing delay to anything that shares these buffers. For instance, a forwarding element may be responsible for managing several queues to forward packets, and several packets belonging to a mouse flow may be stuck in the same queue behind a group of other packets belonging to an elephant flow. In a network of elephants and mice, this means that the latency-sensitive mice are adversely affected. Another problem is that mice are generally very bursty, so adaptive routing techniques are not effective with them.
Some embodiments provide a forwarding element that inspects the size of each of several packets in a data flow to determine whether the data flow is an elephant flow. The forwarding element inspects packet size because, for a packet to reach a certain size, the data flow must already have progressed through a slow start in which smaller packets are transferred, and is therefore, by definition, an elephant flow. As an example, the Transmission Control Protocol (TCP) uses a slow start algorithm in order to avoid congesting the network with an inappropriately large burst of data. TCP also uses the algorithm to slowly probe the network to determine the available capacity. The forwarding element of some embodiments takes advantage of such a slow start algorithm by using it to detect elephant flows.
When the forwarding element receives a packet in a data flow, the forwarding element identifies the size of the packet. The forwarding element then determines if the size of the packet is greater than a threshold size. If the size is greater, the forwarding element specifies that the packet's data flow is an elephant flow. If the size is not greater, the forwarding element processes the packet normally (e.g., forwards the packet) without any additional processing due to the size of the packet.
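For illustration, the following minimal Python sketch captures this size test. The 30 KB threshold matches the examples later in this document; the function name and raw byte-string packets are illustrative assumptions, not part of any actual forwarding element's interface.

```python
SIZE_THRESHOLD = 30 * 1024  # assumed 30 KB threshold, as in the examples below

def is_elephant_packet(packet: bytes, threshold: int = SIZE_THRESHOLD) -> bool:
    """True if the packet is large enough to imply its flow is an elephant."""
    return len(packet) > threshold

print(is_elephant_packet(b"\x00" * (32 * 1024)))  # True: 32 KB exceeds the threshold
print(is_elephant_packet(b"\x00" * 1500))         # False: an MTU-sized packet
```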
In some embodiments, the forwarding element does not examine the sizes of all packets but only examines the sizes of certain packets. This is because there can be many different types of packets flowing through the network, such as Address Resolution Protocol (ARP) packets, unicast flood packets, etc. The forwarding element of some embodiments selectively examines packets associated with a data flow from a network host. Alternatively, or conjunctively, the forwarding element of some embodiments selectively examines packets associated with a data flow to a network host. In some embodiments, the forwarding element accomplishes this by looking at the sizes of packets associated with one or more flow entries in its flow table(s). In some embodiments, the forwarding element monitors the sizes of packets sent over a tunnel that is established between two network hosts.
To selectively examine certain packets, the forwarding element of some embodiments installs a flow entry with a conditional action in the flow table. Typically, an action specifies dropping the packet or outputting the packet to one or more egress ports. However, the flow's conditional action specifies that, if the packet exceeds a certain size, the packet should be sent to another component (e.g., a daemon process) of the forwarding element for further processing. The component may then identify one or more pieces of information associated with the packet that can be used to identify the elephant flow or other packets belonging to the same elephant flow.
In some embodiments, the forwarding element is a hardware forwarding element. In some embodiments, the forwarding element is a software forwarding element. In some embodiments, the forwarding element is an edge forwarding element that is in a unique position to check the sizes of different packets before they are segmented into maximum transmission unit (MTU) sized packets by a network interface controller or card (NIC). This is important because a non-edge forwarding element that is not in such a unique position may not be able to detect elephant flows based solely on packet size. For instance, if the MTU size is less than the threshold size, then the non-edge forwarding element will not be able to use packet size to differentiate packets belonging to elephant flows from other packets belonging to mice flows.
In some embodiments, when an elephant flow is detected, the forwarding element identifies various pieces of information that can be used to identify packets belonging to the elephant flow. The forwarding element may identify tunnel information, such as the tunnel ID, the IP address of the source tunnel endpoint (e.g., the hypervisor), and the IP address of the destination tunnel endpoint. The forwarding element of some embodiments identifies the elephant flow packet's ingress port, source transport layer (e.g., UDP or TCP) port, destination transport layer port, Ethernet type, source Ethernet address, destination Ethernet address, source IP address, and/or destination IP address.
Once an elephant flow is detected, the forwarding element of some embodiments treats the detected elephant flow differently than other flows (e.g., mouse flows, non-detected elephant flows). The forwarding element may send packets associated with an elephant flow along different paths (e.g., equal-cost multipath routing (ECMP) legs) to break the elephant flow into mice flows. As another example, the forwarding element may send elephant flow traffic along a separate physical network, such as an optical network that is more suitable for slow-changing, bandwidth-intensive traffic. In some embodiments, the forwarding element reports the elephant flow to a network controller (e.g., a software-defined networking controller) that can configure one or more other forwarding elements to handle the elephant flow. To notify another forwarding element, the forwarding element may mark packets associated with the detected elephant flow. When notified, the other forwarding element may perform Quality of Service (QoS) configuration to place packets belonging to the elephant flow in a particular queue that is separate from one or more other queues with other packets, break the elephant flow into mice flows, etc.
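As a hedged sketch of one such treatment, the snippet below models the QoS configuration case with two queues; the queue numbers and function name are assumptions for illustration only, not a prescribed configuration.

```python
# Hypothetical queue assignment at a notified forwarding element: packets
# marked as belonging to an elephant flow go to a separate queue so they
# cannot sit in front of latency-sensitive mice.
ELEPHANT_QUEUE = 1
DEFAULT_QUEUE = 0

def select_queue(marked_as_elephant: bool) -> int:
    return ELEPHANT_QUEUE if marked_as_elephant else DEFAULT_QUEUE

print(select_queue(True), select_queue(False))  # 1 0
```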
Additional techniques for detecting and handling elephant flows are described in U.S. patent application Ser. No. 14/231,647, entitled “Detecting and Handling Elephant Flows”, filed Mar. 31, 2014, and now published as U.S. Patent Publication 2015/0163144. Some embodiments that report elephant flows to a network controller are described in U.S. patent application Ser. No. 14/231,654, entitled “Reporting Elephant Flows to a Network Controller”, filed Mar. 31, 2014, and now published as U.S. Patent Publication 2015/0163145. These U.S. Patent Applications, now published as U.S. 2015/0163144 and U.S. 2015/0163145, are incorporated herein by reference. Furthermore, some embodiments provide a system that detects an elephant flow by examining the operations of a machine. In some embodiments, the machine is a physical machine or a virtual machine (VM). In detecting, the system identifies an initiation of a new data flow associated with the machine. The new data flow can be an outbound data flow or an inbound data flow. The system then determines, based on the amount of data being sent or received, if the data flow is an elephant flow.
The preceding Summary is intended to serve as a brief introduction to some embodiments as described herein. It is not meant to be an introduction or overview of all subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Embodiments described herein provide a forwarding element that inspects the size of each of several packets in a data flow to determine whether the data flow is an elephant flow. The forwarding element inspects packet size because, for a packet to reach a certain size, the data flow must already have progressed through a slow start in which smaller packets are transferred, and is therefore, by definition, an elephant flow. As an example, the Transmission Control Protocol (TCP) uses a slow start algorithm in order to avoid congesting the network with an inappropriately large burst of data. TCP also uses the algorithm to slowly probe the network to determine the available capacity. The forwarding element of some embodiments takes advantage of such a slow start algorithm by using it to detect elephant flows.
The term “packet” is used here as well as throughout this application to refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term “packet” may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, TCP segments, UDP datagrams, IP packets, etc.
The slow-start algorithm can begin in an exponential growth phase, initially with a congestion window size (CWND) of 1, 2, or 10 segments, and increase the window by one segment size for each new acknowledgement (ACK) that is received. If the receiver sends an ACK for every segment, this behavior effectively doubles the window size each round trip of the network. If the receiver supports delayed ACKs, the rate of increase is lower, but it still increases by a minimum of one segment size each round-trip time. This behavior can continue until the CWND reaches the size of the receiver's advertised window or until a loss occurs, in some embodiments.
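The growth pattern can be made concrete with a toy model. The sketch below assumes the simplest case of one ACK per segment, so the window doubles every round trip; real TCP stacks (delayed ACKs, losses) grow more slowly.

```python
def slow_start_rtts(initial_cwnd: int = 1, rwnd_segments: int = 64) -> int:
    """Round trips until the congestion window reaches the receiver's
    advertised window, assuming one ACK per segment (window doubles per RTT)."""
    cwnd, rtts = initial_cwnd, 0
    while cwnd < rwnd_segments:
        cwnd *= 2
        rtts += 1
    return rtts

print(slow_start_rtts())        # 6: 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64
print(slow_start_rtts(10, 64))  # 3 round trips from an initial window of 10 segments
```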
In some embodiments, the process 100 does not examine the sizes of all packets but only examines the sizes of certain packets. This is because there can be many different types of packets flowing through the network, such as Address Resolution Protocol (ARP) packets, unicast flood packets, etc. The process 100 of some embodiments selectively examines packets associated with a set of network hosts. In some embodiments, the process 100 accomplishes this by looking at the sizes of packets associated with one or more flow entries in a set of flow tables.
The process of some embodiments does not examine the sizes of all packets. In some embodiments, the process examines packets associated with a data flow from a network host. Alternatively, or conjunctively, the process of some embodiments examines packets that are destined for the network host. In other words, the process of some embodiments only examines inbound traffic to and/or outbound traffic from a network host.
To selectively examine certain packets, the process 100 of some embodiments installs a flow entry with a conditional action in the flow table. Typically, an action specifies dropping the packet or outputting the packet to one or more egress ports. However, the flow's conditional action specifies that, if the packet exceeds a certain size, the packet should be sent to another component (e.g., a daemon process) of the forwarding element for further processing. The component may then identify one or more pieces of information associated with the packet that can be used to identify the elephant flow or other packets in the same elephant flow.
Some embodiments perform variations on the process 100. The specific operations of the process 100 may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments.
In addition, as mentioned above, the process 100 of some embodiments is performed by a forwarding element. In some embodiments, the forwarding element is a hardware forwarding element. The hardware forwarding element may have application-specific integrated circuits (ASICs) that are specifically designed to support in-hardware forwarding. In some embodiments, the forwarding element is a software forwarding element, such as Open vSwitch (OVS). Different from a hardware forwarding element, the software forwarding element may operate on an x86 box or a computing device (e.g., a host machine). In some embodiments, the forwarding element (e.g., software or hardware forwarding element) is a physical forwarding element that operates in conjunction with one or more other physical forwarding elements to collectively implement different logical forwarding elements (e.g., logical switches, logical routers, etc.) for different logical networks of different tenants, users, departments, etc. that use the same shared computing and networking resources. Accordingly, the term “physical forwarding element” is used herein to differentiate it from a logical forwarding element.
In some embodiments, the forwarding element is an edge forwarding element that is in a unique position to check the sizes of different packets before they are segmented into MTU-sized packets by a NIC. This is important because a non-edge forwarding element that is not in such a unique position may not be able to detect an elephant flow based solely on packet size. For instance, if the MTU size is less than the threshold size, then the non-edge forwarding element will not be able to use packet size to differentiate packets belonging to elephant flows from other packets belonging to mice flows.
Several more examples of detecting and handling elephant flows will be described in detail below. Specifically, Section I describes examples of how some embodiments detect elephant flows based on packet size. This is followed by Section II, which describes an example logical forwarding element that uses packet size detection to control traffic between several associated machines. Section III then describes an example electronic system with which some embodiments of the invention are implemented.
Having described an example process, an implementation of a forwarding element that examines packet size will now be described.
If there is a miss in the datapath, the control is shifted from the kernel module 325 to the userspace daemon 310. In some embodiments, the control is shifted so that a translation can occur at the userspace to generate and push a flow or rule into the datapath. The userspace daemon 310 operates (e.g., as a daemon or background process) in the userspace 345 to handle such a case when there is no matching flow in the datapath 330.
The userspace daemon 310 of some embodiments installs flows in the datapath 330 based on one or more flow entries from a set of one or more flow tables 315. The set of flow tables 315 is maintained in the userspace 345 rather than the kernel space 350, in some embodiments. When there is a miss in the datapath 330, the userspace daemon 310 may install a rule in the datapath based on a flow entry from a flow table. In this manner, the physical forwarding element 305 can quickly process each subsequent packet with the same set of header values using the rule in the datapath 330. The datapath 330 provides a fast path to process incoming packets because it does not involve any translation at the userspace 345 with the userspace daemon 310.
An example of the physical forwarding element 305 installing a rule to monitor packet size will now be described by reference to the four stages 301-304.
In the first stage 301, the physical forwarding element 305 receives a packet 335. The second stage 302 shows the physical forwarding element 305 finding no matching flow for the packet 335 in the datapath 330. In particular, the kernel module 325 has consulted the datapath to find a matching flow for the packet 335. As there is no flow in the datapath 330, the kernel module 325 has called the userspace daemon 310 to handle the miss.
In some embodiments, when there is no matching flow, the kernel module 325 sends the packet to the userspace daemon 310. The userspace daemon 310 then performs a lookup operation (e.g., a hash-based lookup) to find one or more matching flows in a set of one or more flow tables 315. If a match is found, the userspace daemon 310 sends the packet back to kernel module 325 with a set of one or more actions to perform on the packet. The userspace daemon 310 also pushes a flow to the datapath 330 to process each subsequent packet with the same set of packet header values.
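The division of labor between the datapath and the userspace daemon can be sketched as follows. Plain dicts stand in for the kernel flow cache and the userspace flow tables; none of the names mirror Open vSwitch's actual internal interfaces.

```python
datapath_cache: dict = {}                           # kernel fast path (cf. 330)
flow_tables = [{("src_a", "dst_b"): ["output:2"]}]  # userspace tables (cf. 315)

def receive(headers: tuple) -> list:
    actions = datapath_cache.get(headers)
    if actions is None:                    # miss: control shifts to userspace
        for table in flow_tables:          # e.g., a hash-based lookup per table
            if headers in table:
                actions = table[headers]
                break
        else:
            actions = ["drop"]             # no matching flow in any table
        datapath_cache[headers] = actions  # push a flow for subsequent packets
    return actions

print(receive(("src_a", "dst_b")))  # first packet: miss, userspace lookup, install
print(receive(("src_a", "dst_b")))  # subsequent packets: fast path only
```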
In the third stage 303, the userspace daemon 310 consults one or more flow tables 315 to generate a flow 340 to store in the datapath 330. The fourth stage 304 shows the datapath 330 after the flow 340 has been stored in the datapath. The flow 340 includes a set of match fields and a set of actions to perform on each packet that has a set of header values that match the set of match fields. To simplify the description, the match fields are listed as field 1 through field N. Each of these fields could be a number of different fields. Examples of such fields include source MAC address, destination MAC address, source IP address, destination IP address, source port number, destination port number, the protocol in use, etc.
To examine the size of a packet, the physical forwarding element 305 of some embodiments installs a flow with a conditional action. Typically, an action specifies dropping the packet or outputting the packet to one or more ports. However, the flow 340 includes a conditional action. This action specifies that, if the packet exceeds a certain size, the packet should be sent to the userspace 345 for further processing. Otherwise, if the packet does not exceed the certain size, then the packet should be output to a particular port (e.g., port 2).
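One way to model such a flow entry is sketched below; the match fields, the 30 KB limit, and the action strings are illustrative stand-ins for the example's field 1 through field N, size threshold, and port 2, not a real flow format.

```python
flow_340 = {
    "match": {"in_port": 1, "eth_type": 0x0800},  # stands in for field 1..field N
    "size_limit": 30 * 1024,                      # the "certain size" (30 KB here)
    "if_over": "send_to_userspace",               # report a potential elephant flow
    "if_under": "output:2",                       # otherwise forward to port 2
}

def conditional_action(flow: dict, packet_size: int) -> str:
    return flow["if_over"] if packet_size > flow["size_limit"] else flow["if_under"]

print(conditional_action(flow_340, 16 * 1024))  # 'output:2'
print(conditional_action(flow_340, 32 * 1024))  # 'send_to_userspace'
```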
In the example described above, the physical forwarding element 305 stores a flow entry 340 with a conditional action in the datapath 330. The conditional action specifies outputting the packet to a particular port if the packet's size does not exceed a certain size. If it exceeds the certain size, the conditional action specifies sending the packet to the userspace for further processing.
Three operational stages 405-415 of the physical forwarding element 305 are described below.
The first stage 405 shows the physical forwarding element 305 receiving the packet 420. The packet 420 is shown with its size, namely 16 KB. The packet 420 also has header values that match the match field values of the flow entry 340 in the datapath 330.
In the second stage 410, the kernel module 325 performs a lookup against one or more flows in the datapath 330 to find a matching flow. In some embodiments, the kernel module 325 parses the packet to extract or strip its header values. The kernel module then performs a hash-based lookup using the packet's header values. As the flow entry 340 is a match, the kernel module 325 selects the flow entry in order to process the packet 420.
The third stage 415 illustrates the kernel module 325 acting as an elephant flow detector. Specifically, the kernel module 325 applies the conditional action of the flow entry 340 to the packet 420. In this stage 415, the kernel module 325 identifies the size of the packet 420 and determines that it does not exceed a threshold value (e.g., 30 KB). Hence, the physical forwarding element 305 processes the packet 420 normally without performing the userspace action. In other words, the physical forwarding element 305 outputs the packet to a specified port and does not send the packet to the userspace daemon 310 for further processing.
In the example described above, the physical forwarding element 305 determines that the size of the packet does not exceed the specified size limit. Therefore, the physical forwarding element 305 processes the packet normally without reporting it to the userspace daemon 310.
The first stage 505 shows the physical forwarding element 305 receiving the packet 525. The packet 525 is shown with its size, namely 32 KB. The packet 525 is larger than the one shown in the previous figure. The size of the packet 525 also exceeds the specified size limit, which is 30 KB. The packet 525 has header values that match the match field values of the flow entry 340 in the datapath 330. In the second stage 510, the kernel module 325 performs a lookup against one or more flows in the datapath 330 to find a matching flow. In some embodiments, the kernel module 325 parses the packet to extract or strip its header values. The kernel module then performs a hash-based lookup using the packet's header values. As the flow entry 340 is a match, the kernel module 325 selects the flow entry in order to process the packet 525.
The third stage 515 illustrates the kernel module 325 acting as an elephant flow detector. Specifically, the kernel module 325 applies the conditional action of the flow entry 340 to the packet 525. In this stage 515, the kernel module 325 identifies the size of the packet 525 and determines that it does exceed a threshold value (e.g., 30 KB). As the size limit has been reached, the physical forwarding element 305 treats the packet 525 differently than the one shown in the previous figure. Specifically, the kernel module 325 reports the elephant flow by sending the packet 525 to the userspace daemon 310. In some embodiments, the userspace daemon 310 receives the packet and stores data relating to the elephant flow. Examples of such data include header values (e.g., a group of tuples) or any other piece of data (e.g., tunnel information) that can be used to identify each packet that belongs to the elephant flow. For instance, upon receiving the packet 525, the userspace daemon 310 might store a set of header values that identifies the data flow with a marking or label that identifies or indicates that the data flow is an elephant flow.
The fourth stage 520 illustrates the physical forwarding element 305 processing the packet associated with the elephant flow. Particularly, the userspace daemon 310 of some embodiments instructs the kernel module 325 to update the flow entry 340 in the datapath 330. In some embodiments, the userspace daemon 310 generates another flow entry without the conditional action. This is shown in the fourth stage 520 with the updated flow entry 530. The match field values of the flow entry 530 remain the same as those of the flow entry 340. However, the conditional action has been replaced with a traditional action. This action specifies outputting any matching packet to a particular output port (e.g., port 2). In addition to updating the flow entry 340, the physical forwarding element outputs the packet 525 through a specified output port.
The physical forwarding element 305 updates the flow entry 340 in the datapath to prevent another packet associated with the same data flow from being reported as one belonging to an elephant flow. That is, the physical forwarding element 305 updates the flow entry 340 to prevent multiple reports for the same elephant flow. Another reason that the physical forwarding element updates the flow entry 340 is to avoid the delay that is introduced each time control shifts from the fast path to the userspace and the userspace daemon performs additional operations.
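A sketch of this report-once behavior, continuing the illustrative flow format used above (again an assumption, not an actual datapath interface):

```python
reported_elephants: set = set()

def report_once_and_disarm(flow: dict, flow_key: tuple) -> dict:
    """Report the elephant flow on first sight, then replace the conditional
    action with a plain output action so later packets of the same flow stay
    in the fast path and are not re-reported."""
    if flow_key not in reported_elephants:
        reported_elephants.add(flow_key)   # e.g., hand the packet to the daemon
    updated = dict(flow)
    updated.pop("size_limit", None)        # drop the conditional pieces...
    updated.pop("if_over", None)
    updated["action"] = updated.pop("if_under", "output:2")  # ...keep plain output
    return updated

flow = {"match": {"in_port": 1}, "size_limit": 30 * 1024,
        "if_over": "send_to_userspace", "if_under": "output:2"}
print(report_once_and_disarm(flow, ("src_a", "dst_b")))
# {'match': {'in_port': 1}, 'action': 'output:2'}
```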
In some embodiments, when an elephant is detected, the physical forwarding element 305 identifies various pieces of information that can be used to identify packets belonging to the elephant flow. The physical forwarding element may identify tunnel information, such as the tunnel ID, the IP address of the source tunnel endpoint (e.g., the hypervisor), and the IP address of the destination tunnel endpoint. The physical forwarding element of some embodiments identifies the elephant flow packet's ingress port, source transport layer (e.g., UDP or TCP) port, destination transport layer port, Ethernet type, source Ethernet address, destination Ethernet address, source IP address, and/or destination IP address.
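The identifying information listed above might be captured in a record like the following; the field names are illustrative assumptions, and an implementation may record more, fewer, or differently named fields.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ElephantFlowId:
    tunnel_id: Optional[int]               # set when the flow runs over a tunnel
    src_tunnel_endpoint_ip: Optional[str]  # e.g., the source hypervisor
    dst_tunnel_endpoint_ip: Optional[str]
    ingress_port: int
    src_l4_port: int                       # UDP or TCP source port
    dst_l4_port: int
    eth_type: int
    src_eth_addr: str
    dst_eth_addr: str
    src_ip: str
    dst_ip: str
```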
In the example described above, the physical forwarding element installs a flow in the datapath to monitor packet size. One of ordinary skill in the art would understand that this is just one of several different ways that the physical forwarding element can examine packet size. For instance, instead of using the kernel module as an elephant detector, the physical forwarding element may include one or more separate components to determine whether the traffic being sent contains large TCP segments.
In some embodiments, the physical forwarding element 305 is an edge forwarding element (EFE). Different from a non-edge forwarding element (NEFE), the EFE is in a unique position to identify elephant flows. The EFE has the advantage over a NEFE in that it is the last forwarding element before one or more end machines (e.g., VMs, computing devices). Thus, the EFE can more easily monitor traffic coming from and going to an end machine than a NEFE. The EFE of some embodiments also has an advantage over the NEFE because the NEFE may not be able to detect an elephant flow based on the size of a packet. For instance, a NEFE may never receive large TCP packets but only receive smaller MTU-sized packets.
In some embodiments, the forwarding element is an edge forwarding element that is in a unique position to check the sizes of different packets before they are segmented into MTU-sized packets by a network interface controller or card (NIC). This is important because a non-edge forwarding element that is not in such a unique position may not be able to detect an elephant flow based solely on packet size. For instance, if the MTU size is less than the threshold size, then the non-edge forwarding element will not be able to use packet size to differentiate packets belonging to elephant flows from other packets belonging to mice flows.
As shown, the NIC 615 receives each TCP packet from the EFE 305. The NIC 615 also segments each TCP packet into a number of maximum transmission unit (MTU)-sized packets 635. As shown, each MTU-sized packet is smaller than at least some of the larger TCP packets. The MTU may also be less than the threshold packet size that is used to determine an elephant flow. Therefore, a NEFE (not shown) that receives and forwards the MTU-sized packets 635 from the NIC 615 may not be able to use the size of each MTU-sized packet to determine that the data flow associated with the packets is an elephant flow. Thus, the EFE 305 is in a unique position in that it can see TCP packets before they are segmented into MTU-sized packets and sent over the network through the NIC 615.
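The arithmetic behind this observation is simple, as the sketch below shows for an assumed 32 KB TCP packet, 1500-byte MTU, and the 30 KB threshold used earlier; every post-segmentation packet is far below the threshold, so a downstream element cannot apply the size test.

```python
TCP_PACKET = 32 * 1024  # large TCP packet seen by the EFE before segmentation
MTU = 1500              # typical Ethernet MTU, in bytes (assumed)
THRESHOLD = 30 * 1024   # detection threshold used in the examples above

wire_packets = -(-TCP_PACKET // MTU)  # ceiling division: 22 MTU-sized packets
print(wire_packets, MTU > THRESHOLD)  # 22 False -> the size test fails downstream
```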
In some embodiments, the forwarding element (e.g., software or hardware forwarding element) is a physical forwarding element that operates in conjunction with one or more other physical forwarding elements to collectively implement different logical forwarding elements (e.g., logical switches, logical routers, etc.) for different logical networks of different tenants, users, departments, etc. that use the same shared computing and networking resources.
As shown in the physical view 705, each physical forwarding element (715 or 720) examines the size of the packet sent from the corresponding machine (740 or 745) over the established tunnel. This is conceptually shown with a packet size checker (730 or 735) that operates on each forwarding element (715 or 720) to check the size of each packet sent from the corresponding machine (740 or 745).
The logical view 710 shows the logical forwarding element 745 that has been implemented with the instructions from the controller cluster 725. As shown, the logical forwarding element 745 performs elephant detection on each packet sent between the two machines 740 and 745. For instance, packets sent from the machine 740 to the machine 745 are examined by the size checker 730, while packets sent from the machine 745 to the machine 740 are examined by the size checker 735. Thus, the inbound and outbound traffic between these two machines 740 and 745 are handled by the two size checkers 730 and 735.
One of the benefits of examining packet size to detect elephant flows is that it can be done without expending significant computing resources.
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more computational or processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, random access memory (RAM) chips, hard drives, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 805 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 800. For instance, the bus 805 communicatively connects the processing unit(s) 810 with the read-only memory 830, the system memory 825, and the permanent storage device 835.
From these various memory units, the processing unit(s) 810 retrieves instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
The read-only-memory (ROM) 830 stores static data and instructions that are needed by the processing unit(s) 810 and other modules of the electronic system. The permanent storage device 835, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 800 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 835.
Other embodiments use a removable storage device (such as a floppy disk, flash memory device, etc., and its corresponding drive) as the permanent storage device. Like the permanent storage device 835, the system memory 825 is a read-and-write memory device. However, unlike the storage device 835, the system memory 825 is a volatile read-and-write memory, such as a random-access memory. The system memory 825 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 825, the permanent storage device 835, and/or the read-only memory 830. From these various memory units, the processing unit(s) 810 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 805 also connects to the input and output devices 840 and 845. The input devices 840 enable the user to communicate information and select commands to the electronic system. The input devices 840 include alphanumeric keyboards and pointing devices (also called “cursor control devices”), cameras (e.g., webcams), microphones or similar devices for receiving voice commands, etc. The output devices 845 display images generated by the electronic system or otherwise output data. The output devices 845 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD), as well as speakers or similar audio output devices. Some embodiments include devices such as a touchscreen that function as both input and output devices.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. In addition, some embodiments execute software stored in programmable logic devices (PLDs), ROM, or RAM devices.
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention.
This application is a continuation of U.S. patent application Ser. No. 14/231,652, filed Mar. 31, 2014, now published as U.S. Patent Publication 2015/0163142. U.S. patent application Ser. No. 14/231,652 claims the benefit of U.S. Provisional Patent Application 61/913,899, entitled “Detecting and Handling Elephant Flows”, filed on Dec. 9, 2013. U.S. patent application Ser. No. 14/231,652, now published as U.S. Patent Publication 2015/0163142, and U.S. Provisional Patent Application 61/913,899 are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5224100 | Lee et al. | Jun 1993 | A |
5245609 | Ofek et al. | Sep 1993 | A |
5265092 | Soloway et al. | Nov 1993 | A |
5504921 | Dev et al. | Apr 1996 | A |
5550816 | Hardwick et al. | Aug 1996 | A |
5668810 | Cannella, Jr. | Sep 1997 | A |
5729685 | Chatwani et al. | Mar 1998 | A |
5751967 | Raab et al. | May 1998 | A |
5781534 | Perlman et al. | Jul 1998 | A |
6104699 | Holender et al. | Aug 2000 | A |
6104700 | Haddock et al. | Aug 2000 | A |
6141738 | Munter et al. | Oct 2000 | A |
6219699 | McCloghrie et al. | Apr 2001 | B1 |
6430160 | Smith et al. | Aug 2002 | B1 |
6456624 | Eccles et al. | Sep 2002 | B1 |
6512745 | Abe et al. | Jan 2003 | B1 |
6539432 | Taguchi et al. | Mar 2003 | B1 |
6658002 | Ross et al. | Dec 2003 | B1 |
6680934 | Cain | Jan 2004 | B1 |
6721334 | Ketcham | Apr 2004 | B1 |
6785843 | McRae et al. | Aug 2004 | B1 |
6941487 | Balakrishnan et al. | Sep 2005 | B1 |
6963585 | Le Pennec et al. | Nov 2005 | B1 |
6999454 | Crump | Feb 2006 | B1 |
7012919 | So et al. | Mar 2006 | B1 |
7079544 | Wakayama et al. | Jul 2006 | B2 |
7149817 | Pettey | Dec 2006 | B2 |
7149819 | Pettey | Dec 2006 | B2 |
7197572 | Matters et al. | Mar 2007 | B2 |
7200144 | Terrell et al. | Apr 2007 | B2 |
7209439 | Rawlins et al. | Apr 2007 | B2 |
7283473 | Arndt et al. | Oct 2007 | B2 |
7342916 | Das et al. | Mar 2008 | B2 |
7362752 | Kastenholz | Apr 2008 | B1 |
7370120 | Kirsch et al. | May 2008 | B2 |
7391771 | Orava et al. | Jun 2008 | B2 |
7450598 | Chen et al. | Nov 2008 | B2 |
7463579 | Lapuh et al. | Dec 2008 | B2 |
7478173 | Delco | Jan 2009 | B1 |
7483370 | Dayal et al. | Jan 2009 | B1 |
7533176 | Freimuth et al. | May 2009 | B2 |
7555002 | Arndt et al. | Jun 2009 | B2 |
7606260 | Oguchi et al. | Oct 2009 | B2 |
7627692 | Pessi | Dec 2009 | B2 |
7633955 | Saraiya et al. | Dec 2009 | B1 |
7634622 | Musoll et al. | Dec 2009 | B1 |
7640353 | Shen et al. | Dec 2009 | B2 |
7643488 | Khanna et al. | Jan 2010 | B2 |
7649851 | Takashige et al. | Jan 2010 | B2 |
7706266 | Plamondon | Apr 2010 | B2 |
7710874 | Balakrishnan et al. | May 2010 | B2 |
7760735 | Chen et al. | Jul 2010 | B1 |
7764599 | Doi et al. | Jul 2010 | B2 |
7792987 | Vohra et al. | Sep 2010 | B1 |
7802000 | Huang et al. | Sep 2010 | B1 |
7808919 | Nadeau et al. | Oct 2010 | B2 |
7808929 | Wong et al. | Oct 2010 | B2 |
7818452 | Matthews et al. | Oct 2010 | B2 |
7826482 | Minei et al. | Nov 2010 | B1 |
7839847 | Nadeau et al. | Nov 2010 | B2 |
7885276 | Lin | Feb 2011 | B1 |
7936770 | Frattura et al. | May 2011 | B1 |
7937438 | Miller et al. | May 2011 | B1 |
7937492 | Kompella et al. | May 2011 | B1 |
7940763 | Kastenholz | May 2011 | B1 |
7948986 | Ghosh et al. | May 2011 | B1 |
7953865 | Miller et al. | May 2011 | B1 |
7991859 | Miller et al. | Aug 2011 | B1 |
7995483 | Bayar et al. | Aug 2011 | B1 |
8004990 | Callon | Aug 2011 | B1 |
8027354 | Portolani et al. | Sep 2011 | B1 |
8031606 | Memon et al. | Oct 2011 | B2 |
8031633 | Bueno et al. | Oct 2011 | B2 |
8046456 | Miller et al. | Oct 2011 | B1 |
8054832 | Shukla et al. | Nov 2011 | B1 |
8055789 | Richardson et al. | Nov 2011 | B2 |
8060875 | Lambeth | Nov 2011 | B1 |
8131852 | Miller et al. | Mar 2012 | B1 |
8149737 | Metke et al. | Apr 2012 | B2 |
8155028 | Abu-Hamdeh et al. | Apr 2012 | B2 |
8161270 | Parker et al. | Apr 2012 | B1 |
8166201 | Richardson et al. | Apr 2012 | B2 |
8199750 | Schultz et al. | Jun 2012 | B1 |
8223668 | Allan et al. | Jul 2012 | B2 |
8224931 | Brandwine et al. | Jul 2012 | B1 |
8224971 | Miller et al. | Jul 2012 | B1 |
8265075 | Pandey | Sep 2012 | B2 |
8281067 | Stolowitz | Oct 2012 | B2 |
8312129 | Miller et al. | Nov 2012 | B1 |
8339959 | Moisand et al. | Dec 2012 | B1 |
8339994 | Gnanasekaran et al. | Dec 2012 | B2 |
8345558 | Nicholson et al. | Jan 2013 | B2 |
8351418 | Zhao et al. | Jan 2013 | B2 |
8355328 | Matthews et al. | Jan 2013 | B2 |
8456984 | Ranganathan et al. | Jun 2013 | B2 |
8504718 | Wang et al. | Aug 2013 | B2 |
8571031 | Davies et al. | Oct 2013 | B2 |
8611351 | Gooch et al. | Dec 2013 | B2 |
8612627 | Brandwine | Dec 2013 | B1 |
8619731 | Montemurro et al. | Dec 2013 | B2 |
8625594 | Safrai et al. | Jan 2014 | B2 |
8625603 | Ramakrishnan et al. | Jan 2014 | B1 |
8625616 | Vobbilisetty et al. | Jan 2014 | B2 |
8644188 | Brandwine et al. | Feb 2014 | B1 |
8762501 | Kempf et al. | Jun 2014 | B2 |
8819259 | Zuckerman et al. | Aug 2014 | B2 |
8838743 | Lewites et al. | Sep 2014 | B2 |
8976814 | Dipasquale | Mar 2015 | B2 |
9032095 | Traina et al. | May 2015 | B1 |
9548924 | Pettit | Jan 2017 | B2 |
9762507 | Gandham et al. | Sep 2017 | B1 |
20010043614 | Viswanadham et al. | Nov 2001 | A1 |
20020062422 | Butterworth et al. | May 2002 | A1 |
20020093952 | Gonda | Jul 2002 | A1 |
20020194369 | Rawlins et al. | Dec 2002 | A1 |
20030041170 | Suzuki | Feb 2003 | A1 |
20030058850 | Rangarajan et al. | Mar 2003 | A1 |
20030063556 | Hernandez | Apr 2003 | A1 |
20030093341 | Millard et al. | May 2003 | A1 |
20030191841 | Deferranti et al. | Oct 2003 | A1 |
20040073659 | Rajsic et al. | Apr 2004 | A1 |
20040098505 | Clemmensen | May 2004 | A1 |
20040186914 | Shimada | Sep 2004 | A1 |
20040264472 | Oliver et al. | Dec 2004 | A1 |
20040267866 | Carollo et al. | Dec 2004 | A1 |
20040267897 | Hill et al. | Dec 2004 | A1 |
20050018669 | Arndt et al. | Jan 2005 | A1 |
20050027881 | Figueira et al. | Feb 2005 | A1 |
20050053079 | Havala | Mar 2005 | A1 |
20050083953 | May | Apr 2005 | A1 |
20050111445 | Wybenga et al. | May 2005 | A1 |
20050120160 | Plouffe et al. | Jun 2005 | A1 |
20050132044 | Guingo et al. | Jun 2005 | A1 |
20050182853 | Lewites et al. | Aug 2005 | A1 |
20050220096 | Friskney et al. | Oct 2005 | A1 |
20050232230 | Nagami et al. | Oct 2005 | A1 |
20060002370 | Rabie et al. | Jan 2006 | A1 |
20060026225 | Canali et al. | Feb 2006 | A1 |
20060028999 | Iakobashvili et al. | Feb 2006 | A1 |
20060029056 | Perera et al. | Feb 2006 | A1 |
20060037075 | Frattura et al. | Feb 2006 | A1 |
20060104286 | Cheriton | May 2006 | A1 |
20060140118 | Alicherry et al. | Jun 2006 | A1 |
20060174087 | Hashimoto et al. | Aug 2006 | A1 |
20060187908 | Shimozono et al. | Aug 2006 | A1 |
20060193266 | Siddha et al. | Aug 2006 | A1 |
20060206655 | Chappell et al. | Sep 2006 | A1 |
20060221961 | Basso et al. | Oct 2006 | A1 |
20060246900 | Zheng | Nov 2006 | A1 |
20060262778 | Haumont et al. | Nov 2006 | A1 |
20060282895 | Rentzis et al. | Dec 2006 | A1 |
20060291388 | Amdahl et al. | Dec 2006 | A1 |
20070050763 | Kagan et al. | Mar 2007 | A1 |
20070055789 | Claise et al. | Mar 2007 | A1 |
20070064673 | Bhandaru et al. | Mar 2007 | A1 |
20070156919 | Potti et al. | Jul 2007 | A1 |
20070258382 | Foll et al. | Nov 2007 | A1 |
20070260721 | Bose et al. | Nov 2007 | A1 |
20070283412 | Lie et al. | Dec 2007 | A1 |
20070286185 | Eriksson et al. | Dec 2007 | A1 |
20070297428 | Bose et al. | Dec 2007 | A1 |
20080002579 | Lindholm et al. | Jan 2008 | A1 |
20080002683 | Droux et al. | Jan 2008 | A1 |
20080049614 | Briscoe et al. | Feb 2008 | A1 |
20080049621 | McGuire et al. | Feb 2008 | A1 |
20080049786 | Ram et al. | Feb 2008 | A1 |
20080059556 | Greenspan et al. | Mar 2008 | A1 |
20080071900 | Hecker et al. | Mar 2008 | A1 |
20080086726 | Griffith et al. | Apr 2008 | A1 |
20080159301 | de Heer | Jul 2008 | A1 |
20080240095 | Basturk | Oct 2008 | A1 |
20090006607 | Bu et al. | Jan 2009 | A1 |
20090010254 | Shimada | Jan 2009 | A1 |
20090046581 | Eswaran et al. | Feb 2009 | A1 |
20090150527 | Tripathi et al. | Jun 2009 | A1 |
20090292858 | Lambeth et al. | Nov 2009 | A1 |
20100128623 | Dunn et al. | May 2010 | A1 |
20100131636 | Suri et al. | May 2010 | A1 |
20100157942 | An et al. | Jun 2010 | A1 |
20100214949 | Smith et al. | Aug 2010 | A1 |
20100232435 | Jabr et al. | Sep 2010 | A1 |
20100254385 | Sharma et al. | Oct 2010 | A1 |
20100257263 | Casado et al. | Oct 2010 | A1 |
20100275199 | Smith et al. | Oct 2010 | A1 |
20100306408 | Greenberg et al. | Dec 2010 | A1 |
20110022695 | Dalal et al. | Jan 2011 | A1 |
20110075664 | Lambeth et al. | Mar 2011 | A1 |
20110085461 | Liu et al. | Apr 2011 | A1 |
20110085557 | Gnanasekaran et al. | Apr 2011 | A1 |
20110085559 | Chung et al. | Apr 2011 | A1 |
20110085563 | Kotha et al. | Apr 2011 | A1 |
20110128959 | Bando et al. | Jun 2011 | A1 |
20110164503 | Yong et al. | Jul 2011 | A1 |
20110194567 | Shen | Aug 2011 | A1 |
20110202920 | Takase | Aug 2011 | A1 |
20110249970 | Eddleston et al. | Oct 2011 | A1 |
20110261825 | Ichino | Oct 2011 | A1 |
20110299413 | Chatwani et al. | Dec 2011 | A1 |
20110299534 | Koganti et al. | Dec 2011 | A1 |
20110299537 | Saraiya et al. | Dec 2011 | A1 |
20110305167 | Koide | Dec 2011 | A1 |
20110317559 | Kern et al. | Dec 2011 | A1 |
20110317696 | Aldrin et al. | Dec 2011 | A1 |
20120054367 | Ramakrishnan et al. | Mar 2012 | A1 |
20120079478 | Galles et al. | Mar 2012 | A1 |
20120131222 | Curtis et al. | May 2012 | A1 |
20120159454 | Barham et al. | Jun 2012 | A1 |
20120182992 | Cowart et al. | Jul 2012 | A1 |
20120243539 | Keesara | Sep 2012 | A1 |
20120287791 | Xi et al. | Nov 2012 | A1 |
20130024579 | Zhang et al. | Jan 2013 | A1 |
20130054761 | Kempf et al. | Feb 2013 | A1 |
20130058346 | Sridharan et al. | Mar 2013 | A1 |
20130064088 | Yu et al. | Mar 2013 | A1 |
20130067067 | Miri et al. | Mar 2013 | A1 |
20130163427 | Beliveau et al. | Jun 2013 | A1 |
20130163475 | Beliveau et al. | Jun 2013 | A1 |
20130286846 | Atlas et al. | Oct 2013 | A1 |
20130287026 | Davie | Oct 2013 | A1 |
20130322248 | Guo | Dec 2013 | A1 |
20130332602 | Nakil et al. | Dec 2013 | A1 |
20130339544 | Mithyantha | Dec 2013 | A1 |
20140019639 | Ueno | Jan 2014 | A1 |
20140029451 | Nguyen | Jan 2014 | A1 |
20140108738 | Kim et al. | Apr 2014 | A1 |
20140115578 | Cooper et al. | Apr 2014 | A1 |
20140119203 | Sundaram et al. | May 2014 | A1 |
20140173018 | Westphal et al. | Jun 2014 | A1 |
20140195666 | Dumitriu et al. | Jul 2014 | A1 |
20140233421 | Matthews | Aug 2014 | A1 |
20140281030 | Cui et al. | Sep 2014 | A1 |
20140372616 | Arisoylu et al. | Dec 2014 | A1 |
20150016255 | Bisht et al. | Jan 2015 | A1 |
20150071072 | Ratzin et al. | Mar 2015 | A1 |
20150106804 | Chandrashekhar et al. | Apr 2015 | A1 |
20150120959 | Bennett et al. | Apr 2015 | A1 |
20150124825 | Dharmapurikar et al. | May 2015 | A1 |
20150163117 | Lambeth et al. | Jun 2015 | A1 |
20150163144 | Koponen et al. | Jun 2015 | A1 |
20150163145 | Pettit et al. | Jun 2015 | A1 |
20150163146 | Zhang et al. | Jun 2015 | A1 |
20150172075 | Decusatis et al. | Jun 2015 | A1 |
20150180769 | Wang et al. | Jun 2015 | A1 |
20150237097 | Devireddy et al. | Aug 2015 | A1 |
20150341247 | Curtis et al. | Nov 2015 | A1 |
20160094643 | Jain et al. | Mar 2016 | A1 |
20160105333 | Lenglet et al. | Apr 2016 | A1 |
20160156591 | Zhou et al. | Jun 2016 | A1 |
20160182454 | Phonsa et al. | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
1154601 | Nov 2001 | EP |
2002-141905 | May 2002 | JP |
2003-069609 | Mar 2003 | JP |
2003-124976 | Apr 2003 | JP |
2003-318949 | Nov 2003 | JP |
WO 9506989 | Mar 1995 | WO |
WO 2004047377 | Aug 2004 | WO |
WO 2012126488 | Sep 2012 | WO |
WO 2013184846 | Dec 2013 | WO |
Entry |
---|
Anwer, Muhammad Bilal, et al., “Building a Fast, Virtualized Data Plane with Programmable Hardware,” Aug. 17, 2009, pp. 1-8, VISA'09, ACM Barcelona, Spain. |
Author Unknown, “Open vSwitch, An Open Virtual Switch,” Dec. 30, 2010, 2 pages. |
Author Unknown, “OpenFlow Switch Specification, Version 0.9.0 (Wire Protocol 0x98),” Jul. 20, 2009, pp. 1-36, Open Networking Foundation. |
Author Unknown, “OpenFlow Switch Specification, Version 1.0.0 (Wire Protocol 0x01),” Dec. 31, 2009, pp. 1-42, Open Networking Foundation. |
Author Unknown, “OpenFlow Switch Specification, Version 1.1.0 Implemented (Wire Protocol 0x02),” Feb. 28, 2011, pp. 1-56, Open Networking Foundation. |
Casado, Martin, et al. “Ethane: Taking Control of the Enterprise,” SIGCOMM'07, Aug. 27-31, 2007, pp. 1-12, ACM, Kyoto, Japan. |
Curtis, Andrew R., et al., “DevoFlow: Scaling Flow Management for High-Performance Networks,” Aug. 15, 2011, pp. 254-265, SIGCOMM, ACM. |
Das, Saurav, et al. “Simple Unified Control for Packet and Circuit Networks,” Month Unknown, 2009, pp. 147-148, IEEE. |
Das, Saurav, et al., “Unifying Packet and Circuit Switched Networks with OpenFlow,” Dec. 7, 2009, 10 pages. |
Fernandes, Natalia C., et al., “Virtual networks:isolation, performance, and trends,” Oct. 7, 2010, 17 pages, Institut Telecom and Springer-Verlag. |
Foster, Nate, et al., “Frenetic: A Network Programming Language,” ICFP '11, Sep. 19-21, 2011, 13 pages, Tokyo, Japan. |
Greenhalgh, Adam, et al., “Flow Processing and The Rise of Commodity Network Hardware,” ACM SIGCOMM Computer Communication Review, Apr. 2009, pp. 21-26, vol. 39, No. 2. |
Gude, Natasha, et al., “NOX: Towards an Operating System for Networks,” Jul. 2008, pp. 105-110, vol. 38, No. 3, ACM SIGCOMM Computer communication Review. |
Hinrichs, Timothy L., et al., “Practical Declarative Network Management,” WREN'09, Aug. 21, 2009, pp. 1-10, Barcelona, Spain. |
Koponen, Teemu, et al., “Network Virtualization in Multi-tenant Datacenters,” Aug. 2013, pp. 1-22, VMware, Inc., Palo Alto, California, USA. |
Koponen, Teemu, et al., “Onix: A Distributed Control Platform for Large-scale Production Networks,” In Proc. OSDI, Oct. 2010, pp. 1-14. |
Loo, Boon Thau, et al., “Declarative Routing: Extensible Routing with Declarative Queries,” In Proc. of SIGCOMM, Aug. 21-26, 2005, 12 pages, Philadelphia, PA, USA. |
Loo, Boon Thau, et al., “Implementing Declarative Overlays,” In Proc. of SOSP, Oct. 2005, 16 pages. Brighton, United Kingdom. |
Matsumoto, Nobutaka, et al., “LightFlow: Speeding Up GPU-based Flow Switching and Facilitating Maintenance of Flow Table,” 2012 IEEE 13th International Conference on High Performance Switching and Routing, Jun. 24, 2012, pp. 76-81, IEEE. |
Mckeown, Nick, et al., “OpenFlow: Enabling Innovation in Campus Networks,” ACS SIGCOMM Computer communication Review, Apr. 2008, pp. 69-74, vol. 38, No. 2. |
Nygren, Anders, et al., “OpenFlow Switch Specification, Version 1.3.4 (Protocol version 0x04),” Mar. 27, 2014, pp. 1-84, Open Networking Foundation. (Part 1 of 2). |
Nygren, Anders, et al., “OpenFlow Switch Specification, Version 1.3.4 (Protocol version 0x04),” Mar. 27, 2014, pp. 85-171, Open Networking Foundation. (Part 2 of 2). |
Pettit, Justin, et al., “Virtual Switching in an Era of Advanced Edges,” Sep. 2010, 7 pages. |
Pfaff, B., et al., “The Open vSwitch Database Management Protocol,” draft-pfaff-ovsdb-proto-00, Aug. 20, 2012, pp. 1-34, Nicira, Inc., Palo Alto, California, USA. |
Pfaff, Ben, et al., “OpenFlow Switch Specification,” Sep. 6, 2012, 128 pages, The Open Networking Foundation. |
Pfaff, Ben., et al., “Extending Networking into the Virtualization Layer,” Proc. Of HotNets, Oct. 2009, pp. 1-6. |
Phaal, Peter, et al., “sFlow Version 5,” Jul. 2004, 46 pages, sFlow.org. |
Phan, Doantam, et al., “Visual Analysis of Network Flow Data with Timelines and Event Plots,” month unknown, 2007, pp. 1-16, VizSEC. |
Popa, Lucian, et al., “Building Extensible Networks with Rule-Based Forwarding,” In USENIX OSDI, Month Unknown, 2010, pp. 1-14. |
Sherwood, Rob, et al., “Carving Research Slices Out of Your Production Networks with OpenFlow,” ACM SIGCOMM Computer Communications Review, Jan. 2010, pp. 129-130, vol. 40, No. 1. |
Sherwood, Rob, et al., “FlowVisor: A Network Virtualization Layer,” Oct. 14, 2009, pp. 1-14, OPENFLOW-TR-2009-1. |
Tavakoli, Arsalan, et al., “Applying NOX to the Datacenter,” month unknown, 2009, 6 pages, Proceedings of HotNets. |
Yu, Minlan, et al., “Scalable Flow-Based Networking with DIFANE,” Aug. 2010, pp. 1-16, In Proceedings of SIGCOMM. |
Number | Date | Country | |
---|---|---|---|
20170118090 A1 | Apr 2017 | US |
Number | Date | Country | |
---|---|---|---|
61913899 | Dec 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14231652 | Mar 2014 | US |
Child | 15397676 | US |