The present invention is related to the monitoring and control of data packet transmissions between at least two nodes of a communication system. More particularly, the present invention is related to the analysis of the contents of the data packets to identify a service, such that an appropriate traffic data pattern model may be selected or generated for use in scheduling the data packet transmissions.
In wireless communication systems such as IEEE 802.11a/b/g networks, there is no provision for differentiating services that are highly delay- and jitter-sensitive so that priority may be assigned to certain types of services. Although newer standard specifications such as IEEE 802.11e provide for such classification, legacy systems that do not implement IEEE 802.11e have no such mechanism. Thus, delay-sensitive services must compete with other types of traffic, such as Internet and background traffic.
In congestion situations, services which are highly sensitive to delay and jitter will be the first to experience poor quality. Congestion mechanisms may downgrade these services, although they should be considered as high priority services. In addition, a scheduling scheme that down-prioritizes users with a low signal-to-noise ratio (SNR) may inadvertently down-prioritize “high priority” services.
The IEEE 802.11e standard currently provides a classification mechanism. Applications can request a high priority service from a medium access control (MAC). However, the IEEE 802.11e standard does not support legacy IEEE 802.11a/b/g systems. In addition, the mechanism in the IEEE 802.11e standard depends on the application to implement the classification process. If the application does not fully support the IEEE 802.11e standard or does not provide a classification, it is not possible to determine whether the service is delay- and jitter-sensitive.
The present invention is related to a communication network which includes at least two nodes that exchange data packets. The communication network further includes a processor and a data transmission scheduling unit. The processor monitors the data packets, collects and analyzes information contained in the data packets, and identifies a particular service based on the monitoring of the data packets and the analysis of the information contained in the data packets. The data transmission scheduling unit schedules the transmission of data packets exchanged between the nodes based on a predefined traffic data pattern model, selected by the processor from a plurality of predefined traffic data pattern models as the model most appropriate for the identified service. Alternatively, a neural network is used to identify the service and select the most appropriate traffic data pattern model used by the data transmission scheduling unit to schedule the transmission of the data packets.
A more detailed understanding of the invention may be had from the following description of a preferred embodiment, given by way of example and to be understood in conjunction with the accompanying drawings wherein:
The present invention is applicable to any communication systems including, but not limited to, IEEE 802.11-based wireless networks.
The features of the present invention may be incorporated into an integrated circuit (IC) or be configured in a circuit comprising a multitude of interconnecting components.
At least one of the processor 108, the predefined traffic data pattern model library 110 and the data transmission scheduling unit 112 may be incorporated into the network controller 106. The processor 108 and the predefined traffic data pattern model library 110 may be combined to form a single entity either external to or within the network controller 106. The processor 108 may include a memory (not shown) for storing information collected from the data packets 114.
The predefined model library 110 may include predefined categories for a plurality of application services in terms of associated parameters. The application services include, but are not limited to, data, voice, image, video, e-mail, web-browsing, file transfer, background or any other applications.
Each application service can be characterized in terms of associated parameters. The parameters include, but are not limited to, a duty cycle, a peak and average throughput, a mean connection time, a payload size, jitter, or the like. For example, a voice service can be characterized as having a 50% duty cycle, (i.e., approximately equal numbers of packets in the uplink and the downlink), a mean connection time of 120 seconds, approximately 50 packets per second, a very small payload, (e.g., about 80 to 160 bytes), very little jitter and a low application throughput, or the like. Similarly, data, image, video, e-mail, file transfer, web browsing, background or all other services may also be characterized in terms of the parameters associated with each service.
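The voice characterization above can be encoded in a simple data structure. The following is an illustrative sketch only; the class name, field names and representation are assumptions for demonstration, not part of the described system:

```python
from dataclasses import dataclass

# Hypothetical encoding of one entry in a predefined traffic data
# pattern model library; names and types are illustrative assumptions.
@dataclass
class TrafficPatternModel:
    service: str
    duty_cycle: float              # fraction of packets in one direction
    mean_connection_time_s: float  # seconds
    packets_per_second: float
    payload_bytes: tuple           # (min, max) typical payload size in bytes
    jitter: str                    # qualitative: "low", "medium", "high"

# Entry for voice, using the figures given in the text above.
VOICE_MODEL = TrafficPatternModel(
    service="voice",
    duty_cycle=0.5,                # roughly equal uplink/downlink packet counts
    mean_connection_time_s=120.0,
    packets_per_second=50.0,
    payload_bytes=(80, 160),
    jitter="low",
)
```

Other services (data, video, file transfer, and so on) would each receive an analogous entry with their own parameter values.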
The processor 108 may monitor the direction of the data packets 114, the frequency of the data packets 114, (e.g., the number of packets transmitted per second in each direction), jitter in receiving or sending the data packets 114, (e.g., whether two packets are received per second, or whether sometimes no packets are received per second), the size of the MPDU of the data packets 114, the life of the identified service, (i.e., traffic stream), or any other relevant information.
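The per-flow bookkeeping described above can be sketched as follows. The class and method names are assumptions, not from the source; the sketch only illustrates how direction counts, packet rate, and jitter could be accumulated:

```python
from collections import defaultdict

class PacketMonitor:
    """Illustrative sketch of per-flow statistics collection: direction
    of packets, packet frequency, inter-arrival jitter, and data volume."""

    def __init__(self):
        self.count = defaultdict(int)   # packets seen per direction
        self.bytes = defaultdict(int)   # bytes seen per direction
        self.timestamps = []            # arrival times in seconds

    def observe(self, direction, size_bytes, t):
        self.count[direction] += 1
        self.bytes[direction] += size_bytes
        self.timestamps.append(t)

    def duty_cycle(self):
        # Fraction of packets travelling uplink out of the total.
        up, down = self.count["up"], self.count["down"]
        total = up + down
        return up / total if total else 0.0

    def packets_per_second(self):
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / span if span > 0 else 0.0

    def interarrival_jitter(self):
        # Variance of inter-arrival gaps; zero means perfectly regular.
        gaps = [b - a for a, b in zip(self.timestamps, self.timestamps[1:])]
        if len(gaps) < 2:
            return 0.0
        mean = sum(gaps) / len(gaps)
        return sum((g - mean) ** 2 for g in gaps) / len(gaps)
```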
This information can be obtained by analyzing the information in the header of the data packets 114, e.g., the IP header and the TCP header. A MAC header of the MPDU may include a source address, a destination address and data size, (i.e., maximum segment size (MSS)). The IP header has fields to indicate the type of service, a source IP address and a destination IP address. The TCP header includes a source port and a destination port, among others.
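Extraction of the header fields named above can be sketched as below. The field offsets follow the standard IPv4 and TCP header layouts; the sketch assumes the buffer begins at the IPv4 header with a TCP header immediately following (no link-layer framing), which is a simplifying assumption:

```python
import struct

def parse_ipv4_tcp(packet: bytes):
    """Minimal sketch: pull out the fields the text mentions from an
    IPv4 header followed by a TCP header."""
    version_ihl = packet[0]
    ihl = (version_ihl & 0x0F) * 4           # IP header length in bytes
    tos = packet[1]                          # type-of-service field
    src_ip = ".".join(str(b) for b in packet[12:16])
    dst_ip = ".".join(str(b) for b in packet[16:20])
    # TCP source and destination ports are the first four bytes after the
    # IP header, in network byte order.
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return {"tos": tos, "src_ip": src_ip, "dst_ip": dst_ip,
            "src_port": src_port, "dst_port": dst_port}
```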
Using the obtained information, an application service between the source node and the destination node can be identified in accordance with the predefined categories by the processor 108 operating in conjunction with the predefined traffic data pattern model library 110. If sufficient statistics are collected from the MPDUs of the on-going service, the service can be matched to one of the predefined models in the model library 110, and the most appropriate predefined model is selected.
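The matching step can be sketched as a nearest-model search over the collected statistics. The model entries, the parameter normalization, and the distance weighting below are all assumptions for demonstration, not taken from the source:

```python
# Hypothetical model library entries; values are illustrative only.
MODELS = [
    {"service": "voice", "duty_cycle": 0.5, "pps": 50, "payload": 120},
    {"service": "file_transfer", "duty_cycle": 0.05, "pps": 800, "payload": 1460},
]

def match_service(stats, models=MODELS):
    """Pick the predefined model whose parameters lie closest to the
    statistics collected from the on-going service."""
    def distance(m):
        # Normalize each parameter so no single one dominates the sum.
        return (abs(stats["duty_cycle"] - m["duty_cycle"])
                + abs(stats["pps"] - m["pps"]) / 1000.0
                + abs(stats["payload"] - m["payload"]) / 1500.0)
    return min(models, key=distance)
```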
By classifying each service, the services can be prioritized by the scheduling unit 112 and the priority of each service can be managed appropriately. For example, in congestion situations, the scheduling unit 112 may keep some services and remove other services, or may down-prioritize some of the services. By identifying the type of service, a basis for determining which services to keep or terminate or which services to down-prioritize is provided.
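The keep-or-remove decision under congestion can be sketched as a priority-ordered admission pass. The priority mapping and the admission logic are assumptions illustrating the idea, not the described scheduler:

```python
# Hypothetical priority per identified service class; 0 = most important.
PRIORITY = {"voice": 0, "video": 1, "web": 2, "background": 3}

def admit_under_congestion(flows, capacity):
    """Serve flows in priority order; flows that no longer fit within the
    available capacity are dropped (or could be down-prioritized)."""
    ordered = sorted(flows, key=lambda f: PRIORITY.get(f["service"], 99))
    kept, load = [], 0
    for flow in ordered:
        if load + flow["rate"] <= capacity:
            kept.append(flow["service"])
            load += flow["rate"]
    return kept
```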
For example, if a higher priority is given to users that have a higher signal-to-noise ratio (SNR) to improve overall system throughput, a detrimental effect will result for high-priority users with a low SNR. In accordance with the scheduling scheme of the present invention, such situations are avoidable. Alternatively, powerful and sophisticated compression algorithms can be used to minimize the size of the data required for the service and to improve the end-user perception.
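One way a scheduler could avoid starving a high-priority user on a poor link is to let service priority dominate the scheduling weight and use SNR only as a secondary term. The formula and constants below are illustrative assumptions, not taken from the source:

```python
PRIORITY_LEVELS = 4  # hypothetical: 0 = highest priority (e.g., voice)

def scheduling_weight(service_priority, snr_db, alpha=0.1):
    """Illustrative weight: priority dominates, while SNR only breaks
    ties among users within the same service class."""
    return (PRIORITY_LEVELS - service_priority) * 100 + alpha * snr_db
```

With this weighting, a voice user at 5 dB SNR still outranks a web user at 30 dB, whereas a pure throughput-maximizing scheduler would favor the stronger link.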
At least one of the processor 208, the neural network 210 and the data transmission scheduling unit 112 may be incorporated into the network controller 106. The processor 208 and the neural network 210 may be combined to form a single entity either external to or within the network controller 106. The processor 208 may include a memory (not shown) for storing information collected from the data packets 114.
Operating in conjunction with the processor 208, the neural network 210 analyzes and selects a traffic data pattern model. The neural network 210 is an information processing paradigm that is inspired by the way biological nervous systems, (such as the brain), process information. The key element of this paradigm is the structure of the information processing system. The neural network 210 comprises a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. The neural network 210 learns by example through a learning process and is configured for a specific application. For example, the neural network 210 may be used for pattern recognition or data classification. A learning process involves adjustments to the synaptic connections that exist between the neurons. Once trained, the neural network 210 may be thought of as an “expert” in the category of information it has been given to analyze.
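The learning process described above can be illustrated at its smallest scale with a single logistic neuron whose weights (the "synaptic connections") are adjusted from labeled traffic examples. This is a minimal sketch under assumed features and training data; a deployed system would use a larger network:

```python
import math

def train_classifier(samples, labels, epochs=200, lr=0.2):
    """Train one logistic neuron by stochastic gradient descent on
    labeled traffic feature vectors."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # neuron activation
            err = p - y                       # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0                  # 1 = voice-like, 0 = other

# Features (duty cycle, packets-per-second / 100) and labels are
# illustrative assumptions only.
TRAIN_X = [(0.50, 0.50), (0.48, 0.55), (0.05, 8.0), (0.03, 7.5)]
TRAIN_Y = [1, 1, 0, 0]
```

After training on the example data, the neuron separates voice-like flows (balanced duty cycle, modest rate) from file-transfer-like flows (one-sided, high rate).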
For example, the neural network 210 can be trained to recognize a model for voice services. In this case, the neural network 210 may include a training mode to adjust for the network latencies. Similarly, the neural network 210 may also be trained for data, image, video, e-mail, file transfer, web-browsing, background or all other services.
Although the features and elements of the present invention are described in the preferred embodiments in particular combinations, each feature or element can be used alone without the other features and elements of the preferred embodiments or in various combinations with or without other features and elements of the present invention.
This application claims the benefit of U.S. Provisional Patent Application No. 60/717,444 filed Sep. 15, 2005, which is incorporated by reference as if fully set forth.