Method and system for inbound content-based QoS

Information

  • Patent Grant
  • Patent Number
    8,064,464
  • Date Filed
    Friday, June 16, 2006
  • Date Issued
    Tuesday, November 22, 2011
Abstract
Certain embodiments of the present invention provide a method for communicating inbound network data to provide QoS. The method includes receiving data over a network at a node, prioritizing the data at the node by assigning a priority to the data, and communicating the data to an application at the node based at least in part on the priority of the data. The data priority is based at least in part on message content. Certain embodiments of the present invention provide a system for communicating inbound networking data to provide QoS. The system includes a data prioritization component adapted to prioritize data by assigning a priority to the data and a data communications component adapted to receive the data over a network and to communicate the data to an application based at least in part on the priority of the data. The data priority is based at least in part on message content.
Description
BACKGROUND OF THE INVENTION

The present invention generally relates to communications networks. More particularly, the present invention relates to systems and methods for inbound content-based Quality of Service.


Communications networks are utilized in a variety of environments. Communications networks typically include two or more nodes connected by one or more links. Generally, a communications network is used to support communication between two or more participant nodes over the links and intermediate nodes in the communications network. There may be many kinds of nodes in the network. For example, a network may include nodes such as clients, servers, workstations, switches, and/or routers. Links may be, for example, modem connections over phone lines, wires, Ethernet links, Asynchronous Transfer Mode (ATM) circuits, satellite links, and/or fiber optic cables.


A communications network may actually be composed of one or more smaller communications networks. For example, the Internet is often described as a network of interconnected computer networks. Each network may utilize a different architecture and/or topology. For example, one network may be a switched Ethernet network with a star topology and another network may be a Fiber-Distributed Data Interface (FDDI) ring.


Communications networks may carry a wide variety of data. For example, a network may carry bulk file transfers alongside data for interactive real-time conversations. The data sent on a network is often sent in packets, cells, or frames. Alternatively, data may be sent as a stream. In some instances, a stream or flow of data may actually be a sequence of packets. Networks such as the Internet provide general purpose data paths between a range of nodes, carrying a vast array of data with different requirements.


Communication over a network typically involves multiple levels of communication protocols. A protocol stack, also referred to as a networking stack or protocol suite, refers to a collection of protocols used for communication. Each protocol may be focused on a particular type of capability or form of communication. For example, one protocol may be concerned with the electrical signals needed to communicate with devices connected by a copper wire. Other protocols may address ordering and reliable transmission between two nodes separated by many intermediate nodes, for example.


Protocols in a protocol stack typically exist in a hierarchy. Often, protocols are classified into layers. One reference model for protocol layers is the Open Systems Interconnection (OSI) model. The OSI reference model includes seven layers: a physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. The physical layer is the "lowest" layer, while the application layer is the "highest" layer. Two well-known transport layer protocols are the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). A well-known network layer protocol is the Internet Protocol (IP).


At the transmitting node, data to be transmitted is passed down the layers of the protocol stack, from highest to lowest. Conversely, at the receiving node, the data is passed up the layers, from lowest to highest. At each layer, the data may be manipulated by the protocol handling communication at that layer. For example, a transport layer protocol may add a header to the data that allows for ordering of packets upon arrival at a destination node. Depending on the application, some layers may not be used, or even present, and data may just be passed through.


One kind of communications network is a tactical data network. A tactical data network may also be referred to as a tactical communications network. A tactical data network may be utilized by units within an organization such as a military (e.g., army, navy, and/or air force). Nodes within a tactical data network may include, for example, individual soldiers, aircraft, command units, satellites, and/or radios. A tactical data network may be used for communicating data such as voice, position telemetry, sensor data, and/or real-time video.


An example of how a tactical data network may be employed is as follows. A logistics convoy may be en route to provide supplies for a combat unit in the field. Both the convoy and the combat unit may be providing position telemetry to a command post over satellite radio links. An unmanned aerial vehicle (UAV) may be patrolling along the road the convoy is taking and transmitting real-time video data to the command post, also over a satellite radio link. At the command post, an analyst may be examining the video data while a controller is tasking the UAV to provide video for a specific section of road. The analyst may then spot an improvised explosive device (IED) that the convoy is approaching and send out an order over a direct radio link for the convoy to halt, alerting it to the presence of the IED.


The various networks that may exist within a tactical data network may have many different architectures and characteristics. For example, a network in a command unit may include a gigabit Ethernet local area network (LAN) along with radio links to satellites and field units that operate with much lower throughput and higher latency. Field units may communicate both via satellite and via direct path radio frequency (RF). Data may be sent point-to-point, multicast, or broadcast, depending on the nature of the data and/or the specific physical characteristics of the network. A network may include radios, for example, set up to relay data. In addition, a network may include a high frequency (HF) network which allows long-range communication. A microwave network may also be used, for example. Due to the diversity of the types of links and nodes, among other reasons, tactical networks often have overly complex network addressing schemes and routing tables. In addition, some networks, such as radio-based networks, may operate using bursts. That is, rather than continuously transmitting data, they send periodic bursts of data. This is useful because the radios are broadcasting on a particular channel that must be shared by all participants, and only one radio may transmit at a time.


Tactical data networks are generally bandwidth-constrained. That is, there is typically more data to be communicated than bandwidth available at any given point in time. These constraints may be due to the demand for bandwidth exceeding the supply, the available communications technology not supplying enough bandwidth to meet users' needs, or both. For example, between some nodes, bandwidth may be on the order of kilobits/sec. In bandwidth-constrained tactical data networks, less important data can clog the network, preventing more important data from getting through in a timely fashion, or even arriving at a receiving node at all. In addition, portions of the networks may include internal buffering to compensate for unreliable links. This may cause additional delays. Further, when the buffers get full, data may be dropped.


In many instances the bandwidth available to a network cannot be increased. For example, the bandwidth available over a satellite communications link may be fixed and cannot effectively be increased without deploying another satellite. In these situations, bandwidth must be managed rather than simply expanded to handle demand. In large systems, network bandwidth is a critical resource. It is desirable for applications to utilize bandwidth as efficiently as possible. In addition, it is desirable that applications avoid “clogging the pipe,” that is, overwhelming links with data, when bandwidth is limited. When bandwidth allocation changes, applications should preferably react. Bandwidth can change dynamically due to, for example, quality of service, jamming, signal obstruction, priority reallocation, and line-of-sight. Networks can be highly volatile and available bandwidth can change dramatically and without notice.


In addition to bandwidth constraints, tactical data networks may experience high latency. For example, a network involving communication over a satellite link may incur latency on the order of half a second or more. For some communications this may not be a problem, but for others, such as real-time, interactive communication (e.g., voice communications), it is highly desirable to minimize latency as much as possible.


Another characteristic common to many tactical data networks is data loss. Data may be lost due to a variety of reasons. For example, a node with data to send may be damaged or destroyed. As another example, a destination node may temporarily drop off of the network. This may occur because, for example, the node has moved out of range, the communication link is obstructed, and/or the node is being jammed. Data may be lost because the destination node is not able to receive it and intermediate nodes lack sufficient capacity to buffer the data until the destination node becomes available. Additionally, intermediate nodes may not buffer the data at all, instead leaving it to the sending node to determine if the data ever actually arrived at the destination.


Often, applications in a tactical data network are unaware of and/or do not account for the particular characteristics of the network. For example, an application may simply assume it has as much bandwidth available to it as it needs. As another example, an application may assume that data will not be lost in the network. Applications which do not take into consideration the specific characteristics of the underlying communications network may behave in ways that actually exacerbate problems. For example, an application may continuously send a stream of data that could just as effectively be sent less frequently in larger bundles. In a broadcast radio network, for example, the continuous stream may incur much greater overhead and effectively starve other nodes of the opportunity to communicate, whereas less frequent bursts would allow the shared bandwidth to be used more effectively.


Certain protocols do not work well over tactical data networks. For example, a protocol such as TCP may not function well over a radio-based tactical network because of the high loss rates and latency such a network may encounter. TCP requires several forms of handshaking and acknowledgments to occur in order to send data. High latency and loss may result in TCP hitting time outs and not being able to send much, if any, meaningful data over such a network.


Information communicated with a tactical data network often has various levels of priority with respect to other data in the network. For example, threat warning receivers in an aircraft may have higher priority than position telemetry information for troops on the ground miles away. As another example, orders from headquarters regarding engagement may have higher priority than logistical communications behind friendly lines. The priority level may depend on the particular situation of the sender and/or receiver. For example, position telemetry data may be of much higher priority when a unit is actively engaged in combat as compared to when the unit is merely following a standard patrol route. Similarly, real-time video data from a UAV may have higher priority when it is over the target area as opposed to when it is merely en route.


There are several approaches to delivering data over a network. One approach, used by many communications networks, is a “best effort” approach. That is, data being communicated will be handled as well as the network can, given other demands, with regard to capacity, latency, reliability, ordering, and errors. Thus, the network provides no guarantees that any given piece of data will reach its destination in a timely manner, or at all. Additionally, no guarantees are made that data will arrive in the order sent or even without transmission errors changing one or more bits in the data.


Another approach is Quality of Service (QoS). QoS refers to one or more capabilities of a network to provide various forms of guarantees with regard to data that is carried. For example, a network supporting QoS may guarantee a certain amount of bandwidth to a data stream. As another example, a network may guarantee that packets between two particular nodes have some maximum latency. Such a guarantee may be useful in the case of a voice communication where the two nodes are two people having a conversation over the network. Delays in data delivery in such a case may result in irritating gaps in communication and/or dead silence, for example.


QoS may be viewed as the capability of a network to provide better service to selected network traffic. The primary goal of QoS is to provide priority including dedicated bandwidth, controlled jitter and latency (required by some real-time and interactive traffic), and improved loss characteristics. Another important goal is making sure that providing priority for one flow does not make other flows fail. That is, guarantees made for subsequent flows must not break the guarantees made to existing flows.


Current approaches to QoS often require every node in a network to support QoS, or, at the very least, for every node in the network involved in a particular communication to support QoS. For example, in current systems, in order to provide a latency guarantee between two nodes, every node carrying the traffic between those two nodes must be aware of and agree to honor, and be capable of honoring, the guarantee.


There are several approaches to providing QoS. One approach is Integrated Services, or “IntServ.” IntServ provides a QoS system wherein every node in the network supports the services and those services are reserved when a connection is set up. IntServ does not scale well because of the large amount of state information that must be maintained at every node and the overhead associated with setting up such connections.


Another approach to providing QoS is Differentiated Services, or “DiffServ.” DiffServ is a class of service model that enhances the best-effort services of a network such as the Internet. DiffServ differentiates traffic by user, service requirements, and other criteria. Then, DiffServ marks packets so that network nodes can provide different levels of service via priority queuing or bandwidth allocation, or by choosing dedicated routes for specific traffic flows. Typically, a node has a variety of queues for each class of service. The node then selects the next packet to send from those queues based on the class categories.
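The per-class queuing described above can be sketched as follows. This is a minimal, hypothetical illustration of strict-priority selection across class-of-service queues, not the DiffServ specification itself; the class names and selection policy are assumptions for the example.

```python
from collections import deque

class ClassOfServiceNode:
    """Sketch of a node holding one FIFO queue per service class."""
    def __init__(self, classes):
        # Classes are listed highest-priority first.
        self.queues = {c: deque() for c in classes}
        self.order = list(classes)

    def enqueue(self, packet, service_class):
        self.queues[service_class].append(packet)

    def next_packet(self):
        # Strict priority: serve the highest non-empty class first.
        for c in self.order:
            if self.queues[c]:
                return self.queues[c].popleft()
        return None

node = ClassOfServiceNode(["expedited", "assured", "best_effort"])
node.enqueue("best-effort-1", "best_effort")
node.enqueue("expedited-1", "expedited")
assert node.next_packet() == "expedited-1"  # higher class served first
```

A real DiffServ node would typically use weighted rather than strict-priority scheduling to avoid starving lower classes; strict priority is used here only to keep the sketch short.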


Existing QoS solutions are often network specific and each network type or architecture may require a different QoS configuration. Due to the mechanisms existing QoS solutions utilize, messages that look the same to current QoS systems may actually have different priorities based on message content. However, data consumers may require access to high-priority data without being flooded by lower-priority data. Existing QoS systems cannot provide QoS based on message content at the transport layer.


As mentioned, existing QoS solutions require at least the nodes involved in a particular communication to support QoS. However, the nodes at the "edge" of the network may be adapted to provide some improvement in QoS, even if they are incapable of making total guarantees. Nodes are considered to be at the edge of the network if they are the participating nodes in a communication (i.e., the transmitting and/or receiving nodes) and/or if they are located at chokepoints in the network. A chokepoint is a section of the network through which all traffic must pass to reach another portion. For example, a router or gateway from a LAN to a satellite link would be a chokepoint, since all traffic from the LAN to any nodes not on the LAN must pass through the gateway to the satellite link.


Thus, there is a need for systems and methods providing QoS in a tactical data network. There is a need for systems and methods for providing QoS on the edge of a tactical data network. Additionally, there is a need for adaptive, configurable QoS systems and methods in a tactical data network.


BRIEF SUMMARY OF THE INVENTION

Certain embodiments of the present invention provide a method for communicating inbound network data to provide QoS. The method includes receiving data over a network at a node, prioritizing the data at the node by assigning a priority to the data, and communicating the data to an application at the node based at least in part on the priority of the data. The priority of the data is based at least in part on message content.


Certain embodiments of the present invention provide a system for communicating inbound networking data to provide QoS. The system includes a data prioritization component adapted to prioritize data by assigning a priority to the data and a data communications component adapted to receive the data over a network and to communicate the data to an application based at least in part on the priority of the data. The priority of the data is based at least in part on message content.


Certain embodiments of the present invention provide a computer-readable medium. The computer-readable medium includes a set of instructions for execution on a computer. The set of instructions includes a data prioritization routine configured to prioritize data by assigning a priority to the data and a data communications routine configured to receive the data over a network and to communicate the data to an application based at least in part on the priority of the data. The priority of the data is based at least in part on message content.





BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS


FIG. 1 illustrates a tactical communications network environment operating with an embodiment of the present invention.



FIG. 2 shows the positioning of the data communications system in the seven layer OSI network model in accordance with an embodiment of the present invention.



FIG. 3 depicts an example of multiple networks facilitated using the data communications system in accordance with an embodiment of the present invention.



FIG. 4 illustrates a data communications environment operating according to an embodiment of the present invention.



FIG. 5 illustrates a flow chart of a method for communicating data according to an embodiment of the present invention.





The foregoing summary, as well as the following detailed description of certain embodiments of the present invention, will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, certain embodiments are shown in the drawings. It should be understood, however, that the present invention is not limited to the arrangements and instrumentality shown in the attached drawings.


DETAILED DESCRIPTION OF THE INVENTION


FIG. 1 illustrates a tactical communications network environment 100 operating with an embodiment of the present invention. The network environment 100 includes a plurality of communication nodes 110, one or more networks 120, one or more links 130 connecting the nodes and network(s), and one or more communication systems 150 facilitating communication over the components of the network environment 100. The following discussion assumes a network environment 100 including more than one network 120 and more than one link 130, but it should be understood that other environments are possible and anticipated.


Communication nodes 110 may be and/or include radios, transmitters, satellites, receivers, workstations, servers, and/or other computing or processing devices, for example.


Network(s) 120 may be hardware and/or software for transmitting data between nodes 110, for example. Network(s) 120 may include one or more nodes 110, for example.


Link(s) 130 may be wired and/or wireless connections to allow transmissions between nodes 110 and/or network(s) 120.


The communications system 150 may include software, firmware, and/or hardware used to facilitate data transmission among the nodes 110, networks 120, and links 130, for example. As illustrated in FIG. 1, communications system 150 may be implemented with respect to the nodes 110, network(s) 120, and/or links 130. In certain embodiments, every node 110 includes a communications system 150. In certain embodiments, one or more nodes 110 include a communications system 150. In certain embodiments, one or more nodes 110 may not include a communications system 150.


The communication system 150 provides dynamic management of data to help assure communications on a tactical communications network, such as the network environment 100. As shown in FIG. 2, in certain embodiments, the system 150 operates as part of and/or at the top of the transport layer in the OSI seven layer protocol model. The system 150 may give precedence to higher priority data in the tactical network passed to the transport layer, for example. The system 150 may be used to facilitate communications in a single network, such as a local area network (LAN) or wide area network (WAN), or across multiple networks. An example of a multiple network system is shown in FIG. 3. The system 150 may be used to manage available bandwidth rather than add additional bandwidth to the network, for example.


In certain embodiments, the system 150 is a software system, although the system 150 may include both hardware and software components in various embodiments. The system 150 may be network hardware independent, for example. That is, the system 150 may be adapted to function on a variety of hardware and software platforms. In certain embodiments, the system 150 operates on the edge of the network rather than on nodes in the interior of the network. However, the system 150 may operate in the interior of the network as well, such as at “choke points” in the network.


The system 150 may use rules and modes or profiles to perform throughput management functions, such as optimizing available bandwidth, setting information priority, and managing data links in the network. By “optimizing” bandwidth, it is meant that the presently described technology can be employed to increase an efficiency of bandwidth use to communicate data in one or more networks. Optimizing bandwidth usage may include removing functionally redundant messages, message stream management or sequencing, and message compression, for example. Setting information priority may include differentiating message types at a finer granularity than Internet Protocol (IP) based techniques and sequencing messages onto a data stream via a selected rule-based sequencing algorithm, for example. Data link management may include rule-based analysis of network measurements to affect changes in rules, modes, and/or data transports, for example. A mode or profile may include a set of rules related to the operational needs for a particular network state of health or condition. The system 150 provides dynamic, “on-the-fly” reconfiguration of modes, including defining and switching to new modes on the fly.


The communication system 150 may be configured to accommodate changing priorities and grades of service, for example, in a volatile, bandwidth-limited network. The system 150 may be configured to manage information for improved data flow to help increase response capabilities in the network and reduce communications latency. Additionally, the system 150 may provide interoperability via a flexible architecture that is upgradeable and scalable to improve availability, survivability, and reliability of communications. The system 150 supports a data communications architecture that may be autonomously adaptable to dynamically changing environments while using predefined and predictable system resources and bandwidth, for example.


In certain embodiments, the system 150 provides throughput management to bandwidth-constrained tactical communications networks while remaining transparent to applications using the network. The system 150 provides throughput management across multiple users and environments at reduced complexity to the network. As mentioned above, in certain embodiments, the system 150 runs on a host node in and/or at the top of layer four (the transport layer) of the OSI seven layer model and does not require specialized network hardware. The system 150 may operate transparently to the layer four interface. That is, an application may utilize a standard interface for the transport layer and be unaware of the operation of the system 150. For example, when an application opens a socket, the system 150 may filter data at this point in the protocol stack. The system 150 achieves transparency by allowing applications to use, for example, the TCP/IP socket interface that is provided by an operating system at a communication device on the network rather than an interface specific to the system 150. System 150 rules may be written in extensible markup language (XML) and/or provided via custom dynamic link libraries (DLLs), for example.


In certain embodiments, the system 150 provides quality of service (QoS) on the edge of the network. The system's QoS capability offers content-based, rule-based data prioritization on the edge of the network, for example. Prioritization may include differentiation and/or sequencing, for example. The system 150 may differentiate messages into queues based on user-configurable differentiation rules, for example. The messages are sequenced into a data stream in an order dictated by the user-configured sequencing rule (e.g., starvation, round robin, relative frequency, etc.). Using QoS on the edge, data messages that are indistinguishable by traditional QoS approaches may be differentiated based on message content, for example. Rules may be implemented in XML, for example. In certain embodiments, to accommodate capabilities beyond XML and/or to support extremely low latency requirements, the system 150 allows dynamic link libraries to be provided with custom code, for example.
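The differentiation-then-sequencing flow described above can be illustrated with a short sketch. The rule predicates, queue names, and message contents here are hypothetical examples, not the system's actual rule language (which the text says may be XML or custom DLLs); the sketch shows round-robin sequencing, one of the sequencing rules the text names.

```python
from collections import deque

# Hypothetical content-based differentiation rules: each rule maps a
# message predicate to a destination queue.
rules = [
    (lambda m: "THREAT" in m, "high"),
    (lambda m: "TELEMETRY" in m, "medium"),
]

def differentiate(message, queues):
    """Place a message into a queue based on its content."""
    for predicate, queue_name in rules:
        if predicate(message):
            queues[queue_name].append(message)
            return
    queues["low"].append(message)  # default queue

def sequence_round_robin(queues):
    """Sequence queued messages onto a stream, one per queue per pass."""
    stream = []
    while any(queues.values()):
        for name in queues:
            if queues[name]:
                stream.append(queues[name].popleft())
    return stream

queues = {"high": deque(), "medium": deque(), "low": deque()}
for msg in ["TELEMETRY a", "chat b", "THREAT c"]:
    differentiate(msg, queues)
assert sequence_round_robin(queues) == ["THREAT c", "TELEMETRY a", "chat b"]
```

Note how two messages that are indistinguishable at the packet level ("chat b" and "THREAT c" could share the same source, destination, and protocol) land in different queues purely because of their content.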


Inbound and/or outbound data on the network may be customized via the system 150. Prioritization protects client applications from high-volume, low-priority data, for example. The system 150 helps to ensure that applications receive data to support a particular operational scenario or constraint.


In certain embodiments, when a host is connected to a LAN that includes a router as an interface to a bandwidth-constrained tactical network, the system may operate in a configuration known as QoS by proxy. In this configuration, packets that are bound for the local LAN bypass the system and immediately go to the LAN. The system applies QoS on the edge of the network to packets bound for the bandwidth-constrained tactical link.


In certain embodiments, the system 150 offers dynamic support for multiple operational scenarios and/or network environments via commanded profile switching. A profile may include a name or other identifier that allows the user or system to change to the named profile. A profile may also include one or more identifiers, such as a functional redundancy rule identifier, a differentiation rule identifier, an archival interface identifier, a sequencing rule identifier, a pre-transmit interface identifier, a post-transmit interface identifier, a transport identifier, and/or other identifier, for example. A functional redundancy rule identifier specifies a rule that detects functional redundancy, such as from stale data or substantially similar data, for example. A differentiation rule identifier specifies a rule that differentiates messages into queues for processing, for example. An archival interface identifier specifies an interface to an archival system, for example. A sequencing rule identifier identifies a sequencing algorithm that controls samples of queue fronts and, therefore, the sequencing of the data on the data stream. A pre-transmit interface identifier specifies the interface for pre-transmit processing, which provides for special processing such as encryption and compression, for example. A post-transmit interface identifier identifies an interface for post-transmit processing, which provides for processing such as de-encryption and decompression, for example. A transport identifier specifies a network interface for the selected transport.
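A profile of this kind might be represented as a simple record, as in the following sketch. The field names mirror the identifiers listed above but are illustrative only; the actual profile schema is not specified in this form by the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    """Hypothetical profile record naming the rules and interfaces to use."""
    name: str
    functional_redundancy_rule: Optional[str] = None
    differentiation_rule: Optional[str] = None
    archival_interface: Optional[str] = None
    sequencing_rule: Optional[str] = None
    pre_transmit_interface: Optional[str] = None
    post_transmit_interface: Optional[str] = None
    transport: Optional[str] = None

# Two example profiles for different operational scenarios (names assumed).
profiles = {
    "engaged": Profile("engaged", differentiation_rule="content_rules.xml",
                       sequencing_rule="round_robin", transport="satcom"),
    "patrol": Profile("patrol", sequencing_rule="starvation", transport="hf"),
}

def switch_profile(current, commanded_name):
    """Commanded profile switching: change to the named profile if known."""
    return profiles.get(commanded_name, current)

active = profiles["patrol"]
active = switch_profile(active, "engaged")
assert active.sequencing_rule == "round_robin"
```

Because the profile is just data, switching it at runtime reconfigures every stage of the pipeline at once, which is what makes the "on-the-fly" mode changes described earlier tractable.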


A profile may also include other information, such as queue sizing information, for example. Queue sizing information identifies the number of queues and the amount of memory and secondary storage dedicated to each queue, for example.


In certain embodiments, the system 150 provides a rules-based approach for optimizing bandwidth. For example, the system 150 may employ queue selection rules to differentiate messages into message queues so that messages may be assigned a priority and an appropriate relative frequency on the data stream. The system 150 may use functional redundancy rules to manage functionally redundant messages. A message is functionally redundant if it is not different enough (as defined by the rule) from a previous message that has not yet been sent on the network, for example. That is, if a new message is provided that is not sufficiently different from an older message that has already been scheduled to be sent, but has not yet been sent, the newer message may be dropped, since the older message will carry functionally equivalent information and is further ahead in the queue. In addition, functional redundancy may include actual duplicate messages and newer messages that arrive before an older message has been sent. For example, a node may receive identical copies of a particular message due to characteristics of the underlying network, such as a message that was sent by two different paths for fault tolerance reasons. As another example, a new message may contain data that supersedes an older message that has not yet been sent. In this situation, the system 150 may drop the older message and send only the new message. The system 150 may also include priority sequencing rules to determine a priority-based message sequence of the data stream. Additionally, the system 150 may include transmission processing rules to provide pre-transmission and post-transmission special processing, such as compression and/or encryption.
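One concrete form a functional redundancy rule might take is sketched below: a new position report is dropped if a queued, unsent report from the same unit is within a distance threshold. The message format, threshold, and "drop the newer message" policy are illustrative assumptions; the text also describes the opposite policy (dropping the superseded older message).

```python
# Messages queued but not yet sent on the network: (unit, x, y).
pending = []

def enqueue_with_redundancy(msg, threshold=5.0):
    """Queue msg unless a functionally equivalent message is already queued.

    Returns True if queued, False if dropped as functionally redundant.
    """
    unit, x, y = msg
    for queued_unit, qx, qy in pending:
        if (queued_unit == unit
                and abs(x - qx) <= threshold
                and abs(y - qy) <= threshold):
            # Not different enough from a message already ahead in the
            # queue: drop the newer message.
            return False
    pending.append(msg)
    return True

assert enqueue_with_redundancy(("alpha", 0.0, 0.0)) is True
assert enqueue_with_redundancy(("alpha", 1.0, 1.0)) is False   # redundant
assert enqueue_with_redundancy(("alpha", 50.0, 50.0)) is True  # moved far
```

The important property is that "different enough" is defined by the rule, not by byte equality, so near-duplicate telemetry is suppressed even when the packets are not identical.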


In certain embodiments, the system 150 provides fault tolerance capability to help protect data integrity and reliability. For example, the system 150 may use user-defined queue selection rules to differentiate messages into queues. The queues are sized according to a user-defined configuration, for example. The configuration specifies a maximum amount of memory a queue may consume, for example. Additionally, the configuration may allow the user to specify a location and amount of secondary storage that may be used for queue overflow. After the memory in the queues is filled, messages may be queued in secondary storage. When the secondary storage is also full, the system 150 may remove the oldest message in the queue, log an error message, and queue the newest message. If archiving is enabled for the operational mode, then the de-queued message may be archived with an indicator that the message was not sent on the network.
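The overflow behavior just described can be modeled in a few lines. In this sketch an in-memory deque stands in for secondary storage on disk, and capacities are counted in messages rather than bytes; both are simplifying assumptions.

```python
from collections import deque

class OverflowQueue:
    """Sketch of a queue with a memory cap and secondary-storage overflow."""
    def __init__(self, mem_capacity, storage_capacity):
        self.memory = deque()
        self.storage = deque()  # stands in for files on disk
        self.mem_capacity = mem_capacity
        self.storage_capacity = storage_capacity
        self.errors = []

    def enqueue(self, msg):
        if len(self.memory) < self.mem_capacity:
            self.memory.append(msg)
        elif len(self.storage) < self.storage_capacity:
            self.storage.append(msg)
        else:
            # Both full: remove the oldest message, log an error,
            # and queue the newest message.
            dropped = self.memory.popleft()
            self.memory.append(self.storage.popleft())
            self.storage.append(msg)
            self.errors.append(f"dropped {dropped}")

q = OverflowQueue(mem_capacity=2, storage_capacity=2)
for m in ["m1", "m2", "m3", "m4", "m5"]:
    q.enqueue(m)
assert list(q.memory) == ["m2", "m3"]
assert list(q.storage) == ["m4", "m5"]
assert q.errors == ["dropped m1"]
```

Dropping from the oldest end biases the queue toward fresh data, which suits telemetry-like traffic where a newer report generally supersedes an older one.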


Memory and secondary storage for queues in the system 150 may be configured on a per-link basis for a specific application, for example. A longer time between periods of network availability may correspond to more memory and secondary storage to support network outages. The system 150 may be integrated with network modeling and simulation applications, for example, to help ensure that queues are sized appropriately and that the time between outages is sufficient to achieve steady-state and avoid eventual queue overflow.


Furthermore, in certain embodiments, the system 150 offers the capability to meter inbound (“shaping”) and outbound (“policing”) data. Policing and shaping capabilities help address mismatches in timing in the network. Shaping helps to prevent network buffers from flooding with high-priority data queued up behind lower-priority data. Policing helps to prevent application data consumers from being overrun by low-priority data. Policing and shaping are governed by two parameters: effective link speed and link proportion. The system 150 may form a data stream that is no more than the effective link speed multiplied by the link proportion, for example. The parameters may be modified dynamically as the network changes. The system may also provide access to detected link speed to support application level decisions on data metering. Information provided by the system 150 may be combined with other network operations information to help decide what link speed is appropriate for a given network scenario.
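The rate cap described above, effective link speed multiplied by link proportion, can be sketched as a small metering parameter object; the class and parameter names are hypothetical:

```python
class DataMeter:
    """Sketch: the metered stream is capped at
    effective_link_speed_bps * link_proportion, and both parameters
    may be updated dynamically as network conditions change."""

    def __init__(self, effective_link_speed_bps, link_proportion):
        self.effective_link_speed_bps = effective_link_speed_bps
        self.link_proportion = link_proportion

    @property
    def max_rate_bps(self):
        # The data stream is formed at no more than this rate.
        return self.effective_link_speed_bps * self.link_proportion

    def update(self, effective_link_speed_bps=None, link_proportion=None):
        """Apply dynamically detected link conditions."""
        if effective_link_speed_bps is not None:
            self.effective_link_speed_bps = effective_link_speed_bps
        if link_proportion is not None:
            self.link_proportion = link_proportion

# A 1 Mbps link of which this stream may consume one quarter.
meter = DataMeter(effective_link_speed_bps=1_000_000, link_proportion=0.25)
meter.update(link_proportion=0.5)  # network conditions changed
```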



FIG. 4 illustrates a data communications environment 400 operating according to an embodiment of the present invention. The data communications environment 400 includes one or more nodes 410, one or more networks 420, and one or more links 430 connecting the nodes 410 and the networks 420, as well as the data communications system 450 facilitating communications over the other components of the data communications environment 400. The data communications environment 400 may be similar to the data communications environment 100 of FIG. 1, as described above.


The data communications system 450 may operate within the node 410, as shown in FIG. 4. Alternatively, the data communications system 450 may operate within the network 420 and/or between the node 410 and the network 420. The node 410 may include one or more applications 415, such as Application A and Application B, as shown in FIG. 4.


The data communications system 450 is adapted to receive, store, organize, prioritize, process, transmit, and/or communicate data. The data received, stored, organized, prioritized, processed, transmitted, and/or communicated by the data communications system 450 may include, for example, a block of data, such as a packet, cell, frame, and/or stream.


In certain embodiments of the present invention, the data communications system 450 may include a data prioritization component 460 and a data communications component 470, which are described below in more detail.


The data prioritization component 460 prioritizes data. In certain embodiments of the present invention, the data prioritization component 460 may prioritize data based at least in part on one or more prioritization rules, such as differentiation rules and/or sequencing rules. The prioritization rules may be user defined. The prioritization rules may be written in XML and/or provided in one or more DLLs.


In certain embodiments of the present invention, the data prioritization component 460 may prioritize data based at least in part on message content. For example, the data priority may be based at least in part on data type, such as video, audio, telemetry, and/or position data. As another example, the data priority may be based at least in part on data source. For example, communications from a general may be assigned a higher priority than communications from a lower ranking officer.


In certain embodiments of the present invention, the data prioritization component 460 may prioritize data based at least in part on protocol information, such as source address and/or transport protocol. In certain embodiments of the present invention, the data prioritization component 460 may prioritize data based at least in part on mode.


In certain embodiments of the present invention, the data prioritization component 460 may prioritize data by assigning a priority to the data. For example, position data and emitter data for a near threat may be associated with a priority of “HIGH,” next to shoot data may be associated with a priority of “MED HIGH,” top-ten shoot list data may be associated with a priority of “MED,” emitter data for a threat over one hundred miles away and situational awareness (SA) data from satellite communications (SATCOM) may be associated with a priority of “MED LOW,” and general status data may be assigned a priority of “LOW.”


As described above, data may be assigned and/or associated with a priority. For example, the data priority may include “HIGH,” “MED HIGH,” “MED,” “MED LOW,” or “LOW.” As another example, the data priority may include “KEEP PILOT ALIVE,” “KILL ENEMY,” or “INFORMATIONAL.”


In certain embodiments of the present invention, the data priority may be based at least in part on a type, category, and/or group of data. For example, types of data may include position data, emitter data for a near threat, next to shoot data, top-ten shoot list data, emitter data for a threat over one hundred miles away, SA data from SATCOM, and/or general status data. Additionally, the data may be grouped into categories, such as “KEEP PILOT ALIVE,” “KILL ENEMY,” and/or “INFORMATIONAL.” For example, “KEEP PILOT ALIVE” data, such as position data and emitter data for a near threat, may relate to the health and safety of a pilot. As another example, “KILL ENEMY” data, such as next to shoot data, top-ten shoot list data, and emitter data for a threat over one hundred miles away, may relate to combat systems. As another example, “INFORMATIONAL” data, such as SA data from SATCOM and general status data, may relate to non-combat systems.
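The mapping from message type to category described above can be sketched as a content-based lookup; the type keys and rank values are hypothetical names chosen for illustration:

```python
# Illustrative mapping of the message types named in the text to the
# example categories (type keys are hypothetical identifiers).
TYPE_TO_CATEGORY = {
    "position": "KEEP PILOT ALIVE",
    "near_threat_emitter": "KEEP PILOT ALIVE",
    "next_to_shoot": "KILL ENEMY",
    "top_ten_shoot_list": "KILL ENEMY",
    "far_threat_emitter": "KILL ENEMY",
    "satcom_sa": "INFORMATIONAL",
    "general_status": "INFORMATIONAL",
}

# Lower rank = more important, matching the ordering in the text.
CATEGORY_RANK = {"KEEP PILOT ALIVE": 0, "KILL ENEMY": 1, "INFORMATIONAL": 2}

def prioritize(message):
    """Assign a priority (here, the category itself plus a numeric rank)
    based on message content."""
    category = TYPE_TO_CATEGORY[message["type"]]
    return category, CATEGORY_RANK[category]
```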


As described above, the data type, category, and/or group may be the same as and/or similar to the data priority. For example, “KEEP PILOT ALIVE” data, such as position data and emitter data for a near threat, may be associated with a priority of “KEEP PILOT ALIVE,” which is more important than “KILL ENEMY” data, such as next to shoot data, top-ten shoot list data, and emitter data for a threat over one hundred miles away, associated with a priority of “KILL ENEMY.” As another example, “KILL ENEMY” data, such as next to shoot data, top-ten shoot list data, and emitter data for a threat over one hundred miles away, may be associated with a priority of “KILL ENEMY,” which is more important than “INFORMATIONAL” data, such as SA data from SATCOM and general status data, associated with a priority of “INFORMATIONAL.”


In certain embodiments of the present invention, the data prioritization component 460 may include a differentiation component 462, a sequencing component 464, and a data organization component 466, which are described below in more detail.


The differentiation component 462 differentiates data. In certain embodiments of the present invention, the differentiation component 462 may differentiate data based at least in part on one or more differentiation rules, such as queue selection rules and/or functional redundancy rules. The differentiation rules may be user defined. The differentiation rules may be written in XML and/or provided in one or more DLLs.


In certain embodiments of the present invention, the differentiation component 462 may add data to the data organization component 466. For example, the differentiation component 462 may add data to the data organization component 466 based at least in part on one or more queue selection rules.


In certain embodiments of the present invention, the differentiation component 462 may remove and/or withhold data from the data organization component 466. For example, the differentiation component 462 may remove data from the data organization component 466 based at least in part on one or more functional redundancy rules.


The sequencing component 464 sequences data. In certain embodiments of the present invention, the sequencing component 464 may sequence data based at least in part on one or more sequencing rules, such as starvation, round robin, and relative frequency. The sequencing rules may be user defined. The sequencing rules may be written in XML and/or provided in one or more DLLs.


In certain embodiments of the present invention, the sequencing component 464 may select and/or remove data from the data organization component 466. For example, the sequencing component 464 may remove data from the data organization component 466 based at least in part on the sequencing rules.


The data organization component 466 stores and/or organizes data. In certain embodiments of the present invention, the data organization component 466 may store and/or organize the data based at least in part on priority, such as “KEEP PILOT ALIVE,” “KILL ENEMY,” and “INFORMATIONAL.”


In certain embodiments of the present invention, the data organization component 466 may include, for example, one or more queues, such as Q1, Q2, Q3, Q4, and Q5. For example, data associated with a priority of “HIGH” may be stored in Q1, data associated with a priority of “MED HIGH” may be stored in Q2, data associated with a priority of “MED” may be stored in Q3, data associated with a priority of “MED LOW” may be stored in Q4, and data associated with a priority of “LOW” may be stored in Q5. Alternatively, the data organization component 466 may include, for example, one or more trees, tables, linked lists, and/or other data structures for storing and/or organizing data.
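A minimal sketch of a queue-per-priority data organization component, mirroring the Q1 through Q5 example above (the class and method names are illustrative assumptions):

```python
from collections import deque

# One queue per priority level, ordered highest priority first,
# mirroring the Q1..Q5 example in the text.
PRIORITIES = ["HIGH", "MED HIGH", "MED", "MED LOW", "LOW"]

class DataOrganizer:
    """Hypothetical data organization component storing messages by priority."""

    def __init__(self):
        self.queues = {p: deque() for p in PRIORITIES}

    def add(self, priority, msg):
        """Store a message under its assigned priority."""
        self.queues[priority].append(msg)

    def __len__(self):
        return sum(len(q) for q in self.queues.values())

org = DataOrganizer()
org.add("HIGH", "position update")
org.add("LOW", "general status report")
```

As the text notes, trees, tables, or linked lists could serve equally well as the underlying structure; a dictionary of queues is simply the most direct rendering of the Q1 through Q5 example.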


The data communications component 470 communicates data. In certain embodiments of the present invention, the data communications component 470 receives data, for example, from a node 410 and/or an application 415 running on the node 410, or over a network 420 and/or a link 430 connecting the node 410 to the network 420. In certain embodiments of the present invention, the data communications component 470 transmits data, for example, to a node 410 and/or an application 415 running on the node 410, or over a network 420 and/or a link connecting the node 410 to the network 420.


In certain embodiments of the present invention, the data communications component 470 communicates with the data prioritization component 460. More particularly, the data communications component 470 transmits data to the differentiation component 462 and receives data from the sequencing component 464. Alternatively, the data communications component 470 may communicate with the data organization component 466.


In certain embodiments of the present invention, the data prioritization component 460 may perform one or more of the functions of the data communications component 470.


In certain embodiments of the present invention, the data communications component 470 may communicate data based at least in part on data priority.


In operation, for example, data is received over the network 420 by the data communications component 470. The received data is prioritized by the data prioritization component 460 based at least in part on message content and/or mode. The prioritized data is transmitted to one or more applications 415 by the data communications component 470.


In certain embodiments of the present invention, the data communication system 450 may not receive all of the data. For example, some of the data may be stored in a buffer and the data communication system 450 may receive only header information and a pointer to the buffer. As another example, the data communication system 450 may be hooked into the protocol stack of an operating system and when an application passes data to the operating system through a transport layer interface (e.g., sockets), the operating system may then provide access to the data to the data communication system 450.


In certain embodiments of the present invention, the data communications system 450 may not drop data. That is, although the data may be lower priority, it is not dropped by the data communications system 450. Rather, the data may be delayed for a period of time, potentially dependent on the amount of higher priority data that is received.


In certain embodiments of the present invention, the data communications system 450 is transparent to other applications. For example, the processing, organizing, and/or prioritization performed by the data communications system 450 may be transparent to one or more nodes 410 or other applications or data sources. As another example, an application 415 running on the same system as the data communications system 450, or on a node 410 connected to the data communications system 450, may be unaware of the prioritization of data performed by the data communications system 450.


In certain embodiments of the present invention, the data communications system 450 may provide QoS.


As discussed above, the components, elements, and/or functionality of the data communication system 450 may be implemented alone or in combination in various forms in hardware, firmware, and/or as a set of instructions in software, for example. Certain embodiments may be provided as a set of instructions residing on a computer-readable medium, such as a memory, hard disk, DVD, or CD, for execution on a general purpose computer or other processing device.



FIG. 5 illustrates a flow diagram of a method 500 for communicating data according to an embodiment of the present invention. The method 500 includes the following steps, which will be described below in more detail. At step 510, data is received. At step 520, the data is prioritized. At step 530, the data is communicated. The method 500 is described with reference to elements of the data communications environment 400 of FIG. 4, but it should be understood that other implementations are possible.


At step 510, the data is received. The data may be received, for example, by the data communications system 450, as described above. As another example, the data may be received from a node 410 and/or an application 415 running on the node 410. As another example, the data may be received, for example, over a network 420 and/or a link connecting the node 410 and the network 420.


In certain embodiments of the present invention, the data may be received over a network 420 at a node 410.


At step 520, the data is prioritized. The data prioritized may be the data received at step 510, for example. The data may be prioritized, for example, by the data communications system 450 of FIG. 4, as described above. As another example, the data may be prioritized by the data prioritization component 460 of the data communications system 450 based at least in part on data prioritization rules.


In certain embodiments of the present invention, the data may be prioritized based at least in part on one or more prioritization rules. In certain embodiments of the present invention, the data may be prioritized based at least in part on message content. In certain embodiments of the present invention, the priority of the data may be based at least in part on mode. In certain embodiments of the present invention, the data may be prioritized based at least in part on protocol information. In certain embodiments of the present invention, the data may be prioritized at the node 410 by assigning a priority to the data.


At step 530, the data is communicated. The data communicated may be the data received at step 510, for example. The data communicated may be the data prioritized at step 520, for example. The data may be communicated, for example, by the data communications system 450, as described above. As another example, the data may be communicated to a node 410 and/or an application 415 running on the node 410. As another example, the data may be communicated over a network 420 and/or a link connecting the node 410 and the network 420.


In certain embodiments of the present invention, the data may be communicated to an application at the node 410 based at least in part on the priority of the data. The priority of the data may be the data priority determined at step 520, for example.
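Steps 510 through 530 can be sketched as a simple receive-prioritize-communicate pipeline; the function and the callables standing in for the system 450 components are hypothetical:

```python
def method_500(raw, prioritize, deliver):
    """Sketch of method 500: receive (510), prioritize (520),
    communicate (530). `prioritize` returns a sortable rank and
    `deliver` stands in for communication to an application 415."""
    received = list(raw)                        # step 510: receive the data
    ordered = sorted(received, key=prioritize)  # step 520: prioritize the data
    for msg in ordered:                         # step 530: communicate the data
        deliver(msg)

# Illustrative use with a two-level priority scheme.
rank = {"HIGH": 0, "LOW": 1}
delivered = []
method_500(
    [{"pri": "LOW", "n": 1}, {"pri": "HIGH", "n": 2}],
    lambda m: rank[m["pri"]],
    delivered.append,
)
```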


One or more of the steps of the method 500 may be implemented alone or in combination in hardware, firmware, and/or as a set of instructions in software, for example. Certain embodiments may be provided as a set of instructions residing on a computer-readable medium, such as a memory, hard disk, DVD, or CD, for execution on a general purpose computer or other processing device.


Certain embodiments of the present invention may omit one or more of these steps and/or perform the steps in a different order than the order listed. For example, some steps may not be performed in certain embodiments of the present invention. As a further example, certain steps may be performed in a different temporal order, including simultaneously, than listed above.


In one embodiment of the present invention, a method for communicating inbound network data to provide QoS includes receiving data over a network, prioritizing the data by assigning a priority to the data, and communicating the data to an application based at least in part on the priority of the data. The priority of the data is based at least in part on message content.


In one embodiment of the present invention, a system for communicating inbound networking data to provide QoS includes a data prioritization component adapted to prioritize data by assigning a priority to the data and a data communications component adapted to receive the data over a network and to communicate the data to an application based at least in part on the priority of the data. The priority of the data is based at least in part on message content.


In one embodiment of the present invention, a computer-readable medium includes a set of instructions for execution on a computer. The set of instructions includes a data prioritization routine configured to prioritize data by assigning a priority to the data and a data communications routine configured to receive the data over a network and to communicate the data to an application based at least in part on the priority of the data. The priority of the data is based at least in part on message content.


Certain embodiments of the present invention provide a method for inbound content-based QoS. The method includes receiving TCP and/or UDP addressed network data over a network, prioritizing the network data using a priority algorithm, processing the network data for redundancy, and using an extraction algorithm to forward the network data to one or more applications. The extraction algorithm may be similar to the sequencing algorithm, as described above. For example, the extraction algorithm may be based at least in part on starvation, relative frequency, or a combination of starvation and relative frequency. Starvation refers to servicing the highest priority queue, unless it is empty, and then servicing lower priority queues. Starvation may be advantageous because the highest priority data never waits for lower priority data. However, starvation may be disadvantageous because if there is enough of the highest priority data, lower priority queues will never be serviced. Relative frequency is similar to starvation, except that there is a cap on the number of times that a queue gets serviced before the next queue is to be serviced. Relative frequency may be advantageous because all of the queues are serviced. However, relative frequency may be disadvantageous because the highest priority data may sometimes wait for lower priority data. A combination of starvation and relative frequency allows a user to select a subset of queues to be processed via starvation and another subset of queues to be processed via relative frequency. In certain embodiments, the extraction algorithm may be configured by a user.
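The combined extraction algorithm can be sketched as follows, assuming a hypothetical API in which queues are ordered highest priority first and a cap of `None` marks a queue as starvation-serviced while a numeric cap bounds its servicing (relative frequency):

```python
from collections import deque

def extract(queues, caps):
    """Sketch of the combined extraction algorithm (hypothetical API).

    Each pass visits the queues in priority order: a queue whose cap is
    None is drained fully before lower queues are visited (starvation),
    while a numeric cap limits how many messages are taken per pass
    (relative frequency). Passes repeat until all queues are empty.
    """
    out = []
    progress = True
    while progress:
        progress = False
        for q, cap in zip(queues, caps):
            if cap is None:                       # starvation: drain fully
                while q:
                    out.append(q.popleft())
                    progress = True
            else:                                 # relative frequency: bounded
                for _ in range(min(cap, len(q))):
                    out.append(q.popleft())
                    progress = True
    return out

# The high queue starves lower queues; the low queue yields two per pass.
high = deque([1, 2, 3])
low = deque(["a", "b", "c", "d"])
forwarded = extract([high, low], [None, 2])
```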


Thus, certain embodiments of the present invention provide systems and methods for inbound content-based QoS. Certain embodiments provide a technical effect of inbound content-based QoS.


While the invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims
  • 1. A method for communicating inbound network data to provide quality of service, the method including: receiving inbound data at a processor, wherein inbound data comprises data received over a network at the processor for communication to an application at the processor;prioritizing the inbound data at the processor by assigning a priority to the inbound data and differentiating the inbound data based at least in part on queue selection and functional redundancy, wherein differentiating the inbound data based on functional redundancy comprises processing the inbound data by removing functionally redundant inbound data, wherein the priority of the inbound data is based at least in part on message content, wherein the prioritizing the inbound data occurs at a transport layer of a network communications protocol stack; andcommunicating the inbound data to the application at the processor based at least in part on the assigned priority of the inbound data.
  • 2. The method of claim 1, wherein the inbound data includes at least one of a cell, a frame, a packet, and a stream.
  • 3. The method of claim 1, wherein the priority of the inbound data includes one or more of a type of data, a category of data, and a group of data.
  • 4. The method of claim 1, wherein the receiving step includes receiving the inbound data at a node on the edge of a network.
  • 5. The method of claim 1, wherein the priority of the inbound data is based at least in part on protocol information.
  • 6. The method of claim 1, wherein the priority of the inbound data is based at least in part on mode.
  • 7. The method of claim 1, wherein the inbound data is prioritized based at least in part on a user defined rule.
  • 8. The method of claim 1, wherein the prioritizing step includes sequencing the inbound data.
  • 9. The method of claim 8, wherein the inbound data is sequenced based at least in part on at least one of starvation, round robin, and relative frequency.
  • 10. The method of claim 1, wherein the prioritizing step is transparent to an application program.
  • 11. The method of claim 1, wherein the inbound data is prioritized to provide quality of service.
  • 12. A system for communicating inbound network data to provide quality of service, the system including: a data prioritization component configured to prioritize inbound data by assigning a priority to the inbound data and differentiating the inbound data based at least in part on queue selection and functional redundancy, wherein differentiating the inbound data based on functional redundancy comprises processing the inbound data by removing functionally redundant inbound data, wherein inbound data comprises data received over a network at a node for communication to an application at the node, wherein the priority of the inbound data is based at least in part on message content, wherein the prioritization of the inbound data occurs at a transport layer of a network communications protocol stack; anda data communications component configured to receive the inbound data over the network at the node and to communicate the prioritized and processed inbound data to the application at the node based at least in part on the assigned priority of the inbound data.
  • 13. The system of claim 12, wherein the data prioritization component includes a data organization component configured to organize the inbound data based at least in part on the priority of the inbound data.
  • 14. The system of claim 13, wherein the data organization component includes a data structure.
  • 15. The system of claim 14, wherein the data structure includes at least one of a queue, a tree, a table, and a list.
  • 16. A non-transitory computer-readable storage medium encoded with a set of instructions for execution on a computer, the set of instructions including: a data prioritization routine configured to prioritize inbound data by assigning a priority to the data and differentiating the inbound data based at least in part on queue selection and functional redundancy, wherein differentiating the inbound data based on functional redundancy comprises processing the inbound data by removing functionally redundant inbound data, wherein inbound data comprises data received over a network at a node for communication to an application at the node, wherein the priority of the inbound data is based at least in part on message content, wherein the prioritization of the inbound data occurs at a transport layer of a network communications protocol stack; anda data communications routine configured to receive the inbound data over the network at the node and to communicate the prioritized and processed inbound data to an application at the node based at least in part on the assigned priority of the inbound data.
US Referenced Citations (275)
Number Name Date Kind
5241632 O'Connell et al. Aug 1993 A
5559999 Maturi et al. Sep 1996 A
5560038 Haddock Sep 1996 A
5627970 Keshav May 1997 A
5664091 Keen Sep 1997 A
5671224 Pyhalammi et al. Sep 1997 A
5748739 Press May 1998 A
5761445 Nguyen Jun 1998 A
5784566 Viavant et al. Jul 1998 A
5844600 Kerr Dec 1998 A
5949758 Kober Sep 1999 A
5960035 Sridhar et al. Sep 1999 A
6028843 Delp et al. Feb 2000 A
6044419 Hayek et al. Mar 2000 A
6067557 Hegde May 2000 A
6072781 Feeney et al. Jun 2000 A
6075770 Chang et al. Jun 2000 A
6115378 Hendel et al. Sep 2000 A
6124806 Cunningham et al. Sep 2000 A
6154778 Koistinen et al. Nov 2000 A
6170075 Schuster et al. Jan 2001 B1
6185520 Brown et al. Feb 2001 B1
6205486 Wei et al. Mar 2001 B1
6233248 Sautter et al. May 2001 B1
6236656 Westerberg et al. May 2001 B1
6247058 Miller et al. Jun 2001 B1
6279035 Brown et al. Aug 2001 B1
6301527 Butland et al. Oct 2001 B1
6314425 Serbinis et al. Nov 2001 B1
6332163 Bowman-Amuah Dec 2001 B1
6343085 Krishnan et al. Jan 2002 B1
6343318 Hawkins et al. Jan 2002 B1
6363411 Dugan et al. Mar 2002 B1
6397259 Lincke et al. May 2002 B1
6401117 Narad et al. Jun 2002 B1
6404776 Voois et al. Jun 2002 B1
6407998 Polit et al. Jun 2002 B1
6408341 Feeney et al. Jun 2002 B1
6421335 Kilkki et al. Jul 2002 B1
6438603 Ogus Aug 2002 B1
6446204 Pang et al. Sep 2002 B1
6449251 Awadallah et al. Sep 2002 B1
6490249 Aboul-Magd et al. Dec 2002 B1
6498782 Branstad et al. Dec 2002 B1
6507864 Klein et al. Jan 2003 B1
6532465 Hartley et al. Mar 2003 B2
6542593 Bowman-Amuah Apr 2003 B1
6556982 McGaffey et al. Apr 2003 B1
6557053 Bass et al. Apr 2003 B1
6560592 Reid et al. May 2003 B1
6563517 Bhagwat et al. May 2003 B1
6587435 Miyake et al. Jul 2003 B1
6587875 Ogus Jul 2003 B1
6590588 Lincke et al. Jul 2003 B2
6598034 Kloth Jul 2003 B1
6600744 Carr et al. Jul 2003 B1
6611522 Zheng et al. Aug 2003 B1
6614781 Elliott et al. Sep 2003 B1
6618385 Cousins Sep 2003 B1
6625133 Balachandran et al. Sep 2003 B1
6625650 Stelliga Sep 2003 B2
6633835 Moran et al. Oct 2003 B1
6640184 Rabe Oct 2003 B1
6640248 Jorgensen Oct 2003 B1
6650902 Richton Nov 2003 B1
6668175 Almgren et al. Dec 2003 B1
6671589 Holst et al. Dec 2003 B2
6671732 Weiner Dec 2003 B1
6680922 Jorgensen Jan 2004 B1
6687735 Logston et al. Feb 2004 B1
6691168 Bal et al. Feb 2004 B1
6700871 Harper et al. Mar 2004 B1
6715145 Bowman-Amuah Mar 2004 B1
6728749 Richardson Apr 2004 B1
6732185 Reistad May 2004 B1
6732228 Willardson May 2004 B1
6741562 Keirouz et al. May 2004 B1
6748070 Kalmanek, Jr. et al. Jun 2004 B2
6760309 Rochberger et al. Jul 2004 B1
6771609 Gudat et al. Aug 2004 B1
6772223 Corl et al. Aug 2004 B1
6778530 Greene Aug 2004 B1
6778546 Epps et al. Aug 2004 B1
6798776 Cheriton et al. Sep 2004 B1
6816903 Rakoshitz et al. Nov 2004 B1
6819655 Gregson Nov 2004 B1
6819681 Hariharasubrahmanian Nov 2004 B1
6820117 Johnson Nov 2004 B1
6822940 Zavalkovsky et al. Nov 2004 B1
6826627 Sjollema et al. Nov 2004 B2
6832118 Heberlein et al. Dec 2004 B1
6832239 Kraft et al. Dec 2004 B1
6839731 Alexander et al. Jan 2005 B2
6839768 Ma et al. Jan 2005 B2
6845100 Rinne Jan 2005 B1
6850486 Saleh et al. Feb 2005 B2
6854009 Hughes Feb 2005 B1
6854069 Kampe et al. Feb 2005 B2
6862265 Appala et al. Mar 2005 B1
6862622 Jorgensen Mar 2005 B2
6865153 Hiel et al. Mar 2005 B1
6870812 Kloth et al. Mar 2005 B1
6873600 Duffield et al. Mar 2005 B1
6879590 Pedersen et al. Apr 2005 B2
6882642 Kejriwal et al. Apr 2005 B1
6885643 Teramoto et al. Apr 2005 B1
6888806 Miller et al. May 2005 B1
6888807 Heller et al. May 2005 B2
6891839 Albert et al. May 2005 B2
6891842 Sahaya et al. May 2005 B2
6891854 Zhang et al. May 2005 B2
6892309 Richmond et al. May 2005 B2
6901484 Doyle et al. May 2005 B2
6904054 Baum et al. Jun 2005 B1
6904058 He et al. Jun 2005 B2
6907243 Patel Jun 2005 B1
6907258 Tsutsumi et al. Jun 2005 B2
6907462 Li et al. Jun 2005 B1
6910074 Amin et al. Jun 2005 B1
6912221 Zadikian et al. Jun 2005 B1
6914882 Merani et al. Jul 2005 B2
6917622 McKinnon, III et al. Jul 2005 B2
6920145 Matsuoka et al. Jul 2005 B2
6922724 Freeman et al. Jul 2005 B1
6928085 Haartsen Aug 2005 B2
6928471 Pabari et al. Aug 2005 B2
6934250 Kejriwal et al. Aug 2005 B1
6934752 Gubbi Aug 2005 B1
6934795 Nataraj et al. Aug 2005 B2
6937154 Zeps et al. Aug 2005 B2
6937561 Chiussi et al. Aug 2005 B2
6937566 Forslow Aug 2005 B1
6937591 Guo et al. Aug 2005 B2
6937600 Takagi Aug 2005 B2
6940808 Shields et al. Sep 2005 B1
6940813 Ruutu Sep 2005 B2
6940832 Saadawi et al. Sep 2005 B2
6941341 Logston et al. Sep 2005 B2
6944168 Paatela et al. Sep 2005 B2
6947378 Wu et al. Sep 2005 B2
6947943 DeAnna et al. Sep 2005 B2
6947996 Assa et al. Sep 2005 B2
6950400 Tran et al. Sep 2005 B1
6950441 Kaczmarczyk et al. Sep 2005 B1
6952401 Kadambi et al. Oct 2005 B1
6952416 Christie, IV Oct 2005 B1
6975638 Chen et al. Dec 2005 B1
6975647 Neale et al. Dec 2005 B2
7023851 Chakravorty Apr 2006 B2
7065084 Seo Jun 2006 B2
7068599 Jiang et al. Jun 2006 B1
7076552 Mandato Jul 2006 B2
7095715 Buckman et al. Aug 2006 B2
7149898 Marejka et al. Dec 2006 B2
7200144 Terrell et al. Apr 2007 B2
7251242 Schrodi Jul 2007 B2
7260102 Mehrvar et al. Aug 2007 B2
7289498 Yu et al. Oct 2007 B2
7330908 Jungck Feb 2008 B2
7337236 Bess et al. Feb 2008 B2
7349422 Duong et al. Mar 2008 B2
7359321 Sindhu et al. Apr 2008 B1
7376829 Ranjan May 2008 B2
7392323 Yim et al. Jun 2008 B2
7408932 Kounavis et al. Aug 2008 B2
7424579 Wheeler et al. Sep 2008 B2
7433307 Hooper et al. Oct 2008 B2
7434221 Hooper et al. Oct 2008 B2
7437478 Yokota et al. Oct 2008 B2
7471689 Tripathi et al. Dec 2008 B1
7477651 Schmidt et al. Jan 2009 B2
7489666 Koo et al. Feb 2009 B2
7499457 Droux et al. Mar 2009 B1
7539175 White et al. May 2009 B2
7543072 Hertzog et al. Jun 2009 B1
7590756 Chan Sep 2009 B2
7720047 Katz et al. May 2010 B1
7756134 Smith et al. Jul 2010 B2
7813359 Yamawaki Oct 2010 B1
7869428 Shake et al. Jan 2011 B2
7894509 Smith et al. Feb 2011 B2
7916626 Smith et al. Mar 2011 B2
20010030970 Wiryaman et al. Oct 2001 A1
20020009060 Gross Jan 2002 A1
20020009081 Sampath et al. Jan 2002 A1
20020010792 Border Jan 2002 A1
20020062395 Thompson et al. May 2002 A1
20020064128 Hughes et al. May 2002 A1
20020091802 Paul et al. Jul 2002 A1
20020122387 Ni Sep 2002 A1
20020122395 Bourlas et al. Sep 2002 A1
20020141338 Burke Oct 2002 A1
20020143948 Maher Oct 2002 A1
20020160805 Laitinen et al. Oct 2002 A1
20020188871 Noehring et al. Dec 2002 A1
20020191253 Yang et al. Dec 2002 A1
20030004952 Nixon et al. Jan 2003 A1
20030016625 Narsinh et al. Jan 2003 A1
20030021291 White et al. Jan 2003 A1
20030033394 Stine Feb 2003 A1
20030067877 Sivakumar et al. Apr 2003 A1
20030110286 Antal et al. Jun 2003 A1
20030112802 Ono et al. Jun 2003 A1
20030112822 Hong et al. Jun 2003 A1
20030112824 Acosta Jun 2003 A1
20030118107 Itakura et al. Jun 2003 A1
20030158963 Sturdy et al. Aug 2003 A1
20030163539 Piccinelli Aug 2003 A1
20030186724 Tsutsumi et al. Oct 2003 A1
20030189935 Warden et al. Oct 2003 A1
20030195983 Krause Oct 2003 A1
20030236828 Rock et al. Dec 2003 A1
20040001493 Cloonan et al. Jan 2004 A1
20040038685 Nakabayashi Feb 2004 A1
20040057437 Daniel et al. Mar 2004 A1
20040076161 Lavian et al. Apr 2004 A1
20040077345 Turner et al. Apr 2004 A1
20040105452 Koshino et al. Jun 2004 A1
20040125815 Shimazu et al. Jul 2004 A1
20040131014 Thompson et al. Jul 2004 A1
20040151114 Ruutu Aug 2004 A1
20040165528 Li et al. Aug 2004 A1
20040172476 Chapweske Sep 2004 A1
20040174898 Kadambi et al. Sep 2004 A1
20040190451 Dacosta Sep 2004 A1
20040218532 Khirman Nov 2004 A1
20040228363 Adamczyk et al. Nov 2004 A1
20040252698 Anschutz et al. Dec 2004 A1
20050021806 Richardson et al. Jan 2005 A1
20050030952 Elmasry Feb 2005 A1
20050041669 Cansever et al. Feb 2005 A1
20050060427 Phillips et al. Mar 2005 A1
20050078672 Caliskan et al. Apr 2005 A1
20050114036 Fruhling et al. May 2005 A1
20050157660 Mandato et al. Jul 2005 A1
20050169257 Lahetkangas et al. Aug 2005 A1
20050171932 Nandhra Aug 2005 A1
20050220115 Romano et al. Oct 2005 A1
20050226233 Kryuchkov et al. Oct 2005 A1
20050232153 Bishop et al. Oct 2005 A1
20050281277 Killian Dec 2005 A1
20060036906 Luciani et al. Feb 2006 A1
20060039381 Anschutz et al. Feb 2006 A1
20060039404 Rao et al. Feb 2006 A1
20060083261 Maeda et al. Apr 2006 A1
20060104287 Rogasch et al. May 2006 A1
20060106753 Yoon et al. May 2006 A1
20060109857 Herrmann May 2006 A1
20060140121 Kakani et al. Jun 2006 A1
20060149845 Malin et al. Jul 2006 A1
20060165051 Banerjee et al. Jul 2006 A1
20060215593 Wang et al. Sep 2006 A1
20060286993 Xie et al. Dec 2006 A1
20070008883 Kobayashi Jan 2007 A1
20070058561 Virgile Mar 2007 A1
20070060045 Prautzsch Mar 2007 A1
20070070895 Narvaez Mar 2007 A1
20070133582 Banerjee et al. Jun 2007 A1
20070153798 Krstulich Jul 2007 A1
20070156919 Potti et al. Jul 2007 A1
20070171910 Kumar Jul 2007 A1
20070189327 Konda Aug 2007 A1
20070206506 Purpura Sep 2007 A1
20070253412 Batteram et al. Nov 2007 A1
20070258445 Smith et al. Nov 2007 A1
20070263616 Castro et al. Nov 2007 A1
20070275728 Lohr et al. Nov 2007 A1
20070291647 Smith et al. Dec 2007 A1
20070291656 Knazik et al. Dec 2007 A1
20070291751 Smith et al. Dec 2007 A1
20070291766 Knazik et al. Dec 2007 A1
20080065808 Hoese et al. Mar 2008 A1
20080144493 Yeh Jun 2008 A1
20080293413 Sharif-Ahmadi et al. Nov 2008 A1
20090161741 Ginis et al. Jun 2009 A1
Foreign Referenced Citations (88)
Number Date Country
0853404 Jul 1998 EP
0886454 Dec 1998 EP
1052816 Nov 2000 EP
1146704 Oct 2001 EP
1191751 Mar 2002 EP
1193938 Mar 2002 EP
1193938 Apr 2002 EP
1300991 Apr 2003 EP
1180882 Oct 2004 EP
1575224 Feb 2005 EP
1575224 Sep 2005 EP
1622322 Jan 2006 EP
1648125 Apr 2006 EP
3019545 Jan 1991 JP
H07-007516 Jan 1995 JP
H08-307454 Nov 1996 JP
H08-316989 Nov 1996 JP
H08-316990 Nov 1996 JP
H09-149051 Jun 1997 JP
9191314 Jul 1997 JP
10051495 Feb 1998 JP
11122264 Apr 1999 JP
2000-049866 Feb 2000 JP
2000-207234 Jul 2000 JP
2001045056 Feb 2001 JP
2001-186173 Jul 2001 JP
2001-308947 Nov 2001 JP
2001-522115 Nov 2001 JP
2002-044136 Feb 2002 JP
2003-078555 Mar 2003 JP
2003-209577 Jul 2003 JP
2003-298593 Oct 2003 JP
2004-222010 May 2004 JP
2005-027240 Jan 2005 JP
2005-217491 Aug 2005 JP
2005-244269 Sep 2005 JP
2006-031063 Feb 2006 JP
2006-087147 Mar 2006 JP
2006-121192 May 2006 JP
2006-166426 Jun 2006 JP
2002-45703 Jun 2002 KR
2004-71761 Aug 2004 KR
9824208 Jun 1998 WO
9923786 May 1999 WO
9922494 May 1999 WO
0008817 Feb 2000 WO
0174027 Oct 2001 WO
0230066 Apr 2002 WO
03053013 Jun 2003 WO
03058466 Jul 2003 WO
2004023323 Mar 2004 WO
2004036845 Apr 2004 WO
2005006664 Jan 2005 WO
2005076539 Aug 2005 WO
2006006632 Jan 2006 WO
2006001155 Jul 2006 WO
2006071155 Jul 2006 WO
2007149165 Feb 2007 WO
2007149166 Feb 2007 WO
2007130414 Nov 2007 WO
2007130415 Nov 2007 WO
2007147032 Dec 2007 WO
2007147040 Dec 2007 WO
2007149769 Dec 2007 WO
2007149805 Dec 2007 WO
2008008865 Jan 2008 WO
2008016845 Feb 2008 WO
2008016846 Feb 2008 WO
2008016848 Feb 2008 WO
2008016850 Feb 2008 WO
Related Publications (1)
Number Date Country
20070291766 A1 Dec 2007 US