Empirical scheduling of network packets

Information

  • Patent Grant
  • 7529247
  • Patent Number
    7,529,247
  • Date Filed
    Wednesday, September 17, 2003
  • Date Issued
    Tuesday, May 5, 2009
Abstract
A method of transmitting packets over a network includes steps of partitioning a packet delivery schedule into discrete time slots; transmitting a plurality of test packets from a first endpoint on the network to an intended recipient in the network using different time slots; evaluating the reliability of the network to transmit the plurality of test packets in each time slot; and selecting one or more time slots in the delivery schedule according to the evaluation step.
Description
BACKGROUND OF THE INVENTION

The present invention relates generally to a system for allowing devices connected to a network (e.g., an IP or Ethernet network) to collaborate with other such devices so as to transmit and receive data packets without impairment on the network.


As is generally known, Ethernet and Internet Protocol (IP) are systems for transmitting packets between different points on a communications network. These switching systems are known as “contention-based” systems. That is, all transmitters contend for network resources. All transmitters may transmit simultaneously. If they do, then network resources may be oversubscribed. When this happens, data may be delayed or lost, resulting in network impairment.


As illustrated in FIG. 1, four streams of packets are input to a packet switch 112, which routes the packets to one or more outputs based on addressing information contained in each packet. Packets may arrive at the switch at unpredictable times, leading to bursts of inputs that must be handled. The switch typically maintains a packet queue 114 that is able to store a small number of packets. The queue may comprise multiple queues arranged by packet priority level, such that priority 3 packets, for example, take precedence over priority 1 packets. If the inputs are too bursty, the queues fill up and some packets may be discarded. The higher-priority queues are typically emptied before the lower-priority queues, such that the lower-priority queues are more likely to lose data first.


IP systems suffer from impairments such as packet loss and jitter. This happens because there is no control over how many packets reach a router at any given instant. If two packets arrive at a router at the same time, destined for the same port, one must be delayed; they cannot both be transmitted simultaneously. The delayed packet is held in the queue until the first packet has been completely transmitted.



FIG. 2 shows a computer network comprising endpoints 100, 101, 102, and 103. The network includes routers 104 through 107. As can be seen in the figure, if endpoints 100 and 101 communicate with endpoints 102 and 103 at the same time, a bottleneck may develop between routers 105 and 106. This may occur because too many packets may be simultaneously transmitted between the routers, causing the routers to discard overflow packets. This can happen even at low levels of network utilization.


Various methods have been developed to overcome data loss on Ethernet and IP networks. The primary approach has been to use additional protocols to replace lost data after the fact. An example is the well-known Transmission Control Protocol (TCP). TCP detects data loss and causes retransmission of the data until a perfect copy of the complete data file is delivered to the recipient device.


Many devices may be unable to use TCP or any other retransmission method because retransmission is far too slow. Real-time applications require accurate delivery of data the first time. For these applications to operate well, even the speed of light introduces undesired delay; adding retransmission delay is neither feasible nor desirable.


The problem is determining how to provide reliable, first-time delivery on a contention-based network. Various approaches have been tried. The most commonly proposed system relies on prioritization of data in the network. With this approach, data having real-time constraints is identified with priority coding so that it may be transmitted before other data.


Prioritization seems at first to be a good solution. On reflection, however, it suffers from the same difficulty: prioritization only provides a delivery advantage relative to lower-priority data. It provides no advantage against other data at the same priority level.


Analysis and testing show that this approach can work in certain circumstances, but only when the amount of priority data is small. For simple applications like voice, the priority data may need to be 8% or less of the total. Other applications must occupy an even smaller percentage of total network resources. As shown in FIG. 1, even high-priority packets may be discarded if too many high-priority packets are transmitted within a short time interval. For many networks this makes prioritization impractical.


Another approach is to multiplex the data. With this method, the bursts of data associated with one flow are separated from those of another.


Multiplexing usually uses some type of time-domain system, known as time-division multiplexing (TDM), to separate flows. Flows may be separated in groups, so that one group does not contend with another. This can be an improvement but still leaves the possibility of contention between groups. The only way to eliminate contention is to multiplex each flow individually. A central problem with multiplexing is that it sacrifices a principal advantage of a shared network: the average bandwidth available to all transmitters is reduced, because each potential transmitter is guaranteed a slot of time on the network even if that time is infrequently used. This leads to inefficient resource usage.


Asynchronous Transfer Mode (ATM) is another technology for multiplexing a data network, to reduce contention. ATM breaks all data flows into equal length data blocks. Further, ATM can limit the number of data blocks available to any flow or application. The result is a virtual TDM multiplex system.


Both TDM and ATM reduce contention, but at the cost of considerable added complexity, cost, and components, and of reduced bandwidth performance. Other approaches rely on specialized hardware to schedule packet delivery, driving up hardware costs.


SUMMARY OF THE INVENTION

The invention overcomes many of the above-identified disadvantages by providing an empirically determined delivery schedule for packets that are to be delivered between two endpoints on the network. A transmitting node that needs to transmit packets at a known data rate (e.g., to support a voice telephone call) transmits a series of test packets over the network to the intended recipient using different delivery times. The test packets are evaluated to determine which of the delivery times suffered the least latency and/or packet loss, and that delivery time is used to schedule the packets for the duration of the transmission. Other endpoints use a similar scheme, such that each endpoint is able to evaluate which delivery schedule is best suited for transmitting packets with the least likely packet loss and latency. Different priority levels are used to transmit the scheduled data, the test packets, and other data in the network. The system empirically determines a desirable time schedule for transmission of data packets between two endpoints on the network, and the delivery scheme can be implemented without specialized hardware.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows the problem of bursty packets creating an overflow condition at a packet switch, leading to packet loss.



FIG. 2 shows how network congestion can lead to a bottleneck where two sets of endpoints share a common network resource under bursty conditions.



FIG. 3 shows one approach for assigning different priority levels to scheduled data (realtime level); test packets (discovery level); and other network traffic (data level).



FIG. 4 shows a frame structure in which a delivery schedule can be decomposed into a master frame; subframes; and secondary subframes.



FIG. 5 shows a flow chart having steps for carrying out various principles of the invention.



FIG. 6 shows a system using a delivery schedule for test packets from a first endpoint to a second endpoint.



FIG. 7 shows a system wherein queues for realtime traffic (priority 3) are nearly full at one packet switch and yet the traffic still gets through the network.





DETAILED DESCRIPTION OF THE INVENTION

According to one variation of the invention, a priority scheme is used to assign priority levels to data packets in a network such that delivery of packets intended for real-time or near real-time delivery (e.g., phone calls, video frames, or TDM data packets converted into IP packets) are assigned the highest priority in the network. A second-highest priority level is assigned to data packets that are used for testing purposes (i.e. the so-called test packets). A third-highest priority level is assigned to remaining data packets in the system, such as TCP data used by web browsers. FIG. 3 illustrates this scheme. These priority levels can be assigned by enabling the packet priority scheme already available in many routers.
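
A minimal sketch of how these three levels might be represented in software; the class and constant names, and the specific numeric values, are illustrative assumptions, since the text specifies only the relative ordering (realtime above discovery, discovery above ordinary data):

```python
from enum import IntEnum

class TrafficClass(IntEnum):
    """Illustrative priority levels mirroring FIG. 3 (numeric values assumed)."""
    DATA = 1        # remaining traffic, e.g. TCP data used by web browsers
    DISCOVERY = 2   # test packets used to probe candidate time slots
    REALTIME = 3    # scheduled voice, video, or TDM-over-IP packets

def tos_byte(traffic_class: TrafficClass) -> int:
    # IP precedence occupies the top three bits of the IPv4 TOS byte.
    return traffic_class << 5
```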


Other priority levels above and below these three levels can be accommodated as well. For example, a priority level above the real-time level can be assigned for emergency purposes, or for network-level messages (e.g., messages that instruct routers or other devices to perform different functions).



FIG. 4 shows how an arbitrary delivery time period of one second (a master frame) can be decomposed into subframes each of 100 millisecond duration, and how each subframe can be further decomposed into secondary subframes each of 10 millisecond duration. Each secondary subframe is in turn divided into time slots of 1 millisecond duration. According to one variation of the invention, the delivery time period for each second of transmission bandwidth is decomposed using a scheme such as that shown in FIG. 4 and packets are assigned to one or more time slots according to this schedule for purposes of transmitting test packets and for delivering data using the inventive principles. In this sense, the scheme resembles conventional TDM systems. However, unlike TDM systems, no endpoint can be guaranteed to have a particular timeslot or timeslots. Instead, nodes on the network transmit using timeslots that are empirically determined to be favorable based on the prior transmission of test packets between the two endpoints.
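
A minimal sketch of the FIG. 4 decomposition, assuming the 1-second, 100-millisecond, 10-millisecond, and 1-millisecond durations described above; the constant and function names are illustrative:

```python
MASTER_FRAME_MS = 1_000     # one-second master frame
SUBFRAME_MS = 100           # 10 subframes per master frame
SECONDARY_SUBFRAME_MS = 10  # 10 secondary subframes per subframe
SLOT_MS = 1                 # 10 slots per secondary subframe; 1,000 per second

def locate_slot(offset_ms: int) -> tuple[int, int, int]:
    """Map a millisecond offset within the master frame to
    (subframe index, secondary subframe within that subframe, slot 0-999)."""
    offset_ms %= MASTER_FRAME_MS
    subframe = offset_ms // SUBFRAME_MS
    secondary = (offset_ms % SUBFRAME_MS) // SECONDARY_SUBFRAME_MS
    slot = offset_ms // SLOT_MS
    return subframe, secondary, slot

# Example: locate_slot(123) -> (1, 2, 123), i.e. the 123rd one-millisecond
# slot of the second falls in subframe 1, secondary subframe 2.
```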



FIG. 5 shows method steps that can be used to carry out the principles of the invention. Beginning in step 501, a determination is made that two endpoints on the network (e.g., an Ethernet network or an IP network) desire to communicate. This determination may be the result of a telephone receiver being picked up and a telephone number being dialed, indicating that two nodes need to initiate a voice-over-IP connection. Alternatively, a one-way connection may need to be established between a node that is transmitting video data and a receiving node. Each of these connection types can be expected to impose a certain amount of data packet traffic on the network. For example, a voice-over-IP connection may require a 64 kilobit per second transfer rate using 80-byte packet payloads (not including packet headers). A video stream would typically impose higher bandwidth requirements on the network.


Note that for two-way communication, two separate connections must be established: one for node A transmitting to node B, and another for node B transmitting to node A. Although the inventive principles will be described with respect to a one-way transmission, it should be understood that the same steps would be repeated at the other endpoint where a two-way connection is desired.


In step 502, a delivery schedule is partitioned into time slots according to a scheme such as that illustrated in FIG. 4. (This step can be done in advance and need not be repeated every time a connection is established between two endpoints.) The delivery schedule can be derived from a clock such as that provided by the Global Positioning System (GPS). As one example, an arbitrary time period of one second can be established for a master frame, which can be successively decomposed into subframes and secondary subframes, wherein each subframe is composed of 10 secondary subframes each of 10 milliseconds in duration and each secondary subframe is composed of 10 slots each of 1 millisecond in duration. Therefore, a period of one second would comprise 1,000 slots of 1 millisecond duration. Other time periods could of course be used, and the invention is not intended to be limited to any particular time slot scheme.
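
As a rough sketch of how an endpoint might schedule a transmission against such a clock-derived frame, assuming (purely for illustration) that master frames are aligned to whole seconds of a shared GPS-disciplined clock:

```python
import time

SLOT_MS = 1
SLOTS_PER_MASTER_FRAME = 1_000  # one-second master frame of 1 ms slots

def next_slot_time(slot: int, now_s: float | None = None) -> float:
    """Return the absolute time (in seconds) of the next occurrence of `slot`.

    Alignment of master frames to whole seconds is an assumption of this
    sketch; the text only requires that all frames derive from a common clock.
    """
    if now_s is None:
        now_s = time.time()
    frame_start = int(now_s)                 # start of the current master frame
    slot_time = frame_start + slot * SLOT_MS / 1000.0
    if slot_time <= now_s:
        slot_time += 1.0                     # wait for the next master frame
    return slot_time
```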


In step 503, the required bandwidth between the two endpoints is determined. For example, for a single voice-over-IP connection, a bandwidth of 64 kilobits per second might be needed. Assuming a packet size of 80 bytes or 640 bits (ignoring packet overhead for the moment), this would mean that 100 packets per second must be transmitted, which works out to (on average) a packet every 10 milliseconds.
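
The arithmetic above, with header overhead ignored as stated, works out as follows:

```python
# 64 kbit/s voice carried in 80-byte payloads (header overhead ignored).
BITRATE_BPS = 64_000
PAYLOAD_BYTES = 80

payload_bits = PAYLOAD_BYTES * 8                  # 640 bits per packet
packets_per_second = BITRATE_BPS // payload_bits  # 100 packets per second
interval_ms = 1_000 // packets_per_second         # one packet every 10 ms
```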


Returning to the example shown in FIG. 4, this would mean transmitting a packet during at least one of the slots in the secondary subframe at the bottom of the figure. (Each slot corresponds to one millisecond).


In step 504, a plurality of test packets are transmitted during different time slots at a rate needed to support the desired bandwidth. Each test packet is transmitted using a “discovery” level priority (see FIG. 3) that is higher than that accorded to normal data packets (e.g., TCP packets) but lower than that assigned to realtime data traffic (to be discussed below). For example, turning briefly to FIG. 6, suppose that the schedule has been partitioned into one-millisecond time slots. The test packets might be transmitted during time slots 1, 3, 5, 7, 9, 11, and 12 as shown. Each test packet preferably contains the “discovery” level priority; a timestamp to indicate when the packet was sent; a unique sequence number from which the packet can be identified after it has been transmitted; and some means of identifying which time slot was used to transmit the packet. (The time slot might be inferred from the sequence number.) The receiving endpoint, upon receiving the test packets, returns them to the sender, which allows the sender to (a) confirm how many of the sent packets were actually received, and (b) determine the latency of each packet. Other approaches for determining latency can of course be used. The evaluation can be done by the sender, the recipient, or a combination of the two.
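
A minimal sender-side sketch of this test-packet exchange, assuming the recipient simply echoes each packet back; the field names, the numeric discovery priority, and the helper functions are illustrative, not taken from the text:

```python
import time
from dataclasses import dataclass

DISCOVERY_PRIORITY = 2  # assumed numeric value for the "discovery" level

@dataclass
class TestPacket:
    priority: int   # discovery-level priority
    sequence: int   # unique sequence number
    slot: int       # time slot used to transmit the packet
    sent_at: float  # sender timestamp (seconds on the shared clock)

# Sender-side bookkeeping: remember each test packet when it is sent and
# compute round-trip latency when the recipient echoes it back.
outstanding: dict[int, TestPacket] = {}

def on_send(sequence: int, slot: int) -> TestPacket:
    packet = TestPacket(DISCOVERY_PRIORITY, sequence, slot, time.time())
    outstanding[sequence] = packet
    return packet

def on_echo(sequence: int) -> tuple[int, float]:
    """Return (slot, round-trip latency in seconds) for an echoed test packet."""
    packet = outstanding.pop(sequence)
    return packet.slot, time.time() - packet.sent_at
```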


In step 505, the sender evaluates the test packets to determine which time slot or slots are most favorable for carrying out the connection. For example, if it is determined that packets transmitted using time slot #1 suffered a lower average dropped-packet rate than the other slots, that slot would be preferred. Similarly, the time slot that resulted in the lowest packet latency (round trip from the sender) could be preferred over other time slots that had higher latencies. The theory is that packet switches that are beginning to be stressed would have queues that are beginning to fill up, causing increases in latency and dropped packets. Accordingly, other time slots could be used to avoid transmitting packets during periods that are likely to increase queue lengths in those switches. In one variation, the time slots can be “overstressed” to stretch the system a bit. For example, if only 80-byte packets are actually needed, 160-byte test packets could be transmitted during the test phase to represent an overloaded condition. The overloaded condition might reveal bottlenecks that the normal 80-byte packets would not.
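
One way such an evaluation could be implemented is sketched below, ranking candidate slots first by dropped-packet rate and then by average round-trip latency; the data structures and the exact ranking rule are assumptions consistent with, but not dictated by, the description:

```python
from dataclasses import dataclass, field

@dataclass
class SlotStats:
    sent: int = 0
    received: int = 0
    latencies: list[float] = field(default_factory=list)

    @property
    def drop_rate(self) -> float:
        return (1.0 - self.received / self.sent) if self.sent else 1.0

    @property
    def avg_latency(self) -> float:
        return sum(self.latencies) / len(self.latencies) if self.latencies else float("inf")

def best_slots(stats: dict[int, SlotStats], count: int = 1) -> list[int]:
    """Rank candidate slots by dropped-packet rate, breaking ties on latency."""
    ranked = sorted(stats, key=lambda s: (stats[s].drop_rate, stats[s].avg_latency))
    return ranked[:count]
```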


Rather than the recipient sending back time-stamped packets, the recipient could instead compute statistics from the collected test packets and send back a report identifying the latencies and dropped-packet rates associated with each time slot.


As explained above, packet header overhead has been ignored but would typically need to be included in the evaluation process (i.e., 80-byte packets would increase by the size of the packet header). Slot selection for the test packets could be determined randomly, or it could be based on previously used time slots. For example, if a transmitting node is already transmitting on time slot 3, it would know in advance that such a time slot might not be a desirable choice for a second connection. As another example, if the transmitting node is already transmitting on time slot 3, the test packets could be transmitted in a time slot that is furthest away from time slot 3, in order to spread out the packet distribution as much as possible.
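
A sketch of such a slot-selection strategy, assuming a 1,000-slot frame: pick randomly when no slots are in use, otherwise prefer the slots furthest (in circular distance) from slots this node already transmits on. This is one plausible reading of the example above, not the only one:

```python
import random

SLOTS_PER_FRAME = 1_000

def candidate_slots(in_use: set[int], count: int) -> list[int]:
    """Choose time slots in which to send test packets."""
    free = [s for s in range(SLOTS_PER_FRAME) if s not in in_use]
    if not in_use:
        return random.sample(free, count)

    def distance(slot: int) -> int:
        # Circular distance to the nearest slot already in use.
        return min(min(abs(slot - u), SLOTS_PER_FRAME - abs(slot - u)) for u in in_use)

    return sorted(free, key=distance, reverse=True)[:count]
```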


In step 506, a connection is established between the two endpoints and packets are transmitted using the higher “realtime” priority level and using the slot or slots that were determined to be more favorable for transmission. Because the higher priority level is used, the connections are not affected by test packets transmitted across the network, which are at a lower priority level. In one variation, the IP precedence field in IP packet headers can be used to establish the different priority levels.
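
On hosts that expose the standard IP_TOS socket option, the IP precedence bits can be set per socket; the sketch below assumes UDP transport and the illustrative precedence values used earlier (realtime = 3):

```python
import socket

def open_realtime_socket(precedence: int = 3) -> socket.socket:
    """Create a UDP socket whose outgoing packets carry the given IP precedence.

    IP precedence occupies the top three bits of the IPv4 TOS byte; IP_TOS is
    a standard option on most Unix-like platforms. The default precedence of 3
    is only an assumed value mirroring the "realtime" level of FIG. 3.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, precedence << 5)
    return sock
```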



FIG. 6 shows a system employing various principles of the invention. As shown in FIG. 6, two endpoints each rely on a GPS receiver for accurate time clock synchronization (e.g., for timestamping and latency-determination purposes). The IP network may comprise a plurality of routers and/or other network devices that are able to ultimately route packets (e.g., IP or Ethernet packets) from one endpoint to the other. It is assumed that the organization configuring the network has the ability to control the priority levels used on the network, in order to prevent other nodes from using the discovery priority level and realtime priority level.


It should be appreciated that rather than transmitting test packets during multiple different time slots in the same frame, a single slot can be tested, then another, and so on, until an appropriate slot is found for transmission. This would, however, increase the time required to establish a connection. Also, as described above, for a two-way connection, both endpoints would carry out the steps to establish the connection.


It should also be understood that the phase of all frames may be independent from one another; they need only be derived from a common clock. Different endpoints need not have frames synchronized with each other. Other approaches can of course be used.


The invention will also work with “early discard” settings in router queues since the empirical method would detect that a discard condition is approaching.


In another variation, packet latencies and dropped-packet rates can be monitored during a connection between endpoints and, upon detecting a deteriorating trend in either parameter, additional test packets can be transmitted to find a better time slot to which the connection can be moved.
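
A simple monitor along these lines is sketched below; the moving-average window and the degradation threshold are arbitrary illustrative choices:

```python
from collections import deque

class SlotMonitor:
    """Track recent latency samples for the active slot and flag degradation,
    at which point new test packets can be sent to look for a better slot."""

    def __init__(self, window: int = 100, threshold: float = 1.5):
        self.samples: deque[float] = deque(maxlen=window)
        self.baseline: float | None = None
        self.threshold = threshold

    def record(self, latency_s: float) -> bool:
        """Return True when latency has degraded enough to warrant re-probing."""
        self.samples.append(latency_s)
        average = sum(self.samples) / len(self.samples)
        if self.baseline is None and len(self.samples) == self.samples.maxlen:
            self.baseline = average
        return self.baseline is not None and average > self.baseline * self.threshold
```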



FIG. 7 shows a system in which a first endpoint 701 communicates with a second endpoint 706 through a plurality of packet switches 703 through 705. Each packet switch maintains a plurality of packet queues. For illustrative purposes, four different priority levels are shown, wherein level 4 is the highest and level 1 is the lowest. Assume that endpoint 701 attempts to initiate a connection with endpoint 706 through the network. Endpoint 701 transmits a plurality of “test” packets using priority level 2. As can be seen, packet switch 703 is lightly loaded and the queues have no difficulty keeping up with the traffic.


Packet switch 704, however, is heavily loaded. In that switch, the queue for priority level 1 traffic is full, leading to dropped packets and latencies. Similarly, the test packets transmitted by endpoint 701 at priority level 2 cause that queue to overflow, causing dropped packets and longer latencies. However, the priority level 3 queue (existing realtime traffic) is not yet full, so those packets are transported through the network unaffected. In accordance with the invention, upon detecting that test packets sent during certain time slots are dropped and/or suffer from high latencies, endpoint 701 selects those time slots having the lowest drop rate and/or the lowest latencies, and uses those time slots to schedule the packets (which are then transmitted using level 3 priority).


It is assumed that each endpoint in FIG. 7 comprises a node (i.e., a computer having a network interface) including computer-executable instructions for carrying out one or more of the above-described functions.


While the invention has been described with respect to specific examples including presently preferred modes of carrying out the invention, those skilled in the art will appreciate that there are numerous variations and permutations of the above-described systems and techniques that fall within the spirit and scope of the invention as set forth in the appended claims. Any of the method steps described herein can be implemented in computer software and stored on a computer-readable medium for execution in a general-purpose or special-purpose computer, and such computer-readable media are included within the scope of the intended invention. Numbering associated with process steps in the claims is for convenience only and should not be read to imply any particular ordering or sequence.

Claims
  • 1. A method of transmitting packets over an Internet Protocol (IP) or Ethernet packet-switched network, comprising the steps of: (1) transmitting from a network endpoint a plurality of test packets over the network during a plurality of different time slots, wherein each test packet has a priority level that is lower than a priority level assigned to data packets that are to be transmitted between endpoints on the network, and wherein the test packets are transmitted so as to emulate data packets that are to be transmitted between the endpoints on the network; (2) on the basis of step (1), evaluating which of the plurality of different time slots corresponds to favorable network traffic conditions; and (3) transmitting a plurality of data packets comprising one or more of voice data, video data, and TDM-over-IP data over the network at a priority level higher than the test packets using one or more favorable time slots evaluated in step (2).
  • 2. The method of claim 1, wherein step (2) comprises the step of evaluating packet latencies associated with the test packets.
  • 3. The method of claim 1, wherein step (2) comprises the step of evaluating dropped packet rates associated with the test packets.
  • 4. The method of claim 1, wherein step (1) comprises the step of transmitting the test packets at a data rate corresponding to an expected connection bandwidth.
  • 5. The method of claim 1, wherein step (2) comprises the step of a network endpoint performing an evaluation of packet statistics associated with the test packets transmitted over the plurality of different time slots.
  • 6. The method of claim 1, wherein step (2) comprises the step of a network endpoint performing an evaluation of latencies and dropped packet rates associated with the test packets transmitted over the plurality of different time slots.
  • 7. The method of claim 1, wherein the test packets and the data packets comprise Internet Protocol (IP) packets transmitted over a packet-switched network.
  • 8. The method of claim 7, wherein the IP packets are scheduled for transmission within time slots within a frame that is synchronized to a clock.
  • 9. The method of claim 1, wherein the test packets are transmitted at a priority level that is lower than the data packets in step (3), but higher than other data packets containing other data transmitted on the network.
  • 10. The method of claim 1, wherein the data packets comprise voice data.
  • 11. The method of claim 1, further comprising the step of repeating steps (1) through (3) for each side of a two-way connection between two endpoints in the network.
  • 12. The method of claim 1, wherein the network is a packet-switched network comprising packet switches that maintain packet queues.
  • 13. The method of claim 12, wherein each packet switch comprises at least two packet queues, a higher-priority queue for transmitting the data packets of step (3) and a lower-priority queue for transmitting the test packets of step (1).
  • 14. In an Internet Protocol (IP) or Ethernet network comprising a plurality of packet switches, a method of transmitting data packets, comprising the steps of: (1) establishing a time reference frame comprising a plurality of time slots during which packets are to be transmitted across the network between two network endpoints; (2) from a first network endpoint, empirically determining which of the plurality of time slots is associated with a reduced level of packet contention with respect to an intended second network endpoint; and (3) synchronously transmitting a plurality of data packets comprising one or more of voice data, video data, and TDM-over-IP data from the first network endpoint to the second network endpoint during one or more time slots empirically determined to be associated with the reduced level of packet contention in step (2).
  • 15. The method of claim 14, wherein step (2) comprises the step of transmitting a plurality of test packets during a plurality of different time slots from the first network endpoint to the second network endpoint.
  • 16. The method of claim 15, wherein step (2) comprises the step of transmitting the test packets using a packet priority level lower than a packet priority level used to transmit the plurality of data packets in step (3).
  • 17. The method of claim 16, wherein step (2) comprises the step of transmitting test packets at a data rate sufficient to support a desired bandwidth in step (3).
  • 18. An apparatus having a network interface and a processor programmed with computer-executable instructions that, when executed, perform the steps of: (1) transmitting a plurality of test packets at a first priority level, wherein the test packets are transmitted at a data rate that emulates data packets that are to be transmitted between endpoints on the network; (2) on the basis of step (1), evaluating which of the plurality of different time slots corresponds to favorable network traffic conditions; and (3) transmitting a plurality of data packets comprising one or more of voice data, video data, and TDM-over-IP data over the network at a second priority level using one or more favorable time slots evaluated in step (2), wherein the second priority level is higher than the first priority level.
  • 19. The apparatus of claim 18, wherein the computer-executable instructions further perform the step of evaluating packet latencies of the plurality of test packets with a second apparatus connected to the network.
  • 20. The method of claim 1, wherein step (2) comprises the step of transmitting the test packets at a data rate that exceeds an expected data rate for packets that are to be transmitted between two network endpoints on the network.
  • 21. The method of claim 14, wherein the reduced level of packet contention corresponds to zero contention.
  • 22. The apparatus of claim 18, wherein step (2) comprises the step of evaluating packet statistics associated with the test packets.
  • 23. The apparatus of claim 22, wherein the packet statistics comprise a dropped packet rate.
  • 24. The apparatus of claim 22, wherein the packet statistics comprise packet latencies.
  • 25. The apparatus of claim 18, wherein the test packets and the data packets comprise Internet Protocol (IP) packets transmitted over a packet-switched network.
  • 26. The apparatus of claim 25, wherein the IP packets are scheduled for transmission within time slots within a frame that is synchronized to a clock.
  • 27. The apparatus of claim 18, wherein the test packets are transmitted at a priority level that is lower than the data packets in step (3), but higher than other data packets containing other data transmitted on the network.
  • 28. The apparatus of claim 18, wherein the data packets comprise voice data.
  • 29. The apparatus of claim 18, wherein the network is a packet-switched network comprising packet switches that maintain packet queues.
  • 30. A system comprising at least three network endpoints that contend for resources in a shared packet switch, each endpoint comprising a processor programmed with computer-executable instructions that, when executed, perform steps including: (1) transmitting a plurality of test packets over the network during a plurality of different time slots, wherein each test packet has a priority level that is lower than a priority level assigned to data packets that are to be transmitted between endpoints on the network, and wherein the test packets are transmitted so as to emulate data packets that are to be transmitted between the endpoints on the network; (2) on the basis of step (1), evaluating which of the plurality of different time slots corresponds to favorable network traffic conditions; and (3) synchronously transmitting a plurality of data packets comprising one or more of voice data, video data, and TDM-over-IP data over the network using one or more favorable time slots evaluated in step (2).
  • 31. The system of claim 30, wherein the processor is further programmed to perform steps including: evaluating packet statistics corresponding to the test packets transmitted as part of step (2).
  • 32. The method of claim 1, wherein the data packets comprise video data.
  • 33. The method of claim 1, wherein the data packets comprise time-division multiplex (TDM) data converted into IP packets.
  • 34. The apparatus of claim 18, wherein the data packets comprise video data.
  • 35. The apparatus of claim 18, wherein the data packets comprise time-division multiplex (TDM) data converted into IP packets.
  • 36. A method of transmitting packets over an Internet Protocol (IP) network comprising a plurality of network switches, comprising: (1) establishing a time reference frame comprising a plurality of time slots corresponding to candidate times during which packets may be transmitted between network endpoints on the network; (2) transmitting over a plurality of the time slots a plurality of test packets from a first endpoint on the IP network to a second endpoint on the IP network, wherein the plurality of test packets are transmitted at a first priority level and are transmitted at a data rate corresponding to an expected rate to be experienced during a subsequent communication between the first and second endpoints on the IP network; (3) evaluating, at one of the first and second endpoints, packet statistics for the test packets, wherein the packet statistics are indicative of contention conditions in one or more of the plurality of network switches; (4) identifying one or more time slots that correspond to a low level of contention conditions; and (5) synchronously transmitting based on the time reference frame a plurality of data packets comprising one or more of voice data, video data, and TDM-over-IP data during the one or more of the time slots identified in step (4) that correspond to the low level of contention conditions in the one or more network switches, wherein the data packets are transmitted at a priority level higher than the first priority level of the test packets.
Related Publications (1)
Number Date Country
20050086362 A1 Apr 2005 US