This description relates to active handoffs in a network.
Cellular wireless communications systems, for example, are designed to serve many access terminals distributed over a large geographic area by dividing the area into regions called “cells”. At or near the center of each cell, a network-side access device (e.g., an access point) is located to serve client-side access devices located in the cell and commonly referred to as “access terminals” or “mobile stations.”
Examples of access terminals include cellular telephones, laptops, and PDAs. An access terminal generally establishes a call, also referred to as a “connection,” with an access point to communicate with other entities (e.g., servers in the internet or other users in the network).
A handoff refers to the process of transferring an ongoing call from one network-side access device to another. An ongoing call may be referred to as a “connection” or a “session”; the two terms are used interchangeably. A handoff may occur when an access terminal moves from the area covered by a first access point (with which it has established a call) to an area covered by a second access point. In this case, the handoff transfers the call from the first access point to the second access point to avoid call termination when the access terminal moves outside of the range of the first access point. A handoff may also occur when a particular access point reaches its capacity for connecting new calls. In this scenario, the access point may transfer an existing call (or a new call) to another access point located within an overlapping cell.
In general, in one aspect, the invention features methods and computer programs for transferring a communication connection for a client device from a source network device to a target network device. The method includes receiving, at a target network device, first and second sequence numbers, the first sequence number corresponding to a data segment sent from the source network device to the client device, and the second sequence number corresponding to a data segment received by the source network device from the client device; associating the first sequence number with a data segment sent from the target network device to the client device; associating the second sequence number with a data segment received by the target network device from the client device; and applying a first processing technique to data segments associated with sequence numbers that succeed the first and the second sequence numbers.
In general, in another aspect, the invention features a system that includes memory storing first and second sequence numbers received from a source network device, where the first sequence number corresponds to a data segment sent from the source network device to a client device, and the second sequence number corresponds to a data segment received by the source network device from the client device. The system also includes one or more processors configured to associate the first sequence number with a data segment sent from the source network device to the client device; associate the second sequence number with a data segment received from the client device; and apply a first processing technique to data segments associated with sequence numbers that succeed the first and the second sequence numbers.
In general, in a further aspect, the invention features a method and computer program for transferring a communication connection for a client device from a source network device to a target network device. The method includes storing copies of complete data packets that are transferred between the source network device and the client device before a handoff is triggered, the copies including a first set of data packets and a second set of data packets, the first set of data packets originating from the source network device and the second set of data packets originating from the client device; transferring the copies of the first set of data packets from the source network device to the target network device after the handoff is triggered; receiving first and second sequence numbers, the first sequence number corresponding to the first set of data packets, the second sequence number corresponding to the second set of data packets; and processing the first and the second sets of data packets initially.
In general, in yet a further aspect, the invention features a system that includes memory storing data segments received from a source network device over a first tunnel, the received data segments being associated with sequence numbers that precede a first sequence number and that need to be sent to a client device; and one or more processors configured to send data segments to the source network device over a second tunnel, the sent data segments being associated with sequence numbers that precede a second sequence number and that are received from the client device.
Implementations may include one or more of the following. The first processing technique may be performed at a first network layer and a second processing technique, performed at a second, higher network layer, may be applied to the data segments. No data loss may occur during transfer of the communication connection from the source network device to the target network device. A first number of data segments received from the source network device may be buffered; a second number of data segments received from the client device may be buffered; the first and second numbers of data segments may be sufficient to reconstruct packets of data transmitted during transfer of the communication connection from the source network device to the target network device; and control of the communication connection may be transferred from the source network device to the target network device. At the source network device, a copy of the complete data packets may be stored, and in response to detecting a timeout, the copy may be removed from memory. At least a portion of the data within the copy of the complete data packets may be stored in an uncompressed form.
Advantages may include one or more of the following. During a handoff, a call may be transferred from one access point to another access point while the call is in progress, or the call may be transferred from a radio network controller to another radio network controller. A variety of radio link protocol (RLP) transfer methods and compression (including decompression) methods (e.g., robust header compression (ROHC)) are provided. Some of the RLP transfer methods and compression schemes described herein may be implemented with little delay and overhead, but are more susceptible to data loss. Other RLP methods and compression techniques described herein, by contrast, eliminate or nearly eliminate data loss, but in turn require more overhead and incur more delay than those techniques that are more susceptible to data loss. A combination of an RLP transfer method and a compression scheme may be selected based on the application being implemented to take advantage of the tradeoffs between various RLP transfer methods and compression techniques. For example, some applications may be intolerant of data loss yet relatively tolerant of handoff delay, while in other applications it may be preferable to perform handoffs rapidly even at the expense of some data loss. The RLP methods and compression techniques reduce the amount of historical data that needs to be transferred during a handoff while maintaining the compression ratio established before the handoff.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
An Internet protocol (IP) layer and radio link protocol (RLP) layer, or equivalent, of the communication stack of a wireless network provide functional and procedural means for implementing a call between a network-side access device (e.g., an access point) and a client-side access device (e.g., an access terminal). At the IP layer (also referred to as a “network layer”), information (e.g., voice, data) transmitted during the call is organized into IP packets, each of which includes a header. The header of a packet usually precedes the actual data that is transmitted and includes destination addresses, error-checking fields, and other information for transmitting and handling the data. The header also typically includes a packet number that uniquely identifies the packet and indicates the order of the packet in a sequence of packets. For the duration of a particular call, some information contained in the headers of packets being sent from the network-side access device to the access terminal (or vice versa) may vary from packet to packet and some header information may remain constant. Header information that remains the same for different packets sent between a source and a destination is referred to as “static context” while header information that varies for different packets is referred to as “dynamic context.” Static context, for example, includes the destination address of packets sent from the network-side access device to the access terminal or vice versa so long as the destination address remains the same for the duration of the call. Dynamic context, for example, includes the packet numbers assigned to IP packets transmitted during a call.
Often associated with the IP layer is a header compression scheme to compress or decompress the header of a packet. As the bandwidth of communication channels is limited, it is often desirable to reduce the size of the header using one or more compression schemes before transmitting the packet over a communication channel. An example of a header compression scheme is robust header compression (ROHC). Compression schemes at the IP layer may compress or decompress IP packet headers based on the static context only or based on both the static context and the dynamic context. For the compression scheme to operate based on dynamic context, each of the network-side access device and the access terminal may require a minimum amount of historical or state information regarding the dynamic context of the transmitted and/or received IP packets. For example, if the most-significant digits of the packet numbers being transmitted are already known to both the client-side and network-side devices, the compression scheme may prune some of the most-significant digits from the packet numbers so that the packet numbers are represented by the remaining least-significant digits. In a further example, a network-side access device truncates a packet number of “1,000,101” to “101” (i.e., the three least-significant digits of the packet number) before the packet is sent to an access terminal (or vice versa). Using historical information that indicates the most significant digits of the packet numbers received, the decompressor at the receiving device restores the truncated packet number to the full packet number.
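The digit-truncation example above can be sketched as follows. This is an illustrative model only; the function names and the decimal-digit scheme follow the example in the text rather than any actual ROHC profile, which operates on least-significant bits.

```python
# Hypothetical sketch of packet-number truncation based on shared history.
# The names and the decimal-digit scheme follow the example in the text,
# not any particular ROHC profile.

def compress_packet_number(packet_number: int, digits_kept: int = 3) -> int:
    """Keep only the least-significant digits; the peer is assumed to
    already know the most-significant digits from earlier packets."""
    return packet_number % (10 ** digits_kept)

def restore_packet_number(truncated: int, last_full_number: int,
                          digits_kept: int = 3) -> int:
    """Rebuild the full packet number from the truncated value and the
    most recent full packet number held as dynamic context."""
    base = 10 ** digits_kept
    candidate = (last_full_number // base) * base + truncated
    # If the least-significant digits wrapped around, the high part advanced.
    if candidate < last_full_number:
        candidate += base
    return candidate

# Example from the text: 1,000,101 is sent as 101 and restored at the receiver.
sent = compress_packet_number(1_000_101)                      # -> 101
restored = restore_packet_number(sent, last_full_number=1_000_100)
assert sent == 101 and restored == 1_000_101
```

The decompressor's only requirement is that it hold a recent full packet number as part of its dynamic context, which is why that history must survive a handoff if the compression ratio is to be maintained.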
Header compression/decompression using ROHC may be implemented by having the transmitting device and the receiving device retain state information about the headers of previously transmitted packets. During a handoff, the state information may need to be transferred from the “source” (i.e., the device that is servicing the call before the handoff) to the “target” (i.e., the device that services the call after the handoff). ROHC exploits information that is redundant across packets so that it does not have to be sent in each packet. When dynamic context is used, ROHC tracks the historical relationship between packets and sends only the parts of the header that change. In addition to header compression, the IP layer may also perform other state-dependent or “stateful” processes on packets that are transmitted and received.
After processing the information at the IP layer, a network-access device and a client access device process the information at the RLP layer or its equivalent. The RLP layer (or its equivalent), which is located beneath the IP layer, is included in a number of digital communication standards, including the wireless standards promulgated by the 3GPP, the 3GPP2 and the IEEE. At the RLP layer, the transmitting device breaks up an IP packet into segments and assigns a sequence number to each of the segments. Note that the IP packet itself may have a compressed header resulting from the earlier processing at the IP layer. The transmitting device then transmits the segments over an error-prone or lossy link (e.g., a wireless link) to a receiving device. By analyzing the sequence numbers of the segments received, the receiving device determines which, if any, segments are missing. During the course of the transmission, the receiving device may send acknowledgement messages to the transmitting device to indicate that certain segments have been received successfully. The receiving device may send a negative acknowledgement to indicate that a segment has not been received and/or to request the source to resend the segment. After receiving the segments, the receiving device reorders the segments in consecutive order to assemble the IP packet, which may have a compressed header. The reordering of the segments is performed at the RLP layer. The receiving device then operates at the IP layer to decompress the header of the IP packet as needed. As described above, the decompression may be based on static context only or based on both the static context and the dynamic context.
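A minimal sketch of this segmentation and in-order reassembly appears below. The segment size, the data structures, and the printed NAK hint are illustrative assumptions; a real RLP implementation operates on a byte stream and manages NAK timers and retransmissions.

```python
from dataclasses import dataclass

# Illustrative sketch of RLP-style segmentation and reassembly.
# Segment size and data structures are assumptions for clarity only.

SEGMENT_SIZE = 128  # bytes per RLP segment (assumed)

@dataclass
class Segment:
    seq: int        # RLP sequence number
    payload: bytes

def segment_packet(ip_packet: bytes, first_seq: int) -> list[Segment]:
    """Break an (already header-compressed) IP packet into numbered segments."""
    return [Segment(first_seq + i, ip_packet[off:off + SEGMENT_SIZE])
            for i, off in enumerate(range(0, len(ip_packet), SEGMENT_SIZE))]

def reassemble(received: dict[int, bytes], first_seq: int, count: int) -> bytes | None:
    """Reassemble the packet once every segment is present; report the
    missing sequence numbers (candidates for a negative acknowledgement)."""
    missing = [s for s in range(first_seq, first_seq + count) if s not in received]
    if missing:
        print("NAK needed for segments:", missing)
        return None
    return b"".join(received[s] for s in range(first_seq, first_seq + count))

# Usage: the receiver collects segments (possibly out of order) keyed by sequence number.
segments = segment_packet(b"x" * 300, first_seq=10)
rx = {s.seq: s.payload for s in segments if s.seq != 11}   # segment 11 lost
assert reassemble(rx, 10, len(segments)) is None           # would trigger a NAK
rx[11] = segments[1].payload
assert reassemble(rx, 10, len(segments)) == b"x" * 300
```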
The header compression ratio achieved by a network-access device or a client-access device (e.g., an access terminal) depends on the amount of historical information accumulated and maintained by both the client-access device and the network-access device. Thus, to maintain the header compression ratio of a call during handoff, the source network-access device may potentially have to transfer large amounts of accumulated historical information to the target network-access device.
In some conventional handoff schemes, the source must transfer most or all of the historical information that has been accumulated to the target before the target can assume control of the call. In particular, the dynamic context maintained by the compressor of the source network-access device may need to be transferred in order for the compressor of the target network-access device to maintain the compression ratio. Transferring large amounts of historical information can introduce noticeable communication delays to a user at an access terminal. Furthermore, such delays may be unacceptable for particular applications (e.g., voice over IP). Other handoff schemes avoid transferring historical information altogether; however, these schemes achieve little to no header compression and therefore require more bandwidth. Here we describe a variety of handoff methods that selectively transfer historical information to reduce handoff latency and still achieve high header compression ratios.
Referring to
High Data Rate (1XEV-DO) is an emerging mobile wireless access technology that enables personal broadband Internet services to be accessed anywhere, anytime (see P. Bender, et al., “CDMA/1XEV-DO: A Bandwidth-Efficient High-Speed Wireless Data Service for Nomadic Users”, IEEE Communications Magazine, July 2000, and 3GPP2, “Draft Baseline Text for 1xEV-DO,” Aug. 21, 2000). Developed by Qualcomm, 1XEV-DO is an air interface optimized for Internet Protocol (IP) packet data services that can deliver a shared forward link transmission rate of up to 2.46 Mbit/s per sector using only (1X) 1.25 MHz of spectrum. Compatible with CDMA2000 radio access (TIA/EIA/IS-2001, “Interoperability Specification (IOS) for CDMA2000 Network Access Interfaces,” May 2000) and wireless IP network interfaces (TIA/EIA/TSB-115, “Wireless IP Architecture Based on IETF Protocols,” Jun. 6, 2000, and TIA/EIA/IS-835, “Wireless IP Network Standard,” 3rd Generation Partnership Project 2 (3GPP2), Version 1.0, Jul. 14, 2000), 1XEV-DO networks can be built entirely on IP technologies, all the way from the mobile Access Terminal (AT) to the global Internet, thus taking advantage of the scalability, redundancy and low-cost of IP networks.
Examples of communication protocols used by the RAN 10 include, the evolution data-only (1x EV-DO) protocol and other CDMA 2000 protocols. The 1x EV-DO protocol is an evolution of the current 1xRTT standard for high-speed data-only (DO) services and has been standardized by the Telecommunication Industry Association (TIA) as TIA/EIA/IS-856, “CDMA2000 High Rate Packet Data Air Interface Specification”, 3GPP2 C.S0024-0, Version 4.0, Oct. 25, 2002, which is incorporated herein by reference. Revision A to this specification has been published as TIA/EIA/IS-856, “CDMA2000 High Rate Packet Data Air Interface Specification”, 3GPP2 C.S0024-A, Version 2.0, June 2005, and is also incorporated herein by reference.
The system 10 and methods described below are not restricted to the EV-DO standard and may use other communications standards. Furthermore, the ATs may be used with any version of the EV-DO protocol, including the 1x EV-DO protocol, and the term “access terminal” is interchangeable with the term “mobile station.”
The radio nodes 20a-d of the RAN 10 include CDMA carrier elements (CCE's) 18a-18d (collectively referred to as CCE's 18), respectively, and the RNRs 12a-b include radio network elements (RNEs) 14a-14b (collectively referred to as RNE's 14), respectively. The CCE's 18 and RNE's 14 communicate with each other over the IP network 16. The IP network 16 may include multiple networks and supports various methods of IP transport service by which CCEs and RNEs communicate, including but not limited to frame relay, metro Ethernet, ATM, 802.16, and other wireless backhaul communication protocols. The CCEs 18 support forward and reverse link channels established between an AT and their respective radio node. The CCEs of a particular radio node also perform physical layer functions as well as media access control (MAC) layer functions of the airlink. Alternatively, the CCE and the RNE may be part of a single access point and can communicate directly without the need for an external IP network. In this case, the different access points communicate with one another and with the core network using the IP network.
The RNE's perform traditional radio access and link maintenance functions of both a radio network controller (RNC) and a packet data serving node (PDSN), among other functions. These traditional functions include controlling the transmitters and receivers of the radio nodes 20, initiating and maintaining client sessions and connections with ATs, routing data packets received from an external network (not shown) to which the RNE's are coupled, initiating handoffs, and sector selection. The RNEs 14 also transmit and receive data packets (e.g., voice over IP packets) to and from external devices (e.g., servers) connected to the external network.
The RNE's 14 can be viewed as application-layer routing engines for communication networks (e.g., CDMA networks), which serve all CCE's 18 in the IP RAN 10. By contrast to existing 3GPP2 CDMA architecture, in the IP RAN 10 there is no fixed association between the CCE's 18 and the RNE's 14. For example, a CCE (e.g., CCE 18a) may be simultaneously serving any number of RNE's 14 in the RAN 10.
In some implementations, the RAN 10 includes one or more sub-networks or “subnets”, e.g., 1xEV-DO subnets to which individual CCEs are assigned. For example, CCEs 18a may be assigned to a first 1xEV-DO subnet and CCEs 18b-18c may be assigned to a second, different 1xEV-DO subnet. In these implementations, a single RNE may serve CCEs that belong to a single 1xEV-DO subnet.
Radio nodes 20 are physical nodes that are often, but not always, located at or within a cell site. In some embodiments, a radio node (e.g., radio node 20a) is split into two separate physical nodes, one being a digital unit that includes the CCEs and one being an RF unit supporting RF communications. The RF unit and the digital unit may be connected via a fiber-optic cable, and the digital unit may be located at a central site away from the cell site. A radio node typically includes CCEs and may additionally include RNEs. For example, radio node 20b is shown to include both CCEs 18b and RNEs 22. Radio node 20b is an integrated radio node that can serve as an IP wireless access point.
RNRs 12 are physical nodes that are sometimes located at a central office or data center. RNEs (e.g., RNEs 14) may be located within an RNR. Multiple RNE's may be present in the same RNR, and multiple RNR's may be placed in the same location, thereby allowing the operator to grow RNE capacity at a single site in a scalable fashion. As described above, an RNE may also be located inside an RN.
Although all CCE's and RNE's of RAN 10 can communicate with each other over the IP network 16, those CCE's and/or RNE's that are physically located in close proximity to one another, e.g., in the same node or same site, may communicate with each other using a form of Ethernet. For example, CCEs 18b and RNEs 22 of integrated radio node 20b may communicate using an Ethernet protocol.
The RAN 10 is architecturally flexible and can be organized as a variety of architectures, including centralized, semi-distributed, and distributed architectures, and combinations thereof. In a centralized architecture, one or more RNR's 12, each with one or more RNE's 14, are clustered in one central site and this one RNR cluster serves all radio nodes 20 in the RAN 10. In a semi-distributed (also termed “semi-centralized”) architecture, the RNR's 12 are deployed in multiple geographic sites, possibly in central offices very close to the radio nodes 20, but not co-located with radio nodes 20. Finally, in a fully distributed architecture, the RNEs 14 of the RNRs 12 are within close proximity to the CCE's 18, either in the same site or in an integrated radio node (such as the radio node 20b). The handoff methods described below can reduce handoff latency and improve header compression ratios in all three types of architectures, but are especially beneficial for the fully distributed architecture.
Referring to
The compressor 52 operates at the IP layer to compress and decompress headers of packets. The compressor 52 may implement robust header compression (ROHC) or other variants of ROHC. The compressor 52 can compress or decompress IP packet headers based on the static context only or based on both the static context and the dynamic context depending on the availability of historical context information, as described above. Security protocols 54 may be used to encrypt, decrypt and/or authenticate the packet. The RLP routines 56 operate at the RLP layer, which, as described above, is included in a number of digital communication standards, including the standards promulgated by the 3GPP, the 3GPP2 and the IEEE. The RLP routines break up an IP packet, which may be compressed and/or encrypted, into segments and assign a sequence number to each of the segments. The MAC and physical layer routines 58 (situated below the RLP layer) handle the transmission and reception of the segments to and from the AT 24 over the respective forward and reverse links of the airlink 26. During the course of the transmission, the radio node 20a may receive acknowledgment messages that indicate the segments that have been received successfully by the AT 24. The radio node 20a may also receive negative acknowledgment messages that indicate the segments that have not been received by the AT 24. The RLP routines 56 analyze the acknowledgement and negative acknowledgement messages and respond appropriately. For example, in response to receiving a negative acknowledgement message associated with a segment that had been previously sent, the RLP routines 56 typically retransmit the segment.
The RLP routines 56 also handle the receipt and assembly of segments received by the radio node 20a from the AT 24. The RLP routines 56 send an acknowledgement message to the AT 24 to indicate that a segment has been received successfully. The RLP routines 56 also send a negative acknowledgement to indicate that a segment has not been received and/or to request the AT 24 to resend the segment. After segments have been received, the RLP routines 56 reorder the segments in consecutive order to assemble the IP packet. The security protocols 54 may decrypt and/or authenticate the IP packet, and the compressor 52 decompresses the packet header using historical information stored in the memory 48. Depending on the historical information available in the radio node 20a, the decompressor 52 may decompress the packet using only static context or using both static context and dynamic context.
The RLP queues 59 include a forward link (FL) RLP queue 59a for storing segments to be sent to the AT 24 over the forward link and a reverse link (RL) RLP queue 59b for storing segments received from the AT 24 over the reverse link. After the radio node 20a sends a segment stored in the FL RLP queue 59a and receives an acknowledgement from the AT 24 that the segment has been received, the processor 42 may delete the segment from the FL RLP queue 59a. After the radio node 20a assembles a packet from the segments stored in the RL RLP queue 59b, the processor 42 may flush those segments from the RL RLP queue 59b. The communication module 44, processor 42, and software 50 shown in
Referring to
Like the compressor 52 (shown in
The RLP routines 96 also handle the receipt and assembly of segments received by the AT 24 from the radio node 20a. The RLP routines 96 send an acknowledgement message to the radio node 20a to indicate that a segment has been received successfully. The RLP routines 96 also send a negative acknowledgement to indicate that a segment has not been received and/or to request the radio node 20a to resend the segment. After segments have been received from the radio node 20a, the RLP routines 96 reorder the segments in consecutive order to assemble the IP packet. The security protocols 94 may decrypt and/or authenticate the IP packet and the compressor 92 decompresses the packet header using historical information stored in the memory 88. Depending on the historical information available in the AT 24, the decompressor 92 may decompress the packet using only static context or using both static context and dynamic context.
The RLP queues 99 include a forward link (FL) RLP queue 99a for storing segments received from the radio node 20a over the forward link and a reverse link (RL) RLP queue 99b for storing segments that are to be sent to the radio node 20a over the reverse link. After the AT 24 sends a segment stored in the RL RLP queue 99b and receives an acknowledgement from the radio node 20a that the segment has been received, the processor 82 may delete the segment from the RL RLP queue 99b. After the AT 24 assembles a packet from the segments stored in the FL RLP queue 99a, the processor 82 may flush those segments from the FL RLP queue 99a. As described further below, during a handoff, the RLP queues 59 of the radio node 20a and the RLP queues 99 of the AT 24 may be controlled (e.g., populated and flushed) in a variety of ways depending on the particular handoff scheme that is implemented. The RLP queues 99 also store static and dynamic contexts for use by the compressor 92. For instance, the FL RLP queue 99a stores static and dynamic contexts related to communications on the forward link, and the RL RLP queue 99b stores static and dynamic contexts related to communications on the reverse link.
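The queue handling described for the radio node (RLP queues 59) and the AT (RLP queues 99) can be modeled with the following sketch; the class name, the dictionary-based queues, and the method names are hypothetical simplifications.

```python
# Simplified, hypothetical model of the FL/RL RLP queue handling described
# above: sent segments are kept until acknowledged, and received segments
# are flushed once the packet they belong to has been assembled.

class RlpQueues:
    def __init__(self):
        self.fl_queue = {}   # seq -> segment awaiting transmission/acknowledgement
        self.rl_queue = {}   # seq -> segment received, awaiting reassembly

    def enqueue_for_send(self, seq, segment):
        self.fl_queue[seq] = segment

    def on_ack(self, seq):
        # The peer acknowledged the segment, so it may be deleted.
        self.fl_queue.pop(seq, None)

    def on_receive(self, seq, segment):
        self.rl_queue[seq] = segment

    def on_packet_assembled(self, seqs):
        # Flush the segments that made up the assembled packet.
        for seq in seqs:
            self.rl_queue.pop(seq, None)
```

During a handoff, the different schemes described below differ mainly in when these queues are frozen, flushed, transferred, or rebuilt.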
An active handoff occurs when the AT 24 is active, as opposed to a dormant handoff that occurs when the AT 24 is dormant (e.g., without an active call). During a handoff, the call may be transferred from one radio node (e.g., radio node 20a) to another radio node (radio node 20b). The call may also be transferred between different RNEs. For example, the call may be transferred between different RNEs of the same RNR (e.g., from a first RNE to a second RNE contained within RNEs 14a) or between RNEs belonging to different RNRs (e.g., RNEs 14a of RNR 12a and RNEs 14b of RNR 12b). A handoff performed between radio nodes is referred to as an “inter-CCE handoff” and a handoff performed between RNEs is referred to as an “inter-RNE handoff.”
As each of the RNEs 14 has connectivity to all CCE's 18 of radio nodes 20 in the RAN 10, it is possible to implement handoffs between radio nodes 20 without requiring an inter-RNE handoff. However, inter-RNE handoffs may be implemented in various situations. For example, in RANs with multiple RNRs, an inter-CCE handoff may be followed by an inter-RNE handoff in order to reduce the use of resources of the network 16. During an inter-RNE handoff, a target RNR that incurs the least routing cost (e.g., requires the least amount of router hops) is selected to serve the AT. Routing cost, in general, scales with the number of router hops between the target RNR and the cell site where the serving CCE is located. Therefore, as the AT moves within the coverage area of the RAN 10, inter-RNE handoffs, although not required, could be performed to lower routing costs and to avoid delays caused by unnecessary router hops. Thus, inter-RNE handoffs may be performed more frequently in distributed architectures (where the backhaul increase would occur in the expensive access links) than in centralized or semi-distributed architectures where the backhaul increase occurs in high-speed inter-router links.
Referring to
The second and third stages of the handoff 112 and 114, shown in
As shown in
Referring to
Referring to
As we propose below, the RLP transfer process (142) and the stateful IP flow process (144) can be performed in a variety of different ways.
Referring to
The target AP handoff process 162 receives (164) the RLP sequence numbers X1 and X2 sent from the source AP and initializes its RLP to begin from sequence numbers X1 and X2. The process 162 divides (166) the IP packet into segments (referred to as FL segments) and assigns (168) a sequence number of X1 to the first FL segment. The target AP then transmits (170) the FL segments beginning at sequence number X1 to the AT. Over the reverse link, the target AP receives (172) segments from the AT (referred to as RL segments) and processes (174) the RL segments having sequence numbers of X2 or higher. Any FL segments having sequence numbers preceding X1 that have not been successfully sent to the AT over the forward link may be dropped by the source AP. Similarly, any RL segments having sequence numbers preceding X2 may be dropped by the target AP. Thus, a small amount of data loss is possible when control of the RLP is passed to the target AP using RLP transfer process 150. For example, data loss may occur if the source AP has received the first segments of a packet but not the remaining segments of the packet when the source AP sends the RLP sequence numbers X1 and X2 to the target AP. In this scenario, a first portion of the packet is received by the source AP and the remaining portion is received by the target AP, as the target AP does not receive the RL segments with sequence numbers preceding X2 and the source AP does not receive the RL segments with sequence numbers including and following X2.
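A sketch of the target-side behavior in this first RLP transfer process is shown below. The function names and the dictionary holding the RLP state are assumptions; the essential points are that the forward link restarts at X1 and that reverse-link segments preceding X2 are simply dropped, which is the source of the possible data loss.

```python
# Hypothetical sketch of the first RLP transfer process: the target AP starts
# its forward link at X1 and discards reverse-link segments that precede X2.

def target_init_after_handoff(x1: int, x2: int) -> dict:
    """Initialize the target AP RLP state from the sequence numbers
    handed over by the source AP."""
    return {"next_fl_seq": x1, "first_rl_seq": x2}

def send_fl_segments(state: dict, fl_segments: list[bytes]) -> list[tuple[int, bytes]]:
    """Stamp outgoing forward-link segments with consecutive numbers starting at X1."""
    stamped = [(state["next_fl_seq"] + i, seg) for i, seg in enumerate(fl_segments)]
    state["next_fl_seq"] += len(fl_segments)
    return stamped

def accept_rl_segment(state: dict, seq: int, segment: bytes) -> bool:
    """Process only reverse-link segments with sequence numbers X2 or higher;
    earlier segments are dropped (the source of the possible data loss)."""
    return seq >= state["first_rl_seq"]

# Usage: segments 41 and 42 arrive after a handoff with X2 = 42.
state = target_init_after_handoff(x1=100, x2=42)
assert not accept_rl_segment(state, 41, b"late")   # dropped
assert accept_rl_segment(state, 42, b"ok")         # processed
```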
Referring to
The target AP handoff process 192 receives (194) the RLP sequence numbers X1 and X2 sent from the source AP and initializes its RLP to begin from sequence numbers X1 and X2. The process 192 divides (196) the packet into segments (referred to as FL segments) and assigns (198) the sequence number of X1 to the first FL segment. The target AP then transmits (200) the FL segments beginning at sequence number X1 to the AT over the forward link. Over the reverse link, the target AP receives (202) segments from the AT (referred to as RL segments) and processes (204) the RL segments having sequence numbers of X2 or higher.
If there are any FL segments at the source AP that have sequence numbers preceding X1 and that have not already been sent to the AT, the source AP tunnels (186) these segments to the target AP for further processing (206). In the first RLP transfer process 150 of
Referring to
The source AP handoff process 222 delays transferring control of the RLP to the target AP until it determines (224) that the buffering is sufficient. In some embodiments, the source AP handoff process waits until it receives an acknowledgement message from the target AP. After the source AP handoff process 222 determines (224) that the buffering is sufficient, it freezes the RLP state of the source AP; otherwise, it continues to wait for the target AP to buffer more FL and RL segments. After freezing (226) the RLP state, the source AP handoff process 222 passes (228) to the target AP the RLP sequence numbers X1 and X2 for the forward and reverse links, respectively. The sequence number X1, as may be determined by the source AP, is the sequence number that the target AP should use to stamp the first RLP fragment that the target AP forms from the first full IP packet it processes for the forward link. Similarly, the sequence number X2, as may be determined by the source AP, is the sequence number of the first RLP segment that the target AP should start its reverse link RLP processing from.
The target AP handoff process 232 receives (234) the RLP sequence numbers X1 and X2 sent from the source AP and initializes its RLP to begin from sequence numbers X1 and X2. The target AP handoff process 232 divides (236) the packet into segments (referred to as FL segments) and assigns (238) a sequence number of X1 to the first FL segment. The target AP then transmits (240) the FL segments beginning at sequence number X1 to the AT. Over the reverse link, the target AP receives (242) segments from the AT (referred to as RL segments) and processes (244) the RL segments having sequence numbers of X2 or higher.
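The ordering constraint that distinguishes this third process, namely that the source does not freeze its RLP state or hand over X1 and X2 until the target confirms sufficient buffering, might be organized as in the sketch below; the class, the message format, and the callback are assumptions.

```python
# Hypothetical sketch of the handoff gating in the third RLP transfer process:
# control is transferred only after the target confirms sufficient buffering.

class SourceApHandoff:
    def __init__(self, send_to_target):
        self.send_to_target = send_to_target
        self.buffering_confirmed = False
        self.rlp_frozen = False

    def on_target_buffer_ack(self):
        # Acknowledgement from the target that enough FL and RL segments
        # have been buffered to survive the transfer without loss.
        self.buffering_confirmed = True

    def try_transfer_control(self, x1: int, x2: int) -> bool:
        if not self.buffering_confirmed:
            return False            # keep waiting; the target buffers more segments
        self.rlp_frozen = True      # no further RLP state changes at the source
        self.send_to_target({"X1": x1, "X2": x2})
        return True

# Usage: the transfer is refused until the buffer acknowledgement arrives.
messages = []
src = SourceApHandoff(messages.append)
assert not src.try_transfer_control(x1=500, x2=200)
src.on_target_buffer_ack()
assert src.try_transfer_control(x1=500, x2=200) and messages == [{"X1": 500, "X2": 200}]
```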
As the target AP has built up a buffer of RL and FL segments, it can handle retransmission requests and feedback, eliminating or reducing the chance of data loss. Compared to the first RLP transfer process 150 of
In contrast to the third RLP transfer process 220, the second RLP transfer process 180 of
Referring to
The IP packets cached by both the AT and the source AP are stored without header compression or security applied, though the AT and source AP typically apply security protocols and header compression before sending the packets to each other. When the target AP successfully receives a full FL IP packet sent from the source AP, the target AP handoff process 280 may optionally send (284) an acknowledgement message (“ACK”) to the source AP.
The source AP maintains the full FL IP packet in memory until it determines (268) that the complete packet has been successfully acknowledged by the AT. When the source AP handoff process 262 determines (268) that the complete packet has been acknowledged, the process 262 removes (270) the full FL IP packet from memory. Likewise, the AT maintains the full RL packet in memory until it determines (306) that the complete packet has successfully been acknowledged by the access network. When the AT handoff process 300 determines (306) that the complete packet has been acknowledged, the process 300 removes (308) the full RL packet from memory.
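The packet caching and removal described above (removal on a complete acknowledgement, or on a timeout as noted in the summary) might look like the following sketch; the class name, the timeout value, and the use of a dictionary keyed by packet sequence number are assumptions.

```python
import time

# Hypothetical sketch of the full-packet cache kept by the source AP (and,
# symmetrically, by the AT): an uncompressed copy of each packet is held
# until the complete packet has been acknowledged, or until a timeout.

class PacketCache:
    def __init__(self, timeout_s: float = 5.0):   # timeout value is an assumption
        self.timeout_s = timeout_s
        self.entries = {}   # packet sequence number -> (uncompressed packet, insert time)

    def store(self, pkt_seq: int, packet: bytes):
        self.entries[pkt_seq] = (packet, time.monotonic())

    def on_complete_ack(self, pkt_seq: int):
        # The whole packet has been acknowledged; the copy is no longer needed.
        self.entries.pop(pkt_seq, None)

    def expire(self):
        # Drop copies whose retention timer has run out.
        now = time.monotonic()
        self.entries = {seq: (pkt, t) for seq, (pkt, t) in self.entries.items()
                        if now - t < self.timeout_s}

    def unfinished(self):
        """Packets not yet completely acknowledged, in order; these are the
        packets tunneled to the target AP after the handoff."""
        return [self.entries[seq][0] for seq in sorted(self.entries)]
```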
The source AP handoff process 262 passes (272) to the target AP, the packet sequence numbers X1 and X2 for the forward and reverse links respectively. The sequence number X1, as may be determined by the source AP, is the sequence number that the target AP should use to stamp the first RLP fragment that the target AP forms from the first full IP packet it processes for the forward link. Similarly, the sequence number X2, as may be determined by the source AP, is the sequence number of the first RLP segment that the target AP should start its reverse link RLP processing from.
The target AP handoff process 280 receives (286) the packet sequence numbers X1 and X2 sent from the source AP and initializes its RLP to begin processing FL and RL packets with sequence numbers X1 and X2. After the handoff, the source AP tunnels the full IP packets to the target AP, beginning from the earliest unfinished or not completely acknowledged cached packet. The target AP handoff process 280 receives the tunneled packets, divides (288) the full packets into FL segments, stamps them with sequence numbers starting from X1, and transmits (290) the FL segments to the AT.
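The target-side restart from the tunneled full packets can be sketched as follows; the segment size and the helper name are assumptions, and the point is that the very first segment formed after the handoff is stamped with X1.

```python
# Hypothetical sketch for the fourth RLP transfer process: the target AP
# restarts forward-link RLP from the tunneled full IP packets, numbering
# the very first segment it forms with X1.

SEGMENT_SIZE = 128  # bytes per RLP segment (assumed)

def restart_forward_link(tunneled_packets: list[bytes], x1: int) -> list[tuple[int, bytes]]:
    """Segment the cached full packets in order and stamp the segments
    with consecutive sequence numbers starting at X1."""
    stamped, seq = [], x1
    for packet in tunneled_packets:
        for off in range(0, len(packet), SEGMENT_SIZE):
            stamped.append((seq, packet[off:off + SEGMENT_SIZE]))
            seq += 1
    return stamped

# Usage: two cached packets restart the forward link at sequence number 700.
segments = restart_forward_link([b"a" * 200, b"b" * 100], x1=700)
assert [s for s, _ in segments] == [700, 701, 702]
```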
Over the reverse link, once the AT determines that the handoff has occurred, it discards any of the unfinished RL and FL segments and clears up its reverse and forward link RLP queues. It then re-starts its reverse link RLP processing by processing the unfinished, cached full IP packets, segmenting them and transmitting them over the reverse link to the target AP. The target AP receives segments from the AT (referred to as RL segments) and processes (296) the RL segments belonging to the RL packet having sequence number X2.
Since after the handoff both the target AP and the AT re-start the RLP processing and transmission from full IP packets, the target AP does not need to keep track of which partial packets were sent or received before the handoff, since it maintains and stores whole packets. Furthermore, there is no data loss, because the target AP has copies of the full RL and FL packets that were sent just before the handoff occurred. Unlike the third RLP transfer method 180 of
The following example illustrates an occurrence of redundant data transmission. The source AP prepares a FL packet to send to the AT, and the FL packet includes a total of four segments. Before the handoff, the source AP sends only the first and second segments of the FL packet to the AT; the third and fourth segments have not been sent. The target AP has also received a copy of the FL packet (see step 282) that was sent just before the handoff as well as the sequence number of the FL packet (see step 286). After the handoff occurs, the target AP again starts from the first and second segments of the FL packet; therefore, the AT receives the first and second segments from the target AP even though it had already received the same first and second segments from the source AP. The target AP also sends the third and fourth segments of the FL packet to the AT, the very segments that the source AP was not able to send before the handoff. Although there is some redundant transmission of the first and second segments, the third and fourth segments are not lost during the handoff.
After the handoff, along with the transfer of control of the RLP from the source AP to the target AP, a stateful IP flow process is implemented between the source AP, the target AP, and the AT. As described above with respect to
In many cases, before a handoff, the source AP and the AT compress and decompress the headers of the packets they send to each other using both static and dynamic context. During a call, the source AP and AT cache both reverse-link and forward-link RLP segments of unfinished packets (i.e., packets that have not been completely transmitted and received) until transmission of the packets is complete (i.e., the entire packet has been transmitted and received). As described above with respect to
Referring to
Upon a handoff, once the RLP transfer is complete, the source AP handoff process 322 flushes the RLP queues of the source AP, including all partial packets remaining in the queues, and the dynamic context from the memory of the source AP. After the RLP transfer, the source AP is no longer involved in the handoff; therefore, the memory that had been used to store dynamic context is cleared as well as the RLP queues.
Upon sensing the initiation of a handoff, the AT handoff process 350 flushes the RLP queues of the AT, including all partial packets remaining in the queues, and the dynamic context from the memory of the AT. Both the AT handoff process 350 and the target AP handoff process 330 start their compressor state machines with either static context only or no context. Correspondingly, the decompressors at the target AP and the AT have either the static context or no context, respectively. At this point, the compressors at the target AP and the AT would perform ROHC using static context only (steps 332 and 354, respectively) or no context. The source AP may transfer the static context to the target AP's decompressor in advance, in preparation for the handoff. The AT's decompressor may retain the static context from before the handoff. The target AP processes (332) the packets with ROHC (static context only, or no context), including any packets tunneled to the target AP from the source AP during the RLP transfer process (e.g., RLP transfer process 260, shown in
Both the AT and the target AP continue to process packets with ROHC using static context only until they receive (338 and 356) the IR-DYN packets from the other end and send (340 and 358) an acknowledgement (ACK) indicating that each of their respective decompressors has been initialized with the dynamic context. After the ACK has been sent from the AT and received (340) by the target AP and vice versa (358), both the AT handoff process 350 and target AP handoff process 330 perform ROHC using both static and dynamic contexts (steps 360 and 342, respectively).
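The static-only-to-full-context transition described in these steps can be pictured as a small per-direction state machine, sketched below. The state names, the message tuple, and the method names are illustrative, not the ROHC wire format.

```python
# Hypothetical sketch of the per-direction compression state used in the
# first stateful IP flow: compress with static context only until the peer
# acknowledges the IR-DYN initialization, then use static + dynamic context.

class HandoffCompressionState:
    STATIC_ONLY = "static-only"
    FULL_CONTEXT = "static+dynamic"

    def __init__(self):
        self.mode = self.STATIC_ONLY
        self.peer_ack_received = False

    def packets_to_send(self, dynamic_context):
        # While waiting for the peer's ACK, keep sending IR-DYN packets that
        # carry the dynamic context alongside the statically compressed data.
        return [] if self.peer_ack_received else [("IR-DYN", dynamic_context)]

    def on_peer_ack(self):
        # The peer's decompressor confirmed it holds the dynamic context.
        self.peer_ack_received = True
        self.mode = self.FULL_CONTEXT

# Usage: the target AP (or the AT) stays in static-only mode until the ACK arrives.
state = HandoffCompressionState()
assert state.packets_to_send({"packet_number": 1_000_101}) == \
       [("IR-DYN", {"packet_number": 1_000_101})]
state.on_peer_ack()
assert state.mode == HandoffCompressionState.FULL_CONTEXT and state.packets_to_send({}) == []
```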
Referring to
The AT handoff process 351 of the second stateful IP flow process 321 is the same as the AT handoff process 350 of the first stateful IP flow process 320 of
The first and second stateful IP flows 320 and 321 can be used with the fourth RLP transfer method 260 of
Referring to
The AT handoff process 400 begins by flushing the reverse link dynamic context only; the forward link dynamic context is still intact. At this point, the AT receives FL packets that were processed by both the source AP and the target AP using different context information. As the AT's FL decompressor has the dynamic context information, it is able to process the packets. The AT may need to handle out-of-order FL packets coming from the source AP and the target AP. For this, the AT may use a short sequence number to reorder the packets, or modify the interpretation interval offset, an explanation of which can be found in the article by G. Pelletier, L-E. Jonsson, and K. Sandlund, “Robust Header Compression (ROHC): ROHC over Channels That Can Reorder Packets,” IETF RFC 4224, January 2006.
Alternatively, the AT may use separate queues for FL packets received from the source AP and the target AP to distinguish packets coming out of order and process them with the appropriate context information to decompress the headers. The target AP handoff process 380 then initializes (383) the decompressor of the AT with the dynamic context by sending IR-DYN packets to the AT. The IR-DYN packets include the dynamic context. The AT receives the IR-DYN packets and uses them to initialize (401) its decompressor. Once the target AP compressor receives (384) an acknowledgement indicating that the AT's FL decompressor has the context information, it can then begin to compress (385) the FL packet headers fully using both the static and the dynamic context. The AT also processes (402) the FL packets using both the static and dynamic context.
For the reverse link, the compressor of the AT handoff process 400 loses (403) its dynamic context information at the time of the handoff to the target AP. The target AP decompressor starts off with only the static context information. The target AP may get the static context from the source AP as part of the handoff preparation phase, just before the actual handoff. Right after the handoff, the AT's compressor performs header compression using only the static information (404). The AT may still have unfinished fragments from before the handoff that are awaiting transmission or retransmission.
The target AP RL decompressor starts processing (386) packets following the sequence number X2, as indicated by the source AP, using the available static information. Any packets received by the target AP preceding the sequence number X2 are tunneled (387) to the source AP, which processes (376) them. These would include the unfinished fragments at the AT awaiting transmission from before the handoff. Because these packets were compressed before the handoff using the full context, the source AP, which also has the full context from before the handoff, is able to process them. The first few packets transmitted by the AT to the target AP include information regarding the dynamic context of the packets. From the first few packets it receives from the AT, the target AP handoff process 380 determines the dynamic context. The AT compressor then initializes the decompressor at the target AP with the dynamic context by sending IR-DYN packets. The IR-DYN packets include the dynamic context. Both the AT and the target AP continue to process packets with ROHC using static context only until each of them receives IR-DYN packets from one another (steps 408 and 338). The AT and the target AP initialize their decompressors with the dynamic context included in the IR-DYN packets, and from then on, both the AT handoff process 400 and target AP handoff process 380 perform ROHC using both static and dynamic contexts (steps 410 and 390, respectively).
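The reverse-link split in this flow, in which the target processes segments numbered X2 and above itself and tunnels earlier ones back to the source, might look like the following sketch; the function signature and the callbacks are assumptions.

```python
# Hypothetical sketch of the reverse-link handling in the third stateful IP
# flow: the target AP processes segments numbered X2 and above itself and
# tunnels anything earlier back to the source AP, which kept the full
# pre-handoff context and can still decompress those packets.

def route_rl_segment(seq: int, segment: bytes, x2: int,
                     process_locally, tunnel_to_source):
    if seq >= x2:
        process_locally(segment)       # static-only context at the target, at first
    else:
        tunnel_to_source(segment)      # pre-handoff packets, full context at the source

# Usage with X2 = 42: segment 41 is tunneled, segment 42 is processed locally.
local, tunneled = [], []
route_rl_segment(41, b"old", 42, local.append, tunneled.append)
route_rl_segment(42, b"new", 42, local.append, tunneled.append)
assert tunneled == [b"old"] and local == [b"new"]
```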
Referring to
The AT handoff process 412 of the fourth stateful IP flow process 371 is the same as the AT handoff process 400 of the third stateful IP flow process 370 of
To perform the AT handoff process 400 of the third stateful IP flow 370 (
The third and fourth stateful IP flows 370 and 371 can be used with the first, second, and third RLP transfer methods 150 (
Referring to
The target AP handoff process 430 receives (431) the RL dynamic context from the source AP and initializes (432) its ROHC decompressor with the RL dynamic context. The target AP handoff process 430 processes (434) any received RL packets, which may have full header compression based on the already established header compression context (i.e., the static and the dynamic context received from the source AP). On the forward link, the target AP handoff process 430 selects (436) the FL packets with sequence numbers following X1, compresses (438) the FL packets that have sequence numbers following X1 using static context only, and sends the FL packets to the AT.
At this point, the AT receives FL packets that were processed by both the source AP and the target AP using different context information. That is, the AT processes (452) FL packets received from the target AP using static context only and processes (454) FL packets received from the source AP using both static and dynamic context. The AT may need to handle out-of-order FL packets coming from the source AP and the target AP. For this, the AT may use a short sequence number to reorder the packets, or modify the interpretation interval offset as described above. Alternatively, the AT may use separate queues for FL packets received from the source AP and the target AP to distinguish packets coming out of order and process them with the appropriate context information to decompress the headers.
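The separate-queue alternative might be organized as in the following sketch; the class name, the queue structure, and the pluggable decompress callable are assumptions.

```python
from collections import deque

# Hypothetical sketch of the alternative described above: the AT keeps FL
# packets from the source AP and the target AP in separate queues and
# decompresses each queue with the context that matches its sender.

class DualQueueDecompressor:
    def __init__(self, source_context, target_context):
        self.queues = {"source": deque(), "target": deque()}
        self.contexts = {"source": source_context, "target": target_context}

    def enqueue(self, sender: str, packet: bytes):
        # 'sender' identifies which AP the forward-link packet came from.
        self.queues[sender].append(packet)

    def decompress_all(self, decompress):
        """Drain both queues, applying the per-sender context to each packet."""
        out = []
        for sender, queue in self.queues.items():
            while queue:
                out.append(decompress(queue.popleft(), self.contexts[sender]))
        return out

# Usage: packets from each AP are decompressed with that AP's context.
dq = DualQueueDecompressor(source_context="static+dynamic", target_context="static-only")
dq.enqueue("source", b"pkt-from-source")
dq.enqueue("target", b"pkt-from-target")
print(dq.decompress_all(lambda pkt, ctx: (ctx, pkt)))
```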
The first few FL packets transmitted (440) from the target AP to the AT include information regarding the dynamic context of the packets. From the first few packets, the decompressor on the AT for the forward link determines (456) the dynamic context of the FL packets. For example, the target AP handoff process 430 sends IR-DYN packets, which contain the dynamic context, to the AT to initialize the decompressor of the AT with the dynamic context. After the IR-DYN packets have been received, the target AP and the AT handoff process 450 can process FL packets using both static and dynamic contexts. For the remainder of their connection, the AT handoff process 450 and target AP handoff process 430 perform ROHC using both static and dynamic contexts (steps 458 and 442, respectively).
The fifth and sixth stateful IP flows 420 and 421 can be used with the first, second, and third RLP transfer methods 150 (
The techniques described herein can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The techniques can be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. For example, the CCE's and RNE's can be viewed as software entities, hardware entities, and/or entities that include a combination of hardware and software.
Method steps of the techniques described herein can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, the techniques described herein can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer (e.g., interact with a user interface element, for example, by clicking a button on such a pointing device). Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
The techniques described herein can be implemented in a distributed computing system that includes a back-end component, e.g., as a data server, and/or a middleware component, e.g., an application server, and/or a front-end component, e.g., a client computer having a graphical user interface and/or a Web browser through which a user can interact with an implementation of the invention, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet, and include both wired and wireless networks.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact over a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The foregoing are examples for illustration only and are not intended to limit the alternatives in any way. The techniques described herein can be performed in a different order and still achieve desirable results. Other embodiments are within the scope of the following claims.