In recent years demand for mobile wireless video has steadily increased, and its growth is predicted to increase with the new infrastructure of the LTE/LTE-Advanced network that offers significantly higher user data rates. Although present-day wireless networks have increased capacity, and smart phones are now capable of generating and displaying video, actually transporting video across these advanced wireless communication networks has become challenging.
Embodiments described herein include methods for using wireless packet loss data in the encoding of video data. In one embodiment, the method comprises receiving wireless packet loss data at a wireless transmit receive unit (WTRU); generating video packet loss data from the wireless packet loss data; and providing the video packet loss data to a video encoder application running on the WTRU for use in encoding video data. The video encoder may perform an error propagation reduction process in response to the video packet loss data. The error propagation reduction process may include one or more of generating an Instantaneous Decode Refresh (IDR) frame or generating an Intra Refresh (I) frame. Some embodiments may be characterized as using a reference picture selection (RPS) method or a reference set of pictures selection (RSPS) method.
In some embodiments, the wireless packet loss data is provided by a base station to the wireless transmit receive unit (WTRU). The wireless packet loss data may be generated at the Radio Link Control (RLC) protocol layer, which may be operating in acknowledged mode or unacknowledged mode. The wireless packet loss data may include, or be generated from, a NACK message. The NACK message may be synchronous with uplink transmissions. In some embodiments, the video packet loss data is generated from a mapping using a packet data convergence protocol (PDCP) sequence number and/or a real time protocol (RTP) sequence number and/or a radio link control (RLC) sequence number. The video packet loss data may be generated using a mapping from an RLC packet to a PDCP sequence number to an RTP sequence number. The video packet identifier may be a network abstraction layer (NAL) unit. Various other embodiments include apparatuses such as a WTRU or base station configured to implement the methods described herein.
A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings wherein:
Disclosed herein are methods and systems for early detection and concealment of errors caused by lost packets in wireless video telephony and video streaming applications. In some embodiments, the video information may be carried in RTP packets or packets of any other standardized or proprietary transport protocol that does not guarantee delivery of packets. Early packet-loss detection mechanisms include analysis of MAC- and/or RLC-layer retransmission mechanisms (including HARQ) to identify situations when data packets were not successfully transmitted over the local wireless link. The mechanisms for prevention of error propagation include adaptive encoding or transcoding of video information, where subsequent video frames are encoded without referencing any of the prior frames that have been affected by the lost packets. The use of referenced prior frames in encoding or transcoding operations includes using them for prediction and predictive coding. The proposed packet-loss detection and encoding or transcoding logic may reside in user equipment (mobile terminal devices), base-stations, or backhaul network servers or gateways. Additional system-level optimization techniques, such as assignment of different QoS levels to local and remote links, also are described.
An example of a mobile video telephony and video streaming system operating with RTP transport and RTCP-type feedback is shown in
When a packet is lost (e.g., at the local link 15, in the Internet 28, or at the remote wireless link 26 through remote network 23), this loss is eventually noticed by user B's application and communicated back to user A by means of an RTCP receiver report (RR). In practice, such error notification reports usually are sent periodically, but infrequently, e.g., at about 600 ms-1 s intervals. When an error notification reaches the sender (the application of user A), it can be used to direct the video encoder to insert an Intra (or IDR) frame, or to use other codec-level means to stop error propagation at the decoder. However, the longer the delay between the packet loss and the receiver report, the more frames of the video sequence will be affected by the error. In practice, video decoders usually employ so-called error concealment (EC) techniques, but even with state-of-the-art concealment, a one-second delay before refresh can cause significant and visible artifacts (so-called "ghosting").
In embodiments described herein, error propagation caused by loss of packets is reduced. The embodiments include methods and systems to provide early packet loss detection and notification functions at the local link 15, and/or use advanced video codec tools, such as Reference Picture Selection (RPS) or Reference Set of Pictures Selection (RSPS) to stop error propagation. The signaling associated with such techniques used at the local link is generically represented by line 16 in
In some embodiments, techniques for enhancing performance of video conferencing systems, such as introducing different QoS modes at the local and remote links, and using transcoder and packet loss detection logic at the remote eNB are used. RTSP/RTP-type video streaming applications are described, as example uses of some of the embodiments described herein.
Early packet loss detection techniques and identification of corresponding video packet loss data will now be described. For convenience of presentation, attention is focused on embodiments using RTP transport and an LTE stack, but alternative embodiments include other transports and link-layer stacks as well.
An example of a stack of layers and protocols involved in transmission of video data is shown in
In this embodiment, it may also be assumed that:
1. the PHY/MAC supports multiple radio bearers or logical channels;
2. each class of video traffic has its own radio bearer or logical channel; and
3. each video logical channel can support multiple video applications.
In LTE, for instance, it is the RLC sub-layer 212 that may become aware of lost packets based on its exchange with the MAC layer 214. The frame or frames contained within the lost RLC layer packets should be determined in order to apply the aforementioned error propagation reduction techniques. To do so, the NAL layer 202 or application layer 222 packet or packets contained within those lost RLC layer 212 packets 213 must be identified. Hence, the contents of the packets in the various layers above and including the RLC layer can be mapped to each other.
The methods of detecting lost wireless packets and corresponding video packet loss may accommodate situations where PDCP applies encryption to secure data from higher layers, as shown in
Table 1 summarizes operations that may be performed at different layers/sublayers to obtain information about transmission errors, and which NAL units 203 they affect.
The mapping of packets at various layers/sublayers is summarized in Table 2.
Described herein are additional details of actions that are performed at each layer/sublayer.
At the RLC sublayer, packet loss detection and mapping to PDCP sequence number (SN) may be performed. In LTE there are three modes of operation at the RLC layer (as defined in 3GPP TS 36.322), as set forth below:
1. Transparent mode (TM):
2. Unacknowledged mode (UM):
3. Acknowledged mode (AM):
Basic operations/data flow in RLC AM model are shown in
Retransmission protocols, such as ARQ or HARQ may be counterproductive with video transmissions if used in conjunction with the feedback and error correction techniques in accordance with the present invention. Thus, in one embodiment, wireless packet loss indication may be obtained without invoking ARQ retransmission. There are at least the following approaches for detecting packet loss at the RLC sublayer:
Once an RLC packet has failed transmission, the corresponding PDCP packet(s) may be identified. Segmentation is possible from PDCP to RLC, such that the mapping is not necessarily one to one. Because the RLC SDUs are PDCP PDUs, one may identify the PDCP SN (the sequence number in the compressed header) of PDCP packets that are lost in the transmission from the RLC ACK/NACKs. Note that the RLC sublayer is not able to identify the RTP sequence number because the PDCP ciphers its data SDUs.
At the PDCP sublayer, lost RTP/UDP/IP packets may be identified. Basic operations and data-flow at the PDCP sublayer are shown in
At the Application layer 202 or 222, lost NAL units may be identified. After identifying a failed RTP packet, the application layer is tasked to identify the NAL packet that failed transmission. If the NAL→RTP mapping is one-to-one, then identifying the NAL packet is straightforward. If the mapping is not one-to-one, then again a method such as table lookup may be used.
Details of an example table-lookup technique are described herein.
This table may be built and maintained by the RLC segmentor. It records which SDUs are mapped to which PDUs and vice versa. For example, if a PDU, j, is deemed to have failed transmission, then a table lookup will identify SDUs i−1, i, and i+1 to be the ones that have failed transmission.
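The table lookup described above can be sketched as follows. This is a minimal illustrative sketch, not a 3GPP-defined data structure; the class and method names are assumptions.

```python
# Hypothetical sketch of the RLC segmentation table described above.
# The class name, method names, and indices are illustrative only.

class SegmentationTable:
    """Records which SDUs (PDCP PDUs) were segmented into which RLC PDUs."""

    def __init__(self):
        # PDU sequence number -> list of SDU sequence numbers it carries
        self._pdu_to_sdus = {}

    def record(self, pdu_sn, sdu_sns):
        """Called by the segmentor when a PDU is built from SDU segments."""
        self._pdu_to_sdus[pdu_sn] = list(sdu_sns)

    def lost_sdus(self, pdu_sn):
        """Given a PDU that failed transmission, return the affected SDUs."""
        return self._pdu_to_sdus.get(pdu_sn, [])

# Example: PDU j carries the tail of SDU i-1, all of SDU i, and the head of SDU i+1.
table = SegmentationTable()
i, j = 10, 7
table.record(j, [i - 1, i, i + 1])
assert table.lost_sdus(j) == [9, 10, 11]
```

A similar table could be kept wherever segmentation occurs, e.g., at the application layer where NAL packets map to RTP packets.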
Segmentation is known to exist at the RLC sublayer and the application layer where NAL packets are mapped to RTP packets. Similar methods can be used in both layers. An overall diagram of one packet loss detection procedure is shown in
If, at 705, it is determined that a particular packet has been lost, then flow proceeds to 711. At 711, the lost RLC layer packet is mapped to the corresponding PDCP layer SN. Then the PDCP SN is mapped to the corresponding RTP layer SN, IP address, and port numbers (713). The IP address identifies the user to which the video data is being sent, and the port numbers identify the application to which the video data is being sent. The RTP SN is then mapped to the corresponding NAL packet (715). The NAL packet identifies the frame or frames that were in the RLC packet that was lost. Flow then returns to 703.
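The mapping chain at 711-715 can be sketched as below. The table contents and field names are hypothetical; in practice each mapping would be maintained by the corresponding sublayer as described earlier.

```python
# Illustrative sketch of the lost-packet mapping chain (711 -> 713 -> 715).
# Table contents and field names are assumptions for this example.

def map_lost_rlc_packet(rlc_sn, rlc_to_pdcp, pdcp_to_rtp, rtp_to_nal):
    """Map a lost RLC packet to the NAL unit(s), i.e., video frames, it carried."""
    lost_nals = []
    for pdcp_sn in rlc_to_pdcp[rlc_sn]:               # 711: RLC packet -> PDCP SN(s)
        flow = pdcp_to_rtp[pdcp_sn]                   # 713: PDCP SN -> RTP SN, IP, port
        lost_nals.extend(rtp_to_nal[flow["rtp_sn"]])  # 715: RTP SN -> NAL unit(s)
    return lost_nals

# Hypothetical tables: RLC packet 42 carried PDCP PDU 100, which carried RTP
# packet 7 destined for a given user (IP) and application (port).
rlc_to_pdcp = {42: [100]}
pdcp_to_rtp = {100: {"rtp_sn": 7, "ip": "10.0.0.2", "port": 5004}}
rtp_to_nal = {7: ["NAL-31"]}
assert map_lost_rlc_packet(42, rlc_to_pdcp, pdcp_to_rtp, rtp_to_nal) == ["NAL-31"]
```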
With such early knowledge of the loss of video data and knowledge of the particular frame(s) that were lost, the video encoder in the UE can then implement measures to reduce error propagation at the decoder and/or recover video data, including any of the techniques discussed in detail herein.
Standard prediction structures will now be described. Video encoding structures in real time applications may include an instantaneous decode refresh (IDR) frame 801 followed by backward prediction frames (P-frames). This structure is illustrated in
The predictive nature of encoded video makes it susceptible to loss propagation in case of channel errors. Thus, if during transmission one of the P-frames, such as P-frame 803x, is lost, successive P-frames, such as P-frames 803y, are corrupted, as illustrated in
In video coding, there are two known methods for prevention of error propagation based on feedback: Intra Refresh (IR) and Reference Picture Selection (RPS). Neither method adds latency to the encoder, and both produce standard-compliant bitstreams. These methods may be used in association with many existing video codecs, including H.263 and H.264. In a further embodiment, a Reference Set of Pictures Selection (RSPS) technique is described that may be specific to H.264 and future codecs using multiple reference pictures.
In a first embodiment illustrated in
In a second embodiment illustrated in
RPS uses a predictive P-frame instead of an Intra (IDR) frame to stop error propagation. In most cases, P-frames use many fewer bits than I-frames, which leads to capacity savings.
In further embodiments, aspects of the IR and RPS approaches may be combined. For instance, the encoder may encode the next frame in both IDR and P-predicted modes, and then decide which one to send over the channel.
In a further embodiment illustrated in
Due to the flexibility in prediction, RSPS may yield better prediction and thereby better rate-distortion performance than the IR and RPS methods.
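The core RSPS idea can be sketched as a simple reference-buffer filter: upon loss feedback, only reference pictures known to predate the first affected frame remain eligible for prediction. This is a hedged sketch under the assumption that frames are identified by increasing display/decode indices; an actual H.264 encoder would express this via reference list reordering or MMCO commands.

```python
# Hedged sketch of the RSPS selection rule: on loss feedback, keep only
# reference pictures that predate the first frame affected by the loss.
# Frame identifiers here are plain integers (an assumption for illustration).

def select_reference_set(reference_buffer, first_lost_frame):
    """Return the subset of buffered reference frames that are safe to predict from."""
    return [f for f in reference_buffer if f < first_lost_frame]

# Frames 5-9 are buffered; frame 7 was lost, so frames 8 and 9 may be
# corrupted by error propagation and are excluded from the reference set.
assert select_reference_set([5, 6, 7, 8, 9], 7) == [5, 6]
```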
In some encoding techniques, each frame may be further spatially partitioned into a number of regions called slices. Some embodiments of the RSPS technique may therefore operate on a slice level. In other words, it may be only subsets of frames that are removed from prediction. Such subsets are identified by analyzing information about packets/slices that were lost, and the chain of subsequent spatially aligned slices that would be affected by loss propagation.
The effectiveness of the above-described embodiments was tested using a simulated channel with a 1e-2 packet error rate (typical for conversational/VoIP services in LTE), and using notification and IR, RPS, and RSPS mechanisms in the encoder. We used an H.264 standard-compliant encoder, and implemented the RPS and RSPS methods using memory management control operation (MMCO) instructions. The standard video test sequence "Students" (CIF resolution, 30 fps) was looped in forward-backward fashion to produce an input video stream for the experiments. The results are shown in
Based on these experiments, the following observations are supported:
Some of the embodiments described herein use a combination of two techniques: (i) detect packet loss as early as possible, and if it happens at the local link, signal it back to the application/codec immediately; and (ii) prevent propagation of errors caused by lost packets by using RPS or RSPS techniques. The gain of using the combined technique as compared to conventional approaches, such as RTCP feedback combined with Intra refresh, is analyzed in
It can be observed that, if the RTCP feedback delay is increased from 30 ms to 420 ms, the gain for this embodiment drops by about 0.6-0.7 dB. When the RTCP feedback delay is further increased to 1 second, the PSNR drop widens to about 1.0-1.2 dB as compared to a delay of 30 ms.
As can be seen by the above-described results, the methods and systems described herein may deliver appreciable improvements in visual quality in practical scenarios. In average PSNR metric, improvements may be in the 0.5-1 dB range. Perceptually, the improvements will be apparent because early feedback will prevent artifacts such as “frozen” pictures or progressively increasing “ghosting” caused by the use of error concealment logic in the decoder.
Numerous embodiments are described for providing information about packet loss at the local link to the encoder, and may include an interface for communication of information about packet loss to the encoder. In one embodiment, the encoder, before encoding each frame, may call a function that returns the following information: (1) an indicator that identifies whether any of the previously transmitted NAL units were not sent successfully; and (2) if some NAL units were not sent successfully, the indices of those NAL units that have been lost recently. The encoder may then use RPS or RSPS to cause prediction to be made from a frame or frames that were sent before the first frame affected by the packet loss.
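One hypothetical shape for such an interface is sketched below. The function name, return fields, and monitor object are assumptions for illustration; they are not an OpenMAX or 3GPP-defined API.

```python
# Hypothetical sketch of the per-frame feedback query described above.
# Names and fields are assumptions, not a standardized interface.

def get_transmission_status(link_monitor):
    """Called by the encoder before encoding each frame."""
    lost = sorted(link_monitor.recently_lost_nal_units())
    return {
        "all_sent_ok": len(lost) == 0,  # (1) were prior NAL units delivered?
        "lost_nal_indices": lost,       # (2) indices of recently lost NAL units
    }

# A stand-in for the lower-layer loss detector, for demonstration only.
class FakeMonitor:
    def recently_lost_nal_units(self):
        return {12, 11}

status = get_transmission_status(FakeMonitor())
assert status["all_sent_ok"] is False
assert status["lost_nal_indices"] == [11, 12]
```

On a report of lost NAL units, the encoder would then restrict prediction (via RPS or RSPS) to frames sent before the first lost unit.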
In one embodiment, such an interface may be provided as part of Khronos' OpenMAX DL framework. In alternative embodiments, the set of information exchanges between the RLC and the application layer are standardized as normative extensions in 3GPP/LTE.
In a further embodiment, a custom message in RTCP (for example an APP-type message) is used to signal local link packet-loss notification to the encoder. This communication process may be encapsulated in the framework of existing IETF protocols.
Various applications in mobile video telephony are depicted in
In some embodiments, feedback and loss propagation prevention methods described herein may be applied to the “local link”. In some embodiments, these may be combined with various methods to reduce the effect of errors on “remote links”. These methods may include one or more of (i) setting different QoS levels to the remote and local wireless links; and (ii) using transcoding of video coupled with the early packet loss detection and RPS or RSPS technique at the remote base station.
Different QoS levels may be determined and set through a negotiation as described in U.S. Provisional Application Ser. No. 61/600,568, entitled “Video QOE Scheduling”, filed Feb. 17, 2012, the contents of which are incorporated herein by reference.
Using a higher QoS at the remote link will tend to cause most transmission errors to occur at the local/weaker link, thereby minimizing lost packets at the more distant, remote link, where the delay between transmission of a lost packet and the feedback of such error information to the encoder may be too long to permit error propagation reduction techniques to provide the desired picture quality.
QoS differentiation for the local link and remote links will be discussed with respect to the scenarios depicted in
Scenario 2 shown in
In scenario 3 depicted in
In scenario 4 depicted in
In scenario 5 depicted in
Scenario 6 depicted in
Finally, scenario 7 depicted in
In summary, the large delay between the wireless downlinks and the video encoder may apply to Scenarios 4-7 of
Techniques for setting different QoS levels in LTE are now described with respect to two exemplary embodiments that enable such QoS differentiation between wireless uplink and wireless downlink. Each approach may involve any (or none) of the following three functions: (i) the network(s) determine the QoS level for the uplink; (ii) the network(s) determine whether a feedback mechanism for packet loss detection will be used at the uplink and downlink; and (iii) the network(s) determine the QoS level for the downlink. For the uplink, in general, the feedback mechanism is recommended.
The current 3GPP specification defines nine QoS levels (QCI values). Each QoS level is recommended for a number of applications. Simply following the recommendation in the 3GPP specification, the video packets transmitted over the downlink will receive the same QoS level as the video packets over the uplink, because the application is the same on the uplink and on the downlink.
However, some embodiments may leverage the PCC capability of the current 3GPP specification to enable QoS differentiation between uplink and downlink. In one such embodiment, the following procedures may be performed:
One embodiment may use deep packet inspection (DPI), and an alternative embodiment may use application functions to determine application type, both of which are described in further detail below.
Referring now to
At the P-GW 1411 of the uplink, DPI is performed to detect the SDF, as shown at 2-a. Similarly, DPI is used at the P-GW of the downlink, as shown at 2-b.
The P-GWs 1409 and 1411 then each send a message 3-a and 3-b, respectively, to their PCRFs 1405 and 1419 requesting the PCC rules associated with the SDF. The PCC rules may include the QoS level, whether to reject the SDF or not, etc.
The PCRFs 1405, 1419 contact their respective SPRs 1407 and 1417 to get the subscription information associated with the UE of the detected SDF, as shown at 4-a and 4-b.
The SPRs 1407 and 1417 reply with the subscription information, as shown at 5-a and 5-b.
The PCRFs 1405, 1419 use the subscription information and the policies uploaded by the network operators to derive the PCC rules for their respective SDFs, as shown at 6-a and 6-b. However, the derived PCC rules may be different in the two LTE/SAE networks, because the desired QoS levels for uplink and downlink may be different.
The PCRFs 1405, 1419 send the PCC rules to their respective P-GWs 1409, 1411, as shown at 7-a and 7-b.
Next, it will be determined if a feedback mechanism will be employed for communications between the sending and receiving UEs 1401 and 1423. This may involve some or all of the steps labeled 8-1 through 9-a shown in
While, in one embodiment, it is possible to make a decision as to whether to employ feedback in the uplink and/or downlink simply by categorizing the scenario in question into one of the seven scenarios as depicted in
Next, the P-GW 1413 in the downlink LTE/SAE network may forward the request to its own subscription service (not shown) and receive in response the IP address of the eNB 1421 that currently serves the UE receiver 1423 (also not shown), and then send a reply, message 8-2, with the IP address to the requesting P-GW 1409.
Next, in the uplink LTE/SAE network, the P-GW 1409 sends a request message 8-3 to the uplink eNB 1403 asking it to send a delay test packet to the eNB 1421 in the downlink network. This message may contain the address of the eNB 1421 in the downlink network.
In response, eNB 1403 sends a delay test packet 8-4 to the downlink eNB 1421. The delay test packet contains at least (1) its own address, (2) the address of the downlink eNB, and (3) a timestamp. The test packet may be an ICMP Ping message.
The downlink eNB 1421 sends back an ACK 8-5. The ACK message may contain the following information: (1) the address of the uplink eNB; (2) the address of the downlink eNB; (3) a time stamp when the ACK is generated; and (4) a time stamp copied from the delay test packet.
Next, the uplink eNB 1403 calculates the delay between itself and the downlink eNB 1421 and sends a report message 8-6 to the uplink P-GW 1409.
The uplink P-GW 1409 sends back an ACK message 8-7 to the uplink eNB 1403 to confirm the reception of the delay report. The report may contain the following information: (1) the address of the uplink P-GW; (2) the address of the uplink eNB; and (3) the address of the downlink eNB.
The uplink P-GW 1409 then evaluates the feedback delay based on the delay reported from the uplink eNB and compares the feedback delay with the PCC rules. It then decides whether a feedback mechanism for detecting packet losses should be used for the uplink and/or the downlink.
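The delay evaluation and feedback decision can be sketched as follows. The halving of the round trip assumes symmetric paths, and the threshold value is illustrative; an actual deployment would take the threshold from the PCC rules.

```python
# Sketch of the delay test (8-4/8-5) and feedback decision (8-8).
# The symmetric-path assumption and the threshold value are illustrative.

def measure_one_way_delay(ping_ts, ack_ts):
    """Estimate eNB-to-eNB delay as half the round trip (symmetric-path assumption)."""
    return (ack_ts - ping_ts) / 2.0

def enable_downlink_feedback(delay_s, max_feedback_delay_s):
    """Enable packet-loss feedback only if notifications would arrive in time
    for the encoder's error propagation reduction to be useful."""
    return delay_s <= max_feedback_delay_s

delay = measure_one_way_delay(ping_ts=0.000, ack_ts=0.080)  # 80 ms RTT -> 40 ms one-way
assert enable_downlink_feedback(delay, max_feedback_delay_s=0.2)
assert not enable_downlink_feedback(0.5, max_feedback_delay_s=0.2)
```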
Then the uplink P-GW 1409 informs the downlink P-GW 1413 of its decision whether to use a feedback mechanism in message 8-9. Message 8-9 may have the following information: (1) the address of the uplink P-GW; (2) the address of the downlink P-GW; (3) the address of the uplink eNB; (4) the address of the downlink eNB; (5) the address of the UE sender; (6) the address of the UE receiver; (7) the application type; and (8) a message ID.
The downlink P-GW 1413 replies with an ACK 8-10, which may contain the same type of information contained in message 8-9. Additionally, it may contain its own message ID.
Note that, in the case the two UEs are in the same LTE/SAE network, messages 8-1, 8-2, 8-9, and 8-10 will not be used.
The uplink and downlink P-GWs 1409 and 1413 send messages 9-a and 9-b, respectively, to the uplink and downlink eNBs 1403 and 1421, indicating whether the feedback mechanism for detecting packet losses on the respective wireless link will be enabled or not.
In one embodiment, feedback is always enabled for the uplink. For the downlink, on the other hand, the decision should depend on the actual delay between the sender UE 1401 (where the video encoder is located) and the wireless downlink in question.
Next, each P-GW 1409, 1413 initiates the setup of the EPS bearer, and assigns a QoS level to the EPS bearer based on the PCC rules received from the PCRF. This series of events is labeled as 10-a and 10-b for the uplink and downlink networks, respectively, in
Finally, if the sending UE 1401 sends a video packet, this video packet will be served at the new QoS levels in the LTE/SAE networks. These events are labeled as 11-a and 11-b, respectively, in
Alternatively, an application function-based approach may be used. For instance, in the DPI method, the use of encryption may make it quite difficult for the P-GW to obtain, from the passing video packets, the information needed to determine the desired QoS level. In an application function-based approach, the P-GW does not inspect data (video) packets. Instead, the application function extracts the necessary information from the application used by the UEs and passes that information to the PCRF. For example, an application function could be the P-CSCF (Proxy-Call Session Control Function) used in the IMS system. The application signaling may be carried by SIP. A SIP INVITE packet (RFC 3261) payload may contain a Session Description Protocol (SDP) (RFC 2327) packet, which in turn may contain the parameters to be used by the multimedia session.
In some embodiments, attributes for the SDP packet are defined to describe the desired QoS levels for the uplink traffic and downlink traffic and the delay threshold for triggering the packet loss detection feedback mechanism. For example, per the SDP syntax (RFC 2327):
a=uplinkLoss:2e-3
a=downlinkLoss:1e-3
a=maxFeedbackDelay:2e-1
The meaning of the above is:
Tolerable uplink packet loss is 2×10⁻³
Tolerable downlink packet loss is 1×10⁻³
Maximum feedback delay for any packet loss detection is 2×10⁻¹ sec, or 200 ms
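A receiving network element could extract these values with a small parser along the following lines. The attribute names follow the example above; the helper function itself is an illustrative assumption, not part of any SDP library.

```python
# Minimal sketch of parsing the custom SDP QoS attributes shown above.
# The attribute names follow the example; the helper is an assumption.

def parse_qos_attributes(sdp_text):
    """Extract 'a=<name>:<value>' attributes with numeric values from SDP text."""
    params = {}
    for line in sdp_text.splitlines():
        if line.startswith("a=") and ":" in line:
            name, value = line[2:].split(":", 1)
            params[name] = float(value)
    return params

sdp = "a=uplinkLoss:2e-3\na=downlinkLoss:1e-3\na=maxFeedbackDelay:2e-1"
qos = parse_qos_attributes(sdp)
assert qos["uplinkLoss"] == 2e-3
assert qos["maxFeedbackDelay"] == 0.2   # 200 ms threshold for enabling feedback
```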
Signaling and operation in accordance with one exemplary embodiment of an application function-based approach are depicted in
The uplink UE 1501 sends an application packet, which could be a SIP INVITE packet described above with attributes defined by the uplink UE. This packet traverses both LTE/SAE networks. These events are labeled as 21-a and 21-b in the uplink and downlink networks, respectively.
The AFs 1505 and 1521 in the uplink and downlink networks, respectively, extract the application information and possibly QoS parameters from the application packet. These events are labeled as 22-a and 22-b, respectively.
The AFs 1505 and 1521 send the extracted application information and QoS information to their respective PCRFs 1507 and 1519, as shown at 23-a and 23-b, respectively.
As in the DPI-based embodiment of
Next, using the uplink network as an example, if QoS parameters are specified, the PCRF 1507 will find the matching QoS level (e.g., QCI value) for that SDF, as shown at 26-1-a, and may send a message 26-2-a to the UE 1501 to notify it of the result of the QoS request. Otherwise, the PCRF will derive the QoS level.
The same occurs in the downlink message, as illustrated by operation 26-1-b in which PCRF 1519 finds the matching QoS level and may send a message 26-2-b to the downlink UE 1525 notifying it of the result of the QoS request.
Messages 26-2-a and 26-2-b may have the following information: (1) the address of the UE; (2) an identifier of the SDF, e.g., destination IP address, source port number, destination port number, and protocol number; (3) whether the QoS request is accepted or not; and (4) if the QoS request is rejected, the recommended QoS for use.
The remaining signaling and operations 27-a, 27-b, 28-1, 28-2, 28-3, 28-4, 28-5, 28-6, 28-7, 28-8, 29-a, 29-b, 30-a, 30-b, 31-a, and 31-b are essentially the same as the corresponding signaling and operations in
Transcoding for prevention of error propagation at remote links also may be used in some embodiments, including RPS or RSPS operations at the remote base station. A system diagram illustrating an embodiment of such an approach is shown in
Similarly to the system shown in
The techniques described hereinabove for early detection of packet loss and error propagation reduction may be used at the local wireless link 1615 as previously discussed and are generically represented by line 1616 in
In such cases, similar early packet loss detection and error propagation reduction techniques to those described above primarily in connection with local wireless links may be applied at the remote link 1623. However, in these embodiments, the remote base station 1622 performs transcoding of the video packets it receives as input, and encoding operations are performed between the remote base station 1622 and the receiving UE 1624. These operations are represented by line 1626 in
In some embodiments, transcoding at the remote base station 1622 may be invoked only if and when packets are lost. In the absence of packet loss, the base station 1622 may simply send the incoming sequence of RTP packets on to the UE 1624 over the wireless link 1623.
Then, when packet loss is detected, the base station may prevent loss propagation by commencing transcoding. In one embodiment, upon detection of packet loss, the base station 1622 transcodes the next frame/packet by using RPS or RSPS to the last successfully transmitted frame. To prevent ghosting, the frames following the lost frame (until the next IDR frame is received) are transcoded as P-pictures, referring to prior frames that were successfully transmitted. In this transcoding process, many encoding parameters, such as QP levels, macro-block types, and motion vectors can be kept intact, or used as a good starting point to simplify the decision process and maintain relatively low-complexity in the process.
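The pass-through/transcode switch described in the preceding two paragraphs can be sketched as below. The function and field names are illustrative assumptions; a real implementation would operate on actual RTP packets and an H.264 transcoder.

```python
# Hedged sketch of the remote-eNB behavior: relay packets untouched until a
# loss, then transcode P-frames with prediction re-anchored (RPS/RSPS) to the
# last successfully transmitted frame until the next IDR arrives.
# Names and the frame representation are assumptions for illustration.

def handle_frame(frame, loss_detected, last_good_frame, next_idr_pending):
    """Decide whether to forward a frame as-is or transcode it."""
    if not loss_detected:
        return ("forward", frame)  # no loss: plain relay of incoming RTP packets
    if next_idr_pending:
        # Until the next IDR frame, re-encode as a P-picture referencing only
        # frames delivered before the loss, to stop ghosting at the decoder.
        return ("transcode", {"frame": frame, "ref": last_good_frame})
    return ("forward", frame)

action, _ = handle_frame("P12", loss_detected=False,
                         last_good_frame="P10", next_idr_pending=False)
assert action == "forward"
action, out = handle_frame("P12", loss_detected=True,
                           last_good_frame="P10", next_idr_pending=True)
assert action == "transcode" and out["ref"] == "P10"
```

As noted above, encoding parameters such as QP levels, macro-block types, and motion vectors could be reused from the incoming stream to keep transcoding complexity low.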
Coupled with RPS/RSPS (see, e.g., 1616) on the local link 1615, this technique should be sufficient to combat errors introduced by wireless links. The global RTCP feedback still may be used to deal with cases when packets are delayed or lost due to congestion on the wired portion of the communication chain.
As shown above, early packet loss detection may be used as a supplemental technique for improving quality of delivery in video-phone applications. It also may be used as a separate technique for improving performance of RTSP/RTP-based streaming applications. One such architecture is shown in
In many cases, the transcoder in the base-station 1720 need not even be aware of the type of stream or application that it is dealing with. It may just parse packet headers to detect RTP and video content, and check whether it was successfully delivered. If not, it may invoke transcoding to minimize error propagation without ever needing to know the type of data or the application.
Unlike video conferencing or VoIP, streaming systems can tolerate delay and, in principle, use RTCP or proprietary protocols to implement application-level ARQ (and the accompanying retransmission of lost packets). To prevent such retransmissions, the transcoder additionally may generate and send delayed RTP packets with sequence numbers corresponding to lost packets. Such packets may contain no payload, or transparent (all skip mode) P frames.
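Generating such placeholder packets can be sketched as follows. The dictionary-based packet representation is an assumption for illustration; a real implementation would build RTP headers per RFC 3550.

```python
# Illustrative sketch of suppressing application-level ARQ by emitting
# payload-free placeholder RTP packets for lost sequence numbers, so the
# receiver observes no sequence gap. Packet fields are simplified.

def make_placeholder_packets(lost_sequence_numbers, timestamp):
    """Build empty RTP packets covering the lost sequence numbers."""
    return [
        {"seq": sn & 0xFFFF,         # RTP sequence numbers are 16-bit
         "timestamp": timestamp,
         "payload": b""}             # no payload (or an all-skip P frame)
        for sn in lost_sequence_numbers
    ]

fillers = make_placeholder_packets([1001, 1002], timestamp=90000)
assert [p["seq"] for p in fillers] == [1001, 1002]
assert all(p["payload"] == b"" for p in fillers)
```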
As shown in
The communications systems 100 may also include a base station 114a and a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the core network 106, the Internet 110, and/or the networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
The base station 114a may be part of the RAN 104, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In another embodiment, the base station 114a may employ multiple-input multiple-output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.
The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
In another embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).
In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
The base station 114b in
The RAN 104 may be in communication with the core network 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. For example, the core network 106 may provide call control, billing, services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in
The core network 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.
Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities, i.e., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 102c shown in FIG. 18A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While
The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
In addition, although the transmit/receive element 122 is depicted in
The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.
The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 106 and/or the removable memory 132. The non-removable memory 106 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
As shown in
The core network 106 shown in
The RNC 142a in the RAN 104 may be connected to the MSC 146 in the core network 106 via an IuCS interface. The MSC 146 may be connected to the MGW 144. The MSC 146 and the MGW 144 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
The RNC 142a in the RAN 104 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface. The SGSN 148 may be connected to the GGSN 150. The SGSN 148 and the GGSN 150 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
As noted above, the core network 106 may also be connected to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.
Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in
The core network 106 shown in
The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.
The serving gateway 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The serving gateway 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The serving gateway 164 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
The serving gateway 164 may also be connected to the PDN gateway 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
The core network 106 may facilitate communications with other networks. For example, the core network 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the core network 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 106 and the PSTN 108. In addition, the core network 106 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
As shown in
The air interface 116 between the WTRUs 102a, 102b, 102c and the RAN 104 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs 102a, 102b, 102c may establish a logical interface (not shown) with the core network 106. The logical interface between the WTRUs 102a, 102b, 102c and the core network 106 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.
The communication link between each of the base stations 170a, 170b, 170c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations 170a, 170b, 170c and the ASN gateway 172 may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 102a, 102b, 102c.
As shown in
The MIP-HA 174 may be responsible for IP address management, and may enable the WTRUs 102a, 102b, 102c to roam between different ASNs and/or different core networks. The MIP-HA 174 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The AAA server 176 may be responsible for user authentication and for supporting user services. The gateway 178 may facilitate interworking with other networks. For example, the gateway 178 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. In addition, the gateway 178 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
Although not shown in
Various acronyms, terms, and abbreviations used herein may include the following:
In one embodiment, a method is implemented of transmitting video data over a network comprising: receiving wireless packet loss data at a wireless transmit receive unit (WTRU), determining video packet loss data from the wireless packet loss data; and providing the video packet loss data to a video encoder application running on the WTRU for use in encoding video data.
In accordance with this embodiment, the method may further comprise: the video encoder conducting an error propagation reduction process responsive to the video packet loss data.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes generating an Instantaneous Decode Refresh frame.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes generating an Intra Refresh frame.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes generating encoded video using a reference picture selection method.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes generating encoded video using a reference set of pictures selection method.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes generating encoded video using one or more reference pictures selected based on the packet loss indication data.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes: generating an Intra Refresh frame or an Instantaneous Decode Refresh frame; generating encoded video using a P-predicted encoding mode; and selecting, for transmission, one of (a) the Intra Refresh frame or Instantaneous Decode Refresh frame and (b) the encoded video using the P-predicted encoding mode.
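The error propagation reduction decision in the embodiments above may be sketched as a small controller, assuming a hypothetical encoder interface (the class and method names are illustrative and not any standard codec API): when the loss report leaves at least one undamaged reference picture in the encoder's buffer, reference picture selection is cheaper than a full refresh; otherwise an IDR frame is forced.

```python
from dataclasses import dataclass

@dataclass
class LossReport:
    lost_frame_ids: list      # frames the decoder is known to have lost

class ErrorResilientEncoder:
    """Illustrative controller showing how video packet loss data
    could steer the next encode decision (RPS vs. IDR)."""

    def __init__(self, dpb_size: int = 4):
        self.dpb = []          # ids of frames the encoder may reference
        self.dpb_size = dpb_size

    def encoded(self, frame_id: int):
        """Record a newly encoded frame as a candidate reference."""
        self.dpb = (self.dpb + [frame_id])[-self.dpb_size:]

    def on_loss_report(self, report: LossReport) -> str:
        clean = [f for f in self.dpb if f not in report.lost_frame_ids]
        if clean:
            # Reference picture selection: predict only from frames the
            # decoder is known to hold intact -- cheaper than an IDR.
            self.dpb = clean
            return f"P-frame referencing {clean[-1]}"
        # No clean reference survives: force a decoder refresh.
        self.dpb = []
        return "IDR"
```

For example, after encoding frames 1 through 4, a report that frame 4 was lost yields a P frame referencing frame 3, while a report losing all buffered frames yields an IDR.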
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is provided by a base station to the wireless transmit receive unit (WTRU).
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is generated at the Radio Link Control (RLC) protocol layer.
One or more of the preceding embodiments may further comprise: wherein the video packets are transported using Real Time Protocol (RTP).
One or more of the preceding embodiments may further comprise: wherein a wireless transport protocol is LTE.
One or more of the preceding embodiments may further comprise: wherein the RLC layer is operating in acknowledged mode.
One or more of the preceding embodiments may further comprise: wherein a number of ARQ retransmissions is set to zero in acknowledged mode.
One or more of the preceding embodiments may further comprise: wherein maxRetxThreshold is set to zero.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is obtained from RLC Status PDUs received from the base station.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is generated locally from a MAC transmitter.
One or more of the preceding embodiments may further comprise: wherein the video packet loss data is determined by identifying a PDCP sequence number in a header of a PDCP packet.
One or more of the preceding embodiments may further comprise: wherein the RLC layer is operating in unacknowledged mode.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data includes a NACK message.
One or more of the preceding embodiments may further comprise: wherein the NACK message is synchronous with uplink transmissions.
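The synchronous NACK property can be illustrated with the FDD LTE uplink HARQ timing, where feedback for a transmission in subframe n is carried on the PHICH in subframe n + 4, so the sender can associate each NACK with the transport block it covers without any explicit sequence number. The sketch below assumes that FDD timing (k = 4) and standard LTE frame numbering (10 subframes per frame, SFN wrapping at 1024); it is illustrative only.

```python
def expected_feedback_subframe(tx_sfn: int, tx_subframe: int, k: int = 4):
    """Return the (SFN, subframe) in which HARQ ACK/NACK feedback is
    expected for an uplink transmission sent in (tx_sfn, tx_subframe).

    k=4 is the FDD LTE value; the System Frame Number wraps at 1024.
    """
    total = tx_sfn * 10 + tx_subframe + k
    return (total // 10) % 1024, total % 10
```

Because the timing is fixed, a NACK received at a given subframe identifies the lost transport block purely by position in time.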
One or more of the preceding embodiments may further comprise: wherein the video packet loss data is generated from a mapping using a packet data convergence protocol (PDCP) sequence number.
One or more of the preceding embodiments may further comprise: wherein the determining the video packet loss data includes using a mapping from a real time protocol (RTP) sequence number in the RLC to a PDCP PDU sequence number.
One or more of the preceding embodiments may further comprise: wherein the mapping includes using a table lookup process.
One or more of the preceding embodiments may further comprise: wherein the determining the video packet loss data further includes mapping the PDCP PDU sequence number to an IP address, Port number, and RTP sequence number.
One or more of the preceding embodiments may further comprise: wherein the determining the video packet loss data further comprises performing deep packet inspection on the PDCP PDU.
One or more of the preceding embodiments may further comprise: wherein the mapping the PDCP PDU sequence number to an IP address, Port number, and RTP sequence number includes using a PDCP PDU sequence number lookup table.
One or more of the preceding embodiments may further comprise: mapping the RTP sequence number to a NAL packet identifier.
One or more of the preceding embodiments may further comprise: wherein the mapping the RTP sequence number to a NAL packet identifier comprises using a RTP sequence number to NAL packet identifier lookup table.
One or more of the preceding embodiments may further comprise: wherein the PDCP PDU sequence number lookup table is built using an RLC segmentor.
One or more of the preceding embodiments may further comprise: wherein the determining the video packet loss data includes mapping from an RLC packet to a PDCP sequence number to an RTP sequence number to a NAL unit.
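The chained mapping described in the embodiments above may be sketched as a sequence of table lookups. The table contents and identifiers below are illustrative assumptions; in a real stack the first table would be populated by the RLC segmentor and the others by the RTP packetizer (or by deep packet inspection of the PDCP PDU).

```python
# Illustrative lookup tables (hypothetical values).
rlc_to_pdcp = {501: 9001, 502: 9001, 503: 9002}   # RLC SN -> PDCP SN
pdcp_to_rtp = {9001: ("10.0.0.5", 5004, 77),      # PDCP SN -> (IP, port, RTP seq)
               9002: ("10.0.0.5", 5004, 78)}
rtp_to_nal = {77: "NAL-120", 78: "NAL-121"}       # RTP seq -> NAL unit id

def lost_nal_units(lost_rlc_sns):
    """Map RLC-level loss indications up to NAL unit identifiers,
    which is the granularity a video encoder can act on."""
    nals = set()
    for sn in lost_rlc_sns:
        pdcp_sn = rlc_to_pdcp.get(sn)
        flow = pdcp_to_rtp.get(pdcp_sn)
        if flow is None:
            continue            # the loss did not map to a video flow
        _ip, _port, rtp_seq = flow
        nal = rtp_to_nal.get(rtp_seq)
        if nal is not None:
            nals.add(nal)
    return sorted(nals)
```

Note that several RLC PDUs may map to one PDCP PDU (segmentation), so the result is deduplicated before being handed to the encoder.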
One or more of the preceding embodiments may further comprise: wherein the video packet loss data is generated from the wireless packet loss data using a mapping from a radio link control (RLC) sequence number.
One or more of the preceding embodiments may further comprise: wherein the method is implemented in a network environment comprising at least a downlink wireless link and an uplink wireless link between the WTRU and a destination of the video data, the downlink wireless link disposed closer to the WTRU than the uplink wireless link, and wherein the wireless packet loss data pertains to the downlink wireless link, the method further comprising: implementing a higher QoS in the remote wireless link than in the local wireless link.
One or more of the preceding embodiments may further comprise: wherein the network determines the QoS level for the remote wireless link.
One or more of the preceding embodiments may further comprise: wherein the method is implemented in a network environment comprising at least a downlink wireless link between the WTRU and an uplink base station and an uplink wireless link between a downlink base station and a destination receiver of the video data, the downlink wireless link disposed closer to the WTRU than the uplink wireless link, and wherein the wireless packet loss data pertains to the downlink wireless link, the method further comprising: the network determining whether to generate additional wireless packet loss data pertaining to the downlink wireless link.
One or more of the preceding embodiments may further comprise: wherein the determining whether to generate additional wireless packet loss data pertaining to the remote wireless link includes determining the delay of data transmission between the WTRU and the downlink wireless link.
One or more of the preceding embodiments may further comprise: wherein the determining whether to generate additional wireless packet loss data pertaining to the downlink wireless link further comprises determining an application type of the video packet data using Deep Packet Inspection (DPI).
One or more of the preceding embodiments may further comprise: wherein the determining whether to generate additional wireless packet loss data pertaining to the downlink wireless link comprises: the WTRU sending a video packet over the network; performing DPI to detect the Service Data Flow (SDF) of the video packet data to determine an application type corresponding to the video packet data; the uplink base station sending a delay test packet to the downlink base station; the downlink base station sending an ACK message to the uplink base station in response to receipt of the delay test packet; the uplink base station calculating a delay between the uplink base station and the downlink base station; the uplink base station sending a delay report message to a network gateway; the network gateway deciding whether to generate additional wireless packet loss data pertaining to the remote wireless link based at least in part on the delay report message; and the gateway sending a message to the downlink base station indicating whether to generate additional wireless packet loss data pertaining to the remote wireless link.
One or more of the preceding embodiments may further comprise: wherein the delay test packet contains at least (1) the network address of the uplink base station, (2) the network address of the downlink eNB, and (3) a timestamp.
One or more of the preceding embodiments may further comprise: wherein the ACK message contains (1) the network address of the uplink base station, (2) the network address of the downlink base station; (3) a time stamp when the ACK is generated; and (4) a copy of the time stamp from the delay test packet.
One or more of the preceding embodiments may further comprise: a gateway in the network sending a request message to the uplink base station requesting the uplink base station to send the delay test packet to the downlink base station; and wherein the sending of the delay test packet by the uplink base station is performed responsive to receipt of the request message from the gateway.
One or more of the preceding embodiments may further comprise: wherein the delay test packet is an ICMP Ping message.
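The delay test packet exchange described above may be sketched as follows. The dictionary-based message format and function names are illustrative assumptions (an actual implementation might use an ICMP Ping as noted, or a dedicated message format); the delay estimate assumes a roughly symmetric path.

```python
import time

def make_delay_test_packet(ul_bs_addr, dl_bs_addr, now=None):
    """Delay test packet: addresses of both base stations plus a
    timestamp taken at the uplink base station."""
    return {"src": ul_bs_addr, "dst": dl_bs_addr,
            "ts": time.time() if now is None else now}

def make_ack(pkt, now=None):
    """ACK: both addresses, a timestamp of when the ACK is generated,
    and a copy of the timestamp from the delay test packet."""
    return {"src": pkt["src"], "dst": pkt["dst"],
            "ack_ts": time.time() if now is None else now,
            "echo_ts": pkt["ts"]}

def one_way_delay_estimate(ack, recv_time):
    """Round-trip time halved; a symmetric-path approximation the
    uplink base station can report to the network gateway."""
    return (recv_time - ack["echo_ts"]) / 2.0
```

The gateway would compare the reported delay against a threshold to decide whether feedback from the remote link can reach the encoder quickly enough to be useful.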
One or more of the preceding embodiments may further comprise: wherein the determining whether to generate additional wireless packet loss data pertaining to the downlink wireless link further comprises determining an application type of the video packet data using an Application Function process.
One or more of the preceding embodiments may further comprise: wherein the determining whether to generate additional wireless packet loss data pertaining to the downlink wireless link comprises: the WTRU sending an application packet through the network to a receiving node; an Application Function (AF) in the network extracting application information from the application packet; the AF sending the extracted application information to a Policy Charging and Rule Function (PCRF) in the network; the PCRF determining an application type corresponding to the video data, determining QoS parameters for the video data as a function thereof, and sending the QoS parameters to a gateway in the network; the uplink base station sending a delay test packet to the downlink base station; the downlink base station sending an ACK message to the uplink base station in response to receipt of the delay test packet; the uplink base station calculating a delay between the uplink base station and the downlink base station; the uplink base station sending a delay report message to a network gateway; the network gateway determining whether to generate additional wireless packet loss data pertaining to the remote wireless link based at least in part on the delay report and the QoS parameters received from the PCRF; and the gateway sending a message to the downlink base station indicating whether to generate additional wireless packet loss data pertaining to the remote wireless link.
One or more of the preceding embodiments may further comprise: wherein the Application Function is a P-CSCF (Proxy Call Session Control Function).
One or more of the preceding embodiments may further comprise: wherein the application packet is a Session Initiation Protocol (SIP) INVITE packet.
One or more of the preceding embodiments may further comprise: storing policies indicating QoS levels to be used for uplink traffic and for downlink traffic for at least one particular type of application; the network determining an application type of the video encoder; and the network setting a QoS level for the downlink wireless link and a QoS level for the uplink wireless link as a function of the policies and the application type of the video encoder.
One or more of the preceding embodiments may further comprise: wherein the downlink QoS is higher than the uplink QoS for each application.
One or more of the preceding embodiments may further comprise: wherein the at least one application is a video encoder.
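The stored QoS policies described above may be sketched as a per-application lookup. The policy store and values below are hypothetical; the numbers follow the LTE QCI convention, in which a lower class identifier means better treatment, so assigning the downlink a lower QCI than the uplink realizes the "downlink QoS higher than uplink QoS" policy.

```python
# Hypothetical policy store: for each application type, the QoS class
# identifiers (QCI-style, lower = higher priority) for each direction.
QOS_POLICIES = {
    "conversational-video": {"downlink": 2, "uplink": 4},
    "video-streaming":      {"downlink": 6, "uplink": 8},
}

def qos_for_application(app_type: str):
    """Return (downlink QoS, uplink QoS) for an application type,
    as the network would after determining the application type."""
    policy = QOS_POLICIES[app_type]
    return policy["downlink"], policy["uplink"]
```

Once the network identifies the traffic as coming from a video encoder, it applies the matching pair of bearer QoS levels to the two wireless links.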
One or more of the preceding embodiments may further comprise: wherein the method is implemented in a network environment comprising at least a downlink wireless link and an uplink wireless link between the WTRU and a destination receiver of the video data, the downlink wireless link disposed closer to the WTRU than the uplink wireless link, and wherein the wireless packet loss data pertains to the downlink wireless link, the method further comprising: transmitting the wireless packet data over the downlink wireless link; receiving wireless packet loss data from the destination node at the downlink base station; and determining video packet loss data from the wireless packet loss data received at the downlink base station.
One or more of the preceding embodiments may further comprise: providing the video packet loss data received at the downlink base station to a transcoder in the downlink base station for use in encoding the video data; and transcoding the video data at the downlink base station before passing through the downlink wireless link to the destination node.
One or more of the preceding embodiments may further comprise: wherein the transcoder performs the transcoding responsive to the wireless packet loss data.
One or more of the preceding embodiments may further comprise: wherein the transcoder conducts an error propagation reduction process on the video data responsive to the video packet loss data.
In another embodiment, a WTRU includes a processor configured to transmit video data over a network, the processor further configured to: receive wireless packet loss data; determine video packet loss data from the wireless packet loss data; and provide the video packet loss data to a video encoder application running on the WTRU for use in encoding video data.
One or more of the preceding embodiments may further comprise: wherein the video encoder is configured to conduct an error propagation reduction process responsive to the video packet loss data.
One or more of the preceding embodiments may further comprise: wherein the error propagation reduction process includes at least one of: (a) generating an Instantaneous Decode Refresh frame; (b) generating an Intra Refresh frame; (c) generating encoded video using a reference picture selection method; (d) generating encoded video using a reference set of pictures selection method; and (e) generating encoded video using one or more reference pictures selected based on the packet loss indication data.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is received from a base station.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is in a Radio Link Control (RLC) protocol layer.
One or more of the preceding embodiments may further comprise: wherein the video packets are in Real Time Protocol (RTP).
One or more of the preceding embodiments may further comprise: wherein the RLC layer is operating in acknowledged mode.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data is obtained from received RLC Status PDUs.
One or more of the preceding embodiments may further comprise: wherein the video packet loss data is determined by identifying a PDCP sequence number in a header of a PDCP packet.
One or more of the preceding embodiments may further comprise: wherein the RLC layer is operating in unacknowledged mode.
One or more of the preceding embodiments may further comprise: wherein the wireless packet loss data includes a NACK message.
One or more of the preceding embodiments may further comprise: wherein the NACK message is synchronous with uplink transmissions.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to generate the video packet loss data from a mapping using a packet data convergence protocol (PDCP) sequence number.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to generate the video packet loss data using a mapping from a real time protocol (RTP) sequence number in the RLC to a PDCP PDU sequence number.
One or more of the preceding embodiments may further comprise: wherein the mapping includes using a table lookup process.
One or more of the preceding embodiments may further comprise: wherein the processor is configured to determine the video packet loss data by mapping the PDCP PDU sequence number to an IP address, Port number, and RTP sequence number.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to determine the video packet loss data by performing deep packet inspection on the PDCP PDU.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to map the PDCP PDU sequence number to an IP address, Port number, and RTP sequence number using a PDCP PDU sequence number lookup table.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to map the RTP sequence number to a NAL packet identifier.
One or more of the preceding embodiments may further comprise: wherein the processor is configured to determine the video packet loss data by mapping from an RLC packet to a PDCP sequence number to an RTP sequence number to a NAL unit.
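The lookup-table mapping chain described in the preceding embodiments may be sketched, for illustration only, as follows. All table names, record layouts, and sample values are assumptions made for this sketch; in practice the tables would be populated as RLC PDUs are built, by deep packet inspection of PDCP PDUs, and by the RTP packetizer, respectively.

```python
# Illustrative sketch of the mapping chain: RLC SN -> PDCP PDU SN ->
# (IP address, port, RTP SN) -> NAL unit identifier. All names and
# sample values are hypothetical.

rlc_to_pdcp = {412: 900, 413: 901}                 # filled as RLC PDUs are built
pdcp_to_rtp = {900: ("10.0.0.2", 5004, 3001),      # filled by deep packet
               901: ("10.0.0.2", 5004, 3002)}      # inspection of PDCP PDUs
rtp_to_nal = {3001: "nal_17", 3002: "nal_18"}      # filled by the RTP packetizer

def video_packet_loss(lost_rlc_sns):
    """Map lost RLC sequence numbers to lost NAL unit identifiers."""
    lost_nals = []
    for rlc_sn in lost_rlc_sns:
        pdcp_sn = rlc_to_pdcp.get(rlc_sn)
        if pdcp_sn is None:
            continue  # this RLC PDU did not carry a tracked PDCP PDU
        ip, port, rtp_sn = pdcp_to_rtp[pdcp_sn]
        lost_nals.append(rtp_to_nal[rtp_sn])
    return lost_nals

print(video_packet_loss([413]))  # ['nal_18']
```

The resulting NAL unit identifiers constitute the video packet loss data provided to the encoder or transcoder.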
In another embodiment, a base station in a network environment includes a processor configured to: receive input wireless packet data via the network; transmit the wireless packet data to a destination node over a wireless link; receive wireless packet loss data from the destination node; determine application layer packet loss data from the wireless packet loss data; and provide the application packet loss data to a transcoder in the base station for use in transcoding application layer data in the wireless packet data.
One or more of the preceding embodiments may further comprise: wherein the application layer data is video data.
One or more of the preceding embodiments may further comprise: wherein the transcoder is configured to transcode the wireless packet data to application layer data and back before transmitting it over the downlink wireless link to the destination node.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to cause the transcoder to perform the transcoding responsive to the wireless packet loss data.
One or more of the preceding embodiments may further comprise: wherein the processor is further configured to cause the transcoder to conduct an error propagation reduction process on the application layer data responsive to the wireless packet loss data.
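One error propagation reduction option described above is forcing an intra (IDR) frame when loss is reported. A minimal sketch, assuming a hypothetical transcoder interface (the class and method names are illustrative, not any actual encoder API):

```python
# Hedged sketch: a base-station transcoder reacting to reported video
# packet loss by forcing an IDR frame, one of the error propagation
# reduction options described above. The interface is hypothetical.

class Transcoder:
    def __init__(self):
        self.force_idr = False

    def on_video_packet_loss(self, lost_nal_ids):
        # Any reported loss can corrupt the decoder's reference pictures,
        # so request an IDR frame to cut off error propagation.
        if lost_nal_ids:
            self.force_idr = True

    def encode_next_frame(self, frame):
        frame_type = "IDR" if self.force_idr else "P"
        self.force_idr = False  # one IDR is enough to refresh the decoder
        return frame_type       # stand-in for the actual encoded bitstream

t = Transcoder()
t.on_video_packet_loss(["nal_18"])
print(t.encode_next_frame(None))  # IDR
print(t.encode_next_frame(None))  # P
```

Reference picture selection (RPS) would react differently, re-pointing prediction at a picture known to have been received rather than spending bits on a full intra frame.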
In another embodiment, an apparatus comprising a computer readable storage medium has instructions thereon that when executed by a processor cause the processor to provide wireless packet loss data to a video encoder.
In another embodiment, an apparatus comprising a computer readable storage medium has instructions thereon that when executed by a processor cause the processor to encode video data based on an indication of lost video packets.
Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Variations of the method, apparatus, and system described above are possible without departing from the scope of the invention. In view of the wide variety of embodiments that can be applied, it should be understood that the illustrated embodiments are exemplary only, and should not be taken as limiting the scope of the following claims.
Moreover, in the embodiments described above, processing platforms, computing systems, controllers, and other devices containing processors are noted. These devices may contain at least one Central Processing Unit (“CPU”) and memory. In accordance with the practices of persons skilled in the art of computer programming, reference to acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being “executed,” “computer executed” or “CPU executed.”
One of ordinary skill in the art will appreciate that the acts and symbolically represented operations or instructions include the manipulation of electrical signals by the CPU. An electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits. It should be understood that the exemplary embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the described methods.
The data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory (“RAM”)) or non-volatile (e.g., Read-Only Memory (“ROM”)) mass storage system readable by the CPU. The computer readable medium may include cooperating or interconnected computer readable media, which may exist exclusively on the processing system or be distributed among multiple interconnected processing systems that may be local or remote to the processing system. It should be understood that the exemplary embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the described methods.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. In addition, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the terms “any of” followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include “any of,” “any combination of,” “any multiple of,” and/or “any combination of multiples of” the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items. Further, as used herein, the term “set” is intended to include any number of items, including zero. Further, as used herein, the term “number” is intended to include any number, including zero.
Moreover, the claims should not be read as limited to the described order or elements unless stated to that effect. In addition, use of the term “means” in any claim is intended to invoke 35 U.S.C. §112, ¶6, and any claim without the word “means” is not so intended.
This application is a non-provisional of U.S. Provisional Patent Application No. 61/603212, filed Feb. 24, 2012, which is fully incorporated herein by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2013/026353 | 2/15/2013 | WO | 00 |