One of the most popular, and most network-intensive, computer applications is video or multimedia playback over the internet. Several years ago, it was believed that the internet, with its aging protocols and lack of guaranteed delivery, would be unable to support video playback at a quality satisfactory to users. However, applications such as YouTube and Netflix have shown that quality video playback is possible.
Various mechanisms have been created to improve the user experience. For example, in some embodiments, the client, which has a media player, requests and receives the entire multimedia file before beginning playback of the file. This ensures that, once the file begins playing, it will be able to continue uninterrupted. While this may be acceptable for shorter clips, such a scheme is unacceptable for longer files for several reasons. First, the user is forced to wait until the entire file is downloaded before seeing any portion of the clip. Such a wait may be unacceptable. Second, in an environment with limited network bandwidth, the entire file is transmitted even if the user only watches the first few seconds of the video.
Another method is known as progressive download. In this embodiment, the client begins downloading the multimedia file from the server. When a certain threshold is reached, such as when 3 seconds of content has been downloaded, the media player on the client begins displaying the video. The threshold used may be fixed, or may be based on the resolution of the file, the average available network bandwidth, or other parameters. This has several advantages over the previous method. First, the user does not need to wait until the entire file is downloaded before beginning to view the video. Second, bandwidth is potentially saved if the user chooses to navigate away from the video before it is completely downloaded. However, one shortcoming of this mechanism is that once the video has begun playing, the available bandwidth must remain above a minimum level to ensure that the downloading of each subsequent portion of the video completes before that portion needs to be displayed. If the bandwidth decreases, the video may appear choppy, or may pause in order to allow the buffer to fill again.
Progressive download methods typically download the multimedia content at the maximum rate supported by the underlying transport, within the limits of the player buffer (i.e. the TCP acknowledgement mechanism limits the downloaded content so that it does not overflow the player buffer). These methods improve the quality of experience (QoE) by reducing stutters, since the content is transferred to the client buffer, thus minimizing the possibility of the player stopping during content presentation. However, one disadvantage is that when the transit network bandwidth is high compared to the media stream rate, significant content is downloaded to the player, and if the user cancels the current presentation, such as by moving on to a different media clip, the network bandwidth used for the downloaded content that is not viewed is wasted.
Another mechanism, used by Adobe Systems, Inc., is known as the Real Time Messaging Protocol (RTMP). This mechanism controls content delivery using the RTMP protocol over TCP, by controlled streaming of the content. The streaming mechanism maintains a window of what has been delivered to the player relative to what is currently being played to the user. Thus, bandwidth wasted due to user cancellation of active sessions is reduced, by limiting the content delivered to locations relatively close to the current display position. RTMP also has options to use HTTP or HTTPS as transports.
Another mechanism is the Real Time Streaming Protocol/Real-time Transport Protocol (RTSP/RTP). In this protocol, the server controls the rate of delivery of the content to match the presentation rate, rather than delivering at the maximum rate of the underlying transport, such as TCP or UDP over IP.
In addition, several HTTP streaming protocols have been defined to deliver live or stored multimedia content at a controlled rate (determined by the server or based on cooperation between the client and server) that matches the multimedia stream rate.
RTMP and HTTP streaming methods pace the content delivery by limiting the rate at which content is delivered. RTMP uses protocol components in the client and the server, whereas HTTP streaming uses additional tags in the HTTP requests and responses.
In Adaptive Bit Rate streaming (ABR streaming), the client monitors the bandwidth to the server at the start of the multimedia session and at regular intervals during playback. Based on the monitored bandwidth, the client selects among alternative encodings, with different screen resolutions, of the same content. The switching between alternative resolutions of the same media content is done explicitly by the client. Several applications, such as Ankeena TV, use ABR streaming over HTTP for live content delivery.
However, in this protocol, it is the client that determines the available bandwidth. Thus, in a radio access network, where devices are routinely added and removed from a particular cell, the client's limited knowledge of the overall network utilization may compromise its ability to accurately predict a suitable encoding and screen resolution in a timely manner.
It would be advantageous if a component in the radio access network, with visibility to total network traffic, were able to determine when a particular user was requesting multimedia content and based on that total network traffic, were able to configure the encoding and resolution of that multimedia content to maximize the user's quality of experience. In addition, it would be beneficial if that component could decouple wireless RAN traffic from core network traffic, by buffering multimedia content and delivering this content in a just-in-time manner to the end user device.
A network device capable of understanding communications between an end user and the core network on a RAN is disclosed. In some embodiments, the device is able to decode the control plane and the user plane. As such, it is able to determine when the end user has requested multimedia content. Once this is known, the device can optimize the delivery of that content in several ways. In one embodiment, the device requests the content from the content server (located in the core network) and transmits this content in a just-in-time manner to the end user. In another embodiment, the device automatically changes or limits the options for encoding and resolution of the content available to the client, or proactively controls bandwidth for the specific flow so that the end user device initiates switching to a different encoding/resolution, based on overall monitored network traffic. In another embodiment, the device automatically selects the appropriate format and resolution based on overall bandwidth limitations, independent of the end user.
The device 112 is capable of understanding communications between UE 107 and the core network. In some embodiments, the device 112 is able to decode the control plane and the user plane. Such a device is described in co-pending patent application Ser. No. 12/536,537, filed Aug. 6, 2009, the disclosure of which is incorporated herein by reference in its entirety. As such, it is able to determine when the UE 107 has requested multimedia content. Once this is known, the device 112 can optimize the delivery of that content in several ways. In one embodiment, the device 112 requests the content from the server (located on the internet) and transmits this content in a just-in-time manner to the UE 107. In another embodiment, the device 112 automatically changes or limits the options for encoding/resolution available to the client, and/or changes the delivery rate (bandwidth) for the specific multimedia flow, thereby causing the client device to initiate switching to a different resolution based on the overall monitored network conditions and traffic to the client device. If the available bandwidth is insufficient to admit the newly requested video flow, the device 112 may block sending the video, or return an error code indicating unavailability of resources, thus improving the fair allocation of network resources to other users and not degrading the quality of experience of already-started multimedia flows. In another embodiment, the device 112 selects the appropriate format and resolution based on overall bandwidth limitations, independent of the UE 107.
In another embodiment, a dedicated hardware device having embedded instructions or state machines may be used to perform the functions described. Throughout this disclosure, the terms “control logic” and “processing unit” are used interchangeably to designate an entity adapted to perform the set of functions described.
The device also contains software capable of performing the functions described herein. The software may be written in any suitable programming language, and the choice is not limited by this disclosure. Additionally, all applications and software described herein are computer executable instructions that are contained on computer-readable media. For example, the software and applications may be stored in a read only memory, a rewritable memory, or within an embedded processing unit. The particular computer on which this software executes is application dependent and not limited by the present invention.
Specifically, in one embodiment, the software includes a content-aware pacing algorithm. This algorithm monitors the file parameters delivered in a container file. Based on these parameters and the available RAN bandwidth, the pacing algorithm delivers multimedia data to UE 107 so as to optimize playback quality and RAN network usage.
As described above, a client or UE 107 may have a media player, such as a Flash-based video player. Media players typically buffer a predetermined amount of data before the video begins playing. However, once the media player begins displaying the content, each subsequent portion of content must be delivered to the client before it is needed for display in order for the video to be stutter free. Hence, the device 112, and specifically the content-aware pacing algorithm, uses a first configuration parameter referred to as keep_ahead, which is the amount of data, measured in seconds of playback, needed before starting to play the video. In other words, the video file is played in a media player, and as such, it takes a determinable time to display the content. Thus, when keep_ahead amount of data is provided, it is known that this amount of data will require keep_ahead seconds to display. Note that this amount of data is measured in terms of time. Therefore, depending on the video encoding and compression, two segments, each supplying keep_ahead worth of data, may comprise different numbers of bytes.
A second configuration parameter is referred to as latency_period, which is the Round Trip Time (RTT) from the UE 107 to the network device or server delivering the content. The round trip delay may be estimated at different intervals during content delivery, or may be preconfigured. The latency_period parameter is meant to compensate for variations in network latency to ensure stutter-free delivery. Typically, the media player in the UE 107 needs keep_ahead seconds of data before starting the presentation, and needs to continue receiving data at that rate during playback, so that the player does not run out of data. If the network cannot keep up, i.e. cannot keep the buffer filled, the player is forced to pause and stutter during playback of the content.
In other words, in a simple example, the device uses a keep_ahead parameter of 10 seconds and a latency_period of 2 seconds. In this scenario, the device 112 sends content to the UE 107 in 10 second chunks. Knowing that it may take up to 2 seconds (i.e. latency_period) to transfer the next chunk to the UE 107, the device 112 may start transmitting the next chunk 8 seconds later (i.e. keep_ahead minus latency_period). In order to perform this scheme, the device 112 needs to know that the requested content is indeed multimedia, and also must know the amount of data that is required for each time period.
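By way of illustration only, the following is a minimal sketch of the timing arithmetic described in this example. The function name, the use of Python, and the treatment of keep_ahead and latency_period as floating-point seconds are illustrative assumptions and do not represent the actual implementation of device 112.

```python
def next_chunk_send_time(current_time: float, keep_ahead: float, latency_period: float) -> float:
    """Return the time at which the next keep_ahead-sized chunk should start transmitting.

    The next chunk is scheduled keep_ahead - latency_period seconds after the previous
    chunk began, so that even a worst-case round trip completes before the player's
    buffer (holding keep_ahead seconds of content) is drained.
    """
    return current_time + keep_ahead - latency_period


# Example from the text: keep_ahead = 10 s, latency_period = 2 s.
# The chunk sent at t = 0 covers 10 seconds of playback; the next chunk is sent at t = 8 s.
print(next_chunk_send_time(0.0, 10.0, 2.0))  # 8.0
```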
The device and method of the current invention use the metadata stored within the container file that describes the video object, along with a predetermined amount of read-ahead buffering, to ensure that the Quality of Experience (QoE) of the video is retained. In some embodiments, the container file is referred to as an .flv file. The interested reader is referred to http://www.adobe.com/devnet/flv/ for a complete description of the FLV file format. Other container file formats also exist, and the present invention is not limited to this particular format. It is important to note that the device 112 constructs the delivery schedule based on the media container metadata that it monitors and decodes from the incoming stream when the video is first accessed and not yet cached.
As described above and in co-pending patent application Ser. No. 12/536,537, the device 112 intercepts all communications between the UE 107 and the core network. It then decapsulates all traffic from the UE 107 to understand the communications that are occurring and re-encapsulates the traffic before transmitting it upstream to the core network. Similarly, the device 112 decapsulates traffic coming from the core network to understand communications back to the UE 107. As before, it re-encapsulates the data before transmitting it to the UE 107. In one embodiment, when a video download is requested by the UE 107, the device 112 may optionally parse the FLV file returned by the content server in the core network to determine the details of the video. The number of bytes to be downloaded per second is computed by the content-aware pacing algorithm located within the device 112. A brief description of the FLV file format is provided for completeness.
The FLV file contains a header and a body. The header indicates whether there is video and/or audio content in the body of the file. The body of the file describes the stream and is composed of a set of tags. A tag contains the type of stream, the length of the data, and the time of presentation of the data. For example, a tag might indicate that the stream is a video stream containing 99K bytes that should be presented 40 seconds into the clip. Tags can be of three types: audio, video, or script-data. A video tag describes a frame, including the frame type, codec id, and the data. The frame type can be one of five options: a key frame, an inter frame, a disposable inter frame (for H.263 only), a generated key frame, or a command frame. Further, the tag contains information about the type of codec used (JPEG, H.263, Screen video, AVC, etc.). The content-aware pacing algorithm utilizes the timestamp and data size parameters in the FLV tag. However, it can use other information, such as the disposable inter frame type, to drop frames in case of buffer under-runs. While the operation described here uses a media container of type FLV, the current methods are applicable to other container types as well (for example, MP4, 3GP, 3G2) and are not limited by this disclosure.
The content-aware algorithm parses the FLV tags for the timestamp and data size parameters. For VBR (Variable Bit Rate) video, the amount of data needed to play a second of video varies. The content-aware pacing algorithm creates a transmit schedule based on the information parsed from the FLV file. It computes the amount of data that needs to be delivered per second from the timestamps for frame (FLV tag) delivery and the associated data lengths. In some embodiments, this transmit schedule is generated as the file is received from the content server, and is kept only until the content has been transmitted to the UE 107. In other embodiments, the transmit schedule, in the form of per-second information, is stored in a schedule file. This schedule file may be stored in the storage element 202 of the device 112. The schedule file contains entries for the amount of video content to be served per second. The content-aware pacing algorithm uses the schedule file along with the configured buffering parameters to implement just-in-time video content delivery to the end user.
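By way of illustration, a simplified sketch of such parsing and per-second schedule construction is shown below. It follows the published FLV tag layout (a 1-byte tag type, a 3-byte data size, and a 3-byte timestamp with a 1-byte extension), but the in-memory schedule representation is an illustrative assumption; the actual schedule file format used by device 112 is not limited to this form, and error handling for malformed files is omitted.

```python
import struct
from collections import defaultdict


def build_per_second_schedule(flv_bytes: bytes) -> dict[int, int]:
    """Scan the FLV tags and return a map of playback second -> bytes needed that second."""
    data_offset = struct.unpack(">I", flv_bytes[5:9])[0]  # FLV header size, normally 9
    pos = data_offset
    schedule: dict[int, int] = defaultdict(int)
    while pos + 4 + 11 <= len(flv_bytes):
        pos += 4  # skip the 4-byte PreviousTagSize field
        tag_type = flv_bytes[pos]                                 # 8 = audio, 9 = video, 18 = script
        data_size = int.from_bytes(flv_bytes[pos + 1:pos + 4], "big")
        timestamp_ms = (int.from_bytes(flv_bytes[pos + 4:pos + 7], "big")
                        | (flv_bytes[pos + 7] << 24))             # 24-bit timestamp plus extension byte
        if tag_type in (8, 9):
            schedule[timestamp_ms // 1000] += 11 + data_size      # tag header plus payload bytes
        pos += 11 + data_size
    return dict(schedule)
```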
The adaptive chunked content-aware pacing algorithm of the current invention starts by scheduling delivery of enough content data to cover keep_ahead seconds of play time. Since the underlying transport is TCP, the actual delivery time for sending the scheduled chunk depends on the link bandwidth achieved to the client. The next data chunk delivery happens at (current_time + keep_ahead - latency_period). The amount of data is determined from the schedule file associated with the video object. Assuming a keep_ahead of 5 seconds and a latency_period of 1 second, the next chunk is scheduled at time=4 seconds. Subsequent chunks are scheduled 4 seconds after the transmission of the previous chunk. While the example shows burst scheduling at regular intervals, the scheduling intervals for chunks could vary depending on the current position in the media play, and on relative variations of the client link bandwidth as previous chunks or other multimedia, web pages, files, etc. are transmitted to the same client device or to other devices that share the same sector. At every burst, a minimum of keep_ahead worth of data is sent. Each data chunk delivery happens at the maximum transfer rate that the underlying network layer supports.
As and when video content is delivered to the device 112, the video delivery schedule is built incrementally and, optionally, is written to a schedule file, examples of which are shown in
As the device 112 receives data bytes of the multimedia content, it sends this data until keep_ahead worth of data has been transmitted to the UE 107. Once this amount of data is transmitted to the UE 107, the device 112 pauses until the next configured chunk delivery time, as described above. The device 112 stores any bytes that it receives from the content server, such as in cache 205, for future transmissions to the UE 107. The device 112 continues to receive data from the content server and continues building the schedule file. At the specified time, such as keep_ahead minus latency_period seconds after the start of the previous transmission, the device 112 sends a second chunk of keep_ahead worth of data to the UE 107. This continues until either the file is completely transferred, or the user navigates away from the video. This latter action is typically detected by the device 112 by the termination of the TCP connection to the UE 107. Note that the interface on one side of the device 112 (i.e. to/from the content server) is typically wireline and therefore has more predictable bandwidth and latency characteristics. The device 112 also interfaces to the UE 107, where wireless communication and varying numbers of devices and usage patterns impact latency. Thus, the device 112 serves to separate these two interfaces, allowing the content server to deliver data at one rate while pacing the data to the UE 107 at a second rate.
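The following sketch illustrates this just-in-time delivery loop under stated assumptions: the schedule is already complete, the content is already cached, and the send callback transmits at the maximum rate the transport supports. The names and the callback interface are illustrative and are not the actual interface of device 112.

```python
import time


def pace_delivery(schedule: dict[int, int], send, keep_ahead: int, latency_period: float) -> None:
    """Deliver content in keep_ahead-second bursts, pausing between bursts.

    schedule maps each playback second to the bytes required for that second;
    send(num_bytes, start_second) is a caller-supplied callback that transmits
    the corresponding cached bytes toward the UE at full link rate.
    """
    total_seconds = max(schedule) + 1 if schedule else 0
    position = 0  # playback second up to which data has already been delivered
    while position < total_seconds:
        burst_start = time.monotonic()
        burst_bytes = sum(schedule.get(s, 0) for s in range(position, position + keep_ahead))
        send(burst_bytes, position)
        position += keep_ahead
        # The next burst begins keep_ahead - latency_period seconds after this burst started.
        wait = (burst_start + keep_ahead - latency_period) - time.monotonic()
        if wait > 0:
            time.sleep(wait)
```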
While
The determination of the keep_ahead and latency_period parameters may be done in a variety of ways. In some embodiments, these values are fixed and remain constant for all UEs. In other embodiments, the parameters vary based on network conditions. For example, the latency_period is most obviously related to overall network bandwidth. The device 112 can monitor actual RAN bandwidth and adjust the latency_period parameter in real time. In other embodiments, this parameter may vary based on other characteristics, such as time of day. For example, the device 112 may decrease the latency_period during the late night hours, since the expected network traffic is lower. In other embodiments, the latency_period may be based on previously monitored bandwidth to this UE 107.
Similarly, the keep_ahead parameter can be modified in a number of ways. The choice of an optimal value for the keep_ahead parameter is based on balancing two competing goals. On one hand, smaller keep_ahead values allow the video to begin playing on the UE 107 sooner. These small values also minimize wasted bandwidth in the event that the user navigates away from the video. On the other hand, larger keep_ahead values are less susceptible to abrupt changes in network bandwidth, thereby minimizing the probability of pausing or stuttering. As suggested above, monitored network bandwidth, time of day, previously observed bandwidth or other characteristics can be used to set or modify the keep_ahead parameter.
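As a purely illustrative sketch of such a policy, the thresholds and the time-of-day rule below are arbitrary assumptions chosen only to show how the two parameters might be derived from monitored conditions; they are not values prescribed by this disclosure.

```python
def select_pacing_parameters(monitored_bw_kbps: float, stream_rate_kbps: float,
                             hour_of_day: int) -> tuple[int, float]:
    """Return (keep_ahead seconds, latency_period seconds) from monitored conditions.

    A larger keep_ahead is chosen when the headroom over the stream rate is small,
    to ride out bandwidth dips; latency_period is reduced during quiet overnight
    hours when RAN traffic is expected to be lower.
    """
    headroom = monitored_bw_kbps / max(stream_rate_kbps, 1.0)
    keep_ahead = 5 if headroom >= 2.0 else 10 if headroom >= 1.2 else 20
    latency_period = 1.0 if 0 <= hour_of_day < 6 else 2.0
    return keep_ahead, latency_period
```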
There is also the possibility of a buffer under-run, in which the server is not able to send keep_ahead worth of data within the specified time interval. In this case, the content-aware pacing algorithm sends data at whatever rate the underlying transport layer supports, and the benefits of pacing are not realized. Optionally, however, information about disposable inter frames can be used to drop frames in the event of an under-run.
In another embodiment, the device 112 is able to modify and select video formats and resolutions based on network activity. Again, since the device 112 is able to intercept and decapsulate the user and control planes, it is able to determine the content of requests going from the UE 107 to a content server. Thus, the device 112 is able to modify these requests, or the replies to these requests, as appropriate. Using this technique, the device 112 is able to set the initial resolution of the content; this is referred to as Network Controlled Bit Rate selection.
In one embodiment, the UE 107 requests a video clip from a content server, such as YouTube. In response, the content server may provide a listing of the different resolutions in which that video is available. This list may be referred to as a format map. Typically, this list is supplied to the UE 107, which then selects a particular resolution. However, the device 112, having knowledge of this transaction, may modify the format map returned to the UE 107 based on observed network bandwidth. For example, the client may request a video which is available in a plurality of formats, from 240p to 720p. In response to a request from the client, the content provider will supply this full list of formats. The device 112 may, based on observed bandwidth, determine that it is not possible to display the 720p version of this video without pauses or stutters. Therefore, before sending this response back to the client, the device 112 modifies the response and removes any formats which require bandwidth in excess of that which is currently available. Thus, the response received by the client contains only those formats which the device 112 believes can be displayed with an acceptable QoE.
In another embodiment, the request from the UE 107 may contain a flag, such as “&hd=1”, indicating that the content server should include high definition versions of the video in the format map. The device 112, determining that a high definition video cannot be properly displayed, may modify the request before transmitting it to the content server, such as by removing the flag “&hd=1”. In this way, the content server will only return those formats which are not high definition.
Therefore, the device 112 is able to modify the format map received by the UE 107, either by modifying the request to the content server, or by modifying the response from the content server.
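An illustrative sketch of both modifications follows. The per-format bit-rate table, the representation of the format map as a list of labels, and the handling of the “&hd=1” flag as a URL query parameter are all assumptions made for the example; they do not reflect any particular content server's actual format map encoding.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Rough per-resolution bandwidth requirements in kbps; the values are illustrative only.
FORMAT_BITRATE_KBPS = {"240p": 400, "360p": 750, "480p": 1000, "720p": 2500}


def filter_format_map(format_map: list[str], available_kbps: float) -> list[str]:
    """Response-side modification: drop formats whose bit rate exceeds the observed RAN bandwidth."""
    return [fmt for fmt in format_map
            if FORMAT_BITRATE_KBPS.get(fmt, float("inf")) <= available_kbps]


def strip_hd_flag(request_url: str) -> str:
    """Request-side modification: remove an '&hd=1' style flag before forwarding upstream."""
    scheme, netloc, path, query, frag = urlsplit(request_url)
    params = [(k, v) for k, v in parse_qsl(query) if not (k == "hd" and v == "1")]
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))


# With roughly 1.1 Mbps available, only formats up to 480p remain in the returned map,
# and the high-definition request flag is removed before the request reaches the server.
print(filter_format_map(["240p", "360p", "480p", "720p"], 1100))
print(strip_hd_flag("http://video.example.com/watch?v=abc&hd=1"))
```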
In another embodiment, the device 112 modifies the resolution of the video during transmission. For example, smooth streaming or adaptive streaming over HTTP is known. In this mechanism, the client continues to monitor its perceived network bandwidth, and adjusts the resolution or format of the video being played in real time. In other words, the client may be playing a 720 p version of a video and detect that the bandwidth of the network has dramatically dropped. In response, it may, without closing the media player, request a lower resolution version of the video, thereby changing resolution in mid-stream. Variations of this technique exist in Microsoft Silverlight, Netflix player and Adobe OSMF. However, in all of these embodiments, it is the client that determines the available bandwidth. Therefore, the client is typically reactive to changes (both positive and negative) in bandwidth.
RAN networks offer unique challenges in that the number of devices in the network and the amount of traffic each generates can change rapidly and unpredictably. Thus, it would be beneficial if a component, such as device 112, were able to monitor the network activity and make decisions about changing video resolutions. In one embodiment, the device 112 transmits video data to the UE 107. This can be done according to the technique described above, or using other traditional methods. While this transmission of the video is occurring, the device 112 continuously monitors network activity. If a change is noted in network bandwidth (either positive or negative), the device 112 can adapt.
In one embodiment, the device 112, upon determining a change in RAN bandwidth, begins requesting the alternative resolution from the content server (if it is not cached). Once the device 112 has received this video in the alternative resolution, it automatically begins transmitting it to the UE 107. The device 112 may recognize a convenient boundary point (for example, at key frames or GOP frame boundaries) for this transition. This embodiment assumes that the media player of the UE 107 is able to adapt to changes in resolution which are made unilaterally by the device 112.
In other embodiments, the media player of the UE 107 may only be able to adapt to changes in resolution if those changes are requested by the media player itself. In this embodiment, the device 112 may modify its behavior so as to encourage the media player to change resolutions. For example, assume that the RAN bandwidth has greatly increased. In order to get the media player to request a higher resolution format of the video, the device 112 may accelerate its delivery of video data to the UE 107, so as to fill or overflow its buffer. This increased data rate to the UE 107 may cause the media player to request a higher resolution format. Conversely, the device 112 may slow its delivery of data to the UE 107 so that the media player's buffer empties or nearly empties, causing the media player to request a lower resolution version.
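A sketch of how such a delivery-rate adjustment might be chosen is shown below; the headroom thresholds and scaling factors are illustrative assumptions, not values prescribed by this disclosure.

```python
def delivery_rate_for_flow(ran_bw_kbps: float, stream_rate_kbps: float) -> float:
    """Choose a delivery rate (kbps) intended to nudge the client's own rate selection.

    With ample RAN headroom, deliver faster than real time so the player's buffer fills
    and the player requests a higher resolution; under congestion, deliver slower than
    real time so the buffer drains and the player requests a lower resolution.
    """
    headroom = ran_bw_kbps / max(stream_rate_kbps, 1.0)
    if headroom >= 2.0:
        return stream_rate_kbps * 1.5   # accelerate delivery: the buffer fills
    if headroom <= 1.1:
        return stream_rate_kbps * 0.7   # throttle delivery: the buffer drains
    return stream_rate_kbps             # otherwise match the presentation rate
```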
If these techniques are used in conjunction with the initial resolution selection, the network is able to determine and optimize video resolutions throughout the RAN, based on actual monitored traffic and congestion.
In another embodiment, the device 112 parses the communications between the UE 107 and the core network to determine that the UE 107 is requesting a multimedia file. The device 112, aware of the bandwidth required by such a multimedia file, first determines the available bandwidth of the RAN. For example, if there is little other traffic, the device 112 will honor the request. However, if the available bandwidth is low, due to a high number of devices, a large number of data-intensive transactions, or a combination of these factors, the device 112 may deny the request of the UE 107. In this case, the device 112 may return an error message or simply not deliver the file to the UE 107 if the available bandwidth is below a predetermined threshold. In some embodiments, the device 112 denies the request before the request for data is forwarded to the content server. In other embodiments, the device 112 makes the determination to deny the request after receiving at least part of the file from the content server, but prior to delivering the content to the UE 107.
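A minimal sketch of such an admission decision is given below; the 10% safety margin and the kbps units are illustrative assumptions only.

```python
def admit_multimedia_flow(requested_kbps: float, ran_capacity_kbps: float,
                          in_use_kbps: float) -> bool:
    """Admit a new multimedia flow only if sufficient RAN bandwidth remains.

    The flow is admitted when the residual capacity (total minus what existing
    flows already consume) covers the requested rate with a small safety margin.
    """
    residual = ran_capacity_kbps - in_use_kbps
    return residual >= requested_kbps * 1.1


# Example: a 2.5 Mbps (720p) request on a sector with 10 Mbps capacity and 8.5 Mbps in use
# is denied; the device may then return an error or simply withhold the file.
print(admit_multimedia_flow(2500, 10_000, 8_500))  # False
```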
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described (or portions thereof). It is also recognized that various modifications are possible within the scope of the claims. Other modifications, variations, and alternatives are also possible. Accordingly, the foregoing description is by way of example only and is not intended as limiting.
This application claims priority of U.S. Provisional Patent Application Ser. No. 61/313,872, filed Mar. 15, 2010, the disclosure of which is incorporated herein by reference in its entirety. This application is a continuation-in-part of U.S. patent application Ser. No. 12/696,378, filed Jan. 29, 2010, which claims priority of U.S. Provisional Patent Application Ser. No. 61/148,454, filed Jan. 30, 2009, the disclosures of which are incorporated herein by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
5990810 | Williams | Nov 1999 | A |
6105064 | Davis et al. | Aug 2000 | A |
6694349 | Zou | Feb 2004 | B1 |
6798786 | Lo et al. | Sep 2004 | B1 |
6907501 | Tariq et al. | Jun 2005 | B2 |
6917984 | Tan | Jul 2005 | B1 |
6996085 | Travostino et al. | Feb 2006 | B2 |
7047312 | Aweya et al. | May 2006 | B1 |
7177273 | Peelen et al. | Feb 2007 | B2 |
7318100 | Demmer et al. | Jan 2008 | B2 |
7333431 | Wen et al. | Feb 2008 | B2 |
7412531 | Lango et al. | Aug 2008 | B1 |
7489690 | Kakadia | Feb 2009 | B2 |
7568071 | Kobayashi et al. | Jul 2009 | B2 |
7583594 | Zakrzewski | Sep 2009 | B2 |
7602872 | Suh et al. | Oct 2009 | B2 |
7710873 | Pulkka et al. | May 2010 | B2 |
7715418 | Cho et al. | May 2010 | B2 |
7734804 | Lorenz et al. | Jun 2010 | B2 |
7739383 | Short et al. | Jun 2010 | B1 |
7797369 | Glickman | Sep 2010 | B2 |
7852763 | Ghanadan et al. | Dec 2010 | B2 |
7965634 | Aoyanagi | Jun 2011 | B2 |
7991905 | Roussos et al. | Aug 2011 | B1 |
8111630 | Kovvali et al. | Feb 2012 | B2 |
8161158 | Curcio et al. | Apr 2012 | B2 |
8190674 | Narayanan et al. | May 2012 | B2 |
8208430 | Valmikam et al. | Jun 2012 | B2 |
8576744 | Kovvali et al. | Nov 2013 | B2 |
8717890 | Kovvali et al. | May 2014 | B2 |
8755405 | Kovvali et al. | Jun 2014 | B2 |
8799480 | Kovvali et al. | Aug 2014 | B2 |
20030003919 | Beming et al. | Jan 2003 | A1 |
20030095526 | Froehlich et al. | May 2003 | A1 |
20030120805 | Couts et al. | Jun 2003 | A1 |
20030145038 | Bin Tariq et al. | Jul 2003 | A1 |
20030179720 | Cuny | Sep 2003 | A1 |
20030195977 | Liu et al. | Oct 2003 | A1 |
20040068571 | Ahmavaara | Apr 2004 | A1 |
20040098748 | Bo et al. | May 2004 | A1 |
20040193397 | Lumb et al. | Sep 2004 | A1 |
20040214586 | Loganathan et al. | Oct 2004 | A1 |
20040223505 | Kim et al. | Nov 2004 | A1 |
20040240390 | Seckin | Dec 2004 | A1 |
20040258070 | Arima | Dec 2004 | A1 |
20040264368 | Heiskari et al. | Dec 2004 | A1 |
20050033857 | Imiya | Feb 2005 | A1 |
20050047416 | Heo et al. | Mar 2005 | A1 |
20050097085 | Shen et al. | May 2005 | A1 |
20050117583 | Uchida et al. | Jun 2005 | A1 |
20050135428 | Hellgren | Jun 2005 | A1 |
20050136973 | Llamas et al. | Jun 2005 | A1 |
20050157646 | Addagatla et al. | Jul 2005 | A1 |
20060018294 | Kynaslahti et al. | Jan 2006 | A1 |
20060019677 | Teague et al. | Jan 2006 | A1 |
20060117139 | Kobayashi et al. | Jun 2006 | A1 |
20060159121 | Sakata et al. | Jul 2006 | A1 |
20060167975 | Chan et al. | Jul 2006 | A1 |
20060193289 | Ronneke et al. | Aug 2006 | A1 |
20060198378 | Rajahalme | Sep 2006 | A1 |
20060274688 | Maxwell et al. | Dec 2006 | A1 |
20070019553 | Sagfors et al. | Jan 2007 | A1 |
20070019599 | Park et al. | Jan 2007 | A1 |
20070025301 | Petersson et al. | Feb 2007 | A1 |
20070070894 | Wang et al. | Mar 2007 | A1 |
20070113013 | Knoth | May 2007 | A1 |
20070143218 | Vasa | Jun 2007 | A1 |
20070174428 | Lev Ran et al. | Jul 2007 | A1 |
20070223379 | Sivakumar et al. | Sep 2007 | A1 |
20070230342 | Skog | Oct 2007 | A1 |
20070248048 | Zhu et al. | Oct 2007 | A1 |
20070254671 | Liu | Nov 2007 | A1 |
20080026789 | Llamas et al. | Jan 2008 | A1 |
20080052366 | Olsen et al. | Feb 2008 | A1 |
20080081637 | Ishii et al. | Apr 2008 | A1 |
20080082753 | Licht et al. | Apr 2008 | A1 |
20080162713 | Bowra et al. | Jul 2008 | A1 |
20080186912 | Huomo | Aug 2008 | A1 |
20080191816 | Balachandran et al. | Aug 2008 | A1 |
20080195745 | Bowra et al. | Aug 2008 | A1 |
20080212473 | Sankey et al. | Sep 2008 | A1 |
20080244095 | Vos et al. | Oct 2008 | A1 |
20080273533 | Deshpande | Nov 2008 | A1 |
20080320151 | McCanne et al. | Dec 2008 | A1 |
20090019178 | Melnyk et al. | Jan 2009 | A1 |
20090019229 | Morrow et al. | Jan 2009 | A1 |
20090024835 | Fertig et al. | Jan 2009 | A1 |
20090029644 | Sue et al. | Jan 2009 | A1 |
20090043906 | Hurst et al. | Feb 2009 | A1 |
20090156213 | Spinelli et al. | Jun 2009 | A1 |
20090196233 | Zhu et al. | Aug 2009 | A1 |
20090210904 | Baron et al. | Aug 2009 | A1 |
20090270098 | Gallagher et al. | Oct 2009 | A1 |
20090274161 | Liu | Nov 2009 | A1 |
20090274224 | Harris | Nov 2009 | A1 |
20090287842 | Plamondon | Nov 2009 | A1 |
20090291696 | Cortes et al. | Nov 2009 | A1 |
20100020685 | Short et al. | Jan 2010 | A1 |
20100023579 | Chapweske et al. | Jan 2010 | A1 |
20100034089 | Kovvali et al. | Feb 2010 | A1 |
20100057887 | Wang et al. | Mar 2010 | A1 |
20100067378 | Cohen et al. | Mar 2010 | A1 |
20100085962 | Issaeva et al. | Apr 2010 | A1 |
20100088369 | Sebastian et al. | Apr 2010 | A1 |
20100106770 | Taylor et al. | Apr 2010 | A1 |
20100158026 | Valmikam et al. | Jun 2010 | A1 |
20100161756 | Lewis et al. | Jun 2010 | A1 |
20100184421 | Lindqvist et al. | Jul 2010 | A1 |
20100195602 | Kovvali et al. | Aug 2010 | A1 |
20100205375 | Challener et al. | Aug 2010 | A1 |
20100215015 | Miao et al. | Aug 2010 | A1 |
20100254462 | Friedrich et al. | Oct 2010 | A1 |
20100272021 | Kopplin et al. | Oct 2010 | A1 |
20100291943 | Mihaly et al. | Nov 2010 | A1 |
20100325334 | Tsai et al. | Dec 2010 | A1 |
20110116460 | Kovvali et al. | May 2011 | A1 |
20110243553 | Russell | Oct 2011 | A1 |
20120077500 | Shaheen | Mar 2012 | A1 |
20120099533 | Kovvali et al. | Apr 2012 | A1 |
20120120788 | Hu | May 2012 | A1 |
20120184258 | Kovvali et al. | Jul 2012 | A1 |
20120191862 | Kovvali et al. | Jul 2012 | A1 |
20120220328 | Yu et al. | Aug 2012 | A1 |
20130246638 | Kovvali et al. | Sep 2013 | A1 |
20140056137 | Kovvali et al. | Feb 2014 | A1 |
Number | Date | Country |
---|---|---|
1754369 | Mar 2006 | CN |
101299770 | Nov 2008 | CN |
1445703 | Aug 2004 | EP |
1523171 | Apr 2005 | EP |
2197187 | Jun 2010 | EP |
2001-518744 | Oct 2001 | JP |
2006-92341 | Apr 2006 | JP |
2006-155121 | Jun 2006 | JP |
2006-196008 | Jul 2006 | JP |
2007-536818 | Dec 2007 | JP |
9917499 | Apr 1999 | WO |
2005109825 | Nov 2005 | WO |
2007016707 | Feb 2007 | WO |
2008076073 | Jun 2008 | WO |
2009096833 | Aug 2009 | WO |
2010017308 | Feb 2010 | WO |
2010060438 | Jun 2010 | WO |
2010088490 | Aug 2010 | WO |
Entry |
---|
European Communication mailed Jul. 5, 2012 in co-pending European patent application No. EP 1073640.6. |
3GPP TR 23.829 V0.4.0 (Jan. 2010), Technical Report, “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Local IP Access and Selected IP Traffic Offload; (Release 10)”, 29 pages, 3GPP Organizational Partners. |
Http header enrichment, http://news.thomasnet.com/fullstory/Software-optimizes-high-speed-wireless-data-networks-485934, “Software optimizes high-speed wireless data networks”, Jun. 26, 2006, 10 pages, Thomasnet News. |
International Search Report and Written Opinion dated May 13, 2011 in corresponding foreign patent application No. PCT/ US 11/28477. |
Notice of Allowance mailed Oct. 13, 2011 in co-pending U.S. Appl. No. 12/536,537. |
Office Action mailed Nov. 10, 2011 in co-pending U.S. Appl. No. 12/645,009. |
Office Action mailed Oct. 23, 2012 in co-pending U.S. Appl. No. 12/696,378. |
Office Action mailed Jan. 2, 2013 in co-pending U.S. Appl. No. 13/339,629. |
Chinese Communication dispatched Feb. 16, 2013 in co-pending Chinese patent application No. CN 201080010586.X. |
International Search Report/Written Opinion mailed Feb. 29, 2012 in co-pending PCT application No. PCT/US2011/044156. |
International Search Report/Written Opinion mailed Feb. 29, 2012 in co-pending PCT application No. PCT/US2011/044361. |
International Preliminary Report on Patentability mailed Feb. 23, 2012 in co-pending PCT application No. PCT/US09/52871. |
Proceedings of the USENIX Symposium on Internet Technologies and Systems, Dec. 1997, “Cost-Aware WWW Proxy Caching Algorithms”, 15 pages, CAO, et al. |
The Book of Webmin . . . Or: How I Learned to Stop Worrying and Love UNIX, 2003, Chapter 12—Squid, 23 pages, Cooper. |
Proceedings of the 3rd International Workshop on Modeling Analysis and Simulation of Wireless and Mobil Systems (MSWIM '00), ACM, 2000, pp. 77-84, “Prefetching Policies for Energy Saving and Latency Reduction in a Wireless Broadcast Data Delivery System”, Grassi. |
Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, IEEE, 2007, “An Integrated Prefetching and Caching Scheme for Mobile Web Caching System”, p. 522-527, Jin, et al. |
Proceedings of the 22nd International Conference on Distributed Computing Systems (ICDCS '02), IEEE, 2002, “Power-Aware Prefetch in Mobile Environments”, 8 pages, Yin, et al. |
Office Action mailed Apr. 12, 2013 in co-pending U.S. Appl. No. 13/185,066. |
Office Action dated Mar. 15, 2011 in co-pending U.S. Appl. No. 12/536,537. |
International Search Report/Written Opinion dated Oct. 6, 2009 in co-pending international application PCT/US2009/052871. |
International Search Report/Written Opinion dated Mar. 1, 2010 in co-pending international application PCT/US2009/069260. |
International Search Report/Written Opinion dated Mar. 12, 2010 in co-pending international application PCT/US2010/22542. |
RFC 1644-T/TCP—TCP Extensions for Translations Functional Specification, Jul. 1994—http://www.faqs.org/rfcs/rfc1644.html, R. Braden, et al. |
RFC 3135—Performance Enhancing Proxies Intended to Mitigate Link-Related Degradations, Jun. 2001—http://www.faqs.org.rfcs/rfc3135.html, J. Border et al. |
RFC 2045—Multipurpose Internet Mail Extensions (MIME) Part One: Formal of Internet Message Bodies; Nov. 1996—http://www.faqs.org/rfcs/r.fc2045.html, N. Freed, et al. |
Notice of Allowance mailed Apr. 12, 2012 in co-pending U.S. Appl. No. 12/645,009. |
Chinese Communication, with English translation, issued May 10, 2013 in co-pending Chinese patent application No. 200980139488.3. |
English translation of Japanese Communication, mailed Aug. 13, 2013 in co-pending Japanese patent application No. 2011-522222. |
Notice of Allowance mailed Sep. 19, 2013 in co-pending U.S. Appl. No. 13/339,629. |
International Search Report and Written Opinion completed Mar. 3, 2011 in PCT application No. PCT/US2010/056073. |
Office Action mailed Sep. 18, 2012 in co-pending U.S. Appl. No. 12/942,913. |
Office Action mailed Jul. 19, 2013 in co-pending U.S. Appl. No. 12/942,913. |
Notice of Allowance mailed May 29, 2013 in co-pending U.S. Appl. No. 13/339,629. |
HP Labs Report No. HPL-1999-69, May 1999, pp. 1-17, “Enhancement and Validation of Squid's Cache Replacement Policy”, 18 pages, Dilley, et al. |
Final Rejection mailed Nov. 26, 2013 in co-pending U.S. Appl. No. 12/696,378. |
Notice of Allowance mailed Dec. 23, 2013 in co-pending U.S. Appl. No. 12/696,378. |
Notice of Allowance mailed Jan. 6, 2014 in co-pending U.S. Appl. No. 12/942,913. |
Office Action—Restriction—mailed Dec. 26, 2013 in co-pending U.S. Appl. No. 13/183,777. |
Final Rejection mailed Nov. 18, 2013 in co-pending U.S. Appl. No. 13/185,066. |
Office Action mailed Aug. 26, 2014 in co-pending U.S. Appl. No. 14/071,009. |
Office Action mailed May 23, 2014 in co-pending U.S. Appl. No. 13/183,777. |
Notice of Allowance mailed Mar. 27, 2014 in U.S. Appl. No. 13/185,066 (now US Patent No. 8,799,480). |
Number | Date | Country | |
---|---|---|---|
20110167170 A1 | Jul 2011 | US |
Number | Date | Country | |
---|---|---|---|
61313872 | Mar 2010 | US | |
61148454 | Jan 2009 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12696378 | Jan 2010 | US |
Child | 13048378 | US |