Embodiments of the invention relate to the latency of data exchanged by a cable service provider via the Data Over Cable Service Interface Specification (DOCSIS) protocol.
Many users of residential broadband Internet connections experience latency at some point, which is a frustrating experience. For example, cable subscribers who play online games may occasionally experience latency while playing their game, which can negatively affect game performance. To avoid such frustrations, many cable subscribers pay for a high service tier in the hopes of avoiding latency in their Internet connection. However, cable subscribers report that they seldom see an improvement in latency when moving to a higher service tier. The ability to receive an improved experience in broadband Internet connections with less latency is a significant benefit not only to broadband subscribers, but also to cable operators, who can more readily justify incremental service charges for improved quality of service.
Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
Approaches for providing configurable levels of latency for data flows are presented herein. In the following description, for the purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the embodiments of the invention described herein. It will be apparent, however, that the embodiments of the invention described herein may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form or discussed at a high level to avoid unnecessarily obscuring teachings of embodiments of the invention.
Embodiments of the invention are directed towards advancements in how data flows are exchanged between cable modems (CMs) and a cable modem termination system (CMTS). Embodiments enable the levels of latency experienced by data flows exchanged between CMs and the CMTS to be reduced to satisfy certain specified or configurable levels. As a result, cable operators may choose to offer a service embodying the invention to cable subscribers to provide those subscribers with certain guaranteed levels of quality of service, and more particularly, with service that exhibits a guaranteed lower level of latency.
To better understand how embodiments of the invention operate, it will be helpful to appreciate how the existing art behaves. To that end, consider
A cable modem (CM), such as CM 110 of
A cable modem termination system (CMTS), such as CMTS 120 of
Data that is sent from the CMTS to the CM is said to be sent in the downstream direction, while data that is sent from the CM to the CMTS is said to be sent in the upstream direction. For example, communication path 130 is in the downstream direction, and communication path 140 is in the upstream direction. It is common for the throughput capacity of a communication path to differ between the upstream direction and the downstream direction, e.g., downstream communication path 130 supports 100 Mbps, while upstream communication path 140 supports 20 Mbps. In a conventional system, the capacity of the downstream direction far exceeds that of the upstream direction, as shown in the example of
Latency may be experienced by any cable subscriber connecting to their CM 110 or by a cable operator at CMTS 120. The latency experienced at either CM 110 or CMTS 120 may be based on the aggregate behavior of both downstream communication path 130 and upstream communication path 140, as any delays in any leg of the roundtrip path of exchanged communications between both CM 110 and CMTS 120 contribute to the overall latency experienced at either end.
Embodiments of the invention are concerned with providing improved performance by handling Internet traffic differently based on the application type of the application associated with the data flow. Application type, in this context, refers to the type of queuing behavior exhibited by the application used by the cable subscriber when exchanging data flows between that application and the CMTS. It is observed that queuing behavior is an important factor in the overall latency of an application and in the overall variation in latency across the system.
Embodiments of the invention treat data flows from different application types differently. Any number of application types may be recognized and handled by embodiments of the invention. Two different application types will be discussed herein, namely the queue-building application type and the non-queue-building application type. For this reason, many illustrative embodiments will be discussed in relation to these two application types; however, embodiments may subdivide any application type discussed herein into multiple subtypes or otherwise arrange application behavior into application types in any manner without deviating from the teachings of embodiments of the invention.
A first type of application is referred to herein as a queue-building application. These types of applications send data from the CM to the CMTS at a rate which is typically faster than the communication path over which the data packets travel can currently support. Common examples of this type are applications which utilize the TCP, UDP, or QUIC protocols in issuing traffic flows. The TCP, UDP, and QUIC protocols use legacy flow control algorithms to manage congestion on the communication path.
Most network traffic today, by volume, is issued by a queue-building application. Non-limiting examples of queue-building applications are (a) video services such as YouTube and Netflix and (b) large file or application downloads.
A second type of application is referred to herein as a non-queue-building application. These types of applications issue data flows at a relatively low data rate and generally time their data packets in a way that does not cause queuing in the network. Common examples of this type of application include (a) non-capacity-seeking applications, such as multiplayer online games (such as Fortnite by Epic Games of Cary, North Carolina), and (b) IP-based communication applications (communication applications that communicate using the Internet Protocol (IP)), such as FaceTime or Skype. Non-queue-building applications issue data flows without any feedback on the queueing or delay in the network to rate limit the transmission.
Queue-building applications issue data flows that are not sensitive to latency, as the primary concern is reliability. If data packets of a queue-building application are lost, retransmission of the lost data packets occurs. In contrast, non-queue-building applications issue data flows that are sensitive to latency, and so if any data packets go missing or are not received, then there is no point in resending those missing or unreceived data packets, as any resent data packets will not be received in time to be useful to the recipient.
Anytime that the amount of aggregate traffic to be sent over a communication path exceeds its throughput capacity, it is necessary to delay data packets momentarily in a queue until the opportunity presents itself to send those data packets over the communication path. The magnitude of the delay induced by the queue (the “queueing delay”) directly contributes to the latency experienced over the communication path.
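To make the relationship between queueing and latency concrete, consider the following sketch (written in Python purely for illustration; the link rate, packet size, and arrival rate are invented values rather than parameters of any DOCSIS system). It models a single queue in front of a communication path and shows how the queueing delay grows packet by packet whenever the offered load exceeds the capacity of the path.

```python
# Illustrative sketch only: a single queue in front of a link whose capacity is
# lower than the offered load, so each arriving packet waits a little longer
# than the one before it. All values below are hypothetical.

LINK_RATE_BPS = 20_000_000      # e.g., a 20 Mbps upstream path
PACKET_SIZE_BITS = 12_000       # 1500-byte packets
ARRIVAL_RATE_PPS = 2_000        # offered load of 24 Mbps, exceeding capacity

service_time = PACKET_SIZE_BITS / LINK_RATE_BPS   # seconds to transmit one packet
inter_arrival = 1.0 / ARRIVAL_RATE_PPS            # seconds between packet arrivals

link_free_at = 0.0   # time at which the link next becomes idle
for i in range(101):
    arrival = i * inter_arrival
    start = max(arrival, link_free_at)      # packet waits until the link is free
    queueing_delay = start - arrival
    link_free_at = start + service_time
    if i % 25 == 0:
        print(f"packet {i:3d}: queueing delay = {queueing_delay * 1000:.1f} ms")
```

In this illustration each additional packet waits roughly 0.1 ms longer than its predecessor, so the queueing delay, and therefore the latency experienced over the path, grows steadily for as long as the overload persists.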
In the current state of the art, data packets for data flows associated with all application types are enqueued in a single queue in both the upstream direction and the downstream direction. For example,
It is observed that data flows of queue-building applications are typically the source of queuing delay, and data flows of non-queue-building applications typically suffer from the latency caused by the queue-building application data flows. The conflict between queue-building application data flows and non-queue-building application data flows can occur either within a single physical location serviced by a single cable modem (such as one family member gaming while another family member using the same cable modem is watching a 4K video stream or uploading a large file) or within a particular DOCSIS network segment or Serving Group.
The embodiment depicted in
An aim of an embodiment is not to absolutely minimize latency, but to deliver a cable broadband service which offers consistent, low latency, which is critical to many consumer applications, such as gaming. Embodiments may do so without requiring any specific hardware in either the cable modem or the CMTS, as embodiments may be embodied entirely in software and delivered to a consumer's cable modem via a software update.
Embodiments make use of a dual queueing implementation whereby traffic from queue-building and non-queue-building application flows is treated separately. The dual queueing approach is depicted in
The DOCSIS protocol may be used to assign a priority to a data flow. Unfortunately, in the existing state of the art it is difficult to treat data flows assigned to the same DOCSIS priority level differently, even though data flows possessing the same DOCSIS priority level often have different levels of observed susceptibility to latency in the eyes of the cable subscriber. Advantageously, embodiments of the invention may use the identified application type of a data flow to process that data flow uniquely without dependence upon, or in relation to, any DOCSIS priority level which may be assigned thereto.
The steps of
In step 410, an application type associated with a data flow is identified. The application type identified for the data flow may be one of any number of different application types. For purposes of providing a concrete example, an embodiment will be described in which the application type identified in step 410 is one of two different application types, namely a queue-building application type and a non-queue-building application type.
The queue-building application type is associated with applications that typically send or receive data flows at a faster rate than a communication channel traversed by the data flows can presently support. The non-queue-building application type is associated with applications that typically send or receive data flows no faster than the rate presently supported by the communication channel traversed by the data flows.
In an embodiment, classifier 310 may identify the application type of a data flow. Classifier 310 may identify the application type of a data flow using a variety of different mechanisms, such as but not limited to using one or more known behavior patterns for data flows of one or more application types, using a configuration maintained by the cable modem, using an external classification or marking system, or using machine learning techniques, such as but not limited to supervised learning. Cloud-based games and online games (most of which use the same engine) have similar traffic behavior. By analyzing traffic parameters (such as protocols, rate, average packet size, burstiness, and the like) as features to train a machine learning algorithm, embodiments may cluster online gaming data flows and differentiate them from any other traffic.
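The following sketch is purely illustrative of the machine learning approach described above; the per-flow feature values, the labels, and the choice of a random forest classifier from scikit-learn are assumptions made for the example rather than requirements of any embodiment.

```python
# Toy example: classify a flow as queue-building or non-queue-building from
# traffic features (protocol, packet rate, average packet size, burstiness).
# The training rows below are invented; a real deployment would train on
# measured traffic from known applications.
from sklearn.ensemble import RandomForestClassifier

# Each row: [is_udp, packets_per_second, avg_packet_size_bytes, burstiness]
training_features = [
    [1,   60,  180, 0.10],   # online game: small, regularly paced UDP packets
    [1,   55,  200, 0.20],   # cloud game
    [0, 4000, 1400, 0.90],   # bulk TCP download: large, bursty packets
    [0, 3500, 1350, 0.85],   # large file upload
]
training_labels = ["non_queue_building", "non_queue_building",
                   "queue_building", "queue_building"]

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(training_features, training_labels)

# Classify a newly observed flow from its measured features.
new_flow = [[1, 58, 190, 0.15]]
print(model.predict(new_flow)[0])   # expected: non_queue_building
```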
In an embodiment, classifier 310 may identify the application type of a data flow without inspecting or relying upon any priority assigned using or identified by the Data Over Cable Service Interface Specification (DOCSIS) protocol.
In step 420, data packets of the data flow are enqueued to a particular queue within the cable subscriber's DOCSIS Service Flow based on the identified application type for that data flow. Each of said two or more queues stores data packets of a different application type to be sent across the communication channel. Doing so ensures that only those specific data flows which would benefit are proactively moved to the ‘low latency’ queue.
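By way of illustration only, the following sketch shows one way step 420 could be realized in software; the class and queue names are hypothetical, and details such as queue size limits and DOCSIS service flow encodings are omitted.

```python
from collections import deque

class DualQueueServiceFlow:
    """Hypothetical per-subscriber service flow holding two queues."""

    def __init__(self):
        self.classic_queue = deque()      # data flows of queue-building applications
        self.low_latency_queue = deque()  # data flows of non-queue-building applications

    def enqueue(self, packet, application_type):
        # Step 420: place the packet in the queue matching the application
        # type identified in step 410.
        if application_type == "non_queue_building":
            self.low_latency_queue.append(packet)
        else:
            self.classic_queue.append(packet)

flow = DualQueueServiceFlow()
flow.enqueue("fortnite_packet", "non_queue_building")
flow.enqueue("netflix_packet", "queue_building")
print(len(flow.low_latency_queue), len(flow.classic_queue))   # 1 1
```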
For example,
In step 430, data flows associated with non-queue-building applications are preferentially transmitted over the communication channel so that data flows associated with non-queue-building applications possess a smaller magnitude of latency than data flows associated with queue-building applications. In this example, data flows associated with the online game Fortnite would be preferentially transmitted over the communication channel relative to data flows associated with the streaming service Netflix.
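One possible realization of the preferential transmission of step 430 is a strict-priority dequeue, sketched below; an actual cable modem or CMTS scheduler is more involved (it may, for example, apply weighted scheduling and rate shaping), so the sketch shows only the ordering idea.

```python
from collections import deque

# Hypothetical queues as populated by step 420.
low_latency_queue = deque(["game_pkt_1", "game_pkt_2"])
classic_queue = deque(["video_pkt_1", "video_pkt_2", "video_pkt_3"])

def next_packet():
    """Always drain the low-latency queue before the classic queue."""
    if low_latency_queue:
        return low_latency_queue.popleft()
    if classic_queue:
        return classic_queue.popleft()
    return None

while (packet := next_packet()) is not None:
    print("transmit", packet)   # game packets are transmitted before video packets
```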
Once classifier 310, which resides at a cable modem, identifies a data flow as being of a particular application type, classifier 310 creates an upstream (US) Service Flow on the cable modem so that upstream traffic from low-latency sources will go through a similar separate service flow in the upstream direction. For example,
Embodiments may be further optimized by ensuring that only those cable subscribers with an appropriate ‘low latency’ or ‘gaming’ subscription would be permitted to benefit from the approach of
Embodiments require no specific hardware to be present at either the Cable Modem or the CMTS. Embodiments may be implemented using currently deployed Cable Modems and are not limited to advanced DOCSIS 3.1 devices. Embodiments are fully compatible with DOCSIS 2.0 onwards.
Embodiments of the invention allow the CMTS operator to offer a differentiated low latency ‘gaming’ service to their high-tier customers who wish to experience less latency in their broadband Internet access. Such a low latency approach also has benefits for other services such as Virtual Reality Video and the backhaul of traffic to 5G base stations.
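As a purely illustrative sketch of how such a differentiated service could be gated on the subscriber's tier (the tier names and the lookup below are invented for the example; in practice the entitlement would come from the operator's provisioning systems):

```python
LOW_LATENCY_TIERS = {"gaming", "low_latency", "premium"}   # hypothetical tier names

def low_latency_enabled(subscriber_tier: str) -> bool:
    """Only subscribers on an eligible tier receive the dual-queue treatment."""
    return subscriber_tier.lower() in LOW_LATENCY_TIERS

def select_queue(application_type: str, subscriber_tier: str) -> str:
    if application_type == "non_queue_building" and low_latency_enabled(subscriber_tier):
        return "low_latency_queue"
    return "classic_queue"

print(select_queue("non_queue_building", "gaming"))   # low_latency_queue
print(select_queue("non_queue_building", "basic"))    # classic_queue
```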
In an embodiment, computer system 500 includes processor 504, main memory 506, ROM 508, storage device 510, and communication interface 518. Computer system 500 includes at least one processor 504 for processing information. Computer system 500 also includes a main memory 506, such as a random-access memory (RAM) or other dynamic storage device, for storing information and instructions to be executed by processor 504. Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Computer system 500 further includes a read only memory (ROM) 508 or other static storage device for storing static information and instructions for processor 504. A storage device 510, such as a magnetic disk or optical disk, is provided for storing information and instructions.
Embodiments of the invention are related to the use of computer system 500 for implementing the techniques described herein. According to one embodiment of the invention, computer system 500 may perform any of the actions described herein in response to processor 504 executing one or more sequences of one or more instructions contained in main memory 506. Such instructions may be read into main memory 506 from another machine-readable medium, such as storage device 510. Execution of the sequences of instructions contained in main memory 506 causes processor 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement embodiments of the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “non-transitory machine-readable storage medium” as used herein refers to any non-transitory tangible medium that participates in storing instructions which may be provided to processor 504 for execution. Note that transitory signals are not included within the scope of a non-transitory machine-readable storage medium. A non-transitory machine-readable storage medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 510. Volatile media includes dynamic memory, such as main memory 506.
Non-limiting, illustrative examples of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 504 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network link 520 to computer system 500.
Communication interface 518 provides a two-way data communication coupling to a network link 520 that is connected to a local network. For example, communication interface 518 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 518 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 520 typically provides data communication through one or more networks to other data devices. For example, network link 520 may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP).
Computer system 500 can send messages and receive data, including program code, through the network(s), network link 520 and communication interface 518. For example, a server might transmit a requested code for an application program through the Internet, a local ISP, and a local network to communication interface 518. The received code may be executed by processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent modification. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/877,682, filed Jul. 23, 2019, entitled “Gaming Low Latency Flow Control,” the entire contents of which are hereby incorporated by reference for all purposes as if fully set forth herein.