Cable television systems employ a network of devices (e.g., network elements) for delivering television programming to paying subscribers, typically by way of radio frequency (RF) signals transmitted through coaxial cables and/or light pulses through fiber-optic cables. Other services provided by cable television systems include high-speed Internet, home security, and telephone. Multiple television channels are distributed to subscriber residences from a “headend”. Typically, each television channel is translated to a different frequency at the headend, giving each channel a different frequency “slot” so that the television signals do not interfere with one another. At the subscriber's residence, a desired channel is selected with the user's equipment (e.g., a cable modem (CM), a set-top box, a television, a computer, etc.; collectively referred to herein as “user equipment”, or “UE”) and displayed on a screen. These are referred to as the “downstream” channels in a cable television system. “Upstream” channels in the system send data from the UE to the headend for various reasons including pay-per-view requests, Internet uploads and communication, and cable telephone service.
With the various forms of UEs, device protocols, content deliveries, and networks, data control has become exceptionally complex and difficult. For example, coordinating content deliveries from multiple independently operating network elements to an individual UE in a cable television network creates multiple layers of messaging and unbalanced traffic flows which can congest portions of the network.
Systems and methods presented herein provide a software defined network (SDN) controller in a cable television system that virtualizes network elements in the cable television system and provides content delivery and data services through the virtualized network elements. In one embodiment, the SDN controller is operable in a cloud computing environment to balance data traffic through the virtualized network elements. For example, the SDN controller may abstract Layer 2 Control Protocol (L2CP) frame processing of the network elements into the cloud computing environment to relieve the network elements from the burdens of Ethernet frame processing. In this regard, the SDN controller comprises an L2CP decision module that determines how L2CP frames should be processed for the network elements in the cable television system.
The various embodiments disclosed herein may be implemented in a variety of ways as a matter of design choice. For example, some embodiments herein are implemented in hardware whereas other embodiments may include processes that are operable to implement and/or operate the hardware. Other exemplary embodiments, including software and firmware, are described below.
Some embodiments of the present invention are now described, by way of example only, and with reference to the accompanying drawings. The same reference number represents the same element or the same type of element on all drawings.
The figures and the following description illustrate specific exemplary embodiments of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within the scope of the invention. Furthermore, any examples described herein are intended to aid in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited examples and conditions. As a result, the invention is not limited to the specific embodiments or examples described below.
In this embodiment, the SDN controller 110 virtualizes a plurality of network elements 302-1-302-N in the cable television system 100 (where the reference “N” merely indicates an integer greater than “1” and not necessarily equal to any other “N” reference designated herein). The network elements 302 are communicatively coupled to a plurality of content servers 303-1-303-N. The content servers 303 store content to be consumed by subscribers of the cable television system 100. For example, the content servers 303 may store movies, television shows, advertisements, and the like that a subscriber can view through their UEs 102 and/or end devices 101. The UEs 102 are communicatively coupled to the CMTS 301 as shown and described in
The SDN controller 110 balances the traffic associated with the content being delivered from the content servers 303 through the virtualized network elements 302. For example, when a subscriber requests certain content, the SDN controller 110, having a complete view of the network topology, identifies a path from the content servers 303 to the UE 102 through the network elements 302. The SDN controller 110 determines the bandwidth requirements of the content and the bandwidth capabilities of the network elements 302 to identify a data path for the content to the UE 102. Thus, the SDN controller 110 is any combination of device(s) and software operable within a cloud computing environment 120 to virtualize network elements in a cable television system for the purposes of controlling traffic in the cable television system.
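The path selection described above can be sketched as a search over a topology graph that keeps only links whose available bandwidth meets the content's requirement. A minimal sketch; the graph, node names, and bandwidth figures below are illustrative assumptions, not part of the disclosure:

```python
from collections import deque

def find_path(topology, source, dest, required_bw):
    """Breadth-first search for a path from source to dest, traversing
    only links whose available bandwidth meets the requirement.
    `topology` maps node -> {neighbor: available_bandwidth}."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for neighbor, bw in topology.get(node, {}).items():
            if neighbor not in visited and bw >= required_bw:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path satisfies the bandwidth requirement

# Illustrative topology: content server -> network elements -> UE
topology = {
    "content_server": {"ne1": 100, "ne2": 40},
    "ne1": {"ue": 50},
    "ne2": {"ue": 80},
}
print(find_path(topology, "content_server", "ue", 45))
# -> ['content_server', 'ne1', 'ue']
```

A controller with full topology knowledge can run such a search centrally, which is what distinguishes the SDN approach from hop-by-hop decisions made by the network elements themselves.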
An upstream link of the cable television communication system 100 may provide high speed data services delivered over devices conforming to the Data Over Cable Service Interface Specification (DOCSIS). The hub 420 is coupled to a downstream node 421 via optical communication links 405 and 406. The node 421 is similarly configured with an optical to electrical converter 408 and an electrical to optical converter 407.
Several hubs may be connected to a single headend 401, and the hub 420 may be connected to several nodes 421 by fiber optic cable links 405 and 406. The CMTS 301 may be configured in the headend 401 or in the hub 420. The fiber optic links 405 and 406 are typically driven by laser diodes, such as Fabry-Perot and distributed feedback laser diodes.
Downstream, in homes and businesses, are CMs (i.e., UEs 102, not shown). The CM acts as a host for an Internet Protocol (IP) device such as a personal computer. Transmissions from the CMTS 301 to the CM are carried over the downstream portion of the cable television communication system, generally from 54 to 860 MHz. Downstream digital transmissions are continuous and are typically monitored by many CMs. Upstream transmissions from the CMs to the CMTS 301 are typically carried in the 5-42 MHz frequency band, the upstream bandwidth being shared by the CMs that are on-line. However, with greater demands for data, additional frequency bands and bandwidths are continuously being considered and tested, including those frequency bands used in the downstream paths.
The CMTS 301 connects the local CM network to the Internet backbone. The CMTS 301 connects to the downstream path through the electrical to optical converter 404, which is connected to the fiber optic cable 406, which, in turn, is connected to the optical to electrical converter 408 at the node 421. The signal is transmitted to a diplexer 409 that combines the upstream and downstream signals, which occupy different frequency bands, onto a single cable. The downstream channel width in the United States is generally 6 MHz, with the downstream signals being transmitted in the 54 to 860 MHz band. Upstream signals are presently transmitted between 5 and 42 MHz, but again other larger bands are being considered to provide increased capacity.
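The frequency split performed by the diplexer can be illustrated with a small classifier that maps a carrier frequency to the upstream or downstream band. The band edges come from the 5-42 MHz and 54-860 MHz figures given above; the function name and guard-band label are ours:

```python
def classify_band(freq_mhz):
    """Classify a carrier frequency per the band plan described above:
    5-42 MHz is upstream, 54-860 MHz is downstream; anything else
    falls outside the plan (e.g., the 42-54 MHz guard band)."""
    if 5 <= freq_mhz <= 42:
        return "upstream"
    if 54 <= freq_mhz <= 860:
        return "downstream"
    return "out-of-band"

# A 6 MHz downstream channel slot starting at 54 MHz:
print(classify_band(57))   # -> downstream
print(classify_band(30))   # -> upstream
print(classify_band(48))   # -> out-of-band (guard band)
```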
After the downstream signal leaves the node 421, the signal is typically carried by a coaxial cable 430. At various stages, a power inserter 410 may be used to power the coaxial line equipment, such as amplifiers or other equipment. The signal may be split with a splitter 411 to branch the signal. Further, at various locations, bi-directional amplifiers 412 may boost and even split the signal. Taps 413 along branches provide connections to subscribers' homes 414 and businesses.
Upstream transmissions from subscribers to the hub 420/headend 401 occur by passing through the same coaxial cable 430 as the downstream signals, in the opposite direction on a different frequency band. The upstream signals are sent typically utilizing Quadrature Amplitude Modulation (QAM) with forward error correction. The upstream signals can employ any level of QAM, including 8 QAM, 32 QAM, 64 QAM, 128 QAM, and 256 QAM. Modulation techniques such as Synchronous Code Division Multiple Access (S-CDMA) and Orthogonal Frequency Division Multiple Access (OFDMA) can also be used. Of course, any type of modulation technique can be used, as desired.
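The spectral efficiency of the QAM levels listed above follows directly from the constellation size: a 2^k-point constellation carries k bits per symbol. A quick check (the helper name is ours):

```python
import math

def bits_per_symbol(qam_order):
    """Bits carried by each symbol of a QAM constellation of the
    given order (e.g., 64 QAM -> 6 bits/symbol)."""
    return int(math.log2(qam_order))

for order in (8, 32, 64, 128, 256):
    print(order, "QAM ->", bits_per_symbol(order), "bits/symbol")
# 8 QAM -> 3, 32 QAM -> 5, 64 QAM -> 6, 128 QAM -> 7, 256 QAM -> 8
```

This is why moving from 64 QAM to 256 QAM raises raw upstream throughput by a third on the same channel, at the cost of requiring a cleaner signal.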
Transmissions, in this embodiment, are typically sent in a frequency/time division multiple access (FDMA/TDMA) scheme, as specified in the DOCSIS standards. The diplexer 409 splits the lower frequency signals from the higher frequency signals so that the lower frequency, upstream signals can be applied to the electrical to optical converter 407 in the upstream path. The electrical to optical converter 407 converts the upstream electrical signals to light waves, which are sent through fiber optic cable 405 and received by the optical to electrical converter 403 in the hub 420.
Generally, devices complying with the L2CP standards become outdated when the standard changes. By abstracting the functionality into the cloud computing environment 120, the SDN controller 110 is operable to ensure that changes to L2CP standards do not need to be propagated to each network element 302 in the cable television system 100. For example, the Metro Ethernet Forum (MEF) defines the specification that governs L2CP frame handling, and that specification requires equipment vendors to update their equipment whenever the standard is revised. However, cable television system networks are quite large and complex, and are therefore difficult to update when standards change. The SDN controller 110 manages these updates, performs the L2CP processing, and relays the resulting information to the various network elements 302 in the cable television system 100.
Some L2CP frames 151 that enter the decision node 201 from the external interface 322 are processed by the L2CP module 111 and passed to the virtual connection 304 (i.e., Ethernet Virtual Connections, “EVCs”, or Operator Virtual Connections, “OVCs”). Other L2CP frames 151 are “peered” (203) by redirecting the frames to a protocol entity 210. Still other L2CP frames 151 are discarded (306). Each of these actions depends upon the destination address and the protocol identifier of any given L2CP frame 151, together with the configured values of the L2CP service attributes. In any case, once a decision is made by the L2CP module 111 of the SDN controller 110, that information is impressed upon the network element 302 to which the L2CP frame 151 was directed.
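The three-way disposition described above (pass to the virtual connection, peer to a protocol entity, or discard) can be sketched as a lookup keyed on the frame's destination address and protocol identifier. A minimal sketch; the addresses, table entries, and default-discard behavior below are illustrative assumptions, not MEF-specified values:

```python
def l2cp_decision(dest_addr, protocol_id, service_attrs):
    """Return 'pass', 'peer', or 'discard' for an L2CP frame, mirroring
    the decision node described above. `service_attrs` maps
    (dest_addr, protocol_id) to a configured action; frames with no
    configured action are discarded here (an assumption, not an MEF rule)."""
    return service_attrs.get((dest_addr, protocol_id), "discard")

# Illustrative configuration of L2CP service attributes:
service_attrs = {
    ("01:80:C2:00:00:00", 0x88CC): "peer",  # redirect to a protocol entity
    ("01:80:C2:00:00:0E", 0x8809): "pass",  # tunnel through the EVC/OVC
}
print(l2cp_decision("01:80:C2:00:00:00", 0x88CC, service_attrs))  # -> peer
print(l2cp_decision("01:80:C2:00:00:FF", 0x9999, service_attrs))  # -> discard
```

Because the table lives with the controller rather than in each network element, updating it when the L2CP specification changes is a single central change, which is the point of the abstraction described above.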
If the frame is not an L2CP frame, the SDN controller 110 performs a threshold evaluation on the data traffic of the frames for the purposes of balancing traffic through the network elements. The SDN controller 110 marks Ethernet frames based on the bandwidth profile before allowing them to be transferred. For example, the SDN controller 110 can receive information about an inbound data frame from a network element 302 and instruct the network element 302 to flag the frames and/or give instructions to handle those frames according to a particular bandwidth profile of the frames.
Ethernet services of the network elements 302 in the cable television system 100 can be generally classified into two categories of quality of service (QoS): committed information rate (CIR)/committed burst size (CBS) and excess information rate (EIR)/excess burst size (EBS). CIR/CBS service guarantees its user a fixed amount of bandwidth/burst size. EIR/EBS service offers best-effort only transport. This relates to the cable television system 100's objective of offering a maximum amount of best-effort EIR/EBS services over its network while reliably serving its committed CIR/CBS services.
As different levels of QoS exist, balancing traffic through the cable television network becomes complex. For example, network elements 302 in the cable television system 100 previously had to process the incoming frames and determine the level of QoS within that frame. In this embodiment, the SDN controller 110 processes the incoming frames to the network elements 302, determines the level of QoS within those frames, and flags them with markers that the network elements 302 can identify as “colors” without having to extract the information from the frames.
The SDN controller 110 flags the frames with markings of green, yellow, or red. Green frames are within the CIR/CBS bandwidth settings. Yellow frames exceed CIR/CBS but do not exceed EIR/EBS. Red frames exceed the EIR/EBS values.
In this regard, the SDN controller 110 processes the incoming data frame and determines whether it comprises a bandwidth or burst size greater than the bandwidth/burst size threshold “1” (i.e., the CIR/CBS), in the process element 255. If not, the SDN controller 110 flags the frame as “green”, in the process element 256, and allows it to pass through the network element 302. If the incoming data frame comprises a bandwidth or burst size greater than the bandwidth/burst size threshold “1”, then the SDN controller 110 determines whether it is greater than the bandwidth/burst size threshold “2” (i.e., the EIR/EBS), in the process element 257. If not, the SDN controller 110 flags the frame as yellow, in the process element 258, and allows it to pass through the network element 302. However, the SDN controller 110 monitors the yellow frames to determine whether the QoS needs to be changed. The flagged frames (i.e., process elements 256 and 258) are then sent to egress, by the SDN controller 110.
If the bandwidth or burst size of the frame is greater than the bandwidth/burst size threshold “2”, then the SDN controller 110 drops the frame, in the process element 259, and notifies the receiving network element 302 that the frame has been dropped, in the process element 260. In whatever instance, the SDN controller 110 continues to monitor inbound frames of the network elements 302 under its control to abstract the processing thereof into the cloud computing environment 120.
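The two-threshold flow of process elements 255-260 amounts to a three-color marking scheme. A minimal sketch, assuming the per-frame rate has already been measured and using our own names for the thresholds and values:

```python
def mark_frame(frame_rate, cir_cbs, eir_ebs):
    """Color a frame per the flow above: green if within CIR/CBS
    (threshold 1), yellow if over CIR/CBS but within EIR/EBS
    (threshold 2), red otherwise."""
    if frame_rate <= cir_cbs:
        return "green"    # process element 256: pass through to egress
    if frame_rate <= eir_ebs:
        return "yellow"   # process element 258: pass, but monitor QoS
    return "red"          # process element 259: drop and notify (260)

# Thresholds in Mb/s (illustrative values):
CIR, EIR = 10, 25
print(mark_frame(8, CIR, EIR))   # -> green
print(mark_frame(20, CIR, EIR))  # -> yellow
print(mark_frame(30, CIR, EIR))  # -> red
```

A standardized two-rate three-color marker (RFC 2698) works on token buckets rather than a per-frame rate, but the green/yellow/red semantics are the same as in the flow above.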
The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In one embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
The medium 506 can be any tangible electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of a computer readable medium 506 include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Some examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
The computing system 500, suitable for storing and/or executing program code, can include one or more processors 502 coupled directly or indirectly to memory 508 through a system bus 510. The memory 508 can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices 504 (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the computing system 500 to couple to other data processing systems, such as through host systems interfaces 512, or to remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
This patent application is a continuation of U.S. patent application Ser. No. 16/922,547 (filed Jul. 7, 2020), which is a continuation of U.S. patent application Ser. No. 15/992,015 (filed May 29, 2018, now U.S. Pat. No. 10,979,741), which is a continuation of U.S. patent application Ser. No. 14/970,024 (filed Dec. 15, 2015), which application claims priority to, and thus the benefit of, an earlier filing date from, U.S. Provisional Patent Application No. 62/091,954 (filed Dec. 15, 2014). The entire contents of each of the above-mentioned patent applications are hereby incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
4169226 | Fukuji | Sep 1979 | A |
4804972 | Schudel | Feb 1989 | A |
5528253 | Franklin | Jun 1996 | A |
6023242 | Dixon | Feb 2000 | A |
6072440 | Bowman | Jun 2000 | A |
6538612 | King | Mar 2003 | B1 |
6832070 | Perry | Dec 2004 | B1 |
6904609 | Pietraszak | Jun 2005 | B1 |
7075492 | Chen | Jul 2006 | B1 |
7076202 | Billmaier | Jul 2006 | B1 |
7239274 | Lee | Jul 2007 | B2 |
7472409 | Linton | Dec 2008 | B1 |
7685621 | Matsuo | Mar 2010 | B2 |
8121126 | Moisand | Feb 2012 | B1 |
8296813 | Berkey | Oct 2012 | B2 |
8368611 | King | Feb 2013 | B2 |
8843979 | Berkey | Sep 2014 | B2 |
8935726 | Patel | Jan 2015 | B2 |
9131255 | Smith | Sep 2015 | B2 |
9325597 | Clasen | Apr 2016 | B1 |
20020054087 | Noll | May 2002 | A1 |
20030051246 | Wilder | Mar 2003 | A1 |
20030214449 | King | Nov 2003 | A1 |
20040128689 | Pugel | Jul 2004 | A1 |
20040227655 | King | Nov 2004 | A1 |
20050108751 | Dacosta | May 2005 | A1 |
20050193415 | Ikeda | Sep 2005 | A1 |
20050225495 | King | Oct 2005 | A1 |
20050281197 | Honda | Dec 2005 | A1 |
20060020978 | Miyagawa | Jan 2006 | A1 |
20060139499 | Onomatsu | Jun 2006 | A1 |
20060187117 | Lee | Aug 2006 | A1 |
20070152897 | Zimmerman | Jul 2007 | A1 |
20070230369 | McAlpine | Oct 2007 | A1 |
20080013547 | Klessig | Jan 2008 | A1 |
20080129885 | Yi | Jun 2008 | A1 |
20080186242 | Shuster | Aug 2008 | A1 |
20080186409 | Kang | Aug 2008 | A1 |
20090135309 | DeGeorge | May 2009 | A1 |
20090260038 | Acton | Oct 2009 | A1 |
20090265443 | Moribe | Oct 2009 | A1 |
20090310030 | Litwin | Dec 2009 | A1 |
20100214482 | Kang | Aug 2010 | A1 |
20100235858 | Muehlbach | Sep 2010 | A1 |
20100315307 | Syed | Dec 2010 | A1 |
20110126232 | Lee | May 2011 | A1 |
20130207868 | Venghaus | Aug 2013 | A1 |
20130212632 | Berkey | Aug 2013 | A1 |
20130346541 | Codavalli | Dec 2013 | A1 |
20140337500 | Lee | Nov 2014 | A1 |
20150106526 | Arndt | Apr 2015 | A1 |
20150161236 | Beaumont | Jun 2015 | A1 |
20150161249 | Knox | Jun 2015 | A1 |
20150271268 | Finkelstein | Sep 2015 | A1 |
20150350102 | Leon-Garcia | Dec 2015 | A1 |
20150382217 | Odio Vivi | Dec 2015 | A1 |
20160173945 | Oh | Jun 2016 | A1 |
20160255394 | Yang | Sep 2016 | A1 |
20170064528 | Daly | Mar 2017 | A1 |
20170317408 | Hamada | Nov 2017 | A1 |
20180120169 | Jackson | May 2018 | A1 |
20180359541 | Park | Dec 2018 | A1 |
20180359795 | Baek | Dec 2018 | A1 |
20190037418 | Gunasekara | Jan 2019 | A1 |
20190079659 | Adenwala | Mar 2019 | A1 |
20190335221 | Walker | Oct 2019 | A1 |
20200107250 | So | Apr 2020 | A1 |
20200297955 | Shouldice | Sep 2020 | A1 |
20200305003 | Landa | Sep 2020 | A1 |
20200374721 | Kumar | Nov 2020 | A1 |
20210075678 | Seetharaman | Mar 2021 | A1 |
20210105308 | Bouazizi | Apr 2021 | A1 |
20220256451 | Ianev | Aug 2022 | A1 |
Number | Date | Country | |
---|---|---|---|
62091954 | Dec 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16922547 | Jul 2020 | US |
Child | 17648139 | US | |
Parent | 15992015 | May 2018 | US |
Child | 16922547 | US | |
Parent | 14970024 | Dec 2015 | US |
Child | 15992015 | US |