A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
The present disclosure relates generally to the field of delivery of digital content over a network, and in one exemplary aspect to a network architecture for providing a cloud-based and edge-based content storage and delivery functionality, including delivery to Internet Protocol (IP)-enabled client devices.
Digital video recorders (DVRs) and personal video recorders (PVRs) are devices which record video content, in digital format, to a disk drive or other medium. The use of such devices is now ubiquitous, and they provide conveniences to TV viewers such as e.g., (i) allowing a user to record a program for later review, (ii) allowing a user to record every episode of a program for a period, and/or (iii) automatically recording programs for the user based on viewing habits and preferences. Further, the presentation of the recorded programming content can be manipulated by exercising rewind, pause, play, stop, and fast-forward functions (hereinafter referred to as “trick mode” functions) in such DVRs and PVRs.
Traditional DVRs are maintained and managed by an end user; e.g., a subscriber of a cable or satellite network. While having utility, such premises recording devices have several disabilities, including the need for the user to possess the physical “box”, the need to maintain the recording or storage device powered up at all times when recording may be required, as well as the finite storage volume limitations of the device (the latter of which can effectively limit the user's selection of content).
Cloud-based Storage—
Such disabilities have made virtual ownership of content and virtual storage, i.e., storage in the “cloud”, increasingly appealing over time, and hence network operators are increasingly turning to such solutions. One such cloud-based approach is the so-called “nPVR” or network PVR. An nPVR is a form of PVR which stores content on a remote network device instead of on a local storage medium such as that of a DVR. The nPVR allows the user to perform analogous DVR functions through use of a network entity or process, rather than a local DVR at the user premises, thereby ostensibly relieving the user of the burdens of ownership and maintenance of a DVR unit, and providing greater digital data storage capacity.
Moreover, physically secure storage of content at the content distribution network as opposed to the premises may also provide certain assurances regarding physical security and unauthorized reproduction.
Numerous nPVR architectures exist. See, e.g., co-owned U.S. patent application Ser. No. 10/302,550, filed Nov. 22, 2002, issued as U.S. Pat. No. 7,073,189 on Jul. 4, 2006, and entitled “Program Guide and Reservation System for Network Based Digital Information and Entertainment Storage and Delivery System”, incorporated by reference herein in its entirety, which discloses one exemplary network architecture and functionalities for implementing nPVR service. Generally, nPVR systems employ Video on-demand (VOD) or similar architecture of a content distribution network (CDN) to provide content storage and retrieval.
Similarly, so called “start-over” is a feature offered to some network users which allows the user to jump to the beginning of a program in progress without any preplanning or in-home recording devices (e.g., DVR). Start-over is enabled by a software upgrade to the existing video on-demand (VOD) platform, and to the installed base of digital set-top boxes. In other words, the start-over feature utilizes an nPVR system to maintain content which users may request, and delivers content in a manner similar to VOD. The typical start-over system instantaneously captures live television programming for immediate, on-demand viewing. Start-over functionality is the result of MSO-initiated nPVR storage of broadcast programs in real time. In other words, the MSO determines which programs will be start-over enabled, and stores this content as it is broadcast to an nPVR which is accessible by the various client devices utilizing a mechanism similar to VOD (discussed below).
When tuning to a start-over enabled show in progress, customers are alerted to the feature through an on-screen prompt. By pressing appropriate remote control buttons, the program is restarted from the beginning. Under one type of approach, start-over enabled programs may only be restarted within the show's original telecast window (i.e., during the time window set for broadcasting the program), and may not be restarted after the show has finished broadcast. Thus, the start-over feature generally functions as an nPVR for predefined content (i.e., content on a start-over enabled channel) during a predefined period (i.e., the broadcast window). Co-owned U.S. patent application Ser. No. 10/913,064, filed Aug. 6, 2004, and entitled “Technique for Delivering Programming Content Based on a Modified Network Personal Video Recorder Service”, incorporated herein by reference in its entirety, discloses exemplary network architecture and functionalities for implementing start-over service within a content-based (e.g., cable) network.
As noted above, start-over services generally employ a VOD or similar architecture to provide content storage and retrieval. A typical prior art VOD architecture useful for prior art nPVR and start-over functionality is shown in
As illustrated, audio/video content is received by the MSO. The MSO sends the content to a staging processor 102 adapted to “stage” content for transmission over the network. The staging processor 102 is an entity adapted to prepare content for segmenting and/or for transmission to a VOD server 105 for streaming to one or more users.
Content is prepared for transmission and/or segmenting by processing through various staging processes, or software applications adapted to run on the digital processor associated with the staging processor 102. The processes effected by the staging processor 102 include, inter alia, at least one segmenting process 104. The segmenting process 104 divides the content video feed on valid GOP boundaries, or I-frames.
Segmenting the video feed at the segmenting process 104 results in content which is segmented based on a schedule. The segmented content is then examined by a business management process (BMS) 107. The management process 107, inter alia, creates a data file regarding the segmented content. The data file gives metadata regarding the content and “points” to the segmented portions of the content on the disk.
Once the management process 107 has created a data file for the content, it is sent to a VOD server 105. As described in greater detail subsequently herein, the VOD server 105 stores the content and/or data on hard disks; the VOD server 105 streams the content from these disks as well. The VOD server 105 is also sent a playlist of advertisements.
The VOD server 105, therefore, will receive the segmented content as well as a file indicating where the various portions of the content are and in what order they should be arranged; the VOD server also receives advertisements for insertion into the segmented content.
When a CPE 106 requests the content from the VOD server 105 via the network 101, the VOD server 105 utilizes the data file (not shown) created by the management process 107 to find the start 124 and end 126 points of the content segments 122, and the start 134 and end 136 points for the advertisement segments 132. The first content segment 122a is delivered to the user, and at its end point 126a, the VOD server 105 sends the first advertisement segment 132a. At the end point 136a of the first advertisement segment 132a, the VOD server 105 sends the second content segment 122b. At the end point 126b of the second content segment 122b, the second advertisement segment 132b is sent. This pattern continues until the last of the content segments 122n and/or the last of the advertisement segments 132x have been presented to the user. The user will receive a seamless content-plus-advertisement stream 140 comprised of the various segments 122a, 132a, 122b, 132b . . . 122n, 132x sent. It is recognized that the first segment sent to the user may comprise either the first advertisement or the first content segment, still utilizing the pattern outlined above.
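By way of illustration only, the interleaving pattern described above may be sketched as follows; the function name, segment labels, and simple alternating logic are hypothetical assumptions and are not intended to represent the actual VOD server implementation:

    # Hypothetical Python sketch of the content/advertisement interleaving
    # pattern described above; segment identifiers are illustrative only.
    def build_stream(content_segments, ad_segments, lead_with_ad=False):
        """Alternate content and advertisement segments into one playlist."""
        first, second = (ad_segments, content_segments) if lead_with_ad \
            else (content_segments, ad_segments)
        stream = []
        for i in range(max(len(first), len(second))):
            if i < len(first):
                stream.append(first[i])
            if i < len(second):
                stream.append(second[i])
        return stream

    # Two content segments and two advertisement segments yield the ordering
    # 122a, 132a, 122b, 132b described above.
    print(build_stream(["122a", "122b"], ["132a", "132b"]))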
In nPVR and start-over enabled systems, MSOs ingest large quantities of content to the VOD servers for storage and streaming, so as to offer the nPVR or start-over features on a variety of channels and/or for a variety of programs. Doing so quickly becomes exceedingly expensive. As the number of users or subscribers of services such as nPVR and start-over within a content delivery network grows, so does the required network-side digital data storage and processing capacity. To enable each given subscriber or household to record even a relatively limited number of hours of programming requires many terabytes (TB) of storage, which can be quite expensive to both initially procure and maintain.
Further, given that start-over capabilities are made available on a channel-by-channel basis, a large portion of the content stored and available for streaming from the VOD server is often never requested, such as during times when there are fewer viewers (e.g., between 12 midnight and 6 am). Thus, in the present systems, even when content is not requested, it must still be sent to the VOD server as discussed above.
Additionally, most content is received by the network operator (e.g., cable or satellite network MSO) in an encoding format (such as MPEG-2) that is not optimized in terms of storage or downstream bandwidth delivery requirements. Hence, maintenance of both the storage and delivery infrastructure necessary to keep pace with literally millions of users wanting to record several hours of programming per day via their nPVR or start-over service or equivalent becomes unduly burdensome and at some point, cost-inefficient.
In a typical storage/delivery paradigm, multiple copies of the same content are encoded and encrypted and sent to multiple different respective requesters, thereby requiring significant extra cloud storage space, and downstream bandwidth (i.e., from the network core outward) for delivery of the multiple copies.
Extant cloud-based architectures also generally utilize asymmetric delivery capability in their delivery of cloud content, in many cases by design. Technologies such as HFC in-band RF, DOCSIS (aka cable modems), A-DSL, and even some wireless and optical fiber (e.g., FiOS) technologies recognize that the asymmetry or ratio of DL to UL traffic is very high, and hence such solutions are optimized for such scenarios, including via allocation of more radio frequency or optical carriers for DL traffic versus UL traffic. This presents somewhat of a “check valve” for data flow into and from a given user CPE; i.e., users can obtain data much more quickly than they can upload it.
Such extant delivery technologies may also have significant temporal latency associated therewith; e.g., resolving URLs and IP addresses via DNS queries, accessing particular edge or origin servers for content chunks, etc., can require appreciable amounts of time (comparatively speaking), and result in reduced user QoE (quality of experience), manifest as e.g., stutters and delays in obtaining and rendering content for the user at their premises or via mobile device.
Moreover, current cloud-based solutions make limited use of CPE (e.g., DSTB, Smart TV, etc.) assets, instead offloading much of the processing to cloud servers or other distributed processing entities. Many CPE natively have significant processing capability in the form of multi-core CPU, GPUs, outsized flash and other memory, and high-speed data bus architectures such as USB and PCIe. They are also “application” heavy, including e.g., apps for gaming, social media, media decode and rendering, Internet search, and voice recognition. While many of these capabilities are used indigenously by the user, they are generally not accessed in any meaningful way by cloud-based processes or entities, or “repurposed” for other tasks. As such, the CPE capabilities are often under-utilized.
Radio Access Technologies Including 5G “New Radio”—
A multitude of wireless networking technologies, also known as Radio Access Technologies (“RATs”), provide the underlying means of connection for radio-based communication networks to user devices, including both fixed and mobile devices. Such RATs often utilize licensed radio frequency spectrum (i.e., that allocated by the FCC per the Table of Frequency Allocations as codified at Section 2.106 of the Commission's Rules). Currently only frequency bands between 9 kHz and 275 GHz have been allocated (i.e., designated for use by one or more terrestrial or space radio communication services or the radio astronomy service under specified conditions). For example, a typical cellular service provider might utilize spectrum for so-called “3G” (third generation) and “4G” (fourth generation) wireless communications as shown in Table 1 below:
Alternatively, unlicensed spectrum may be utilized, such as that within the so-called ISM-bands. The ISM bands are defined by the ITU Radio Regulations (Article 5) in footnotes 5.138, 5.150, and 5.280 of the Radio Regulations. In the United States, uses of the ISM bands are governed by Part 18 of the Federal Communications Commission (FCC) rules, while Part 15 contains the rules for unlicensed communication devices, even those that share ISM frequencies. Table 2 below shows typical ISM frequency allocations:
ISM bands have also been shared with (non-ISM) license-free communications applications such as wireless sensor networks in the 915 MHz and 2.450 GHz bands, as well as wireless LANs (e.g., Wi-Fi) and cordless phones in the 915 MHz, 2.450 GHz, and 5.800 GHz bands.
Additionally, the 5 GHz band has been allocated for use by, e.g., WLAN equipment, as shown in Table 3:
User client devices (e.g., smartphone, tablet, phablet, laptop, smartwatch, or other wireless-enabled devices, mobile or otherwise) generally support multiple RATs that enable the devices to connect to one another, or to networks (e.g., the Internet, intranets, or extranets), often including RATs associated with both licensed and unlicensed spectrum. In particular, wireless access to other networks by client devices is made possible by wireless technologies that utilize networked hardware, such as a wireless access point (“WAP” or “AP”), small cells, femtocells, or cellular towers, serviced by a backend or backhaul portion of service provider network (e.g., a cable network). A user may generally access the network at a node or “hotspot,” a physical location at which the user may obtain access by connecting to modems, routers, APs, etc. that are within wireless range.
NG-RAN or “NextGen RAN (Radio Access Network)” is part of the 3GPP “5G” next generation radio system. 3GPP is currently specifying Release 15 NG-RAN, its components, and interactions among the involved nodes, including so-called “gNBs” (next generation Node Bs). NG-RAN will provide ultra-high bandwidth, ultra-low latency wireless communication and efficiently utilize, depending on application, both licensed and unlicensed spectrum of the type described supra in a wide variety of deployment scenarios, including indoor “spot” use, urban “macro” (large cell) coverage, rural coverage, use in vehicles, and “smart” grids and structures. NG-RAN will also integrate with 4G/4.5G systems and infrastructure; moreover, new LTE entities are used (e.g., an “evolved” LTE eNB or “eLTE eNB” which supports connectivity to both the EPC (Evolved Packet Core) and the NR “NGC” (Next Generation Core)).
NG-RAN is further configured specifically to provide its high bandwidths in a substantially symmetric fashion (as compared to prior art technologies such as e.g., DOCSIS described supra); i.e., afford high bandwidth data capability in both downlink (DL) and uplink (UL) transmissions relative to the end user node(s), with very low latencies induced by the RAN itself (and supporting backhaul). Hence, rather than being a largely asymmetric data pipe as in DOCSIS, NG-RAN enables high wireless bandwidths with low latency in both DL and UL, and out to ranges compatible with its underlying RAT (e.g., from very short “PAN” ranges out to metropolitan area ranges, depending on the RAT utilized).
In some aspects, Release 15 NG-RAN leverages technology and functions of extant LTE/LTE-A technologies (colloquially referred to as 4G or 4.5G), as bases for further functional development and capabilities. The NG-RAN also employs a “split” architecture—where gNB/ngeNB is split into (i) a CU (central or centralized unit) and (ii) a DU (distributed or disaggregated unit)—so as to provide inter alia, great flexibility in utilization and sharing of infrastructure resources.
Solutions Needed—
Given the pervasive nature of cloud-based delivery systems and asymmetric (and comparatively latent) bearer technologies, cloud-based systems in their present incarnations include several undesirable or non-optimal aspects, including: (i) high cloud data storage requirements; (ii) latency in delivery of stored data to requesting users (even when using network “edge” caching devices); and (iii) asymmetry in UL and DL capabilities. While asymmetry does not per se generally impact QoE, it does in effect “tie the hands” of operators in terms of where and how content can be positioned and delivered.
Based on the foregoing, there is a salient need for improved apparatus and methods of storing and delivering digitally rendered content to a large number of users associated with a content delivery network. Ideally, such improved apparatus and methods would minimize delivery latency and enhance QoE, and enable more flexible positioning and distribution of content within the network operator infrastructure (including CPE operative therein).
The present disclosure addresses the foregoing needs by providing, inter alia, methods and apparatus for providing enhanced content storage and distribution within a content delivery network.
In a first aspect, an architecture for storing content within a content distribution network is disclosed. In one embodiment, the architecture includes (i) one or more network-based content storage locations; (ii) a plurality of edge-based content storage locations, and (iii) one or more network databases for tracking storage locations of various portions of the content. In one variant, content elements (e.g., digitally encoded movies, gaming applications, etc.) are initially ingested and stored with the network-based content storage location(s); the stored content is then fragmented into a plurality of components. These components are distributed among the plurality of edge-based storage locations (e.g., DSTBs and associated mass storage of network subscribers) according to a distribution scheme. High-bandwidth, low-latency data connections between the various edge-based storage locations are utilized to assemble complete versions of the fragmented content by one or more of the edge-based devices to enable decode and rendering on an end-user device associated therewith.
In another variant, one or both of content-specific and fragment-specific encryption keys are generated and utilized to, inter alia, protect the content (fragments) from surreptitious de-fragmentation and use or copying.
In another aspect, a method of storing content within a network is disclosed. In one embodiment, the method includes: (i) fragmenting the content (element) according to a fragmentation algorithm; (ii) causing distribution of the constituent fragments to a plurality of different edge storage devices; and (iii) enabling inter-edge device communication channels to enable assembly of the constituent fragments from two or more edge devices into the content element.
In one variant, the method of storing further provides significant redundancy for the content by storing various combinations of the fragments across multiple different edge storage devices.
In another aspect, a computerized apparatus for use within a content delivery network is disclosed. In one embodiment, the computerized apparatus is configured to provide storage for a plurality of components or fragments of a digital content element, the storage accessible to other ones of computerized apparatus also within the network.
In one variant, the computerized apparatus comprises a DSTB or other CPE disposed at a user or subscriber premises or service area of an MSO network, and includes one or more computer programs configured to store and utilize fragmentation data to enable re-assembly of the (fragmented) content element, including via obtaining missing fragments from other similarly configured CPE.
In another variant, the computerized apparatus comprises a “communal” or shared server or device (or group of devices) which is/are dedicated to a prescribed sub-group of users or premises (e.g., dedicated to an apartment building or a particular business enterprise or university).
In yet another variant, the computerized apparatus comprises a computerized edge server disposed at an edge node or hub of an MSO network.
In another aspect of the disclosure, an algorithm for content fragmentation and distribution is disclosed. In one embodiment, the algorithm is embodied as part of a computer readable apparatus (e.g., program memory, HDD, SSD, etc.) having one or more computer programs, and is configured to, when executed, both: (i) fragment content elements into two or more constituent fragments, and (ii) distribute the two or more constituent fragments so as to optimize one or more parameters.
In one variant, the one or more parameters include “temporal proximity” (i.e., how long it takes to access the fragment from a prescribed accessing device or topological location), such as to support QoS (quality of service) and/or QoE requirements specified by the content originator or service provider. In another variant, the one or more parameters include redundancy (i.e., distribution such that re-constitution of the original content element can occur under the most likely one or more loss scenarios).
In yet another variant, all or portions of the constituent fragments are encrypted, and the one or more parameters includes distribution in order to optimize security from surreptitious re-constitution. In one implementation, the distribution of encrypted content fragments and cryptographic material increases as a number of participating users/devices increases, thereby enhancing security and authenticity.
In another aspect, a method of providing a delayed-provision service to users of a content delivery network is disclosed. In one embodiment, the method includes purposefully caching fragments of content elements locally to a consuming entity or device (or group of devices) so as to mitigate redundant network core storage of multiple copies of the content element.
In another aspect, methods and apparatus for securing digitally rendered content are disclosed.
In another aspect, computerized network apparatus configured to enable fragmentation of digital content elements into a plurality of fragments, and distribution of the plurality of fragments to a plurality of 5G NR (New Radio) enabled client devices, is disclosed. In one embodiment, the computerized network apparatus includes: digital processor apparatus; data interface apparatus in data communication with the digital processor apparatus and a digital content element store; data storage apparatus in data communication with the digital processor apparatus and configured to store at least one database, the at least one database comprising data relating to fragmentation of the digital content elements; and program storage apparatus in data communication with the digital processor apparatus and comprising at least one computer program.
In one variant, the at least one computer program is configured to, when executed by the digital processor apparatus: obtain a first one of the digital content elements via the data interface apparatus; fragment the obtained first one of the content elements into a plurality of fragments according to a fragmentation algorithm; create a logical cluster comprising a subset of the 5G NR enabled client devices; and cause distribution of the plurality of fragments of the obtained first one of the content elements to two or more of the subset of the 5G NR enabled client devices according to a distribution algorithm.
In a further aspect, a computerized client apparatus configured to support fragmented content element reassembly is disclosed. In one embodiment, the computerized client apparatus includes: digital processor apparatus; data interface apparatus in data communication with the digital processor apparatus and digital content distribution network; wireless data interface apparatus in data communication with the digital processor apparatus and configured to communicate data with a wireless network node; data storage apparatus in data communication with the digital processor apparatus and configured to store (i) at least one database, the at least one database comprising data relating to fragmentation of one or more digital content elements; and (ii) at least one of a plurality of fragments of a fragmented digital content element; and program storage apparatus in data communication with the digital processor apparatus and comprising at least one computer program.
In one variant, the at least one computer program is configured to, when executed by the digital processor apparatus: obtain via the data interface apparatus, the at least one of a plurality of fragments of the fragmented digital content element; store the obtained at least one fragment in the data storage apparatus; obtain via the data interface apparatus, data relating to a fragmentation scheme used to fragment the fragmented digital content element; store the obtained data relating to the fragmentation scheme in the data storage apparatus; receive data corresponding to a request for access to the digital content element, the request initiated by a user of the computerized client apparatus; based at least on the received data corresponding to the request, access both: (i) the stored at least one fragment; and (ii) the stored data relating to the fragmentation scheme; based at least on the accessed stored data relating to the fragmentation scheme, causing access via at least the wireless data interface apparatus of at least one other computerized client apparatus, the at least one other client apparatus comprising one or more of the plurality of fragments of the fragmented digital content element; using at least (i) the accessed one or more of the plurality of fragments of the fragmented digital content element, and (ii) the stored obtained at least one fragment, reassemble the fragmented digital content element; and cause decode and rendering of the reassembled fragmented digital content element to service the request.
These and other aspects shall become apparent when considered in light of the disclosure provided herein.
All figures © Copyright 2018 Charter Communications Operating, LLC. All rights reserved.
Reference is now made to the drawings wherein like numerals refer to like parts throughout.
As used herein, the term “application” (or “app”) refers generally and without limitation to a unit of executable software that implements a certain functionality or theme. The themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculator etc.), and one application may have more than one theme. The unit of executable software generally runs in a predetermined environment; for example, the unit could include a downloadable Java Xlet™ that runs within the JavaTV™ environment. As used herein, the term “central unit” or “CU” refers without limitation to a centralized logical node within a wireless network infrastructure. For example, a CU might be embodied as a 5G/NR gNB Central Unit (gNB-CU), which is a logical node hosting RRC, SDAP and PDCP protocols of the gNB or RRC and PDCP protocols of the en-gNB that controls the operation of one or more gNB-DUs, and which terminates the F1 interface connected with one or more DUs (e.g., gNB-DUs) defined below.
As used herein, the terms “client device” or “user device” or “UE” include, but are not limited to, set-top boxes (e.g., DSTBs), gateways, modems, personal computers (PCs), and minicomputers, whether desktop, laptop, or otherwise, and mobile devices such as handheld computers, PDAs, personal media devices (PMDs), tablets, “phablets”, smartphones, and vehicle infotainment systems or portions thereof. As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.) and the like.
As used herein, the term “distributed unit” or “DU” refers without limitation to a distributed logical node within a wireless network infrastructure. For example, a DU might be embodied as a 5G/NR gNB Distributed Unit (gNB-DU), which is a logical node hosting RLC, MAC and PHY layers of the gNB or en-gNB, and its operation is partly controlled by gNB-CU (referenced above). One gNB-DU supports one or multiple cells, yet a given cell is supported by only one gNB-DU. The gNB-DU terminates the F1 interface connected with the gNB-CU.
As used herein, the term “DOCSIS” refers to any of the existing or planned variants of the Data Over Cable Services Interface Specification, including for example DOCSIS versions 1.0, 1.1, 2.0, 3.0 and 3.1.
As used herein, the term “headend” or “backend” refers generally to a networked system controlled by an operator (e.g., an MSO) that distributes programming to MSO clientele using client devices, or provides other services such as high-speed data delivery and backhaul.
As used herein, the terms “Internet” and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet. Other common examples include but are not limited to: a network of external servers, “cloud” entities (such as memory or storage not local to a device, storage generally accessible at any time via a network connection, and the like), service nodes, access points, controller devices, client devices, etc.
As used herein, the term “LTE” refers to, without limitation and as applicable, any of the variants or Releases of the Long-Term Evolution wireless communication standard, including LTE-U (Long Term Evolution in unlicensed spectrum), LTE-LAA (Long Term Evolution, Licensed Assisted Access), LTE-A (LTE Advanced), 4G LTE, WiMAX, VoLTE (Voice over LTE), and other wireless data standards.
As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), 3D memory, and PSRAM.
As used herein, the terms “microprocessor” and “processor” or “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
As used herein, the terms “MSO” or “multiple systems operator” refer to a cable, satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums.
As used herein, the terms “MNO” or “mobile network operator” refer to a cellular, satellite phone, WMAN (e.g., 802.16), or other network service provider having infrastructure required to deliver services including without limitation voice and data over those mediums. The term “MNO” as used herein is further intended to include MVNOs, MNVAs, and MVNEs.
As used herein, the terms “network” and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets). Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications technologies or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25, Frame Relay, 3GPP, 3GPP2, LTE/LTE-A/LTE-U/LTE-LAA, SGNR, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.).
As used herein, the term “QAM” refers to modulation schemes used for sending signals over e.g., cable or other networks. Such modulation scheme might use any constellation level (e.g. QPSK, 16-QAM, 64-QAM, 256-QAM, etc.) depending on details of a network. A QAM may also refer to a physical channel modulated according to the schemes.
As used herein, the term “server” refers to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, or other services to one or more other devices or entities on a computer network.
As used herein, the term “storage” refers without limitation to computer hard drives, DVR devices, memory, RAID devices or arrays, optical media (e.g., CD-ROMs, Laserdiscs, Blu-Ray, etc.), or any other devices or media capable of storing content or other information.
As used herein, the term “Wi-Fi” refers to, without limitation and as applicable, any of the variants of IEEE Std. 802.11 or related standards including 802.11 a/b/g/n/s/v/ac/ax, 802.11-2012/2013 or 802.11-2016, as well as Wi-Fi Direct (including inter alia, the “Wi-Fi Peer-to-Peer (P2P) Specification”, incorporated herein by reference in its entirety).
Exemplary embodiments of the apparatus and methods of the present disclosure are now described in detail. While these exemplary embodiments are described in the context of a managed network (e.g., hybrid fiber coax (HFC) cable) architecture having a multiple systems operator (MSO), digital networking capability, high-speed data (HSD) and IP delivery capability, and a plurality of client devices, the general principles and advantages of the disclosure may be extended to other types of networks and architectures that are configured to deliver digital media data (e.g., text, video, files, and/or audio), whether managed or unmanaged. Such other networks or architectures may be broadband, narrowband, wired or wireless, or otherwise.
It will also be appreciated that while described generally in the context of a network providing service to a customer or consumer (e.g., residential) end user domain, the present disclosure may be readily adapted to other types of environments including, e.g., commercial/enterprise and government/military applications. Myriad other applications are possible.
Also, while certain aspects are described primarily in the context of the well-known Internet Protocol (described in, inter alia, RFC 791 and 2460), it will be appreciated that the present disclosure may utilize other types of protocols (and in fact bearer networks to include other internets and intranets) to implement the described functionality.
Moreover, while these exemplary embodiments are described in the context of the previously mentioned wireless access nodes (e.g., gNBs) associated with or supported at least in part by a managed network of a service provider (e.g., MSO and/or MNO networks), other types of radio access technologies (“RATs”), other types of networks and architectures that are configured to deliver digital data (e.g., text, images, games, software applications, video and/or audio) may be used consistent with the present disclosure.
Other features and advantages of the present disclosure will immediately be recognized by persons of ordinary skill in the art with reference to the attached drawings and detailed description of exemplary embodiments as given below.
Cloud Digital Fragmentation and Distribution Architecture (FDA)—
Referring now to
As shown in
The requesting client device 206 may include DSTBs, home gateway devices and/or media client devices. In one embodiment, the media client device is a portable device such as a wireless-enabled tablet computer or smartphone. Alternatively, the client device may include a Smart TV or the like. The present disclosure also contemplates a household or person using two or more client devices, and therefore may have access to two or more independent communications paths to the content being distributed (e.g., via the packager 203). For example, a user may have access to a DSTB 206a, Smart TV 206b, and a tablet 206c connected to the cable modem via a wireless communications network such as a wireless LAN (e.g., Wi-Fi) 222, as well as a smartphone 206d interfacing with a wireless service provider (WSP) network such as via an LTE or LTE-A interface, the WSP network in data communication with the Internet 216 (or directly to the distribution network 202, not shown).
In one variant, the user IP-enabled client devices 206a-d may also include an MSO-authored application program (“app”) 217 operative thereon to interface with the MSO Fragmentation and Distribution Controller (FDC) 218 or other entity of the MSO network, so as to facilitate various user functions such as content re-constitution, decryption, program guides, browsing, recording, and even playback/rendering, as described in greater detail subsequently herein.
As shown, the FD architecture of
Content which is recorded, either by a user-initiated or an MSO-initiated request, is initially input into a first storage entity 205. In one embodiment, the content is placed in storage 205 as a single uncompressed copy (so as to avoid lossy compression degradation), and distributed as fragments to a plurality of user CPE 206.
As a brief aside, digital encoding utilizes one or more forms of video compression in order to economize on storage space and transmission bandwidth. Without such video compression, digital video content can require extremely large amounts of data storage capacity, making it difficult or even impossible for the digital video content to be efficiently stored, transmitted, or viewed.
Consequently, video coding standards have been developed to standardize the various video coding methods so that the compressed digital video content is rendered in formats that a majority of video decoders can recognize. For example, the Moving Picture Experts Group (MPEG) and International Telecommunication Union (ITU-T) have developed video coding standards that are in wide use. Examples of these standards include the MPEG-1, MPEG-2, MPEG-4, ITU-T H.261, and ITU-T H.263 standards. The MPEG-4 Advanced Video Coding (AVC) standard (also known as MPEG-4, Part 10) is a newer standard jointly developed by the International Organization for Standardization (ISO) and ITU-T. The MPEG-4 AVC standard is published as ITU-T H.264 and ISO/IEC 14496-10. For purposes of clarity, MPEG-4 AVC is referred to herein as H.264.
As noted above, content often arrives from content sources at a content distribution network (CDN) in a digitally encoded format, such as MPEG-2. The MPEG-2 standard is ubiquitous and specifies, inter alia, methodologies for video and audio data compression and encoding. Specifically, in accordance with the MPEG-2 standard, video data is compressed based on a sequence of GOPs, made up of three types of picture frames: intra-coded picture frames (“I-frames”), forward predictive frames (“P-frames”) and bi-directionally predictive frames (“B-frames”). Each GOP may, for example, begin with an I-frame which is obtained by spatially compressing a complete picture using discrete cosine transform (DCT). As a result, if an error or a channel switch occurs, it is possible to resume correct decoding at the next I-frame. The GOP may represent additional frames by providing a much smaller block of digital data that indicates how small portions of the I-frame, referred to as macroblocks, move over time.
MPEG-2 achieves its compression by assuming that only small portions of an image change over time, making the representation of these additional frames compact. Although GOPs have no relationship between themselves, the frames within a GOP have a specific relationship which builds off the initial I-frame.
In a traditional content delivery scheme (e.g., for a cable network), the compressed video and audio data are carried by continuous elementary streams, respectively, which are broken into access units or packets, resulting in packetized elementary streams (PESs). These packets are identified by headers that contain time stamps for synchronizing, and are used to form MPEG-2 transport streams, which utilize MPEG-2 encoded video content as their payload.
However, despite its ubiquity, MPEG-2 has salient limitations, especially relating to transmission bandwidth and storage. The more recently developed H.264 video coding standard is able to compress video much more efficiently than earlier video coding standards, including MPEG-2. H.264 is also known as MPEG-4 Part 10 and Advanced Video Coding (AVC). H.264 exhibits a combination of new techniques and increased degrees of freedom in using existing techniques. Among the new techniques defined in H.264 are 4×4 discrete cosine transform (DCT), multi-frame prediction, context adaptive variable length coding (CAVLC), SI/SP frames, and context-adaptive binary arithmetic coding (CABAC). The increased degrees of freedom come about by allowing multiple reference frames for prediction and greater macroblock flexibility. These features add to the coding efficiency (at the cost of increased encoding and decoding complexity in terms of logic, memory, and number of operations). Notably, the same content encoded within H.264 can be transmitted with only roughly half (50%) of the requisite bandwidth of a corresponding MPEG-2 encoding, thereby providing great economies in terms of CDN infrastructure and content storage.
Digital encoding also advantageously lends itself to transcoding of content. As used herein, the term “transcoding” refers generally to the process of changing content from one encoding to another. This may be accomplished for example by decoding the encoded content, and then re-encoding this into the target format. Transcoding can also accomplish the encoding of content to a lower bitrate without changing video formats, a process that is known as transrating.
Transcoding is used in many areas of content adaptation; however, it is commonly employed in the area of mobile devices such as smartphones, tablets, and the like. In such mobile applications, transcoding is essential due to the diversity of mobile devices. This diversity effectively requires an intermediate state of content adaptation, so as to ensure that the source content will adequately present or “render” on the target mobile device.
Delivery of encoded content may also utilize a technology known as “adaptive bitrate streaming.” Adaptive bitrate (ABR) streaming is a technique to distribute program content over a large distributed network in an efficient manner based on, inter alia, available streaming capacity. In one implementation, multiple bitrates of a particular piece of content are available to stream to a viewer, and the selection of the bitrate is based on current network conditions. This means that when there is greater bandwidth availability, a larger bitrate version of the content may be selected. If available bandwidth narrows, a lower bitrate (i.e., smaller) version of the content may be selected to provide a seamless user experience. Typical ABR streaming solutions include e.g., DASH (dynamic adaptive streaming over HTTP), Microsoft Smooth Streaming, and Adobe HTTP Dynamic Streaming, which are further particularly adapted for HTTP-based environments such as Internet delivery. ABR streaming protocols are typically codec-agnostic (e.g., may use content encoded in e.g., H.264, MPEG-2, or others), and are notably distinguishable from such underlying encoding.
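As a simplified and purely hypothetical sketch of such bitrate selection (the bitrate ladder values, safety factor, and function name are assumptions, and no particular ABR protocol's algorithm is represented):

    # Hypothetical sketch of adaptive bitrate selection: choose the highest
    # available encoding whose bitrate fits within the measured throughput.
    BITRATE_LADDER_KBPS = [400, 1200, 2500, 5000, 8000]  # illustrative only

    def select_bitrate(measured_bandwidth_kbps, safety_factor=0.8):
        """Return the largest ladder entry not exceeding a fraction of the
        currently measured throughput; fall back to the lowest rung."""
        budget = measured_bandwidth_kbps * safety_factor
        candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
        return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]

    print(select_bitrate(6000))  # ample bandwidth -> 2500 kbit/s selected
    print(select_bitrate(900))   # constrained bandwidth -> 400 kbit/s selected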
Returning again to the FD architecture 200 of
In one embodiment, both the fragmentation scheme(s) and indexing for each content element are determined/maintained by the FDC 218; notably, the FDC may alter one or both of the scheme(s) for each different content element, depending on e.g., one or more parameters or attributes of (i) the content element itself (e.g., content encoded in a first format, being of a first size in GOPs, being of a first topical category, having a first geographic relevance, etc., may be fragmented/indexed differently than another content element having different attributes), and/or (ii) the target CPE 206 or other “edge” storage medium (e.g., edge caches at hubs) to which the fragments are to be distributed. As but one example, the granularity of fragmentation (e.g., into N “chunks”) may be varied depending on the topological location and/or intervening PHY bearers of the network to those distribution points; e.g., a mobile device which receives the fragments initially via an LTE/LTE-A bearer and WSP 207 may be better optimized by receiving smaller sized chunks as opposed to a DSTB served by in-band or DOCSIS DL channel(s), due to e.g., mobility considerations for the mobile device.
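The following is a minimal sketch of one possible granularity selection of the type described above; the bearer categories, chunk sizes, and scaling rule are illustrative assumptions only and do not limit the logic of the FDC 218:

    # Hypothetical sketch: choose a fragment ("chunk") size per content
    # element based on the target edge device's bearer and the element size.
    def select_chunk_size_mb(content_size_mb, target_bearer):
        """Smaller chunks for mobile bearers (e.g., LTE/LTE-A via a WSP),
        larger chunks for fixed bearers (e.g., in-band or DOCSIS)."""
        base_mb = 4 if target_bearer in ("LTE", "LTE-A") else 32
        return min(base_mb, max(1, content_size_mb // 16))

    def fragment_count(content_size_mb, target_bearer):
        chunk = select_chunk_size_mb(content_size_mb, target_bearer)
        return -(-content_size_mb // chunk)  # ceiling division

    print(fragment_count(4000, "DOCSIS"))  # e.g., 125 chunks of 32 MB
    print(fragment_count(4000, "LTE"))     # e.g., 1000 chunks of 4 MB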
Also shown in
A packager entity 203 is also in data communication with the key store 214, the fragment store 210, and the fragmenter 208 (and the FDC 218). The packager in one implementation is configured to access relevant fragments or chunks within the fragmentation DB 210, apply necessary encryption as directed by the FDC 218, store the relevant encryption data in the key store 214, and perform any other relevant operations necessary to distribute the packaged (and optionally encrypted) fragments according to the distribution scheme specified by the FDC 218. For example, the packager may also include a manifest file generation process, which generates manifest files to be sent to individual CPE 206 so as to enable, inter alia, re-constitution of the parent content element by the CPE for rendering.
In one implementation, the original content, the degree of fragmentation of the content, and the content key database are all controlled by the MSO via the FDC 218. Consider an example where multiple users pre-order and download the same VOD content element; users in a given geographical or topological portion of the network (which can be for instance as large as a subdivision or MDU or even a service group) will each receive an encrypted portion(s) of the parent content element, and prior to (or as) the content is being decoded and rendered at a given CPE, that CPE will fetch the remaining fragments or chunks from its designated “neighbors” in the topological portion (e.g., same service group) for a seemingly continuous stream of content. In that all storage of needed fragments is at or proximate to the edge of the MSO network, latencies and bandwidth limitations associated with prior art URL-based approaches of accessing origin servers or other content stores are advantageously avoided.
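A minimal sketch of such edge re-constitution is given below; the manifest structure, fetch interface, and names are hypothetical assumptions rather than a prescribed implementation:

    # Hypothetical sketch: a CPE holds some fragments locally and fetches
    # the remainder from designated "neighbor" CPE in the same topological
    # group, per a manifest supplied by the packager/FDC.
    def reconstitute(manifest, local_store, fetch_from_neighbor):
        """manifest: ordered (fragment_id, neighbor_address) entries;
        local_store: dict of fragment_id -> bytes held by this CPE;
        fetch_from_neighbor: callable(address, fragment_id) -> bytes."""
        assembled = bytearray()
        for fragment_id, neighbor in manifest:
            data = local_store.get(fragment_id)
            if data is None:
                data = fetch_from_neighbor(neighbor, fragment_id)
            assembled.extend(data)
        return bytes(assembled)

    # Example: fragment 0 is held locally; fragment 1 comes from a neighbor.
    local = {0: b"abc"}
    neighbor_data = {("cpe_b", 1): b"def"}
    fetch = lambda addr, frag: neighbor_data[(addr, frag)]
    print(reconstitute([(0, "cpe_a"), (1, "cpe_b")], local, fetch))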
As described in greater detail elsewhere herein, one key premise of the architecture 200 of
It will be appreciated that while local network (e.g., geographical or topological group or network) membership may remain static across multiple content elements (e.g., the same CPE always consult other CPE in the same group to obtain the needed fragments), such membership may be dynamically varied, including on a per-content element basis. Moreover, a design level of redundancy can be built in (and dynamically varied), such that each given node has two or more choices or available nodes for the fragment(s) it needs. Hence, in one variant, “complete” redundancy is maintained, such that every node has two or more other nodes in its membership group from which it can obtain each “missing” fragment for a given content element. Alternatively, “thinner” redundancy/coverage can be utilized, such as where one redundant copy of each fragment is available to each edge node (e.g., CPE 206). In yet another implementation, less-than-complete redundancy can be maintained by the node's membership group, such as where less than all fragments of a given content element are maintained (e.g., lower-interest fragments such as those at the very beginning or end of a content element, those having very low ME (motion estimation) change on an inter-frame or inter-GOP basis and hence being indicative of little “lost action,” etc.).
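One possible fragment placement with a configurable level of redundancy of the type described above may be sketched as follows; the round-robin assignment and names are illustrative assumptions only:

    # Hypothetical sketch: distribute N fragments across a membership group
    # so that each fragment is held by "redundancy" distinct edge nodes.
    def place_fragments(fragment_ids, node_ids, redundancy=2):
        """Return a dict of fragment_id -> list of node_ids holding it."""
        placement = {}
        n = len(node_ids)
        for i, frag in enumerate(fragment_ids):
            placement[frag] = [node_ids[(i + r) % n]
                               for r in range(min(redundancy, n))]
        return placement

    # Example: 6 fragments over 4 CPE, with one redundant copy of each.
    print(place_fragments(range(6), ["cpe_a", "cpe_b", "cpe_c", "cpe_d"]))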
In one embodiment, the FDC 218 is configured such that as the number of users/subscribers streaming the fragmented content increases, the fragmentation ratio (i.e., the number of fragments per content element) is increased proportionally, as is the distribution scope of the fragments, thereby, inter alia, providing increased security in that more fragments, each ostensibly carrying its own unique encryption, will need to be “broken” (e.g., via brute force or other decryption techniques), in effect making obtaining and decrypting all content fragments untenable. As a simple illustration, consider a movie that is fragmented into 2 unique (or at least partly unique) parts. These two parts can be distributed to two different entities; further distribution of the same parts to additional entities may, while aiding in redundancy, reduce security, in that a surreptitious entity can obtain the necessary (two) constituent parts from more places, albeit perhaps with different encryption. Conversely, the same content element fragmented into 10 parts, and distributed to 10 different entities, makes such surreptitious use significantly harder, especially where the 10 parts are non-duplicative of one another in terms of encryption (or content).
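The scaling behavior described above may be sketched as follows; the linear rule, default values, and cap are illustrative assumptions only:

    # Hypothetical sketch: the fragmentation ratio (fragments per content
    # element) grows with the number of users streaming the element.
    def fragmentation_ratio(num_streaming_users, base_fragments=2,
                            per_user=1, cap=1024):
        """More consumers -> more (uniquely encrypted) fragments to 'break'."""
        return min(cap, base_fragments + per_user * num_streaming_users)

    print(fragmentation_ratio(0))  # 2 fragments in the trivial case above
    print(fragmentation_ratio(8))  # 10 fragments as the audience grows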
Similarly, with the increase in the number of users/subscribers using the service (e.g., belonging to a particular membership group), more copies of the same fragment will be available to a given user, resulting in redundancy should one or a group of subscribers lose connectivity, as discussed supra.
It will further be appreciated that while certain embodiments are described herein as being limited to particular finite-sized groups (e.g., based on service group membership, topological or geographic location, etc.), the present disclosure contemplates other use cases; e.g., where the entire MSO topology is utilized as the basis of the group or pool from which content fragments can be drawn. This approach has the benefits of, inter alia, large degrees of redundancy and security; however, depending on the geographic/topological spread of the various fragments, QoS/QoE requirements may not be met in all cases.
In yet another embodiment, perpetual or non-expiring content fragment keys are assigned for any users or subscribers who purchase a VOD or other content asset, and as such the asset can be played back on any device by that user. In one variant, the fragments of the content asset are assembled on the purchasing user's CPE after purchase, such that they do not need to obtain fragments from other CPE or edge nodes for subsequent playback. Alternatively, the FDC 218 may direct that the fragments of the purchased asset remain distributed; however, they can be tagged (or other mechanisms used) to ensure unimpeded or open access by the purchasing user(s)/subscribers. For instance, where multiple subscribers purchase the same asset (e.g., a new first-run movie), fragments for that asset can be made accessible according to a whitelist listing the MAC address or other unique identifying user/CPE information associated with each fragment (or groups of fragments), such that unimpeded access is provided to the purchasers while maintaining redundancy and security via distribution of the fragments.
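A minimal sketch of the whitelist-based access check described above is given below; the data structure, identifiers, and MAC address values are hypothetical:

    # Hypothetical sketch: each fragment (or group of fragments) carries a
    # whitelist of MAC addresses or other unique CPE identifiers that are
    # permitted unimpeded access to a purchased asset.
    def may_access(fragment_whitelists, fragment_id, requester_mac):
        """fragment_whitelists: dict of fragment_id -> set of permitted MACs."""
        return requester_mac in fragment_whitelists.get(fragment_id, set())

    whitelists = {"frag_007": {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}}
    print(may_access(whitelists, "frag_007", "00:1a:2b:3c:4d:5e"))  # True
    print(may_access(whitelists, "frag_007", "00:de:ad:be:ef:00"))  # False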
It will further be appreciated that the FDC 218 (including via the corresponding CUfe) may communicate with the various CPE 206 and edge devices (e.g., edge caches disposed at hubs, etc.) to (i) maintain data on the topological location of content element fragments within the network; (ii) maintain data on access to the various fragments by various CPE (i.e., to generate a “heat map” and related data for determination of optimal positioning of the individual fragments such as to reduce latency and meet QoS/QoE targets), and (iii) move or relocate individual fragments based on the data of (ii), changes in network topology due to e.g., maintenance or outages/equipment failure, or yet other considerations such as the geographic/temporal relevance of the content, and any “blackouts” that may be imposed or lifted as a function of time.
In one model, the FDC 218 maintains data on the specific locations of fragmented content elements which are available to the target user/subscriber CPE 206 requesting the content. When a specific fragmented content unit is required at the target CPE, the FDC sends control information (e.g., via control plane messaging or other technique) to one or more source CPE which have the required content fragment(s), as well as to the target CPE that is requesting the fragment(s). In some implementations, the FDC may also send control data to the CUfe or other edge node responsible for coordinating the re-constitution of the content element, in effect to establish a session between the two or more CPE (source(s) and target(s)). In one variant, the requested fragment(s) is/are sent by the source CPE and relayed to the target CPE through the DU 406 associated with a given NR CUfe 404, as described below with respect to
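For purposes of illustration only, the control-plane messaging described above may be sketched as follows; the message fields, types, and identifiers are hypothetical assumptions, and no particular transport is implied:

    # Hypothetical sketch: the FDC directs a source CPE to serve a fragment
    # and the target CPE to expect it, within a coordinated session.
    def build_transfer_messages(fragment_id, source_cpe, target_cpe, session_id):
        to_source = {"type": "SERVE_FRAGMENT", "session": session_id,
                     "fragment": fragment_id, "deliver_to": target_cpe}
        to_target = {"type": "EXPECT_FRAGMENT", "session": session_id,
                     "fragment": fragment_id, "receive_from": source_cpe}
        return to_source, to_target

    src_msg, tgt_msg = build_transfer_messages("frag_012", "cpe_17", "cpe_42", "sess_9001")
    print(src_msg)
    print(tgt_msg)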
It will also be appreciated that in some implementations, the network (cloud) fragment store 210 can be utilized as a CPE or edge cache proxy (albeit disposed further into the network core, such as at an MSO headend). For instance, where the service membership group (e.g., a topologically local grouping of CPE acting as fragment sources/sinks with one another) is topologically disposed proximate to the store 210, such that QoE/QoS requirements can be met through delivery of fragments to sink CPE 206 within the membership group from the store 210, then the store 210 can be instructed by the FDC 218 to supply fragments to the sink CPE, as if the store were just another CPE in the membership group. This approach can be particularly useful where, for instance, the membership group has few members (for whatever reason, including e.g., geographic isolation or lack of user/subscriber CPE with enough indigenous capacity); additional “genetic diversity” from a redundancy/security aspect can be added using the super-CPE capability afforded by the store 210 and other cloud-based delivery systems. Moreover, in cases where the fragments supplied by the cloud store 210 and related components may be supplied with latency (i.e., use of a look-ahead fragment buffering approach), the store 210 and related components may not need to obey the stringent QoS/QoE requirements that can be supported through use of local CPE and the high symmetric bandwidth, low latency connections (e.g., 5G NR links) described below.
The network architecture 200 of
First, it is assumed that the clients (e.g., CPE or nSTBs) have an available mean upload speed U Mbit/s (e.g. 100 Mbit/s). Further, it is assumed that no more than u % (u being less than 100%) of this uplink capacity from contributing nSTBs may be used for sharing content with other subscribers; however, it will be appreciated that the value of u may be varied on a per-nSTB, per-cluster, per-subgroup, etc. basis, including dynamically as a function of time or other parameters such as network available bandwidth or D (described below); that is, u may scale with D and/or U.
Next, it is assumed that client devices or nSTBs each have a given download speed D Mbit/s (e.g. 1000 Mbit/s), which may be the same or different than the upload speed U. No more than d % (d being less than 100%) of this downlink capacity of the receiving client devices may be used for downloading content from other subscribers; however, it will be appreciated that the value of d may be varied on a per-nSTB, per-cluster, per-subgroup, etc. basis, including dynamically as a function of time or other parameters such as network available bandwidth or U (described above); that is, d may scale with D and/or U.
One or more given clusters of nSTBs or client devices are also defined as Ci (Ci>1), where i is the index identifying a particular cluster within the system 250, and the value of Ci is the number of nSTBs in that ith cluster. In one implementation, each cluster is nSTB-specific (i.e., not only does Ci specify how many nSTBs are in the cluster, but also their particular identities, e.g., via the FIT, which maps nSTBs to fragments).
It is further assumed that each cluster Ci is “self-sufficient” in terms of uplink capacity for all of the clients in that cluster to be consuming video content simultaneously. In one implementation, sufficiency is determined in relation to another quantity such as downlink capacity for that cluster (i.e., that there is sufficient uplink capacity for the cluster as a whole relative to the downlink capacity, thereby obviating the need for additional uplink). For example, the total uplink capacity within a given cluster of nSTB (Ci)=a·Ci·u·U Mbit/s, while the total downlink capacity required when all active nSTB are consuming video content=a·Ci·V Mbit/s. For a given cluster to be self-sufficient, downlink demand should be ≤80% of total uplink capacity, per Eqn. (1):
a·Ci·V≤80%·a·Ci·u·U Eqn. (1)
and therefore u·U≥V/0.8 Mbit/s.
A given cluster Ci of clients may have some percentage (e.g., a %) of then-active devices which are able to contribute to the total uplink capacity. As devices become active or inactive (due to e.g., equipment failure, power-up/down, etc.), the value of a may vary.
Another assumption of the exemplary model implementation is that the maximum bitrate required for a given class of designated content (e.g., video)=V Mbit/s, with a stream or file duration=T seconds. Maximum dimensioned downlink demand is e.g., 80% of total uplink capacity and the maximum downlink throughput is a prescribed percentage (e.g., 80%) of total downlink capacity (D). The “≤” relationship and the prescribed fraction (e.g., 80% factor) assure self-sufficiency, including in environments where the supporting network protocols (e.g., Ethernet IEEE Std. 802.3) cannot continuously sustain data throughput at the peak data rate. It is further assumed that minimum media (e.g., video) buffer playback duration=P seconds, and the buffer download (buffer fill) time=L seconds.
Moreover, it is assumed that a media “library” or total number of video titles=v files. The exemplary model also creates various identifiers (IDs) for use in the fragmentation, distribution, and re-assembly processes. Specifically, a global network nSTB identifier (Global_nSTB_ID) is defined per Eqn. (2):
Global_nSTB_ID=Node_ID·2^10+nSTB_ID (Total 32 bits) Eqn. (2)
wherein the Node_ID (e.g., the particular servicing node 418 within the topology; see
A global network fragment identifier (Global_Fragment_ID) is defined per Eqn. (3):
Global_Fragment_ID=Title_ID·2^32+Fragment_ID·2^10+Redundancy_ID (64 bits) Eqn. (3)
wherein the Redundancy_ID is 10 bits (2^10=1K possible combinations), the Fragment_ID is 22 bits (≈4.2 million possible combinations), and the Title_ID is 32 bits (2^32≈4.3 billion possible combinations).
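By way of illustration, the identifier packing of Eqns. (2) and (3) amounts to simple bit shifting. The following Python sketch assumes the field widths stated above (with the nSTB_ID occupying the low 10 bits of the 32-bit identifier per Eqn. (2)); the function names and example values are illustrative only:

```python
def global_nstb_id(node_id: int, nstb_id: int) -> int:
    """Eqn. (2): Node_ID in the upper bits, nSTB_ID in the low 10 bits (32 bits total)."""
    assert 0 <= nstb_id < 2**10
    return (node_id << 10) | nstb_id

def global_fragment_id(title_id: int, fragment_id: int, redundancy_id: int) -> int:
    """Eqn. (3): Title_ID (32 bits), Fragment_ID (22 bits), Redundancy_ID (10 bits) -> 64 bits."""
    assert 0 <= redundancy_id < 2**10 and 0 <= fragment_id < 2**22 and 0 <= title_id < 2**32
    return (title_id << 32) | (fragment_id << 10) | redundancy_id

# Example: title 7, fragment 42, redundancy copy 3; nSTB 5 served by node 2.
print(hex(global_nstb_id(2, 5)))          # 0x805
print(hex(global_fragment_id(7, 42, 3)))  # 0x70000a803
```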
The minimum downlink speed required at a participating nSTB is defined by Eqn. (4):
D≥(P·V)/(d·L·80%) Eqn. (4)
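To illustrate the dimensioning relations of Eqns. (1) and (4), the following Python sketch computes the minimum uplink and downlink speeds for a given set of parameters, using the same 80% headroom factor; the helper names are illustrative only, and the printed figures correspond to Scenario 1 described later herein:

```python
def min_uplink_mbps(V, u, headroom=0.8):
    """Minimum subscriber uplink speed U such that u*U >= V/headroom (per Eqn. (1))."""
    return V / (u * headroom)

def min_downlink_mbps(V, d, P, L, headroom=0.8):
    """Minimum subscriber downlink speed D per Eqn. (4): D >= (P*V)/(d*L*headroom)."""
    return (P * V) / (d * L * headroom)

# 8K HDR at 60 fps: V = 150 Mbit/s, u = d = 50%, P = 2 s buffer, L = 1 s fill time.
print(min_uplink_mbps(150, 0.5))          # 375.0 Mbit/s
print(min_downlink_mbps(150, 0.5, 2, 1))  # 750.0 Mbit/s
```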
In the exemplary implementation, the content (e.g., video or other media) file of size FC is divided (distributed within the network) so that no single user or nSTB may have all the fragments. The minimum fragment size (Fmin) is therefore given by Eqn. (5):
Fmin≤FC/2 Mbits Eqn. (5)
Conversely, the maximum fragment size Fmax is defined by the initial video buffer size and the capacity of the uplink, such that:
Fmax≤Minimum(P·V,u·U·L)Mbits Eqn. (6)
A “typical” fragment size is given by Eqn. (7):
Ft=Minimum(P·V,u·U·L,FC/2)Mbits Eqn. (7)
A minimum number of fragments per video file is defined per Eqn. (8):
fV≥T·V/Ft Eqn. (8)
Further, a redundancy factor R is defined for e.g., storage failure contingency in active nSTBs (such as where all or a portion of an HDD or SSD fails or the nSTB is otherwise unavailable). In one variant, the content fragments and nSTBs are divided into R lists, where R is the redundancy factor, and the list index r is determined by:
r=Global_Fragment_ID Mod R+1(for content fragments); and Eqn. (9)
r=Global_nSTB_ID Mod R+1(for nSTBs). Eqn. (10)
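By way of illustration, Eqns. (9) and (10) amount to a simple modulo assignment; in the following Python sketch (illustrative only), fragments and nSTBs sharing the same list index r are associated with one another, consistent with the mapping of Table 6:

```python
def fragment_list_index(global_fragment_id: int, R: int) -> int:
    """Eqn. (9): r = Global_Fragment_ID mod R + 1."""
    return global_fragment_id % R + 1

def nstb_list_index(global_nstb_id: int, R: int) -> int:
    """Eqn. (10): r = Global_nSTB_ID mod R + 1."""
    return global_nstb_id % R + 1

# With R = 3, a fragment whose ID maps to r = 2 is stored on the nSTBs whose IDs also map to r = 2.
R = 3
print(fragment_list_index(0x70000A803, R), nstb_list_index(0x805, R))
```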
In this exemplary implementation, each file also has a Fragment Index Table (FIT) that maps all fragments (regardless of r index) to one or more recipient nSTBs. As described in greater detail below with respect to
Table 4 shows how the nSTBs are mapped to different list index r for a sample network with 2 Nodes and 5 nSTB connected to each Node.
Table 5 shows how each list index r is then associated with a different number of nSTB, according to Eqn. (10):
Table 6 shows how 3 exemplary content titles, each with 4 fragments, and 3 instances of each fragment for redundancy (r=3) are mapped to nSTB by common list index r value:
In this fashion, each participating nSTB includes data relating to (i) local storage of certain fragments of a given content file, and (ii) non-local (e.g., other nSTB) storage of the same and/or other fragments of the same given content file. Hence, each individual nSTB can access its own local database to determine where each of the individual fragments of the given content file can be obtained (whether locally, or at one or more other nSTBs within the cluster).
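By way of illustration only (the disclosure does not prescribe a concrete schema for the FIT or FID), the following Python sketch shows an nSTB partitioning a hypothetical FIT into fragments it already holds locally and fragments that must be fetched from peer nSTBs:

```python
# Hypothetical per-file FIT: fragment ID -> IDs of nSTBs holding a copy (schema is illustrative).
fit = {
    101: [1025, 2050],   # fragment 101 held redundantly by two nSTBs
    102: [1025],
    103: [2050],
}
local_nstb = 1025

# Fragments already in local storage vs. fragments to request from other nSTBs in the cluster.
local_fragments = [f for f, holders in fit.items() if local_nstb in holders]
remote_sources = {f: holders for f, holders in fit.items() if local_nstb not in holders}

print(local_fragments)  # [101, 102]
print(remote_sources)   # {103: [2050]}
```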
Moreover, in another variant, a given nSTB can make “calls” (e.g., transmit a request message to prompt a reply from one or more other entities, such as another nSTB in the cluster, or a network supervisory or management process having a “master” database of FITs such as the fragmenter 208 and master FID 209 in
Moreover, such capability can be leveraged if it is determined that it is easier/more cost-effective to upgrade network infrastructure to enable lower latency (and network-based buffering), as opposed to adding more capability on the multitude of nSTBs in service.
Yet further, the foregoing functionality allows for a much higher order of redundancy to, inter alia, support seamless user experience when streaming.
Consider also the scenario where a new content file has just become available (e.g., at a central content repository of the network), and the end-user or subscriber knows that this content should become available at that time, e.g., because the release of the content has been publicized in advance. Since there is insufficient time for the network content management and distribution elements to pre-load or populate the nSTBs with encrypted fragments of the new content, requesting nSTBs can obtain, from the Fragment Index Database, the section of the FIT applicable to the desired (new) content during such periods. Since the fragments and nSTB are identified in the FIT using their global IDs, the requesting nSTB is not restricted to fetching missing fragments from other nSTBs connected to the same Node.
As shown in
It will be appreciated, however, that other approaches for mapping may be used, whether alone or in combination with the foregoing, including for example (i) allocating two or more sequential file fragments (e.g., Fragmenta1 and Fragmentb1 in
In the exemplary implementation, the number of unique fragments in a given cluster Ci is given by Eqn. (11):
fC=v·fV Eqn. (11)
The total number of fragments stored per nSTB is given by Eqn. (12):
fS=fC·R/Ci Eqn. (12)
Accordingly, a total amount of data storage per client (e.g., nSTB) S is given by Eqn. (13),
S=F·fS/(8·1024) GB Eqn. (13)
In the exemplary embodiment, the FIT 251 for a given content file is distributed as individual FITs 251a-n as shown, to all nSTBs which are recipients of the fragments of that file. The receiving nSTB stores the FIT 251a-n within its own Fragment Index Database (FID) in local or attached storage (e.g., HDD or SSD or other memory device) for use during reconstitution/reassembly. In this manner, unnecessary duplication is avoided (i.e., nSTBs only have FITs for content files for which they possess or will possess fragments). It will be appreciated, however, that alternatively, FITs can be "broadcast" or "multicast" to wider subsets of the nSTB population, such as e.g., for the purpose of performing local or edge analytics based on the data of where all fragments of a given content file (or set of files) are stored, and not merely just those to which a given nSTB's fragments relate. For instance, for purposes of most efficiently pulling fragments from a given one or more nSTBs, it may be desirable for the requesting (target) nSTB to know which of the candidate source nSTBs is most heavily loaded or encumbered, such as by determining for how many different content files it maintains fragments. Small, lightly-loaded fragment repositories that are "local" to the target nSTB may in fact be better candidates than distant, mass-fragment repositories for purposes of obtaining one or more fragments within QoS and/or latency requirements.
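As a purely illustrative expression of such a source-selection heuristic (the scoring, field names, and example values below are hypothetical and not part of the disclosure), a requesting nSTB might simply prefer the topologically closest, most lightly loaded holder of a needed fragment:

```python
def pick_source(candidates):
    """Prefer topologically close, lightly loaded fragment holders (toy heuristic only)."""
    # candidates: dicts with hop_count (topological distance) and titles_hosted (load proxy).
    return min(candidates, key=lambda c: (c["hop_count"], c["titles_hosted"]))

candidates = [
    {"nstb_id": 11, "hop_count": 1, "titles_hosted": 12},   # local, lightly loaded
    {"nstb_id": 87, "hop_count": 5, "titles_hosted": 900},  # distant mass-fragment repository
]
print(pick_source(candidates)["nstb_id"])  # 11
```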
Alternatively, broader distribution of FITs may be used for FIT/FID redundancy purposes (i.e., so that the “big picture” mapping of all content element files and their fragments can be reconstituted in the event of e.g., fragmenter 208/database 209 corruption or failure via access to local (nSTB) stores of FITs).
Referring again to
Note that in the case where a given decoding/rendering nSTB maintains no fragments for a given content file, it downloads a complete FIT from e.g., the network database 209 (or alternatively another local nSTB possessing the complete FIT for that file), and then invokes the foregoing procedure to obtain all fragments for that file (which are sourced from at least two different nSTBs within the cluster). Alternatively, as described above, the target nSTB may already possess the FIT for a file for which it has no fragments (e.g., per the broadcast/multicast mechanisms), thereby obviating the need to contact the network database 209.
The following exemplary scenarios further illustrate application of the foregoing principles.
Scenario 1—
In a first scenario, an 8K HDR (high dynamic range), 60 frames per second (fps) movie-length video is streamed. As a brief aside, 8K resolution, or 8K UHD, is the greatest current ultra-high definition television (UHDTV) resolution in digital television and digital cinematography. The term “8K” refers to the horizontal resolution of 7,680 pixels, forming the total image dimensions of (7680×4320), otherwise known as 4320p (based on a 16:9 aspect ratio).
The 8K streaming HDR video at 60 fps consumes V=150 Mbit/s. If no more than u=50% of a user's upload capacity should be used to contribute content to other subscribers, a subscriber's uplink speed should be U≥375 Mbit/s. Likewise, if no more than d=50% of a user's download capacity should be used to initially buffer a video stream, and if a 2-second video playback buffer (P=2) is to be downloaded in 1 second for responsiveness (L=1), then a subscriber's downlink speed should be D≥750 Mbit/s. The maximum fragment size Fmax=Minimum (300, 187.5) Mbits; therefore, Fmax=187.5 Mbits (or ≈23.44 MB).
Given a 90-minute movie video file duration, then T=5400 seconds. The minimum number of fragments per video file fV=5400·150/187.5=4320. With an assumed 1000 titles (v=1000) stored with redundancy factor 2 (R=2) in a cluster of 1000 nSTB (Ci=1000), the total fragments stored per nSTB=1000·4320·2/1000, fS=8640, thereby requiring a total storage per nSTB=187.5·8640/(8·1024), or S=197.8 GB per nSTB.
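By way of illustration, the following Python sketch chains Eqns. (6) through (13) (with fC=v·fV per Eqn. (11)) and reproduces the per-nSTB storage figure of this scenario; the function and parameter names are illustrative only:

```python
import math

def storage_per_nstb_gb(V, u, U, P, L, T, v, R, Ci):
    """Estimate per-nSTB storage (GB) by chaining Eqns. (6)-(8) and (11)-(13)."""
    Fc = T * V                          # content file size, Mbits
    Ft = min(P * V, u * U * L, Fc / 2)  # typical fragment size, Eqn. (7)
    fV = math.ceil(T * V / Ft)          # fragments per title, Eqn. (8)
    fC = v * fV                         # unique fragments per cluster, Eqn. (11)
    fS = fC * R / Ci                    # fragments stored per nSTB, Eqn. (12)
    return Ft * fS / (8 * 1024)         # storage per nSTB in GB, Eqn. (13)

# Scenario 1 (8K HDR / 60 fps):
print(round(storage_per_nstb_gb(V=150, u=0.5, U=375, P=2, L=1,
                                T=5400, v=1000, R=2, Ci=1000), 1))  # 197.8
# Scenario 2 (full-HD / 30 fps), described below:
print(round(storage_per_nstb_gb(V=8, u=0.5, U=20, P=2, L=1,
                                T=5400, v=1000, R=2, Ci=1000), 1))  # ~10.5 (quoted as ~10.6 herein)
```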
Scenario 2—
In a second exemplary scenario, a full-HD 30 fps movie-length video is streamed. Assuming that full-HD at 30 fps consumes V=8 Mbit/s, and if no more than u=50% of a user's upload capacity should be used to contribute content to other subscribers, then a subscriber's uplink speed should be U≥20 Mbit/s. If no more than d=50% of a user's download capacity should be used to initially buffer a video stream, and if a 2-second video playback buffer (P=2) is to be downloaded in 1 second for responsiveness (L=1), then a subscriber's downlink speed should be D≥40 Mbit/s. The maximum fragment size F=Minimum (16, 10) Mbits; with an F=10 Mbits (=1.25 MB), a 90-minute movie video file duration T=5400 seconds, and the minimum number of fragments per video file fV=4320. Assuming 1000 titles (v=1000) stored with redundancy factor 2 (R=2) in a cluster of 1000 nSTB (Ci=1000), the total fragments stored per nSTB=1000·4320·2/1000, fS=8640, and hence the total storage per nSTB=10·8640/(8·1024), S≈10.6 GB. It is noted that this comparatively low value allows for one or more other factors to be expanded (e.g., use of a higher redundancy factor R, the ability to have more titles v in the library, and/or others), since 10.6 GB of storage is well below that typically found on current nSTB and other mass storage devices (whether SSD or HDD).
Distributed FDC/gNB Architectures—
Referring now to
As a brief aside, and referring to
Accordingly, to implement e.g., the Fs interfaces 308, 310 and other user/control plane functions, a (standardized) F1 interface is employed. It provides a mechanism for interconnecting a gNB-CU 304 and a gNB-DU 306 of a gNB 302 within an NG-RAN, or for interconnecting a gNB-CU and a gNB-DU of an en-gNB within an E-UTRAN. The F1 Application Protocol (F1AP) supports the functions of F1 interface by signaling procedures defined in 3GPP TS 38.473. F1AP consists of so-called “elementary procedures” (EPs). An EP is a unit of interaction between gNB-CU and gNB-DU. These EPs are defined separately and are intended to be used to build up complete messaging sequences in a flexible manner. Generally, unless otherwise stated by the restrictions, the EPs may be invoked independently of each other as standalone procedures, which can be active in parallel.
Within such an architecture 300, a gNB-DU 306 (or ngeNB-DU) is under the control of a single gNB-CU 304. When a gNB-DU is initiated (including power-up), it executes the F1 SETUP procedure (which is generally modeled after the above-referenced S1 SETUP procedures of LTE) to inform the controlling gNB-CU of, inter alia, the number of cells (together with the identity of each particular cell) in the F1 SETUP REQUEST message. The gNB-CU at its discretion may choose to activate some or all cells supported by that gNB-DU, and even alter certain operational parameters relating thereto, indicating these selections/alterations in the F1 SETUP RESPONSE message. The identity of each cell to be activated is also included in F1 SETUP RESPONSE.
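Purely as an illustration of this exchange (the actual F1AP messages are ASN.1-encoded per 3GPP TS 38.473; the Python structures and field names below are simplified stand-ins, not the standardized encoding), the cell-reporting and selective-activation behavior described above can be sketched as follows:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class F1SetupRequest:
    """Simplified stand-in: a gNB-DU reports the cells it serves to its controlling gNB-CU."""
    du_id: int
    served_cell_ids: List[int] = field(default_factory=list)

@dataclass
class F1SetupResponse:
    """Simplified stand-in: the gNB-CU indicates which of the reported cells to activate."""
    cells_to_activate: List[int] = field(default_factory=list)

def handle_setup(req: F1SetupRequest, admitted: set) -> F1SetupResponse:
    # The CU may, at its discretion, activate only a subset of the cells reported by the DU.
    return F1SetupResponse([c for c in req.served_cell_ids if c in admitted])

resp = handle_setup(F1SetupRequest(du_id=7, served_cell_ids=[1, 2, 3]), admitted={1, 3})
print(resp.cells_to_activate)  # [1, 3]
```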
In the 5G NR model, the DU(s) 306 comprise logical nodes that each may include varying subsets of the gNB functions, depending on the functional split option. DU operation is controlled by the CU 304 (and ultimately for some functions by the NG Core 403). Split options between the DU 406 and CUfe 404 in the present disclosure may include for example:
Under Option 1 (RRC/PDCP split), the RRC (radio resource control) is in the CUfe 404 while PDCP (packet data convergence protocol), RLC (radio link control), MAC, physical layer (PHY) and RF are kept in the DU 406, thereby maintaining the entire user plane in the distributed unit.
Under Option 2 (PDCP/RLC split), there are two possible variants: (i) RRC, PDCP maintained in the CUfe, while RLC, MAC, physical layer and RF are in the DU(s) 406; and (ii) RRC, PDCP in the CUe (with split user plane and control plane stacks), and RLC, MAC, physical layer and RF in the DU's 406.
Under Option 3 (Intra RLC Split), two splits are possible: (i) split based on ARQ; and (ii) split based on TX RLC and RX RLC.
Under Option 4 (RLC-MAC split), RRC, PDCP, and RLC are maintained in the CUfe 404, while MAC, physical layer, and RF are maintained in the DU's.
Under Option 5 (Intra-MAC split), RF, physical layer and lower part of the MAC layer (Low-MAC) are in the DU's 406, while the higher part of the MAC layer (High-MAC), RLC and PDCP are in the CUfe 404.
Under Option 6 (MAC-PHY split), the MAC and upper layers are in the CUfe, while the PHY layer and RF are in the DU's 406. The interface between the CUfe and DU's carries data, configuration, and scheduling-related information (e.g., Modulation and Coding Scheme or MCS, layer mapping, beamforming and antenna configuration, radio and resource block allocation, etc.) as well as measurements.
Under Option 7 (Intra-PHY split), different sub-options for UL (uplink) and DL (downlink) may occur independently. For example, in the UL, FFT (Fast Fourier Transform) and CP removal may reside in the DU's 406, while remaining functions reside in the CUfe 404. In the DL, iFFT and CP addition may reside in the DU 406, while the remainder of the PHY resides in the CUfe 404.
Finally, under Option 8 (PHY-RF split), the RF and the PHY layer may be separated to, inter alia, permit the centralization of processes at all protocol layer levels, resulting in a high degree of coordination of the RAN. This allows optimized support of functions such as CoMP, MIMO, load balancing, and mobility.
The foregoing split options are intended to enable flexible hardware implementations which allow scalable cost-effective solutions, as well as coordination for e.g., performance features, load management, and real-time performance optimization. Moreover, configurable functional splits enable dynamic adaptation to various use cases and operational scenarios. Factors considered in determining how/when to implement such options can include: (i) QoS requirements for offered services (e.g. low latency, high throughput); (ii) support of requirements for user density and load demand per given geographical area (which may affect RAN coordination); (iii) availability of transport and backhaul networks with different performance levels; (iv) application type (e.g. real-time or non-real time); (v) feature requirements at the Radio Network level (e.g. Carrier Aggregation).
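As one hypothetical expression of such a selection policy (the disclosure does not mandate any particular decision logic; the thresholds, factors, and option choices below are illustrative only), a deployment tool might map the foregoing factors to a split option as follows:

```python
def select_split_option(latency_budget_ms: float, fronthaul_gbps: float,
                        needs_tight_ran_coordination: bool) -> str:
    """Toy policy mapping deployment factors to a functional split (illustrative only)."""
    if needs_tight_ran_coordination and fronthaul_gbps >= 10 and latency_budget_ms <= 0.25:
        return "Option 8 (PHY-RF split)"   # full centralization, e.g., for CoMP/MIMO coordination
    if fronthaul_gbps >= 1 and latency_budget_ms <= 2:
        return "Option 6 (MAC-PHY split)"  # centralized scheduling; PHY/RF remain at the DU
    return "Option 2 (PDCP/RLC split)"     # tolerant of higher-latency transport

print(select_split_option(0.2, 25, True))    # Option 8 (PHY-RF split)
print(select_split_option(5.0, 0.5, False))  # Option 2 (PDCP/RLC split)
```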
As shown in
The individual DU's 406 in
In the architecture 420 of
In the architecture 440 of
It will be appreciated that the FD controller logic may be, in whole or part, distributed or placed in alternate location(s) within the network, whether within the MSO domain or outside thereof. For instance, in one variant, the FDC 218 comprises a series or set of distributed FDCc (client) entities in communication with one or more FDCs (server) portions, as shown in the architecture 460 of
It will also be appreciated that while described primarily with respect to a unitary gNB-CU entity or device 404 as shown in
It is also noted that heterogeneous architectures of eNBs or femtocells (i.e., E-UTRAN LTE/LTE-A Node B's or base stations) and gNBs may be utilized consistent with the architectures of
In certain embodiments, each DU 406 is located within and/or services one or more areas within one or more venues or residences (e.g., a building, room, or plaza for commercial, corporate, academic purposes, and/or any other space suitable for wireless access). Each DU is configured to provide wireless network coverage within its coverage or connectivity range for its RAT (e.g., 5G NR), as shown in the exemplary coverage area 240 of
As a brief aside, a number of different identifiers are used in the NG-RAN architecture, including those of UE's and for other network entities (each of which may comprise CPE 206 herein for content fragmentation, distribution, and re-constitution). Specifically:
In the exemplary implementation of the system architecture 200 of
In another embodiment, each content fragment is separately encrypted using different keys, such that multiple keys are required to decrypt and access each of the fragments of a given content element.
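By way of illustration of such per-fragment keying (using the symmetric Fernet construction from the Python cryptography package purely as an example; the disclosure does not specify a particular cipher or key-management scheme), each fragment can be encrypted under its own independently generated key:

```python
from cryptography.fernet import Fernet

fragments = [b"fragment-0 bytes", b"fragment-1 bytes", b"fragment-2 bytes"]

# One independently generated key per fragment; withholding any single key
# leaves the corresponding fragment (and hence the reassembled title) unusable.
keyed = [(Fernet.generate_key(), frag) for frag in fragments]
ciphertexts = [Fernet(k).encrypt(frag) for k, frag in keyed]

# Reconstitution requires every per-fragment key, delivered separately (e.g., by the FDC).
plaintexts = [Fernet(k).decrypt(ct) for (k, _), ct in zip(keyed, ciphertexts)]
assert plaintexts == fragments
```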
FDC Apparatus—
As shown, the FDC 218 includes, inter alia, a processor apparatus or subsystem 602, a program memory module 604, mass storage 605, FDC controller function logic 606, and one or more network interfaces 608.
In the exemplary embodiment, the processor 602 may include one or more of a digital signal processor, microprocessor, field-programmable gate array, or plurality of processing components mounted on one or more substrates. The processor 602 may also comprise an internal cache memory, and is in communication with a memory subsystem 604, which can comprise, e.g., SRAM, flash and/or SDRAM components. The memory subsystem may implement one or more of DMA type hardware, so as to facilitate data accesses as is well known in the art. The memory subsystem of the exemplary embodiment contains computer-executable instructions which are executable by the processor 602.
The processing apparatus 602 is configured to execute at least one computer program stored in memory 604 (e.g., a non-transitory computer readable storage medium); in the illustrated embodiment, such programs include the FDC controller logic 606, which determines, inter alia, how and where to allocate content fragments generated by the fragmenter 208, and implements other logical functions performed by the FDC as described elsewhere herein. Other embodiments may implement such functionality within dedicated hardware, logic, and/or specialized co-processors (not shown). The FDC controller logic 606 is a firmware or software module that, inter alia, communicates with a corresponding CUfe logic portion (i.e., for message exchange and protocol implementation), and/or other upstream or backend entities such as those within the NG Core 403 in alternate embodiments.
In some embodiments, the FDC logic 606 utilizes memory 604 or other storage 605 configured to temporarily hold data relating to the various fragments, encryption key schemes, and the like before transmission via the network interface(s) 608 to the CUfe 404 or NG Core 403.
In other embodiments, application program interfaces (APIs) such as those included in an MSO-provided application or those natively available on the FDC may also reside in the internal cache or other memory 604. Such APIs may include common network protocols or programming languages configured to enable communication with the CUfe 404 and other network entities (e.g., via API "calls" to the FDC by MSO network processes tasked with gathering load, configuration, or other data).
In one implementation, the MSO subscriber or client database may also optionally include the provisioning status of the particular CUfe or DU that is associated with (i.e., which provides service to) an MSO subscriber.
It will be appreciated that any number of physical configurations of the FDC 218 may be implemented consistent with the present disclosure. As noted above, the functional “split” between DU's and CU has many options, including those which may be invoked dynamically (e.g., where the functionality may reside in both one or more DUs and the corresponding CUe, but is only used in one or the other at a time based on e.g., operational conditions); as such, FDC functionality may also be distributed or split in similar fashion, as described elsewhere herein.
CUfe Apparatus—
In one exemplary embodiment as shown, the CUfe 404 includes, inter alia, a processor apparatus or subsystem 702, a program memory module 704, CUfe controller logic 706 (here implemented as software or firmware operative to execute on the processor 702), network interfaces 710 for communications and control data communication with the relevant DU's 406, and an interface for communication with the NG Core 403 and FDC 218 as shown in
In one exemplary embodiment, the CUfe's 404 are maintained by the MSO and are each configured to utilize a non-public IP address within an IMS/Private Management Network “DMZ” of the MSO network. As a brief aside, so-called DMZs (demilitarized zones) within a network are physical or logical sub-networks that separate an internal LAN, WAN, PAN, or other such network from other untrusted networks, usually the Internet. External-facing servers, resources and services are disposed within the DMZ so they are accessible from the Internet, but the rest of the internal MSO infrastructure remains unreachable or partitioned. This provides an additional layer of security to the internal infrastructure, as it restricts the ability of surreptitious entities or processes to directly access internal MSO servers and data via the untrusted network, such as via a CUfe “spoof” or MITM attack whereby an attacker might attempt to hijack one or more CUfe to obtain data from the corresponding DU's (or even CPE or UE's utilizing the DU's).
Although the exemplary CUfe 404 may be used as described within the present disclosure, those of ordinary skill in the related arts will readily appreciate, given the present disclosure, that the “centralized” controller unit 404 may in fact be virtualized and/or distributed within other network or service domain entities (e.g., within one of the DU 406 of a given gNB 402, within the NG Core 403 or an MSO entity such as a server, a co-located eNB, etc.), and hence the foregoing apparatus 404 of
In one embodiment, the processor apparatus 702 may include one or more of a digital signal processor, microprocessor, field-programmable gate array, or plurality of processing components mounted on one or more substrates. The processor apparatus 702 may also comprise an internal cache memory. The processing subsystem is in communication with a program memory module or subsystem 704, where the latter may include memory which may comprise, e.g., SRAM, flash and/or SDRAM components. The memory module 704 may implement one or more of direct memory access (DMA) type hardware, so as to facilitate data accesses as is well known in the art. The memory module of the exemplary embodiment contains one or more computer-executable instructions that are executable by the processor apparatus 702. A mass storage device (e.g., HDD or SSD, or even NAND flash or the like) is also provided as shown.
The processor apparatus 702 is configured to execute at least one computer program stored in memory 704 (e.g., the logic of the CUfe including content fragmentation and distribution/key management in the form of software or firmware that implements the various functions described herein). Other embodiments may implement such functionality within dedicated hardware, logic, and/or specialized co-processors (not shown).
In one embodiment, the CUfe 404 is further configured to register known downstream devices (e.g., access nodes including DU's 406, other CUfe devices), and centrally control the broader gNB functions (and any constituent peer-to-peer sub-networks or meshes). Such configuration includes, e.g., providing network identification (e.g., to DU's, gNBs, client devices such as roaming MNO UEs, and other devices, or to upstream devices such as MNO or MSO NG Core portions 403 and their entities, or the FDC 218), and managing capabilities supported by the gNB's NR RAN.
The CUfe may further be configured to directly or indirectly communicate with one or more authentication, authorization, and accounting (AAA) servers of the network, such as via the interface 708 shown in
nSTB Apparatus—
In one exemplary embodiment as shown, the nSTB 206 includes, inter alia, a processor apparatus or subsystem 802, a program memory module 804, nSTB fragmentation controller logic 806 (here implemented as software or firmware operative to execute on the processor 802), a DOCSIS network interface for communication with the host service provider (e.g., cable MSO) network including the backhaul and FDC 218, and a 5G NR modem 810 for communications and control data communication with the relevant DUs 406 (i.e., for file fragment sourcing/sinking).
In one exemplary embodiment, the nSTBs 206 are maintained by the MSO and are each configured to utilize an IP address within the MSO network, which is related to the nSTB_ID previously described (the latter being used for fragmentation location identification, the former being used for network-layer addressing of the nSTB within the MSO network).
In one embodiment, the processor apparatus 802 may include one or more of a digital signal processor, microprocessor, field-programmable gate array, or plurality of processing components mounted on one or more substrates. The processor apparatus 802 may also comprise an internal cache memory. The processing subsystem is in communication with a program memory module or subsystem 804, where the latter may include memory which may comprise, e.g., SRAM, flash and/or SDRAM components. The memory module 804 may implement one or more of direct memory access (DMA) type hardware, so as to facilitate data accesses as is well known in the art. The memory module of the exemplary embodiment contains one or more computer-executable instructions that are executable by the processor apparatus 802. A mass storage device (e.g., HDD or SSD, or even NAND flash or the like) is also provided as shown, which in the illustrated embodiment includes the FID 818 which maintains the various FITs 251 as previously described.
The processor apparatus 802 is configured to execute at least one computer program stored in memory 804 (e.g., the logic of the nSTB including FIT receipt and management, content reassembly and key management in the form of software or firmware that implements the various functions described herein). Other embodiments may implement such functionality within dedicated hardware, logic, and/or specialized co-processors (not shown). As previously described, the controller logic 806 is also in logical communication with the FDC 218, as well as other participating nSTBs 206 within the relevant cluster(s). It will be appreciated that while a given nSTB participates in one cluster, it may also (concurrently or in the alternative, depending on how the logic 806 is configured) participate in one or more other clusters within the network as established by the FDC 218.
It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the disclosure disclosed and claimed herein.
While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.
It will be further appreciated that while certain steps and aspects of the various methods and apparatus described herein may be performed by a human being, the disclosed aspects and individual methods and apparatus are generally computerized/computer-implemented. Computerized apparatus and methods are necessary to fully implement these aspects for any number of reasons including, without limitation, commercial viability, practicality, and even feasibility (i.e., certain steps/processes simply cannot be performed by a human being in any viable fashion).
This application is a continuation of and claims priority to co-owned U.S. patent application Ser. No. 16/058,520 filed on Aug. 8, 2018 entitled “APPARATUS AND METHODS FOR CONTENT STORAGE, DISTRIBUTION AND SECURITY WITHIN A CONTENT DISTRIBUTION NETWORK,” and issuing as U.S. Pat. No. 10,939,142 on Mar. 2, 2021, which claims priority to U.S. Provisional Patent Application Ser. No. 62/636,020 filed Feb. 27, 2018 and entitled “APPARATUS AND METHODS FOR CONTENT STORAGE, DISTRIBUTION AND SECURITY WITHIN A CONTENT DISTRIBUTION NETWORK,” each of which are incorporated herein by reference in its entirety. The subject matter of this application is also generally related to aspects of the subject matter of co-owned U.S. patent application Ser. No. 15/170,787 filed Jun. 1, 2016 and entitled “CLOUD-BASED DIGITAL CONTENT RECORDER APPARATUS AND METHODS,” incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4521881 | Stapleford et al. | Jun 1985 | A |
4546382 | McKenna et al. | Oct 1985 | A |
4602279 | Freeman | Jul 1986 | A |
4930120 | Baxter et al. | May 1990 | A |
5155591 | Wachob | Oct 1992 | A |
5233423 | Jernigan et al. | Aug 1993 | A |
5313454 | Bustini et al. | May 1994 | A |
5361091 | Hoarty et al. | Nov 1994 | A |
5600364 | Hendricks et al. | Feb 1997 | A |
RE35651 | Bradley et al. | Nov 1997 | E |
5734380 | Adams et al. | Mar 1998 | A |
5758257 | Herz et al. | May 1998 | A |
5774170 | Hite et al. | Jun 1998 | A |
5793410 | Rao | Aug 1998 | A |
5815662 | Ong | Sep 1998 | A |
5862312 | Mann et al. | Jan 1999 | A |
5878324 | Borth et al. | Mar 1999 | A |
5886995 | Arsenault et al. | Mar 1999 | A |
5914945 | Abu-Amara et al. | Jun 1999 | A |
5926205 | Krause et al. | Jul 1999 | A |
5935206 | Dixon et al. | Aug 1999 | A |
5963844 | Dail | Oct 1999 | A |
6002393 | Hite et al. | Dec 1999 | A |
6018359 | Kermode et al. | Jan 2000 | A |
6047327 | Tso et al. | Apr 2000 | A |
6081830 | Schindler | Jun 2000 | A |
6092178 | Jindal et al. | Jul 2000 | A |
6105134 | Pinder et al. | Aug 2000 | A |
6124878 | Adams et al. | Sep 2000 | A |
6128316 | Takeda et al. | Oct 2000 | A |
6134532 | Lazarus et al. | Oct 2000 | A |
6157377 | Shah-Nazaroff et al. | Dec 2000 | A |
6161142 | Wolfe et al. | Dec 2000 | A |
6167432 | Jiang | Dec 2000 | A |
6169728 | Perreault et al. | Jan 2001 | B1 |
6175856 | Riddle | Jan 2001 | B1 |
6182050 | Ballard | Jan 2001 | B1 |
6211869 | Loveman et al. | Apr 2001 | B1 |
6211901 | Imajima et al. | Apr 2001 | B1 |
6216129 | Eldering | Apr 2001 | B1 |
6216152 | Wong et al. | Apr 2001 | B1 |
6219710 | Gray et al. | Apr 2001 | B1 |
6219840 | Corrigan et al. | Apr 2001 | B1 |
6240243 | Chen et al. | May 2001 | B1 |
6240553 | Son et al. | May 2001 | B1 |
6252964 | Wasilewski et al. | Jun 2001 | B1 |
6256393 | Safadi et al. | Jul 2001 | B1 |
6275618 | Kodama | Aug 2001 | B1 |
6330609 | Garofalakis et al. | Dec 2001 | B1 |
6337715 | Inagaki et al. | Jan 2002 | B1 |
6339785 | Feigenbaum | Jan 2002 | B1 |
6345279 | Li et al. | Feb 2002 | B1 |
6353626 | Sunay et al. | Mar 2002 | B1 |
6378130 | Adams | Apr 2002 | B1 |
6434141 | Oz et al. | Aug 2002 | B1 |
6446261 | Rosser | Sep 2002 | B1 |
6463508 | Wolf et al. | Oct 2002 | B1 |
6463585 | Hendricks et al. | Oct 2002 | B1 |
6487721 | Safadi | Nov 2002 | B1 |
6498783 | Lin | Dec 2002 | B1 |
6502139 | Birk et al. | Dec 2002 | B1 |
6516412 | Wasilewski et al. | Feb 2003 | B2 |
6560578 | Eldering | May 2003 | B2 |
6590865 | Ibaraki et al. | Jul 2003 | B1 |
6594699 | Sahai et al. | Jul 2003 | B1 |
6601237 | Ten et al. | Jul 2003 | B1 |
6604138 | Virine et al. | Aug 2003 | B1 |
6615039 | Eldering | Sep 2003 | B1 |
6615251 | Klug et al. | Sep 2003 | B1 |
6651103 | Markowitz et al. | Nov 2003 | B1 |
6671736 | Virine et al. | Dec 2003 | B2 |
6687735 | Logston et al. | Feb 2004 | B1 |
6700624 | Yun | Mar 2004 | B2 |
6718551 | Swix et al. | Apr 2004 | B1 |
6725459 | Bacon | Apr 2004 | B2 |
6728269 | Godwin et al. | Apr 2004 | B1 |
6728840 | Shatil et al. | Apr 2004 | B1 |
6738978 | Hendricks et al. | May 2004 | B1 |
6742187 | Vogel | May 2004 | B1 |
6745245 | Carpenter | Jun 2004 | B1 |
6763391 | Ludtke | Jul 2004 | B1 |
6771290 | Hoyle | Aug 2004 | B1 |
6772435 | Thexton et al. | Aug 2004 | B1 |
6775843 | McDermott | Aug 2004 | B1 |
6799196 | Smith | Sep 2004 | B1 |
6839757 | Romano et al. | Jan 2005 | B1 |
6842783 | Boivie et al. | Jan 2005 | B1 |
6859839 | Zahorjan et al. | Feb 2005 | B1 |
6868439 | Basu et al. | Mar 2005 | B2 |
6891841 | Leatherbury et al. | May 2005 | B2 |
6898800 | Son et al. | May 2005 | B2 |
6917628 | McKinnin et al. | Jul 2005 | B2 |
6944166 | Perinpanathan et al. | Sep 2005 | B1 |
6948183 | Peterka | Sep 2005 | B1 |
6961430 | Gaske et al. | Nov 2005 | B1 |
6977691 | Middleton et al. | Dec 2005 | B1 |
6981045 | Brooks | Dec 2005 | B1 |
6985934 | Armstrong et al. | Jan 2006 | B1 |
6986156 | Rodriguez et al. | Jan 2006 | B1 |
7010801 | Jerding et al. | Mar 2006 | B1 |
7017174 | Sheedy | Mar 2006 | B1 |
7024461 | Janning et al. | Apr 2006 | B1 |
7024676 | Klopfenstein | Apr 2006 | B1 |
7027460 | Iyer et al. | Apr 2006 | B2 |
7031348 | Gazit | Apr 2006 | B1 |
7039116 | Zhang et al. | May 2006 | B1 |
7039169 | Jones | May 2006 | B2 |
7039614 | Candelore | May 2006 | B1 |
7039938 | Candelore | May 2006 | B2 |
7047309 | Baumann et al. | May 2006 | B2 |
7058387 | Kumar et al. | Jun 2006 | B2 |
7058721 | Ellison | Jun 2006 | B1 |
7069573 | Brooks et al. | Jun 2006 | B1 |
7073189 | McElhatten et al. | Jul 2006 | B2 |
7075945 | Arsenault et al. | Jul 2006 | B2 |
7085839 | Baugher et al. | Aug 2006 | B1 |
7086077 | Giammaressi | Aug 2006 | B2 |
7088910 | Potrebic et al. | Aug 2006 | B2 |
7089577 | Rakib et al. | Aug 2006 | B1 |
7093272 | Shah-Nazaroff et al. | Aug 2006 | B1 |
7100183 | Kunkel et al. | Aug 2006 | B2 |
7103906 | Katz et al. | Sep 2006 | B1 |
7107462 | Fransdonk | Sep 2006 | B2 |
7110457 | Chen et al. | Sep 2006 | B1 |
7127619 | Unger et al. | Oct 2006 | B2 |
7133837 | Barnes, Jr. | Nov 2006 | B1 |
7143431 | Eager et al. | Nov 2006 | B1 |
7146627 | Ismail et al. | Dec 2006 | B1 |
7152237 | Flickinger et al. | Dec 2006 | B2 |
7155508 | Sankuratripati et al. | Dec 2006 | B2 |
7174385 | Li | Feb 2007 | B2 |
7178161 | Fristoe et al. | Feb 2007 | B1 |
7181010 | Russ et al. | Feb 2007 | B2 |
7181760 | Wallace | Feb 2007 | B1 |
7191461 | Arsenault et al. | Mar 2007 | B1 |
7194752 | Kenyon et al. | Mar 2007 | B1 |
7194756 | Addington et al. | Mar 2007 | B2 |
7200788 | Hiraki et al. | Apr 2007 | B2 |
7203940 | Barmettler et al. | Apr 2007 | B2 |
7207055 | Hendricks et al. | Apr 2007 | B1 |
7209973 | Tormasov | Apr 2007 | B2 |
7216265 | Hughes et al. | May 2007 | B2 |
7225164 | Candelore et al. | May 2007 | B1 |
7225458 | Klauss et al. | May 2007 | B2 |
7228555 | Schlack | Jun 2007 | B2 |
7237250 | Kanojia et al. | Jun 2007 | B2 |
7246150 | Donoho et al. | Jul 2007 | B1 |
7246172 | Yoshiba et al. | Jul 2007 | B2 |
7246366 | Addington et al. | Jul 2007 | B1 |
7254608 | Yeager et al. | Aug 2007 | B2 |
7257650 | Maciesowicz | Aug 2007 | B2 |
7266198 | Medvinsky | Sep 2007 | B2 |
7266611 | Jabri et al. | Sep 2007 | B2 |
7266726 | Ladd et al. | Sep 2007 | B1 |
7283782 | Sinnarajah et al. | Oct 2007 | B2 |
7296074 | Jagels | Nov 2007 | B2 |
7299290 | Karpoff | Nov 2007 | B2 |
7305691 | Cristofalo | Dec 2007 | B2 |
7308415 | Kimbrel et al. | Dec 2007 | B2 |
7317728 | Acharya et al. | Jan 2008 | B2 |
7320134 | Tomsen et al. | Jan 2008 | B1 |
7325073 | Shao et al. | Jan 2008 | B2 |
7327692 | Ain et al. | Feb 2008 | B2 |
7334044 | Allen | Feb 2008 | B1 |
7340759 | Rodriguez | Mar 2008 | B1 |
7346688 | Allen et al. | Mar 2008 | B2 |
7346917 | Gatto et al. | Mar 2008 | B2 |
7352775 | Powell | Apr 2008 | B2 |
7355980 | Bauer et al. | Apr 2008 | B2 |
7363371 | Kirby et al. | Apr 2008 | B2 |
7370120 | Kirsch et al. | May 2008 | B2 |
7376386 | Phillips et al. | May 2008 | B2 |
7379494 | Raleigh et al. | May 2008 | B2 |
7403618 | Van Rijnsoever et al. | Jul 2008 | B2 |
7434245 | Shiga et al. | Oct 2008 | B1 |
7457520 | Rosetti et al. | Nov 2008 | B2 |
7464179 | Hodges et al. | Dec 2008 | B2 |
7555006 | Wolfe et al. | Jun 2009 | B2 |
7567565 | La | Jul 2009 | B2 |
7577118 | Haumonte et al. | Aug 2009 | B2 |
7602820 | Helms et al. | Oct 2009 | B2 |
7617516 | Huslak et al. | Nov 2009 | B2 |
7630401 | Iwamura | Dec 2009 | B2 |
7689995 | Francis et al. | Mar 2010 | B1 |
7690020 | Lebar | Mar 2010 | B2 |
7720986 | Savoor et al. | May 2010 | B2 |
7721313 | Barrett | May 2010 | B2 |
7757251 | Gonder et al. | Jul 2010 | B2 |
7763360 | Paul et al. | Jul 2010 | B2 |
7779097 | Lamkin et al. | Aug 2010 | B2 |
7783316 | Mitchell | Aug 2010 | B1 |
7805052 | Nakamura et al. | Sep 2010 | B2 |
7805741 | Yeh | Sep 2010 | B2 |
7836178 | Bedell et al. | Nov 2010 | B1 |
7908626 | Williamson et al. | Mar 2011 | B2 |
7917008 | Lee et al. | Mar 2011 | B1 |
7930382 | Tormasov | Apr 2011 | B1 |
7930715 | Hendricks et al. | Apr 2011 | B2 |
8122479 | Britt | Feb 2012 | B2 |
8170065 | Hasek et al. | May 2012 | B2 |
8213358 | Goyal et al. | Jul 2012 | B1 |
8280982 | La et al. | Oct 2012 | B2 |
8290351 | Plotnick et al. | Oct 2012 | B2 |
8291453 | Boortz | Oct 2012 | B2 |
8341242 | Dillon et al. | Dec 2012 | B2 |
8359351 | Istvan et al. | Jan 2013 | B2 |
8365212 | Orlowski | Jan 2013 | B1 |
8375140 | Tippin | Feb 2013 | B2 |
8392952 | Carlucci et al. | Mar 2013 | B2 |
8458125 | Chong, Jr. et al. | Jun 2013 | B1 |
8468099 | Headings et al. | Jun 2013 | B2 |
8516533 | Davis et al. | Aug 2013 | B2 |
8521002 | Yahata et al. | Aug 2013 | B2 |
8561116 | Hasek | Oct 2013 | B2 |
8613089 | Holloway et al. | Dec 2013 | B1 |
8634703 | Barton | Jan 2014 | B1 |
8667548 | Chen | Mar 2014 | B2 |
8726303 | Ellis, III | May 2014 | B2 |
8804519 | Svedberg | Aug 2014 | B2 |
8843973 | Morrison | Sep 2014 | B2 |
8997136 | Brooks et al. | Mar 2015 | B2 |
9071859 | Lajoie | Jun 2015 | B2 |
9112938 | Tippin | Aug 2015 | B2 |
9178634 | Tidwell et al. | Nov 2015 | B2 |
9277266 | Riedl et al. | Mar 2016 | B2 |
9591069 | Thornburgh | Mar 2017 | B2 |
9699236 | Gopalakrishnan | Jul 2017 | B2 |
9910742 | Faibish | Mar 2018 | B1 |
10007673 | Faibish | Jun 2018 | B1 |
10078639 | Faibish | Sep 2018 | B1 |
10116629 | Crofton | Oct 2018 | B2 |
10162828 | Foster | Dec 2018 | B2 |
10264072 | Crofton | Apr 2019 | B2 |
10356158 | Crofton | Jul 2019 | B2 |
10404798 | Crofton | Sep 2019 | B2 |
10687115 | Muvavarirwa | Jun 2020 | B2 |
10893468 | da Silva | Jan 2021 | B2 |
10939142 | Jayawardene | Mar 2021 | B2 |
10958948 | Badawiyeh | Mar 2021 | B2 |
20010013123 | Freeman et al. | Aug 2001 | A1 |
20010030785 | Pangrac et al. | Oct 2001 | A1 |
20010050901 | Love et al. | Dec 2001 | A1 |
20020004912 | Fung | Jan 2002 | A1 |
20020019984 | Rakib | Feb 2002 | A1 |
20020032754 | Logston et al. | Mar 2002 | A1 |
20020049902 | Rhodes | Apr 2002 | A1 |
20020049980 | Hoang | Apr 2002 | A1 |
20020053082 | Weaver, III et al. | May 2002 | A1 |
20020054589 | Ethridge et al. | May 2002 | A1 |
20020059577 | Lu et al. | May 2002 | A1 |
20020059619 | Lebar | May 2002 | A1 |
20020063621 | Tseng et al. | May 2002 | A1 |
20020087975 | Schlack | Jul 2002 | A1 |
20020087976 | Kaplan et al. | Jul 2002 | A1 |
20020095684 | St. John et al. | Jul 2002 | A1 |
20020100059 | Buehl et al. | Jul 2002 | A1 |
20020104083 | Hendricks et al. | Aug 2002 | A1 |
20020112240 | Bacso et al. | Aug 2002 | A1 |
20020120498 | Gordon et al. | Aug 2002 | A1 |
20020123928 | Eldering et al. | Sep 2002 | A1 |
20020124182 | Bacso et al. | Sep 2002 | A1 |
20020124249 | Shintani et al. | Sep 2002 | A1 |
20020129378 | Cloonan et al. | Sep 2002 | A1 |
20020144262 | Plotnick et al. | Oct 2002 | A1 |
20020144263 | Eldering et al. | Oct 2002 | A1 |
20020144275 | Kay et al. | Oct 2002 | A1 |
20020147771 | Traversat et al. | Oct 2002 | A1 |
20020152299 | Traversat et al. | Oct 2002 | A1 |
20020154655 | Gummalla et al. | Oct 2002 | A1 |
20020154694 | Birch | Oct 2002 | A1 |
20020154885 | Covell et al. | Oct 2002 | A1 |
20020162109 | Shteyn | Oct 2002 | A1 |
20020163928 | Rudnick et al. | Nov 2002 | A1 |
20020164151 | Jasinschi et al. | Nov 2002 | A1 |
20020166119 | Cristofalo | Nov 2002 | A1 |
20020170057 | Barrett et al. | Nov 2002 | A1 |
20020172281 | Mantchala et al. | Nov 2002 | A1 |
20020174430 | Ellis et al. | Nov 2002 | A1 |
20020178447 | Plotnick et al. | Nov 2002 | A1 |
20020196939 | Unger et al. | Dec 2002 | A1 |
20030002862 | Rodriguez et al. | Jan 2003 | A1 |
20030004810 | Eldering | Jan 2003 | A1 |
20030005453 | Rodriguez et al. | Jan 2003 | A1 |
20030007516 | Abramov et al. | Jan 2003 | A1 |
20030014759 | Van Stam | Jan 2003 | A1 |
20030021412 | Candelore et al. | Jan 2003 | A1 |
20030023981 | Lemmons | Jan 2003 | A1 |
20030025832 | Swart et al. | Feb 2003 | A1 |
20030033199 | Coleman | Feb 2003 | A1 |
20030037331 | Lee | Feb 2003 | A1 |
20030046704 | Laksono et al. | Mar 2003 | A1 |
20030056217 | Brooks | Mar 2003 | A1 |
20030061619 | Giammaressi | Mar 2003 | A1 |
20030067554 | Klarfeld et al. | Apr 2003 | A1 |
20030067877 | Sivakumar et al. | Apr 2003 | A1 |
20030074565 | Wasilewski et al. | Apr 2003 | A1 |
20030077067 | Wu et al. | Apr 2003 | A1 |
20030088876 | Mao et al. | May 2003 | A1 |
20030093311 | Knowlson | May 2003 | A1 |
20030093784 | Dimitrova et al. | May 2003 | A1 |
20030093790 | Logan et al. | May 2003 | A1 |
20030093792 | Labeeb et al. | May 2003 | A1 |
20030095791 | Barton et al. | May 2003 | A1 |
20030101449 | Bentolila et al. | May 2003 | A1 |
20030101451 | Bentolila et al. | May 2003 | A1 |
20030110499 | Knudson et al. | Jun 2003 | A1 |
20030115612 | Mao et al. | Jun 2003 | A1 |
20030118014 | Iyer et al. | Jun 2003 | A1 |
20030135860 | Dureau | Jul 2003 | A1 |
20030139980 | Hamilton | Jul 2003 | A1 |
20030140351 | Hoarty et al. | Jul 2003 | A1 |
20030145323 | Hendricks et al. | Jul 2003 | A1 |
20030149975 | Eldering et al. | Aug 2003 | A1 |
20030161473 | Fransdonk | Aug 2003 | A1 |
20030179773 | Mocek et al. | Sep 2003 | A1 |
20030182261 | Patterson | Sep 2003 | A1 |
20030208763 | McElhatten et al. | Nov 2003 | A1 |
20030208783 | Hillen et al. | Nov 2003 | A1 |
20030214962 | Allaye-Chan et al. | Nov 2003 | A1 |
20030217365 | Caputo | Nov 2003 | A1 |
20030229681 | Levitan | Dec 2003 | A1 |
20030235393 | Boston et al. | Dec 2003 | A1 |
20030235396 | Boston et al. | Dec 2003 | A1 |
20030237090 | Boston et al. | Dec 2003 | A1 |
20040006625 | Saha et al. | Jan 2004 | A1 |
20040010807 | Urdang et al. | Jan 2004 | A1 |
20040031053 | Lim et al. | Feb 2004 | A1 |
20040045030 | Reynolds et al. | Mar 2004 | A1 |
20040078809 | Drazin | Apr 2004 | A1 |
20040101271 | Boston et al. | May 2004 | A1 |
20040103437 | Allegrezza et al. | May 2004 | A1 |
20040109672 | Kim et al. | Jun 2004 | A1 |
20040113936 | Dempski | Jun 2004 | A1 |
20040123313 | Koo et al. | Jun 2004 | A1 |
20040133907 | Rodriguez et al. | Jul 2004 | A1 |
20040146006 | Jackson | Jul 2004 | A1 |
20040158858 | Paxton et al. | Aug 2004 | A1 |
20040163109 | Kang et al. | Aug 2004 | A1 |
20040179605 | Lane | Sep 2004 | A1 |
20040181800 | Rakib et al. | Sep 2004 | A1 |
20040187150 | Gonder et al. | Sep 2004 | A1 |
20040187159 | Gaydos et al. | Sep 2004 | A1 |
20040193648 | Lai et al. | Sep 2004 | A1 |
20040193704 | Smith | Sep 2004 | A1 |
20040194134 | Gunatilake et al. | Sep 2004 | A1 |
20040226044 | Goode | Nov 2004 | A1 |
20040244058 | Carlucci et al. | Dec 2004 | A1 |
20040254999 | Bulleit et al. | Dec 2004 | A1 |
20040255336 | Logan et al. | Dec 2004 | A1 |
20040261114 | Addington et al. | Dec 2004 | A1 |
20040261116 | McKeown et al. | Dec 2004 | A1 |
20040267880 | Patiejunas | Dec 2004 | A1 |
20040267965 | Vasudevan et al. | Dec 2004 | A1 |
20050010697 | Kinawi et al. | Jan 2005 | A1 |
20050034171 | Benya | Feb 2005 | A1 |
20050039205 | Riedl | Feb 2005 | A1 |
20050039206 | Opdycke | Feb 2005 | A1 |
20050041679 | Weinstein et al. | Feb 2005 | A1 |
20050047596 | Suzuki | Mar 2005 | A1 |
20050050160 | Upendran et al. | Mar 2005 | A1 |
20050055685 | Maynard et al. | Mar 2005 | A1 |
20050058115 | Levin et al. | Mar 2005 | A1 |
20050060742 | Riedl et al. | Mar 2005 | A1 |
20050060745 | Riedl et al. | Mar 2005 | A1 |
20050060758 | Park | Mar 2005 | A1 |
20050071669 | Medvinsky | Mar 2005 | A1 |
20050071882 | Rodriguez et al. | Mar 2005 | A1 |
20050076092 | Chang et al. | Apr 2005 | A1 |
20050086691 | Dudkiewicz et al. | Apr 2005 | A1 |
20050097598 | Pedlow, Jr. et al. | May 2005 | A1 |
20050108529 | Juneau | May 2005 | A1 |
20050108768 | Deshpande et al. | May 2005 | A1 |
20050108769 | Arnold et al. | May 2005 | A1 |
20050111844 | Compton et al. | May 2005 | A1 |
20050114141 | Grody | May 2005 | A1 |
20050114900 | Ladd et al. | May 2005 | A1 |
20050123001 | Craven et al. | Jun 2005 | A1 |
20050125528 | Burke et al. | Jun 2005 | A1 |
20050125832 | Jost et al. | Jun 2005 | A1 |
20050135476 | Gentric et al. | Jun 2005 | A1 |
20050152397 | Bai et al. | Jul 2005 | A1 |
20050168323 | Lenoir et al. | Aug 2005 | A1 |
20050198686 | Krause et al. | Sep 2005 | A1 |
20050210510 | Danker | Sep 2005 | A1 |
20050223409 | Rautila et al. | Oct 2005 | A1 |
20050276284 | Krause et al. | Dec 2005 | A1 |
20050283818 | Zimmermann et al. | Dec 2005 | A1 |
20050289618 | Hardin | Dec 2005 | A1 |
20050289619 | Melby | Dec 2005 | A1 |
20060010075 | Wolf | Jan 2006 | A1 |
20060020984 | Ban et al. | Jan 2006 | A1 |
20060036750 | Ladd et al. | Feb 2006 | A1 |
20060037060 | Simms et al. | Feb 2006 | A1 |
20060047957 | Helms et al. | Mar 2006 | A1 |
20060050784 | Lappalainen et al. | Mar 2006 | A1 |
20060059098 | Major et al. | Mar 2006 | A1 |
20060059342 | Medvinsky et al. | Mar 2006 | A1 |
20060062059 | Smith et al. | Mar 2006 | A1 |
20060064728 | Son et al. | Mar 2006 | A1 |
20060066632 | Wong et al. | Mar 2006 | A1 |
20060073843 | Aerrabotu et al. | Apr 2006 | A1 |
20060075449 | Jagadeesan et al. | Apr 2006 | A1 |
20060080408 | Istvan et al. | Apr 2006 | A1 |
20060084417 | Melpignano et al. | Apr 2006 | A1 |
20060085824 | Bruck et al. | Apr 2006 | A1 |
20060088063 | Hartung et al. | Apr 2006 | A1 |
20060117374 | Kortum et al. | Jun 2006 | A1 |
20060127039 | Van Stam | Jun 2006 | A1 |
20060130107 | Gonder et al. | Jun 2006 | A1 |
20060130113 | Carlucci et al. | Jun 2006 | A1 |
20060133398 | Choi et al. | Jun 2006 | A1 |
20060133644 | Wells et al. | Jun 2006 | A1 |
20060140584 | Ellis et al. | Jun 2006 | A1 |
20060171390 | La | Aug 2006 | A1 |
20060171423 | Helms et al. | Aug 2006 | A1 |
20060173783 | Marples et al. | Aug 2006 | A1 |
20060197828 | Zeng et al. | Sep 2006 | A1 |
20060212906 | Cantalini | Sep 2006 | A1 |
20060218601 | Michel | Sep 2006 | A1 |
20060218604 | Riedl et al. | Sep 2006 | A1 |
20060248553 | Mikkelson et al. | Nov 2006 | A1 |
20060248555 | Eldering | Nov 2006 | A1 |
20060253328 | Kohli et al. | Nov 2006 | A1 |
20060253864 | Easty | Nov 2006 | A1 |
20060256376 | Hirooka | Nov 2006 | A1 |
20060271946 | Woundy et al. | Nov 2006 | A1 |
20060277569 | Smith | Dec 2006 | A1 |
20060291506 | Cain | Dec 2006 | A1 |
20060294250 | Stone et al. | Dec 2006 | A1 |
20070022459 | Gaebel, Jr. et al. | Jan 2007 | A1 |
20070033531 | Marsh | Feb 2007 | A1 |
20070047449 | Berger et al. | Mar 2007 | A1 |
20070053293 | McDonald et al. | Mar 2007 | A1 |
20070061818 | Williams et al. | Mar 2007 | A1 |
20070076728 | Rieger et al. | Apr 2007 | A1 |
20070078910 | Bopardikar | Apr 2007 | A1 |
20070089127 | Flickinger et al. | Apr 2007 | A1 |
20070094691 | Gazdzinski | Apr 2007 | A1 |
20070101157 | Faria | May 2007 | A1 |
20070101370 | Calderwood | May 2007 | A1 |
20070104456 | Craner | May 2007 | A1 |
20070106805 | Marples et al. | May 2007 | A1 |
20070113243 | Brey | May 2007 | A1 |
20070118852 | Calderwood | May 2007 | A1 |
20070121569 | Fukui et al. | May 2007 | A1 |
20070121678 | Brooks et al. | May 2007 | A1 |
20070124416 | Casey et al. | May 2007 | A1 |
20070124781 | Casey et al. | May 2007 | A1 |
20070130581 | Del Sesto et al. | Jun 2007 | A1 |
20070133405 | Bowra et al. | Jun 2007 | A1 |
20070153820 | Gould | Jul 2007 | A1 |
20070156539 | Yates | Jul 2007 | A1 |
20070157234 | Walker | Jul 2007 | A1 |
20070162927 | Ramaswamy et al. | Jul 2007 | A1 |
20070204300 | Markley et al. | Aug 2007 | A1 |
20070204310 | Hua et al. | Aug 2007 | A1 |
20070204311 | Hasek et al. | Aug 2007 | A1 |
20070204313 | McEnroe et al. | Aug 2007 | A1 |
20070204314 | Hasek et al. | Aug 2007 | A1 |
20070217436 | Markley et al. | Sep 2007 | A1 |
20070223380 | Gilbert et al. | Sep 2007 | A1 |
20070233857 | Cheng et al. | Oct 2007 | A1 |
20070241176 | Epstein et al. | Oct 2007 | A1 |
20070250872 | Dua | Oct 2007 | A1 |
20070250880 | Hainline | Oct 2007 | A1 |
20070271386 | Kurihara et al. | Nov 2007 | A1 |
20070274400 | Murai et al. | Nov 2007 | A1 |
20070276925 | La Joie et al. | Nov 2007 | A1 |
20070276926 | LaJoie et al. | Nov 2007 | A1 |
20080016526 | Asmussen | Jan 2008 | A1 |
20080022012 | Wang | Jan 2008 | A1 |
20080040403 | Hayashi | Feb 2008 | A1 |
20080052157 | Kadambi et al. | Feb 2008 | A1 |
20080066112 | Bailey et al. | Mar 2008 | A1 |
20080092181 | Britt | Apr 2008 | A1 |
20080098212 | Helms et al. | Apr 2008 | A1 |
20080098446 | Seckin et al. | Apr 2008 | A1 |
20080101460 | Rodriguez | May 2008 | A1 |
20080112405 | Cholas et al. | May 2008 | A1 |
20080134156 | Osminer et al. | Jun 2008 | A1 |
20080134165 | Anderson et al. | Jun 2008 | A1 |
20080134615 | Risi et al. | Jun 2008 | A1 |
20080141175 | Sarna et al. | Jun 2008 | A1 |
20080141317 | Radloff et al. | Jun 2008 | A1 |
20080152316 | Sylvain | Jun 2008 | A1 |
20080155059 | Hardin et al. | Jun 2008 | A1 |
20080159714 | Harrar et al. | Jul 2008 | A1 |
20080184297 | Ellis et al. | Jul 2008 | A1 |
20080192820 | Brooks et al. | Aug 2008 | A1 |
20080201748 | Hasek et al. | Aug 2008 | A1 |
20080209464 | Wright-Riley | Aug 2008 | A1 |
20080212947 | Nesvadba et al. | Sep 2008 | A1 |
20080229354 | Morris et al. | Sep 2008 | A1 |
20080235732 | Han et al. | Sep 2008 | A1 |
20080235746 | Peters et al. | Sep 2008 | A1 |
20080244667 | Osborne | Oct 2008 | A1 |
20080244682 | Sparrell et al. | Oct 2008 | A1 |
20080271068 | Ou et al. | Oct 2008 | A1 |
20080273591 | Brooks et al. | Nov 2008 | A1 |
20080276270 | Kotaru et al. | Nov 2008 | A1 |
20090010610 | Scholl et al. | Jan 2009 | A1 |
20090019485 | Ellis et al. | Jan 2009 | A1 |
20090025027 | Craner | Jan 2009 | A1 |
20090028182 | Brooks et al. | Jan 2009 | A1 |
20090037960 | Melby | Feb 2009 | A1 |
20090052863 | Parmar et al. | Feb 2009 | A1 |
20090052870 | Marsh et al. | Feb 2009 | A1 |
20090077614 | White et al. | Mar 2009 | A1 |
20090083813 | Dolce et al. | Mar 2009 | A1 |
20090100182 | Chaudhry | Apr 2009 | A1 |
20090100459 | Riedl et al. | Apr 2009 | A1 |
20090165053 | Thyagarajan et al. | Jun 2009 | A1 |
20090207866 | Cholas et al. | Aug 2009 | A1 |
20090210899 | Lawrence-Apfelbaum et al. | Aug 2009 | A1 |
20090210912 | Cholas et al. | Aug 2009 | A1 |
20090217326 | Hasek | Aug 2009 | A1 |
20090217332 | Hindle et al. | Aug 2009 | A1 |
20090220216 | Marsh et al. | Sep 2009 | A1 |
20090254600 | Lee et al. | Oct 2009 | A1 |
20090260042 | Chiang | Oct 2009 | A1 |
20090274212 | Mizutani et al. | Nov 2009 | A1 |
20090317065 | Fyock et al. | Dec 2009 | A1 |
20100023579 | Chapweske | Jan 2010 | A1 |
20100061708 | Barton | Mar 2010 | A1 |
20100157928 | Spinar et al. | Jun 2010 | A1 |
20100223491 | Ladd et al. | Sep 2010 | A1 |
20100235432 | Trojer | Sep 2010 | A1 |
20100247067 | Gratton | Sep 2010 | A1 |
20100251289 | Agarwal et al. | Sep 2010 | A1 |
20100333131 | Parker et al. | Dec 2010 | A1 |
20110103374 | Lajoie et al. | May 2011 | A1 |
20110162007 | Karaoguz et al. | Jun 2011 | A1 |
20110264530 | Santangelo et al. | Oct 2011 | A1 |
20120014255 | Svedberg | Jan 2012 | A1 |
20120210382 | Walker et al. | Aug 2012 | A1 |
20120278841 | Hasek et al. | Nov 2012 | A1 |
20130227608 | Evans et al. | Aug 2013 | A1 |
20130246643 | Luby et al. | Sep 2013 | A1 |
20130325870 | Rouse et al. | Dec 2013 | A1 |
20130346766 | Tani | Dec 2013 | A1 |
20140013349 | Millar | Jan 2014 | A1 |
20140119195 | Tofighbakhsh et al. | May 2014 | A1 |
20140189749 | Gordon et al. | Jul 2014 | A1 |
20150092837 | Chen et al. | Apr 2015 | A1 |
20150271541 | Gonder et al. | Sep 2015 | A1 |
20150324379 | Danovitz et al. | Nov 2015 | A1 |
20160188344 | Tamir et al. | Jun 2016 | A1 |
20160191133 | Noh et al. | Jun 2016 | A1 |
20160191147 | Martch | Jun 2016 | A1 |
20160307596 | Hardin et al. | Oct 2016 | A1 |
20170302959 | Samchuk et al. | Oct 2017 | A1 |
20170366833 | Amidei et al. | Dec 2017 | A1 |
20180097690 | Yocam | Apr 2018 | A1 |
20180098292 | Gulati | Apr 2018 | A1 |
20180131975 | Badawiyeh et al. | May 2018 | A1 |
20180131979 | Bayoumi et al. | May 2018 | A1 |
20180192094 | Barnett et al. | Jul 2018 | A1 |
20180213251 | Ikonin et al. | Jul 2018 | A1 |
20180368122 | Kuchibhotla et al. | Dec 2018 | A1 |
20180376474 | Khoryaev et al. | Dec 2018 | A1 |
20190014337 | Skupin et al. | Jan 2019 | A1 |
20190014363 | Skupin et al. | Jan 2019 | A1 |
20190069038 | Phillips | Feb 2019 | A1 |
20190166306 | Zen et al. | May 2019 | A1 |
20190320494 | Jayawardene | Oct 2019 | A1 |
20200045110 | Varnica et al. | Feb 2020 | A1 |
20210250196 | Das | Aug 2021 | A1 |
Number | Date | Country |
---|---|---|
2643806 | Jun 2013 | CA |
2405567 | Mar 2005 | GB |
WO-0110125 | Feb 2001 | WO |
WO-0176236 | Oct 2001 | WO |
WO-0191474 | Nov 2001 | WO |
WO-0219581 | Mar 2002 | WO |
WO-2004008693 | Jan 2004 | WO |
WO-2018106166 | Jun 2018 | WO |
Entry |
---|
CableLabs Asset Distribution Interface (ADI) Specification, Version 1.1, MD-SP-ADI1.103-040107, Jan. 7, 2004, 26 pages. |
Cisco Intelligent Network Architecture for Digital Video—SCTE Cable-Tec Expo 2004 information page, Orange County Convention Center, Jun. 2004, 24 pages. |
Deering, S., et al., “Internet Protocol, Version 6 (IPv6) Specification”, Internet Engineering Task Force (IETF) RFC 2460, Dec. 1998, 39 pages. |
DOCSIS 1.0: Cable Modem to Customer Premise Equipment Interface Specification, dated Nov. 3, 2008, 64 pages. |
DOCSIS 1.1: Operations Support System Interface Specification, dated Sep. 6, 2005, 242 pages. |
DOCSIS 1.1: Radio Frequency Interface Specification, dated Sep. 6, 2005, 436 pages. |
DOCSIS 2.0: Radio Frequency Interface Specification, dated Apr. 21, 2009, 499 pages. |
DOCSIS 3.0: Cable Modem to CPE Interface Specification, dated May 9, 2017, 19 pages. |
DOCSIS 3.0: MAC and Upper Layer Protocols Interface Specification, dated Jan. 10, 2017, 795 pages. |
DOCSIS 3.0: Operations Support System Interface Specification, dated Jan. 10, 2017, 547 pages. |
DOCSIS 3.0: Physical Layer Specification, dated Jan. 10, 2017, 184 pages. |
DOCSIS 3.1: Cable Modem Operations Support System Interface Specification, dated May 9, 2017, 308 pages. |
DOCSIS 3.1: CCAP Operations Support System Interface Specification, dated May 9, 2017, 703 pages. |
DOCSIS 3.1: MAC and Upper Layer Protocols Interface Specification, dated May 9, 2017, 838 pages. |
DOCSIS 3.1: Physical Layer Specification, dated May 9, 2017, 249 pages. |
“Internet Protocol, DARPA Internet Program, Protocol Specification”, IETF RFC 791, Sep. 1981, 50 pages. |
Kanouff, Communications Technology: Next-Generation Bandwidth Management—The Evolution of the Anything-to-Anywhere Network, 8 pages, Apr. 1, 2004. |
Motorola DOCSIS Cable Module DCM 2000 specifications, 4 pages, copyright 2001. |
OpenVision Session Resource Manager—Open Standards-Based Solution Optimizes Network Resources by Dynamically Assigning Bandwidth in the Delivery of Digital Services article, 2 pages, (copyright 2006), (http://www.imake.com/hopenvision). |
SCTE 130-1 2008 Digital Program Insertion—Advertising Systems Interfaces standards. |
SCTE 130-1 2013: Digital Program Insertion—Advertising Systems Interfaces Part 1—Advertising Systems Overview, 20 pages. |
SCTE 130-10 2013: Digital Program Insertion—Advertising Systems Interfaces Part 10—Stream Restriction Data Model. |
SCTE 130-2 2008a: Digital Program Insertion—Advertising Systems Interfaces Part 2—Core Data Elements. |
SCTE 130-2 2014 Digital Program Insertion—Advertising Systems Interfaces standards. |
SCTE 130-3 2013: Digital Program Insertion—Advertising Systems Interfaces Part 3—Ad Management Service Interfaces. |
SCTE 130-4 2009: Digital Program Insertion—Advertising Systems Interfaces Part 4—Content Information Service. |
SCTE 130-5 2010: Digital Program Insertion—Advertising Systems Interfaces Part 5—Placement Opportunity Information Service. |
SCTE 130-6 2010: Digital Program Insertion—Advertising Systems Interfaces Part 6—Subscriber Information Service. |
SCTE 130-7 2009: Digital Program Insertion—Advertising Systems Interfaces Part 7—Message Transport. |
SCTE 130-8 2010a: Digital Program Insertion Advertising Systems Interfaces Part 8—General Information Service. |
SCTE 130-9 2014: Recommended Practices for SCTE 130 Digital Program Insertion—Advertising Systems Interfaces. |
SCTE 130-3 2010: Digital Program Insertion—Advertising Systems Interfaces Part 3—Ad Management Service Interface. |
Wikipedia, Digital Video Recorder, obtained from the Internet Nov. 11, 2014. |
Number | Date | Country
---|---|---|
20210274228 A1 | Sep 2021 | US |
Number | Date | Country
---|---|---|
62636020 | Feb 2018 | US |
Relation | Number | Date | Country
---|---|---|---|
Parent | 16058520 | Aug 2018 | US
Child | 17189068 | | US