Apparatus and methods for monitoring and diagnosing a wireless network

Information

  • Patent Grant
  • Patent Number
    11,146,470
  • Date Filed
    Friday, December 21, 2018
  • Date Issued
    Tuesday, October 12, 2021
Abstract
Apparatus and methods for monitoring a wireless local area network (WLAN) to identify inoperative or degraded devices and restore network connectivity to end users. In one embodiment, the network includes one or more access points (APs) in data communication with a cable modem, which in turn communicates with managed network entities via a backhaul connection. Each AP is configured to provide connectivity to client devices, as well as monitor the operation of other network components including the cable modem, via logic indigenous to the AP, and invoke corrective action when failures or degraded performance is detected. In one variant, the logic operative to run on the AP includes both diagnostic and self-healing functionality, so as to enable at least partial automated diagnosis, localization, and recovery from faults, thereby obviating costly troubleshooting by the network operator or service personnel.
Description
COPYRIGHT

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.


BACKGROUND
1. Technological Field

The present disclosure relates generally to the field of wireless networks, and specifically in one or more embodiments, to apparatus and methods for monitoring and diagnosing or correcting (e.g., self-diagnosing or correcting) an identified device within a content distribution network that spans from an operator or distribution portion to client devices or nodes (e.g., indoors or outdoors).


2. Description of Related Technology

Wireless networking technologies enable wireless devices to connect to one another. One ubiquitous application for wireless technology is to provide network access to client devices, such as laptops, smartphones, and other wireless-enabled user devices. One such technology that enables a user to engage in wireless communication (e.g., via services provided through the cable network) is Wi-Fi® (IEEE Std. 802.11), which has become the de facto standard for wireless networking in consumer electronics. Wi-Fi enables convenient access to networks (e.g., the Internet, intranets, other interconnected devices) via at least one access point (“AP,” also colloquially referred to as a “hotspot”) to client devices within the AP's coverage area.


Commercially, Wi-Fi provides high-value services to users within a wide variety of premises and venues, both in and outside the home, including houses, apartments, offices, cafes, hotels, business centers, restaurants, etc. A typical home setup may include a client device in wireless communication with an AP and/or modem (e.g., cable modem or CM) that are in communication with the backhaul portion of a service provider network. Whether the AP and the CM stand alone or are integrated into one “box,” they often operate, both physically and logically, as two different entities with no awareness of each other's status.


Today, Wi-Fi has become the standard choice for providing convenient Internet or other network access. Many of one's work-related activities (e.g., editing documents, reading emails), means of communication (e.g., instant messaging, social networking, sharing media), and means of entertainment (e.g., videos, music, books) may be performed or enabled via remote servers that are accessible through the Internet and/or the service provider's infrastructure. For example, myriad services are available to, e.g., stream content, collaborate with remote personnel, and store files online. As a result, consumers of all demographics are becoming less dependent on local content storage and on their physical location. Rather, most information or content desired by consumers is stored and retrieved via the Internet or other network storage (i.e., from the “cloud”), which advantageously enables client devices to be used “on the go” and placed generally anywhere within the premises, as long as an AP is nearby. Consequently, consumers depend on reliable network connectivity and ideally expect 100% “uptime,” whether they are using mobile devices or personal computers.


However, unforeseen disconnections from the network are inevitable. Any network device (including the AP) may go offline because of traffic overload, a firmware update, maintenance, physical disconnection, overheating, lack of user authentication, etc. Moreover, despite simplifications and enhancements in networking “user experience” over the years, many consumers are aware of only the basic mechanisms of connectivity, such as knowing that they must connect to an AP within range (e.g., by identifying the desired network based on, e.g., its name or service set identifier (SSID)). When a connection goes offline, e.g., when a laptop or smartphone can no longer access the Internet, the end user may not know the cause of the disconnection, or how to diagnose it. That is to say, the user does not know whether the responsibility for the issue lies with the client device itself, with one or more of several devices within the premises (e.g., modem, router, range extender, repeater, or other access points), and/or elsewhere (e.g., backhaul infrastructure of the service provider, the coaxial cable or optical fiber to the premises, etc.). In fact, even the service provider or its diagnostic equipment/software may not know the origin of the problem until the issue is further investigated.


Hence, a user is typically left with few choices, such as rebooting the modem or client device (e.g., by unplugging and replugging the power supply), looking for obvious connection issues such as a loose connector or plug, and/or simply waiting (particularly when the consumer has no control over the hotspot). However, these actions do not necessarily restore the connection, because the device at issue may be upstream of the premises (e.g., at the controller). Moreover, a local device (e.g., AP, CM) may be down for reasons that cannot be solved with a manually forced reboot; it may require a device-induced reboot.


Accordingly, the foregoing issues result in a frustrating experience for the end user, whose primary concern is to maintain connectivity to the wireless network and backhaul, especially when such user has no visibility into when their network service will come back online.


This problem extends not only to individual users, but establishments or enterprises as well. For example, public establishments may derive business from offering free Wi-Fi to customers. When such means for attracting and retaining potential clientele are disabled, current or future business may be affected.


Moreover, a service provider may receive calls from individual or enterprise customers alerting the provider to the disconnection, and/or requesting that technicians be sent to diagnose the equipment (a so-called “truck roll”). However, these manual approaches require user reporting, as well as continuous investigation, searching, and monitoring of potential issues throughout the network on the part of the service provider, including the very costly aforementioned truck rolls.


To these ends, improved solutions are needed for more precise and intelligent mechanisms to identify and recover problematic devices or connections within the service provider network (including even at the customer's premises). Specifically, what are needed are methods and apparatus to automatically monitor, diagnose, and “heal” devices associated with the network (e.g., access points, cable or satellite modems, controllers), and quickly recover a specific device or connection that is responsible for the loss of service.


SUMMARY

The present disclosure addresses the foregoing needs by providing, inter alia, methods and apparatus for monitoring and self-diagnosing a wireless network.


In one aspect, wireless radio frequency access point apparatus is disclosed. In one embodiment, the apparatus includes a first radio frequency modem configured to enable wireless communication between a plurality of user devices and the wireless radio frequency access point apparatus, the first radio frequency modem configured to operate according to a first protocol and comprising: a baseband module; a resource module in data communication with the baseband module and comprising logic operative to run thereon; and a data interface in data communication with the baseband module and configured to enable data communication between the first radio frequency modem and a second modem, the second modem configured to operate according to a second protocol different than the first protocol.


In one variant, the logic is configured to selectively monitor a plurality of network entities via the second modem to evaluate an upstream connectivity status; and when the upstream connectivity status is offline, the first radio frequency modem is configured to stop transmission of a service set identifier (SSID).
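
By way of illustration only, the following Python sketch shows one way the foregoing variant could be structured; the class and method names (e.g., enable_ssid_broadcast, is_rebooting) are hypothetical placeholders rather than any actual AP or cable modem API.

```python
# Hypothetical sketch of the SSID-gating variant described above: the AP logic
# polls upstream entities through the cable modem and suspends SSID broadcast
# while upstream connectivity is offline. All names are illustrative only.

import time

class ApResourceLogic:
    def __init__(self, wlan_modem, cable_modem, upstream_entities, poll_s=10):
        self.wlan_modem = wlan_modem            # first RF modem (Wi-Fi air interface)
        self.cable_modem = cable_modem          # second modem (e.g., DOCSIS CM)
        self.upstream_entities = upstream_entities
        self.poll_s = poll_s

    def upstream_online(self) -> bool:
        # Offline if the CM is rebooting, unreachable, or cannot reach upstream entities.
        if self.cable_modem.is_rebooting() or not self.cable_modem.is_reachable():
            return False
        return all(self.cable_modem.ping(e) for e in self.upstream_entities)

    def run(self):
        while True:
            if self.upstream_online():
                self.wlan_modem.enable_ssid_broadcast()
            else:
                # Hide the SSID so clients do not associate with a dead backhaul.
                self.wlan_modem.disable_ssid_broadcast()
            time.sleep(self.poll_s)
```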


In another variant, the first radio frequency modem comprises a Wireless Local Area Network (WLAN) modem having an air interface, and the second modem comprises a Data Over Cable Services Interface Specification (DOCSIS) compliant cable modem configured to transmit and receive signals over a wireline medium.


In a further variant, the plurality of network entities includes at least one cable modem in communication with the second modem, and the logic is configured to determine that the upstream connectivity status is offline when the at least one cable modem is in a reboot sequence. In one implementation, the logic is configured to determine that the upstream connectivity status is offline when the at least one cable modem has lost upstream connectivity to a service provider backbone network.


In another implementation, the logic is configured to determine that the upstream connectivity status is offline when the second modem has lost connectivity with the at least one cable modem.


In another aspect, embedded access point apparatus is disclosed. In one embodiment, the apparatus includes: a first radio frequency modem configured to enable wireless communication between a plurality of user devices and the embedded access point apparatus, the first radio frequency modem configured to operate according to a first protocol; a cable modem configured to operate according to a second protocol different than the first protocol, wherein the cable modem is configured to communicate with a cable modem termination system of a service provider backbone network; and a resource module configured to transact data between the first radio frequency modem and the cable modem and comprising logic operative to run thereon.


In one variant, the logic is configured to selectively monitor a plurality of network entities of the service provider backbone network via the cable modem to evaluate an upstream connectivity status; and when the upstream connectivity status is offline, the first radio frequency modem is configured to stop transmission of a service set identifier (SSID). In one implementation, the cable modem is further configured to cause the first radio frequency modem to stop transmission of a service set identifier (SSID) when the cable modem is in a reboot sequence.


In another implementation, the first radio frequency modem stops transmission of a service set identifier (SSID) when in a reboot sequence.


In a further implementation, the first radio frequency modem stops transmission of a service set identifier (SSID) while a configuration mismatch is present between the first radio frequency modem and the cable modem; and when the configuration mismatch is present, the resource module is further configured to initiate a self-healing process to correct the configuration mismatch. The configuration mismatch may include, for example, at least a billing code mismatch or an internet protocol (IP) address mismatch.


In another aspect, a method executed by a network device to assist in evaluating network connectivity is disclosed. In one embodiment, the method includes: transmitting a heartbeat signal to one or more network entities, the heartbeat signal causing the one or more network entities to respond when successfully received; waiting for a response to the heartbeat signal; and when a response is successfully received, repeating the transmitting the heartbeat signal and waiting.


In one variant, the method further includes, when the response is not successfully received, evaluating at least one network entity that is offline; and implementing corrective action for the at least one network entity based at least on the evaluating. In one implementation, the at least one network entity is the network device and the evaluating comprises determining whether the network device is upgrading or rebooting; and the implementing of the corrective action comprises waiting for the upgrading or rebooting to complete and thereafter self-healing the network device.


In another implementation, the self-healing further comprises one or more actions such as checking whether the network device is online; checking at least one billing code and at least one internet protocol (IP) address of the network device; resetting at least one upstream cable modem; and/or checking a controller configuration associated with an upstream controller and pinging the upstream controller.
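
The following simplified Python sketch illustrates one plausible ordering of the self-healing actions enumerated above; all helper functions are assumed placeholders, not an actual device interface.

```python
# Illustrative ordering of the self-healing actions described above. The helper
# functions are placeholders (assumptions), intended only to show one plausible
# sequencing of the checks.

def self_heal(device):
    if not device.is_online():
        device.reboot()                      # bring the device itself back online first
        return

    if not device.billing_code_valid() or not device.ip_address_valid():
        device.refresh_provisioning()        # re-pull billing code / IP configuration

    if not device.upstream_cm_responding():
        device.reset_upstream_cable_modem()  # reset the at least one upstream CM

    if not device.controller_config_matches():
        device.sync_controller_config()      # re-check the upstream controller configuration
    device.ping_upstream_controller()        # confirm the controller is reachable
```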


In a further implementation, the at least one network entity is the network device and the evaluating further comprises verifying one or more controller configuration parameters when one or more end users are unable to connect to the network device despite the network device being online.


In yet another implementation, the at least one network entity is the network device, and the method further includes pinging an authorization server of the network, when multiple end users are unable to connect to the network device despite the network device being online.


In yet another implementation, the at least one network entity comprises multiple access points, and the method further comprises executing a troubleshooting process that includes at least one of: checking connectivity of one or more of the multiple access points; identifying a network entity correlated with all of the multiple access points that is likely to be defective; and identifying a network component correlated with all of the multiple access points that is likely to be defective. The identifying of the network entity comprises, for example, checking one or more of a shared access point controller or a shared cable modem termination system; the identifying of the network component comprises, for example, checking one or more of a shared fiber, a shared switch, or a shared concentrator.


In a further aspect of the present disclosure, a method for restoring access within a wireless network is provided.


In another aspect, an apparatus configured to restore access within a wireless network is provided.


In another aspect, a non-transitory computer-readable apparatus is provided.


In a further aspect, a system for use within a wireless network is disclosed.


In one aspect of the present disclosure, a computerized method of operating a content distribution network to compensate for faults within the content distribution network is disclosed. In one embodiment, the content distribution network includes a plurality of wireless local area network (WLAN) access points (APs) and a WLAN controller entity, and the computerized method includes: transmitting a test signal addressed to the WLAN controller entity from at least one of the plurality of WLAN APs; failing to receive, at the at least one WLAN AP, an expected response signal from the WLAN controller entity in response to the test signal; and based at least on the failing to receive the expected response signal from the WLAN controller entity, causing transmission of data to the WLAN controller entity via an alternate communication channel commonly controlled, along with the plurality of WLAN APs and the WLAN controller entity, by an operator of the content distribution network, the transmission of the data causing use of an alternate wireless access node by a user device then-currently associated with the at least one WLAN AP.


In another aspect of the present disclosure, a computerized method of operating a content distribution network to compensate for faults within the content distribution network is disclosed. In one embodiment, the content distribution network has at least one wireless local area network (WLAN) access point (AP) and a WLAN controller entity, and the computerized method includes: transmitting a first signal addressed to the WLAN controller entity from the at least one WLAN AP and via a first data backhaul; failing to receive, at the at least one WLAN AP, an expected response signal from the WLAN controller entity in response to the first signal; and based at least on the failing to receive the expected response signal from the WLAN controller entity, causing transmission of data to the WLAN controller entity via an alternate communication channel, the transmission of the data causing use of the alternate communication channel as backhaul for the at least one WLAN AP.


In a further aspect, a controller apparatus configured for fault compensation within a content distribution network is disclosed. In one embodiment, the controller apparatus includes: a data connection configured for data communication with distribution infrastructure of the content distribution network; and computerized logic embodied in a plurality of computer-readable instructions, the computerized logic configured for data communication with one or more premises modems, the one or more premises modems each being capable of data communication with the content distribution network via at least a first data interface and a second data interface, the first and second data interfaces being commonly controlled by an operator of the content distribution network, the one or more premises modems each in data communication with at least one wireless access node.


In one variant, the controller apparatus is further configured to, via at least the computerized logic: transmit respective first signals addressed to corresponding ones of the one or more wireless access nodes via a first communication channel established via a first data interface of a corresponding one of the premises modems; and responsive to at least a determination that one of the second signals was not received within a prescribed time period from a corresponding one of the wireless access nodes, enable the corresponding one of the premises modems to cause transmission of data via the second data interface.
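
Purely as an illustration of the variant above, the following Python sketch shows a controller probing each access node over the first channel and directing the corresponding premises modem to its second data interface when no response arrives in time; every name used is hypothetical.

```python
# Hypothetical sketch of the failover variant above: the controller probes each
# wireless access node over the first communication channel and, when the
# expected response does not arrive in time, tells the corresponding premises
# modem to carry traffic over its second data interface instead.

def probe_and_failover(controller, access_nodes, timeout_s=5.0):
    for node in access_nodes:
        controller.send_probe(node, via="first_interface")
        if not controller.wait_for_response(node, timeout_s):
            modem = controller.premises_modem_for(node)
            modem.enable_interface("second_interface")   # alternate backhaul path
```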


These and other aspects shall become apparent when considered in light of the disclosure provided herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a functional block diagram illustrating an exemplary hybrid fiber network configuration useful with various aspects of the present disclosure.



FIG. 1a is a functional block diagram illustrating one exemplary network headend configuration useful with various aspects of the present disclosure.



FIG. 1b is a functional block diagram illustrating one exemplary local service node configuration useful with various aspects of the present disclosure.



FIG. 1c is a functional block diagram illustrating one exemplary broadcast switched architecture (BSA) network useful with various aspects of the present disclosure.



FIG. 1d is a functional block diagram illustrating one exemplary packetized content delivery network architecture useful with various aspects of the present disclosure.



FIG. 2 is a graphical representation of an exemplary embodiment of a wireless network, including “meshing”, useful with various embodiments of the present disclosure.



FIG. 3 is a graphical representation of an exemplary embodiment of an end-to-end cable network architecture useful with various embodiments of the present disclosure.



FIGS. 4A and 4B are flow diagrams of a downstream cable modem registration process according to a DOCSIS protocol.



FIG. 5 is a functional block diagram illustrating a typical prior art access point.



FIG. 6 is a functional block diagram illustrating an exemplary access point useful with various embodiments of the present disclosure.



FIG. 7 is a flow diagram showing the typical steps for configuring an Internet protocol (IP) address by a Dynamic Host Configuration Protocol (DHCP) server.



FIG. 7a is a flow diagram showing one prior art Discover, Offer, Request, Acknowledgement (DORA) process, useful with various embodiments of the present disclosure.



FIG. 8 is a flow diagram for configuring an IP address by a DHCP server to allow network access, useful with various embodiments of the present disclosure.



FIG. 9 is a graphical representation of an exemplary wireless network deployed indoors.



FIGS. 9A-9D illustrate various potential fault scenarios occurring within the exemplary wireless network of FIG. 9.



FIG. 10 is a graphical representation of an exemplary wireless network deployed outdoors.



FIGS. 10A-10B illustrate various potential fault scenarios associated with the network of FIG. 10.



FIG. 11 is a logical flow diagram of an exemplary method for an access point to monitor a wireless network and restore network access to client devices.



FIG. 12 is a logical flow diagram of an exemplary method for an upstream network entity to monitor a wireless network and restore network access to client devices.



FIG. 13 is a functional block diagram of an exemplary embodiment of a controller apparatus according to the present disclosure.





All figures © Copyright 2016 Time Warner Enterprises LLC. All rights reserved.


DETAILED DESCRIPTION

Reference is now made to the drawings wherein like numerals refer to like parts throughout.


As used herein, the term “access point” refers generally and without limitation to a network node which enables communication between a user or client device and another entity within a network, such as for example a Wi-Fi AP, or a Wi-Fi-Direct enabled client or other device acting as a Group Owner (GO).


As used herein, the term “application” refers generally and without limitation to a unit of executable software that implements a certain functionality or theme. The themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculator etc.), and one application may have more than one theme. The unit of executable software generally runs in a predetermined environment; for example, the unit could include a downloadable Java Xlet™ that runs within the JavaTV™ environment.


As used herein, the term “client device” includes, but is not limited to, set-top boxes (e.g., DSTBs), gateways, modems, personal computers (PCs), and minicomputers, whether desktop, laptop, or otherwise, and mobile devices such as handheld computers, PDAs, personal media devices (PMDs), tablets, “phablets”, smartphones, and vehicle infotainment or similar systems.


As used herein, the term “codec” refers to a video, audio, or other data coding and/or decoding algorithm, process or apparatus including, without limitation, those of the MPEG (e.g., MPEG-1, MPEG-2, MPEG-4/H.264, H.265, etc.), Real (RealVideo, etc.), AC-3 (audio), DiVX, XViD/ViDX, Windows Media Video (e.g., WMV 7, 8, 9, 10, or 11), ATI Video codec, or VC-1 (SMPTE standard 421M) families.


As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.) and the like.


As used herein, the term “DOCSIS” refers to any of the existing or planned variants of the Data Over Cable Services Interface Specification, including for example DOCSIS versions 1.0, 1.1, 2.0, 3.0 and 3.1.


As used herein, the term “headend” or “backend” refers generally to a networked system controlled by an operator (e.g., an MSO) that distributes programming to MSO clientele using client devices. Such programming may include literally any information source/receiver including, inter alia, free-to-air TV channels, pay TV channels, interactive TV, over-the-top services, streaming services, and the Internet.


As used herein, the terms “heartbeat” and “heartbeat signal” refer generally and without limitation to a signal generated by hardware or software and sent to and/or acknowledged by a different network entity. Receipt and/or response to a heartbeat generally indicates normal operation (e.g., providing data communication and/or network connectivity), and may be used to synchronize multiple network devices or portions of a network. Such signals may be generated, transmitted, and received by any network entity configured to do so, from access points to headend or intermediary/local apparatus (e.g., controller apparatus), as well as between clients in a network (e.g., two clients in an ad hoc Wi-Fi network at a premises). Heartbeats may also be e.g., “one way” (i.e., a device is programmed to issue heartbeat signals according to a prescribed scheme, and failure of a monitoring device or process to receive such signals is indicative of a potential loss of functionality), or “two way” (i.e., a monitoring device issues a “ping” or the like to invoke a response from the target device or process being monitored; failure of the monitoring device to receive the response being indicative of the potential loss of functionality).
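
As a toy illustration (not part of the patent disclosure), the following Python sketch contrasts the “one-way” and “two-way” heartbeat schemes defined above, using simple timestamps in place of real network I/O.

```python
# Toy illustration of the "one-way" versus "two-way" heartbeat schemes defined
# above. No real network I/O is performed; timestamps stand in for received
# heartbeats and ping responses.

import time

class OneWayMonitor:
    """Peer emits heartbeats on a schedule; prolonged silence implies possible failure."""
    def __init__(self, interval_s, tolerance=3):
        self.interval_s = interval_s
        self.tolerance = tolerance
        self.last_seen = time.monotonic()

    def record_heartbeat(self):
        self.last_seen = time.monotonic()

    def peer_suspect(self) -> bool:
        return time.monotonic() - self.last_seen > self.tolerance * self.interval_s

class TwoWayMonitor:
    """Monitor pings the target; a missing response implies possible failure."""
    def __init__(self, ping_fn, timeout_s=2.0):
        self.ping_fn = ping_fn        # callable returning True if the target answered
        self.timeout_s = timeout_s

    def peer_suspect(self) -> bool:
        return not self.ping_fn(self.timeout_s)
```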


As used herein, the terms “Internet” and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet. Other common examples include but are not limited to: a network of external servers, “cloud” entities (such as memory or storage not local to a device, storage generally accessible at any time via a network connection, and the like), service nodes, access points, controller devices, client devices, etc.


As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), 3D memory, and PSRAM.


As used herein, the terms “microprocessor” and “processor” or “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.


As used herein, the terms “MSO” or “multiple systems operator” refer to a cable, satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums.


As used herein, the terms “network” and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets). Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25, Frame Relay, 3GPP, 3GPP2, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.).


As used herein, the term “network interface” refers to any signal or data interface with a component or network including, without limitation, those of the FireWire (e.g., FW400, FW800, etc.), USB (e.g., USB 2.0, 3.0, OTG), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), LTE/LTE-A, Wi-Fi (802.11), WiMAX (802.16), Z-wave, PAN (e.g., 802.15), or power line carrier (PLC) families.


As used herein, the term “QAM” refers to modulation schemes used for sending signals over e.g., cable or other networks. Such modulation scheme might use any constellation level (e.g. QPSK, 16-QAM, 64-QAM, 256-QAM, etc.) depending on details of a network. A QAM may also refer to a physical channel modulated according to the schemes.


As used herein the terms “reboot” and “re-initialization” include, without limitation, both “soft” reboots (i.e., those targeted at reinitializing one or more host device software/firmware processes without electrical power-down), and “hard” reboots (i.e., those which may interrupt power to the host as a whole, or particular components thereof). In some cases, hard reboots are further characterized in that they require a manual intervention or trigger (e.g., a user has to physically depress a button, etc.).


As used herein, the term “server” refers to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, or other services to one or more other devices or entities on a computer network.


As used herein, the term “storage” refers to, without limitation, computer hard drives, DVR devices, memory, RAID devices or arrays, optical media (e.g., CD-ROMs, Laserdiscs, Blu-Ray, etc.), or any other devices or media capable of storing content or other information.


As used herein, the term “Wi-Fi” refers to, without limitation and as applicable, any of the variants of IEEE-Std. 802.11 or related standards including 802.11 a/b/g/n/s/v/ac or 802.11-2012/2013, as well as Wi-Fi Direct (including inter alia, the “Wi-Fi Peer-to-Peer (P2P) Specification”, incorporated herein by reference in its entirety).


As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, Zigbee®, Z-wave, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, and infrared (i.e., IrDA).


Overview


As noted above, a wireless local area network (WLAN) is configured to provide network connectivity (e.g., to the Internet) via a service provider network, so as to deliver data and provide access to network services to nearby client devices (smartphone, laptop, desktop, tablet, etc.) via one or more wireless access points (e.g., WLAN APs). The data may travel through multiple network entities, such as a cable modem (CM) or satellite modem, intermediary entities (e.g., data center, backhaul infrastructure), AP controller, cable modem termination system (CMTS), and other backend apparatus.


An end user utilizing the wireless network may become disconnected from the network, or experience loss of service via the network, for various reasons.


The present disclosure provides a system, apparatus and methods to facilitate detection and tracking of defective or inoperative network devices, discovery of reasons for service outages, outage durations, and service restoration status in a substantially automated fashion so as to, inter alia, enhance network and service provision reliability, avoid or minimize loss of user experience, as well as catalog and characterize various types of events so as to enable subsequent use by network personnel/processes of a “living” database of outage scenarios and types. By enhancing the capabilities for data collection, monitoring, and communication between the various customer premises entities (as well as entities of the MSO network), identification of a variety of problem or fault scenarios such as CM failure, AP failure, connection failure, continuous reboot, loss of network/IP address, and user authentication/login failures is readily performed, thereby obviating service visits, technical support calls, and other costly and time consuming activities by the network operator or its agents.


The exemplary embodiments also advantageously remove much of the burden typically placed on a service provider customer to self-diagnose or troubleshoot issues with WLAN and modem implementations; i.e., “trial and error” normally conducted even before calling technical support or requesting a service visit.


In one embodiment of the present disclosure, a customer or user premises AP is configured to monitor the network health by transmitting “heartbeat” signals directed to one or more upstream network devices. The AP expects a return signal from each of the “pinged” upstream devices. Moreover, the intelligent AP can obtain data, such as via a data push or pull from the local CM, AP controller, or other entity, relating to performance of the various components (e.g., RF upstream and downstream power levels for the CM, accessible frequency bands within the available spectrum, etc.) so as to further enable isolation of the problem(s) within the network.
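
A minimal Python sketch of this monitoring behavior appears below; it assumes hypothetical helper methods (ping, downstream_power, etc.) rather than a real CM or AP API, and is intended only to make the heartbeat-plus-telemetry idea concrete.

```python
# Minimal sketch (assumed helper names, no real CM/controller API) of the
# premises-AP monitoring described above: "heartbeat" pings to upstream devices
# plus a pull of CM telemetry used to help localize a fault.

def check_upstream(ap, upstream_devices):
    """Ping each upstream device; return those that failed to respond."""
    return [dev for dev in upstream_devices if not ap.ping(dev, timeout_s=2.0)]

def collect_cm_telemetry(cable_modem):
    """Pull performance data that helps narrow down where the problem lies."""
    return {
        "ds_power_dbmv": cable_modem.downstream_power(),   # RF downstream level
        "us_power_dbmv": cable_modem.upstream_power(),     # RF upstream level
        "usable_channels": cable_modem.accessible_channels(),
    }

def monitor_once(ap, cable_modem, upstream_devices):
    failed = check_upstream(ap, upstream_devices)
    if failed:
        # Attach telemetry so the AP (or controller) can isolate the fault.
        return {"offline": failed, "cm_telemetry": collect_cm_telemetry(cable_modem)}
    return {"offline": [], "cm_telemetry": None}
```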


Moreover, in another aspect, an “intelligent” CM configuration is disclosed, wherein the CM can, such as in the event of detected problems with an associated AP, store configuration or other information relating to the AP for transmission upstream (e.g., to the AP controller or other analytical/management process within the MSO network) for further use in evaluating or diagnosing the problem(s) within the customer's WLAN and associated infrastructure.


When the device at issue is identified, the AP may send a reboot signal or similar instruction that causes the device to reboot or take other corrective action. In one variant, the AP checks whether this self-correcting process has properly brought the device at issue back online. If there does not appear to be a problem with the network, the AP may send an alert to the client device by pushing a message, e.g., via “bit stuffed” beacons.
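
The following Python sketch illustrates the corrective flow described above (reboot the identified device, verify recovery, otherwise alert the client or escalate); the method names are assumptions for illustration only.

```python
# Hedged sketch of the corrective flow in the preceding paragraph: issue a
# reboot instruction to the identified device, verify it came back online, and
# if the network itself appears healthy, notify the client (e.g., via a message
# carried in beacon frames). All method names are assumptions.

import time

def correct_device(ap, device, settle_s=60):
    ap.send_reboot(device)
    time.sleep(settle_s)                     # allow the reboot sequence to finish
    return ap.ping(device, timeout_s=2.0)    # True if the device recovered

def handle_fault(ap, suspect_device, client):
    if suspect_device is None:
        # Network looks fine; the issue is likely local to the client device.
        ap.push_beacon_message(client, "Network OK; check your device settings.")
        return
    if not correct_device(ap, suspect_device):
        ap.escalate_to_operator(suspect_device)   # self-healing failed; report upstream
```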


In another embodiment, an upstream network entity (e.g., a controller) may participate in self-corrective actions. In this case, the controller expects information (such as via a heartbeat signal) from one or more downstream devices, and sends a response to acknowledge receipt and inform the downstream device (e.g., an AP) that it is operational. When the information or heartbeat is no longer received, the controller may cause a remote restart of one or more identified offline devices that should have been sending heartbeats to the controller. In one variant, the controller may act like the AP as described supra, originating the heartbeat signals as well as reboot signals (including to itself).
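
A simplified, illustrative Python counterpart of this controller-side behavior is shown below; the restart hook and timing values are assumptions, not specifics from the disclosure.

```python
# Controller-side counterpart (illustrative only): the controller expects
# periodic heartbeats from each downstream AP, acknowledges them, and remotely
# restarts any AP whose heartbeats stop arriving. Names are placeholders.

import time

class ControllerHeartbeatMonitor:
    def __init__(self, restart_fn, max_silence_s=90):
        self.restart_fn = restart_fn          # e.g., sends a remote reboot command
        self.max_silence_s = max_silence_s
        self.last_seen = {}                   # ap_id -> last heartbeat timestamp

    def on_heartbeat(self, ap_id):
        self.last_seen[ap_id] = time.monotonic()
        return "ACK"                          # response tells the AP the controller is up

    def sweep(self):
        now = time.monotonic()
        for ap_id, ts in self.last_seen.items():
            if now - ts > self.max_silence_s:
                self.restart_fn(ap_id)        # remote restart of the silent AP
```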


Detailed Description of Exemplary Embodiments

Exemplary embodiments of the apparatus and methods of the present disclosure are now described in detail. While these exemplary embodiments are described in the context of the previously mentioned Wi-Fi WLAN(s) associated with a managed network (e.g., hybrid fiber coax (HFC) cable architecture having a multiple systems operator (MSO), digital networking capability, IP delivery capability, and a plurality of client devices), the general principles and advantages of the disclosure may be extended to other types of networks and architectures that are configured to deliver digital media data (e.g., text, images, video, and/or audio). Such other networks or architectures may be broadband, narrowband, wired or wireless, or otherwise, the following therefore being merely exemplary in nature.


It will also be appreciated that while described generally in the context of a network providing service to a customer or consumer or end user (i.e., residential), the present disclosure may be readily adapted to other types of environments including, e.g., outdoors, commercial/retail, or enterprise domain (e.g., businesses), and government/military applications. Myriad other applications are possible.


Also, while certain aspects are described primarily in the context of the well-known Internet Protocol (described in, inter alia, Internet Protocol DARPA Internet Program Protocol Specification, IETF RFC 791 (September 1981) and Deering et al., Internet Protocol, Version 6 (IPv6) Specification, IETF RFC 2460 (December 1998), each of which is incorporated herein by reference in its entirety), it will be appreciated that the present disclosure may utilize other types of protocols (and in fact bearer networks to include other internets and intranets) to implement the described functionality.


Other features and advantages of the present disclosure will immediately be recognized by persons of ordinary skill in the art with reference to the attached drawings and detailed description of exemplary embodiments as given below.


Service Provider Network—



FIG. 1 illustrates a typical service provider network configuration useful with the features of the wireless network described herein. This service provider network 100 is used in one embodiment of the disclosure to provide backbone and Internet access from the service provider's wireless access points (e.g., Wi-Fi APs operated or maintained by the service provider or its customers/subscribers), one or more cable modems (CMs) in data communication therewith, or even third party access points accessible to the service provider via e.g., an interposed network such as the Internet (e.g., with appropriate permissions from the access point owner/operator/user).


As opposed to an unmanaged network, the managed service-provider network of FIG. 1 advantageously allows, inter alia, control and management of a given user's access (such user which may be a network subscriber, or merely an incidental/opportunistic user of the service) via the wireless access point(s), including imposition and/or reconfiguration of various access “rules” or other configurations applied to the wireless access points. As but one example, the wireless access points (see discussion of FIG. 1a infra) disposed at the service location(s) can be coupled to the bearer managed network (FIG. 1) via, e.g., a cable modem termination system (CMTS) and associated local DOCSIS cable modem (CM), a wireless bearer medium (e.g., an 802.16 WiMAX system), a fiber-based system such as FiOS or similar, a third-party medium which the managed network operator has access to (which may include any of the foregoing), or yet other means.


Advantageously, the service provider network 100 also allows components at the service location (e.g., Wi-Fi APs and any supporting infrastructure such as routers, switches, etc.) to be remotely reconfigured by the network MSO, based on e.g., prevailing operational conditions in the network, changes in user population and/or makeup of users at the service location, business models (e.g., to maximize profitability), etc. In certain embodiments, the service provider network also advantageously permits the aggregation and/or analysis of subscriber- or account-specific data (including inter alia, particular mobile devices associated with such subscriber or accounts) as part of the provision of services to users under the exemplary delivery models described herein.


The various components of the exemplary embodiment of the network 100 include (i) one or more data and application origination sources 102; (ii) one or more content sources 103; (iii) one or more application distribution servers 104; (iv) one or more VOD servers 105; (v) client devices and/or Customer Premises Equipment (CPE) 106; (vi) one or more routers 108; (vii) one or more wireless access point controllers 110 (which may be placed more locally as shown, or in the headend or “core” portion of the network); (viii) one or more cable modems 112; and/or (ix) one or more access points 114. The distribution server(s) 104, VOD servers 105 and CPE/client device(s) 106 are connected via a bearer (e.g., HFC) network 101. A simple architecture comprising one of each of certain components 102, 103, 104, 105, 108, 110 is shown in FIG. 1 for simplicity, although it will be recognized that comparable architectures with multiple origination sources, distribution servers, VOD servers, controllers, and/or client devices (as well as different network topologies) may be utilized consistent with the present disclosure. For example, the headend architecture of FIG. 1a (described in greater detail below), or others, may be used.


It is also noted that cable network architecture is typically a “tree-and-branch” structure, and hence multiple tiered APs may be linked to each other or cascaded via such structure.



FIG. 1a shows one exemplary embodiment of a headend architecture. As shown in FIG. 1a, the headend architecture 150 comprises typical headend components and services including a billing module 152, a subscriber management system (SMS) and client/CPE configuration management module 154, a cable modem termination system (CMTS) and OOB system 156, as well as LAN(s) 158, 160 placing the various components in data communication with one another. It will be appreciated that while a bar or bus LAN topology is illustrated, any number of other arrangements as previously referenced (e.g., ring, star, etc.) may be used consistent with the disclosure. It will also be appreciated that the headend configuration depicted in FIG. 1a is a high-level, conceptual architecture, and that each MSO may have multiple headends deployed using custom architectures.


The exemplary architecture 150 of FIG. 1a further includes a conditional access system (CAS) 157 and a multiplexer-encrypter-modulator (MEM) 162 coupled to the HFC network 101 adapted to process or condition content for transmission over the network. The distribution servers 164 are coupled to the LAN 160, which provides access to the MEM 162 and network 101 via one or more file servers 170. The VOD servers 105 are coupled to the LAN 160 as well, although other architectures may be employed (such as for example where the VOD servers are associated with a core switching device such as an 802.3z Gigabit Ethernet device). As previously described, information is carried across multiple channels. Thus, the headend must be adapted to acquire the information for the carried channels from various sources. Typically, the channels being delivered from the headend 150 to the client devices/CPE 106 (“downstream”) are multiplexed together in the headend, as previously described and sent to neighborhood hubs (as shown in the exemplary scheme of FIG. 1b) via a variety of interposed network components.


As shown in FIG. 1b, the network 101 of FIGS. 1 and 1a comprises a fiber/coax arrangement wherein the output of the MEM 162 of FIG. 1a is transferred to the optical domain (such as via an optical transceiver 177 at the headend or further downstream). The optical domain signals are then distributed to a fiber node 178, which further distributes the signals over a distribution network 180 to a plurality of local servicing nodes 182. This provides an effective 1:N expansion of the network at the local service end.


Content (e.g., audio, video, data, files, etc.) is provided in each downstream (in-band) channel associated with the relevant service group. To communicate with the headend or intermediary node (e.g., hub server), the client devices/CPE 106 may use the out-of-band (OOB) or DOCSIS channels and associated protocols. The OCAP 1.0, 2.0, 3.0, 3.1 (and subsequent) specification provides for exemplary networking protocols both downstream and upstream, although the present disclosure is in no way limited to these approaches.



FIG. 1c illustrates an exemplary “switched” network architecture. Specifically, the headend 150 contains switched broadcast control 190 and media path functions 192; these elements cooperating to control and feed, respectively, downstream or edge switching devices 194 at the hub site which are used to selectively switch broadcast streams to various service groups. Broadcast switched architecture (BSA) media path 192 may include a staging processor 195, source programs, and bulk encryption in communication with a switch 275. A BSA server 196 is also disposed at the hub site, and implements functions related to switching and bandwidth conservation (in conjunction with a management entity 198 disposed at the headend). An optical transport ring 197 is utilized to distribute the dense wave-division multiplexed (DWDM) optical signals to each hub in an efficient fashion.


In addition to “broadcast” content (e.g., video programming), the systems of FIGS. 1a and 1c (and 1d discussed below) also deliver Internet data services using the Internet protocol (IP), although other protocols and transport mechanisms of the type well known in the digital communication art may be substituted. One exemplary delivery paradigm comprises delivering MPEG-based video content, with the video transported to user client devices (including IP-based STBs or IP-enabled consumer devices) over the aforementioned DOCSIS channels comprising MPEG (or other video codec such as H.264 or AVC) over IP over MPEG. That is, the higher layer MPEG- or other encoded content is encapsulated using an IP protocol, which then utilizes an MPEG packetization of the type well known in the art for delivery over the RF channels. In this fashion, a parallel delivery mode to the normal broadcast delivery exists; i.e., delivery of video content both over traditional downstream QAMs to the tuner of the user's STB or other receiver device for viewing on the television, and also as packetized IP data over the DOCSIS QAMs to the user's client device or other IP-enabled device via the user's cable modem. Delivery in such packetized modes may be unicast, multicast, or broadcast.


Referring again to FIG. 1c, the IP packets associated with Internet services are received by the edge switch 194, and in one embodiment forwarded to the cable modem termination system (CMTS) 199. The CMTS examines the packets, and forwards packets intended for the local network to the edge switch 194. Other packets are discarded or routed to another component. As an aside, a cable modem is used to interface with a network counterpart (e.g., CMTS) so as to permit two-way broadband data service between the network and users within a given service group, such service which may be symmetric or asymmetric as desired (e.g., downstream bandwidth/capabilities/configurations may or may not be different than those of the upstream).


The edge switch 194 forwards the packets received from the CMTS 199 to the QAM modulator, which transmits the packets on one or more physical (QAM-modulated RF) channels to the CPE/client devices. The IP packets are typically transmitted on RF channels (e.g., DOCSIS QAMs) that are different than the RF channels used for the broadcast video and audio programming, although this is not a requirement. The client devices/CPE 106 are each configured to monitor the particular assigned RF channel (such as via a port or socket ID/address, or other such mechanism) for IP packets intended for the subscriber premises/address that they serve. For example, in one embodiment, a business customer premises obtains its Internet access (such as for a connected Wi-Fi AP) via a DOCSIS cable modem or other device capable of utilizing the cable “drop” to the premises (e.g., a premises gateway, etc.).


While the foregoing network architectures described herein can (and in fact do) carry packetized content (e.g., IP over MPEG for high-speed data or Internet TV, MPEG2 packet content over QAM for MPTS, etc.), they are often not optimized for such delivery. Hence, in accordance with another embodiment of the disclosure, a “packet optimized” delivery network is used for carriage of the packet content (e.g., Internet data, IPTV content, etc.). FIG. 1d illustrates one exemplary implementation of such a network, in the context of a 3GPP IMS (IP Multimedia Subsystem) network with common control plane and service delivery platform (SDP), as described in co-owned and co-pending U.S. patent application Ser. No. 12/764,746 filed Apr. 21, 2010 and entitled “METHODS AND APPARATUS FOR PACKETIZED CONTENT DELIVERY OVER A CONTENT DELIVERY NETWORK”, which is now published as U.S. Patent Application Publication No. 2011/0103374 of the same title, incorporated herein by reference in its entirety. Such a network provides, inter alia, significant enhancements in terms of common control of different services, implementation and management of content delivery sessions according to unicast or multicast models, etc.; however, it is appreciated that the various features of the present disclosure are in no way limited to this or any of the other foregoing architectures.


It will be appreciated that the foregoing MSO or managed network can advantageously be leveraged for easy installation of the various APs (and/or any lower-level “children APs” as described in co-owned U.S. patent application Ser. No. 15/002,232 entitled “APPARATUS AND METHOD FOR WIRELESS NETWORK SERVICES IN MOVING VEHICLES” and filed Jan. 20, 2016, issued as U.S. Pat. No. 9,918,345, incorporated herein by reference in its entirety) within a geographic region. Consider, for example, a MSO network that is already pervasive throughout a given area (i.e., the MSO has numerous customers, both business and residential and otherwise); in such networks, the MSO already has significant infrastructure deployed, at a very high level of granularity. Hence, if an AP needs to be placed at a given location in order to effect the coverage/operation for the Wi-Fi network described herein, the MSO can easily “tap off” the existing infrastructure in that area to enable the AP placement. This may take the form of e.g., placement of an AP coincident with a given customer's extant equipment, and/or placement of new equipment that taps off a local service node. The present disclosure further contemplates provision by the MSO (or other parties) of consideration to the customer for allowing the placement of the equipment on their premises (e.g., payments, credits on their bill, special services or features, etc.).


It is also contemplated that the service provider may utilize or “piggyback” off the infrastructure of other service providers, utilities, etc. For instance, a third-party service provider may have a high-bandwidth backhaul “drop” near a location desired by the MSO; the MSO can then lease, rent, or otherwise pay that third party for use of the drop. Similarly, traffic signal poles, lighting, bridges, tunnels, etc. all contain a wide variety of cabling, conduits, and other infrastructure which the (host) MSO could make use of so as to obviate having to perform a new installation (and all of the attendant costs and delays thereof).


Hence, the sheer quantity of network devices (i.e., APs, hotspots, and/or other nodes) and backhaul infrastructure (as listed above) that provide end users with constant access to the Internet (e.g., via the ubiquitous numbers of APs installed within modern infrastructure) presents challenges for the MSO (or dedicated portions thereof, such as the AP controller, CMTS, etc.) in monitoring and identifying problematic devices and connections within the network (including within the home or other premises) in order to correct them. The present disclosure alleviates at least a portion of this challenge by offloading the workload of monitoring and identifying devices at issue to local devices (e.g., the AP) capable of such functions.


Mesh Architecture—



FIG. 2 illustrates a diagram of an exemplary mesh network deployed across a managed (here, cable) network that is useful consistent with the various features of the present disclosure. In this context, a mesh network is a network topology that enables various nodes within the network to communicate and distribute data with one another. Various point-to-point and point-to-multi-point scenarios are possible in such a topology. Hence, several access points in a mesh network may wirelessly share a single Internet connection. In an exemplary embodiment, the mesh network 200 includes one or more end users and client devices 202a, 202b (e.g., laptop, desktop, smartphone, tablet) in data communication with one or more network nodes (e.g., a mesh access point 204 and/or root access point 206 via a wired or wireless connection, e.g., Wi-Fi) to access the network (e.g., internets, intranets). In some embodiments, root APs (and even mesh APs) may comprise cable modems and/or other nodes for client devices to access the network. For instance, in one implementation, all of the APs reside within the end user's premises, such as within a home or business. In another implementation, only some of the APs reside within the end user's premises, while one or more others are disposed external thereto (e.g., outdoor Wi-Fi APs broadcasting beacons that are recognized and received by client devices for, e.g., general access, or delivering contextually relevant information, as described in co-owned and co-pending U.S. patent application Ser. No. 15/063,314 filed Mar. 7, 2016 and entitled “APPARATUS AND METHODS FOR DYNAMIC OPEN-ACCESS NETWORKS”, incorporated supra). Outdoor applications include deployment in public areas with numerous potential users, such as an airport, stadium, mall, subway, etc.


As shown, an end user may be wirelessly connected with a mesh access point 204 that is in data communication with a root access point 206 (e.g., via wired or wireless “local backhaul”), which is in turn in communication with a router 208, controller 210, backend 212, external device 214 (e.g., a printer), other APs or mesh networks, etc., thereby providing network access to the end user(s). Multiple mesh APs 204a, 204b may be daisy chained (e.g., in a repeater configuration within relatively large premises) to relay information to other end users (not shown) connected to any one of the APs.



FIG. 3 illustrates an exemplary cable network architecture that extends from client devices to, inter alia, data centers. In the exemplary embodiment, the architecture 300 is divided into four main logical groups: an access network 302, a regional data center 304, a national data center 306, and a service platform 308. The access network 302 includes one or more APs (e.g., wireless APs 308a, 308b, 308c) and one or more end users 310 connected thereto via client devices (which, as in FIG. 2, may include “cascaded” topologies including one or more mesh elements). The regional data center 304 assists in providing services to the end users 310 by receiving, transmitting, and processing data between the access network 302 and the backbone 312 of the cable network. In one embodiment, the regional data center 304 is a local infrastructure that includes controllers (e.g., AP controllers), switches, policy servers and network address translators (NATs) in communication with the backbone 312. The regional data center 304 may be, for example, an intermediate data center disposed on premises remote from the local APs (e.g., mesh APs) and user premises, and disposed within a larger infrastructure.


In the exemplary embodiment, the backbone 312 of the network enables data communication and services between the regional data center 304 and the national data center 306 via backhaul, and/or connection to the (public) Internet 314. In one implementation, the national data center 306 provides further top-level provisioning services to the regional data center 304 (e.g., load balancing, support of Trivial File Transfer Protocols (TFTP), Lightweight Directory Access Protocols (LDAP), and Dynamic Host Configuration Protocols (DHCP)), as well as providing the same to other data centers and/or access networks which may be part of the network operator's (e.g., MSO's) national-level architecture. National data center 306 also houses more advanced backend apparatus (e.g., CMTS 199, AP controllers, Layer 3 switches, and servers for the aforementioned provisioning services). In one embodiment, a separate service platform 308 may provide auxiliary services to the end users subscribed to the network provider, including access to mail exchange servers, remote storage, etc. Thus, it can be appreciated that myriad network nodes and entities, as well as connections therebetween, enable client devices (and ultimately end users 310) to maintain end-to-end connectivity across the network.


Cable Modem—


The CM provides multiple functionalities to the network. It is a modem (i.e., it modulates and demodulates radio frequency signals), can facilitate encryption/decryption and conditional access (CA), and can act as a bridge, a router, a network monitoring/management (e.g., Simple Network Management Protocol (SNMP)) agent, an Ethernet hub, etc. As such, the CM is somewhat of a "chokepoint" for many processes and services delivered to or originating from the customer's premises; accordingly, even partial failure of the CM can result in loss of AP functionality or connectivity to the MSO network (and hence other networks such as the public Internet).


When sending and receiving data, a CM may use several modulation schemes, but the two used most frequently are Quadrature Phase Shift Keying (QPSK) (allowing a data bitrate up to approximately 10 Mbps) and 64-QAM (allowing a data bitrate up to approximately 36 Mbps). Moreover, a CM typically sends and receives data (i.e., upstream and downstream, respectively) in two different fashions. In one embodiment, when data is sent in the downstream direction (i.e., toward the CM), the digital data is modulated within a frequency range of 42 MHz to 750 MHz and then placed on a typical 6 MHz television carrier. Since cable networks have a tree-and-branch network structure (for instance, a CM may be connected to multiple root APs via a switch, each of the root APs being connected to multiple mesh APs via a switch), noise is added as signals travel upstream and combine (e.g., multiple mesh APs sending traffic to a root AP, multiple root APs sending traffic to the CM). To remedy this problem, the QPSK modulation scheme may be used in the upstream direction, as QPSK provides more robust modulation in a noisy environment. However, QPSK does not allow a bitrate as high as that of QAM. Thus, when the CM sends data upstream, the transmission rate tends to be slower than when the CM receives data downstream (i.e., the link is asymmetric); this asymmetry is typically acceptable, as most users characteristically download much more data than they upload. Notably, all CMs (and hence APs) within a given service group share both downstream and upstream bandwidth among themselves as well.


Nonetheless, such bandwidth limitations do not affect response times, nor do they significantly limit the quantity and frequency of "low overhead" heartbeat signals exchanged with other devices. For instance, when the premises AP sends a "heartbeat" to the AP controller, and the controller returns a response signal (as described in greater detail below), the transmission latency upstream is similar to the response transmission latency downstream (i.e., from the AP controller back to the CM/AP).


In terms of hardware, the CM's RF interface comprises an external F-connector, and the CM is configured for Ethernet connection with twisted-pair cables capable of transmitting at 10, 100 or 1000 Mbps. The CM may also support IPv4 and IPv6 protocols. A DOCSIS-enabled CM shares channels using a Time Division Multiple Access (TDMA) or Advanced Time Division Multiple Access (ATDMA) scheme; i.e., when the CM is not transmitting data, its RF transmitter is turned off, and when it does transmit, it transmits data in bursts.


In a typical CM, the downstream maximum data rate is approximately 343 Mbps across 8 downstream (RF) channels. The CM is capable of downstream communication within a frequency range of e.g., 88 MHz to 1002 MHz, via 64- or 256-QAM modulation. The CM's RF input/output power ranges from e.g., −15 to +15 dBmV. The exemplary CM's upstream maximum data rate is approximately 122 Mbps across 4 upstream channels, and the CM is capable of upstream communication within a frequency range of 5 MHz to 42 MHz, via various modulation schemes (e.g., QPSK, or 8-, 16-, 32-, 64- or 128-QAM).


The CM's RF output power varies depending on modulation and time-division scheme. For example, for 32-QAM and 64-QAM (ATDMA only), typical RF output power is +8 to +54 dBmV. For 8-QAM and 16-QAM, typical RF output power is +8 to +55 dBmV, and +8 to +58 dBmV for QPSK. For all modulations based on Synchronous Code Division Multiple Access (S-CDMA), typical RF output power is +8 to +53 dBmV.


Referring now to FIG. 4A, a flow diagram of a downstream cable modem registration process according to a typical prior art DOCSIS protocol is shown, so as to illustrate typical cable modem operation and indicate potential areas for device or other failures which can be addressed by the features of the present disclosure. Specifically, as shown in FIG. 4A, an internal flow occurs between a cable modem termination system (CMTS) 402 (such as e.g., the CMTS 199 of FIG. 1c) and a cable modem (CM) 404, such as the modem 112 of FIG. 1, attempting to acquire a QAM digital channel.


In an exemplary embodiment, when the CM 404 turns on and evaluates signals present on the RF cable (e.g., the coaxial connection to the cable network), it searches for a valid downstream DOCSIS channel. Meanwhile, the CMTS 402 transmits a "sync" (synchronization) broadcast every 200 milliseconds for system timing. In addition, the CMTS 402 sends an Upstream Channel Descriptor (UCD) every 2 seconds to instruct the CM 404 as to the upstream frequency to be used for transmission, along with other parameters needed to communicate over the network. The CMTS 402 also sends Media Access Protocol (MAP) messages to allocate time periods for each CM 404 according to a time-division scheme. The CM 404, in turn, looks for the SYNC, UCD and MAP messages from the CMTS 402. If the CM 404 receives all three messages, it recognizes that it is on a valid DOCSIS channel; otherwise, the CM 404 continues searching through QAM channels for one to lock onto.
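

By way of illustration only (and not as a description of any particular DOCSIS implementation), the following minimal Python sketch captures the lock criterion just described: a candidate channel is treated as a valid DOCSIS downstream channel only if SYNC, UCD and MAP messages have all been observed within a small multiple of their nominal intervals. The class name, the observe() hook, and the deadline multipliers are assumptions chosen for illustration.

    import time

    SYNC_INTERVAL_S = 0.2   # SYNC broadcast roughly every 200 ms
    UCD_INTERVAL_S = 2.0    # UCD sent roughly every 2 seconds
    REQUIRED = ("SYNC", "UCD", "MAP")

    class ChannelScanner:
        def __init__(self):
            self.last_seen = {}  # message type -> time of last receipt

        def observe(self, msg_type):
            """Record receipt of a downstream MAC management message."""
            self.last_seen[msg_type] = time.monotonic()

        def channel_is_valid(self):
            """Treat the channel as valid only if all three message types have
            been seen within an (assumed) multiple of their nominal intervals."""
            deadlines = {"SYNC": 5 * SYNC_INTERVAL_S,
                         "UCD": 2 * UCD_INTERVAL_S,
                         "MAP": 2 * UCD_INTERVAL_S}
            now = time.monotonic()
            return all(msg in self.last_seen and
                       now - self.last_seen[msg] <= deadlines[msg]
                       for msg in REQUIRED)

    # Example: the CM keeps scanning QAM channels until all three messages arrive.
    scanner = ChannelScanner()
    for msg in ("SYNC", "UCD", "MAP"):
        scanner.observe(msg)
    print("lock acquired" if scanner.channel_is_valid() else "keep scanning")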



FIG. 4B illustrates a flow diagram of a typical process for acquiring an IP address as implemented by the CM. In an exemplary embodiment, the CM 404 requests permission to transmit data to the CMTS 402 in order to acquire an IP address for itself (and potentially for other devices within the premises network). The request is carried in a bandwidth request. In response, the CMTS 402 broadcasts a MAP (containing an assigned time slot) for the CM that sent the request (i.e., CM 404). During the time slot, the CM 404 sends a DHCP discover message to find a DHCP server. The DHCP server exchanges an offer of an IP address, along with other network information and parameters, with acknowledgements from the CM 404 to complete the acquisition process. Once the transactions are confirmed, the CM 404 requests the current date and time of day from the time-of-day server, which is useful for accurately timestamping messages and error logs, and for synchronizing all the clocks on the network.


It is appreciated that while the above flows between the CMTS and CM are described in terms of DOCSIS CMs in general, DOCSIS 3.0-enabled CMs (as opposed to purely DOCSIS 2.0-compliant devices) are advantageously capable of accessing a greater range of signals from the CMTS (as well as supporting downstream and upstream channel bonding). Hence, aspects of the present disclosure may be readily adapted to any type of CM (or, for that matter, other modems such as satellite wireless modems, or optical interface/modulator devices for interfacing with an optical fiber bearer network such as FiOS).


Moreover, devices compliant with the incipient DOCSIS 3.1 and CCAP (converged cable access platform) standards, such as e.g., the Cisco cBR-8 Converged Broadband Router and counterpart DOCSIS 3.1-enabled modems, which make use of full RF spectrum, may be adapted for use consistent with the various features described herein.


Access Point Apparatus—



FIG. 5 illustrates a block diagram of typical hardware and processes of a prior art WLAN access point (AP). The AP 500 supports multiple-input, multiple-output (MIMO) and/or omnidirectional communication (i.e., in multiple directions, e.g., 360 degrees for outdoor applications). In the diagram as shown, three 2.4 GHz and three 5 GHz integrated "omni" antennas 501 are each coupled to an antenna module 502. The omnidirectional and/or MIMO antennas support multiple users connecting to the AP simultaneously. A baseband module 504 is supported by a CPU and a memory module (e.g., DRAM), and is configured to manage the AP's radio functions (e.g., using computer software, firmware, operating system and/or other instructions stored thereon and/or at a discrete memory module).


The baseband module 504 is configured to communicate with various components of the front end 506 of the AP (e.g., a radio resource module 508) in order to enable and control the antenna functions. The baseband module 504 is further supported by a radio resource module (RRM) 508, a discrete memory module (e.g., DRAM) 511, and a processing unit 513 (e.g., a dual-core CPU, as shown in FIG. 5). It is appreciated that while the RRM 508 is shown in FIG. 5 as a discrete component, all or part of its functionality may be combined with or integrated into other components, including without limitation the baseband module 504.


The radio resource module 508 manages radio resources (e.g., the antenna module 502, beacon module, dynamic frequency selector module) for efficient utilization thereof. For example, the radio resource module 508 may control radio transmission characteristics such as transmit power, user allocation, data rates, handover criteria, and modulation scheme.


A power module 510 supplies power (and may draw power from an external cord) to the front-end components and CM interface module 512, which share the power supply with baseband module 504, CPU 513, memory 511, and other associated components. The CM interface module 512 is configured to manage communications with the backend side of the network, e.g., the CMTS of the MSO. Thus, the architecture for the AP 500 as shown in FIG. 5 allows input and output of radio frequency signals via both the antenna module 502 and the CM module 512.



FIG. 6 illustrates a block diagram of an exemplary access point architecture useful for operation in accordance with the present disclosure. In one exemplary embodiment, the AP architecture 600 includes an antenna module 602 enabling wireless communication with other devices in the vicinity. As with the device of FIG. 5, the antenna module 602 supports communication via omnidirectional and/or MIMO (2×2, 3×3, 4×4, etc.) antenna configurations. In the diagram as shown, three 2.4 GHz and three 5 GHz integrated "omni" antennas 601 are each coupled to the antenna module 602, although the number of antennas is not limited to six. Such omnidirectional (i.e., 360 degree) communication may have particularly useful application in outdoor settings that have numerous users within range and at different azimuths, e.g., at a sports stadium, mall, airport, and other large public venues.


In the exemplary embodiment of FIG. 6, the baseband module 604 is configured to communicate with various components of the front-end 606 of the AP (e.g., a radio resource module 608) in order to enable and control the antenna functions, including e.g., in one exemplary embodiment, advertising SSID(s) to devices within range. In this embodiment, the baseband module 604 runs a locally self-contained communications stack, and does not rely on an external operating system for real-time processing, although other configurations may be used. In some variants, the baseband module 604 is further supported by a discrete memory module (e.g., DRAM) 611 and processing unit 613 (e.g., a dual-core CPU, as shown in FIG. 6). It is also appreciated that while the RRM 608 is shown in FIG. 6 as a discrete component, all or part of its functionality may be combined with or integrated into other components, including without limitation the baseband module 604.


One radio control function of particular utility is the ability of the baseband module 604 of the device 600 of FIG. 6 to stop transmitting the SSID of the AP when the CM is no longer accessible via the AP (e.g., at the baseband module 604). In the exemplary embodiment, the radio resource module 608 is configured to monitor signals from other network devices (e.g., a known CM), as well as to send signals to other network entities. The online status of the known CM is monitored by sending a signal (e.g., a "heartbeat" signal) to the CM that is intended to cause a return signal to be issued from the CM. Heartbeat or other signals to a controller (see e.g., the controller 210 of FIG. 2) and/or other upstream network entities, whether at or near the edge of the MSO infrastructure or further toward the core, may be transmitted and monitored as well by the AP 600. When an expected heartbeat is not recognized and/or received by the radio resource module 608 (e.g., the AP logic is configured to measure the time between received heartbeat responses, and when such time exceeds a prescribed threshold (and/or other acceptance criteria fail), an error condition or flag is set), this is an indication that an upstream connection has gone offline or is otherwise unavailable because of one or more disabled network entities. The baseband module 604 then ceases to transmit the SSID, making the AP unavailable for connection to end users, as well as saving power.
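

The following is a minimal sketch of the SSID-suspension logic just described (not the actual AP firmware); the radio object with enable_ssid()/disable_ssid() methods, and the 30-second timeout, are hypothetical placeholders.

    import time

    HEARTBEAT_TIMEOUT_S = 30.0   # assumed threshold between acceptable responses

    class SsidGate:
        def __init__(self, radio):
            self.radio = radio
            self.last_response = time.monotonic()
            self.ssid_enabled = True

        def on_heartbeat_response(self):
            """Called whenever the monitored CM answers a heartbeat."""
            self.last_response = time.monotonic()
            if not self.ssid_enabled:
                self.radio.enable_ssid()     # upstream path restored; advertise again
                self.ssid_enabled = True

        def poll(self):
            """Called periodically by the AP's supervision loop."""
            silent_for = time.monotonic() - self.last_response
            if silent_for > HEARTBEAT_TIMEOUT_S and self.ssid_enabled:
                self.radio.disable_ssid()    # stop advertising an unusable AP
                self.ssid_enabled = False

    class _StubRadio:
        def enable_ssid(self): print("SSID broadcast resumed")
        def disable_ssid(self): print("SSID broadcast suspended")

    gate = SsidGate(_StubRadio())
    gate.poll()   # within the timeout: SSID stays advertised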


Based on the capabilities enabled by the AP architecture 600 as described supra, rather than pinging each individual network entity from the backend (i.e., network operator side) and then remotely rebooting any downed devices via transmission of a reset or similar command as in the prior art, one or more backend (network) entities may instead (or in addition) directly monitor client APs as a way to infer that one or more devices have gone offline within the network. Specifically, in one embodiment, a given AP, using the architecture 600 of FIG. 6, monitors and logs transmitted heartbeats and/or (lack of) responses from local devices such as the CM, and those within the local infrastructure (e.g., regional data center 304). An AP controller 210, network monitoring center, or other entity/process within the MSO network accesses the aforementioned logs from the client APs to determine that a connection upstream thereto has been “severed” or is otherwise not fully functional (note that various levels of degradation may be logged/detected in various implementations of the architecture 600 of FIG. 6). An appropriate response may then be given by the accessing AP controller or other network entity (or a proxy thereof, such as local controller) such as, e.g., rebooting the problematic device.


In another embodiment, AP capabilities are sufficiently robust to offload monitoring partly or even entirely from the backend, such as by sending heartbeats to, and detecting responses from, devices further upstream within the network (e.g., the CMTS, AP controller, etc.). In one implementation, the AP architecture 600 is configured to determine the offline device and/or connection, and take appropriate remedial action. In some variants, such AP is configured to detect a precise location of the inoperative connection and/or device, such as by sending an identifiable signal to each monitored device (e.g., by including a unique identifier within the transmitted signal, and detecting failure to receive a return signal associated with the unique identifier within the expected time period).


As such, the AP logic (or that of an analytical or supervisory process accessing the logged data of the AP) can at least determine which portion or "link" in the network is potentially problematic. For example, the AP failing to receive a response to a heartbeat signal addressed specifically to a network AP controller associated with the AP would indicate that at least the portion of the network between the controller and the AP (inclusive) is non-functional, while a similar response successfully received from the CM connected to the AP would indicate that the pathway is good at least to the CM, thereby indicating that the problem lies somewhere between the upstream (backend) side of the CM and the AP controller. Note also that the AP can provide such data either to a servicing technician (via direct physical access such as a connector or local wireless interface), or to one or more upstream entities via an alternate (unaffected) communication channel, such as, e.g., out-of-band (OOB) signaling, or even cellular or "copper" service in the premises (e.g., a landline) that is unaffected by the network deficiency. The AP may also be configured to prompt the user to call for assistance, and to provide error codes or information which the user can give to technical support personnel and which will indicate the origin of the problem(s).


Moreover, the CM associated with the AP may be configured to perform functions comparable to those described above with respect to the AP. Notably, in many premises network configurations, the AP is downstream of the CM. Hence, a CM equipped to log relevant data, ping the AP (and/or detect heartbeats), etc., can provide useful data to upstream (network) processes, such as the AP controller. Hence, in one variant, both the AP and CM include within their software stacks logical processes configured to probe, monitor, and log data relating to themselves and other connected devices, so as to further enhance problem detection and identification. For instance, the CM may be configured to "pull" logged data from the AP upon failure of the AP, so as to enable evaluation of which components of the AP are at fault, and/or the type of failure. A simple reboot or power cycle may cure some issues, while others may require a device replacement, and yet others may be rooted within network-side entities (e.g., authentication or RADIUS servers, billing systems, etc.).


It is also appreciated that the foregoing approach may be used at one or more network entities (such as the AP controller), whether alone or in combination with AP-based functionality. For instance, in one such implementation, the AP controller includes a complementary "heartbeat" module functionality, such that it can send (and specifically address) test or other signals to particular devices downstream (or even upstream) of itself so as to elicit a response therefrom and log the results, in similar fashion to the AP architecture 600. Hence, in one such approach, the AP controller and AP can coordinate or even work in tandem to "localize" the deficiency. To the degree that the AP in such case is still accessible to the network (e.g., the network AP controller), the latter can "push" instructions and test regimes to the AP (and other APs serviced by the same MSO network edge device) to attempt to localize the problem from both ends.
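

The link-localization reasoning described above can be summarized in the following minimal sketch; the device names, return strings, and two-device topology are illustrative only, not a description of any particular deployment.

    def localize_fault(cm_ok: bool, controller_ok: bool) -> str:
        """Infer the suspect segment from CM and AP-controller heartbeat results."""
        if controller_ok:
            return "path AP -> CM -> controller appears healthy"
        if cm_ok:
            # CM answered but the controller did not: the problem lies somewhere
            # between the CM's upstream (network) side and the AP controller.
            return "fault suspected between CM upstream interface and AP controller"
        # Neither answered: the problem is at or before the CM (local link or CM).
        return "fault suspected on the AP-to-CM link or within the CM itself"

    print(localize_fault(cm_ok=True, controller_ok=False))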


Call Flow—



FIG. 7 is a diagram illustrating a typical prior art call flow 700 for configuring an Internet protocol (IP) address by a Dynamic Host Configuration Protocol (DHCP) server, such as to allow network access to a client device. At step 702, SYNC, UCD and MAP messages are transmitted (e.g., broadcast) from a CMTS to an external CM and/or other CMs (e.g., within user premises), thereby enabling subsequent registration of the downstream CMs 750 with the CMTS (e.g., as described with respect to FIG. 4A supra).


At step 704, if the CM is a DOCSIS 3.0-enabled CM, the CM receives a MAC Domain Descriptor (MDD) message from the CMTS once a downstream DOCSIS channel is acquired (via, e.g., the registration process noted supra). MDD messages inform the CM with which channels to bond, by relaying the downstream channel ID of the primary downstream channel for the CMTS sending the MDD message. In response, at step 706, the CM sends a B-INIT-RNG-REQ message on the first channel on which it initializes.


At step 708, a DORA (Discover, Offer, Request, Acknowledgement) process is initiated at the CMTS. More specifically, the CM sends a DHCP Discovery request to the CMTS, asking for IP information from any listening DHCP servers.



FIG. 7a illustrates one prior art Discover, Offer, Request, Acknowledgement (DORA) process. As shown, DHCP servers offer configuration information to the CM, including any IP information that would be necessary for the CM to connect to the server (see e.g., DHCP OFF packet). The CM selects the optimal offer, and requests a lease from the corresponding DHCP server (e.g., DHCP server 752) (see e.g., DHCP REQUEST packet). The selected DHCP server 752 acknowledges the CM's request, and leases the IP configuration information (see e.g., DHCP RESPONSE packet), thereby providing the CM an IP address to enable an AP 754 or a client device 756 to access the network per step 710.


Once the AP 754 receives the IP address, the AP begins transmitting its SSID(s). End users may see the SSID(s) on their client device 756, and select an appropriate SSID. In some variants, the end user must submit credentials, e.g., by authenticating with one or more authentication, authorization, and accounting (AAA) servers 758 of the network, before entering a browsing session. This flow process is repeated for each IP address that needs to be assigned to the AP 754 or to the client device 756.



FIG. 8 is a diagram illustrating an exemplary inventive call flow 800; i.e., for configuring an IP address by a Dynamic Host Configuration Protocol (DHCP) server to allow network access to a client device. In an exemplary embodiment, at step 802, SYNC, UCD and MAP messages are transmitted (e.g., broadcast) from a CMTS to an external standalone CM and/or embedded CMs (e.g., within user premises, on an external outdoor pole installation, etc.), thereby enabling registration of one or more CMs 850 (e.g., as described with respect to FIG. 4A supra).


In some variants, a standalone CM is connected to a backhaul (via, e.g., coaxial or Ethernet connection) that is in turn in data communication with the MSO network, as well as external networks such as, e.g., the Internet. The standalone CM is further in data communication with one or more premises APs, such as via CAT-5 or similar cabling.


Alternatively, an embedded CM may be integrated with the AP form factor, and additionally connect to the AP at the baseband module or baseband processor via an Ethernet port, such as shown in the architecture 600 of FIG. 6.


At step 804, an exemplary DOCSIS 3.0-enabled CM receives a MAC Domain Descriptor (MDD) message from the CMTS once a downstream DOCSIS channel is acquired (via, e.g., the registration process noted supra). In response, at step 806, the CM sends a B-INIT-RNG-REQ message on the first channel on which it initializes.


At step 808, CM 850 exchanges and/or synchronizes one or more upstream power levels and one or more downstream power levels with the AP(s) 854 (e.g., indoor AP 854a or outdoor AP 854b). In one embodiment, upstream power levels are measured by the CM 850 to determine, inter alia, maximum and minimum values for data transmission to CMTS. Downstream power levels are measured by the CM 850 to determine, inter alia, maximum and minimum values for data transmission downstream to the CM. In one variant, diagnostic services or software on the CM 850 monitors the upstream and downstream power levels to take action (e.g., modify or reject connection parameters) if a predefined power threshold is not met. In another variant, AP 854 accepts or rejects (i.e., considers unable to connect) a connection based on a value corresponding to the difference between the maximum and minimum values of power levels and/or available bandwidth.
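

A minimal sketch of the acceptance test described in the last variant is shown below; the threshold values and function names are assumptions chosen purely for illustration, not prescribed operating parameters.

    MAX_POWER_SPREAD_DBMV = 6.0     # assumed acceptable max-min power spread
    MIN_BANDWIDTH_MBPS = 10.0       # assumed minimum usable bandwidth

    def connection_acceptable(power_readings_dbmv, available_bw_mbps):
        """Accept only if the power spread and bandwidth meet the assumed thresholds."""
        spread = max(power_readings_dbmv) - min(power_readings_dbmv)
        return (spread <= MAX_POWER_SPREAD_DBMV and
                available_bw_mbps >= MIN_BANDWIDTH_MBPS)

    # Example: upstream power readings reported by the CM across several intervals.
    upstream_readings = [41.0, 42.5, 44.0]   # dBmV
    print(connection_acceptable(upstream_readings, available_bw_mbps=55.0))  # True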


At step 810, the AP 854 receives and tracks upstream frequency and downstream frequency values via the CM 850. In some embodiments, the AP tracks maximum and minimum boundaries (e.g., in Hz) of the frequency values, and can optionally cause the CM to accept or reject a connection based on the boundaries. Signal-to-noise ratio (SNR) and received signal strength indicator (RSSI) may also be measured and/or adjusted. In some embodiments, the SNR must be above a predetermined threshold for the AP to accept or maintain a connection via the CM.


In some embodiments, AP 854 receives information that allows the AP to track whether the CM is locked into the appropriate channels and frequencies. Such information may include downstream channel ID (e.g., a non-parametric identifier associated with a given RF channel), downstream channel frequency, downstream received signal power, upstream channel ID, and upstream channel frequency. For example, the AP, after receiving information from the CM, may have information that reads: downstream channel ID=3; downstream channel frequency=403,000,000 Hz; downstream received signal power=0.0 dBmV; upstream channel ID=2; and upstream channel frequency=35,984,000 Hz. In some variants, the AP monitors all eight downstream channels and four upstream channels (e.g., if the CM is DOCSIS 3.0 enabled) or more (e.g., if the CM is DOCSIS 3.1 enabled), and may further communicate via heartbeats/responses. For example, such monitoring may include the ability of the downstream tuner to allow for reception of channels distributed across the downstream spectrum, either in groups or individually. Similarly, exemplary monitoring for an upstream transmitter configuration may include the transmitter's ability to access channels distributed anywhere in the upstream spectrum; failure of either of these criteria may indicate that the CM is not functioning properly (e.g., cannot tune to or transmit on all prescribed frequencies/bands).
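

Using the example readings above, a minimal sketch of the lock-verification check might look as follows; the acceptable bands are taken from the exemplary CM specifications recited earlier, and the field names are illustrative only.

    DOWNSTREAM_BAND_HZ = (88_000_000, 1_002_000_000)   # per the exemplary CM specs above
    UPSTREAM_BAND_HZ = (5_000_000, 42_000_000)

    cm_status = {
        "downstream_channel_id": 3,
        "downstream_frequency_hz": 403_000_000,
        "downstream_power_dbmv": 0.0,
        "upstream_channel_id": 2,
        "upstream_frequency_hz": 35_984_000,
    }

    def cm_locked_properly(status):
        """True if both tuners report frequencies inside their allowed bands."""
        ds_lo, ds_hi = DOWNSTREAM_BAND_HZ
        us_lo, us_hi = UPSTREAM_BAND_HZ
        return (ds_lo <= status["downstream_frequency_hz"] <= ds_hi and
                us_lo <= status["upstream_frequency_hz"] <= us_hi)

    print("CM locked" if cm_locked_properly(cm_status) else "CM not functioning properly")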


At step 812, a discovery/handshake protocol (e.g., DORA or similar process) is initiated at the CMTS, as described with respect to FIG. 7 supra.


Once the DHCP provides the CM an IP address to enable the AP to access the network (see discussion supra), the CM's management interface must continue to function when pinged by an Internet Control Message Protocol (ICMP) Echo Request packet (e.g., by returning an ICMP Echo Reply). In one embodiment, the AP baseband module 604 sends an ICMP Echo packet, such as with a packet size greater than a prescribed value (typically e.g., 1,500 octets). As a brief aside, the packet size is administered and handled by the wireless controller and AP. Larger payloads may be handled with fragmentation and aggregation; for example, if a payload is 1,800 octets, the first frame may be truncated down to 1,500 octets with the remaining 300 octets appended to the next frame. Frames are application dependent (e.g., video clips, photos, etc.) and may span packet payload limitations.
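

The fragmentation behavior described in the aside can be sketched as follows; this is a simplified illustration of splitting an oversized payload, not the actual controller/AP frame-handling implementation.

    MAX_FRAME_OCTETS = 1500

    def fragment(payload: bytes):
        """Split a payload into frames of at most MAX_FRAME_OCTETS octets."""
        return [payload[i:i + MAX_FRAME_OCTETS]
                for i in range(0, len(payload), MAX_FRAME_OCTETS)]

    # Example from the text: an 1,800-octet payload becomes 1,500 + 300 octets.
    frames = fragment(bytes(1800))
    print([len(f) for f in frames])   # [1500, 300]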


The CM is further configured, in the case where e.g., the bridge between the Ethernet port of the CM and another port (e.g., USB) is lost, to inform the AP 854, such as with "SNMP traps," at step 814. SNMP traps are alerts that enable significant events and issues to be reported to a managing entity, e.g., the AP. In some embodiments, the SNMP traps are alternatively or concurrently transmitted to the CMTS.


Likewise, in embedded applications, the (embedded) CM may be configured to notify the AP when the status of the Ethernet link between the embedded CM and the AP baseband device is lost.


It can be appreciated that the CM 850/AP may be "disconnected" or unable to communicate with the CMTS or other entities of the MSO network (or distant entities such as web servers) for any number of reasons, including a denial-of-service (DoS) or similar attack which occurs at a layer above the PHY of the CM/CMTS. In some such attacks, the CM and CMTS interoperate; however, the user's client cannot successfully negotiate and connect to e.g., a web server at the transport or other layers. Accordingly, in such cases, the exemplary embodiments of the CM herein are configured to continue operation to the extent possible, including response to SNMP commands from the AP baseband module, forwarding of traffic from the CM to the AP (and hence the client), etc.


Moreover, in one embodiment, the AP baseband module is configured to send a reset command (e.g., a command similar to the docDevResetNow counterpart used by the control center) within an SNMP message configured to remotely reset the CM (versus a reset from the network side) via the AP/CM data layer.


At step 816, the AP 854 measures one or more forward error correction (FEC) parameters to control errors in data transmission (e.g., bit error rate (BER), packet error rate (PER), cyclic redundancy check (CRC) failures, number of bits lost, number of bits sent, etc.). As used herein, the term "bit error" means a received bit at the CM or AP (or their respective controller interfaces) that has been altered due to noise, interference, distortion, bit synchronization errors, etc. As used herein, the term "bit error rate" means the number of bit errors per unit time. As used herein, the term "bit error ratio" (also abbreviated BER) means the number of bit errors divided by the total number of transferred bits during a measured time interval. Both bit error rate and bit error ratio may be improved by one or more of: using a stronger signal strength, choosing a slower and more robust modulation scheme or line coding scheme, and/or applying channel coding schemes such as redundant forward error correction (FEC) codes.


As a brief aside, the bit error ratio (BER) is calculated by comparing the transmitted sequence of bits to the received sequence of bits and counting the number of errors. The ratio of bits received in error to the total number of bits received is the BER. Similarly, the packet error ratio (also abbreviated PER) is the number of incorrectly received data packets divided by the total number of received packets. A packet is declared incorrect if at least one bit is erroneous.


In one variant, the AP informs the CM of measured errors according to a prescribed abstract scale; e.g., ranging from 0 to 10. This information may be included for example in a message from the CM to the CMTS, e.g., to notify the CMTS of the performance of the link, that the FEC parameters require adjustment, etc.
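

A minimal sketch of the BER/PER bookkeeping and the abstract 0-10 reporting scale is shown below; the linear mapping and the worst-case BER used to anchor the scale are assumptions for illustration only.

    def bit_error_ratio(errored_bits: int, total_bits: int) -> float:
        """BER = errored bits / total bits received in the measurement interval."""
        return errored_bits / total_bits if total_bits else 0.0

    def packet_error_ratio(errored_packets: int, total_packets: int) -> float:
        """PER = packets with at least one bad bit / total packets received."""
        return errored_packets / total_packets if total_packets else 0.0

    def error_scale_0_to_10(ber: float, worst_case_ber: float = 1e-3) -> int:
        """Map a BER onto the prescribed abstract 0-10 scale (assumed linear map,
        clipped at an assumed worst-case BER of 1e-3)."""
        return min(10, round(10 * ber / worst_case_ber))

    ber = bit_error_ratio(errored_bits=42, total_bits=1_000_000)
    per = packet_error_ratio(errored_packets=3, total_packets=10_000)
    print(ber, per, error_scale_0_to_10(ber))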


At step 818, once the AP 854 receives the IP address via the CM and begins transmitting SSID(s) via its (Wi-Fi) air interface, end users see the SSID(s) on their client device 856, select the appropriate SSID, and enter a browsing session or other type of operation as desired. In some embodiments, the end user must submit credentials, e.g., by authenticating with one or more authentication, authorization, and accounting (AAA) servers 858 of the network, before entering a browsing session. In one variant, the AAA server 858 is configured to provide services for, e.g., authorization and/or control of network subscribers for controlling access to computer resources or entitlements to access/receive protected content, enforcing policies, auditing usage, and providing the information necessary to bill for services.


In another variant, the Internet and/or other network services are only accessible by way of MSO-authorized client devices, or client devices running a downloadable application or “app” (comprising, e.g., an application programming interface (API) available from the service provider operating the AP).


The foregoing flow process may be repeated for each IP address that needs to be assigned to the AP 854 or to the client device 856. The AP continuously monitors network devices thereafter, e.g., by sending heartbeat signals to the CM and expecting response heartbeat signals. In various embodiments, the continuous monitoring process includes transmitting ICMP pings, looking for any SNMP traps, and/or measuring FEC, as described supra.


In various embodiments, when the AP loses connectivity to the CM for any reason, the AP stops transmitting its SSID(s), so as to remove the AP from advertisement to prospective users. This suspension of SSID advertisement is conducted along with one or more of the self-diagnosis and/or self-healing functions described elsewhere herein (dependent on particular client premises configuration). Contrast the foregoing with the implementation corresponding to FIG. 7, where end users still see an SSID even when the AP has lost connection with the CM and the rest of the network. In the latter case, a user may select the SSID hoping to access the Internet but cannot since the AP is disconnected from the backhaul. The end result is a frustrating experience for the end user (as well as any technicians) who sees the SSID yet cannot identify and rectify the downed device or connection.


Wireless Network Fault Scenarios and Solutions—



FIG. 9 illustrates an exemplary configuration of a wireless network 900 deployed indoors. The illustrated configuration represents an end-to-end view of the wireless network accessible by a client device 902 via, e.g., Wi-Fi, when all network devices are in proper working condition. In the exemplary configuration, end users access the wireless network by connecting with an access point 904 via their client devices 902 (laptop, smartphone, tablet, etc.). One or more APs 904 advertise their SSIDs, which an end user may select to connect to the AP(s) on one or more of the client devices 902.


In an exemplary variant, the AP is in data communication with a cable modem 906 within the indoor premises. The CM, in turn, connects to an external source of data via coaxial cable, Ethernet and/or other wired means of accessing the cable network to which the end users are subscribed. In other variants, the AP may be connected to another AP (e.g., a range extender) before the other AP is connected to the CM, or the CM may be connected to multiple APs within the premises without the APs being directly connected to each other. Moreover, a router (not shown) may be present to manage and connect multiple APs to the same CM, or multiple CMs to the same data source (e.g., via one coaxial port).


In another implementation, the CM may communicate (via wireline or wirelessly) with multiple APs, and/or the CM may act as a router. As can be appreciated, numerous configurations exist to connect the end user to the network, each of which can benefit from one form or another of the functionality described herein, the configuration of FIG. 9 being merely illustrative.


The traffic exchanged within the configuration of FIG. 9 is supported by the backhaul or backbone portion 908 of the service provider network, which operates to connect local network devices (e.g., the AP 904 and CM 906) with the edge and ultimately the core of cable network as described elsewhere herein. In various embodiments, the MSO network includes various backend apparatus and services, including but not limited to one or more AP controllers 912, CMTS 914, Layer 3 switch 916, and network monitoring center 918. These apparatus and services operate to enable data transmission to and from the end user as described elsewhere herein, e.g., with respect to FIG. 3.



FIG. 9A illustrates a first fault scenario occurring within the network 900 of FIG. 9. As shown in this scenario, the AP 904 becomes disabled for any of various reasons (e.g., "locking up" of the software running on the CPU, component failure, overheating, an ongoing firmware update, maintenance, physical disconnection from the network, etc.), or reboots itself continuously.


During the AP reboot process, the AP does not broadcast its SSID(s); thus, the end user cannot access the wireless network. In this scenario, the end user does not receive much information about where the fault or disconnection lies and what caused the fault. However, once the AP is rebooted, the end user may reconnect to the AP and resume browsing activity if the reboot is successful (i.e., addresses the root issue of the failure). In a continuous reboot situation, no connectivity between the STA (client) and AP will be established, and the user will be provided with e.g., a “connection failed” message by their client indigenous wireless management software/process.



FIG. 9B illustrates a second fault scenario in the context of the exemplary wireless network of 900 of FIG. 9. In this scenario, the CM 906 becomes disabled (similarly due to e.g., software lock, component failure, overheating, firmware update, maintenance, physical disconnection from the network, etc.) or unable to provide data connectivity for the AP 904 due to continuous rebooting. There is no fault in the AP, but rather the AP's “pipe” to the MSO network has failed; however, to users, there is a loss of service similar to that experienced in the scenario of FIG. 9A (with the exception that the AP continues to broadcast its SSID in the scenario of FIG. 9B).


In the scenario of FIG. 9C, the CM 906 has lost connectivity to the MSO or service provider backbone network 908, such as due to cable disconnection, cable fault or severance, failure of the user's premises interface to the local distribution node (e.g., a bad switch), etc. The AP and CM operate normally, with the exception that the user has no connectivity to the MSO network, since the CM cannot acquire the proper RF channel(s) for transmission/reception.


In the scenario of FIG. 9D, the AP 904 operates and broadcasts its SSID as normal; yet, the connection between the CM 906 and the AP has been lost (e.g., unplugged, severed, etc.). For instance, one common issue is failure of the relatively fragile retaining mechanism on Ethernet/RJ45 connectors, which may be used to interface the CM to the AP in some installations. Both the CM and the AP operate properly and have no faults, other than the AP providing no connectivity for the end user to the MSO network.



FIG. 10 illustrates an exemplary configuration of a wireless network 1000 deployed outdoors. The illustrated configuration represents an end-to-end view of the local wireless network accessible by a client device 902 via, e.g., Wi-Fi, and upstream components when all network devices are in proper working condition. In this configuration, end users access the wireless network by connecting with an embedded access point 1002 via their client devices 902 (laptop, smartphone, tablet, etc.). Moreover, the traffic exchanged within the network is supported by the backhaul or backbone portion 908 of the MSO/service provider network, as is described with respect to FIG. 9, supra.


In one configuration of the embedded AP 1002, the AP is integrated with a cable modem (CM). Although a device may have both an AP and a CM within its chassis, the AP and CM may be separate logical entities. Multiple APs may also be integrated with the CM. Each embedded AP may include omnidirectional antennae as illustrated in FIG. 6, such as for providing wide azimuthal coverage, and to support a large number of client devices which may be within range. One or more embedded APs 1002 advertise their SSIDs, which an end user may select to connect to the AP(s) via one or more client devices 902.


In another configuration of the wireless network 1000, multiple embedded APs 1002 are deployed at the same "tower" and configured to broadcast their services to client devices 902. Data connectivity may be aggregated at the tower (e.g., coaxial cables running along a bundle connected to the tower), while the embedded APs are placed at relatively separate locations, thereby enabling more widespread network coverage (depending on the venue). For example, a baseball stadium may require multiple embedded APs or even multiple consolidated towers, each stationed at intervals around the stadium.


Referring now to FIG. 10A, an exemplary fault scenario is shown occurring within the wireless network 1000 illustrated in FIG. 10. In this scenario, the backhaul link between the backbone 908 and the embedded AP 1002 (particularly its CM) is disconnected; i.e., the end users of the AP detect that they cannot connect to the backbone 908 of the cable network via the CM, but otherwise have no visibility into the fault. The SSID continues to be broadcast from the AP(s), since the AP is operating normally (other than having no connectivity).


In the scenario of FIG. 10B, end user connectivity via the AP is lost due to one or both of the embedded APs failing, and/or continuously rebooting. During reboot, service is interrupted and the connection between the user device (e.g., Wi-Fi “STA” or station) and the AP is lost. The user is informed that the connection to the AP has failed, but no other information is available.


Once the embedded AP 1002 has rebooted and recovered, the CM and the AP may remain disconnected from each other, such as due to loss of the assigned network address (e.g., IP address) by the AP during reboot. In such cases, the AP may not transmit its SSID to the local wireless clients. More directly, the SSID is not transmitted unless there is end-to-end connectivity with the broader network; this reduces user confusion (i.e., prevents the user from connecting with an SSID that "goes nowhere").


Alternatively, the SSID may be transmitted by the AP, and the clients will recognize it; however, no connection to the MSO network is enabled since no IP address has been assigned, and the client cannot negotiate with any distant entities (via the AP/CM) without such address.


Other potential failure scenarios include, without limitation where (i) the AP continues to send repetitive heartbeat alarms to the AP controller of the MSO network for an extended period of time when an issue is encountered (indicating a persistent issue upstream e.g., an unavailable controller, a loss of connectivity to the backhaul, etc.); (ii) logical communication (e.g., communications session) between the AP and the AP controller is lost; (iii) the AP is operational and connected to the CM and MSO network, yet individual users are disconnected or cannot connect initially; (iv) the AP controller indicates authorization failure for multiple clients (i.e., they cannot log in); and (v) multiple APs fail to establish sessions with a common AP or respective APs in logical arrangement with one another (e.g., via controller access concentrator).


Methods—


Various methods of addressing the foregoing faults or failures within the network according to the present disclosure are now described with respect to FIGS. 11-12.



FIG. 11 illustrates an exemplary embodiment of a method 1100 implemented by an access point to monitor a wireless premises network, and restore network access to client devices. The wireless network useful with method 1100 is not limited to those embodied in FIGS. 2 and 3 herein, and may be used with any wireless-enabled client device and any architecture utilizing data communication among nodes.


At step 1102 of the method, the AP transmits one or more heartbeat signals. Specifically, heartbeats are sent to one or more upstream network entities in data communication therewith, e.g., cable modem, any backhaul entities (e.g., data centers), AP controller, CMTS, etc. In one variant, the heartbeats comprise preformatted messages addressed to the target devices/entities that are configured to elicit a reply or “ack” from the entity after receipt. In the exemplary embodiment, the heartbeats are transmitted according to a periodic or aperiodic temporal schedule; i.e., a heartbeat is sent at every predetermined interval (which may or may not be equal, and/or predicated on the occurrence of an event). In one variant, where multiple devices are targeted, the periodic heartbeats are staggered so as to be delivered to each targeted upstream device in a prescribed order. For example, four heartbeats may be sent at every given interval: one to the CM, one to a data center, one to the CMTS, and one to the AP controller, in that order.
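

A minimal sketch of such a staggered schedule is shown below, assuming a hypothetical 10-second interval; the target list mirrors the example order given above (CM, data center, CMTS, AP controller), and the schedule() function is illustrative only.

    TARGETS = ("cable_modem", "data_center", "cmts", "ap_controller")
    INTERVAL_S = 10.0                      # assumed heartbeat interval
    STAGGER_S = INTERVAL_S / len(TARGETS)  # spread the four heartbeats over the interval

    def schedule(rounds=2):
        """Return (send_time_s, target) pairs for the prescribed transmit order."""
        plan = []
        for seq in range(rounds):
            for slot, target in enumerate(TARGETS):   # prescribed order per round
                plan.append((seq * INTERVAL_S + slot * STAGGER_S, target))
        return plan

    for when, target in schedule():
        print(f"t={when:5.1f}s  heartbeat -> {target}")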


Alternatively, in a different variant, the AP may only send heartbeats to the nearest upstream device, such as the CM 906 in FIG. 9, so that any troubleshooting process is limited to the premises.


In another variant, the pulse intervals are spaced such that any expected response signals are received before the next round of heartbeat signals (i.e., within a receive “window” of time). On the other hand, the AP may send heartbeats independently of receiving response signals, whether periodically or otherwise; i.e., in an asynchronous fashion. The AP may adjust accordingly depending on the number of devices and any significant latencies on the network.


In another variant, the periodic signals are sent at a predetermined interval that may be modified by the AP. In a different variant, the signals are sent at intervals depending on network conditions, e.g., traffic load, number of expected pings, expected network conditions (e.g., known offline connections in the network), size of network, time of day (e.g., peak hours). For instance, pings are sent at relatively longer intervals during peak times to keep traffic from being congested.


In another variant, the received signals include at least one unique identifier. The unique ID may be a value (formatted in alphanumeric, hex, binary, etc.) that identifies the originating AP as well as the destination or target. In one implementation, the unique ID values are associated with or derived from known values, such as MAC address or IP address assigned to the AP and/or the controller. In another implementation, the MAC address or IP address itself is the identifier. The transmitted signals (and return signals) may also include time stamps, such as those associated with and assigned by the underlying transmission protocol, indicating e.g., (system) time of transmission, time of receipt, etc. Such timestamps can be useful in determining propagation delays, including whether a responding entity in fact responded within the prescribed window. For example, a responding entity may transmit a heartbeat response within the prescribed time window, yet the response may not actually be delivered to the issuing entity (e.g., AP) within the window due to packet queuing, buffering, and/or propagation delays within the network infrastructure.


In another variant, rather than continuously monitoring the network, the AP transmits heartbeat signals upstream only when the AP determines that some anomaly (e.g., timeout, retransmission request, etc.) has been received or occurred.


At step 1104, the AP waits for a response from the upstream device(s) to which the heartbeats were transmitted. A response signal indicates to the AP that the initial heartbeat was acknowledged by a target device, and presumably that the connection between the recipient device and the AP is in working condition. A response signal that is absent when expected within a period of time may indicate a possible issue with the device to which the heartbeat was sent (or such problem may reside within the AP or the client device, as described below).


In another embodiment, a lack of response within an expected time does not necessarily indicate that the network has lost connectivity, but rather may have experienced some level of performance degradation (e.g., bottlenecking in one process or another) or other situation wherein normal operation is not achieved. As discussed further below, there may be several levels of expected thresholds or ranges of time; such different thresholds or criteria may also be correlated with different types of component problems or failures, so as to aid in identification of the root cause.


Moreover, an upstream device may send a preemptive notice that it is or will be entering a temporary downtime.


At step 1106, the AP determines whether an expected response signal was acceptable (e.g., received within the expected time, carries an acceptable timestamp, comprises an "ack" issued by a proper entity, or is a notice or alert that tells the AP that the device has gone offline (e.g., lost connection to devices upstream of the responding device) or will go offline for maintenance, etc.). If a response was received and is acceptable, the AP returns to step 1102. If a response was not received, the AP stops transmitting its SSID(s) (so as to avoid advertising an inoperable service), and proceeds to step 1108. In another embodiment, the AP continues to advertise its SSID(s) while attempting to diagnose or rectify the issue.


At step 1108, the AP identifies any upstream device(s) at issue. The target device may be offline, malfunctioning, going through a maintenance or reboot process, undergoing firmware update, or the connection between the AP and the upstream device may have been throttled or severed (thus, the heartbeat itself was not likely delivered to the CM). In one embodiment, the identification of the upstream device(s) is based on a unique identifier included with the original heartbeat signal, as described supra with respect to step 1102. For example, if an AP sends four heartbeat signals having ID=01, 02, 03, 04, and response signals corresponding to only ID=01, 02, 04 are received, it may be deduced that the upstream device that received (or should have received) the heartbeat signal containing ID=03 is at issue.
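

The identifier-based deduction in the example above can be expressed as the following minimal sketch; the ID-to-device mapping is illustrative only.

    sent = {"01": "cable_modem", "02": "regional_data_center",
            "03": "cmts", "04": "ap_controller"}          # heartbeat ID -> target
    responses_received = {"01", "02", "04"}               # IDs echoed back in responses

    suspects = [device for hb_id, device in sent.items()
                if hb_id not in responses_received]
    print(suspects)   # ['cmts'] -- the device that never answered heartbeat ID=03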


In another embodiment, the AP may make a time-based determination as to which device may be down. For example, the AP may expect a return signal from the CM to take 75±50 milliseconds, a reasonable ping latency that accounts for e.g., occasional traffic spikes. In some variants, the transmitted signal from the AP contains a descriptor that specifies the expected response time, which may be significantly longer than a reasonable ping latency, so as to e.g., let the responding entity (here, the CM) schedule the reply in with other traffic, such as where the CM logs the time of receipt of the initial message from the AP, and issues a reply carrying a timestamp within the expected window, yet does not transmit the reply until later according to its scheduling. The AP, upon receiving the timestamped reply, evaluates the timestamp and notes that the timestamp falls within the expected window (even though the reply was not physically transmitted until a later time).


In one embodiment of the method 1100, if the AP does not receive an acceptable response, the AP determines that the target device (e.g., CM) is at issue and initiates a reboot, diagnosis, etc. of the CM, as discussed elsewhere herein. It will be appreciated, however, that the AP (or its proxy) may invoke a "tiered" response, depending on a scoring or other evaluation of performance. As a simple example, assume that the AP expects a return ping to take 75±50 ms. If the return signal is received within 75 ms, the upstream status is normal. If the return signal is received within 125 ms (75+50), the upstream device (e.g., CM) is flagged as a potential device at issue. If the return signal is received but only within an unreasonable time, such as 1000 ms, the AP only then actively investigates the upstream device by, e.g., performing diagnosis, requesting or determining power level readings, adjusting a frequency, or determining a signal-to-noise ratio.
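

A minimal sketch of this tiered evaluation is shown below, using the example thresholds from the text; the tier names, and the handling of latencies between the flagged window and the "unreasonable" limit, are assumptions for illustration.

    def classify_response(latency_ms, expected_ms=75.0, tolerance_ms=50.0,
                          unreasonable_ms=1000.0):
        """Map a measured heartbeat round-trip latency (ms) to a tiered response."""
        if latency_ms is None or latency_ms >= unreasonable_ms:
            return "investigate"           # missing or unreasonably late reply
        if latency_ms <= expected_ms:
            return "normal"
        return "flag_potential_issue"      # late, but not yet unreasonable

    for sample_ms in (60, 110, 1200, None):
        print(sample_ms, "->", classify_response(sample_ms))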


Alternatively, in one implementation, the AP may initiate a reboot of the CM based on a lack of/unacceptable response, even though the CM may be online and exchanging traffic (i.e., in the absence of any other indicia of problems).


In another variant, the AP requires multiple repeated violations of the expected threshold ping timing (or other acceptance criteria) to determine the appropriate response as described above.


At step 1112, if the AP determines that an upstream device is offline, it transmits a reboot signal to the identified device to attempt to restore the network connection to the client devices.


If the AP determines that the upstream premises devices (e.g., the CM in the example configuration of FIG. 9) are in proper working condition, or if the AP cannot determine any upstream devices at issue, the AP proceeds to step 1110. At step 1110, the AP checks its internal status (e.g., based on Example Operation 2, infra) to determine whether the AP itself is a source of the problem(s), and/or whether a client device is lacking authorization.


At step 1112, if the AP determines that it is responsible, it begins a self-reboot process to attempt to restore the network connection to the client devices.


At step 1114, if the AP determines that none of the upstream devices nor the AP itself are problematic, yet the client device is unable to access the network, the AP attempts to alert the end user by sending a message to any known client devices (e.g., currently or previously connected with the AP, or currently in range of the AP). In one embodiment, the message simply informs the end user that there may be a malfunction with the client device. The message may suggest user-friendly solutions, such as recommending a restart of the device or trying to connect with another available device. In another embodiment, if the issue is determined to be a lack of authorization, the message may suggest providing proper user credentials (e.g., attempting to log in again) to see if the connection is restored.



FIG. 12 illustrates an exemplary embodiment of a method 1200 for an upstream or backend network entity to monitor a wireless network (and attempt to restore network access to client devices). The backend network entity includes devices that are outside of the user's premises; i.e., between the cable modem and the communication destination (e.g., Internet access or backhaul within the MSO network). In one exemplary embodiment, the upstream network entity is a controller (e.g., AP controller 912). In other embodiments, the upstream entity may be, e.g., the CMTS 199. As with the method 1100 above, the wireless network useful with method 1200 is not limited to those embodied in FIGS. 2 and 3, and may be used with any wireless-enabled client device and any architecture utilizing data communication among nodes.


At step 1202 of the method, the controller receives one or more periodic signals (e.g., pings, heartbeat signals, probing signals) from one or more downstream network entities (e.g., an AP). The controller is configured to expect the periodic signals and, in response, return one or more corresponding signals (responses). In one embodiment, in order to assist in troubleshooting (e.g., step 1208), a log is kept to record all instances of receipt and acknowledgement of the received signals, along with timestamps.


In one variant, the periodic signals are received at a predetermined interval that may be modified by the AP and/or the controller. For example, the AP controller may instruct corresponding APs which it is monitoring to transmit heartbeats at different staggered times or periodicities, e.g., in a round-robin or other fashion so as to mitigate arrival at the same time (e.g., the AP controller being flooded by numerous signals simultaneously).


Various schemes for timestamping and device identification described above with respect to FIG. 11 may also be used consistent with the method of FIG. 12.


At step 1204, the controller responds to the heartbeat by transmitting a signal to the originating downstream device (e.g., the AP). The response signal may be sent back to the originating device immediately or according to an indicated response time (e.g., as described above with respect to FIG. 11). For example, the AP may send a heartbeat signal that includes a time limit (e.g., 3 seconds), and the responding AP controller may “de-prioritize” such response in favor of another that allows less time.


In another embodiment, the controller begins at step 1204 rather than step 1202. That is, the controller may act similarly to the AP as discussed with respect to FIG. 11; i.e., it may function as an originating device from which heartbeat signals and reboot signals originate. In one such embodiment, the controller may reset upstream or downstream devices, as well as itself, using similar methodologies as those used by the AP, e.g., transmit heartbeats, wait for a response, determine acceptability of the response, identify devices at issue, and cause remote restart of identified devices (or other diagnostic or remedial action). In some variants, the initial signal (or the response signal in the prior scenario) may comprise a notice or alert that tells the AP that the controller has gone offline (e.g., lost connection to other backend devices) or that it will go offline for maintenance, etc. Such notification is a preemptive measure that indicates to the AP that any ensuing downtime is not to be cause for invoking the self-healing processes described herein.


In another embodiment of the method 1200, the transmitted heartbeat includes instructions regarding addressing of the response signal. The instructions may contain identification information, such as an IP address, MAC address, relative address, a recognized unique ID, URL, etc., and may not correspond to the (heartbeat issuing) device. An AP controller (or AP) would typically send and acknowledge a response signal at the same network location or address. However, the AP may prefer to collect response signals at a proxy entity (e.g., processing-heavy entity, such as a root AP that manages several mesh APs (see FIG. 2)). In such a case, the responding device (e.g., AP or controller) transmits the response signal according to the instructions.


At step 1206, if the controller expects another heartbeat (e.g., controller and AP are configured to exchange signals regularly, initiated by the AP), the controller determines whether the heartbeat was received. If it was received, the controller returns to step 1204 to acknowledge the received signal and send a response signal, as appropriate. If not received, the controller proceeds to step 1208.


At step 1208, the controller seeks to identify which downstream device, if any, is offline. In one embodiment, the controller evaluates the unique identifier associated with the heartbeat signals that stopped arriving at the controller, such as via previously received signals before cessation. The controller may use a log that has collected the instances of receiving the signals to track any abnormalities or discrepancies. For example, an expected heartbeat signal having an ID=05 may no longer be received. This would signal to the controller that there is a potential issue with the downstream device associated with ID=05.
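
A hedged sketch of this log-based determination appears below; the log layout (device ID mapped to last receipt time) and the silence threshold are assumptions for illustration:

```python
import time

def find_silent_devices(log, expected_ids, max_silence_s):
    """Return IDs of devices whose heartbeats have stopped arriving,
    based on the controller's receipt log (device_id -> last receipt timestamp)."""
    now = time.time()
    silent = []
    for device_id in expected_ids:
        last = log.get(device_id)
        if last is None or (now - last) > max_silence_s:
            silent.append(device_id)
    return silent

# Example: ID "05" last reported 10 minutes ago; flag it if silence exceeds 3 minutes.
log = {"04": time.time() - 30, "05": time.time() - 600}
print(find_silent_devices(log, expected_ids=["04", "05"], max_silence_s=180))  # ['05']
```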


In another embodiment, the controller may make a temporal or other determination as to which device in the downstream "chain" of addressable devices may be down. For example, the controller may stagger responses and accordingly expect a signal from a given target device to arrive according to its prescribed schedule (e.g., periodicity). Receipt by the controller of signals associated with the first inline downstream device (e.g., CM), yet no others, may indicate a fault on any portion of the downstream network that is downstream of the CM input (i.e., the CM-to-AP connection may be bad, the AP itself may be bad, etc.). Likewise, the controller can "cascade" signals, such as where communication is established between the last device in the chain (e.g., the AP), and upon failing to receive signals from the AP, the next device in the chain is targeted for a ping/reply test, and so forth. In this way, the controller can work itself back up the chain in an attempt to identify the fault.
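
The cascading ping/reply approach might be sketched as follows; the device names and the `ping` callable are illustrative stand-ins, not an actual controller API:

```python
def localize_fault(chain, ping):
    """Walk the downstream chain from the furthest device (e.g., the AP) back toward
    the controller, pinging each device; the fault is localized to the furthest
    device (or link) that fails to reply.

    `chain` is ordered nearest-to-furthest from the controller, e.g. ["cm", "ap"].
    `ping` is a callable returning True if the device replies."""
    suspect = None
    for device in reversed(chain):   # start with the AP, work back up the chain
        if ping(device):
            return suspect           # first responder found; fault is downstream of it
        suspect = device             # no reply; keep this device as the current suspect
    return suspect                   # nothing responded: fault at or above the first device

# Example: CM replies, AP does not -> fault localized to the AP (or the CM-to-AP link).
responses = {"cm": True, "ap": False}
print(localize_fault(["cm", "ap"], ping=lambda d: responses[d]))  # 'ap'
```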


If the controller determines that a subsequent heartbeat signal is never received (e.g., based on the foregoing methods, such as by waiting for an expected downstream signal beyond a reasonable time or other acceptance criteria), the controller implements further diagnostic action; e.g., requesting or determining power level readings, adjusting a frequency, determining a signal-to-noise ratio, actively pinging the device, etc.


At step 1210, if the controller determines that a downstream device is ostensibly offline, the controller initiates one or more corrective actions, such as a reboot of the downstream device by, e.g., transmitting a reboot signal to attempt to restore the network connection to the client devices. Note that the corrective actions (e.g., reboots) may also be invoked in a cascaded or sequenced fashion; e.g., CM first, then AP, etc. In this fashion, attempts to restore the viability of each link in the chain are progressive, and rebooted (and responding) devices can be eliminated from further consideration.
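
A minimal sketch of such progressive (cascaded) restoration, with the reboot and verification actions stubbed out as callables (all names and stubs are illustrative assumptions):

```python
def cascaded_restore(chain, reboot, verify):
    """Reboot downstream devices one at a time (e.g., CM first, then AP) and
    re-verify after each step, eliminating devices that come back as responsive."""
    for device in chain:             # progressive: nearest device in the chain first
        if verify(device):
            continue                 # already healthy; eliminate from consideration
        reboot(device)
        if verify(device):
            print(f"{device} restored after reboot")
        else:
            print(f"{device} still unresponsive; escalate for further diagnostics")

# Example with stubbed actions (illustrative only; a reboot is assumed to succeed).
state = {"cm": False, "ap": False}
cascaded_restore(
    ["cm", "ap"],
    reboot=lambda d: state.update({d: True}),
    verify=lambda d: state[d],
)
```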


If the controller determines that the downstream devices are in proper working condition, or if the controller cannot identify any downstream device at issue, the controller may run internal diagnostic procedures or communicate with other backend apparatus to determine any fault with the controller itself.


Example Operation 1


Based on the foregoing failure scenarios of FIGS. 9-10B, exemplary processes for self-diagnosis and self-healing of particular issues within a wireless network according to the present disclosure will now be described.


In a first exemplary scenario, an end user is connected to a single access point on the wireless network (e.g., mesh AP 204 or root AP 206 as shown in FIG. 2, or embedded AP as shown in FIGS. 10-10B) via a client device (smartphone, laptop, desktop, tablet, etc.). The AP is configured to consistently transmit heartbeat signals to an AP controller located at an intermediate or regional data center. The AP detects a continuing or chronic series of "loss of heartbeat" events or alarms associated with the AP, ostensibly indicating an issue with the AP or other component. Using the exemplary improvements described herein, the CM 906 and AP baseband module 604 (FIG. 6) may then be used to evaluate one or more factors such as higher-than-normal latency (e.g., higher than a running average) and/or dropped packets; moreover, the RF performance of the CM can be evaluated (e.g., upstream/downstream power, ability to access various RF channels, etc.), such as via access to native DOCSIS tools associated with the CM.
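
For instance, the latency evaluation against a running average could be sketched as follows; the window size and threshold factor are assumed values chosen only for illustration:

```python
def latency_degraded(samples_ms, window=20, factor=1.5):
    """Flag degraded performance when the most recent latency sample exceeds
    the running average of the previous `window` samples by `factor`."""
    if len(samples_ms) < 2:
        return False
    history = samples_ms[-(window + 1):-1]
    running_avg = sum(history) / len(history)
    return samples_ms[-1] > factor * running_avg

# Example: steady ~20 ms latency followed by a 75 ms spike triggers the flag.
samples = [20, 21, 19, 22, 20, 75]
print(latency_degraded(samples))  # True
```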


In one variant, a continuous or intensive ping transmission and reception regime is used in order to conduct the foregoing evaluation(s) in a timely fashion for critical processes, although it will be appreciated that other test protocols may be used. Such implementations may provide robust connectivity between e.g., the controller and the AP, etc.


Example Operation 2


Another exemplary approach for self-diagnosis and self-resolution of issues within a wireless network is now described. In this second exemplary embodiment, an end user is connected to a single access point on the wireless network (e.g., mesh AP 204 or root AP 206 as shown in FIG. 2, or embedded AP as shown in FIGS. 10-10B) via a client device (smartphone, laptop, desktop, tablet, etc.). In addition to being aware of other network entities (e.g., the CM), the AP is aware of its own network performance as well.


As discussed above with respect to FIGS. 9A and 9D, an AP may go offline or experience degraded performance for various reasons (e.g., overheating, firmware update, maintenance, physical disconnection from the network, etc.). In the present embodiment, the AP architecture 600 is configured to give itself a "grace period" (e.g., 30 minutes) upon, e.g., shutting down critical features that prevent other devices (e.g., CM, client devices) from connecting to it, so as to ensure time to complete any incipient upgrade process, maintenance and/or rebooting, etc., as well as to institute a self-healing protocol.


In one embodiment, the self-healing process begins after the grace period has passed. In one variant, the process includes: (i) checking whether the AP is online (i.e., whether the AP has been addressed and responds to network management entities); (ii) checking whether the billing code and IP address(es) are correct for the AP and/or the end user; (iii) resetting the CM (e.g., via native DOCSIS reset functions); (iv) checking the configuration of upstream entities (e.g., AP controller); and (v) performing one or more test pings upstream from the AP to the controller to test the communications path. In one implementation thereof, resumption of heartbeat signal transmission marks that normal AP operation and activity has resumed.
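
A sketch of this five-step sequence, with each check stubbed out as a callable returning a pass/fail result (the step names and stubs are illustrative assumptions, not a normative implementation):

```python
def self_heal(checks):
    """Run the self-healing steps in order after the grace period expires and
    report the first step that fails, if any. `checks` maps a step name to a
    callable returning True on success."""
    for name, check in checks.items():
        if not check():
            return f"self-healing halted at: {name}"
    return "all checks passed; resume heartbeat transmission"

result = self_heal({
    "AP responds to network management entities": lambda: True,
    "billing code and IP address correct":        lambda: True,
    "CM reset completed (native DOCSIS)":         lambda: True,
    "upstream (AP controller) configuration OK":  lambda: True,
    "test ping to controller succeeds":           lambda: True,
})
print(result)
```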


In another variant, when the AP checks the billing code and IP address (i.e., step (ii) above), the AP and/or auxiliary services (e.g., AAA 858) further check whether any end user attempting to connect to the AP is a subscriber of the network provider; i.e., whether the end user and/or the client device has been authorized for billing and assigned the IP address. In one implementation, the client device must have MSO-provided software (e.g., an app) installed for authorization and access to the network.


Example Operation 3


In the scenario where the AP (and CM) are operational, and connections to the CM and the backhaul are functional (as evidenced by, e.g., heartbeats/responses being normally transmitted and received, and CM operational parameters such as upstream/downstream power, RF channel access, etc. indicating normal operation), yet users of the AP are unable to gain (MSO) network access, the AP must look to other causes. For instance, in one embodiment, the AP 600 is configured to implement logic to determine whether the issue is caused by insufficient or improper authorization, a problem with one of the network-side devices (e.g., an AAA or RADIUS server, billing module, etc.), or a network protocol-level problem (e.g., an incorrect or unavailable IP address or other incorrect network information).


As discussed above, the self-healing process may include checking whether the billing codes and IP addresses are verified for the AP and/or the end user; such testing may be useful where the AP appears to be online, but the users remain disconnected because of an incorrect billing code configuration (e.g., business configurations do not support residential use, and vice versa). For example, after an initial self-healing process has completed, the AP can attempt to restore service to end users; if the users cannot join, the AP initiates an internal configuration verification sequence. Specifically, in one implementation, the AP causes a reset of the CM. After the reset sequence, the AP controller configuration (e.g., billing codes, IP addresses, etc.) is verified, and the relevant AP controller is pinged to verify continuity. If the AP is offline but the CM is online, then the CM is reset once more via DOCSIS. In one such variant, the reset sequence is performed by a network operations center (NOC) (or equivalent logical entity) that sends a reset command to the CM, based on information received from the AP.


In the case of an authorization failure for multiple clients associated with a given AP, new users will be unable to access the AP. Hence, in one variant, the AP 600 is configured to check that its authentication protocols are functional and/or are able to communicate with the appropriate network authentication/entitlement entities. For example, the AP checks that AAA 858 and/or its Remote Authentication Dial-In User Service (RADIUS) server is online. The AP may also run subsequent ping or other tests to determine AAA/RADIUS server health and availability.


In another embodiment, the AP first checks whether the self-healing process referenced above has properly brought the device at issue back online. If the AP is determined (whether actively via detection of faulty parameters, configuration, etc., or via process of elimination of other devices/processes) to not be online or functional, the AP reboots itself. In one variant, the AP runs further self-diagnostic assessments, including of its antenna module 602, baseband module 604, radio resource module 608, cable modem interface module 612, etc. If the AP is determined to be online and functional, the AP takes the additional steps described below.


If the connection issue persists, the AP determines the issue is likely with the client device. In one variant, the AP sends an alert to the client device by transmitting beacons that are “bit stuffed” with the alert to directly push the message to the client device (i.e., connection need not be established), as described in co-owned and co-pending U.S. patent application Ser. No. 15/063,314 filed Mar. 7, 2016 and entitled “APPARATUS AND METHODS FOR DYNAMIC OPEN-ACCESS NETWORKS”, incorporated supra. The end user may be alerted to investigate the user's client device, such as by restarting the device, verifying that other devices can connect to the network, etc.


Example Operation 4


In the event that one or more APs become disconnected from their host or associated AP controller in the MSO network, the issue is correlated to the particular upstream network entity, e.g., root AP, CMTS, controller, backbone, or backhaul connections, such as by executing an automatic diagnostic troubleshooting process on the AP controller. In the exemplary embodiment, this troubleshooting process includes: (i) checking the connectivity from the AP to the CM and the backbone (e.g., pinging each upstream device, as described elsewhere herein); (ii) checking the controller to see whether the AP has been configured and registered properly on the controller; (iii) checking whether the IP address or range of IP addresses assigned to the AP is/are properly registered by the controller and/or the CMTS; (iv) checking connections to local devices surrounding the AP for any regional issues, e.g., physical fiber or cable disconnections within the backhaul; and (v) checking the connectivity between the AP and headend apparatus, e.g., CMTS, AP controller, Layer 2 and 3 switches, control center.
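
The following sketch illustrates how such an ordered troubleshooting pass might correlate the first failing check to a suspect portion of the network; the checks are stubbed and all names are assumptions made for illustration:

```python
def run_controller_diagnostics(checks):
    """Execute the troubleshooting checks in order and return the network entity
    or segment implicated by the first failure, or None if all checks pass."""
    for check, suspect in checks:
        if not check():
            return suspect
    return None

suspect = run_controller_diagnostics([
    (lambda: True,  "AP-to-CM / backbone connectivity"),
    (lambda: True,  "AP registration on the controller"),
    (lambda: False, "AP IP address registration (controller/CMTS)"),
    (lambda: True,  "regional backhaul (fiber/cable) connections"),
    (lambda: True,  "AP-to-headend connectivity (CMTS, switches, control center)"),
])
print(f"send reboot/corrective action toward: {suspect}")
```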


Through the above process, the network identifies the problematic entity, whether it be an upstream root AP, a backend apparatus, or one in-between. Once the issue is correlated to one or more particular network entities, the MSO and/or the AP(s) may then send reboot commands to correct the issue, or implement other corrective action.


Controller Apparatus—



FIG. 13 illustrates an exemplary embodiment of a controller apparatus 1300 according to the present disclosure. As shown, the controller includes, inter alia, a processor 1302, a memory module 1304, a peer controller (PC) 1306, a backend (e.g., headend, backhaul) network interface 1308, and a network (e.g., LAN, WLAN) interface 1310. Although the exemplary controller 1300 may be used as described within the present disclosure, those of ordinary skill in the related arts will readily appreciate, given the present disclosure, that the controller apparatus may be virtualized and/or distributed within other core network entities (thus having ready access to power for continued operation), and hence the foregoing apparatus 1300 is purely illustrative.


More particularly, the exemplary controller is located within, near, or at the centralized manager, e.g., the MSO; an intermediate entity, e.g., within a data center, such as an AP controller; and/or within "cloud" entities or other portions of the infrastructure of which the rest of the wireless network (as discussed supra) is a part. In some embodiments, the controller 1300 may be one of several controllers, each having equivalent effectiveness or different levels of use, e.g., within a hierarchy (e.g., controller 1300 may be under a "parent" controller that manages multiple slave or subordinate controllers).


In one embodiment, the processor 1302 may include one or more of a digital signal processor, microprocessor, field-programmable gate array, or plurality of processing components mounted on one or more substrates. The processor 1302 may also comprise an internal cache memory. The processing subsystem is in communication with a memory subsystem 1304, the latter including memory which may for example comprise SRAM, flash, and/or SDRAM components. The memory subsystem may implement one or more DMA-type hardware elements, so as to facilitate data accesses as is well known in the art. The memory subsystem of the exemplary embodiment contains computer-executable instructions which are executable by the processor subsystem.


The processing apparatus 1302 is configured to execute at least one computer program stored in memory 1304 (e.g., a non-transitory computer readable storage medium). The computer program may include a plurality of computer readable instructions configured to perform the complementary logical functions of a peer controller (PC) 1306. Other embodiments may implement such functionality within dedicated hardware, logic, and/or specialized co-processors (not shown). For instance, the peer controller (or portions of the functionality thereof) can be located in one or more MSO data centers, and/or in other "cloud" entities, whether within or outside of the MSO network.


In the exemplary embodiment as shown, controller 1300 includes a heartbeat manager module 1312. The heartbeat manager 1312 is a hardware and/or software module that is in data communication with the processor 1302, memory 1304 and/or one or more interfaces 1308, 1310 to the external network. In some embodiments, the heartbeat manager 1312 is internal to the processor, memory, or other components of the controller 1300, such as via being rendered in software or firmware operative to run on the processor core(s).


At a high level, the exemplary heartbeat manager 1312 is configured to implement (or facilitate implementation of) the methods described above with respect to FIGS. 11-12, as applicable. The heartbeat manager is configured to manage signals or messages received from downstream (sent by, e.g., an AP such as one utilizing the architecture 600 of FIG. 6) that contain, inter alia, (i) information that allows the controller (or other network devices that receive and/or relay the signal upstream) to ascertain that such heartbeat signal requires a response, (ii) information about the origin/routing of the signal, and/or (iii) instructions on handling of the signal (e.g., send a response heartbeat signal to the originating device or a device identified by a unique ID, MAC address, IP address, etc.).
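
An illustrative (assumed, not normative) representation of such a heartbeat payload and its response-addressing rule:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Heartbeat:
    """Illustrative heartbeat payload as handled by the heartbeat manager:
    origin/routing information, a response-required flag, and optional
    handling instructions (e.g., reply to a proxy rather than the originator)."""
    origin_id: str                        # unique ID / MAC / IP of the issuing device
    requires_response: bool = True
    reply_to: Optional[str] = None        # where to send the response, if not the origin
    time_limit_s: Optional[float] = None  # allowed response time, if specified

def response_destination(hb: Heartbeat) -> str:
    """Return where the response should be addressed per the heartbeat's instructions."""
    return hb.reply_to if hb.reply_to else hb.origin_id

# Example: a mesh AP asks that responses be collected at its root AP (proxy entity).
hb = Heartbeat(origin_id="ap-204", reply_to="root-ap-206", time_limit_s=3.0)
print(response_destination(hb))  # 'root-ap-206'
```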


In one embodiment, the heartbeat manager accesses the memory module 1304 to retrieve stored data. The data or information may relate to open-access features such as available bandwidth, power level readings, logs for received and transmitted signals, network conditions, quality of service, etc. Such features are accessible by other backend entities or may be included in response signals (e.g., back to AP).


In other embodiments, application program interfaces (APIs) such as those included in MSO-provided applications, installed with other proprietary software, or natively available on the controller apparatus (e.g., as part of the computer program noted supra or exclusively internal to the heartbeat manager module 1312) may also reside in the internal cache or other memory 1304. Such APIs may include common network protocols or programming languages configured to enable communication with other network entities, as well as to receive and transmit signals that a receiving device (e.g., AP) may interpret.


In one embodiment, the PC 1306 is configured to register known downstream devices, other backend devices, and wireless client devices (remotely located or otherwise), and to centrally control the broader wireless network (and any constituent peer-to-peer sub-networks). Such configuration includes, e.g., providing network identification (e.g., to APs, CMs and other downstream devices, or to upstream devices), managing network congestion, and managing capabilities supported by the wireless network.


In another embodiment, the PC 1306 is further configured to communicate with one or more authentication, authorization, and accounting (AAA) servers of the network. The AAA servers are configured to provide services for, e.g., authorization and/or control of network subscribers for controlling access to computer resources, enforcing policies, auditing usage, and providing the information necessary to bill for services.


In some variants, authentication processes are configured to identify an AP, a client device, or an end user, such as by having the end user enter valid credentials (e.g., user name and password) before access is granted, or other methods as described supra. The process of authentication may be based on each subscriber having a unique set of criteria or credentials (e.g., unique user name and password, challenge questions, entry of biometric data, entry of human-verification data such as “CAPTCHA” data, etc.) for gaining access to the network. For example, the AAA servers may compare a user's authentication credentials with user credentials stored in a database therein. If the authentication credentials satisfy the access requirements (e.g., provided credentials match the stored credentials), the user may then be granted access to the network and its features and services. If the credentials are at variance, authentication fails and network access may be denied.
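
A minimal sketch of the credential-comparison step described above; the hashing scheme and in-memory "database" are assumptions made for illustration, and real AAA/RADIUS deployments use their own protocols and stores:

```python
import hashlib
import hmac

def authenticate(provided_user, provided_password, credential_db):
    """Compare provided credentials against stored password hashes; grant access
    only when they match (illustrative of the AAA comparison, not an MSO API)."""
    stored_hash = credential_db.get(provided_user)
    if stored_hash is None:
        return False
    provided_hash = hashlib.sha256(provided_password.encode()).hexdigest()
    return hmac.compare_digest(provided_hash, stored_hash)

# Example: matching credentials grant access; mismatched credentials are denied.
db = {"subscriber1": hashlib.sha256(b"correct-horse").hexdigest()}
print(authenticate("subscriber1", "correct-horse", db))  # True  -> access granted
print(authenticate("subscriber1", "wrong-pass", db))     # False -> access denied
```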


Following authentication, the AAA servers are configured to grant authorization to a subscriber user for certain features, functions, and/or tasks. After logging into the wireless network, for instance, the subscriber may try to access an MSO-provided email account, cloud storage account, or streaming content. The authorization process determines whether the user has the authority to access those services or issue commands related thereto. Simply put, authorization is the process of enforcing policies, i.e., determining what types or qualities of activities, resources, or services a user is permitted. Usually, authorization occurs within the context of authentication. Once a user is authenticated, they may be authorized for different types of access or activity. A given user may also have different types, sets, or levels of authorization, depending on any number of aspects.


The AAA servers may be further configured for accounting, which measures the resources a user consumes during access. This may include the amount of system time or the amount of data a user has sent and/or received during a session, somewhat akin to cellular data plans based on so many consumed or available GB of data. Accounting may be carried out by logging of session statistics and usage information, and is used for, inter alia, authorization control, billing, trend analysis, network resource utilization, and capacity planning activities. It will be appreciated that in other examples, one or more AAA servers may be linked to a third-party or proxy server, such as that of an event management entity.


In one embodiment, one or more backend interfaces 1308 are configured to transact one or more network address packets with other networked devices, particularly backend apparatus (e.g., CMTS, Layer 3 switch, network monitoring center, MSO) according to a network protocol. Common examples of network routing protocols include for example: Internet Protocol (IP), Internetwork Packet Exchange (IPX), and Open Systems Interconnection (OSI) based network technologies (e.g., Asynchronous Transfer Mode (ATM), Synchronous Optical Networking (SONET), Synchronous Digital Hierarchy (SDH), Frame Relay). In one embodiment, the backend network interface 1308 operates in signal communication with the backbone of the content delivery network (CDN), such as that of FIGS. 1-1d. These interfaces might comprise, for instance, GbE (Gigabit Ethernet) or other interfaces of suitable bandwidth capability.


In one embodiment, one or more network interfaces 1310 are utilized in the illustrated embodiment for communication with downstream network entities, e.g., APs, backbone entities, data centers, and/or CMs, such as via Ethernet or other wired and/or wireless data network protocols. Heartbeat pings received from downstream are routed via the network interface to the heartbeat manager 1312.


It will also be appreciated that the two interfaces 1308, 1310 may be aggregated together and/or shared with other extant data interfaces, such as in cases where a controller function is virtualized within another component, such as an MSO network server performing that function.


It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the disclosure disclosed and claimed herein.


While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.


It will be further appreciated that while certain steps and aspects of the various methods and apparatus described herein may be performed by a human being, the disclosed aspects and individual methods and apparatus are generally computerized/computer-implemented. Computerized apparatus and methods are necessary to fully implement these aspects for any number of reasons including, without limitation, commercial viability, practicality, and even feasibility (i.e., certain steps/processes simply cannot be performed by a human being in any viable fashion).

Claims
  • 1. A computerized method of operating a content distribution network to compensate for faults within the content distribution network, the content distribution network having at least a plurality of wireless local area network (WLAN) access points (APs) and a WLAN controller entity, the computerized method comprising: transmitting a test signal addressed to the WLAN controller entity from at least one of the plurality of WLAN APs; failing to receive, at the at least one WLAN AP, an expected response signal from the WLAN controller entity in response to the test signal; and based at least on (i) a detection of a faulty condition of at least one device upstream of the at least one WLAN AP subsequent to the failing to receive the expected response signal from the WLAN controller entity, and (ii) an evaluation of an internal condition of the at least one WLAN AP that has transmitted the test signal to determine that the WLAN AP is not faulty, initiating a restoration process for at least one device upstream, and causing transmission of data to the WLAN controller entity via an alternate communication channel jointly controlled with the plurality of WLAN APs and the WLAN controller entity by an operator of the content distribution network, the jointly controlled alternate communication channel configured for use with a different wireless communication protocol than that used for the transmitting by the at least one WLAN AP, the transmission of the data causing use of an alternate wireless access node by a user device then-currently associated with the at least one WLAN AP, the alternate wireless access node configured for use with the different wireless communication protocol.
  • 2. The computerized method of claim 1, wherein the content distribution network further comprises a premises modem apparatus capable of communicating data between the at least one WLAN AP and the WLAN controller entity; and wherein the computerized method further comprises: transmitting, subsequent to the transmitting the test signal, a second test signal from the at least one WLAN AP, the second test signal addressed to the premises modem apparatus; obtaining an expected response signal to the second test signal; and based at least on the failing to receive and the obtaining, localizing a fault within the content distribution network to a prescribed portion thereof that does not include the at least one WLAN AP or the premises modem apparatus.
  • 3. The computerized method of claim 2, further comprising establishing data communication with the WLAN controller entity via a backhaul cabling connection with the content distribution network, the backhaul cabling connection being configured to enable data communication between the premises modem apparatus and the WLAN controller entity via at least radio frequency (RF) signals carried over the backhaul cabling.
  • 4. The computerized method of claim 1, further comprising dynamically selecting one or more network parameters for the transmission of the data via the alternate communication channel, the one or more network parameters including one or more radio frequency bands selected by the at least one WLAN AP.
  • 5. The computerized method of claim 1, wherein the causing use of an alternate wireless access node by a user device then-currently associated with the at least one WLAN AP comprises causing the at least one WLAN AP to cease advertisement of said at least one WLAN AP via wireless transmissions therefrom.
  • 6. The computerized method of claim 1, wherein the transmitting of the test signal comprises transmitting a heartbeat signal, the heartbeat signal comprising at least identification data associated with the at least one WLAN AP; and wherein the computerized method further comprises, based on the failing to receive the expected response signal from the WLAN controller entity, determining a presence of a fault at or between one or more of (i) the WLAN controller entity or (ii) a premises modem apparatus capable of communicating data between the at least one WLAN AP and the WLAN controller entity.
  • 7. A computerized method of operating a content distribution network to compensate for faults within the content distribution network, the content distribution network having at least one wireless local area network (WLAN) access point (AP) and a WLAN controller entity, the computerized method comprising: transmitting a first signal addressed to the WLAN controller entity from at least one WLAN AP and via a first data backhaul; failing to receive, at the at least one WLAN AP, an expected response signal from the WLAN controller entity in response to the first signal; and based at least on (i) an assessment to determine whether a network apparatus disposed upstream of the at least one WLAN AP is non-functional in at least one aspect that would cause the failing to receive, and (ii) a self-evaluation of the at least one WLAN AP to determine whether the at least one WLAN AP is non-functional in at least one aspect that would cause the failing to receive, the assessment and the self-evaluation initiated based on the failing to receive the expected response signal from the WLAN controller entity, causing transmission of data to the WLAN controller entity via an alternate communication channel that is co-deployed by an operator of the content distribution network and that is configured to effectuate the transmission of data via a cellular-based transmission protocol, the transmission of the data causing use of the alternate communication channel as backhaul for the at least one WLAN AP.
  • 8. The computerized method of claim 7, wherein the causing transmission of data to the WLAN controller entity via the alternate communication channel comprises a causing transmission via a wireless backhaul between a premises modem apparatus and a wireless node of the content distribution network.
  • 9. The computerized method of claim 8, wherein the premises modem apparatus and the wireless node of the content distribution network are each backhauled by at least coaxial cable of the content distribution network.
  • 10. The computerized method of claim 7, wherein the content distribution network further comprises a premises modem apparatus capable of communicating data between the at least one WLAN AP and the WLAN controller entity; and wherein the computerized method further comprises: transmitting, subsequent to the transmitting the first signal, a second signal from the at least one WLAN AP, the second signal addressed to the premises modem apparatus; obtaining an expected response signal to the second signal; and based at least on the failing to receive and the obtaining, localizing a fault within the content distribution network to a prescribed portion thereof that does not include the at least one WLAN AP or the premises modem apparatus.
  • 11. The computerized method of claim 7, wherein the failing to receive the expected response comprises failing to receive the expected response within a prescribed period of time.
  • 12. The computerized method of claim 11, further comprising: determining a level of congestion of at least a portion of the content distribution network; and adjusting the prescribed period of time based at least on the determined level of congestion.
  • 13. The computerized method of claim 7, further comprising: determining, prior to the transmitting of the first signal, a loss or reduction in available bandwidth of at least a portion of the content distribution network; and based at least on the determining, initiating the transmitting of the first signal.
  • 14. A controller apparatus configured for fault compensation within a content distribution network, the controller apparatus comprising: a data connection configured for data communication with distribution infrastructure of the content distribution network; and non-transitory computer-readable apparatus comprising a plurality of computer-readable instructions, the plurality of instructions configured for data communication with one or more premises modems, the one or more premises modems each configured for data communication with the content distribution network via at least a first data interface and a second data interface, the first and second data interfaces being controlled by an operator of the content distribution network, the one or more premises modems each in data communication with one or more wireless access nodes; wherein the controller apparatus is further configured to, via at least the plurality of instructions: transmit respective first signals addressed to corresponding ones of the one or more wireless access nodes via a first communication channel established via a first data interface of at least one of the one or more premises modems, the first data interface being configured to access the first communication channel via one of a plurality of data links controlled by the operator of the content distribution network; and responsive to a detection of a communicational fault that has occurred in at least one of at least one of the one or more premises modems or at least one of the one or more wireless access nodes, the detection being based at least on a determination that at least one of second signals was not received from at least one of a corresponding one of the one or more premises modems or a corresponding one of the one or more wireless access nodes within a prescribed period of time, enable the corresponding one of the one or more premises modems to cause transmission of data via the second data interface, the second data interface being configured to access, via another one of the plurality of data links controlled by the operator, a communication channel alternate to the first communication channel; wherein the plurality of controlled data links comprise at least (i) a wireless local area network (WLAN)-based data link, and (ii) a non-WLAN-based data link.
  • 15. The controller apparatus of claim 14, wherein the enablement of the corresponding one of the one or more premises modems to cause the transmission of data comprises transmission of data to the corresponding one of the one or more premises modems via the communication channel alternate to the first communication channel.
  • 16. The controller apparatus of claim 14, wherein the first data interface comprises a wireline interface serviced by a coaxial cable of the content distribution network distribution infrastructure in communication with the first data interface of the corresponding one of the one or more premises modems, and the second data interface comprises a wireless interface serviced by a wireless distribution node serviced by a coaxial cable of the content distribution network distribution infrastructure.
  • 17. The controller apparatus of claim 14, wherein the first data interface and the second data interface are each configured to receive radio frequency (RF) signals from the content distribution network distribution infrastructure.
PRIORITY AND RELATED APPLICATIONS

This application is a divisional of and claims the benefit of priority to co-owned U.S. patent application Ser. No. 15/183,159 of the same title filed Jun. 15, 2016 and issuing as U.S. Pat. No. 10,164,858 on Dec. 25, 2018, the foregoing being incorporated herein by reference in its entirety. The present application is generally related to the subject matter of co-pending and co-owned U.S. patent application Ser. No. 15/063,314 filed Mar. 7, 2016 and entitled “APPARATUS AND METHODS FOR DYNAMIC OPEN-ACCESS NETWORKS”, co-pending and co-owned U.S. patent application Ser. No. 15/002,232 filed Jan. 20, 2016 and entitled “APPARATUS AND METHOD FOR WIRELESS NETWORK SERVICES IN MOVING VEHICLES”, co-pending and co-owned U.S. patent application Ser. No. 14/959,948 filed Dec. 4, 2015 and entitled “APPARATUS AND METHOD FOR WIRELESS NETWORK EXTENSIBILITY AND ENHANCEMENT”, and co-pending and co-owned U.S. patent application Ser. No. 14/959,885 filed Dec. 4, 2015 and entitled “APPARATUS AND METHODS FOR SELECTIVE DATA NETWORK ACCESS”, each of the foregoing incorporated herein by reference in its entirety.

WO-0103410 Jan 2001 WO
WO-0110125 Feb 2001 WO
WO-0137479 May 2001 WO
WO-0169842 Sep 2001 WO
WO-0177778 Oct 2001 WO
WO-0213032 Feb 2002 WO
WO-0221841 Mar 2002 WO
WO-0242966 May 2002 WO
WO-02080556 Oct 2002 WO
WO-03038704 May 2003 WO
WO-03087799 Oct 2003 WO
WO-03093944 Nov 2003 WO
WO-2004027622 Apr 2004 WO
WO-2005015422 Feb 2005 WO
WO-2006020141 Feb 2006 WO
WO-2008080556 Jul 2008 WO
WO-2009020476 Feb 2009 WO
WO-2012021245 Feb 2012 WO
Non-Patent Literature Citations (32)
5C Digital Transmission Content Protection White Paper, Hitachi, Ltd., et al., dated Jul. 14, 1998, 15 pages.
Cantor, et al., Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V2.0, OASIS Standard, Mar. 15, 2005. Document ID: saml-core-2.0-os (http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf).
Cantor, et al., Bindings for the OASIS Security Assertion Markup Language (SAML) V2.0, OASIS Standard, Mar. 2005, Document ID: saml-bindings-2.0-os (http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf).
Cisco Intelligent Network Architecture for Digital Video—SCTE Cable-Tec Expo 2004 information page, Orange County Convention Center, Jun. 2004, 24 pages.
DCAS Authorized Service Domain, Version 1.2, dated Nov. 4, 2008, 58 pages.
DCAS Licensed Specification Abstracts, CableLabs Confidential Information, Jan. 12, 2006, 4 pages.
Deering et al., Internet Protocol, Version 6 (IPv6) Specification, IETF RFC 2460 (Dec. 1998).
DVB (Digital Video Broadcasting), DVB Document A045 Rev. 3, Jul. 2004, “Head-end Implementation of SimulCrypt,” 289 pages.
DVB (Digital Video Broadcasting); DVB SimulCrypt; Part 1: “Head-end architecture and synchronization” Technical Specification—ETSI TS 101 197 V1.2.1 (Feb. 2002), 40 pages.
Federal Information Processing Standards Publication, US FIPS PUB 197, Nov. 26, 2001, "Advanced Encryption Standard (AES)," 47 pages.
Gomez, Conserving Transmission Power in Wireless Ad Hoc Networks, 2001, Network Protocols.
Griffith, et al., Resource Planning and Bandwidth Allocation in Hybrid Fiber-Coax Residential Networks, National Institute of Standards and Technology (NIST), 10 pages, no date.
Gupta V., et al., “Bit-Stuffing in 802.11 Beacon Frame: Embedding Non-Standard Custom Information,” International Journal of Computer Applications, Feb. 2013, vol. 63 (2), pp. 6-12.
High-bandwidth Digital Content Protection System, Revision 1.091, dated Apr. 22, 2003, Digital Content Protection LLC Draft, 78 pages.
Internet Protocol DARPA Internet Program Protocol Specification, IETF RFC 791 (Sep. 1981).
Kanouff, Communications Technology: Next-Generation Bandwidth Management—The Evolution of the Anything-to-Anywhere Network, 8 pages, Apr. 1, 2004.
Marusic, et al., “Share it!—Content Transfer in Home-to-Home Networks.” IEEE MELECON 2004, May 12-15, 2004, Dubrovnik, Croatia.
MediaServer:1 Device Template Version 1.01, Jun. 25, 2002.
Miao, et al., "Distributed interference-aware energy-efficient power optimization," IEEE Transactions on Wireless Communications, Apr. 2011, vol. 10 (4), pp. 1323-1333.
Motorola DOCSIS Cable Module DCM 2000 specifications, 4 pages, copyright 2001.
OpenCable Application Platform Specification, OCAP 2.0 Profile, OC-SP-OCAP2.0-I01-020419, Apr. 19, 2002.
OpenCable Application Platform Specifications, OCAP Extensions, OC-SP-OCAP-HNEXT-I03-080418, 2005-2008.
OpenCable Host Device, Core Functional Requirements, OC-SP-HOST-CFR-I13-030707, Jul. 7, 2003.
OpenCable, HOST-POD Interface Specification, OC-SP-HOSTPOD-IF-I13-030707, Jul. 7, 2003.
OpenCable Specification, Home Networking Protocol 2.0, OC-SP-HNP2.0-I01-08418, 2007.
OpenCable Specifications, Home Networking Security Specification, OC-SP-HN-SEC-D01-081027, draft (Oct. 27, 2008).
OpenVision Session Resource Manager—Open Standards-Based Solution Optimizes Network Resources by Dynamically Assigning Bandwidth in the Delivery of Digital Services article, 2 pages, (copyright 2006), (http://www.imake.com/hopenvision).
OpenVision Session Resource Manager features and information, 2 pages, no date, (http://www.imake.com/hopenvision).
Primergy BX300 Switch Blade user's manual, Fujitsu Corp., Sep. 30, 2002, first edition, pp. 1 to 20.
Real System Media Commerce Suite (Technical White Paper), at http://docs.real.com/docs/drm/DRM_WP1.pdf, 12 pages, Nov. 2001.
Van Moffaert, A., et al., "Digital Rights Management: DRM is a key enabler for the future growth of the broadband access market and the telecom/networking market in general," Alcatel Telecommunications Review, Alcatel, Paris Cedex FR, Apr. 1, 2003, XP007005930, ISSN; 8 pages.
Zhang, et al., "A Flexible Content Protection System for Media-On-Demand," Multimedia Software Engineering, 2002 Proceedings, Fourth International Symposium on Dec. 11-13, 2002, Piscataway, NJ, USA, IEEE, Dec. 11, 2002, pp. 272-277, XP010632760, ISBN: 978-0-7695-1857-2.
Related Publications (1)
Number Date Country
20190149443 A1 May 2019 US
Divisions (1)
Number Date Country
Parent 15183159 Jun 2016 US
Child 16231076 US