Building and maintaining a network

Information

  • Patent Grant
  • Patent Number
    10,326,707
  • Date Filed
    Friday, January 17, 2014
  • Date Issued
    Tuesday, June 18, 2019
Abstract
Techniques and systems for establishing and maintaining networks. The technique includes assigning a network device to an interregional redirector system and load balancer systems. The network device can be assigned based upon the regions or subregions of the network device. The technique includes the load balancer systems assigning the network device to network device management engines. The status of the network device management engines can be monitored to determine if one of the network device management engines has failed. In the event that a network device management engine has failed, the network device can be assigned to a different network device management engine.
Description
BACKGROUND

An area of ongoing research and development is improving the ease with which a person or an enterprise can set up a network. Of particular importance is improving the ease with which a person or an enterprise can add devices to an already existing network to further expand and improve the network. Specifically, in establishing a network or adding devices to an already existing network, an administrator must configure each device in order to establish a new network or incorporate a new device into an already existing network. There therefore exists a need for systems in which a person or an enterprise can easily set up a network or add devices to an already existing network without having to configure the device.


Another area of ongoing research and development is improving the ease by which a network can be monitored and managed to continue to function if a device fails. Typical systems connect a plurality of network devices to a single server to manage the network devices. Therefore, if the server fails, all of the network devices managed by the failed server are inoperable. There therefore exists a need for a system that monitors the servers or engines that manage network devices to determine whether or not they have failed. There also exists a need for a system capable of reassigning the network devices to different servers or engines that manage network devices in the event that a server or engine that manages network devices has failed.


The foregoing examples of the related art are intended to be illustrative and not exclusive. Other limitations of the relevant art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.


SUMMARY

The following implementations and aspects thereof are described and illustrated in conjunction with systems, tools, and methods that are meant to be exemplary and illustrative, not necessarily limiting in scope. In various implementations, one or more of the above-described problems have been addressed, while other implementations are directed to other improvements.


Techniques and systems for building and maintaining a network. The technique involves assigning network devices to network device management engines that manage the flow of data packets into and out of the network devices. The technique can include connecting a network device to an interregional redirector system. The network device can be a newly purchased device that is being powered on for the first time by the purchaser. The technique can include the interregional redirector system receiving network device information about the network device. The technique can also include the interregional redirector system validating the network device. The interregional redirector system can then assign the network device to a load balancer system. The load balancer system can be associated with or part of one or multiple regional network device management systems. The regional network device management systems can be regionally unique in that they contain engines in specific regions or subregions. The load balancer systems can be regionally unique in that they are associated with or part of one or multiple regional network device management systems that are regionally unique. The interregional redirector system can assign the network device to a load balancer system based upon the regions or subregions of the regional network device management systems, or of the engines within them, with which the load balancer systems are associated.


The technique can also involve a load balancer system assigning a network device to a network device management engine. The load balancer system can receive both network device information and network device management engine information. The load balancer system can assign the network device to a network device management engine based upon the region or subregion of the network device and the regions or subregions of the other network devices that the network device management engine already manages.


The technique can also include the load balancer system monitoring the status of the network device management engines associated with it and reassigning network devices to different network device management engines in the event that one of those engines fails. The load balancer system can monitor the status of the network device management engines associated with the load balancer system by retrieving network device management engine status messages from a network device management engine message queue. The status messages can be sent to the network device management engine message queue by the network device management engines. The load balancer system can use the status messages of the network device management engines to determine whether or not a network device management engine has failed. If the load balancer system determines that a network device management engine has failed, then the load balancer system can reassign the network device to another network device management engine that is not failing. The load balancer system can also send a notification to an administrator system that the network device management engine has failed.
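By way of example but not limitation, the following minimal sketch (in Python) illustrates the overall assignment flow summarized above: a device is validated by the interregional redirector system, handed to a regionally unique load balancer system, and assigned to a network device management engine. Every name and data shape here (Region, NetworkDevice, the region-keyed balancer lookup, the MAC-based validation set) is an illustrative assumption, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    name: str           # e.g. a state, a city, or an enterprise network
    parent: str = ""    # enclosing region, empty for a top-level region

@dataclass
class NetworkDevice:
    mac: str
    region: Region

class NetworkDeviceManagementEngine:
    def __init__(self, region: Region):
        self.region = region
        self.devices: list[NetworkDevice] = []

class LoadBalancerSystem:
    """Regionally unique: fronts engines for one region or subregion."""
    def __init__(self, region: Region,
                 engines: list[NetworkDeviceManagementEngine]):
        self.region = region
        self.engines = engines

    def assign(self, device: NetworkDevice) -> NetworkDeviceManagementEngine:
        # Prefer an engine that already manages devices in the same region.
        for engine in self.engines:
            if engine.region == device.region:
                engine.devices.append(device)
                return engine
        engine = self.engines[0]  # otherwise fall back to any engine it fronts
        engine.devices.append(device)
        return engine

class InterregionalRedirectorSystem:
    """Not tied to any one region; knows every regional load balancer."""
    def __init__(self, balancers: dict[str, LoadBalancerSystem],
                 valid_macs: set[str]):
        self.balancers = balancers      # keyed by region name
        self.valid_macs = valid_macs    # e.g. MACs of purchased devices

    def register(self, device: NetworkDevice) -> NetworkDeviceManagementEngine:
        if device.mac not in self.valid_macs:      # validation step
            raise PermissionError(f"cannot validate device {device.mac}")
        balancer = self.balancers[device.region.name]
        return balancer.assign(device)
```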


These and other advantages will become apparent to those skilled in the relevant art upon a reading of the following descriptions and a study of the several examples of the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a diagram of an example of a system configured to couple a network device to a regional network device management system.



FIG. 2 depicts a diagram of an example of a system configured to couple a network device to a network device management engine and monitor the network device management engine.



FIG. 3 depicts a diagram of an example of a load balancer system.



FIG. 4 depicts a flowchart of an example of a method for assigning a network device to a regional network device management system.



FIG. 5 depicts a flowchart of an example of a method of a load balancer system for assigning a network device to a network device management engine.



FIG. 6 depicts a flowchart of an example of a method, performed by a network device, for determining that the network device management engine managing the network device has failed.





DETAILED DESCRIPTION OF THE INVENTION


FIG. 1 depicts a diagram 100 of an example of a system configured to couple a network device to a regional network device management system. The system includes an interregional redirector system 102, a load balancer system 104, regional network device management systems 106-1 . . . 106-n, a computer readable medium 108, and network devices 110-1 . . . 110-n. As used in this paper, a system can be implemented as an engine or a plurality of engines.


While the system is shown to include multiple network devices 110-1 . . . 110-n, in a specific implementation, the system can include only one network device (e.g. 110-1). The network devices 110-1 . . . 110-n are coupled to client devices 112-1 . . . 112-n. Each client device can be coupled to a single network device 110-1 . . . 110-n (e.g. client device 112-1) or can be coupled to more than one network device (e.g. client device 112-2). The client devices 112-1 . . . 112-n can include a client wireless device, such as a laptop computer or a smart phone. The client devices 112-1 . . . 112-n can also include a repeater or a plurality of linked repeaters. Therefore, the client devices 112-1 . . . 112-n can be comprised of a plurality of repeaters and a client wireless device coupled together as a chain.


A network device, as is used in this paper, can be an applicable device used in connecting a client device to a network. For example, a network device can be a virtual private network (hereinafter referred to as “VPN”) gateway, a router, an access point (hereinafter referred to as “AP”), or a device switch. The network devices 110-1 . . . 110-n can be integrated as part of router devices or as stand-alone devices coupled to upstream router devices. The network devices 110-1 . . . 110-n can be coupled to the client devices 112-1 . . . 112-n through either a wireless or a wired medium. The wireless connection may or may not be IEEE 802.11-compatible. In this paper, 802.11 standards terminology is used by way of relatively well-understood example to discuss implementations that include wireless techniques that connect stations through a wireless medium. A station, as used in this paper, may be referred to as a device with a media access control (MAC) address and a physical layer (PHY) interface to a wireless medium that complies with the IEEE 802.11 standard. Thus, for example, client devices 112-1 . . . 112-n and network devices 110-1 . . . 110-n with which the client devices 112-1 . . . 112-n associate can be referred to as stations, if applicable. IEEE 802.11a-1999, IEEE 802.11b-1999, IEEE 802.11g-2003, IEEE 802.11-2007, and IEEE 802.11n TGn Draft 8.0 (2009) are incorporated by reference.


As used in this paper, a system that is 802.11 standards-compatible or 802.11 standards-compliant complies with at least some of one or more of the incorporated documents' requirements and/or recommendations, or requirements and/or recommendations from earlier drafts of the documents, and includes Wi-Fi systems. Wi-Fi is a non-technical description generally correlated with the IEEE 802.11 standards, as well as Wi-Fi Protected Access (WPA) and WPA2 security standards, and the Extensible Authentication Protocol (EAP) standard. In alternative implementations, a station may comply with a different standard than Wi-Fi or IEEE 802.11 and may be referred to as something other than a “station,” and may have different interfaces to a wireless or other medium.


IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group defining the physical layer and data link layer's MAC of wired Ethernet. This is generally a local area network technology with some wide area network applications. Physical connections are typically made between nodes and/or infrastructure devices (hubs, switches, routers) by various types of copper or fiber cable. IEEE 802.3 is a technology that supports the IEEE 802.1 network architecture. As is well-known in the relevant art, IEEE 802.11 is a working group and collection of standards for implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. The base version of the standard IEEE 802.11-2007 has had subsequent amendments. These standards provide the basis for wireless network products using the Wi-Fi brand. IEEE 802.1 and 802.3 are incorporated by reference.


The network devices 110-1 . . . 110-n are coupled to the interregional redirector system 102, the load balancer system 104 and regional network device management systems 106-1 . . . 106-n through a computer-readable medium 108. The computer-readable medium 108 is intended to represent a variety of potentially applicable technologies. For example, the computer-readable medium 108 can be used to form a network or part of a network. Where two components are co-located on a device, the computer-readable medium 108 can include a bus or other data conduit or plane. Where a first component is co-located on one device and a second component is located on a different device, the computer-readable medium 108 can include a wireless or wired back-end network or LAN. The computer-readable medium 108 can also encompass a relevant portion of a WAN or other network, if applicable.


The computer-readable medium 108, the interregional redirector system 102, the load balancer system 104, the regional network device management systems 106-1 . . . 106-n, and other applicable systems described in this paper can be implemented as parts of a computer system or a plurality of computer systems. A computer system, as used in this paper, is intended to be construed broadly. In general, a computer system will include a processor, memory, non-volatile storage, and an interface. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor. The processor can be, for example, a general-purpose central processing unit (CPU), such as a microprocessor, or a special-purpose processor, such as a microcontroller.


The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed. As used in this paper, the term “computer-readable storage medium” is intended to include only physical media, such as memory. As used in this paper, a computer-readable medium is intended to include all mediums that are statutory (e.g., in the United States, under 35 U.S.C. 101), and to specifically exclude all mediums that are non-statutory in nature to the extent that the exclusion is necessary for a claim that includes the computer-readable medium to be valid. Known statutory computer-readable mediums include hardware (e.g., registers, random access memory (RAM), non-volatile (NV) storage, to name a few), but may or may not be limited to hardware.


The bus can also couple the processor to the non-volatile storage. The non-volatile storage is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software on the computer system. The non-volatile storage can be local, remote, or distributed. The non-volatile storage is optional because systems can be created with all applicable data available in memory.


Software is typically stored in the non-volatile storage. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at an applicable known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable storage medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.


In one example of operation, a computer system can be controlled by operating system software, which is a software program that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile storage.


The bus can also couple the processor to the interface. The interface can include one or more input and/or output (I/O) devices. The I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other I/O devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g. “direct PC”), or other interfaces for coupling a computer system to other computer systems. Interfaces enable computer systems and other devices to be coupled together in a network.


The computer systems described throughout this paper can be compatible with or implemented through one or a plurality of cloud-based computing systems. As used in this paper, a cloud-based computing system is a system that provides virtualized computing resources, software and/or information to client devices. The computing resources, software and/or information can be virtualized by maintaining centralized services and resources that the client devices can access over a communication interface, such as a network. “Cloud” may be a marketing term and for the purposes of this paper can include any of the networks described herein. The cloud-based computing system can involve a subscription for services or use a utility pricing model. Users can access the protocols of the cloud-based computing system through a web browser or other container application located on their client device.


The computer systems described throughout this paper can be implemented as or can include engines to perform the functions of each system. An engine, as used in this paper, includes a dedicated or shared processor and, typically, firmware or software modules executed by the processor. Depending upon implementation-specific or other considerations, an engine can be centralized or its functionality distributed. An engine can include special purpose hardware, firmware, or software embodied in a computer-readable medium for execution by the processor.


The engines described throughout this paper can be cloud-based engines. A cloud-based engine is an engine that can run applications and/or functionalities using a cloud-based computing system. All or portions of the applications and/or functionalities can be distributed across multiple computing devices, and need not be restricted to only one computing device. In some embodiments, the cloud-based engines can execute functionalities and/or modules that end users access through a web browser or container application without having the functionalities and/or modules installed locally on the end-users' computing devices.


The computer systems described throughout this paper can include datastores. A datastore, as described in this paper, can be a cloud-based datastore compatible with a cloud-based computing system.


The regional network device management systems 106-1 . . . 106-n can function to manage the network devices 110-1 . . . 110-n. Each regional network device management system 106-1 . . . 106-n can include a plurality of engines that manage the network devices 110-1 . . . 110-n. The engines can be grouped into regional network device management systems 106-1 . . . 106-n based upon regions and subregions of the network devices 110-1 . . . 110-n that the engines manage. Therefore the regional network device management systems 106-1 . . . 106-n can be characterized by the regions and subregions of the network devices 110-1 . . . 110-n that the engines within a specific network device management system 106-1 . . . 106-n manage. For example, the engines that manage network devices 110-1 . . . 110-n in the same region or subregion can be grouped into the same regional network device management system (e.g. 106-1). As a result, the regional network device management systems 106-1 . . . 106-n can be regionally unique in that they contain engines that manage network devices 110-1 . . . 110-n within specific regions or subregions.


As the regional network device management systems 106-1 . . . 106-n can be implemented as a cloud-based system, and as the regional network device management systems 106-1 . . . 106-n can be characterized by the region and subregions of the network devices 110-1 . . . 110-n, the regional network device management systems 106-1 . . . 106-n can be organized or located at regions within the cloud based upon the regions and subregions of the network devices 110-1 . . . 110-n. Specifically, the regional network device management systems 106-1 . . . 106-n can be organized or located at regions within the cloud based upon the regions or the subregions of the network devices 110-1 . . . 110-n that the regional network device management systems 106-1 . . . 106-n manage. In a specific implementation, the subregions of the network devices 110-1 . . . 110-n together form a region of the network devices 110-1 . . . 110-n.


The regions or subregions of the network devices 110-1 . . . 110-n can be defined based upon geography, an enterprise network or a combination of both geography and an enterprise network. In a specific implementation, the region can be defined based upon geography to include the network devices 110-1 . . . 110-n associated with or located within a geographical area or location, such as a city or a building within a city. Similarly, a subregion can be defined to include the network devices 110-1 . . . 110-n located in or associated with a geographical area or location within the geographical area or location used to define the region. For example, the region can be defined to include the network devices 110-1 . . . 110-n located in or associated with a state, while the subregion can be defined to include the network devices 110-1 . . . 110-n located in or associated with a city in the state that defines the region. In another implementation, the region can be defined based upon an enterprise to include the network devices 110-1 . . . 110-n associated with or used in an enterprise network. In yet another implementation, the region can be defined based upon a combination of both geography and an enterprise to include the network devices 110-1 . . . 110-n associated with or located within a geographical location or area within an enterprise network. For example, the region can include the network devices 110-1 . . . 110-n associated with or located within a specific office site of the enterprise.
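By way of illustration only, the three ways of defining a region described above (geography alone, an enterprise network alone, or both) could be captured with a simple composite key; the RegionKey tuple and the example values below are assumptions for the sketch, not part of the described systems.

```python
from typing import NamedTuple, Optional

class RegionKey(NamedTuple):
    enterprise: Optional[str]   # None for purely geographic regions
    geography: Optional[str]    # None for purely enterprise regions

# Region defined by geography alone: a state, with a city as a subregion.
state_region = RegionKey(enterprise=None, geography="California")
city_subregion = RegionKey(enterprise=None, geography="California/San Jose")

# Region defined by an enterprise network alone.
enterprise_region = RegionKey(enterprise="Company A", geography=None)

# Region defined by both: a specific office site of the enterprise.
office_region = RegionKey(enterprise="Company A", geography="San Jose office")
```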


The regions of the network devices 110-1 . . . 110-n can not only be defined according to the previously described classifications but can also be defined based upon the number of network devices 110-1 . . . 110-n in or associated with the region. In a specific implementation, the region can be defined to include only a basic service set (BSS). A BSS includes one network device and all of the stations or other devices (i.e. repeaters) coupled to the network device. The BSS can be identified by a unique basic service set identification (BSSID). The BSSID can be the MAC address of the network device in the BSS. In another implementation, the region can be defined to include an extended service set (ESS) that comprises a plurality of BSSs. The plurality of BSSs can be interconnected so that stations or devices are connected to multiple network devices within the ESS. The ESS can be identified by a unique extended service set identification (ESSID). The ESSID can be the MAC addresses of the network devices in the ESS.
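The BSS and ESS region definitions above can be sketched as simple data structures, following the stated conventions that a BSSID is the MAC address of the network device in the BSS and that an ESSID can be the MAC addresses of the devices in the ESS. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class BSS:
    network_device_mac: str                       # doubles as the BSSID
    station_macs: list[str] = field(default_factory=list)

    @property
    def bssid(self) -> str:
        return self.network_device_mac

@dataclass
class ESS:
    bss_list: list[BSS]

    @property
    def essid(self) -> list[str]:
        # Per the convention above: the MACs of the member network devices.
        return [bss.bssid for bss in self.bss_list]

# A region defined as a single BSS, and one defined as an ESS of two BSSs.
bss_region = BSS("aa:bb:cc:00:00:01", station_macs=["aa:bb:cc:00:00:09"])
ess_region = ESS([bss_region, BSS("aa:bb:cc:00:00:02")])
```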


The system shown in FIG. 1 includes a load balancer system 104 coupled to the regional network device management systems 106-1 . . . 106-n and the network devices 110-1 . . . 110-n through the computer readable medium 108. The load balancer system 104 is also coupled to the interregional redirector system 102 through the computer-readable medium 108. The system can include multiple load balancer systems 104 that can be coupled to and associated with different regional network device management systems 106-1 . . . 106-n. In a specific implementation, the specific regional network device management systems 106-1 . . . 106-n that the load balancer system 104 is coupled to and associated with can be based upon the regions and subregions of the network devices 110-1 . . . 110-n that the engines within the specific regional network device management systems 106-1 . . . 106-n manage. For example, a load balancer system 104 can be coupled to and associated with the regional network device management systems 106-1 . . . 106-n that manage the network devices 110-1 . . . 110-n within an entire state. As the load balancer systems 104 can be coupled to and associated with specific regional network device management systems 106-1 . . . 106-n based on the regions or the subregions of the network devices 110-1 . . . 110-n that the engines within the specific regional network device management systems 106-1 . . . 106-n manage, the load balancer systems 104 can be regionally unique. For example, the load balancer systems 104 can be regionally unique in that they are associated with network devices 110-1 . . . 110-n in the same enterprise network.


Additionally, a specific regional network device management system 106-1 . . . 106-n can be coupled to or associated with multiple load balancer systems 104 based upon the regions and subregions of the network devices 110-1 . . . 110-n managed by the engines in the specific regional network device management system 106-1 . . . 106-n. For example, a specific regional network device management system 106-1 . . . 106-n can be coupled to or associated with a first load balancer system 104 because the specific regional network device management system 106-1 . . . 106-n contains engines that manage network devices 110-1 . . . 110-n in a specific region, such as a state. Additionally, the specific regional network device management system 106-1 . . . 106-n can also be coupled to or associated with a second load balancer system 104 because the specific regional network device management system 106-1 . . . 106-n contains engines that manage network devices 110-1 . . . 110-n in a subregion of the specific region, such as a city within the state.


The load balancer system 104, as will be discussed in greater detail later with respect to FIG. 2, can function to monitor the usage of specific engines grouped into regional network device management systems 106-1 . . . 106-n as the engines within the regional network device management systems 106-1 . . . 106-n manage various network devices 110-1 . . . 110-n. The load balancer system 104 can also function to assign a network device 110-1 . . . 110-n to one or a plurality of engines within one or more of the regional network device management systems 106-1 . . . 106-n so that the assigned engine or engines can manage the assigned network devices 110-1 . . . 110-n. The load balancer system 104 can also function to assign a network device 110-1 . . . 110-n to another load balancer system 104 that can then assign the network devices 110-1 . . . 110-n to another load balancer system 104 or one or a plurality of engines within one or more of the regional network device management systems 106-1 . . . 106-n. In a specific implementation, the load balancer systems 104 can assign a newly purchased network device 110-1 . . . 110-n to either or both another load balancer system 104 and engines in a regional network device management system 106-1 . . . 106-n.


The load balancer systems 104 can assign the network devices 110-1 . . . 110-n to specific engines within the regional network device management systems 106-1 . . . 106-n based upon the regions or subregions of the other network devices 110-1 . . . 110-n that the specific engines manage. The load balancer systems 104 can also assign the network devices 110-1 . . . 110-n to other load balancer systems. The other load balancer systems can be coupled to or associated with specific engines within the regional network device management systems 106-1 . . . 106-n. Specifically, the other load balancer systems can be associated with specific engines within the regional network device management systems 106-1 . . . 106-n based upon the regions or subregions of the network devices 110-1 . . . 110-n that the other load balancer systems assign.


The system shown in the example of FIG. 1 includes an interregional redirector system 102. The interregional redirector system 102 is coupled to the network devices 110-1 . . . 110-n and the load balancer systems 104 through the computer readable medium 108. In a specific implementation, the interregional redirector system 102 is not associated with any specific region. Specifically, the interregional redirector system 102 can be coupled to all of the load balancer systems 104, and through the load balancer systems to all of the regional network device management systems 106-1 . . . 106-n. As the regional network device management systems 106-1 . . . 106-n and the load balancer systems 104 can be regionally unique, and as the interregional redirector system 102 can be coupled to all of the regional network device management systems 106-1 . . . 106-n, the interregional redirector system 102 is associated with every region or subregion. Therefore, the interregional redirector system 102 is not unique to a single region, but is rather globally applicable to at least a subplurality of the regions.


In being coupled to the network devices 110-1 . . . 110-n, the interregional redirector system 102 can function to receive identification information from the network devices 110-1 . . . 110-n and validate the network devices. In being coupled to the load balancer systems 104, the interregional redirector system 102 can further function to assign specific network devices 110-1 . . . 110-n to one or a plurality of load balancer systems 104. As the load balancer systems can be regionally unique, the interregional redirector system 102 can assign the network devices 110-1 . . . 110-n to one or a plurality of specific load balancer systems 104 based upon the regions or subregions of the network devices 110-1 . . . 110-n that are being assigned.


In a specific implementation, a newly purchased network device 110-1 . . . 110-n is configured to be directed to the interregional redirector system 102 when the network device 110-1 . . . 110-n is first powered on by the purchaser of the network device. In being directed to the interregional redirector system 102, the network device 110-1 . . . 110-n can send the identification information of the network device to the interregional redirector system 102. The interregional redirector system 102 can both validate the network device 110-1 . . . 110-n and assign the newly purchased network device 110-1 . . . 110-n, based on the region or subregion of the network device, to one or a plurality of load balancer systems 104. The one or a plurality of load balancer systems 104 can then assign the newly purchased network device 110-1 . . . 110-n to one or a plurality of regional network device management systems 106-1 . . . 106-n. In another example, additional server resources, such as additional new regional network device management systems, can be added, and the load balancer system 104 can assign the newly purchased network device 110-1 . . . 110-n to an added new regional network device management system.


In a specific implementation, the regions or subregions of the network devices 110-1 . . . 110-n can be part of the identification information received by the interregional redirector system 102 from the network devices 110-1 . . . 110-n. In another implementation, the interregional redirector system 102 can trace through the computer readable medium 108 to determine the region or subregion of the newly purchased network device 110-1 . . . 110-n. Alternatively, the interregional redirector system 102 can trace, through the computer readable medium 108, the regions or subregions of already activated network devices 110-1 . . . 110-n that neighbor the newly purchased network device 110-1 . . . 110-n either physically or on a network structure level to determine the region or subregion of the newly purchased network device 110-1 . . . 110-n. The interregional redirector system 102 can determine the regions or subregions of neighboring network devices 110-1 . . . 110-n based upon the MAC addresses of the neighboring network devices 110-1 . . . 110-n. In an alternate implementation, the interregional redirector system 102 can determine the region or subregion of the newly purchased network device 110-1 . . . 110-n through the identity of the purchaser of the network device. Specifically, the interregional redirector system 102 can use the MAC address of the newly purchased network device 110-1 . . . 110-n, received from the network device itself, to determine the identity of the purchaser of the network device 110-1 . . . 110-n and thereby the region or subregion of the network device 110-1 . . . 110-n. For example, the interregional redirector system 102 can determine that company A purchased the network device 110-1 . . . 110-n and, because company A occupies a specific location within a region, such as a city, determine that the city is the region of the newly purchased network device 110-1 . . . 110-n.
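A minimal sketch of the region-resolution fallbacks described above, tried in order: the region carried in the device's own identification information, then the regions of neighboring activated devices traced by MAC address, then the purchaser's location looked up from the device's MAC. The lookup tables, dict keys, and helper names are assumptions for illustration.

```python
from typing import Optional

def resolve_region(device_info: dict,
                   neighbor_regions: dict[str, str],
                   purchaser_by_mac: dict[str, str],
                   purchaser_regions: dict[str, str]) -> Optional[str]:
    # 1. Region or subregion carried in the device's identification info.
    if device_info.get("region"):
        return device_info["region"]
    # 2. Trace already-activated neighboring devices (physical or network
    #    neighbors) and adopt the region recorded for their MAC addresses.
    for mac in device_info.get("neighbor_macs", []):
        if mac in neighbor_regions:
            return neighbor_regions[mac]
    # 3. Fall back to the purchaser's identity: look up who bought this MAC
    #    and use the region of the location that purchaser occupies.
    purchaser = purchaser_by_mac.get(device_info.get("mac", ""))
    if purchaser is not None:
        return purchaser_regions.get(purchaser)
    return None
```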



FIG. 2 depicts a diagram 200 of an example of a system configured to couple a network device to a network device management engine and monitor the network device management engine. The system includes a regional network device management system 202 coupled to network devices 204-1, 204-2 and 204-3. While only three network devices 204-1, 204-2 and 204-3 are shown, the regional network device management system 202 can be coupled to more or fewer than three network devices. The system can also include an administrator system 214 coupled to the regional network device management system 202.


The regional network device management system 202 includes a load balancer system 206, network device management engines 208-1, 208-2 and 208-3, and a network device management engine message queue 210. While only three network device management engines 208-1, 208-2 and 208-3 are shown, the regional network device management system 202 can include more or fewer than three network device management engines.


The network device management engines 208-1, 208-2 and 208-3, within the regional network device management system 202, are coupled to the network devices 204-1, 204-2 and 204-3. A network device (e.g. 204-3) can be coupled to more than one network device management engine (e.g. 208-2 and 208-3). The network device management engines (e.g. 208-1) can manage the flow of data into and out of the network devices (e.g. 204-1) coupled to the specific network device management engines (e.g. 208-1). The network device management engines (e.g. 208-1) can be regionally unique in that they manage the flow of data into and out of the network devices in specific regions or subregions. Furthermore, the network device management engines can be grouped into a network device management system 202 based upon the regions or subregions of the network devices that the network device management engines manage. For example, network device management engines 208-1, 208-2 and 208-3 that manage network devices 204-1, 204-2 and 204-3 within the same region or subregion can be grouped into the same network device management system 202.


In a specific implementation, the network device management engines 208-1, 208-2 and 208-3 can manage the flow of data into and out of the network devices 204-1, 204-2 and 204-3 by controlling routers connected to the network devices. In another implementation, the network device management engines 208-1, 208-2 and 208-3 can control the flow of data into and out of the network devices 204-1, 204-2 and 204-3 by functioning as routers themselves, switching between different data paths coupled to the network device management engines.


The network device management engines 208-1, 208-2 and 208-3 can be servers that perform the previously described functions. In a specific implementation, the network device management engines 208-1, 208-2 and 208-3 can be configured in accordance with the control and provisioning of wireless access points (CAPWAP) protocol. Specifically, the network device management engines 208-1, 208-2 and 208-3 can be CAPWAP servers. CAPWAP servers are servers that can be configured in accordance with the CAPWAP protocol. The CAPWAP protocol is similar to the lightweight access point protocol (LWAPP), but differs in that it includes the integration of a full datagram transport layer security (DTLS) tunnel. Data is transmitted through the CAPWAP protocol over an unencrypted data channel, while control messages are transmitted in the DTLS tunnel. The CAPWAP protocol is described in RFC 5415 (2009), which is hereby incorporated by reference, and IEEE 802.11, which was previously incorporated by reference.


In a specific implementation, the network devices 204-1, 204-2 and 204-3 can function to determine whether a network device management engine 208-1, 208-2 and 208-3 that the network devices are coupled to has failed. For example, if a network device 204-1, 204-2 and 204-3 does not receive traffic from a network device management engine 208-1, 208-2 and 208-3 that is coupled to the network device, then the network device can determine that the network device management engine has failed. Further in the specific implementation, the network device 204-1, 204-2 and 204-3 can alert the load balancer system 206 to a failure of a network device management engine 208-1, 208-2 and 208-3. For example, upon detecting a failure in a network device management engine 208-1, 208-2 and 208-3, the network device can generate and send a network device management engine failure message to the load balancer system 206. In one example, the network device management engine failure message identifies the specific network device management engine 208-1, 208-2 and 208-3 that has failed.
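A minimal sketch of the device-side failure test described above, assuming the absence of traffic from the managing engine for some window is the failure signal; the message format and the silence threshold are illustrative assumptions.

```python
import time
from typing import Optional

def check_engine_traffic(last_traffic_at: float, engine_id: str,
                         silence_limit_s: float = 60.0) -> Optional[dict]:
    """Return an engine-failure message if the managing engine has gone
    silent for longer than the limit; otherwise return None."""
    if time.time() - last_traffic_at > silence_limit_s:
        # The message identifies the specific engine believed to have failed,
        # for delivery to the load balancer system.
        return {"type": "engine_failure", "engine_id": engine_id}
    return None
```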


The network device management engines 208-1, 208-2 and 208-3 are coupled to the network device management engine message queue 210. The network device management engine message queue 210 is coupled to the load balancer system 206. The network device management engine message queue 210 can function to receive status messages sent from the network device management engines 208-1, 208-2 and 208-3. The status messages can be sent from the network device management engines 208-1, 208-2 and 208-3 periodically, after a predetermined interval of time. In another implementation, the status messages can be sent from the network device management engines 208-1, 208-2 and 208-3 when the load balancer system 206 sends a status request to the network device management engines 208-1, 208-2 and 208-3.


The status messages sent from the network device management engines 208-1, 208-2 and 208-3 can include the amount of bandwidth used and the amount of bandwidth available on the network devices coupled to the specific network device management engines 208-1, 208-2 and 208-3. The status messages can also include information about the number of network devices 204-1, 204-2 and 204-3 that the network device management engines 208-1, 208-2 and 208-3 are managing. The status messages can further include the amount of bandwidth on the network device management engines 208-1, 208-2 and 208-3 that each network device 204-1, 204-2 and 204-3 is using. In a specific implementation, the regions and subregions of the network devices 204-1, 204-2 and 204-3 that the network device management engines 208-1, 208-2 and 208-3 are managing are included in the status messages. Further, the status messages can include the amount of memory available to and being used by the network device management engines 208-1, 208-2 and 208-3 and how much memory of the network device management engines is being used by each network device 204-1, 204-2 and 204-3 managed by the network device management engines 208-1, 208-2 and 208-3.
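By way of example but not limitation, the status message fields enumerated above, and their periodic delivery to the message queue, could look like the following sketch; the field names, the 30-second interval, and the snapshot_status() helper are assumptions for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class EngineStatusMessage:
    engine_id: str
    sent_at: float                       # lets the balancer notice silence
    bandwidth_used: float                # e.g. Mbps in use on the engine
    bandwidth_available: float
    device_count: int                    # devices the engine is managing
    per_device_bandwidth: dict[str, float] = field(default_factory=dict)
    device_regions: dict[str, str] = field(default_factory=dict)
    memory_available: int = 0            # bytes free on the engine
    per_device_memory: dict[str, int] = field(default_factory=dict)

def publish_status_forever(engine, queue, interval_s: float = 30.0):
    """Periodically push a status message onto the message queue."""
    while True:
        queue.put(engine.snapshot_status())  # assumed helper returning an
        time.sleep(interval_s)               # EngineStatusMessage
```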


The load balancer system 206 is also coupled to the network devices 204-1, 204-2 and 204-3. The load balancer system 206 can become coupled to a network device 204-1, 204-2 and 204-3 when the network device is assigned to the specific load balancer system 206 of a specific regional network device management system 202. The network devices 204-1, 204-2 and 204-3 can be assigned to a specific load balancer system 206 within a specific regional network device management system 202 by either another load balancer system 104 or the interregional redirector system 102, shown in FIG. 1. As discussed previously with respect to FIG. 1, the network devices 204-1, 204-2 and 204-3 can be assigned to a specific regional network device management system 202 based on the region or subregion of the network device 204-1, 204-2 and 204-3.


The load balancer system 206 can function to assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 when the network device 204-1, 204-2 and 204-3 is assigned to the load balancer system 206. In a specific implementation, the load balancer system 206 can assign a newly purchased network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3. The load balancer system 206 can assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 based upon the region or subregion of the network devices already assigned to a network device management engine. For example, the load balancer system 206 can assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 that already manages network devices in the same or related region or subregion of the network device that is being assigned.


Additionally, the load balancer system 206 can assign the network device 204-1, 204-2 and 204-3 to one or a plurality of network device management engines 208-1, 208-2 and 208-3 based in part upon the status message that the load balancer system 206 reads for each network device management engine 208-1, 208-2 and 208-3 from the network device management engine message queue 210. For example, if the status messages retrieved by the load balancer system 206 indicate that network device management engine 208-1 has a greater amount of available bandwidth than network device management engine 208-2, the load balancer system 206 can assign network device 204-1 to network device management engine 208-1. As a result, network device 204-1 is managed by network device management engine 208-1. In another implementation, the load balancer system 206 can also assign the network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 based not only on the available bandwidth of the network device management engines, but also on the expected amount of resources, such as bandwidth, that the specific network device 204-1, 204-2 and 204-3 will use from the network device management engines.
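A minimal sketch of the bandwidth-aware assignment described above: among the engines whose advertised available bandwidth can absorb the device's expected usage, pick the one with the most headroom. The dict shape mirrors the status-message sketch earlier and is an illustrative assumption.

```python
def pick_engine(statuses: list[dict], expected_bandwidth: float) -> str:
    """statuses: one dict per candidate engine, as read from the queue."""
    # Keep only engines with enough spare bandwidth to absorb the device.
    viable = [s for s in statuses
              if s["bandwidth_available"] >= expected_bandwidth]
    if not viable:
        raise RuntimeError("no engine has enough available bandwidth")
    # Among those, choose the engine with the most headroom.
    return max(viable, key=lambda s: s["bandwidth_available"])["engine_id"]

# Example: engine 208-1 advertises more spare bandwidth than 208-2, so a
# device expecting to use 5 units lands on 208-1.
engines = [{"engine_id": "208-1", "bandwidth_available": 40.0},
           {"engine_id": "208-2", "bandwidth_available": 15.0}]
assert pick_engine(engines, expected_bandwidth=5.0) == "208-1"
```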


The load balancer system 206 can also function to monitor the status of the network device management engines 208-1, 208-2 and 208-3 and reassign the network devices 204-1, 204-2 and 204-3 to other network device management engines in the event of a failure of the network device management engine or engines 208-1, 208-2 and 208-3 to which specific network devices are assigned. For example, the load balancer system 206 can detect a failure in network device management engine 208-1 connected along dashed line 212 to network device 204-2. In response to the failure of network device management engine 208-1, the load balancer system 206 can assign the network device 204-2 to network device management engine 208-2 that is not failing.


In a specific implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the network device management engine does not send a status message to the network device management engine message queue 210. In another implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the engine fails to send a specific number of status messages to the network device management engine message queue 210. The number of status messages that a network device management engine 208-1, 208-2 and 208-3 fails to send to the network device management engine message queue 210 before the load balancer system 206 determines that a failure has occurred can be predefined. In another implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the status message sent by a network device management engine indicates that the resources of the engine have reached a certain level. For example, the load balancer system 206 can detect a failure of a specific network device management engine 208-1, 208-2 and 208-3 when the amount of available bandwidth of the specific network device management engine falls below a certain predefined available bandwidth level.
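The failure tests described above (a missed-status-message budget and a resource floor) can be sketched as follows; the interval, budget, and bandwidth-floor values are illustrative assumptions, as is treating missed messages as elapsed time over the expected reporting period.

```python
import time
from typing import Optional

STATUS_INTERVAL_S = 30.0   # expected period between status messages
MAX_MISSED = 3             # predefined missed-message budget
MIN_BANDWIDTH = 10.0       # predefined available-bandwidth floor

def engine_has_failed(last_status_at: float,
                      bandwidth_available: float,
                      now: Optional[float] = None) -> bool:
    if now is None:
        now = time.time()
    # Test 1: the engine has gone silent for too many status periods.
    if (now - last_status_at) / STATUS_INTERVAL_S >= MAX_MISSED:
        return True
    # Test 2: the engine's reported resources fell below the defined level.
    return bandwidth_available < MIN_BANDWIDTH
```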


In a specific implementation, the load balancer system 206 functions to detect a failure of a network device management engine based on network device management engine failure messages generated by the network devices 204-1, 204-2 and 204-3. For example, if the load balancer system 206 receives a network device management engine failure message from the network devices 204-1, 204-2 and 204-3 identifying the specific network device management engine 208-1, 208-2 and 208-3 that has failed, then the load balancer system 206 can determine/detect that the specific network device management engine 208-1, 208-2 and 208-3 has failed.


In a specific implementation, if the load balancer system 206 detects a failure in one of the network device management engines 208-1, 208-2 and 208-3, the load balancer system 206 can reassign the one or plurality of network devices 204-1, 204-2 and 204-3 connected to the network device management engine to other network device management engines in either the same regional network device management system 202 or different regional network device management systems. Alternatively, the load balancer system 206 can reassign all of the network devices 204-1, 204-2 and 204-3 connected to a failed network device management engine 208-1, 208-2 and 208-3 to one or a plurality of other network device management engines. In yet another alternative, the load balancer system can reassign a portion of the network devices 204-1, 204-2 and 204-3 connected to a failed network device management engine 208-1, 208-2 and 208-3 so that the failed network device management engine is cured and is no longer failing. For example, if a network device management engine is failing due to a lack of available bandwidth, the load balancer system 206 can reassign a portion of the network devices assigned to the failed network device management engine so that the available bandwidth of the failing network device management engine is increased to a level where the network device management engine is no longer failing.
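A minimal sketch of the partial-reassignment ("cure") strategy described above: shed the heaviest consumers from the failing engine until its available bandwidth climbs back over the failure floor. The data shapes and the heaviest-first ordering are illustrative assumptions.

```python
def shed_until_cured(per_device_bandwidth: dict[str, float],
                     bandwidth_available: float,
                     floor: float) -> list[str]:
    """Pick the devices to reassign, heaviest consumers first, until the
    engine's available bandwidth climbs back over the failure floor."""
    to_move: list[str] = []
    for mac, used in sorted(per_device_bandwidth.items(),
                            key=lambda kv: kv[1], reverse=True):
        if bandwidth_available >= floor:
            break
        to_move.append(mac)              # reassigning this device frees
        bandwidth_available += used      # the bandwidth it was consuming
    return to_move

# Example: with 4 units free and a floor of 10, shedding the two heaviest
# devices restores the engine to a non-failing state.
plan = shed_until_cured({"dev-a": 5.0, "dev-b": 2.0, "dev-c": 1.0},
                        bandwidth_available=4.0, floor=10.0)
assert plan == ["dev-a", "dev-b"]
```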


The load balancer system 206 can also be coupled to the administrator system 214, thereby coupling the regional network device management system 202 to the administrator system 214. The load balancer system 206 can send a notification to the administrator system 214 in the event that the load balancer system detects a failure of one of the network device management engines 208-1, 208-2 and 208-3 from the status messages sent to the network device management engine message queue 210. In a specific implementation, when the load balancer system 206 detects a failure of one of the network device management engines 208-1, 208-2 and 208-3 because a specific network device management engine does not send a status message to the network device management engine message queue 210, the load balancer system can send a notification to the administrator system 214. The notification sent to the administrator system can include the reason why the load balancer system 206 detected a fault in a specific network device management engine, such as a failure caused by not sending a status message to the network device management engine message queue 210, or a failure caused by the resources of a specific network device management engine having reached a specific level. The administrator system 214 can include a computer implemented process for fixing the failed network device management engine based upon the reason why the load balancer system 206 detected a failure in a specific network device management engine.



FIG. 3 depicts a diagram 300 of an example of a load balancer system 302. The load balancer system 302 can be configured to assign network devices to network device management engines, monitor the status of the network device management engines, reassign a network device to a new network device management engine if its current network device management engine fails, and notify the administrator system of a failure of a network device management engine.


In the example of FIG. 3, the load balancer system 302 is coupled through a computer-readable medium 304 to an administrator system 306, network devices 308 and the network device management engine message queue 310. The load balancer system 302 includes a message queue access engine 314 coupled through the computer-readable medium to the network device management engine message queue 310. The message queue access engine 314 can be configured to retrieve status information of network device management engines from the status messages in the network device management engine message queue 310. The status messages can include information as to the status of the network device management engines, such as the amount of available bandwidth on the network device management engines. The status messages can also include information as to when a status message was sent to the network device management engine message queue 310, which can be used to determine whether a network device management engine has stopped sending status messages, and thus may have failed. The message queue access engine 314 can be configured to retrieve status information each time a status message is sent to the network device management engine message queue 310. The status information retrieved by the message queue access engine 314 can be stored on a network device management engine status profiles datastore 318.


The load balancer system 302 can also include a network device access engine 312. The network device access engine 312 can be coupled to network devices 308 coupled to the load balancer system 302 through the computer-readable medium 304. The network device access engine 312 can be configured to retrieve or receive information from network devices 308 coupled to the load balancer system 302. In a specific implementation, the network device access engine 312 is configured to retrieve or receive information from newly purchased network devices 308 coupled to the load balancer system 302 for the first time. The newly purchased network devices can become coupled to the load balancer system 302 after being assigned to the load balancer system 302 by either or both another load balancer system or an interregional redirector system, as is shown in FIG. 1. The information retrieved or received by the network device access engine 312 can include the region or subregion of the network device 308. The information can also include the amount of bandwidth that the network device 308 expects to use. The network device access engine 312 can store the information retrieved or received from the network devices 308 on a network device profiles datastore 316.


The load balancer system 302 includes a network device assignment engine 320. The network device assignment engine 320 is coupled to the network device management engine status profiles datastore 318 and the network device profiles datastore 316. The network device assignment engine 320 is also coupled to the network devices 308 through the computer-readable medium 304. The network device assignment engine 320 can function to assign a network device 308 to one or a plurality of network device management engines. Specifically, as the network devices 308 are coupled to the network device assignment engine 320, in assigning the network devices 308 to network device management engines, the network device assignment engine 320 can direct the network devices 308 to couple to network device management engines, so that the engines can manage the flow of data packets into and out of the network devices 308. The network device assignment engine 320 can store the assignment information on the network device management engine assignment profiles datastore 322. The assignment information can include which network devices 308 are assigned to be managed by specific network device management engines.


The network device assignment engine 320 can also function to determine that a network device 308 is assigned to a failing network device management engine and reassign the network device 308 to another one or plurality of network device management engines that are not failing. Specifically, the network device assignment engine 320 can determine that a network device management engine is failing from the information stored in the network device management engine status profiles datastore 318. The network device assignment engine can then determine which network devices 308 are being managed by the specific network device management engine that is failing from the network device management engine assignment profiles datastore 322. The network device assignment engine 320 can then reassign the network devices 308 that are being managed by failing network device management engines to different network device management engines that are not failing. In a specific implementation, in reassigning the network devices 308 to different network device management engines, the network device assignment engine 320 can use the information about the network devices 308 stored in the network device profiles datastore 316. For example, the network device assignment engine 320 can use the information about the region or subregion of the network device 308 to reassign the network device 308 to another network device management engine.


The network device assignment engine 320 can also be coupled to the administrator system notification engine 324. The administrator system notification engine 324 is coupled to the administrator system 306 through the computer-readable medium 304. In a specific implementation, the network device assignment engine 320 can function to initiate the sending of a notification about the failure of a network device management engine to the administrator system 306. Specifically, the network device assignment engine 320 can send failure information about a network device management engine to the administrator system notification engine 324. The failure information can include why the network device assignment engine 320 has determined that a network device management engine has failed. The administrator system notification engine 324 can send a notification to the administrator system that a network device management engine has failed, as can be determined by the network device assignment engine 320. The notification sent by the administrator system notification engine 324 can include the information used by the network device assignment engine 320 to determine that the network device management engine has failed.



FIG. 4 depicts a flowchart 400 of an example of a method for assigning a network device to a regional network device management system. The flowchart starts at module 402 with powering on a network device. In a specific implementation, the network device can be a newly purchased device that is powered on for the first time by the purchaser of the network device.


In the example of FIG. 4, the flowchart continues to module 404 with connecting the network device to the interregional redirector system. The flowchart continues to module 406, where the interregional redirector system receives information about the network device connected at module 404. The information about the network device can include information about the region or subregion of the network device. The information about the network device can also include the MAC address of the network device and information about the purchaser of the network device. The information about the network device can also include the amount of bandwidth that the network device expects to use.
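The information received at module 406 can be pictured as a small record. The dataclass below is a hypothetical sketch of that payload; none of the field names are prescribed by the method.

```python
from dataclasses import dataclass

@dataclass
class NetworkDeviceInfo:
    """Hypothetical shape of the information a network device reports to
    the interregional redirector system on first power-on."""
    mac_address: str                # used later to validate the device
    region: str                     # region or subregion of the device
    purchaser: str                  # information about the purchaser
    expected_bandwidth_mbps: float  # bandwidth the device expects to use

info = NetworkDeviceInfo("00:11:22:33:44:55", "us-west", "Acme Corp", 300.0)
print(info)
```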


The flowchart then continues to module 408, where the network device is validated. In one example, the interregional redirector system validates the network device by using the MAC address received from the network device. The flowchart continues to module 410, where the network device is assigned to a load balancer system. In one example, the interregional redirector system can assign the network device to a load balancer system based on the region or the subregion of the network device. In another example, the load balancer system can be associated with a single or multiple regional network device management systems. In still another example, the region or the subregion of the network device can be determined from the network device information received at module 406.
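A minimal sketch of modules 408 and 410, assuming a registry of known MAC addresses and a region-to-load-balancer table, could read as follows; `KNOWN_MACS`, `LOAD_BALANCERS`, and `validate_and_assign` are hypothetical names.

```python
KNOWN_MACS = {"00:11:22:33:44:55"}   # assumed registry of sold devices
LOAD_BALANCERS = {"us-west": "lb-us-west", "eu-central": "lb-eu-central"}

def validate_and_assign(mac_address, region):
    """Hypothetical sketch of modules 408-410: validate by MAC address,
    then pick the load balancer system for the device's region."""
    if mac_address not in KNOWN_MACS:
        raise ValueError("unknown device; validation failed")
    return LOAD_BALANCERS[region]

print(validate_and_assign("00:11:22:33:44:55", "us-west"))
```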



FIG. 5 depicts a flowchart 500 of an example of a method of a load balancer system assigning a network device to a network device management engine. In one example, the flowchart can further include the load balancer system determining whether or not a network device management engine has failed and reassigning the network devices that are being managed by the failed network device management engine to other network device management engines.


The flowchart begins at module 502, where a load balancer system receives network device information. The network device information can be received from a network device assigned to the load balancer system, or from another load balancer system or an interregional redirector system that assigns the network device to the load balancer system. The network device information can include information about the region or the subregion of the network device assigned to the load balancer system. The flowchart continues to module 504, where the load balancer system receives network device management engine information. The network device management engine information can be status information of the network device management engines. The status information can be determined by the load balancer system from messages sent by the network device management engines to a network device management engine message queue. The status information can include the amount of bandwidth available on a network device management engine. The status information can also include whether or not the network device management engine has failed. The status information can also include any other information related to network device management engines that has been discussed in this paper.


The flowchart continues to module 506, where the load balancer system assigns a network device to a network device management engine or a plurality of network device management engines for management of the network device. As discussed previously, the load balancer system can assign a network device to a network device management engine based on the region of the network device and the regions or subregions of the other network devices that the assigned network device management engine is managing. The load balancer system can also assign a network device to a network device management engine based on the amount of available bandwidth that the network device management engine has, or any other method described in this paper.
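One plausible selection rule combining both criteria is sketched below; the scoring used (region match first, then available bandwidth) is an assumption chosen for illustration, and `choose_engine` is a hypothetical helper.

```python
def choose_engine(device_region, engines):
    """Hypothetical sketch of module 506: prefer an engine already managing
    devices in the same region, breaking ties by available bandwidth."""
    def score(engine):
        same_region = device_region in engine["regions_managed"]
        return (same_region, engine["available_bandwidth_mbps"])
    return max(engines, key=score)

engines = [
    {"id": "engine-a", "regions_managed": {"us-west"},
     "available_bandwidth_mbps": 120.0},
    {"id": "engine-b", "regions_managed": {"eu-central"},
     "available_bandwidth_mbps": 900.0},
]
print(choose_engine("us-west", engines)["id"])   # prints "engine-a"
```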


The flowchart continues to module 508, where the load balancer system retrieves network device management engine status messages from a network device management engine message queue. The status messages can include information as to the amount of available bandwidth that a network device management engine has. The status messages can also include time stamps indicating when each status message was sent to the network device management engine message queue by the network device management engines. The load balancer system can retrieve status messages from the network device management engine message queue continuously, or at set times when the network device management engines are scheduled to send a status message.
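As a sketch of module 508, the queue interaction could be modeled with Python's standard `queue` module; the message fields, including the `sent_at` time stamp, are assumed shapes rather than a format defined by the described systems.

```python
import queue
import time

message_queue = queue.Queue()   # stand-in for the engine message queue

# A management engine enqueues a time-stamped status message.
message_queue.put({"engine_id": "engine-a",
                   "available_bandwidth_mbps": 450.0,
                   "sent_at": time.time()})

def retrieve_status_messages(q):
    """Hypothetical sketch of module 508: drain whatever status messages
    the management engines have placed on the queue."""
    messages = []
    while True:
        try:
            messages.append(q.get_nowait())
        except queue.Empty:
            return messages

print(retrieve_status_messages(message_queue))
```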


The flowchart continues to module 510, where the load balancer system monitors a network device management engine and determines the status of the network device management engine. The load balancer system can use the number of scheduled status messages that a network device management engine failed to send in order to determine the status of the network device management engine. Alternatively, the load balancer system can use the available bandwidth of the network device management engine to determine its status.
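A minimal sketch of this missed-message heuristic follows; the thirty-second interval and the three-message threshold are illustrative assumptions, not values given by the method.

```python
def missed_messages(last_sent_at, now, interval_s=30.0):
    """Hypothetical sketch of module 510: how many scheduled status
    messages has an engine skipped since its last one?"""
    return max(0, int((now - last_sent_at) // interval_s))

def engine_status(last_sent_at, now, interval_s=30.0, max_missed=3):
    # An engine is treated as failed once it has skipped too many
    # consecutive scheduled status messages.
    if missed_messages(last_sent_at, now, interval_s) >= max_missed:
        return "failed"
    return "healthy"

print(engine_status(last_sent_at=0.0, now=95.0))   # 3 missed -> "failed"
```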


The flowchart continues to decision point 512, where the load balancer system determines whether the network device management engine has failed. The load balancer system can make this determination based on the status of the network device management engine determined at module 510. For example, if the network device management engine was supposed to send a status message and did not do so, then the load balancer system can determine that the network device management engine has failed. Alternatively, if the network device management engine does not have enough available bandwidth, or if the network devices coupled to the network device management engine do not have enough available bandwidth, then the load balancer system can determine that the network device management engine has failed. If it is determined at decision point 512 that the network device management engine has not failed, then the flowchart returns to module 508, where the load balancer system retrieves network device management engine status messages. If, at decision point 512, the load balancer system determines that a network device management engine has failed, then the flowchart continues to module 514, where the load balancer system sends a notification to an administrator system that the specific network device management engine has failed. The flowchart then proceeds to module 506, where the load balancer system reassigns the network device to a new network device management engine. In an alternative implementation, if the load balancer system determines at decision point 512 that a network device management engine has failed, the flowchart skips module 514 and proceeds directly to module 506, where the load balancer system reassigns the network device to a new network device management engine.
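The branching at decision point 512, including the alternative implementation that skips the notification of module 514, can be summarized in a short sketch; `handle_engine` and its callback parameters are hypothetical.

```python
def handle_engine(engine, monitor, notify_admin, reassign,
                  notify_on_failure=True):
    """Hypothetical sketch of decision point 512 and modules 514 and 506:
    on failure, optionally notify the administrator system, then reassign;
    otherwise the caller keeps polling status messages (module 508)."""
    if monitor(engine) == "failed":
        if notify_on_failure:   # the alternative implementation sets this False
            notify_admin(engine)
        reassign(engine)

handle_engine("engine-a",
              monitor=lambda e: "failed",
              notify_admin=lambda e: print(f"notify admin: {e} failed"),
              reassign=lambda e: print(f"reassigning devices off {e}"))
```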



FIG. 6 depicts a flowchart 600 of an example of a method for determining that a network device management engine has failed by a network device managed by the network device management engine. The flowchart begins at module 602, with determining that a network device management engine has failed by a network device that is managed by the network device management engine. In one example, the network device determines that the network device management engine has failed when the network device stops receiving traffic from the network device management engine.
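A network device could track this with a simple silence timer. The class below is a hypothetical sketch of module 602; the sixty-second silence limit is an assumed value.

```python
import time

class ManagedDevice:
    """Hypothetical sketch of module 602: a network device decides its
    management engine has failed when traffic from it stops arriving."""

    def __init__(self, silence_limit_s=60.0):
        self.silence_limit_s = silence_limit_s
        self.last_traffic_at = time.monotonic()

    def on_traffic(self):
        # Called whenever traffic arrives from the management engine.
        self.last_traffic_at = time.monotonic()

    def engine_has_failed(self):
        return time.monotonic() - self.last_traffic_at > self.silence_limit_s

device = ManagedDevice()
print(device.engine_has_failed())   # False immediately after traffic
```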


The flowchart continues to module 604, where a network device management engine failure message is sent from the network device to a load balancer system. In one example, the network device generates and sends the network device management engine failure message to the load balancer system after determining that the network device management engine has failed. In another example, the network device management engine failure message identifies the network device management engine that has failed.


The flowchart continues to module 606, where the load balancer system detects that the network device management engine has failed. In one example, the load balancer system detects that the network device management engine has failed after receiving the network device management engine failure message sent from the network device at module 604. In another example, the load balancer system determines the identity of the failed network device management engine from the network device management engine failure message sent by the network device.


The flowchart continues to module 608, where the load balancer system reassigns the network device to a new network device management engine. In one example, the new network device management engine is in the same region or subregion as the network device. In another example, the new network device management engine manages other network devices in the same region or subregion as the network device to which it is being assigned.
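Putting modules 604 through 608 together, the failure message and the load balancer's handling of it might be sketched as follows; both function names and the message shape are assumptions made for illustration.

```python
def device_reports_failure(engine_id):
    """Hypothetical failure message of module 604: it identifies which
    management engine the device believes has failed."""
    return {"type": "engine_failure", "failed_engine": engine_id}

def load_balancer_handle(message, engines_in_region, device_region):
    """Hypothetical sketch of modules 606-608: detect the failure from the
    message, then pick a replacement engine in the device's region."""
    failed = message["failed_engine"]
    candidates = [e for e in engines_in_region[device_region] if e != failed]
    return candidates[0] if candidates else None

msg = device_reports_failure("engine-a")
print(load_balancer_handle(msg, {"us-west": ["engine-a", "engine-b"]},
                           "us-west"))   # prints "engine-b"
```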


While preferred implementations of the present inventive apparatus and method have been described, it is to be understood that the implementations described are illustrative only and that the scope of the implementations of the present inventive apparatus and method is to be defined solely by the appended claims when accorded a full range of equivalents, many variations and modifications naturally occurring to those of skill in the art from a perusal thereof.

Claims
  • 1. A method for building and maintaining a network, the method comprising: operationally connecting an access point in a region that is connectable to one or more client devices in the region to an interregional redirector engine associated with a plurality of regions including the region; receiving at the interregional redirector engine network device information of the access point, the network device information including geography information of the region and enterprise network information of the access point; determining, by the interregional redirector engine, based on the network device information, a load balancer system uniquely associated with the region selectively from a plurality of load balancer systems that are uniquely associated with different regions and coupled to the interregional redirector engine associated with the plurality of regions; assigning, by the interregional redirector engine, the access point to the load balancer system; assigning, by the load balancer system, the access point to a regional network device management engine associated with the region based on the network device information, the regional network device management engine being determined selectively from a plurality of regional network device management engines that are associated with the region and coupled to different sets of one or more access points; managing, by the load balancer system, a failure of the regional network device management engine in communication with the access point based on network device management engine failure information provided from the access point to the load balancer system without passing through the failed regional network device management engine; managing, by the regional network device management engine, the access point in providing access to an enterprise network.
  • 2. The method of claim 1, further comprising validating the access point.
  • 3. The method of claim 1, further comprising: receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines; reassigning, by the load balancer system, the access point to the second network device management engine based on the network device management engine status information from the first and second network device management engines.
  • 4. The method of claim 1, further comprising: receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines; determining the first network device management engine has failed based on the network device management engine status information; reassigning, by the load balancer system, the access point to a second network device management engine of the plurality of network device management engines.
  • 5. The method of claim 4, further comprising assigning, by the load balancer system, a second access point to the second network device management engine.
  • 6. The method of claim 4, further comprising sending, from the load balancer system, a network device management engine status notification to an administration engine, the network device management engine status notification indicating a reason why the first network device management engine was determined as failed.
  • 7. A system for building and maintaining a network, the system comprising: a plurality of access points provided in a region and configured to provide access to an enterprise network to one or more client devices in the region; a plurality of load balancer systems uniquely associated with different regions; a plurality of regional network device management engines associated with the region and coupled to different sets of one or more of the access points; an interregional redirector engine associated with a plurality of regions including the region, coupled to the plurality of load balancer systems and the access points, and configured to: receive network device information from one of the access points, the network device information including geography information of the region and enterprise network information of said one of the access points; determine, based on the network device information, a load balancer system uniquely associated with the region selectively from the plurality of load balancer systems; assign said one of the access points to the load balancer system; the load balancer system configured to assign said one of the access points to a regional network device management engine determined selectively from the plurality of regional network device management engines based on the network device information and manage a failure of the regional network device management engine in communication with said one of the access points based on network device management engine failure information provided from said one of the access points to the load balancer system without passing through the failed regional network device management engine, the regional network device management engine configured to manage the access points in providing access to the enterprise network.
  • 8. The system of claim 7, wherein the interregional redirector engine is further configured to validate said one of the access points.
  • 9. The system of claim 7, wherein the load balancer system is configured to: receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines; reassign said one of the access points to the second network device management engine based on the network device management engine status information from the first and second network device management engines.
  • 10. The system of claim 7, wherein the load balancer system is configured to: receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines; determine that the first network device management engine has failed; reassign the access point to a second network device management engine of the plurality of network device management engines.
  • 11. The system of claim 10, wherein the load balancer system is further configured to assign a second access point of the plurality of access points to the second network device management engine.
  • 12. The system of claim 10, wherein the load balancer system is further configured to send a network device management engine status notification to an administration engine, the network device management engine status notification indicating a reason why the first network device management engine was determined as failed.
  • 13. The method of claim 1, further comprising: receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines; reassigning, by the load balancer system, a first portion of a plurality of access points assigned to the first network device management engine, including the access point, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information received from the first and second network device management engines.
  • 14. The method of claim 1, further comprising: receiving, by a network device management engine message queue uniquely associated with the region, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines; retrieving, by the load balancer system, the network device management engine status information received from the first and second network device management engines from the network device management engine message queue; reassigning, by the load balancer system, a first portion of a plurality of access points assigned to the first network device management engine, including the access point, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information retrieved from the network device management engine message queue.
  • 15. The method of claim 1, further comprising: receiving, by the load balancer system, the network device management engine failure information from the access point coupled to the failed network device management engine, which is a first network device management engine of the plurality of network device management engines; reassigning, by the load balancer system, the access point to a second network device management engine of the plurality of network device management engines, based on the network device management engine failure information from the access point.
  • 16. The system of claim 7, wherein the load balancer system is further configured to: receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines; reassign a first portion of the plurality of access points assigned to the first network device management engine, including said one of the access points, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information received from the first and second network device management engines.
  • 17. The system of claim 7, further comprising: a network device management engine message queue uniquely associated with the region and configured to receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines, wherein the load balancer system is further configured to: retrieve the network device management engine status information received from the first and second network device management engines from the network device management engine message queue; reassign a first portion of a plurality of access points assigned to the first network device management engine, including said one of the access points, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information retrieved from the network device management engine message queue.
  • 18. The system of claim 7, wherein the load balancer system is further configured to: receive the network device management engine failure information from said one of the access points coupled to the failed network device management engine, which is a first network device management engine of the plurality of network device management engines; reassign said one of the access points to a second network device management engine of the plurality of network device management engines, based on the network device management engine failure information from said one of the access points.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application Ser. No. 61/788,621, filed Mar. 15, 2013, which is incorporated herein by reference.

Related Publications (1)
Number Date Country
20140280967 A1 Sep 2014 US
Provisional Applications (1)
Number Date Country
61788621 Mar 2013 US