Container telemetry in data center environments with blade servers and switches

Information

  • Patent Grant
  • Patent Number
    11,695,640
  • Date Filed
    Monday, October 25, 2021
  • Date Issued
    Tuesday, July 4, 2023
Abstract
The present disclosure provides systems, methods, and non-transitory computer-readable storage media for determining container to leaf switch connectivity information in a data center in the presence of blade switches and blade servers. In one aspect of the present disclosure, a method of determining container to leaf switch connectivity information of a data center utilizing at least one blade switch and at least one blade server, includes receiving, at a network controller, link connectivity information that includes south-bound neighboring information between the at least one blade switch of the data center and the at least one blade server of the data center; determining, at the network controller, the container to leaf switch connectivity information of the data center, based on the link connectivity information; and generating a visual representation of a topology of the data center based on the container to leaf switch connectivity information.
Description
TECHNICAL FIELD

The present technology pertains to operations and management of a data center environment which utilizes blade switches and blade servers on which containers providing microservices are deployed.


BACKGROUND

Microservices and containers have been gaining traction in the new age of application development, coupled with, for example, the continuous integration (CI)/continuous deployment (CD) model. One prevalent example of this model is provided by Docker. Containers have introduced new challenges to data centers in the areas of provisioning, monitoring, manageability, and scale. Monitoring and analytics tools are key requirements from IT and data center operators to help them understand network usage, especially with the set of hybrid workloads present within a data center.


Typically, orchestration tools (e.g., Kubernetes, Swarm, Mesos, etc.) are used for deploying containers. On the other hand, discovering and visualizing the container workloads on the servers is achieved by (1) listening to container events (start, stop, etc.) from the container orchestrators along with associated metadata as to which container is running which application, its IP, MAC, image, etc., and (2) gleaning physical connectivity via link discovery protocols (e.g., link layer discovery protocol (LLDP)/Cisco discovery protocol (CDP)) from the server and leaf (top of the rack (ToR)) switches. A data center controller like Data Center Network Manager (DCNM) already has the data center (DC) fabric topology information; therefore, visualizing the container workloads is a matter of correlating the connectivity information with the physical topology.
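For illustration only, the following minimal Python sketch shows the correlation described above for the simple case in which servers connect directly to leaf switches (no blade switch in the path). The data shapes, names, and values (the container events and the server-to-leaf adjacency) are assumptions made for this sketch and are not an actual DCNM or orchestrator API.

```python
# Hypothetical sketch: correlate container events from an orchestrator with the
# server-to-leaf adjacency gleaned from LLDP/CDP, when no blade switch intervenes.

# Container events as an orchestrator might report them (assumed shape).
container_events = [
    {"container": "web-1", "event": "start", "server": "server-A",
     "ip": "10.0.0.11", "mac": "00:11:22:33:44:55", "image": "nginx"},
    {"container": "db-1", "event": "start", "server": "server-B",
     "ip": "10.0.0.12", "mac": "00:11:22:33:44:66", "image": "postgres"},
]

# Server-to-leaf adjacency gleaned from LLDP/CDP frames (assumed shape).
server_to_leaf = {"server-A": "leaf-1", "server-B": "leaf-2"}

def container_to_leaf(events, adjacency):
    """Map each running container to the leaf switch its server attaches to."""
    mapping = {}
    for event in events:
        if event["event"] == "start":
            mapping[event["container"]] = adjacency.get(event["server"])
        elif event["event"] == "stop":
            mapping.pop(event["container"], None)
    return mapping

print(container_to_leaf(container_events, server_to_leaf))
# {'web-1': 'leaf-1', 'db-1': 'leaf-2'}
```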


However, the above discovery and visualization process is inadequate in the case of blade switches deployed within a DC that sit between the blade servers and the leaf switches or ToRs. This is due to the fact that a blade switch consumes the LLDP/CDP frames, and thus the connectivity of a blade server (and consequently of the containers running on such a blade server) to a network leaf becomes invisible and cannot be gleaned easily. Therefore, it is not possible to determine information about container deployments across blade servers within a data center environment that connect to leaf switches within a fabric of a data center via one or more blade switches.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific examples thereof which are illustrated in the appended drawings. Understanding that these drawings depict only examples of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 illustrates a data center structure, according to an aspect of the present disclosure;



FIG. 2 illustrates components of a network device, according to an aspect of the present disclosure;



FIG. 3 illustrates a method according to an aspect of the present disclosure;



FIG. 4 illustrates a method according to an aspect of the present disclosure;



FIG. 5 illustrates a method according to an aspect of the present disclosure; and



FIG. 6 illustrates a method according to an aspect of the present disclosure.





DETAILED DESCRIPTION

Overview


In one aspect of the present disclosure, a method of determining container to leaf switch connectivity information of a data center utilizing at least one blade switch and at least one blade server, includes receiving, at a network controller, link connectivity information that includes south-bound neighboring information between the at least one blade switch of the data center and the at least one blade server of the data center; determining, at the network controller, the container to leaf switch connectivity information of the data center, based on the link connectivity information; and generating a visual representation of a topology of the data center based on the container to leaf switch connectivity information.


In one aspect of the present disclosure, a data center includes at least one blade switch; at least one blade server; and at least one network controller configured to receive link connectivity information that includes south-bound neighboring information between the at least one blade switch of the data center and the at least one blade server of the data center; determine container to leaf switch connectivity information of the data center based on the link connectivity information; and generate a visual representation of a topology of the data center based on the container to leaf switch connectivity information.


In one aspect of the present disclosure, a non-transitory computer-readable medium has computer-readable instructions stored therein, which when executed by a processor, cause the processor to determine container to leaf switch connectivity information of a data center utilizing at least one blade switch and at least one blade server, by receiving link connectivity information that includes south-bound neighboring information between the at least one blade switch of the data center and the at least one blade server of the data center; determining the container to leaf switch connectivity information of the data center, based on the link connectivity information; and generating a visual representation of a topology of the data center based on the container to leaf switch connectivity information.


DESCRIPTION

Various examples of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.


References to one or an example embodiment in the present disclosure can be, but are not necessarily, references to the same example embodiment; and such references mean at least one of the example embodiments.


Reference to “in one example embodiment” or an example embodiment means that a particular feature, structure, or characteristic described in connection with the example embodiment is included in at least one example of the disclosure. The appearances of the phrase “in one example embodiment” in various places in the specification are not necessarily all referring to the same example embodiment, nor are separate or alternative example embodiments mutually exclusive of other example embodiments. Moreover, various features are described which may be exhibited by some example embodiments and not by others. Similarly, various features are described which may be features for some example embodiments but not other example embodiments.


The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various examples given in this specification.


Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to examples of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.


Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of this disclosure. As used herein, the term “and/or,” includes any and all combinations of one or more of the associated listed items.


When an element is referred to as being “connected,” or “coupled,” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. By contrast, when an element is referred to as being “directly connected,” or “directly coupled,” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between,” versus “directly between,” “adjacent,” versus “directly adjacent,” etc.).


The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.


Specific details are provided in the following description to provide a thorough understanding of examples. However, it will be understood by one of ordinary skill in the art that examples may be practiced without these specific details. For example, systems may be shown in block diagrams so as not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring examples.


In the following description, illustrative examples will be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented as program services or functional processes including routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types, and may be implemented using hardware at network elements. Non-limiting examples of such hardware may include one or more Central Processing Units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computers, or the like.


Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.


A data center (DC) is an example of a cloud computing environment that can provide computing services to customers and users using shared resources. Cloud computing can generally include Internet-based computing in which computing resources are dynamically provisioned and allocated to client or user computers or other devices on-demand, from a collection of resources available via the network (e.g., “the cloud”). Cloud computing resources, for example, can include any type of resource, such as computing, storage, and network devices, virtual machines (VMs), containers, etc. For instance, resources may include service devices (firewalls, deep packet inspectors, traffic monitors, load balancers, etc.), compute/processing devices (servers, CPUs, memory, brute force processing capability), storage devices (e.g., network attached storages, storage area network devices), etc. In addition, such resources may be used to support virtual networks, virtual machines (VMs), databases, containers that provide microservices, applications (Apps), etc.


Cloud computing resources may include a “private cloud,” a “public cloud,” and/or a “hybrid cloud.” A “hybrid cloud” can be a cloud infrastructure composed of two or more clouds that inter-operate or federate through technology. In essence, a hybrid cloud is an interaction between private and public clouds where a private cloud joins a public cloud and utilizes public cloud resources in a secure and scalable manner. Cloud computing resources can also be provisioned via virtual networks in an overlay network, such as a VXLAN.



FIG. 1 illustrates a data center structure, according to an aspect of the present disclosure. DC 100 shown in FIG. 1 includes a data center fabric 102. As can be seen, the data center fabric 102 can include spine switches 104-1, 104-2, . . . , 104-n (collectively referred to as spine switches 104, with n being a positive integer greater than 2) connected to leaf switches 106-1, 106-2, 106-3, . . . , 106-n (collectively referred to as leaf switches 106, with n being a positive integer greater than 2 that may or may not have the same value as the ‘n’ used for spine switches 104). The numbers of spine switches 104 and leaf switches 106 are not limited to those shown in FIG. 1 and may be more or fewer (e.g., at least one spine switch 104 and at least one leaf switch 106). Any one of spine switches 104 can be connected to one or more or all of leaf switches 106. However, the interconnectivity of spine switches 104 and leaf switches 106 is not limited to that shown in FIG. 1.
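For illustration only, the spine/leaf wiring of FIG. 1 can be pictured as a simple adjacency structure such as the hypothetical Python sketch below; the switch names are placeholders keyed to the reference numerals in FIG. 1 and are not part of any actual DCNM data model.

```python
# Illustrative, hypothetical representation of the spine/leaf wiring in FIG. 1.
fabric_topology = {
    "spine-104-1": ["leaf-106-1", "leaf-106-2", "leaf-106-3"],
    "spine-104-2": ["leaf-106-1", "leaf-106-2", "leaf-106-3"],
}

def leaves_connected_to(spine, topology):
    """Return the leaf switches directly connected to a given spine switch."""
    return topology.get(spine, [])

print(leaves_connected_to("spine-104-1", fabric_topology))
# ['leaf-106-1', 'leaf-106-2', 'leaf-106-3']
```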


Spine switches 104 can be L3 switches in the data center fabric 102. However, in some cases, the spine switches 104 can also, or otherwise, perform L2 functionalities. Further, the spine switches 104 can support various capabilities, such as 40 or 10 Gbps Ethernet speeds. To this end, the spine switches 104 can include one or more 40 Gigabit Ethernet ports. Each port can also be split to support other speeds. For example, a 40 Gigabit Ethernet port can be split into four 10 Gigabit Ethernet ports. Functionalities of spine switches 104 are not limited to that described above but may include any other known, or to be developed, functionality as known to those skilled in the art.


Spine switches 104 connect to leaf switches 106 in the data center fabric 102. Leaf switches 106 can include access ports (or non-fabric ports) and fabric ports. Fabric ports can provide uplinks to the spine switches 104, while access ports can provide connectivity for devices, hosts, endpoints, VMs, switches, servers running containers or external networks to the data center fabric 102.


Leaf switches 106 can reside at the edge of the data center fabric 102, and can thus represent the physical network edge. In some cases, the leaf switches 106 can be top-of-rack (“ToR”) switches configured according to a ToR architecture. In other cases, the leaf switches 106 can be aggregation switches in any particular topology, such as end-of-row (EoR) or middle-of-row (MoR) topologies.


Leaf switches 106 can be responsible for routing and/or bridging the tenant packets and applying network policies. In some cases, a leaf switch can perform one or more additional functions, such as implementing a mapping cache, sending packets to the proxy function when there is a miss in the cache, encapsulating packets, enforcing ingress or egress policies, etc.


Network connectivity in data center fabric 102 can flow through leaf switches 106. Here, leaf switches 106 can provide servers, resources, endpoints, external networks, switches or VMs access to data center fabric 102, and can connect leaf switches 106 to each other. In some cases, leaf switches 106 can connect Endpoint Groups (EPGs) to data center fabric 102 and/or any external networks. Each EPG can connect to data center fabric 102 via one of the leaf switches 106, for example.


DC 100 further includes one or more blade switches 108-1 and 108-2 (collectively referred to as blade switches 108 or switches 108). Blade switches 108 may be any known or to be developed blade switch, such as various types of Fabric Interconnect (FI) blade switches designed and manufactured by Cisco Technology Inc., of San Jose, Calif. Alternatively, one or more of blade switches 108 may be any other known, or to be developed, blade switches that are available through any other switch/server and data center components manufacturer. The number of blade switches 108 is not limited to that shown in FIG. 1 but can be one or more.


Each of blade switches 108 can be connected to one or more of leaf switches 106.


DC 100 further includes one or more servers 110-1, 110-2 and 110-3 (collectively referred to as servers 110). Servers 110 can be any type of known or to be developed server including, but not limited to, known or to be developed blade servers. Servers 110 can be Unified Computing System (UCS) blade servers designed and manufactured by Cisco Technology Inc., of San Jose, Calif. Alternatively, servers 110 can be any known, or to be developed, non-UCS blade servers available through any other switch/server and data center components manufacturer. The number of servers 110 is not limited to that shown in FIG. 1 and may be one or more servers.


In one example, one or more of the servers 110 may be a non-blade server that can be directly connected to one or more of leaf switches 106 without first being connected to an intermediary switch such as one of blade switches 108.


Each blade switch 108 can include a north-bound interface and a south-bound interface that identify north-bound and south-bound neighbors of switch 108. In the example of FIG. 1, north-bound neighbors of each of blade switches 108 can be any one or more leaf switches 106, while south-bound neighbors of a blade switch 108 can be any one or more servers 110 connected thereto.


Each one of blade servers 110 can be connected to one or more of blade switches 108 and/or, as described above, be directly connected to one or more of leaf switches 106.


Each of servers 110 may host one or more containers. Containers hosted by each of the servers 110 may be any known or to be developed container available and known to those skilled in the art. Furthermore, while reference is made throughout this disclosure to containers and container workloads hosted by servers 110, servers 110 are not limited to hosting containers but can execute any other mechanism (e.g., a virtual machine) for providing virtual access to applications available thereon for consumers and customers. In one example, a given microservice (e.g., an application) can be provided via a single container or can span multiple containers that can be hosted on a single one of servers 110 or on two or more of servers 110.


In one example, DC 100 includes a UCS manager 112 connected to each of servers 110 that is a blade server designed and manufactured by Cisco Technology Inc., of San Jose, Calif. (e.g., blade servers 110-1 and 110-2 shown in FIG. 1). In one example, one or more blade switches 108 utilized in conjunction with servers 110 are also designed and manufactured by Cisco Technology Inc., of San Jose, Calif. In this case, UCS manager 112 may be a separate physical component having hardware (e.g., a processor and a memory) and computer-readable instructions stored on the memory, which when executed by the processor configure the processor to perform functions related to collecting and determining link connectivity information of blade switch 108 (e.g., north-bound and/or south-bound neighbor information of blade switch 108). UCS manager 112 and its corresponding functionalities will be further described with reference to FIGS. 3 and 4.


In one example, DC 100 includes an agent 114 deployed within each of servers 110 that is a non-UCS server designed and manufactured by manufacturers other than Cisco Technology, Inc. of San Jose, Calif. (e.g., blade server 110-3 shown in FIG. 1). In one example, one or more blade switches 108 utilized in conjunction with servers 110 are also designed and manufactured by manufacturers other than Cisco Technology Inc., of San Jose, Calif. Agents 114, as will be described below, are software running on corresponding ones of servers 110 that can periodically collect neighboring server information for containers on each such blade server, as will be further described below with reference to FIGS. 5 and 6.


In one example and as shown in FIG. 1, servers 110 can be a combination of blade servers designed and manufactured by Cisco Technology, Inc. of San Jose, Calif. (e.g., blade servers 110-1 and 110-2) and blade servers designed and manufactured by manufacturers other than Cisco Technology, Inc. of San Jose, Calif. (e.g., blade server 110-3). Accordingly, DC 100 can include a combination of UCS manager 112 and one or more agents 114.


In another example, blade servers 110 of DC 100 are homogenous, meaning that either all of blade servers 110 are designed and manufactured by Cisco Technology, Inc. of San Jose, Calif. (in which case DC 100 does not include any agent 114) or all of blade servers 110 are designed and manufactured by manufacturers other than Cisco Technology, Inc. of San Jose, Calif. (in which case DC 100 does not include UCS manager 112).


DC 100 further includes an orchestration tool 116. Orchestration tool 116 may be any known or to be developed orchestration tool connected to servers 110 that, among other functionalities, can be used to deploy containers on servers 110.


DC 100 also includes a data center network manager (DCNM) 118. DCNM 118 can also be referred to as controller 118. DCNM 118 can be separate hardware (a processor and a memory) running computer-readable instructions that transform the processor into a special purpose processor for running network management functionalities, which include, but are not limited to, providing an overview of DC 100 for operators (for purposes of provisioning, monitoring, and managing DC 100, understanding network usage, etc.). FIG. 1 illustrates an example of a visual illustration 120 of DC 100 provided by DCNM 118. DCNM 118 can communicate with various components of DC 100 via any known, or to be developed, wired and/or wireless communication means, protocols, etc., as shown in FIG. 1.



FIG. 2 illustrates components of a network device, according to an aspect of the present disclosure. Network device 250 shown in FIG. 2 can be DCNM 118, UCS manager 112 and/or agent 114 described above with reference to FIG. 1. Network device 250 includes a chipset architecture that can be used in executing the described method and generating and displaying a graphical user interface (GUI) (e.g., visual illustration 120 of FIG. 1). Network device 250 is an example of computer hardware, software, and firmware that can be used to implement the disclosed technology. Network device 250 can include a processor 255, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor 255 can communicate with a chipset 260 that can control input to and output from processor 255. In this example, chipset 260 outputs information to output 265, such as a display, and can read and write information to storage device 270, which can include magnetic media and solid state media, for example. Chipset 260 can also read data from and write data to RAM 275. A bridge 280 for interfacing with a variety of user interface components 285 can be provided for interfacing with chipset 260. Such user interface components 285 can include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to network device 250 can come from any of a variety of sources, machine generated and/or human generated.


Chipset 260 can also interface with one or more communication interfaces 290 that can have different physical interfaces. Such communication interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein can include receiving ordered datasets over the physical interface or be generated by the machine itself by processor 255 analyzing data stored in storage 270 or 275. Further, the machine can receive inputs from a user via user interface components 285 and execute appropriate functions, such as browsing functions by interpreting these inputs using processor 255.


It can be appreciated that network device 250 can have more than one processor 255 or be part of a group or cluster of computing devices networked together to provide greater processing capability.


Alternatively, Network device 250 can be implemented on one of servers 110 shown in FIG. 1 instead of having a separate dedicated hardware structure as described with reference to FIG. 2.



FIG. 3 illustrates a method according to an aspect of the present disclosure. FIG. 3 will be described from the perspective of UCS manager 112, when one or more of servers (blade servers) 110 and blade switch(es) 108 are manufactured by Cisco Technology, Inc. of San Jose, Calif.


At S300, UCS manager 112 (via a corresponding processor such as processor 255) receives a query from DCNM 118. The query can be communicated to UCS manager 112 by DCNM 118 through any known or to be developed wired and/or wireless communication means. In one example, the query is for link connectivity information. In one example, link connectivity information includes information about connectivity of UCS blade server(s) 110 and blade switch(es) 108 (one or more FIs 108). This connectivity information about the UCS blade server(s) and each blade switch 108 may be referred to as south-bound neighboring information of a corresponding blade switch 108.


In one example, in addition to the south-bound neighboring information described above, link connectivity information also includes information about connectivity of each blade switch 108 to one or more of leaf switches 106. This connectivity information about each blade switch 108 and the one or more leaf switches 106 may be referred to as north-bound neighboring information of a corresponding blade switch 108.


In other words, link connectivity information includes information related to which blade server(s) 110 are connected to which blade switch 108 and which leaf switches 106 are connected to which blade switch 108.
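For illustration only, the link connectivity information can be pictured as two adjacency sets per blade switch, as in the hypothetical Python sketch below; the field names (`south_bound`, `north_bound`) and the device names are assumptions for this sketch, not an actual UCS manager schema.

```python
# Assumed, illustrative shape of link connectivity information: for each blade
# switch, its south-bound neighbors (blade servers) and north-bound neighbors
# (leaf switches). Field and device names are hypothetical.
link_connectivity = {
    "blade-switch-108-1": {
        "south_bound": ["blade-server-110-1", "blade-server-110-2"],
        "north_bound": ["leaf-106-1", "leaf-106-2"],
    },
    "blade-switch-108-2": {
        "south_bound": ["blade-server-110-3"],
        "north_bound": ["leaf-106-3"],
    },
}

print(link_connectivity["blade-switch-108-1"]["south_bound"])
# ['blade-server-110-1', 'blade-server-110-2']
```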


At S310, UCS manager 112 determines link connectivity information. In one example, UCS manager 112 determines link connectivity information by determining LLDP and/or CDP neighbor information of each blade switch 108 obtained from inspecting LLDP/CDP frames. By inspecting LLDP/CDP frames between each blade switch 108 and one or more leaf switches 106, UCS manager 112 determines which leaf switches are connected to which blade switch 108.


Similarly, by inspecting LLDP/CDP frames between each blade server 110 and each blade switch 108, UCS manager 112 determines which blade server(s) 110 are connected to which blade switch 108.


In one example, UCS manager 112 performs the above process S310 for every blade switch 108 available and operating within DC 100.
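For illustration only, S310 could be sketched as below, assuming a hypothetical per-blade-switch LLDP/CDP neighbor table in which each neighbor is tagged with its role; the function and data names are placeholders, not UCS manager APIs.

```python
def build_link_connectivity(blade_switches, lldp_neighbors):
    """Derive per-blade-switch link connectivity from LLDP/CDP neighbor tables.

    `lldp_neighbors` maps a blade switch to (neighbor, role) entries gleaned
    from LLDP/CDP frames; the structure and the role labels ('leaf',
    'blade-server') are assumptions for illustration.
    """
    connectivity = {}
    for switch in blade_switches:
        neighbors = lldp_neighbors.get(switch, [])
        connectivity[switch] = {
            "north_bound": [name for name, role in neighbors if role == "leaf"],
            "south_bound": [name for name, role in neighbors if role == "blade-server"],
        }
    return connectivity

# Hypothetical neighbor table for one blade switch.
neighbors = {
    "blade-switch-108-1": [
        ("leaf-106-1", "leaf"),
        ("blade-server-110-1", "blade-server"),
    ],
}
print(build_link_connectivity(["blade-switch-108-1"], neighbors))
# {'blade-switch-108-1': {'north_bound': ['leaf-106-1'],
#                         'south_bound': ['blade-server-110-1']}}
```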


At S320, UCS manager 112 returns (sends) the link connectivity information as determined at S310 to DCNM 118. UCS manager 112 can send the link connectivity information to DCNM 118 through any known or to be developed wired and/or wireless communication means.



FIG. 4 illustrates a method, according to an aspect of the present disclosure. FIG. 4 will be described from the perspective of DCNM 118.


At S400, DCNM 118 (via a corresponding processor such as processor 255) sends a query (e.g., a query for link connectivity information) to UCS manager 112 (network component), as described above with reference to FIG. 3. The query can be communicated to UCS manager 112 by DCNM 118 through any known or to be developed wired and/or wireless communication means.


At S410, DCNM 118 receives link connectivity information determined and sent by UCS manager 112 (as described above with reference to S310 and S320 of FIG. 3).


At S420, DCNM 118 correlates the received link connectivity information with container workload information available to DCNM 118. Container workload information includes information indicating which containers are running on which blade server 110. In one example, DCNM 118 receives notifications (which may or may not be periodic) from orchestration tool 116. Such notifications include information on changes to container(s) state (start, stop, etc.) and the server(s) (e.g., server ID) on which each container is hosted.
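For illustration only, the container workload information could be kept current from such notifications as in the hypothetical Python sketch below; the event fields (`container`, `state`, `server_id`) are assumptions for this sketch and not the API of Kubernetes, Swarm, Mesos, or any other orchestrator.

```python
# Hypothetical sketch: maintain container workload information (which container
# runs on which blade server) from orchestrator state-change notifications.
container_workloads = {}  # container name -> server ID

def handle_orchestrator_event(event):
    """Apply a single container state-change notification (assumed fields)."""
    if event["state"] == "start":
        container_workloads[event["container"]] = event["server_id"]
    elif event["state"] == "stop":
        container_workloads.pop(event["container"], None)

handle_orchestrator_event(
    {"container": "web-1", "state": "start", "server_id": "blade-server-110-1"})
handle_orchestrator_event(
    {"container": "db-1", "state": "start", "server_id": "blade-server-110-3"})
print(container_workloads)
# {'web-1': 'blade-server-110-1', 'db-1': 'blade-server-110-3'}
```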


At S430 and based on the correlation at S420, DCNM 118 determines which containers are connected to which one of leaf switches 106. This may be referred to as container to leaf switch connectivity information.


At S440 and based on the information determined at S430, DCNM 118 generates visual representation 120 of a topology of DC 100 that also illustrates the container to leaf switch connectivity information (i.e., the connectivity between container workloads and each of leaf switches 106), thus providing a more complete and holistic view of DC 100 connectivity and operations.
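For illustration only, S420-S440 (correlating link connectivity with container workload information to obtain container to leaf switch connectivity) could be sketched as below; all data shapes and names are assumptions carried over from the earlier sketches, not an actual DCNM implementation.

```python
def container_to_leaf_connectivity(workloads, link_connectivity):
    """Correlate container workloads with blade-server/blade-switch/leaf links.

    `workloads` maps container -> blade server; `link_connectivity` holds, per
    blade switch, its south-bound (blade server) and north-bound (leaf switch)
    neighbors. Both shapes are hypothetical. Returns container -> leaf switches.
    """
    result = {}
    for container, server in workloads.items():
        leaves = set()
        for info in link_connectivity.values():
            if server in info["south_bound"]:
                leaves.update(info["north_bound"])
        result[container] = sorted(leaves)
    return result

workloads = {"web-1": "blade-server-110-1", "db-1": "blade-server-110-3"}
link_connectivity = {
    "blade-switch-108-1": {"south_bound": ["blade-server-110-1"],
                           "north_bound": ["leaf-106-1", "leaf-106-2"]},
    "blade-switch-108-2": {"south_bound": ["blade-server-110-3"],
                           "north_bound": ["leaf-106-3"]},
}
print(container_to_leaf_connectivity(workloads, link_connectivity))
# {'web-1': ['leaf-106-1', 'leaf-106-2'], 'db-1': ['leaf-106-3']}
```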



FIG. 5 illustrates a method according to an aspect of the present disclosure. FIG. 5 will be described from the perspective of one or more of agents 114 described above with reference to FIG. 1. FIG. 5 corresponds to examples in which one or more of servers (blade servers) 110 and blade switch(es) 108 are manufactured by a manufacturer other than Cisco Technology, Inc. of San Jose, Calif.


As indicated above, for each blade server on which containers are running that is not manufactured by Cisco Technology Inc., of San Jose, Calif., an agent 114 (e.g., Cisco Container Telemetry Agent) is running thereon. FIG. 5 will now be described from the perspective of one agent 114. However, as evident to those having ordinary skill in the art, the process of FIG. 5 is applicable to each agent 114 running on one of servers 110. Furthermore, and in one example, agent 114 is a set of computer-readable instructions running on a processor of the corresponding server 110.


At S500, agent 114 periodically determines (collects) link connectivity information indicating blade server 110 to blade switch 108 connectivity. In one example, agent 114 determines this link connectivity information by inspecting (monitoring) LLDP/CDP frames between the corresponding blade server 110 and blade switch 108.


In one example, the periodicity according to which agent 114 collects link connectivity information is a configurable parameter determined by network/DC 100 operators (e.g., every few seconds, minutes, hours, days, etc.).


At S510, agent 114 pushes (sends) the collected link connectivity information to a collector. In one example, the collector resides on DCNM 118. The collector makes the link connectivity information available to DCNM 118. In one example, instead of having the intermediate collector component present, agent 114 directly pushes the collected link connectivity information to DCNM 118. Agent 114 can push the collected link connectivity information to the collector/DCNM 118 through any known or to be developed wired and/or wireless communication means.
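For illustration only, the agent behavior in S500-S510 could be sketched as below; `read_lldp_neighbors` and `push_to_collector` are hypothetical placeholders standing in for whatever mechanism a real agent would use to inspect LLDP/CDP frames and to reach the collector, and the collector URL is fictitious.

```python
import time

def read_lldp_neighbors():
    """Placeholder for inspecting LLDP/CDP frames on the blade server's uplinks."""
    return {"blade-server-110-3": ["blade-switch-108-2"]}  # assumed result shape

def push_to_collector(payload, collector_url):
    """Placeholder for the transport to the collector (HTTP, gRPC, etc.)."""
    print(f"pushing {payload} to {collector_url}")

def agent_loop(collector_url, period_seconds=60, iterations=1):
    """Periodically collect server-to-blade-switch adjacency and push it."""
    for _ in range(iterations):  # a real agent would loop indefinitely
        push_to_collector(read_lldp_neighbors(), collector_url)
        time.sleep(period_seconds)

agent_loop("https://dcnm.example.invalid/collector", period_seconds=0)
```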


In one example, instead of agents 114 periodically pushing link connectivity information to collector and ultimately to DCNM 118, DCNM 118 directly pulls (obtains by querying, for example) the link connectivity information from each of servers 110 thus eliminating the need for agents 114 and the collector residing on DCNM 118.



FIG. 6 illustrates a method, according to an aspect of the present disclosure. FIG. 6 will be described from the perspective of DCNM 118.


At S600, DCNM 118 (via a corresponding processor such as processor 255) receives collected link connectivity information from agent 114 (directly from agent 114 or from a collector associated with agent 114, as described above). DCNM 118 can receive the collected link connectivity information through any known or to be developed wired and/or wireless communication means.


At S610, DCNM 118 queries each of leaf switches 106 for corresponding leaf switch link connectivity information through any known or to be developed wired and/or wireless communication means (e.g., for LLDP/CDP neighboring information that indicates which blade switches (e.g., blade switch 108) are connected to each leaf switch 106).


At S620, using the collected link connectivity information received at S600, the leaf switch link connectivity information received at S610, and information available to DCNM 118 that indicates which containers are running on which blade servers 110 (container workload information), DCNM 118 determines container to leaf switch connectivity information that indicates which containers are connected to which ones of leaf switches 106.


At S630 (similar to S440 described above), based on the container to leaf switch connectivity information determined at S620, DCNM 118 generates visual representation 120 of a topology of DC 100 that also illustrates the container to leaf switch connectivity information (i.e., the connectivity between container workloads and each of leaf switches 106), thus providing a more complete and holistic view of DC 100 connectivity and operations.
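For illustration only, the join performed at S620 could be sketched as below; the three inputs (container workload information, agent-collected server-to-blade-switch adjacency, and leaf switch LLDP/CDP neighbor information) use hypothetical shapes and names, not actual DCNM, agent, or switch APIs.

```python
def correlate_container_to_leaf(workloads, server_to_blade_switch, leaf_neighbors):
    """Join the three S620 inputs into container to leaf switch connectivity.

    `workloads`: container -> blade server (container workload information).
    `server_to_blade_switch`: blade server -> blade switches (from agents 114).
    `leaf_neighbors`: leaf switch -> blade switches seen via LLDP/CDP.
    All three shapes are assumptions for illustration.
    """
    result = {}
    for container, server in workloads.items():
        blade_switches = set(server_to_blade_switch.get(server, []))
        result[container] = sorted(
            leaf for leaf, neighbors in leaf_neighbors.items()
            if blade_switches & set(neighbors)
        )
    return result

workloads = {"db-1": "blade-server-110-3"}
server_to_blade_switch = {"blade-server-110-3": ["blade-switch-108-2"]}
leaf_neighbors = {"leaf-106-1": ["blade-switch-108-1"],
                  "leaf-106-3": ["blade-switch-108-2"]}
print(correlate_container_to_leaf(workloads, server_to_blade_switch, leaf_neighbors))
# {'db-1': ['leaf-106-3']}
```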


As indicated above, DC 100 may be a hybrid network of blade servers and switches manufactured by Cisco Technology Inc., of San Jose, Calif. as well as by one or more manufacturers other than Cisco Technology Inc. of San Jose, Calif. Accordingly, in such a hybrid structure of DC 100, a combination of the processes described with reference to FIGS. 3-6 can be performed, where for each blade server/switch manufactured by Cisco Technology Inc. of San Jose, Calif., the processes of FIGS. 3 and 4 are implemented, while for every blade server/switch manufactured by a manufacturer other than Cisco Technology Inc. of San Jose, Calif., the processes of FIGS. 5 and 6 are implemented.


For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.


In some examples the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.


Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.


Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.


The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.


Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Moreover, claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim.

Claims
  • 1. A method comprising: receiving, at a network controller, link connectivity information; receiving, at the network controller, network component link connectivity information; correlating, at the network controller, the link connectivity information with the network component link connectivity information; determining, at the network controller, container connectivity information based on the correlation of the link connectivity information with the network component link connectivity information; and generating a visual representation illustrating the container connectivity information.
  • 2. The method of claim 1, wherein the link connectivity information is received from one or more agents.
  • 3. The method of claim 1, wherein the network component link connectivity information is received from a plurality of network components.
  • 4. The method of claim 3, wherein the container connectivity information indicates that one or more containers are connected with one or more network components of the plurality of network components.
  • 5. The method of claim 3, further comprising: sending one or more queries to the plurality of network components for the network component connectivity information, the plurality of network components determining respective network component connectivity information.
  • 6. The method of claim 5, wherein the determining the respective network component connectivity information comprises inspecting LLDP/CDP frames.
  • 7. The method of claim 1, wherein the network component connectivity information further includes north-bound neighboring information between a first network component and a second network component.
  • 8. The method of claim 1, wherein, the determining the container connectivity information includes correlating the link connectivity information and the network component connectivity information with container workload information available to the network controller, and the container workload information indicating which containers are running on which of the plurality of network components.
  • 9. A system comprising: at least one processor; and at least one memory storing instructions, which when executed by the at least one processor, causes the at least one processor to: receive link connectivity information; receive network component link connectivity information; correlate the link connectivity information with the network component link connectivity information; determine container connectivity information based on the correlation of the link connectivity information with the network component link connectivity information; and generate a visual representation illustrating the container connectivity information.
  • 10. The system of claim 9, wherein the link connectivity information is received from one or more agents.
  • 11. The system of claim 9, wherein the network component link connectivity information is received from a plurality of network components.
  • 12. The system of claim 11, wherein the container connectivity information indicates that one or more containers are connected with one or more network components of the plurality of network components.
  • 13. The system of claim 11, further comprising: sending one or more queries to the plurality of network components for the network component connectivity information, the plurality of network components determining respective network component connectivity information.
  • 14. The system of claim 13, wherein the determining the respective network component connectivity information comprises inspecting LLDP/CDP frames.
  • 15. The system of claim 9, wherein the network component connectivity information further includes north-bound neighboring information between a first network component and a second network component.
  • 16. The system of claim 9, wherein, the determining the container connectivity information includes correlating the link connectivity information and the network component connectivity information with container workload information available to the system, and the container workload information indicating which containers are running on which of the plurality of network components.
  • 17. At least one non-transitory computer-readable medium storing instructions, which when executed by at least one processor, causes the at least one processor to: receive link connectivity information; receive network component link connectivity information; correlate the link connectivity information with the network component link connectivity information; determine container connectivity information based on the correlation of the link connectivity information with the network component link connectivity information; and generate a visual representation illustrating the container connectivity information.
  • 18. The at least one non-transitory computer-readable medium of claim 17, wherein the link connectivity information is received from one or more agents.
  • 19. The at least one non-transitory computer-readable medium of claim 17, wherein the network component link connectivity information is received from a plurality of network components.
  • 20. The at least one non-transitory computer-readable medium of claim 19, wherein the container connectivity information indicates that one or more containers are connected with one or more network components of the plurality of network components.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. Non-Provisional patent application Ser. No. 16/570,886, filed on Sep. 13, 2019, which is a continuation of U.S. Non-Provisional patent application Ser. No. 15/656,381, filed on Jul. 21, 2017, now U.S. Pat. No. 10,425,288, the full disclosure of each of which is hereby expressly incorporated by reference in its entirety.

20160094454 Jain et al. Mar 2016 A1
20160094455 Jain et al. Mar 2016 A1
20160094456 Jain et al. Mar 2016 A1
20160094480 Kulkarni et al. Mar 2016 A1
20160094643 Jain et al. Mar 2016 A1
20160099847 Melander et al. Apr 2016 A1
20160099853 Nedeltchev et al. Apr 2016 A1
20160099864 Akiya et al. Apr 2016 A1
20160105393 Thakkar et al. Apr 2016 A1
20160127184 Bursell May 2016 A1
20160134557 Steinder et al. May 2016 A1
20160156708 Jalan et al. Jun 2016 A1
20160164780 Timmons et al. Jun 2016 A1
20160164914 Madhav et al. Jun 2016 A1
20160182378 Basavaraja et al. Jun 2016 A1
20160188527 Cherian et al. Jun 2016 A1
20160234071 Nambiar et al. Aug 2016 A1
20160239399 Babu et al. Aug 2016 A1
20160253078 Ebtekar et al. Sep 2016 A1
20160254968 Ebtekar et al. Sep 2016 A1
20160261564 Foxhoven et al. Sep 2016 A1
20160277368 Narayanaswamy et al. Sep 2016 A1
20160301603 Park et al. Oct 2016 A1
20160330080 Bhatia et al. Nov 2016 A1
20170005948 Melander et al. Jan 2017 A1
20170024260 Chandrasekaran et al. Jan 2017 A1
20170026294 Basavaraja et al. Jan 2017 A1
20170026470 Bhargava et al. Jan 2017 A1
20170041342 Efremov et al. Feb 2017 A1
20170048100 Delinocci Feb 2017 A1
20170054659 Ergin et al. Feb 2017 A1
20170097841 Chang et al. Apr 2017 A1
20170099188 Chang et al. Apr 2017 A1
20170104755 Arregoces et al. Apr 2017 A1
20170147297 Krishnamurthy et al. May 2017 A1
20170149878 Mutnuru May 2017 A1
20170163531 Kumar et al. Jun 2017 A1
20170171158 Hoy et al. Jun 2017 A1
20170264663 Bicket et al. Sep 2017 A1
20170339070 Chang et al. Nov 2017 A1
20170358111 Madsen Dec 2017 A1
20170359223 Hsu Dec 2017 A1
20180006894 Power et al. Jan 2018 A1
20180034747 Nataraja et al. Feb 2018 A1
20180062930 Dhesikan et al. Mar 2018 A1
20180063025 Nambiar et al. Mar 2018 A1
20180270125 Jain Sep 2018 A1
20180359171 Kommula et al. Dec 2018 A1
20190166013 Shaikh May 2019 A1
Foreign Referenced Citations (14)
Number Date Country
101719930 Jun 2010 CN
101394360 Jul 2011 CN
102164091 Aug 2011 CN
104320342 Jan 2015 CN
105740084 Jul 2016 CN
2228719 Sep 2010 EP
2439637 Apr 2012 EP
2645253 Nov 2014 EP
10-2015-0070676 May 2015 KR
M394537 Dec 2010 TW
WO 2009155574 Dec 2009 WO
WO 2010030915 Mar 2010 WO
WO 2013158707 Oct 2013 WO
WO 2016146011 Sep 2016 WO
Non-Patent Literature Citations (66)
Entry
Cisco Technology, Inc., “Cisco Expands Videoscape TV Platform Into the Cloud,” Jan. 6, 2014, Las Vegas, Nevada, Press Release, 3 pages.
Citrix, “Citrix StoreFront 2.0” White Paper, Proof of Concept Implementation Guide, Citrix Systems, Inc., 2013, 48 pages.
Citrix, “CloudBridge for Microsoft Azure Deployment Guide,” 30 pages.
Citrix, “Deployment Practices and Guidelines for NetScaler 10.5 on Amazon Web Services,” White Paper, citrix.com, 2014, 14 pages.
CSS Corp, "Enterprise Cloud Gateway (ECG)—Policy driven framework for managing multi-cloud environments," originally published on or about Feb. 11, 2012; 1 page; http://www.css-cloud.com/platform/enterprise-cloud-gateway.php.
Fang K., “LISP MAC-EID-TO-RLOC Mapping (LISP based L2VPN),” Network Working Group, Internet Draft, CISCO Systems, Jan. 2012, 12 pages.
Ford, Bryan, et al., Peer-to-Peer Communication Across Network Address Translators, In USENIX Annual Technical Conference, 2005, pp. 179-192.
Gedymin, Adam, “Cloud Computing with an emphasis on Google App Engine,” Sep. 2011, 146 pages.
Amedro, Brian, et al., “An Efficient Framework for Running Applications on Clusters, Grids and Cloud,” 2010, 17 pages.
Author Unknown, "5 Benefits of a Storage Gateway in the Cloud," Blog, Twinstrata, Inc., Jul. 25, 2012, XP055141645, 4 pages, https://web.archive.org/web/20120725092619/http://blog.twinstrata.com/2012/07/10/5-benefits-of-a-storage-gateway-in-the-cloud.
Author Unknown, “Joint Cisco and VMWare Solution for Optimizing Virtual Desktop Delivery: Data Center 3.0: Solutions to Accelerate Data Center Virtualization,” Cisco Systems, Inc. and VMware, Inc., Sep. 2008, 10 pages.
Author Unknown, “A Look at DeltaCloud: The Multi-Cloud API,” Feb. 17, 2012, 4 pages.
Author Unknown, “About Deltacloud,” Apache Software Foundation, Aug. 18, 2013, 1 page.
Author Unknown, “Architecture for Managing Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0102, Jun. 18, 2010, 57 pages.
Author Unknown, "Cloud Infrastructure Management Interface—Common Information Model (CIMI-CIM)," Document No. DSP0264, Version 1.0.0, Dec. 14, 2012, 21 pages.
Author Unknown, “Cloud Infrastructure Management Interface (CIMI) Primer,” Document No. DSP2027, Version 1.0.1, Sep. 12, 2012, 30 pages.
Author Unknown, “cloudControl Documentation,” Aug. 25, 2013, 14 pages.
Author Unknown, “Interoperable Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0101, Nov. 11, 2009, 21 pages.
Author Unknown, “Microsoft Cloud Edge Gateway (MCE) Series Appliance,” Iron Networks, Inc., 2014, 4 pages.
Author Unknown, “Open Data Center Alliance Usage: Virtual Machine (VM) Interoperability in a Hybrid Cloud Environment Rev. 1.2,” Open Data Center Alliance, Inc., 2013, 18 pages.
Author Unknown, “Real-Time Performance Monitoring On Juniper Networks Devices, Tips and Tools for Assessing and Analyzing Network Efficiency,” Juniper Networks, Inc., May 2010, 35 pages.
Author Unknown, "Use Cases and Interactions for Managing Clouds, A White Paper from the Open Cloud Standards Incubator," Version 1.0.0, Document No. DSP-IS0103, Jun. 16, 2010, 75 pages.
Author Unknown, “Apache Ambari Meetup What's New,” Hortonworks Inc., Sep. 2013, 28 pages.
Author Unknown, “Introduction,” Apache Ambari project, Apache Software Foundation, 2014, 1 page.
Baker, F., “Requirements for IP Version 4 Routers,” Jun. 1995, 175 pages, Network Working Group, Cisco Systems.
Beyer, Steffen, "Module "Data::Locations?!"," YAPC::Europe, London, UKJCA, Sep. 22-24, 2000, XP002742700, 15 pages.
Blanchet, M., “A Flexible Method for Managing the Assignment of Bits of an IPv6 Address Block,” Apr. 2003, 8 pages, Network Working Group, Viagnie.
Borovick, Lucinda, et al., “Architecting the Network for the Cloud,” IDC White Paper, Jan. 2011, 8 pages.
Bosch, Greg, “Virtualization,” last modified Apr. 2012 by B. Davison, 33 pages.
Broadcasters Audience Research Board, "What's Next," http://www.barb.co.uk/whats-next, accessed Jul. 22, 2015, 2 pages.
Cisco Systems, Inc. “Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers,” Cisco White Paper, Apr. 2011, 36 pages, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.pdf.
Cisco Systems, Inc., “Cisco Unified Network Services: Overcome Obstacles to Cloud-Ready Deployments,” Cisco White Paper, Jan. 2011, 6 pages.
Cisco Systems, Inc., “Cisco Intercloud Fabric: Hybrid Cloud with Choice, Consistency, Control and Compliance,” Dec. 10, 2014, 22 pages.
Good, Nathan A., “Use Apache Deltacloud to administer multiple instances with a single API,” Dec. 17, 2012, 7 pages.
Herry, William, “Keep It Simple, Stupid: OpenStack nova-scheduler and its algorithm”, May 12, 2012, IBM, 12 pages.
Hewlett-Packard Company, "Virtual context management on network devices", Research Disclosure, vol. 564, No. 60, Apr. 1, 2011, Mason Publications, Hampshire, GB, 524.
Juniper Networks, Inc., “Recreating Real Application Traffic in Junosphere Lab,” Solution Brief, Dec. 2011, 3 pages.
Kenhui, "Musings On Cloud Computing and IT-as-a-Service: [Updated for Havana] OpenStack Compute for vSphere Admins, Part 2: Nova-Scheduler and DRS", Jun. 26, 2013, Cloud Architect Musings, 12 pages.
Kolyshkin, Kirill, "Virtualization in Linux," Sep. 1, 2006, XP055141648, 5 pages, https://web.archive.org/web/20070120205111/http://download.openvz.org/doc/openvz-intro.pdf.
Kumar, S., et al., "Infrastructure Service Forwarding For NSH," Service Function Chaining Internet Draft, draft-kumar-sfc-nsh-forwarding-00, Dec. 5, 2015, 10 pages.
Kunz, Thomas, et al., “OmniCloud—The Secure and Flexible Use of Cloud Storage Services,” 2014, 30 pages.
Lerach, S.R.O., “Golem,” http://www.lerach.cz/en/products/golem, accessed Jul. 22, 2015, 2 pages.
Linthicum, David, "VM Import could be a game changer for hybrid clouds", InfoWorld, Dec. 23, 2010, 4 pages.
Logan, Marcus, “Hybrid Cloud Application Architecture for Elastic Java-Based Web Applications,” F5 Deployment Guide Version 1.1, 2016, 65 pages.
Lynch, Sean, “Monitoring cache with Claspin” Facebook Engineering, Sep. 19, 2012, 5 pages.
Meireles, Fernando Miguel Dias, “Integrated Management of Cloud Computing Resources,” 2013-2014, 286 pages.
Meraki, “meraki releases industry's first cloud-managed routers,” Jan. 13, 2011, 2 pages.
Mu, Shuai, et al., “uLibCloud: Providing High Available and Uniform Accessing to Multiple Cloud Storages,” 2012 IEEE, 8 pages.
Naik, Vijay K., et al., “Harmony: A Desktop Grid for Delivering Enterprise Computations,” Grid Computing, 2003, Fourth International Workshop on Proceedings, Nov. 17, 2003, pp. 1-11.
Nair, Srijith K. et al., “Towards Secure Cloud Bursting, Brokerage and Aggregation,” 2012, 8 pages, www.flexiant.com.
Nielsen, "SimMetry Audience Measurement—Technology," http://www.nielsen-admosphere.eu/products-and-services/simmetry-audience-measurement-technology/, accessed Jul. 22, 2015, 6 pages.
Nielsen, “Television,” http://www.nielsen.com/us/en/solutions/measurement/television.html, accessed Jul. 22, 2015, 4 pages.
Open Stack, “Filter Scheduler,” updated Dec. 17, 2017, 5 pages, accessed on Dec. 18, 2017, https://docs.openstack.org/nova/latest/user/filter-scheduler.html.
Quinn, P., et al., “Network Service Header,” Internet Engineering Task Force Draft, Jul. 3, 2014, 27 pages.
Quinn, P., et al., “Service Function Chaining (SFC) Architecture,” Network Working Group, Internet Draft, draft-quinn-sfc-arch-03.txt, Jan. 22, 2014, 21 pages.
Rabadan, J., et al., "Operational Aspects of Proxy-ARP/ND in EVPN Networks," BESS Workgroup Internet Draft, draft-snr-bess-evpn-proxy-arp-nd-02, Oct. 6, 2015, 22 pages.
Saidi, Ali, et al., “Performance Validation of Network-Intensive Workloads on a Full-System Simulator,” Interaction between Operating System and Computer Architecture Workshop, (IOSCA 2005), Austin, Texas, Oct. 2005, 10 pages.
Shunra, “Shunra for HP Software; Enabling Confidence in Application Performance Before Deployment,” 2010, 2 pages.
Son, Jungmin, “Automatic decision system for efficient resource selection and allocation in inter-clouds,” Jun. 2013, 35 pages.
Sun, Aobing, et al., "IaaS Public Cloud Computing Platform Scheduling Model and Optimization Analysis," Int. J. Communications, Network and System Sciences, 2011, 4, 803-811, 9 pages.
Szymaniak, Michal, et al., "Latency-Driven Replica Placement", vol. 47, No. 8, IPSJ Journal, Aug. 2006, 12 pages.
Toews, Everett, “Introduction to Apache jclouds,” Apr. 7, 2014, 23 pages.
Von Laszewski, Gregor, et al., “Design of a Dynamic Provisioning System for a Federated Cloud and Bare-metal Environment,” 2012, 8 pages.
Wikipedia, "Filter (software)", Wikipedia, Feb. 8, 2014, 2 pages, https://en.wikipedia.org/w/index.php?title=Filter_%28software%29&oldid=594544359.
Wikipedia, "Pipeline (Unix)", Wikipedia, May 4, 2014, 4 pages, https://en.wikipedia.org/w/index.php?title=Pipeline_%28Unix%29&oldid=606980114.
Ye, Xianglong, et al., "A Novel Blocks Placement Strategy for Hadoop," 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, 2012 IEEE, 5 pages.
Related Publications (1)
Number Date Country
20220045912 A1 Feb 2022 US
Continuations (2)
Number Date Country
Parent 16570886 Sep 2019 US
Child 17510075 US
Parent 15656381 Jul 2017 US
Child 16570886 US