The present subject matter relates generally to communication networks, and more particularly, to improved interfaces for troubleshooting and resolving network issues/events.
Consumers and businesses alike are increasingly transitioning from local computing environments (e.g., on-site computers, storage, etc.) to network-based services. Network-based services offer access to customizable, scalable, and powerful computing resources over a network (e.g., the Internet). Typically, the underlying hardware and software that support these network-based services are housed in large-scale data centers, which can include hundreds or even thousands of network devices (e.g., servers, switches, processors, memory, load balancers, virtual machines (VMs), firewalls, etc.). Service providers provision, manage, and/or otherwise configure the hardware/software in these data centers in accordance with service level agreements (SLAs), customer policies, security requirements, and so on. While service providers often offer a variety of customizable and scalable configurations, the sheer number of devices as well as the dynamic nature of changing customer needs often results in complex networks of interconnected devices within each data center.
In an effort to competitively meet customer needs, some service providers employ software-defined network (SDN) models as well as intent-based frameworks—which serve as abstractions of lower-level network functions—to help automate data center resource management, control, and policy enforcement. While many of these automated approaches largely eliminate the laborious task of manually configuring (and re-configuring) network devices, such automation generates a large amount of data related to network status, health, configuration parameters, errors, and so on. In turn, this large amount of data presents new challenges to efficiently troubleshoot and resolve causes of undesired behavior while at the same time minimizing interruption to network services. Moreover, in the context of network administration, resolving network issues/events for datacenter networks presents daunting challenges because network administrators must quickly identify, prioritize, and address issues based on levels of severity, network impact, and so on. In some situations, a network administrator may be in the middle of resolving one network issue when a new higher priority issue (which requires immediate attention) presents itself. In response, the network administrator typically interrupts his/her current progress/analysis on the first issue to address the new higher priority issue. However, after resolving the new higher priority issue, the network administrator loses his/her progress and must begin anew.
The embodiments herein may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
According to one or more embodiments of the disclosure, a monitoring device/node (or module) monitors status information for a plurality of nodes in a datacenter network and identifies a first network event for a time period based on the status information for at least a portion of the plurality of nodes. The monitoring device provides an interface that includes an initial display page, one or more additional display pages, selectable display objects that correspond to one or more of the plurality of nodes, and a representation of the first network event. In addition, the monitoring device generates a dynamic troubleshooting path for the first network event that tracks a user navigation between display pages, a manipulation setting for one or more of the selectable display objects, and a last-current display page. In this fashion, the dynamic troubleshooting path represents a comprehensive troubleshooting context. Notably, the dynamic troubleshooting path can be saved or "parked" as a card object, which can be subsequently retrieved. The monitoring device also provides an indication of a second network event associated with a higher resolution priority relative to the first network event. In response, the monitoring device resets the interface to present the initial display page based on the second network event (e.g., with relevant information/content, etc.). The monitoring device may further retrieve the dynamic troubleshooting path for the first network event (e.g., after resolving the second network event). Retrieving the dynamic troubleshooting path for the first network event causes the monitoring device to present the last-current display page, apply the manipulation setting for the one or more selectable display objects, and load the user navigation between the display pages in a cache so the user can pick up and continue troubleshooting the first network event without re-tracing any prior steps/analysis.
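By way of a non-limiting illustration, the following Python sketch shows one possible in-memory representation of such a dynamic troubleshooting path and the park/retrieve operations described above; the class, field, and method names are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TroubleshootingPath:
    """Comprehensive troubleshooting context for one network event."""
    event_id: str
    page_history: List[str] = field(default_factory=list)         # user navigation between display pages, in visit order
    page_settings: Dict[str, dict] = field(default_factory=dict)  # manipulation settings per display page
    last_current_page: str = ""                                    # page shown when the path was parked


class ParkingLot:
    """Stores parked troubleshooting paths as retrievable 'card' objects."""

    def __init__(self) -> None:
        self._cards: Dict[str, TroubleshootingPath] = {}

    def park(self, path: TroubleshootingPath) -> None:
        # Save the full troubleshooting context so work can resume later.
        self._cards[path.event_id] = path

    def retrieve(self, event_id: str) -> TroubleshootingPath:
        # Reload the saved context: last-current page, settings, and navigation history.
        return self._cards.pop(event_id)


lot = ParkingLot()
lot.park(TroubleshootingPath("event 1", ["dashboard", "change management"],
                             {"change management": {"row": 3}}, "change management"))
card = lot.retrieve("event 1")
print(card.last_current_page)   # change management
```

On retrieval, an interface built along these lines would present last_current_page, re-apply page_settings, and preload page_history into its navigation cache so that no prior troubleshooting steps are retraced.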
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
A communication network is a distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as servers, routers, virtual machines (VMs), personal computers/workstations, etc. Data centers, as mentioned above, can include a number of communication networks and, in this context, distinctions are often made between underlying physical network infrastructures, which form underlay networks, and virtual network infrastructures, which form overlay networks (e.g., software-defined networks (SDNs)). In operation, overlay networks or virtual networks are created and layered over an underlay network.
As mentioned above, some service providers employ software-defined network (SDN) models as well as intent-based frameworks to help automate data center configuration such as resource management, control, and policy enforcement. As used herein, the term "configuration" or "configuration parameters" refers to rules, policies, priorities, protocols, attributes, objects, etc., for routing, forwarding, and/or classifying traffic in datacenter network 100.
In the context of SDN models and intent-based frameworks, network administrators can define configurations for application or software layers and implement such configurations using one or more controllers 102. In some examples, controllers 102 can represent Application Policy Infrastructure Controllers (APICs) for an intent-based Application Centric Infrastructure (ACI) framework, and can operate to provide centralized access to fabric information, application configuration, resource configuration, application-level configuration modeling for an SDN infrastructure, integration with management systems or servers, etc. In this fashion, controllers 102 provide a unified point of automation and management, policy programming, application deployment, and health monitoring for datacenter network 100.
In operation, controllers 102 automate processes to render (e.g., translate, map, etc.) a network intent throughout datacenter network 100 and implement policies to enforce the network intent. For example, controllers 102 may receive a business intent at a high level of abstraction, translate it into a network intent using, e.g., distributed knowledge of the network, input from the customer, etc., and define network policy for subsequent implementation/enforcement.
Fabric 104 includes a number of interconnected network devices such as spines 106 (e.g., "SPINE(s) 1-N") and leafs 108 (e.g., "LEAF(s) 1-N"). Spines 106 forward packets based on forwarding tables and, in some instances, spines 106 can host proxy functions (e.g., parsing encapsulated packets, performing endpoint (EP) address identifier matching/mapping, etc.).
Leafs 108 route and/or bridge customer or tenant packets and apply network configurations (which may be provided by controllers 102) to incoming/outgoing traffic. Leafs 108 directly or indirectly connect fabric 104 to other network devices such as servers 110, a hypervisor 112 (including, one or more VMs 114), applications 116, an endpoint group 118 (including, endpoint (EP) devices 120), and an external network 122 (including, additional endpoints/network devices).
Servers 110 can include one or more virtual switches, routers, tunnel endpoints for tunneling packets between a hosted overlay/logical layer and an underlay layer illustrated by fabric 104.
Hypervisor 112 provides a layer of software, firmware, and/or hardware that creates, manages, and/or runs VMs 114. Hypervisor 112 allows VMs 114 to share hardware resources hosted by server 110.
VMs 114 are virtual machines hosted by hypervisor 112; however, it is also appreciated that VMs 114 may include a VM manager and/or workloads running on server 110. VMs 114 and/or hypervisor 112 may migrate to other servers (not shown), as is appreciated by those skilled in the art. In such instances, configuration or deployment changes may require modifications to settings, configurations, and policies applied to the migrating resources.
Applications 116 can include software applications, services, containers, appliances, functions, service chains, etc. For example, applications 116 can include a firewall, a database, a CDN server, an IDS/IPS, a deep packet inspection service, a message router, a virtual switch, etc. Applications 116 can be distributed, chained, or hosted by multiple endpoints (e.g., servers 110, VMs 114, etc.), or may run or execute entirely from a single endpoint (e.g., EP 120).
Endpoint group 118 organizes endpoints 120 (e.g., physical, logical, and/or virtual devices) based on various attributes. For example, endpoints 120 may be grouped or associated with endpoint group 118 based on VM type, workload type, application type, etc., requirements (e.g., policy requirements, security requirements, QoS requirements, customer requirements, resource requirements, etc.), resource names (e.g., VM name, application name, etc.), profiles, platform or operating system (OS) characteristics (e.g., OS type or name including guest and/or host OS, etc.), associated networks or tenant spaces, policies, a tag, and so on. In this fashion, endpoint groups 118 are used to classify traffic, define relationships, define roles, apply rules to ingress/egress traffic, apply filters or access control lists (ACLs), define communication paths, enforce requirements, and implement security or other configurations associated with endpoints 120.
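As a hedged illustration of such attribute-based grouping (the attribute names and values below are assumptions chosen only for readability), endpoints could be bucketed into endpoint groups as follows:

```python
from collections import defaultdict

def group_endpoints(endpoints, keys=("workload_type", "os", "tenant")):
    """Group endpoint attribute records into buckets keyed by classification attributes."""
    groups = defaultdict(list)
    for ep in endpoints:
        group_key = tuple(ep.get(k) for k in keys)
        groups[group_key].append(ep["name"])
    return dict(groups)

eps = [
    {"name": "vm-web-1", "workload_type": "web", "os": "linux", "tenant": "acme"},
    {"name": "vm-web-2", "workload_type": "web", "os": "linux", "tenant": "acme"},
    {"name": "vm-db-1", "workload_type": "db", "os": "linux", "tenant": "acme"},
]
print(group_endpoints(eps))
# {('web', 'linux', 'acme'): ['vm-web-1', 'vm-web-2'], ('db', 'linux', 'acme'): ['vm-db-1']}
```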
Endpoints 120 can include physical and/or logical or virtual entities, such as servers, clients, VMs, hypervisors, software containers, applications, resources, network devices, workloads, etc. For example, endpoints 120 can be defined as an object to represent a physical device (e.g., server, client, switch, etc.), an application (e.g., web application, database application, etc.), a logical or virtual resource (e.g., a virtual switch, a virtual service appliance, a virtualized network function (VNF), a VM, a service chain, etc.), a container running a software resource (e.g., an application, an appliance, a VNF, a service chain, etc.), storage, a workload or workload engine, etc. Endpoints 120 have an address (e.g., an identity), a location (e.g., host, network segment, virtual routing and forwarding (VRF) instance, domain, etc.), one or more attributes (e.g., name, type, version, patch level, OS name, OS type, etc.), a tag (e.g., security tag), a profile, etc.
Devices such as controllers 102, fabric 104, and other devices in datacenter network 100 can communicate with a compliance/verification system 140. Compliance/verification system 140 generally monitors, remediates, and provides network assurance for datacenter network 100. Network assurance refers to processes for continuously and comprehensively verifying that network devices are properly configured (e.g., based on network models, etc.), enforcing network security policies, checking for compliance against business rules, remediating issues, providing alerts for network events/status, and the like. Compliance/verification system 140 can also include interfaces for communicating network information to network administrators and for troubleshooting network issues/events. Although compliance/verification system 140 is illustrated as a single system, it can include any number of components and sub-components, and may be implemented in a distributed processing architecture over multiple devices/nodes.
Network interface(s) 210 contain the mechanical, electrical, and signaling circuitry for communicating data over links coupled to one or more of the networks shown in communication environment 100. Network interfaces 210 may be configured to transmit and/or receive data using a variety of different communication protocols, as will be understood by those skilled in the art. For example, device 200 may communicate with network devices in datacenter network 100.
Memory 240 comprises a plurality of storage locations that are addressable by processor 220 for storing software programs and data structures associated with the embodiments described herein. Processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate data structures 245. An operating system 242, portions of which are typically resident in memory 240 and executed by processor 220, functionally organizes the device by, inter alia, invoking operations in support of software processes and/or services executing on the device. These software processes and/or services may comprise an illustrative routing process/service 244, and an event resolution process 246, as described herein.
It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). Further, while processes may be shown and/or described separately, those skilled in the art will appreciate that processes may be routines or modules within other processes.
Routing process/services 244 include computer executable instructions executed by processor 220 to perform functions provided by one or more routing protocols, such as the Interior Gateway Protocol (IGP) (e.g., Open Shortest Path First, “OSPF,” and Intermediate-System-to-Intermediate-System, “IS-IS”), the Border Gateway Protocol (BGP), etc., as will be understood by those skilled in the art. These functions may be configured to manage a forwarding information database including, e.g., data used to make forwarding decisions. In particular, changes in the network topology may be communicated among routers 200 using routing protocols, such as the conventional OSPF and IS-IS link-state protocols (e.g., to “converge” to an identical view of the network topology).
Routing process 244 may also perform functions related to virtual routing protocols, such as maintaining a VRF instance, or tunneling protocols, such as for MPLS, generalized MPLS (GMPLS), etc., as will be understood by those skilled in the art. In one embodiment, routing process 244 may be operable to establish dynamic VPN tunnels, such as by using a DMVPN overlay onto the network.
Routing process/services 244 may further be configured to perform additional functions such as security functions, firewall functions, AVC or similar functions, NBAR or similar functions, PfR or similar functions, combinations thereof, or the like. As would be appreciated, routing process/services 244 may be configured to perform any of its respective functions independently or in conjunction with one or more other devices. In other words, in some cases, device 200 may provide supervisory control over the operations of one or more other devices. In other cases, device 200 may be controlled in part by another device that provides supervisory control over the operations of device 200.
As noted above, administration and resolving network issues present daunting challenges that require network administrators to multi-task and triage issues/events based on severity, impact, etc. Typically, when a network administrator switches from addressing a first issue to address a second issue, current progress, analysis, and troubleshooting steps for addressing the first issue are often lost. In turn, the network administrator often must begin anew when revisiting the first issue.
As mentioned, the techniques herein may be employed by compliance/verification system 140 (or other systems/interfaces in communication therewith) to address, troubleshoot, and resolve network issues. In some aspects, the techniques are embodied by event resolution process 246 and include operations performed by a monitoring device/node. For example, event resolution process 246 can include processes to monitor status information for a plurality of nodes in a datacenter network, identify a first network event for a time period based on the status information for at least a portion of the plurality of nodes, and provide an interface (e.g., a display interface) for troubleshooting network events. For example, the interface can include a graphical representation of the first network event, an initial display page, and one or more additional display pages of selectable display objects. In this context, the selectable display objects indicate operational parameters of the plurality of nodes for the first network event.
Event resolution process 246 also includes processes to create a dynamic troubleshooting path for the first network event that tracks user navigation (troubleshooting steps) for a last-current display state, changes between display pages, manipulation of selectable display objects (e.g., for respective display pages), and so on. In this fashion, the dynamic troubleshooting path represents a comprehensive troubleshooting context that includes static information presented by the last-current display page/state as well as dynamic information regarding prior steps taken (e.g., objects manipulated, pages navigated, etc.) to arrive at the last-current display page. Notably, the dynamic troubleshooting path can be saved or "parked" as a card object associated with the interface. Event resolution process 246 includes processes to provide, by the interface, an indication of a second network event associated with a higher resolution priority as compared to the first network event, update the interface to display the initial display page based on the second network event (e.g., with relevant information/content, etc.), and update the selectable display objects for the one or more additional display pages based on the second network event. Event resolution process 246 includes operations to retrieve the dynamic troubleshooting path (e.g., retrieve saved/parked cards) for the first network event based on user input (e.g., a user resolves the second network event and turns back to the first network event). Once retrieved, event resolution process 246 resets the selectable display objects for the one or more additional display pages based on the dynamic troubleshooting path, and resets the interface to display the last-current display state based on the dynamic troubleshooting path.
Illustratively, the techniques described herein may be performed by hardware, software, and/or firmware, such as in accordance with the event resolution process 246, which may include computer executable instructions executed by the processor 220 (or independent processor of interfaces 210) to perform functions relating to the techniques described herein, e.g., in conjunction with routing process 244. For example, the techniques herein may be treated as extensions to conventional protocols and, as such, may be processed by similar components understood in the art that execute those protocols, accordingly. Operationally, the techniques herein may generally be implemented by any central or distributed compliance/verification engine located within the computer network (e.g., compliance/verification system 140, distributed among controllers 102, fabric 104, other network devices, etc.).
As illustrated, compliance/verification system 140 includes components or modules which collectively cooperate to monitor network devices/nodes and generally ensure datacenter operations are in compliance with configurations, policies, etc. For example, compliance/verification system 140 may perform operations similar to an assurance appliance system, described in U.S. patent application Ser. No. 15/790,577, filed on Oct. 23, 2017, the contents of which are incorporated herein by reference in its entirety. As shown, compliance/verification system 140 includes a network modeling component 342, an equivalency checks component 344, a policy analysis component 346, a security adherence module 348, a TCAM utilization module 350, and an alerts/events module 352. In operation, compliance/verification system 140 can identify and verify network topologies (network modeling component 342), perform equivalency checks (equivalency checks component 344) to ensure logical configurations at controllers 102 are consistent with rules rendered by devices of fabric 104 (e.g., configurations in storage, such as TCAM), perform policy analysis (policy analysis component 346) to check semantic and/or syntactic compliance with intent specifications, enforce security policies/contracts (security adherence module 348), monitor and report ternary content-addressable memory (TCAM) use (TCAM utilization module 350) within the network, generate alerts or event notifications (alerts/events module 352), and so on.
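A minimal sketch of such an equivalency check is shown below, assuming rules are modeled as simple tuples; this is not the actual TCAM encoding or controller schema, only an illustration of comparing intended rules against rendered rules.

```python
def equivalency_check(logical_rules: set, rendered_rules: set) -> dict:
    """Compare controller-level logical rules with rules rendered on a fabric node.

    Rules are modeled here as hashable tuples, e.g. (src_epg, dst_epg, port, action).
    """
    return {
        "missing_on_node": logical_rules - rendered_rules,     # intended but not rendered
        "unexpected_on_node": rendered_rules - logical_rules,  # rendered but not intended
        "equivalent": logical_rules == rendered_rules,
    }

logical = {("web", "db", 3306, "permit"), ("web", "app", 8080, "permit")}
rendered = {("web", "db", 3306, "permit")}
print(equivalency_check(logical, rendered))
# {'missing_on_node': {('web', 'app', 8080, 'permit')}, 'unexpected_on_node': set(), 'equivalent': False}
```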
Compliance/verification system 140 provides the above discussed monitored information as well as event notifications or alerts to interfaces 300. Interfaces 300 represent a set of network administration tools for monitoring network traffic as well as troubleshooting and resolving issues within datacenter network 100. Accordingly, interfaces 300 can include the mechanical, electrical, and signaling hardware/software for exchanging data with compliance/verification system 140 and/or for communicating with one or more nodes in datacenter network 100. In operation, interfaces 300 can classify, organize, and aggregate network status information for corresponding devices/nodes into respective time periods or “epochs” and, for each time period or epoch, interfaces 300 can generate and display summary reports to a network administrator, as discussed in greater detail below.
As shown, interfaces 300 include a number of display pages 305, which organize and present information received from compliance/verification system 140. For example, this information can correspond to certain nodes, networks, sub-networks, and include parameters such as memory utilization, configuration parameters, policy analysis information, contract information, etc.
Interfaces 300 also include display objects such as events 310 and parking lot 315. Events 310 (including the illustrated "event 1", "event 2", "event 3", and so on) represent network events or network issues for a given time period or epoch. Events 310 are generated based on, for example, policy conflicts, configuration issues, memory usage, network address conflicts, workload balancing issues, security violations, EPG/SG violations, firewall issues, and so on. With respect to configuration issues, events 310 can include a mismatch between expected configuration parameters at a network controller (e.g., an APIC) and translated configuration parameters at one or more nodes in datacenter network 100 (e.g., leaf nodes 1-N).
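For instance, one hedged sketch of detecting such a mismatch between controller-expected parameters and per-node translated parameters (the parameter names are illustrative only) is:

```python
def find_config_mismatches(expected: dict, translated_by_node: dict) -> list:
    """Compare controller-expected parameters against each node's translated parameters.

    Returns one event record per node/parameter whose translated value differs
    from (or is missing relative to) the expected value.
    """
    events = []
    for node, translated in translated_by_node.items():
        for param, expected_value in expected.items():
            actual = translated.get(param)
            if actual != expected_value:
                events.append({
                    "type": "config_mismatch",
                    "node": node,
                    "parameter": param,
                    "expected": expected_value,
                    "actual": actual,
                })
    return events

expected = {"vlan": 100, "mtu": 9000}
translated = {"leaf1": {"vlan": 100, "mtu": 9000}, "leaf2": {"vlan": 200, "mtu": 9000}}
print(find_config_mismatches(expected, translated))
# [{'type': 'config_mismatch', 'node': 'leaf2', 'parameter': 'vlan', 'expected': 100, 'actual': 200}]
```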
Parking lot 315 represents respective troubleshooting placeholders or bookmarks and includes saved “cards” such as the illustrated “card 1”, “saved card 2”, “saved card 3”, and so on. Importantly, these troubleshooting placeholders or bookmarks represent a dynamic troubleshooting path or a comprehensive troubleshooting context, including information presented by a current display page/state as well as steps taken (e.g., objects manipulated, pages navigated, etc.) prior to arriving at the current display page. For example, a network administrator may navigate through a number of display pages, and manipulate a number of selectable objects to identify a source of a network issue. During the troubleshooting process, the network administrator can save or store the dynamic troubleshooting context, which can include the currently displayed page, the path of links to navigate display pages prior to the currently displayed page, as well as the manipulation of selectable objects on respective display pages. Upon retrieval, the dynamic troubleshooting context will re-populate and/or re-execute the prior navigation steps and display the last current display page. In this fashion, the user is presented with the last saved/current display page along with the prior troubleshooting context. The prior troubleshooting context refers to prior display settings for navigation steps leading up to the last saved/current display page.
By way of simple example, assume a user navigates through three display pages, accessed or navigated in the following order: display page 1, display page 2, and display page 3. The user further executes a save/store operation to save the dynamic troubleshooting path while display page 3 is the current page. The save/store operation saves display settings for the current page 3, display settings for prior display pages 1 and 2 (e.g., selectable object manipulations for respective display pages, etc.), and a navigation path (e.g., links) that represents the order in which display pages 1 and 2 were accessed. Upon retrieval of the dynamic troubleshooting path, the user is initially presented with display page 3 since it represents the last current display page. Display page 3 will be presented with the last saved display settings. In addition, from display page 3, if the user selects an operation to go "back" or display a "previous" page, display page 2 will be shown (since display page 2 was accessed just prior to display page 3). Moreover, any display settings for display page 2 will be applied and displayed such that the user can continue troubleshooting without any lost context. Likewise, prior saved display settings for display page 1 will be applied and displayed.
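The three-page example above could behave roughly as sketched below; this is only an illustration of the described save/retrieve and "back" semantics under assumed page names and settings, not an actual interface implementation.

```python
class InterfaceSession:
    """Minimal sketch of page navigation with a save/restore troubleshooting context."""

    def __init__(self):
        self.history = []      # pages visited, in order
        self.settings = {}     # per-page display settings (object manipulations)
        self.current = None

    def navigate(self, page, settings=None):
        self.history.append(page)
        self.settings[page] = settings or {}
        self.current = page

    def save_path(self):
        # Capture the last-current page, per-page settings, and navigation order.
        return {"history": list(self.history), "settings": dict(self.settings),
                "current": self.current}

    def restore_path(self, saved):
        # Re-present the last-current page and reload prior navigation/settings.
        self.history = list(saved["history"])
        self.settings = dict(saved["settings"])
        self.current = saved["current"]

    def back(self):
        # Return to the previously accessed page with its saved settings applied.
        if len(self.history) > 1:
            self.history.pop()
            self.current = self.history[-1]
        return self.current, self.settings[self.current]


session = InterfaceSession()
session.navigate("display page 1", {"filter": "tenant A"})
session.navigate("display page 2", {"expanded": ["leaf 3"]})
session.navigate("display page 3")
card = session.save_path()            # park while display page 3 is current

session = InterfaceSession()          # context reset for the higher-priority alert
session.restore_path(card)            # retrieval presents display page 3 again
print(session.back())                 # ('display page 2', {'expanded': ['leaf 3']})
```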
The above example may be further appreciated with reference to
As illustrated, troubleshooting operations begin when interfaces 300 receive a ticket or alert—alert 405a—from a ticketing system 405. Ticketing system 405 can represent an information technology helpdesk system or other type of ticketing system, as is appreciated by those skilled in the art. For example, ticketing system 405 can operate as a component part of compliance/verification system 140, interfaces 300, or as a separate/independent system (as shown). As shown, ticketing system 405 communicates with interfaces 300 (and/or with compliance/verification system 140) using one or more Application Program Interfaces (APIs) 407.
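A minimal sketch of normalizing an alert received over such an API is shown below; the payload fields are assumptions and do not reflect the actual API of ticketing system 405 or any particular ticketing product.

```python
import json
from datetime import datetime, timezone

def normalize_ticket(raw_json: str) -> dict:
    """Map a ticketing-system payload to an internal alert structure used by the interface."""
    ticket = json.loads(raw_json)
    return {
        "alert_id": ticket["id"],
        "priority": ticket.get("severity", "minor"),
        "summary": ticket.get("summary", ""),
        # The time stamp lets the monitoring device select the matching epoch/report.
        "timestamp": ticket.get("created_at", datetime.now(timezone.utc).isoformat()),
    }

print(normalize_ticket('{"id": "405a", "severity": "major", "summary": "policy conflict"}'))
```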
Interfaces 300 receive alert 405a and display a graphical alert indication to notify a user (e.g., a network administrator) about a network issue/condition. In turn, the user reviews alert 405a, and begins a troubleshooting process corresponding to display pages 305a. For example, the user navigates through display pages 305a to determine an appropriate time period/epoch associated with alert 405a (step 1), identifies events associated with the epoch (step 2), and identifies corresponding nodes (step 3). As the user navigates display pages 305a, the user will typically manipulate one or more selectable display objects on respective display pages (e.g., to access additional details/information). As discussed in greater detail below, the selectable display objects can be manipulated to provide more granular details regarding a particular node, sets of nodes, network conditions, configurations, parameters, policies, conflicts, and so on. In addition, during the troubleshooting process, interfaces 300 also create a dynamic troubleshooting path that tracks user navigation between the various display pages, manipulation of selectable display objects on respective display pages, and a current display state (e.g., a current display page).
As illustrated, a second, higher resolution priority alert—alert 405b—interrupts the troubleshooting process corresponding to alert 405a. Again, interfaces 300 may indicate receipt of the second alert by a graphical indication (discussed in greater detail below). Notably, the user can determine alert 405b corresponds to a higher resolution priority by a priority/ranking issued by interfaces 300 (and/or by subjective judgment as appropriate).
Due to its higher resolution priority, the user addresses alert 405b before fully resolving network issues related to alert 405a. However, before switching to troubleshoot/address alert 405b, the user saves the dynamic troubleshooting path for alert 405a, as shown by the save or “park” card operations corresponding to card 415.
As discussed above, saving or parking the dynamic troubleshooting path stores the current network context, which includes information presented by a current display page/state as well as steps taken (e.g., objects manipulated, pages navigated, etc.) prior to arriving at the current display page. In addition, saving the dynamic troubleshooting path can also include operations to reset the network context to an initial state for subsequent troubleshooting (e.g., to begin troubleshooting alert 405b). In particular, resetting the network context can include exiting the current display page, returning the user to an initial display page such as a homepage or a dashboard display page, resetting manipulation of selectable objects on corresponding display pages, clearing local navigation caches, and so on.
As illustrated, the save or park operations store the dynamic troubleshooting path, represented by a card 415, which is "parked" in parking lot 315. Once stored, the user addresses alert 405b by navigating through display pages 305b. Notably, display pages 305b can include all or a portion of the same pages as display pages 305a, depending on the nature of alert 405b and the information required to resolve the issues. Once resolved, the user retrieves card 415, which corresponds to the stored dynamic troubleshooting path for alert 405a. Retrieving the stored dynamic troubleshooting path for alert 405a can include operations to recall and apply the prior network context. As discussed above, the retrieval operations reload the prior network context and can include presenting the last-current display page, operations to re-load the navigation path or links for prior display pages (e.g., display pages 305a) in a local cache, as well as re-applying the manipulation (e.g., object conditions) to selectable objects on respective display pages, and so on. In this fashion, the user can continue troubleshooting alert 405a without retracing prior navigation steps. As shown here, the user continues with troubleshooting step 3 (identifying corresponding nodes), step 4 (checking configuration parameters), and step 5 (resolving alert 405a).
Notably, although the above discussion of
The dynamic troubleshooting path(s) are represented, in part, by links between the selectable display objects, where a solid line corresponds to the dynamic troubleshooting path for alert 405a and a dashed line corresponds to the dynamic troubleshooting path for alert 405b. As discussed above, the user begins addressing alert 405a by navigating through various display pages and manipulating selectable display objects. The user interrupts troubleshooting operations for alert 405a upon receipt of alert 405b. Here, the troubleshooting operations are interrupted while the current display page is verify & diagnose display page 520 (as indicated by a bell with an alert icon). The user saves the dynamic troubleshooting path, as shown by card 415, which stores the comprehensive troubleshooting context. Next, the user addresses alert 405b and retrieves the saved dynamic troubleshooting path (e.g., card 415) in order to continue addressing alert 405a with the saved comprehensive network context restored/re-applied—e.g., with the last-current display page presented, the navigation path or links for prior display pages (e.g., display pages 305a) reloaded in a local cache, and the prior manipulation (e.g., object conditions) applied to selectable objects on respective display pages.
For example,
Notably, the network events are typically organized into network reports based on respective time periods. As shown, the network reports are represented by graphical icons (circles) arranged on a timeline 605. The circles can be assigned a specific color scheme (e.g., red for critical events/issues, yellow for warnings, green for status OK, etc.) and/or a certain size or shape based on a level of severity (e.g., larger radii indicate more serious network events than smaller radii, etc.). Moreover, the size or shape of the graphical icons can also represent a number of critical network events for a given network report (e.g., a larger size corresponds to a larger number or quantity of network events, etc.). In some embodiments, the size of the graphical icon for a network report scales logarithmically with this magnitude (e.g., in terms of the number of pixels used to create the graphical icon).
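For example, an icon radius that grows logarithmically with the number of events in a report could be computed along the following lines (the base and scale constants are purely illustrative):

```python
import math

def icon_radius(event_count: int, base_px: int = 6, scale_px: int = 4) -> int:
    """Return a pixel radius that grows logarithmically with the event count."""
    if event_count <= 0:
        return base_px
    return base_px + int(scale_px * math.log10(event_count + 1))

for n in (1, 10, 100, 1000):
    print(n, icon_radius(n))   # 1->7, 10->10, 100->14, 1000->18
```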
As shown, the graphical icon for network report 610 is selected, which causes the interface to display a summary "analysis" window, which highlights the number of network events based on severity. Notably, the network reports may be automatically generated at periodic epochs and/or the network reports may be generated based on specific network events. In this fashion, the network reports and report generation may be customized by a network administrator. As shown, the interfaces generate the network reports based on a hybrid approach—e.g., network reports are automatically generated based on a 15-minute time period and, in addition, network reports are generated based on network events. As mentioned, these network reports are represented by graphical icons arranged on timeline 605.
Dashboard display page 600 also includes a graphical icon—e.g., a pin icon 615—for saving, retrieving, and displaying stored cards corresponding to respective dynamic troubleshooting paths. As discussed in greater detail below, a user can select pin icon 615 to view stored cards, save a current network context as a new card, and/or retrieve a stored card to continue troubleshooting a network event/issue. Moreover, dashboard display page 600 also includes a graphical icon—e.g., a bell icon 620—for notifying a user about new tickets, alerts, issues, and so on. In this fashion, bell icon 620 indicates receipt of an alert (or a ticket), which typically corresponds to degraded performance or a network issue for one or more nodes in the datacenter network. Notably, in one or more additional embodiments, the network reports can also be generated based on a time stamp associated with the alert (in addition to the above-discussed automatic/manual report generation). For example, the alert can be time stamped at origination to reflect a time a network issue occurred. In these embodiments, the network reports may be generated based on the time stamp associated with the alert, where network events that occur at the same time (or close to the same time) as the time stamp associated with the alert are compiled into a network report.
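A hedged sketch of compiling a network report from events whose time stamps fall near an alert's time stamp might look like the following, where the window width is an assumption rather than a disclosed value:

```python
from datetime import datetime, timedelta

def compile_report(alert_time: datetime, events: list, window_minutes: int = 15) -> dict:
    """Collect network events whose time stamps fall within a window around the alert."""
    half = timedelta(minutes=window_minutes) / 2
    matched = [e for e in events if abs(e["time"] - alert_time) <= half]
    return {"alert_time": alert_time, "events": matched, "count": len(matched)}

alert_time = datetime(2019, 7, 12, 10, 0)
events = [
    {"id": "ev1", "time": datetime(2019, 7, 12, 9, 58)},
    {"id": "ev2", "time": datetime(2019, 7, 12, 10, 5)},
    {"id": "ev3", "time": datetime(2019, 7, 12, 11, 30)},
]
print(compile_report(alert_time, events)["count"])   # 2
```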
Dashboard display page 600 further illustrates a summary view of network events and related network information corresponding to network report 610. The summary view includes a “smart events” view, showing the number of network events and an assigned severity level, as well as a “real time change analysis”, showing various tenant breakdowns of policy issues. Notably, the graphical icons shown by dashboard display page 600 may include selectable display objects that can be manipulated (e.g., selected, toggled, moved, etc.) to display more granular details regarding a particular node, sets of nodes, network conditions, configurations, parameters, policies, conflicts, and so on. As one example, a selectable object for the “top tenants by policy issue” includes an “error/type condition” object 625. In operation, a user can click on object 625, which causes the interface to display additional information (e.g., a summary window overlay/menu, embedded text below object 625, and/or navigation to a subsequent display page with additional details regarding object 625). In this fashion, object 625 provides a top level view of an error condition, which can be further broken down into more granular details based on user manipulation/selection.
As illustrated in
As mentioned above, the various display pages can include selectable objects, which cause the interfaces to display additional information and/or navigate to additional display pages to show the additional information. As shown here, the user manipulates a selectable object 710 corresponding to one of the rows (shown in a dashed line box). This manipulation, in turn, causes the interfaces to display an additional table of information corresponding to the “bridge_domain . . . ” network event. In this fashion, the user can drill down into network event details, and determine recommended remedial actions based on details for the failed condition.
For example, manipulating selectable object 710 causes the interfaces to display a description of the network event, its impact on the datacenter network (e.g., datacenter network 100), a list of affected objects, checks (including the failed condition and suggested steps) as well as an event id/code.
As mentioned, a user navigates through the various display pages to troubleshoot network events, which typically correspond to one or more alerts/tickets. Here, the user navigates to change management display page 700 and manipulates selectable object 710 in response to alert 405a. However, the troubleshooting process is interrupted by a new higher resolution priority alert, such as alert 405b. As mentioned above, the interfaces can generate alert icons, which correspond to bell icon 620, to indicate receipt of a new alert/ticket. Here, bell icon 620 changes to include the numeral "1" to indicate receipt of a new alert/ticket. It is appreciated this indication is for purposes of example only and that any number of icons and/or emphasis may be employed to indicate receipt of a new alert/ticket.
Turning now to
As shown, alert 405b is assigned a “critical” resolution priority, which is a higher priority than alert 405a. As discussed above, due to its higher resolution priority, the user interrupts the current troubleshooting progress for alert 405a to address the network events/issues corresponding to alert 405b.
Referring to
In particular, after selecting or manipulating the graphical icons associated with alert 405b, the interfaces reset the network context to an initial state for subsequent troubleshooting (e.g., to begin troubleshooting alert 405b). As mentioned above, the interfaces may exit the current display page, return the user to dashboard display page 600, reset manipulation of selectable objects on corresponding display pages, clear local caches of information regarding navigation for troubleshooting the first network event, and so on. For example, these operations are illustrated by a transition from
In particular, referring to
Collectively,
Procedure 800 begins at step 805 and continues to step 810 where a monitoring device monitors status information for a plurality of nodes in a datacenter network (e.g., network 100). The monitoring device can represent compliance/verification system 140, interfaces 300, combinations thereof, and the like. In operation, the monitoring device generally monitors the nodes in the datacenter network to provide network assurance regarding performance, security policies, configurations, parameters, conflicts, contracts, service level agreements (SLAs), and so on.
The monitoring device also identifies, at step 815, a first network event for a time period based on the status information of at least a portion of the plurality of nodes. For example, the monitoring device can identify network events based on degraded performance or degraded network assurance parameters (e.g., policy violations, conflicts, etc.). In particular, in one embodiment, the monitoring device can compare expected configuration parameters at a network controller against translated configuration parameters at one or more of the plurality of nodes in the datacenter network. In such an embodiment, the monitoring device identifies a network event based on a mismatch between the expected configuration parameters and the translated configuration parameters. These network events may be further compiled into network reports based on respective time periods. Such network events can be presented or displayed by an interface (e.g., interface 300) and arranged on a timeline, as discussed above (e.g., ref.
The monitoring device also provides an interface, such as interfaces 300, at step 820, which can include an initial display page (e.g., a dashboard display page), additional display pages, selectable display objects, and a representation of the first network event (e.g., alert 405a), discussed above. As mentioned above, a user may be troubleshooting the first network event when a second network event (a higher resolution priority issue) occurs. During the troubleshooting operations for the first network event, the monitoring device generates a dynamic troubleshooting path. This dynamic troubleshooting path represents a comprehensive troubleshooting context that tracks a user navigation between display pages, a manipulation setting for the selectable display objects (on respective display pages), and a last-current display page, which represents the last viewed page before saving the dynamic troubleshooting path.
Next, at step 830, the network monitoring device provides an indication of the second network event (e.g., alert 405b). As mentioned, the second network event may be assigned a higher resolution priority relative to the first network event, thus requiring immediate attention. In this situation, the user saves the dynamic troubleshooting path as a card object (e.g., card 415) before addressing the second network event. Saving the dynamic troubleshooting path can further cause the monitoring device to reset, at step 835, the interface to present the initial display page based on the second network event (e.g., present a dashboard display page with a summary view of relevant information for the second network event, etc.). In addition, in some embodiments, resetting the interface can also include resetting manipulation settings for selectable display objects, clearing caches corresponding to the user navigation between display pages for the first network event, and the like.
At some subsequent time (e.g., after resolving the second network event, etc.) the user retrieves the dynamic troubleshooting path for the first network event (e.g., restores card 415), as shown at step 840. Retrieving the dynamic troubleshooting path can modify the interface to present, for example, the last-current display page. In addition, retrieving the dynamic troubleshooting path can also cause the monitoring device to apply the manipulation setting for the one or more selectable display objects, and load the user navigation between the display pages in a cache. In this fashion, the user can pick up and continue troubleshooting without losing any prior analysis and without retracing any prior steps.
Procedure 800 subsequently ends at step 845, but may continue on to step 810 where, as discussed above, the monitoring device monitors status information for the plurality of nodes in the datacenter network. It should be noted that certain steps within procedure 800 may be optional, and further, the steps shown in
The techniques described herein, therefore, create a dynamic troubleshooting path that tracks both static information (e.g., a last-current displayed page) as well as dynamic information (e.g., manipulation of selectable objects, navigation between display pages, etc.). This dynamic troubleshooting path represents a comprehensive troubleshooting context and allows users to save and retrieve prior analysis for troubleshooting operations. The techniques described herein provide solutions to efficiently address network issues, multi-task between network issues, and restore all saved progress (both dynamic and static information) relating to troubleshooting operations.
While there have been shown and described illustrative embodiments to generate, store, and retrieve the dynamic troubleshooting path, and the like, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the embodiments herein. For example, the embodiments have been shown and described herein with respect to certain devices, interfaces, and systems, however it is appreciated that such embodiments are provided for purposes of example, not limitation and that the techniques disclosed herein can be incorporated as part of a larger network monitoring infrastructure.
The foregoing description has been directed to specific embodiments. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the components and/or elements described herein can be implemented as software being stored on a tangible (non-transitory) computer-readable medium, devices, and memories (e.g., disks/CDs/RAM/EEPROM/etc.) having program instructions executing on a computer, hardware, firmware, or a combination thereof. Further, methods describing the various functions and techniques described herein can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. In addition, devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. Instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Accordingly this description is to be taken only by way of example and not to otherwise limit the scope of the embodiments herein. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the embodiments herein.