This disclosure relates to network testing.
The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, is neither expressly nor impliedly admitted as prior art against the present disclosure.
In a traditional packet-based network, a so-called control plane and a so-called data plane both exist directly on each network device. However, in so-called Software Defined Networking, there is an abstraction of the control plane from the network device. The control plane exists in a separate SDN controller layer. It can interact with the data plane of a network device (such as a switch or router) for example via a controller agent on the network device (using a protocol such as “OpenFlow”).
Software defined networking allows more flexible network architectures, potentially with centralised control. The SDN controller is able to oversee the whole network and thus can potentially provide better forwarding policies than would be the case in the traditional network.
However, there is a need to provide appropriate arrangements to test the correct operation of such a network.
The present disclosure addresses or mitigates problems arising from this processing.
Respective aspects and features of the present disclosure are defined in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary, but are not restrictive, of the present technology.
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, in which:
Referring now to the drawings,
Network devices of this type, for example switches and routers, are traditionally thought of as consisting of three planes: the management plane (implemented in
In the arrangement of
The control plane 102 handles decisions such as packet forwarding and routing. On previously proposed network devices such as those of
The data plane uses the forwarding table created by the control plane to process the network data packets. It is sometimes known as the forwarding plane or user plane. Data plane forwarding is processed in dedicated hardware or high-speed code. The data plane is where most of the network device's activity occurs.
In a traditional network (non-SDN), the control plane and data plane both exist directly on the network device as shown schematically in
Software defined networking allows more flexible network architectures, potentially with centralised control. The SDN controller 200 is able to oversee the whole network and thus potentially to provide better forwarding policies than would be the case in the arrangement of
Considering the interfaces with the SDN controller 200 in
The so-called Southbound API (application programming interface) refers to the interface between the SDN controller and the network device. It has this name because it is typically drawn in a diagrammatic representation (which may or may not be entirely different to a physical arrangement or layout) as running generally downwards with respect to the SDN controller. “OpenFlow” is an example of an SDN protocol for the Southbound API.
The Northbound API refers to the communication between applications 220 and the SDN controller 200. It has this name because it is typically drawn as running generally upwards with respect to the SDN controller in a diagrammatic representation (which may or may not be entirely different to a physical arrangement or layout).
SDN can be considered as complementary to Network Virtualisation (NV) and Network Functions Virtualisation (NFV).
NV allows virtual networks to be created that are decoupled from the underlying network hardware. NFV abstracts network functions to a generic server platform rather than relying on a specific hardware device to provide the function.
SDN can provide the flexibility required for a range of NFV use cases.
There are several types of network switch available to purchase in today's market. These include so-called open switches, which are switches in which the hardware and software are separate entities that can be changed independently of each other (in contrast to proprietary switches, in which the software and hardware are integrated).
So-called bare metal switches are hardware only and ready to be loaded with the operating system of the user's choice. A bare metal switch often comes with a boot loader called the Open Network Install Environment (ONIE), which allows the user to load an operating system onto the switch.
So-called white box switches and Brite box switches are bare metal switches but with an example operating system already installed. The latter type may carry a manufacturer's logo or brand name.
These various switches may generally be considered under the term “commercial off-the-shelf (“COTS”) switches”.
Video switching can place potentially high demand on the switch infrastructure, both in terms of data throughput and also timing constraints so as to provide neat (jitter or interruption-free) transitions between different video sources. In the context of video switching, the benefit of an SDN-based approach is that it allows controller software to be written that can be used to control relatively low-cost COTS switches so as to meet such constraints.
Clean video switching requires accurately timing a particular switching operation (or in other words, a change in the data plane forwarding behaviour) so that it occurs during the vertical blanking period of the video signal(s), and/or by a method to be described below.
Clean switching from one video signal to another with a non-SDN switch of the type shown in
In destination-timed switching, the destination may follow the general process of:
In source-timed switching, the sources change their packet header values to trigger a new flow on the switch 430.
In switch-timed switching, the switch 430 changes its flows at a precise time.
These options will be discussed with reference to
Destination-Timed Switching using SDN Switch (
Using an SDN switch can reduce the latency of destination-timed switching (compared to using a non-SDN switch). The switch can begin forwarding packets from the second source in advance of the destination device's request, removing some of the delay.
Referring to
A period t3 of double buffering and switching occurs 514, at the end of which the destination device issues an IGMP leave instruction at a stage 516 to leave the group relating to S1. The switch 430 sends a packet 518 to the controller 200, which then issues a request 520 to the switch to remove S1 from the group, and the switch stops forwarding packets from S1. The destination device 440 is now receiving 522 only from S2.
Source-Timed Switching using SDN Switch (
Source-timed switching involves adding new flow rules to the switch that depend on some header information in the source packets. The sources can then change this header information at a precise time to carry out an accurately-timed switching operation.
Referring to
The BC 500 issues a request to the destination device 440 to change 604 the source to S2. The BC 500 also issues a request 606 to the controller, which in turn instructs 608 the switch to add a new rule to the switch's forwarding table. The new rule is added at a stage 610.
The controller 200 instructs 612 the sources to change headers, which results in the switch 430 issuing an instruction or request 614 to the sources. The packet headers are then changed at the next switching point 616, and a new rule is applied by the switch 430 at a stage 618 to forward packets from S2, so that the destination device 440 is receiving 620 from S2.
Note that the request 614 could be considered to be the ‘same’ message as 612, travelling via the switch. In other words, the message 612 could be an OpenFlow Packet Out command, causing the message 614 to be emitted. However, it can be useful to view the arrangement conceptually as the controller sending the message to Source S2 via the switch.
Note also that the command or message 612/614 could alternatively come from the BC at any time after the new forwarding rule has been applied.
The new rule is already applied (at the stage 608). The stage 618 is the stage when the rule starts matching the packets, because the packet headers change due to the instructions 612 and 614.
As a worked schematic example, assume initially that the source S2 is sending UDP packets with a destination address dst_addr=232.0.0.1 and a source port src_port=5000, and the destination device is on Port 1 of the Switch. The stage 608 sets a rule on the switch “Match dst_addr=232.0.0.1, src_port=5001: Action=Output Port 1”. At this time the source S2 is sending with src_port=5000, so the rule does not currently match any packets, and S2's packets continue to be dropped. The instructions at 612/614 instruct the source S2 to start outputting with src_port=5001 (instead of 5000) at the next switching point. At the stage 616, the source S2 switches from src_port=5000 to 5001. The packets now match the rule 608, and start to be emitted from Port 1 to the Destination Device. At the stage 622, whatever rule was originally causing packets from Source 1 to Port 1 is removed (or timed out), so that by a stage 624 the old rule is no longer present.
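The matching behaviour in this worked example can be illustrated with a short sketch, modelling a flow table as an ordered list of match/action rules. The field names and rule format here are simplified for illustration and are not OpenFlow-accurate:

```python
# Simplified model of the source-timed switching example above: a rule is
# installed in advance that only matches the *future* header values, so it
# lies dormant until the source changes its src_port at the switching point.

def matches(rule, packet):
    """A packet matches a rule if every field the rule names agrees."""
    return all(packet.get(field) == value
               for field, value in rule["match"].items())

def forward(flow_table, packet):
    """Return the action of the first matching rule, or None (drop)."""
    for rule in flow_table:
        if matches(rule, packet):
            return rule["action"]
    return None

# The rule set at the stage 608
flow_table = [{"match": {"dst_addr": "232.0.0.1", "src_port": 5001},
               "action": "output:1"}]

# Before the switching point, S2 still sends with src_port=5000: dropped
assert forward(flow_table, {"dst_addr": "232.0.0.1", "src_port": 5000}) is None

# At the stage 616, S2 switches to src_port=5001: the rule starts matching
assert forward(flow_table, {"dst_addr": "232.0.0.1", "src_port": 5001}) == "output:1"
```

The key design point visible here is that the rule change itself is made well in advance; only the (source-timed) header change determines the precise instant at which forwarding begins.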
Switch-Timed Switching using SDN Switch (
Switch-timed switching involves instructing the switch to update its flow table at a precise time. The switch identifies the appropriate time to update its table either by examining the RTP headers of the packets for end-of-frame information or by being compatible with the Precision Time Protocol (PTP). This method relies on the switch being able to perform the flow table update very quickly; if the packets arrive equally spaced in time, this leaves a relatively small time window in which to do so. However, in example arrangements so-called “packet-shaping” may be used so that there is a longer gap between packets during the vertical blanking period of each video frame.
Referring to
The BC 500 issues a request 706 to the controller 200, which in turn instructs 708 the switch to update the flow table at a specified time. The switch updates the flow table by an operation 710 during a vertical blanking period, which the switch 430 identifies by examining RTP (Real-time Transport Protocol) packet headers or by using the PTP (Precision Time Protocol).
The changes to the flow table imply that after the changes have been made, packets 712 from the source S2 are provided to the switch and are forwarded 714 to the destination device 440 so that after the update to the flow table, the destination device 440 is receiving 716 from the source S2.
Note that the packets 712 will already have been sent to the switch previously. The changes 710 to the forwarding table will cause them to be forwarded from the stage 714 onwards. Although not shown in
An SDN network is defined to a large extent by parameters and programming at the SDN controller 200. In applications such as video switching, it can be at least business-critical that the network operates as expected and as specified. For example, a failure to operate in the correct way could lead to a loss of live coverage of an important event, which could be damaging to the business of a broadcaster.
A testing arrangement, particularly suited to testing of the SDN controller 200, will now be described with reference to
In example embodiments, the SDN controller is an OpenDaylight (ODL) based SDN controller, though other types of SDN controller may be used. The controller defines the behaviour of a Simulated Test Network 810 to be described below; the System Tests verify that this behaviour is correct. Thus the functional behaviour of the SDN controller 800 can be tested indirectly. The controller can be started, stopped and configured/re-configured by a so-called Test Runner 820 to be described below. In the example embodiments this control by the Test Runner 820 can be achieved via an SSH connection using an SSH Library of the Test Runner 820.
This configuration file, or these configuration files, are used to configure the SDN controller 800, and so provide a main subject of the test. The configuration files specify the topology of the network being managed by the controller (‘topology information’), for example defining data interconnections within the test network, as well as the locations and functions (broadcast/unicast/multicast; send/receive) of the network endpoints (‘endpoint information’). The topology information and endpoint information could be one file per configuration, or split into multiple files (such as configA.topology.json & configA.endpoints.json) respectively defining the topological relationship of nodes and the network locations and properties of endpoints as discussed above. The topology information is used to generate a simulated test network or to configure a real test network. The endpoint information is used to determine which types of traffic should be tested, and which endpoints should be involved in each test.
In some examples, a JSON file representing a representative configuration may be created for each network topology that is to be supported. In some examples, configurations describing real-world networks which are exhibiting problems can be used to test and debug issues with actual customer deployments. As mentioned, in these example embodiments, the configuration files are in JSON format.
In other examples, a different type of document or file (or documents or files) defining the network configuration can be employed. In other words, the embodiments are not limited to the use of one or multiple JSON files.
In a general sense, these network configuration files can be static, in the sense that they are generated at the outset of network design, or at least aspects of them can be cumulative in that as a device is added to the network, a portion of a network configuration file relating to the newly added device is added to the existing file. For the purposes of the present tests, however, they are considered as static (in that a test is performed on a particular network configuration, and then if a new configuration is desired to be tested, another test may be performed). For example, there can be a set of test scenarios established for a particular configuration, which the Test Runner activates one after another. In some examples, the test runner could detect all JSON configuration files in a specified directory, and run the test suite on each of those files in turn.
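The directory-scanning behaviour just mentioned might be sketched as follows; `run_test_suite` is a hypothetical stand-in for the real test-suite entry point, and the directory layout is an assumption of this sketch:

```python
from pathlib import Path

def find_configurations(config_dir):
    """Detect all JSON configuration files in the specified directory,
    in a stable (sorted) order."""
    return sorted(Path(config_dir).glob("*.json"))

def run_all(config_dir, run_test_suite):
    """Run the test suite on each detected configuration file in turn,
    collecting one result per configuration."""
    return {config.name: run_test_suite(config)
            for config in find_configurations(config_dir)}
```

Running the suite once per static configuration file matches the scheme described above, in which each test is performed against one particular network configuration at a time.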
Some aspects of the network configuration files can be acquired in an automated operation, for example employing “LLDP” discovery using the so-called Link Layer Discovery Protocol (LLDP) which is a protocol used by network devices for advertising their identity and/or capabilities to neighbours on a network segment.
As mentioned above, more than one file or document may be used. For example, the network definition data or file(s) may comprise, as one file or multiple respective files:
topology data defining a network topology; and
endpoint data defining network locations and functions of network endpoints.
It may be convenient to provide these as separate files or documents, or as file or document fragments, because (for example) one of these may be automatically obtained, for example using LLDP, and the other may be statically or manually established.
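Purely as a hypothetical sketch (the actual schema of the configuration files is not reproduced here, and all field names below are illustrative assumptions), the two kinds of network definition data might take a shape such as:

```python
import json

# Hypothetical topology information: data interconnections within the network
topology = {
    "switches": ["sw1", "sw2"],
    "links": [{"from": "sw1:1", "to": "sw2:1"}],
}

# Hypothetical endpoint information: network locations and functions
# (send/receive; traffic types handled) of the network endpoints
endpoints = {
    "S1": {"attached_to": "sw1:2", "function": "send",    "traffic": ["multicast"]},
    "R1": {"attached_to": "sw2:2", "function": "receive", "traffic": ["multicast"]},
}

# Either one combined document, or split files such as
# configA.topology.json and configA.endpoints.json
config = {"topology": topology, "endpoints": endpoints}
print(json.dumps(config, indent=2))
```

Keeping the two parts as separable documents supports the division of labour described above: the topology part could be LLDP-derived while the endpoint part is manually authored.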
In some examples, all or part of the JSON configuration might be transmitted in the body of an HTTP request or response, and such a JSON or other configuration ‘document’ would then be incorporated into a larger configuration ‘document’, which might exist tangibly on disk and/or intangibly in the memory of the SDN controller.
An example of the topology data can provide data such as:
An example of the endpoint data can provide data such as:
As discussed below, the test controller circuitry 820, 840 can detect, from the endpoint data, the type of network traffic applicable to each endpoint, and can, in at least some embodiments, arrange for testing of each combination of routing for one or more of the network traffic types. By way of example, e.g. for a sender S1 and receivers R1 and R2, the multicast combinations are:
S1->{R1}
S1->{R1,R2}
S1->{R2}
The system would not however necessarily need to test both S1->{R1,R2} and S1->{R2,R1}.
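The enumeration just described amounts to taking, for each sender, every non-empty unordered subset of the receivers. A sketch of that enumeration:

```python
from itertools import combinations

def receiver_combinations(receivers):
    """Every non-empty unordered subset of the receivers. Order is
    ignored, so S1->{R1,R2} and S1->{R2,R1} count as the same test."""
    for size in range(1, len(receivers) + 1):
        for combo in combinations(sorted(receivers), size):
            yield set(combo)

def multicast_tests(senders, receivers):
    """One test per (sender, receiver-subset) pair."""
    return [(sender, combo)
            for sender in senders
            for combo in receiver_combinations(receivers)]

# For sender S1 and receivers R1, R2 this yields exactly the three
# combinations listed above.
assert len(multicast_tests(["S1"], ["R1", "R2"])) == 3
```

In general this gives senders × (2^receivers − 1) individual tests, which is one reason the ability to run subsets of the test suite (discussed below) can matter for larger configurations.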
Because the tests are configured using the real configuration file(s) of the SDN controller 800, the actual configuration file of a problematic deployment can be loaded directly into the test fixture and debugged as if it was a real system. This can be useful, so that users can simply provide their configuration files, and the test system can be used for triage.
Additionally, any configurations which have exposed problems or bugs (or canonical versions of those configurations) can then be added (for example permanently) to the test suite for ‘regression testing’ purposes. Regression testing is a technique for testing whether computer software retains its original functionality after being subjected to changes or to interfacing with other computer software, or in other words whether previously-resolved errors or bugs resurface in later versions (that is, whether the software has “regressed”, through subsequent modification, to an earlier, previously-resolved, erroneous mode of operation). Regression testing is described in https://en.wikipedia.org/wiki/Regression_testing, the contents of which are incorporated by reference into the present description.
The Test Runner 820 orchestrates the system tests and generates a test report. In the example embodiments, the Test Runner 820 is implemented using the so-called RobotFramework system, which is a generic test automation framework for acceptance testing and acceptance test-driven development.
The test runner configures and starts the SDN Controller 800 under test, starts the test script which creates the test network, then triggers individual tests to be run on the network via an RPC (remote procedure call) mechanism.
The individual tests could all be run automatically by the Test Script, but running them individually from the test runner facilitates:
(a) fine grained reporting of test results, and
(b) the ability to run all tests or subsets of the test suite. (This may be important if some types of test take a long time to run.)
The Test Runner 820 can be implemented by appropriate program instructions executing using apparatus of the type shown in
The Test Script creates the Test Network and runs the individual tests.
In the example embodiments it is implemented as a Python programming language library for RobotFramework. The Test Script reads the Network Configuration File(s) to determine the topology of the test network to generate and the scope of the individual tests to perform.
The Test Script runs in the context of the Test Network where it can access the traffic generation agents running on each virtual host. Individual tests are triggered on the Test Script by the Test Runner using an RPC mechanism.
Test controller circuitry to be discussed below can be taken to encompass the functionality of the Test Runner 820 in conjunction with the test script 840.
The Simulated Test Network models the network topology specified in the Network Configuration file(s), and connects to the SDN Controller 800 Under Test.
In the example embodiments, the Test Network is implemented as a simulated network using the so-called Mininet network simulator running, for example, on apparatus shown in
However, in other embodiments it could be implemented using a real, physical network using suitable routing hardware and/or VLAN (virtual LAN) configuration. For example, switches in a hardware network under test could be linked together by software-controllable hardware switches which implement selectable connections or links between ports of the switches in the hardware network under test, so that a configurable test network is generated. At least some ports of such an arrangement could be connectable under software control to apparatus providing the functionality of the traffic generation/capture agents (to be discussed below).
The Traffic Generation/Capture Agent 812 may be a small daemon which is run on each network endpoint in the Test Network, for example by executing appropriate program instructions on an apparatus of the type shown in
(i) Sending various types of network packets which are of interest (such as broadcast, unicast UDP (user datagram protocol), multicast UDP, etc.) The payloads of these packets are unique and identifiable (for example by containing so-called universally unique identifiers or UUIDs). The unique payloads are returned to the caller for future reference.
(ii) Receiving (‘sniffing’) all incoming traffic to that network endpoint, whether addressed to that endpoint or otherwise (so-called ‘promiscuous’ mode).
(iii) Checking the sniffed packets to determine whether a packet with a given unique payload has been received or not.
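The three operations above might be sketched as follows. Real packet I/O (raw sockets, promiscuous capture) is replaced here by an in-memory stand-in so that the unique-payload bookkeeping is visible; `FakeNetwork` and its `routes` are assumptions of this sketch, not part of the described system:

```python
import uuid

class TrafficAgent:
    """Sketch of the per-endpoint agent's three operations."""

    def __init__(self, name):
        self.name = name
        self.sniffed = []          # payloads of all packets seen here

    def send(self, network, destination):
        """(i) Send a packet with a unique, identifiable payload and
        return that payload to the caller for future reference."""
        payload = str(uuid.uuid4())
        network.deliver(self.name, destination, payload)
        return payload

    def on_packet(self, payload):
        """(ii) 'Sniff' every incoming packet, addressed to us or not."""
        self.sniffed.append(payload)

    def has_received(self, payload):
        """(iii) Check whether a packet with this payload was seen."""
        return payload in self.sniffed

class FakeNetwork:
    """In-memory stand-in for the test network's current routing."""

    def __init__(self, agents, routes):
        self.agents = {agent.name: agent for agent in agents}
        self.routes = routes       # set of (src, dst) pairs currently routed

    def deliver(self, src, dst, payload):
        if (src, dst) in self.routes:
            self.agents[dst].on_packet(payload)
```

Because each payload is a UUID, a later check can ask unambiguously whether one specific test packet reached (or wrongly reached) a given endpoint.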
Regarding the so-called Out-of-Band RPC, note that “Out-of-Band” in this context means ‘not using the test network’. In the current embodiment, the RPC is implemented using UNIX named pipes, but in other embodiments alternative channels such as secondary networks could be used. If an in-band, network-based RPC were used over the Test Network, it would require routing support from the SDN Controller 800 Under Test, and would create extra network traffic. This provides an example in which the test controller is configured to issue control instructions to the test traffic agents by a communication route not using the test network.
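A minimal single-shot RPC over a pair of named pipes might look like the following sketch; the pipe paths and the JSON message shape are assumptions for illustration, not the actual protocol used:

```python
import json

def serve_one(request_pipe, response_pipe, handlers):
    """Agent side: read one JSON request, dispatch it, write the reply.
    Opening a named pipe blocks until the other side connects."""
    with open(request_pipe) as req:
        request = json.loads(req.read())
    result = handlers[request["method"]](*request.get("args", []))
    with open(response_pipe, "w") as resp:
        resp.write(json.dumps({"result": result}))

def call(request_pipe, response_pipe, method, *args):
    """Test-script side: issue one RPC over the pipes and await the reply."""
    with open(request_pipe, "w") as req:
        req.write(json.dumps({"method": method, "args": list(args)}))
    with open(response_pipe) as resp:
        return json.loads(resp.read())["result"]
```

Since the pipes are ordinary filesystem objects on the host running the simulation, no control traffic crosses the test network and no routing support is needed from the controller under test.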
Therefore,
As discussed, in example arrangements the test network 810 is a simulated test network configured by the test controller circuitry in response to the network definition data, the network testing apparatus comprising data processing circuitry (such as apparatus shown in
Example operations of the arrangement of
The operations to be discussed below can be summarized as the test controller performing a network test by:
Referring to
Referring to operations of the test runner and test script 908, at a stage 910 the test runner 820 creates the test network so that in the operations of the test network 912 the creation of the network is noted at a stage 914. The test network connects to the SDN controller 800 at a stage 916.
The test runner 820 runs or establishes a traffic agent on or at each test network endpoint (as defined by the endpoint data) at a stage 918 so that in the section of
The test runner 820 then runs various tests at a stage 924 in which the traffic agents are caused to interact with the SDN controller 800 under test and with one another, potentially multiple times. For example, the test runner 820 can generate instructions to the test traffic agents to communicate test data packets to respective destinations amongst the test traffic agents, in response to the functions of the network endpoints defined by the endpoint data. In an example situation in which the endpoint data defines one or more network traffic types handled by each network endpoint, the test runner 820 can be configured to generate instructions to the test traffic agents to communicate test data packets to respective destinations amongst the test traffic agents, so as to test communication between each combination of data packet source and data packet receiver for at least a network traffic type under test. Examples of traffic types include broadcast, unicast and multicast traffic types.
Then at a stage 926 the test runner 820 shuts down the test network and at a stage 928 shuts down the SDN controller 800. The test runner reports its results at a stage 930 and the process ends 932.
For a ‘Broadcast Traffic Test’ to be passed successfully, all endpoints regardless of function (H1, H2, H3 and so on) should be able to send broadcast packets to all other endpoints.
A ‘Unicast UDP Traffic’ test would behave similarly (and so is not described here separately), except that, for each originating host, an individual packet would be sent to each receiving host in turn, and only the specified destination host should receive that packet.
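That unicast check could be sketched as follows, using hypothetical agent operations; `send_unicast` and `has_received` stand in for the real traffic-agent RPC calls:

```python
def run_unicast_test(hosts, agents):
    """For each originating host, send an individual packet to each other
    host in turn; only the specified destination should receive it.
    Returns a list of (src, dst, observer) tuples describing failures."""
    failures = []
    for src in hosts:
        for dst in hosts:
            if src == dst:
                continue
            payload = agents[src].send_unicast(dst)
            for host in hosts:
                if host == src:
                    continue                      # the sender may see its own packet
                expected = (host == dst)
                if agents[host].has_received(payload) != expected:
                    failures.append((src, dst, host))
    return failures
```

Checking every observer, not just the intended destination, is what lets the test detect both dropped packets and packets wrongly delivered elsewhere.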
Referring to
For a ‘Multicast Traffic Test’, all possible packet senders should be able to send multicast traffic on the specified group(s) to all possible combinations of all packet receivers. The Senders, Multicast Groups, and Receivers are specified in the ‘endpoint information’ (in the (JSON) configuration file(s)). The Test Script analyses the endpoint information and determines which combinations of endpoints and multicast groups should be tested. The traffic agent on each combination of Receiver(s) is instructed in turn to send an IGMP Join message to subscribe to the multicast group under test. The traffic agent on each sender is instructed to send a packet with a known, unique payload to that group. The Test Script then checks that all ‘joined’ hosts received a multicast packet with that payload, and that all ‘non-joined’ hosts did not receive the packet.
The combinations of Receivers are then instructed to send IGMP Leave packets, and the same test is performed checking that all packets are now dropped. The above sequence of tests is performed for all valid combinations of Receivers, Senders and Multicast Groups.
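One pass of the join/send/check/leave sequence described above might be sketched as follows; `join`, `leave`, `send` and `has_received` are hypothetical stand-ins for the real traffic-agent RPC calls:

```python
def run_multicast_test(sender, joined, all_receivers, group, agents):
    """One pass of the sequence for a single combination of receivers.
    Returns the list of receivers with unexpected behaviour."""
    failures = []
    for receiver in joined:
        agents[receiver].join(group)              # IGMP Join from each receiver
    payload = agents[sender].send(group)          # unique payload to the group
    for receiver in all_receivers:
        expected = receiver in joined             # joined hosts should receive;
        if agents[receiver].has_received(payload) != expected:
            failures.append(receiver)             # non-joined hosts should not
    for receiver in joined:
        agents[receiver].leave(group)             # IGMP Leave
    payload = agents[sender].send(group)          # should now be dropped by all
    failures += [r for r in all_receivers if agents[r].has_received(payload)]
    return failures
```

Repeating this pass over every valid combination of Senders, Receivers and Multicast Groups reproduces the overall test sequence described above.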
At a stage 1100, a multicast traffic test is initiated. The test script sends IGMP join instructions for multicast groups (such as an example group G1) to various ones of the traffic agents, for example at a stage 1102. Multicast packets with associated payloads are then sent to the group G1, for example at a step 1104, and a detection 1106 is made as to whether the correct payload was received. This process is repeated for various combinations of correspondence between the traffic agents.
As discussed above, the test controller circuitry 820, 840 is configured to instruct the test traffic agents to communicate test packets by one or more traffic types selected from the list consisting of: (i) a unicast protocol, (ii) a multicast protocol and (iii) a broadcast protocol. Given that the network definition data comprises (as discussed earlier) topology data defining a network topology; and endpoint data defining network locations and functions of network endpoints, where the endpoint data may define one or more network traffic types handled by each network endpoint, in some examples the testing regime can be established automatically in response to such network configuration data so as (for example) to test each possible routing operation available within the test network. In some embodiments, the test controller circuitry 820, 840 is configured to establish a test traffic agent 812 at each network endpoint defined by the endpoint data of the configuration data 830. In some examples, the test controller circuitry 820, 840 is configured to generate instructions to the test traffic agents 812 to communicate test data packets to respective destinations amongst the test traffic agents, in response to the functions of the network endpoints defined by the endpoint data. For example, these instructions can be provided so as to test communication between each combination of data packet source and data packet receiver for at least a network traffic type under test.
Note that the testing can encompass one or more of UDP, TCP, ICMP, ping or other types of transmission, routing and reception.
In other words, a combination of one or more of the broadcast, unicast and multicast tests discussed above can be established by the test controller circuitry 820, 840 so as to test each possible combination of one or more of:
One or more traffic types can be tested in this way, or all traffic types could be tested in a single testing procedure. This can involve performing multiple successive individual tests (within an overall testing process) to cover the various combinations, but can in this way provide a comprehensive testing regime for the test network.
the test controller configuring (at a step 1300) the test network in response to network topology data;
the test controller providing (at a step 1310) instructions to control operations of the software defined network controller and to control operations of a plurality of test traffic agents;
the software defined network controller controlling (at a step 1320) the test network to adopt a routing arrangement for data packets within the test network in response to an instruction provided by the test controller;
the test controller performing (at a step 1330) a network test by:
In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Similarly, a data signal comprising coded data generated according to the methods discussed above (whether or not embodied on a non-transitory machine-readable medium) is also considered to represent an embodiment of the present disclosure.
It will be apparent that numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended clauses, the technology may be practised otherwise than as specifically described herein.
Respective aspects and features are defined by the following numbered clauses:
1. Network testing apparatus comprising:
software defined network controller circuitry; and
test controller circuitry operable to configure a test network in response to network definition data, to provide instructions to control operations of the software defined network controller circuitry and to control operations of a plurality of test traffic agents connected to the test network;
the software defined network controller circuitry being arranged to control the test network to adopt a routing arrangement for data packets within the test network in response to an instruction provided by the test controller circuitry; and
the test controller circuitry being configured to perform a network test by instructing the software defined network controller circuitry to control the test network to adopt one or more test routing arrangements, instructing the test traffic agents to communicate test data packets to respective destinations amongst the test traffic agents using the test network and detecting whether the test packets correctly arrive at their respective destinations under a current test routing arrangement as adopted by the test network.
2. Apparatus according to clause 1, in which the test network is a simulated test network configured by the test controller circuitry in response to the network definition data, the network testing apparatus comprising data processing circuitry configured to implement, under program instruction control, the simulated test network.
3. Apparatus according to clause 1 or clause 2, in which the test controller circuitry is configured to issue control instructions to the test traffic agents by a communication route not using the test network.
4. Apparatus according to any one of the preceding clauses, in which the test controller circuitry is configured to detect whether the test packets do not arrive at incorrect destinations other than their respective destinations.
5. Apparatus according to any one of the preceding clauses, in which the test controller circuitry is configured to instruct the test traffic agents to communicate test packets by one or more traffic types selected from the list consisting of: (i) a unicast protocol, (ii) a multicast protocol and (iii) a broadcast protocol.
6. Apparatus according to any one of the preceding clauses, in which the network definition data comprises:
topology data defining a network topology; and
endpoint data defining network locations and functions of network endpoints.
7. Apparatus according to clause 6, in which the test controller circuitry is configured to establish a test traffic agent at each network endpoint defined by the endpoint data.
8. Apparatus according to clause 7, in which the test controller circuitry is configured to generate instructions to the test traffic agents to communicate test data packets to respective destinations amongst the test traffic agents, in response to the functions of the network endpoints defined by the endpoint data.
9. Apparatus according to clause 8, in which the endpoint data defines one or more network traffic types handled by each network endpoint.
10. Apparatus according to clause 9, in which the test controller circuitry is configured to generate instructions to the test traffic agents to communicate test data packets to respective destinations amongst the test traffic agents, so as to test communication between each combination of data packet source and data packet receiver for at least a network traffic type under test.
11. A method of testing a test network having a software defined network controller to control the test network to adopt a routing arrangement, the test network having associated test traffic agents controllable by a test controller, the method comprising:
the test controller configuring the test network in response to network topology data;
the test controller providing instructions to control operations of the software defined network controller and to control operations of a plurality of test traffic agents;
the software defined network controller controlling the test network to adopt a routing arrangement for data packets within the test network in response to an instruction provided by the test controller;
the test controller performing a network test by:
Number | Date | Country | Kind |
---|---|---|---|
1801767.3 | Feb 2018 | GB | national |