The present disclosure relates generally to networks and, more particularly, to the flow of traffic through a global virtual network across various network fabrics integrated into a greater network tapestry.
The first deployments of 'networks' typically consisted of a topology with a large central computer core, such as a mainframe, with slave terminals connected to it directly in the same facility. This arrangement of mainframe and terminals allowed for distributed physical access, but initially all users needed to be in close proximity to the core. As network transmission over distance improved, slave terminals could be located in remote locations further away from the mainframe. Today, this type of topology may be referred to as a central server with thin-client devices connecting to it.
Power and storage then shifted to personal computers (PCs), whose local CPU, RAM, and storage allowed computing to be contained within the PC. Today, the pendulum is swinging back. The rise of personal computers was a driver for the development of wired networking technologies; laptops (portable computers) were then the impetus for wireless networks; and later, mobile phones, smartphones, tablets, phablets, and other types of mobile and wireless devices were the impetus for improvements in both wired and wireless network infrastructure.
Mobile devices and improved internet connectivity at the last mile spurred a proliferation of services where host clients store, access, and retrieve their data via servers in the cloud. The Internet of Things (IoT) means more and more connected devices, many of them in LANs, PANs, piconets, and the like, and the majority of these devices must not only have upstream connectivity but must also be findable on the Internet.
Line requirements of devices connected to the internet vary. Some are tolerant of less-than-ideal connectivity, while other devices have an absolute requirement for low latency, zero packet loss, and high bandwidth to function properly. As the proliferation of devices continues, the sheer number of devices will present problems requiring solutions: how to connect all of these devices reliably, how to efficiently find all of these devices, and how to carry copious amounts of data between them and big-data aggregation points.
The internet is composed of connected devices which constitute a network, and the connecting of these networks constitutes a network of networks. As networking continues to evolve, core protocols and network types continue to mature, and they have expanded to the point where a network type can be referred to as a network fabric. Common network fabrics are built upon standard protocols such as IPv4 and IPv6 on top of the Ethernet standard, Fibre Channel, InfiniBand, and various other network protocols and types.
A network fabric may be defined as a network under the administration of one body which is peered to other networks either on a one-to-one basis (single-homed) or in a one-to-many relationship (multi-homed peering). A network fabric may also define the scale and scope of a network protocol type from end to end. Ethernet defines a type of network, but this can be further classified by Internet Protocol over Ethernet, and then by IP version, such as IPv4 (Internet Protocol version 4) or IPv6 (Internet Protocol version 6), and other network types. Built on top of Internet Protocol (IP) are protocols such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). TCP/IP is more verbose, with a plethora of built-in error checking and handling for reliability of the data sent, whereas UDP has no stringent error checking and a more fluid flow control. This makes UDP more suitable than TCP for the streaming of data such as audio or video casting, where a lost packet will not have a dramatically adverse effect on the consumer's experience.
In addition to the different protocols and IP versions built on top of Ethernet, Ethernet itself has different flavors, such as Gigabit Ethernet (available in 1, 10, 40, or 100 Gigabit variants), and further versions are expected to be introduced as line-carrying capacity technology improves.
InfiniBand (IB) is an alternative to Ethernet, with IB utilizing different physical NIC ports, lines, and plugs, and operating in a similar yet distinct manner to IP.
Ethernet is currently the most popular protocol for connecting various computing devices so that they can talk with, or at least pass data to, each other. To connect many nodes into a high performance computing (HPC) environment, InfiniBand (IB) is the preferred choice. IB allows for native remote direct memory access (RDMA) between nodes, which bypasses the network authentication, elevated process, and operating system (O/S) stacks of the host devices to which the RDMA storage (or other) devices are connected. This facilitates the hosting of parallel file systems (PFS), providing simultaneous and rapid access for many devices.
To further define scope, each base network protocol, such as Ethernet or InfiniBand, together with the subsequent network protocols running on top of it, can be defined as a fabric. At the interconnection point between fabrics, technology such as network address translation (NAT) or an equivalent method is necessary for a successful cross-connect. One network protocol such as IPv4 may be encapsulated so that its packets run over another protocol such as IB via a "wrapper" protocol such as IP over InfiniBand (IPoIB). If one wanted to connect the various distributed nodes of a parallel file system (PFS) over a network which includes some non-IB segments such as Ethernet, a wrapper such as RDMA over Converged Ethernet (RoCE) could be utilized.
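To make the wrapper concept concrete, the following is a minimal sketch of encapsulation in Python. It is illustrative only: the encapsulate/decapsulate functions and the text-based framing are hypothetical simplifications introduced here, not the wire formats used by IPoIB or RoCE.

```python
# Illustrative only: a toy model of protocol encapsulation, in the spirit of
# wrappers such as IPoIB or RoCE. Real implementations operate on binary
# wire-format frames in the kernel or NIC firmware.

def encapsulate(inner_protocol: str, payload: bytes, outer_protocol: str) -> bytes:
    """Wrap an inner-protocol packet inside an outer-protocol frame."""
    inner = inner_protocol.encode() + b"|" + payload
    return outer_protocol.encode() + b"|" + inner

def decapsulate(frame: bytes) -> tuple[str, bytes]:
    """Strip the outer header at the far edge of the fabric."""
    _outer, _, inner = frame.partition(b"|")
    proto, _, payload = inner.partition(b"|")
    return proto.decode(), payload

# An IPv4 packet crossing an InfiniBand segment via an IPoIB-style wrapper:
frame = encapsulate("IPv4", b"application data", outer_protocol="IB")
proto, data = decapsulate(frame)   # -> ("IPv4", b"application data")
```

The same pattern applies whichever fabric serves as the outer protocol; the cost is the extra header plus the processing at each encapsulation boundary.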
While RoCE can allow for RDMA access, it is somewhat counterproductive because the underlying Ethernet network does not support the true advantages of IB and will therefore present a performance lag compared with RDMA over native IB.
Different types of clients and their users have varied expectations and demands for utilizing the internet today. These expectations also define the quality of service (QoS) requirements for each of these various uses. At the most demanding end of the QoS spectrum are clients & users who require a high quality line characterized by the highest bandwidth at the lowest latency with 100% reliability and availability. Some examples are:
High Performance Computing (HPC)—one of the most demanding situations is HPC, where data is huge, distributed across globally dispersed locations, and requires 100% lossless transmission at the lowest possible latency. Parallel file systems (PFS) are often utilized with HPC for clients to access central or distributed data stores from both local and distant locations.
Financial Industry—although the traditional communication needs of the financial industry to execute trades utilize relatively small packets, the required bandwidth must be uncongested, with the absolute lowest possible latency and 100% reliability. Nanoseconds matter and there can be no loss. Round Trip Time (RTT) is critical because not only does the transaction message have to get through, but the confirmation acknowledgement of successful transmission has to be returned as soon as possible.
Mass Media—Live video streams in high definition covering sporting events, news broadcasts, and other purposes require high bandwidth and low latency.
At the other end of the QoS requirements spectrum exist clients & users running applications which can tolerate a certain degree of packet loss and also where latency and/or bandwidth requirements are not mission critical. Some examples are:
Streaming audio—such as internet radio, for which bandwidth needs are modest and a little periodic loss will not matter, presenting only as a momentary bit of static.
RSS text streams—these require very little bandwidth but do require lossless transmission; in most cases latency is not a materially significant factor.
Data backup (off hours)—requires good enough bandwidth and latency to allow for data to be sent and confirmed but spending extra for premium lines is not justifiable.
Voice calls—two-way audio consumes lower bandwidth, and a bit of loss presents as a momentary bit of static on the line.
Email sending/receiving—requires modest bandwidth and “good enough” latency to allow for messages to go through. Higher volume servers and commercial grade messaging need better QoS.
At the lowest QoS requirement demands, bandwidth availability and latency can go up or down but users are tolerant of this fluctuation because they are not willing to pay more money for better service.
At the middle of both extremes are mainstream clients & users who have various levels of QoS expectations and demands. Within the mainstream, there also exists granularity within ranges from low to high levels of expectation. Some examples are:
High end of mainstream—consists of banks, corporations, and various other types of organizations which require WAN connectivity between offices and/or centrally located applications where many distributed “thin clients” connect with a larger central system.
Middle of mainstream—cloud servers in IDC/CDN/etc. which serve consumers and SME clients.
Lower-end of mainstream—budget conscious home users.
In summary, QoS demands often drive which type of network is adopted and budgetary constraints are a factor which influences the standard of quality for the “line” purchased.
Ethernet is a combination of networking technologies and is the most widely deployed network type, from the local area networks within offices, data centers, and other clusters of devices to the backbones across the global internet.
Ethernet became the dominant network type, prevalent both in the LAN and across the broader internet, because it was a relatively easy standard to implement and deploy globally. As more and more devices utilize a protocol or network type, network effects come into play, making it easier for others to adopt similar technology for compatibility and other reasons.
In the data center, where concentrated computing, storage, processing, and other operations are spread over various rack-mounted servers, a faster transport than Ethernet was required to back-channel connect these servers together so they could share data.
Fibre Channel and InfiniBand (IB) are two such technologies offering ultra-low latency and high-capacity bandwidth. IB's lossless and parallel transfers offer strong advantages, allowing for the use of Remote Direct Memory Access (RDMA) and also offering the opportunity to deploy and utilize globally dispersed parallel file systems (PFS). The limitation of IB was that it was only deployable over relatively short distances measured in meters; this was later extended to a few kilometers. Until recently, IB "long-distance" links were limited to within a city or between two nearby metro areas, connecting data centers to each other via superfast IB over dedicated lines. Technologies now exist which allow IB to be extended over distance and to transit up to 20,000 kilometers between two devices over a dark fiber line. For example, innovations at the physical layer developed by Bay Microsystems and Obsidian Research offer advantages such as the low latency of IB and the ability for long-distance RDMA via IB over dark fiber between remote regions.
Ethernet internet traffic from LAN to Internet to LAN uses the TCP/IP and UDP/IP transport protocols with IPv4 or IPv6 addressing. Last mile connectivity refers to the linking of a LAN to the network of an ISP, via a POP, to the Internet.
Ethernet has a store and forward model where a packet is received, examined and then forwarded only after the payload has been completely received and examined. Latency within a computer/router/network device to handle a packet of Ethernet traffic is approximately 100 microseconds (μs).
InfiniBand (IB) offers extremely low latency compared with Ethernet. It is also much less verbose than TCP/IP or UDP/IP. It runs on top of dark fiber connections. Compared with Ethernet over dark fiber, it is still relatively faster, and if native IB/RDMA over IB is utilized, latency can be measured as one-way for effective transmission rather than as two-way RTT, as it is for Ethernet. IB bandwidth under load reaches 90 to 96 percent of the theoretical BW maximum, approaching true wire speed. IB features cut-through switching, where it receives the headers of a packet, uses logic for the forwarding decision, and pipes the packet payload onward. While IB has traditionally been used within a data center, IB has evolved to break out and become a truly global transport thanks to technologies which extend the reach of IB over very large distances over dark fiber, up to 20,000 km.
Remote direct memory access (RDMA) over IB utilizes zero-copy networking, where the packet can be sent directly via the IB NIC. This reduces CPU load and drops latency to 1 microsecond (μs) per packet.
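The per-hop figures above suggest a rough comparison of aggregate device-handling latency along a path. The sketch below assumes the approximate values cited (about 100 μs per Ethernet store-and-forward hop, about 1 μs per RDMA/IB hop) and a hypothetical seventeen-hop path; it ignores propagation and queuing delay.

```python
# A rough per-hop comparison using the figures cited above. The hop count
# is an illustrative assumption, not a measured path.

ETH_HOP_US = 100.0   # approximate store-and-forward handling per device
IB_HOP_US = 1.0      # approximate zero-copy / cut-through handling per device

def device_latency_us(hops: int, per_hop_us: float) -> float:
    """Total device-handling latency, excluding propagation and queuing."""
    return hops * per_hop_us

hops = 17  # hypothetical multi-ISP internet path
print(f"Ethernet device latency:   {device_latency_us(hops, ETH_HOP_US):.0f} us")
print(f"InfiniBand device latency: {device_latency_us(hops, IB_HOP_US):.0f} us")
```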
Parallel File Systems (PFS) offer distributed files and folders across various devices utilizing RDMA and when combined with IB over distance, PFS clusters offer fast file access from remote locations to/from remote file stores at near wire speed.
Reliability is of paramount importance when comparing network types. The main drivers affecting the choice of network type, network protocol, and physical path are time and distance. Latency is a measure of the time for data to travel in one direction, or for a round trip (RTT), over a specified distance between two points.
In computing, the main measure of time for networking is milliseconds (ms) and for processing is microseconds (μs) or nanoseconds (ns). The granularity of a tick of time can therefore be measured either as a fraction or as decimals. For example every 1/20th or 1/10th or 1/100th of a millisecond.
How fine the granularity of a tick can be is determined by the processing power of the device and other factors. Latency is typically measured in milliseconds and is influenced by network type, protocol, distance, network load, congestion, and other factors.
Table 2 compares the speed of light in a vacuum versus the speed of light inside the glass core of an optical fiber. This illustrates the physical limitation of fiber efficiency and establishes a baseline for the theoretical best speed that can be achieved through fiber. While the refractive index of fiber optic cables may vary slightly, an average is assumed as follows: approximately 203 to 204 m/μs in fiber versus the speed of light of 299.792 m/μs, for an efficiency of 68.05%.
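Using these figures, a baseline propagation delay for any fiber span can be estimated. The sketch below is a simple worked example; the 10,000 km span is chosen arbitrarily for illustration.

```python
# Propagation-only latency estimate using the figures from Table 2:
# light travels ~204 m/us in fiber versus ~299.792 m/us in a vacuum.

SPEED_IN_FIBER_M_PER_US = 204.0       # approximate average in glass core
SPEED_IN_VACUUM_M_PER_US = 299.792    # speed of light in a vacuum

def one_way_fiber_latency_ms(distance_km: float) -> float:
    """Best-case one-way latency, ignoring device and queuing delay."""
    return distance_km * 1000 / SPEED_IN_FIBER_M_PER_US / 1000  # us -> ms

# Example: a 10,000 km trans-oceanic span.
print(f"{one_way_fiber_latency_ms(10_000):.1f} ms one way")               # ~49.0 ms
print(f"fiber efficiency: {SPEED_IN_FIBER_M_PER_US / SPEED_IN_VACUUM_M_PER_US:.2%}")
```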
The maximum number of available IPv4 addresses is limited by the 32-bit IP address space to a practical maximum of 4,294,967,296 (two to the power of thirty-two) IPv4 addresses. Of this total, approximately 588,514,304 addresses are reserved, leaving only 3,706,452,992 public addresses available. While Internet Protocol version four (IPv4) is widely deployed, it can be characterized as a victim of its own success because the supply of available IPv4 addresses is almost completely exhausted. While technologies such as NATing for devices in a LAN specifically address this issue, the problem remains unsolved and unassigned IPv4 addresses are scarce.
Where the IPv4 addressing system has reached a point of exhaustion, with few to zero available IPv4 addresses at a time when more and more are needed, IPv6 addresses offer a seemingly inexhaustible supply. IPv6 addresses are 128 bits and, therefore, the number of available addresses is huge: approximately 340 undecillion, or 340,282,366,920,938,463,463,374,607,431,768,211,456 possible IPv6 addresses. While the number of available IP addresses under IPv6 is virtually unlimited compared with IPv4 address availability, the technology has been slow to roll out on a global basis, limiting the utility of its deployment.
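The address arithmetic from the two preceding paragraphs can be verified directly; this is a minimal check, not part of the disclosure.

```python
# Address-space arithmetic for IPv4 and IPv6.
ipv4_total = 2 ** 32                   # 4,294,967,296 addresses
ipv4_reserved = 588_514_304            # approximate reserved blocks
ipv4_public = ipv4_total - ipv4_reserved
assert ipv4_public == 3_706_452_992    # public addresses remaining

ipv6_total = 2 ** 128
print(f"{ipv6_total:,}")               # 340,282,366,920,938,463,463,374,607,431,768,211,456
print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")
```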
Many legacy networks are built with devices which are still only able to handle IPv4 addresses, presenting a conundrum. IPv6 has at its core what appears to be an ample supply of available IP addresses; however, IPv6 has not been deployed universally due to a number of factors, one of them being the CAPEX investment sunk into legacy equipment which handles only IPv4 rather than both IPv4 and IPv6. Until legacy systems are replaced or upgraded to accommodate both IPv4 and IPv6, the IPv4 address constraint remains.
The Ethernet protocol has relatively high latency, poor efficiency, and a low utilization rate over long distance, with less than 25% efficiency with respect to line capacity when compared to InfiniBand. These problems are magnified where long distance transmission of data is negatively impacted by the performance flaws of IP-based network protocols, the backwash of bandwidth delay product (BDP) at uneven peering points, and other drawbacks inherent in the native function of the protocols.
Internet connectivity is shared publicly over ISP lines and as such is not as reliable as dedicated lines such as MPLS or DDN. Ethernet bandwidth (BW) under load and over long distance drops to a low percentage of the theoretical BW maximum.
There are also well known connectivity issues with respect to peering across multiple network boundaries over distance, across disparate fabrics of networks, and at network edges. These problems and challenges are addressed by a Global Virtual Network and are described in U.S. Provisional Patent Application No. 62/108,987, the contents of which are incorporated by reference.
TCP/IP is verbose and utilizes a store-and-forward model which requires confirmation. It is prone to congestion slowdowns and bottlenecks through internet hops between nonequivalent segments. The result is higher latency and/or packet loss due to congestion or other factors. When a TCP/IP packet is lost or otherwise not delivered, the sender attempts to resend it to ensure delivery. This can put a high demand on hardware resources, including RAM and processor use. The corollary is that more hardware is required to push a large amount of traffic (relative to an equivalent amount of traffic which InfiniBand could handle), adding to expense and physical space requirements; it also leads to higher levels of energy consumption. UDP/IP is one-way and does not require the receiver to send an acknowledgement packet to the sender. This offers a significant speed advantage over TCP/IP; however, the tradeoff for this speed gain is that during times of network congestion, or under other factors which impact reliability, if a packet is lost in transmission there is no way for either the sender or the receiver to discover the loss.
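This contrast between the two transports can be seen in miniature with the standard socket API. The sketch below is illustrative only, with placeholder host and port values; it is not part of the disclosed system.

```python
# TCP (SOCK_STREAM) guarantees delivery and ordering via acknowledgements
# and kernel-level retransmission; UDP (SOCK_DGRAM) sends fire-and-forget
# datagrams with no feedback on loss.

import socket

def send_tcp(host: str, port: int, data: bytes) -> None:
    # Connection setup (SYN / SYN-ACK / ACK), then reliable, ordered delivery;
    # the kernel retransmits lost segments until acknowledged.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(data)

def send_udp(host: str, port: int, data: bytes) -> None:
    # No handshake and no acknowledgement: if the datagram is lost in
    # transit, neither sender nor receiver is notified.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(data, (host, port))
```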
InfiniBand (IB) over dark fiber has advantages, but it requires dedicated, expensive equipment at both ends of an exclusive point-to-point fiber. In addition to the expensive HW edge devices installed at each end, a relatively high ongoing monthly cost is required for the dark fiber. There is no automatic failover if this line is cut or fails. It is also an IB-only network, necessitating costly IB cards to be installed on each device within a network that will utilize this facility. Technical skill is also required both for installation and for subsequent ongoing operations. Therefore, integration skill is required to take full advantage of RDMA over IB, and this requires investment in both equipment and manpower, both upfront and over time.
A significant CAPEX investment is required for the hardware and integration efforts if one were to build a global InfiniBand-only network. For point-to-multipoint topology integration, technical staff are required to set up the architecture and to remain on duty to monitor and maintain it. While the advantages of a multi-homed IB backbone-to-last-mile are desirable, the upfront expense of hardware endpoint equipment, the high recurring fees for dark fiber between each point, and the point-to-point topology present both a price and a technical barrier which only the largest and best-funded organizations can surmount.
Today, organizations have the flexibility to deploy many types of networks, including IPv4, IPv6, InfiniBand, Fibre Channel, and other network types, within the LANs and WANs under their direct control. If they wish to have end-to-end network fabrics over distance, current solutions require them to put dedicated lines in place and to invest in middle devices to power these WAN connections.
To summarize, TCP/IP offers reliability at the cost of being verbose and is consequently slower. It requires packets to be sent and an acknowledgement to be returned; accordingly, the latency of Round Trip Time (RTT) is measured as the time it takes for a packet to reach its destination AND for an acknowledgment to be returned to its source. UDP/IP does not require an acknowledgement to be returned, and without flow control it is not prone to the same degree of congestion issues as TCP, although it can still suffer from IP protocol inefficiencies. However, UDP is not tolerant of errors and loss the way TCP is: if a UDP packet is lost, neither the sender nor the receiver can know. IB has the advantages of ultra-low latency and parallel transfer, but it is not widely deployed and requires its own hardware NICs, cables, routers, and other devices to operate. IP and IB are not plug-and-play compatible. To send IP over IB, it has to be encapsulated as IP over InfiniBand (IPoIB) because IP is not native to the IB protocol. IB has many advantages but is relatively more expensive.
Systems and methods for connecting devices via a virtual global network across network fabrics using a network tapestry are disclosed. In one embodiment the network system may comprise a first access point server in communication with a first backbone exchange server, a second access point server in communication with a second backbone exchange server, and a network tapestry comprising a first communication path connecting the first and second access point servers and a second communication path connecting the first and second backbone exchange servers.
In one embodiment the first communication path is IP over the Internet. In another embodiment the second communication path is Infiniband over dark fiber.
In other embodiments the network system further includes a first parallel file storage in communication with the first backbone exchange server, a second parallel file storage in communication with the second backbone exchange server, and the first backbone exchange server can directly write to the second parallel file storage using the second communication path without using the first communication path.
In additional embodiments the network system further includes a first firewall in the communication path between the first access point server and the first backbone exchange server and the firewall isolates the first backbone exchange server from threats present on the first communication path. In yet another embodiment the network system further includes a second firewall in the communication path between the second access point server and the second backbone exchange server and the second firewall isolates the second backbone exchange server from threats present on the second communication path.
In another embodiment the network system further includes an end point device in communication with the first access point server and a host server in communication with the second access point server. The communication protocol between the end point device and the host server may be one of InfiniBand, RDMA, IPv4, IPv6, or other. The communication protocol may be encapsulated in a different protocol between the end point device and the first access point server. The communication protocol may be encapsulated in a different protocol between the second access point server and the host server. The communication protocol may be encapsulated in a different protocol between the first backbone exchange server and the second backbone exchange server.
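For illustration, the embodiment above can be modeled schematically as follows. The class and field names are assumptions introduced here for the sketch, not terminology from the disclosure.

```python
# A schematic data model of the two-path embodiment: access point servers
# joined by a first communication path, backbone exchange servers joined
# by a second communication path.

from dataclasses import dataclass

@dataclass
class Server:
    role: str      # "SRV_AP" (access point) or "SRV_BBX" (backbone exchange)
    region: str

@dataclass
class Path:
    endpoints: tuple   # the pair of servers this path connects
    fabric: str        # e.g., "IP over the Internet", "IB over dark fiber"

ap1, ap2 = Server("SRV_AP", "RGN-A"), Server("SRV_AP", "RGN-B")
bbx1, bbx2 = Server("SRV_BBX", "RGN-A"), Server("SRV_BBX", "RGN-B")

tapestry = [
    Path((ap1, ap2), "IP over the Internet"),          # first communication path
    Path((bbx1, bbx2), "InfiniBand over dark fiber"),  # second communication path
]
```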
In order to facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals or references. These drawings should not be construed as limiting the present disclosure, but are intended to be illustrative only.
Abbreviations used herein include:
A network tapestry is the joining of one or more network fabrics. It is the art of automatically connecting the various fabrics together and integrating them into end-to-end, seamless networks in parallel with each other within, or over-the-top (OTT) of, layer three of a global virtual network (GVN), which itself is over-the-top of base internet or fiber. This effective joining of fabrics can also be viewed as the combining of various network segments in-the-middle (ITM) of a longer network path. For the problems and issues addressed by a Global Virtual Network (GVN), as well as a general description of the GVN and its operations, see U.S. Provisional Patent Application No. 62/089,113.
Local internet connectivity supplied by ISPs is designed for best connectivity within their networks. That is why locally hosted and locally CDN'ed sites perform best. They are naturally better because they are closer, but also because they are on one network under the control of one party, or of a few parties in the same region with strong peering relationships, with no external regional peering edges.
A GVN with wide and broad coverage of SRV_APs offers an EPD or PEPD a "local" access point into the GVN, over the top of a client's existing internet connection as supplied via their ISP's connection point, most commonly a point of presence (POP), extending to all points on the global internet. The GVN operates over the top (OTT) from LANs to the nearest SRV_AP and then over a shared high-performance network link, with an aggregation point linking diverse regions separated by great distance and hooking back into the aggregation point at the destination. The consumption model offers a low barrier to entry via low-cost equipment as well as a pay-for-use model for the fractional and proportional use of high-capacity fiber. The GVN is easy to deploy and operate and can include Advanced Smart Routing (ASR). The end-to-end network is configured to auto-create connectivity and to make automatic adjustments to changing conditions as needed.
The advantages of a network tapestry offered by a GVN are realized through the provision of an end-to-end solution which provides the most efficient Secure Network Optimization (SNO) services in an automated fashion. The network tapestry is easy to install, easy to configure, and easy to use. It results in cost savings because dedicated lines are not required, either a bandwidth model or a consumption model may be used, there is a low barrier to entry, and it provides access to advanced connectivity features which would otherwise be unavailable or unaffordable for most clients.
The figures are grouped in the following sections.
Simple network topology: These figures demonstrate simple networks, one with and one without redundancy.
Global network, nodes, and performance related to distance and other factors: These figures show the impact of distance on network performance and define a performance-to-proximity ratio.
About a GVN—topology and features: These figures provide a simple introductory description of the hub and spoke topology of devices within a global virtual network (GVN) to demonstrate end-to-end performance enhancement and optimization.
Characteristics of a path—hops, segments, problems at join points of fabrics: These figures demonstrate segments between hops at network devices, peering points, how the GVN is over-the-top (OTT) of a base path, how a typical path consists of segments that each have different specifications, the impact of bandwidth delay product, and other descriptions of network conditions.
GVN overview of example topology and options: These figures show a few example topologies of a GVN and how it can connect various fabrics together, and the subsequent basic routing options offered.
Demonstration of how to set up InfiniBand network as a fabric in the tapestry: These figures describe how to build a simple IB WAN between two LANs and further demonstrate how an IB over distance fabric can be integrated into a GVN at the physical layer.
Tapestry Topology—Blending of IP over Eth with IB over IP and IB native fabrics into tapestry: These figures describe the logic for the integration of various network fabrics into the GVN, including device connectivity, failover, load-balancing, resources sharing, device-to-device communications and other aspects of integration.
API information exchange between devices for integrated performance: These figures describe the logic for API and other device to device links.
Three Layers of the GVN, and how L3 adapts to conditions at L1 to stretch internal fabric: These figures describe the logical layers of a GVN and how these are managed across various types of network segments to extend an end-to-end network fabric.
ASR at fabric and tapestry scope: These figures demonstrate advanced smart routing (ASR) at both the base connectivity layer (GVN L1) and the OTT internal pathway layer (GVN L3).
Tapestry Topology—example—stitched together fabrics/LAN in Cloud as OTT2 over GVN OTT1: These figures demonstrate how an OTT GVN facilitates the option for constructs to be built on top of its internal pathway existing as a second-degree-over-the-top layer (OTT2). These can allow for the OTT1 GVN to handle the routing, QoS, and other optimizations of the base layer, and the OTT2 construct to be utilized as a fabric running through it.
Tapestry Applied—example—file mapping, xfer, availability via PFS devices: These figures demonstrate how an OTT2 layer of the GVN can be utilized as an RDMA fabric to facilitate the use of globally distributed parallel file systems (PFS), from LANs to the cloud and back.
GVN—geographic destination—fast transfer from remote region to local region: These figures describe how the integration of an IB fabric into IP fabrics within a GVN can enhance the operation of the geographic destination mechanism of the GVN.
Tapestry Applied—example—WAN: These figures describe how various fabrics can be woven together to deliver high performance WAN connectivity between LANs.
Tapestry Logic: These figures describe the logical, physical, and other attributes of a network tapestry.
Systems Diagram—Tapestry: These figures describe the logical structure and organization of GVN network tapestry layers, modules, and elements.
This invention automatically weaves together various network fabrics into a network tapestry. It can be a component of a Global Virtual Network (GVN) which offers an over-the-top (OTT) service to clients in a plug-and-play manner, truly offering low-cost hardware and a pay-for-use service on top of the existing internet connections offered to clients by ISPs today.
There is also a direct connection segment 2-P4 between SRV 2-A and SRV 2-C; therefore, this connection does not have to be relayed via an intermediary server SRV 2-B. This offers redundancy and ease of operations, and it offers different routing options from one SRV to another which can be used to compare QoS, speeds, and other factors.
Therefore, the basic logic of the example connections, between SRV 2-A and SRV 2-C with pass-through of SRV 2-B and between SRV 2-A and SRV 2-C directly, offers redundancy. If one server goes down, the other two can still communicate with each other. If one path between two of the servers goes down, then traffic can pass via two path segments with a server pass-through.
As described in the Legend box at the bottom right, the center of each zone noted herein, from a networking perspective, is a Global Node. Around each Global Node are two rings which denote the type of connectivity quality zone based on the radius distance from the center of the node. This is for simplification only, as many factors determine the size and shape of these zones. The two zones can be differentiated from each other, with the closest one being a High Performance Zone and the other being an Optimal Service Area.
Global Nodes are connected to each other via long distance high performance network links.
The further a querying client, server, or other type of device is from the global node, the higher the latency, and at some point the distance is so great that the QoS reduction places the device in the Optimal Service Area.
Devices which are located outside of the optimal service area are expected to experience a poor QoS.
Geographic areas indicated herein for example are SJC 3-02 for San Jose, Calif., USA; JFK 3-08 for New York, N.Y., USA; AMS 3-12 for Amsterdam, NL; NRT 3-22 for Tokyo, Japan; HKG 3-28 for Hong Kong SAR, China; and GIG 3-30 for Rio de Janeiro, Brazil.
There are many other locations around the world within which a global node could be placed which are significant, but for simplicity's sake only a few were indicated for illustrative purposes.
There are also paths indicated between each global node, such as path segment 3-P0812 between JFK 3-08 and AMS 3-12. In reality, there are a multitude of path options representing undersea cables, terrestrial cables, and other types of communication lines or links between two points; those illustrated are meant to simplify the example. The shorter the distance, combined with line speed or wire speed, the lower the latency between the points, with the result of faster information transfer.
Within the device 4-100, the physical characteristics 4-110 describe the plug socket, the network plug and cable, the advantages and disadvantages of the physics of the line, the network interface card (NIC), and more. The data link 4-120 describes the nature of the data on the line, such as bits per byte, frame size, and other parameters. Network 4-130 describes the protocol, wrappers, the nature of packets or frames, and other elements. Transport 4-140 describes where flow control, error correction code (ECC) or forward error correction (FEC), algorithms, optional compression, maximum transmission unit (MTU), addressing, peering, identity, security, and other elements may be defined and configured.
Network lines and links to backhaul 4-200 define the physical attributes and the operational characteristics of the network link from subnetwork 4-210 to the core network 4-220 or backhaul. This can also be called an uplink, a last mile to backhaul, or various other names. Characteristics which define this line's potential can also be used as benchmarks for measuring performance, such as bandwidth (BW), latency, jitter, and other factors.
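As a hypothetical illustration of such benchmarking, the sketch below samples latency and jitter using timed TCP connects. The target host, port, and sample count are assumptions for illustration; a production probe would more likely use ICMP or dedicated test traffic.

```python
# Sampling approximate last-mile latency and jitter with timed TCP connects.
# Each completed handshake costs roughly one round trip.

import socket
import statistics
import time

def probe_rtt_ms(host: str, port: int = 443, samples: int = 10) -> tuple[float, float]:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # handshake completed; connection closed immediately
        rtts.append((time.perf_counter() - start) * 1000)
    latency = statistics.mean(rtts)
    jitter = statistics.stdev(rtts)   # variation between samples
    return latency, jitter

latency, jitter = probe_rtt_ms("example.com")   # placeholder target
print(f"latency {latency:.1f} ms, jitter {jitter:.1f} ms")
```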
QoS depends on distance and the quality of lines from the center point of origin to various devices. Latency and bandwidth problems are more prevalent and significant the further the destination is from the origin. Quantifying these distances and understanding a client device's relative distance provides an understanding of the expected QoS.
There are two illustrated hub-and-spoke clusters, one in each of two regions: region A RGN-A 7-000 and region B RGN-B 7-020. Each hub demonstrates end point devices (EPD), such as 7-102 to 7-112 in RGN-A 7-000 and EPD 7-122 to 7-132 in RGN-B 7-020, which can connect to access point servers (SRV_AP) such as 7-302, 7-306, or 7-308 in RGN-A 7-000 and SRV_AP 7-322, 7-326, or 7-328 in RGN-B 7-020. End point devices (EPD) 7-102 through 7-132 will connect with one or more SRV_AP servers through one or more concurrent tunnels.
SRV_APs in each region are connected to a local, corresponding backbone exchange server (SRV_BBX) 7-500 in RGN-A 7-000 and 7-520 in RGN-B 7-020. The connection path 7-P510 between SRV_BBX 7-500 and 7-520 is via fast backbone connection over fiber or other network segment. Linked SRV_BBX devices provide global connectivity. SRV_BBX may be one or more load-balanced high performance servers in a region serving as global links.
This example embodiment is based on
Not illustrated in this example embodiment are central, control server (SRV_CNTRL) servers which can service all of the devices within a region; the SRV_CNTRL may be one or more master servers.
This topology can offer routes through the GVN from an EPD to an EIP in a remote region, or to an EIP in the same region, or from EPD to EPD in the same region, or from an EPD to an EPD in another region, or many other possibilities. These connections are secured and optimized through the GVN.
This topology offers an over-the-top (OTT) GVN layer from various networks into an aggregation point, allowing traffic to flow via a unified network tapestry over various network fabrics.
For example, 10-TH02 on EPD0 10-D0 is an internal hop inside of the tunnel between LANs and is also a path within the L3 of the GVN between LAN0 10-TH00 and LAN2 10-TH10.
The path consisting of segments from 10-EH00 to 10-EH32 is, at GVN L1, the base path of the network. This figure demonstrates a global virtual network tunnel (GVN Tunnel) from LAN0 10-TH00 to EPD-0 10-D0 to SRV_AP AP-0 10-D4 to SRV_AP AP-2 10-D6 to EPD-2 10-D2 to LAN2 10-TH10, illustrating peering points between ISPs and network edges.
EDGE-00 10-B0 is the demarcation point for network access connection between the devices of LAN0 10-TH00 and ISP-0 10-FAB0.
PP-00 is the point where peering occurs between the networks of ISP-0 and ISP-2. PP-02 is the peering point between the networks of ISP-2 and ISP-4.
EDGE-2 10-B2 is the demarcation point for network access connection between devices of LAN-2 10-TH10 and the network of ISP-4.
Some advantages can be realized by placing SRV_AP-0 10-D4 at PP-00 10-B4 so that this SRV_AP can peer directly with both ISP-0 and ISP-2. More advantages can be realized by placing SRV_AP-2 at PP-02 so that this SRV_AP can peer directly with both ISP-2 and ISP-4. If the network of ISP-2 is not ideal, traffic can alternatively be routed around ISP-2 by the GVN through another route, line, ISP, or carrier.
The internal hop count through the neutral Third Layer of the GVN is six hops from LAN to LAN.
The distance between ISPs is not to scale. Furthermore, there could be more hops within the network of an ISP, but for simplicity's sake the quantity illustrated has been simplified.
The hops through the internet are from 10-EH00 through 10-EH32 and the hop count is seventeen hops.
While this figure illustrates the joining of tunnels at AP hops, the path between LAN0 and LAN2 is viewed as a single tunnel by client devices. This singular tunnel represents the neutral Third Layer of the GVN, within which it is possible to run all traffic that would normally transit the internet, including TCP, UDP, and other protocols, plus other tunnels such as IPSec, OpenVPN, PPTP, or others. There are other advantages realized by the Third Layer of the GVN, including lower TTL and the ability to have more control over routing.
From Client 11-000 to Server 11-300, the traffic transits via a local area network (LAN) 11-010 to an end point device (EPD) 11-100 to an internet service provider's (ISP) 11-200 network to a backbone 11-220 to the internet 11-250 in a remote region to an internet data center's (IDC) point of presence (POP) 11-320, into the IDC's internal network 11-310, and then to the server 11-300.
As shown by this example, it is important to understand the characteristics of each segment and how that segment impacts the traffic flow with respect to the complete end-to-end pathway. An internal network or LAN 11-N100 will typically have a reasonable amount of bandwidth (BW) for internal use, such as BW 11-B100, which is 10 GigE in size. The bandwidth for an ISP's network 11-N202 will also typically be fairly large, as exemplified by BW 11-B202 of 40 GigE. Between those two networks, the last mile connection 11-N200 between the client location and the ISP has a relatively small BW 11-B200 of 100 Mbps. There are numerous drivers behind this, but the main one is cost. An ISP will bring a pipe with a bandwidth of a certain size to a neighborhood and then will usually share this amount among many different users via each of their last mile connections. These upstream paths are the beginning segments towards the broader, wider general internet.
A backbone 11-N220 connects ISPs to each other, regions to regions, and more. Backbones offer very deep, high bandwidth connectivity, such as 11-B220 of 100 GigE. This could represent the carrying capacity of a strand of fiber between two points, and/or the switch's capacity rating, or other factors.
The internet 11-N250 in this figure is represented by dual pipes of BW 11-B250 and 11-B252, each at 40 GigE. This is an example of multi-homed connectivity in an internet. There may be many other large pipes at the core of an internet connected together.
ISP peering 11-N320 between the internet 11-N250 and an IDC network 11-N310 is again represented by multi-homed connectivity, with BW of 10 GigE each for 11-B320, 11-B322, and 11-B328. This represents a dedicated last mile for that data center. There may be many more communication links for an IDC.
The internal IDC network 11-N310 will typically have very high BW 11-B310 distributed amongst various internal networks, each of which is rated to a certain speed such as 100 GigE. The notation 2*100 GigE denotes a network with two times 100 GigE of BW.
When a server begins to serve a stream of data or a file, it will blast many packets per second based on what it assumes to be the high bandwidth 11-B220 of a network segment such as 11-N220, the large-pipe segment to which the server is connected.
If the data stream is constricted at 12-300, the loss forces the server to aggressively throttle down the stream, slowing the transfer; due to the need to retransmit the lost packets, the server may reduce its rate of transfer overly aggressively, slowing down the total process.
The significance of BDP is that it provides a measure of how much data can be transferred down a line between when a server starts blasting data and hits a bottleneck, and when the receiving device recognizes the loss and sends acknowledgement packets back to the sending server.
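BDP is the product of a path's bandwidth and its round trip time. A minimal worked example follows, with illustrative figures.

```python
# Bandwidth-delay product: the amount of data "in flight" on a path before
# the first acknowledgement (or loss signal) can return to the sender.

def bdp_bytes(bandwidth_bps: float, rtt_ms: float) -> float:
    return bandwidth_bps / 8 * (rtt_ms / 1000)

# Example: a 100 GigE backbone segment with a 200 ms round trip time.
print(f"{bdp_bytes(100e9, 200) / 1e9:.1f} GB in flight")   # ~2.5 GB
```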
LAN zone zero 14-ZL00 describes a typical local area network (LAN) including the placement of firewalls with respect to an end point device (EPD) 14-100 between the LAN and the external network GVN OTT 14-202 and Internet 14-30. There is a hardware FW 14-40 between LAN 14-04 and EPD 14-100. Another HW or SW FW 14-42 is between the EPD 14-100 and the egress ingress point (EIP) 14-20 to protect from external threats emanating from Internet 14-30.
LAN zone one 14-ZL10 is similar in topology to LAN zone zero 14-ZL00 with the exception that there is no FW placed between EPD 14-110 and LAN 14-46. Internet zone zero 14-ZI00 describes an example internet topology in a region in close proximity to 14-ZL00. Internet zone one 14-ZI10 describes an example internet topology in a region in close proximity to 14-ZL10. Internet zone two 14-ZI20 describes an example internet topology in a region in close proximity to 14-ZD20. Internet zone three 14-ZI30 describes an example internet topology in a region in close proximity to 14-ZD30.
Internet data center zone two 14-ZD20 describes the topology and placement of cloud based firewalls CFW 14-46 including virtualized FW devices behind cloud FW load balancers. Internet data center zone three 14-ZD30 describes the topology and placement of cloud based firewalls CFW 14-48 including virtualized FW devices behind cloud FW load balancers. SRV_BBX 14-72 in region or zone 14-ZD20 can be connected to SRV_BBX 14-80 in other region or zone 14-ZD30 via a dark fiber connection 14-P220 over dark fiber 14-220.
SRV_BBX 14-72 uses this invention to directly write a file to parallel file storage PFS 14-82 via remote direct memory access (RDMA) over 14-P220 bypassing the stack of SRV_BBX 14-80 via path 14-P82.
SRV_BBX 14-80 uses this invention to directly write a file to parallel file storage PFS 14-74 via remote direct memory access (RDMA) over 14-P220 bypassing the stack of SRV_BBX 14-72 via path 14-P74.
Path 14-P210 can be IPv4 or another standardized internet protocol over which traffic flows from SRV_AP 14-300 to and/or from SRV_AP 14-310, over-the-top of the GVN via a tunnel or other type of communication path.
While the topology described herein does not have FW or traffic monitoring devices within GVN pathways, these devices could be placed on an as needed basis to further secure the flow of data.
The next main process, plot route options (ASR) 15-200, utilizes sub processes server availability list 15-210 and routes list ranked 15-220 to determine the optimal server(s) with which to build tunnels if they do not already exist.
The next main process, examine network segments 15-300, utilizes sub processes measure segments 15-310 and network statistics per path 15-320 to evaluate the viability of a path for sending the type of traffic required. For example, for very small data which requires the fastest path, the shortest distance and lowest latency are most important and low bandwidth may be tolerated. Conversely, for huge data which is not time sensitive in terms of delivery of the first bit, the path offering the highest bandwidth is optimal because, although first-bit delivery is slower than on the other path, last-bit arrival is expected to happen sooner due to the higher bandwidth, as illustrated in the sketch below.
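A minimal sketch of this first-bit/last-bit tradeoff follows. The structures, figures, and selection rule are illustrative assumptions, not the disclosed ASR algorithm.

```python
# Small, time-sensitive payloads favor the lowest-latency path; bulk
# transfers favor the path whose bandwidth minimizes last-bit arrival.

from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    latency_ms: float       # first-bit delivery time
    bandwidth_mbps: float   # sustained throughput

def transfer_time_ms(path: PathStats, size_mb: float) -> float:
    # first-bit latency + serialization time for the whole payload
    return path.latency_ms + size_mb * 8 / path.bandwidth_mbps * 1000

def best_path(paths: list[PathStats], size_mb: float) -> PathStats:
    return min(paths, key=lambda p: transfer_time_ms(p, size_mb))

paths = [PathStats("low-latency", 20, 100), PathStats("high-bandwidth", 80, 1000)]
print(best_path(paths, 0.01).name)    # tiny payload  -> "low-latency"
print(best_path(paths, 1000).name)    # huge payload  -> "high-bandwidth"
```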
The next main process, check route status 15-600, and its sub processes compare routes 15-610 and test: is total path complete 15-620, ensure the deliverability of data down that path. The last main process, plot best route for traffic 15-700, and its sub processes sub-algorithm: which is best path? 15-710 and is this path best for traffic type? 15-720, are used to determine and set the best end-to-end route.
Each main process and sub process is utilized to ensure that each type of traffic is carried most optimally by the tunnel best suited for that traffic type.
On this graph, the left or vertical axis represents bandwidth in percentages, from 0% to 120%. The bottom or horizontal axis represents the twenty-four hours of each day, for the seven days of the week.
This example demonstrates that weekdays have a higher BW use profile than weekends and so could be an office open on weekdays only. Other use cases will have their own cyclical profile. Some may use all bandwidth all of the time while others will have times of heavy BW use and other times of lower BW use.
The key point is that fixed, dedicated lines are expensive and may be underutilized for large amounts of time. An OTT service utilizing a less expensive line providing similar quality to a dedicated line is more reasonable and cost effective. Furthermore, an OTT service based on consumption of data traffic rather than bandwidth capacity might be the fairest approach.
It is assumed that a service offering bandwidth of a certain potential provides 100% of carrying capacity for 24 hours of each and every day of the week/month. The average cost per GB of traffic is low if the line is in use all of the time at full potential. Factoring in CAPEX on equipment plus running costs for maintenance and IT staff, dedicated dark fiber of one's own can be expensive. If an organization only pays for the BW capacity it can afford, the line may be shaped, cutting peaks and causing times of constriction which limit use.
By offering a service based on the ACTUAL USE of a line, full line carrying capacity is utilized when needed, and consumption-based usage ensures that the client only pays for what they use.
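A toy comparison of the two billing models, with hypothetical prices and utilization figures, illustrates the point.

```python
# All prices and utilization figures below are hypothetical placeholders.

line_capacity_gbps = 1.0
dedicated_monthly_cost = 10_000.0   # fixed fee regardless of use
per_gb_rate = 0.05                  # consumption-based fee per GB

# Suppose the line is busy only during working hours (~25% utilization).
utilization = 0.25
seconds_per_month = 30 * 24 * 3600
gb_transferred = line_capacity_gbps / 8 * seconds_per_month * utilization

dedicated_cost_per_gb = dedicated_monthly_cost / gb_transferred
consumption_cost = gb_transferred * per_gb_rate

print(f"{gb_transferred:,.0f} GB moved")                       # ~81,000 GB
print(f"dedicated: ${dedicated_cost_per_gb:.3f}/GB")           # ~$0.123/GB
print(f"consumption: ${consumption_cost:,.0f}/month")          # ~$4,050/month
```

Under these assumed numbers the consumption model is cheaper at partial utilization, while a line that runs near full capacity around the clock would tip the comparison back toward the fixed fee.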
The quality of service (QoS) 17-102 of both the base internet path and of the connectivity through the tunnel can be constantly tested, analyzed, and adjusted to various conditions. The base connection can be optimized, and the EPD can make multiple connections to one or more SRV_APs and can use multiple IP addresses and ports. Where an IPv4 internet base path between EPD and SRV_AP may be congested, an IPv6 alternative path may be a better option, or a different route through either protocol may be able to route around problems.
From the SRV_AP 17-300, there can be connections to other regions, bridges to other protocols, or other such options. For example, the tunnel's internal path 17-P100 can be IPv6 encapsulated over the base IPv4 network path. Past the SRV_AP 17-300, path 17-P110 may be IPv4, so IPv6 tunnel content will still have to be encapsulated to run over IPv4 for transport to SRV_AP 17-110. However, path 17-P112 may be native IPv6, meaning that there is no need to encapsulate IPv6 over IPv6.
Any protocol which can be encapsulated or otherwise “carried” can be run through the GVN over virtually any other protocol or fabric.
The results of the constant testing are stored and mapped to be compared with other options through that fabric as well as to understand the peering or stitching characteristics of fabrics into a tapestry.
The LAN 18-000 is both IPv4 and IPv6 as are the base segments 18-P800. The remote internet segments are either IPv4 only 18-P804 or IPv6 only 18-P806.
The key point is that traffic entering the GVN, as an ingress into EIP 18-302, can enter as either IPv4 or IPv6; each is connected to its corresponding fabric through the GVN and will egress in the LAN 18-000. Address translation and mapping are critical elements at peering points.
The tunnels between EPD 19-100 and SRV_AP 19-300 and SRV_AP 19-302 are TUN 100-300 and TUN 100-302. They are an example of multiple tunnel options between an EPD and the best current access point server (SRV_AP) based on server availability and other factors such as destination, type of traffic, and the QoS of various base network segments between origin and destination.
Tapestry 19-500 allows for protocols to be carried which can be "run through" various GVN paths to egress and/or ingress at egress ingress points (EIP) of the GVN.
The Cluster of GVN Devices 19-600 represents the various GVN devices operating at the physical layer, combined to offer route options through the GVN.
GVN Global Network OTT Internet via other Links 19-700 is the GVN Layer 2 logic with modules such as Geographic Destination, DNS services, Advanced Smart Routing (ASR), Global ASR (GASR), Server Availability, Tunnel Builder Module, Testers, Analyzers, Etc.
GVN 19-800 can be described as a construct and is what the client user sees with respect to available network paths to various EIP points at various locations through the GVN utilizing various protocols.
The tunnels TUN0 20-PT0 and TUN2 20-PT2 are over the top (OTT) of a base network link. This base network link can be one or more of many protocols.
This figure further demonstrates that there can be various different protocols operating as fabrics concurrently on the LAN side of both EPDs, such as internet protocol (IP) over Ethernet 20-112 and 20-162, InfiniBand 20-118 and 20-168, or another network protocol 20-116 and 20-166. These can run in parallel over bridges through the GVN and also can be stitched together into a tapestry.
Any protocol can flow through the GVN end to end regardless of the various underlying fabrics of network protocols in the chain of various intermediary segments. For example in
There are various possibilities, with one-to-one matches, one-to-another-type, one-to-many, many-to-one, or others. From the perspective of EPD 20-100 or EPD 20-110, the end-to-end network attributes inside the tunnel are seamless and native for the network type between the LANs on either end.
The global virtual network's (GVN's) tapestry over the top of various fabrics forms a seamless WAN circuit between them.
IB Dev A 21-200 could represent an end point device (EPD), for example EPD A, as an enabling device between LAN 21-300 and a broader network. IB Dev B 21-202 could represent an end point device (EPD), for example EPD B, as an enabling device for another LAN 21-302. The segment Dark Fiber C 21-100 can be a switched dedicated circuit, a strand of dark fiber, a dedicated line, or another physical network medium.
This kind of point-to-point connectivity over dark fiber requires expensive devices at each end, running on top of expensive, required dark fiber which needs to be installed at both end locations.
IB over very long distance is made possible and reliable by hardware solutions from companies such as Bay Microsystems and Obsidian Research.
IB over long distance is better than IP for improved global transport because it offers low latency, high bandwidth transmission.
HW is the time required for the hardware to process the network operation(s). This includes the time taken by the CPU, RAM, NIC, and other components:
HW = CPU + RAM + NIC + Other components
where CPU = the time required for the CPU to process the network operation(s). The bulk of the time is taken by the CPU to process the network operation(s), but the NIC and RAM add some drag, thereby increasing processing time.
In addition to the hardware time, the time required for network operation(s) also includes the time spent by the Operating System (OS), the drivers for the hardware, and the software stack including any applications. The total systems time (SYS) is:
SYS = APP + Software Stack + O/S + Drivers for HW + HW
For example, in a GVN use case such as the utilization of the geographic destination mechanism, although IB is faster than Ethernet, over a short distance it may not be worth it to combine files into a single clump by CPA/RFB, communicate the list of files via side channel API communications, transfer the clump via chained cache, and then un-clump back into individual files at the CDA in the EPD, because of the time that this takes. However, over a medium to large distance, the latency reduction is significant enough to warrant the extra effort to pull, cache, clump, and transfer from the source region to the destination region, communicate the list of files in the clump, and un-clump and serve the separate files at the destination.
This analysis includes both the clump/un-clump and messaging functions of this action set/sequence. The time for CPU processing, RAM consumption, and internal copying between RAM→SYS→NIC is also reduced when IB is utilized versus ETH, because IB is zero-copy, with direct passing of packets by the application to/from the NIC.
Total time for transfer = CPU + RAM↔SYS↔NIC + Network Latency (RTT)
Algorithm(s) are utilized to evaluate best times with respect to benchmarks, with a programmable threshold to dictate when it is efficient to use ETH and when it is more efficient to use IB.
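A hypothetical sketch of such a threshold test follows. All timing constants are placeholders standing in for measured benchmarks, and the selection rule is an illustrative assumption.

```python
# Estimate total time for each transport and pick IB only when the saving
# clears a programmable threshold (covering clump/un-clump overhead).

def total_time_ms(processing_ms: float, copy_ms: float, network_ms: float) -> float:
    # CPU/RAM/stack handling + RAM<->SYS<->NIC copies + network latency
    return processing_ms + copy_ms + network_ms

def choose_transport(dist_km: float, threshold_ms: float = 5.0) -> str:
    eth = total_time_ms(processing_ms=2.0, copy_ms=1.0, network_ms=dist_km * 0.02)
    # IB is zero-copy (no RAM->SYS->NIC copies) and effectively one-way
    # rather than RTT; add fixed clump/un-clump and messaging overhead.
    ib = total_time_ms(processing_ms=0.5, copy_ms=0.0, network_ms=dist_km * 0.01) + 3.0
    return "IB" if eth - ib > threshold_ms else "ETH"

print(choose_transport(100))      # short haul -> "ETH"
print(choose_transport(10_000))   # long haul  -> "IB"
```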
In summary, being not just aware but acutely cognizant of the various elements which add to latency, in consideration of protocol use, allows for algorithmic analysis of features and, in other cases, prediction of expected latency or other conditions.
It further demonstrates an added element in the GVN network path: backbone exchange servers (SRV_BBX) in the middle. The two BBX servers are connected to each other by a path over an internet backbone (IBB) 23-800. This path can be IP or IB.
The path from planes 24-900 to terminal exit 24-000 begins at start 24-910 and again offers the choice of riding the train or walking, with similar performance and time advantages for those who opt to take the train. This is an analogy for the decision of whether to use Slinghop between long-distance points or to have packets travel along extended internet paths.
Boarding a train and disembarking take some time and effort. Trains operate on a fixed or variable schedule, and all occupants of a train ride together from fixed point A to fixed point B, whereas walkers on the adjoining paths never stop moving.
A train conveys passengers faster and more directly. People walking may take indirect paths and potentially get delayed or lost. The train gets them there via the same known, assured-delivery path.
The illustration of an end point device's (EPD) 25-100 back plate notes four RJ45 Ethernet ports: ETH0 25-110 operating as a WAN, and three LAN ports ETH1 25-112, ETH2 25-114, and ETH3 25-116. WAN port 25-110 is the plug for the cable connection to the base internet connectivity via path 25-P100. The one InfiniBand (IB) socket IB0 25-120 is for IB cables to connect via path 25-P122 to an IB switch in the LAN 25-126, and could also connect to a parallel file system (PFS) device 25-128 or other devices.
This example embodiment further demonstrates back plates for an access point server (SRV_AP) 25-300, a sling node (SRV_SLN) 25-550, and a backbone exchange server (SRV_BBX) 25-500. It also illustrates the connective pathways between devices, and from the devices through various clouds to other devices, such as a remote SRV_SLN 25-558 and a remote SRV_BBX 25-552.
The GVN connectivity from EPD 25-100 to SRV_BBX 25-500 via SRV_AP 25-300 is OTT of the ISP last mile connection path 25-P000 through the internet 25-000, and OTT of the LAN 25-032 in the internet data center (IDC) via path 25-302.
These physical ports, back plates (in front of backplanes), connection paths, and other elements described herein are for example only. The absence of IB ports on the SRV_AP 25-300 is illustrated to act as an “air gap” between end to end base protocols, where IB could be encapsulated over Ethernet for end to end IB for clients in the LAN of the EPD 25-100 such as LAN 25-016. However, SRV_APs may also have IB ports if there is native IB connectivity between them and EPDs or other devices and if the need arises.
Both of these paths have local IP sections of segments, Internet 26-000 and 26-012. The latency, bandwidth, and other characteristics of these local sections 26-000 and 26-012 are equivalent for both paths. The middle segments of the IP path are 26-P028 through 26-P056, and the latency for this path section is measured by 26-200.
The slingshot mechanism has a transfer advantage over section 26-420; however, an amount of time is added at both ends of the slingshot at stages 26-400 and 26-440. In analyzing which is the better path, the net latency of the IB slingshot path 26-260 must be directly compared against that of IP path 26-200.
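A minimal sketch of this comparison follows (Python, illustrative only; the names and millisecond values are hypothetical assumptions, with the stage arguments loosely corresponding to 26-400, 26-420, and 26-440):

    # Illustrative sketch: compare the slingshot path (fixed overhead at both
    # ends plus a fast middle) against the plain IP path.

    def slingshot_net_latency(ingress_ms, transfer_ms, egress_ms):
        # Ingress stage (26-400) + fast transfer (26-420) + egress stage (26-440)
        return ingress_ms + transfer_ms + egress_ms

    def better_path(ip_path_ms, ingress_ms, transfer_ms, egress_ms):
        sling = slingshot_net_latency(ingress_ms, transfer_ms, egress_ms)
        return "slingshot" if sling < ip_path_ms else "ip"

    # Hypothetical measurements: slingshot wins over this long distance.
    print(better_path(ip_path_ms=180.0, ingress_ms=15.0,
                      transfer_ms=90.0, egress_ms=15.0))  # -> "slingshot"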
There are two IP over Ethernet paths demonstrated: 27-P420 to 27-P436, which is IPv4 end-to-end, and 27-P420 to 27-P626 to 27-P636, which is a blend of IPv4 and IPv6 segments.
Another base connectivity described is from SRV_AP 27-200 to backbone exchange server (SRV_BBX) 27-500, which uses a network slingshot to convey data via a remote SRV_BBX 27-510 to SRV_AP 27-202, with return traffic utilizing a reciprocal slingshot mechanism, both over fiber backbone.
The TUN 27-222 is a constructed over-the-top (OTT) tunnel pathway over any of these three base connectivity paths. Algorithmic analysis can be applied to choose which transport type over which path is most optimal. This figure does not describe the EPD or other devices which connect to the SRV_AP, but they can be operating therein.
Two types of cross-regional connection paths through the GVN are illustrated herein. The first is OTT 28-600 to OTT 28-650 to OTT 28-610, which is end-to-end over the top of internet protocol.
The alternative path is OTT 28-600 to IBB 28-800 to OTT 28-610, where the IBB portion is a non-OTT path, possibly IB, between two backbone exchange servers (SRV_BBX) 28-500 and 28-520.
Each SRV_BBX “hub” serves various access point servers (SRV_AP). Each end point device (EPD) connects with various (one or more) SRV_AP servers simultaneously so that there is redundancy and that routing options exist for traffic to move via the best connectivity from moment to moment.
Connection paths indicated can be tunnels over the top (OTT) of the IP Ethernet Internet, or tunnels over Ethernet direct links, or IB over Fiber, or IB over Ethernet (RoCE), or other type of connectivity.
Placement of SRV_BBX and SRV_AP devices is based on expected demand from clients' locations; each is located in the best IDC with respect to pipes and interconnects to serve a target region while connecting global locations.
Devices also connect to a central control server (SRV_CNTRL) 29-200 via paths such as 29-EP112 to EPD 29-112 or path 29-P218 to SRV_AP 29-318, etc. Having these paths allows devices to connect with the SRV_CNTRL via API or via an alternative traffic path for information conveyance.
In some respects it simplifies the picture presented in
A GVN and its component parts offer a service to improve and secure client connectivity. It provides multiple "local" presences in multiple locations simultaneously; automated systems that are controllable and configurable; optimized connectivity realizing cost savings, with the benefit of serving as an MPLS substitute; extended high-performance connectivity such as remote direct memory access (RDMA); security and privacy via encrypted tunnels; and other benefits.
A huge benefit is the ability to connect various network fabric types, such as IB in the LAN 30-108 of an EPD 30-100 with the IB LAN 30-118 of EPD 30-110, so that from the client's perspective the connection is IB end-to-end even though some base segments in the middle are not native IB but rather IP. This is achieved through encapsulation of IB over IP, by routing through another native IB line, or by another method.
The key point is that a GVN allows various network fabrics to operate over-the-top (OTT) of various other network fabrics at a base layer. The overall effect is the weaving together of various fabrics into a network tapestry, enabled and optimized by the GVN for best performance with the highest security.
The first API call's request 31-A2, from an access point server SRV_AP 31-300 to a central control server SRV_CNTRL 31-200, is received, parsed, and processed by SRV_CNTRL 31-200. It then triggers three more API calls, all initiated by the SRV_CNTRL 31-200. Depending on the nature of the communications, these may be processed in sequence or simultaneously in parallel. These three additional calls are: request 31-A4 to a backbone exchange server SRV_BBX 31-800 and its response 31-A6; request 31-A8 to another SRV_BBX 31-810 and its response 31-A10; and finally the third additional API call of request 31-A12 to SRV_AP 31-302 and its response 31-A14 back to SRV_CNTRL 31-200. When all three of these "internal" calls are completed, the final response 31-A16 is returned to SRV_AP 31-300, the device which initiated the first request 31-A2.
The API request 31-A2 and response 31-A16 can be characterized as an open-jaw call, with the requirement that it may not complete until its internal calls 31-A4 to 31-A6 involving SRV_BBX 31-800, 31-A8 to 31-A10 involving SRV_BBX 31-810, and 31-A12 to 31-A14 involving SRV_AP 31-302 are completed. This may be because information is required by SRV_AP 31-300 before it can take a subsequent action, for measuring and integration purposes, or for another reason. For example, if an end-to-end tunnel is to be built from SRV_AP 31-300 through SRV_BBX 31-800 and SRV_BBX 31-810 to SRV_AP 31-302 via paths 31-P800 to 31-P808 to 31-P810, then all of those devices may need to be configured or triggered with the appropriate information and details. This type of API call can illustrate the request to set this up via 31-A2 to SRV_CNTRL 31-200, which then executes the three internal API calls 31-A4 to 31-A6, 31-A8 to 31-A10, and 31-A12 to 31-A14; the response 31-A16 can include both configuration and settings information for SRV_AP 31-300 to utilize, as well as an indication from SRV_CNTRL 31-200 that the other peer devices are set and ready.
In some embodiments, 31-A4/31-A6, 31-A8/31-A10, and 31-A12/31-A14 are independent API calls performed in series/sequence. In other embodiments, 31-A4/31-A6, 31-A8/31-A10, and 31-A12/31-A14 may be performed in parallel.
Security elements can be placed at various locations within the GVN topology illustrated herein. For example, firewalls FW 31-400 and FW 31-402 may be located along 31-P800 and 31-P810. Firewalls FW 31-400 and FW 31-402 may protect SRV_BBX 31-800 and 31-810 from internet threats, ensuring secure backbone communications.
Information about secure egress and ingress points (EIP) 31-500 and 31-502 may also be a factor in this kind of API exchange.
Three internal round-trips are dependencies required for the exterior round-trip to be considered complete. The response (RESP) for API #1 (32-A16) will wait for the internal API calls API #2 (31-A4 to 31-A6), API #3 (31-A8 to 31-A10), and API #4 (31-A12 to 31-A14) to be completed before results are evaluated and sent back as the RESP. Only then can the open-jaw API close and the response be sent.
This type of sequence is similar to a transaction set of SQL statements: either all complete or none do. Rollback may therefore also be possible in the event of a failure of one or more of the calls.
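The following sketch illustrates this open-jaw, all-or-nothing pattern (Python; the device names, the call_device placeholder, and the rollback behavior are hypothetical assumptions, not the actual GVN API):

    # Illustrative sketch of an "open-jaw" API call: the outer response is not
    # sent until three internal round-trips complete; a failure of any internal
    # call triggers a rollback, analogous to a SQL transaction.

    from concurrent.futures import ThreadPoolExecutor

    def call_device(device, payload):
        # Placeholder for one API round-trip to an SRV_BBX or SRV_AP.
        return {"device": device, "ok": True, "config": payload}

    def rollback(results):
        # Undo whatever partially succeeded so that all complete or none do.
        for r in results:
            if r["ok"]:
                call_device(r["device"], {"action": "undo"})

    def open_jaw_call(payload):
        devices = ["SRV_BBX-800", "SRV_BBX-810", "SRV_AP-302"]
        with ThreadPoolExecutor() as pool:   # parallel; could also run in series
            results = list(pool.map(lambda d: call_device(d, payload), devices))
        if not all(r["ok"] for r in results):
            rollback(results)
            raise RuntimeError("internal call failed; open-jaw rolled back")
        return {"status": "ready", "internal": results}  # closes the open jaw

    print(open_jaw_call({"tunnel": "end-to-end"})["status"])  # -> "ready"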
Paths 33-P100, 33-P200, 33-P300, 33-P202, 33-P502, 33-P206, and 33-P506 represent communications between GVN devices which have a peer-pair and therefore privileged relationship with each other. EPD 33-100, SRV_AP 33-300, and Other Device 33-502 may be coupled with file storage 33-60, 33-62, and 33-64 and databases 33-50, 33-52, and 33-54.
There exists a circular pattern of peer-pair communication illustrated from SRV_CNTRL 33-200 to EPD 33-100 via 33-P100, to SRV_AP 33-300 via 33-P300, or to other devices 33-502 via 33-P502. The EPD 33-100 communicates with SRV_CNTRL 33-200 via 33-P200, with SRV_AP 33-300 via 33-P202, and with other devices 33-502 via 33-P502.
In some instances, there may be a loop of information shared between devices such as in the case when an EPD 33-100 may request information via 33-P200 from SRV_CNTRL 33-200 which is sent back to EPD 33-100 via 33-P100.
In other instances, one device may report information relevant to other devices, such as an SRV_AP 33-300 reporting via 33-P202 to SRV_CNTRL 33-200, which then sends information via 33-P100 to EPDs 33-100, or via 33-P502 to other devices 33-502.
In yet other instances, a full loop may not be required, such as the sending of log information from a device such as an EPD 33-100 to SRV_CNTRL 33-200 via 33-P200; there is no need to forward this information onward. However, logging information may at a later time be moved from the repository on SRV_CNTRL 33-200 to a long-term log storage server 33-502 via 33-P502.
Direct link 33-PT02 is between devices EPD 33-100 and SRV_AP 33-300. Direct link 33-PT08 is from SRV_AP 33-300 to other devices 33-502. Direct links involve communications between devices which do not need involvement of SRV_CNTRL 33-200.
The PUSH info 33-208 from SRV_CNTRL 33-200 could be an RSS feed or other type of information publishing via 33-P208. The API queries 33-206 from SRV_CNTRL 33-200 could be either a traditional API transaction or a RESTful API call, with a request made via 33-P206REQ and a response received via 33-P206RESP. The PUSH 33-208 and API queries 33-206 are presented to illustrate devices which do not share peer-pair relationships, action code or definitions (e.g., the action code and/or definition has not been obtained, or is obsolete), privileged status, and/or similar systems architecture with GVN devices.
Data is stored in databases: DB 33-50 for EPD 33-100, DB 33-52 for SRV_AP 33-300, DB 33-54 for other devices 33-502, DB 33-58 for SRV_CNTRL 33-200, and DB 33-56 for SRV_BBX 33-500. Furthermore, two types of file storage are described herein: HFS, hierarchical file storage on hardware hosted on a device for its own internal access; and PFS, parallel file systems which are standalone and offer RDMA access. PFS 33-510 represents PFS file storage on another device in another location via RDMA (remote) access.
34-P500 is a region-to-region connection between global nodes by an international or cross-regional link to connect IDC 1 34-002 with IDC 3 34-006. SRV_CNTRL 34-200 servers are in a multiple-master topology with equivalent operation when interacting with various devices. A key feature is the aggregation topology, where a mesh of SRV_AP 34-200, 34-202, 34-210, and 34-212 across multiple data centers in regional clusters is linked via paths 34-P200, 34-P202, 34-P210, and 34-P212 to a common SRV_BBX node 34-500, which is connected to another SRV_BBX 34-506 in another region, itself a long-distance transport aggregation point for SRV_AP 34-220 and 34-222 via paths 34-P220 and 34-P222. Device operation and collaboration is via API paths, such as from SRV_AP 34-212 to SRV_CNTRL 34-200 via path 34-API-08.
The level two logic layer 35-L200 analyzes and adjusts connectivity over the level one network layer 35-L100 to best weave together the various layer one fabrics, optimized for the GVN. Peering points of fabrics and level one base connectivity are 35-S00, 35-S02, 35-S04, and 35-S06. Interaction between 35-L200 and 35-L100 is via 35-LC0102, and interaction between 35-L300 and 35-L200 is via 35-L0203. Seams between base fabrics 35-S00, 35-S02, 35-S04, and 35-S06 are managed by level two 35-L200 such that the traffic of one fabric can flow over a different fabric.
Base internet fabrics 35-100 to 35-102 can be IPv4, IPv6, IB, IPv4/IPv6, or another network type. The path through L300 is the GVN layer visible to clients. L100 represents the physical network layer for the various network segments end-to-end. L200 is the layer where the tapestry is constructed via logic, integration, address mapping, routing, and other techniques.
The tunnel is over-the-top (OTT) of other base connections and these paths represent network fabric types when available such as 36-OTT00→Internet Protocol version 4 (IPv4) which is the most ubiquitous, 36-OTT02→Internet Protocol version 6 (IPv6), 36-OTT06→InfiniBand (IB), 36-OTT08→Other—some other network type or a combination of fabrics such as IPv4/IPv6 enabled fabric over network segments.
TUN1 36-T00 represents a tunnel (or bridge) built between the two devices over-the-top (OTT) of the internet. It could be one of 36-OTT00, 36-OTT02, 36-OTT06, or 36-OTT08 end-to-end, or it could be OTT of a combination of various different fabrics in a chain of network segments.
36-P00 is IPv4 fabric within the tunnel, 36-P02 is IPv6 fabric within the tunnel, 36-P04 is RoCE or encapsulated RDMA over IP Ethernet, 36-P06 is IB over IP (IBoIP) or another similar protocol, and 36-P08 can also be a combination, such as IPv4 and IPv6, or other. The key point is end-to-end fabric through the tapestry over the GVN, over any other fabric or chain of various other network fabrics. Devices located either in the LAN at EPD 36-100 or in the cloud at SRV_AP 36-300 see the network end-to-end as the fabric which is run through the tunnel, regardless of the underlying base connection.
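A minimal sketch of this end-to-end-fabric property follows (Python; the Segment and Tunnel types are hypothetical constructs invented for illustration, not part of the specification):

    # Illustrative sketch: a tunnel carries one end-to-end inner fabric over
    # whatever base fabrics underlie each network segment.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        base_fabric: str      # e.g. "IPv4", "IPv6", "IB"

    @dataclass
    class Tunnel:
        inner_fabric: str     # fabric seen by endpoints, e.g. "IPv6" or "RoCE"
        segments: list

        def end_to_end_fabric(self):
            # Endpoints see only the inner fabric, regardless of the bases.
            return self.inner_fabric

    tun1 = Tunnel("IPv6", [Segment("IPv4"), Segment("IB"), Segment("IPv4")])
    print(tun1.end_to_end_fabric())   # -> "IPv6", end to end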
For example, IPv6 37-102 can enter the Network Tapestry 37-300 via path 37-P102 and exit at fabric via path 37-P112 to IPv6 37-112, regardless of which type of fabric is in the middle that the GVN is running over the top of.
These various fabrics through the GVN can run in parallel alongside the other fabrics, with an ingress or entry point and an egress or exit point.
Paths from one point to another over the internet will typically transit more than one type of fabric. The GVN automatically analyzes and weaves together many different network fabrics into a network tapestry. This permits client devices to have parallel sets of consistent end-to-end fabrics of their choice over-the-top of a variety of diverse fabric segments. The GVN is a first-degree OTT (expressed as OTT1) over the base network such as the internet, and second-degree OTT (OTT2) constructs can be built over top of the GVN.
The network tapestry allows, for example, IPv6 between EPD 38-120 and a server 38-126, even though from EPD 38-120 to SRV_AP 38-320 the base connection 38-000 may be over IPv4, because the IPv6 within the tunnel is encapsulated. From the client's perspective it will be IPv6 end-to-end from origin to destination along the network path. The underlying network segments weaved together constitute a tapestry of IPv4 and IPv6 fabrics, with potentially other protocols such as IB weaved in.
An EPD knows which SRV_APs it can connect with by utilizing a server availability list produced specifically for that EPD based on testing and on load balancing which takes into account current and predicted demand from other EPDs, and on other factors considered by the server availability mechanism 39-222.
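For illustration, a sketch of how such a server availability list might be ranked follows (Python; the scoring weights and field names are hypothetical assumptions, not the actual mechanism 39-222):

    # Illustrative sketch: rank candidate SRV_APs for one EPD from test results
    # and current load. Lower score is better.

    def availability_list(candidates):
        """candidates: list of dicts with latency_ms, loss_pct, load_pct."""
        def score(c):
            return c["latency_ms"] + 100 * c["loss_pct"] + 0.5 * c["load_pct"]
        return [c["name"] for c in sorted(candidates, key=score)]

    servers = [
        {"name": "SRV_AP-300", "latency_ms": 18, "loss_pct": 0.0, "load_pct": 40},
        {"name": "SRV_AP-302", "latency_ms": 12, "loss_pct": 0.5, "load_pct": 85},
    ]
    print(availability_list(servers))  # best-first list for this EPD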
Therefore, each device must function according to its role. An EPD which will connect with an access point server (SRV_AP) should have multiple options with respect to building or rebuilding tunnels, and stormy weather mode helps it deal with challenging network conditions. For EPD devices to connect with both hosts and peers, middle devices, core junctions, and others need to coordinate actions based on shared information.
A key feature for selecting the best path type based on the data being handled is that testers 39-118 and builders 39-110 work with the tunnel manager 39-210 and advanced smart routing 39-228. The related firewall and security monitor 39-0140 and other modules 39-160 working at layer one 39-GVN-1 provide support to the testers and builders. The traffic and bandwidth analyzer 39-258 and connectivity analysis 39-288 provide information which is used by the traffic and bandwidth logger 39-328, and more. The EPD has a tunnel tester 39-322, as does the SRV_AP 39-312, because network path analysis should provide insight into both directions. This approach helps to detect problems with peering, bottlenecks, routing, or other issues which may occur in one direction but not in the other direction of data flow.
Different types of content flow, for example a click versus content serving (images) versus a video stream or a large data file, differ in their QoS requirements, and each of these can be handled differently.
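A sketch of such traffic-type-to-path matching follows (Python; the profile table and path labels are hypothetical examples only):

    # Illustrative sketch: match traffic type to QoS needs, then to a path type.

    QOS_PROFILES = {
        "click":   {"latency": "lowest", "bandwidth": "low"},
        "images":  {"latency": "low",    "bandwidth": "medium"},
        "stream":  {"latency": "steady", "bandwidth": "high"},
        "bulk":    {"latency": "any",    "bandwidth": "highest"},
    }

    def pick_path(traffic_type):
        profile = QOS_PROFILES[traffic_type]
        if profile["latency"] == "lowest":
            return "shortest-latency tunnel"
        if profile["bandwidth"] == "highest":
            return "backbone / slingshot path"
        return "best general-purpose tunnel"

    print(pick_path("click"))   # -> "shortest-latency tunnel"
    print(pick_path("bulk"))    # -> "backbone / slingshot path"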
To build a dynamic system which constructs a pathway through a tunnel or series of joined tunnels 39-T01 to 39-T02 to 39-T03 at layer three 39-GVN-3, information is used not just to maintain connectivity between EPD 39-100 and SRV_AP 39-300 via 39-T01, between SRV_AP 39-300 and SRV_AP 39-302 via 39-T02, and between SRV_AP 39-302 and EPD 39-102 via 39-T03, but also to offer the best possible bandwidth, the lowest possible latency, and other improvements.
Enhanced security is provided by auto-built multiple tunnels between EPDs and SRV_APs and between other devices, utilizing tunnels within tunnels, automated secure boot at startup, and a dynamic tunnel manager capable of on-the-fly configuration, setup, adjustments, and more. These also lead to productivity gains through better connectivity and can provide secure network optimization, improved routing, and more. Other functionality is triggered both by heartbeat cycles and by scheduled maintenance times and events. This functionality includes testing, logging, and analysis of connectivity with automated healing. An understanding of the stitching together of various types of networks into a network tapestry provides a multi-protocol set of multiple fabrics weaved together at the base internet layer one 39-GVN-1 and in any end-to-end path inside the tunnel 39-GVN-3. Testing can analyze the performance of LAN to GVN at both ends of the tunnel, 39-CTN140 and 39-CTN240, and can also compare and contrast the performance and fitness of GVN 39-CTN340 versus internet 39-CPT340 transregional sections of segments.
The advantages of the OTT over the base internet connection, from a client's location at EPD 40-100 to the first SRV_AP 40-300, SRV_AP 40-302, or SRV_AP 40-304, are that the client can use their regular line, at a lower cost than a dedicated solution, with multiple options from which to enter the GVN. Although the EPD 40-100 is connecting over the same internet line, TUN 40-T00, TUN 40-T02, and TUN 40-T04 may offer different quality of service (QoS) because of routing factors, congestion, peering, capacity of pipes in the middle, and other factors; therefore, multiple options improve overall QoS by providing alternatives. These TUNs can also offer different base fabrics on top of which internal fabrics can operate OTT. For example, native InfiniBand (IB) at GVN layer three 39-GVN-3 will run most efficiently on top of IB at layer one 39-GVN-1.
The GVN is delivered as a service over the top (OTT) of a base connection, to aggregation points, to the backbone, and OTT over other fabrics, with automation including multi-layer, multi-step best-path analysis via advanced smart routing (ASR), and more functionality. The more available options, the better.
The EPD 40-100 is in one location 40-M0; SRV_AP 40-300, SRV_AP 40-302, and SRV_AP 40-304 are in region 40-M2; and SRV_AP 40-310, SRV_AP 40-312, and SRV_AP 40-314 are in region 40-M3.
Because of the nature of the construction of pathways at layer three 39-GVN-3, there exists a need to mitigate the risk of looping, to prevent wrong geographic destination routing and ASR remote-redirect backtracking, and to test for, note, and address broken links between SRV_APs, between regions, and other problems.
This diagram also demonstrates the mapping of various egress-ingress points (EIP) such as 40-510, 40-512, and 40-514, both as destinations for GVN traffic to find internet fabrics beyond the GVN, and as routing starting points for traffic entering the GVN from those locations to be routed via layer three 39-GVN-3 to other locations, such as LAN 40-000 via EPD 40-100, or to other destinations available via the GVN.
Path selection is therefore based on QoS factors, fabric type at layer one 39-GVN-1, capacity vs current load, contextual mapping based on a device and its path options, and other fixed and dynamic factors.
Tests can be run in sequential order or in parallel from junction 41-020.
After testing, other processes are run to clean up and free resources 41-300. At the end of testing, log test results 41-320 saves pertinent information for reference both by the device running the tests and for analysis by a central control server (SRV_CNTRL). This information can be utilized when building contextual dynamic lists of servers with which a device can connect, constituting a server availability list that takes into account test results as well as the mapping of route options for GVN path constructs.
In the instance that a tunnel test 42-110 returns poor results but a test of an alternative tunnel 42-130 shows better connectivity, traffic load can simply be shifted to the better of the two.
It is also crucial to monitor the network use of current users 42-160, for a few reasons. One reason is that performance measurements of tests need to take into account current network load, because the test will be sharing the bandwidth of the line and therefore may produce a falsely low BW measure against expected line capacity. If a connection has a BW of 20 Mbps and users are consuming 15 Mbps of that BW during a test, it is reasonable to assume that the test will not yield more than 5 Mbps, because that is all that is available to it. Another reason to monitor concurrent use is to utilize that information to set parameters for tests such that the testing itself does not impede, slow down, or otherwise interfere with QoS for clients currently using the network.
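The bandwidth arithmetic described above can be sketched as follows (Python; the function names and the 0.8 margin are hypothetical assumptions):

    # Illustrative sketch: interpret a bandwidth test in light of concurrent
    # client use, so a shared line does not produce a falsely low reading.

    def expected_test_ceiling(capacity_mbps, in_use_mbps):
        # A 20 Mbps line with 15 Mbps in use cannot test above ~5 Mbps.
        return max(capacity_mbps - in_use_mbps, 0.0)

    def test_is_healthy(measured_mbps, capacity_mbps, in_use_mbps, margin=0.8):
        # Compare the measurement against what was actually available,
        # not against the nominal line capacity.
        ceiling = expected_test_ceiling(capacity_mbps, in_use_mbps)
        return measured_mbps >= margin * ceiling

    print(expected_test_ceiling(20.0, 15.0))   # -> 5.0
    print(test_is_healthy(4.5, 20.0, 15.0))    # -> True, not a false low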
All results are shared with SRV_CNTRL 42-280 so that granular test results can be aggregated per device and also by region, system-wide, etc., so that they can be analyzed and utilized in the future.
The B level B1 43-210, B2 43-220, B3 43-230, B4 43-240, and B5 43-250 are the first connections OTT of base internet connection. The performance of paths 43-P210, 43-P220, 43-P230, 43-P240, and 43-P250 can be compared and contrasted to determine best path from a set of available paths. QoS can also factor fabric and protocol type when determining best path based on most optimal conditions.
The C level, C1 43-302 through C15 43-330, consists of long-distance connections chosen based on data type, QoS, and the relative QoS of currently available alternative connections and paths through the GVN. C-level connections are reached via the B level, which all connect with the A level as a starting point.
This example embodiment can be used to describe the multi-step options available to advanced smart routing (ASR) when plotting the best route for a traffic type, also taking into account the best route based on path quality (QoS) from testing.
There are other embodiments such as a visual mapping to plot route options, to use as a framework for testing and other uses.
Actions to take could include how to handle detected packet loss 45-P310, which calls for multi-streaming of duplicate content 45-310; or, for example, if there is a problem with the base connection 45-P340, adjusting settings 45-340 at layer one of the GVN 39-GVN-1; or, if there are segment issues 45-P380, the remedy of adjusting protocol settings 45-390; and more.
Notification can also be triggered in at least two instances. First, if a problem is detected 45-200 but not identified, logic follows path 45-P300; if the base connection is up but the problem remains elusive, then support can be notified 45-240. Another example of notification is if bandwidth use is at or above capacity 45-P350; then the administrator can be notified 45-350 of this condition. There are also other events which may trigger notification.
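A sketch of this condition-to-action mapping follows (Python; the condition keys and action strings are illustrative paraphrases of the branches above, not actual system identifiers):

    # Illustrative sketch: map detected conditions to remedial or notification
    # actions, mirroring the branches described in the text.

    def remediate(condition):
        actions = {
            "packet_loss":     "multi-stream duplicate content",    # 45-310
            "base_conn_issue": "adjust settings at GVN layer one",  # 45-340
            "segment_issue":   "adjust protocol settings",          # 45-390
            "over_capacity":   "notify administrator",              # 45-350
        }
        # An elusive, unidentified problem falls through to support (45-240).
        return actions.get(condition, "notify support")

    print(remediate("packet_loss"))       # -> "multi-stream duplicate content"
    print(remediate("unknown_problem"))   # -> "notify support"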
Logging is done both of the tests 45-110 and of the remedial actions taken if a problem was detected 45-410. These logs can be replicated to a central control server (SRV_CNTRL) for analysis and future utilization.
This example embodiment further describes same or different protocols in other regions demonstrating the weaving together of various fabrics into a network tapestry. The quality of these connections is also measured. Connectivity quality of service (QoS) from EPD 46-100 to Local Internet 46-000 is measured by QoS ISP 46-802. The performance of the tunnel is measured by QoS TUN OTT ISP to GVN 46-806. Connectivity through the GVN beyond SRV_AP 46-200 is measured by QoS GVN 46-808.
Analysis of the quality of connection across the various path-type options through the GVN can be utilized to determine the best path for traffic based on matching fabric type to data type, size, QoS requirements, and other factors. The more fabrics that are understood and weaved together, the more fabric-type options are afforded by the tapestry.
Further features described are the fabrics available along this network path 47-CPT300. An internet protocol version four (IPv4) path 47-400 is illustrated by segments from 47-P402 to 47-P428. An internet protocol version six (IPv6) path 47-600 is illustrated by segments from 47-P612 to 47-P628. A combination IPv4 and IPv6 path 47-500 runs from segment 47-512 to 47-520. A reciprocal slingshot mechanism into a Slinghop is described by path 47-800. A Slinghop integrated into and combined with an IPv4 path is demonstrated by combo path 47-900.
Automated mapping of segments and understanding section options allows for the most efficient weaving together of various network fabrics into a tapestry. Automated tests examine and evaluate all routes, including segments on the base path at level one of a GVN 39-GVN-1, and also inside the GVN Tapestry at level three of the GVN 39-GVN-3.
While methods exist to run one type of network over another type of base network segment through encapsulation or other techniques, these may be inconsistent across multiple diverse segments on the internet; therefore the GVN level two 39-GVN-2 must be able to step between network path fabric types when needed. For example, IPv6 can be encapsulated over 47-P402 through 47-P408, then run over native IPv6 via 47-P510, then on to 47-512 through 47-520, and then via 47-P622 to 47-P628.
The multi-dimensional over-the-top construct between EPD 48-100 and access point server (SRV_AP) 48-300 is built OTT of a combined IPv4 and IPv6 pathway, with the GVN building an IP tunnel 48-112 between them, and through that tunnel a connected pathway 48-122 built over top of it.
This topology further extends the edges of the LAN 48-000 past the EPD 48-100 and into the cloud as a LAN extension in the cloud 48-322. This mechanism can also pull a cloud node into the EPD 48-100, which then acts as a local node for cloud services to be hosted via an APP or other GVN functionality.
Other advantages can be realized via this kind of tapestry construct.
A tunnel or other type of network path between two access point servers (SRV_AP) can be IP over-the-top (OTT) of the base internet, a long-haul line, or another type of Ethernet, via path 49-P308 between SRV_AP 49-300 and SRV_AP 49-310. This segment is measured and analyzed by section ETH 49-020.
It also demonstrates a path option between two backbone exchange servers (SRV_BBX) 49-500 and 49-510, via path 49-P500 to IBX cluster 49-038, then via path 49-P510 to SRV_BBX 49-510. This segment is measured and analyzed by section IB 49-028.
This example embodiment further demonstrates multiple SRV_AP servers in IDCs in Region A 50-608 and in Region B 50-618, which offer redundancy, multiple paths, and high-availability "front-line" resources for EPDs to have connectivity options governed by server availability.
In this embodiment, SRV_BBX 50-500 and SRV_BBX 50-510 act as aggregation points for their respective regions and also as cross-regional global nodes offering enhanced connectivity pathways to other regions' global nodes and the devices there.
If connections 51-300 are not ideal, path checking and testing restarts via path 51-P102. If conditions are ideal 51-P380, the results are logged 51-380, and then path 51-P022 restarts the process at 51-020. It waits until the next time cycle 51-040 and, if it is time 51-P100, it starts again 51-100.
The RDMA over IB OTT2 fabric construct is built upon a construct which is itself OTT of the GVN's OTT1 layer.
This figure extends the edge of the RDMA fabric so that it is connected via 52-P608 as native RDMA fabric 52-P638. Authentication at the edge can be based on a number of factors at the application layer rather than at the network layer. These factors can toggle whether the device is discoverable, and whether reads, writes, and/or other operations are allowed on the device, the drive, the folder, the file, etc.
Maximum communications optimization is achieved for traffic via integration points from the GVN to an InfiniBand-connected backbone exchange server (SRV_BBX). An SRV_BBX parallel file system (PFS) allows RDMA availability for file managers on SRV_APs, both locally and via IB transport.
Another embodiment can be, for example, one PFS instance 53-800 in a client's LAN A 53-102 behind an EPD 53-100, linked to two other PFS instances "in the cloud" 53-802 and 53-812. The pathway connecting these three PFS devices through the GVN can be native RDMA, as a constructed fabric within the greater GVN tapestry, regardless of base network connectivity and in parallel with other constructed fabrics through the GVN.
This example embodiment further illustrates the application of the network tapestry to offer native RDMA through GVN tunnels between various end points over top (OTT) of various different network fabrics.
Devices in the LAN 54-000 can access files which are physically stored on PFS file storage devices such as 54-600 and/or 54-610 via RDMA, as if they were locally and directly connected to the PFS devices. File synchronization and transfer replication between regions can also occur via path 54-P510.
It also demonstrates how each server has a hierarchical file system (HFS) attached to it; for example, access point server (SRV_AP) 55-300 contains HFS file storage device 55-308, and backbone exchange server (SRV_BBX) 55-500 contains HFS 55-508, etc.
The two SRV_BBX servers 55-500 and 55-510 are connected via path IBB 55-580, which refers to an internet backbone, a fiber connection, or other connectivity between two regions. Each SRV_BBX is connected to one or more SRV_APs; for example, SRV_BBX 55-510 is linked with SRV_AP 55-310. Each SRV_BBX is connected to a native InfiniBand (IB) cluster in its region, such as IB Cluster 55-550 connected with SRV_BBX 55-500 via path 55-P500. This IB Cluster 55-550 provides logical network pathway access to PFS devices 55-552, 55-556, and 55-558. IB Cluster 55-560 similarly provides access to PFS devices 55-568, 55-566, and 55-562.
This topology, as a second-degree over-the-top construct (OTT2), allows for native RDMA paths which are cross-regional and cross-fabric, regardless of the network fabrics at the base.
File paths (FP) 56-FP102 and 56-FP108 are for file access to HFS 56-102 or to PFS 56-108 respectively; each is a combination of device type, device ID, and folder ID where the physical file is located.
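For illustration, such a locator could be composed as follows (Python; the path format and example values are hypothetical, not the actual on-disk layout):

    # Illustrative sketch: compose a file locator from device type, device ID,
    # and folder ID, as described above.

    def file_locator(device_type, device_id, folder_id, filename):
        # Hypothetical format: DEVICE_TYPE/DEVICE_ID/FOLDER_ID/FILENAME
        return f"{device_type}/{device_id}/{folder_id}/{filename}"

    print(file_locator("HFS", "56-102", "F-0007", "report.bin"))
    print(file_locator("PFS", "56-108", "F-0042", "clump.dat"))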
Other tables related to the files table 56-202, such as file association 56-204, servers 56-210, and users 56-206, can relate to files. There may be more or fewer tables in an implementation.
The key point is that the GFM 56-302 at the usage layer 56-300 has indexed and organized the information stored in tables at the abstraction layer 56-200, containing extensive information about each file and about where files are stored on devices at the physical layer 56-100.
Each GFM is responsible for keeping track of the files stored on the hierarchical file storage (HFS) devices contained within its device; for example, the SRV_AP GFM 57-300 keeps track of files stored on HFS 57-306, the SRV_BBX GFM 57-500 keeps track of files stored on HFS 57-506, etc.
Each GFM on every device reports information about its files to the CGFM on the SRV_CNTRL 57-200 via API paths 57-200300, 57-200500, and 57-200510. Conversely, the CGFM also utilizes the aforementioned API paths to replicate file storage and location information to all devices.
Furthermore, when files are stored, modified, deleted, or otherwise managed on parallel file system (PFS) devices such as 57-800, 57-802, 57-806, 57-810, 57-812, and 57-816, the file information is also conveyed to the CGFM 57-200, which in turn replicates this information to all devices.
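A minimal sketch of this report-and-replicate cycle follows (Python; the data structures and function names are hypothetical assumptions, not the actual GFM/CGFM implementation):

    # Illustrative sketch: each device's global file manager (GFM) reports its
    # catalog to the central GFM (CGFM), which replicates the merged index back
    # to every device so all devices know where all files are stored.

    def report_to_cgfm(central_index, device_id, local_catalog):
        central_index[device_id] = dict(local_catalog)

    def replicate_to_all(central_index, devices):
        merged = {dev: dict(cat) for dev, cat in central_index.items()}
        for device in devices:
            device["mirror"] = merged    # every device learns all locations

    central = {}
    report_to_cgfm(central, "SRV_AP-300", {"fileA": "HFS 57-306"})
    report_to_cgfm(central, "SRV_BBX-500", {"fileB": "HFS 57-506"})
    devices = [{"name": "EPD"}, {"name": "SRV_AP-300"}]
    replicate_to_all(central, devices)
    print(devices[0]["mirror"]["SRV_BBX-500"])   # -> {'fileB': 'HFS 57-506'}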
Also indicated are file transfer path 57-FP300 between SRV_BBX 57-500 and SRV_AP 57-300, and also file transfer path 57-FP500 between SRV_BBX 57-500 and SRV_BBX 57-510.
Connectivity between EPD 58-100 and SRV_AP 58-300 can be via paths 58-CP02, or 58-TP00 to 58-TP02 or between SRV_BBX 58-D550 and 58-D500 via backbone path 58-BB0.
The SRV_BBX servers allow for the geographic destination mechanism to leverage the network tapestry to realize high speed, long distance file availability via PFS as opposed to chained caching (only) client-server transfer technologies and/or other methods.
When a client has to fetch a multitude of files, such as tens to more than a hundred individual files, while also managing the flow of streaming data, the problems of distance can be compounded significantly.
The retrieved files are passed to the cache manager 59-D330 on the SRV_AP 59-300 where they are catalogued and clumped together into one large file 59-700 which can be saved to either parallel file system (PFS) 59-508 or PFS 59-558.
This list of catalogued files is passed to the EPD 59-100, to be utilized both by the cache manager 59-D130 to de-clump and check the files and, upon successful validation, by the content delivery agent (CDA) 59-D120 to serve the files to clients. The files 59-610, 59-612, 59-616, and 59-618 are served from the EPD 59-100 to the requesting client as if they were being served by the source servers.
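The clump/de-clump flow can be sketched as follows (Python; the catalog format and file contents are hypothetical):

    # Illustrative sketch of the clump/de-clump flow: many small files are
    # concatenated into one large file with a catalog, transferred once, then
    # split back into the individual files at the destination.

    def clump(files):
        """files: dict of name -> bytes. Returns (catalog, blob)."""
        catalog, blob, offset = [], b"", 0
        for name, data in files.items():
            catalog.append({"name": name, "offset": offset, "size": len(data)})
            blob += data
            offset += len(data)
        return catalog, blob

    def unclump(catalog, blob):
        return {e["name"]: blob[e["offset"]:e["offset"] + e["size"]]
                for e in catalog}

    files = {"a.css": b"body{}", "b.js": b"fn()", "c.png": b"\x89PNG"}
    catalog, blob = clump(files)             # done by the cache manager
    assert unclump(catalog, blob) == files   # done at the EPD before serving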
This geographic mechanism, in conjunction with other elements of a GVN, provides the effect of a reverse CDN, bringing remote sites to the client at local-performance QoS such as low latency and high BW.
From EPD 61-100, the base connectivity path OTT is via paths 61-P022 to a point of presence (POP) 61-022 to the internet 61-020 to the POP 61-024 of the SRV_AP 61-300.
From EPD 61-110, the base connectivity path OTT is via paths 61-P032 to a point of presence (POP) 61-032 to the internet 61-030 to the POP 61-034 of the SRV_AP 61-300. This could also point to another SRV_AP not illustrated herein which could be linked to SRV_AP 61-300.
The transit path 61-P026 from POP 61-024 to SRV_AP 61-300, and on to POP 61-034 via 61-P036, could be the path through the internet, through the SRV_AP, or bypassing the SRV_AP and relying on the routing of the public network. If the EPD 61-100 wants to connect to EPD 61-110 via the internet, it may follow a different route based on policies outside the control of the GVN or of either EPD.
EPD 61-100 builds a tunnel TUN 61-T00 between itself and SRV_AP 61-300. EPD 61-110 also builds a tunnel TUN 61-T10 between itself and SRV_AP 61-300. One or both of these tunnels may or may not be encrypted or secured.
There can also be another tunnel, internal tunnel INT TUN 61-T20, running through both of the other tunnels and joined at the SRV_AP 61-300, through which traffic can flow. This tunnel can be the communications path through which the WAN is built connecting EPD 61-100 to EPD 61-110.
The key point is that the in-tunnel connectivity and the base connection can each be a different network protocol. The network tapestry afforded by the GVN can be a blend of different network protocols mapped to a chain of various network segments, while concurrently the GVN can offer one network type end-to-end as an over-the-top fabric within the internal tunnel.
The paths 62-P600 to 62-600 to 62-P602 and 62-P610 to 62-610 to 62-P612 are for IP OTT of the internet. The paths via 62-600 are for end-to-end file transfer, and the paths via 62-610 utilize chained caching of the file to take advantage of hyper-high speeds at the backbone, bringing a file to a storage device as close as possible to the requesting client for a pull, or to the recipient device for a push.
The path 62-P500 connects backbone exchange server (SRV_BBX) 62-500 to SRV_AP 62-300.
The path 62-P510 connects backbone exchange server (SRV_BBX) 62-510 to SRV_AP 62-310.
The paths 62-P800 to 62-800 to 62-P802 and 62-P810 to 62-810 to 62-P812 are for native InfiniBand (IB) over dark fiber or an equivalent private line, over top of which IP and/or RDMA can flow. Paths via 62-800 are for direct RDMA access to files on the PFS server where they are stored. Paths via 62-810 involve the cloning of files from the source PFS device to another PFS device in another region.
Traffic takes the most advantageous path, with the flow decision based on traffic type matched to the most appropriate path type: the best flow for each kind of data via the best path type, then down the best "current" route through the GVN. This is a double benefit.
FW 63-400 and FW 63-410 protect the internal IP communication paths 63-P300 and 63-P310 between access point server (SRV_AP) 63-300 to backbone exchange server (SRV_BBX) 63-500, and SRV_AP 63-310 to SRV_BBX 63-510 respectively.
Another protection is that paths 63-P100, 63-P300, 63-P110, and 63-P310 are internet protocol (IP) while paths 63-P500, 63-P510, and 63-P528 are InfiniBand (IB). This physical protocol jump, in addition to the firewalls, provides a gap that makes contamination between IP and IB logically impossible.
SRV_BBX 64-500 acts as a common gate for SRV_APs in Region A 64-000, such as SRV_AP 64-300.
SRV_BBX 64-510 acts as a common gate for SRV_APs in Region B 64-010, such as SRV_AP 64-310. The SRV_AP and SRV_BBX in the same region could be located in the same internet data center (IDC) or in other IDCs in the same region, connected by fast links.
A secure file system layer using RDMA over IB between SRV_BBX 64-500 and 64-510 can provide ultra-fast access to files stored on parallel file system (PFS) devices managed by a global file system (GFS).
The physical ports ETH0 65-102, ETH1 65-106, and ETH2 65-108 correspond with network plugs on the back plate of the EPD 65-100. ETH0 65-102 connects with the last mile connection between the EPD 65-100 and the internet, provided by the internet service provider (ISP). ETH0 65-102 connects via path 65-P022 to a point of presence (POP) 65-022 and from there to the internet 65-020 and beyond.
Tunnels TUN0 65-310 and TUN2 65-312 run over-the-top (OTT) of the last mile connectivity over and through ETH0 65-102.
ETH1 65-106 connects with LAN A 65-050 and ETH2 65-108 connects with LAN B 65-060.
Both ETH1 65-106 and ETH2 65-108 are aggregated as LAN connections within the EPD 65-100 at bridge BR0 65-104.
Routing is applied at each of a chain of virtual interfaces (VIF). Traffic flows from BR0 65-104 to VIF0 65-120, where routing table matches go through TUN0 65-310. Addresses which are not matched are passed to VIF1 65-122, where routing table matches push traffic to TUN2 65-312. The remaining unmatched addresses go to VIF2 65-126, which egresses via path 65-P022.
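A sketch of this chained VIF routing follows (Python; the prefixes and the chain table are hypothetical assumptions, and the interface and tunnel labels merely echo the reference numerals above):

    # Illustrative sketch: each VIF matches what it can against its routing
    # table and passes unmatched traffic down the chain to a default egress.

    import ipaddress

    CHAIN = [
        ("VIF0", ["10.1.0.0/16"], "TUN0 65-310"),
        ("VIF1", ["10.2.0.0/16"], "TUN2 65-312"),
        ("VIF2", [],              "egress via 65-P022"),   # default route
    ]

    def route(dst_ip):
        dst = ipaddress.ip_address(dst_ip)
        for vif, prefixes, out in CHAIN:
            # An empty prefix list acts as the catch-all default.
            if not prefixes or any(dst in ipaddress.ip_network(p)
                                   for p in prefixes):
                return vif, out
        raise ValueError("unreachable")

    print(route("10.2.5.9"))    # -> ('VIF1', 'TUN2 65-312')
    print(route("8.8.8.8"))     # -> ('VIF2', 'egress via 65-P022')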
Physical fabrics are tested and managed at each of the various physical interfaces. Over the top fabrics are constructed on top of these physical interfaces and these constitute a global virtual network (GVN). The various fabrics are weaved together into a network tapestry.
It describes the logical construct of layers for an end point device (EPD) 66-100, an access point server (SRV_AP) 66-200, and a backbone exchange server (SRV_BBX) 66-500. It also demonstrates the physical network interface cards (NIC) on each of these devices, such as Ethernet NIC 66-M0 on EPD 66-100; Ethernet NIC 66-M1, IB NIC 66-N1, and Ethernet NIC 66-M2 on SRV_AP 66-200; and ETH NIC 66-M3 and IB NIC 66-N2 on SRV_BBX 66-500.
Connectivity between ETH NIC 66-M0 on EPD 66-100 and ETH NIC 66-M1 on SRV_AP 66-200 is via path Ethernet 66-000. Connectivity between SRV_AP 66-200 and SRV_BBX 66-500 is via either Ethernet path 66-010 or InfiniBand 66-020, providing one or the other as network connectivity options. IB NIC 66-N2 can also connect via InfiniBand path 66-030 to SRV_BBX 66-510 in another region.
It further demonstrates how the base layer can be predicated upon an InfiniBand (IB) NIC 67-N2. The RDMA layer 67-R2B correlates with Internet 67-T2, internet protocol (IP) over IB (IPoIB) 67-R3C correlates with Transport 67-T3, and GVN IB 67-R4D correlates with Application 67-T4.
There are also five levels of the GVN described which correspond with the three layers noted above.
GVN Level 1 68-L100 is the base network layer. GVN Level 3 68-L300 is the internal pathway through which optimized traffic flows. GVN Level 2 68-L200 is the logic layer between Level 1 68-L100 and Level 3 68-L300; this logic layer is where testing, analysis, mapping, routing, adjusting, encapsulating, securing, and other operations are executed to ensure the best performance of Level 3 68-L300 over the various options presented by Level 1 68-L100.
GVN Level 5 68-L500 is the internal pathway of a constructed element built over-the-top of the GVN internal pathway at Level 3 68-L300, which itself is built over-the-top of the base network layer Level 1 68-L100. GVN Level 4 68-L400 is the logic layer between Level 5 68-L500 and Level 3 68-L300, and it entails understanding the options available through the GVN, with similar testing, analysis, and other operations. Of specific focus are the peering points, the stepping up and down between OTT levels, mapping, protocols, and end-to-end pathway options with respect to the most appropriate and efficient stitching together of segments in the middle of the path.
This example embodiment can relate directly with
Local GVN 48-112, GVN on AP 48-312, and Local GVN 48-116 are all at GVN Level 3 68-L300. This layer is where performance and routing are focused on providing options for the GVN.
Local Cloud Node 48-122, LAN extension in Cloud 48-322, and Local Cloud Node 48-128 are all at GVN Level 5 68-L500. These represent the construct through the GVN.
There are two types of network interface cards indicated on the SRV_BBX, an Ethernet IP NIC 69-506 and an IB NIC 69-510, corresponding with these different network protocols based on differences in hardware (HW).
System software 69-130, 69-330, 69-230, and 69-530 constitutes the fabric logic of the GVN to create the network tapestry.
There are also communication paths indicated such as:
69-P200↔69-P430↔69-P500—API between SRV_BBX 69-500 and SRV_CNTRL 69-200.
69-P510↔SRV_BBX 69-510↔69-P810—which is pass-through to other regions. A parallel file storage device PFS 69-810 is indicated herein as an example and the BBX 69-510 can connect to many others.
69-P100↔69-P400↔69-P300—can indicate traffic or API between EPD & SRV_AP
69-P100↔69-P410↔69-P200—can indicate the API or other type of communications path between EPD and SRV_CNTRL
69-P300↔69-P436↔69-P500—is the path between SRV_AP 69-300 and SRV_BBX 69-500
69-P510↔BBX 69-510—represents the path for traffic over backbone between SRV_BBX servers connecting regional clusters across long distance, or simply joining SRV_BBX hub and spoke clusters with others, including devices such as PFS clusters, other SRV_BBX, other backbones, or more.
Global file managers 69-360, 69-260, and 69-560 catalog and manage files on both hierarchical file systems (HFS) storage devices 69-630, 69-620, 69-650 as well as parallel file systems such as 69-800 or 69-810.
Fabric managers 69-380, 69-280, and 69-580 work independently and at times in lockstep to build first degree over-the-top (OTT1) and second degree over-the-top (OTT2) layers.
This application is a U.S. National Stage application under 35 U.S.C. § 371 of International Patent Application No. PCT/IB2016/001161, filed Jun. 13, 2016, which claims priority to U.S. Provisional Application No. 62/174,394, filed on Jun. 11, 2015; the entire content of each application is incorporated herein by reference in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IB2016/001161 | 6/13/2016 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/198961 | 12/15/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4890281 | Balboni et al. | Dec 1989 | A |
5828847 | Gehr et al. | Oct 1998 | A |
5893089 | Kikinis | Apr 1999 | A |
6209039 | Albright et al. | Mar 2001 | B1 |
6463465 | Nieuwejaar | Oct 2002 | B1 |
6477166 | Sanzi et al. | Nov 2002 | B1 |
6593863 | Pitio | Jul 2003 | B2 |
6611587 | Brown et al. | Aug 2003 | B2 |
6671361 | Goldstein | Dec 2003 | B2 |
6678241 | Gai | Jan 2004 | B1 |
6690223 | Wan | Feb 2004 | B1 |
6735207 | Prasad et al. | May 2004 | B1 |
6879995 | Chinta et al. | Apr 2005 | B1 |
6973048 | Pitio | Dec 2005 | B2 |
6996117 | Lee et al. | Feb 2006 | B2 |
7006505 | Bleszynski et al. | Feb 2006 | B1 |
7039701 | Wesley | May 2006 | B2 |
7069318 | Burbeck et al. | Jun 2006 | B2 |
7145882 | Limaye et al. | Dec 2006 | B2 |
7145922 | Pitio | Dec 2006 | B2 |
7161899 | Limaye et al. | Jan 2007 | B2 |
7161965 | Pitio | Jan 2007 | B2 |
7173902 | Daniell et al. | Feb 2007 | B2 |
7177929 | Burbeck et al. | Feb 2007 | B2 |
7221687 | Shugard | May 2007 | B2 |
7224706 | Loeffler-Lejeune | May 2007 | B2 |
7254833 | Cornelius et al. | Aug 2007 | B1 |
7269130 | Pitio | Sep 2007 | B2 |
7310348 | Trinh et al. | Dec 2007 | B2 |
7349403 | Lee et al. | Mar 2008 | B2 |
7349411 | Pitio | Mar 2008 | B2 |
7349435 | Giacomini | Mar 2008 | B2 |
7433964 | Raguram et al. | Oct 2008 | B2 |
7551623 | Feroz et al. | Jun 2009 | B1 |
7577691 | Novik et al. | Aug 2009 | B2 |
7587487 | Gunturu | Sep 2009 | B1 |
7633909 | Jones et al. | Dec 2009 | B1 |
7689722 | Timms et al. | Mar 2010 | B1 |
7742405 | Trinh et al. | Jun 2010 | B2 |
7742411 | Trinh et al. | Jun 2010 | B2 |
7801030 | Aggarwal et al. | Sep 2010 | B1 |
7822877 | Chong et al. | Oct 2010 | B2 |
7870418 | Sekaran et al. | Jan 2011 | B2 |
7886305 | Ahmed et al. | Feb 2011 | B2 |
7930339 | Tobita et al. | Apr 2011 | B2 |
7957311 | Trinh et al. | Jun 2011 | B2 |
8010751 | Yang et al. | Aug 2011 | B2 |
8064909 | Spinelli et al. | Nov 2011 | B2 |
8069258 | Howell | Nov 2011 | B1 |
8069435 | Lai | Nov 2011 | B1 |
8073777 | Barry et al. | Dec 2011 | B2 |
8107363 | Saluja | Jan 2012 | B1 |
8266672 | Moore | Sep 2012 | B2 |
8401028 | Mihaly et al. | Mar 2013 | B2 |
8422397 | Ansari et al. | Apr 2013 | B2 |
8437641 | Lee et al. | May 2013 | B2 |
8458786 | Kailash et al. | Jun 2013 | B1 |
8544065 | Archer et al. | Sep 2013 | B2 |
8611335 | Wu et al. | Dec 2013 | B1 |
8611355 | Sella et al. | Dec 2013 | B1 |
8625411 | Srivivasan et al. | Jan 2014 | B2 |
8769057 | Breau et al. | Jul 2014 | B1 |
8798060 | Vautrin et al. | Aug 2014 | B1 |
8861344 | Trinh et al. | Oct 2014 | B2 |
8874680 | Das | Oct 2014 | B1 |
8966075 | Chickering et al. | Feb 2015 | B1 |
8976798 | Border et al. | Mar 2015 | B2 |
9015310 | Ochi | Apr 2015 | B2 |
9038151 | Chua et al. | May 2015 | B1 |
9110820 | Bent et al. | Aug 2015 | B1 |
9164702 | Nesbit et al. | Oct 2015 | B1 |
9167501 | Kempf et al. | Oct 2015 | B2 |
9172603 | Padmanabhan et al. | Oct 2015 | B2 |
9241004 | April | Jan 2016 | B1 |
9253028 | DeCusatis et al. | Feb 2016 | B2 |
9277452 | Aithal et al. | Mar 2016 | B1 |
9294304 | Sindhu | Mar 2016 | B2 |
9350644 | Desai et al. | May 2016 | B2 |
9350710 | Herle et al. | May 2016 | B2 |
9351193 | Raleigh et al. | May 2016 | B2 |
9432258 | Van der Merwe et al. | Aug 2016 | B2 |
9432336 | Ostrowski | Aug 2016 | B2 |
9450817 | Bahadur et al. | Sep 2016 | B1 |
9455924 | Cicic et al. | Sep 2016 | B2 |
9461996 | Hayton et al. | Oct 2016 | B2 |
9525663 | Yuan et al. | Dec 2016 | B2 |
9525696 | Kapoor et al. | Dec 2016 | B2 |
9544137 | Brandwine | Jan 2017 | B1 |
9565117 | Dahod et al. | Feb 2017 | B2 |
9569587 | Ansari et al. | Feb 2017 | B2 |
9590820 | Shukla | Mar 2017 | B1 |
9590902 | Lin et al. | Mar 2017 | B2 |
9699001 | Addanki et al. | Jul 2017 | B2 |
9699135 | Dinha | Jul 2017 | B2 |
9729539 | Agrawal et al. | Aug 2017 | B1 |
9858559 | Raleigh et al. | Jan 2018 | B2 |
9888042 | Annamalaisami et al. | Feb 2018 | B2 |
9898317 | Nakil et al. | Feb 2018 | B2 |
10044678 | Van der Merwe et al. | Aug 2018 | B2 |
10061664 | Verkaik et al. | Aug 2018 | B2 |
10070369 | Lynn, Jr. et al. | Sep 2018 | B2 |
10078754 | Brandwine et al. | Sep 2018 | B1 |
10079839 | Bryan et al. | Sep 2018 | B1 |
10091304 | Hoffmann | Oct 2018 | B2 |
10237253 | Chen | Mar 2019 | B2 |
10331472 | Wang | Jun 2019 | B2 |
10574482 | Ore et al. | Feb 2020 | B2 |
10673712 | Gosar et al. | Jun 2020 | B1 |
10756929 | Knutsen et al. | Aug 2020 | B2 |
10904201 | Ermagan et al. | Jan 2021 | B1 |
11032187 | Hassan | Jun 2021 | B2 |
20020007350 | Yen | Jan 2002 | A1 |
20020029267 | San | Mar 2002 | A1 |
20020046253 | Uchida et al. | Apr 2002 | A1 |
20020049901 | Carvey | Apr 2002 | A1 |
20020087447 | McDonald et al. | Jul 2002 | A1 |
20020186654 | Tornar | Dec 2002 | A1 |
20030046529 | Loison et al. | Mar 2003 | A1 |
20030110214 | Sato | Jun 2003 | A1 |
20030147403 | Border et al. | Aug 2003 | A1 |
20030195973 | Savarda | Oct 2003 | A1 |
20030233551 | Kouznetsov et al. | Dec 2003 | A1 |
20040205339 | Medin | Oct 2004 | A1 |
20040268151 | Matsuda | Dec 2004 | A1 |
20050203892 | Wesley et al. | Sep 2005 | A1 |
20050208926 | Hamada | Sep 2005 | A1 |
20050235352 | Staats et al. | Oct 2005 | A1 |
20060020793 | Rogers et al. | Jan 2006 | A1 |
20060031407 | Dispensa et al. | Feb 2006 | A1 |
20060031483 | Lund et al. | Feb 2006 | A1 |
20060047944 | Kilian-Kehr | Mar 2006 | A1 |
20060075057 | Gildea et al. | Apr 2006 | A1 |
20060179150 | Farley et al. | Aug 2006 | A1 |
20060195896 | Fulp et al. | Aug 2006 | A1 |
20060225072 | Lari et al. | Oct 2006 | A1 |
20070112812 | Harvey et al. | May 2007 | A1 |
20070165672 | Keels et al. | Jul 2007 | A1 |
20070168486 | McCoy et al. | Jul 2007 | A1 |
20070168517 | Weller et al. | Jul 2007 | A1 |
20070226043 | Pietsch et al. | Sep 2007 | A1 |
20080010676 | Dosa Racz et al. | Jan 2008 | A1 |
20080043742 | Pong et al. | Feb 2008 | A1 |
20080091598 | Fauleau | Apr 2008 | A1 |
20080117927 | Donhauser et al. | May 2008 | A1 |
20080130891 | Sun et al. | Jun 2008 | A1 |
20080168377 | Stallings et al. | Jul 2008 | A1 |
20080240121 | Xiong et al. | Oct 2008 | A1 |
20080256166 | Branson et al. | Oct 2008 | A1 |
20080260151 | Fluhrer et al. | Oct 2008 | A1 |
20080301794 | Lee | Dec 2008 | A1 |
20090003223 | McCallum et al. | Jan 2009 | A1 |
20090092043 | Lapuh et al. | Apr 2009 | A1 |
20090100165 | Wesley, Sr. et al. | Apr 2009 | A1 |
20090106569 | Roh et al. | Apr 2009 | A1 |
20090122990 | Gundavelli et al. | May 2009 | A1 |
20090129386 | Rune | May 2009 | A1 |
20090132621 | Jensen et al. | May 2009 | A1 |
20090141734 | Brown | Jun 2009 | A1 |
20090144416 | Chatley et al. | Jun 2009 | A1 |
20090144443 | Vasseur et al. | Jun 2009 | A1 |
20090193428 | Dalberg et al. | Jul 2009 | A1 |
20090213754 | Melamed | Aug 2009 | A1 |
20090217109 | Sekaran et al. | Aug 2009 | A1 |
20090259798 | Wang et al. | Oct 2009 | A1 |
20100017603 | Jones | Jan 2010 | A1 |
20100131616 | Walter et al. | May 2010 | A1 |
20100250700 | O'Brien | Sep 2010 | A1 |
20100316052 | Petersen | Dec 2010 | A1 |
20100325309 | Cicic et al. | Dec 2010 | A1 |
20110007652 | Bai | Jan 2011 | A1 |
20110185006 | Raghav et al. | Jul 2011 | A1 |
20110231917 | Chaturvedi et al. | Sep 2011 | A1 |
20110247063 | Aabye et al. | Oct 2011 | A1 |
20110268435 | Mizutani et al. | Nov 2011 | A1 |
20110314473 | Yang et al. | Dec 2011 | A1 |
20120005264 | McWhirter et al. | Jan 2012 | A1 |
20120005307 | Das et al. | Jan 2012 | A1 |
20120082057 | Welin et al. | Apr 2012 | A1 |
20120105637 | Yousefi et al. | May 2012 | A1 |
20120158882 | Oehme | Jun 2012 | A1 |
20120179904 | Dunn et al. | Jul 2012 | A1 |
20120185559 | Wesley, Sr. et al. | Jul 2012 | A1 |
20120188867 | Fiorone et al. | Jul 2012 | A1 |
20120196646 | Crinon et al. | Aug 2012 | A1 |
20120210417 | Shieh | Aug 2012 | A1 |
20120270580 | Anisimov et al. | Oct 2012 | A1 |
20120320916 | Sebastian | Dec 2012 | A1 |
20130173900 | Liu | Jul 2013 | A1 |
20130247167 | Paul et al. | Sep 2013 | A1 |
20130259465 | Blair | Oct 2013 | A1 |
20130283118 | Rayner | Oct 2013 | A1 |
20130286835 | Plamondon et al. | Oct 2013 | A1 |
20130287037 | Bush et al. | Oct 2013 | A1 |
20130308471 | Krzanowski et al. | Nov 2013 | A1 |
20130318233 | Biswas et al. | Nov 2013 | A1 |
20130322255 | Dillon | Dec 2013 | A1 |
20130343180 | Kini et al. | Dec 2013 | A1 |
20140020942 | Cho et al. | Jan 2014 | A1 |
20140071835 | Sun et al. | Mar 2014 | A1 |
20140086253 | Yong | Mar 2014 | A1 |
20140101036 | Phillips et al. | Apr 2014 | A1 |
20140149552 | Carney et al. | May 2014 | A1 |
20140169214 | Nakajima | Jun 2014 | A1 |
20140181248 | Deutsch et al. | Jun 2014 | A1 |
20140210693 | Bhamidipati et al. | Jul 2014 | A1 |
20140215059 | Astiz Lezaun et al. | Jul 2014 | A1 |
20140226456 | Khan | Aug 2014 | A1 |
20140229945 | Barkai et al. | Aug 2014 | A1 |
20140237464 | Waterman et al. | Aug 2014 | A1 |
20140250066 | Calkowski et al. | Sep 2014 | A1 |
20140269712 | Kidambi | Sep 2014 | A1 |
20140278543 | Kasdon | Sep 2014 | A1 |
20140280911 | Wood et al. | Sep 2014 | A1 |
20140289826 | Croome | Sep 2014 | A1 |
20140310243 | McGee et al. | Oct 2014 | A1 |
20140337459 | Kuang et al. | Nov 2014 | A1 |
20140341023 | Kim et al. | Nov 2014 | A1 |
20140359704 | Chen | Dec 2014 | A1 |
20140369230 | Nallur | Dec 2014 | A1 |
20150006596 | Fukui et al. | Jan 2015 | A1 |
20150063117 | DiBurro et al. | Mar 2015 | A1 |
20150063360 | Thakkar et al. | Mar 2015 | A1 |
20150089582 | Dilley et al. | Mar 2015 | A1 |
20150095384 | Antony et al. | Apr 2015 | A1 |
20150128246 | Feghali et al. | May 2015 | A1 |
20150222637 | Hung et al. | Aug 2015 | A1 |
20150248434 | Avati et al. | Sep 2015 | A1 |
20150271104 | Chikkamath et al. | Sep 2015 | A1 |
20150281176 | Banfield | Oct 2015 | A1 |
20150326588 | Vissamsetty et al. | Nov 2015 | A1 |
20150363230 | Kasahara et al. | Dec 2015 | A1 |
20160006695 | Prodoehl et al. | Jan 2016 | A1 |
20160028586 | Blair | Jan 2016 | A1 |
20160028770 | Raleigh et al. | Jan 2016 | A1 |
20160055323 | Stuntebeck et al. | Feb 2016 | A1 |
20160105530 | Shribman et al. | Apr 2016 | A1 |
20160117277 | Raindel et al. | Apr 2016 | A1 |
20160119279 | Maslak et al. | Apr 2016 | A1 |
20160127492 | Malwankar et al. | May 2016 | A1 |
20160134543 | Zhang et al. | May 2016 | A1 |
20160165463 | Zhang | Jun 2016 | A1 |
20160226755 | Hammam et al. | Aug 2016 | A1 |
20160261575 | Maldaner | Sep 2016 | A1 |
20160285977 | Ng et al. | Sep 2016 | A1 |
20160308762 | Teng et al. | Oct 2016 | A1 |
20160330736 | Polehn | Nov 2016 | A1 |
20160337223 | Mackay | Nov 2016 | A1 |
20160337484 | Tola | Nov 2016 | A1 |
20160352628 | Reddy | Dec 2016 | A1 |
20160364158 | Narayanan et al. | Dec 2016 | A1 |
20160366233 | Le | Dec 2016 | A1 |
20170063920 | Thomas | Mar 2017 | A1 |
20170078922 | Raleigh et al. | Mar 2017 | A1 |
20170105142 | Hecht et al. | Apr 2017 | A1 |
20170201556 | Fox et al. | Jul 2017 | A1 |
20170230821 | Chong et al. | Aug 2017 | A1 |
20170344703 | Ansari et al. | Nov 2017 | A1 |
20180013583 | Rubenstein et al. | Jan 2018 | A1 |
20180024873 | Milliron et al. | Jan 2018 | A1 |
20180091417 | Ore et al. | Mar 2018 | A1 |
20180198756 | Dawes | Jul 2018 | A1 |
20200382341 | Ore et al. | Dec 2020 | A1 |
20210044453 | Knutsen et al. | Feb 2021 | A1 |
20210342725 | Marsden et al. | Nov 2021 | A1 |
20210345188 | Shaheen | Nov 2021 | A1 |
Number | Date | Country |
---|---|---|
1315088 | Sep 2001 | CN |
1392708 | Jan 2003 | CN |
1536824 | Oct 2004 | CN |
1754161 | Mar 2006 | CN |
1829177 | Sep 2006 | CN |
101282448 | Oct 2008 | CN |
101478533 | Jul 2009 | CN |
101599888 | Dec 2009 | CN |
101765172 | Jun 2010 | CN |
101855865 | Oct 2010 | CN |
102006646 | Apr 2011 | CN |
102209355 | Oct 2011 | CN |
102340538 | Feb 2012 | CN |
102457539 | May 2012 | CN |
102687480 | Sep 2012 | CN |
102739434 | Oct 2012 | CN |
103384992 | Nov 2013 | CN |
103828297 | May 2014 | CN |
102255794 | Jul 2014 | CN |
104320472 | Jan 2015 | CN |
1498809 | Jan 2005 | EP |
1530761 | May 2005 | EP |
1635253 | Mar 2006 | EP |
2154834 | Feb 2010 | EP |
2357763 | Aug 2011 | EP |
WO-2003025709 | Mar 2003 | WO |
WO-2003041360 | May 2003 | WO |
WO-2003088047 | Oct 2003 | WO |
WO-2003090017 | Oct 2003 | WO |
WO-2003090018 | Oct 2003 | WO |
WO-2006055838 | May 2006 | WO |
WO-2008058088 | May 2008 | WO |
WO-2008067323 | Jun 2008 | WO |
WO-2010072030 | Jul 2010 | WO |
WO-2012100087 | Jul 2012 | WO |
WO-2013120069 | Aug 2013 | WO |
WO-2013135753 | Sep 2013 | WO |
WO-2015021343 | Feb 2015 | WO |
WO-2016094291 | Jun 2016 | WO |
WO-2016110785 | Jul 2016 | WO |
WO-2016123293 | Aug 2016 | WO |
WO-2016162748 | Oct 2016 | WO |
WO-2016162749 | Oct 2016 | WO |
WO-2016164612 | Oct 2016 | WO |
WO-2016198961 | Dec 2016 | WO |
Other Publications
Entry |
---|
Definition of “server” in Microsoft Computer Dictionary, 2002, Fifth Edition, Microsoft Press (Year: 2002). |
Definition of “backbone” in Microsoft Computer Dictionary, 2002, Fifth Edition, Microsoft Press (Year: 2002). |
“Open Radio Equipment Interface (ORI); ORI Interface Specification; Part 2: Control and Management (Release 4),” Group Specification, European Telecommunications Standards Institute (ETSI), 650, Route des Lucioles; F-06921 Sophia-Antipolis Cedex; France, vol. ORI, No. V4.1.1, Oct. 1, 2014 (185 pages). |
Russell, R., “Introduction to RDMA Programming,” retrieved from the Internet: URL: web.archive.org/web/20140417205540/http://www.cs.unh.edu/~rdr/rdma-intro-module.ppt, Apr. 17, 2014 (76 pages). |
“Operations and Quality of Service Telegraph Services, Global Virtual Network Service,” ITU-T Standard, International Telecommunication Union, Geneva, Switzerland, No. F.16, Feb. 21, 1995, pp. 1-23 (23 pages). |
Baumgartner, A. et al., “Mobile core network virtualization: A model for combined virtual core network function placement and topology optimization,” Proceedings of the 2015 1st IEEE Conference on Network Softwarization (NetSoft), London, UK, 2015, pp. 1-9, doi: 10.1109/NETSOFT, 2015 (9 pages). |
Chen, Y. et al., “Resilient Virtual Network Service Provision in Network Virtualization Environments,” 2010 IEEE 16th International Conference on Parallel and Distributed Systems, Shanghai, China, 2010, pp. 51-58, doi: 10.1109/ICPADS.2010.26, 2010 (8 pages). |
Examination Report, dated Aug. 2, 2018, for European Patent Application No. 16734942.2 (8 pages). |
Examination Report, dated Jul. 20, 2017, for Chinese Patent Application No. 201680004969.3 (1 page). |
Examination Report, dated Mar. 3, 2020, for Chinese Patent Application No. 201680020937.2 (9 pages). |
Examination Report, dated Mar. 5, 2020, for Chinese Patent Application No. 201580066318.2 (10 pages). |
Examination Report, dated Oct. 19, 2018, for European Patent Application No. 16727220.2 (11 pages). |
Extended European Search Report, dated Sep. 7, 2018, for European Patent Application No. 16744078.3 (7 pages). |
Extended European Search Report, dated Aug. 2, 2018, for European Patent Application No. 15866542.2 (8 pages). |
Extended European Search Report, dated Sep. 7, 2018, for European Patent Application No. 16777297.9 (4 pages). |
Extended European Search Report, dated Nov. 29, 2018, for European Patent Application No. 16806960.7 (10 pages). |
Figueiredo, R. J. et al., “Social VPNs: Integrating Overlay and Social Networks for Seamless P2P Networking,” 2008 IEEE 17th Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, Rome, Italy, 2008, pp. 93-98, doi: 10.1109/WETICE.2008.43, 2008 (6 pages). |
International Search Report and Written Opinion, dated Apr. 8, 2016, for International Application No. PCT/US2016/015278 (9 pages). |
International Search Report and Written Opinion, dated Aug. 10, 2016, for International Application No. PCT/IB2016/000531 (20 pages). |
International Search Report and Written Opinion, dated Aug. 23, 2017, for International Application No. PCT/IB2017/000580 (6 pages). |
International Search Report and Written Opinion, dated Dec. 28, 2016, for International Application No. PCT/IB2016/001161 (7 pages). |
International Search Report and Written Opinion, dated Feb. 12, 2016, for International Application No. PCT/US2015/064242 (9 pages). |
International Search Report and Written Opinion, dated Jul. 28, 2017, for International Application No. PCT/IB2017/000557 (6 pages). |
International Search Report and Written Opinion, dated Jul. 7, 2016, for International Application No. PCT/US2016/026489 (7 pages). |
International Search Report and Written Opinion, dated Jun. 7, 2016, for International Application No. PCT/IB2016/000110 (8 pages). |
International Search Report and Written Opinion, dated May 11, 2017, for International Application No. PCT/IB2016/001867 (13 pages). |
International Search Report and Written Opinion, dated Sep. 1, 2017, for International Application No. PCT/IB2017/000613 (7 pages). |
International Search Report and Written Opinion, dated Sep. 23, 2016, for International Application No. PCT/IB2016/000528 (11 pages). |
Gong, L. et al., “Revenue-Driven Virtual Network Embedding Based on Global Resource Information,” GLOBECOM 2013, Next Generation Networking Symposium, pp. 2294-2299. (Year: 2013) (6 pages). |
Chowdhury, N.M.M.K. et al., “Virtual Network Embedding with Coordinated Node and Link Mapping,” IEEE Communications Society, IEEE INFOCOM 2009, pp. 783-791. (Year: 2009) (9 pages). |
Office Action, dated Jun. 3, 2020, for Chinese Patent Application No. 201680066545.X (11 pages). |
Office Action, dated Mar. 12, 2020, for Chinese Patent Application No. 201680032657.3 (5 pages). |
Office Action, dated Mar. 13, 2020, for Chinese Patent Application No. 201680021239.4 (9 pages). |
Office Action, dated May 7, 2020, for Chinese Patent Application No. 201680020878.9 (7 pages). |
Haeri, S. et al., “Global Resource Capacity Algorithm with Path Splitting for Virtual Network Embedding,” 2016 IEEE, pp. 666-669. (Year: 2016) (4 pages). |
Supplementary Partial European Search Report, dated May 20, 2019, for European Patent Application No. 16872483.9 (8 pages). |
Szeto, W. et al., “A multi-commodity flow based approach to virtual network resource allocation,” GLOBECOM '03. IEEE Global Telecommunications Conference (IEEE Cat. No. 03CH37489), San Francisco, CA, USA, 2003, pp. 3004-3008, vol. 6, doi: 10.1109/GLOCOM.2003.1258787, 2003 (5 pages). |
Supplementary European Search Report, dated Dec. 11, 2019, for European Patent Application No. 17788882.3 (8 pages). |
Related Publications
Number | Date | Country |
---|---|---|
20200145375 A1 | May 2020 | US |
Provisional Applications
Number | Date | Country |
---|---|---|
62174394 | Jun 2015 | US |