This invention generally relates to data transmission, and more particularly to an independent service provider interconnection platform with advanced switching capabilities and centralized monitoring to enable rapid and efficient provisioning of Carrier Ethernet services between multiple service provider networks.
As service providers strive to increase revenues by integrating new data, voice, and video service offerings, they are also, for efficiency reasons, converging their networks to support these services over a single Internet Protocol (IP) infrastructure. To best enable this convergence, service providers are migrating to Ethernet, a highly efficient technology originally used inside premises, as the connectivity standard for transporting these services. Ethernet's cost-effectiveness is due to several factors, including its prevalence as the de facto standard for providing computer-to-computer connectivity. To facilitate the use of Ethernet beyond the premises and across wide areas, Carrier Ethernet service standards were developed by the Metro Ethernet Forum (MEF).
Standardized, carrier-class Carrier Ethernet service is defined by five attributes (standardization, reliability, scalability, quality of service, and service management) that distinguish it from familiar local area network (LAN)-based Ethernet.
These Carrier Ethernet standards have helped to standardize hardware for the deployment of Carrier Ethernet as well as establish initial service level standards defining Carrier Ethernet services. For example, there are various classes of Carrier Ethernet service that have been defined, each with prescribed technical characteristics. Which class of service is ordered depends on the use (e.g., data, voice, and video) that the end user plans to make of the service.
Recently, the MEF adopted standards to address network-to-network interfaces (NNIs) between service providers. NNIs between multiple service providers are needed for Carrier Ethernet to be deployed on an end-to-end basis for customers because no service provider has a universal or ubiquitous coverage area. Creating Ethernet NNI interconnection standards has taken several years due to the vast differences and incompatibilities between service providers' networks, systems, and services. In January 2010, the MEF released a standard for these NNI connections, referred to as the MEF 26 Standard, defining Phase I of the external network-to-network interface (“ENNI”). This MEF 26 standard, however, does not significantly mitigate the time, resources, cost, and coordination required to actually implement an NNI interconnection. The standard defines a language for describing NNI interconnections, but not all of the detailed Ethernet services, thus allowing carriers to continue to maintain flexibility in their service offerings. The significant differences between service providers' networks, systems, and services will still make implementation of the NNI standards a very extensive effort that could take years to complete, potentially slowing Carrier Ethernet adoption for years to come.
Moreover, because of the large number of service providers, each serving unique territories or geographic areas, numerous NNI standards-based connections will need to be put in place among all the various service providers in order to deploy Carrier Ethernet on an end-to-end basis. This is commonly referred to as the N2 problem: the total number of connections required is defined by the product N*(N−1), which is approximately equal to the square of the number N of service providers needing to interconnect.
The significant interconnection technical challenges associated with establishing NNI interconnections have been recognized by the few service providers that, due to customer demand, have gone through the effort of establishing NNI interconnections with other service providers in order to provide end-to-end Carrier Ethernet. The costs, delays, and challenges of such NNI interconnections have made this a limited option.
In order to reduce the efforts and costs associated with NNI interconnection, some service providers have sought to leverage their mutual collocation at certain carrier hotel facilities to establish NNI interconnections with other service providers also located inside these collocation facilities. A carrier hotel, also called a collocation center, is a secure physical site or building where data communications media converge and are interconnected for economy of scale reasons. It is common for numerous service providers to share the facilities of a single carrier hotel. Interconnection between service providers in such a collocation or carrier hotel facility is completed by physically cross-connecting a copper or fiber network connection from one service provider to a network connection from another service provider using the cross-connect panel in the “Meet Me” room of the collocation or carrier hotel provider. The “Meet Me” room is an area of the collocation facility dedicated for all the tenants in the building to interconnect or meet to exchange connections.
While this effort potentially creates a physical bond between the service providers, this physical cross-connection is on a one-to-one service provider basis, and thus does not offer scalability to reach more than one service provider through a single connection; moreover, merely establishing a physical bond is almost always insufficient to allow end-to-end Ethernet Service connectivity between the providers. Since each provider has a unique Ethernet Service definition, the configuration on one or both switches must be changed to remap one service definition to the other. For example, one provider may offer three levels of service quality while the other offers two. Unless the configuration is adjusted to map between these two providers, the end-to-end service will not perform as expected. Moreover, changing such configurations creates considerable work on the part of the provider: testing the configurations, training staff, updating procedural documents, and updating Operational Support Systems.
While most collocation facilities or carrier hotels do not provide interconnection functions beyond their “Meet Me” cross-connect panels, a limited number of collocation centers do provide some in-facility networking to enable one service provider in the facility to reach other service providers for interconnection purposes. In such a collocation facility, a network is created within the facility using switched technology such as Fast Ethernet or ATM. This networking allows two service providers to connect through a switched circuit. Such simple intra-facility functionality does reduce the number of costly physical connections but offers little to address the Ethernet Service Mapping challenges. This approach also requires that the two service providers interconnecting over the intra-facility network both rent space in the carrier hotel.
Further, even if some transactional and physical interconnections were made, Ethernet carriers need to establish a manner of measuring and monitoring the Ethernet services they provide. Again, each Ethernet carrier has different systems and equipment for measuring quality of service, so doing so across a large number of carriers is complex and does not scale well. Finally, even if interconnections are achieved and monitoring is employed, there are differences between each carrier's systems and processes for querying service, building inventory, quotations, ordering, fulfillment, service level agreement (SLA) reporting, trouble sectionalizing, and billing harmonizing. Thus, bonding these systems would not increase efficiency between carriers.
Finally, there is a significant challenge associated with Service Quality Monitoring. A service provider typically offers a set of SLAs to the Enterprise customer to whom it is selling the service. When one of the endpoints of this service is “off-net” and the service provider must go through another provider to access this endpoint, the service provider must find a mechanism to measure the service quality off-net. Most typically today, the method for doing so requires that the provider place a network interface device (NID) at the off-net customer premises. This is very costly, both because of the cost of the device itself and because the service provider generally does not have personnel to install, support, and repair these devices in every off-net region.
The present invention is intended to solve the above-noted business and technical problems by establishing an independent, common service provider interconnection platform with advanced switching capabilities and centralized monitoring to enable the rapid, efficient provisioning of Carrier Ethernet services between multiple service providers' networks. The invention facilitates standards-based, scalable Carrier Ethernet interconnection while also enabling service providers with incompatible services to readily interconnect through the use of advanced, proprietary network management capabilities.
One embodiment of the invention is directed to a communications system configured for enabling a plurality of service providers to interconnect via an Ethernet platform, the plurality of service providers employing disparate Ethernet protocols. The system includes a central server and a plurality of switching locations. Each of the plurality of switching locations is communicatively connected to the central server and to the plurality of service providers. Each of the plurality of switching locations includes a plurality of Ethernet router switches, a monitoring device, and a local server coupled to a plurality of databases. Each of the plurality of databases is associated with a corresponding one of the service providers. A communications medium is provided for interconnecting the service providers, the Ethernet router switches, the monitoring device, and the local server. The system enables the service providers to be interconnected on the Ethernet platform by establishing protocol mappings between any two Ethernet protocols associated with corresponding service providers.
In one aspect of the invention, the central server comprises a customer presentation module, a management module for managing the plurality of Ethernet router switches, a service module for monitoring Ethernet services and collecting and analyzing service data, and a centralized service database for storing information associated with the plurality of service providers. The customer presentation module, the management module, the service module, and the centralized service database are communicatively connected through a local interface.
Another embodiment of the invention is directed to a method for facilitating interconnections between a plurality of communication service providers through an Ethernet switching platform. The method includes establishing a connection between each of the plurality of service providers and the Ethernet switching platform, the plurality of service providers employing disparate Ethernet protocols, determining each of the Ethernet protocols associated with each of the plurality of service providers, and establishing protocol mappings between any two Ethernet protocols for facilitating interconnections between corresponding service providers.
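By way of a non-limiting illustration of this method flow only, the following Java sketch (all class names, field names, and protocol values below are hypothetical assumptions, not taken from the specification) registers a set of providers, records the Ethernet protocol attributes determined for each, and derives a protocol mapping for any pair of providers:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the method: connect each provider to the platform,
// determine its Ethernet protocol attributes, then establish a mapping
// between any two providers' protocols so their services can interwork.
public class ProtocolMappingSketch {

    // Hypothetical protocol descriptor (e.g., VLAN tagging scheme, maximum frame size).
    record EthernetProtocol(String provider, String taggingScheme, int maxFrameSize) {}

    public static void main(String[] args) {
        // Step 1: establish a connection between each provider and the platform
        // (represented here simply by registering the provider).
        List<String> providers = List.of("providerA", "providerB", "providerC");

        // Step 2: determine the Ethernet protocol associated with each provider.
        Map<String, EthernetProtocol> determined = new HashMap<>();
        determined.put("providerA", new EthernetProtocol("providerA", "802.1ad", 9000));
        determined.put("providerB", new EthernetProtocol("providerB", "802.1Q", 1522));
        determined.put("providerC", new EthernetProtocol("providerC", "802.1ad", 2000));

        // Step 3: establish protocol mappings between any two providers.
        List<String> mappings = new ArrayList<>();
        for (String a : providers) {
            for (String b : providers) {
                if (a.compareTo(b) < 0) {
                    EthernetProtocol pa = determined.get(a);
                    EthernetProtocol pb = determined.get(b);
                    // The platform maps between the two tagging schemes and limits
                    // the end-to-end frame size to what both networks can carry.
                    int endToEndFrameSize = Math.min(pa.maxFrameSize(), pb.maxFrameSize());
                    mappings.add(a + "<->" + b + ": " + pa.taggingScheme() + "<->"
                            + pb.taggingScheme() + ", maxFrame=" + endToEndFrameSize);
                }
            }
        }
        mappings.forEach(System.out::println);
    }
}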
For a better understanding of the invention, reference may be had to preferred embodiments shown in the following drawings in which:
While the present invention may be embodied in various forms, there is shown in the drawings and will hereinafter be described some exemplary and non-limiting embodiments, with the understanding that the present disclosure is to be considered an exemplification of the invention and is not intended to limit the invention to the specific embodiments illustrated.
In this application, the use of the disjunctive is intended to include the conjunctive. The use of definite or indefinite articles is not intended to indicate cardinality. In particular, a reference to “the” object or “a” and “an” object is intended to denote also one of a possible plurality of such objects.
A preferred embodiment of the present invention provides a first service-level interconnect platform designed to join disparate service provider networks in order to enable end-to-end Carrier Ethernet across service providers' networks. Conventionally, the expansion of Carrier Ethernet has been constrained, limited to islands of connectivity largely contained wholly within individual service providers' networks. In accordance with the present invention, a new Carrier Ethernet service-level interconnect platform is provided to integrate platform components into defined interfaces to solve the pressing need for ubiquity: the problem of each service provider needing to connect to all other service providers in order to make Carrier Ethernet widely available. The platform comprises three key elements:
Provider Systems; and
Unlike existing interconnect services described above, which provide simple, physical, local cross-connect or intra-facility networking, the present invention plays an integral role in enabling the delivery of Carrier Ethernet across disparate networks by actively participating in service interworking, harmonizing virtual bandwidth profiles, enabling different classes of service, delivering address scheme mapping, performing end user-to-end user monitoring, performing service inventory with logical organization of inter-carrier data, providing common normalized machine interfaces with adapters to existing interfaces between diverse operating systems, and providing a unique central “Marketplace” to help with the integration between buying and selling service provider processes and systems.
Now referring to
Now referring to
As a result, the platform 300 helps resolve the N2 Problem by enabling rapid interconnection among service providers with the fewest number of total connections.
For example,
Once physical connections are established with the PSL 411, multiple virtual transport connections can be configured from the central location of the platform 300 between various service providers without any physical changes to the PSL 411. One example of such an embodiment is shown in
To interconnect with the service providers that can reach the two end users' branch locations 504 and 508, namely service provider B switch 503 and service provider C switch 505, service provider A switch 501 can utilize its single physical connection 510 to the PSL 512. The platform switch 514 configures or provisions, between service provider A's physical connection 510 and the physical connections 516 and 518 of service provider B switch 503 and service provider C switch 505, multiple dedicated Ethernet virtual connections (EVCs) conforming to the Carrier Ethernet service profiles required by each of service provider A's end users. An Ethernet virtual connection (EVC) is a connection between two user network interface (UNI) devices, which are located at the edge of a service provider's network between the network and the end user. An EVC appears to be a direct and dedicated connection but is actually a group of logical circuit resources from which specific circuits are allocated as needed to meet traffic requirements in a packet-switched network. In this case, two network devices can communicate as though they have a dedicated physical connection. The virtual connections established on the PSL 512 are then mapped to, or associated with, virtual connections existing between the PSL 512 and each service provider's network 520 and 522, thus establishing a complete end-to-end virtual connection between each end user's headquarters location 502 and 506 and branch locations 504 and 508. As such, one EVC AB comprises three parts: a part through the service provider A network 520, a part across the PSL 512, and a part through the service provider B network 522. Another EVC AC likewise comprises three parts: a part through the service provider A network 520, a part across the PSL 512, and a part through the service provider C network 524. The key is that the part of each of these EVCs AB and AC that goes through the exchange PSL 512 not only establishes connectivity, but also performs the remapping that allows the end-to-end services to work.
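Purely as an illustrative, non-limiting sketch (the VLAN identifiers and class names below are hypothetical assumptions, not drawn from the specification), the following Java fragment models end-to-end EVCs such as EVC AB and EVC AC as three mapped segments each, with the PSL segment remapping the service identifier used on one side to the identifier expected on the other:

import java.util.List;

// Illustrative model of an end-to-end Ethernet virtual connection (EVC)
// assembled from three segments: provider A's network, the PSL, and
// provider B's (or C's) network. The PSL segment does more than connect;
// it remaps the service identifier (here, a VLAN tag) used on one side
// to the identifier expected on the other side.
public class EvcCompositionSketch {

    record Segment(String network, int ingressVlan, int egressVlan) {}

    record EndToEndEvc(String name, List<Segment> segments) {
        void describe() {
            System.out.println("EVC " + name + ":");
            for (Segment s : segments) {
                System.out.println("  " + s.network() + " vlan " + s.ingressVlan()
                        + " -> " + s.egressVlan());
            }
        }
    }

    public static void main(String[] args) {
        // EVC AB: headquarters on provider A to a branch served by provider B.
        EndToEndEvc evcAB = new EndToEndEvc("AB", List.of(
                new Segment("providerA-network", 100, 100),   // carried as VLAN 100 on A
                new Segment("PSL-crossconnect", 100, 2001),   // PSL remaps 100 -> 2001
                new Segment("providerB-network", 2001, 2001)  // carried as VLAN 2001 on B
        ));
        // EVC AC: a second virtual connection over the same physical port into the PSL.
        EndToEndEvc evcAC = new EndToEndEvc("AC", List.of(
                new Segment("providerA-network", 200, 200),
                new Segment("PSL-crossconnect", 200, 3055),   // PSL remaps 200 -> 3055
                new Segment("providerC-network", 3055, 3055)
        ));
        evcAB.describe();
        evcAC.describe();
    }
}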
The interconnection PSL 512 can also be utilized to interconnect service provider networks using Carrier Ethernet, not to serve an end user, but merely to exchange traffic between service providers needing to terminate traffic with one another.
For example, a service provider may have Internet data traffic terminating to a web address hosted by another service provider. Alternatively, a service provider may have voice traffic that needs to terminate to the network of another service provider. Instead of utilizing legacy Time Division Multiplexing (TDM) based interconnection facilities, which are much less efficient than Carrier Ethernet but have the benefit of years of standardization, the service providers could utilize the interconnection PSL 512 to implement Carrier Ethernet connectivity between their networks to terminate such traffic without the cost and expense of putting an NNI in place between their networks. As known to one of ordinary skill in the art, TDM is a technique for transmitting multiple digitized data, voice, and video signals simultaneously over one communication medium. TDM is the predominant legacy transmission standard in the world.
Now referring to
In accordance with the present invention, the disclosed platform 300 can be configured not just to facilitate interconnection between two service providers, addressing the N2 Problem above, but also to analyze and harmonize variances between the discrete service offerings of the two service providers in order to align the individual service of one service provider with the required prescribed service profile of the other, thereby delivering end-to-end Carrier Ethernet service.
Network and Service Analysis
To facilitate rapid, efficient interconnection, processes are put in place to analyze and test all interconnected service provider networks and service definitions at a very detailed level.
This network analysis process obviates the need for each service provider to perform such network profiling for every other service provider with which it plans to interconnect. When, for example, a service provider seeks to purchase service from three different network operators, the network analysis functionality embedded into the platform 300 removes the need for the service provider to analyze and test the disparate network protocols (e.g., frame size, frame rate, etc.) of the three different networks. Instead, the analysis performed in conjunction with the platform 300 is used to determine the feasibility of service interconnection between these providers and how to accomplish these interconnections without requiring the service provider to change its service definition or perform extensive interconnect testing. Because the platform 300 understands how to map communication protocols and service definitions between all of the connected service providers, the process of profiling those service providers' networks is eliminated for all future service providers who connect to the platform 300. As a result, the platform 300 is configured to create an efficient, scalable solution for network interconnection by eliminating this burdensome step, thus mitigating the N2 Problem described above.
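The scaling benefit of this approach can be sketched as follows; the Java fragment below is a hypothetical illustration (the provider count is arbitrary) comparing the hub model, in which one analysis is performed per provider joining the platform, with a full-mesh model, in which one bilateral analysis is needed per pair of providers:

import java.util.ArrayList;
import java.util.List;

// Illustrative scaling sketch: with the platform, each provider is profiled
// once on connection (N analyses in total); without it, every pair of
// providers must analyze and test against each other (N*(N-1)/2 bilateral
// efforts). The numbers below are for illustration only.
public class AnalysisScalingSketch {

    public static void main(String[] args) {
        List<String> connected = new ArrayList<>();
        int bilateralEfforts = 0;

        for (int n = 1; n <= 10; n++) {
            String newProvider = "provider" + n;
            // Hub model: one analysis when the provider connects to the platform.
            connected.add(newProvider);
            // Mesh model: the newcomer would have to test against everyone already present.
            bilateralEfforts += connected.size() - 1;
        }
        System.out.println("Platform (hub) analyses: " + connected.size());   // 10
        System.out.println("Pairwise (mesh) analyses: " + bilateralEfforts);  // 45
    }
}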
Service Harmonizing
Beyond mapping Ethernet frames to allow interconnection, the platform 300 can help maintain end-to-end Carrier Ethernet quality of service (QoS) across the service provider networks connected to the PSL 604. Through the creation and implementation of proprietary algorithms, the platform switch 606 can provide additional traffic management of services between service providers. This traffic management may include custom traffic shaping to match, for example, the burst sizes between two different service providers' unique traffic profiles while controlling packet loss in order to deliver the required performance of the end-to-end service. This traffic shaping function involves the PSL 604 transforming the data from the shape in which it was delivered to the PSL 604 by one service provider into a new shape compatible with the required parameters of the other service provider.
For example, consider a service crossing Carrier A and Carrier B. Both providers can support a sustained rate of 10 Mb/s, but Carrier A supports a burst size of 50 frames, whereas Carrier B supports a burst of 25 frames. If the user injects a burst of 50 frames at line rate and then stops transmitting, this traffic will be successfully carried across Carrier A's network. However, as soon as the traffic reaches Carrier B's network, 25 frames would be dropped because the burst exceeds Carrier B's Committed Burst Size. This would cause intermittent and unpredictable frame loss across the end-to-end circuit. With the disclosed PSL 604 inserted between Carrier A and Carrier B, the PSL 604 is aware of the differing burst sizes and is configurable to actively shape the traffic so that this frame loss does not occur. In short, it would be configured to absorb the full burst from Carrier A and shape the traffic over time into the network of Carrier B so that Carrier B never hits its burst limit.
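A minimal sketch of this shaping behavior, assuming a simple token-bucket style shaper and reusing the burst values from the example above (the class names, parameter names, and interval model are illustrative assumptions only, not the actual algorithms of the platform), is as follows:

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative token-bucket shaper: absorb an ingress burst from Carrier A
// and release frames toward Carrier B so that Carrier B's committed burst
// size (CBS) is never exceeded. Time advances in discrete steps for clarity.
public class BurstShaperSketch {

    private final int egressBurstLimit;   // Carrier B's CBS, in frames
    private final int tokensPerInterval;  // refill rate, frames per interval (sustained rate)
    private int tokens;
    private final Deque<String> buffer = new ArrayDeque<>();

    BurstShaperSketch(int egressBurstLimit, int tokensPerInterval) {
        this.egressBurstLimit = egressBurstLimit;
        this.tokensPerInterval = tokensPerInterval;
        this.tokens = egressBurstLimit;
    }

    // Ingress from Carrier A: buffer the whole burst instead of dropping it.
    void ingress(String frame) {
        buffer.addLast(frame);
    }

    // Called once per interval: refill tokens (capped at the CBS) and release
    // only as many buffered frames as tokens allow toward Carrier B.
    int releaseInterval() {
        tokens = Math.min(egressBurstLimit, tokens + tokensPerInterval);
        int released = 0;
        while (!buffer.isEmpty() && tokens > 0) {
            buffer.removeFirst();   // forward this frame toward Carrier B
            tokens--;
            released++;
        }
        return released;
    }

    public static void main(String[] args) {
        // Carrier B: 25-frame committed burst size; refill of 10 frames per interval (illustrative).
        BurstShaperSketch shaper = new BurstShaperSketch(25, 10);
        for (int i = 0; i < 50; i++) {          // Carrier A delivers a 50-frame burst
            shaper.ingress("frame-" + i);
        }
        for (int interval = 1; interval <= 4; interval++) {
            System.out.println("interval " + interval + ": released " + shaper.releaseInterval());
        }
        // No frame is dropped; the burst is spread over time so Carrier B never
        // receives more than 25 frames in a single burst.
    }
}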
The harmonization functionality of the platform 300 contributes to mitigating the N2 Problem. In the example described above (a service provider seeking to buy service from three different network operators), the harmonization functionality eliminates the need for the service provider to implement its own switch and to develop its own set of shaping algorithms. This effort would otherwise have to be repeated by the next service provider that seeks to buy services from those same network operators. Instead, the service providers and network operators all plug into the PSL switch 606, where a single set of algorithms performs the network harmonization. As a result, network harmonization through the platform 300 is more efficient and scalable than each interconnected network having to put in place infrastructure and harmonization functionality for every other interconnected network.
The platform 300 can also enable multiple classes of service to be delivered over a single EVC. In one exemplary embodiment,
Finally, the platform 300 can support distant Ethernet LAN (E-LAN) services and advanced tunneling schemes. E-LAN services use multipoint-to-multipoint EVCs to enable a virtual LAN-like service over a wide area. Network tunneling refers to the ability to carry a payload over an incompatible network, or to provide a secure path through an untrusted network. Virtual Private Networks (VPNs) are accomplished via network tunneling.
Located at each PSL 802 are servers (not shown in
Service Monitoring
The platform 300 is configured to enable a unique monitoring service that assists service assurance, enables virtual connection troubleshooting, and avoids the costly and complex deployment of multiple network interface devices (NIDs) on end users' premises. This monitoring service is also intended to enable industry-standard service-level visibility of service providers' networks, thereby augmenting and enhancing the service providers' service offerings to their end users. As shown in
Moreover, in the event that the service quality degrades below a set threshold for any of the following measurements, personnel managing the PSL 902 can be notified in real-time allowing them to proactively contact both buying and selling service providers to take remedial action:
By contrast, if a service provider establishes separate NNIs with a number of other service providers, each service provider it interconnects with will most likely use a different mechanism to measure the service quality of its network, yielding numerous unique sets of reports and no mechanism for proactive notification of service degradation. This would make service quality monitoring and comparisons between the service providers difficult and less efficient.
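As one hedged illustration of such centralized, threshold-based notification (the metric names and threshold values below are assumptions made for the sketch, not values taken from the specification), a monitoring function at the PSL might compare each measured value for an EVC against a configured threshold and flag degradations for the personnel managing the PSL:

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of centralized service-quality monitoring at the PSL:
// per-EVC measurements are compared against configured thresholds and any
// breach is reported so that both the buying and selling providers can be
// contacted proactively. Metric names and thresholds are illustrative only.
public class ServiceMonitorSketch {

    // Thresholds: metric name -> maximum acceptable value.
    private final Map<String, Double> thresholds = new LinkedHashMap<>();

    ServiceMonitorSketch() {
        thresholds.put("frameLossRatioPercent", 0.5);
        thresholds.put("frameDelayMs", 25.0);
        thresholds.put("frameDelayVariationMs", 8.0);
    }

    // Print an alert for each measurement that breaches its threshold.
    void check(String evcId, Map<String, Double> measurements) {
        for (Map.Entry<String, Double> m : measurements.entrySet()) {
            Double limit = thresholds.get(m.getKey());
            if (limit != null && m.getValue() > limit) {
                System.out.println("ALERT " + evcId + ": " + m.getKey() + " = "
                        + m.getValue() + " exceeds threshold " + limit
                        + " (notify buying and selling providers)");
            }
        }
    }

    public static void main(String[] args) {
        ServiceMonitorSketch monitor = new ServiceMonitorSketch();
        Map<String, Double> sample = new LinkedHashMap<>();
        sample.put("frameLossRatioPercent", 0.8);   // breaches the 0.5% threshold
        sample.put("frameDelayMs", 12.0);           // within threshold
        monitor.check("EVC-AB", sample);
    }
}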
In accordance with the present invention, an exchange control system 1000 is provided to facilitate process and system interactions between buying and selling service providers connected to the platform. As shown in
As configured, the control system 1000 supports the above discussed unique central Marketplace, which can provide the following capabilities:
All of the above are done in such a way as to ensure security and control, so that a user sees only the information that the user is authorized to see.
Although in the above discussion of the Ethernet platform 300, members (i.e., service providers and purchasers) of a communication exchange connect only with other members connected to the same exchange, the Ethernet platform 300 also enables members connected to one communication exchange to reach buildings served by members connected to a different exchange. This connection arrangement between members of different exchanges can be accomplished in a number of ways, such as:
The platform or PSL system 1110 may be implemented in software, firmware, hardware, or any combination thereof. For example, in one mode, the platform system 1110 is implemented in software, as an executable program, and is executed by one or more special or general purpose digital computer(s), such as a personal computer (PC; IBM-compatible, Apple-compatible, or otherwise), personal digital assistant, workstation, minicomputer, mainframe computer, computer network, “virtual network” or “internet cloud computing facility”. Therefore, computer 1100 may be representative of any computer in which the platform system 1110 resides or partially resides.
Generally, in terms of hardware architecture, as shown in
Processor 1102 is a hardware device for executing software, particularly software stored in memory 1104. Processor 1102 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the computer 1100, a semiconductor based microprocessor (in the form of a microchip or chip set), another type of microprocessor, or generally any device for executing software instructions. Examples of suitable commercially available microprocessors are as follows: a PA-RISC series microprocessor from Hewlett-Packard Company, an 80x86 or Pentium series microprocessor from Intel Corporation, a PowerPC microprocessor from IBM, a Sparc microprocessor from Sun Microsystems, Inc., or a 68xxx series microprocessor from Motorola Corporation. Processor 1102 may also represent a distributed processing architecture such as, but not limited to, SQL, Smalltalk, APL, KLisp, Snobol, Developer 200, MUMPS/Magic.
Memory 1104 can include any one or a combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, memory 1104 may incorporate electronic, magnetic, optical, and/or other types of storage media. Memory 1104 can have a distributed architecture where various components are situated remote from one another, but are still accessed by processor 1102.
The software in memory 1104 may include one or more separate programs. The separate programs comprise ordered listings of executable instructions for implementing logical functions. In the example of
The platform system 1110 may be a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When a “source” program, the program needs to be translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the memory 1104, so as to operate properly in connection with the O/S 1112. Furthermore, the platform system 1110 can be written in (a) an object-oriented programming language, which has classes of data and methods, or (b) a procedural programming language, which has routines, subroutines, and/or functions, for example but not limited to, C, C++, Pascal, Basic, Fortran, Cobol, Perl, Java, .Net, HTML, and Ada. In one embodiment, the platform system 1110 is written in Java.
The I/O devices 1106 may include input devices, for example but not limited to, input modules for PLCs, a keyboard, mouse, scanner, microphone, touch screens, interfaces for various medical devices, bar code readers, stylus, laser readers, radio-frequency device readers, etc. Furthermore, the I/O devices 1106 may also include output devices, for example but not limited to, output modules for PLCs, a printer, bar code printers, displays, etc. Finally, the I/O devices 1106 may further comprise devices that communicate with both inputs and outputs, including, but not limited to, a modulator/demodulator (modem; for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, and a router.
If the computer 1100 is a PC, workstation, PDA, or the like, the software in the memory 1104 may further include a basic input output system (BIOS) (not shown in
When computer 1100 is in operation, processor 1102 is configured to execute software stored within memory 1104, to communicate data to and from memory 1104, and to generally control operations of computer 1100 pursuant to the software. The platform system 1110, and the O/S 1112, in whole or in part, but typically the latter, may be read by processor 1102, buffered within the processor 1102, and then executed.
When the platform system 1110 is implemented in software, as is shown in
In another embodiment, where the platform system 1110 is implemented in hardware, the platform system 1110 may also be implemented with any of the following technologies, or a combination thereof, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
Although exemplary embodiments of the invention have been described in detail above, those skilled in the art will readily appreciate that many additional modifications are possible in the exemplary embodiment without materially departing from the novel teachings and advantages of the invention. Accordingly, these and all such modifications are intended to be included within the scope of this invention.
This international patent application claims priority to U.S. Provisional Patent Application No. 61/230,069 filed on Jul. 30, 2009, which is incorporated by reference herein in its entirety.
Filing Document: PCT/US10/43732; Filing Date: Jul. 29, 2010; Country: WO; Kind: 00; 371(c) Date: Jan. 27, 2012.
Related U.S. Provisional Application: No. 61/230,069; Date: Jul. 2009; Country: US.